\section*{}
\vspace{-1cm}
\footnotetext{\textit{$^{a}$~Max Planck Institute for Dynamics and Self-Organization (MPIDS), Am Fa{\ss}berg 17, 37077 G\"ottingen, Germany. E-mail: [email protected]}}
\footnotetext{\textit{$^{b}$~Laboratoire de Physico-Chimie Th\'eorique, UMR CNRS 7083 Gulliver, ESPCI ParisTech, PSL Research University, 10 rue Vauquelin, 75005 Paris, France}}
\section{Introduction}
Thin liquid films are ubiquitous in many natural systems and technological applications, ranging e.g.\ from the corneal fluid in the human eye and the aqueous glue in spider silk to mechanical lubrication, protective coatings, microelectronics fabrication and lithography \citep{eijkel2005nanofluidics, craster2009dynamics, jaeger2001introduction, vollrath1989, shyy2001moving}.
Understanding the stability and dynamics of thin films is hence a crucial task.
When a thin liquid layer is deposited on a low-energy surface the substrate might be spontaneously exposed to the vapor phase so as to reduce the energy of the system. This phenomenon is achieved either by the nucleation of holes in the film, or by the amplification of capillary waves at the liquid surface \citep{reiter1992dewetting, bischof1996_spinodal, xie1998_spinodal, seemann2001dewetting}, a situation referred to as spinodal dewetting.
Moreover, in polymer films the confinement of the macromolecules \citep{bodiguel2006reduced, fakhraai2008measuring} or hydrodynamic slip at solid-liquid interface \citep{vilmin2006dewetting, baumchen2009reduced} are also known to influence the mobility and the dynamics of the film.
In some cases, due to the finite size of the film, the interaction of the fluid with the substrate is mediated by the presence of a three-phase contact line where liquid, solid and vapor phases coexist.
This is a common situation for instance in the case of liquid droplets supported by a solid surface.
It is of primary importance in many applications, e.g.\ ink-jet printing, to know whether the droplets will spread or not on the substrate. In these wetting problems the movement of the three-phase contact line is often related to the equilibrium contact angle $\theta_Y$, which follows Young's construction at the contact line: $\cos \theta_Y = ( \gamma_{sv} - \gamma_{sl} ) /\gamma$, where $\gamma_{sv}$, $\gamma_{sl}$ and $\gamma$ are the surface tensions of the solid-vapor, solid-liquid and liquid-vapor interfaces, respectively \citep{degennes2004capillarity, bonn2009wetting}.
In the vicinity of a moving contact line, however, classical fluid dynamics approaches based on corner flow predict a divergence of the viscous dissipation that would require an infinite force to move the line, as first pointed out by \citet{huh1971hydrodynamic}.
This apparent paradox can be solved by including microscopic effects like, for instance, slip at the solid-liquid boundary~\citep{dussan1976slip}, the presence of a precursor film ahead of the line~\citep{degennes1985wetting} or a height dependence of the interfacial tension~\citep{pahlavan2015thin}.
A well-established system is that of a liquid droplet spreading onto a completely wettable substrate.
The so-called Tanner's law predicts the growth of the drop radius with a power law evolution \citep{tanner1979spreading}.
Surprisingly, this law is valid for every liquid that wets the substrate, and this universality is related to the presence of a thin precursor film ahead of the contact line \citep{degennes1985wetting, bonn2009wetting}. Recently, it has been proven that the power law changes in the case of spreading on a thicker liquid layer \citep{cormier2012}.
Different evolutions are known for droplets on partially wettable substrates. In particular, an exponential relaxation to the equilibrium contact angle is observed when $\theta_Y$ is small and the system is close to equilibrium \citep{deruijter1999, narhe2004contact, ilton2015}.
Tanner's law as well as other investigations of wetting and contact line dynamics are limited to spherical and cylindrical droplets, i.e.\ configurations in which the curvature of the liquid-air interface and therefore the liquid pressure are constant.
More intriguing and complex phenomena may appear in the presence of non-constant film curvature. In the last few years,
in particular, there has been an increased interest in the capillary-driven dynamics of relaxation of stepped and trench-like film topographies~\citep{mcgraw2011capillary, rognin2011viscosity, mcgraw2012, salez2012, baeumchen2013, mcgraw2013}. In these situations the gradients in Laplace pressure drive a capillary flow mediated by viscosity, leading to the levelling of the interface.
All these studies are however limited to pure liquid-air interfaces, i.e.\ in the absence of a three-phase contact line. Nevertheless, we note that the latter aspect was recently considered in the context of studies about residual stress~\citep{Guo2014}.
Here, we study the relaxation dynamics of the contact angle at the nanoscale by tracking the evolution towards equilibrium of a viscous nanofilm in the presence of a three-phase contact line. The specificity of this system is the necessity to relax simultaneously both the contact angle and the curvature of the liquid-vapor interface. The resulting liquid dynamics is driven by both Laplace pressure gradients and the balance of forces at the three-phase contact line, while mediated by viscosity. We carry out experiments involving polystyrene films on silicon wafers, a common system that mimics a partial wetting situation \citep{seemann2001polystyrene}. In our experiments the thin films are invariant in one direction and exhibit a rectangular cross section (cf.\ Fig.\,\ref{fig:stripe}). The spatial and temporal dynamics of the liquid-air interface $z = h(x,t)$ is monitored and distinct regimes corresponding to different mechanisms of relaxation are observed. We show that in general the advancing or receding of the contact line cannot be predicted by the simple observation of the initial contact angle, as in a spherical or cylindrical droplet, and propose a geometrical and an energetic approach to describe the evolution of the interface.
\section{Materials and methods}
Polystyrene (PS) thin films are obtained after a spin-coating process and a transfer on Si wafers, following the technique reported in \citet{mcgraw2011capillary}.
In brief, a solution is prepared by dissolving monodisperse PS (PSS, Germany) in toluene (Sigma-Aldrich, Chromasolv, purity $>99.9\%$).
Three different molecular weights have been used ($M_\mathrm{w} =$ 3.2, 19 and 34\,kg/mol)
and solutions were typically prepared with concentration in the range of $1 - 4\%$ in weight.
The PS solution is spin-coated on freshly cleaved mica sheets (Ted Pella, USA).
During the spin-coating the solvent quickly evaporates resulting in a thin film of the dissolved polymer.
The film is then floated onto the surface of ultra-pure (MilliQ) water, where
it spontaneously breaks into small pieces.
Pieces are transferred onto $1 \times 1$ cm$^2$ Si wafers (Si-Mat, Germany), exhibiting a native oxide layer.
For the purpose of testing the influence of the substrate, Si wafers exhibiting a thick (150\,nm) oxide layer have been used as well.
Prior to the transfer all Si wafers were cleaned by exposing the substrates to a mixture of hydrogen peroxide and sulfuric acid (``piranha solution''), followed by a careful rinse with boiling ultra-pure water \citep{neto2003satellite}.
Once the preparation is completed, a nanostripe exhibiting straight edges is identified with an optical microscope and scanned with an atomic force microscope (AFM, Bruker, Multimode) in tapping-mode.
The sample is then annealed above the glass transition temperature of PS on a high-precision heating stage (Linkam, UK) to induce flow.
The annealing temperatures are set to 110\,$^\circ$C for the 3.2\,kg/mol molecular weight and to 140\,$^\circ $C or $150\,^\circ$C for the other two.
After quenching the PS down to room temperature the height profile is scanned with AFM.
This procedure is repeated several times so as to record the temporal evolution of the profiles.
As an alternative to this \textit{ex-situ} technique, we also performed \textit{in-situ} measurements
in which the sample is annealed directly on a high-temperature scanner and the liquid interface is monitored by the AFM tip. In this way it has been ensured that the quenching has no influence on the shape of the profiles.
\section{Results}
\subsection{Relaxation of a rectangular interface}
\begin{figure}[t!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8.8,8.4)
\put(1.02,1.){\includegraphics[width=7.8cm]{stripe}}
\put(0.94,6.3){\includegraphics[width=8.1cm]{scheme}}
\put(4.5,0.2){\Large{$x \; \left[ \mathrm{\mu m} \right]$}}
\put(0.,3){\rotatebox{90}{\Large{$h \; \left[ \mathrm{nm} \right]$} }}
\put(0.8,1.2){\large 0}
\put(0.45,2.4){\large 100}
\put(0.45,3.6){\large 200}
\put(0.45,4.8){\large 300}
\put(4.7,0.7){\large 0}
\put(6.1,0.7){\large 10}
\put(7.6,0.7){\large 20}
\put(3.,0.7){\large -10}
\put(1.5,0.7){\large -20}
\put(7,5.05){\small $t \; \left[ \mathrm{min} \right]$}
\put(6.3,5.05){\small 0}
\put(8.05,5.05){\small 1100}
\put(0.15,5.5){\Large (b)}
\put(0.15,8){\Large (a)}
\put(4.65,6.95){\large $2 \, r$}
\put(4.6,6.55){\large $2 \, \ell_0$}
\put(3.5,7.4){\large $ \theta_Y$}
\put(5.5,7.45){\large $x$}
\put(5,7.96){\large $z$}
\put(1.2,7.45){\large $h_0$}
\end{picture}
\end{center}
\caption{(a) Schematic view of the system: the initial rectangle has width $2 \ell_0$ and height $h_0$, while the final cylindrical cap has a contact line radius $r$ and a contact angle $\theta_Y$.
(b) Series of AFM height profiles displaying the temporal evolution of an initially rectangular PS thin film towards a cylindrical shape ($M_\mathrm{w} = 34$ kg/mol, $T = 140^\circ$C). In the first four profiles the contact line is stationary (within the resolution of the scan), then the dewetting transition takes place and the line recedes. Eventually the coalescence of the two sides leads to a cylindrical cap. Note that the vertical axis is stretched for clarity and the system is invariant in the third direction. }
\label{fig:stripe}
\end{figure}
Figure~\ref{fig:stripe} displays a series of profiles corresponding to the temporal evolution of a rectangular interface (the system is invariant in the third dimension) having initial width $2 \ell_0 = 40 \; \mathrm{\mu}$m and a film thickness $h_0 = 120 \; $nm. In the first measurement, recorded after 2 min of annealing, we observe that the high Laplace pressure in the corners is rapidly relaxed, which gives rise to the formation of a pair of bumps. Note that the vertical axis in the AFM profiles has been stretched in order to have a clear visualization of the details of the interface; thus, the actual slopes of the liquid interface are very small.
In the following profiles (scanned after 5, 15 and 50 min of annealing, blue lines) the maxima of the bumps slowly move towards the center of the system while both bumps become broader and their height remains constant. We note that for these early-time profiles no displacement of the contact line is apparent in the AFM data\footnote{Even for the smallest scan sizes of $2 \times 2$\,$\mu m$ and in the presence of a unique reference point, such as a defect on the substrate, no significant movement of the contact line has been detected. Nevertheless, given the limited lateral resolution of the AFM, a displacement of the contact line on the scale of a few nm cannot be safely excluded.}.
For the next profile, at $t= 90$ min, it is clearly visible that both contact lines have receded. We also observe that the bumps have grown due to the accumulation of the liquid displaced by the receding fronts. The retraction of the contact lines continues in the successive scans and the liquid keeps accumulating in the rims that become higher and larger. The velocity of the contact lines in this stage is roughly constant, in agreement with earlier observations by \citet{redon1991dynamics} in the presence of no-slip boundary conditions.
Around $ t \simeq 700$\,min the front retraction reaches the unperturbed region in the middle of the film and the rims start to merge. The portion of positive curvature disappears and the interface slowly converges to a cylindrical cap. The velocity of the contact lines slows down during the merging process. Note that at a more advanced stage of the process the system might eventually undergo a Plateau-Rayleigh instability and lose its invariance in the third dimension.
To summarise, from this experiment three different regimes can be identified: i) the initial stationary contact line (SCL) regime is followed by a dewetting transition, as evidenced by ii) the receding contact line (RCL) regime in which the two sides of the nanostripe retract independently, and eventually by iii) the coalescence regime where the two rims merge to form the cylindrical droplet.
Note that all along the process the shape of the system remains symmetric.
At first glance the apparent absence of early spreading, the fixed position of the line at the beginning, and the subsequent dewetting process may appear counterintuitive and surprising.
However, the perfect parity of the profile in Fig.~\ref{fig:stripe} strongly suggests that these features are general, as opposed to pinning of the contact line on random defects. In order to check the validity of the previous observations, a series of experiments has been carried out involving films with various thicknesses and viscosities. Due to the parity of the profiles, in the next paragraphs we focus on one single edge of the film and discuss in detail the dynamics of the SCL regime and the dewetting transition.
\subsection{Stationary Contact Line Regime}
\begin{figure}[ht]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8.8,6.2)
\put(0.5,0.6){\includegraphics[width=8.2cm]{summary_profiles}}
\put(4.2,0.15){\Large{$x \, \left[ \mathrm{\mu m} \right]$}}
\put(0.15,2.7){\rotatebox{90}{ \Large{$h \, \left[ \mathrm{n m} \right] $ }}}
\put(0.7,5.35){\large 300}
\put(0.7,3.9){\large 200}
\put(0.7,2.45){\large 100}
\put(1.1,1.05){\large 0}
\put(2.4,0.65){\large -10}
\put(4.32,0.65){\large 0}
\put(5.9,0.65){\large 10}
\put(7.65,0.65){\large 20}
\put(7.1,4.13){\small $t \; \left[ \mathrm{min} \right]$}
\put(6.5,4.13){\small 0}
\put(8.2,4.13){\small 90}
\put(7.1,2.36){\small $t \; \left[ \mathrm{min} \right]$}
\put(6.5,2.36){\small 0}
\put(8.1,2.36){\small 140}
\put(7.1,1.2){\small $t \; \left[ \mathrm{min} \right]$}
\put(6.5,1.2){\small 0}
\put(8.1,1.2){\small 400}
\end{picture}
\end{center}
\caption{Early profiles corresponding to the stationary contact line regime in three experiments involving different film thicknesses and viscosities.
The experiments are spaced horizontally by 10 $\mu$m for clarity.
Left (blue) experiment has $h_0 = 280$ nm and $M_\mathrm{w} = 19$ kg/mol at $T = 140\, ^\circ$C,
middle (red) has $h_0 = 130$ nm and $M_\mathrm{w} = 3.2$ kg/mol at $T = 110\, ^\circ$C,
right (green) has $h_0 = 45$ nm and $M_\mathrm{w} = 34$ kg/mol at $T = 140\, ^\circ$C.
}
\label{fig:summary}
\end{figure}
The early evolution of the viscous nanostripe in three different experiments is shown in Fig.~\ref{fig:summary}. All profiles have been recorded before the dewetting transition takes place and corroborate the finding that the contact line is not moving significantly.
The high Laplace pressure of the initial corner creates a bump in the liquid interface. The width of the bump increases in time while it appears that the height is constant in each experiment.
The formation of a bump in the presence of a corner as well as the relaxation of the interfacial slope have been studied in the levelling of a stepped interface in a thin viscous film \citep{mcgraw2012, mcgraw2013}. The relaxation of the rectangular interfaces in Fig.~\ref{fig:summary} can be understood in terms of the flow generated by Laplace pressure gradients. As a consequence of the high viscosity of PS and the small thickness of the film, the Reynolds number of the flow is very small and inertial effects can be neglected. The liquid dynamics can be safely described using the Stokes equation
$\nabla p = \eta \nabla ^2 \mathbf{v}$,
where $\mathbf{v}$ is the liquid velocity and where the pressure $p$ is related to the curvature of the liquid-air interface by the Laplace equation $p = - \gamma \, \partial^2 h / \partial x^2$, which is valid only for a 2D interface within the small-slope approximation.
Following the theoretical framework summarized in \citet{oron1997long}, the Stokes equation can be further simplified introducing the lubrication approximation. The equation governing the evolution of the liquid interface $h(x,t)$, in the presence of a no-slip boundary condition at the solid-liquid interface, a no-shear boundary condition at the free interface, and in the absence of disjoining forces, can be deduced:
\begin{equation}
\frac{\partial h}{\partial t} + \frac{\gamma}{3 \eta} \frac{\partial}{\partial x} \left( h^3 \frac{\partial^3 h}{\partial x^3} \right) = 0 \;.
\label{eq:tfe}
\end{equation}
It can be proven~\citep{huppert1982propagation, aradian2001} that this equation admits self-similar solutions of the form $h(x,t) = h (x / t^{1/4})$. This self-similarity has been verified experimentally for different geometries \citep{mcgraw2012, mcgraw2013, baeumchen2013, chai2014}, all of them being limited to pure liquid-air interfaces. We now check whether this self-similarity holds in the presence of a fixed contact line.
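As an illustration of this self-similar behaviour, the following minimal Python sketch integrates the dimensionless form of Eq.~(\ref{eq:tfe}) (with $\gamma/3\eta$ set to unity) for an initially rectangular film with a stationary contact line, using a crude explicit finite-difference scheme. This is not the algorithm used for the numerical profiles below \citep{bertozzi1998, salez2012numerical}; the grid, time step and the way the pinned line is enforced are ad-hoc choices made here for illustration only.
\begin{verbatim}
import numpy as np

# Crude explicit finite differences for the dimensionless thin-film equation
#   dH/dT + d/dX( H^3 d^3H/dX^3 ) = 0,
# i.e. Eq. (1) with gamma/(3 eta) = 1, on X >= 0, starting from a rectangular
# film H = 1 with the contact line held at X = 0.  Illustration only.

L, N = 24.0, 161
X = np.linspace(0.0, L, N)
dx = X[1] - X[0]
H = np.ones(N)
H[0] = 0.0                                   # contact line: H(0, T) = 0

def dH_dT(H):
    """Conservative update -(Q_{i+1/2} - Q_{i-1/2})/dx with Q = H^3 d^3H/dX^3."""
    Hg = np.concatenate(([H[0]], H, [H[-1]]))   # ghost nodes (dry side / flat far field)
    d3 = (Hg[3:] - 3*Hg[2:-1] + 3*Hg[1:-2] - Hg[:-3]) / dx**3   # d^3H/dX^3 at faces
    Hf = 0.5 * (H[:-1] + H[1:])                 # film thickness at faces
    Q = np.zeros(N + 1)
    Q[1:-1] = Hf**3 * d3
    Q[1] = 0.0                                  # crude pinning: no flux across the line
    return -(Q[1:] - Q[:-1]) / dx

dt = 0.04 * dx**4                # explicit scheme: very small step for stability
times, snapshots, T = (0.25, 0.5, 1.0), {}, 0.0
while T < times[-1]:
    H = H + dt * dH_dT(H)
    T += dt
    for ts in times:
        if ts not in snapshots and T >= ts:
            snapshots[ts] = H.copy()

# plotting H against xi = X / T^(1/4) should collapse the three snapshots
xi = np.linspace(0.0, 6.0, 200)
rescaled = [np.interp(xi, X / ts**0.25, snapshots[ts]) for ts in times]
print("max deviation between rescaled profiles:",
      max(float(np.max(np.abs(r - rescaled[0]))) for r in rescaled[1:]))
\end{verbatim}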
\begin{figure*}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(17.5,6.45)
\put(0.6,0.7){\includegraphics[height=5.5cm]{summary_selfsim}}
\put(9.6,0.7){\includegraphics[height=5.5cm]{universal_profile}}
\put(0.1,6.11){\Large (a)}
\put(9.1,6.11){\Large (b)}
\put(0.1,2.9){\rotatebox{90}{\Large{$h \, [ \mathrm{nm}]$ }}}
\put(3,0.2){\Large{$x / t^{1/4} \, [ \, \mu \mathrm{m / min}^{1/4}\, ]$}}
\put(2.,0.75){\large -10}
\put(4.4,0.75){\large 0}
\put(6.5,0.75){\large 10}
\put(0.7,5.4){\large 300}
\put(0.7,4){\large 200}
\put(0.7,2.55){\large 100}
\put(1.1,1.15){\large 0}
\put(11.9,0.15){\Large{$ \left( 3\eta / \gamma \; {h_0}^3 \right)^{1/4} \, x / t^{1/4} $}}
\put(9.1,3.12){\rotatebox{90} {\Large{$h / h_0$ }}}
\put(9.65,5.75){\large 1.2}
\put(9.95,4.96){\large 1}
\put(9.65,4.2){\large 0.8}
\put(9.65,3.45){\large 0.6}
\put(9.65,2.7){\large 0.4}
\put(9.65,1.95){\large 0.2}
\put(9.95,1.2){\large 0}
\put(11.55,0.75){\large 0}
\put(13.8,0.75){\large 10}
\put(16.1,0.75){\large 20}
\put(15.53,2.18){\small Experiments}
\put(15.53,1.7){\small Numerics}
\put(6.9,5.27){\small $t \; \left[ \mathrm{min} \right]$}
\put(6.3,5.27){\small 0}
\put(8.,5.27){\small 90}
\put(6.9,3.7){\small $t \; \left[ \mathrm{min} \right]$}
\put(6.3,3.7){\small 0}
\put(7.9,3.7){\small 140}
\put(6.9,2.25){\small $t \; \left[ \mathrm{min} \right]$}
\put(6.3,2.25){\small 0}
\put(7.9,2.25){\small 400}
\end{picture}
\end{center}
\caption{(a) Self-similar profiles for each of the experiments shown in Fig.~\ref{fig:summary} as obtained by plotting the vertical position $h$ of the interface as a function of $x / t^{1/4}$. The experiments are shifted horizontally for clarity.
(b) A universal profile appears when the vertical axis is rescaled by $h_0$ and the horizontal axis is non-dimensionalized by applying a lateral stretching in each experiment. The numerical solution (orange dashed line) is in excellent agreement with the experiments.}
\label{fig:universal_profile}
\end{figure*}
In Fig.~\ref{fig:universal_profile} (a), the horizontal axis is rescaled by applying the transformation $x \to x / t ^{1/4}$ and we observe that in each experiment the different profiles collapse on a single curve. This collapse demonstrates that the self-similar dynamics $h (x,t) = h (x / t^{1/4})$ is valid even for an interface with contact line, provided that the system is in the SCL regime where the line does not move.
In the following the experimental profiles are compared to the numerical solution of the thin film equation, see Eq.~(\ref{eq:tfe}). The numerical profile is computed using a finite difference method \citep{bertozzi1998, salez2012numerical}. The dimensionless self-similar variable $X / T^{1/4}$ is introduced, where $X = x / \ell_0 $ and $T = \gamma \, t {h_0}^3 / (3 \eta {\ell_0}^4)$, and thus:
\begin{equation}
\frac{X}{T^{1/4}} = \left( \frac{3 \eta}{\gamma \, {h_0}^3} \right)^{1/4 }\frac{x}{t^{1/4}} \;.
\label{eq:num_vs_exp}
\end{equation}
In addition, the condition that the contact line is fixed at $X=0$ is enforced in the algorithm.
Thus, a general picture can be obtained by rescaling the vertical axis of the experimental profiles with the initial thickness, i.e.\ $h \to h / h_0$, and by stretching the horizontal axis $ x \to (3 \eta / \gamma \, {h_0}^3)^{1/4 } \, x / t^{1/4}$. This lateral stretch is a fitting parameter that depends on the experiment and has a clear physical interpretation \citep{mcgraw2013, salez2012numerical}. Applying this rescaling to all experimental profiles leads to a collapse of all profiles onto a single master curve (Fig.~\ref{fig:universal_profile} (b)). This curve represents a universal profile of the SCL regime valid for all the parameters involved in these experiments (annealing time, film thickness, molecular weight and temperature). Note that the rescaled height of the bump is equal to $22 \pm 2 \%$ of the initial thickness of the film.
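As an illustration of the conversion implied by Eq.~(\ref{eq:num_vs_exp}) (our own back-of-the-envelope estimate; the viscosities of Tab.~\ref{tab:viscosities} were themselves obtained from these fits, so this is a consistency illustration rather than an independent check), the lateral stretch factors for the three experiments of Fig.~\ref{fig:summary} can be evaluated as follows:
\begin{verbatim}
# Lateral stretch (3*eta/(gamma*h0**3))**(1/4) of the rescaling relation in
# the text, for the three experiments of the summary figure, using
# gamma = 30.8 mN/m and the viscosities of Tab. 1 (illustration, not a fit).
gamma = 30.8e-3                                    # N/m
for h0_nm, eta in ((280.0, 7.8e3), (130.0, 5.8e3), (45.0, 2.9e4)):   # nm, Pa s
    h0 = h0_nm * 1e-9                              # m
    s = (3.0 * eta / (gamma * h0**3)) ** 0.25      # s^(1/4) per metre
    print(h0_nm, "nm:", round(s * 1e-6 / 60.0**0.25, 2), "min^(1/4) per micron")
\end{verbatim}
The resulting stretches, of order $1$--$5\,\mathrm{min^{1/4}/\mu m}$, map the few-$\mathrm{\mu m/min^{1/4}}$ extent of the self-similar profiles in Fig.~\ref{fig:universal_profile} (a) onto the $\mathcal{O}(10)$ range of the dimensionless variable in Fig.~\ref{fig:universal_profile} (b).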
The excellent agreement between the experiments and the thin film model (Fig.~\ref{fig:universal_profile} (b)) suggests that the thin film equation with a fixed contact line is sufficient to capture the physics of the SCL regime. The values of the resulting fitting parameters are used to compute the capillary velocity $\gamma / \eta$. Based on $\gamma = 30.8$ mN/m~\citep{brandrup1999polymer}, the viscosity $\eta$ of the PS is also evaluated (see Tab.~\ref{tab:viscosities}) and is found to be in excellent agreement with the values reported in the literature \citep{brandrup1999polymer, rubinstein2003polymer}.
\begin{table}[b!]
\begin{center}
\begin{tabular}{| c | c | c | }
\hline
$M_\mathrm{w} \left[ \mathrm{kg / mol} \right]$ & $\quad T \; \left[ ^\circ \mathrm{C} \right] \quad $ & $\quad \eta \; \left[ \mathrm{Pa \; s} \right] \quad$ \\
\hline
3.2 & 110 & $5.8 \times 10^3$ \\
19 & 140 & $7.8 \times 10^3$ \\
19 & 150 & $1.6 \times 10^3$ \\
34 & 140 & $2.9 \times 10^4$ \\
34 & 150 & $3.1 \times 10^3$ \\
\hline
\end{tabular}
\end{center}
\caption{Viscosity $\eta$ of PS as a function of molecular weight $M_\mathrm{w}$ and annealing temperature $T$. Values of the viscosity are obtained by fitting the experimental self-similar profile of each experiment to the numerical solution, computing the capillary velocity $\gamma/\eta$ through Eq.~(\ref{eq:num_vs_exp}) and assuming $\gamma = 30.8\;$mN/m~\citep{brandrup1999polymer}.}
\label{tab:viscosities}
\end{table}
\subsection{Dewetting transition}
Self-similarity breaks down as soon as the dewetting transition takes place. Indeed, during the receding motion of the contact line the liquid accumulates in the region of the bump, causing its growth in width and height (Fig.~\ref{fig:dewetting} (a)). The occurrence of dewetting is a common observation in all the experiments.
To shed light on the time scale of the onset of dewetting, the entire evolution of the contact angle $\theta$ between liquid and solid is monitored. For each profile close-up AFM scans (typically $2 - 4 \, \mu$m) around the contact line are performed. The value of the contact angle $\theta$ is then calculated from these profiles by fitting the shape of the liquid interface with a circular arc, as illustrated in the inset of Fig.~\ref{fig:dewetting} (a), and taking the tangent to the circle at the contact line. The equilibrium contact angle $\theta_Y$ for PS on a Si wafer has also been evaluated by annealing the film for a very long time until isolated droplets form. The value $\theta_Y = 10^\circ \pm 2^\circ$ has been obtained, which is in close agreement with the value reported earlier in \citet{seemann2001polystyrene} for the same system.
Fig.~\ref{fig:dewetting} (b) shows a typical evolution of the contact angle.
During the SCL regime the angle $\theta$ monotonically decreases due to the relaxation of the interface and to the stationary position of the three-phase contact line. Interestingly, the SCL regime extends even when the angle is smaller than $\theta_\mathrm{Y}$.
Eventually, the receding motion of the line takes place when the angle reaches a critical value $\theta^* < \theta_\mathrm{Y}$. As soon as the contact line retracts, $\theta$ rapidly increases to a roughly constant receding contact angle of the moving front.
In all the experiments $\theta$ decreases following a $t^{-1/4}$ power law (see Fig.~\ref{fig:dewetting} (b), inset) in the SCL regime. This power law is a direct consequence of the self-similar evolution of the liquid interface. Indeed, as shown above, the horizontal length scales evolve as $\sim t^{1/4}$, which implies for a constant vertical length scale $h_0$ that $\tan\theta\sim t^{-1/4}$, and thus $\theta\sim t^{-1/4}$ at small angles. The $\theta \sim t^{-1/4}$ relation leads to a fast decrease of the angle at short time, so that it generally drops to small values in a few minutes.
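As a concrete consequence of this scaling, an increase of the annealing time by a factor of $16$ only halves the contact angle in the SCL regime.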
\begin{figure}[t]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8.8,11)
\put(1.2,0.7){\includegraphics[width=.868\columnwidth]{dewet}}
\put(4.2,0.13){\Large $t \; \left[ \mathrm{min} \right]$}
\put(0.3,2.55){\rotatebox{90}{\Large{$\theta \, \left[ ^\circ \right]$ }}}
\put(4.1,3.15){\small $ t \; \left[ \mathrm{min} \right]$ }
\put(2.22,3.65){\rotatebox{90}{\small{$\theta \, \left[ ^\circ \right]$ }}}
\put(1,5){\large 20}
\put(1,3.9){\large 15}
\put(1,2.85){\large 10}
\put(1.2,1.8){\large 5}
\put(1.2,0.6){\large 0}
\put(2.9,0.52){\large 50}
\put(4.38,0.52){\large 100}
\put(5.95,0.52){\large 150}
\put(7.55,0.52){\large 200}
\put(4.3,5.6){\Large $x \; \left[ \mathrm{\mu m} \right]$}
\put(0.1,8.3){\rotatebox{90}{\Large{$h/h_0$ }}}
\put(1.9,6){\large 0}
\put(3.93,6){\large 5}
\put(5.9,6){\large 10}
\put(7.98,6){\large 15}
\put(0.65,10.07){\large 1.25}
\put(1.1,9.3){\large 1}
\put(0.65,8.55){\large 0.75}
\put(0.8,7.85){\large 0.5}
\put(0.65,7.1){\large 0.25}
\put(1.1,6.35){\large 0}
\put(3.8,1.1){\large SCL}
\put(7.3,1.1){\large RCL}
\put(7.3,7.){\small $ x \; \left[ \mathrm{\mu m} \right]$ }
\put(4.75,7.6){\rotatebox{90}{\small $ h \; \left[ \mathrm{n m} \right]$}}
\put(6.4,7.1){\large $\theta$}
\put(5.95,8.65){\footnotesize circular cap fit}
\put(5.95,8.35){\footnotesize data}
\put(6.7,10.03){\small $t \; \left[ \mathrm{min} \right]$}
\put(5.8,10.03){\small 0}
\put(8.05,10.03){\small 220}
\put(0.1,10.6){\Large (a)}
\put(0.1,4.9){\Large (b)}
\end{picture}
\end{center}
\caption{
(a) Profiles corresponding to the stationary contact line (before 150\,min) and to the receding contact line (after 150\,min) regimes. As soon as the three-phase contact line recedes, the height of the bump grows. Here, $h_0 = 120$\,nm and $M_\mathrm{w} = 34$\,kg/mol at $T = 140\,^\circ$C. The inset displays the profile recorded close to the contact line ($t= 5$\,min, black points) and a circular arc fit (red line) used to calculate the contact angle.
(b) Temporal evolution of the contact angle for the experiment shown in (a). Filled circles correspond to the SCL regime and open circles to the RCL regime. The error bar of the contact angle in the SCL regime is typically $\pm\, 0.3^\circ$ and, thus, smaller than the symbol size.}
\label{fig:dewetting}
\end{figure}
Let us now introduce the dimensionless time $\tau = t\gamma / (h_0\eta)$. Using the values of $h_0$ recorded with the AFM and those of $\eta/\gamma $ extracted from the comparison between the self-similar profile and the numerical solution, $\theta$ is plotted as a function of $\tau$ (see Fig.~\ref{fig:universal_angles}). The values of the contact angles for all the experiments collapse on the same master curve, a significant result because the experiments involve different heights of the film and different capillary velocities, since the viscosity changes by more than one order of magnitude. An important observation is the fact that in all the experiments the retraction of the line appears precisely at the same value of the contact angle $\theta^* = 4.5^\circ \pm 0.5^\circ<\theta_\mathrm{Y}$. From the master curve we, hence, deduce that the dimensionless dewetting time is also universal and equal to $\tau^* \simeq 10^5$.
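As an order-of-magnitude illustration (our own estimate, combining $\tau^* \simeq 10^5$ with the values of Tab.~\ref{tab:viscosities}): for the experiment of Fig.~\ref{fig:dewetting} ($h_0 = 120$\,nm, $M_\mathrm{w} = 34$\,kg/mol at $T = 140\,^\circ$C, i.e.\ $\eta \approx 2.9\times 10^4$\,Pa\,s), one obtains a dimensional dewetting time $t^* = \tau^* h_0 \eta / \gamma \approx 10^5 \times (1.2\times 10^{-7}\,\mathrm{m})\,(2.9\times 10^4\,\mathrm{Pa\,s})/(3.08\times 10^{-2}\,\mathrm{N/m}) \approx 1.1\times 10^4\,\mathrm{s} \approx 190$\,min, of the same order as the onset of retraction around $150$\,min visible in Fig.~\ref{fig:dewetting}.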
After the dewetting transition an exponential relaxation of the contact angle might be expected, although the experimental uncertainty of the contact angle measurement together with the time resolution here cannot provide a precise validation.
Note that the rescaling with $\tau$ satisfied by the SCL regime does not necessarily apply for the RCL regime.
\section{Discussion}
From these results it appears that the stationary position of the contact line followed by the onset of dewetting are general and robust features and cannot be attributed to the presence of randomly distributed pinning sites, i.e.\ local topographical defects and / or chemical inhomogeneities: for all seven experiments and all annealing times, the invariance of the profiles in the third dimension as well as the perfect symmetry of the profiles throughout the relaxation process (see Fig.~\ref{fig:stripe}) hold. The universal transition between both regimes is reproducible and appears to be independent of the liquid parameters after proper rescaling of the relaxation dynamics. In the following paragraph, we briefly discuss the influence of the substrate on the dynamics.
Aside from the general features outlined above, a prominent and interesting observation is the fact that a stationary position of the contact line is observed for values of the contact angle smaller than the equilibrium one, but larger than the critical one, i.e.\ $\theta^* <\theta< \theta_\mathrm{Y}$.
This observation has been confirmed even for small scan sizes in the proximity of the contact line and in the presence of small defects that can be used as reference points, although the limited lateral resolution of the AFM cannot accurately detect displacements on the order of a few nanometers.
The dynamics of thin liquid films is governed by short-range as well as long-range forces, e.g.\ originating from van der Waals interactions, between the substrate and the liquid. In principle these long-range forces are negligible for a film thickness larger than $\sim 30$\,nm \citep{seemann2001dewetting}, which is always the case in the experiments discussed here. However, in the region close to the three-phase contact line the thickness of the film decreases monotonically to zero and the extent of the zone where $h < 30$\,nm grows as soon as the contact angle decreases. In previous work it has already been shown that long-range intermolecular forces might affect the shape of the liquid interface close to the contact line of nanometric droplets \citep{seemann2001polystyrene}. In order to test whether long-range forces play a role in this zone, and in particular whether they affect the onset of the retraction of the line, experiments on Si wafers exhibiting a thick ($150$\,nm) oxide layer have been carried out. This choice is motivated by the fact that the presence of a thick oxide layer considerably changes the effective interface potential comprising short- and long-range forces and represents a well-established model system~\citep{seemann2001dewetting, seemann2001polystyrene}. One separate experiment has been performed on Si wafers with a thick oxide layer (see Fig.~\ref{fig:universal_angles}), and it has not shown any significant difference with respect to the experiments on a native Si oxide layer: the values of the contact angles for this experiment perfectly collapse on the master curve in Fig.~\ref{fig:universal_angles} and the contact angle at the transition is preserved. Hence, we conclude that long-range forces affect neither the relaxation dynamics nor the onset of motion of the contact line. A possible alternative explanation would be the presence of a contact angle hysteresis related to a uniform intrinsic pinning potential of the polymer molecules on the Si wafers, yet to be substantiated in future experimental work.
\begin{figure}[t]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8.8,5.8)
\put(0.75,0.55){\includegraphics[width=.92\columnwidth]{universal_angles}}
\put(3.9,0.15){\Large{$\tau = t \, \gamma / (h_0 \eta)$}}
\put(0.1,2.8){\rotatebox{90}{\Large{$\theta \, \left[ ^\circ \right]$ }}}
\put(1.55,2.7){\scriptsize $M_w = 34$k, $T = 140^\circ$, $h_0 = 120$ nm}
\put(1.55,2.4){\scriptsize $M_w = 3.2$k, $T = 110^\circ$, $h_0 = 130$ nm}
\put(1.55,2.1){\scriptsize $M_w = 19$k, $T = 140^\circ$, $h_0 = 280$ nm}
\put(1.55,1.8){\scriptsize $M_w = 19$k, $T = 150^\circ$, $h_0 = 260$ nm}
\put(1.55,1.5){\scriptsize $M_w = 19$k, $T = 140^\circ$, $h_0 = 125$ nm}
\put(1.55,1.2){\scriptsize $M_w = 34$k, $T = 150^\circ$, $h_0 = 125$ nm}
\put(3.3,5.37){\scriptsize $M_w = 34$k, $T = 140^\circ$, $h_0 = 125$ nm (thick SiO$_\mathrm{x}$)}
\put(0.65,5.45){\large 18}
\put(0.65,4.77){\large 14}
\put(0.65,4.35){\large 12}
\put(0.65,3.87){\large 10}
\put(0.85,3.28){\large 8}
\put(0.85,2.53){\large 6}
\put(0.85,1.45){\large 4}
\put(1.82,0.6){\large 1}
\put(3.65,0.6){\large 5}
\put(4.35,0.6){\large 10}
\put(6.2,0.6){\large 50}
\put(6.88,0.6){\large 100}
\put(7.88,0.59){\large $\times 10^3$}
\put(3.97,4.3){\small 4}
\put(4.35,3.98){\small 1}
\end{picture}
\end{center}
\caption{Contact angle $\theta$ plotted as a function of dimensionless time $\tau$ for all experiments. Data for the SCL regime are plotted with filled symbols while data for RCL regime are plotted with open symbols.
All the data for the SCL regime follow the $\tau ^{-1/4}$ power law and collapse on the same master curve.
The transition to dewetting takes place at the same values of $\theta$ and $\tau$ for all the experiments.}
\label{fig:universal_angles}
\end{figure}
The occurrence of dewetting, despite the fact that the initial contact angle is much larger than the equilibrium one, is a common feature in all the experiments and might appear counterintuitive at first glance. Indeed in a liquid interface with constant curvature (a spherical drop, or a cylindrical drop in the 2D case) the advancing or receding motion of the contact line can be ultimately predicted from the value of the contact angle: In particular the situation $\theta_0 > \theta_Y$, which characterizes all our experiments, would have led to the monotonic spreading of the liquid. In fact it is easy to prove that a spherical or cylindrical interface in the absence of gravity reaches its minimum of energy when the forces at the contact line are at equilibrium, which is precisely the foundation of Young's construction of the equilibrium angle. However, in the more general case of a non-constant curvature interface, the equilibrium of the forces at the contact line given by $\theta = \theta_Y$ does not correspond to the minimum of the energy anymore. The system has to adjust the contact angle \textit{and} to relax the liquid interface at the same time in order to achieve the global minimum and, thus, it is not possible to predict spreading or dewetting \textit{ab initio} only from the value of the contact angle.
A simple geometrical argument based on the comparison of the initial state and the final state of the system can be introduced to anticipate the occurrence of spreading or dewetting. In the 2D configuration, the initial state is a rectangular interface having width $2 \ell_0$ and thickness $h_0$ (see Fig.~\ref{fig:stripe} (a)). The final state is a circular cap having contact line radius $r$ and equilibrium contact angle $\theta_Y$. Invoking volume (area, in 2D) conservation between the two states, the contact line radius can be deduced as a function of $h_0$, $\ell_0$ and $\theta_Y$. We define the wetting parameter $\mathcal{W} = {r}/{\ell_0}$ and show that:
\begin{equation}
\mathcal{W} = \sqrt{\frac{2h_0}{\ell_0}} \frac{\sin \theta_Y}{\sqrt{\theta_Y - \sin \theta_Y \cos \theta_Y}} \;.
\end{equation}
A spreading or dewetting situation is triggered by $\mathcal{W} > 1$ and $\mathcal{W} < 1$, respectively. Note that $\mathcal{W}$ only depends on the aspect ratio of the initial stripe $\ell_0 / h_0$ and the equilibrium contact angle. For the experiment illustrated in Fig.~\ref{fig:stripe} (b), the aspect ratio is $\ell_0/h_0 \simeq 167$ and the wetting parameter that corresponds to the experimental value $\theta_Y = 10^\circ$ is $\mathcal{W} = 0.32$. Hence, dewetting appears to be an inevitable consequence of the initial geometry.
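A minimal numerical evaluation of the expression for $\mathcal{W}$ above (the Python helper below is ours) reproduces this value:
\begin{verbatim}
import numpy as np

# Wetting parameter W for the nanostripe geometry: aspect ratio
# l0/h0 = 167 and equilibrium contact angle theta_Y = 10 degrees.
def wetting_parameter(h0_over_l0, theta_Y_rad):
    t = theta_Y_rad
    return np.sqrt(2.0 * h0_over_l0) * np.sin(t) / np.sqrt(t - np.sin(t) * np.cos(t))

print(wetting_parameter(1.0 / 167.0, np.deg2rad(10.0)))   # ~0.32 < 1  ->  dewetting
\end{verbatim}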
\section{Conclusions}
In this article we have studied the relaxation dynamics of the contact angle between a viscous liquid and a smooth solid substrate. The temporal evolution of a liquid nanofilm in the presence of three-phase contact lines has been monitored for different film thicknesses and liquid viscosities. In all experiments the initial regime, defined by a stationary position of the contact line, is followed by a second one in which dewetting takes place. We have shown that the stationary contact line regime can be described in terms of levelling dynamics in which the liquid profile exhibits a self-similar evolution in excellent agreement with a numerical solution of the thin film equation. In this regime the energy of the system diminishes due to the relaxation of the curvature of the interface and the contact angle follows a characteristic power law. The self-similarity breaks down as soon as the contact line retracts. The transition between stationary and receding regimes of the contact line is triggered at a critical angle $\theta^* < \theta_\mathrm{Y}$ that is independent of the molecular weight of the polymer, viscosity, film thickness as well as long-range interactions. A universal transition has then emerged in terms of a characteristic dimensionless time.
In future work, we envision exploring whether and how the robust features observed in our experiments can be generalized to different types of substrates exhibiting different surface energies and/or a variation of the hydrodynamic boundary condition between liquid and solid. These experiments might provide new fundamental insights into the contact-line dynamics at the nanoscale.
\section{Acknowledgement}
The authors would like to thank Kari Dalnoki-Veress, Joshua D. McGraw and Stephan Herminghaus for insightful discussions. The German Research Foundation (DFG) is acknowledged for financial support under grant BA 3406/2.\\
\section{Introduction}
Let $S$ be a topological space and denote by ${{\mathcal C}}_4={{\mathcal C}}_4(S)$ the space of quadruples of pairwise distinct points of $S$. Let $G$ be a group of homeomorphisms of $S$ acting diagonally on ${{\mathcal C}}_4$ from the left: for each $g\in G$ and ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$,
$$
(g,{{\mathfrak p}})\mapsto g({{\mathfrak p}})=(g(p_1),g(p_2),g(p_3),g(p_4)).
$$
The $G$-{\it configuration space of quadruples of pairwise distinct points in $S$} is the quotient space ${{\mathcal F}}_4={{\mathcal F}}_4(S)$ of ${{\mathcal C}}_4$ cut by the action of $G$. In this general setting, it is not obvious what kind of object ${{\mathcal F}}_4$ is; however, there are tractable cases. If for instance $S$ is a smooth manifold of dimension $s$ and $G$ is a Lie subgroup of diffeomorphisms of $S$ of dimension $g$, then ${{\mathcal F}}_4$ carries the structure of a smooth manifold of dimension $4s-g$ if the diagonal action is proper and free.
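For instance, anticipating the cases treated below, for $S=S^1$ ($s=1$) and $G={{\mathcal M}}(S^1)$ ($g=3$) this count gives $4\cdot 1-3=1$, while for $S={{\mathbb T}}=S^1\times S^1$ ($s=2$) and $G={{\mathcal M}}({{\mathbb T}})$ ($g=6$) it gives $4\cdot 2-6=2$; these numbers agree with the dimensions of ${{\mathcal F}}_4(S^1)$ and of the principal part ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ obtained in the sequel.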
The particular cases when $S$ are spheres bounding hyperbolic spaces and $G$ are the sets ${{\mathcal M}}(S)$ of M\"obius transformations acting on $S$ are prototypical; in some of these cases there exist neat parametrisations of the configuration spaces by cross-ratios. The most illustrative (and simplest) example of all is that of the ${{\mathcal M}}(S^1)$-configuration space ${{\mathcal F}}_4(S^1)$ of quadruples of pairwise distinct points in the unit circle $S^1$. What follows is classical and well-known, see for instance \cite{Be}, but we include it here both for clarity as well as for setting up the notation we shall use throughout the paper.
Let $S^1=\overline{{{\mathbb R}}}$, where $\overline{{{\mathbb R}}}={{\mathbb R}}\cup\{\infty\}$, be the unit circle. Recall that if ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ $\in{{\mathcal C}}_4(S^1)$ then its real cross-ratio is defined by
$$
{{\rm X}}({{\mathfrak x}})=[x_1,x_2,x_3,x_4]=\frac{(x_4-x_2)(x_3-x_1)}{(x_4-x_1)(x_3-x_2)},
$$
where we agree that if one of the points is $\infty$ then $\infty:\infty=1$. The set of M\"obius transformations ${{\mathcal M}}(S^1)$ of $S^1$ comprises maps $g:S^1\to S^1$ of the form
$$
g(x)=\frac{ax+b}{cx+d},\quad x\in\overline{{{\mathbb R}}},
$$
where the matrix
$$
A=\left(\begin{matrix}
a&b\\
c&d\end{matrix}\right)
$$
is in ${\rm PSL}(2,{{\mathbb R}})={\rm SL}(2,{{\mathbb R}})/\{\pm I\}$. The cross-ratio ${\rm X}$ is invariant under the diagonal action of ${{\mathcal M}}(S^1)$ in ${{\mathcal C}}_4={{\mathcal C}}_4(S^1)$: If $g\in{{\mathcal M}}(S^1)$, then
$$
{{\rm X}}(g({{\mathfrak x}}))=[g(x_1),g(x_2),g(x_3),g(x_4)]=[x_1,x_2,x_3,x_4]={{\rm X}}({{\mathfrak x}}),
$$
for every ${{\mathfrak x}}\in{{\mathcal C}}_4$. Also, ${{\rm X}}$ takes values in ${{\mathbb R}}\setminus\{0,1\}$ and for each quadruple $(x_1,x_2,x_3,x_4)$ satisfies the standard symmetry properties:
\begin{eqnarray*}
&&
\noindent ({\rm S1})\quad {{\rm X}}(x_1,x_2,x_3,x_4)={{\rm X}}(x_2,x_1,x_4,x_3)={{\rm X}}(x_3,x_4,x_1,x_2)=
{{\rm X}}(x_4,x_3,x_2,x_1),\\
&&
\noindent({\rm S2})\quad{{\rm X}}(x_1,x_2,x_3,x_4)\cdot {{\rm X}}(x_1,x_2,x_4,x_3)=1,\\
&&
\noindent({\rm S3})\quad
{{\rm X}}(x_1,x_2,x_3,x_4)\cdot {{\rm X}}(x_1,x_4,x_2,x_3)\cdot {{\rm X}}(x_1,x_3,x_4,x_2)=-1.
\end{eqnarray*}
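These symmetries are elementary to verify; as a quick numerical sanity check (the Python helper names are ours, and the point at infinity is omitted), one may run:
\begin{verbatim}
from fractions import Fraction
import random

# Check of the symmetries (S1)-(S3) on random rational quadruples,
# using exact arithmetic (the point at infinity is omitted).
def cross_ratio(x1, x2, x3, x4):
    return Fraction(x4 - x2) * (x3 - x1) / ((x4 - x1) * (x3 - x2))

random.seed(0)
for _ in range(200):
    x1, x2, x3, x4 = random.sample(range(-50, 50), 4)
    X = cross_ratio(x1, x2, x3, x4)
    assert X == cross_ratio(x2, x1, x4, x3) == cross_ratio(x3, x4, x1, x2) \
             == cross_ratio(x4, x3, x2, x1)                                    # (S1)
    assert X * cross_ratio(x1, x2, x4, x3) == 1                                # (S2)
    assert X * cross_ratio(x1, x4, x2, x3) * cross_ratio(x1, x3, x4, x2) == -1 # (S3)
\end{verbatim}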
Hence all 24 real cross-ratios corresponding to a given quadruple ${{\mathfrak x}}$ are functions of ${{\rm X}}({{\mathfrak x}})=[x_1,x_2,x_3,x_4]$. We let ${{\mathcal M}}(S^1)$ act diagonally on ${{\mathcal C}}_4(S^1)$ and let ${{\mathcal F}}_4={{\mathcal F}}_4(S^1)$ be the ${{\mathcal M}}(S^1)$-configuration space of quadruples of pairwise distinct points in $S^1$. From the invariance of the cross-ratio it follows that the map
$$
{{\mathcal G}}:{{\mathcal F}}_4(S^1)\ni[{{\mathfrak x}}]\mapsto {{\rm X}}({{\mathfrak x}})\in{{\mathbb R}}\setminus\{0,1\},
$$
is well-defined. Also ${{\mathcal G}}$ is surjective; to see this, recall that there is a triply-transitive action of ${{\mathcal M}}(S^1)$ on $S^1$; if $(x_1,x_2,x_3)$ is a triple of pairwise distinct points in $S^1$, then there is a unique $f\in{{\mathcal M}}(S^1)$ such that
$$
f(x_1)=0,\quad f(x_2)=\infty,\quad f(x_3)=1.
$$
Recall at this point that actually $f$ is given in terms of cross-ratios:
$$
[0,\infty,f(x),1]=[x_1,x_2,x,x_3].
$$
Hence if $x\in{{\mathbb R}}\setminus\{0,1\}$, then $[{{\mathfrak x}}]\mapsto x$, where ${{\mathfrak x}}=(0,\infty,x,1)$. Finally, ${{\mathcal G}}$ is injective: If ${{\mathfrak x}}$ and ${{\mathfrak x}}'$ are in ${{\mathcal C}}_4$ and ${{\rm X}}({{\mathfrak x}})={{\rm X}}({{\mathfrak x}}')=x$, then there exists a $g\in{{\mathcal M}}(S^1)$ such that ${{\mathfrak x}}=g({{\mathfrak x}}')$. All the above discussion boils down to the well-known fact that the configuration space ${{\mathcal F}}_4={{\mathcal F}}_4(S^1)$ of quadruples of pairwise distinct points in $S^1$ is isomorphic to ${{\mathbb R}}\setminus\{0,1\}$ and therefore it inherits the structure of a one-dimensional disconnected real manifold. Moreover, the following possibilities occur for the relative position of the points $x_i$ of ${{\mathfrak x}}$ on the circle:
\begin{enumerate}
\item $x_1,x_2$ separate $x_3,x_4$. This happens if and only if ${{\rm X}}({{\mathfrak x}})<0$.
\item $x_1,x_3$ separate $x_2,x_4$. This happens if and only if ${{\rm X}}({{\mathfrak x}})>1$.
\item $x_1,x_4$ separate $x_2,x_3$. This happens if and only if $0<{{\rm X}}({{\mathfrak x}})<1$.
\end{enumerate}
Each of the cases (1), (2) and (3) corresponds to one of the connected components of ${{\mathcal F}}_4$.
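As a small numerical illustration of the invariance of ${{\rm X}}$ and of the trichotomy above (the code and the particular M\"obius map are ours; quadruples containing $\infty$ are avoided):
\begin{verbatim}
from fractions import Fraction

# The real cross-ratio is invariant under a Moebius map, and the value of X
# identifies the connected component, cf. cases (1)-(3) above.
def cross_ratio(x1, x2, x3, x4):
    return Fraction(x4 - x2) * (x3 - x1) / ((x4 - x1) * (x3 - x2))

def moebius(a, b, c, d, x):                 # x -> (a x + b)/(c x + d), with a d - b c = 1
    return Fraction(a * x + b, 1) / (c * x + d)

x = (-7, -1, 3, 10)                         # a quadruple of pairwise distinct reals
g = lambda t: moebius(2, 3, 1, 2, t)        # 2*2 - 3*1 = 1, so g lies in PSL(2,R)
X, gX = cross_ratio(*x), cross_ratio(*(g(t) for t in x))
assert X == gX                              # invariance under the diagonal action
print(X, ":", "X < 0" if X < 0 else ("0 < X < 1" if X < 1 else "X > 1"))
\end{verbatim}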
In an analogous manner, see again \cite{Be}, the ${\rm PSL}(2,{{\mathbb C}})$-configuration space ${{\mathcal F}}_4(S^2)$ of quadruples of pairwise distinct points in the sphere $S^2$ is isomorphic to ${{\mathbb C}}\setminus\{0,1\}$ and therefore inherits the structure of a one-dimensional complex manifold.
The case of the ${\rm PU}(2,1)$-configuration space ${{\mathcal F}}_4(S^3)$ of quadruples of pairwise distinct points in $S^3$ is much harder but it is treated in the same spirit, see \cite{FP}: Using complex cross-ratios we find that besides a subset of lower dimension, ${{\mathcal F}}_4(S^3)$ is isomorphic to $({{\mathbb C}}\setminus{{\mathbb R}})\times{{\mathbb C}} P^1$, a two-dimensional disconnected complex manifold. Finally, recent treatments for the cases of ${\rm PSp}(1,1)$ and ${\rm PSp}(2,1)$-configuration spaces of quadruples of pairwise distinct points in $S^3$ and $S^7$ may be found in \cite{GM} and \cite{C}, respectively.
As we have mentioned above, spheres may be viewed as boundaries of symmetric spaces of non-compact type and of rank-1; that is, hyperbolic spaces ${{\bf H}}_{{\mathbb K}}^n$, where ${{\mathbb K}}$ can be: a) ${{\mathbb R}}$ the set of real numbers, b) ${{\mathbb C}}$ the set of complex numbers, c) $\mathbb{H}$ the set of quaternions and d) $\mathbb{O}$ the set of octonions (in the last case $n=2$). Two problems arise naturally here; first, to describe configuration spaces of four points on products of such spheres by parametrising them with cross-ratios defined on those products and second, to describe configuration spaces of four points in boundaries of symmetric spaces of rank$>$1 again by parametrising them using cross-ratios defined on those boundaries. These two problems are sometimes intertwined and the crucial issue here is the definition of an appropriate cross-ratio; this is directly linked to the M\"obius geometry of the spaces we wish to study, as we explain below. In this paper we deal with both problems by describing in the manner above the configuration space of four points in the torus ${{\mathbb T}}=S^1\times S^1$; the torus is the F\"urstenberg boundary of the symmetric space ${\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ which is of rank-2, as well as the ideal boundary of anti-de Sitter space $AdS^3$, see Section \ref{sec:cons}.
Returning to our original general setting, suppose that the $G$-configuration space ${{\mathcal F}}_4(S)$ has a real manifold structure due to a proper and free action of $G$ on ${{\mathcal C}}_4(S)$. By taking the product $S\times S$ and the space ${{\mathcal C}}_4(S\times S)$ of quadruples of pairwise distinct points of $S\times S$, the group $G\times G$ acts diagonally as follows: for $g=(g_1,g_2)$ and ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4(S\times S)$, $p_i=(x_i,y_i)$, $i=1,2,3,4$,
$$
(g,{{\mathfrak p}})\mapsto g({{\mathfrak p}})=\left((g_1(x_1),g_2(y_1)),(g_1(x_2),g_2(y_2)),(g_1(x_3),g_2(y_3)),(g_1(x_4),g_2(y_4))\right).
$$
Using elementary arguments, one deduces that the action of $G\times G$ on ${{\mathcal C}}_4(S\times S)$ is proper. Concerning the freeness of the action of $G\times G$, we observe that from the obvious injection
$
{{\mathcal C}}_4(S)\times{{\mathcal C}}_4(S)\to{{\mathcal C}}_4(S\times S)
$
which assigns to each $({{\mathfrak x}},{{\mathfrak y}})\in{{\mathcal C}}_4(S)\times{{\mathcal C}}_4(S)$, ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$, ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$, the quadruple ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ where $p_i=(x_i,y_i)$, $i=1,2,3,4$, we obtain an injection
$$
{{\mathcal F}}_4(S)\times{{\mathcal F}}_4(S)\ni([{{\mathfrak x}}],[{{\mathfrak y}}])\mapsto [{{\mathfrak p}}]\in{{\mathcal F}}_4(S\times S).
$$
The image of this map is the subset ${{\mathcal F}}_4^\sharp(S\times S)$ of ${{\mathcal F}}_4(S\times S)$ comprising quadruples ${{\mathfrak p}}$ such that ${{\mathfrak x}}$, ${{\mathfrak y}}$ are in ${{\mathcal F}}_4(S)$. We may straightforwardly show that ${{\mathcal F}}_4^\sharp$ comprises principal orbits, that is, orbits of the maximal dimension of the $G\times G$ action; these are orbits of quadruples with trivial isotropy groups. Therefore ${{\mathcal F}}_4^\sharp(S\times S)$ is a manifold of dimension $2n$, where $n=\dim({{\mathcal F}}_4(S))$. If the action is free only on ${{\mathcal F}}^\sharp_4(S\times S)$, the orbits of the remaining points are of dimension less than $2n$. This is exactly the case we study in Section \ref{sec:config}, i.e., the configuration space ${{\mathcal F}}_4({{\mathbb T}})$ of quadruples of pairwise distinct points in the torus ${{\mathbb T}}=S^1\times S^1$. The subset of ${{\mathcal F}}_4({{\mathbb T}})$ of maximal dimension two is isomorphic to $$
{{\mathcal F}}_4^\sharp({{\mathbb T}})={{\mathcal F}}_4(S^1)\times{{\mathcal F}}_4(S^1)=({{\mathbb R}}\setminus\{0,1\})^2,
$$
a disconnected subset of ${{\mathbb R}}^2$ comprising nine connected components, see Theorem \ref{thm:vec}. Also, by considering the natural involution $\iota_0:{{\mathbb T}}\to {{\mathbb T}}$ which maps each $(x,y)$ to $(y,x)$ and the group $\overline{{{\mathcal M}}({{\mathbb T}})}$ comprising M\"obius transformations of ${{\mathbb T}}$ followed by $\iota_0$, and by taking $\overline{{{\mathcal F}}^\sharp_4({{\mathbb T}})}$ to be ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ cut by the diagonal action of $\overline{{{\mathcal M}}({{\mathbb T}})}$, we find that it is isomorphic to a disconnected subset ${{\mathcal Q}}$ of ${{\mathbb R}}^2$ comprising four open connected components and three components with 1-dimensional boundary (Theorem \ref{thm:band}).
At this point, the goal of parametrising the configuration space by cross-ratios defined on the torus has not yet been achieved; by Theorem \ref{thm:vec} the set ${{\mathcal F}}_4({{\mathbb T}})$ admits a parametrisation obtained by assigning to each ${{\mathfrak p}}$ the pair $({{\rm X}}({{\mathfrak x}}),{{\rm X}}({{\mathfrak y}}))$.
To this end,
for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp$ we define
$$
{{\mathbb X}}({{\mathfrak p}})={{\rm X}}({{\mathfrak x}})\cdot {{\rm X}}({{\mathfrak y}}),
$$
see Section \ref{sec:realX}, which is ${{\mathcal M}}({{\mathbb T}})$-invariant. Certain symmetries for ${{\mathbb X}}$ exist so that for each quadruple ${{\mathfrak p}}$ all 24 cross-ratios of quadruples resulting from permutations of points of ${{\mathfrak p}}$ are functions of two cross-ratios which we denote by ${{\mathbb X}}_1={{\mathbb X}}_1({{\mathfrak p}})$ and ${{\mathbb X}}_2={{\mathbb X}}_2({{\mathfrak p}})$. According to Proposition \ref{prop:fundX}, $({{\mathbb X}}_1,{{\mathbb X}}_2)$ lie in a disconnected subset ${{\mathcal P}}$ of ${{\mathbb R}}^2$ comprising six components. Three of these components are open and the remaining three have boundaries which are pieces of the parabola
$$
\Delta(u,v)=u^2+v^2-2u-2v+1-2uv=0.
$$
In Theorem \ref{thm:F4} we prove that ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ is in a 2-1 surjection with ${{\mathcal P}}$ and therefore $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is isomorphic to ${{\mathcal P}}$. Remark that boundary components of ${{\mathcal P}}$ correspond to quadruples ${{\mathfrak p}}$ such that all points of ${{\mathfrak p}}$ lie on a Circle, that is, a ${{\mathcal M}}({{\mathbb T}})$-image of the diagonal curve $\gamma(x)=(x,x)$, $x\in S^1$, which is fixed by the involution $\iota_0$. Remark also that the parametrisation of $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ by ${{\mathcal Q}}$ and ${{\mathcal P}}$ induces the same differentiable structure.
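The role of the parabola $\Delta = 0$ can already be illustrated at this point under a natural reading of the definitions (our own assumption, in analogy with the circle conventions recalled below; the precise definitions are in Section~\ref{sec:realX}): taking ${{\mathbb X}}_1({{\mathfrak p}})={{\rm X}}({{\mathfrak x}})\,{{\rm X}}({{\mathfrak y}})$ and ${{\mathbb X}}_2$ as the analogous product for the reordered quadruple $(p_1,p_3,p_2,p_4)$, quadruples lying on the diagonal Circle $\gamma(x)=(x,x)$ satisfy $\Delta({{\mathbb X}}_1,{{\mathbb X}}_2)=0$ exactly, in line with the remark on the boundary components of ${{\mathcal P}}$:
\begin{verbatim}
from fractions import Fraction
import random

# Our reading (see the text above): XX1 = X(x)*X(y) and XX2 is the analogous
# product for the reordered quadruple (p1, p3, p2, p4).  For points on the
# diagonal Circle, p_i = (x_i, x_i), exact arithmetic shows that
# Delta(XX1, XX2) vanishes identically.
def cross_ratio(x1, x2, x3, x4):
    return Fraction(x4 - x2) * (x3 - x1) / ((x4 - x1) * (x3 - x2))

def Delta(u, v):
    return u*u + v*v - 2*u - 2*v + 1 - 2*u*v

random.seed(1)
for _ in range(200):
    x1, x2, x3, x4 = random.sample(range(-40, 40), 4)     # p_i = (x_i, x_i)
    XX1 = cross_ratio(x1, x2, x3, x4) ** 2
    XX2 = cross_ratio(x1, x3, x2, x4) ** 2
    assert Delta(XX1, XX2) == 0
\end{verbatim}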
\medskip
We now discuss in brief some general aspects of M\"obius geometry. Let $S$ be a set comprising at least four points and denote by ${{\mathcal C}}_4={{\mathcal C}}_4(S)$ the space of quadruples of pairwise distinct points of $S$. A {\it positive cross-ratio} ${{\bf X}}$ on ${{\mathcal C}}_4$ is a map ${{\mathcal C}}_4\to{{\mathbb R}}_+$ such that for each ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$, the following symmetry properties hold; explicitly,
\begin{eqnarray*}
&&
({\rm S1})\quad {{\bf X}}(p_1,p_2,p_3,p_4)={{\bf X}}(p_2,p_1,p_4,p_3)={{\bf X}}(p_3,p_4,p_1,p_2)
={{\bf X}}(p_4,p_3,p_2,p_1),\\
&&
({\rm S2})\quad{{\bf X}}(p_1,p_2,p_3,p_4)\cdot {{\bf X}}(p_1,p_2,p_4,p_3)=1,\\
&&
({\rm S3})\quad
{{\bf X}}(p_1,p_2,p_3,p_4)\cdot {{\bf X}}(p_1,p_4,p_2,p_3)\cdot {{\bf X}}(p_1,p_3,p_4,p_2)=1.
\end{eqnarray*}
Hence all 24 real cross-ratios corresponding to a given quadruple ${{\mathfrak p}}$ are functions of ${{\bf X}}_1({{\mathfrak p}})=$ ${{\bf X}}(p_1,p_2,p_3,p_4)$ and ${{\bf X}}_2({{\mathfrak p}})={{\bf X}}(p_1,p_3,p_2,p_4)$. The {\it M\"obius structure} of $S$ is then defined to be the map
$$
{{\mathfrak M}}_S:{{\mathcal C}}_4(S)\ni{{\mathfrak p}}\mapsto({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}}))\in({{\mathbb R}}_+)^2.
$$
The {\it M\"obius group} ${{\mathfrak M}}(S)$ comprises bijections $g:S\to S$ that leave ${{\bf X}}$ invariant, that is, ${{\bf X}}(g({{\mathfrak p}}))={{\bf X}}({{\mathfrak p}})$. We stress here that the above definitions vary depending on the author; however, all existing definitions are equivalent. An equivalent to our definition is the definition of {\it sub-M\"obius structure} in \cite{Bu}.
Frequently, a M\"obius structure is obtained from a metric (or even a semi-metric) $\rho$ on $S$ and it is called a {\it M\"obius structure associated to $\rho$}. In the primitive case of the circle $S^1$, from the real cross-ratio ${\rm X}$ we obtain a positive cross-ratio ${{\bf X}}$ in ${{\mathcal C}}_4(S_1)$ by assigning to each ${{\mathfrak p}}\in{{\mathcal C}}_4(S^1)$ the number
$$
{{\bf X}}({{\mathfrak p}})=|{\rm X}({{\mathfrak p}})|=\frac{|x_4-x_2||x_3-x_1|}{|x_4-x_1||x_3-x_2|}=\frac{\rho(x_4,x_2)\cdot \rho(x_3,x_1)}{\rho(x_4,x_1)\cdot \rho(x_3,x_2)}.
$$
The metric $\rho$ here is the extension of the euclidean metric in ${{\mathbb R}}$ to $\overline{{{\mathbb R}}}$: $\rho(x,y)=|x-y|$ if $x,y\in{{\mathbb R}}$, $\rho(x,\infty)=+\infty$ and $\rho(\infty,\infty)=0$. One verifies that the positive cross-ratio satisfies properties (S1), (S2) and (S3). The M\"obius structure of $S^1$ is thus the map
$$
{{\mathfrak M}}_{S^1}({{\mathfrak p}})=({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}}))\in({{\mathbb R}}^+\setminus\{0,1\})^2.
$$
Note that the M\"obius group ${{\mathfrak M}}(S^1)$ for this M\"obius structure is ${\rm SL}(2,{{\mathbb R}})$, a double cover of ${{\mathcal M}}(S^1)$. Note also that since ${\rm X}_1({{\mathfrak p}})+{\rm X}_2({{\mathfrak p}})=1$, we have by triangle inequality
\begin{equation}\label{eq:ptol}
\left|{{\bf X}}_1({{\mathfrak p}})-{{\bf X}}_2({{\mathfrak p}})\right|\le 1\quad\text{and}\quad{{\bf X}}_1({{\mathfrak p}})+{{\bf X}}_2({{\mathfrak p}})\ge 1.
\end{equation}
Explicitly,
${{\bf X}}_1({{\mathfrak p}})-{{\bf X}}_2({{\mathfrak p}})=1$ if $x_1$ and $x_3$ separate $x_2$ and $x_4$;
${{\bf X}}_2({{\mathfrak p}})-{{\bf X}}_1({{\mathfrak p}})=1$ if $x_1$ and $x_2$ separate $x_3$ and $x_4$;
${{\bf X}}_1({{\mathfrak p}})+{{\bf X}}_2({{\mathfrak p}})=1$ if $x_1$ and $x_4$ separate $x_2$ and $x_3$.
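These relations are straightforward to confirm numerically; the following sketch (helper names are ours, quadruples containing $\infty$ are omitted, exact rational arithmetic is used) checks the inequalities (\ref{eq:ptol}) together with the fact that exactly one of the three equalities always holds:
\begin{verbatim}
from fractions import Fraction
import random

# Check of |X1 - X2| <= 1 and X1 + X2 >= 1 for the positive cross-ratio of
# S^1 associated to the euclidean metric.  Exactly one of the three
# equalities holds, depending on which pair of points separates which.
def cross_ratio(x1, x2, x3, x4):
    return Fraction(x4 - x2) * (x3 - x1) / ((x4 - x1) * (x3 - x2))

random.seed(2)
for _ in range(200):
    p = random.sample(range(-30, 30), 4)
    X1 = abs(cross_ratio(p[0], p[1], p[2], p[3]))
    X2 = abs(cross_ratio(p[0], p[2], p[1], p[3]))
    assert abs(X1 - X2) <= 1 and X1 + X2 >= 1
    assert (X1 - X2 == 1) + (X2 - X1 == 1) + (X1 + X2 == 1) == 1
\end{verbatim}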
If the M\"obius structure ${{\mathfrak M}}_S$ of a space $S$ satisfies (\ref{eq:ptol}) then it is called Ptolemaean. The subsets of $S$ at which equalities hold in (\ref{eq:ptol}) are called Ptolemaean circles. In this way the M\"obius structure of $S^1$ associated to the euclidean metric is Ptolemaean and $S^1$ itself is a Ptolemaean circle for this M\"obius structure. $S^1$ is the boundary of the hyperbolic disc ${{\bf H}}_{{\mathbb C}}^1$; it is proved in \cite{P} that all M\"obius structures in boundaries of hyperbolic spaces ${{\bf H}}_{{\mathbb K}}^n$, $n=1,2\dots$,
are Ptolemaean; all these M\"obius structures are associated to the Kor\'anyi metric.
Therefore all boundaries of symmetric spaces of non-compact type and of rank-1 have M\"obius structures which are all associated to a metric and they are all Ptolemaean. In the case of the torus we study here, this does not happen: The M\"obius structure which is defined in Section \ref{sec:mob}
\begin{enumerate}
\item is not associated to any semi-metric on ${{\mathbb T}}$;
\item is not Ptolemaean, but
\item there exist Ptolemaean circles for this structure.
\end{enumerate}
To the direction of defining M\"obius structures in boundaries of symmetric spaces of rank$>1$, little was known till recently;
in his recent work \cite{B}, Byrer explicitly constructs cross-ratio triples in F\"urstenberg boundaries of symmetric spaces of higher rank.
We have already mentioned that the torus ${{\mathbb T}}=S^1\times S^1$ which we study here appears naturally as the F\"urstenberg boundary of the rank-2 symmetric space ${\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ and is also isomorphic to the ideal boundary of the 3-dimensional anti-de Sitter space $AdS^3$. Our results apply to these spaces, which we discuss in Section \ref{sec:cons}.
\bigskip
\noindent{\it Acknowledgements:} Part of this work was carried out while the author visited the University of Zurich; the hospitality is gratefully acknowledged. The author also wishes to thank Viktor Schroeder and Jonas Beyrer for fruitful discussions.
\section{The Configuration Space of Four Points in the Torus }\label{sec:config}
Our main results lie in this section. In Section \ref{sec:trans} we study the transitive action of the group of M\"obius transformations of the torus. The results about the configuration space are in Sections \ref{sec:confT} and \ref{sec:realX}.
\subsection{The action of M\"obius transformations in the torus }\label{sec:trans}
The torus ${{\mathbb T}}=S^1\times S^1$ is isomorphic to $\overline{{{\mathbb R}}}\times\overline{{{\mathbb R}}}$, where $\overline{{{\mathbb R}}}={{\mathbb R}}\cup\{\infty\}$. Let $(x,y)\in{{\mathbb T}}$; a M\"obius transformation of ${{\mathbb T}}$ is a map $g:{{\mathbb T}}\to{{\mathbb T}}$ of the form
$$
g(x,y)=\left(g_1(x),g_2(y)\right),
$$
where $g_1$ and $g_2$ are in ${{\mathcal M}}(S^1)$, that is,
$$
g_1(x)=\frac{ax+b}{cx+d},\quad g_2(y)=\frac{a'y+b'}{c'y+d'},
$$
where the matrices
$$
A_1=\left(\begin{matrix}
a&b\\
c&d\end{matrix}\right)\quad\text{and}\quad
A_2=\left(\begin{matrix}
a'&b'\\
c'&d'\end{matrix}\right)
$$
are both in ${\rm PSL}(2,{{\mathbb R}})={\rm SL}(2,{{\mathbb R}})/\{\pm I\}$. Thus the set of M\"obius transformations ${{\mathcal M}}({{\mathbb T}})$ is ${{\mathcal M}}(S^1)\times{{\mathcal M}}(S^1)={\rm PSL}(2,{{\mathbb R}})\times{\rm PSL}(2,{{\mathbb R}})$.
We wish to describe the action of ${{\mathcal M}}({{\mathbb T}})$ on ${{\mathbb T}}$. First, the action is simply-transitive; this follows directly from simply-transitive action of ${{\mathcal M}}(S^1)$ on $S^1$. Secondly, the action is not doubly-transitive in the usual sense. If
$$
{{\mathfrak c}}=(p_1,p_2)=((x_1,y_1),(x_2,y_2)),
$$
is a pair of distinct points on the torus, then the
cases:
a) $x_1=x_2$ or $y_1=y_2$ and
b) $x_1\neq x_2$ and $y_1\neq y_2$,
are completely distinguished: A transformation $g\in{{\mathcal M}}({{\mathbb T}})$ maps couples of the form a) (resp. of the form b)) to couples of the same form; this prevents ${{\mathcal M}}({{\mathbb T}})$ from acting doubly-transitively on ${{\mathbb T}}$ in the usual sense.
The doubly-transitive action of ${{\mathcal M}}({{\mathbb T}})$ is rather partial in the sense above. Thirdly, as far as it concerns a triply-transitive action of ${{\mathcal M}}({{\mathbb T}})$, distinguished cases appear again.
Indeed, consider an arbitrary triple
$$
{{\mathfrak t}}=(p_1,p_2,p_3)=\left((x_1,y_1),(x_2,y_2),(x_3,y_3)\right),
$$
of pairwise distinct points in ${{\mathbb T}}$. We then have the following distinguished cases:
\begin{enumerate}
\item [{a)}] Both $(x_1,x_2,x_3)$ and $(y_1,y_2,y_3)$ are triples of pairwise distinct points in $S^1$;
\item [{b)}] $y_1=y_2=y_3$ and $(x_1,x_2,x_3)$ is a triple of pairwise distinct points of $S^1$;
\item [{c)}]
$x_1=x_2=x_3$ and $(y_1,y_2,y_3)$ is a triple of pairwise distinct points of $S^1$;
\item [{d)}] $x_i=x_j=x$, $x_l\neq x$, $i,j=1,2,3$, $i\neq j$, $l\neq i,j$, and $(y_1,y_2,y_3)$ is a triple of pairwise distinct points in $S^1$;
\item [{e)}] $y_i=y_j=y$, $y_l\neq y$, $i,j=1,2,3$, $i\neq j$, $l\neq i,j$, and $(x_1,x_2,x_3)$ is a triple of pairwise distinct points in $S^1$;
\item [{f)}] Two $x_i$'s and two $y_j$'s are equal.
\end{enumerate}
No two of the above cases are M\"obius equivalent: A $g\in{{\mathcal M}}({{\mathbb T}})$ maps triples of each of the above categories to a triple of the same category. However, there is a triply-transitive action of ${{\mathcal M}}({{\mathbb T}})$ on triples which belong to the same category:
Notice for instance that in the first case we have that there exist $g_1$ and $g_2$ in ${{\mathcal M}}(S^1)$ such that
\begin{equation*}\label{eq:g12}
g_1(x_1)=g_2(y_1)=0,\quad g_1(x_2)=g_2(y_2)=\infty,\quad g_1(x_3)=g_2(y_3)=1.
\end{equation*}
We derive that $g=(g_1,g_2)\in{{\mathcal M}}({{\mathbb T}})$ satisfies
\begin{equation*}\label{eq:g}
g({{\mathfrak t}})=\left((0,0),(\infty,\infty),(1,1)\right).
\end{equation*}
In case c), where $x_1=x_2=x_3$, the triple $(y_1,y_2,y_3)$ consists of pairwise distinct points in $S^1$. Therefore there exists a $g=(g_1,g_2)$ in ${{\mathcal M}}({{\mathbb T}})$ such that $g_1(x_i)=0$, $g_2(y_1)=0$, $g_2(y_2)=\infty$, $g_2(y_3)=1$, that is,
$$
g({{\mathfrak t}})=\left((0,0),(0,\infty),(0,1)\right).
$$
Analogously, in case b) where $y_1=y_2=y_3$, we find that there exists a $g\in{{\mathcal M}}({{\mathbb T}})$ such that
$$
g({{\mathfrak t}})=\left((0,0),(\infty,0),(1,0)\right).
$$
The remaining cases are treated in the same manner and we leave them to the reader.
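For illustration, the normalisations used above can be written down explicitly. The short Python sketch below is not part of the original text; it uses an explicit fractional linear map sending a triple of distinct points of $\overline{{{\mathbb R}}}$ to $(0,\infty,1)$, and any element of ${{\mathcal M}}(S^1)$ achieving this normalisation serves equally well.
\begin{verbatim}
from fractions import Fraction as F

def normalising_map(x1, x2, x3):
    # fractional linear map sending x1 -> 0, x2 -> oo, x3 -> 1
    # (explicit formula assumed here for illustration)
    def g(t):
        return (t - x1) * (x3 - x2) / ((t - x2) * (x3 - x1))
    return g

# normalise the first coordinates of a triple of type a)
x1, x2, x3 = F(2), F(-1), F(5)
g1 = normalising_map(x1, x2, x3)
assert g1(x1) == 0 and g1(x3) == 1   # g1(x2) = oo (the formula divides by zero there)
\end{verbatim}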
\subsubsection{Circles}
For each $g=(g_1,g_2)\in{{\mathcal M}}({{\mathbb T}})$ we get an embedding of $S^1$ into ${{\mathbb T}}$ which is given by the parametrisation
$$
\gamma(x)=(g_1(x),g_2(x)),\quad x\in S^1.
$$
Such embeddings of $S^1$ into ${{\mathbb T}}$ will be called {\it M\"obius embeddings of $S^1$} or {\it Circles} on ${{\mathbb T}}$. Notice first that each Circle is the image of the {\it standard Circle} $R_0$ via an element of ${{\mathcal M}}({{\mathbb T}})$; here, $R_0$ is the curve $\gamma(x)=(x,x)$, $x\in S^1$. Secondly, the involution $\iota_0$ of ${{\mathbb T}}$ defined by $\iota_0(x,y)=(y,x)$ fixes $R_0$ point-wise. Hence to each Circle $R$ there is associated an involution $\iota_R$ of ${{\mathbb T}}$ which fixes $R$ point-wise. Moreover, we have
\begin{prop}
Given a triple ${{\mathfrak t}}=(p_1,p_2,p_3)$ of the form a) above, there exists a Circle $R$ passing through the points of ${{\mathfrak t}}$ and thus the involution $\iota_R$ of ${{\mathbb T}}$ associated to $R$ fixes all points of ${{\mathfrak t}}$.
\end{prop}
\begin{proof}
We normalise so that ${{\mathfrak t}}=(p_1,p_2,p_3)$ where
$$
p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(1,1).
$$
Then the Circle passing through $p_i$ is $R_0$ and the involution is $\iota_0$.
\end{proof}
Three distinct points of ${{\mathbb T}}$ might lie on various embeddings of $S^1$;
for instance, triples of the form b) and c) lie on
$\gamma_y(x)=(g_1(x),y)$ for fixed $y$ and $\gamma_x(y)=(x,g_2(y))$ for fixed $x$, respectively, where $g_1,g_2\in{{\mathcal M}}(S^1)$. But in any case, only triples of points of the form a) lie on Circles.
\subsection{The configuration space of four points in ${{\mathbb T}}$}\label{sec:confT}
According to the notation which was set up in the introduction, let ${{\mathcal C}}_4={{\mathcal C}}_4({{\mathbb T}})$ be the space of quadruples of pairwise distinct points in ${{\mathbb T}}$ and let also ${{\mathcal F}}_4={{\mathcal F}}_4({{\mathbb T}})$ be the {\it configuration space of quadruples of pairwise distinct points in} ${{\mathbb T}}$, that is, the quotient of ${{\mathcal C}}_4$ by the diagonal action of the M\"obius group ${{\mathcal M}}({{\mathbb T}})$ on ${{\mathcal C}}_4$. Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$ be arbitrary;
if $p_i=(x_i,y_i)$, $i=1,2,3,4$, we shall denote by ${{\mathfrak x}}$ the quadruple $(x_1,x_2,x_3,x_4)$ and by ${{\mathfrak y}}$ the quadruple $(y_1,y_2,y_3,y_4)$. The isotropy group of ${{\mathfrak p}}$ is
\begin{eqnarray*}
{{\mathcal M}}({{\mathbb T}})({{\mathfrak p}})&=&\{g\in{{\mathcal M}}({{\mathbb T}})\;|\;g({{\mathfrak p}})={{\mathfrak p}}\}\\
&=&\{(g_1,g_2)\in{{\mathcal M}}(S^1)\times{{\mathcal M}}(S^1)\;|\;g_1({{\mathfrak x}})={{\mathfrak x}}\;\text{and}\;g_2({{\mathfrak y}})={{\mathfrak y}}\}\\
&=&{{\mathcal M}}(S^1)({{\mathfrak x}})\times{{\mathcal M}}(S^1)({{\mathfrak y}}).
\end{eqnarray*}
Therefore the isotropy group ${{\mathcal M}}({{\mathbb T}})({{\mathfrak p}})$ is trivial if and only if both isotropy groups ${{\mathcal M}}(S^1)({{\mathfrak x}})$ and ${{\mathcal M}}(S^1)({{\mathfrak y}})$ are trivial as well. If ${{\mathfrak p}}$ is such that $[{{\mathfrak p}}]$ is of maximal dimension (that is, both $[{{\mathfrak x}}]$ and $[{{\mathfrak y}}]$ are of maximal dimension), then we call ${{\mathfrak p}}$ {\it admissible}. Note that the dimension of the orbit of an admissible ${{\mathfrak p}}$ is 2. In the opposite case, we call ${{\mathfrak p}}$ {\it non-admissible}.
We start with the non-admissible case first.
Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$, ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ and ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$ as above. We distinguish the following cases for ${{\mathcal M}}(S^1)({{\mathfrak x}})$:
\begin{enumerate}
\item [{${{\mathfrak x}}$-1)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is trivial and ${{\mathfrak x}}\in{{\mathcal C}}_4(S^1)$. We may then normalise so that
$$
x_1=0,\quad x_2=\infty,\quad x_3={{\rm X}}({{\mathfrak x}}),\quad x_4=1.
$$
\item [{${{\mathfrak x}}$-2)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is trivial and two points $x_i$ in ${{\mathfrak x}}$, $i\in\{1,2,3,4\}$, are equal. If for instance $x_1=x_2$, we may normalise so that
$$
x_1=x_2=0,\quad x_3=1,\quad x_4=\infty;
$$
we normalise similarly for the remaining cases.
\item [{${{\mathfrak x}}$-3)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is isomorphic to ${{\mathcal M}}(S^1)(0,\infty)$: Three points $x_i$ in ${{\mathfrak x}}$, $i\in\{1,2,3,4\}$, are equal. If for instance $x_1=x_2=x_3$, we may normalise so that
$$
x_1=x_2=x_3=0,\quad x_4=\infty;
$$
we normalise similarly for the remaining cases.
\item [{${{\mathfrak x}}$-4)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is isomorphic to ${{\mathcal M}}(S^1)(\infty)$: All points $x_i$ in ${{\mathfrak x}}$, $i=1,2,3,4$, are equal and we may normalise so that $x_i=\infty$.
\end{enumerate}
Notice that there are six sub-cases in ${{\mathfrak x}}$-2) and four sub-cases in ${{\mathfrak x}}$-3); in all, we have twelve distinguished cases. Entirely analogous distinguished cases ${{\mathfrak y}}$-1), ${{\mathfrak y}}$-2), ${{\mathfrak y}}$-3) and ${{\mathfrak y}}$-4) appear for ${{\mathcal M}}(S^1)({{\mathfrak y}})$. Non-admissible quadruples ${{\mathfrak p}}$ are such that all combinations of cases for ${{\mathfrak x}}$ and ${{\mathfrak y}}$ may appear except when ${{\mathfrak x}}$ falls into the case ${{\mathfrak x}}$-1) and ${{\mathfrak y}}$ falls into the case ${{\mathfrak y}}$-1). Mind that not all combinations are valid; for instance, there can be no ${{\mathfrak p}}$ such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-4) and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) or ${{\mathfrak y}}$-3). Subsets of ${{\mathcal F}}_4$ corresponding to each valid combination are either of dimension 0 or 1. One-dimensional subsets appear when ${{\mathfrak x}}$ belongs to the ${{\mathfrak x}}$-1) case or ${{\mathfrak y}}$ belongs to the ${{\mathfrak y}}$-1) case. The corresponding subset is then isomorphic to ${{\mathbb R}}\setminus\{0,1\}$ together with a point. For clarity, we will treat two cases: First, suppose that the non-admissible ${{\mathfrak p}}$ is such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-1) and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) with $y_1=y_2$. Then we may normalise so that
$$
p_1=(0,0),\quad p_2=(\infty, 0),\quad p_3=({{\rm X}}({{\mathfrak x}}),1),\quad p_4=(1,\infty).
$$
Therefore the subset of ${{\mathcal F}}_4({{\mathbb T}})$ comprising orbits of such ${{\mathfrak p}}$ is isomorphic to ${{\mathbb R}}\setminus\{0,1\}\times\{b_{12}\}$, where $b_{12}$ is the abstract point corresponding to quadruples ${{\mathfrak y}}$ such that $y_1=y_2$. Secondly, suppose that ${{\mathfrak p}}$ is such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-3) with $x_1=x_2=x_3$ and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) with $y_3=y_4$. Then we may normalise so that
$$
p_1=(0,0),\quad p_2=(0, \infty),\quad p_3=(0,1),\quad p_4=(\infty,1).
$$
The corresponding subset of the orbit space is then isomorphic to $\{a_{123}\}\times \{b_{34}\}$, where $a_{123}$ is the abstract point corresponding to quadruples ${{\mathfrak x}}$ such that $x_1=x_2=x_3$ and $b_{34}$ is the abstract point corresponding to quadruples ${{\mathfrak y}}$ such that $y_3=y_4$. Table \ref{table:1} shows all 68 distinguished subsets of ${{\mathcal F}}_4({{\mathbb T}})$ comprising non-admissible orbits of quadruples ${{\mathfrak p}}$ such that ${{\mathfrak x}}$ and ${{\mathfrak y}}$ belong to the above categories:
\begin{table}[h!]
\centering
\begin{tabular}{||c c c c||}
\hline
${{\mathfrak x}}$ & ${{\mathfrak y}}$ & Corresponding subset of ${{\mathcal F}}_4$ & Components \\ [0.5ex]
\hline\hline
${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-2) & $({{\mathbb R}}\setminus\{0,1\})\times\{b_{ij}\}$ & 6\\
${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-3) & $({{\mathbb R}}\setminus\{0,1\})\times\{b_{ijk}\}$ & 3\\
${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-4) & $({{\mathbb R}}\setminus\{0,1\})\times\{\infty\}$ & 1\\
${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-1) & $\{a_{ij}\}\times({{\mathbb R}}\setminus\{0,1\})$ & 6\\
${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-2) & $\{a_{ij}\}\times\{b_{kl}\}$ & 24\\
${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-3) & $\{a_{ij}\}\times\{b_{klm}\}$ & 12\\
${{\mathfrak x}}$-3) & ${{\mathfrak y}}$-1) & $\{a_{ijk}\}\times({{\mathbb R}}\setminus\{0,1\})$ & 3\\
${{\mathfrak x}}$-3) & ${{\mathfrak y}}$-2) & $\{a_{ijk}\}\times\{b_{lm}\}$ & 12\\
${{\mathfrak x}}$-4) & ${{\mathfrak y}}$-1) & $\{\infty\}\times({{\mathbb R}}\setminus\{0,1\})$ & 1 \\ [1ex]
\hline
\end{tabular}
\caption{Subspaces of non-admissible orbits}
\label{table:1}
\end{table}
If now ${{\mathfrak p}}$ is an admissible quadruple, we have that both ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ and ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$ are in ${{\mathcal C}}_4(S^1)$. Let ${{\mathcal C}}^\sharp_4={{\mathcal C}}_4^\sharp({{\mathbb T}})$ be the subspace of ${{\mathcal C}}_4({{\mathbb T}})$ comprising admissible quadruples and denote by ${{\mathcal F}}_4^\sharp={{\mathcal F}}_4^\sharp({{\mathbb T}})$ the corresponding orbit space.
The bijection
$$
{{\mathfrak C}}:{{\mathcal C}}_4^\sharp({{\mathbb T}})\ni {{\mathfrak p}}\mapsto ({{\mathfrak x}},{{\mathfrak y}})\in{{\mathcal C}}_4(S^1)\times{{\mathcal C}}_4(S^1),
$$
projects into the bijection
$$
{{\mathfrak F}}:{{\mathcal F}}_4^\sharp({{\mathbb T}})\ni [{{\mathfrak p}}]\mapsto ([{{\mathfrak x}}],[{{\mathfrak y}}])\in{{\mathcal F}}_4(S^1)\times{{\mathcal F}}_4(S^1),
$$
and therefore we obtain
\begin{thm}\label{thm:vec}
The configuration space ${{\mathcal F}}_4({{\mathbb T}})$ of quadruples of pairwise distinct points of the torus ${{\mathbb T}}$ is isomorphic to a set comprising 69 distinguished components: 20 one-dimensional, 48 points and a 2-dimensional subset corresponding to the subset ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ of admissible quadruples. This subset may be identified to $({{\mathbb R}}\setminus\{0,1\})^2$. The identification is given by assigning to each $[{{\mathfrak p}}]$ the vector-valued cross-ratio $
\vec{{\mathbb X}}({{\mathfrak p}})=({{\rm X}}({{\mathfrak x}}),{{\rm X}}({{\mathfrak y}})).
$
\end{thm}
The set ${{\mathcal F}}_4^\sharp=({{\mathbb R}}\setminus\{0,1\})^2$ is a subset of ${{\mathbb R}}^2$ comprising nine connected open components. We consider the space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$; this is ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ factored by the diagonal action of $\overline{{{\mathcal M}}({{\mathbb T}})}$: The latter comprises elements of ${{\mathcal M}}({{\mathbb T}})$, possibly followed by the involution $\iota_0:(x,y)\mapsto(y,x)$ of ${{\mathbb T}}$.
We thus have
\begin{thm}\label{thm:band}
The configuration space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$
is identified to the disconnected subset ${{\mathcal Q}}$ of ${{\mathbb R}}^2$ which is induced by identifying points of $({{\mathbb R}}\setminus\{0,1\})^2$ which are symmetric with respect to the diagonal straight line $y=x$. Explicitly, ${{\mathcal Q}}$ has three open components:
\begin{eqnarray*}
&&
{{\mathcal Q}}_1^0=(-\infty,0)\times(0,1);\\
&&
{{\mathcal Q}}_2^0=(-\infty,0)\times(1,+\infty);\\
&&
{{\mathcal Q}}_3^0=(0,1)\times(1,+\infty),
\end{eqnarray*}
and three components with boundary:
\begin{eqnarray*}
&&
{{\mathcal Q}}_1^1=\{(x,y)\in{{\mathbb R}}^2\;|\;x<0,\;x\le y<0\};\\
&&
{{\mathcal Q}}_2^1=\{(x,y)\in{{\mathbb R}}^2\;|\;0<x<1,\;x\le y<1\};\\
&&
{{\mathcal Q}}_3^1=\{(x,y)\in{{\mathbb R}}^2\;|\;x>1,\;y\ge x\}.
\end{eqnarray*}
\end{thm}
\subsection{Real Cross-Ratios and another parametrisation}\label{sec:realX}
Using the vector-valued $\vec{{\mathbb X}}$ as in Theorem \ref{thm:vec} we define a real cross-ratio in ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ by
$$
{{\mathbb X}}({{\mathfrak p}})={{\rm X}}({{\mathfrak x}})\cdot {{\rm X}}({{\mathfrak y}}).
$$
One may show that all 24 cross-ratios corresponding to an admissible quadruple ${{\mathfrak p}}$ depend on the following two:
$$
{{\mathbb X}}_1({{\mathfrak p}})=[x_1,x_2,x_3,x_4]\cdot[y_1,y_2,y_3,y_4],\quad {{\mathbb X}}_2({{\mathfrak p}})=[x_1,x_3,x_2,x_4]\cdot[y_1,y_3,y_2,y_4].
$$
We now consider the map ${{\mathcal G}}^\sharp:{{\mathcal F}}_4^\sharp\to{{\mathbb R}}^2_*$, where
$$
{{\mathcal G}}^\sharp([{{\mathfrak p}}])=\left({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}})\right).
$$
The map ${{\mathcal G}}^\sharp$ is well defined since ${{\mathbb X}}$ remains invariant under the action of ${{\mathcal M}}({{\mathbb T}})$. Let
\begin{equation}\label{eq:P}
{{\mathcal P}}=\{(u,v)\in({{\mathbb R}}_*)^2\;|\;\Delta(u,v)=u^2+v^2-2u-2v+1-2uv\ge 0\}.
\end{equation}
The fundamental inequality for cross-ratios stated in the following proposition shows that ${{\mathcal G}}^\sharp$ indeed takes its values in ${{\mathcal P}}$:
\begin{prop}\label{prop:fundX}
Let ${{\mathfrak p}}$ be an admissible quadruple of points in ${{\mathbb T}}$ and ${{\mathcal P}}$ as in (\ref{eq:P}). Then
$$
({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}}))\in{{\mathcal P}}.
$$
Moreover, $\Delta({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}}))=0$ if and only if all points of ${{\mathfrak p}}$ lie on a Circle.
\end{prop}
\begin{proof}
To prove the first statement, we may normalise so that ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ where
$$
p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1).
$$
We then calculate
$$
{{\mathbb X}}_1=xy,\quad {{\mathbb X}}_2=(1-x)(1-y).
$$
Therefore
$
{{\mathbb X}}_2=1+{{\mathbb X}}_1-x-y
$, from where we derive
$$
1+{{\mathbb X}}_1-{{\mathbb X}}_2=x+y.
$$
Taking this to the square we find
$$
(1+{{\mathbb X}}_1-{{\mathbb X}}_2)^2=(x+y)^2\ge 4xy=4{{\mathbb X}}_1,
$$
and the inequality follows. For the second statement, observe that equality holds only if $x=y$, i.e. all points lie on the standard Circle $R_0$ on ${{\mathbb T}}$.
\end{proof}
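The computation in the proof can be confirmed symbolically; the following sympy sketch (an illustrative check, not part of the original proof) factors $\Delta({{\mathbb X}}_1,{{\mathbb X}}_2)$ for the normalised quadruple and exhibits it as a perfect square.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)
X1 = x*y                 # X_1 for the normalised quadruple
X2 = (1 - x)*(1 - y)     # X_2 for the normalised quadruple
Delta = X1**2 + X2**2 - 2*X1 - 2*X2 + 1 - 2*X1*X2
print(sp.factor(Delta))  # (x - y)**2, non-negative and zero exactly when x = y
\end{verbatim}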
We proceed by showing that ${{\mathcal G}}^\sharp$ is surjective:
\begin{prop}
Let $(u,v)\in{{\mathcal P}}$.
Then there exists a
${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp({{\mathbb T}})$ such that
$$
{{\mathbb X}}_1({{\mathfrak p}})=u\quad\text{and}\quad {{\mathbb X}}_2({{\mathfrak p}})=v.
$$
\end{prop}
\begin{proof}
Since $\Delta=(1+u-v)^2-4u\ge 0$ there exist $x,y$ such that
$$
xy=u\quad\text{and}\quad x+y=1+u-v.
$$
In fact,
$$
x,y=\frac{1+u-v\pm\sqrt{\Delta}}{2}.
$$
Now one verifies that the admissible quadruple ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$
where
$$
p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1),
$$
is the quadruple in question. The proof is complete.
\end{proof}
Let $\iota_0$ be the involution associated to the standard Circle $R_0$. Notice in the proof that the quadruple $\iota_0({{\mathfrak p}})$, that is
$$
\iota_0(p_1)=(0,0),\quad \iota_0(p_2)=(\infty,\infty),\quad \iota_0(p_3)=(y,x),\quad \iota_0(p_4)=(1,1),
$$
also satisfies ${{\mathbb X}}_1(\iota_0({{\mathfrak p}}))=
u$ and ${{\mathbb X}}_2(\iota_0({{\mathfrak p}}))=v$.
\begin{prop}
Suppose that ${{\mathfrak p}}$ and ${{\mathfrak p}}'$ are two quadruples in ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ such that
$$
{{\mathbb X}}_i({{\mathfrak p}})={{\mathbb X}}_i({{\mathfrak p}}'),\quad i=1,2.
$$
Then one of the following cases occurs:
\begin{enumerate}
\item There exists a $g\in{{\mathcal M}}({{\mathbb T}})$ such that $g({{\mathfrak p}})={{\mathfrak p}}'$;
\item There exists a $g\in\overline{{{\mathcal M}}({{\mathbb T}})}$ such
that $g({{\mathfrak p}})={{\mathfrak p}}'$.
\end{enumerate}
\end{prop}
\begin{proof}
We may normalise so that
$$
p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1),
$$
and
$$
p_1'=(0,0),\quad p_2'=(\infty,\infty),\quad p_3'=(x',y'),\quad p_4'=(1,1).
$$
Then $
{{\mathbb X}}_i({{\mathfrak p}})={{\mathbb X}}_i({{\mathfrak p}}'),\quad i=1,2,
$ imply
$$
xy=x'y'\quad \text{and}\quad (1-x)(1-y)=(1-x')(1-y').
$$
It follows that $xy=x'y'$ and $x+y=x'+y'$. But then either $x=x'$ and $y=y'$ or $x=y'$ and $y=x'$. The proof is complete.
\end{proof}
The above discussion boils down to the following theorem:
\begin{thm}\label{thm:F4}
The ${{\mathcal M}}({{\mathbb T}})$-(resp. $\overline{{{\mathcal M}}({{\mathbb T}})}$-)configuration space ${{\mathcal F}}_4^\sharp={{\mathcal F}}_4^\sharp({{\mathbb T}})$ (resp. $\overline{{{\mathcal F}}_4^\sharp}=\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$) of admissible quadruples of points in the torus ${{\mathbb T}}$ is in a 2-1 (resp. 1-1) surjection with the set ${{\mathcal P}}$ given in (\ref{eq:P}).
The subset ${{\mathcal F}}_4^{\sharp,0}$ of both ${{\mathcal F}}_4^\sharp$ and $\overline{{{\mathcal F}}_4^\sharp}$ comprising equivalence classes of quadruples of points in the same Circle is in a bijection with the subset of ${{\mathcal P}}$ comprising $(u,v)$ such that
$$
\Delta=u^2+v^2-2u-2v+1-2uv= 0.
$$
Explicitly, ${{\mathcal P}}$ has three open components
\begin{eqnarray*}
&&
{{\mathcal P}}_1^0=(-\infty, 0)\times(0,+\infty);\\%\{(u,v)\in{{\mathcal P}}\;|\;u<0,\;v>0\};\\
&&
{{\mathcal P}}_2^0=(-\infty, 0)\times(-\infty,0);\\%\{(u,v)\in{{\mathcal P}}\;|\;u<0,\;v<0\};\\
&&
{{\mathcal P}}_3^0=(0,+\infty)\times(-\infty,0);\\%\{(u,v)\in{{\mathcal P}}\;|\;u>0,\;v<0\}.
\end{eqnarray*}
and three components with one-dimensional boundaries:
\begin{eqnarray*}
&&
{{\mathcal P}}_1^1=\{(u,v)\in{{\mathcal P}}\;|\;0<u<1,\; 0<v<1,\;\Delta\ge 0\};\\
&&
{{\mathcal P}}_2^1=\{(u,v)\in{{\mathcal P}}\;|\;u>1,\;v>0,\;\Delta\ge 0\};\\
&&
{{\mathcal P}}_3^1=\{(u,v)\in{{\mathcal P}}\;|\;u>0,\;v>1,\;\Delta\ge 0\}.
\end{eqnarray*}
\end{thm}
\begin{rem}
The change of coordinates
$$
u=xy,\quad v=1-x-y+xy
$$
maps the set ${{\mathcal Q}}$ of Theorem \ref{thm:band} onto the set ${{\mathcal P}}$ in a bijective manner.
\end{rem}
\section{M\"obius Structure}
Towards defining a M\"obius structure from the real cross-ratio ${{\mathbb X}}$ on the torus ${{\mathbb T}}$, we first study the case where both cross-ratios ${{\mathbb X}}_i({{\mathfrak p}})$, $i=1,2$, of an admissible quadruple of points are positive (Section \ref{sec:pos}). We then define ${{\mathfrak M}}_{{\mathbb T}}$ and prove in Section \ref{sec:mob} that it is not Ptolemaean.
\subsection{When both cross-ratios are positive}\label{sec:pos}
Let
$$
{{\mathcal P}}^1={{\mathcal P}}_1^1\;\dot\cup\;{{\mathcal P}}_2^1\;\dot\cup\;{{\mathcal P}}_3^1.
$$
This set corresponds exactly to quadruples ${{\mathfrak p}}$ with both ${{\mathbb X}}_1({{\mathfrak p}})$ and ${{\mathbb X}}_2({{\mathfrak p}})$ positive. Equivalently, ${{\rm X}}({{\mathfrak x}})$ and ${{\rm X}}({{\mathfrak y}})$ belong to the same connected component of ${{\mathbb R}}\setminus\{0,1\}$, which means that the points of ${{\mathfrak x}}$ and ${{\mathfrak y}}$ have exactly the same type of ordering on the circle: If $x_1,x_2$ separate $x_3,x_4$ then also $y_1,y_2$ separate $y_3,y_4$ and so forth.
For quadruples ${{\mathfrak p}}$ corresponding to ${{\mathcal P}}^1$ both ${{\mathbb X}}_1({{\mathfrak p}})$ and ${{\mathbb X}}_2({{\mathfrak p}})$ are positive; we set ${{\mathcal F}}^{\sharp,+}_4=({{\mathcal G}}^\sharp)^{-1}({{\mathcal P}}^1)$.
\begin{prop}\label{prop:Ptol-eq-T}
Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ be such that $[{{\mathfrak p}}]\in{{\mathcal F}}^{\sharp,+}_4$ and let ${{\mathbb X}}_i={{\mathbb X}}_i({{\mathfrak p}})$, $i=1,2$. Then
\begin{equation}\label{eq:X12}
{{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}\ge 1\;\;\text{and}\;\;|{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|\ge 1,\quad\text{or}\quad{{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}\le 1\;\;\text{and}\;\;|{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|\le 1.
\end{equation}
Moreover, $[{{\mathfrak p}}]\in{{\mathcal F}}_4^{\sharp,0}$, that is, all points of ${{\mathfrak p}}$ lie on the same Circle, if and only if
$$
{{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}=1\quad \text{or}\quad |{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|=1.
$$
Explicitly,
\begin{enumerate}
\item ${{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}=1$ if $p_1$ and $p_3$ separate $p_2$ and $p_4$;
\item ${{\mathbb X}}_2^{1/2}-{{\mathbb X}}_1^{1/2}=1$ if $p_1$ and $p_2$ separate $p_3$ and $p_4$;
\item ${{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}=1$ if $p_1$ and $p_4$ separate $p_2$ and $p_3$.
\end{enumerate}
\end{prop}
\begin{proof}
From the fundamental inequality for cross-ratios (Proposition \ref{prop:fundX}) we have
\begin{eqnarray*}
0&\le&{{\mathbb X}}_1^2+{{\mathbb X}}_2^2-2{{\mathbb X}}_1-2{{\mathbb X}}_2+1-2{{\mathbb X}}_1{{\mathbb X}}_2\\
&=&({{\mathbb X}}_1+{{\mathbb X}}_2-1)^2-4{{\mathbb X}}_1{{\mathbb X}}_2\\
&=&({{\mathbb X}}_1+{{\mathbb X}}_2-2{{\mathbb X}}_1^{1/2}{{\mathbb X}}_2^{1/2}-1)({{\mathbb X}}_1+{{\mathbb X}}_2+2{{\mathbb X}}_1^{1/2}{{\mathbb X}}_2^{1/2}-1)\\
&=&\left(({{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2})^2-1\right)\left(({{\mathbb X}}_1^{1/2}
-{{\mathbb X}}_2^{1/2})^2-1\right),
\end{eqnarray*}
and this proves Eqs. (\ref{eq:X12}). The details of the proof of the last statement are left to the reader.
\end{proof}
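The factorisation used in the proof is elementary but easy to misstate; the following sympy sketch (an auxiliary check, not part of the original text) verifies it for non-negative ${{\mathbb X}}_1,{{\mathbb X}}_2$.
\begin{verbatim}
import sympy as sp

X1, X2 = sp.symbols('X1 X2', nonnegative=True)
lhs = X1**2 + X2**2 - 2*X1 - 2*X2 + 1 - 2*X1*X2
rhs = ((sp.sqrt(X1) + sp.sqrt(X2))**2 - 1)*((sp.sqrt(X1) - sp.sqrt(X2))**2 - 1)
print(sp.simplify(sp.expand(lhs - rhs)))   # 0
\end{verbatim}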
\subsection{M\"obius structure}\label{sec:mob}
From the real cross-ratio ${{\mathbb X}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\to{{\mathbb R}}$ we define a positive cross-ratio ${{\bf X}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\to{{\mathbb R}}_+$ by setting
$$
{{\bf X}}({{\mathfrak p}})=|{{\mathbb X}}({{\mathfrak p}})|^{1/2},
$$
for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp$. The positive cross-ratio is invariant under $\overline{{{\mathcal M}}({{\mathbb T}})}$. The M\"obius structure on ${{\mathbb T}}$ associated to ${{\bf X}}$ and restricted to ${{\mathcal C}}_4^\sharp$ is the map
$$
{{\mathfrak M}}_{{\mathbb T}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\ni{{\mathfrak p}}\mapsto({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}})).
$$
Recall that $(S,\rho)$ is a pseudo-semi-metric space if $\rho:S\times S\to{{\mathbb R}}_+$ satisfies a)
$x=y$ implies $\rho(x,y)=0$ and b)
$\rho(x,y)=\rho(y,x)$,
for all $x,y\in S$. The M\"obius structure ${{\mathfrak M}}_{{\mathbb T}}$ is associated to the pseudo-semi-metric $\rho:{{\mathbb T}}\times{{\mathbb T}}\to{{\mathbb R}}_+$, given by
$$
\rho\left((x_1,y_1),(x_2,y_2)\right)=|x_1-x_2|^{1/2}\cdot|y_1-y_2|^{1/2},
$$
for each $(x_1,y_1)$ and $(x_2,y_2)$ in ${{\mathbb T}}$. In Section \ref{sec:cons} we explain the reason why we cannot have a natural positive cross-ratio compatible with the group action that is associated to any metric on ${{\mathbb T}}$. The following corollary concerning ${{\mathfrak M}}_{{\mathbb T}}$ follows directly from Proposition \ref{prop:Ptol-eq-T}.
\begin{cor}
The M\"obius structure ${{\mathfrak M}}_{{\mathbb T}}$ is not Ptolemaean. However, Circles are Ptolemaean circles for ${{\mathfrak M}}_{{\mathbb T}}$.
\end{cor}
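The association of ${{\mathfrak M}}_{{\mathbb T}}$ to the pseudo-semi-metric $\rho$ can also be checked numerically. The Python sketch below is illustrative only and not part of the original text; it uses finite representative points, so the extension to $\infty$ is not needed, and verifies that the $\rho$-cross-ratio coincides with ${{\bf X}}({{\mathfrak p}})=|{{\mathbb X}}({{\mathfrak p}})|^{1/2}$ on random admissible quadruples.
\begin{verbatim}
import random, math

def rho(p, q):
    # pseudo-semi-metric on T (finite points only)
    return math.sqrt(abs(p[0] - q[0])) * math.sqrt(abs(p[1] - q[1]))

def real_cr(a, b, c, d):
    return (d - b) * (c - a) / ((d - a) * (c - b))

random.seed(1)
for _ in range(1000):
    pts = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(4)]
    p1, p2, p3, p4 = pts
    XX = real_cr(*[p[0] for p in pts]) * real_cr(*[p[1] for p in pts])
    metric_cr = rho(p4, p2) * rho(p3, p1) / (rho(p4, p1) * rho(p3, p2))
    assert math.isclose(metric_cr, math.sqrt(abs(XX)), rel_tol=1e-9)
print("the rho-cross-ratio equals |X(p)|^(1/2)")
\end{verbatim}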
\section{Application to Boundaries of ${\rm SO_0}(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ and $AdS^3$} \label{sec:cons}
In this section we show how the torus appears as the F\"urstenberg boundary of the symmetric space ${\rm SO_0}(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ as well as the ideal boundary of the 3-dimensional anti-de Sitter space $AdS^3$. We refer to \cite{BJ} for compactifications of symmetric spaces, to \cite{B} for recent developments on M\"obius structures in F\"urstenberg boundaries and finally to \cite{D} for a comprehensive treatment of anti-de Sitter space and its relations to Hyperbolic Geometry.
Let ${{\mathbb R}}^{2,2}={{\mathbb R}}^4\setminus\{0\}$ be the real vector space of dimension 4 equipped with a non-degenerate, indefinite pseudo-hermitian form $\langle\cdot,\cdot\rangle$ of signature $(2,2)$. Such a form is given by a $4\times 4$ matrix with 2 positive and 2 negative eigenvalues.
Let ${{\bf x}}=\left[\begin{matrix} x_1 & x_2 &x_3 &x_4\end{matrix}\right]^T$ and ${{\bf y}}=\left[\begin{matrix} y_1 & y_2 &y_3 &y_4\end{matrix}\right]^T$ be column vectors. The pseudo-hermitian form is then defined by
$$
\langle{{\bf x}},{{\bf y}}\rangle=x_1y_4-x_2y_3-x_3y_2+x_4y_1
$$
and it is given by the matrix
$$
J=\left(\begin{matrix}
0&0&0&1\\
0&0&-1&0\\
0&-1&0&0\\
1&0&0&0
\end{matrix}\right).
$$
The isometry group of this pseudo-hermitian form is $G={\rm SO_0}(2,2)$. There is a natural identification of $G$ with ${\rm SL}(2,{{\mathbb R}})\times{\rm SL}(2,{{\mathbb R}})$, see \cite{GPP}; if
$$
A_1=\left(\begin{matrix} a_1&b_1\\
c_1&d_1\end{matrix}\right)\quad\text{and}\quad
A_2=\left(\begin{matrix} a_2&b_2\\
c_2&d_2\end{matrix}\right)\in{\rm SL}(2,{{\mathbb R}}),
$$
then the pair $(A_1,A_2)$ is identified to
$$
A=\left(\begin{matrix}
a_1A_2^{-1}&b_1A_2^{-1}\\
c_1A_2^{-1}&d_1A_2^{-1}\end{matrix}\right)\in{\rm SO}_0(2,2).
$$
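One may verify directly that the block matrix above preserves the form $\langle\cdot,\cdot\rangle$. A minimal numerical sketch (not part of the original text; it identifies the block matrix with the Kronecker product $A_1\otimes A_2^{-1}$) is the following.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
J = np.array([[0., 0., 0., 1.],
              [0., 0., -1., 0.],
              [0., -1., 0., 0.],
              [1., 0., 0., 0.]])

def random_sl2(rng):
    # random 2x2 real matrix of determinant exactly 1
    a = rng.uniform(0.5, 2.0)
    b, c = rng.normal(size=2)
    return np.array([[a, b], [c, (1.0 + b*c)/a]])

for _ in range(100):
    A1, A2 = random_sl2(rng), random_sl2(rng)
    A = np.kron(A1, np.linalg.inv(A2))   # the block matrix displayed above
    assert np.allclose(A.T @ J @ A, J)   # A preserves the pseudo-hermitian form
print("A_1 (x) A_2^{-1} lies in the isometry group of the form")
\end{verbatim}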
The group $K={\rm SO}(2)\times{\rm SO}(2)$ is a maximal compact subgroup of $G$ and $X=G/K$ is a symmetric space of rank-2. The symmetric space $X$ is also realised as ${{\bf H}}_{{\mathbb C}}^1\times{{\bf H}}_{{\mathbb C}}^1$, where ${{\bf H}}_{{\mathbb C}}^1=D$ is the Poincar\'e hyperbolic unit disc. The torus ${{\mathbb T}}=S^1\times S^1$ is the {\it maximal F\"urstenberg boundary} $\mathbb{F}(X)$ of the symmetric space $X$. Recall that if $G$ is a connected semi-simple Lie group and $X=G/K$ is the associated symmetric space, then the maximal F\"urstenberg boundary $\mathbb{F}(X)$ may be thought of as $G/P_0$, where $P_0$ is a minimal parabolic subgroup of $G$, see for instance \cite{BJ}. If $X$ is of rank $\ge 2$ then $\mathbb{F}(X)$ cannot be the whole boundary of any compactification of $X$. In particular, if $X=D\times D$
, then $\mathbb{F}(X)={\rm SO}_0(2,2)/P_0$, $P_0=AN\times AN$ where $AN$ is the $AN$ group in the Iwasawa $KAN$ decomposition of ${\rm SL}(2,{{\mathbb R}})$. In this manner $\mathbb{F}(X)$ is just the corner of the boundary $(\overline{D}\times S^1)\cup(S^1\times\overline{D})$ of the compactification $\overline{D}\times\overline{D}$ of $X$.
A rather neat way to represent ${{\mathbb T}}={{\mathbb F}}(X)$ is via its isomorphism to the ideal boundary of anti-de Sitter space which is obtained as follows:
For the pseudo-hermitian product there are subspaces of positive (space-like) vectors $V_+$, of null (light-like) vectors $V_0$ and of negative (time-like) vectors $V_-$:
\begin{eqnarray*}
&&
V_+=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle> 0\right\},\\
&&
V_0=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle=0\right\},\\
&&
V_-=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle<0\right\}.
\end{eqnarray*}
If $\lambda$ is a non-zero real, then $\langle\lambda{{\bf x}},\lambda{{\bf x}}\rangle=\lambda^2\langle{{\bf x}},{{\bf x}}\rangle$. Therefore $\lambda{{\bf x}}$ is positive, null or negative if and only if ${{\bf x}}$ is positive, null or negative, respectively. Let $P$ be the projection map from ${{\mathbb R}}^{2,2}$ to projective space $P{{\mathbb R}}^3$. The {\it projective model of anti-de Sitter space} $AdS^3$ is now defined as the collection of negative vectors $PV_-$ in $P{{\mathbb R}}^3$ and its {\it ideal boundary} $\partial_\infty AdS^3$ is defined as the collection $PV_0$ of null vectors. Anti-de Sitter space $AdS^3$ carries a natural Lorentz structure; the isometry group of this structure is the projectivisation of the set ${\rm SO}(2,2)$ of unitary matrices for the pseudo-hermitian form with matrix $J$, that is, ${\rm PSO}_0(2,2)={\rm SO}(2,2)/\{\pm I\}$; here $I$ is the identity $4\times 4$ matrix. From the discussion above we have that ${\rm PSO_0}(2,2)$ is identified to ${\rm PSL}(2,{{\mathbb R}})^2={{\mathcal M}}({{\mathbb T}})$. Now the identification of $\partial_\infty AdS^3$ with the torus ${{\mathbb T}}={{\mathbb F}}(X)$ is given in terms of the Segre embedding ${{\mathcal S}}:{{\mathbb R}} P^1\times{{\mathbb R}} P^1\to{{\mathbb R}} P^3$. Recall that in homogeneous coordinates the Segre embedding
$
w={{\mathcal S}}(x,y)
$
is defined by
$$
({{\bf x}},{{\bf y}})=\left(\left[\begin{matrix} x_1\\
x_2
\end{matrix}\right]\;,\; \left[\begin{matrix} y_1\\
y_2
\end{matrix}\right]\right)\mapsto {{\bf w}}=\left[\begin{matrix} x_1y_1\\
x_1y_2\\
x_2y_1\\
x_2y_2
\end{matrix}\right].
$$
Notice that ${{\bf w}}$ is a null vector.
The action of isometry group ${\rm PSO_0}(2,2)={\rm PSL}(2,{{\mathbb R}})^2$ of $AdS^3$ is extended naturally on the ideal boundary $\partial_\infty AdS^3$ which in this manner is identified to ${{\mathbb T}}$.
We stress at this point that, in contrast with the case of hyperbolic spaces, there are distinct points in $\partial_\infty AdS^3$ which may be orthogonal. To see this, let $p={{\mathcal S}}(x,y)$ and $p'={{\mathcal S}}(x',y')$ be any distinct points; if
$$
{{\bf x}}=\left[\begin{matrix} x_1\\x_2\end{matrix}\right],\quad
{{\bf y}}=\left[\begin{matrix} y_1\\y_2\end{matrix}\right],\quad
{{\bf x}}'=\left[\begin{matrix} x_1'\\x_2'\end{matrix}\right],\quad
{{\bf y}}'=\left[\begin{matrix} y'_1\\y'_2\end{matrix}\right],
$$
then
$$
{{\bf p}}=\left[\begin{matrix}x_1y_1\\
x_1y_2\\
x_2y_1\\
x_2y_2
\end{matrix}\right],\quad{{\bf p}}'=\left[\begin{matrix} x_1'y_1'\\
x_1'y_2'\\
x_2'y_1'\\
x_2'y_2'\end{matrix}\right].
$$
One then calculates
\begin{equation*}
\langle{{\bf p}},{{\bf p}}'\rangle=(x_1x_2'-x_2x_1')(y_1y_2'-y_2y_1').
\end{equation*}
Therefore $\langle{{\bf p}},{{\bf p}}'\rangle=0$ if and only if $x=x'$ or $y=y'$. For fixed $p\in\partial_\infty AdS^3$, $p={{\mathcal S}}(x,y)$ the locus
$$
p^c=\{{{\mathcal S}}(x,z)\;|\;z\in S^1\}\cup \{{{\mathcal S}}(z,y)\;|\;z\in S^1\}\equiv (\{x\}\times S^1)\cup(S^1\times\{y\})
$$
comprises the points of the ideal boundary which are orthogonal to $p$. We call $p^c$ the {\it cross-completion} of $p$; transferring the picture to the context of a point $p=(x,y)$ of ${{\mathbb T}}$, the points orthogonal to $p$ are all points of $(\{x\}\times S^1)\cup(S^1\times\{y\})$.
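Both the nullity of Segre images and the displayed formula for $\langle{{\bf p}},{{\bf p}}'\rangle$ can be verified symbolically; in the sketch below (illustrative only, not part of the original text) the components of ${{\bf x}}',{{\bf y}}'$ are denoted $u_i,v_i$.
\begin{verbatim}
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
u1, u2, v1, v2 = sp.symbols('u1 u2 v1 v2', real=True)   # components of x', y'

def segre(a1, a2, b1, b2):
    # lift of S(x, y) in homogeneous coordinates
    return [a1*b1, a1*b2, a2*b1, a2*b2]

def form(p, q):
    return p[0]*q[3] - p[1]*q[2] - p[2]*q[1] + p[3]*q[0]

p  = segre(x1, x2, y1, y2)
pp = segre(u1, u2, v1, v2)
print(sp.expand(form(p, p)))                                      # 0: Segre images are null
print(sp.expand(form(p, pp) - (x1*u2 - x2*u1)*(y1*v2 - y2*v1)))   # 0: the factorisation above
\end{verbatim}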
The ideal boundary $\partial_\infty AdS^3$ may thus be thought of as the union of the cross-completion of $\infty={{\mathcal S}}(\infty,\infty)$ and the remaining region of the torus, which we denote by $N$. The set $N$ comprises points $p={{\mathcal S}}(x,y)$, $x,y\neq \infty$, with standard lifts
$$
{{\bf p}}=\left[\begin{matrix}xy&x&y&1\end{matrix}\right]^T,
$$
and can be viewed as the saddle surface $x_1=x_2x_3$ embedded in ${{\mathbb R}}^3$. But actually, $N$ admits a group structure: First, if $p={{\mathcal S}}(x,y)\in N$, we call $(x,y)$ the $N$-coordinates of $p$. To each such $p$ we assign the matrix
$$
T(x,y)=\left(\begin{matrix}
1&y&x&xy\\
0&1&0&x\\
0&0&1&y\\
0&0&0&1
\end{matrix}\right),
$$
whose projectivisation gives an element of ${\rm PSO}_0(2,2)$ in the unipotent isotropy group of $\infty$. Note that if $G$ is the isomorphism ${\rm SL}(2,{{\mathbb R}})^2\to{\rm SO}_0(2,2)$, and $KAN$ is the Iwasawa decomposition of ${\rm SL}(2,{{\mathbb R}})$, then $T(x,y)$ lies in the image $G(N,N)$. It is straightforward to verify that $T(x,y)$ leaves the cross-completion $\infty^c$ of infinity invariant and maps $o={{\mathcal S}}(0,0)$ to $p$.
Also, for $p=(x,y)$ and for $p'=(x',y')$ we have
$$
T(x,y)T(x',y')=T(x+x',y+y'),\quad \left(T(x,y)\right)^{-1}=T(-x,-y).
$$
Thus $T$ is a group homomorphism from ${{\mathbb R}}^2$ to ${\rm PSO_0}(2,2)$ with group law
$$
(x,y)\star(x',y')=(x+x',y+y').
$$
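The stated properties of $T(x,y)$ can be checked by direct computation; the following sympy sketch (an auxiliary verification, not part of the original text) confirms that $T(x,y)$ preserves the form, obeys the group law and maps the standard lift of $o$ to that of $p$.
\begin{verbatim}
import sympy as sp

x, y, xp, yp = sp.symbols('x y xp yp', real=True)
J = sp.Matrix([[0, 0, 0, 1], [0, 0, -1, 0], [0, -1, 0, 0], [1, 0, 0, 0]])

def T(a, b):
    return sp.Matrix([[1, b, a, a*b],
                      [0, 1, 0, a],
                      [0, 0, 1, b],
                      [0, 0, 0, 1]])

print((T(x, y).T * J * T(x, y) - J).expand())              # zero matrix: the form is preserved
print((T(x, y) * T(xp, yp) - T(x + xp, y + yp)).expand())  # zero matrix: the group law
print(T(x, y) * sp.Matrix([0, 0, 0, 1]))                   # [x*y, x, y, 1]^T, the lift of p
\end{verbatim}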
In other words $N$ admits the structure of the additive group $({{\mathbb R}}^2,+)$. The natural Euclidean metric $e:N\times N\to{{\mathbb R}}_+$ where
$$
e\left((x,y),(x',y')\right)=((x-x')^{2}+(y-y')^2)^{1/2},
$$
is invariant by the left action of $N$ but its similarity group is not ${\rm PSO}_0(2,2)$. To see this consider
$$
D_\delta=\left(\begin{matrix} \delta&0\\
0&1/\delta\end{matrix}\right)\quad \text{and}\quad
D_{\delta'}=\left(\begin{matrix} \delta'&0\\
0&1/\delta'\end{matrix}\right),\quad\delta,\delta'>0.
$$
Then $G(D_\delta,D_{\delta'})(\xi_1,\xi_2)=(\delta^2\xi_1,(1/\delta')^2\xi_2)$, and
$A=G(D_\delta,D_{\delta'})$ does not scale $e$ unless $\delta\delta'=1$, which is not always the case. Since all metrics in ${{\mathbb R}}^2$ are equivalent to $e$, there is no natural metric in $N$ such that its similarity group equals ${\rm PSO}_0(2,2)$. In contrast, we define a function $a:N\to{{\mathbb R}}$,
$$
a(x,y)=xy
$$
and a gauge
$$
\|(x,y)\|=|a(x,y)|^{1/2}=|x|^{1/2}|y|^{1/2}.
$$
Essentially, we are mimicking here Kor\'anyi and Reimann and their construction for the Heisenberg group case, see \cite{KR}. The pseudo-semi-metric $\rho:N\times N\to{{\mathbb R}}_+$ is then defined by
$$
\rho\left((x,y),(x',y')\right)=\|(x',y')^{-1}\star(x,y)\|=|x-x'|^{1/2}|y-y'|^{1/2}.
$$
Let $\overline{N}=N\cup\{\infty\}$. The set ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ of admissible quadruples is actually the set ${{\mathcal C}}_4^\sharp(\overline{N})$ of quadruples of points of $\overline{N}$ such that none of these points belongs to the cross-completion of any other point in the quadruple. The configuration space ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ is thus identified to ${{\mathcal C}}_4^\sharp(\overline{N})$ cut by the diagonal action of ${\rm PSO}_0(2,2)$ and the configuration space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is ${{\mathcal C}}_4^\sharp(\overline{N})$ cut by the diagonal action of $\overline{{\rm PSO}_0(2,2)}$ which comprises elements of ${\rm PSO}_0(2,2)$ followed by the involution $\iota_0:(x,y)\mapsto(y,x)$ of $\overline{N}$.
The real cross-ratio ${{\mathbb X}}$ is thus defined in ${{\mathcal C}}_4^\sharp(\overline{N})$ by
$$
{{\mathbb X}}({{\mathfrak p}})=\frac{a(p_4\star p_2^{-1})\cdot a(p_3\star p_1^{-1})}{a(p_4\star p_1^{-1})\cdot a(p_3\star p_2^{-1})},
$$
for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp(\overline{N})$. The positive cross-ratio is
$$
{{\bf X}}({{\mathfrak p}})=\frac{\rho(p_4,p_2)\cdot\rho(p_3,p_1)}{\rho(p_4,p_1)\cdot\rho(p_3,p_2)}.
$$
The results of the previous section apply now immediately.
\section{Introduction}
General Relativity (GR) is a very successful theory to describe the gravitational interaction. It is a theoretically consistent and experimentally tested theory. So far, GR has been confirmed by every experiment in the solar system~\cite{will1993, reasenberg1973, everitt2011,rodrigues2018} and by astrophysical phenomena such as the emission of gravitational waves by binary systems, and it is in accordance with the bounds on the velocity of gravitational waves \cite{weisberg2010, abbott2017}. However, there are at least three reasons to seriously consider alternative theories of gravity: the necessity of the dark sector (dark energy and dark matter) in the standard cosmological model; the theoretical motivation to unify the gravitational interaction with the quantum interactions in a single theoretical framework; and the epistemological fact that alternative theories can be used to highlight the intrinsic properties of GR by showing how it could be otherwise.
The prototype of an alternative theory of gravity is Brans-Dicke theory (BD). Historically, it is one of the most important alternatives to standard General Relativity, and it was introduced by C. Brans and R. H. Dicke \cite{brans1961} as a possible implementation of Mach's principle in a relativistic theory.
Solar System time-delay experiments set a lower bound on the absolute value of the dimensionless parameter, $|\omega| > 500$~\cite{will1993, reasenberg1973}, which means that BD is strongly constrained by solar system dynamics. The theory is also constrained by the CMB, as pointed out in~\cite{wu2010,avilez2014}. Notwithstanding, there are phenomenological applications of BD in cosmology and indeed it has recently received much attention from the scientific community~\cite{alexander2016,tretyakova2015,hrycyna2013,will2004,kofinas2016,papagiannopoulos2016,alonso2016,roy2017,gerard1995,barrow2008,brando2018}.
In the present work, we display a particular solution of BD with matter content described by a stiff matter barotropic perfect fluid. This is a very interesting solution with exotic characteristics revealing some of the new features, for better or worse, that one can expect to find in BD-like alternative theories of gravity~\cite{stoycho,holdenwands,barrowparsons,faraoni2009}. In particular, the time evolution of the system is independent of the value of the parameter $\omega$. The evolution of the perturbations has only growing modes, which is also another distinct feature of this solution. In addition, the scale factor evolves as $a\propto t^{1/2}$, typical of the radiation dominated epoch in GR, hence this configuration might have some application in the early universe. Recently, this period of the universe filled with stiff matter was also studied in the context of $f(R)$ theories~\cite{odintsovoikonomou}.
It is argued in the literature that BD approaches GR in the $|\omega|\rightarrow \infty$ limit~\cite{weinberg1972}. The crucial point behind this argument is that when the parameter $|\omega| \gg 1$, the field equations seem to show that $\square \varphi=\mathcal{O}\left(\frac1\omega\right)$ and hence
\begin{align}
\varphi=&\frac{1}{G_N} +\mathcal{O}\left(\frac{1}{\omega}\right) \label{BDeq12}\\
G_{\mu\nu}=&8\pi G_N T_{\mu\nu}+\mathcal{O}\left(\frac{1}{\omega}\right)\label{BDeq22}
\end{align}
where $G_N$ is Newton's gravitational constant, $G_{\mu\nu}$ is the Einstein tensor and assume natural unit where $c=1$. However, there are some examples~\cite{anchordoqui1998,jarv2015,nariai1968,banerjee1985,banerjee1986,banerjee1997,hanlon1972,matsuda1972,romero1993a,romero1998,romero1993b,paiva1993a,paiva1993b,scheel1995,faraoni1999,chauvineau2003} where exact solutions can not be continuously deformed into the corresponding\footnote{the word corresponding here is used in the sense of the same matter content as in GR.} GR solutions by taking the $|\omega|\rightarrow\infty$ limit. Their asymptotic behavior differs exactly because these solutions do not decay as Eq.~\eqref{BDeq12} but instead as
\begin{equation}\label{BDeq13}
\varphi=\frac1{G_N} +\mathcal{O}\left(\frac{1}{\sqrt{\omega}}\right)\quad .
\end{equation}
Our particular solution, to be developed in section~\ref{sec:PartSol}, has the novelty of having the appropriate asymptotic behavior given by Eq.~\eqref{BDeq12} (see Eq.~\eqref{phi0rho0wrel}) but no GR limit.
The paper is organized as follows. In section~\ref{sec:EqMot} we briefly describe the system and its equations of motion. In section~\ref{sec:GureSol} we review the general solution studied by Gurevich et al. In section~\ref{sec:PartSol} we analyze a power law solution with peculiar features and develop the perturbations over this specific background in section~\ref{sec:Pert}. In section~\ref{sec:DynSys} we perform a dynamical system analysis of the system and finally in section~\ref{sec:Concl} we end with some final remarks.
\section{The classical equations of motion}\label{sec:EqMot}
In Brans-Dicke theory the scalar field is understood as part of the geometrical degrees of freedom. This theory has a non-minimal coupling between gravity and the scalar field. The action reads
\begin{equation}\label{eq:Lagrangian}
S=\int {\rm d} x^4 \sqrt{-g}\left[\frac1{16\pi}\left(\phi R-\frac{\omega}{\phi}\nabla_{\alpha}\phi\nabla^{\alpha}\phi
\right)+\mathcal{L}_{\mathrm{m}}\right] \ ,
\end{equation}
where $\mathcal{L}_{m}$ is the ordinary matter and $\omega$ is the scalar field coupling constant. Variation of the action Eq.~\eqref{eq:Lagrangian} with respect to the metric and the scalar field give, respectively, the following field equations
\begin{align}
& G_{\mu \nu} = \frac{8 \pi}{\phi} T_{\mu \nu} + \frac{\omega}{\phi^{2}} \left( \nabla_{\mu} \phi \nabla_{\nu} \phi - \frac{1}{2} g_{\mu \nu} \nabla^{\alpha} \phi \nabla_{\alpha} \phi \right) +\frac{1}{\phi} \left( \nabla_{\mu} \nabla_{\nu} \phi - g_{\mu \nu} \Box \phi \right), \label{eq:fe-1}\\
&\Box \phi = - \frac{\phi}{2 \omega} R +\frac{1}{2\phi} \left( \nabla^{\alpha} \phi \nabla_{\alpha} \phi \right)=\frac{8\pi}{3+2\omega}T \ , \label{eq:fe-2}
\end{align}
where we have used the trace of Eq.~\eqref{eq:fe-1} in the last step of Eq.~\eqref{eq:fe-2}. The action Eq.~\eqref{eq:Lagrangian} is diffeomorphic invariant and since all variables are dynamic fields we have conservation of energy-momentum, i.e.
\begin{align}\label{conservTmunu}
\nabla_\mu T^{\mu \nu}=0 \ .
\end{align}
We shall consider the matter content described by a perfect fluid such that the energy-momentum tensor is
\begin{equation}
T^{\mu\nu}=\left(\rho+p\right)u^{\mu}u^{\nu}-pg^{\mu\nu}\ ,
\end{equation}
and a barotropic equation of state $p=\alpha\rho$ with $0\leq\alpha\leq 1$. The equation of state parameter $\alpha$ is bounded from above in order to avoid a superluminal speed of sound. In the extreme case, $\alpha=1$, the speed of sound equals the speed of light, which corresponds to stiff matter. This equation of state was first proposed by Zeldovich as an attempt to describe matter in extremely dense states such as in the very early universe.
We shall restrict our analysis to the Friedmann-Lema\^\i tre-Robertson-Walker (FLRW) universes where the metric has a preferred foliation given by homogeneous and isotropic spatial sections. In spherical coordinate system for this particular foliation, the line element has the form
\begin{equation}\label{flrwmetric}
{\rm d} s^2={\rm d} t^2-a^2(t)\left[\frac{{\rm d} r^2}{1-kr^2}+r^2\left({\rm d} \theta^2+\sin^2(\theta)\, {\rm d} \phi\right)\right] \ ,
\end{equation}
where $a(t)$ is the scale factor function and $k=0,\pm 1$ defines the spatial section curvature.
A stiff matter fluid has equation of state $p=\rho$. Thus, conservation of the energy-momentum tensor in a FLRW universe implies $\rho=\rho_{0}\, (a_0/a)^{6}$ with $\rho_{0}$ and $a_0$ two constants of integration. The flat FLRW case has been analyzed in a quite general form in Ref. \cite{Gurevich} but not all solutions were fully explored. In particular, we describe some exotic features of a peculiar solution associated with this equation of state. After integrating the conservation of energy-momentum equation, the two remaining independent equations become
\begin{align}
\left(\frac{\dot{a}}{a}\right)^{2} +\frac{k}{a^2}& = \frac{8\pi\rho_{0}}{3\phi}\left(\frac{a_0}{a}\right)^{6}+\frac{\omega}{6}\left(\frac{\dot{\phi}}{\phi}\right)^{2}-\frac{\dot{a}}{a}\frac{\dot{\phi}}{\phi}\ , \label{FLRW1}\\
\ddot{\phi}+3\frac{\dot{a}}{a}\dot{\phi} & = -\frac{16\pi\rho_{0}}{\left(3+2\omega\right)}\left(\frac{a_0}{a}\right)^{6}\ ,\label{KG1}
\end{align}
where a dot denotes differentiation with respect to the cosmic time $t$.
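For completeness, the integration of the conservation equation quoted above can be confirmed symbolically. The sympy sketch below (a check, not part of the original derivation) uses the standard FLRW reduction $\dot\rho+3\frac{\dot a}{a}(\rho+p)=0$ of Eq.~(\ref{conservTmunu}) and verifies that $\rho=\rho_0(a_0/a)^{3(1+\alpha)}$ solves it, which gives $\rho\propto a^{-6}$ for stiff matter.
\begin{verbatim}
import sympy as sp

t, alpha, rho0, a0 = sp.symbols('t alpha rho_0 a_0', positive=True)
a = sp.Function('a')(t)

rho = rho0*(a0/a)**(3*(1 + alpha))
p = alpha*rho
continuity = sp.diff(rho, t) + 3*sp.diff(a, t)/a*(rho + p)
print(sp.simplify(continuity))   # 0 for any a(t); alpha = 1 gives rho ~ a**(-6)
\end{verbatim}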
\section{Gurevich's Families of Solutions}\label{sec:GureSol}
In this section we present the general solutions for the scale factor and the scalar field in the flat FLRW case of Brans-Dicke theory obtained by Gurevich \emph{et al.}~\cite{Gurevich}. We shall follow closely their presentation, adapting it specifically to the stiff matter case. Gurevich \emph{et al.} obtained a class of flat space solutions for
the equation of state $p=\alpha\rho$, where $0\leq\alpha\leq1$. There are three families of solutions depending on the value of $\Delta\equiv B^2-4AC_0$ where $2A\equiv2(2-3\alpha)+3\omega\left(1-\alpha\right)^2$, $B=3\sigma(1-\alpha)+(1-3\alpha)\beta_0$ and $C_0$ and $\beta_0$ are integration constants. Using the time-time component of Eq.~\eqref{eq:fe-1} and the time component of the conservation of the energy-momentum Eq.~\eqref{conservTmunu}, one can show that for $\alpha\neq 1/3$ the quantity $\Delta$ can be recast as
\[
\Delta= \frac{(1-3\alpha)^2\sigma^2}{1+2\omega/3}(\beta_0-1)^2 \quad \mbox{with}\quad \sigma\equiv 1+\omega\left(1-\alpha\right)\quad .
\]
Note that for $\beta_0\neq 1$ the sign of $\Delta$ is negative for $\omega<-3/2$ and positive for $\omega>-3/2$. It is also useful to define a parametric time $\theta$, which is connected to cosmic time through the relation $dt=a^{3\alpha}d\theta.$
The first family of solutions is given by $\Delta<0$ $\left(\omega<-\frac{3}{2}\right)$. The general solutions for the scale factor and scalar field $\phi$ read
\begin{align}
a & =a_{0}\Big[\left(\theta+\theta_{-}\right)^{2}+\theta_{+}^{2}\Big]^{\sigma/{2A}}e^{ \pm \sqrt{\left(\frac{2|\omega|}{3}-1\right)}f(\theta)}\ ,\label{eq:general solution}\\
\phi & =\phi_{0}\left[\left(\theta+\theta_{-}\right)^{2}+\theta_{+}^{2}\right]^{{(1-3\alpha)}/{2A}}e^{ \mp3(1-\alpha) \sqrt{\left(\frac{2|\omega|}{3}-1\right)}f(\theta)}\ , \label{eq:general solution phi}
\end{align}
where
\begin{align}
f(\theta)&= \frac1A\arctan\left(\frac{\theta+\theta_{-}}{\theta_{+}}\right)\ , \quad \theta_{+}=\frac{\sqrt{|\Delta|}}{2A}\ , \quad \theta_{-}=\frac{B}{2A}\quad .\label{eq:general solution A}
\end{align}
For the stiff matter case $(\alpha=1)$, the solutions \eqref{eq:general solution}-\eqref{eq:general solution phi} simplify to
\begin{align}
a&=a_{0}\left[\left(\theta+\theta_{-}\right)^{2}+\theta_{+}^{2}\right]^{-\frac{1}{2}}\exp\left\{\mp\sqrt{\frac{2|\omega|}{3}-1}\arctan\left(\frac{\theta+\theta_{-}}{\theta_{+}}\right)\right\} ,\label{eq:scale factor-1}\\
\phi & =\phi_{0}\left[\left(\theta+\theta_{-}\right)^{2}+\theta_{+}^{2}\right]\ .\label{eq:scalar field-1}
\end{align}
Note that the scalar field dynamics does not depend on $\omega$ anymore and the scale factor has only a mild dependence on this parameter. The asymptotic
behaviors of Eqs.~(\ref{eq:scale factor-1})-(\ref{eq:scalar field-1}) for $\theta\rightarrow\pm\infty$ are
\begin{align}
a\left(\theta\right)&\propto \exp\left(\epsilon\frac{\pi}{2}\sqrt{\frac{2|\omega|}{3}-1}\right)\frac{1}{\theta},\label{eq:scale factor asymp-1}\\
\phi\left(\theta\right)&\propto \theta^2,\label{eq:scalar field asymp-1}
\end{align}
where $\epsilon=\mp1$ for $\theta\rightarrow-\infty$ and $\epsilon=\pm1$ for $\theta\rightarrow+\infty$. In this asymptotic behavior, the cosmic time goes as $t\propto \theta^{-2}$, hence, the scale factor Eq.~(\ref{eq:scale factor asymp-1}) goes as $a\propto t^{\frac{1}{2}}$ in both asymptotic limits $\theta\rightarrow\pm \infty$. Similarly, the asymptotic behavior of the scalar field is $\phi\propto\theta^{2}\propto t^{-1}$.
The two asymptotic limits $\theta\rightarrow\pm\infty$ describe two distinct possible evolutions where the universe behaves as in a GR radiation dominated phase. For $\theta\rightarrow-\infty$, the universe starts from an initial singularity at $t=0$ and expands therefrom in a radiation dominated phase. In the limit $\theta\rightarrow+\infty$, the universe contracts from infinity until it reaches a big crunch singularity, again at $t=0$, during a radiation dominated phase.
The second family is described by $\Delta>0$ $\left(\omega>-\frac{3}{2}\right)$. The general solutions are given by
\begin{align}
a&=a_{0}\left(\theta-\theta_{+}\right)^{\omega/3\Sigma_\mp}\left(\theta-\theta_{-}\right)^{\omega/3\Sigma_\pm}\ ,\label{eq2:scale factor-1}\\
\phi&=\phi_{0}\left(\theta-\theta_{+}\right)^{\left(1\mp\sqrt{{1+2\omega}/{3}}\right)/\Sigma_\mp}\left(\theta-\theta_{-}\right)^{\left(1\pm\sqrt{{1+2\omega}/{3}}\right)/\Sigma_\pm}\ ,\label{eq2:scalar field-1}\\
\Sigma_\pm&=\sigma\pm\sqrt{1+\frac{2\omega}{3}}\label{eq2:A-1}
\end{align}
where now $\theta_{\pm}\equiv(-B\pm\sqrt{\Delta})/2A$ are constants of integration with $\theta_{+}>\theta_{-}$. The solutions for the stiff matter $(\alpha=1)$ reduce to
\begin{align}
a&=a_{0}\left[\theta-\theta_{+}\right]^{\omega/\left[3\left(1\mp\sqrt{1+{2\omega}/{3}}\right)\right]}\left(\theta-\theta_{-}\right)^{\omega/\left[3\left(1\pm\sqrt{{1+2\omega}/{3}}\right)\right]},\label{eq:scale factor-2}\\
\phi & =\phi_{0}\left(\theta-\theta_{+}\right)\left(\theta-\theta_{-}\right)\ .\label{eq:scalar field-2}
\end{align}
The asymptotic behavior of Eqs.~(\ref{eq:scale factor-2})-(\ref{eq:scalar field-2}) shows that for $|\theta|\gg \theta_{+}$ we have $a\propto \theta^{-1}\propto t^{\frac{1}{2}}$ and $\phi\propto \theta^2\propto t^{-1}$. Thus, we have the same asymptotic behavior as in the $\omega<-\frac{3}{2}$ case, given by Eqs.~\eqref{eq:scale factor asymp-1}-\eqref{eq:scalar field asymp-1}.
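The statement about the exponents can be verified directly: the two powers of $(\theta-\theta_{\pm})$ in Eq.~(\ref{eq:scale factor-2}) sum to $-1$ for any admissible $\omega$. A short sympy check (illustrative, not part of the original text) reads:
\begin{verbatim}
import sympy as sp

omega = sp.symbols('omega', real=True)
s = sp.sqrt(1 + 2*omega/3)
total = omega/(3*(1 - s)) + omega/(3*(1 + s))   # sum of the two exponents
print(sp.simplify(total))   # -1, hence a ~ theta**(-1) for large |theta|
\end{verbatim}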
Finally, there is a third family of solutions given by $\Delta=0$ $(\beta_0=1)$ that describes power law solutions, i.e.
\begin{align}
a&=a_{0}\left({\theta}/{\theta_0}\right)^{\sigma/A}
=a_{0}\left({t}/{t_0}\right)^{\sigma/A_\ast}
,\label{eq:scale factor-3}\\
\phi & =\phi_{0}\left({\theta}/{\theta_0}\right)^{\left(1-3\alpha\right)/A}
=\phi_{0}\left({t}/{t_0}\right)^{\left(1-3\alpha\right)/A_\ast}
\ ,\label{eq:scalar field-3}\\
2A_\ast&=4+3\omega\left(1-\alpha^2\right) \ , \label{eq:A-3}
\end{align}
where we have used the relation between the parametric time $\theta$ with the coordinate time $t$. These solutions have singular behavior for $t\rightarrow0$ if the power of the scale factor is positive. This happens when $\omega<-1/(1-\alpha)$ for $\alpha\in \left[-1,1/3\right]$ and $\omega<-4/3(1-\alpha)$ for $\alpha\in \left[1/3,1\right]$.
The third family of solutions can be derived by taking the appropriate limit from the previous two families\footnote{We thank the referees for point out this limiting approach.}. Indeed, for $\Delta<0$ the limit $\Delta\rightarrow0$ implies $\theta_{+}\rightarrow 0$ with $\theta_{-}\neq0$, hence Eq.~\eqref{eq:general solution} shows that $a\propto \left(\theta+\theta_-\right)^{\sigma/A}$. For $\Delta>0$ the limit $\Delta\rightarrow0$ implies $\theta_{+}=\theta_{-}=-B/2A$ and again Eq.~\eqref{eq2:scale factor-1} gives the same result $a\propto \left(\theta- \theta_-\right)^{\sigma/A}$ compatible with Eq.~\eqref{eq:scale factor-3}.
Another interesting asymptotic limit for all these solutions is obtained at finite time but allowing the BD parameter to increase boundlessly. The limit $|\omega|\rightarrow\infty$ depends crucially on whether $\alpha=1$ or not. For $\alpha\neq 1$, the limit $|\omega|\rightarrow\infty$ gives $\sigma =\omega(1-\alpha)$, $A=A_\ast=\frac32\omega(1-\alpha)^2$; hence these quantities diverge, the exponents of the scalar field tend to zero, and $\phi \rightarrow \phi_{0}$ in all three cases. Furthermore, the scale factor becomes
\begin{align}
\lim_{\omega\rightarrow-\infty} a & =a_{0}\Big[\left(\theta+\theta_{-}\right)^{2}+\theta_{+}^{2}\Big]^{1/{3(1-\alpha)}} &&,\quad \mbox{for } \beta_0\neq1\ \mbox{and}\ \omega<-\frac32\label{eq:general solution winfty}\\
\lim_{\omega\rightarrow\infty} a & =a_{0}\Big[\left(\theta-\theta_{+}\right)\left(\theta-\theta_{-}\right)\Big]^{1/{3(1-\alpha)}} &&,\quad \mbox{for } \beta_0\neq1\ \mbox{and}\ \omega>-\frac32
\\
\lim_{|\omega|\rightarrow\infty} a & =a_{0}\left({\theta}/{\theta_0}\right)^{2/3\left(1-\alpha\right)}=a_{0}\left({t}/{t_0}\right)^{2/3\left(1-\alpha\right)} &&,\quad \mbox{for}\ \beta_0=1\ \mbox{and } \omega\neq -\frac32
\end{align}
Therefore, if $\alpha\neq 1$, independently of ${\rm sign}(\omega)$, all three families of solutions asymptotically approach GR. However, that is not the case for stiff matter: for $\alpha=1$, in the limit $|\omega|\rightarrow\infty$, the scalar field does not approach a constant and hence does not attain its GR limit. Indeed, the $\alpha=1$ case has to be studied separately, which is what we shall analyze in the next section.
\section{Exact Power Law Solution}\label{sec:PartSol}
Let us study the dynamics of a FLRW universe filled with a stiff matter perfect fluid. Following the family of solutions of Gurevich \emph{et al.}, we propose a power law ansatz such that
\begin{eqnarray}\label{powerlaw}
a=a_{0}t^{r}\quad,\quad\phi=\phi_{0}t^{s}.
\end{eqnarray}
Equating the powers of $t$ in the Klein-Gordon Eq.~\eqref{KG1}, it is easy to check that $6r+s=2$. Furthermore, the coefficients of Eqs.~\eqref{FLRW1}-\eqref{KG1} imply
\begin{align}
3r^{2} +3rs-\frac{\omega}{2}s^{2}& = \frac{8\pi\rho_{0}}{\phi_{0}}-3 \frac{k}{a_{0}^{2}}t^{2-2r}\, \\
s(s-1)+3rs &= -\frac{16\pi\rho_{0}}{(3+2\omega)\phi_{0}}.
\end{align}
Using the previous relation between $r$ and $s$ and combining these two equations, we find
\begin{eqnarray}
r = \frac{1}{2}\quad, \quad s = -1 \label{basisstiffsolutions}
\end{eqnarray}
for spatial curvature $k=0$, and
\begin{eqnarray}
r = 1 \quad, \quad s = -4.
\end{eqnarray}
for $k\neq0$.
Note also that the equations require the constraint
\begin{eqnarray}\label{phi0rho0wrel}
\phi_{0}=-\frac{32\pi}{(3+2\omega)}\rho_{0}\,
\end{eqnarray}
for flat spatial sections, and
\begin{eqnarray}
\phi_{0} = - \frac{2 \pi}{3+2\omega}\rho_{0}
\end{eqnarray}
with $k=-a_{0}^{2}$ normalized to $-1$ for non-flat. From now on we will focus solely on the solution (\ref{basisstiffsolutions}), for a $k=0$ FLRW universe.
Therefore, in order to have attractive gravity with positive energy density, i.e.\ $\rho_{0}>0$ and $\phi_{0}>0$, we must have $\omega<-\frac{3}{2}$. During the whole evolution the scale factor mimics a GR radiation dominated expansion, namely $a\propto t^{1/2}$, while the scalar field is always decreasing, inversely proportional to the cosmic time, $\phi\propto t^{-1}$. There is no GR limit in the sense that the evolution does not depend on the BD parameter $\omega$. Furthermore, Eq.~\eqref{phi0rho0wrel} seems to show that the limit $\omega\rightarrow\infty$ is not even well defined, since the gravitational strength is inversely proportional to the scalar field, i.e.\ $G\sim\phi^{-1}$. Notwithstanding, this power law solution is completely consistent for finite values of $\omega$. In the next section we shall study the cosmological perturbations over this particular solution.
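As a quick cross-check of this solution (purely illustrative, not part of the argument), one can verify symbolically that $a=t^{1/2}$ and $\phi=\phi_{0}t^{-1}$, with $\phi_0$ fixed by Eq.~\eqref{phi0rho0wrel}, satisfy the flat Friedmann constraint and the Klein-Gordon equation written out in section~\ref{sec:DynSys}, once $p=\rho$ and $\rho=\rho_{0}a^{-6}$ (which follows from Eq.~\eqref{conservTmunu}) are used; the normalization $a_{0}=1$ is an assumption of the sketch.
\begin{verbatim}
# Symbolic consistency check of the stiff matter power law solution
# a = t**(1/2), phi = phi0/t, phi0 = -32*pi*rho0/(3+2*omega)  (flat case, a0 = 1),
# against the flat Friedmann constraint and the Klein-Gordon equation,
#   H**2 + H*F = 8*pi*rho/(3*phi) + (omega/6)*F**2,
#   phi''/phi + 3*H*phi'/phi = 8*pi*(rho - 3*p)/((3+2*omega)*phi),
# with p = rho (stiff matter) and rho = rho0*a**(-6).
import sympy as sp

t = sp.symbols('t', positive=True)
w = sp.symbols('omega')
rho0 = sp.symbols('rho_0', positive=True)

a = sp.sqrt(t)
phi0 = -32*sp.pi*rho0/(3 + 2*w)
phi = phi0/t
rho = rho0*a**(-6)
p = rho

H = sp.diff(a, t)/a          # Hubble rate
F = sp.diff(phi, t)/phi      # (dphi/dt)/phi

friedmann = H**2 + H*F - (8*sp.pi*rho/(3*phi) + w/6*F**2)
klein_gordon = (sp.diff(phi, t, 2)/phi + 3*H*sp.diff(phi, t)/phi
                - 8*sp.pi*(rho - 3*p)/((3 + 2*w)*phi))

print(sp.simplify(friedmann))      # 0
print(sp.simplify(klein_gordon))   # 0
\end{verbatim}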
\section{Cosmological Perturbation}\label{sec:Pert}
Consider the background solution found above for a flat FLRW universe filled with stiff matter in BD theory, i.e.
\begin{align}\label{backsol}
a=a_0t^{1/2}\quad ,\quad \phi=\phi_0t^{-1} \ .
\end{align}
In Ref.~\cite{plinio} the general perturbed equations for a fluid with equation of state of the type $p=\alpha\rho$, with $\alpha$ constant, have been established for the Brans-Dicke cosmology. The full perturbed dynamical system is given by the perturbed version of Eq.s~\eqref{eq:fe-1}-\eqref{eq:fe-2}. However, the evolution of the matter density perturbation can be analyzed using only the perturbed versions of the time-time Einstein equation, the Klein-Gordon equation, and the conservation of the energy-momentum tensor.
The metric perturbation is defined as $h_{\mu\nu}=g_{\mu\nu}-g^{(0)}_{\mu\nu}$, where $g^{(0)}_{\mu\nu}$ is given by the FLRW solution with Eq.~\eqref{backsol} and $k=0$. Following Ref.~\cite{plinio}, we adopt the synchronous gauge where $h_{0\mu}=0$. It is straightforward to calculate the perturbed Ricci tensor, which has time-time component given by
\begin{displaymath}
\delta R_{00}=\frac{1}{a^{2}}\biggr[\ddot{h}_{kk}-2\frac{\dot{a}}{a}\dot{h}_{kk}+2\biggr(\frac{\dot{a}^{2}}{a^{2}}-\frac{\ddot{a}}{a}\biggl)h_{kk}\biggl] \ .
\end{displaymath}
The time-time component of the perturbed energy-momentum tensor and its trace read
\begin{align} \label{21}
\delta T^{00}=\delta\rho \quad ,\qquad \delta T=\delta\rho-3\delta p\ .
\end{align}
Similarly, the perturbation of the scalar field is defined as $\delta \phi(x)=\phi(x)-\phi^{(0)}(t)$. The d'Alembertian of the scalar field is
\begin{equation}\label{23}
\delta\Box\phi=\delta\ddot\phi+a\dot ah^{kk}\dot\phi-\frac{1}{2a^{2}}\dot{h}_{kk}\dot\phi+3\frac{\dot {a}}{a}\delta\dot\phi-\frac{1}{a^{2}}\nabla^{2}\delta\phi~~.
\end{equation}
It is convenient to define new variables. In particular, we define the usual expression for the density contrast, a similar version for the perturbation of the scalar field, the divergence of the perturbation of the perfect fluid's velocity field $(\delta u^{i})$, and a normalized version of the metric perturbation. They are defined, respectively, as
\begin{eqnarray}\label{defnewvar}
\delta=\frac{\delta\rho}{\rho}\ ,\quad \lambda=\frac{\delta\phi}{\phi}\ ,\quad U=\delta u_{,i}^{i}\ ,\quad h=\frac{h_{kk}}{a^{2}}\ .
\end{eqnarray}
Using \eqref{defnewvar} and decomposing them in Fourier modes $n$, the time-time BD and the Klein-Gordon, equations (\ref{FLRW1}) and (\ref{KG1}), read respectively
\begin{align}
&\frac{\ddot{h}}{2}+H\dot{h}=\frac{8\pi\rho}{\phi}\biggr(\frac{2+\omega+3\alpha(1+\omega)}{3+2\omega}\biggl)(\delta-\lambda)+\ddot{\lambda}+2(1+\omega)\frac{\dot{\phi}}{\phi}\dot{\lambda}\ ,\label{pertFLRW}\\
&\ddot{\lambda}+\biggr(3H+2\frac{\dot{\phi}}{\phi}\biggl)\dot{\lambda}+\biggr[\frac{n^{2}}{a^{2}}+\biggr(\frac{\ddot{\phi}}{\phi}+3H\frac{\dot{\phi}}{\phi}\biggl)\biggl]\lambda=\frac{8\pi(1-3\alpha)\rho}{(3+2\omega)\phi}\delta+\frac{\dot{\phi}}{\phi}\frac{\dot{h}}{2}\ ,\label{pertKG}
\end{align}
In addition, the conservation of the energy-momentum tensor decomposes into two equations, namely
\begin{align}
&2\dot{\delta}-(1+\alpha)\left(\dot{h}-2U\right)=0\ ,\label{pertconserv1}\\
&(1+\alpha)\left(\dot{U}+(2-3\alpha)H U\right)=\alpha\frac{n^{2}}{a^{2}}\delta\ .\label{pertconserv2}
\end{align}
where again $n$ represents the Fourier mode. In the long wavelength limit, $n\rightarrow0$, Eq.~\eqref{pertconserv2} shows that the perturbation of the four-velocity decouples. For a stiff matter fluid, $\alpha=1$, it reduces to $\dot{U}=HU$, so that $U$ becomes a growing mode with
\begin{eqnarray}
U \propto a\ .
\end{eqnarray}
This growing mode has nothing to do with BD's extra scalar degree of freedom. Eq.s~\eqref{pertconserv1}-\eqref{pertconserv2} come from perturbing the conservation of the energy-momentum Eq.~\eqref{conservTmunu}, which is identical in GR. For equations of state with $\alpha<2/3$, such as radiation, the mode is decaying and we can ignore it by setting $U=0$, which in turn implies $2\delta=(1+\alpha)h$. In contrast, in our case $\alpha=1$ the growing mode together with Eq.~\eqref{pertconserv1} implies
\begin{eqnarray}\label{dhtheta}
\delta = h - \frac{4}{3}U_0t^{3/2},
\end{eqnarray}
with $U_0$ a constant of integration. For this reason, we will retain this inhomogeneous term.
Using the background expressions (Eq.~\eqref{backsol}), the long wavelength limit $(n\rightarrow0)$, and Eq.~\eqref{dhtheta}, the dynamical system simplifies to
\begin{align}
\frac{\ddot{h}}{2}
+\frac{\dot{h}}{2t}
+\frac{(5+4\omega)}{4t^2}h
=&\ddot{\lambda}
-\frac{2(1+\omega)\dot{\lambda}}{t}
+\frac{(5+4\omega)}{4t^2}\lambda
+ \frac{(5 + 4\omega)U_0}{3t^{1/2}} \ ,\\
-\frac{\dot{h}}{2t}+\frac{h}{2t^{2}}
=&
\ddot{\lambda}
-\frac{\dot{\lambda}}{2t}
+\frac{\lambda}{2t^{2}}
+ \frac{2U_0}{3t^{1/2}}\ .
\end{align}
These equations admit a solution under the form,
\begin{eqnarray}
h = h_{0}t^{m} + f_0t^{\frac{3}{2}},\quad\lambda=\lambda_{0}t^{m} + g_0t^{\frac{3}{2}},
\end{eqnarray}
where $\lambda_{0}$, $h_{0}$, $f_0$ and $g_0$ are constants. Equating the powers of the time parameter and the coefficients of the polynomials, we obtain a set of equations connecting these different constants of integration, namely
\begin{align}
\biggr\{m^2 + \frac{(5 + 4\omega)}{2}\biggl\}h_0 &= 2\biggr\{m^2 - (3 + 2\omega)m + \frac{(5 + 4\omega)}{4}\biggl\}\lambda_0,\\
(1 - m)h_0 &= 2\biggr\{m^2 - \frac{3}{2}m + \frac{1}{2}\biggl\}\lambda_0,\\
f_0 &= - \frac{2}{3}(3 + 4\omega)g_0 + \frac{8}{27}(5 + 4\omega)U_0,\\
f_0 &= - 2g_0 + \frac{8}{3}U_0.
\end{align}
The constants $h_0$ and $\lambda_0$ give the homogeneous modes, while $f_0$ and $g_0$ give the inhomogeneous modes associated with the growing mode $U$. The homogeneous modes admit four power solutions, given by $m=0,-1,\frac{1}{2}$ and $1$. The solutions corresponding to $m=0,-1$ are connected with the residual gauge freedom typical of the synchronous gauge. A remarkable novelty is that the physical solutions correspond only to growing modes. In fact, these modes appear also in the long wavelength limit of the radiative cosmological model both in the GR and BD theories \cite{peebles,plinio}. However, there are two interesting aspects connected with these modes: there is no dependence on $\omega$, and when $m = 1$, $h_0 = \lambda_0 = 0$, while for $m = \frac{1}{2}$, $h_0 = 0$ and $\lambda_0$ is arbitrary. Thus, the most important perturbative modes are the inhomogeneous modes, which are represented by $h=f_0t^{3/2}$, $\lambda=g_0 t^{3/2}$, with
\begin{eqnarray}
f_0 &=& \frac{2}{9}\frac{(7 + 2\omega)}{\omega}U_0\ ,\\
g_0 &=& - \frac{4}{9}\frac{(7 + 6\omega)}{\omega}U_0\ .
\end{eqnarray}
An important feature of these inhomogeneous solutions is that the perturbation grows very quickly with the scale factor, $\delta \propto a^3$. It is also interesting to contrast with the same situation in GR, where these inhomogeneous modes identically cancel for the pure stiff matter case~\cite{amendola2015,weinberg2008}. As a final remark, we should stress that these homogeneous and inhomogeneous modes have a well defined $\omega\rightarrow\infty$ limit. However, these solutions depend on the background solution, which is inconsistent in this limit.
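As an independent check of the quoted exponents (again purely illustrative), the two relations coupling $h_0$ and $\lambda_0$ obtained above form a homogeneous $2\times2$ linear system; a short symbolic computation confirms that nontrivial solutions exist exactly for $m=-1,0,\frac{1}{2},1$, with $\omega$ dropping out of the determinant.
\begin{verbatim}
# The homogeneous 2x2 system coupling h_0 and lambda_0 admits nontrivial
# solutions only where its determinant vanishes; omega cancels out and the
# roots are the exponents m = -1, 0, 1/2, 1 quoted in the text.
import sympy as sp

m, w = sp.symbols('m omega')

M = sp.Matrix([
    [m**2 + (5 + 4*w)/2, -2*(m**2 - (3 + 2*w)*m + (5 + 4*w)/4)],
    [1 - m,              -2*(m**2 - sp.Rational(3, 2)*m + sp.Rational(1, 2))],
])

det = sp.factor(M.det())
print(det)                 # omega-free polynomial in m
print(sp.solve(det, m))    # [-1, 0, 1/2, 1]
\end{verbatim}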
\section{Dynamical system analysis}\label{sec:DynSys}
The power law solution for a stiff matter fluid in BD displayed in section~\ref{sec:PartSol} has distinct features compared with Gurevich's families with $\alpha\neq1$ (see section~\ref{sec:GureSol}). In order to compare these solutions, we perform a dynamical system analysis. Instead of using the conservation equation~\eqref{conservTmunu}, in this section we shall use the BD and Klein-Gordon equations of motion. For a flat FLRW, the dynamical system reads
\begin{align}
\left(\frac{\dot{a}}{a}\right)^{2}+\left(\frac{\dot{a}}{a}\right)\frac{\dot{\phi}}{\phi} = & \frac{8\pi\rho}{3\phi}+\frac{\omega}{6}\left(\frac{\dot{\phi}}{\phi}\right)^{2}\ ,\label{dynsysH00}\\
\left(\frac{\ddot{a}}{a}\right)-\left(\frac{\dot{a}}{a}\right)\frac{\dot{\phi}}{\phi} =& -\frac{8\pi}{3\phi}\frac{(3+\omega)\rho+3\omega p}{2\omega+3} -\frac{\omega}{3}\left(\frac{\dot{\phi}}{\phi}\right)^{2} ,\label{dynsysHii}\\
\frac{\ddot{\phi}}{\phi}+3\left(\frac{\dot{a}}{a}\right)\frac{\dot{\phi}}{\phi}=&\frac{8\pi}{(3+2\omega)}\frac{\left(\rho-3p\right)}{\phi}\ .\label{dynsysKG}
\end{align}
It is convenient to define the Hubble factor $H={\dot{a}}/{a}$ and its analogue for the scalar field, namely $F={\dot{\phi}}/{\phi}$. Restricting ourselves to stiff matter $(p=\rho)$, we can combine the Hamiltonian constraint Eq.~\eqref{dynsysH00} with Eq.s~\eqref{dynsysHii} and \eqref{dynsysKG}, obtaining the following autonomous dynamical system:
\begin{align}
\dot{H} & = -\frac{6}{(3+2\omega)}\left((1+\omega)H^{2}+\frac{\omega}{3} HF+\frac{\omega}{12} F^{2}
\right)\ , \label{Hponto}\\
\dot{F} & =
-\frac{6}{(3+2\omega)}\left(H^{2}+\frac{(5+2\omega)}{2}HF+\frac{(3+\omega)}{6}F^2 \right) \ , \label{Fponto}
\end{align}
In fact, note that Eq.s~\eqref{Hponto}-\eqref{Fponto} can simultaneously describe the stiff matter and the vacuum $(p=\rho=0)$. It is easy to check that there are two fixed points for this dynamical system
\begin{align}
H&=F=0 \quad \mbox{corresponding to the Minkowski case,}\\
H&=-\frac{F}{3} \quad \mbox{with}\quad \omega=-\frac{4}{3}\ .
\end{align}
We can also find the invariant rays defined by the condition $F=q H$ with $q$ constant, which correspond to power law solutions of the system. Using Eq.~\eqref{powerlaw}, this condition translates into $q={s}/{r}$, where $r$ and $s$ are the powers in time of the scale factor and of the scalar field, respectively. Imposing this condition and combining the resulting expressions we find the following third order polynomial for $q$
\begin{eqnarray}
(q+2)\biggr(\frac{\omega}{2}q^{2}-3q-3\biggl)=0,
\end{eqnarray}
with solutions given by
\begin{equation}
q = -2 \quad ,\quad
q_{\pm} = \frac{3}{\omega}\biggr(1\pm\sqrt{1+\frac{2}{3}\omega}\biggl)
\ .\label{qstiff}
\end{equation}
The first root corresponds to the power law solution found previously, for which gravity is attractive only if $\omega<-{3}/{2}$. Indeed, using $F=qH$, the constraint Eq.~\eqref{dynsysH00} reads
\begin{eqnarray}\label{constq}
\biggr(-\frac{\omega}{2}q^{2}+3q+3\biggl)H^{2}=\frac{8\pi\rho}{\phi}\ .
\end{eqnarray}
One can immediately see that if $q=-2$ then the energy density is positive only for $2\omega+3<0$ as already argued in Eq.~\eqref{phi0rho0wrel}. The other two roots correspond to the vacuum solution. Again, Eq.~\eqref{constq} shows that for $q=q_\pm$ the left hand-side of the above equation vanishes implying that $\rho=0$. Note also that the invariant rays $q=q_\pm$ disappear when $\omega<-\frac{3}{2}$ (the roots become imaginary). Varying $\omega$ into negative values makes the two $q_\pm$ rays collapse into $q=-2$ when $\omega=-3/2$. For $\omega<-3/2$ only the $q=-2$ invariant ray remains (see Fig.~\ref{phase}).
The $q=-2$ invariant ray does not depend on $\omega$, which means that it is insensitive to the $\omega\rightarrow\infty$ limit. On the other hand, the rays $q=q_\pm$ collapse towards $F=0$, behaving asymptotically as
\begin{equation}
q_\pm\simeq\pm\sqrt{\frac{6}{\omega}}\ , \quad \omega\rightarrow\infty\ .
\end{equation}
Thus, since $F=q H$, for arbitrary finite values of $H$ we have $\dot{\phi}\rightarrow 0$ in the $\omega\rightarrow\infty$ limit. Naively, one could expect that a vacuum solution with $\dot{\phi}\rightarrow 0$ should approach the Minkowski spacetime. However, the term $\omega\, \dot{\phi}^2$ does not go to zero in this limit, producing a power law expansion with $a\propto t^{1/3}$. Indeed, it has been shown in Ref.~\cite{brando2018} that in this regime the $\omega\, \dot{\phi}^2$ term behaves as an effective stiff-matter-like term, which is responsible for the $a\propto t^{1/3}$ evolution of the scale factor.
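A simple numerical cross-check of these statements (illustrative only): along a ray $F=qH$ the flow of Eq.s~\eqref{Hponto}-\eqref{Fponto} stays on the ray precisely when $\dot{F}=q\dot{H}$, and the sketch below evaluates this residual at an arbitrary point of each candidate ray, for sample values $\omega>-3/2$ so that all three rays are real.
\begin{verbatim}
# Check that q = -2 and q = q_pm of Eq. (qstiff) define invariant rays:
# along F = q*H the vector field must satisfy Fdot = q*Hdot.
import numpy as np

def rhs(H, F, w):
    pref = -6.0/(3.0 + 2.0*w)
    Hdot = pref*((1.0 + w)*H**2 + (w/3.0)*H*F + (w/12.0)*F**2)
    Fdot = pref*(H**2 + (5.0 + 2.0*w)/2.0*H*F + (3.0 + w)/6.0*F**2)
    return Hdot, Fdot

for w in (2.0, 50.0):                        # sample values with omega > -3/2
    qs = [-2.0,
          3.0/w*(1.0 + np.sqrt(1.0 + 2.0*w/3.0)),
          3.0/w*(1.0 - np.sqrt(1.0 + 2.0*w/3.0))]
    for q in qs:
        H = 0.7                              # arbitrary point on the ray
        Hdot, Fdot = rhs(H, q*H, w)
        print(w, q, Fdot - q*Hdot)           # residual ~ 0 up to rounding
\end{verbatim}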
\subsection{Singular Points at Infinity}
The stability of the fixed point at the origin of the phase space can be inferred directly from the phase space diagrams. However, the stability of the invariant rays of the dynamical system must be analyzed at infinity. For this purpose we use the Poincar\'{e} central projection method~\cite{sansoniconti,bogo,juliojoel}, based on the coordinate transformation
\begin{equation}\label{transf}
H = \frac{h}{z}, \ \ F = \frac{f}{z}, \quad \quad \mbox{with}\ h^2 + f^2 + z^2 =1\ .
\end{equation}
Eq.s~\eqref{Hponto}--\eqref{Fponto} can be recast as $P(H,F)dF - Q(H,F)dH = 0$, which combined with Eq.~\eqref{transf} gives us
\begin{equation}\label{dfdhdz}
-zQdh + zPdf + \left(hQ-fP\right)dz=0\ .
\end{equation}
where the functions $P(h,f,z)$ and $Q(h,f,z)$ are given by
\begin{align}
P(h,f,z) &= -\frac{6}{(3+2\omega)}\left((1+\omega)h^{2}+\frac{\omega}{3} hf+\frac{\omega}{12} f^{2}\right)\ , \\
Q(h,f,z) &= -\frac{6}{(3+2\omega)}\left(h^{2}+\frac{(5+2\omega)}{2} hf+\frac{(3+\omega)}{6} f^{2}\right)\ .
\end{align}
Collecting all terms, we can explicitly write Eq.~\eqref{dfdhdz} in terms of the projective coordinates as
\begin{align}\label{dfdhdzexpl}
&z\Big[ 6h^2 +3(2\omega+5)hf + (3+\omega)f^2 \Big]dh\nonumber \\
-&z \left[6(1+\omega)h^2 + 2\omega hf + \frac{\omega}{2} f^2 \right]df\nonumber \\
-& \left[ 6 h^3 - \frac{\omega}{2} f^3 + (3-\omega)hf^2 + 9 h^2f \right] dz= 0\ .
\end{align}
The singular points at infinity have projective coordinates in the plane $(h,f,z=0)$. Given Eq.s~\eqref{transf} and \eqref{dfdhdzexpl}, they are solutions of the system:
\begin{equation}\label{hfsist}
\begin{split}
&h^2+f^2=1\ , \\
& 6 h^3 - \frac{\omega}{2} f^3 + (3-\omega)hf^2 + 9 h^2f =0\ .
\end{split}
\end{equation}
In order to find the invariant rays, we substitute ${f}/{h} = q$ in the system Eq.s~(\ref{hfsist}). As expected, there are three invariant rays
\begin{align}
q&=-2 \ :
&h&= \pm \frac{1}{\sqrt{5}} \ ,
&&f=qh \ , \label{ray1}\\
q_{+}&=\frac{3}{\omega}\left(1+\sqrt{1+\frac{2}{3}\omega}\right) \ :
&h&= \frac{1}{\sqrt{1+q_{+}^2}} \ ,
&&f=q_{+}h\ , \label{ray2}\\
q_{-}&= \frac{3}{\omega} \left(1-\sqrt{1+\frac{2}{3}\omega}\right)\ :
&h&= \frac{1}{\sqrt{1+q_{-}^2}}\ ,
&& f=q_{-}h\ .\label{ray3}
\end{align}
The analytical expressions for the scale factor and the scalar field that correspond to these rays are given by\\
For $q=-2$ :
\begin{equation}
a(t) \propto t^{1/2}\quad ,\quad \phi(t) \propto t^{-1} \ , \label{invraystiff}
\end{equation}
For $q_{+}= \frac{3}{\omega}\left(1+\sqrt{1+\frac{2}{3}\omega}\right)$ :
\begin{equation}
a(t) \propto t^{{\omega\left(3+q_{+}\right)}/{3(4+3\omega)}}\quad , \quad \phi(t) \propto t^{{\left(4-\omega q_{+}\right)}/{(4+3\omega)}}\ ,
\label{invrayvac1}
\end{equation}
For $q_{-}= \frac{3}{\omega}\left(1-\sqrt{1+\frac{2}{3}\omega}\right)$ :
\begin{equation}
a(t) \propto t^{{\omega\left(3+q_{-}\right)}/{3(4+3\omega)}}\quad , \quad \phi(t) \propto t^{{\left(4-\omega q_{-}\right)}/{(4+3\omega)}}\ .
\label{invrayvac2}
\end{equation}
The phase portraits for six different values of $\omega$, namely $-5$, $-4/3$, $-1$, $1$, $50$ and $500$, are plotted in Fig.~\ref{phase}. As mentioned before, the two invariant rays that spread apart for $\omega>-3/2$ are related to the $q_\pm$ vacuum solutions ($\rho=0$), while the invariant ray in the middle is $q=-2$, which corresponds to our solution Eq.~\eqref{powerlaw} for $\omega<-3/2$. Increasing the value of $\omega$ makes the $q_\pm$ invariant rays move away from the $q=-2$ invariant ray. As can be seen from Eq.~\eqref{constq}, the region between the two invariant rays $q_\pm$ corresponds to negative values of the energy density, hence it should be excluded on physical grounds. Additionally, for large values of $\omega$ the $q_\pm$ invariant rays tend to lie along the $F=0$ line, which represents the $\omega\rightarrow \infty$ limit.
\begin{figure}[h]
\centering
\includegraphics[width=0.27\textwidth]{phase_space_omega_menos_5.pdf}
\includegraphics[width=0.27\textwidth]{phase_space_omega_menos_4_3.pdf}
\includegraphics[width=0.27\textwidth]{phase_space_omega_menos_1.pdf}\\
\includegraphics[width=0.27\textwidth]{phase_space_omega_mais_1.pdf}
\includegraphics[width=0.27\textwidth]{phase_space_omega_mais_50.pdf}
\includegraphics[width=0.27\textwidth]{phase_space_omega_mais_500.pdf}
\caption{Compacted phase portraits using the variables $h$, $f$ that are respectively related to $H$ and $F$ by Eq.~\eqref{transf}. Each portrait uses a different value of the BD parameter $\omega$. In particular, we used the values $\omega=-5,-4/3,-1,1,50,500$, respectively, from the top left to the bottom right. The unit circle corresponds to the projected singular points at infinity. The dashed straight lines depict the invariant rays Eq.s~(\ref{ray1}-\ref{ray3}) and the empty region has been excluded since it corresponds to the unphysical situation of negative values of the energy density.}
\label{phase}
\end{figure}
\section{Discussion}\label{sec:Concl}
In this paper we analyzed the cosmological solutions for a perfect fluid in Brans-Dicke theory and showed that stiff matter $(p=\rho)$ is a very particular case. There is a power law solution with the scale factor and the scalar field proportional to $a\propto t^{1/2}$ and $\phi\propto t^{-1}$, respectively. Even though the matter content behaves as stiff matter, the cosmological evolution mimics radiation dominated dynamics in General Relativity. Furthermore, the scalar field sets the effective gravitational strength, hence gravity becomes stronger with the expansion of the universe.
Eq.~\eqref{phi0rho0wrel} shows that the scalar field is inversely proportional to the BD parameter $\omega$. This condition is commonly understood as a sufficient condition for a well defined GR limit. However, we have shown that this is not the case for the power law solution \eqref{powerlaw}.
The scalar cosmological perturbation also has interesting features. The velocity field for the stiff matter fluid has a growing mode $U$ that is proportional to the scale factor. This extra contribution produces new polynomial solutions for the density contrast $\delta=\delta\rho/\rho$, the fractional scalar field perturbation $\lambda=\delta \phi/\phi$ and the tensor perturbation $h=h_{kk}/a^2$. The homogeneous modes have four power solutions in cosmic time, $t^m$ with $m=-1,0,1/2,1$. The first two are connected with the residual freedom of the synchronous gauge and the other two are the physical solutions, corresponding to two growing modes. There is no decaying mode. The inhomogeneous mode related to the growing mode $U$ goes as $t^{3/2}\propto a^3$, hence its growth is steep compared with the standard cosmological model.
The dynamical systems analysis developed in section~\ref{sec:DynSys} shows the existence of three invariant rays associated with the system Eq.s~\eqref{Hponto}-\eqref{Fponto}. The first corresponds to the power law solution of section~\ref{sec:PartSol} with constant of proportionality $q=-2$. The other two are vacuum solutions with constants of proportionality $q_\pm$ given by Eq.~\eqref{qstiff}. For $\omega<-3/2$ there is only one invariant ray, associated with $q=-2$, while for $\omega>-3/2$ there are two vacuum invariant rays $q_\pm$. The region in the phase space diagram between these two rays has negative energy density, which is unphysical. Increasing the value of the BD parameter $\omega$, these two rays rotate away from each other, expanding the unphysical region. The limiting case is $\omega\rightarrow\infty$, when both $q_\pm$ tend to zero. This limit has a vacuum solution with constant scalar field $F=0$, but the scale factor increases as $a\propto t^{1/3}$, which is characteristic of a FLRW universe with stiff matter in GR. Even though this is the vacuum case and the scalar field is constant, we do not approach Minkowski spacetime.
We would like to point out a possible cosmological realization of our solution (\ref{basisstiffsolutions}). Alternative theories of gravity, such as BD, are commonly used in cosmology to explain the late time acceleration of our universe. One such class is quintessence models, in which the matter component is described by a minimally coupled scalar field with a potential $V(\psi)$. Some of these models~\cite{amendola2015} are such that the potential goes to zero at early times, which implies that in this period the scalar field has a stiff matter type equation of state. Therefore, the model described in this work can be interpreted as the early time description of such models.
It is well known in the literature that there are examples where the BD scalar field scales as $\phi\sim 1/\sqrt{\omega}$ but the system does not approach a GR regime in the limit $\omega\rightarrow\infty$. Nevertheless, it is commonly expected to recover GR in this limit if $\phi\sim 1/\omega$ and the matter energy-momentum tensor has a nonzero trace. We have explicitly exhibited an exact BD solution with $\phi\sim 1/\omega$ and $T^\mu{}_\mu\neq 0$ that does not approach GR in the limit $\omega\rightarrow\infty$.
\section*{Acknowledgments}
The authors are grateful to Luca Amendola for valuable comments, and to an anonymous referee for important corrections. The authors would also like to thank and acknowledge financial support from the National Scientific and Technological Research Council (CNPq, Brazil), the State Scientific and Innovation Funding Agency of Esp\'\i rito Santo (FAPES, Brazil) and the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES, Brazil).
\section{Interpolation}
Interpolation problems for analytic functions have been a mainstay in complex analysis since its conception in the late 19th century. The general idea is that we have a certain class $\mathcal{X}$ of analytic functions on the open unit disk $\mathbb{D}$ (e.g., all analytic functions, bounded analytic functions, analytic self maps of $\mathbb{D}$, Blaschke products, outer functions). Then, for a sequence $\{\lambda_n\}$ of distinct points in $\mathbb{D}$ and sequence $\{w_n\}$ of complex numbers, we want to find an $f \in \mathcal{X}$ such that $f(\lambda_n) = w_n$ for all $n$. If we are not able to solve this problem for all $\{\lambda_n\}$ and $\{w_n\}$, what restrictions must we have?
Suppose $\mathcal{X}$ is the class of {\em all} analytic functions on $\mathbb{D}$. For a sequence $\{\lambda_n\}$ of distinct points in $\mathbb{D}$ (with no limit point in $\mathbb{D}$) and any sequence $\{w_n\}$, an application of the classical Mittag--Leffler theorem and the Weierstrass factorization theorem produces an analytic function $f$ with $f(\lambda_n) = w_n$ for all $n$. In other words, for the class $\mathcal{X}$ of all analytic functions on $\mathbb{D}$, besides the obvious restriction that $\{\lambda_n\}$ has no limit points in $\mathbb{D}$, there is no other restriction on $\{\lambda_n\}$ to be able to interpolate any sequence $\{w_n\}$ with an analytic function.
Of course, there are the finite interpolation problems. For example, a well known result of Lagrange, from 1795, says that given distinct $\lambda_1, \ldots, \lambda_n$ in $\mathbb{C}$ and arbitrary $w_1, \ldots, w_n$ in $\mathbb{C}$ there is a polynomial $p$ of degree $n - 1$ such that $p(\lambda_j) = w_j$ for all $1 \leq j \leq n$. There is also the often-quoted result of Nevanlinna and Pick (from 1916) which says that given distinct $\lambda_1, \ldots, \lambda_n$ in $\mathbb{D}$ and arbitrary $w_1, \ldots, w_n$ in $\mathbb{D}$, there is an $f \in H^{\infty}$ with $|f| \leq 1$ on $\mathbb{D}$ for which $f(\lambda_j) = w_j$, $1 \leq j \leq n$, if and only if the Nevanlinna-Pick matrix
$$\Big[\frac{1 - \overline{w_j} w_i}{1 - \overline{\lambda_j} \lambda_i}\Big]_{1 \leq i, j \leq n}$$ is positive semidefinite \cite{MR1882259, Garnett}.
When $\mathcal{X}$ is the class of bounded analytic functions on $\mathbb{D}$, denoted in the literature by $H^{\infty}$, a well-known theorem of Carleson \cite{MR117349} (see also \cite{Garnett}) says the following: A sequence $\Lambda = \{\lambda_n\} \subset \mathbb{D}$ has the property that given any bounded sequence $\{w_n\}$ there is a $\varphi \in H^{\infty}$ such that $\varphi(\lambda_n) = w_n$ if and only if
\begin{equation}\label{deltaaaa}
\delta(\Lambda) := \inf_{n \geq 1} \prod_{k=1, k \ne n}^{\infty} \left| \frac{\lambda_k-\lambda_n}{1-\overline{\lambda}_k \lambda_n}\right| > 0.
\end{equation}
Such $\{\lambda_n\}$ are called {\em interpolating sequences}. In this paper we explore the type of functions $\varphi \in H^{\infty}$ that can perform the interpolating. For example, a result of Earl \cite{MR284588} says that when $\delta(\Lambda) > 0$ one can always take the interpolating function
$\varphi$ to be a constant multiple of a Blaschke product. Other types of interpolation problems are discussed in \cite{MR2309193, MR133446}.
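To get a feeling for the condition \eqref{deltaaaa}, the following numerical sketch (an illustration only; the number of terms kept is an arbitrary truncation) approximates $\delta(\Lambda)$ for the exponential sequence $\lambda_n = 1 - 2^{-n}$, a standard example of an interpolating sequence.
\begin{verbatim}
# Approximate delta(Lambda) for lambda_n = 1 - 2**(-n) by truncating the
# products and the infimum at N terms; the result stays bounded away from 0.
import numpy as np

N = 40
lam = 1.0 - 2.0**(-np.arange(1, N + 1))

def pseudo_hyperbolic(x, y):
    return abs((x - y)/(1.0 - x*y))

products = []
for n in range(N):
    prod = 1.0
    for k in range(N):
        if k != n:
            prod *= pseudo_hyperbolic(lam[k], lam[n])
    products.append(prod)

print(min(products))   # strictly positive: Lambda is interpolating
\end{verbatim}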
Inspired by a common range problem for co-analytic Toeplitz operators on model spaces that we will discuss at the end of this paper, we focus on conditions on the targets $\{w_n\}$ that allow us to take $\varphi$ to be an outer function (bounded outer function).
Our results are as follows: In Theorem \ref{IT001} we prove that for interpolating $\{\lambda_n\}$ and bounded $\{w_n\}$ with
$$\inf_{n \geq 1} |w_n| > 0,$$ there is a bounded outer function $\varphi$ such that $\varphi(\lambda_n) = w_n$ for all $n$. As an application of this, we prove in Proposition \ref{comp886bb} that when $\{w_n\}$ and $\{w^{'}_n\}$ are bounded with
$$0 < m \leq \Big|\frac{w_n}{w^{'}_n}\Big| \leq M < \infty, \quad n \geq 1,$$ then $\{w_n\}$ can be interpolated by an outer function (bounded outer function) if and only if $\{w^{'}_n\}$ can as well. Therefore, for subsequent discussions, without loss of generality, we may consider positive target sequences. To establish conditions for which $\{w_n\}$ can be interpolated by an outer function, we prove in Theorem \ref{T:interpol-outer-inner} that when $\{w_n\} \subset \mathbb{C} \setminus \{0\}$ can be interpolated by an outer function it must be the case that
$$\lim_{n \to \infty} (1 - |\lambda_n|) \log |w_n| = 0.$$ Hence sequences such as
$$w_n = e^{-\frac{1}{1-|\lambda_n|}}, \quad n \geq 1,$$
can not be interpolated by an outer function. In other words, any $H^{\infty}$ function $\varphi$ for which $\varphi(\lambda_n) = w_n$ for all $n$ (and such $\varphi$ exist by Carleson's theorem) must have an inner factor. In fact (Theorem \ref{Blakshkefactor}), any bounded analytic function $\varphi$ which satisfies the stronger decay condition
$$\varphi(\lambda_n) = e^{-\frac{1}{(1 - |\lambda_n|)^2}}$$ must have a Blaschke factor. In Theorem \ref{growthragfetd6dx6} and Theorem \ref{MTThree} we discuss the sharpness of Theorem \ref{T:interpol-outer-inner} and explore conditions on the decay rate of $(1 - |\lambda_n|) \log |w_n|$ to determine when there is an outer function (bounded outer function) that interpolates $\{w_n\}$.
Worth mentioning here is the paper \cite{MR2309193} which examines the question of when the interpolating function can be zero free. The outer functions are a strict subclass of the zero free functions since the zero free functions can have a singular inner factor.
In the final part of this paper we apply our results to determine the common range for the co-analytic Toeplitz operators on a model space. In fact, this was our original reason for exploring this topic. For an inner function $u$, define the model space $\mathcal{K}_u = (u H^2)^{\perp}$. It is known \cite[p.~106]{MR3526203} that $\mathcal{K}_u$ is an invariant subspace for any co-analytic Toeplitz operator $T_{\overline{\varphi}}$ on $H^2$. In \cite{MR1065054} McCarthy described the set
$$\mathscr{R}(H^2) := \bigcap \big\{T_{\overline{\varphi}} H^2: \varphi \in H^{\infty} \setminus \{0\}\big\},$$
the functions in the common range of all the (nonzero) co-analytic Toeplitz operators. By the Douglas factorization theorem (see \S \ref{Crrrsdsdsd546u} below), $H^{\infty} \setminus \{0\}$ can be replaced by $H^{\infty} \cap \mathcal{O}$, where $\mathcal{O}$ is the class of outer functions.
As an application of our outer interpolation results, we determine
$$\mathscr{R}(\mathcal{K}_u) : = \bigcap \big\{T_{\overline{\varphi}} \mathcal{K}_u: \varphi \in \mathcal{O} \cap H^{\infty}\big\},$$
the common range on a fixed model space. While $\mathscr{R}(H^2)$ is in some sense ``large'' (for example, it contains all functions that are analytic in a neighborhood of $\overline{\mathbb{D}}$), the space $\mathscr{R}(\mathcal{K}_u)$ can be considerably smaller. In fact $\mathscr{R}(\mathcal{K}_u) = \{0\}$ for certain $u$ (Example \ref{nosmooth}). We describe $\mathscr{R}(\mathcal{K}_u)$ for any inner function (Theorem \ref{notusefule8shyshhH}) and when $u$ is an interpolating Blaschke product (zeros are an interpolating sequence), we give an alternate, and more tangible, description involving our outer interpolating results (Theorem \ref{Hdecereuiusvnn77}).
\section{Some notation}\label{notataions8}
Let us set our notation and review some well-known facts about the classes of analytic functions that appear in this paper. The books \cite{Duren, Garnett} are thorough references for the details and proofs.
In this paper, $\mathbb{D}$ is the open unit disk $\{z \in \mathbb{C}: |z| < 1\}$, $\mathbb{T}$ the unit circle $\{z \in \mathbb{C}: |z| = 1\}$, and $dm = d \theta/2\pi$ is normalized Lebesgue measure on $\mathbb{T}$.
The space $H^1$, the {\em Hardy space}, is the set of analytic functions $f$ on $\mathbb{D}$ for which
$$\sup_{0 < r < 1} \int_{\mathbb{T}} |f(r \xi)| dm(\xi) < \infty.$$
Standard results say that every $f \in H^1$ has a radial limit
$$f(\xi): = \lim_{r \to 1^{-}} f(r \xi)$$ for almost every $\xi \in \mathbb{T}$ and
$$\int_{\mathbb{T}} |f(\xi)| dm(\xi) = \sup_{0 < r < 1} \int_{\mathbb{T}} |f(r \xi)| dm(\xi).$$
As is the usual practice in Hardy spaces, we use the symbol $f$ to denote the boundary function on $\mathbb{T}$ as well as the analytic function on $\mathbb{D}$.
We let $H^{\infty}$ be the bounded analytic functions on $\mathbb{D}$ and observe that $H^{\infty} \subset H^1$ and thus every $f \in H^{\infty}$ also has a radial boundary function. In fact,
$$\sup_{z \in \mathbb{D}} |f(z)| = \mbox{ess-sup}_{\mathbb{T}} |f|.$$
If $W$ is an extended real-valued integrable function on $\mathbb{T}$,
\begin{equation}\label{outerfunctions88u}
\varphi(z) = \exp\Big(\int_{\mathbb{T}} \frac{\xi + z}{\xi - z} W(\xi) dm(\xi)\Big), \quad z \in \mathbb{D},
\end{equation}
is analytic on $\mathbb{D}$ and is called an {\em outer function}.
Observe that
$$
\log|\varphi(z)| = \int_{\mathbb{T}} \Re\Big(\frac{\xi + z}{\xi - z}\Big) W(\xi)\,dm(\xi)
= \int_{\mathbb{T}} \frac{1 - |z|^2}{|\xi - z|^2} W(\xi)\,dm(\xi),$$
which is the Poisson integral of $W$.
By some harmonic analysis \cite[p.~15]{Garnett},
$$\lim_{r \to 1^{-}} \log|\varphi(r \zeta)| = \log |\varphi(\zeta)| = W(\zeta)$$ for almost every $\zeta \in \mathbb{T}$. This process can be reversed and so given an extended real-valued $W \in L^1(\mathbb{T})$, there is an outer $\varphi$ with
\begin{equation}\label{eeewwww}
|\varphi(\zeta)| = e^{W(\zeta)}
\end{equation}
almost everywhere (in terms of radial boundary values).
The outer functions belong to the {\em Smirnov class}
$$N^{+} = \big\{f/g: f \in H^{\infty}, g \in H^{\infty} \cap \mathcal{O}\big\}$$ and every $F \in N^{+}$ can be factored as $F = I_{F} O_{F}$, where $I_{F}$ is inner ($I_F \in H^{\infty}$ with unimodular boundary values almost everywhere on $\mathbb{T}$) and $O_{F}$ is outer. There are also the inclusions
$H^{\infty} \subset H^1 \subset N^{+}$.
\section{Positive results}
We start off with examples of bounded $\{w_n\}$ which can be interpolated by outer functions and explore the ones which can not in the next section.
\begin{Theorem}\label{IT001}
Suppose $\{\lambda_n\} \subset \mathbb{D}$ is interpolating. For a bounded $\{w_n\}$ with
$$\inf_{n \geq 1} |w_n| > 0,$$ there is a bounded outer function $\varphi$ such that $\varphi(\lambda_n) = w_n$ for all $n$.
\end{Theorem}
The proof requires a few preliminaries. The first is a more detailed version of Carleson's result on interpolating sequences \cite[p.~268]{MR1669574}.
\begin{Theorem}[Carleson] \label{T:interpol}
For an interpolating $\Lambda = \{\lambda_n\}$ there is a constant $0<C(\Lambda)<1$ with the following properties.
\begin{enumerate}[(i)]
\item For each $\{w_n\}$ satisfying
$
|w_n| \leq C(\Lambda)$ for all $n \geq 1$,
there is an $\varphi \in H^\infty$ such that $\|\varphi\|_\infty \leq 1$ and
$\varphi(\lambda_n)=w_n$ for all $n$.
\item There are bounded $\{w_n\}$ with
$
|w_n| > C(\Lambda),
$
for at least one $n$,
such that $\|\varphi\|_{\infty} > 1$ for any interpolating function $\varphi \in H^{\infty}$.
\end{enumerate}
\end{Theorem}
The number $C(\Lambda)$ is called the {\em Carleson index} for $\Lambda$ and is related to $\delta(\Lambda)$ from \eqref{deltaaaa} by
$$A \frac{\delta(\Lambda)}{\log\big(\dfrac{e}{\delta(\Lambda)}\big)} \leq C(\Lambda) \leq \delta(\Lambda),$$
where $A$ is an absolute constant \cite[p.~268]{MR1669574}. Our first step is to prove a special case of Theorem \ref{IT001}.
\begin{Lemma} \label{L:circle}
Suppose $\{w_n\} \subset \overline{D(a,r)} = \{z: |z - a| \leq r\}$. If
$r/|a| < C(\Lambda)
$
there is a bounded outer $\varphi$ such that $\varphi(\lambda_n) = w_n$ for all $n$.
\end{Lemma}
The hypothesis $r/|a| < C(\Lambda)$ implies that $r < |a|$ and so $\overline{D(a,r)} $ does not contain the origin.
\begin{proof}
Set
$$
t_n := C(\Lambda) \frac{w_n-a}{r}.
$$
Then $|t_n| \leq C(\Lambda)$, and, by Theorem \ref{T:interpol}, there is $g \in H^\infty$ with $\|g\|_\infty \leq 1$ and such that $g(\lambda_n)=t_n$ for all $n$. Define
\[
\varphi := \frac{r}{C(\Lambda)}g+a.
\]
Clearly $\varphi \in H^{\infty}$ and
\[
\varphi(\lambda_n) = \frac{r}{C(\Lambda)}g(\lambda_n)+a = \frac{r}{C(\Lambda)}t_n+a = w_n.
\]
Moreover, with $a = |a|e^{i\alpha}$ and $z \in \mathbb{D}$,
\begin{align*}
\Re(e^{-i\alpha}\varphi(z)) & = \frac{r}{C(\Lambda)} \Re(e^{-i\alpha}g(z)) + |a|
\geq \frac{r}{C(\Lambda)} \Big(\Re(e^{-i\alpha}g(z)) + 1 \Big)
\end{align*}
which is positive.
The condition
$\Re(e^{-i\alpha} \varphi)> 0$
is sufficient to ensure that $e^{-i\alpha} \varphi$ is an outer function \cite[p.~65]{Garnett}. Thus $\varphi$ is outer as well.
\end{proof}
\begin{Remark}\label{detailreg0}
When $\{w_n\} \subset (0, \infty)$, we can choose $\alpha = 0$ in the above proof and thus choose the interpolating function to satisfy $\Re \varphi > 0$. This detail will become important later on.
\end{Remark}
\begin{Remark}
If one just wanted a nonvanishing interpolating function $\varphi \in H^{\infty}$ in Theorem \ref{IT001}, one could take $\varphi = e^{\psi}$, where $\psi \in H^{\infty}$ with $\psi(\lambda_n) = \log w_n$ (for a suitably defined logarithm). See \cite{MR2309193} for more on this.
\end{Remark}
\begin{proof}[Proof of Theorem \ref{IT001}]
Fix $0<r<C(\Lambda)$ and consider the closed disk $\overline{D(1,r)}$. Since
$
m \leq |w_n| \leq M$, $n \geq 1$,
for some positive constants $m$ and $M$, there is a positive integer $k$ such that
$
w_n^{1/k} \in \overline{D(1,r)}$ for all $n \geq 1$.
We use the principal branch of the logarithm to evaluate $w_n^{1/k}$. Therefore, by Lemma \ref{L:circle}, there is a bounded outer function $g$ such that
$
g(\lambda_n) = w_n^{1/k}$ for all $n$.
The function $\varphi := g^k$ is bounded, outer, and
$
\varphi(\lambda_n) = w_n$ for all $n$.
\end{proof}
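The key point in the argument above is that the principal $k$-th roots of any $w$ with $m \leq |w| \leq M$ tend to $1$ uniformly as $k$ grows, and hence eventually land in $\overline{D(1,r)}$. The small numerical sketch below (the bounds $m$, $M$ and the radius $r$ are arbitrary sample choices) searches for such a $k$ on a grid of moduli and principal arguments.
\begin{verbatim}
# Find a k so that all principal k-th roots of values w with m <= |w| <= M
# lie in the closed disk D(1, r).
import numpy as np

m, M, r = 0.05, 10.0, 0.4

rho = np.linspace(m, M, 200)                 # moduli
theta = np.linspace(-np.pi, np.pi, 400)      # principal arguments
R, T = np.meshgrid(rho, theta)

k = 1
while True:
    roots = R**(1.0/k)*np.exp(1j*T/k)        # principal k-th roots
    if np.abs(roots - 1.0).max() <= r:
        break
    k += 1

print(k)    # smallest such k on this grid
\end{verbatim}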
This next result says that for outer interpolation, we can always assume, for example, that the targets $w_n$ are positive.
\begin{Proposition}\label{comp886bb}
Suppose $\{\lambda_n\}$ is interpolating and $\{w_{n}\}$ and $\{w_{n}'\}$ are bounded with
$$0 < m \leq \big|\frac{w_{n}'}{w_n}\big| \leq M < \infty, \quad n \geq 1.$$
Then $\{w_n\}$ can be interpolated by an outer (bounded outer) function if and only if $\{w_{n}'\}$ can be interpolated by an outer (bounded outer) function.
\end{Proposition}
\begin{proof}
By Theorem \ref{IT001} there is a bounded outer $F$ such that
$F(\lambda_n) = w_{n}'/w_{n}$ for all $n \geq 1$.
If there is an outer (bounded outer) $\varphi$ such that $\varphi(\lambda_n) = w_n$ for all $n$, then the outer (bounded outer) $\varphi F$ performs the desired interpolation for $\{w_{n}'\}$.
\end{proof}
\begin{Remark}
If $\varphi$ is outer (bounded outer) then so is $\varphi^{c}$ for any $c > 0$. Thus $\{w_{n}^{c}\}$ can be interpolated by an outer (bounded outer) function whenever $\{w_n\}$ can.
\end{Remark}
\section{Negative results -- existence of an inner factor}
If $\{\lambda_n\}$ is interpolating we know that given any bounded $\{w_n\}$ there is a $\varphi \in H^{\infty}$ such that $\varphi(\lambda_n) = w_n$. This next result says that under certain circumstances, any Smirnov interpolating function for $\{w_n\}$ must have an inner factor.
\begin{Theorem} \label{T:interpol-outer-inner}
If $\{\lambda_n\}$ is interpolating and $\{w_n\} \subset \mathbb{C} \setminus \{0\}$ satisfies
\[
\lim_{n \to \infty} (1-|\lambda_n|)\log|w_n| \ne 0,
\]
then any $\varphi \in N^{+}$ satisfying $\varphi(\lambda_n) = w_n$ for all $n$ must have a non-trivial inner factor.
\end{Theorem}
\begin{Example} If
$$
w_n := e^{-\frac{1}{1-|\lambda_n|}}, \quad n \geq 1,
$$
then any interpolating $\varphi \in N^{+}$ for $\{w_n\}$ is not outer.
\end{Example}
This result will follow from the lemma below which is probably folklore but we include a proof for completeness.
\begin{Lemma} \label{L:growth-outer}
If $\varphi$ is outer, then
\[
\lim_{|z| \to 1^{-}} (1-|z|) \log|\varphi(z)| = 0.
\]
\end{Lemma}
\begin{proof}
Let $a > 1$ and
$
E_a = \{\xi \in \mathbb{T} : |\varphi(\xi)|>a \}.
$
Then $\log|\varphi| > 0$ on $E_a$ and an application of
\begin{equation}\label{Pposequal1}
\int_{\mathbb{T}} \frac{1 - |z|^2}{|\xi - z|^2}\,dm(\xi) = 1, \quad z \in \mathbb{D};
\end{equation}
and
\begin{equation}\label{poissiner5}
\frac{1 - |z|^2}{|\xi - z|^2} \leq \frac{2}{1 - |z|}, \quad z \in \mathbb{D}, \quad \xi \in \mathbb{T}.
\end{equation}
gives us
\begin{align*}
\log|\varphi(z)| &= \int_{\mathbb{T}} \frac{1-|z|^2}{|z-\xi|^2} \log|\varphi(\xi)| dm(\xi)\\
&= \int_{E_a} \frac{1-|z|^2}{|z-\xi|^2} \log|\varphi(\xi)| dm(\xi) + \int_{\mathbb{T}\setminus E_a} \frac{1-|z|^2}{|z-\xi|^2} \log|\varphi(\xi)| dm(\xi)\\
&\leq \frac{2}{1-|z|} \int_{E_a} \log|\varphi(\xi)| dm(\xi)
+ \log a \int_{\mathbb{T}\setminus E_a} \frac{1-|z|^2}{|z-\xi|^2} dm(\xi)\\
&\leq \frac{2}{1-|z|}\int_{E_a} \log|\varphi(\xi)| dm(\xi) + \log a.
\end{align*}
Hence,
\[
(1-|z|)\log|\varphi(z)| \leq 2\int_{E_a} \log|\varphi(\xi)| dm(\xi)+ (1-|z|)\log a, \quad z \in \mathbb{D},
\]
which implies
\[
\varlimsup_{|z| \to 1^{-}}(1-|z|)\log|\varphi(z)| \leq 2\int_{E_a} \log|\varphi(\xi)| dm(\xi).
\]
Now let $a \to +\infty$ and use the fact that $\log |\varphi| \in L^1(\mathbb{T})$ to deduce
\begin{equation}\label{E:limsup-phi}
\varlimsup_{|z| \to 1^{-}}(1-|z|)\log|\varphi(z)| \leq 0.
\end{equation}
Since $1/\varphi$ is also outer, the above argument implies
\[
\varlimsup_{|z| \to 1^{-}}(1-|z|)\log|1/\varphi(z)| \leq 0,
\]
or equivalently
\begin{equation}\label{E:liminf-phi}
\varliminf_{|z| \to 1^{-}}(1-|z|)\log|\varphi(z)| \geq 0.
\end{equation}
The result now follows by comparing \eqref{E:limsup-phi} and \eqref{E:liminf-phi}.
\end{proof}
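The lemma is easy to visualize numerically: for the outer function $\varphi(z)=1-z$ one has $(1-r)\log|\varphi(r)| \to 0$, while for the singular inner function $\exp\big(-\frac{1+z}{1-z}\big)$ the same quantity tends to $-2$. The short sketch below (purely illustrative) prints both quantities along a sequence of radii.
\begin{verbatim}
# (1-r)*log|phi(r)| for the outer function phi(z) = 1 - z tends to 0,
# while for S(z) = exp(-(1+z)/(1-z)) one has log|S(r)| = -(1+r)/(1-r),
# so (1-r)*log|S(r)| tends to -2.
import numpy as np

r = 1.0 - np.logspace(-1, -8, 8)

outer_term = (1.0 - r)*np.log(np.abs(1.0 - r))
inner_term = (1.0 - r)*(-(1.0 + r)/(1.0 - r))

for rr, o, s in zip(r, outer_term, inner_term):
    print(f"r = {rr:.8f}   outer: {o: .3e}   singular inner: {s: .6f}")
\end{verbatim}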
Let us comment here that when the hypothesis of Theorem \ref{T:interpol-outer-inner} is satisfied, the inner factor that appears in the interpolating function $\varphi$ plays a significant role in the decay of $\varphi$.
\begin{Corollary}\label{yuhsjkfdmgsewdf}
Suppose $\{\lambda_n\}$ is interpolating and $\{w_n\} \subset \mathbb{C} \setminus \{0\}$ is bounded and satisfies
\[
\lim_{n \to \infty} (1-|\lambda_n|)\log|w_n| \ne 0.
\]
If $I_{\varphi}$ is the inner factor for a $\varphi \in N^{+}$ for which $\varphi(\lambda_n) = w_n$ for all $n$, then
$$\varliminf_{n \to \infty} |I_{\varphi}(\lambda_n)| = 0.$$
\end{Corollary}
\begin{proof}
Let $\varphi = F_{\varphi} I_{\varphi}$, where $F_{\varphi}$ is outer and $I_{\varphi}$ is inner. If $|I_{\varphi}(\lambda_n)| \geq \delta > 0$ for all $n$, then
$I_{\varphi}(\lambda_n) = w_n/F_{\varphi}(\lambda_n)$ satisfies the hypothesis of Theorem \ref{IT001} and so there is a bounded outer $\psi$ with
$\psi(\lambda_n) = I_{\varphi}(\lambda_n)$ and so $F_{\varphi} \psi$ is outer and interpolates $w_n$. This says that $w_n$ can be interpolated by an outer function -- which it can not.
\end{proof}
\begin{Remark}
The above says that a subsequence of $\{\lambda_n\}$ must approach
$$\Big\{\xi \in \mathbb{T}: \varliminf_{z \to \xi} |I_{\varphi}(z)| = 0\Big\},$$
the boundary spectrum of the inner factor $I_{\varphi}$. This set will consist of the accumulation points of the zeros of the Blaschke factor of $I_{\varphi}$ as well as the support of the singular measure associated with the singular inner factor of $I_{\varphi}$ \cite[p.~152]{MR3526203}.
\end{Remark}
\section{Negative results -- existence of a Blaschke product}
This next result says that under the right circumstances, any Smirnov interpolating function must have a Blaschke factor.
\begin{Theorem}\label{Blakshkefactor}
Suppose $\{\lambda_n\}$ is interpolating and $\{w_n\} \subset \mathbb{C} \setminus \{0\}$ is bounded and satisfies
\[
\varlimsup_{n \to \infty} (1-|\lambda_n|)|\log|w_n|| = \infty.
\]
Then any $\varphi \in N^{+}$ for which $\varphi(\lambda_n) = w_n$ for all $n \geq 1$ must have a Blaschke factor.
\end{Theorem}
This says, for example, that for an interpolating $\{\lambda_n\}$ any $\varphi \in H^{\infty}$ for which
$$\varphi(\lambda_n) = \exp\Big(-\frac{1}{(1 - |\lambda_n|)^{2}}\Big)$$ (and such $\varphi$ exist by Carleson's theorem) must have a Blaschke factor.
The proof of this theorem follows from the following lemma on zero-free Smirnov functions. Any zero-free $\varphi \in N^{+}$ can be written as
\begin{equation}\label{zerofreestructure}
\varphi(z) = \exp\Big(\int_{\mathbb{T}} \frac{\xi + z}{\xi - z} W(\xi) dm(\xi)\Big) \exp\Big(-\int_{\mathbb{T}} \frac{\xi + z}{\xi - z}d \mu(\xi)\Big),\end{equation}
where $W$ is a real-valued integrable function and $\mu$ is a positive measure that is singular with respect to Lebesgue measure $m$.
\begin{Lemma}
If $\varphi \in N^{+}$ and zero free, then
$$\varlimsup_{|z| \to 1^{-}} (1 - |z|) \big|\log |\varphi(z)|\big| < \infty.$$
\end{Lemma}
\begin{proof}
From \eqref{zerofreestructure} we have
$$\log|\varphi(z)| = \int_{\mathbb{T}} \frac{1-|z|^2}{|z-\xi|^2} W(\xi) dm(\xi) - \int_{\mathbb{T}} \frac{1-|z|^2}{|z-\xi|^2} d \mu(\xi).$$
The proof of Lemma \ref{L:growth-outer} shows that
$$\lim_{|z| \to 1^{-}} (1 - |z|) \int_{\mathbb{T}} \frac{1-|z|^2}{|z-\xi|^2} W(\xi) dm(\xi) = 0.$$ From \eqref{poissiner5} we have
$$0 \leq ( 1- |z|) \int_{\mathbb{T}} \frac{1-|z|^2}{|z-\xi|^2} d \mu(\xi) \leq 2 \int_{\mathbb{T}} d \mu =2 \mu(\mathbb{T}).$$
Combine these two facts to prove the result.
\end{proof}
\section{A growth rate characterization}
We know from Theorem \ref{T:interpol-outer-inner} that
if $\{w_n\}$ can be interpolated by an outer function, then
$$\lim_{n \to \infty} (1- \lambda_n) \log |w_n| = 0.$$
What is the decay rate of $(1 - \lambda_n) \log |w_n|$? Here we focus our attention on the case when $\{\lambda_n\} \subset (0, 1)$. Though it does not play a role in our results, it is known \cite[p. ~156]{Duren} that $\{\lambda_n\} \subset (0, 1)$ is interpolating if and only if there is a $0 < c < 1$ such that
$$(1 - \lambda_{n + 1}) \leq c (1 - \lambda_{n}), \quad n \geq 1.$$
Such sequences are called {\em exponential sequences}. Naively speaking, the following result says that the decay rate of $(1- \lambda_n) \log |w_n|$ is controlled by an absolutely continuous function. The sharpness of this observation will be studied in Theorem \ref{growthragfetd6dx6}.
\begin{Theorem}\label{MTone}
Suppose $\{\lambda_n\} \subset (0, 1)$ is interpolating and $\{w_n\} \subset \mathbb{C} \setminus \{0\}$ is bounded with
$$M := \sup_{n \geq 1} |w_n|.$$
Suppose there is an outer function $\varphi$ for which $\varphi(\lambda_n) = w_n$ for all $n$.
Then there is a positive, decreasing, integrable function $h$ on $[0, 1]$ such that
$$- (1 - \lambda_n) \log\Big|\frac{w_n}{M}\Big| \leq \int_{0}^{1 - \lambda_n} h(t) dt, \quad n \geq 1.$$
\end{Theorem}
To prepare for the proof, we require a few comments on the growth of the Poisson kernel.
We begin with the following two normalizing assumptions on $h$:
\begin{equation}\label{RC}
\mbox{$h$ is right continuous};
\end{equation}
\begin{equation}\label{hiszero}
\mbox{$h$ is extended to $[0, \pi]$ by setting $h(x) = 0$ for $x \in [1, \pi]$}.
\end{equation}
The right continuity can be assumed since $h$ is monotone and thus has at most a countable number of jumps. Since the behavior around the origin is our main concern, extending the definition of $h$ to $[0, \pi]$ is merely for aesthetic purposes when working with Poisson integrals below.
Let
$$P_{r}(t) = \frac{1 - r^2}{1 - 2 r \cos t + r^2}, \quad 0 \leq r < 1, \quad -\pi \leq t \leq \pi,$$
be the standard Poisson kernel. We wish to examine the function
\begin{equation}\label{arrrrrr}
A_h(r) = (1 - r) \int_{-\pi}^{\pi} P_{r}(t) h(|t|) \frac{d t}{2 \pi}.
\end{equation}
For $r \in [0, 1)$ and $t \in [-\pi, \pi]$ we have
\begin{equation}\label{99uuUU}
\frac{(1 - r)(1 - r^2)}{1 - 2 r \cos t + r^2} = \frac{(1 + r) (1 - r)^2}{(1 - r)^2 + 4 r \sin^{2}(t/2)}.
\end{equation}
This yields
\begin{align*}
A_h(r) & = \int_{-\pi}^{\pi} \frac{(1 - r)(1 - r^2)}{1 - 2 r \cos t + r^2} h(|t|) \frac{dt}{2 \pi}\\
& = 2 \int_{0}^{\pi} \frac{(1 + r) (1 - r)^2}{(1 - r)^2 + 4 r \sin^{2}(t/2)} h(t) \frac{dt}{2 \pi}\\
& \geq 2 \int_{0}^{1 - r} \frac{(1 + r) (1 - r)^2}{(1 - r)^2 + 4 r \sin^{2}(t/2)} h(t) \frac{dt}{2 \pi}
\end{align*}
Note that
$$t^2 \sin^{2}(\tfrac{1}{2}) \leq \sin^{2}(\tfrac{t}{2}) \leq \tfrac{1}{4} t^2$$ and so
\begin{align*}
\frac{1}{\pi} \int_{0}^{1 - r} \frac{(1 + r) (1 - r)^2}{(1 - r)^2 + 4 \sin^{2}(t/2)} h(t) dt & \geq \frac{1}{ \pi} \int_{0}^{1 - r} \frac{(1 - r)^2}{(1 - r)^2 + t^2} h(t) dt\\
& \geq \frac{1}{ \pi} \int_{0}^{1 - r} \frac{(1 - r)^2}{(1 - r)^2 + (1 - r)^2} h(t) dt\\
& = \frac{1}{2 \pi} \int_{0}^{1 - r} h(t) dt.
\end{align*}
In summary,
\begin{equation}\label{lower}
A_h(r) \geq \frac{1}{2 \pi} \int_{0}^{1 - r} h(t) dt.
\end{equation}
To obtain an upper bound, note that $A_h(r)$ is equal to the sum of
$$
\frac{1}{ \pi} \int_{0}^{1 - r} \frac{(1 - r)(1 - r^2)}{(1 - r)^2 + 4 r \sin^{2}(t/2)} h(t) dt$$
and
$$\frac{1}{ \pi} \int_{1 - r}^{\pi} \frac{(1 - r)(1 - r^2)}{1 - 2 r \cos t + r^2} h(t) dt.$$
For the first integral, observe that
\begin{align*}
\frac{1}{ \pi} \int_{0}^{1 - r} \frac{(1 + r) (1 - r)^2}{(1 - r)^2 + 4 r \sin^{2}(t/2)} h(t) dt
& \leq \frac{1}{ \pi} \int_{0}^{1- r} \frac{(1 + r) (1 - r)^2}{(1 - r)^2 } h(t) dt\\
& \leq \frac{2}{ \pi} \int_{0}^{1 - r} h(t)dt.
\end{align*}
For the second integral we recall from \eqref{hiszero} that $h(\pi) = 0$ and we use the second mean value theorem \cite{MR2960787} for integrals, along with the right continuity of $h$ from \eqref{RC}, to see that
$$ \frac{1}{ \pi} \int_{1 - r}^{\pi} \frac{(1 - r)(1 - r^2)}{1 - 2 r \cos t + r^2} h(t) dt = \frac{1}{\pi} h(1 - r) \int_{1 - r}^{t_{0}} \frac{(1 - r)(1 - r^2)}{1 - 2 r \cos t + r^2}dt$$ for some $t_{0} \in [1 - r, \pi]$. Notice that
\begin{align*}
\frac{1}{ \pi} h(1 - r) \int_{1 - r}^{t_{0}} \frac{(1 - r)(1 - r^2)}{1 - 2 r \cos t + r^2}dt & = \frac{1}{\pi} (1 - r) h(1 - r) \int_{1 - r}^{t_{0}} P_{r}(t) dt \\
& \leq (1 - r) h(1 - r).
\end{align*}
In the last step note the use of
$$\int_{1 - r}^{t_0} P_{r}(t) dt \leq \int_{0}^{\pi} P_{r}(t) dt = \pi.$$
We now use the fact that $h$ is decreasing to obtain
$$ \int_{0}^{1 - r} h(t) dt \geq h(1 - r) \int_{0}^{1 - r} dt = (1 - r) h(1 - r).$$
Put this all together to get
\begin{equation}\label{puy6vvVV}
A_h(r) \leq \tfrac{2 + \pi}{\pi} \int_{0}^{1 - r} h(t) dt.
\end{equation}
Thus, combining \eqref{lower} and \eqref{puy6vvVV} we have the summary estimate
\begin{equation}\label{summaruyarrrr}
\frac{1}{2 \pi} \int_{0}^{1 - r} h(t) dt \leq A_h(r) \leq \tfrac{2 + \pi}{\pi} \int_{0}^{1 - r} h(t) dt.
\end{equation}
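As a sanity check of \eqref{summaruyarrrr} (illustrative only), take the sample choice $h(t)=t^{-1/2}$ on $(0,1]$, extended by $0$ on $(1,\pi]$, for which $\int_{0}^{1-r}h(t)\,dt = 2\sqrt{1-r}$. The sketch below computes $A_h(r)$ by quadrature, after the substitution $t=u^{2}$ which removes the integrable singularity, and confirms that the ratio $A_h(r)\big/\int_{0}^{1-r}h$ stays between $\frac{1}{2\pi}$ and $\frac{2+\pi}{\pi}$.
\begin{verbatim}
# Numerical check of the two-sided estimate for h(t) = t**(-1/2) on (0,1]:
# A_h(r) = (1-r)/pi * int_0^1 P_r(t) t**(-1/2) dt
#        = (1-r)*2/pi * int_0^1 P_r(u**2) du   (substitution t = u**2).
import numpy as np
from scipy.integrate import quad

def poisson(r, t):
    return (1.0 - r**2)/(1.0 - 2.0*r*np.cos(t) + r**2)

def A_h(r):
    val, _ = quad(lambda u: poisson(r, u*u), 0.0, 1.0, limit=200)
    return (1.0 - r)*2.0*val/np.pi

for r in (0.9, 0.99, 0.999, 0.9999):
    target = 2.0*np.sqrt(1.0 - r)            # int_0^{1-r} t**(-1/2) dt
    ratio = A_h(r)/target
    print(r, ratio, 1.0/(2.0*np.pi) <= ratio <= (2.0 + np.pi)/np.pi)
\end{verbatim}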
An important tool for our next step is the {\em symmetric decreasing rearrangement}. If $E$ is a measurable subset of $ [-\pi, \pi]$ let
$$E^{*} = (-\tfrac{1}{2} |E|, \tfrac{1}{2}|E|)$$
be the interval centered about $0$ for which $|E| = |E^{*}|$, where $|\cdot|$ is Lebesgue measure on $[-\pi, \pi]$. For $f \in L^{1}[-\pi, \pi]$ with $f \geq 0$, define
$$f^{*}(x) = \int_{0}^{\infty} \chi_{\{f > t\}^{*}}(x) dt, \quad x \in [-\pi, \pi].$$
This function $f^{*}$ satisfies $f^{*}(x) = f^{*}(|x|)$ on $[- \pi, \pi]$ (i.e., symmetric), is non-increasing on $[0, \pi]$, and has the same integral as $f$. The important fact used here is the following \cite[Ch.~10]{MR0046395}.
\begin{Lemma}[Hardy--Littlewood]\label{hhhllll}
For nonnegative and measurable $f, g$ we have
$$\int_{-\pi}^{\pi} f(x) g(x)\,dx \leq \int_{-\pi}^{\pi} f^{*}(x) g^{*}(x)\,dx.$$
\end{Lemma}
If $f$ is positive and symmetric on $[-\pi, \pi]$ and $f$ is decreasing on $[0, \pi]$, then for each $t$ the set $\{f > t\}$ is the interval
$$\big(- \tfrac{1}{2} |\{f > t\}|, \tfrac{1}{2}|\{f > t\}|\big).$$
In other words, $\{f > t\}^{*} = \{f > t\}$.
By the layer cake representation of $f$ we have
\begin{align*}
f(x) & = \int_{0}^{\infty} \chi_{\{f > t\}}(x)dt\\
& = \int_{0}^{\infty} \chi_{\{f > t\}^{*}}(x) dt\\
& = f^{*}(x).
\end{align*}
Conclusion: If $f$ is positive, symmetric ($f(t) = f(|t|)$ on $[-\pi, \pi]$), and decreasing on $[0, \pi]$, then $f^{*} = f$ (almost everywhere).
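On a uniform grid, the rearrangement and Lemma \ref{hhhllll} have a simple discrete analogue: sorting the sampled values of $f$ in decreasing order and placing them at grid points ordered by increasing $|x|$ produces the discrete $f^{*}$, and the rearrangement inequality then yields the Hardy--Littlewood bound. The sketch below illustrates this with $g=P_{r}$, which is already symmetric decreasing, so $g^{*}=g$; the grid size, the sample $f$, and the radius $r$ are arbitrary choices made for the illustration.
\begin{verbatim}
# Discrete illustration of the Hardy-Littlewood inequality on a uniform grid.
import numpy as np

N = 4001
x = np.linspace(-np.pi, np.pi, N)
dx = x[1] - x[0]

f = np.abs(np.sin(3*x)) + np.where(x > 0, 0.3, 0.0)    # some nonnegative f
g = (1 - 0.8**2)/(1 - 2*0.8*np.cos(x) + 0.8**2)        # Poisson kernel, r = 0.8

order = np.argsort(np.abs(x), kind="stable")           # grid points by |x|
f_star = np.empty_like(f)
f_star[order] = np.sort(f)[::-1]                       # largest values nearest 0

lhs = np.sum(f*g)*dx
rhs = np.sum(f_star*g)*dx                              # g* = g here
print(lhs, rhs, lhs <= rhs + 1e-12)
\end{verbatim}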
\begin{proof}[Proof of Theorem \ref{MTone}]
If $\varphi$ is outer and $\varphi(\lambda_n) = w_n$ for all $n \geq 1$, then $\psi = \varphi/M$ is outer and interpolates $w_n/M$.
For $0 < r < 1$ we have
\begin{align*}
- (1 - r) \log |\psi(r)| & = - (1 - r) \int_{-\pi}^{\pi} P_{r}(t) \log|\psi(e^{i t})| \frac{dt}{2 \pi}\\
& = - (1 - r) \int_{-\pi}^{\pi} P_{r}(t) \max(0, \log|\psi(e^{i t})|) \frac{dt}{2 \pi}\\
& \quad - (1 - r) \int_{-\pi}^{ \pi} P_{r}(t) \min(0, \log|\psi(e^{i t})|) \frac{dt}{2 \pi}\\
& \leq (1 - r) \int_{-\pi}^{\pi} P_{r}(t) k(t) \frac{dt}{2 \pi},
\end{align*}
where
$k = -\min(0, \log |\psi|)$ is nonnegative and integrable on $[-\pi, \pi]$.
Apply the Hardy--Littlewood estimate (Lemma \ref{hhhllll}) to \eqref{arrrrrr} with $f = k$ and $g = P_{r}$ (which is already symmetric and so $g = g^{*}$ -- see the discussion above) to obtain the estimate
$$(1 - r) \int_{-\pi}^{\pi} P_{r}(t) k(t) \frac{dt}{2 \pi} \leq (1 - r) \int_{-\pi}^{\pi} P_{r}(t) k^{*}(t) \frac{dt}{2 \pi}.$$
By \eqref{summaruyarrrr} (note the use of the fact that $k^{*}(|t|) = k^{*}(t)$)
$$-(1 - r) \log|\psi(r)| \leq \frac{2 + \pi}{\pi} \int_{0}^{1 - r} k^{*}(t) dt.$$
Insert $r = \lambda_n$ into the above inequality to complete the proof.
\end{proof}
Next we improve Theorem \ref{MTone} with this sharpness result.
\begin{Theorem}\label{growthragfetd6dx6}
Suppose $\{\lambda_n\} \subset (0, 1)$ is interpolating and $h$ is a positive, decreasing, integrable function on $[0, 1]$.
If $\{w_n\} \subset \mathbb{C} \setminus \{0\}$ is bounded and satisfies
$$-(1 - \lambda_n) \log |w_n| \asymp \int_{0}^{1 - \lambda_n} h(t) dt,$$ then there is a bounded outer function $\psi$ such that
$$-(1 - \lambda_n) \log \psi(\lambda_n) \asymp \int_{0}^{1 - \lambda_n} h(t) dt.$$
\end{Theorem}
In the statement above, $A_{n} \asymp B_{n}$ means there are positive constants $c_1$ and $c_2$, independent of $n$, such that $c_1 A_{n} \leq B_n \leq c_2 A_n$ for all $n$.
\begin{proof}
For $0 < r < 1$, \eqref{summaruyarrrr} yields
$$A_h(r) \asymp \int_{0}^{1 - r} h(t)dt.$$
If $\varphi$ is the (bounded) outer function with
\begin{equation}\label{67ygvvFGVFTYGFTYUHJ}
|\varphi(e^{i t})| = e^{-h(|t|)}
\end{equation}
for almost every $t \in [-\pi, \pi]$ (see \eqref{eeewwww}), then
\begin{align*}
- (1 - r) \log |\varphi(r)| & = - (1 - r) \int_{-\pi}^{\pi} P_{r}(t) \log|\varphi(e^{i t})| \frac{dt}{2 \pi}\\
& = A_h(r)
\asymp \int_{0}^{1 - r} h(t)dt.
\end{align*}
With $r = \lambda_n$ we have
$$ -(1 - \lambda_n) \log |\varphi(\lambda_n)| \asymp \int_{0}^{1 - \lambda_n} h(t)dt.$$
Now apply Proposition \ref{comp886bb} to produce a bounded outer function $\psi$ with $\psi(\lambda_n) = |\varphi(\lambda_n)|$.
\end{proof}
\section{More delicate interpolation}
Given $h$ as in Theorem \ref{growthragfetd6dx6}, there is a bounded outer $\varphi$ such that $$-\log \varphi(\lambda_n) \asymp \frac{1}{1 - \lambda_n} \int_{0}^{1 - \lambda_n} h(t)\,dt, \quad n \geq 1.$$ Can we replace $\asymp$ with $=$ in the above?
Equivalently, can we find an outer (bounded outer) $\varphi$ such that
$$\varphi(\lambda_n) = \exp\Big(- \frac{1}{1 - \lambda_n} \int_{0}^{1 - \lambda_n} h(t) dt\Big), \quad n \geq 1?$$
We certainly can find
$$d_n \in \Big[\frac{1}{2 \pi}, \frac{2 + \pi}{\pi}\Big]$$ such that
$$\varphi(\lambda_n)^{1/d_n} = \exp\Big(- \frac{1}{1 - \lambda_n} \int_{0}^{1 - \lambda_n} h(t) dt\Big).$$
By Theorem \ref{IT001} and Remark \ref{detailreg0} there is a bounded outer $\psi$ with $\Re \psi > 0$ such that
$\psi(\lambda_n) = 1/d_n$ for all $n$.
The function
$f = \varphi^{\psi}$ is analytic on $\mathbb{D}$ with
$$f(\lambda_n) = \exp\Big(- \frac{1}{1 - \lambda_n} \int_{0}^{1 - \lambda_n} h(t) dt\Big)$$
and thus performs the interpolation.
But of course we need to check that $f$ is outer (bounded outer).
Indeed this is something that needs checking since if $\varphi$ and $\psi$ are outer, $f = \varphi^{\psi}$ need not be outer. In fact with $\varphi = e$ (constant outer function) and
$$\psi(z) = -\frac{1 + z}{1 - z},$$ then
$$f = \varphi^{\psi} = \exp\Big(-\frac{1 + z}{1 - z}\Big)$$ is inner! Here is our result concerning when $\varphi^{\psi}$ is outer (bounded outer).
\begin{Proposition} \label{L:ftog}
Let $\varphi$ be outer and $\psi$ be bounded and outer.
\begin{enumerate}
\item[(i)] If $\arg \varphi(\xi) \in L^1(\mathbb{T})$, then $f = \varphi^{\psi}$ is outer.
\item[(ii)] If $\Re \psi > 0$ and $\arg \varphi(\xi) \in L^{\infty}(\mathbb{T})$, then $f = \varphi^{\psi}$ is outer and bounded.
\end{enumerate}
\end{Proposition}
The proof of this proposition needs a few preliminaries. If $u \in L^1(\mathbb{T})$ and $u \geq 0$, the {\em Herglotz integral }
\begin{equation}\label{bbherggg}
H_{u}(z) = \int_{\mathbb{T}} \frac{\xi + z}{\xi - z} u(\xi) dm(\xi)
\end{equation}
is analytic on $\mathbb{D}$ and
$$\Re H_{u}(z) = \int_{\mathbb{T}} \frac{1 - |z|^2}{|\xi - z|^2} u(\xi) dm(\xi) > 0, \quad z \in \mathbb{D}.$$
By a known result \cite[p.~65]{Garnett}, $H_{u}$ is outer. Recall from \S \ref{notataions8} the Hardy space $H^1$ and the Smirnov class $N^{+}$.
\begin{Lemma}
For $f \in H^1$ there are $G_j \in N^{+}$ with $\Re G_j \geq 0$ on $\mathbb{D}$ for $j = 1, 2$ such that $f = G_1 - G_2$.
\end{Lemma}
\begin{proof}
Functions in $H^1$ have radial boundary values almost everywhere on $\mathbb{T}$ and so let $u_{+}$ and $u_{-}$ be defined for almost every $\xi \in \mathbb{T}$ by
$$u_{+} (\xi)= \max(\Re f(\xi), 0), \quad u_{-}(\xi) = \max(-\Re f(\xi), 0).$$
Since $|\Re f(\xi)| \leq |f(\xi)|$ and $|f|$ is integrable on $\mathbb{T}$, we see that $u_{+}, u_{-}$ are nonnegative integrable functions. Furthermore, by the discussion above, $H_{u_{+}}$ and $H_{u_{-}}$ belong to $N^{+}$ and have positive real parts on $\mathbb{D}$. Finally,
$H_{u_{+}} - H_{u_{-}}$ belongs to $N^{+}$ and has the same real part as $f$ on $\mathbb{T}$. Thus, by the uniqueness of the harmonic conjugate,
$f = H_{u_{+}} - H_{u_{-}} + i c$ for some $c \in \mathbb{R}$. This completes the proof.
\end{proof}
\begin{Lemma}\label{h1outern8}
If $f \in H^1$, then $e^f$ is outer.
\end{Lemma}
\begin{proof}
By the previous lemma,
$f = H_{u_{+}} - H_{u_{-}} + i c$ and so
$$e^f = e^{i c} \frac{e^{-H_{u_{-}}}}{e^{-H_{u_{+}}}}.$$
From the formula for the Herglotz integral in \eqref{bbherggg} and the definition of outer from \eqref{outerfunctions88u}, the functions $e^{-H_{u_{+}}}$ and $e^{-H_{u_{-}}}$ are outer. Thus, $e^f$ is also outer.
\end{proof}
\begin{proof}[Proof of Proposition \ref{L:ftog}]
On $\mathbb{T}$ we have
\begin{align*}
|\log \varphi| &\leq \big|\log|\varphi|\big| + |\arg \varphi| \\
&= \big|\log(|\varphi|/\|\varphi\|_\infty)+\log\|\varphi\|_\infty\big| + |\arg \varphi| \\
&\leq \big|\log(|\varphi|/\|\varphi\|_\infty)\big|+\big|\log\|\varphi\|_\infty\big| + |\arg \varphi| \\
&=-\log(|\varphi|/\|\varphi\|_\infty)+\big|\log\|\varphi\|_\infty\big| + |\arg \varphi| \\
&\leq -\log|\varphi| + 2\log^+\|\varphi\|_\infty+ |\arg \varphi|.
\end{align*}
From $\log |\varphi| \in L^1(\mathbb{T})$ and $|\arg \varphi | \in L^{1}(\mathbb{T})$, it follows that $|\log \varphi| \in L^1(\mathbb{T})$. Since $\varphi$ is outer, $\log \varphi \in N^{+}$. A standard result of Smirnov \cite[p.~28]{Duren} implies $\log \varphi \in H^{1}$. Therefore, $\psi \log \varphi \in H^1$. By the previous lemma, $f= \exp(\psi \log \varphi)$ is outer. This proves (i).
If we assume that
\[
|\Im \log \varphi| = |\arg \varphi| \leq M \quad \mbox{and} \quad \Re \psi \geq 0
\]
on $\mathbb{T}$,
we have
\begin{align*}
|f| &= \exp( \Re \psi \, \log|\varphi| - \Im \psi \, \arg \varphi)\\
&\leq \exp( \|\psi\|_\infty \log(1+\|\varphi\|_\infty) + M\|\psi\|_\infty).
\end{align*}
Thus, $f$ is a bounded outer function. Note that
$$\Re \psi \, \log|\varphi| \leq \|\psi\|_\infty \log(1+\|\varphi\|_\infty)$$ follows from the fact that $\Re \psi \geq 0$ on $\mathbb{T}$. This proves (ii).
\end{proof}
Let us use the results above to refine Theorem \ref{growthragfetd6dx6}.
\begin{Theorem}\label{MTThree}
Suppose $h$ is a positive, decreasing, integrable function on $[0, 1]$.
Let $\{\lambda_n\} \subset (0, 1)$ be interpolating and
$\{w_n\} \subset \mathbb{C} \setminus \{0\}$ be bounded with
$$-(1 - \lambda_n) \log |w_n| \asymp \int_{0}^{1 - \lambda_n} h(t)dt, \quad n \geq 1.$$
\begin{enumerate}
\item[(i)] If $h(|t|) \log^{+} h(|t|) \in L^{1}[-\pi, \pi]$ then there is an outer $\varphi$ such that
$\varphi(\lambda_n) = w_n$ for all $n$.
\item[(ii)] If
$$\operatorname{PV} \int_{-\pi}^{\pi} \cot\Big(\frac{\theta - t}{2}\Big) h(|t|) \frac{dt}{2 \pi}$$ is bounded on $[-\pi, \pi]$
then there is a bounded outer $\varphi$ such that $\varphi(\lambda_n) = w_n$ for all $n$.
\end{enumerate}
\end{Theorem}
\begin{proof}
From the discussion at the very beginning of this section, we can find bounded outer $\varphi$ and $\psi$ such that $f = \varphi^{\psi}$ satisfies $f(\lambda_n) = w_n$ for all $n$. We just need to check that $f$ is outer (bounded outer).
By the proof of Theorem \ref{growthragfetd6dx6} and \eqref{67ygvvFGVFTYGFTYUHJ}, $\log |\varphi(e^{i t})| = -h(|t|)$ and
\begin{align*}
\varphi(z) & = \exp\Big(\int_{\mathbb{T}} \frac{\xi + z}{\xi - z} \log|\varphi(\xi)| dm\Big)\\
& = \exp\Big(\int_{\mathbb{T}} \Re\big( \frac{\xi + z}{\xi - z} \big) \log|\varphi(\xi)|dm + i \int_{\mathbb{T}} \Im\big( \frac{\xi + z}{\xi - z} \big) \log|\varphi(\xi)|dm\Big).
\end{align*}
From
$$\arg \varphi(z) = \int_{\mathbb{T}} \Im\big( \frac{\xi + z}{\xi - z} \big) \log|\varphi(\xi)|dm, \quad z \in \mathbb{D},$$ and standard theory involving the Hilbert transform on the circle, we have
$$\arg \varphi(e^{i \theta}) = - \operatorname{PV} \int_{-\pi}^{\pi} \cot\Big(\frac{\theta - t}{2}\Big) h(|t|) \frac{dt}{2 \pi}.$$
A classical result of Zygmund \cite[Vol I, p. 254]{MR1963498} says that if the function $h(|t|) \log^{+} h(|t|)$ belongs to $ L^{1}[-\pi, \pi]$ then $\arg \varphi \in L^{1}(\mathbb{T})$. An application of Proposition \ref{L:ftog} yields $f = \varphi^{\psi}$ is outer.
If the above Hilbert transform is bounded, another application of Proposition \ref{L:ftog}, along with the fact that we can always choose $\psi$ so that $\Re \psi > 0$ (Remark \ref{detailreg0}), yields $f = \varphi^{\psi}$ is bounded and outer.
\end{proof}
\begin{Example}
If $\{\lambda_n\} \subset (0, 1)$ is interpolating, we know from Theorem \ref{T:interpol-outer-inner} that any $\varphi \in N^{+}$ with
$$\varphi(\lambda_n) = \exp\Big(-\frac{2}{1 - \lambda_n}\Big), \quad n \geq 1$$ must have an inner factor. In fact, the obvious guess at an analytic function that interpolates this sequence, namely
$$\varphi(z) = \exp\Big(-\frac{2}{1 - z}\Big),$$ turns out to be a constant multiple of an inner function. Indeed, the singular inner function
$$\exp\Big(-\frac{1 + z}{1 - z}\Big)$$ can be written as
\begin{align*}
\exp\Big(-\frac{1 + z}{1 - z}\Big) & = \exp\Big(- \frac{2 - (1 - z)}{1 - z}\Big)\\
& = \exp\Big(-\frac{2}{1 - z}\Big) e.
\end{align*}
Thus $\varphi$ is a constant multiple of a singular inner function.
\end{Example}
\begin{Example}
Let
$$w_{n} = \exp\Big(-\frac{1}{1 - \lambda_n} \frac{1}{(\log\frac{100}{1 - \lambda_n})^2}\Big).$$
Here
$$h(t) = \frac{2}{t (\log(\frac{100}{t}))^3}, \quad 0 < t < 1,$$
is positive and decreasing on $[0, 1]$, and $h(|t|) \log^{+} h(|t|)$ belongs to $L^{1}[-1, 1]$. Thus $\{w_n\}$ can be interpolated with an outer function.
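For completeness, here is the computation behind this choice of $h$: the substitution $u = \log(100/t)$ gives
$$\int_{0}^{x} \frac{2\,dt}{t (\log(\frac{100}{t}))^3} = \frac{1}{(\log\frac{100}{x})^2}, \quad 0 < x < 1,$$
so that
$$\frac{1}{x} \int_{0}^{x} h(t)\,dt = \frac{1}{x (\log\frac{100}{x})^{2}},$$
which, at $x = 1 - \lambda_n$, equals $-\log w_n$.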
\end{Example}
\begin{Example}
Let
$$w_{n} = \exp\Big(-\frac{1}{(1 - \lambda_n)^{\alpha}}\Big),$$
where $0 <\alpha < 1$. In this case,
$$h(t) = \frac{1 - \alpha}{t^{\alpha}}$$ is positive, decreasing, and $h(|t|) \log^{+} h(|t|) \in L^1[-\pi, \pi]$. Thus, by the previous theorem, $\{w_n\}$ can be interpolated by an outer function. In fact, one can take $\varphi$ to be a bounded outer function. To see this, observe that $(1 - z)^{-\alpha} \in H^1$ and so
$$\varphi(z) = \exp\Big(-\frac{1}{(1 - z)^{\alpha}}\Big)$$ is outer (Lemma \ref{h1outern8}). Furthermore,
\begin{align*}
\frac{1}{1 - e^{i \theta}} &= \frac{e^{-i \theta/2}}{e^{- i \theta/2} - e^{i \theta/2}}\\
& = \frac{e^{- i \theta/2}}{-2 i \sin(\theta/2)}\\
& = \frac{1}{2 \sin(\theta/2)} e^{i \frac{\pi - \theta}{2}}.
\end{align*}
Thus, for $\theta \in [-\pi, \pi] \setminus \{0\}$,
$$\Re\big((1 - e^{i \theta})^{-\alpha}\big) = \frac{\cos\big(\tfrac{\alpha(\pi - |\theta|)}{2}\big)}{(2 \sin(|\theta|/2))^{\alpha}} \geq 2^{-\alpha} \cos(\pi \alpha/2),$$
and hence
$$|\varphi(e^{i \theta})| = \exp\Big({-}\Re\big((1 - e^{i \theta})^{-\alpha}\big)\Big) \leq e^{-2^{-\alpha} \cos(\pi \alpha/2)},$$
and so $\varphi$ is outer and $\varphi$ is bounded on $\mathbb{T}$. A result of Smirnov \cite[p.~28]{Duren} says that $\varphi \in H^{\infty}$.
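As a quick numerical sanity check of the boundary bound above (illustration only; the choice $\alpha = 1/2$ and the $\theta$ grid are arbitrary):
\begin{verbatim}
# Numerical sanity check of the boundary bound (illustration only).
import numpy as np

alpha = 0.5
theta = np.linspace(1e-3, np.pi, 100000)
phi_boundary = np.exp(-(1.0 - np.exp(1j * theta)) ** (-alpha))
bound = np.exp(-2.0 ** (-alpha) * np.cos(np.pi * alpha / 2.0))
print(np.abs(phi_boundary).max() <= bound)   # expect: True
\end{verbatim}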
If $0 < m \leq d_n \leq M < \infty$, one can also interpolate
$$w_n = \exp\Big(- d_n\frac{1}{(1 - \lambda_n)^{\alpha}}\Big)$$ with an outer function.
\end{Example}
\begin{Example}
If $\{\lambda_n\} \subset (0, 1)$ is interpolating and $\{d_n\}$ satisfies $0 < m \leq d_n \leq M < \infty$ for all $n \geq 1$, one can appeal to Proposition \ref{L:ftog} directly to interpolate
$$w_{n} = (1 - \lambda_n)^{d_n}$$ with a bounded outer function. Here $f = \varphi^{\psi}$, where $\varphi(z) = 1 - z$ (which clearly has bounded argument) and $\psi$ is the bounded outer function with $\Re \psi > 0$ and $\psi(\lambda_n) = d_n$ for all $n \geq 1$.
\end{Example}
\section{Common range}\label{Crrrsdsdsd546u}
In this section, we present an application of our outer interpolation results.
For $\varphi \in H^{\infty}$ let $T_{\overline{\varphi}}$ denote the co-analytic Toeplitz operator on the Hardy space $H^2$. By this we mean the operator $T_{\overline{\varphi}}: H^2 \to H^2$ defined by $T_{\overline{\varphi}} f = P_{+}(\overline{\varphi} f),$
where $P_{+}$ is the standard orthogonal projection of $L^2(\mathbb{T})$ onto $H^2$. See \cite[Ch.~4]{MR3526203} for the basics of Toeplitz operators.
Let
$$\mathscr{R}(H^2) := \bigcap \big\{ T_{\overline{\varphi} }H^2: \varphi \in H^{\infty} \setminus \{0\}\big\}$$
denote the {\em common range} of the (nonzero) co-analytic Toeplitz operators on $H^2$. A well-known result is the following:
\begin{Theorem}[McCarthy \cite{MR1065054}]\label{098ueiorge32}
$$\mathscr{R}(H^2) = \{f \in H^{\infty}: \widehat{f}(n) = O(e^{-c_f \sqrt{n}})\}.$$
\end{Theorem}
The above decay of the Fourier coefficients
$\widehat{f}(n)$
shows that $\{n^{K} \widehat{f}(n)\}$ is absolutely summable for all $K \geq 0$ and so functions in $\mathscr{R}(H^2)$ must be infinitely differentiable on $\overline{\mathbb{D}}$.
The Douglas factorization theorem \cite{MR203464} implies that $T_{\overline{\varphi}} H^2 = T_{\overline{\varphi_0}}H^2 $, where $\varphi_{0}$ is the outer part of $\varphi \in H^{\infty}$. Thus,
$$\mathscr{R}(H^2) = \bigcap\big\{ T_{\overline{\varphi} }H^2: \varphi \in H^{\infty} \cap \mathcal{O}\big\}.$$
Recall that $\mathcal{O}$ denotes the set of outer functions. Also important here is that $T_{\overline{\varphi}}$ is injective whenever $\varphi \in H^{\infty} \cap \mathcal{O}$.
What does this common range problem look like in model spaces? For an inner function $u$, the {\em model space}
$\mathcal{K}_u := (u H^2)^{\perp}$ is an invariant subspace for any co-analytic Toeplitz operator $T_{\overline{\varphi}}$, $\varphi \in H^{\infty}$. For this and other facts about model spaces used in this section, we refer the reader to \cite{MR3526203}.
For a fixed inner function $u$, what is
$$\mathscr{R}(\mathcal{K}_u) := \bigcap\big\{ T_{\overline{\varphi} } \mathcal{K}_u: \varphi \in H^{\infty} \cap \mathcal{O}\big\}?$$
Since $T_{\overline{\varphi}} \mathcal{K}_u \subset T_{\overline{\varphi}} H^2$ we have
$\mathscr{R}(\mathcal{K}_u) \subset \mathscr{R}(H^2)$ but the inclusion can be strict (see below). Furthermore, $\mathscr{R}(\mathcal{K}_u) \subset \mathcal{K}_u$ since $T_{\overline{\varphi}} \mathcal{K}_u \subset \mathcal{K}_u$ for all bounded outer $\varphi$.
\begin{Example}
If $u(z) = z^{N}$ then $\mathcal{K}_u = \mathscr{P}_{N - 1}$, the polynomials of degree at most $N - 1$. Since $\varphi$ is outer, $T_{\overline{\varphi}}$ is injective and so $T_{\overline{\varphi}} \mathscr{P}_{N - 1} = \mathscr{P}_{N - 1}.$ So in this case
$\mathscr{R}(\mathcal{K}_u) = \mathscr{P}_{N - 1}$.
\end{Example}
\begin{Example}\label{77tRRWO}
In a similar way, for a finite Blaschke product $u$ with distinct zeros $\lambda_1, \ldots, \lambda_n$ in $\mathbb{D}$, we have
$\mathcal{K}_u = \bigvee\{k_{\lambda_1}, \ldots, k_{\lambda_n}\},$
where
$k_{\lambda}(z) = \frac{1}{1 - \overline{\lambda} z}$ are the Cauchy kernels for $H^2$. It follows, using
\begin{equation}\label{5533388UgggGT}
T_{\overline{\varphi}} k_{\lambda_j} = \overline{\varphi(\lambda_j)} k_{\lambda_{j}},
\end{equation}
and $\varphi(\lambda_j) \not = 0$, that
$\mathscr{R}(\mathcal{K}_u) = \bigvee\{k_{\lambda_1}, \ldots, k_{\lambda_n}\}.$
\end{Example}
What are some inhabitants of $\mathscr{R}(\mathcal{K}_u)$ when $u$ is not a finite Blaschke product? If $\lambda$ is a zero of $u$ then $k_{\lambda} \in \mathcal{K}_u$ and so $k_{\lambda} \in \mathscr{R}(\mathcal{K}_u)$ as argued in Example \ref{77tRRWO}. It is more difficult to identify other elements of $\mathscr{R}(\mathcal{K}_u)$.
\begin{Remark}
Since $\mathscr{R}(\mathcal{K}_u) \subset \mathcal{K}_u$, the space $\mathscr{R}(\mathcal{K}_u)$ inherits the properties of functions in $\mathcal{K}_u$. For example, if
$$\Big\{\xi \in \mathbb{T}: \varliminf_{z \to \xi} |u(z)| = 0\Big\},$$
the boundary spectrum of $u$,
omits an arc $I$ of $\mathbb{T}$, then every function in $\mathcal{K}_u$ has an analytic continuation across $I$. Hence functions in $\mathscr{R}(\mathcal{K}_u)$ will also have this property.
\end{Remark}
\begin{Example}\label{nosmooth}
It is possible to produce a suitable singular inner function $u$ for which $\mathcal{K}_u$ contains no nonzero smooth functions \cite{MR2198372}. Since $\mathscr{R}(\mathcal{K}_u)$ is contained in the smooth functions (Theorem \ref{098ueiorge32}), it follows that $\mathscr{R}(\mathcal{K}_u) = \{0\}$.
\end{Example}
\begin{Theorem}\label{notusefule8shyshhH}
If $u$ is inner then
$$\mathscr{R}(\mathcal{K}_u) = \mathcal{K}_{u} \cap \mathscr{R}(H^2) = \{f \in \mathcal{K}_u: \widehat{f}(n) = O(e^{-c_f \sqrt{n}})\}.$$
\end{Theorem}
\begin{proof}
The containment $\subset$ is automatic. Now suppose $f \in \mathcal{K}_u \cap \mathscr{R}(H^2)$. Given any $\varphi \in H^{\infty} \cap \mathcal{O}$ there is a $g_{\varphi} \in H^2$ for which $f = T_{\overline{\varphi}} g_{\varphi}$. Since $f \in \mathcal{K}_u$ we have
$\langle f, u h\rangle = 0$ for all $h \in H^2$. Using the fact that $T_{\overline{\varphi}}^{*} = T_{\varphi}$ (which is just multiplication by $\varphi$), this implies
$$0 = \langle f, u h\rangle = \langle T_{\overline{\varphi}} g_{\varphi}, u h\rangle = \langle g_{\varphi}, \varphi u h\rangle, \quad h \in H^2.$$
The function $\varphi$ is outer and so $\{\varphi h: h \in H^2\}$ is dense in $H^2$ (Beurling's theorem \cite[p.~114]{Duren}). Thus $\langle g_{\varphi}, u k\rangle = 0$ for all $k \in H^2$ and so $g_{\varphi} \in \mathcal{K}_u$. Thus, $f \in \mathscr{R}(\mathcal{K}_u)$.
\end{proof}
Though Theorem \ref{notusefule8shyshhH} is a description of $\mathscr{R}(\mathcal{K}_u)$, it can be difficult to apply. Indeed, the precise contents of a model space are not always well understood and thus determining which of them have the right smoothness property can be quite challenging. In the next section we focus on a special class of inner functions $u$ where we better understand $\mathcal{K}_u$ as well as $\mathscr{R}(\mathcal{K}_u)$.
\section{Interpolating Blaschke products}
In Example \ref{77tRRWO} we computed $\mathscr{R}(\mathcal{K}_{B})$ when $B$ is a finite Blaschke product. In this section we extend our discussion to interpolating Blaschke products. Let
$$\kappa_{\lambda} : = \frac{k_{\lambda}}{\|k_{\lambda}\|} = \frac{\sqrt{1 - |\lambda|^2}}{1 - \overline{\lambda} z}, \quad \lambda \in \mathbb{D},$$
denote the normalized Cauchy kernel for $H^2$. This next proposition is a well-known fact about model spaces \cite[p.~277]{MR3526203}.
\begin{Proposition}\label{tyuhghdfsgh}
If $B$ is an interpolating Blaschke product with zeros $\{\lambda_{n}\}$, then $\{\kappa_{\lambda_n}\}$ is a Riesz basis for $\mathcal{K}_B$. Hence each $f \in \mathcal{K}_B$ has a unique representation as
$
f = \sum_{n \geq 1} a_n \kappa_{\lambda_n},
$
where $\{a_n\} \in \ell^2$, that is, $\sum_{n \geq 1} |a_n|^2 < \infty$. Conversely, any such linear combination belongs to $\mathcal{K}_B$.
\end{Proposition}
We now obtain a more tangible description of $\mathscr{R}(\mathcal{K}_B)$ than the one in Theorem \ref{notusefule8shyshhH}. We start with the following lemma.
\begin{Lemma} \label{L:separated-seq-2}
Let $\varphi$ be bounded and outer and $B$ be an interpolating Blaschke product with zeros $\{\lambda_n\}$. Then
\begin{equation}\label{LLKK8898ui342}
T_{\overline{\varphi}} \mathcal{K}_{B} = \left\{ \sum_{n = 1}^{\infty} b_n \kappa_{\lambda_n} : \sum_{n = 1}^{\infty} \frac{|b_n|^2}{|\varphi(\lambda_n)|^2} < \infty \right\}.
\end{equation}
\end{Lemma}
\begin{proof}
Suppose
$$\sum_{n = 1}^{\infty} \frac{|b_n|^2}{|\varphi(\lambda_n)|^2} < \infty.$$ Then
$$f =\sum_{n = 1}^{\infty} \frac{b_n}{\overline{\varphi(\lambda_n)}} \kappa_{\lambda_n} \in \mathcal{K}_B$$ (Proposition \ref{tyuhghdfsgh}) and, by \eqref{5533388UgggGT},
$$T_{\overline{\varphi}} f = \sum_{n = 1}^{\infty} \frac{b_n}{\overline{\varphi(\lambda_n)}} \overline{\varphi(\lambda_n)} \kappa_{\lambda_n} = \sum_{n = 1}^{\infty} b_n \kappa_{\lambda_n}.$$
Thus, $\sum_{n \geq 1} b_n \kappa_{\lambda_n} \in T_{\overline{\varphi}} \mathcal{K}_B.$
Conversely, suppose $g = T_{\overline{\varphi}} f$ for some $f \in \mathcal{K}_B$. Then
$f = \sum_{n \geq 1} a_n \kappa_{\lambda_n}$ for some unique $\{a_n\} \in \ell^2$. From Proposition \ref{tyuhghdfsgh}
$$g = T_{\overline{\varphi}} f = \sum_{n = 1}^{\infty} a_{n} \overline{\varphi(\lambda_n)} \kappa_{\lambda_n},$$ and so $g = \sum_{n \geq 1} b_n \kappa_{\lambda_n}$, where $b_n = a_{n} \overline{\varphi(\lambda_n)}$, with
$$\sum_{n = 1}^{\infty} \frac{|b_n|^2}{|\varphi(\lambda_n)|^2} = \sum_{n = 1}^{\infty} |a_n|^2 < \infty.$$
\end{proof}
Here is our description of $\mathscr{R}(\mathcal{K}_B)$.
\begin{Theorem}\label{Hdecereuiusvnn77}
Suppose $B$ is an interpolating Blaschke product with zeros $\{\lambda_n\} \subset (0, 1)$. For $\{a_n\} \in \ell^2$ and
$f = \sum_{n \geq 1} a_n \kappa_{\lambda_n}$, the following are equivalent:
\begin{enumerate}
\item[(i)] $f \in \mathscr{R}(\mathcal{K}_B)$;
\item[(ii)] The sum
$$\sum_{n = 1}^{\infty} \frac{|a_n|^2}{|\varphi(\lambda_n)|}$$
is finite for every bounded outer function $\varphi$.
\item[(iii)] The sum
$$\sum_{n = 1}^{\infty} |a_n|^2 \exp\Big(\frac{1}{1 - \lambda_n} \int_{0}^{1 - \lambda_n} h(t)\,dt\Big)$$
is finite
for every positive, decreasing, integrable function $h$ on $[0, 1]$.
\item[(iv)] $$\sum_{n = 1}^{\infty} a_n \sqrt{1 - |\lambda_n|^2} \lambda_{n}^{N} = O(e^{-c_f \sqrt{N}}), \quad N \to \infty.$$
\end{enumerate}
\end{Theorem}
\begin{proof}
The proof of (i) $\iff$ (ii) follows from Lemma \ref{L:separated-seq-2}. The proof of (ii) $\iff$ (iii) follows from Theorem \ref{growthragfetd6dx6}. For the proof that (i) $\iff$ (iv), note that
$$f^{(N)}(z) = N! \sum_{n = 1}^{\infty} a_n \sqrt{1 - |\lambda_n|^2} \frac{\lambda_{n}^{N}}{(1 - \lambda_n z)^{N + 1}}$$
and thus
$$\widehat{f}(N) = \frac{f^{(N)}(0)}{N!} = \sum_{n = 1}^{\infty} a_n \sqrt{1 - |\lambda_n|^2} \lambda_{n}^{N}.$$
Now apply Theorem \ref{notusefule8shyshhH}.
\end{proof}
To obtain a rich class of functions in $\mathscr{R}(\mathcal{K}_B)$, beyond the obvious finite linear combinations of the $\kappa_{\lambda_n}$, recall that Lemma \ref{L:growth-outer} says that
$$- (1 - \lambda_n) \log |\varphi(\lambda_n)| \to 0, \quad \varphi \in H^{\infty} \cap \mathcal{O}.$$
Thus, for each fixed $c > 0$ and each $\varphi \in H^{\infty} \cap \mathcal{O}$,
$$\frac{1}{|\varphi(\lambda_n)|^2} \leq \exp\Big(\frac{c}{1 - \lambda_n}\Big)$$
for all sufficiently large $n$. Hence, if $\{a_n\}$ satisfies
$$\sum_{n = 1}^{\infty} |a_n|^2 \exp\Big(\frac{c}{1 - \lambda_n}\Big) < \infty$$
for some $c > 0$, then Lemma \ref{L:separated-seq-2} shows that $f = \sum_{n \geq 1} a_n \kappa_{\lambda_n} \in \mathscr{R}(\mathcal{K}_B)$. In other words,
$$\bigcup_{c > 0} \Big\{\sum_{n = 1}^{\infty} a_{n} \kappa_{\lambda_n}: \sum_{n = 1}^{\infty} |a_n|^2 \exp\big(\tfrac{c}{1 - \lambda_n}\big) < \infty\Big\} \subset \mathscr{R}(\mathcal{K}_B).$$
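As a numerical illustration of the size of this class, the following sketch uses the hypothetical choices $\lambda_n = 1 - 2^{-n}$ and $a_n = e^{-c/(1 - \lambda_n)}$ (which satisfy the summability condition above) and computes the Taylor coefficients $\widehat{f}(N) = \sum_{n} a_n \sqrt{1 - \lambda_n^2}\, \lambda_n^{N}$ of the resulting $f$; the printed ratios settle roughly near $2\sqrt{c}$, consistent with the decay $O(e^{-c_f \sqrt{N}})$ demanded by Theorem \ref{098ueiorge32}.
\begin{verbatim}
# Illustration only: Taylor-coefficient decay for f = sum_n a_n kappa_{lambda_n}
# with lambda_n = 1 - 2^{-n} and a_n = exp(-c/(1 - lambda_n)).
import numpy as np

c = 1.0
n = np.arange(1, 51, dtype=float)
lam = 1.0 - 2.0 ** (-n)
a = np.exp(-c / (1.0 - lam))

for N in (100, 400, 1600, 6400):
    fhat = np.sum(a * np.sqrt(1.0 - lam ** 2) * lam ** N)
    print(N, -np.log(fhat) / np.sqrt(N))   # roughly 2*sqrt(c)
\end{verbatim}
Heuristically, the $n$th term is about $\exp(-c/\varepsilon_n - N \varepsilon_n)$ with $\varepsilon_n = 1 - \lambda_n$, and minimising over $\varepsilon_n$ gives $\widehat{f}(N) \approx e^{-2\sqrt{cN}}$.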
\section{Correct definition of the common range?}
When $\varphi \in H^{\infty} \setminus \{0\}$ it is easy to show that $T_{\overline{\varphi}} H^2$ is dense in $H^2$. Thus
$$\bigcap\big\{ T_{\overline{\varphi}}H^2: \varphi \in H^{\infty} \setminus \{0\}\big\},$$
the common range of the non-zero co-analytic Toeplitz operators, is meaningful. It just so happens, through the Douglas factorization theorem mentioned earlier, that $T_{\overline{\varphi}} H^2 = T_{\overline{\varphi_0}} H^2$, where $\varphi_0$ is the outer factor of $\varphi$ and thus
$$\bigcap\big\{ T_{\overline{\varphi}}H^2: \varphi \in H^{\infty} \setminus \{0\}\big\} = \bigcap\big\{ T_{\overline{\varphi}}H^2: \varphi \in H^{\infty} \cap \mathcal{O}\big\}.$$
It made sense to us to define the common range of the co-analytic Toeplitz operators in the model space $\mathcal{K}_u$ as
$$\bigcap\big\{ T_{\overline{\varphi}} \mathcal{K}_{u}: \varphi \in H^{\infty} \cap \mathcal{O}\big\}.$$
Of course the intersection
$$\bigcap\big\{T_{\overline{\varphi}}\mathcal{K}_u: \varphi \in H^{\infty} \setminus \{0\}\big\} = \{0\}$$ since
$\ker T_{\overline{u}} = \mathcal{K}_u$ (so that $T_{\overline{u}} \mathcal{K}_u = \{0\}$), and hence some further restriction on the intersection is needed.
One might wonder if the ``correct'' definition of the common range in the model space should be
$$\bigcap\{ T_{\overline{\varphi}} \mathcal{K}_u: \varphi \in \mathcal{F}\big\},$$
where $\mathcal{F}$ is the set of $\varphi \in H^{\infty}$ such that $T_{\overline{\varphi}} \mathcal{K}_u$ is dense in $\mathcal{K}_u$. One could make a case for this definition. However, the resulting common range may not be all that interesting.
For example, if $B$ is an interpolating Blaschke product with zeros $\{\lambda_n\}$, the fact that $\{\kappa_{\lambda_n}\}$ is minimal, in fact uniformly minimal \cite[p.~277]{MR3526203}, shows that $T_{\overline{\varphi}} \mathcal{K}_B$ is dense in $\mathcal{K}_B$ if and only if $\varphi(\lambda_n) \not = 0$ for all $n$. From \eqref{LLKK8898ui342} we have
$$T_{\overline{\varphi}} \mathcal{K}_{B} = \left\{ \sum_{n = 1}^{\infty} b_n \kappa_{\lambda_n} : \sum_{n = 1}^{\infty} \frac{|b_n|^2}{|\varphi(\lambda_n)|^2} < \infty \right\}.$$
Thus, if $\varphi \in H^{\infty}$ interpolates the nonzero values of $b_n$ at the corresponding $\lambda_n$, and nonzero values at the remaining $\lambda_n$ (which can be done via Carleson's theorem), the quantity
$$ \sum_{n = 1}^{\infty} \frac{|b_n|^2}{|\varphi(\lambda_n)|^2} $$ will be infinite whenever infinitely many of the $b_n$ are nonzero. Hence $\mathcal{F}$ will consist of the $\varphi \in H^{\infty}$ such that $\varphi(\lambda_n) \not = 0$ for all $n$, and
$$\bigcap\{ T_{\overline{\varphi}} \mathcal{K}_B: \varphi \in \mathcal{F}\big\}$$
will just be the finite linear combinations of the $\kappa_{\lambda_n}$.
\begin{comment}
The above two corollaries are not all that useful since they are difficult to check. What we desire is a better condition on the coefficients $a_n$.
Certainly $\mathscr{R}(\mathcal{K}_B)$ contains
$$f = \sum_{n \geq 1} a_n \kappa_{\lambda_n}$$ for any finitely supported sequence $a_n$.
\begin{Proposition}\label{wofdiscx}
Let $B$ be the Blaschke product formed with the uniformly separated sequence $\{\lambda_n\}_{n \geq 1}$. Then
$$\mathscr{R}(\mathcal{K}_{B}) \supseteq \bigcup_{c > 0} \left\{ \sum_{n \geq 1} a_n \kappa_{\lambda_n} : \sum_{n \geq 1} \exp\left( \frac{c}{1-|\lambda_n|} \right) |a_n|^2< \infty \right\}.$$
\end{Proposition}
\begin{proof}
Suppose that
$$f = \sum_{n \geq 1} a_n \kappa_{\lambda_n}$$ so that
$$\sum_{n \geq 1} |a_n|^2 \exp\Big(\frac{c}{1 - |\lambda_n|}\Big) < \infty$$
for some $c > 0$.
If $\varphi \in H^{\infty}$ and outer, apply Lemma \ref{L:growth-outer} to obtain an $N$ so that for all $n \geq N$ we have
$$\frac{1}{|\varphi(\lambda_n)|^2} \leq \exp\Big(\frac{c}{1 - |\lambda_n|}\Big), \quad n \geq N.$$
Thus
$$\sum_{n \geq N} \frac{|a_n|^2}{|\varphi(\lambda_n)|^2} \leq \sum_{n \geq N} |a_n|^2 \exp\Big(\frac{c}{1 - |\lambda_n|}\Big) < \infty.$$
Apply Lemma \ref{L:separated-seq-2} to complete the proof.
\end{proof}
Of course this leads us to the following conjecture.
\begin{Conjecture}\label{CONju787687weur}
Let $B$ be the Blaschke product formed with the uniformly separated sequence $\{\lambda_n\}_{n \geq 1}$. Then
$$\mathscr{R}(\mathcal{K}_{B}) = \bigcup_{c > 0} \left\{ \sum_{n \geq 1} a_n \kappa_{\lambda_n} : c>0, \, \sum_{n \geq 1} \exp\left( \frac{c}{1-|\lambda_n|} \right) |a_n|^2< \infty \right\}.$$
\end{Conjecture}
Certainly $\mathscr{R}(\mathcal{K}_{B})$ contains $\sum_{n} a_n \kappa_{\lambda_n}$ for finitely supported sequences $a_n$. Here is an example of an infinitely supported sequence.
\begin{Example}
Let $\alpha > 1$ and assume $\lambda_n \in (0, 1)$ and define
$$\varphi(z) = \exp\Big(-\frac{1}{(1 - z) \log^{\alpha} (1 - z)^{-1}}\Big).$$ Then
$$\frac{1}{|\varphi(\lambda_n)|} \leq \exp(\frac{1}{(1 - \lambda_n) \log^{\alpha} (1 - \lambda_n)^{-1}}).$$
Thus
$$\bigcap_{\alpha > 1} \Big\{\sum_{n} a_{n} \kappa_{\lambda_n}: \sum_{n} |a_n|^2 \exp(\frac{1}{(1 - \lambda_n) \log^{\alpha}(1 - \lambda_n)^{-1}}) < \infty\Big\} \subset \mathscr{R}_{B}.$$
\end{Example}
\end{comment}
\begin{comment}
\section{A sufficient condition}
$I$ is an arc around $z/|z|$.
\begin{align*}
\log|1/\varphi(z)| &= \frac{1}{2\pi} \int_{0}^{2\pi} \frac{1-|z|^2}{|e^{i\theta} -z|^2} \, \log|1/\varphi(e^{i\theta})| \, d\theta\\
&= \frac{1}{2\pi} \left(\int_{I} + \int_{\mathbb{T}\setminus I} \right)\frac{1-|z|^2}{|e^{i\theta} -z|^2} \, \log|1/\varphi(e^{i\theta})| \, d\theta\\
&\sim \frac{c}{1-|z|} \int_{I} \log|1/\varphi(e^{i\theta})| \, d\theta+ O(1-|z|).
\end{align*}
Thus for each fixed $\varepsilon>0$,
\[
\frac{1}{|\varphi(z)|} \leq \exp\bigg( \frac{\varepsilon}{1-|z|} \bigg), \quad |z| \to 1.
\]
By Proposition \ref{wofdiscx} we know that
$$\mathscr{R}(\mathcal{K}_{B}) \supseteq \left\{ \sum_{n \geq 1} a_n \kappa_{\lambda_n} : \forall c>0, \, \sum_{n \geq 1} \exp\left( \frac{c}{1-|\lambda_n|} \right) |a_n|^2< \infty \right\}.$$
By considering outer functions of the type
$$\log1/|\varphi| = 1/(|t| \log^{1+\delta} |t|),$$ it seems that we can show the condition
\[
\sum_{n \geq 1} \exp\left(\frac{1}{(1-|\lambda_n|) \, \log^{\delta} 1/(1-|\lambda_n|)}\right) |a_n|^2< \infty, \qquad \forall\delta>0,
\]
is necessary. Certainly, if $\lambda_n$ converge to one point, this condition is necessary.
\section{A construction}
\begin{Lemma} \label{L:conv-out}
Let $(z_n)_{n \geq 1}$ be a Blaschke sequence, and let $(\alpha_n)_{n \geq 1}$ be a sequence on $\mathbb{T}$ such that
\begin{equation} \label{E:cond-bp-t}
\sum_{n=1}^{\infty} |\alpha_n-z_n| < \infty.
\end{equation}
Then
\begin{equation} \label{E:def-h-outer-gen}
h(z) = \prod_{n=1}^{\infty} \frac{1-\bar{\alpha}_n z}{1-\bar{z}_n z}, \qquad (z \in \mathbb{D}),
\end{equation}
is a well-defined outer function on $\mathbb{D}$. Moreover, $h \in H^\infty$ provided that
\begin{equation} \label{E:cond-bp-t-pr}
\prod_{n=1}^{\infty} \frac{2|\alpha_n-z_n|}{1-|z_n|^2} < \infty.
\end{equation}
\end{Lemma}
\begin{proof}
Each factor in \eqref{E:def-h-outer-gen} is an outer function. Hence, we just need to show that the product is convergent. Since
\[
1-\left| \frac{1-\bar{\alpha}_n z}{1-\bar{z}_n z} \right|^2 =
\frac{2\Re \{(\bar \alpha_n- \bar z_n)z\} - (1-|z_n|^2)|z|^2}{|1-\bar{z}_n z|^2},
\]
on the disc $\{|z| \leq R<1\}$ we have
\[
1-\left| \frac{1-\bar{\alpha}_n z}{1-\bar{z}_n z} \right|^2 \leq
\frac{2R}{(1-R)^2} |\alpha_n-z_n| + \frac{2R^2}{(1-R)^2} (1-|z_n|).
\]
Therefore, by \eqref{E:cond-bp-t} and that $(z_n)_{n \geq 1}$ is a Blaschke sequence, the product in \eqref{E:def-h-outer-gen} is convergent on $\mathbb{D}$.
To verify the boundedness, write
\[
w = \frac{1-z}{1-\mu z}
\]
as
\[
\frac{1}{w} = (1-\mu) \left( \frac{1}{1-z} + \frac{\mu}{1-\mu}\right).
\]
Since $z \longmapsto1/(1-z)$ maps $\mathbb{D}$ into the half plane $\Re z > 1/2$, we deduce that the right hand side of the above identity maps $\mathbb{D}$ into a half plane whose shortest distance to the origin is
\[
|1-\mu| \left( \frac{1}{2} + \Re \left\{\frac{\mu}{1-\mu} \right\} \right) = \frac{|1-\mu|}{2} \,\, \Re \left\{\frac{1+\mu}{1-\mu} \right\} = \frac{1-|\mu|^2}{2|1-\mu|}.
\]
Therefore,
\[
\|w\|_{H^\infty} = 2\frac{|1-\mu|}{1-|\mu|^2}.
\]
A slight generalization of this formula gives
\begin{equation} \label{E:form-norm-ifnty}
\left\| \frac{1-\bar{\alpha}_n z}{1-\bar{z}_n z} \right\|_{H^\infty} = 2\frac{|\alpha_n-z_n|}{1-|z_n|^2}
\end{equation}
and then the result follows.
\end{proof}
Note that \eqref{E:cond-bp-t} is a condition about the argument of $\alpha_n$. In fact, writing $z_n=r_ne^{i\theta_n}$ and $\alpha_n=e^{i\vartheta_n}$, we have
\[
|\alpha_n-z_n| = |e^{i\vartheta_n}-r_ne^{i\theta_n}| = |e^{i(\vartheta_n-\theta_n)}-r_n| \asymp (1-r_n) + |\vartheta_n-\theta_n|.
\]
Hence, \eqref{E:cond-bp-t} holds if and only if
\begin{equation} \label{E:cond-bp-tt}
\sum_{n=1}^{\infty} |\vartheta_n-\theta_n| < \infty.
\end{equation}
Hence, we obtain the following special, but important case.
\begin{Corollary} \label{LC:conv-out}
Let $(r_ne^{i\theta_n})_{n \geq 1}$ be any Blaschke sequence in $\mathbb{D}$. Then
\[
h(z) = \prod_{n=1}^{\infty} \frac{1- e^{-i\theta_n} z}{1-r_n e^{-i\theta_n}z}, \qquad (z \in \mathbb{D}),
\]
is a bounded outer function on $\mathbb{D}$.
\end{Corollary}
\begin{proof}
In this case, the argument of $\alpha_n = e^{i\theta_n}$ is precisely the same as the argument of $z_n = r_n e^{i\theta_n}$, and thus \eqref{E:cond-bp-tt} holds. This implies that \eqref{E:cond-bp-t} is also fulfilled. Moreover, the condition \eqref{E:cond-bp-t-pr} simplifies to
\[
\prod_{n=1}^{\infty} \frac{1+r_n}{2} >0.
\]
But, thanks to the Blaschke condition and that
\[
\frac{1+r_n}{2} = 1- \left( \frac{1-r_n}{2} \right),
\]
we see that \eqref{E:cond-bp-t-pr} is also fulfilled. Thus, by Lemma \ref{L:conv-out}, $h$ is a bounded outer function on $\mathbb{D}$.
\end{proof}
\textcolor[rgb]{1.00,0.00,0.00}{Next Step:} I try to see if
\[
h(z) = \prod_{n=1}^{\infty} \left( \frac{1-\bar{\alpha}_n z}{1-\bar{z}_n z} \right)^{s_n}, \qquad (z \in \mathbb{D}),
\]
also works. Of course, with some restrictions on $s_n>0$ it works. The tricky part is to adjust them so to have
\[
|h(z_n)| \leq \exp\left( -\frac{1}{1-|z_n|} \right)
\]
\section{A new rough idea}
Without loss of generality we assume that $|\varphi|\leq 1$.
$I_z$ is the interval of length $1-|z|$ around $z/|z|$.
\begin{eqnarray*}
\log|1/\varphi(z)| &=& \frac{1}{2\pi} \int_{0}^{2\pi} \frac{1-|z|^2}{|e^{i\theta} -z|^2} \, \log|1/\varphi(e^{i\theta})| \, d\theta\\
&=& \frac{1}{2\pi} \left(\int_{I_z} + \int_{\mathbb{T}\setminus I_z} \right)\frac{1-|z|^2}{|e^{i\theta} -z|^2} \, \log|1/\varphi(e^{i\theta})| \, d\theta\\
&\sim& \frac{1}{1-|z|} \int_{I_z} \log|1/\varphi(e^{i\theta})| \, d\theta+ O(1).
\end{eqnarray*}
(The above estimation is rough. but we should be able to make it exact.)
Thus,
\[
\frac{1}{|\varphi(z)|} \sim \exp\bigg( \frac{1}{1-|z|} \int_{I_z} \log|1/\varphi(e^{i\theta})| \, d\theta \bigg), \qquad (z \in \mathbb{D}).
\]
For $h \in L^1(\mathbb{T})$, $h \geq 0$, we define
\[
\tilde{h}(z) := \frac{1}{1-|z|} \int_{I_z} h(e^{i\theta}) \, d\theta, \qquad (z \in \mathbb{D}).
\]
\begin{Conjecture}
Let $B$ be the Blaschke product formed with the uniformly separated sequence $(z_n)_{n \geq 1}$ in $\mathbb{D}$. Then
\[
\mathscr{R}(\mathcal{K}_B) = \left\{ \sum_{n \geq 1} a_n \kappa_{z_n} : \forall h \in L^1(\mathbb{T}), \, h \geq 0, \, \sum_{n \geq 1} e^{\tilde{h}(z_n)} |a_n|^2< \infty \right\}.
\]
\end{Conjecture}
\section{A special case}
In a paper of Taylor and Williams \cite{MR0283176} there is the following construction. Suppose $\{z_n\}_{n \geq 1} \subset \mathbb{D}$ is such that the function
$$\rho: \overline{\mathbb{D}} \to \{x \in \mathbb{R}: x \geq 0\}, \quad \rho(z) = \operatorname{dist}(z, \{z_{n}\}_{n \geq 1}),$$
satisfies
$$\int_{\mathbb{T}} \log \rho(\xi) dm(\xi) > -\infty.$$
A construction in the Taylor and Williams paper (along the way to proving that $\{z_n\}_{n \geq 1}$ can be the zeros of an analytic function on the disk that is smooth up to the boundary), produces an integrable function $w: \mathbb{T} \to \{x \in \mathbb{R}: x \geq 0\}$ such that the analytic function
$$G(z) = \int_{\mathbb{T}} \frac{\xi + z}{\xi - z} w(\xi) dm(\xi)$$ has the property that
$$|G(z)| \leq \frac{C}{\rho(z)^{r}}, \quad z \in \mathbb{D},$$ for some positive constants $C$ and $r$ independent of $z$. Also notice that $\Re G(z) \geq 0$ for all $z \in \mathbb{D}$.
Use this $G$ to define
$$\varphi(z) = e^{-G(z)}, \quad z \in \mathbb{D},$$ and observe that
\begin{align*}
|\varphi(z)| & = |e^{-G(z)}|\\
& = e^{- \Re G(z)}\\
& \geq e^{-|G(z)|}\\
& \geq e^{-\frac{C}{\rho(z)^{r}}}.
\end{align*}
Of course this is not what we want since the RHS of the above is zero when $z = z_n$. But perhaps this is something to work with or will give us a good idea?
Let's try this idea. Start with the zeros $\{\lambda_n\}$ of an IBP (an exponential sequence for example). Now pick a sequence $\{z_n\}$ of ``nearby points'' (TBD) to the $\lambda_n$ so that
$$\rho(z) = \operatorname{dist}(z, \{z_n\})$$ is log-integrable. The construction in the above yields a $\varphi$ such that
$$|\varphi(z)| \geq e^{-\frac{C}{\rho(z)^r}}, \quad z \in \mathbb{D}$$ for some $C$ and $r$ depending on the sequence $\{z_n\}$. Then
$$\exp\Big(\frac{1}{1 - |\lambda_n|}\Big)|\varphi(\lambda_n)| \geq \exp(\frac{1}{1 - |\lambda_n|} - \frac{C}{\rho(\lambda_n)^r}) $$ Now we just need to arrange things so that
$$\frac{1}{1 - |\lambda_n|} - \frac{C}{\rho(\lambda_n)^r} \geq \delta, \quad n \geq 1.$$
This seems entirely doable since we are not, as in the previous discussion, choosing the $z_n$ to be equal to the $\lambda_n$ which would make the above quantity equal to $-\infty$.
Here is another observation we might be able to exploit. Given the $\lambda_n$ choose a sequence $z_n$ so that
$$\rho(\lambda_n) = 2 (1 - |\lambda_n|).$$ Maybe this can be done by choosing the $z_n$ so that they lie ``in between'' the $\lambda_n$. Then with this $\rho$ construct the $w$ so that
$$|G(z)| \leq \frac{C}{\rho(z)^r}, \quad z \in \mathbb{D}.$$ The form of the function $G$ above says that $\Re G > 0$ and so $G$ has no zeros in $\mathbb{D}$. Maybe we can use this to do something like replace $G$ with $G^{1/r}$ to help get rid of the $r$. Then we can take the resulting $\varphi$ to some appropriate power to get rid of the other constant and so the resulting function $\varphi$ will satisfy
$$|\varphi(z)| \geq e^{-1/\rho(z)}.$$ This should give us the desired estimate for $|\varphi(\lambda_n)|$.
Of course we could explicitly work through the Taylor and Williams paper to see the real source of the $C$ and $r$. Maybe they can be taken to both be equal to $1$?
\section{A fruitful idea involving topology}
A key tool for McCarthy's classification of the common range of the co-analytic Toeplitz operators on $H^2$ \cite{MR1065054, MR1152465} is the classification of the dual space of
$$N^{+} := \{\frac{f}{g}: f, g \in H^{\infty}, g\, \mbox{outer}\},$$
the Smirnov class. For each bounded outer function, he defines $H^{2}(|h|^2)$ to be the closure of the polynomials in $L^2(|h|^2 d \theta)$. Using some standard theory he shows that
$$H^{2}(|h|^2) = \frac{H^2}{h}$$ and that
$$N^{+} = \bigcup\{H^2(|h|^2): \mbox{$h$ is outer}\}$$ and thus can be endowed with some sort of inductive limit topology (which I don't quite understand). He then proves that, for $f \in H^2$,
$$f \in (N^{+})^{*} \iff f \in \mathscr{R}(H^2).$$
Let's try a similar thing with the model spaces $\mathcal{K}_{u}$. Here we can define
$$N^{+}(\mathcal{K}_u) = \{\frac{k}{h}: k \in \mathcal{K}_u, \mbox{$h \in H^{\infty}$ and outer}\}$$ and observe that
this is equal to
$$\bigcup\{\frac{\mathcal{K}_u}{h}: \mbox{$h$ is outer}\}.$$
This allows us to think about $N^{+}(\mathcal{K}_u)$ as a subspace of $N^{+}$ since each $\mathcal{K}_u/h$ is a subspace of $H^2/h = H^{2}(|h|^2)$. Here is the analog of an argument from McCarthy's paper.
\begin{Proposition}
For a $f \in \mathcal{K}_u$ the following are equivalent:
\begin{enumerate}
\item $f \in (N^{+}(\mathcal{K}_u))^{*}$;
\item $f \in \mathscr{R}(\mathcal{K}_u)$.
\end{enumerate}
\end{Proposition}
\begin{proof}
I'm attempting to follow McCarthy's argument. Note that $f \in (N^{+}(\mathcal{K}_u))^{*}$ if and only if $f \in (\mathcal{K}_u/h)^{*}$ for all outer $h$, which means that
$$|\int_{\mathbb{T}} p \overline{f} d \theta|^2 \leq C_{h} \int_{\mathbb{T}} |p|^2 |h|^2 d \theta, \quad p \in \mathcal{K}_u \cap H^{\infty}.$$ This in turn implies that there is a $g \in \mathcal{K}_u/h$ so that
$$\int_{\mathbb{T}} \overline{f} p d \theta = \int_{\mathbb{T}} p \overline{g} |h|^2 d \theta, \quad p \in \mathcal{K}_u \cap H^{\infty}.$$ Call $g = k/h$, where $k \in \mathcal{K}_u$. Then
$$\int_{\mathbb{T}} \overline{f} p \, d\theta = \int_{\mathbb{T}} p \overline{k} h \, d \theta,$$ which means that $f = T_{\overline{h}} k$ and so $f \in \mathscr{R}(\mathcal{K}_u)$.
\end{proof}
Though that proof was a bit vague, I'm pretty sure it can be tightened up. But I think this is the correct analog of the McCarthy result for $H^2$. It is from here that McCarthy then shows, after a long discussion of the inductive limit topology on $N^{+}$, that $f \in (N^{+})^{*}$ precisely when $\widehat{f}(n) = O(e^{-c_f \sqrt{n}})$. I wonder if something can be done along these lines. Perhaps a growth rate involving the inner function $u$. Keep in mind here, as discussed earlier in these notes, that there are cases, depending on the inner function $u$, when $\mathscr{R}(\mathcal{K}_u) = \{0\}$.
\end{comment}
\bibliographystyle{plain}
\section*{Acknowledgements}
\input{acknowledgements/Acknowledgements}
\printbibliography
\clearpage
\input{atlas_authlist}
\end{document}
\section{Systematic uncertainties}\label{sec:systematics}
Although statistical uncertainties dominate the sensitivity of this analysis given the small number of events, care is taken to make the best possible estimates of all systematic uncertainties, as described in more detail below.
\subsection{Theoretical uncertainties}
Theoretical uncertainties in the production \xs of single Higgs bosons are estimated by varying the renormalisation and factorisation scales.
In addition, uncertainties due to the PDF and the running of the QCD coupling constant ($\alphas$) are considered.
The scale uncertainties reach a maximum of $^{+20\%}_{-24\%}$ and the PDF+$\alphas$ uncertainty is not more than $\pm$3.6\%~\cite{deFlorian:2016spz}.
An uncertainty in the rate of Higgs boson production with associated heavy-flavour jets is also considered.
A 100\% uncertainty is assigned to the \ggH and \WH production modes, motivated by studies of heavy-flavour production in association with top-quark pairs~\cite{TOPQ-2012-16} and $W$ boson production in association with $b$-jets~\cite{STDM-2012-11}.
No heavy-flavour uncertainty is assigned to the \ZH and \ttH production modes, where the dominant heavy-flavour contribution is already accounted for in the LO process.
Finally, additional theoretical uncertainties in single Higgs boson production from uncertainties in the \hyy and \hbb branching fractions are $^{+2.9\%}_{-2.8\%}$ and $\pm$1.7\%, respectively~\cite{deFlorian:2016spz}.
The same sources of uncertainty are considered on the SM \hh signal samples.
The effect of scale and PDF+$\alphas$ uncertainties on the NNLO \xs for SM Higgs boson pair production are 4--8\% and 2--3\% respectively.
In addition, an uncertainty of 5\% arising from the use of infinite top-quark mass in the EFT approximation is taken into account~\cite{Borowka:2016ehy}.
The \xs, scale and PDF uncertainties are decorrelated between the single Higgs boson background and SM \hh signal.
In the search for resonant Higgs boson pair production, uncertainties arising from scale and PDF variations, which primarily affect the signal yield, are neglected.
For this search, the SM non-resonant \hh production is considered as a background, with an overall uncertainty in the \xs of $^{+7\%}_{-8\%}$.
Interference between SM \hh and the BSM signal is neglected.
For all samples, systematic differences between alternative models of parton showering and hadronisation were considered and found to have a negligible impact.
\subsection{Experimental uncertainties}
The systematic uncertainty in the integrated luminosity for the data in this analysis is 2.1\%.
It is derived following a methodology similar to that detailed in Ref.~\cite{DAPR-2013-01}, using beam-separation scans performed in 2015 and 2016.
The efficiency of the diphoton trigger is measured using bootstrap methods \cite{TRIG-2016-01}, and is found to be 99.4\% with a systematic uncertainty of 0.4\%.
Uncertainties associated with the vertex selection algorithm have a negligible impact on the signal selection efficiency.
Differences between data and simulation give rise to uncertainties in the calibration of the photons and jets used in this analysis.
As the continuum backgrounds are estimated from data, these uncertainties are applied only to the signal processes and to the single-Higgs-boson background process.
Experimental uncertainties are propagated through the full analysis procedure, including the kinematic and BDT selections.
The relevant observables are then constructed, before signal and background fits are performed as described in \Sect{\ref{sec:modelling}}.
Changes in the peak location ($m_\mathrm{peak}$), width ($\sigma_\mathrm{peak}$) and expected yield in \myy (\myyjj) for the non-resonant (resonant) model, relative to the nominal fits, are extracted.
The tail parameters are kept at their nominal values in these modified fits.
For the resonant analysis, systematic uncertainties are evaluated for each \mX and the maximum across the range is taken as a conservative uncertainty.
The dominant yield uncertainties are listed in \Tab{\ref{tab:systematic_uncertainties}}.
Uncertainties in the photon identification and isolation directly affect the diphoton selection efficiency; jet energy scale and resolution uncertainties affect the $m_{bb}$ window acceptance \cite{ATL-PHYS-PUB-2015-015,PERF-2012-01,PERF-2016-04}, while flavour-tagging uncertainties lead to migration of events between categories.
Uncertainties in the peak location (width), which are mainly due to uncertainties in the photon energy scale (energy resolution), are about 0.2--0.6\% (5--14\%) for both the single-Higgs-boson and Higgs boson pair samples in the resonant and non-resonant analyses.
The spurious signal for the chosen background model, as defined in \Sects{\ref{subsec:bg_modelling_nonres}}{\ref{subsec:bg_modelling_res}}, is assessed as an additional uncertainty in the total number of signal events in each category.
In the 2-tag (1-tag) category, the uncertainty corresponds to 0.63 (0.25) events for the non-resonant analysis, 0.58 (2.06) events for the resonant analysis with the loose selection, and 0.21 (0.89) events for the resonant analysis with the tight selection.
Finally, as described in \Sect{\ref{subsec:bg_modelling_res}}, an \mX-dependent correction to the signal \xs, together with its associated uncertainty, is applied in the case of the resonant analysis at low masses to adjust for a small degeneracy bias.
\begin{table}[!htb]
\begin{center}
\caption{Summary of dominant systematic uncertainties affecting expected yields in the resonant and non-resonant analyses.
For the non-resonant analysis, uncertainties in the Higgs boson pair signal and SM single-Higgs-boson backgrounds are presented.
For the resonant analysis, uncertainties on the Higgs boson pair signal for the loose and tight selections are presented.
Sources marked `-' and other sources not listed in the table are negligible by comparison.
No systematic uncertainties related to the continuum background are considered, since this is derived through a fit to the observed data.}
\label{tab:systematic_uncertainties}
\resizebox{\textwidth}{!}
{\normalsize
\begin{tabular}{l l r@{} @{}D{.}{.}{1.1} r@{} @{}r@{} @{}D{.}{.}{1.1}@{} @{}l r@{} @{}D{.}{.}{1.1} r@{} @{}r@{} @{}D{.}{.}{2.1}@{} @{}l r@{} @{}D{.}{.}{1.1} r@{} @{}r@{} @{}D{.}{.}{1.1}@{} @{}l r@{} D{.}{.}{1.1} r@{} @{}r@{} @{}D{.}{.}{1.1}@{} @{}l}
\toprule
\multicolumn{2}{c}{\multirow{2}{*}{Source of systematic uncertainty}} & \multicolumn{24}{c}{\% effect relative to nominal in the 2-tag (1-tag) category} \\
& & \multicolumn{12}{c}{Non-resonant analysis} & \multicolumn{12}{c}{Resonant analysis: BSM \hh} \\
\midrule
& & \multicolumn{6}{c}{SM \hh signal} & \multicolumn{6}{c}{Single-$H$ bkg} & \multicolumn{6}{c}{Loose selection} & \multicolumn{6}{c}{Tight selection} \\
\midrule
\multicolumn{2}{l}{Luminosity} & $\pm$ & 2.1 & ( & $\pm$ & 2.1 & ) & $\pm$ & 2.1 & ( & $\pm$ & 2.1 & ) & $\pm$ & 2.1 & ( & $\pm$ & 2.1 & ) & $\pm$ & 2.1 & ( & $\pm$ & 2.1 & ) \\
\multicolumn{2}{l}{Trigger} & $\pm$ & 0.4 & ( & $\pm$ & 0.4 & ) & $\pm$ & 0.4 & ( & $\pm$ & 0.4 & ) & $\pm$ & 0.4 & ( & $\pm$ & 0.4 & ) & $\pm$ & 0.4 & ( & $\pm$ & 0.4 & ) \\
\multicolumn{2}{l}{Pile-up modelling} & $\pm$ & 3.2 & ( & $\pm$ & 1.3 & ) & $\pm$ & 2.0 & ( & $\pm$ & 0.8 & ) & $\pm$ & 4.0 & ( & $\pm$ & 4.2 & ) & $\pm$ & 4.0 & ( & $\pm$ & 3.8 & ) \\
\midrule
\multirow{4}{*}{Photon} & identification & $\pm$ & 2.5 & ( & $\pm$ & 2.4 & ) & $\pm$ & 1.7 & ( & $\pm$ & 1.8 & ) & $\pm$ & 2.6 & ( & $\pm$ & 2.6 & ) & $\pm$ & 2.5 & ( & $\pm$ & 2.5 & ) \\
& isolation & $\pm$ & 0.8 & ( & $\pm$ & 0.8 & ) & $\pm$ & 0.8 & ( & $\pm$ & 0.8 & ) & $\pm$ & 0.8 & ( & $\pm$ & 0.8 & ) & $\pm$ & 0.9 & ( & $\pm$ & 0.9 & ) \\
& energy resolution & \multicolumn{6}{c}{-} & \multicolumn{6}{c}{-} & $\pm$ & 1.0 & ( & $\pm$ & 1.3 & ) & $\pm$ & 1.8 & ( & $\pm$ & 1.2 & ) \\
& energy scale & \multicolumn{6}{c}{-} & \multicolumn{6}{c}{-} & $\pm$ & 0.9 & ( & $\pm$ & 3.0 & ) & $\pm$ & 0.9 & ( & $\pm$ & 2.4 & ) \\
\midrule
\multirow{2}{*}{Jet} & energy resolution & $\pm$ & 1.5 & ( & $\pm$ & 2.2 & ) & $\pm$ & 2.9 & ( & $\pm$ & 6.4 & ) & $\pm$ & 7.5 & ( & $\pm$ & 8.5 & ) & $\pm$ & 6.4 & ( & $\pm$ & 6.4 & ) \\
& energy scale & $\pm$ & 2.9 & ( & $\pm$ & 2.7 & ) & $\pm$ & 7.8 & ( & $\pm$ & 5.6 & ) & $\pm$ & 3.0 & ( & $\pm$ & 3.3 & ) & $\pm$ & 2.3 & ( & $\pm$ & 3.4 & ) \\
\midrule
\multirow{3}{*}{Flavour tagging} & $b$-jets & $\pm$ & 2.4 & ( & $\pm$ & 2.5 & ) & $\pm$ & 2.3 & ( & $\pm$ & 1.4 & ) & $\pm$ & 3.4 & ( & $\pm$ & 2.6 & ) & $\pm$ & 2.5 & ( & $\pm$ & 2.6 & ) \\
& $c$-jets & $\pm$ & 0.1 & ( & $\pm$ & 1.0 & ) & $\pm$ & 1.8 & ( & $\pm$ & 11.6 & ) & \multicolumn{6}{c}{-} & \multicolumn{6}{c}{-} \\
& light-jets & $<$ & 0.1 & ( & $\pm$ & 5.0 & ) & $\pm$ & 1.6 & ( & $\pm$ & 2.2 & ) & \multicolumn{6}{c}{-} & \multicolumn{6}{c}{-} \\
\midrule
\multirow{4}{*}{Theory} & PDF{+}$\alphas$ & $\pm$ & 2.3 & ( & $\pm$ & 2.3 & ) & $\pm$ & 3.1 & ( & $\pm$ & 3.3 & ) & \multicolumn{6}{c}{n/a} & \multicolumn{6}{c}{n/a} \\
& \multirow{2}{*}{Scale} & $+$ & 4.3 & ( & $+$ & 4.3 & ) & $+$ & 4.9 & ( & $+$ & 5.3 & ) & \multicolumn{6}{c}{n/a} & \multicolumn{6}{c}{n/a} \\
& & $-$ & 6.0 & ( & $-$ & 6.0 & ) & $+$ & 7.0 & ( & $+$ & 8.0 & ) & \multicolumn{6}{c}{n/a} & \multicolumn{6}{c}{n/a} \\
& EFT & $\pm$ & 5.0 & ( & $\pm$ & 5.0 & ) & \multicolumn{6}{c}{n/a} & \multicolumn{6}{c}{n/a} & \multicolumn{6}{c}{n/a} \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\section{Introduction}\label{sec:introduction}
The Higgs boson ($H$) was discovered by the ATLAS~\cite{HIGG-2012-27} and CMS~\cite{CMS-HIG-12-028} collaborations in 2012 using proton--proton (\pp) collisions at the Large Hadron Collider (LHC).
Measurements of the properties of the boson are in agreement with the predictions of the Standard Model (SM)~\cite{HIGG-2014-14,HIGG-2015-07}.
If SM expectations hold, the production of a Higgs boson pair in a single \pp interaction should not be observable with the currently available LHC data set.
In the SM, the dominant contributions to this process are shown in \Figs{\ref{fig:intro-Higgs-pair-production}(a)}{\ref{fig:intro-Higgs-pair-production}(b)}.
However, some beyond-the-Standard-Model (BSM) scenarios may enhance the Higgs boson pair production rate.
Many BSM theories predict the existence of heavy particles that can decay into a pair of Higgs bosons.
These could be identified as a resonance in the Higgs boson pair invariant mass spectrum.
They could be produced, for example, through the gluon--gluon fusion mode shown in \Fig{\ref{fig:intro-Higgs-pair-production}(c)}.
Models with two Higgs doublets~\cite{Lee:1973iz}, such as the minimal supersymmetric extension of the SM~\cite{Dimopoulos:1981zb}, twin Higgs models~\cite{Chacko:2005vw} and composite Higgs models~\cite{Grober:2010yv,Mrazek:2011iu}, add a second complex scalar doublet to the Higgs sector.
In general, the neutral Higgs fields from the two doublets will mix, which may result in the existence of a heavy Higgs boson that decays into two of its lighter Higgs boson partners.
Alternatively, the Randall--Sundrum model of warped extra dimensions~\cite{Randall:1999ee} predicts spin-0 radions and spin-2 gravitons that could couple to a Higgs boson pair.
In addition to the resonant production, there can also be non-resonant enhancements to the Higgs boson pair \xs.
These can originate either from loop corrections involving new particles, such as light, coloured scalars~\cite{Kribs:2012kz}, or from non-SM couplings.
Changes to the single Higgs boson production \xs arising from such loop-corrections are neglected in this paper.
Anomalous couplings can either be extensions to the SM, such as contact interactions between two top quarks and two Higgs bosons~\cite{Contino:2012xk}, or be deviations from the SM values of the couplings between the Higgs boson and other particles.
In this work, the effective Higgs self-coupling, \lhhh, is parameterised by a scale factor \khhh ($\khhh = \lhhh / \lhhhSM$) where the SM superscript refers to the SM value of this parameter.
The theoretical and phenomenological implications of such couplings for complete models are discussed in \Refs{\cite{DiLuzio:2017tfn}}{\cite{Kribs:2017znd}}.
The Yukawa coupling between the top quark and the Higgs boson is set to its SM value in this paper, consistent with its recent direct observation~\cite{CMS-HIG-17-001,HIGG-2018-13}.
\begin{figure}[!htb]
\centering
\subfloat[]{\includegraphics[height=0.1\textheight]{figures/introduction/dihiggs_production_ggF_box}}
\quad
\subfloat[]{\includegraphics[height=0.1\textheight]{figures/introduction/dihiggs_production_ggF_triangle}}
\quad
\subfloat[]{\includegraphics[height=0.1\textheight]{figures/introduction/dihiggs_production_ggF_BSM_resonance}}
\caption{Leading-order production modes for Higgs boson pairs.
In the SM, there is destructive interference between (a) the heavy-quark loop and (b) the Higgs self-coupling production modes, which reduces the overall \xs.
BSM Higgs boson pair production could proceed through changes to the Higgs couplings, for example the \ttH or $HHH$ couplings which contribute to (a) and (b), or through an intermediate resonance, $X$, which could, for example, be produced through a quark loop as shown in (c).}
\label{fig:intro-Higgs-pair-production}
\end{figure}
This paper describes a search for the production of pairs of Higgs bosons in \pp collisions at the LHC.
The search is carried out in the \yybb final state, and considers both resonant and non-resonant contributions.
Previous searches were carried out by the ATLAS and CMS collaborations in the \yybb channel at $\sqs = \SI{8}{\TeV}$~\cite{HIGG-2013-29,CMS-HIG-13-032}, as well as in other final states~\cite{EXOT-2016-31,HIGG-2013-33, CMS-HIG-17-002,CMS-HIG-17-006} at both $\sqs = \SI{8}{\TeV}$ and $\sqs = \SI{13}{\TeV}$.
Events are required to have two isolated photons, accompanied by two jets with dijet invariant mass (\mjj) compatible with the mass of the Higgs boson, $\mH = \SI{125.09}{\GeV}$~\cite{HIGG-2014-14}.
At least one of these jets must be tagged as containing a $b$-hadron; events are separated into signal categories depending on whether one or both jets are tagged in this way.
\textit{Loose} and \textit{tight} kinematic selections are defined, where the tight selection is a strict subset of the loose one.
The searches for low-mass resonances and for non-SM values of the Higgs boson self-coupling both use the loose selection, as the average transverse momentum (\pT) of the Higgs bosons is lower in these cases~\cite{Kling:2016lay}.
The tight selection is used for signals where the Higgs bosons typically have larger average \pT, namely in the search for higher-mass resonances and in the measurement of SM non-resonant \hh production.
In the search for non-resonant production, the signal is extracted using a fit to the diphoton invariant mass (\myy) distribution of the selected events.
The signal consists of a narrow peak around \mH superimposed on a smoothly falling background.
Only the predominant gluon--gluon fusion production mode, which represents over 90\% of the SM \xs, is considered.
For resonant production, the signal is extracted from the four-object invariant mass (\myyjj) spectrum for events with a diphoton mass compatible with the mass of the Higgs boson, by fitting a peak superimposed on a smoothly changing background.
The narrow-width approximation is used, focusing on a resonance with mass (\mX) in the range $\SI{260}{\GeV} < \mX < \SI{1000}{\GeV}$.
Although this search is for a generic scalar decaying into a pair of Higgs bosons, the simulated samples used to optimise the search were produced in the gluon--gluon fusion mode.
The rest of this paper is organised as follows.
\Sect{\ref{sec:detector}} provides a brief description of the ATLAS detector, while \Sect{\ref{sec:data_MC}} describes the data and simulated event samples used.
An overview of object and event selection is given in \Sect{\ref{sec:selection}}, while \Sect{\ref{sec:modelling}} explains the modelling of signal and background processes.
The sources of systematic uncertainties are detailed in \Sect{\ref{sec:systematics}}.
Final results including expected and observed limits are presented in \Sect{\ref{sec:results}}, and \Sect{\ref{sec:conclusions}} summarises the main findings.
\section{Signal and background modelling}\label{sec:modelling}
Both the resonant and non-resonant searches for Higgs boson pairs proceed by performing unbinned maximum-likelihood fits to the data in the 1-tag and 2-tag signal categories simultaneously.
The non-resonant search involves a fit to the \myy distribution, while the search for resonant production uses the \myyjj distribution.
The signal-plus-background fit to the data uses parameterised forms for both the signal and background probability distributions.
These parameterised forms are determined through fits to simulated samples.
As the loose selection is used for resonances with $\mX \leq \SI{500}{\GeV}$ and the tight selection for resonances with $\mX \geq \SI{500}{\GeV}$, different ranges of \myyjj are used in each case.
For the loose (tight) selection, only events with \myyjj in the range $\SI{245}{\GeV} < \myyjj < \SI{610}{\GeV}$ ($\SI{335}{\GeV} < \myyjj < \SI{1140}{\GeV}$) are considered.
These ranges are the smallest that contain over 95\% of all of the simulated signal sample events with \mX below, or above, \SI{500}{\GeV} respectively.
\subsection{Background composition}
\label{sec:bkg_MC}
Contributions to the continuum diphoton background originate from \yy, \yj, \jy and \jj sources produced in association with jets, where $j$ denotes a jet misidentified as a photon, and \yj and \jy differ by whether the jet fakes the sub-leading or the leading photon candidate, respectively.
These are determined from data using a double two-dimensional sideband method (2x2D) based on varying the photon identification and isolation criteria~\cite{STDM-2011-05,STDM-2010-08}.
The number and relative fraction of events from each of these sources are calculated separately for the 1- and 2-tag categories.
In each case the contribution from \yy events is in the range 80--90\%.
The choice of functional form used to fit the background in the final likelihood models is derived using simulated events.
Continuum \yy events were simulated using the \SHERPA event generator as described in \Sect{\ref{sec:data_MC}}.
As this prediction from \SHERPA does not provide a good description of the \myy spectrum in data, the mismodelling is corrected for using a data-driven reweighting function.
In the 0-tag control category, the number of events in data is high enough that the 2x2D method can be applied in bins of \myy.
The events generated by \SHERPA can also be divided into \yy, \yj, \jy and \jj sources based on the same photon identification and isolation criteria as used in data.
For each of these sources, the \myy distributions for both \SHERPA and the data are fit using an exponential function and the ratio of the two fit results is taken as an \myy-dependent correction function.
The size of the correction is less than 5\% for the majority of events.
These reweighting functions are then applied in the 1-tag and 2-tag signal categories to correct the shape of the \SHERPA prediction.
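A minimal sketch of this reweighting step is given below; it assumes binned \myy spectra and exponential shapes, and the binning, input spectra and fitted slopes are hypothetical placeholders rather than the values used in the analysis.
\begin{verbatim}
# Schematic of the data-driven reweighting (illustration only): fit an
# exponential to the m_yy spectrum of a given source in data and in SHERPA,
# and take the ratio of the two fits as an m_yy-dependent correction.
import numpy as np
from scipy.optimize import curve_fit

def expo(m, norm, slope):
    return norm * np.exp(slope * m)

myy = np.linspace(105.5, 159.5, 55)           # GeV, placeholder binning
data_yield   = expo(myy, 2.0e4, -0.025)       # placeholder 0-tag data spectrum
sherpa_yield = expo(myy, 2.1e4, -0.022)       # placeholder SHERPA spectrum

p_data, _   = curve_fit(expo, myy, data_yield,   p0=(1.0e4, -0.02))
p_sherpa, _ = curve_fit(expo, myy, sherpa_yield, p0=(1.0e4, -0.02))

def reweight(m):
    # m_yy-dependent weight applied to SHERPA events in the signal categories
    return expo(m, *p_data) / expo(m, *p_sherpa)

print(reweight(125.0))
\end{verbatim}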
The fractional contribution from the different continuum background sources is fixed to the relative proportions derived in data with the 2x2D method.
Finally, the overall normalisation is chosen such that, in the disjoint sideband region $\SI{105}{\GeV} < \myy < \SI{120}{\GeV}$ and $\SI{130}{\GeV} < \myy < \SI{160}{\GeV}$, the total contribution from all backgrounds is equal to that from data.
The contribution from \yy produced in association with jets is further divided in accord with the flavours of the two jets (for example $bb$, $bc$, $c+\text{light jet}$).
This decomposition is taken directly from the proportions predicted by the \SHERPA event generator and no attempt is made to classify the data according to jet flavour.
The continuum background in the 1-tag category comes primarily from $\yy bj$ events (${\sim}60\%$) and in the 2-tag category from $\yy bb$ events (${\sim}80\%$).
A comparison between data in the 0-tag control category and this data-driven prediction of the total background can be seen in \Fig{\ref{fig:modelling-background-decomposition}(a)} for the \myy distribution from the tight selection and in \Fig{\ref{fig:modelling-background-decomposition}(b)} for the \myyjj distribution from the loose selection.
\begin{figure}[!htb]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/modelling/m_yy_highMass_0tag_tightIsolated_unblinded}}
\quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/modelling/m_yyjj_lowMass_0tag_tightIsolated_unblinded}}
\caption{The predicted number of background events from continuum diphoton plus jet production (blue), other continuum photon and jet production (orange) and single Higgs boson production (green) is compared with the observed data (black points) in the 0-tag control category for (a) the \myy distribution with the tight selection and (b) the \myyjj distribution with the loose selection.}
\label{fig:modelling-background-decomposition}
\end{figure}
\subsection{Signal modelling for the non-resonant analysis}
\label{subsec:signal_modelling_nonres}
The shape of the diphoton mass distribution in \hhyyjj events is described by the double-sided Crystal Ball function~\cite{HIGG-2016-21}, consisting of a Gaussian core with power-law tails on either side.
The parameters of this model are determined through fits to the simulated non-resonant SM \hh sample described in \Sect{\ref{subsec:data_MC_samples}}.
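As an illustration, a minimal numerical sketch of one common parameterisation of the double-sided Crystal Ball shape is given below; the parameter values are placeholders and not the fitted values used in the analysis.
\begin{verbatim}
# Unnormalised double-sided Crystal Ball shape (illustration only):
# Gaussian core with power-law tails on either side.
import numpy as np

def double_sided_crystal_ball(x, mu, sigma, a_low, n_low, a_high, n_high):
    t = (x - mu) / sigma
    out = np.empty_like(t)
    core = (t >= -a_low) & (t <= a_high)
    lo = t < -a_low
    hi = t > a_high
    out[core] = np.exp(-0.5 * t[core] ** 2)
    out[lo] = np.exp(-0.5 * a_low ** 2) * (
        (a_low / n_low) * (n_low / a_low - a_low - t[lo])) ** (-n_low)
    out[hi] = np.exp(-0.5 * a_high ** 2) * (
        (a_high / n_high) * (n_high / a_high - a_high + t[hi])) ** (-n_high)
    return out

myy = np.linspace(110.0, 140.0, 301)
shape = double_sided_crystal_ball(myy, mu=125.0, sigma=1.5,
                                  a_low=1.5, n_low=5.0, a_high=2.0, n_high=10.0)
print(shape.max())
\end{verbatim}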
\subsection{Background modelling for the non-resonant analysis}
\label{subsec:bg_modelling_nonres}
For the non-resonant analysis, the continuum \myy background is modelled using a functional form obtained from a fit to the data.
The potential bias arising from this procedure, termed `spurious signal', is estimated by performing signal-plus-background fits to the combined continuum background from simulation, including the \yy, \yj, \jy and \jj components~\cite{HIGG-2016-21}.
The maximum absolute value of the extracted signal, for a signal in the range $\SI{121}{\GeV} < \myy < \SI{129}{\GeV}$, is taken as the bias.
This method is used to discriminate between different potential fit functions -- the function chosen is the one with the smallest spurious signal bias.
If multiple functions have the same bias, the one with the smallest number of parameters is chosen.
The first-order exponential function has the smallest bias among the seven functions considered and is therefore chosen.
The background from single Higgs boson production is described using a double-sided Crystal Ball function, with its parameters determined through fits to the appropriate simulated samples.
\subsection{Signal modelling for the resonant analysis}
For each resonant hypothesis, a fit is performed to the \myyjj distribution of the simulated events in a window around the nominal \mX.
The shape of this distribution is described using a function consisting of a Gaussian core with exponential tails on either side.
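One simple realisation of such a shape, shown here for illustration in terms of $t = (\myyjj - \mu_{X})/\sigma_{X}$, is
\[
g(t) \;\propto\;
\begin{cases}
\mathrm{e}^{\,k_{\mathrm{low}}^{2}/2 \,+\, k_{\mathrm{low}}\,t} & \text{for } t < -k_{\mathrm{low}} ,\\
\mathrm{e}^{-t^{2}/2} & \text{for } -k_{\mathrm{low}} \le t \le k_{\mathrm{high}} ,\\
\mathrm{e}^{\,k_{\mathrm{high}}^{2}/2 \,-\, k_{\mathrm{high}}\,t} & \text{for } t > k_{\mathrm{high}} ,
\end{cases}
\]
where $k_{\mathrm{low}}$ and $k_{\mathrm{high}}$ set the points at which the Gaussian core transitions smoothly into the exponential tails; the exact functional form used in the fit may differ in detail.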
A simultaneous fit to all signal samples is carried out in which each of the model parameters is further parameterised in terms of \mX.
This allows the model to provide a prediction for any mass satisfying $\SI{260}{\GeV} < \mX < \SI{1000}{\GeV}$, where these boundaries reflect the smallest and largest \mX values among the generated samples described in \Sect{\ref{subsec:data_MC_samples}}.
\subsection{Background modelling for the resonant analysis}
\label{subsec:bg_modelling_res}
For the resonant analysis, a spurious-signal study is also carried out, using the \myyjj distribution for events within the \myy window described in \Sect{\ref{sec:eventselection}}.
The background used to evaluate the spurious-signal contribution is a combination of the continuum \myy backgrounds together with the single Higgs boson backgrounds.
Due to the different \myyjj ranges used with the loose and tight selections, the shape of the \myyjj distribution differs between these two cases and hence different background functions are considered.
For the loose (tight) mass selection, the Novosibirsk function\footnote{$P(x) = \mathrm{e}^{-0.5\left[(\ln q_{y})^{2}/\Lambda^{2} + \Lambda^{2}\right]}$ where $q_{y} = 1 + \Lambda(x - x_{0})/\sigma \times \frac{\sinh(\Lambda\sqrt{\ln{4}})}{\Lambda\sqrt{\ln{4}}}$~\cite{Ikeda:1999aq}.} (exponential function) has the smallest bias among the three (four) functions considered and is therefore chosen.
As a result, for low-mass resonances both the signal and background fit functions have a characteristic peaked shape.
This degeneracy could potentially introduce a bias in the extracted signal \xs.
In order to stabilise the background fit, nominal values of the shape parameters are estimated by fitting to the simulated events described in \Sect{\ref{sec:bkg_MC}}.
The shape is then allowed to vary in the likelihood to within the statistical covariance of this template fit.
Experimental systematic uncertainties in the background shape have a small effect and are neglected.
The normalisation of the background is estimated by interpolating the \myy sideband data.
Additionally, a simple bias test is performed by drawing pseudo-data sets from the overall probability distribution created by combining the Novosibirsk background function with the signal function.
For each mass point and each value of the injected signal \xs, fits are performed on the ensemble of pseudo-data sets and the median extracted signal \xs is recorded.
For resonances with masses below \SI{400}{\GeV}, a small correction is applied to remove the observed bias.
The correction is less than \SI[parse-numbers=false]{\pm0.05}{\pico\barn} everywhere and a corresponding uncertainty of \SI[parse-numbers=false]{\pm0.02}{\pico\barn} in this correction is applied to the extracted signal \xs.
The corresponding uncertainty in the number of events in each category is roughly half that of the spurious signal.
\section{Results}
\label{sec:results}
The observed data are in good agreement with the data-driven background expectation, as summarised in \Tab{\ref{tab:results-nEvents-observed-expected}}.
Across all categories, the number of observed events in data is compatible with the number of expected background events within the calculated uncertainties.
\begin{table}[!htb]
\begin{center}
\caption{Expected and observed numbers of events in the 1-tag and 2-tag categories for events passing the selection for the resonant analysis, including the \myy requirement.
The event numbers quoted for the SM Higgs boson pair signal assume that the total production \xs is \diHiggsXS.
The uncertainties on the continuum background are those arising from the fitting procedure.
The uncertainties on the single-Higgs-boson and Higgs boson pair backgrounds arise from the statistical uncertainties of the simulated samples.
The loose and tight selections are not orthogonal.}
\label{tab:results-nEvents-observed-expected}
\sisetup{separate-uncertainty, table-format = 1.3(4), table-align-exponent = false, table-align-uncertainty = true}
\resizebox{\textwidth}{!}
{\normalsize
\begin{tabular}{l D{.}{.}{3.3} @{}c@{} D{.}{.}{1.3} D{.}{.}{2.3} @{}c@{} D{.}{.}{1.3} D{.}{.}{2.3} @{}c@{} D{.}{.}{1.3} D{.}{.}{1.3} @{}c@{} D{.}{.}{1.3}}
\toprule
& \multicolumn{6}{c}{1-tag} & \multicolumn{6}{c}{2-tag} \\
& \multicolumn{3}{c}{Loose selection} & \multicolumn{3}{c}{Tight selection} & \multicolumn{3}{c}{Loose selection} & \multicolumn{3}{c}{Tight selection} \\
\midrule
Continuum background & 117.5 & $\pm$ & 4.7 & 15.7 & $\pm$ & 1.6 & 21.0 & $\pm$ & 2.0 & 3.74 & $\pm$ & 0.78 \\
SM single-Higgs-boson background & 5.51 & $\pm$ & 0.10 & 2.20 & $\pm$ & 0.05 & 1.63 & $\pm$ & 0.04 & 0.56 & $\pm$ & 0.02 \\
\midrule
Total background & 123.0 & $\pm$ & 4.7 & 17.9 & $\pm$ & 1.6 & 22.6 & $\pm$ & 2.0 & 4.30 & $\pm$ & 0.79 \\
\midrule
SM Higgs boson pair signal & 0.219 & $\pm$ & 0.006 & 0.120 & $\pm$ & 0.004 & 0.305 & $\pm$ & 0.007 & 0.175 & $\pm$ & 0.005 \\
\midrule
Data & 125 & & & 19 & & & 21 & & & 3 & & \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
The signal and background models described in \Sect{\ref{sec:modelling}} are used to construct an unbinned likelihood function which is maximised with respect to the observed data.
In each case the parameter of interest is the signal \xs, which is related in the likelihood model to the number of signal events after considering the integrated luminosity, branching ratio, phase-space acceptance and detection efficiency of the respective categories.
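Schematically, the expected signal yield in category $c$ is
\[
N^{c}_{\mathrm{sig}} \;=\; \sigma \times \mathcal{B}\!\left(\hh \rightarrow \gamma\gamma b\bar{b}\right) \times \mathcal{L}_{\mathrm{int}} \times \left(A\,\varepsilon\right)_{c} ,
\]
where $\mathcal{L}_{\mathrm{int}}$ is the integrated luminosity and $\left(A\,\varepsilon\right)_{c}$ is the product of the phase-space acceptance and the detection efficiency of that category; for the resonant search, $\sigma$ is replaced by $\xsX \times \mathcal{B}\left(X \rightarrow \hh\right)$.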
The models for the statistically independent 1-tag and 2-tag categories are simultaneously fit to the data by maximising the product of their likelihoods.
The likelihood model also includes a number of nuisance parameters associated with the background shape and normalisation, as well as the theoretical and experimental systematic uncertainties described in \Sect{\ref{sec:systematics}}.
Each nuisance parameter enters the likelihood as a term which modulates the quantity it affects, such as the signal yield, together with a constraint term which encodes the scale of the corresponding uncertainty by penalising the likelihood when the parameter is pulled away from its nominal value.
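Schematically, the full likelihood takes the form
\[
L(\sigma, \theta) \;=\; \prod_{c\,\in\,\{1\text{-tag},\,2\text{-tag}\}} L_{c}\!\left(\mathrm{data}_{c}\,;\, \sigma, \theta\right) \;\times\; \prod_{j} G\!\left(\tilde{\theta}_{j} \mid \theta_{j}\right) ,
\]
where $\sigma$ is the parameter of interest, $\theta$ denotes the set of nuisance parameters and each $G(\tilde{\theta}_{j} \mid \theta_{j})$ is a constraint term, typically a Gaussian in an auxiliary observable $\tilde{\theta}_{j}$, that reduces the likelihood when $\theta_{j}$ is pulled away from its nominal value.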
In general the nuisance parameter for each systematic uncertainty has a correlated effect between 1-tag and 2-tag categories, with the exception of the spurious signal and background shape parameters, which are considered as individual degrees of freedom in each category.
\begin{figure}[!htb]
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yy_lowMass_1tag_bkg_fit_to_data_withRatio}}
\quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yy_lowMass_2tag_bkg_fit_to_data_withRatio}}
\\
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yy_highMass_1tag_bkg_fit_to_data_withRatio}}
\quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yy_highMass_2tag_bkg_fit_to_data_withRatio}}
\caption{For the non-resonant analysis, data (black points) are compared with the background-only fit (blue solid line) for \myy in the 1-tag (left) and 2-tag (right) categories with the loose (top) and tight (bottom) selections.
Both the continuum \yy background and the background from single Higgs boson production are considered.
The lower panel shows the residuals between the data and the best-fit background.
}
\label{results-myy}
\end{figure}
\Fig{\ref{results-myy}} shows the observed diphoton invariant mass spectra for the non-resonant analysis with the loose (top) and tight (bottom) selections.
The best-fit Higgs boson pair \xs is \SI[parse-numbers = false]{0.04_{-0.36}^{+0.43}~(-0.21_{-0.25}^{+0.33})}{\pico\barn} for the loose (tight) selection.
\Fig{\ref{results-myyjj}} shows the observed four-body invariant mass spectra for the resonant analysis in the loose (top) and tight (bottom) selections.
Maximum-likelihood background-only fits are also shown.
While local fluctuations may appear in some of the categories shown in \Fig{\ref{results-myyjj}}, only the combined two-category unbinned likelihood is considered for setting limits.
The largest discrepancy between the background-only hypothesis and the data manifests as an excess at \SI{480}{\GeV} with a local significance of $1.2~\sigma$.
The results are also interpreted as upper limits on the relevant Higgs boson pair production \xs{s}.
\begin{figure}[!htb]
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yyjj_lowMass_1tag_bkg_fit_to_data_withRatio}}
\quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yyjj_lowMass_2tag_bkg_fit_to_data_withRatio}}
\\
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yyjj_highMass_1tag_bkg_fit_to_data_withRatio}}
\quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/m_yyjj_highMass_2tag_bkg_fit_to_data_withRatio}}
\caption{For the resonant analysis, data (black points) are compared with the background-only fit (blue solid line) for \myyjj in the 1-tag (left) and 2-tag (right) categories with the loose (top) and tight (bottom) selections.
The lower panel shows the residuals between the data and the best-fit background.}
\label{results-myyjj}
\end{figure}
Exclusion limits are set on Higgs boson pair production in the \yybb final state.
The limits for both resonant and non-resonant production are calculated using the \CLs method~\cite{Read:2002hq}, with the likelihood-based test statistic $\tilde{q}_\mu$ which is suitable when considering signal strength $\mu\geq0$~\cite{Cowan:2010js,ATL-PHYS-PUB-2011-011}.
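For reference, following Refs.~\cite{Cowan:2010js,ATL-PHYS-PUB-2011-011}, the test statistic is defined as
\[
\tilde{q}_{\mu} \;=\;
\begin{cases}
-2\ln\frac{L(\mu,\,\hat{\hat{\theta}}(\mu))}{L(0,\,\hat{\hat{\theta}}(0))} & \text{for } \hat{\mu} < 0 ,\\[6pt]
-2\ln\frac{L(\mu,\,\hat{\hat{\theta}}(\mu))}{L(\hat{\mu},\,\hat{\theta})} & \text{for } 0 \le \hat{\mu} \le \mu ,\\[6pt]
0 & \text{for } \hat{\mu} > \mu ,
\end{cases}
\]
where $\hat{\mu}$ and $\hat{\theta}$ are the unconditional maximum-likelihood estimators and $\hat{\hat{\theta}}(\mu)$ maximises the likelihood for a fixed signal strength $\mu$; a given signal hypothesis is excluded at 95\% CL if the \CLs value, $p_{s+b}/(1-p_{b})$, falls below 0.05.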
Because both the expected and observed numbers of events are small in the case of the resonant analysis, test-statistic distributions are evaluated by pseudo-experiments generated by profiling the nuisance parameters of the likelihood model on the observed data, as described in \Ref{\cite{ATL-PHYS-PUB-2011-011}}.
Better limits on \khhh are expected with the loose selection, whereas for the SM value $\khhh = 1$ the strongest limits on the Higgs boson pair \xs are derived from the tight selection.
\subsection{Exclusion limits on non-resonant \hh production}\label{sec:results_nonresonant_exclusion}
The 95\% confidence level (CL) upper limit for the non-resonant Higgs boson pair \xs is obtained using the tight selection.
\Fig{\ref{fig:results-non-resonant-limits}(a)} shows this upper limit, together with $\pm1\sigma$ and $\pm2\sigma$ uncertainty bands.
The observed (expected) value is \SI[parse-numbers = false]{\nonresobs~(\nonresexp)}{\pico\barn}.
As a multiple of the SM production \xs, the observed (expected) limits are \nonresnSigmaobs~(\nonresnSigmaexp).
The limits and the $\pm1\sigma$ band around each expected limit are presented in \Tab{\ref{tab:results-limitsWrtSM}}.
\begin{table}[!htb]
\begin{center}
\caption{The 95\% CL observed and expected limits on the Higgs boson pair \xs in \si{\pico\barn} and as a multiple of the SM production \xs.
The $\pm1\sigma$ band around each 95\% CL limit is also indicated.}
\label{tab:results-limitsWrtSM}
\sisetup{separate-uncertainty, table-format = 1.3(4), table-align-exponent = false}
\begin{tabular}{l c c c c}
\toprule
& \multicolumn{1}{c}{Observed} & \multicolumn{1}{c}{Expected} & \multicolumn{1}{c}{$-1 \sigma$} & \multicolumn{1}{c}{$+1 \sigma$} \\
\midrule
\xsHH [\si{\pico\barn}] & \nonresobs & \nonresexp & \nonresexpDown & \nonresexpUp \\
& & \\ [-0.75em]
As a multiple of $\sigma_\mathrm{SM}$ & \nonresnSigmaobs & \nonresnSigmaexp & \nonresnSigmaexpDown & \nonresnSigmaexpUp \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Exclusion limits on \texorpdfstring{\lhhh}{lambdahhh}}
Varying the Higgs boson self-coupling, \lhhh, affects both the total \xs of non-resonant Higgs boson pair production and the event kinematics, which in turn changes the signal selection efficiency.
In the non-resonant analysis, results are interpreted in the context of \khhh, using the loose selection, which is more sensitive for the range of \khhh values accessible with this data set.
As discussed in \Sect{\ref{subsec:data_MC_samples}}, the samples used for this interpretation were generated at \LO.
The 95\% CL limits on \xsHH are shown together with $\pm1\sigma$ and $\pm2\sigma$ uncertainty bands around the expected limit in \Fig{\ref{fig:results-non-resonant-limits}(b)}.
The limits are calculated using the asymptotic approximation~\cite{Cowan:2010js} for the profile-likelihood test statistic.
Fixing all other SM parameters to their expected values, the Higgs boson self-coupling is constrained at 95\% CL to $\lambdaminobs < \khhh < \lambdamaxobs$ whereas the expected limits are $\lambdaminexp < \khhh < \lambdamaxexp$.
\begin{figure}[!htb]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/Limits_nonresonant_CLs}}
\quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/results/Limits_nonresonant_lambda}}
\caption{The expected and observed 95\% CL limits on the non-resonant production \xs \xsHH (a) for the SM-optimised limit using the tight selection and (b) as a function of $\kappa_\lambda$ using the loose selection.
In (a) the red line indicates the 95\% confidence level.
The intersection of this line with the observed, expected, and $\pm1\sigma$ and $\pm2\sigma$ bands is the location of the limits.
In (b) the red line indicates the predicted \hh \xs if \khhh is varied but all other couplings remain at their SM values.
The red band indicates the theoretical uncertainty of this prediction.}
\label{fig:results-non-resonant-limits}
\end{figure}
\subsection{Exclusion limits on resonant \hh production}
\label{sec:results_resonance_exclusion}
The 95\% CL limits on resonant Higgs boson pair production are shown in \Fig{\ref{fig:results-resonant-limits}}, utilising both the loose and tight selections.
The SM \hh contribution is considered as part of the background in this search although its inclusion has a negligible impact on the results.
For resonance masses in the range $\SI{260}{\GeV} < \mX < \SI{1000}{\GeV}$, the observed (expected) limits range between \SI[parse-numbers=false]{1.14~(0.90)}{\pico\barn} and \SI[parse-numbers=false]{0.12~(0.15)}{\pico\barn}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{figures/results/Limits_resonant_lowMass_highMass_stitched}
\caption{The expected and observed 95\% CL limits on the resonant production \xs, $\xsX \times \mathcal{B}\left(X \rightarrow \hh\right)$ as a function of \mX.
The loose selection is used for $\mX \leq 500\GeV$, while the tight selection is used for $\mX \geq 500\GeV$.
This is delineated with the blue dashed line.}
\label{fig:results-resonant-limits}
\end{figure}
\section{Data and simulated samples}\label{sec:data_MC}
\subsection{Data selection}
This analysis uses the \pp data sample collected at $\sqs = \SI{13}{\TeV}$ with the ATLAS detector in 2015 and 2016, corresponding to an integrated luminosity of \lumiInFemtoBarn.
All events for which the detector and trigger system satisfy a set of data-quality criteria are considered.
Events are selected using a diphoton trigger, which requires two photon candidates with transverse energy (\ET) above \num{35} and \SI{25}{\GeV}, respectively.
The overall trigger selection efficiency is greater than 99\% for events having the characteristics to satisfy the event selection detailed in \Sect{\ref{sec:selection}}.
\subsection{Simulated event samples}
\label{subsec:data_MC_samples}
Non-resonant production of Higgs boson pairs via the gluon--gluon fusion process was simulated at next-to-leading-order (\NLO) accuracy in QCD using an effective field theory (EFT) approach, with form factors for the top-quark loop from HPAIR~\cite{Dawson:1998py,Plehn:1996wb} to approximate finite top-quark mass effects.
The simulated events were reweighted to reproduce the \mhh spectrum obtained in \Refs{\cite{Borowka:2016ehy}}{\cite{Borowka:2016ypz}}, which calculated the process at \NLO in QCD while fully accounting for the top-quark mass.
The total \xs is normalised to \diHiggsXS, in accordance with a calculation at next-to-next-to-leading order (NNLO) in QCD~\cite{deFlorian:2016spz,Grazzini:2018bsd}.
Non-resonant BSM Higgs boson pair production with varied \khhh was simulated at \LO accuracy in QCD~\cite{Hespel:2014sla} for eleven values of \khhh in the range $-10 < \khhh < 10$.
The total cross-sections for these samples were computed as a function of \khhh at \LO accuracy in QCD.
A constant NNLO/LO $K$-factor of 2.283, computed at $\khhh = 1$, was then applied.
As the amplitude for Higgs boson pair production can be expressed in terms of \khhh and the top quark's Yukawa coupling, weighted combinations of the simulated samples can produce predictions for other values of \khhh.
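Schematically, since the leading-order amplitude is a linear combination of a \khhh-dependent (triangle) and a \khhh-independent (box) contribution, the \xs depends quadratically on the coupling modifier,
\[
\sigma(\khhh) \;=\; A\,\khhh^{2} + B\,\khhh + C ,
\]
so the coefficients $A$, $B$ and $C$, and hence the prediction for any intermediate \khhh value, can be obtained from weighted combinations of the generated samples.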
Resonant BSM Higgs boson pair production via a massive scalar was simulated at \NLO accuracy for ten different mass points (\num{260}, \num{275}, \num{300}, \num{325}, \num{350}, \num{400}, \num{450}, \num{500}, \num{750} and \SI{1000}{\GeV}) using the narrow-width approximation.
For all generated Higgs boson pair samples, both resonant and non-resonant, the branching fractions for \hbb and \hyy are taken to be 0.5809 and 0.00227, respectively~\cite{deFlorian:2016spz}.
This analysis is affected both by backgrounds from single-Higgs-boson production and by non-resonant backgrounds with continuum \myy spectra.
Background estimation is carried out using data-driven methods whenever possible; in particular, data are used to estimate the continuum background contribution from SM processes with multiple photons and jets, which constitute the dominant background for this search.
Monte Carlo event generators were used for the simulation of different signal hypotheses and the background from SM single Higgs boson production.
The major single Higgs boson production channels contributing to the background are gluon--gluon fusion (\ggH), associated production with a $Z$ boson (\ZH), associated production with a top quark pair (\ttH) and associated production with a single top quark (\tH).
In addition, contributions from vector-boson fusion (VBF H), associated production with a $W$ boson (\WH) and associated production with a bottom quark pair (\bbH) are also considered.
Overall, the largest contributions come from \ttH and \ZH.
More information about these simulated background samples can be found in \Ref{\cite{HIGG-2016-21}} and in \Tab{\ref{tab:data_MC-summary}}.
For all matrix element generators other than \SHERPA, the resulting events were passed to another program for simulation of parton showering, hadronisation and the underlying event.
This is either \HERWIGpp with the CTEQ6L1 parton distribution function (PDF) set~\cite{Pumplin:2002vw} using the UEEE5 set of tuned parameters~\cite{Gieseke:2012ft} or \PYTHIAV{8} with the NNPDF 2.3 LO PDF set~\cite{Ball:2012cx} and the A14 set of tuned parameters~\cite{ATL-PHYS-PUB-2014-021}.
For all simulated samples except those generated by \SHERPA, the \EVTGEN v1.2.0 program~\cite{Lange:2001uf} was used for modelling the properties of $b$- and $c$-hadron decays.
Multiple overlaid \pp collisions (pile-up) were simulated with the soft QCD processes of \PYTHIAV{8.186} using the A2 set of tuned parameters~\cite{ATL-PHYS-PUB-2012-003} and the MSTW2008LO PDF set~\cite{Martin:2009iq}.
The distribution of the number of overlaid collisions simulated in each event approximately matches what was observed during 2015 and 2016 data-taking.
Event-level weights were applied to the simulated samples in order to improve the level of agreement.
The final-state particles were passed either through a \GEANT~4~\cite{Agostinelli:2002hh} simulation of the ATLAS detector, or through the ATLAS fast simulation framework~\cite{SOFT-2010-01}, which has been extensively cross-checked against the \GEANT~4 model.
The output from this detector simulation step is then reconstructed using the same software as used for the data.
A list of the signal and dominant background samples used in the paper is shown in \Tab{\ref{tab:data_MC-summary}}.
\begin{table}[!htb]
\begin{center}
\caption{Summary of the event generators and PDF sets used to model the signal and the main background processes.
The SM \xs{s} $\sigma$ for the Higgs boson production processes with \mH = \SI{125.09}{\GeV} are also given separately for $\sqs = \SI{13}{\TeV}$, together with the orders of the calculation corresponding to the quoted \xs{s}, which are used to normalise samples.
The following generator versions were used: \PYTHIAV{8.212}~\cite{Sjostrand:2014zea} (event generation), \PYTHIAV{8.186}~\cite{Sjostrand:2007gs} (pile-up overlay); \HERWIGpp 2.7.1~\cite{Bahr:2008pv,Bellm:2013hwb}; \POWHEGBOX r3154 (base) v2~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd}; \MGMCatNLO 2.4.3~\cite{Alwall:2014hca}; \SHERPA 2.2.1~\cite{Gleisberg:2008ta,Hoeche:2009rj,Gleisberg:2008fv,Schumann:2007mg}.
The PDF sets used are: CT10 NLO~\cite{Lai:2010vv}, CTEQ6L1~\cite{Pumplin:2002vw}, NNPDF 2.3 LO~\cite{Ball:2012cx}, NNPDF 3.0 LO~\cite{Ball:2014uwa}, PDF4LHC15~\cite{Butterworth:2015oua}.
For the BSM signals, no cross-section is specified as it is the parameter of interest for measurement.
For the \SHERPA background, no cross-section is used, as the continuum background is fit in data.
}
\label{tab:data_MC-summary}
\resizebox{\textwidth}{!}
{\normalsize
\begin{tabular}{l l l l r l l}
\toprule
Process & Generator & Showering & PDF set & $\sigma$ [fb] & Order of calculation of $\sigma$ & Simulation \\
\midrule
Non-resonant SM \hh & \MGMCatNLO & \HERWIGpp & CT10 NLO & \diHiggsXSnoUnits & NNLO+NNLL & Fast \\
Non-resonant BSM \hh & \MGMCatNLO & \PYTHIAV{8} & NNPDF 2.3 LO & - & LO & Fast \\
Resonant BSM \hh & \MGMCatNLO & \HERWIGpp & CT10 NLO & - & NLO & Fast \\
\midrule
\yy plus jets & \SHERPA & \SHERPA & CT10 NLO & - & LO & Fast \\
\midrule
\ggH & \POWHEGBOX NNLOPS (r3080)~\cite{Bagnaschi:2011tu} & \PYTHIAV{8} & PDF4LHC15 & 48520 & N$^{3}$LO(QCD)+NLO(EW) & Full \\
VBF & \POWHEGBOX (r3052)~\cite{Nason:2009ai} & \PYTHIA & PDF4LHC15 & 3780 & NNLO(QCD)+NLO(EW) & Full \\
\WH & \POWHEGBOX (r3133)~\cite{Luisoni:2013kna} & \PYTHIA & PDF4LHC15 & 1370 & NNLO(QCD)+NLO(EW) & Full \\
$q\bar{q} \rightarrow \ZH$ & \POWHEGBOX (r3133)~\cite{Luisoni:2013kna} & \PYTHIAV{8} & PDF4LHC15 & 760 & NNLO(QCD)+NLO(EW) & Full \\
\ttH & \MGMCatNLO & \PYTHIAV{8} & NNPDF3.0 & 510 & NLO(QCD)+NLO(EW) & Full \\
$gg \rightarrow \ZH$ & \POWHEGBOX (r3133) & \PYTHIAV{8} & PDF4LHC15 & 120 & NLO+NLL(QCD) & Full \\
\bbH & \MGMCatNLO & \PYTHIA & CT10 NLO & 490 & NNLO(5FS)+NLO(4FS) & Full \\
t-channel \tH & \MGMCatNLO & \PYTHIAV{8} & CT10 NLO & 70 & LO(4FS) & Full \\
$W$-associated \tH & \MGMCatNLO & \HERWIGpp & CT10 NLO & 20 & NLO(5FS) & Full \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\section{ATLAS detector}\label{sec:detector}
The ATLAS detector~\cite{PERF-2007-01} at the LHC is a multipurpose particle detector with a forward--backward symmetric cylindrical geometry\footnote{\AtlasCoordFootnote} and a near $4\pi$ coverage in solid angle.
It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a \SI{2}{\tesla} axial magnetic field, electromagnetic (EM) and hadronic calorimeters, and a muon spectrometer (MS).
The inner tracking detector, consisting of silicon pixel, silicon microstrip, and transition radiation tracking systems, covers the pseudorapidity range $|\eta| < 2.5$.
The innermost pixel layer, the insertable B-layer (IBL)~\cite{ATLAS-TDR-19}, was added between the first and second runs of the LHC, around a new, narrower and thinner beam pipe.
The IBL improves the experiment's ability to identify displaced vertices and thereby improves the performance of the \btagging algorithms~\cite{ATL-PHYS-PUB-2015-022}.
Lead/liquid-argon (LAr) sampling calorimeters with high granularity provide energy measurements of EM showers.
A hadronic steel/scintillator-tile calorimeter covers the central pseudorapidity range ($|\eta| < 1.7$), while a LAr hadronic endcap calorimeter provides coverage over $1.5<|\eta|<3.2$.
The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to $|\eta| = 4.9$.
The MS surrounds the calorimeters and is based on three large air-core toroidal superconducting magnets, each with eight coils, and with bending power in the range of \num{2.0} to \SI{7.5}{\tesla\metre}.
It includes a system of precision tracking chambers, covering the region $|\eta| < 2.7$, and fast detectors for triggering purposes, covering the range $|\eta| < 2.4$.
A two-level trigger system is used to select interesting events~\cite{TRIG-2016-01}.
The first-level trigger is implemented in hardware and uses a subset of the total available information to make fast decisions to accept or reject an event, aiming to reduce the rate to around \SI{100}{\kilo\hertz}.
This is followed by the software-based high-level trigger (HLT), which runs reconstruction and calibration software, reducing the event rate to about \SI{1}{\kilo\hertz}.
\section{Conclusions}\label{sec:conclusions}
Searches for resonant and non-resonant Higgs boson pair production in the \yybb final state are performed using \lumiInFemtoBarn of \pp collision data collected at $\sqs = \SI{13}{\TeV}$ with the ATLAS detector at the LHC in 2015 and 2016.
No significant deviations from the Standard Model predictions are observed.
A 95\% CL upper limit of \SI{\nonresobs}{\pico\barn} is set on the \xs for non-resonant production, while the expected limit is \SI{\nonresexp}{\pico\barn}.
This observed (expected) limit is \nonresnSigmaobs~(\nonresnSigmaexp) times the predicted SM \xs.
The Higgs boson self-coupling is constrained at 95\% CL to $\lambdaminobs < \khhh < \lambdamaxobs$ whereas the expected limits are $\lambdaminexp < \khhh < \lambdamaxexp$.
For resonant production of \Xhhyybb, a limit is presented for the narrow-width approximation as a function of \mX.
The observed (expected) limits range between \SI{\resmaxobs}{\pico\barn}~(\SI{\resmaxexp}{\pico\barn}) and \SI{\resminobs}{\pico\barn}~(\SI{\resminexp}{\pico\barn}) in the range $\SI{260}{\GeV} < \mX < \SI{1000}{\GeV}$.
\section{Object and event selection}\label{sec:selection}
The photon selection and event selection for the present search follow those in another published ATLAS \hyy analysis~\cite{HIGG-2016-21}.
The subsections below detail the selection and identification of all detector-level objects used in the analysis, followed by the event selection criteria and the classification into signal and background control categories.
\subsection{Object selection}\label{sec:objectselection}
Photon candidates are reconstructed from energy clusters in the EM calorimeter~\cite{PERF-2013-05}.
The reconstruction algorithm searches for possible matches between energy clusters and tracks reconstructed in the inner detector and extrapolated to the calorimeter.
Clusters matched to well-reconstructed tracks are classified as electron candidates, while clusters without matching tracks are classified as unconverted photon candidates.
Clusters matched to a reconstructed conversion vertex or to pairs of tracks consistent with the hypothesis of a \yee conversion process are classified as converted photon candidates.
Photon energies are determined by summing the energies of all cells belonging to the associated cluster.
Simulation-based corrections are then applied to account for energy losses and leakage outside the cluster~\cite{PERF-2013-05}.
The absolute energy scale and the energy resolution are calibrated using $\Zboson \rightarrow e^+ e^-$ events from data.
For the photons considered in this analysis, the reconstruction efficiency for both the converted and unconverted photons is 97\%.
Photon identification is based on the lateral and longitudinal energy profiles of EM showers measured in the calorimeter~\cite{PERF-2013-04}.
The reconstructed photon candidates must satisfy tight photon identification criteria.
These exploit the fine granularity of the first layer of the EM calorimeter in order to reject background photons from hadron decays.
The photon identification efficiency varies as a function of \ET and $|\eta|$ and is typically 85--90\% (85--95\%) for unconverted (converted) photons in the range of $\SI{30}{\GeV} < \ET < \SI{100}{\GeV}$.
All photon candidates must satisfy a set of calorimeter- and track-based isolation criteria designed to reject the background from jets misidentified as photons and to maximise the signal significance of simulated \hyy events against the continuum background.
The calorimeter-based isolation variable \ETiso is defined as the sum of the energies of all topological clusters of calorimeter cells within $\Delta R = 0.2$ of the photon candidate, excluding clusters associated to the photon candidate.
The track-based isolation variable \pTiso is defined as the sum of the transverse momenta (\pT) of all tracks with $\pT > \SI{1}{\GeV}$ within $\Delta R = 0.2$ of the photon candidate, excluding tracks from photon conversions and tracks not associated with the interaction vertex.
Candidates with \ETiso larger than 6.5\% of their transverse energy or with \pTiso greater than 5\% of their transverse energy are rejected.
The efficiency of this isolation requirement is approximately 98\%.
Photons satisfying the isolation criteria are required to fall within the fiducial region of the EM calorimeter defined by \abseta $<$ 2.37, excluding a transition region between calorimeters ($1.37 < \abseta < 1.52$).
Among the photons satisfying the isolation and fiducial criteria, the two with the highest \pT are required to have $\ET/\myy > 0.35$ and $0.25$, respectively, where \myy is the invariant mass of the diphoton system.
A neural network, trained on a simulated gluon--gluon fusion single-Higgs-boson sample, is used to select the primary vertex most likely to have produced the diphoton pair.
The algorithm uses the directional information from the calorimeter and, in the case of converted photons, tracking information, to extrapolate the photon trajectories back to the beam axis.
Additionally, vertex properties, such as the sum of the squared transverse momenta or the scalar sum of the transverse momenta of the tracks associated with the vertex, are used as inputs to this algorithm.
Due to the presence of two high-\pT jets in addition to the two photons, the efficiency for selecting the correct primary vertex is more than 85\%.
All relevant tracking and calorimetry variables are recalculated with respect to the chosen primary vertex~\cite{HIGG-2016-21}.
Jets are reconstructed via the FastJet package~\cite{Cacciari:2011ma} from topological clusters of energy deposits in calorimeter cells~\cite{PERF-2014-07}, using the \antikt algorithm~\cite{Cacciari:2008gp} with a radius parameter of $R = 0.4$.
Jets are corrected for contributions from pile-up by applying an event-by-event energy correction evaluated using calorimeter information~\cite{PERF-2014-03}.
They are then calibrated using a series of correction factors, derived from a mixture of simulated events and data, which correct for the different responses to EM and hadronic showers in each of the components of the calorimeters~\cite{PERF-2016-04}.
Jets that do not originate from the diphoton primary vertex, as detailed above, are rejected using the jet vertex tagger (JVT)~\cite{ATLAS-CONF-2014-018}, a multivariate likelihood constructed from two track-based variables.
A JVT requirement is applied to jets with $\SI{20}{\GeV} < \pT < \SI{60}{\GeV}$ and $\abseta < 2.4$. This requirement is 92\% efficient at selecting jets arising from the chosen primary vertex.
Jets are required to satisfy $\abseta < 2.5$ and $\pT > \SI{25}{\GeV}$; any jets among these that are within $\Delta R = 0.4$ of an isolated photon candidate or within $\Delta R = 0.2$ of an isolated electron candidate are discarded.
The selected jets are classified as \bjet{s} (those containing $b$-hadrons) or other jets using a multivariate classifier taking impact parameter information, reconstructed secondary vertex position and decay chain reconstruction as inputs~\cite{PERF-2012-04,ATL-PHYS-PUB-2015-022}.
Working points are defined by requiring the discriminant output to exceed a particular value that is chosen to provide a specific $b$-jet efficiency in an inclusive \ttbar sample.
Correction factors derived from \ttbar events with final states containing two leptons are applied to the simulated event samples to compensate for differences between data and simulation in the \btagging efficiency~\cite{PERF-2016-05}.
The analysis uses two working points, which have a \btagging efficiency of 70\% (60\%), a $c$-jet rejection factor of 12 (35) and a light-jet rejection factor of 380 (1540), respectively.
Muons~\cite{PERF-2015-10} within $\Delta R = 0.4$ of a \btagged jet are used to correct for energy losses from semileptonic $b$-hadron decays.
This correction improves the energy measurement of \bjet{s} and increases the signal acceptance by 5--6\%.
\subsection{Event selection and categorisation}\label{sec:eventselection}
Events are selected for analysis if they contain at least two photons and at least two jets satisfying the criteria outlined in \Sect{\ref{sec:objectselection}}, where one or two of the jets are tagged as \bjet{s}.
The diphoton invariant mass is initially required to fall within a broad mass window of $\SI{105}{\GeV} < \myy < \SI{160}{\GeV}$.
In order to remain orthogonal to the ATLAS search for $\hh \rightarrow b\bar{b}b\bar{b}$~\cite{EXOT-2016-31}, any event with more than two \bjets using the 70\% efficient working point is rejected, before the remaining events are divided into three categories.
The 2-tag signal category consists of events with exactly two \bjets satisfying the requirement for the 70\% efficient working point.
Another signal category is defined using events failing this requirement but nevertheless containing exactly one \bjet identified using a more stringent (60\% efficient) working point.
Here the second jet, which is in this case not identified as a \bjet, is chosen using a boosted decision tree (BDT).
Different BDTs are used when applying the loose and tight kinematic selections.
These are optimised using simulated continuum background events as well as signal events from lower-mass or higher-mass resonances, respectively.
For each event in the simulated samples used for training the BDTs, every pair of jets in the event is considered.
A maximum of one of these jet pairs is correct; in the case of the background sample there are no correct pairs.
The training is then performed on these correct and incorrect jet pairs.
The BDTs use kinematic variables, namely jet \pT, dijet \pT, dijet mass, jet $\eta$, dijet $\eta$ and the $\Delta\eta$ between the selected jets, as well as information about whether each jet satisfied less stringent \btagging criteria.
The ranking of the jets from best to worst in terms of closest match between the dijet mass and \mH, highest jet \pT and highest dijet \pT are also used as inputs.
The jet with the highest BDT score is selected and the event is included in the 1-tag signal category.
The efficiency with which the correct jet is selected by this BDT is 60--80\% across the range of resonant and non-resonant signal hypotheses considered in this paper.
If the event contains no \bjet from either working point, the event is not directly used in the analysis, but is instead reserved for a 0-tag control category, which is used to provide data-driven estimates of the background shape in the signal categories.
Further requirements are then made on the \pT of the jets and on the mass of the dijet system, which differ for the loose and tight selections.
In the loose selection, the highest-\pT jet is required to have $\pT > \SI{40}{\GeV}$, and the next-highest-\pT jet must satisfy $\pT > \SI{25}{\GeV}$, with the invariant mass of the jet pair (\mjj) required to lie between \num{80} and \SI{140}{\GeV}.
For the tight selection, the highest-\pT and the next-highest-\pT jets are required to have $\pT > \SI{100}{\GeV}$ and $\pT > \SI{30}{\GeV}$, respectively, with $\SI{90}{\GeV} < \mjj < \SI{140}{\GeV}$.
Finally, in the resonant search, the diphoton invariant mass is required to be within \SI[parse-numbers=false]{4.7~(4.3)}{\GeV} of the Higgs boson mass for the loose (tight) selection.
This additional selection on \myy is optimised to contain at least 95\% of the simulated Higgs boson pair events for each mass hypothesis.
For non-resonant Higgs boson pair production, among events in the 2-tag category, the efficiency with which the kinematic requirements are satisfied is 10\% and 5.8\% for the loose and tight selections, respectively.
In the 1-tag category, the corresponding efficiencies are 7.2\% and 3.9\%, which are slightly lower than for the 2-tag category due to the lower probability of selecting the correct jet pair.
For the resonant analysis, efficiencies range from 6\% to 15.4\% in the 2-tag category and from 5.1\% to 12.3\% in the 1-tag category for $\SI{260}{\GeV} < \mX < \SI{1000}{\GeV}$.
Due to the differing jet kinematics, the signal acceptance is lower in all cases for the generated NLO signal than for a LO signal.
The acceptance of the LO prediction is approximately 15\% higher when using the tight selection and 10\% higher when using the loose selection.
In the resonant analysis, before reconstructing the four-object mass, \myyjj, the four-momentum of the dijet system is scaled by $\mH/\mjj$.
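That is, denoting the diphoton and dijet four-momenta by $p_{\gamma\gamma}$ and $p_{jj}$, the four-object mass is computed as
\[
\myyjj \;=\; \sqrt{\left(p_{\gamma\gamma} + \tfrac{\mH}{\mjj}\, p_{jj}\right)^{2}} ,
\]
which constrains the reconstructed dijet mass to \mH while leaving the dijet direction unchanged.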
As shown in \Fig{\ref{fig:mbb-scaling}}, this improves the four-object mass resolution by $60$\% on average across the resonance mass range of interest.
It also modifies the shape of the non-resonant background in the region below \SI{270}{\GeV}.
After the correction, the \myyjj resolution is approximately 3\% for all signal hypotheses considered in this paper.
For the diphoton system, no such scaling is necessary due to the small (approximately $1.5$\%) diphoton mass resolution.
\begin{figure}[!htb]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/selection/m_yybb_low_2}}
\quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{figures/selection/m_yybb_high_1}}
\caption{Reconstructed \myyjj with (solid lines) and without (dashed lines) the dijet mass constraint, for a subset of the mass points used in the resonant analysis.
The examples shown here are for (a) the 2-tag category with the loose selection and (b) the 1-tag category with the tight selection.
The effect on the continuum background is also shown in (a).}
\label{fig:mbb-scaling}
\end{figure}
\section{Introduction}\label{SEC:introduction}
Model checking is an automatic analysis method which explores all possible states of a modeled system to verify whether the system satisfies a formally specified property. It was popularized in industrial applications, e.g., for computer hardware and software, and has great potential for modeling complex and distributed business processes. \emph{Timed} model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. However, standard model checkers like SPIN \cite{Holzmann91BKspin} and SMV \cite{McMillan92THsmv} can generally only represent and verify the \emph{qualitative} relations between events, which constrains their use for real-time systems. \emph{Quantified} time notions, including time instant and duration, must be taken into account for timed model checking. For example in a safety critical application such as in an emergency department, after an emergency case arrives at the hospital, standard model checking can only verify whether ``the patient receives a certain treatment'', but to save the patient's life, it should be verified whether ``the patient receives a certain treatment within 1 hour''.
Many formalisms with time extensions have been presented as the basis for timed model checkers. Two popular ones are: (1) \emph{timed automata} \cite{DBLP:journals/tcs/AlurD94}, which are an extension of finite-state automata with a set of clock variables to keep track of time; (2) \emph{time Petri Nets} \cite{Merlin74TCPNthesis}, which are an extension of Petri Nets with timing constraints on the firings of transitions. Various translation methods have been presented from time Petri Nets to timed automata \cite{DBLP:conf/apn/PenczekP04TCPNtoTA} in order to apply timed-automata-based methods to time Petri Nets. UPPAAL \cite{bllpwDimacs95uppaal} and KRONOS \cite{DBLP:journals/sttt/Yovine97kronos} are two well-known timed-automata-based model checkers; they have been successfully applied to various real-time controllers and communication protocols. Conventional temporal logics like \emph{Linear Temporal Logic} (LTL) or \emph{Computation Tree Logic} (CTL) must be extended \cite{DBLP:conf/rex/AlurH91} to handle the specification of properties of timed automata. In order to handle continuous-time semantics, specialized data structures are needed to represent real clock variables, e.g. Difference Bounded Matrices \cite{DBLP:conf/avmfss/Dill89DiffBndMatrix} (employed by UPPAAL and KRONOS).
The decidability results for timed automata are based on the notion of \emph{region equivalence} over the clock assignment \cite{BengtssonY03timedAutomata}. Models in a timed-automata-based model checker cannot represent at which time instant a transition is executed within a time region; such model checkers can only deal with a specification involving a time region or a pre-specified time instant and cannot store the exact time instant at which the transition is executed. However, many real-time systems, especially those with pre-emptive scheduling, need this information for subsequent calculations. For example, triage is widely practiced in medical procedures; the caregiver \emph{C} may be administering some required but non-critical treatment to patient \emph{A} when another patient \emph{B} presents with a critical condition, such as a cardiac arrest. \emph{C} must then move to the higher-priority task of treating \emph{B}, but it is necessary to store the elapsed time of \emph{A}'s treatment to determine how much time is still needed; otherwise the treatment must be restarted. \emph{Stop-watch} automata \cite{DBLP:conf/tacas/AbdeddaimM02StopWatchA}, an extension of timed automata, were proposed to tackle this; unfortunately, as Krc{\'a}l and Yi discussed in \cite{DBLP:conf/tacas/KrcalY04decidableTA}, since the reachability problem for this class of automata is undecidable, there is no guarantee of termination in the general case.
Lamport \cite{Lamport05TRrealSimple} advocated \emph{explicit-time description methods} using general model constructs, e.g., global integer variables or synchronization between processes commonly found in standard un-timed model checkers, to realize timed model checking. He presented an explicit-time description method, which we refer to as LEDM, using a clock-ticking process (\emph{Tick}) to simulate the passage of time, and a pair of global variables to store the time lower and upper bounds for each modeled system process. The method has been implemented with popular model checkers SPIN (sequential) and SMV. We presented two methods, (1) the \emph{Sync-based Explicit-time Description Method} (SEDM) \cite{Hao09SEDM} using rendezvous synchronization steps between the \emph{Tick} and each of the system processes; and (2) the \emph{Semaphore-based Explicit-time Description Method} (SMEDM) \cite{Hao09SMEDM} using only one global semaphore variable. Both these methods enable the time lower and upper bounds to be defined locally in system processes so that they provide better \emph{modularity} in system modeling and facilitate the use of more complex timing constraints. Our experiments \cite{Hao09SMEDM,Hao09SEDM} showed that the time and memory efficiencies of these two methods are comparable to that of LEDM.
The explicit-time description methods have three advantages over timed-automata-based model checkers: (1) they do \emph{not} need specialized languages or tools for time description so they can be applied in standard un-timed model checkers. Recently, Van den Berg et al. \cite{DBLP:conf/fmics/BergSW07LEDMcaseStudy} successfully applied LEDM to verify the safety of railway interlockings for one of Australia's largest railway companies; (2) they enable the accessing and storing of the current time \cite{Hao09SMEDM}, a useful feature for pre-emptive scheduling problems; and (3) they enable the usage of large-scale distributed model checkers, e.g., {\sc DiVinE}, for timed model checking.
Orthogonally, model checking has been studied on parallel and distributed computing platforms. Because real-world models often come with gigantic state spaces which cannot fit into the memory of a standard computer, a portion of the state space inevitably needs to be accessed from secondary storage and the model checking algorithm becomes very slow \cite{Brim2004ErcimNews}. This problem is known as \emph{state explosion}. Large-scale analysis is needed in many practical cases. Distributed model checkers exploit the power of distributed computing facilities so that much larger memory is available to accommodate the state space of the system model; moreover, parallel processing of the states can reduce the verification time. Our experiments \cite{Hao09SEDM} compared the time efficiency of the sequential SPIN with that of {\sc DiVinE} \cite{divinePrjPage}, a well-known distributed model checker. When using the same explicit-time description method, {\sc DiVinE} can verify much larger models and can finish the verification of models of the same size in significantly less time than SPIN.
In this paper, we present a new explicit-time description method called the \emph{Efficient Explicit-time Description Method} (EEDM). We found that the three earlier methods (LEDM, SEDM and SMEDM) suffer from one common problem: as the \emph{Tick} process increments the time by one unit in each tick, the state space grows relatively fast as the time parameters increase. For example, in our experiment \cite{Hao09SMEDM} using LEDM, the number of states doubles as the time bounds grow from 12 to 14. In the new EEDM, the \emph{Tick} process can increment the time in two modes: the \emph{standard} mode and the \emph{leaping} mode. When it is necessary to store the current time and make it accessible for future calculations, it ticks in the standard mode; otherwise, it ticks in the leaping mode. For each system process, we define one global variable indicating whether the process needs to store and access the current time, allowing the \emph{Tick} process to switch between the standard mode and the leaping mode. For the experiments, we continue using {\sc DiVinE} (the method is also applicable to other standard model checkers); the results show that, in the leaping mode, the number of states can be reduced significantly, so it is much less affected by the increase of the time parameters, while in the standard mode the time and memory efficiencies are comparable with those of the earlier methods.
The remainder of the paper is organized as follows. Section \ref{SEC:preliminary} gives background information with respect to the {\sc DiVinE} model checker. The new explicit-time description method implemented in {\sc DiVinE} is presented in Section \ref{SEC:timeDesc}; for comparison, LEDM is also briefly described in the same section. Section \ref{SEC:exprDesc} describes our experiments and the results. Section \ref{SEC:conclude} concludes the paper.
\section{Preliminaries}\label{SEC:preliminary}
Section \ref{SUBSEC:algoDiVinE} is adapted from \cite{VBBBipdps09divine}; the syntax outlined in Section \ref{SUBSEC:preDVE}, while incomplete, is meant for the presentation of the explicit-time description methods; the complete description can be found in \cite{divinePrjPage}.
\subsection{Distributed Model Checking Algorithms in {\sc DiVinE}}\label{SUBSEC:algoDiVinE}
{\sc DiVinE} is an explicit-state LTL model checker based on the automata-based procedure by Vardi and Wolper \cite{DBLP:conf/lics/VardiW86}. The property to be verified is specified as an LTL formula. In LTL model checking, all efficient \emph{sequential} algorithms are based on the \emph{postorder} exploration as computed by a depth-first search (DFS) of the state space. However, computing the DFS postorder is P-complete \cite{DBLP:journals/ipl/Reif85DFSisPcomplete}, so no benefit in terms of either time or space will result from parallelization of this type of algorithm.
Two algorithms, OWCTY and MAP \cite{Barnat05DivineAlgo}, are introduced in {\sc DiVinE}. The sequential complexity of each is worse than that of the DFS-based algorithms but both can be efficiently implemented in parallel. OWCTY, or \emph{One Way to Catch Them Young}, is based on the fact that a directed graph can be topologically sorted if and only if it is acyclic. The algorithm applies a standard linear topological sort algorithm to the graph. Failure in the sorting means the graph contains a cycle. Accepting cycles are detected with multiple rounds of the sorting. MAP, or \emph{Maximal Accepting Predecessors}, is based on the fact that each accepting vertex in an accepting cycle is its own predecessor. To improve memory efficiency, the algorithm only stores a single representative accepting predecessor for each vertex by choosing the maximal one in a linear ordering of vertices.
These two algorithms are preferable in different cases. If the property of a model is expected to hold, and the state space can fit completely into (distributed) memory, OWCTY is preferable as it is three times faster than MAP to explore the whole state space. On the other hand, MAP can generally find a counterexample (if it exists) more quickly as it works on-the-fly.
\subsection{{\sc DiVinE} Modeling Language}\label{SUBSEC:preDVE}
DVE is the modeling language of {\sc DiVinE}. Like in Promela (the modeling language of SPIN), a model described in DVE consists of processes, message channels and variables. Each process, identified by a unique name $procid$, consists of lists of local variable declarations and state declarations, the initial state declaration and a list of transitions.
A transition transfers the process state from $ stateid_{1}$ to $ stateid_{2}$. The transition may contain a guard (which decides whether the transition can be executed), a synchronization (which communicates data with another process) and an effect (which assigns new values to local or global variables). So we have
\
{\tt \ Transition ::= $ stateid_{1}$ -> $ stateid_{2}$ \{ Guard Sync Effect \} }
\
The {\tt Guard} contains the keyword {\tt guard} followed by a boolean expression and the {\tt Effect} contains the keyword {\tt effect} followed by a list of assignments. The {\tt Sync} follows the denotation for communication in CSP, `!' for the sender and `?' for the receiver. The synchronization can be either asynchronous or rendezvous. Value(s) is transferred in the channel identified by $chanid$. So we have
\
{\tt \ Sync ::= sync $chanid$ ! SyncValue $\vert$ $chanid$ ? SyncValue ;}
\
A \emph{property process} is automatically generated for the corresponding property written as an LTL formula. Modeled system processes and the property process progress synchronously, so the latter can observe the system's behavior step by step and catch errors.
\section{Explicit-Time Description Methods}\label{SEC:timeDesc}
With explicit-time description methods, the passage of time and timed quantified values can be expressed in un-timed languages and properties to be specified can be expressed in conventional temporal logics. This section describes Lamport's LEDM before detailing our new EEDM. At the end of this section, we study a small pre-emptive example with respect to explicit-time description methods.
\subsection{The Lamport Explicit-time Description Method}\label{SUBSEC:timeLamport}
In LEDM, current time is represented with a global variable \emph{now} that is incremented by an added \emph{Tick} process. As we mentioned earlier, standard model checkers can only deal with integer variables, and a real-time system can only be modeled in discrete-time using an explicit-time description. So the \emph{Tick} process increments \emph{now} by 1. Note that in explicit-time description methods for standard model checkers, the real-valued time variables must be replaced by integer-valued ones. Therefore, these methods in general do not preserve the continuous-time semantics; otherwise an inherently infinite-state specification will be produced and the verification will be undecidable. However, they are sound for a commonly used class of real-time systems and their properties \cite{DBLP:conf/icalp/HenzingerMP92}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3in]{timeLineNew.eps}\\
\caption{States and Timeline of process $P_{i}$}\label{Fig:timelinePi}
\end{center}
\end{figure}
Placing lower-bound and upper-bound timing constraints on transitions in processes is the common way to model real-time systems. Figure \ref{Fig:timelinePi} shows a simple example of only two transitions: transition $\tau_{A}$: {\tt $ stateid_{l}$ -> $ stateid_{m}$} is followed by the transition $\tau_{B}$: {\tt $ stateid_{m}$ -> $ stateid_{n}$}. An upper-bound timing constraint on when transition $\tau_{B}$ must occur is expressed by a guard on the transition in the \emph{Tick} process so as to prevent an increase in time from violating the constraint. A lower-bound constraint on when transition $\tau_{B}$ may occur is expressed by a guard on $\tau_{B}$ so it cannot be executed earlier than it should be. Each system process $P_{i}$ has a pair of count-down timers denoted as global variables $ubtimer_{i}$ and $lbtimer_{i}$ for the timing constraints on its transitions. A large enough integer constant, denoted as {\tt INFINITY}, is defined. All upper bound timers are initialized to {\tt INFINITY} and all lower bound timers are initialized to zero. Upper bound timers with the value of {\tt INFINITY} are not active and the \emph{Tick} process will not decrement them. For transition $\tau_{B}$, the timers will be set to the correct values by $\tau_{A}$: {\tt $ stateid_{l}$ -> $ stateid_{m}$}. As \emph{now} is incremented by 1, each non{\tt -INFINITY} {\tt ubtimer} and non-zero {\tt lbtimer} is decremented by 1.
\begin{figure}[h!]
{\tt \hspace{16 pt} process P\_Tick \{ }
{\tt \hspace{30 pt} state tick; }
{\tt \hspace{30 pt} init tick; }
{\tt \hspace{30 pt} trans }
{\tt \hspace{50 pt} tick -> tick \{ guard $all$ $ubtimers$ > 0; }
{\tt \hspace{128 pt} effect now = now + 1, }
{\tt \hspace{158 pt} $decrements$ $all$ $timers$; \} ; }
{\tt \hspace{16 pt} \} }
\caption[]{\emph{Tick} process in DVE for LEDM \label{Fig:tickProcLamportMethod}}
\end{figure}
In Figure \ref{Fig:timelinePi}, initially, $(ubtimer_{i},lbtimer_{i})$ is set to $({\tt INFINITY},0)$. Transition $\tau_{A}$ is executed at time instant $t_{0}$, and $(ubtimer_{i},lbtimer_{i})$ is set to $(\xi_{2},\xi_{1})$. After $\xi_{1}$ time units, i.e., at time instant $t_{1}$ when $(ubtimer_{i},lbtimer_{i})$ is equal to $(\xi_{2}-\xi_{1},0)$, transition $\tau_{B}$ is enabled. Both timers will be reset or set to new time bounds after the execution of $\tau_{B}$. If transition $\tau_{B}$ is still not executed when the time reaches $t_{2}$ and $ubtimer_{i}$ is equal to 0, the transition in the \emph{Tick} process is disabled. This forces transition $\tau_{B}$ (it is the only transition possible at this time) to set the $ubtimer_{i}$; then the \emph{Tick} process can start again. In this way, the time upper-bound constraint is realized. The \emph{Tick} process and the system process $P_{i}$ in DVE are described in Figure \ref{Fig:tickProcLamportMethod} and Figure \ref{Fig:sysProcLamportMethod}.
\begin{figure}[h!]
{\tt \hspace{16 pt} process P\_i \{ }
{\tt \hspace{30 pt} state ..., state\_l, state\_m, state\_n; }
{\tt \hspace{30 pt} init ...; }
{\tt \hspace{30 pt} trans }
{\tt \hspace{70 pt} ... -> ... , }
{\tt \hspace{50 pt} state\_l -> state\_m \{ ...; }
{\tt \hspace{150 pt} effect $set$ $timers$ $for$ $transition\ \tau_{B}$;\}, }
{\tt \hspace{50 pt} state\_m -> state\_n \{ guard lbtimer$[i]$==0 ; effect ... ; \}, }
{\tt \hspace{70 pt} ... -> ... ; }
{\tt \hspace{16 pt} \} }
\caption[]{System process $P_{i}$ in DVE for LEDM \label{Fig:sysProcLamportMethod}}
\end{figure}
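To illustrate the timer mechanics of LEDM outside of DVE, the following small Python sketch mimics the guard and effect of the \emph{Tick} process together with the two count-down timers of a single process $P_{i}$. It is only an illustration (the constant {\tt INFINITY} and the bounds $\xi_{1}=3$, $\xi_{2}=7$ are chosen arbitrarily) and is not part of the DVE models of Figure \ref{Fig:tickProcLamportMethod} and Figure \ref{Fig:sysProcLamportMethod}.
\begin{verbatim}
INFINITY = 10**6   # "large enough" constant for an inactive upper-bound timer

now = 0
ubtimer, lbtimer = INFINITY, 0      # timers of the single process P_i

def tau_A(xi1, xi2):
    """Transition state_l -> state_m: arms the timers for tau_B."""
    global ubtimer, lbtimer
    ubtimer, lbtimer = xi2, xi1

def tau_B_enabled():
    """Guard of transition state_m -> state_n (lower bound elapsed)."""
    return lbtimer == 0

def tick():
    """One step of the Tick process; blocked when an upper bound is reached."""
    global now, ubtimer, lbtimer
    if ubtimer == 0:                 # guard "all ubtimers > 0" fails
        return False
    now += 1
    if ubtimer != INFINITY:
        ubtimer -= 1
    if lbtimer > 0:
        lbtimer -= 1
    return True

tau_A(xi1=3, xi2=7)
while not tau_B_enabled():
    tick()
print(now)                           # tau_B becomes enabled after xi1 = 3 ticks
\end{verbatim}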
We observe that the value of \emph{now} is limited by the size of type {\tt integer} and careless incrementing can cause an overflow error. This can be avoided by incrementing \emph{now} using modular arithmetic, i.e., setting $now = (now+1)$ {\tt mod MAXIMAL} ({\tt MAXIMAL} is the maximal integer value supported by the model checker). The value limit can also be increased by linking several integers, i.e., every time {\tt ($int_1$+1) mod MAXIMAL} becomes zero again, $int_2$ increments by 1, and so on. Note that the variable \emph{now} is only incremented in the \emph{Tick} process and does not appear in any other process. So for general system models in which time lower and upper bounds suffice, the variable \emph{now} can simply be removed.
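The chaining of counters can be sketched as follows (again a Python illustration; {\tt MAXIMAL} stands for the largest integer value supported by the model checker and is set here to an arbitrary small value).
\begin{verbatim}
MAXIMAL = 256          # assumed largest value of the checker's integer type

int1, int2 = 0, 0      # chained counters together representing "now"

def advance():
    """Increment now by 1 using modular arithmetic to avoid overflow."""
    global int1, int2
    int1 = (int1 + 1) % MAXIMAL
    if int1 == 0:                      # int1 wrapped around
        int2 = (int2 + 1) % MAXIMAL    # carry into the next counter

def now():
    return int2 * MAXIMAL + int1

for _ in range(1000):
    advance()
print(now())   # 1000, although each counter stays below MAXIMAL
\end{verbatim}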
\subsection{The New Efficient Explicit-Time Description Method}\label{SUBSEC:timeEEDM}
This section is organized as follows. First, we describe the leaping mode and the standard mode of the new EEDM in Sections \ref{SUBSUBSEC:LeapingTickEEDM} and \ref{SUBSUBSEC:StandardTickEEDM}, respectively. Second, we discuss and clarify several issues concerning EDMs and EEDM in Section \ref{SUBSUBSEC:EEDMissues}. Finally, a pre-emptive scheduling modeling example using EEDM is described in Section \ref{SUBSUBSEC:withTimedAutomata}.
\subsubsection{Leaping Ticks}\label{SUBSUBSEC:LeapingTickEEDM}
All aforementioned explicit-time description methods (LEDM, SEDM and SMEDM) increase $now$ by 1 each tick. On the other hand, consider Figure \ref{Fig:timelinePiPj}: we observe that when the system contains \emph{only} one process, $P_{i}$, after $t_{0}$, $\tau_{B}$ cannot be executed until time reaches $t_{2}$. Therefore, the ticks between $t_{0}$ and $t_{2}$ serve no purpose; optimally, the \emph{Tick} process should directly ``leap'' to $t_{2}$. Similarly, $\tau_{B}$ is enabled between $t_{2}$ and $t_{4}$, so either $\tau_{B}$ is executed before $t_{4}$ or time reaches $t_{4}$ and $\tau_{B}$'s execution is forced; therefore, the \emph{Tick} process can leap to $t_{4}$ from $t_{2}$. When we include $P_{j}$, after $t_{0}$, the \emph{Tick} should first leap to $t_{1}$ so $P_{j}$ can enable transition $\tau_{C}$; then it should leap to $t_{2}$ and so on.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3in]{timeLineEEDM.eps}\\
\caption{Timeline of process $P_{i}$ and $P_{j}$}\label{Fig:timelinePiPj}
\end{center}
\end{figure}
Based on these observations, in the new EEDM, we use one global count-down timer for each system process, e.g., $timer_{i}$ for $P_{i}$ in Figure \ref{Fig:timelinePiPj} is set to $\xi_{1}$ on $t_{0}$ and to $\xi_{2}-\xi_{1}$ on $t_{2}$. The \emph{Tick} process increments \emph{now} by the value of the smallest timer on condition that no timer equals zero and at least one timer is non{\tt -INFINITY}. In fact, the \emph{Tick} process, leaping in this way, is running in the \emph{leaping} mode; the \emph{Tick} process in leaping mode and the corresponding system process $P_{i}$ in DVE are described in Figure \ref{Fig:tickProcEEDM1} and Figure \ref{Fig:systemProcEEDM1} ($N$ is the number of system processes).
\begin{figure}[h!]
{\tt \hspace{16 pt} process P\_Tick \{ }
{\tt \hspace{30 pt} state tick; }
{\tt \hspace{30 pt} init tick; }
{\tt \hspace{30 pt} trans }
{\tt \hspace{50 pt} tick -> tick \{ guard $(\wedge_{1..N}({\tt timer}[i]>0))\wedge(\vee_{1..N}({\tt timer}[i]\neq{\tt INFINITY}))$; }
{\tt \hspace{125 pt} effect $now=now+\min_{1..N}({\tt timer}[i])$, }
{\tt \hspace{160 pt} $decrement$ $all$ $timers$ $by$ $\min_{1..N}({\tt timer}[i])$;\}; }
{\tt \hspace{16 pt} \} }
\caption[]{\emph{Tick} process in leaping mode in DVE for EEDM \label{Fig:tickProcEEDM1}}
\end{figure}
\begin{figure}[h!]
{\tt \hspace{16 pt} process P\_i \{ }
{\tt \hspace{30 pt} state state\_l, state\_m1, state\_m2, state\_n, ...; }
{\tt \hspace{30 pt} init ...; }
{\tt \hspace{30 pt} trans }
{\tt \hspace{70 pt} ... \ -> \ ... , }
{\tt \hspace{50 pt} state\_l \ -> state\_m1 \{ ...; effect {\tt timer}[i]=$\xi_{1}$;\},}
{\tt \hspace{50 pt} state\_m1 -> state\_m2 \{ guard {\tt timer}[i]==0; effect {\tt timer}[i]=$\xi_{2}-\xi_{1}$; \}, }
{\tt \hspace{50 pt} state\_m2 -> state\_n \{ $executes\ \tau_{B}$ $and$ $resets\ {\tt timer}[i]$; \}, }
{\tt \hspace{70 pt} ... \ -> \ ... ; }
{\tt \hspace{16 pt} \} }
\caption[]{System process $P_{i}$ in DVE for EEDM \label{Fig:systemProcEEDM1}}
\end{figure}
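For illustration, the guard and effect of the \emph{Tick} process in leaping mode (Figure \ref{Fig:tickProcEEDM1}) can be mimicked by the following Python sketch, where one count-down timer per system process is kept in a list; the sketch is not part of the DVE model and the sample timer values are arbitrary.
\begin{verbatim}
INFINITY = 10**6                 # inactive timer

def leaping_tick(now, timers):
    """One leap of the Tick process in leaping mode.

    timers: one count-down timer per system process.
    Returns the updated (now, timers), or None when the guard fails
    (some timer equals zero, or all timers are inactive).
    """
    if any(t == 0 for t in timers):
        return None
    active = [t for t in timers if t != INFINITY]
    if not active:
        return None
    leap = min(active)
    new_timers = [t if t == INFINITY else t - leap for t in timers]
    return now + leap, new_timers

print(leaping_tick(0, [3, 7, INFINITY]))   # -> (3, [0, 4, 1000000])
\end{verbatim}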
\subsubsection{To Know the Current Time Instant}\label{SUBSUBSEC:StandardTickEEDM}
Careful readers may notice that there is one penalty for \emph{Tick} to leap: the actual time instant when $\tau_{B}$ is executed is unknown unless it is at $t_{4}$. In fact, in the leaping mode, it is only known that a transition is executed between the two closest ticks that nest the transition. Consider the example in Figure \ref{Fig:timelinePiPj}; the \emph{Tick} will sequentially leap from $t_{0}$ through $t_{4}$; $\tau_{B}$ may be executed on: (1) some time instant between $t_{2}$ and $t_{3}$; or (2) some time instant between $t_{3}$ and $t_{4}$; or (3) the time instant of $t_{4}$. However, as we discussed earlier in Section \ref{SEC:introduction} and in \cite{Hao09SMEDM}, in many systems, especially those with pre-emptive scheduling, it is necessary to know the actual time instant when the transition is executed.
\begin{figure}[h!]
{\tt \hspace{8 pt} process P\_Tick \{ }
{\tt \hspace{21 pt} state tick; }
{\tt \hspace{21 pt} init tick; }
{\tt \hspace{21 pt} trans }
{\tt \hspace{35 pt} tick -> tick \{ }
{\tt \hspace{46 pt} guard $(\wedge_{1..N}({\tt timer}[i]>0))\wedge(\vee_{1..N}({\tt timer}[i]\neq{\tt INFINITY}))\wedge(\wedge_{1..N}({\tt signal}[i]==0))$; }
{\tt \hspace{46 pt} effect $now=now+\min_{1..N}({\tt timer}[i])$, }
{\tt \hspace{81 pt} $decrement$ $all$ $timers$ $by$ $\min_{1..N}({\tt timer}[i])$;\}, }
{\tt \hspace{35 pt} tick -> tick \{ }
{\tt \hspace{46 pt} guard $(\wedge_{1..N}({\tt timer}[i]>0))\wedge(\vee_{1..N}({\tt timer}[i]\neq{\tt INFINITY}))\wedge(\vee_{1..N}({\tt signal}[i]==1))$; }
{\tt \hspace{46 pt} effect $now=now+1$, }
{\tt \hspace{81 pt} $decrement$ $all$ $timers$ $by$ $1$;\}; }
{\tt \hspace{8 pt} \} }
\caption[]{\emph{Tick} process in standard mode in DVE for EEDM \label{Fig:tickProcEEDM2}}
\end{figure}
\begin{figure}[h!]
{\tt \hspace{10 pt} process P\_i \{ }
{\tt \hspace{24 pt} state state\_l, state\_m1, state\_m2, state\_n, ...; }
{\tt \hspace{24 pt} init ...; }
{\tt \hspace{24 pt} trans }
{\tt \hspace{64 pt} ... \ -> \ ... , }
{\tt \hspace{44 pt} state\_l \ -> state\_m1 \{ ...; effect {\tt timer}[i]=$\xi_{1}$;\},}
{\tt \hspace{44 pt} state\_m1 -> state\_m2 \{ guard {\tt timer}[i]==0; }
{\tt \hspace{160 pt} effect {\tt timer}[i]=$\xi_{2}-\xi_{1}$, {\tt signal}[i]=1; \}, }
{\tt \hspace{44 pt} state\_m2 -> state\_n \{ $executes\ \tau_{B}$ $and$ $resets\ {\tt timer}[i]$, {\tt signal}[i]=0; \}, }
{\tt \hspace{64 pt} ... \ -> \ ... ; }
{\tt \hspace{10 pt} \} }
\caption[]{System process $P_{i}$ to illustrate the standard mode \label{Fig:systemProcEEDM2}}
\end{figure}
To overcome this problem, we allow the \emph{Tick} process to run in the \emph{standard} mode. We define a global signal variable for each system process. All signals are set to 0 at the initial state. Whenever a system process $P_{i}$ requires the current time for future calculation, $signal_{i}$ should be set to 1; the \emph{Tick} process in turn will run in the standard mode, in which it increments \emph{now} by 1 in each tick. E.g., when time reaches $t_{2}$ in Figure \ref{Fig:timelinePiPj}, $P_{i}$'s signal $signal_{i}$ is set to 1 in order to record the time instant at which $\tau_{B}$ is executed; when $\tau_{B}$ is executed (at the latest at $t_{4}$), $signal_{i}$ is set back to 0 so that the \emph{Tick} switches back to leaping mode. Both the \emph{Tick} process and the system process need to be updated to incorporate the standard mode, see Figure \ref{Fig:tickProcEEDM2} and Figure \ref{Fig:systemProcEEDM2}.
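The mode switch can be sketched by extending the previous function with the signal variables: time advances by one unit per tick as soon as some signal is raised, exactly as in the two guarded transitions of Figure \ref{Fig:tickProcEEDM2} (again an illustrative Python sketch, reusing the constant {\tt INFINITY} defined above).
\begin{verbatim}
def tick_eedm(now, timers, signals):
    """One tick of the EEDM Tick process with mode switching.

    Leaps by the smallest active timer when all signals are 0 and
    by one time unit when at least one signal is 1.
    """
    if any(t == 0 for t in timers):
        return None                    # some system transition must fire first
    active = [t for t in timers if t != INFINITY]
    if not active:
        return None                    # no active timing constraint
    leap = 1 if any(s == 1 for s in signals) else min(active)
    new_timers = [t if t == INFINITY else t - leap for t in timers]
    return now + leap, new_timers
\end{verbatim}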
\subsubsection{Issues on EDMs and EEDM}\label{SUBSUBSEC:EEDMissues}
Readers may be concerned about the verification capability of explicit-time description methods. As in our earlier discussion, EDMs simulate a \emph{discrete} timer by making use of existing constructs in standard un-timed model checkers; in other words, time is just another normal variable in an un-timed model. Therefore, EDMs are not affected by verification issues such as whether the property is specified as an LTL or CTL formula or whether the property is verified using explicit-state based (e.g., Spin) or symbolic model checking (e.g., SMV) algorithms. These verification issues depend on what standard un-timed model checker is used.
Discrete timed model checkers suffer from a common problem: how to find the right time quantum (granularity) that does not mask errors. E.g., for processes in a hospital, a time unit defined as a day will definitely mask an error which violates the property ``the patient receives a certain treatment within 1 hour''. On the other hand, the state space can easily blow up if a finer time unit is used. Readers may be concerned that the introduction of leaping ticks may add to this problem. Actually, leaping ticks do not mask errors in this respect. The difference between LEDM and EEDM in leaping mode is that EEDM in leaping mode cannot record and use the exact time instant when a transition is executed in the model or the specified properties. For example, the LTL property that $b$ becomes true before 10 time units have elapsed since $\tau_{B}$ is executed cannot be verified using EEDM in leaping mode. For this reason, we introduce the mode-switching mechanism in EEDM.
To reduce the state space, Lamport \cite{Lamport05TRrealSimple} proposed the use of view symmetry, which is equivalent to abstraction for a symmetric specification \emph{S}. Abstraction consists of checking \emph{S} by model checking a different specification \emph{A} called an abstraction of \emph{S}. This technique has two restrictions: (1) the \emph{now} variable must be eliminated, which means the current time instant is not accessible in this case; (2) if the model checker does not support checking under view symmetry or abstraction, the abstraction specification \emph{A} must be constructed by hand. In addition, this reduction technique is orthogonal to our EEDM, i.e., we can use Lamport's abstraction technique in conjunction with EEDM.
The idea of leaping ticks in EEDM is quite similar to the notion of time regions in timed-automata-based model checkers, which advance time up to the point where a transition must be executed in order not to violate the invariant defined on the corresponding state. However, the implementations are fundamentally different: timed-automata-based model checkers introduce specialized data structures \cite{DBLP:conf/tacas/KrcalY04decidableTA} to store time regions and use symbolic model checking algorithms extended for time; on the other hand, EEDM, as with LEDM, only uses an explicit \emph{tick} process and some global variables, and the leaping way of advancing time is obtained by letting the tick leap to the next closest time bound of all system processes.
\subsubsection{To Know The Current Time Instant: A Pre-emptive Scheduling Example}\label{SUBSUBSEC:withTimedAutomata}
Following the triage example described in Section \ref{SEC:introduction}, we consider a system of multiple parallel tasks with different priorities, assuming that the right to an exclusive resource is deprivable, i.e., a higher priority task \emph{B} may deprive the resource from the currently running task \emph{A}. In this case, the elapsed time of \emph{A}'s execution must be stored for a future resumed execution.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3in]{deprivableSchedulingNew.eps}\\
\caption{An Example Case of Pre-emptive Scheduling}\label{Fig:dynamicScheduling}
\end{center}
\end{figure}
\begin{figure}[h!]
{\tt \hspace{16 pt} byte isROccupied=0; $//$0 means available}
{\tt \hspace{16 pt} process A \{ }
{\tt \hspace{30 pt} default(Tag,$tag_{A}$) }
{\tt \hspace{30 pt} int timeToGo=10; }
{\tt \hspace{30 pt} state s\_i, s\_Exec, s\_Deprived, ...; }
{\tt \hspace{30 pt} init ...; }
{\tt \hspace{30 pt} trans }
{\tt \hspace{50 pt} ... -> ... , }
{\tt \hspace{50 pt} s\_i -> s\_Exec \{ }
{\tt \hspace{120 pt} guard isROccupied==0; }
{\tt \hspace{120 pt} effect isROccupied=Tag, timer[A]=timeToGo, signal[A]=1; \}, }
{\tt \hspace{50 pt} s\_Exec -> s\_Deprived \{ }
{\tt \hspace{120 pt} guard isROccupied!=Tag \&\& timer[A]>0; }
{\tt \hspace{120 pt} effect timeToGo=timer[A]; \}, }
{\tt \hspace{50 pt} s\_Deprived -> s\_Exec \{ }
{\tt \hspace{120 pt} guard isROccupied==0; }
{\tt \hspace{120 pt} effect isROccupied=Tag, timer[A]=timeToGo; \}, }
{\tt \hspace{50 pt} s\_Exec -> s\_Next \{ }
{\tt \hspace{120 pt} guard timer[A]==0; }
{\tt \hspace{120 pt} effect isROccupied=0, signal[A]=0; \}, }
{\tt \hspace{50 pt} ... -> ... ; }
{\tt \hspace{16 pt} \} }
\caption[]{Process in DVE for Pre-emptive Scheduling Example using EEDM} \label{Fig:ProcDynamic}
\end{figure}
Figure \ref{Fig:dynamicScheduling} shows a portion of a state transition diagram for task \emph{A}, assuming \emph{A} needs the exclusive resource \emph{R} for 10 time units; when \emph{R} becomes available at time instant $t_{0}$, \emph{A} starts its execution by entering the state \emph{Exec}; at time instant $t_{1}$, \emph{B} deprives \emph{A}'s right to \emph{R}, and \emph{A} changes to the state \emph{Deprived} and stores the elapsed $t_{1}-t_{0}$ time units; when \emph{R} becomes available again, \emph{A} resumes its execution in state \emph{Exec} for the remaining $10-(t_{1}-t_{0})$ units. Implementation of this example using any one of the three explicit-time description methods is straightforward. Figure \ref{Fig:ProcDynamic} shows the process for task \emph{A} in DVE using EEDM (assuming \emph{A} has the lowest priority).
\section{Experiments}\label{SEC:exprDesc}
\subsection{Overview}\label{SUBSEC:fischerAlgo}
For the convenience of comparison with LEDM in DiVinE, we use Fischer's mutual exclusion algorithm as in \cite{Hao09SEDM} \cite{Hao09SMEDM}; this algorithm is a well-known benchmark for timed model checking. The description of the algorithm below is adapted from \cite{Lamport05TRrealSimple}. Our experiment is to model the algorithm in {\sc DiVinE} using EEDM in both standard and leaping modes, and compare the time and memory efficiency and the size of the state space with those of LEDM (we omit the experiments for SEDM and SMEDM because they are comparable with LEDM in the aforementioned three numeric criteria).
Fischer's algorithm is a shared-memory, multi-threaded algorithm. It uses a shared variable \emph{x} whose value is either a thread identifier (starting from 1) or zero; its initial value is zero. For the convenience of specification of the safety property in our experiments, we use a counter \emph{c} to count the number of threads that are in the critical section. The program for thread \emph{t} is described in Figure \ref{Fig:fischerAlgo}.
\begin{figure}[h!]
{\parindent170pt {\it ncs}: noncritical section;
{\it \ \ \ a}: {\bf wait until} {\it x} = 0;
{\it \ \ \ b}: {\it x} := {\it t};
{\it \ \ \ c}: {\bf if} {\it x} $\neq$ {\it t} {\bf then goto} {\it a};
{\it \ \ cs}: critical section;
{\it \ \ \ d}: {\it x} := 0; {\bf goto} {\it ncs};
}
\caption[]{Program of thread \emph{t} in Fischer's algorithm \label{Fig:fischerAlgo}}
\end{figure}
The timing constraints are: first, step \emph{b} must be executed at most $\delta_{b}^{u}$ time units (as an upper bound) after the preceding execution of step \emph{a}; second, step \emph{c} cannot be executed until at least $\delta_{c}^{l}$ time units (as a lower bound) after the preceding execution of step \emph{b}. For step \emph{c}, there is an additional upper bound $\delta_{c}^{u}$ to ensure fairness, i.e., step \emph{c} will eventually be executed. The algorithm is tested for 6 threads. The safety property to be verified, {\it ``no more than one process can be in the critical section''}, is specified as $G (c<2)$ for the model.
Version 0.8.1 of the {\sc DiVinE}-Cluster is used. This version has the new feature of pre-compiling the model in DVE into dynamically linked C functions; this feature speeds up the state space generation significantly. As the example property is known to hold, the OWCTY algorithm is chosen for better time efficiency.
All experiments are executed on the Mahone cluster of ACEnet \cite{ACEnetPrjPage}, the high performance computing consortium for universities in Atlantic Canada. The cluster is a Parallel Sun x4100 AMD Opteron (dual-core) cluster equipped with Myri-10G interconnection. Parallel jobs are assigned using the Open MPI library.
\subsection{Experiment 1}\label{SUBSEC:expr1}
For the first experiment, we use the same value for three constraints, i.e., $\delta_{b}^{u}=\delta_{c}^{l}=\delta_{c}^{u}=T$. Figure \ref{Fig:timeResults1} compares time and memory efficiency for the two explicit-time description methods with 16 CPUs.
\begin{figure*}[h!]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\hline
& \multicolumn{3}{c|}{LEDM} & \multicolumn{6}{c|}{EEDM} \\ \cline{5-10}
& \multicolumn{3}{c|}{\ } & \multicolumn{3}{c|}{standard} & \multicolumn{3}{c|}{leaping} \\ \cline{2-10}
\emph{T} & {States} & {Time} & {Memory} & {States} & {Time} & {Memory} & {States} & {Time} & {Memory} \\ \hline
2 & 644,987 & 1.8 & 4,700.1 & 626,312 & 1.9 & 4,689.6 & 141,695 & 1.4 & 4,606.2 \\
3 & 1,438,204 & 2.4 & 4,822.3 & 2,375,451 & 3.4 & 4,982.7 & 141,695 & 1.5 & 4,612.8 \\
4 & 3,048,515 & 3.3 & 4,942.8 & 7,363,766 & 5.0 & 5,820.9 & 141,695 & 1.5 & 4,603.6 \\
5 & 6,033,980 & 4.2 & 5,603.4 & 19,471,191 & 10.4 & 7,855.2 & 141,695 & 1.4 & 4,604.9 \\
6 & 11,201,179 & 7.2 & 6,343.4 & 45,552,076 & 24.4 & 12,241.1 & 141,695 & 1.4 & 4,620.6 \\
7 & 19,671,092 & 11.1 & 7,885.7 & 96,871,373 & 52.1 & 20,663.7 & 141,695 & 1.6 & 4,605.7 \\
8 & 32,952,899 & 18.6 & 9,958.9 & 190,941,594 & 133.0 & 37,503.6 & 141,695 & 1.4 & 4,601.8 \\
9 & 53,025,700 & 30.2 & 13,288.7 & 353,811,115 & 246.5 & 63,572.8 & 141,695 & 1.4 & 4,622.4 \\ \hline
\end{tabular}
\caption[]{Number of states, Time (in seconds) and memory usage (in MB) for Experiment 1 \label{Fig:timeResults1}}
\end{center}
\end{figure*}
We can see the significant advantage of EEDM in leaping mode: the number of states, verification time and memory usage remain virtually the same for all \emph{T}s. Remark that all timing bounds are the same for all threads; the \emph{Tick} process always leaps $T$ time units in each tick (it ticks only when there is at least one active timer). Therefore, changing the value of $T$ will not change the number of states.
Now we compare LEDM and EEDM in standard mode. Let ${\tt states}(X)$ be the number of states of method $X$. We can see that, after $T=3$, ${\tt states(EEDM_{standard})}>{\tt states(LEDM)}$. As $T$ increases from 2 to 9, ${\tt states(EEDM_{standard})}$ increases by a factor of 564.9 while ${\tt states(LEDM)}$ increases by a factor of only 82.2; a comparison of the verification time yields similar results. The system process in EEDM has more transitions than in LEDM because there is only one timer for each system process and a timer needs to be assigned twice if the next transition has both lower and upper bounds (e.g., for $\tau_{B}$ of $P_{i}$ in Figure \ref{Fig:timelinePiPj}, {\tt timer[i]} is assigned $\xi_{1}$ and $\xi_{2}-\xi_{1}$ at $t_{0}$ and $t_{2}$ respectively); on the other hand, LEDM has two timers for each system process, so both bounds can be assigned in one step.
\subsection{Experiment 2}\label{SUBSEC:expr2}
For the second experiment, we set $\delta_{b}^{u}$ and $\delta_{c}^{l}$ to 4 and vary $\delta_{c}^{u}$. Figure \ref{Fig:timeResults2} compares the number of states, time and memory efficiency for the two explicit-time description methods with 16 CPUs. Figure \ref{Fig:timeResultsChart2} shows how the size of the state space and verification time grow as $\delta_{c}^{u}$ increases. The extra experimental data for $\delta_{c}^{u}=\{13,14,15,16\}$ are intended to illustrate the growth pattern of the state space of EEDM in leaping mode.
\begin{figure*}[h!]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\hline
& \multicolumn{3}{c|}{LEDM} & \multicolumn{6}{c|}{EEDM} \\ \cline{5-10}
& \multicolumn{3}{c|}{\ } & \multicolumn{3}{c|}{Standard} & \multicolumn{3}{c|}{Leaping} \\ \cline{2-10}
$\delta_{c}^{u}$ & {States} & {Time} & {Memory} & {States} & {Time} & {Memory} & {States} & {Time} & {Memory} \\ \hline
5 & 3,659,317 & 3.5 & 5,199.1 & 10,865,877 & 7.2 & 6,415.6 & 1,122,491 & 2.2 & 4,771.0 \\
6 & 6,783,455 & 4.2 & 5,770.2 & 15,221,140 & 10.2 & 7,150.3 & 1,046,759 & 2.0 & 4,758.0 \\
7 & 12,907,369 & 7.2 & 6,754.2 & 21,451,024 & 13.2 & 8,198.5 & 3,516,193 & 3.6 & 5,182.7 \\
8 & 25,723,697 & 13.3 & 8,898.8 & 31,934,332 & 20.2 & 9,946.8 & 365,279 & 1.6 & 4,651.1 \\
9 & 50,500,739 & 28.2 & 13,047.6 & 48,889,270 & 31.2 & 12,721.1 & 10,998,335 & 7.1 & 6,434.9 \\
10 & 93,349,553 & 52.3 & 20,146.1 & 73,501,090 & 50.7 & 16,858.4 & 3,828,687 & 3.8 & 5,228.0 \\
11 & 161,886,059 & 111.9 & 31,722.6 & 108,005,926 & 78.5 & 23,104.9 & 46,149,106 & 24.9 & 12,313.8 \\
12 & 266,256,377 & 199.2 & 49,154.8 & 154,662,946 & 112.2 & 30,045.6 & 857,773 & 1.9 & 4,735.3 \\
13 & & & & & & & 92,147,198 & 48.4 & 19,928.2 \\
14 & & & & & & & 12,275,835 & 7.3 & 6,650.4 \\
15 & & & & & & & 180,459,742 & 114.1 & 34,098.7 \\
16 & & & & & & & 1,847,395 & 2.7 & 4,911.5 \\
\hline
\end{tabular}
\caption[]{Number of states, Time (in seconds) and memory usage (in MB) for Experiment 2 \label{Fig:timeResults2}}
\end{center}
\end{figure*}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6in]{resultChart2.eps}\\
\caption{Number of states and Time (in seconds) for Experiment 2}\label{Fig:timeResultsChart2}
\end{center}
\end{figure}
As opposed to the results in experiment 1, in this experiment EEDM in standard mode performs better than LEDM. We can see that after $\delta_{c}^{u}=9$, ${\tt states(EEDM_{standard})}<{\tt states(LEDM)}$; as the model becomes larger, ${\tt states(EEDM_{standard})}$ increases more slowly than ${\tt states(LEDM)}$. In fact, as $\delta_{c}^{u}$ increases from 5 to 12, ${\tt states(LEDM)}$ increases by a factor of 72.8 while ${\tt states(EEDM_{standard})}$ increases by a factor of only 14.2; we can see similar comparison results in terms of the verification time.
EEDM in leaping mode still shows much better performance than LEDM and EEDM in standard mode; ${\tt states(EEDM_{leaping})}$ also shows an interesting phenomenon as $\delta_{c}^{u}$ increases. The number of states of both EEDM in standard mode and LEDM increases at a relatively steadier rate: as $\delta_{c}^{u}$ increases by 1, ${\tt states(EEDM_{standard})}$ increases by a factor of about 1.45 and ${\tt states(LEDM)}$ increases by a factor of about 1.8. On the other hand, the increments of ${\tt states(EEDM_{leaping})}$ are grouped by the value of $s=(\delta_{c}^{u}\ {\tt mod}\ \delta_{c}^{l})$. We can see that, for the same $\lfloor {\delta_{c}^{u} \over \delta_{c}^{l}} \rfloor$, ${\tt states(EEDM_{leaping})_{s=0}}<{\tt states(EEDM_{leaping})_{s=2}}<{\tt states(EEDM_{leaping})_{s=1}}<{\tt states(EEDM_{leaping})_{s=3}}$. For $s=0$, whenever there is more than one active timer, their values are integer multiples of $\delta_{c}^{l}$ (4 in this experiment), so the \emph{Tick} still leaps at least 4 time units each tick; in the case of $s=2$, the \emph{Tick} leaps at least 2 time units each tick. On the other hand, for $s=1$ and $s=3$, in the worst case, the \emph{Tick} leaps only 1 time unit each tick. From these observations, we can conclude that EEDM in leaping mode performs better when the \emph{greatest common divisor} (gcd) of all timing bounds of all system processes is greater.
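This observation can be summarized by an elementary heuristic: the timer values remain integer combinations of the timing bounds, so the worst-case leap of the \emph{Tick} is their greatest common divisor. The small Python illustration below lists this worst-case leap for the bounds used in Experiment 2 ($\delta_{b}^{u}=\delta_{c}^{l}=4$ and varying $\delta_{c}^{u}$); it reproduces the grouping by $s$ observed above.
\begin{verbatim}
from math import gcd
from functools import reduce

def worst_case_leap(bounds):
    """Smallest leap the Tick may be forced to take (heuristically, the gcd)."""
    return reduce(gcd, bounds)

for dcu in range(5, 17):                        # delta_c^u as in Experiment 2
    print(dcu, worst_case_leap([4, 4, dcu]))    # delta_b^u = delta_c^l = 4
\end{verbatim}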
\section{Conclusion}\label{SEC:conclude}
In this paper, we present a new explicit-time description method, Efficient Explicit-time Description Method (EEDM) which is significantly more efficient than LEDM, SEDM and SMEDM. In addition to the improved efficiency, EEDM still retains the ability to store and access the current time for future calculations in the system model. Altogether, we have devised methods that have advantages in different aspects of real-time modeling: SEDM and SMEDM have better modularity and adaptability; EEDM is more efficient. These explicit-time description methods provide systematic ways to represent discrete time in un-timed model checkers like SPIN, SMV and {\sc DiVinE}.
In fact, the explicit-time description methods are intended to offer more options for the verification of real-time systems. First, as Van den Berg et al. mention in \cite{DBLP:conf/fmics/BergSW07LEDMcaseStudy}, in some real-world scenarios when significant resources have been invested into the model for a standard model checker, it is much easier and therefore preferable to extend the existing model to represent time notions rather than re-modeling the entire system for a specialized timed model checker. Second, explicit-time description methods provide a solution for accessing and storing the current clock value for timed-automata-based model checkers. Last and most important, explicit-time description methods, especially the EEDM, enable the usage of large-scale distributed model checkers so that we can verify much bigger real-time systems.
This research is part of an ambitious research and development project, Building Decision-support through Dynamic Workflow Systems for Health Care \cite{keith09CareMS}. Real world workflow processes can be highly dynamic and complex in a health care setting. Verification that the system meets its specifications is essential. Standard workflow patterns are widely used in business process modeling, so we have translated most of the control-flow patterns into DVE and applied them in verifying two small process models \cite{Mashiyat09Pattern}. As a continuing effort, we will incorporate explicit-time description methods into workflow patterns' DVE specification and verify a larger model of the real-world healthcare processes with timing information.
As a more complex case study of EEDM, we are now building a pre-emptive scheduling model in the setting of the Dynamic Voltage Scaling (DVS) technique. We also plan to study the possibility of applying different abstraction techniques to the explicit-time description methods: Dutertre and Sorea \cite{DBLP:DutertreS04caldrAutomata} and Clarke et al. \cite{clarke07abstraction} recently presented two different abstraction techniques for timed automata and the abstraction outcome can be verified using un-timed model checkers.
\section*{Acknowledgment}
This research is sponsored by Natural Sciences and Engineering Research Council of Canada (NSERC), an Atlantic Computational Excellence Network (ACEnet) Post Doctoral Research Fellowship and the Atlantic Canada Opportunities Agency (ACOA) through an Atlantic Innovation Fund project. The computational facilities are provided by ACEnet. We thank Jiri Barnat, Keith Miller and the anonymous reviewers of PDMC'09 for their valuable comments.
\bibliographystyle{eptcs}
Wireless cellular networks are constantly evolving to cope with the accelerating increase
of the traffic demand. The technology progressed from 3G enhancement with HSDPA
(High-Speed Downlink Packet Access) to 4G with LTE (Long Term Evolution). The
networks become also more dense and more heterogeneous; i.e. new base
stations (BS) of different types are added. In particular,
operators introduce \emph{micro} BS, which transmit with smaller
powers than the original ones (called \emph{macro} BS) in order to cope with local
increase of the traffic demand (hotspots). The reasons for using smaller
transmitting powers is to avoid a harmful increase of interference
and reduce energy consumption as well as human exposure to the electromagnetic radiation.
The deployment of micro BS is expected to increase significantly in
the nearest future.
Usage of different tiers of BS (as the micro and macro stations) with variable transmission powers as well as
antenna gains, height etc, makes cellular networks heterogeneous.
Besides, even the macro tiers in commercial cellular networks are never perfectly regular:
the locations of BS is usually far from being perfectly hexagonal,
because of various deployment constraints. Irregularity of the spatial patterns of BS
is usually more pronounced in dense urban environments. Physical
irregularity of the urban environment (shadowing) induces additional
variability of radio conditions.
Irregularity and heterogeneity of cellular networks implies a spatial
disparity of base station performance metrics and quality of service (QoS) parameters observed by users
in different cells of the network.
This represents a challenge
for the network operators, in particular in the context of the network dimensioning.
How to describe and analyze the performance of a large, irregular, heterogeneous
network? Which tier in the given network disposes larger
capacity margins? Is it the macro tier since its BS transmit with
larger powers or the micro tier, whose BS serve smaller zones?
The goal of this paper is to propose a model, validated with
respect to real field measurements in an operational network,
which can help answering these questions.
Our objective faces us with the following important aspects of the
modeling problem:
(i) capturing the static but irregular and
heterogeneous network geometry, (ii) considering the dynamic user service process
at individual network BS (cells), and last but not least
(iii) taking into account the dependence between these service
processes. This latter dependence is
due to the fact that the extra-cell interference makes the service of a
given cell depend on the ``activity'' of other cells in the network.
Historically, geometric (i) and dynamic (ii) aspects are usually
addressed separately on the ground of stochastic geometry and queueing
theory, respectively.
Cellular network models based on the planar Poisson point process
have been shown recently to give tractable expressions for
many characteristics built from the powers of different BS received at one
given location, as e.g. the signal-to-interference-and
noise ratio(s) (SINR) of the, so-called, typical user. They describe
potential resources of the network (peak bit-rates, spectral or energy
efficiency etc) but not yet its real performance when several users
have to share these resources. On the other hand,
various classical queueing models can be tailored to represent the
dynamic resource sharing at one or several BS (as e.g. loss models for constant
bit-rates services and processor sharing queues for variable bit-rates
services).
Our model considered in this paper combines the stochastic-geometric
approach with the queueing one to represent the network in its spatial
irregularity and temporal evolution.
It assumes the usual {\em multi-tier Poisson model for BS locations} with
shadowing and the {\em space-time Poisson process of call arrivals} independently marked by
data volumes. Each station applies a processor sharing policy to serve
users which receive its signal as the strongest one, with the
peak service rates depending on the respective SINR.
The mutual-dependence of cell performance (iii)
is captured via a system of {\em cell-load (fixed point) equations}.
By the load we mean the ratio of the
actual traffic demand to its critical value, which can be interpreted, when
it is smaller than one, as the busy probability in the classical
processor sharing queue.
The cell load equations make the load of a given station dependent
on the busy probabilities (hence loads) of other stations, by taking
them as weighting factors of the interference induced by these
stations. Given network realization, this decouples the temporal (processor-sharing) queueing processes of
different cells, allowing us to use the classical results to
evaluate their steady state characteristics (which depend on the
network geometry).
We (numerically) solve the cell-load fixed point problem calculating loads and
other characteristics of the individual cells. Appropriate spatial (network)
averaging of these characteristics, expressed using the formalism of
the typical cell offered by Palm theory of point processes,
provides useful macroscopic description of the network performance.
The above approach is validated by estimating the model parameters from the real field measurements
of a given operational network and comparing the macroscopic network performance characteristics
calculated using this model to the performance of the real network.
{\em The remaining part of the paper} is organized as follows:
In Section~\ref{ss.RelatedWork} we briefly present the related work.
Our model is introduced in Section~\ref{s.ModelDescription}
and studied in Section~\ref{s.Analysis}. Numerical results
validating our approach are presented in Section~\ref{s.NumericalResults}.
\subsection{Related work}
\label{ss.RelatedWork}
There are several ``pure'' simulation tools developed for the performance evaluation of
cellular networks such as those
developed by the industrial contributors to 3GPP (\emph{3rd Generation
Partnership Project})~\cite{3GPP36814-900},
TelematicsLab LTE-Sim~\cite{Piro2011}, University of Vien LTE
simulator~\cite{Mehlfuhrer2011,Simko2012} and LENA
tool~\cite{Baldo2011,Baldo2012} of CTTC.
They do not necessarily allow one to identify the macroscopic laws regarding network performance metrics.
A possible analytical approach to this problem is based on the information
theoretic
characterization of the individual link performance; cf
e.g.~\cite{GoldsmithChua1997,Mogensen2007}, in conjunction with a queueing
theoretic
modeling and analysis of the user traffic
cf. e.g.~\cite{Borst2003,BonaldProutiere2003,HegdeAltman2003,BonaldBorstHegdeJP2009,RongElayoubiHaddada2011,KarrayJovanovic2013Load}.
These works are usually focused on some particular aspects of the
network and do not consider a large, irregular, heterogeneous, multi-cell scenario.
Stochastic geometric
approach~\cite{HABDF:2009} to wireless communication networks consist in taking
spatial averages over node (emitter, receiver) locations.
It was first shown in~\cite{ANDREWS2011} to give analytically tractable
expressions for the typical-user characteristics in Poisson
models of cellular networks,
with the Poisson assumption
being justified by representing highly irregular base station deployments in urban
areas~\cite{ChiaHan_etal2012} or mimicking strong log-normal
shadowing~\cite{hextopoi,hextopoi-journal}, or both. Expressions for the SINR coverage in
multi-tier network models were
developed in~\cite{mukherjee2011downlink,DHILLON2012,mukherjee2012downlink,BlaszczyszynKK2013SINR}.
Several extensions of this initial model are reported in~\cite{mukherjee2014analytical}.
The concept of equivalence of heterogeneous networks (from the point
of view of its typical user), which we use in the present paper, was
recently formulated in~\cite{equivalence2013}, but previously used
in several works e.g. in~\cite{blaszczyszyn2010impact,MADBROWN2011,PINTO2012,BlaszczyszynKK2013SINR}.
The fixed-point cell-load equation was postulated independently in
~\cite{KarrayJovanovic2013Load} and~\cite{siomina2012analysis}
to capture the dependence of processor sharing queues modeling
performance of individual BS, in the context of regular hexagonal and
fixed deterministic network models, respectively.
Our present paper, combining stochastic geometry with queueing
theory complements~\cite{blaszczyszyn2014user}, where a homogeneous
network is considered, and~\cite{JovaQoS} where the distribution of the QoS
metrics in the heterogeneous network has been studied by simulation.
A network dimensioning methodology based on this approach was recently proposed in~\cite{dimension}.
\section{Model description}
In this section we describe the components of our model.
\label{s.ModelDescription}
\subsection{Network geometry}
\subsubsection{Multi-tier network of BS}
We consider a multi-tier cellular network consisting of $J$ types
(tiers) of BS characterized by different transmitting
powers $P_j$, $j=1,\ldots,J$.
Locations of BS are modeled by independent homogeneous Poisson point processes
$\Phi_j$ on
the plane, of intensity $\lambda_j$ stations per $\mathrm{km}^2$.
Let $\Phi=\{X_n\}$ be the superposition of $\Phi_{1},\ldots,\Phi_J$
(capturing the locations of all BS of the network).
Denote by $Z_n\in\{1,\ldots,J\}$ the type of BS $X_n\in\Phi$
(i.e., the index of the tier it belongs to).
It is known that $\Phi$ is a Poisson point process of intensity parameter $\lambda=\sum_{j=1}^J \lambda
_{j}$ and $Z_n$ form independent, identically distributed (i.i.d) marks of
$\Phi$ with $\Pro(Z_n=j)=\lambda_j/\lambda$.
\subsubsection{Propagation effects}
\label{sss.Propagation}
The propagation loss
is modeled by a deterministic
{\em path-loss function} $l(x)=(K\left\vert x\right\vert )^{\beta}$, where
$K>0$ and $\beta>2$\ are given constants, and some random propagation effects.
We split these effects into two categories conventionally called {\em
(fast) fading} and {\em shadowing}. The former will be accounted for in the
model at the link-layer (in the peak bit-rate function
cf.~(\ref{e.Shannon})). The latter impacts the choice of the serving
BS and thus needs to be considered together with the network
geometry. To this regard we assume that the shadowing between a given station $X_{n}\in\Phi$ and all
locations $y$ on the plane is modeled by some positive valued stochastic process
$\mathbf{S}_{n}\left( y-X_{n}\right) $. We assume that the processes $\mathbf{S}_{n}\left(
\cdot\right) $ are i.i.d. marks of
$\Phi$.~\footnote{The assumption that all types of base stations have
the same distribution of the shadowing can be easily relaxed.}
Moreover we assume that $\mathbf{S}_{1}(y)$ are identically
distributed across $y$, but do not make any assumption
regarding the dependence of $\mathbf{S}_{n}(y)$ across $y$.
Thus the inverse of the power averaged over fast fading, received at $y$ from BS $X_n$, denoted by
$L_{X_{n}}\left( y\right)=L_{n}\left( y\right)$ which we call (slightly abusing the
terminology) the propagation-loss from this station
is given by
\begin{equation}
L_{X_n}\left( y\right) =\frac{l\left( \left\vert y-X_{n}\right\vert
\right) }{P_{Z_n}\mathbf{S}_n\left( y-X_{n}\right) }. \label{e.Propagation}
\end{equation}
In what follows, we will often simplify the notation writing $L_X(\cdot)$ for the propagation-loss of BS $X\in\Phi$.
\subsubsection{Service zones, SINR and peak bit-rates}
\label{sss.Cells}
We assume that each (potential) user located at $y$ on the plane is served by the BS offering the strongest received power among all
the BS in the network. Thus, the zone served by BS $X\in\Phi$,
denoted by $V(X)$,
which we keep calling {\em cell} of $X$ (even if random shadowing
makes it need not to be a polygon or even a connected set) is given by
\begin{equation}
V\left( X\right) =\left\{ y\in\mathbb{R}^{2}:L_{X}\left( y\right) \leq
L_{Y}\left( y\right)\;\text{for all\,} Y\in\Phi\right\} \label{e.Cell}
\end{equation}
We define the (downlink) SINR at location $y\in V\left( X\right)$
(with respect to the serving BS $X\in\Phi$) as follows
\begin{equation}
\SINR\left( y,\Phi\right) :=\frac{1/L_{X}\left( y\right)}{N+\sum_{Y\in\Phi\backslash\left\{ X\right\} } \varphi_Y/L_{Y}\left( y\right) },\label{e.SINR}
\end{equation}
where $N$\ is the noise power and the {\em activity factors} $\varphi_Y\in[0,1]$ account (in a way that
will be made specific in Section~\ref{ss.fixedPoing}) for the activity of stations
$Y\in\Phi$. In general, we assume that
$\varphi_Y$ are additional (not necessarily independent) marks of the
point process $\Phi$, possibly dependent on tiers and
shadowing of all BS.
We assume that the {\em (peak) bit-rate} at location $y$, defined as
the number of bits per second a user located at $y$ can download
when served alone by its BS, is some function $R(\SINR)$ of the SINR. Our general analysis presented in Section~\ref{s.Analysis} does not
depend on any particular form of this function. A specific
expression will be assumed for the numerical results in Section~\ref{s.NumericalResults}.
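To make these quantities concrete, the following Monte Carlo sketch (Python) samples a two-tier Poisson network with log-normal shadowing and evaluates the propagation loss~(\ref{e.Propagation}), the serving BS~(\ref{e.Cell}) and the SINR~(\ref{e.SINR}) at the origin; all numerical values (densities, powers, path-loss constants, noise) are purely illustrative and full activity $\varphi_Y\equiv1$ is assumed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (not those of any measured network)
lambdas = [1.5, 4.0]        # BS per km^2 for the two tiers
powers  = [10.0, 1.0]       # transmit powers P_j (arbitrary units)
K, beta = 7e-2, 3.8         # path-loss constants, l(x) = (K|x|)^beta
sigma_dB, N0 = 10.0, 1e-10  # log-normal shadowing std (dB) and noise power
R_obs = 30.0                # radius (km) of the simulation window

def sample_network():
    """Positions and powers of all BS in a disc around the origin."""
    xs, ps = [], []
    for lam, P in zip(lambdas, powers):
        n = rng.poisson(lam * np.pi * R_obs**2)
        r = R_obs * np.sqrt(rng.uniform(size=n))          # uniform in the disc
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
        xs.append(np.column_stack([r * np.cos(phi), r * np.sin(phi)]))
        ps.append(np.full(n, P))
    return np.vstack(xs), np.concatenate(ps)

def sinr_at_origin():
    X, P = sample_network()
    S = np.exp(rng.normal(0.0, sigma_dB * np.log(10.0) / 10.0, size=len(P)))
    L = (K * np.linalg.norm(X, axis=1))**beta / (P * S)   # propagation loss
    k = np.argmin(L)                                      # serving BS
    interference = np.sum(1.0 / np.delete(L, k))          # full activity assumed
    return (1.0 / L[k]) / (N0 + interference)             # SINR at the origin

print(np.mean([sinr_at_origin() for _ in range(200)]))
\end{verbatim}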
\subsection{Network users}
\subsubsection{User-arrival process}
\label{sss.arrival}
We consider variable bit-rate (VBR) traffic; i.e., users arrive to the network
and require to transmit some volumes of data at bit-rates induced by the
network. We assume a {\em homogeneous time-space Poisson point process
of user arrivals} of intensity $\gamma$ arrivals per second per
$\mathrm{km}^2$. This means that the time between two successive
arrivals in a given zone of surface $S$ is exponentially distributed
with parameter $\gamma\times S$, and all users arriving
to this zone take their locations independently and uniformly.
The time-space process of user arrivals is independently marked
by random, identically distributed volumes of data the users want to
download from their respective serving BS. These volumes are arbitrarily
distributed and have mean $1/\mu$ bits.
The above arrival process
induces the {\em traffic demand per surface unit}
$\rho=\gamma/\mu$
expressed in bits per second per $\mathrm{km}^{2}$.
The {\em traffic demand in the cell of BS $X\in\Phi$} equals
\begin{equation}
\rho\left( X\right) =\rho\left\vert V\left( X\right) \right\vert, \label{e.TrafficDemand}
\end{equation}
where $\left\vert A\right\vert $\ denotes the surface of the set $A$;
$\rho(X)$ is expressed in bits per second.
\subsubsection{Processor-sharing service policy}
We shall assume that the BS allocates an equal fraction of its
resources to all users it serves at a given time. Thus, when there are $k$ users in a cell,
each user obtains a bit-rate equal to its peak bit-rate divided by $k$.
More explicitly, if a base station
located at $X$ serves $k$ users located at $y_{1},y_{2},\ldots,y_{k}\in
V\left( X\right) $\ then the bit-rates of these users are equal to
$R\left( \mathrm{SINR}\left( y_{j},\Phi\right) \right)/k $,
$j= 1,2,\ldots,k $, respectively.
Users having completed their service (download of the requested
volumes) leave the system.
\subsection{Time-averaged cell characteristics}
\label{s.CellCharacteristics}
Given the network realization (including the shadowing and the cell activity
factors), the performance of each cell $V(X)$ of $X\in\Phi$ corresponds to
a (spatial version of the) processor-sharing queue. More specifically,
due to complete independence property of the Poisson process of
arrivals, the temporal dynamics of these queses are independent. Thus,
we can use the classical queuing-theoretic results regarding
processor-sharing queues to describe the time-averaged (steady-state)
characteristic of all individual cells.
Besides the traffic demand $\rho(X)$ already specified in Section~\ref{sss.arrival},
these characteristics are: the critical traffic $\rho_c(X)$, cell load $\theta(X)$, mean number of
users $N(X)$, average user throughput $r(X)$, busy (non-idling)
probability $p(X)$.
In what follows we present these characteristics in a form tailored to
our wireless context; cf~\cite{BonaldProutiere2003}.
All these characteristics can be seen as further, general
(non-independent) marks of the point process $\Phi$ and depend also on
BS types, their activity factors and shadowing processes.
\subsubsection{Critical traffic}
The processor-sharing queue of the base station $X\in\Phi$ is stable
if and only if its traffic demand $\rho(X)$ is smaller than
the critical value which is the
harmonic mean of the peak bit-rates over the cell; cf~\cite{KarrayJovanovic2013Load}
\begin{equation}
\rho_{\mathrm{c}}\left( X\right) :=\left\vert V\left( X\right) \right\vert
\left( \int_{V\left( X\right) }R^{-1}\left( \mathrm{SINR}\left(
y,\Phi\right) \right) dy\right)^{-1}\,. \label{e.CriticalTraffic}
\end{equation}
\subsubsection{Cell load} We define it as the ratio between
the (actual) cell traffic demand and its critical value
\begin{equation}
\theta\left( X\right):=\frac{\rho\left( X\right) }{\rho_{\mathrm{c}}\left( X\right) }=\int_{V\left( X\right) }\rho R^{-1}\left(
\mathrm{SINR}\left( y,\Phi\right) \right) dy\,. \label{e.Load1}
\end{equation}
\subsubsection{Mean number of users}
The mean number of users in the steady state of the processor sharing
queue at BS $X\in\Phi$ can be expressed as
\begin{equation}\label{e.UsersNumber1}
N\left( X\right):=
\begin{cases}\displaystyle
\frac{\theta\left( X\right) }{1-\theta\left( X\right) }&
\text{if $\theta(X)<1$}\\
\infty&\text{otherwise}\,.
\end{cases}
\end{equation}
\subsubsection{User throughput}
\label{sss.throughput-cell} The user throughput is defined as the ratio between the
mean volume request $1/\mu$ and the mean typical-user service time in
the cell $X$. By the Little's law it can be expressed as
\begin{equation}\label{e.UserThroughput1}
r\left( X\right):=\frac{\rho(X)}{N(X)}\,.
\end{equation}
\subsubsection{Busy probability} The probability that the BS
$X\in\Phi$ is not idling (serves at least one user) in the steady
state is equal to
\begin{equation}
p\left( X\right) =\min\left( \theta\left( X\right) ,1\right)\,.
\label{e.Proba}
\end{equation}
It is easy to see that all the above characteristics (marks) of the BS
$X\in\Phi$ can be expressed using the traffic demand $\rho(X)$ and
the cell load $\theta(X)$ in the following order
\begin{align}\label{e.CriticalTraffic2}
\rho_c(X)&=\frac{\rho(X)}{\theta(X)}\,,\\
r(X)&=\max(\rho_{\mathrm{c}}\left( X\right) -\rho\left(
X\right) ,0) \label{e.UserThroughput}\,,\\%
N\left( X\right) &=\frac{\rho\left( X\right) }{r\left( X\right) }\,.
\label{e.UsersNumber}
\end{align}
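Given a sample of SINR values at locations drawn uniformly in a cell, all of the above characteristics can be estimated as in the following sketch; the Shannon-type peak bit-rate used here is only an example of an increasing function $R$ (the specific form used for the numerical results is discussed in Section~\ref{s.NumericalResults}), and all numerical inputs are illustrative.
\begin{verbatim}
import numpy as np

W = 20e6   # illustrative bandwidth in Hz

def peak_rate(sinr):
    """Example peak bit-rate function R(SINR); any increasing R may be used."""
    return W * np.log2(1.0 + sinr)

def cell_characteristics(sinr_samples, cell_area, rho):
    """Estimate the time-averaged characteristics of one cell.

    sinr_samples: SINR at locations drawn uniformly in the cell V(X),
    cell_area:    |V(X)| in km^2,
    rho:          traffic demand per surface unit (bit/s/km^2).
    """
    traffic = rho * cell_area                                  # rho(X)
    rho_c = 1.0 / np.mean(1.0 / peak_rate(sinr_samples))       # harmonic mean
    load = traffic / rho_c                                     # theta(X)
    busy = min(load, 1.0)                                      # p(X)
    thpt = max(rho_c - traffic, 0.0)                           # r(X)
    users = traffic / thpt if thpt > 0 else float('inf')       # N(X)
    return dict(traffic=traffic, rho_c=rho_c, load=load,
                busy=busy, throughput=thpt, users=users)

print(cell_characteristics(np.array([0.5, 2.0, 8.0]), cell_area=0.6, rho=1e5))
\end{verbatim}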
\subsection{Spatial inter-dependence of cells --- cell load equations}
\label{ss.fixedPoing}
The individual cell characteristics described in the previous section depend on
the location of all base stations, shadowing realizations but also on
the cell activity factors $\varphi_X$, $X\in\Phi$, introduced in
Section~\ref{sss.Cells} to weight the extra cell interference in the
SINR expression, and which have been arbitrary numbers between
0 and 1 up to now. These factors are supposed to account for the fact that
BS might not transmit with their respective maximal powers $P_j$ depending
on the BS types $j=1,\ldots,J$ all the time.
It is quite natural to think that BS transmit only when they serve at least one
user.~\footnote{Analysis of more sophisticated power control schemes is beyond
the scope of this paper.} Taking this fact into account in an exact way
requires introducing in the denominator of~(\ref{e.SINR}) the indicators that
a given station $Y\in\Phi$ at a given time is not idling. This, in
consequence, would lead to the probabilistic dependence of the service process
at different cells, thus invalidating the explicit expressions for their
characteristics presented in Section~\ref{s.CellCharacteristics}
and the model becomes non-tractable.~\footnote{We are even not aware
of any result regarding the
stability of such a family of dependent queues.}
For this reason, we take into account whether $Y$ is idling or not in a
simpler way, multiplying its maximal transmitted power by the \emph{probability} $p(Y)$ that
it is busy in the steady state.
In other words, in the SINR expression~(\ref{e.SINR}) we take
$\varphi_Y=p(Y)$
where $p(Y)$ is given by~(\ref{e.Proba}); i.e.,
\begin{equation}
\mathrm{SINR}\left( y,\Phi\right) =\frac{\frac{1}{L_{X}\left( y\right)}}{N+\sum_{Y\in\Phi\backslash\left\{ X\right\} }\frac{\min\left( \theta\left( Y\right) ,1\right) }{L_{Y}\left( y\right) }}\,.
\label{e.LoadInterference}
\end{equation}
We call this
model {\em (load)-weighted interference model}. Clearly this assumption means
that $\theta(X)$ cannot be calculated independently for all cells but
rather are solutions of the following fixed point problem, which we
call {\em cell load equations}
\begin{equation}
\theta\left( X\right) =\rho\int_{V\left( X\right) }R^{-1}\left(
\frac{\frac{1}{L_{X}\left( y\right) }}{N+\sum_{Y\in\Phi\backslash\left\{
X\right\} }\frac{\min\left( \theta\left( Y\right) ,1\right) }{L_{Y}\left( y\right) }}\right) dy\,. \label{e.FixedPoint}
\end{equation}
This is a system of equations which needs to be solved for $\left\{ \theta\left(
X\right) \right\} _{X\in\Phi}$ given network and shadowing
realization. In the remaining part of this paper we assume
that such a solution exists and is unique.~\footnote{Note that the mapping in the right-hand-side of~(\ref{e.FixedPoint})
is increasing in all $\theta(Y)$, $Y\in\Phi$ provided function $R$ is increasing.
Using this property it is easy to see that successive iterations of this mapping
started off $\theta(Y)\equiv 0$ on one hand side and off $\theta(Y)=1$
(full interference model) on the other side,
converge to a minimal and maximal solution of~(\ref{e.FixedPoint}),
respectively. The uniqueness of the solution (in the Poisson or more
general) network is an interesting theoretical question, which is however beyond the scope of this paper.
A very similar problem (with finite number of stations and a discrete traffic demand)
is considered in~\cite{siomina2012analysis}, where the uniqueness of
the solution is proved.}
The other characteristics of each cell are then deduced from the cell
load and traffic demands using the relations described in
Section~\ref{s.CellCharacteristics}.
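Numerically, the cell load equations~(\ref{e.FixedPoint}) can be solved by successive substitution, starting from the idle network $\theta\equiv0$ (or from the fully loaded one $\theta\equiv1$). The following sketch assumes that the propagation losses have been pre-computed on a grid of user locations and that an increasing peak bit-rate function is supplied; it is only an illustration of the iteration, with hypothetical input arrays.
\begin{verbatim}
import numpy as np

def solve_cell_loads(inv_L, cell_of, areas, rho, N0, peak_rate,
                     n_iter=100, tol=1e-6):
    """Fixed-point iteration for the cell load equations.

    inv_L:     (n_points, n_BS) array with 1/L_Y(y) for every grid point y,
    cell_of:   index of the serving BS of each grid point (argmax of inv_L),
    areas:     surface (km^2) represented by each grid point,
    rho:       traffic demand per surface unit,
    N0:        noise power,
    peak_rate: increasing peak bit-rate function R(SINR).
    """
    n_pts, n_bs = inv_L.shape
    theta = np.zeros(n_bs)                      # start from the idle network
    for _ in range(n_iter):
        w = np.minimum(theta, 1.0)              # activity factors min(theta, 1)
        total = inv_L @ w                       # weighted received powers
        signal = inv_L[np.arange(n_pts), cell_of]
        interference = total - signal * w[cell_of]   # exclude the serving BS
        sinr = signal / (N0 + interference)
        contrib = rho * areas / peak_rate(sinr)      # per-point load contribution
        new_theta = np.bincount(cell_of, weights=contrib, minlength=n_bs)
        if np.max(np.abs(new_theta - theta)) < tol:
            break
        theta = new_theta
    return theta
\end{verbatim}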
\section{Model analysis}
\label{s.Analysis}
We begin our analysis by recalling some useful results regarding the Poisson
network model. Next, in Section~\ref{ss.TypicalCell} we present our
main results and in Section~\ref{ss.MeanCell} postulate some
simplified approach inspired by these results.
\subsection{Preliminaries: typical and zero-cell of the multi-tier network}
We briefly recall here the notions of the typical and zero-cell,
usually considered for the Voronoi tessellation and here regarding
our network cells.
Both objects will play their respective roles in the remaining part of
the paper.
We denote by $\Pro$ the
probability corresponding to the stationary distribution of our model
as described in Section~\ref{s.ModelDescription}.
\subsubsection{The typical cell}
\label{e.TypCell}
This is a mathematical formalization of a cell whose BS is ``arbitrarily chosen''
from the set of all stations, without any bias towards its
characteristics, in particular its type and the cell size. The formalization is made
on the ground of Palm theory, where the typical cell $V(0)$ is
this of the BS $X_0=0$ located at the origin under the {\em Palm
probability} $\Pro^0$. By the Slivnyak's theorem the Palm distribution of
the Poisson process corresponds to the homogeneous (stationary) one,
with the ``extra'' point $X_0=0$ added at the origin. In the case of
i.i.d. marked Poisson process, as in our case,
this extra point gets an independent copy of the mark, with
the original mark distribution.
Note that in our network the probability that an ``arbitrarily
chosen'' BS is of type $j$, $j=1,\ldots,J$, is equal to
$\lambda_j/\lambda$. More formally,
\begin{equation}\label{e.PrZ0}
\Pro^0(Z_0=j)=\lambda_j/\lambda\,.
\end{equation}
We remark that the typical cell does not
have any physical existence in a given network. It is rather a useful
mathematical tool, in the sense that
the mathematical expectations under
$\Ex^0$ of the typical cell $V(0)$ characteristics (as the cell
traffic demand $\rho(0)$, cell load $\theta(0)$, etc) can be interpreted
as network-averages of the (already time-averaged) cell performance metrics.
For example the network-averaged traffic demand per cell, considering
all cells or only cells of type $j=1,\ldots,J$, equal,
respectively
\begin{align}\label{e.TypicalTraffic-general}
\bar{\rho}&:=\Ex^0[\rho(0)]=\lim_{\left\vert A\right\vert \rightarrow\infty}\frac{1}{\Phi\left( A\right) }\sum_{X\in\Phi\cap A}\rho\left( X\right)\,,\\
\bar{\rho}_{j}&:=\Ex^0[\rho(0)\,|\,Z_0=j]=\lim_{\left\vert A\right\vert \rightarrow\infty}\frac{1}{\Phi_{j}\left( A\right) }\sum_{X\in\Phi_{j}\cap A}\rho\left( X\right)\,.
\label{e.TypicalTraffic}
\end{align}
where $A$ denotes a disc centered at the origin, of radius
increasing to infinity. The convergence is $\Pro$-almost sure and follows
from the ergodic theorem for point processes (see~\cite[Theorem~13.4.III]{DaleyVereJones2003}).
We define similarly the network-average load (overall and per cell type)
\begin{align}
\bar{\theta}&:=\Ex^0[\theta(0)]\,,\\
\bar{\theta}_{j}&:=\Ex^0[\theta(0)\,|\,Z_0=j]\quad j=1,\ldots,J.
\end{align}
A convergence analogous to~(\ref{e.TypicalTraffic-general}) and (\ref{e.TypicalTraffic})
holds for each of the previously considered local characteristics.
However (at least for Poisson network) it is not customary to consider directly
$\Ex^0[N(0)]$ since the (almost sure) existence of some (even arbitrarily small) fraction of BS
$X$ which are not stable (with $\rho(X)\ge\rho_{c}(X)$, hence $N(X)=\infty$)
makes $\Ex^{0}[N(0)]=\infty$.~\footnote
For a well dimensioned network one does not expect
unstable cells. For a perfect hexagonal network model $\Phi$ without
shadowing \emph{all} cells
are stable or unstable depending on the value of the per-surface traffic
demand $\rho$. For an (infinite) homogeneous Poisson model $\Phi$,
for arbitrarily small $\rho$ there exists a non-zero fraction of BS
$X\in\Phi$, which are non-stable. This fraction is very small for reasonable
$\rho$, allowing to use Poisson to study QoS metrics which, unlike
$\mathbf{E}^{0}[N(0)]$, are not ``sensitive'' to this artifact.
}
Also, as we will explain in what follows, $\Ex^0[r(0)]$ does not
have a natural interpretation. In particular it {\em cannot} be interpreted
as the mean user throughput.
\subsubsection{Zero cell} This is the cell (of the stationarily
distributed network) that covers the origin $0$
of the plane, which plays the role of an arbitrarily fixed location.
The characteristics of the zero-cell correspond to the characteristics
of the cell which serves the typical user. Clearly this is a size-biased choice and indeed the zero cell has
different distributional characteristics from the typical cell.
Let us denote by $X^*$ the location of the BS serving the zero-cell
and its type by $Z^*$.
We will recall now a useful result regarding
multi-tier networks, from which we will derive the distribution of
$Z^*$; cf~\cite[Lemma~1]{equivalence2013}.
\begin{lemma}\label{l.Lambda}
Assume that $\mathbb{E}\left[ S^{2/\beta}\right] <\infty$. Then $\hat{\Phi
}=\left\{ (L_n=L_{n}(0),Z_n)\right\} _{n}$ is a Poisson point process on
$[0,\infty)\times\{1,\ldots,J\}$ with intensity measure
\begin{equation}\label{e.Lambda}
\Lambda\left( (0,t]\times\{j\}\right):=\Ex[\#\{n:L_n\le t, Z_n=j\}] =a_jt^{2/\beta}\,
\end{equation}
$t\ge 0$, $j=1,\ldots,J$, where
\begin{equation}\label{e.aj}
a_{j}:=\frac{\pi\mathbb{E}\left[ S^{2/\beta}\right] }{K^{2}}\lambda_{j}P_{j}^{2/\beta}\,.
\end{equation}
\end{lemma}
\begin{remark}\label{r.Z*}
The form~(\ref{e.Lambda}) of the intensity measure $\Lambda$ of
$\hat{\Phi}$ allows us
to conclude that the point process $\{L_n(0)\}_n$ of propagation-loss values (between all base stations and the
origin) is a Poisson point process of intensity
$\Lambda((0,t]\times\{1,\ldots,J\})=at^{2/\beta}$, where
\begin{equation}\label{e.a}
a:=\sum_{j=1}^Ja_j\,.
\end{equation}
Moreover, the types $Z_n$ of the BS corresponding to the respective
propagation-loss values $L_n$ constitute i.i.d. marking of this latter
process of propagation-loss values,
with the probability that an arbitrarily chosen propagation-loss
comes from a BS of type $j$ having probability $a_j/a$.
In particular, for the serving station (offering the smallest propagation-loss)
we have
\begin{equation}\label{e.PrZ*}
\Pro\{\,Z^*=j\,\}=a_j/a\,.\footnote{Interpreting~(\ref{e.PrZ0}) and (\ref{e.PrZ*}) we can say
that an arbitrarily chosen BS is of type $j$ with probability
$\lambda_j/\lambda$, while an arbitrarily chosen propagation-loss
(measured at the origin) comes from a BS of type $j$ with
probability $a_j/a$.}
\end{equation}
\end{remark}
Our second remark on the result of Lemma~\ref{l.Lambda} regards an
equivalent way of generating the Poisson point process of
intensity~(\ref{e.Lambda}).
\begin{remark}\label{r.equivalence}
Consider a homogeneous Poisson network of intensity $\lambda$, in
which all stations emit with the same power
\begin{equation}
P=\left( \sum_{j=1}^J\frac{\lambda_{j}}{\lambda}P_{j}^{2/\beta}\right)
^{\beta/2}\, \label{e.Power}
\end{equation}
and assume the same model of the propagation-loss with shadowing as described
in Section~\ref{sss.Propagation}. Let us ``artificially'' (without
altering the power $P$) mark these
BS by randomly, independently selecting a mark $j=1,\ldots,J$ for each
station with probability $a_j/a$. A direct
calculation shows that the marked propagation-loss process observed
in this homogeneous network by a user located at the origin, analogous
to $\hat\Phi$, has the same intensity measure $\Lambda$ given
by~(\ref{e.Lambda}). Consequently, the distribution of all user/network
characteristics, which are functionals of the marked propagation-loss
process $\hat\Phi$, can be equivalently calculated using this {\em equivalent homogeneous}
model.
\end{remark}
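For illustration, the equivalence stated in the remark can be exercised numerically.
The following Python sketch (all function and variable names are ours) samples the
propagation losses seen at the origin in two ways: directly from the two-tier
network, and from the equivalent homogeneous model. It assumes the propagation loss
$L_X(0)=(K|X|)^{\beta}/(S\,P_{Z_X})$ with log-normal shadowing, consistent with
Lemma~\ref{l.Lambda} and with the numerical assumptions of Section~\ref{s.NumericalResults}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta, K, sigma_dB = 3.8, 7117.0, 10.0       # path-loss exponent, K in km^-1, dB
lam = np.array([4.447, 0.173])              # BS densities per km^2 (macro, micro)
P_mW = 10 ** (np.array([58.26, 47.42]) / 10)

def shadowing(n):
    return 10 ** (rng.normal(0.0, sigma_dB / 10, n))

def losses_multitier(radius=10.0):
    """Propagation losses and tier marks seen at the origin, tier by tier."""
    L, Z = [], []
    for j, (lj, pj) in enumerate(zip(lam, P_mW)):
        n = rng.poisson(lj * np.pi * radius ** 2)
        r = radius * np.sqrt(rng.random(n))   # uniform points in a disc
        L.append((K * r) ** beta / (shadowing(n) * pj))
        Z.append(np.full(n, j))
    return np.concatenate(L), np.concatenate(Z)

def losses_equivalent(radius=10.0):
    """Same law: homogeneous network with power P and i.i.d. tier marks."""
    a = lam * P_mW ** (2 / beta)              # proportional to (e.aj)
    P_eq = (a.sum() / lam.sum()) ** (beta / 2)
    n = rng.poisson(lam.sum() * np.pi * radius ** 2)
    r = radius * np.sqrt(rng.random(n))
    L = (K * r) ** beta / (shadowing(n) * P_eq)
    Z = rng.choice(len(lam), size=n, p=a / a.sum())
    return L, Z

# the smallest loss (serving BS) should have the same distribution in both cases
print(np.median([losses_multitier()[0].min() for _ in range(200)]))
print(np.median([losses_equivalent()[0].min() for _ in range(200)]))
\end{verbatim}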
\subsection{Global network performance metrics}
\label{ss.TypicalCell}
The objective of this section is to express pertinent,
global network characteristics
and relate them
to the mean throughput of the typical user of the network.
\subsubsection{Traffic and load per cell}
The mean traffic demand and load of the
typical cell, globally and per cell type, can be expressed as follows.
\begin{proposition}
\label{p.TypicalDemand}
We have for the traffic demand
\begin{align}
\bar{\rho}& =\frac{\rho}{\lambda}\nonumber\\
\bar{\rho}_{j} &
=\bar{\rho}\frac{P_{j}^{2/\beta}}{P^{2/\beta}},\quad j=1,\ldots,J,
\label{e.TrafficPerType}
\end{align}
where $P$\ is the ``equivalent network'' power given by~(\ref{e.Power}).
\end{proposition}
\begin{proof}
We have
\[
\bar{\rho}=\Ex^{0}\left[ \rho\left( 0\right) \right] =\rho
\Ex^{0}\left[ \left\vert V\left( 0\right) \right\vert \right]
=\frac{\rho}{\lambda}\,
\]
where the second equality is due to~(\ref{e.TrafficDemand}) and the last
one follows from the inverse formula of Palm calculus~\cite[Theorem
4.2.1]{BaccelliBlaszczyszyn2009T1} (which may be extended to the case where
the cell associated to each BS is not necessarily the Voronoi cell; the only
requirement is that the user located at $0$ belongs to a unique cell almost
surely). Similarly,
\begin{align*}
\bar{\rho}_{j}
&=\rho\Ex^{0}\left[ \left\vert V\left( 0\right) \right\vert
\,\big|\,Z_0=j\right]
=\rho\frac{\Ex^{0}\left[ \left\vert V\left( 0\right) \right\vert
\times\ind\left\{ Z_0=j\right\} \right] }{\mathbb{P}^{0}\left(
Z_0=j\right) }\nonumber\\
& =\frac{\rho}{\lambda}\frac{\mathbb{P}\left( Z^{\ast}=j\right)
}{\mathbb{P}^{0}\left( Z_0=j\right) }\nonumber
=\bar\rho\frac{a_{j}/a}{\lambda_{j}/\lambda}
=\bar\rho \frac{P_{j}^{2/\beta}}{P^{2/\beta}}\,
\end{align*}
where the third equality follows again from the inverse formula of
Palm calculus and the two remaining ones from~(\ref{e.PrZ0}), (\ref{e.PrZ*}) and
(\ref{e.aj}), (\ref{e.a}), respectively.
\end{proof}
\begin{proposition}
\label{p.TypicalLoad} We have for the cell load
\begin{align}
\bar{\theta} & =\frac{\rho}{\lambda}\Ex\left[ R^{-1}\left( \mathrm{SINR}
\left( 0,\Phi\right) \right) \right]\,, \label{e.Load}\\
\bar{\theta}_{j} &
=\bar{\theta}\frac{P_{j}^{2/\beta}}{P^{2/\beta}},\quad j=1,\ldots, J\,,
\label{e.LoadPerType}
\end{align}
where $P$\ is given by~(\ref{e.Power}).
\end{proposition}
\begin{proof}
Denote $g\left( y,\Phi\right) =R^{-1}\left(
\mathrm{SINR}\left( y,\Phi \right) \right)$.
Along the same lines as in the proof of Proposition~\ref{p.TypicalDemand},
by the inverse formula of Palm calculus
$\bar{\theta} =\Ex^{0}\left[ \theta\left( 0\right) \right]
=\frac{\rho}{\lambda
}\Ex\left[ g\left( 0,\Phi\right) \right]$.
Similarly
\begin{align*}
\bar{\theta}_{j}&=\Ex^{0}\left[ \theta\left( 0\right) \,|\,Z_0=j\right]\\
& =\rho\Ex^{0}\left[ \int_{V\left( 0\right) }g\left( y,\Phi\right)
\ind\left\{ Z_0=j\right\} dy\right] /\mathbb{P}^{0}\left(
Z_0=j\right) \\
& =\frac{\rho}{\lambda}\Ex\left[ g\left( 0,\Phi\right) \ind\left\{
Z^{\ast}=j\right\} \right] /\mathbb{P}^{0}\left( Z_0=j\right) \\
&=\frac{\rho}{\lambda}\Ex\left[ g\left(
0,\Phi\right) \right] \frac{a_{j}/a}{\lambda_{j}/\lambda}\,
\end{align*}
where the third equality follows from the inverse formula of Palm
calculus, and the fourth equality from the independent marking of the
propagation-loss process by the BS types; cf Remark~\ref{r.Z*}.
\end{proof}
\subsubsection{Number of users per cell and mean user throughput}
For the reasons already explained at the end of
Section~\ref{e.TypCell} it is more convenient to average the number of
users per cell in the stable part of the network. To this end we define
the network-averaged number of users per {\em stable} cell as
\begin{align*}
\bar{N}
:=\Ex^{0}\left[ N\left( 0\right) \ind\left\{ \theta\left( 0\right)
<1\right\} \right]
\end{align*}
and similarly for each cell tier $j=1,\ldots,J$\
\begin{align}
\bar{N}_{j}
: =\Ex^{0}\left[ N\left( 0\right) \ind\left\{ \theta\left( 0\right)
<1\right\} |Z_0=j\right]\,. \nonumber
\end{align}
Note that the mean traffic demand $\bar\rho$, load $\bar\theta$ and number
of users $\bar N$ per (stable) cell characterize network performance from the
point of view of its typical (or averaged) cell. We move now to a
typical user performance metric, namely its mean throughput. This
latter QoS metric is traditionally (in queueing theory) defined as
the ratio of the mean data volume requested by the typical user to the mean
service duration of the typical user. In what follows we apply this
definition (already retained at the local, cell level in
Section~\ref{sss.throughput-cell}) globally to the whole network,
filtering out the impact of unstable cells.
Denote by $\mathcal{S}_{j}$\ the union of stable cells of type $j=1,\ldots, J$; that
is
$\mathcal{S}_{j}=\bigcup_{X\in\Phi_{j}:\theta\left( X\right) <1}V\left(
X\right)$
and $\mathcal{S}=\bigcup_{j=1}^J\mathcal{S}_{j}$.
Let $\pi^\calS$ ($\pi^\calS_j$) be the probability that the typical
user is served in a stable cell (of type $j=1,\ldots,J$)
\begin{align*}
\pi^\calS & =\mathbb{P}\left( \theta\left( X^{\ast}\right) <1\right)\\
\pi^\calS_{j} & =\mathbb{P}\left( \theta\left( X^{\ast}\right) <1\,|\,Z^{\ast}=j\right) ,\quad j=1,\ldots, J\,,
\end{align*}
where (recall) $X^{\ast}$ is the BS whose cell covers the origin and
$Z^*$ is its type.
Note that
$\pi^{\mathcal{S}}_j=a/a_j\mathbf{E}[\mathbf{1}\{0\in\mathcal{S}_j\}]$ and
thus it can be related to the {\em volume fraction} of the stable
part of the network served by tier $j$ and similarly for
$\pi^\calS=\mathbf{E}[\ind\{0\in\mathcal{S}\}]$.
We define the {\em (global) mean user throughput} as
\[
\bar{r}:=\lim_{\left\vert A\right\vert \rightarrow\infty}\frac{1/\mu
}{\text{mean call duration in }A\cap\mathcal{S}}
\]
and for each cell type $j=1,\ldots,J$,
\[
\bar{r}_{j}:=\lim_{\left\vert A\right\vert \rightarrow\infty}\frac{1/\mu
}{\text{mean call duration in }A\cap\mathcal{S}_{j}}\,
\]
where $A$ denotes a disc centered at the origin of radius
increasing to infinity. These limits exist almost surely by the
ergodic theorem; cf~\cite[Theorem~13.4.III]{DaleyVereJones2003}.
Here is our main result regarding this mean user QoS. It can be seen
as a consequence of a spatial version of Little's law.
\begin{proposition}
\label{p.TypicalThroughput}We have for the mean user throughput
\begin{align}
\bar{r} & =\frac{\bar{\rho}}{\bar{N}}\pi^\calS\nonumber\\
\bar{r}_{j} & =\frac{\bar{\rho}_{j}}{\bar{N}_{j}}\pi^\calS_{j},\quad j=1,\ldots, J
\label{e.TypicalThroughput}
\end{align}
\end{proposition}
\begin{proof}
Let $W_{j}=\bigcup_{X\in A\cap\mathcal{S}_{j}}V\left( X\right) $. Consider
call arrivals and departures to $W_{j}$.
By Little's law
\[
N^{W_{j}}=\gamma\left\vert W_{j}\right\vert T^{W_{j}}\,
\]
where $T^{W_{j}}$ is the mean call duration in $W_{j}$
and $N^{W_{j}}$ is the steady-state mean number of users in $W_{j}$.
Thus mean user throughput, with users restricted to $W_{j}$, equals
\begin{align*}
\frac{1/\mu}{T^{W_{j}}} & =\frac{\rho\left\vert W_{j}\right\vert }{N^{W_{j}
}}
=\rho\frac{\sum_{X\in A\cap\Phi}\left\vert V\left( X\right) \right\vert
\ind\left\{ \theta\left( X\right) <1,X\in\Phi_{j}\right\} }{\sum_{X\in
A\cap\Phi}N\left( X\right) \ind\left\{ \theta\left( X\right) <1,X\in\Phi
_{j}\right\} }\,
\end{align*}
Letting $\left\vert A\right\vert \rightarrow\infty$, it follows from the ergodic
theorem that
\[
\bar{r}_{j}=\rho\frac{\Ex^{0}\left[ \left\vert V\left( 0\right)
\right\vert \ind \left\{ \theta\left( 0\right) <1,Z_0=j\right\}
\right] }{\Ex^{0}\left[ N\left( 0\right) \ind \left\{ \theta\left(
0\right) <1,Z_0=j\right\} \right] }\,
\]
By the inverse formula of Palm calculus
\[
\Ex^{0}\left[ \left\vert V\left( 0\right) \right\vert \ind \left\{
\theta\left( 0\right) <1,Z_0=j\right\} \right] =\frac{1}{\lambda
}\mathbb{P}\left( \theta\left( X^{\ast}\right) <1,Z^{\ast}=j\right)
\]
and consequently
\begin{align*}
\bar{r}_{j} & =\frac{\rho}{\lambda}\frac{\mathbb{P}\left( \theta\left(
X^{\ast}\right) <1,Z^{\ast}=j\right) }{\mathbb{P}^{0}\left(
Z_0=j\right) \bar{N}_{j}}\\
&=\frac{\bar{\rho}_{j}}{\bar{N}_{j}}\mathbb{P}\left( \theta\left( X^{\ast
}\right) <1|Z^{\ast}=j\right)
=\frac{\bar{\rho}_{j}}{\bar{N}_{j}}\pi^\calS_{j}\,
\end{align*}
The expression for $\bar{r}$\ may be proved along the same lines as above.
\end{proof}
The mean number of users per stable cell and the
mean user throughput in the stable part of the network do not
admit explicit analytic expressions. We calculate these expressions
by Monte-Carlo simulation of the respective expectations with respect
to the distribution of the Poisson network model. We call this
semi-analytic approach the {\em typical cell} approach.
\subsection{Mean cell approach}
\label{ss.MeanCell}
We will propose now a more heuristic approach, in which we try to capture
the performance of the heterogeneous network considering $J$ simple
M/G/1 processor sharing queues related to each other via their cell
loads, which solve a simplified version of the cell load equation.
Recall that in the original approach, in the cell load fixed point equation~(\ref{e.FixedPoint})
we have an unknown cell load $\theta(X)$ for each cell of the
network. Recall
also that knowing all these cell loads and the cell traffic demands
(which depend directly on the cell surfaces) we can
calculate all other cell characteristics. We will consider now a
simpler ``mean'' cell load fixed point equation in which all cells of
a given type $j=1,\ldots,J$ share the same constant
unknown $\tilde\theta_j$.~\footnote{Recall that~(\ref{e.FixedPoint}) is already a
simplification of the reality in which the extra-cell interference
should be weighted by the dynamic (evolving in time) factors capturing
cells' activity.}
Specifically, in analogy to~(\ref{e.LoadPerType}), we assume that the new
unknowns~$\tilde\theta_j$ are related to each other by
\begin{align}
\tilde\theta_{j}&=\tilde\theta\frac{P_{j}^{2/\beta}}{P^{2/\beta}},\quad j=1,\ldots,J\, \label{e.TildeThetaj}
\end{align}
where $P$ is given by~(\ref{e.Power}) and $\tilde\theta$ solves the following equation
\begin{equation}
\tilde{\theta}=\frac{\rho}{\lambda}\Ex\left[ R^{-1}\left( \frac
{\frac{1}{L_{X^*}\left( 0\right) }}{N+\tilde\theta \sum_{j=1}^J\frac{P_{j}^{2/\beta}}{P^{2/\beta}}
\sum_{Y\in\Phi_{j}\setminus\{X^*\} }\frac{1}{L_{Y}\left(
0\right) }}\right) \right]\, \label{e.meanFixedPoint}
\end{equation}
The mean cell load fixed point equation hence boils down to an equation in a single variable~$\tilde\theta$.
Note that the argument of $R^{-1}$ in~(\ref{e.meanFixedPoint}) is a
functional of the marked path-loss process $\hat\Phi$ and thus
the expectation in this expression can be evaluated using the equivalent homogeneous
model described in Remark~\ref{r.equivalence}.
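To make this evaluation concrete, a minimal Python sketch is given below. It
solves~(\ref{e.meanFixedPoint}) by a damped Picard iteration, with the right-hand
side estimated by Monte Carlo over snapshots of the equivalent homogeneous
network. For readability it is written for a single tier ($J=1$, so the weights
$P_j^{2/\beta}/P^{2/\beta}$ equal one), it reads $R^{-1}$ as the reciprocal peak
bit-rate $1/R(\mathrm{SINR})$, it uses the rate function~(\ref{e.Shannon}) and
the numerical values of Section~\ref{s.NumericalResults}, and all names are ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
beta, K, sigma_dB = 3.8, 7117.0, 10.0   # path loss and shadowing parameters
lam, W = 4.62, 5000.0                   # BS per km^2; bandwidth in kHz (rates in kbps)
P_mW = 10 ** (58.03 / 10)               # equivalent power (e.Power)
N_mW = 10 ** (-96.0 / 10)               # noise power
rho_bar = 500.0                         # mean traffic demand per cell, kbps

def rate_kbps(sinr):
    # peak bit-rate (e.Shannon): Monte Carlo over Rayleigh fading |H|^2 ~ Exp(1)
    h2 = rng.exponential(1.0, 2000)
    return 0.3 * W * np.mean(np.log2(1.0 + h2 * sinr))

def received_powers(radius=10.0):
    # powers 1/L_Y(0) received at the origin from one Poisson snapshot
    n = rng.poisson(lam * np.pi * radius ** 2)
    r = radius * np.sqrt(rng.random(n))
    s = 10 ** (rng.normal(0.0, sigma_dB / 10, n))
    return P_mW * s / (K * r) ** beta

def rhs(theta, n_snap=200):
    # Monte Carlo estimate of the right-hand side of (e.meanFixedPoint), J = 1
    vals = []
    for _ in range(n_snap):
        p = received_powers()
        sig = p.max()                    # serving BS = smallest propagation loss
        interf = theta * (p.sum() - sig) # interferers active a fraction theta of time
        vals.append(1.0 / rate_kbps(sig / (N_mW + interf)))
    return rho_bar * np.mean(vals)

theta = 0.5
for _ in range(30):                      # damped Picard iteration
    theta = 0.5 * theta + 0.5 * rhs(theta)
print("mean cell load ~", round(theta, 3))
\end{verbatim}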
By the {\em mean cell of type} $j=1,\ldots,J$ we understand a (virtual) processor
sharing queue with the traffic demand
$$\tilde{\rho}_j:=\bar{\rho}_j=\frac{\rho}{\lambda}\frac{P_j^{2/\beta}}{P^{2/\beta}}\, $$
and the traffic load $\tilde\theta_j$ given by~(\ref{e.TildeThetaj}),
where $\tilde\theta$ is the solution of~(\ref{e.meanFixedPoint}).
The remaining mean cell characteristics (the critical traffic demand, user
throughput and the number of users) are related to these two ``primary''
characteristics in analogy to~(\ref{e.CriticalTraffic2}),
(\ref{e.UserThroughput}) and (\ref{e.UsersNumber}) via
\begin{align}\label{e.menCriticalTraffic2}
\tilde{\rho_c}_j&:=\frac{\tilde\rho_j}{\tilde\theta_j}\,,\\
\tilde r_j&:=\max(\tilde{\rho_{\mathrm{c}}}_j -\tilde\rho_j ,0) \label{e.menUserThroughput}\,,\\%
\tilde N_j&:=\frac{\tilde \rho_j }{\tilde r_j }\,,
\label{e.meanUsersNumber}
\end{align}
$j=1,\ldots,J$.
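The mapping from the two primary characteristics to the remaining ones is
elementary; a few lines of Python make it explicit. The names are ours, and the
convention $\tilde r_j=0$, $\tilde N_j=\infty$ for an overloaded mean cell is an
assumption consistent with~(\ref{e.menUserThroughput}) and~(\ref{e.meanUsersNumber}).
\begin{verbatim}
import math

def mean_cell_metrics(rho_j, theta_j):
    """Derived characteristics of the mean cell of one type."""
    rho_c = rho_j / theta_j                 # critical traffic demand
    r = max(rho_c - rho_j, 0.0)             # mean user throughput
    N = rho_j / r if r > 0 else math.inf    # mean number of users
    return rho_c, r, N

# e.g. a mean cell with traffic demand 500 kbps and load 0.4
print(mean_cell_metrics(500.0, 0.4))        # (1250.0, 750.0, 0.666...)
\end{verbatim}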
We will also consider a (global) {\em mean cell} having, respectively,
the traffic demand and cell load given by
$\tilde{\rho}:=\bar{\rho}=\frac{\rho}{\lambda}$ and $\tilde\theta$,
and the remaining characteristics $\tilde\rho_c,\tilde r,\tilde N$
given, respectively by~(\ref{e.menCriticalTraffic2}),
(\ref{e.menUserThroughput}) and (\ref{e.meanUsersNumber}) where
the subscript $j$ is dropped.
In the next section we shall evaluate the mean cell approximation (both globally and per type)
by comparison to the characteristics of the typical cell obtained both from
simulation and from real field measurements.
\section{Numerical results and model validation}
\label{s.NumericalResults}
In this section we present numerical results of the analysis of our
model and compare them to the corresponding statistics obtained from
some real field measurements. We show that the obtained results match
the real field measurements.
Our numerical assumptions, to be presented in Section~\ref{sss.NumercialAssumptions},
correspond to an operational network in some big city in Europe in
which two types of BS can be distinguished, conventionally called
macro and micro base stations.~\footnote{Let us explain what we mean
here by macro and micro BS:
Historically, the operator deployed first what we call here macro BS.
The powers of these stations vary slightly around some
mean value as a consequence of local adaptations; we assume
them to be constant.
In order to cope with the increase of the traffic demand, new stations are
added progressively. These new stations, which we call micro BS, emit with
a power about 10 times smaller than that of the macro
BS. Figure~\ref{f.PowerCDF} shows the cumulative distribution
function (CDF) of the antenna powers (without antenna gains).}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]
{PowerCDF_WithoutAntennaGain}
\vspace{-2ex}
\caption{CDF of the emitted antenna powers, without antenna gains, in the
network consisting of micro (power $\le35$dBm) and macro BS (power
$\ge 35$dBm). The average micro BS power is $33.42$dBm and the
average macro BS power is $41.26$dBm. Adding antenna gains, which are equal respectively $14$dBm and
$17$dBm for micro and macro BS, we obtain
$P_{2}=47.42$dBm, $P_{1}=58.26$dBm.
\label{f.PowerCDF}}
\end{center}
\vspace{-3ex}
\end{figure}
The real field measurements are obtained using a methodology described in Section~\ref{sss.Measurements}.
In Section~\ref{ss.Results} the statistics obtained from these
measurements will be compared to the performance of each category of
BS calculated using the approach proposed in the present paper.
\subsection{Model specification}
\subsubsection{Measurements}
\label{sss.Measurements}
The raw data are collected
using a specialized tool used by network maintenance engineers.
This tool has an interface allowing one to fetch the values of several
parameters for every base
station 24 hours a day, at a time scale of one hour. For a given day, for every hour, we obtain information
regarding the BS coordinates, type, power, traffic demand, number of users
and cell load calculated as the percentage of the Transmission Time
Intervals (TTI) scheduled for transmissions. Then we estimate the global cell performance
metrics for the given hour by averaging first over time (this hour) and then
over all considered network cells. The mean user throughput is calculated as the
ratio of the mean traffic demand to the mean number of users.
The mean traffic demand $\rho$ is also used as the input of our model.
Knowing all cell coordinates, their types and the surface of the
deployment region we deduce the network density and fraction of BS in
the two tiers.
\subsubsection{Numerical assumptions}
\label{sss.NumercialAssumptions}
The BS locations are generated as a realization of a
Poisson point process of intensity $\lambda=\lambda_{1}+\lambda_{2}
=4.62$km$^{-2}$ (which corresponds to an average distance between two base
stations of $0.5$km) over a sufficiently large observation window which is
taken to be the disc of radius $2.63$km. The ratio of the micro to macro BS
intensities equals $\lambda_{2}/\lambda_{1}=0.039$.
The powers transmitted by macro and micro BS are $P_{1}=58.26$dBm,
$P_{2}=47.42$dBm, respectively. The power~(\ref{e.Power}) of the ``equivalent''
homogeneous network is $P=58.03$dBm.
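As a sanity check, the equivalent power~(\ref{e.Power}) can be recomputed from
the assumed densities and powers; the following short Python sketch (names ours)
reproduces the value $P\approx 58.03$dBm quoted above.
\begin{verbatim}
import numpy as np

beta = 3.8
w = np.array([1.0, 0.039])          # proportional to (lambda_1, lambda_2)
w = w / w.sum()                     # lambda_j / lambda
P_mW = 10 ** (np.array([58.26, 47.42]) / 10)   # macro, micro powers in mW

P_eq_mW = (w @ P_mW ** (2 / beta)) ** (beta / 2)
print(10 * np.log10(P_eq_mW))       # ~58.03 dBm
\end{verbatim}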
The propagation loss due to distance is $l(x)=(K\left\vert x\right\vert
)^{\beta}$ where $K=7117$km$^{-1}$ and the path loss exponent $\beta=3.8$.
Shadowing is assumed log-normally distributed with standard deviation
$\sigma=10$dB and spatial correlation $0.05$km.
The technology is HSDPA (High-Speed Downlink Packet Access) with MMSE (Minimum
Mean Square Error) receiver in the downlink. The peak bit-rate equals
$30\%$ of the information theoretic capacity of the Rayleigh fading channel
with AWGN; that is
\begin{equation}\label{e.Shannon}
R\left( \mathrm{SINR}\right) =0.3W\Ex\left[ \log_{2}\left( 1+\left\vert
H\right\vert ^{2}\mathrm{SINR}\right) \right]
\end{equation}
where the expectation $\Ex\left[ \cdot\right] $ is with respect to the
Rayleigh fading $H$\ of mean power $\Ex[\left\vert H\right\vert ^{2}]=1$, and
$W=5$MHz\ is the frequency bandwidth.
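For a given SINR, the peak bit-rate~(\ref{e.Shannon}) can be evaluated either by
Monte Carlo over the Rayleigh fading or through the closed form
$\Ex[\ln(1+|H|^{2}s)]=e^{1/s}E_{1}(1/s)$, valid for $|H|^{2}$ exponentially
distributed with unit mean; the following sketch (ours) does both and can be used
to cross-check them.
\begin{verbatim}
import numpy as np
from scipy.special import exp1

W = 5e6                                  # bandwidth in Hz

def rate_mc(sinr, n=200000, seed=0):
    h2 = np.random.default_rng(seed).exponential(1.0, n)
    return 0.3 * W * np.mean(np.log2(1.0 + h2 * sinr))

def rate_closed_form(sinr):
    return 0.3 * W * np.exp(1.0 / sinr) * exp1(1.0 / sinr) / np.log(2.0)

print(rate_mc(10.0) / 1e6, rate_closed_form(10.0) / 1e6)   # Mbit/s, ~4.36
\end{verbatim}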
A fraction $\epsilon=10\%$ of the transmitted power is used by the pilot
channel (which is always transmitted whether the BS serves users or
not).~\footnote{It is taken into account by replacing
$\min(\theta(Y),1)$ in~(\ref{e.FixedPoint}) by $\min(\theta(Y),1)(1-\epsilon)
+\epsilon$. Similar modification concerns $\tilde{\theta}$ in the right-hand-side
of~(\ref{e.meanFixedPoint}).}
The
antenna pattern is described in~\cite[Table A.2.1.1-2]{3GPP36814-900}. The
noise power is $-96$dBm.
\subsection{Results}
\label{ss.Results}
\exclude{
\textcolor{blue}{We consider first the full interference model (i.e. each BS always transmits
at its maximal power even when it has no user to serve). This model is
analyzed by simulation with the typical cell approach or analytically with the
mean cell approach (no measurements are available in the full interference case).
Figure~\ref{f.FullInterference_Load} shows the mean cell load of the typical
cell $\bar{\theta}$ and the stable fraction of the network $\pi$ obtained from
simulations, as well as the load of the mean cell $\tilde{\theta}$ calculated
analytically, versus mean traffic demand per cell $\rho/\lambda$. This figure
confirms that the typical cell and the mean cell models have the same load
both globally and for each category of BS.
Figure~\ref{f.FullInterference_UsersNumber} shows the mean number of users per
cell $\bar{N}$ (obtained from simulations) and the analytically calculated
number of users in the mean cell $\tilde{N}$ versus mean traffic demand per
cell. Again the mean cell reproduces well the results of the typical cell at
least for moderate traffic demands; i.e. as long as the stable fraction of the
network $\pi$\ remains close to $1$ as it may be seen in
Figure~\ref{f.FullInterference_Load}.
Finally, Figure~\ref{f.FullInterference_UserThroughput} presents the
dependence of the mean user throughput in the network on the mean traffic
demand per cell obtained using the two approaches: $\bar{r}$ and for the
typical cell and $\tilde{r}$ for the mean cell. Observe again for moderate
traffic demans the good fit between the mean and typical cells both globally
and for each BS category.}
}
We present now the results obtained from the analysis of our two-tier Poisson
model corresponding to a given region of the operational network, adopting both the typical cell approach described in
Section~\ref{ss.TypicalCell} and the mean cell approach explained in Section~\ref{ss.MeanCell}.
The obtained results are compared to the respective quantities
estimated in the given operational network.
Error bars on all figures represent the standard deviation in the
averaging over 10 realizations of the Poisson network in the
Monte-Carlo estimation of the respective expectations.
Figure~\ref{f.PonderedInterference_Load} shows the mean cell load together with the stable fraction of
the network, both globally and separately for the two tiers, as functions of
the mean traffic demand per cell $\bar\rho=\rho/\lambda$.
The mean cell load is calculated using the two approaches: the
typical cell and the mean cell one. The stable fraction of the network
is available only in the typical cell approach.
Figure~\ref{f.PonderedInterference_Load} also shows 24 points
representing the mean cell load estimated from the real field measurements taken during 24 different hours of a given day.
Note the good fit between our results and the network measurements.
Observe also that all real field measurements fall within the range
of the traffic demand ($\bar\rho\le 600$kbps) for which the stable fraction of the network for
both network tiers is very close to~1. This is of course a consequence of
a good dimensioning of the network. Interestingly, these latter metrics
allow us to reveal existing dimensioning margins. Specifically, we
predict that there will be no unstable macro cells for traffic
demands up to slightly less than $\bar\rho=700$kbps, and that the micro cells remain stable for a much
higher traffic demand of order $\bar\rho\approx1000$kbps.
We move now to the mean number of users per cell, presented in
Figure~\ref{f.PonderedInterference_UsersNumber}, again as a function of
the mean traffic demand per cell $\bar\rho=\rho/\lambda$.
Both approaches (typical and mean cell) are adopted and the two
network tiers are analyzed jointly and separately.
As for the load, we also present 24 points corresponding to the network
measurements. Note the good fit between our model results and the network
measurements.
Note also that the prediction of the model performance for
the traffic demand $\bar\rho\ge 700$kbps, where the fraction of
unstable cells is non-negligible (cf
Figure~\ref{f.PonderedInterference_Load}), is much more
volatile. More precisely, the relatively large error-bars
of the mean number of users for $\bar\rho\ge 700$kbps can be explained
by a non-negligible probability of finding a cell whose load is
just below~1. Such a cell still contributes to the calculation of the mean
number of users (as we remove only strictly unstable cells) and makes
the empirical mean very large for this simulation experiment.
Finally, Figure~\ref{f.PonderedInterference_UserThroughput} shows the
relation between the mean user throughput and
the mean traffic demand per cell $\bar\rho$ obtained via the two modeling
approaches and real field measurements. The global performance of the
network and its macro-tier are quite well captured by our two
modeling approaches. The micro-tier analysis via the typical cell and the real
field measurements exhibit significant volatility due to
the relatively small number of such cells in the network.
The mean cell model, however, allows one to predict a macroscopic law in
this regard.
\exclude{
\begin{figure}
[p]
\begin{center}
\includegraphics[width=1\linewidth
{SimulationsPIM/FullInterference_Load
\caption{Cell load versus traffic demand per cell in the full interference
model.
\label{f.FullInterference_Load
\end{center}
\end{figure}
\begin{figure}
[p]
\begin{center}
\includegraphics[
height=2.4735in,
width=3.5276in
{SimulationsPIM/FullInterference_UsersNumber
\caption{Number of users per cell versus traffic demand per cell in the full
interference model.
\label{f.FullInterference_UsersNumber
\end{center}
\end{figure}
\begin{figure}
[ph]
\begin{center}
\includegraphics[
height=2.4735in,
width=3.5276in
{SimulationsPIM/FullInterference_UserThroughput
\caption{Mean user throughput in the network versus traffic demand per cell in
the full interference model.
\label{f.FullInterference_UserThroughput
\end{center}
\end{figure}
}
\begin{figure}[t!]
\begin{center}
\hspace{-1em}
\includegraphics[width=0.9\linewidth]
{SimulationsPIM/PonderedInterference_LoadBB}
\vspace{-2ex}
\caption{Cell load versus traffic demand per cell.
\label{f.PonderedInterference_Load}}
\end{center}
\begin{center}
\hspace{-1em}
\includegraphics[width=0.9\linewidth]
{SimulationsPIM/PonderedInterference_UsersNumberBB}
\vspace{-2ex}
\caption{Number of users per cell versus traffic demand per cell.
\label{f.PonderedInterference_UsersNumber}}
\end{center}
\begin{center}
\hspace{-1em}
\includegraphics[width=0.9\linewidth]
{SimulationsPIM/PonderedInterference_UserThroughputBB}
\vspace{-2ex}
\caption{Mean user throughput in the network versus traffic demand per cell.
\label{f.PonderedInterference_UserThroughput}}
\end{center}
\vspace{-5ex}
\end{figure}
\section{Conclusions}
A heterogeneous cellular network model allowing for different
BS types (having different transmission powers) is proposed, aiming to help in
performance evaluation and dimensioning of real (large, irregular) operational networks.
It allows one to identify key laws relating the performance of the different base station
types. In particular, we show how the mean load of different types of
BS is related in a simple way to their transmission powers.
The results of the model analysis are compared to
real field measurements in an operational network, showing its pertinence.
\addtocounter{section}{1}
\addcontentsline{toc}{section}{References}
{\small
\bibliographystyle{IEEEtran}
\input{HetNets9.bbl}
}
\end{document}
\section{Introduction}
A \textit{repetition} in a sequence $\varphi$ is a subsequence $\rho = x_1 \dots x_{2t}$
of consecutive terms of $\varphi$ such that $x_i = x_{t+i}$ for every $i=1,\dots,t$.
The length of a repetition is hence always even, and the repetition consists of two identical \textit{repetition blocks},
$\rho_1 = x_1\dots x_t$ and $\rho_2 = x_{t+1}\dots x_{2t}$.
A sequence is called \textit{non-repetitive} or \textit{Thue} if it does not contain any repetition.
Surprisingly, as shown by Thue~\cite{Thu06} (see \cite{Ber94} for a translation),
having three distinct symbols suffices to construct non-repetitive sequences of arbitrary lengths.
This result is a fundamental piece in the theory of combinatorics on words. After that,
a number of other concepts related to repetitions have been presented (see e.g.~\cite{BerPer07} for more details).
In this paper, we continue dealing with the following generalization.
A (possibly infinite) sequence $\varphi$ is \textit{$k$-Thue} (or \textit{non-repetitive up to mod $k$})
if every $d$-subsequence of $\varphi$ is Thue, for $1 \le d \le k$.
By a \textit{$d$-subsequence} of $\varphi$ we mean an arithmetic subsequence $x_i x_{i+d} x_{i+2d} \dots$ of $\varphi$.
Consider a sequence
$$
a \ \underline{b} \ d \ \underline{c} \ b \ \underline{c},
$$
which is Thue, but not $2$-Thue, since the $2$-subsequence $b \ c \ c$ is not Thue.
On the other hand,
$$
\underline{a} \ b \ c \ \underline{a} \ d \ b
$$ is $2$-Thue, but not $3$-Thue, due to the repetition in the $3$-subsequence $a \ a$.
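For concreteness, the definition can be checked mechanically; the following
Python sketch (ours) verifies the two examples above.
\begin{verbatim}
def is_thue(seq):
    """True iff seq contains no repetition x_1...x_t x_1...x_t."""
    return not any(seq[i:i + t] == seq[i + t:i + 2 * t]
                   for i in range(len(seq))
                   for t in range(1, (len(seq) - i) // 2 + 1))

def is_k_thue(seq, k):
    """True iff every d-subsequence of seq, 1 <= d <= k, is Thue."""
    return all(is_thue(seq[i::d]) for d in range(1, k + 1) for i in range(d))

print(is_k_thue("abdcbc", 1), is_k_thue("abdcbc", 2))   # True False
print(is_k_thue("abcadb", 2), is_k_thue("abcadb", 3))   # True False
\end{verbatim}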
This generalization was introduced by Currie and Simpson~\cite{CurSim02} and was immediately followed
by an intriguing conjecture due to Grytczuk~\cite{Gry02}.
\begin{conjecture}[Grytczuk, 2002]
\label{conj:k+2}
For any positive integer $k$, $k+2$ distinct symbols suffice to construct a $k$-Thue sequence of any length.
\end{conjecture}
It is easy to show that, with only $k+1$ symbols, every sequence of length at least $2k+2$ contains a repetition in some $d$-subsequence with $d \le k$,
so the bound $k+2$ is tight.
Since $1$-Thue sequences are simply Thue sequences, the above mentioned result establishes the conjecture for $k=1$.
The conjecture has also been confirmed for $k=2$ in~\cite{CurSim02} and independently in~\cite{KraLuzMocSot15},
for $k=3$ in~\cite{CurSim02}, and for $k=5$ in~\cite{CurMoo03}.
Although it has been considered also for the case $k=4$ by Currie and Pierce~\cite{CurPie03}
using an application of the fixing block method, it remains open for all the cases except $k \in \set{1,2,3,5}$.
Several upper bounds have been established, the first being $e^{33}k$ due to Grytczuk~\cite{Gry02},
and then substantially improved to $2k + O(\sqrt{k})$ in~\cite{GryKozWit11}.
Currently the best known upper bound is due to Kranjc et al.~\cite{KraLuzMocSot15}.
\begin{theorem}[Kranjc et al., 2015]
\label{thm:2k}
For any integer $k \ge 2$, $2k$ distinct symbols suffice to construct a $k$-Thue sequence of any length.
\end{theorem}
\noindent The proof of the above is constructive and provides $k$-Thue sequences of given lengths.
The aim of this paper is two-fold. The main contribution is answering Conjecture~\ref{conj:k+2} in the affirmative
for several additional values of $k$.
\begin{theorem}
\label{thm:45678}
For any $k \in \set{4,5,6,7,8}$, $k+2$ distinct symbols suffice to construct a $k$-Thue sequence of any length.
\end{theorem}
Moreover, we present two different techniques of proving the above theorem.
In the former, described in Section~\ref{sec:pas}, we use exhaustive computer search to determine morphisms
for each $k$, $k \in \set{4,5,6,7,8}$, from which we construct $k$-Thue sequences.
In the latter, described in Section~\ref{sec:cons}, we use concatenation of special blocks given by another morphism.
The purpose of the latter is to demonstrate its potential to deal with larger values of $k$;
therefore we only prove the cases $k=4$ and $k=6$ with it.
We believe that, in the future, it could be used to prove Conjecture~\ref{conj:k+2} for infinitely many values of $k$.
\section{Preliminaries}
\label{sec:prel}
In this section, we introduce additional terminology and notation used in the paper.
Throughout the paper, $i$ and $t$ are used to denote positive integers, unless specified otherwise.
An \textit{$\mathbb{A}$-sequence} (or simply a \textit{sequence} when the alphabet is known from the context or not relevant)
of length $t$ is an ordered tuple of $t$ symbols from some alphabet $\mathbb{A}$.
Let $\varphi = x_1 \dots x_t$ be a sequence.
A subsequence of $\varphi$ of consecutive terms $x_i \dots x_{j}$,
for some $i,j$, $1 \le i \le j \le t$, is denoted by $\varphi(i,j)$.
A \textit{term} indicates an element of a sequence at a specified index.
A \textit{block} is a subsequence of consecutive terms of some sequence.
When we refer to a term as a term of a block, by its index we mean the index of the term within the block.
We denote the term at index $i$ in a sequence $\varphi$ (resp. a block $\beta$)
by $\varphi(i)$ (resp. $\beta(i)$).
A \textit{prefix} of a sequence $\varphi = x_1\dots x_r$ is a sequence $\pi = x_1\dots x_s$, for some integer $s \le r$.
A \textit{suf\mbox{}f\mbox{}ix} is defined analogously.
In a sequence $\varphi$ consider a pair of sequences $\pi$ and $\varepsilon$ such that $\pi \varepsilon$
is a subsequence of $\varphi$, $\pi$ has length at least $1$, and $\varepsilon$ is a prefix of $\pi \varepsilon$.
The \emph{exponent} of $\pi \varepsilon$ is
$$
\exp(\pi \varepsilon) = \tfrac{|\pi \varepsilon|}{|\pi|}.
$$
If a sequence has exponent $p$, we call it a \textit{$p$-repetition}.
A sequence is \textit{$q^+$-free} if it contains no $p$-repetition such that $p>q$.
For sequences over $3$-letter alphabets, Dejean~\cite{Dej72} proved the following.
\begin{theorem}[Dejean, 1972]
\label{thm:dej}
Over $3$-letter alphabets there exist $\frac{7}{4}^+$-free sequences of arbitrary lengths.
\end{theorem}
A \textit{morphism} is a mapping $\mu$ which assigns to each symbol of an alphabet a sequence.
Applied to a sequence $\varphi$, $\mu(\varphi)$ is the sequence obtained from $\varphi$ where every
symbol is replaced by its image according to $\mu$. We say that a morphism is \textit{$k$-uniform} if it maps
every symbol from the domain to some sequence of length $k$.
Given a sequence $\varphi = \beta_1 \dots \beta_t$ comprised of blocks $\beta_i$, for $1 \le i \le t$,
the \textit{covering subsequence} $\hat{\sigma}$ of a subsequence $\sigma$ in $\varphi$
is the subsequence $\varphi(i,j)$, where $i$ is the index of the first term of the block containing the first term of $\sigma$,
and $j$ is the index of the last term of the block containing the last term of $\sigma$.
Let $\varphi = x_1 \dots x_\ell$ be a sequence of length $\ell$.
An \textit{$i$-shift} of $\varphi$ is the sequence $\varphi^i = x_{i+1} \dots x_{\ell} x_{1} \dots x_{i}$, i.e. the sequence $\varphi$
with the subsequence of the first $i$ elements moved to the end.
We define the \textit{circular sequence} $\cir{\ell}{t}$ of order $\ell$ and length $\ell^2 \cdot t$ as
$$
\cir{\ell}{t} = \underbrace{\varphi^0 \varphi^1 \dots \varphi^{\ell-1} \ \dots \ \varphi^0 \varphi^1 \dots \varphi^{\ell-1}}_{t}.
$$
We call each subsequence $\varphi^i$ of $\cir{\ell}{t}$ a \textit{$\zeta$-block}.
Apart from concatenation of sequences, we define another sequence combining operation.
Let $\varphi_1$ and $\varphi_2$ be sequences of lengths $\ell \cdot t$.
A \textit{sequence wreathing of order $\ell$} of $\varphi_1$ and $\varphi_2$, denoted by $\varphi_1 \wreath_\ell \varphi_2$,
is the alternating concatenation of $\ell$ subsequent elements of $\varphi_1$ and $\varphi_2$, i.e.
$$
\varphi_1 \wreath_\ell \varphi_2 = \varphi_1(1,\ell) \ \varphi_2(1,\ell) \dots \varphi_1((t-1)\ell + 1, t \cdot \ell) \ \varphi_2((t-1)\ell+1, t \cdot \ell)\,.
$$
We call the sequences $\varphi_1$ and $\varphi_2$ the \textit{base} and the \textit{wrap} of the sequence wreathing $\varphi_1 \wreath_\ell \varphi_2$, respectively.
Additionally, the blocks $\varphi_1(i\ell + 1, (i+1)\ell)$ and $\varphi_2(i\ell + 1, (i+1)\ell)$ are respectively called a \textit{base-block} and a \textit{wrap-block}.
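A direct Python transcription of these two operations (with our naming) reads as
follows; the small example wreathes a base of two blocks of length $3$ with a
circular sequence over a disjoint alphabet.
\begin{verbatim}
def circular_sequence(phi, t):
    """All i-shifts of phi in order, the whole pattern repeated t times."""
    l = len(phi)
    shifts = [phi[i:] + phi[:i] for i in range(l)]      # phi^0, ..., phi^(l-1)
    return "".join(shifts) * t

def wreath(phi1, phi2, l):
    """Alternate blocks of length l taken from phi1 and phi2."""
    assert len(phi1) == len(phi2) and len(phi1) % l == 0
    return "".join(phi1[i:i + l] + phi2[i:i + l]
                   for i in range(0, len(phi1), l))

print(circular_sequence("456", 1))       # 456564645
print(wreath("123231", "456564", 3))     # 123456231564
\end{verbatim}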
We conclude this section with two lemmas we will use in the forthcoming sections.
The former, due to Currie~\cite{Cur91},
states that insertion of non-repetitive subsequences (over distinct alphabets) into a non-repetitive sequence preserves non-repetitiveness.
\begin{lemma}[Currie, 1991]
\label{lem:insert}
Let $\varphi_0 = x_1\dots x_t$ be a non-repetitive $\mathbb{A}$-sequence,
and $\varphi_1,\dots \varphi_{t+1}$ be non-repetitive $\mathbb{B}$-sequences,
where $\mathbb{A}$, $\mathbb{B}$ are disjoint alphabets.
Additionally, the length of any $\varphi_i$, $1 \le i \le t+1$, may be $0$.
Then, the sequence $\varphi_1 x_1 \dots \varphi_t x_t \varphi_{t+1}$ is non-repetitive.
\end{lemma}
To prove that a non-repetitive sequence $\varphi$ is $k$-Thue for some integer $k > 1$,
one needs to show that every $\ell$-subsequence of $\varphi$ is non-repetitive for every integer $\ell$, $1 \le \ell \le k$.
To prove that an $\ell$-subsequence is non-repetitive, it suffices to have enough information about $\varphi$ as we show in the next lemma.
Let $\varphi = \beta_1 \dots \beta_t$ be a sequence comprised of blocks $\beta_i$, $1 \le i \le t$.
We say that a block $\beta_i$ is \textit{uniquely determined by a subset of terms} if there is no block $\beta_j$, $\beta_i \ne \beta_j$,
having the same terms at the same positions.
E.g., from the construction of circular sequences, we have the following.
\begin{observation}
\label{obs:circular}
A $\zeta$-block $\varphi^{i}$, $0 \le i \le \ell-1$, is uniquely determined by one term, i.e.,
given at least one term of a $\varphi^{i}$, one can determine $i$.
\end{observation}
We use the following lemma as a tool for proving that some $d$-subsequence of a Thue sequence does not contain a repetition.
\begin{lemma}
\label{lem:deter}
Let $\sigma$ be an $\ell$-subsequence of a sequence $\varphi = \beta_1 \beta_2 \dots \beta_t$, for some positive integers $\ell$ and $t$.
Let $\rho_1 \rho_2$ be a repetition in $\sigma$, and let, for some $j$, $\gamma_1 = \beta_{j+1} \dots \beta_{j+r}$, $\gamma_2 = \beta_{j+r+1} \dots \beta_{j+2r}$
be the covering sequences of $\rho_1$ and $\rho_2$, respectively.
If it holds that
\begin{itemize}
\item{} the terms of $\rho_1$ uniquely determine the blocks $\beta_{i}$, for $i \in \set{j+1,j+r}$;
\item{} the terms of $\rho_2$ uniquely determine the blocks $\beta_{i}$, for $i \in \set{j+r+1,j+2r}$;
\item{} all the terms of $\rho_1$, $\rho_2$ appear in $\gamma_1$, $\gamma_2$ at the same indices within their blocks, respectively;
\end{itemize}
then $\gamma_1 \gamma_2$ is a repetition in $\varphi$.
\end{lemma}
\begin{proof}
Since $\rho_1 = \rho_2$ and the corresponding terms appear at the same indices within their blocks, the uniquely determined blocks coincide pairwise, i.e. $\beta_{j + i} = \beta_{j + r + i}$ for every $i$, $1 \le i \le r$. Hence $\gamma_1 = \gamma_2$, which is a repetition in $\varphi$.
\end{proof}
\section{Technique \#1: Exhaustive Search for Morphisms}
\label{sec:pas}
The aim of this section is to present a compact proof of Theorem~\ref{thm:45678}.
For completeness, in the proof, we provide constructions of $k$-Thue sequences also for $k \in \set{2,3}$.
We used an exhaustive computer search to determine appropriate morphisms which are then applied to appropriate sequences.
\begin{proof}[Proof of Theorem~\ref{thm:45678}]
Let $\mathbb{A}_3$ and $\mathbb{A}_{k+2}$ be alphabets on $3$ and $k+2$ letters, respectively.
For every $k$, $2 \le k \le 8$, let a morphism $\mu_k:\mathbb{A}_3^* \to \mathbb{A}_{k+2}^*$ be defined as given below:
\begin{itemize}
\item{} $k=2$: a $7$-uniform morphism
$$
\begin{array}{c}
\mu_2(\texttt{0})=\texttt{0310213}\\
\mu_2(\texttt{1})=\texttt{0230132}\\
\mu_2(\texttt{2})=\texttt{0120321}\\
\end{array}
$$
\item{} $k=3$: a $14$-uniform morphism
$$
\begin{array}{c}
\mu_3(\texttt{0})=\texttt{10231402310243}\\
\mu_3(\texttt{1})=\texttt{01243024130243}\\
\mu_3(\texttt{2})=\texttt{01240312401234}\\
\end{array}
$$
\item{} $k=4$: a $12$-uniform morphism
$$
\begin{array}{c}
\mu_4(\texttt{0})=\texttt{012350412534}\\
\mu_4(\texttt{1})=\texttt{012345103245}\\
\mu_4(\texttt{2})=\texttt{012340521345}\\
\end{array}
$$
\item{} $k=5$: a $27$-uniform morphism
$$
\begin{array}{c}
\mu_5(\texttt{0})=\texttt{012345601235460235146023546}\\
\mu_5(\texttt{1})=\texttt{012345601234650134625013465}\\
\mu_5(\texttt{2})=\texttt{012345061234065123460152346}\\
\end{array}
$$
\item{} $k=6$: a $23$-uniform morphism
$$
\begin{array}{c}
\mu_6(\texttt{0})=\texttt{01234560172436501243756}\\
\mu_6(\texttt{1})=\texttt{01234560127354061235476}\\
\mu_6(\texttt{2})=\texttt{01234560123746510324657}\\
\end{array}
$$
\item{} $k=7$: a $36$-uniform morphism
$$
\begin{array}{c}
\mu_7(\texttt{0})=\texttt{012345670812345608721345687201345678}\\
\mu_7(\texttt{1})=\texttt{012345670182345601872345618702345687}\\
\mu_7(\texttt{2})=\texttt{012345670128345670281345762801345768}\\
\end{array}
$$
\item{} $k=8$: a $30$-uniform morphism
$$
\begin{array}{c}
\mu_8(\texttt{0})=\texttt{012345678902315647890312645789}\\
\mu_8(\texttt{1})=\texttt{012345678902143675982014365789}\\
\mu_8(\texttt{2})=\texttt{012345678019324568079123548679}\\
\end{array}
$$
\end{itemize}
In what follows, we show that $\mu_k(\varphi')$ is $k$-Thue for every $\paren{\tfrac74}^+$-free sequence $\varphi' \in \mathbb{A}_3^*$.
Using a computer, we have verified the following.
\begin{claim}
\label{cl:morf}
Let $\varphi$ be any non-repetitive sequence over $\mathbb{A}_3$ of length at most $40$.
For each morphism $\mu_k$, $\mu_k(\varphi)$ is $k$-Thue.
\end{claim}
Next, for every $k$ and $d$ such that $2 \le k \le 8$ and $1 \le d \le k$,
we consider every sequence $\delta = x_1x_2x_3x_4$ of length $4$ over $\mathbb{A}_3$
and every $d$-subsequence $\sigma$ of $\mu_k(\delta)$ such that $\sigma$ intersects both the prefix $\mu_k(x_1)$ and the suf\mbox{}f\mbox{}ix $\mu_k(x_4)$ of $\mu_k(\delta)$.
We again used a computer to check that if such a $d$-subsequence appears in two sequences $\mu_k(\delta)$ and $\mu_k(\delta')$,
where $\delta=x_1x_2x_3x_4$ and $\delta'=x'_1x'_2x'_3x'_4$, then $x_2x_3=x'_2x'_3$.
Thus, long enough $d$-subsequences of $\mu_k(\varphi)$ allow one to determine $\varphi$, except possibly for the first and the last term of $\varphi$.
So, if a large repetition $\rho$ occurs in some $d$-subsequence of $\mu_k(\varphi)$, then $\varphi$ contains a factor $uvu$ such that $u$ is large and $|v|\le 2$.
For $|u| \ge 7$, such a factor $uvu$ cannot appear in a $\paren{\tfrac74}^+$-free sequence.
On the other hand, if $|u| \le 6$, then the length of $\varphi$ is at most $18$ (including possible first and last term).
For such sequences, $\mu_k(\varphi)$ are $k$-Thue by Claim~\ref{cl:morf}.
This completes the proof.
\end{proof}
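The computer verification behind Claim~\ref{cl:morf} amounts to enumerating
non-repetitive ternary sequences and testing their images under $\mu_k$; the
following Python sketch (ours) illustrates the procedure for $k=4$ with a much
smaller length bound, the full check up to length $40$ being only a matter of
computation time.
\begin{verbatim}
MU4 = {"0": "012350412534", "1": "012345103245", "2": "012340521345"}

def is_thue(s):
    return not any(s[i:i + t] == s[i + t:i + 2 * t]
                   for i in range(len(s))
                   for t in range(1, (len(s) - i) // 2 + 1))

def is_k_thue(s, k):
    return all(is_thue(s[i::d]) for d in range(1, k + 1) for i in range(d))

def ternary_thue_words(max_len):
    """All non-repetitive words over {0,1,2} of length <= max_len (backtracking)."""
    stack, out = [""], []
    while stack:
        w = stack.pop()
        out.append(w)
        if len(w) < max_len:
            stack += [w + c for c in "012" if is_thue(w + c)]
    return out

assert all(is_k_thue("".join(MU4[c] for c in w), 4)
           for w in ternary_thue_words(8) if w)
print("mu_4 images of short non-repetitive ternary words are 4-Thue")
\end{verbatim}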
\section{Construction of Thue sequences using Hexagonal Morphism}
\label{sec:koc}
Recently, in his master's thesis, Ko\v{c}i\v{s}ko~\cite{Koc13} introduced a uniform morphism $\kappa$,
which maps a term $x$ of a sequence to a block of three symbols regarding the mapping of the predecessor of $x$.
In particular, instead of using an alphabet $\mathbb{A} = \set{1,2,3}$ an auxiliary alphabet
$$
\overline{\mathbb{A}} = \set{\overline{1}, \underline{1}, \overline{2}, \underline{2}, \overline{3}, \underline{3}}
$$
is used. The morphism $\kappa$ is then defined as
\begin{eqnarray*}
\kappa(\overline{1}) = \overline{1} \ \underline{2} \ \overline{3}\,,
\quad \quad
\kappa(\overline{2}) = \overline{2} \ \underline{3} \ \overline{1}\,,
\quad \quad
\kappa(\overline{3}) = \overline{3} \ \underline{1} \ \overline{2}\,, \\
\kappa(\underline{1}) = \underline{3} \ \overline{2} \ \underline{1}\,,
\quad \quad
\kappa(\underline{2}) = \underline{1} \ \overline{3} \ \underline{2}\,,
\quad \quad
\kappa(\underline{3}) = \underline{2} \ \overline{1} \ \underline{3}\,.
\end{eqnarray*}
For a positive integer $t$, we recursively define the sequence
$$
\skap{t} = \kappa(\skap{t-1})\,,
$$
where $\skap{0} = \overline{1}$.
Notice that for every $t$, every symbol from $\overline{\mathbb{A}}$ is a neighbor of at most two symbols of $\overline{\mathbb{A}}$ (if $t > 3$, then precisely two);
we say that neighboring symbols are \textit{adjacent}.
The adjacency is also preserved between the blocks of three symbols to which the
symbols from $\overline{\mathbb{A}}$ are mapped by $\kappa$;
we denote these blocks \textit{$\overline{\kappa}$-triples}.
Due to its structure, we refer to $\kappa$ as the \textit{hexagonal morphism}.
In Fig.~\ref{fig:koci}, the adjacencies between the symbols and the $\overline{\kappa}$-triples,
and the mappings of $\kappa$ are depicted.
\begin{figure}[htp!]
$$
\includegraphics{koci}
$$
\caption{The graph of adjacencies between the symbols of $\overline{\mathbb{A}}$ and $\overline{\kappa}$-triples, and the mappings defined by $\kappa$.}
\label{fig:koci}
\end{figure}
Let $\pi \ : \ \overline{\mathbb{A}} \rightarrow \mathbb{A}$ be a projection of symbols from the auxiliary alphabet
$\overline{\mathbb{A}}$ to $\mathbb{A}$ defined as $\pi(\overline{a}) = a$ and $\pi(\underline{a}) = a$, for every $a \in \set{1,2,3}$.
By $\kap{t}$, we denote the projected sequence $\skap{t}$, i.e. $\kap{t} = \pi(\skap{t})$; similarly a projected $\overline{\kappa}$-triple $\tau$,
$\pi(\tau)$, is referred to as a $\kappa$-block.
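The construction is easy to reproduce and to test for small $t$. In the Python
sketch below (ours), the auxiliary symbols $\overline{a}$ and $\underline{a}$
are encoded as \texttt{"oa"} and \texttt{"ua"}; the last loop checks, for small
$t$, that the projected sequences are indeed Thue, in agreement with
Theorem~\ref{thm:koci} below.
\begin{verbatim}
KAPPA = {                                  # the hexagonal morphism
    "o1": "o1 u2 o3", "o2": "o2 u3 o1", "o3": "o3 u1 o2",
    "u1": "u3 o2 u1", "u2": "u1 o3 u2", "u3": "u2 o1 u3",
}

def kappa_word(t):
    """Auxiliary sequence obtained by t applications of kappa to o1."""
    w = ["o1"]
    for _ in range(t):
        w = [s for x in w for s in KAPPA[x].split()]
    return w

def project(w):
    return "".join(x[1] for x in w)        # drop the over/underline decoration

def is_thue(s):
    return not any(s[i:i + l] == s[i + l:i + 2 * l]
                   for i in range(len(s))
                   for l in range(1, (len(s) - i) // 2 + 1))

for t in range(6):
    s = project(kappa_word(t))
    print(t, len(s), is_thue(s))           # lengths 3^t; all checks return True
\end{verbatim}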
By the definition of $\kap{t} = \set{x_i}_{i=1}^{3^t}$ and the mapping $\kappa$, one can easily derive the following basic properties:
\begin{itemize}
\item[$(K_1)$] \quad For every pair of adjacent $\kappa$-blocks $\tau$ and $\sigma$, the sequence $\tau\sigma$ is Thue.
\item[$(K_2)$] \quad The length of $\kap{t}$ is $3^t$, and $x_{3i+1}x_{3i+2}x_{3i+3}$ is a $\kappa$-block for every $i$, $0 \le i < 3^{t-1}$.
\item[$(K_3)$] \quad $\set{x_{3i+1}, x_{3i+2}, x_{3i+3}} = \set{1,2,3}$ for every $i$, $0 \le i < 3^{t-1}$.
\item[$(K_4)$] \quad $x_{3i+2} \neq x_{3(i+1)+2}$ for every $i$, $0 \le i < 3^{t-1}-1$.
\item[$(K_5)$] \quad Any three consecutive terms $x_{j+1}x_{j+2}x_{j+3}$ of $\kap{t}$, which do not belong to the same $\kappa$-block,
uniquely determine the two $\kappa$-blocks they belong to.
\item[$(K_6)$] \quad For a pair $\tau_1$, $\tau_2$ of adjacent $\kappa$-blocks it holds that the first term of $\tau_1$ is distinct
from the third term of $\tau_2$.
\item[$(K_7)$] \quad If a pair of distinct $\kappa$-blocks has the same first or last term, then they are adjacent.
\item[$(K_8)$] \quad A pair of adjacent $\kappa$-blocks is not adjacent to any other common $\kappa$-block.
\item[$(K_9)$] \quad The middle term of the $\kappa$-block $\pi(\kappa(i))$, $i \in \overline{\mathbb{A}}$, equals $\pi(i)+1$ (modulo $3$).
\item[$(K_{10})$] \quad A pair of distinct $\kappa$-symbols $x_1$ and $x_2$, where $x_1$ and $x_2$ are the first (last) terms of adjacent $\kappa$-blocks $\tau_1$
and $\tau_2$, uniquely determines $\tau_1$ and $\tau_2$.
\item[$(K_{11})$] \quad A $\kappa$-block $\tau_1$ and at least one term of a $\kappa$-block $\tau_2$ adjacent to $\tau_1$ uniquely determine $\tau_2$.
\item[$(K_{12})$] \quad A pair of adjacent $\kappa$-blocks is in $\kap{t}$ always separated by an even number of $\kappa$-blocks, since the graph of
adjacencies is bipartite.
\end{itemize}
We use (some of) the properties above, to prove the following theorem.
\begin{theorem}[Ko\v{c}i\v{s}ko, 2013]
\label{thm:koci}
The sequence $\kap{t}$ is Thue, for every non-negative integer $t$.
\end{theorem}
For the sake of completeness, we present a short proof of Theorem~\ref{thm:koci} here also.
\begin{proof}
We prove the theorem by induction. Clearly, $\kap{0}$ is Thue.
Consider the sequence $\kap{t} = \set{x_i}_{i=1}^{3^t}$ and suppose that $\kap{j}$ is Thue for every $j < t$.
Suppose for a contradiction that there is a repetition in $\kap{t}$ and
let $\rho_1\rho_2 = y_1 \dots y_r y_{r+1} \dots y_{2r}$ be a repetition with the minimum length
(for later purposes we distinguish two repetition factors, although $\rho_1 = \rho_2$).
By $(K_1)$, we have that $r \ge 3$. We consider two subcases regarding the length $r$ of $\rho_1 (=\rho_2)$.
Suppose first that $r$ is divisible by $3$. Then, as we show in the following claim,
we may assume that the term $y_1$ is the first term of some $\kappa$-block.
\begin{claim}
\label{cl:koci1}
Let $r$ be divisible by $3$. If $y_1 = x_{3i+2}$ (resp. $y_1 = x_{3i+3}$) for some $i$, $0 \le i < 3^{t-1}$,
then $x_{3i+1} x_{3i+2} \dots x_{3i + 2r}$ (resp. $x_{3(i+1)+1} x_{3(i+1)+2} \dots x_{3(i+1) + 2r}$) is also a repetition.
\end{claim}
\begin{proofclaim}
Suppose that $y_1 = x_{3i+2}$. By $(K_3)$, every $\kappa$-block is uniquely determined by two symbols. So $x_{3i+1} = y_r$
and hence $x_{3i+1} \dots x_{3i + 2r} = y_r y_1 \dots y_{2r-1}$ is a repetition.
A proof for the case $y_1 = x_{3i+3}$ is analogous.
\end{proofclaim}
Hence, we have that $\rho = \tau_1 \dots \tau_{\frac{r}{3}}$,
where $\tau_j$ are $\kappa$-blocks for every $j$, $1 \le j \le \frac{r}{3}$.
But in this case, there is a repetition already in $\kap{t-1}$, contradicting the induction hypothesis.
Therefore, we may assume that $r$ is not divisible by $3$.
This means that the first terms $y_1$ and $y_{r+1}$ of the two repetition factors $\rho_1$ and $\rho_2$, respectively,
are at different positions within the $\kappa$-blocks they belong to. For example, if $r = 3k+1$, and $y_1$ is the first term
of the $\kappa$-block $y_1 y_2 y_3$, then $y_{r+1}$ is the second term of the $\kappa$-block $y_r y_{2r+1} y_{2r+2}$.
There are hence six possible cases regarding the position of $y_1$ and $y_{r+1}$ in their $\kappa$-blocks.
Suppose first that $y_1$ is the first term of the $\kappa$-block $x_1 x_2 x_3$.
By $(K_3)$, $x_1$, $x_2$, and $x_3$ are pairwise distinct.
Since $\rho_1 = \rho_2$, we thus know three consecutive elements of two $\kappa$-blocks
(the one of $y_{r+1}$ and the subsequent one).
By $(K_5)$, we can determine both $\kappa$-blocks, which gives us information about the term $y_4$.
Using $(K_5)$ again, we can determine the $\kappa$-block $y_4 y_5 y_6$,
namely $y_4 y_5 y_6 = x_2 x_1 x_3$ in the case when $r \equiv 1 \bmod{3}$,
and $y_4 y_5 y_6 = x_1 x_3 x_2$ in the case when $r \equiv 2 \bmod{3}$.
Using the information obtained by determining $\kappa$-blocks using $(K_5)$,
we infer that every $\kappa$-block of $\rho_1$ ends with $x_3$ in the former case, or
starts with $x_1$ in the latter case. As $\rho_1$ and $\rho_2$ are concatenated,
this leads us to contradiction on the existence of a repetition.
With a similar argument, we obtain a contradiction in the case when $y_{r+1}$ is the first term of its $\kappa$-block.
Suppose now that $r = 3k+1$, for some positive integer $k$,
and $y_1$ is the second term of its $\kappa$-block, say $x_3 x_1 x_2$.
Then $y_{r+1} = x_1$ and $y_{r+2} = x_2$, where $y_{r+1}$ and $y_{r+2}$ belong to distinct $\kappa$-blocks.
Notice that there are two possibilities for the value of $y_{r+3}$, namely $x_1$ and $x_3$.
However, regardless the choice, after determining the $\kappa$-block $y_{r+2} y_{r+3} y_{r+4}$ by $(K_5)$,
and continue by alternately determining $\kappa$-blocks in $\rho_1$ and $\rho_2$, as described above,
we infer that in both cases, every $\kappa$-block in $\rho_1$ ends with $x_2$, a contradiction.
An analogous analysis may be performed in the last case, when $r = 3k+2$ and $y_1$ being the third term of its $\kappa$-block.
\end{proof}
\section{Technique \#2: Transposition \& Cyclic Blocks}
\label{sec:cons}
In this section, we present alternative proofs answering Conjecture~\ref{conj:k+2} in the affirmative for the cases $k = 4$ and $k = 6$.
For each of the two cases we present a special morphism and apply it to a non-repetitive sequence. Then, we use sequence wreathing to
extend the sequence by circular blocks.
\subsection{The case $k=4$}
\label{sub:4}
In this part, to prove the case $k=4$ in Theorem~\ref{thm:45678},
we combine the sequence $\kap{t}$ obtained by the hexagonal morphism and the circular sequence $\cir{3}{3^{t-1}}$ by wreathing.
We construct $\kap{t}$ over the alphabet $\set{1,2,3}$, and $\cir{3}{3^{t-1}}$ over the alphabet $\set{4,5,6}$.
We define
$$
\varphi_4^t = \kap{t} \wreath_3 \cir{3}{3^{t-1}}\,.
$$
For clarity, we refer to the base-blocks of $\varphi_4^t$ as \textit{$\kappa$-blocks} (recall that the wrap-blocks are called \textit{$\zeta$-blocks}).
Additionally, the terms from $\kappa$-blocks (resp. $\zeta$-blocks) are called $\kappa$-terms (resp. $\zeta$-terms).
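For small $t$ the construction can be assembled and tested directly. The Python
sketch below (ours) does so; since the lengths of $\kap{t}$ and of the circular
sequence must agree for the wreathing of order $3$ to be defined, the wrap is
simply the periodic sequence of $\zeta$-blocks $456,564,645,456,\dots$ cut to the
length $3^t$ of the base.
\begin{verbatim}
KAPPA = {
    "o1": "o1 u2 o3", "o2": "o2 u3 o1", "o3": "o3 u1 o2",
    "u1": "u3 o2 u1", "u2": "u1 o3 u2", "u3": "u2 o1 u3",
}

def kap(t):                                  # projected hexagonal sequence, length 3^t
    w = ["o1"]
    for _ in range(t):
        w = [s for x in w for s in KAPPA[x].split()]
    return "".join(x[1] for x in w)

def is_thue(s):
    return not any(s[i:i + l] == s[i + l:i + 2 * l]
                   for i in range(len(s))
                   for l in range(1, (len(s) - i) // 2 + 1))

def is_k_thue(s, k):
    return all(is_thue(s[i::d]) for d in range(1, k + 1) for i in range(d))

t = 4
base = kap(t)                                # kappa-blocks, over {1,2,3}
wrap = ("456564645" * len(base))[:len(base)] # zeta-blocks, over {4,5,6}
phi4 = "".join(base[i:i + 3] + wrap[i:i + 3] for i in range(0, len(base), 3))
print(len(phi4), is_k_thue(phi4, 4))         # 162 and, as asserted below, True
\end{verbatim}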
\begin{lemma}
\label{lem:4thue}
The sequence $\varphi_4^t$ is $4$-Thue for every non-negative integer $t$.
\end{lemma}
\begin{proof}
Since $\kap{t}$ is Thue by Theorem~\ref{thm:koci}, $\varphi_4^t$ is also Thue by Lemma~\ref{lem:insert}.
Thus, it remains to prove that every $d$-subsequence of $\varphi_4^t$ is Thue, for every $d \in \set{2,3,4}$.
Observe first that by $(K_1)$, $(K_6)$, and the definition of circular sequences,
every five consecutive terms of $\varphi_4^t$ are distinct.
This in particular means that
\begin{itemize}
\item[$(P_1)$] \quad \textit{there are no repetitions of length $2$ or $4$ in any $d$-subsequence of $\varphi_4^t$.}
\end{itemize}
Moreover,
\begin{itemize}
\item[$(P_2)$] \quad \textit{in every $d$-subsequence of $\varphi_4^t$ there are at most two consecutive $\kappa$-terms or $\zeta$-terms;}
\item[$(P_3)$] \quad \textit{in every $d$-subsequence of $\varphi_4^t$ any repetition contains $\kappa$-terms and $\zeta$-terms;}
\item[$(P_4)$] \quad \textit{if a $\kappa$-term (resp. $\zeta$-term) in a $d$-subsequence $\sigma$ of $\varphi_4^t$,
whose predecessor and successor in $\sigma$ are $\zeta$-terms (resp. $\kappa$-terms), is at index $i$ within its $\kappa$-block (resp. $\zeta$-block),
then every $\kappa$-term (resp. $\zeta$-term) in $\sigma$ is at index $i$ within its $\kappa$-block (resp. $\zeta$-block).}
\end{itemize}
All the latter three properties are direct corollaries of $(P_1)$ and the fact that every $\kappa$-block and $\zeta$-block is of length $3$.
Now, we prove that every $d$-subsequence of $\varphi_4^t$ is non-repetitive, considering three cases with regard to $d$.
In each case, we assume there is a repetition $\rho_1 \rho_2 = y_1 \dots y_{r} y_{r+1} \dots y_{2r}$ in
some $d$-subsequence $\sigma$ and eventually reach a contradiction on its existence.
By $(P_3)$, there is at least one $\zeta$-term in $\rho_1$.
Moreover, by the definition of circular sequences and $\varphi_4^t$,
every three consecutive $\zeta$-terms in $\rho_1$ (ignoring the $\kappa$-terms) are distinct,
unless $d=4$ and the $\zeta$-terms of $\rho_1$ are at indices $1$ and $3$ in $\zeta$-blocks.
However, in such a case, by construction of circular sequences, without loss of generality,
consecutive $\zeta$-terms of $\rho_1$ are $4\ 4\ 5\ 5\ 6\ 6\dots$, which means that $r$
must be divisible by $6$, to have the same sequence of $\zeta$-terms in $\rho_2$.
This implies that
\begin{itemize}
\item[$(P_5)$] \quad \textit{the number of $\zeta$-terms in $\rho_1$ is divisible by $3$,}
\end{itemize}
and consequently, since in $\zeta$-blocks the symbols repeat at the same indices in every third block:
\begin{itemize}
\item[$(P_6)$] \quad \textit{the number of $\zeta$-blocks to which the $\zeta$-terms of $\rho_1$ belong in $\varphi_4^t$ is divisible by $3$.}
\end{itemize}
Observe that, by the above properties,
\begin{itemize}
\item[$(P_7)$] \quad \textit{the first terms of $\rho_1$ and $\rho_2$ are either both $\kappa$-terms or $\zeta$-terms, and moreover,
they appear at the same index within their blocks in $\varphi_4^t$.}
\end{itemize}
Now, we start the analysis regarding $d$:
\begin{itemize}
\item{} \textit{$d = 2$.} \\
Suppose first that $y_1$ is the first term of some $\kappa$-block.
Then, $\rho_1$ is comprised alternately of two $\kappa$-terms (the first and the third terms of a $\kappa$-block in $\varphi_4^t$)
and one $\zeta$-term (the second term of its $\zeta$-block in $\varphi_4^t$).
Consequently, $y_{r+1}$ is the first term of a $\kappa$-block also,
and the last term of $\rho_1$ must be a $\zeta$-term.
By $(K_3)$, every $\kappa$-block is uniquely determined by two of its terms, hence
one can determine all $\kappa$-blocks to which the $\kappa$-terms of $\rho_1$ and $\rho_2$ belong in $\varphi_4^t$.
Similarly, all the $\zeta$-blocks, to which $\zeta$-terms of $\rho_1$ and $\rho_2$ belong, are uniquely determined by Observation~\ref{obs:circular}.
Moreover, since the terms $y_{r}$ and $y_{r+1}$ belong to different
blocks, we can apply Lemma~\ref{lem:deter} obtaining a contradiction on the existence of $\rho_1 \rho_2$.
Suppose now that $y_1$ is the third term of some $\kappa$-block.
A similar argument as in the paragraph above shows that $y_r$
is the first term of some $\kappa$-block $\gamma_r$ of $\varphi_4^t$,
while $y_{r+1}$ is the third term of $\gamma_r$. Note that the terms $y_2,y_3$ and $y_{r+2},y_{r+3}$ uniquely determine the same block $\gamma_2$.
Consider now the $\kappa$-block $\gamma_{1}$ to which $y_1$ belongs.
By $(K_7)$, it is one of the two possible $\kappa$-blocks that end with $y_1$, and since $\gamma_1$ and $\gamma_r$ are adjacent to $\gamma_2$,
by $(K_8)$, we infer that $\gamma_1 = \gamma_r$.
Thus, taking the first term $z$ of $\gamma_1$, we have a repetition $z y_1 \dots y_{r-1} y_r \dots y_{2r-1}$ in $\sigma$,
which satisfies the assumptions of Lemma~\ref{lem:deter}.
Hence, there is a repetition in $\varphi_4^t$, a contradiction.
Next, suppose that $y_1$ is the second term of some $\kappa$-block.
Then, by $(P_4)$, all the $\kappa$-terms in $\rho_1$ and $\rho_2$ are the second terms of $\kappa$-blocks in $\varphi_4^t$.
By $(K_9)$, we have that the second terms of $\kappa$-blocks in $\varphi_4^t$ are exactly the terms of $\kap{t-1}$ shifted by $1$,
and thus form a non-repetitive sequence.
Using Lemma~\ref{lem:insert}, we infer that the sequence $\sigma$ is also non-repetitive, a contradiction.
Finally, suppose that $y_1$ is a $\zeta$-term.
Let $\gamma$ be the last $\zeta$-block in $\varphi_4^t$ to which some term of $\rho_2$ belongs.
Since a $\zeta$-block is uniquely determined by at least one of its terms, using $(P_5)$, we infer that the
$\zeta$-block of $\varphi_4^t$ following $\gamma$ is equal to the $\zeta$-block uniquely determined by $y_1$.
Let $y \in \set{y_2, y_3}$ be the first $\kappa$-term of $\rho_1$.
The observation above implies that there exists a repetition in $\sigma$ starting with $y$ and ending with the $\zeta$-terms before $y$ in $\rho_1$.
Such a repetition cannot exist due to the analysis of the cases above.
\item{} \textit{$d = 3$.} \\
Suppose that $y_1$ is the first term of some $\kappa$-block.
Clearly, the first term of $\rho_2$ is also a $\kappa$-term, and thus the number of $\kappa$-terms
in $\rho_1$ is divisible by $3$, by $(P_5)$.
Therefore, there are at least six $\kappa$-terms in $\rho_1 \rho_2$,
meaning there are two distinct consecutive $\kappa$-terms.
Using $(K_{10})$ and $(K_{11})$, we can uniquely determine all $\kappa$-blocks to which the $\kappa$-terms of $\rho_1$ and $\rho_2$ belong.
So, by Lemma~\ref{lem:deter}, we obtain a contradiction.
If $y_1$ is the third term of some $\kappa$-block, we use the same argument as in the paragraph above.
The argument when $y_1$ is the second term of some $\kappa$-block is analogous to the subcase in the case $d=2$,
where $y_1$ is the second term of some $\kappa$-block.
In the case when $y_1$ is a $\zeta$-term, we can again translate the analysis to the one of the above
cases, since the $\zeta$-triples have period $3$.
\item{} \textit{$d = 4$.} \\
Suppose that $y_1$ is the first term of some $\kappa$-block.
By $(P_1)$, the length of $\rho_1$ is at least $3$.
Furthermore, $y_2$ and $y_3$ are a $\zeta$-term (the second term of some $\zeta$-block)
and a $\kappa$-term (the third term of some $\kappa$-block), respectively.
By $(P_4)$, all $\zeta$-terms in $\rho_1$ are the second terms of $\zeta$-blocks.
Thus, by $(P_5)$ and the fact that for every $\zeta$-term in $\rho_1$ there are two $\zeta$-blocks in $\hat{\rho_1}$,
we have that the number of $\zeta$-blocks in $\hat{\rho_1}$ is divisible by $6$.
By $(P_7)$, $y_{r+1}$ is also the first term of some $\kappa$-block in $\varphi_4^t$,
meaning that the number of $\kappa$-blocks in $\hat{\rho_1}$ is also divisible by $6$
and that the number of $\kappa$-blocks between the blocks of $y_1$ and $y_{r+1}$ is odd.
Hence, by $(K_{12})$, the $\kappa$-blocks of $y_1$ and $y_{r+1}$ are the same.
Analogously, all the blocks of the $\kappa$-terms $y_i$ in $\rho_1$
are the same as the $\kappa$-blocks of $y_{i+r}$ in $\rho_2$.
Thus, there is a repetition in $\varphi_4^t$ also, a contradiction.
Suppose now that $y_1$ is the second term of some $\kappa$-block.
Similarly as in the case above, we notice that all $\kappa$-terms of $\rho_1$ are the second terms in their $\kappa$-blocks
in $\hat{\rho_1}$, and that the number of $\kappa$-blocks in $\hat{\rho_1}$ is divisible by $6$.
Again, we deduce that for every two $\kappa$-terms $y_i$ and $y_j$ in $\sigma$, there is an even number of $\kappa$-blocks between the
$\kappa$-blocks of $y_i$ and $y_j$ in $\hat{\sigma}$.
It follows that every pair of equal $\kappa$-symbols in $\sigma$ belongs
to the same $\kappa$-block, and hence $\hat{\rho_1} = \hat{\rho_2}$, a contradiction.
The cases, when $y_1$ is the third term of some $\kappa$-block, or the second term of some $\zeta$-block
are analogous to the first case.
The cases, when $y_1$ is the first or the third term of some $\zeta$-block are analogous to the second case.
\end{itemize}
\end{proof}
\subsection{The case $k=6$}
In this part, we present a construction of a $6$-Thue sequence using $8$ symbols,
in a similar way as for the case $k=4$. Again, we wreath a Thue sequence with a circular sequence,
but now, the base sequence is formed by blocks of four symbols,
where in each block we only permute symbols in fixed pairs.
Similarly as in Section~\ref{sec:koc}, we start by constructing a Thue sequence over an alphabet
$$
\mathbb{B} = \set{1,2,3,4}
$$
of $4$ symbols. Let a morphism $\lambda$, mapping a symbol from the sequence to a block of four distinct symbols, be defined as
\begin{eqnarray*}
\lambda(1) = 1 \ 2 \ 3 \ 4 \,,
\quad
\lambda(2) = 2 \ 1 \ 4 \ 3 \,,
\quad
\lambda(3) = 1 \ 2 \ 4 \ 3 \,,
\quad
\lambda(4) = 2 \ 1 \ 3 \ 4 \,.
\end{eqnarray*}
For a positive integer $t$, we recursively define the sequence
$$
\slam{t} = \lambda(\slam{t-1})\,,
$$
where $\slam{0} = 1$.
Notice that for every positive integer $t$,
every symbol from $\mathbb{B}$ is a neighbor of all symbols of $\mathbb{B}$.
The blocks of four symbols to which the symbols from $\mathbb{B}$ are mapped by $\lambda$,
are referred to as \textit{$\lambda$-blocks}.
In Fig.~\ref{fig:k6}, the mappings of $\lambda$ are depicted.
\begin{figure}[htp]
$$
\includegraphics{6-thue_graph}
$$
\caption{The graph of adjacencies between the symbols of $\mathbb{B}$ and $\lambda$-blocks, and the mappings defined by~$\lambda$.}
\label{fig:k6}
\end{figure}
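For concreteness, the iteration of $\lambda$ is easily reproduced by machine.
The following Python sketch (purely illustrative and not part of the formal argument; the variable and function names are ours) builds $\slam{t}$ by applying $\lambda$ letter by letter and brute-force checks, for small values of $t$, that no repetition occurs, in line with Lemma~\ref{lem:k6} below.
\begin{verbatim}
# Illustrative sketch: iterate the morphism lambda and check non-repetitiveness.
LAMBDA = {1: [1, 2, 3, 4], 2: [2, 1, 4, 3], 3: [1, 2, 4, 3], 4: [2, 1, 3, 4]}

def lam_power(t):
    seq = [1]                                  # lambda^0 = 1
    for _ in range(t):
        seq = [s for a in seq for s in LAMBDA[a]]
    return seq                                 # length 4^t

def has_repetition(seq):
    n = len(seq)
    for r in range(1, n // 2 + 1):             # half-length of a candidate repetition
        for i in range(n - 2 * r + 1):
            if seq[i:i + r] == seq[i + r:i + 2 * r]:
                return True
    return False

for t in range(5):
    print(t, has_repetition(lam_power(t)))     # prints False for every t
\end{verbatim}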
We first observe some basic properties of the sequence $\slam{t}$, for any positive integer $t$.
\begin{itemize}
\item[$(L_1)$] \quad For any pair of adjacent $\lambda$-blocks $\gamma_1$ and $\gamma_2$, the sequence $\gamma_1 \gamma_2$ is Thue.
\item[$(L_2)$] \quad The length of $\slam{t}$ is $4^t$, and $x_{4i+1}x_{4i+2}x_{4i+3}x_{4i+4}$ is a $\lambda$-block for every $i$, $0 \le i \le 4^{t-1}-1$.
\item[$(L_3)$] \quad $\set{x_{4i+1},x_{4i+2}} = \set{1,2}$ and $\set{x_{4i+3},x_{4i+4}} = \set{3,4}$ for every $i$, $0 \le i \le 4^{t-1}-1$.
Consequently, by knowing at least one term at index $1$ or $2$, and at least one term at index $3$ or $4$,
the $\lambda$-block is uniquely determined.
\item[$(L_4)$] \quad For every $i$, $0 \le i \le 4^{t-1} - 3$, it holds: $x_{4i+1}x_{4i+2}x_{4i+3}x_{4i+4} \neq x_{4i+9}x_{4i+10}x_{4i+11}x_{4i+12}$
(this is in fact a consequence of $(L_3)$).
\item[$(L_5)$] \quad Two consecutive $\lambda$-blocks with the same first two terms are mapped from $\set{1,3}$ or $\set{2,4}$.
Similarly, two consecutive $\lambda$-blocks with the same last two terms are mapped from $\set{1,4}$ or $\set{2,3}$.
\item[$(L_6)$] \quad Let $\gamma_1$ and $\gamma_2$ be distinct $\lambda$-blocks with equal terms
at indices $1$ and $2$ or at indices $3$ and $4$.
For $\lambda$-blocks $\gamma_3$, $\gamma_4$, and $\gamma_5$,
in $\slam{t}$, there is at most one of the subsequences $\gamma_1 \gamma_3 \gamma_5$ and $\gamma_2 \gamma_4 \gamma_5$,
since otherwise the property $(L_3)$ would be violated in $\slam{t-1}$.
\item[$(L_7)$] \quad If for two $\lambda$-blocks $\gamma_1 = x_{4i+1}x_{4i+2}x_{4i+3}x_{4i+4}$ and $\gamma_2 = x_{4j+1}x_{4j+2}x_{4j+3}x_{4j+4}$
there is an $\ell \in \set{1,2,3,4}$ such that $x_{4i+\ell} = x_{4j+\ell}$ and $4$ divides $|j-i|$, then $\gamma_1 = \gamma_2$.
On the other hand, if $4$ does not divide $|j-i|$, but $|j-i|$ is even, then $\gamma_1 \neq \gamma_2$.
\item[$(L_8)$] \quad If for a $\lambda$-block $\gamma$ one term is known, then it is one of two possible $\lambda$-blocks.
In particular, if the known term is at index $1$ or $2$ in $\gamma$, then either $\lambda^{-1}(\gamma) \in \set{1,3}$ or $\lambda^{-1}(\gamma) \in \set{2,4}$.
If the known term is at index $3$ or $4$ in $\gamma$, then either $\lambda^{-1}(\gamma) \in \set{1,4}$ or $\lambda^{-1}(\gamma) \in \set{2,3}$.
\end{itemize}
We leave the verification of the above properties to the reader and proceed by proving that $\slam{t}$ is Thue.
\begin{lemma}
\label{lem:k6}
The sequence $\slam{t}$ is Thue for every non-negative integer $t$.
\end{lemma}
\begin{proof}
Suppose the contrary, and let $t$ be the minimum such that there is a repetition in $\slam{t}$.
Denote the $i$-th term of $\slam{t}$ by $x_i$.
Let $\rho_1 \rho_2 = y_1\dots y_r y_{r+1}\dots y_{2r}$ be a repetition of minimum length.
We first show that $r > 4$. The cases with $r \le 3$ are trivial, so suppose $r=4$.
By $(L_1)$, we have that $y_1$ is not at index $4i+1$ in $\slam{t}$ (for any $i$, $0\le i \le 4^{t-1}-1$),
and by $(L_3)$, it is not at index $4i+2$ nor $4i+4$.
Hence, assume $y_1$ is at index $4i+3$.
Denote the $\lambda$-block $x_{4i+5}x_{4i+6}x_{4i+7}x_{4i+8} (= y_3y_4y_5y_6)$ by $\gamma_1$.
By $(L_1)$, we have that $\gamma_0 = x_{4i+1}x_{4i+2}x_{4i+3}x_{4i+4} = y_4y_3y_5y_6$ and similarly,
$\gamma_2 = x_{4i+9}x_{4i+10}x_{4i+11}x_{4i+12} = y_3y_4y_6y_5$. By $(L_5)$,
this means that if $\lambda^{-1}(\gamma_1) \in \set{1,2}$, then $\lambda^{-1}(\gamma_0), \lambda^{-1}(\gamma_2) \in \set{3,4}$, and analogously,
if $\lambda^{-1}(\gamma_1) \in \set{3,4}$, then $\lambda^{-1}(\gamma_0), \lambda^{-1}(\gamma_2) \in \set{1,2}$, a contradiction to $(L_3)$.
Hence, $r > 4$.
Let $j$ be the index of $y_1$ in $\slam{t}$, i.e. $y_1 = x_j$.
If $j$ is odd, then by $(L_3)$, either $\set{x_{j},x_{j+1}} = \set{1,2}$ or $\set{x_{j},x_{j+1}} = \set{3,4}$,
and without loss of generality, we may assume the former.
Thus, also $\set{x_{j+r},x_{j+r+1}} = \set{1,2}$, which implies that $r$ must be even.
In the case when $j$ is even, $(L_3)$ similarly implies that $x_j \in \set{1,2}$ and $x_{j+1} \in \set{3,4}$,
and hence $x_{j+r} \in \set{1,2}$ and $x_{j+r+1} \in \set{3,4}$. Consequently, $r$ is again even.
Finally observe that by $(L_2)$, from $r$ being even and $x_j = x_{j+r}$ it follows that $r$ is divisible by $4$.
Suppose now that $j = 4i+1$, for some $i$.
Then, since $r$ is divisible by $4$, $\rho_1$ and $\rho_2$ are comprised of $\tfrac{r}{4}$ $\lambda$-blocks each,
the first starting with $x_j$.
This in turn means that there is a repetition in $\slam{t-1}$ as every $\lambda$-block represents one term in $\slam{t-1}$,
a contradiction to the minimality of $t$.
Next, suppose $j = 4i + 2$. By $(L_3)$, we have that $x_{j-1} = x_{j+r-1}$, and hence
$\rho_1' \rho_2' = x_{j-1}x_{j}\dots x_{j+r-1} x_{j+r} \dots x_{j+2r-2}$ is also a repetition in $\slam{t}$,
where $j-1 = 4i + 1$, and hence the reasoning in the above paragraph applies.
Suppose $j = 4i + 4$. Then, analogous to the previous case, we infer $x_{j+r} = x_{j+2r}$, and hence
$\rho_1' \rho_2' = x_{j+1}\dots x_{j+r} x_{j+r+1} \dots x_{j+2r}$ is also a repetition in $\slam{t}$,
where $j+1 = 4(i+1) + 1$, so the reasoning for $j = 4i+1$ applies again.
Finally, consider the case with $j = 4i + 3$. If $r = 8$, from $x_{j+2}x_{j+3}x_{j+4}x_{j+5} = x_{j+10}x_{j+11}x_{j+12}x_{j+13}$
it follows that the $\lambda$-block $x_{j+6}x_{j+7}x_{j+8}x_{j+9}$ is surrounded by the same $\lambda$-blocks, which
contradicts $(L_4)$.
Hence, we may assume $r \ge 12$. Since the $\lambda$-blocks $x_{j+6}x_{j+7}x_{j+8}x_{j+9}$ and $x_{j+r+6}x_{j+r+7}x_{j+r+8}x_{j+r+9}$ are equal,
and $r$ is divisible by $4$, it follows that also $x_{j-2}x_{j-1}x_{j}x_{j+1} = x_{j+r-2}x_{j+r-1}x_{j+r}x_{j+r+1}$
and we may apply the reasoning for the case with $j=4i+1$ on the repetition $x_{j-2}\dots x_{j+2r-3}$.
Hence, $\slam{t}$ is Thue.
\end{proof}
Now, take the circular sequence $\cir{4}{4^{t-1}}$, with $\varphi = 5 \ 6 \ 7 \ 8$,
and use sequence wreathing on $\slam{t}$ and $\cir{4}{4^{t-1}}$ to obtain the sequence
$$
\varphi_6^t = \slam{t} \wreath_4 \cir{4}{4^{t-1}}\,.
$$
Similarly as above, we refer to the base-blocks of $\varphi_6^t$ as $\lambda$-blocks, and to the wrap-blocks as $\zeta$-blocks.
The terms of $\lambda$-blocks (resp. $\zeta$-blocks) are referred to as $\lambda$-terms (resp. $\zeta$-terms).
The sequence $\varphi_6^2$ is hence:
$$
\underbrace{1 \ 2 \ 3 \ 4}_{\lambda(1)} \ \ 5 \ 6 \ 7 \ 8 \ \ \ \underbrace{2 \ 1 \ 4 \ 3}_{\lambda(2)} \ \ 6 \ 7 \ 8 \ 5 \ \ \ \underbrace{1 \ 2 \ 4 \ 3}_{\lambda(3)} \ \ 7 \ 8 \ 5 \ 6 \ \ \underbrace{2 \ 1 \ 3 \ 4}_{\lambda(4)} \ \ 8 \ 5 \ 6 \ 7
$$
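The wreathing step is equally mechanical.
The sketch below (again purely illustrative, with names of our choosing) reproduces $\varphi_6^2$ and tests the $6$-Thue property by brute force, in line with Lemma~\ref{lem:6k} below; here we assume, consistently with the example displayed above, that the $i$-th $\zeta$-block of $\cir{4}{4^{t-1}}$ is the cyclic shift of $\varphi = 5 \ 6 \ 7 \ 8$ by $i$ positions, and that a $d$-subsequence consists of the terms lying on an arithmetic progression of common difference $d$.
\begin{verbatim}
# Illustrative sketch: wreath lambda^2 with the circular sequence, test 6-Thueness.
BASE = [1,2,3,4, 2,1,4,3, 1,2,4,3, 2,1,3,4]    # lambda^2, i.e. four lambda-blocks
PHI  = [5, 6, 7, 8]

def wreath(base, block=4):
    out = []
    for i in range(0, len(base), block):
        s = (i // block) % 4                   # shift of the next zeta-block
        out += base[i:i + block] + PHI[s:] + PHI[:s]
    return out

def has_repetition(seq):
    n = len(seq)
    return any(seq[i:i + r] == seq[i + r:i + 2 * r]
               for r in range(1, n // 2 + 1) for i in range(n - 2 * r + 1))

phi62 = wreath(BASE)
print(phi62)                                   # reproduces the displayed sequence
print(all(not has_repetition(phi62[s::d])
          for d in range(1, 7) for s in range(d)))   # expected: True
\end{verbatim}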
It remains to prove that $\varphi_6^t$ is also $6$-Thue.
\begin{lemma}
\label{lem:6k}
The sequence $\varphi_6^t$ is $6$-Thue for every non-negative integer $t$.
\end{lemma}
\begin{proof}
By Lemmas~\ref{lem:insert} and~\ref{lem:k6}, we have that $\varphi_6^t$ is Thue.
Thus, we only need to prove that every $d$-subsequence of $\varphi_6^t$ is also Thue, for every $d \in \set{2,3,4,5,6}$.
First, we list some general properties and then consider $d$-subsequences separately regarding the values of $d$.
By $(L_3)$ and the definition of circular sequences, every seven consecutive terms of $\varphi_6^t$ are distinct.
Hence,
\begin{itemize}
\item[$(R_1)$] \quad there are no repetitions of length $2$ or $4$ in any $d$-subsequence of $\varphi_6^t$.
\end{itemize}
Furthermore, since the length of any $\lambda$-block and $\zeta$-block in $\varphi_6^t$ is $4$, one can deduce that:
\begin{itemize}
\item[$(R_2)$] \quad \textit{in every $d$-subsequence of $\varphi_6^t$ there are at most two consecutive $\lambda$-terms or $\zeta$-terms;}
\item[$(R_3)$] \quad \textit{in every $d$-subsequence of $\varphi_6^t$ any repetition contains $\lambda$-terms and $\zeta$-terms;}
\end{itemize}
Given a $d$-subsequence $\sigma = z_1 z_2 \dots z_n$ of $\varphi_6^t$ consisting of $n$ elements, we define a
mapping $\vartheta : \Sigma \rightarrow \set{N,C}^n$, where $\Sigma$ represents the set of all $d$-subsequences of $\varphi_6^t$,
mapping $\sigma$ to an $n$-component vector whose $i$-th component is $N$ if $z_i$ belongs to a $\lambda$-block and $C$ otherwise
($N$ and $C$ standing for a \textbf{n}on-cyclic and \textbf{c}yclic element, respectively).
We call $\vartheta(\sigma)$ the \textit{type vector} of $\sigma$.
\begin{itemize}
\item[$(T_1)$] \quad The type vector of any $2$-subsequence contains $CCNN$ or $NNCC$ in the first five components
(depending on the position of the first term in the sequence).
\item[$(T_2)$] \quad The type vector of any $4$-subsequence equals $NCNC$ or $CNCN$ in the first four components.
\end{itemize}
Now, suppose the contrary, and let $\rho=\rho_1 \rho_2 = y_1\dots y_r y_{r+1}\dots y_{2r}$ be a repetition in some $d$-subsequence of $\varphi_6^t$.
We start by analyzing possible values of $r$.
By $(R_1)$, $r \ge 3$, so suppose first that $r = 3$. We will consider the cases regarding the type vectors of $\rho$.
By $(R_2)$ and $(R_3)$, there are six possible type vectors for $\rho$, namely:
$CCN \ CCN$, $CNC \ CNC$, $CNN \ CNN$, $NCC \ NCC$, $NCN \ NCN$, and $NNC \ NNC$. By $(T_1)$ and $(T_2)$, such a sequence does not appear
in any $\ell$-subsequence for $\ell \in \set{2,4}$. Hence, it remains to consider $\ell \in \set{3,5,6}$.
Let $j$, $j \in \set{1,2,3,4}$, be the index of $y_1$ in the $\lambda$- or $\zeta$-block it belongs to.
In Table~\ref{tbl:types3}, we present type vectors regarding $j$'s and $\ell$'s.
\begin{table}[htp!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$j \ / \ \ell$ & 3 & 5 & 6 \\\hline
$1$ & NNC \ NCC & NCN \ CCN & NCC \ NNC \\\hline
$2$ & NCC \ NCN & \textbf{NCN \ NCN} & NCC \ NNC \\\hline
$3$ & \textbf{NCN \ NCN} & NCC \ NCN & NNC \ CNN \\\hline
$4$ & NCN \ CCN & NNC \ NCC & NNC \ CNN \\\hline
\end{tabular}
\caption{The type vectors of $\rho$ regarding $j$'s and $\ell$'s in the case $r=3$, assuming the first term $y_1$
lies in a $\lambda$-block. In the symmetric case, when $y_1$ is in a $\zeta$-block,
the type vector values are simply interchanged.}
\label{tbl:types3}
\end{table}
The only two type vectors matching the possibilities for the type vectors of $\rho$ are in the cases $(j,\ell) \in \set{(3,3),(2,5)}$.
In the case $(3,3)$, the indices of $y_1,\dots,y_{6}$ within their blocks are respectively $3,2,1,4,3,2$.
When $y_1$ belongs to a $\lambda$-block, $y_2$ and $y_5$ must belong to consecutive $\zeta$-blocks. But, then $y_2 \neq y_5$,
due to the construction of circular sequences.
On the other hand, if $y_1$ belongs to a $\zeta$-block, then $y_2$ is at index $2$ in a $\lambda$-block
and $y_5$ is at index $3$ in a $\lambda$-block, so again $y_2 \neq y_5$, due to $(L_3)$.
In the case $(2,5)$, the indices of $y_1,\dots,y_{6}$ within their blocks are respectively $2,3,4,1,2,3$.
Suppose first that $y_1$ belongs to a $\zeta$-block.
Then, $y_2$ is at index $3$ in a $\lambda$-block
and $y_5$ is at index $2$ in a $\lambda$-block, and hence $y_2 \neq y_5$, due to $(L_3)$.
Finally, suppose $y_1$ belongs to a $\lambda$-block.
Then, $y_2$ is at index $3$ in a $\zeta$-block and $y_5$ is at index $2$ in a $\zeta$-block,
however, the two $\zeta$-blocks are not consecutive, and hence $y_2 \neq y_5$.
It follows that $r \ge 4$.
Using the construction properties of circular sequences,
we can obtain additional properties of $r$ regarding the structure of type vectors.
\begin{claim}
\label{cl:zeta_consec}
If there are two consecutive $\zeta$-terms in $\rho_1$, then $32$ divides $d \cdot r$.
\end{claim}
Note that we do not require the two terms being in the same $\zeta$-block.
\begin{proofclaim}
We prove the claim by showing that having two consecutive $\zeta$-terms, $x_{i}$ and $x_{i+d}$, in $\rho_1$ implies
that the corresponding two $\zeta$-terms, $x_{j}$ and $x_{j+d}$, in $\rho_2$ must appear at the same indices in their $\zeta$-blocks.
This fact further implies that the difference between $i$ and $j$ is $(8 \cdot 4)s$ for some positive integer $s$
($8$ since each pair of $\lambda$- and $\zeta$-blocks has $8$ terms,
and $4$ since $\zeta$-blocks have period $4$ in $\cir{4}{4^{t-1}}$).
On the other hand, there are $d \cdot r$ terms between $x_i$ and $x_j$, and hence $32$ divides $d \cdot r$.
We consider the cases regarding $d$. For $d=1$, the claim is trivial.
For $d=2$, the pair of terms $x_{i}$ and $x_{i+2}$ can appear twice in four distinct $\zeta$-blocks.
However, since the parity of the indices $i$ and $j$ must be the same in this case, they must
appear in the same $\zeta$-block in $\rho_2$.
In the case $d=3$, a pair of two symbols appears only once in four distinct $\zeta$-blocks, hence there is nothing to prove.
In the case $d=4$, it is not possible to have two consecutive $\zeta$-terms.
In the cases $d=5$ and $d=6$, the two terms belong to two consecutive $\zeta$-blocks.
In the former, there is again only one appearance of each pair per four blocks, so it remains to consider the case $d=6$.
There are two possible appearances of a pair, but since the indices must have the same parity, the pair must appear in the
same two $\zeta$-blocks. This completes the proof of the claim.
\end{proofclaim}
We continue by considering the cases regarding $d$.
\begin{itemize}
\item{} $d=2$.
If there are no two consecutive terms of $\rho_1$ that belong to the same $\zeta$-block, then $r = 4$
and $y_1$ is a part of a $\zeta$-block.
But in this case, there are two consecutive $\lambda$-blocks, uniquely determined by $y_2$, $y_3$ and
$y_6$, $y_7$, which must be equal as $y_2 = y_6$ and $y_3 = y_7$, a contradiction to Lemma~\ref{lem:k6}.
So, there is at least one $\zeta$-block which contains two terms of $\rho_1$.
By Claim~\ref{cl:zeta_consec}, $r$ is divisible by $32/2 = 16$.
Suppose $y_1$ is the first (resp. the second) term of some $\lambda$-block.
Then, $y_2$, $y_{r+1}$, and $y_{r+2}$ are also $\lambda$-terms, and hence every $\lambda$-block of $\rho$ is uniquely determined.
By Lemma~\ref{lem:deter}, it follows there is a repetition in $\slam{t}$, a contradiction to Lemma~\ref{lem:k6}.
Now, suppose $y_1$ is the third (resp. the fourth) term of some $\lambda$-block $\gamma_1$.
Let $\gamma_2$ be the $\lambda$-block determined by $y_r$ and $y_{r+1}$. Clearly, $\gamma_1 \neq \gamma_2$,
otherwise there is a repetition in $\slam{t}$, by Lemma~\ref{lem:deter}.
However, since the third and the fourth terms of $\gamma_1$ and $\gamma_2$ are equal,
they are either mapped by $\lambda$ from $\set{1,4}$ or $\set{2,3}$.
As the $\lambda$-block in $\hat{\rho_1}$ following $\gamma_1$ is the same as the $\lambda$-block in $\hat{\rho_2}$ following $\gamma_2$,
we obtain a contradiction due to $(L_6)$.
Finally, suppose $y_1$ is a part of a $\zeta$-block. Since $r$ is divisible by $16$,
the number of $\lambda$-blocks in each of $\rho_1$ and $\rho_2$ is divisible by $4$,
and since all of them are uniquely determined, we have a repetition in $\slam{t}$
(in fact already in $\slam{t-1}$), a contradiction.
\item{} $d=3$.
We first show that there are two consecutive $\zeta$-terms in $\rho_1$.
Suppose the contrary. Then, since $r > 3$ and the type
vectors of $\rho_1$ and $\rho_2$ must match, there are two consecutive $\lambda$-terms in $\rho_1$.
But, in the type vector, between two pairs of two consecutive $\lambda$-terms, for $d=3$,
there are two consecutive $\zeta$-terms, a contradiction.
Hence, we may assume there are two consecutive $\zeta$-terms in $\rho_1$ and by Claim~\ref{cl:zeta_consec},
$32$ divides $3r$. Observe also that for $d=3$, there is at least one term from $\rho$ in every
$\lambda$-block of the covering sequence of $\rho$. Then, by $(L_7)$, we infer that all
$\lambda$-blocks in the covering sequences of $\rho_1$ are equal to the corresponding
$\lambda$-blocks in the covering sequences of $\rho_2$, and hence there is a repetition in $\slam{t-1}$, a contradiction.
\item{} $d=4$.
In this case, all the terms of $\rho$ are at the same indices in their $\lambda$- and $\zeta$-blocks.
As there is at least one $\zeta$-term, by construction of circular sequences,
we have that $32$ divides $4r$, and hence $8$ divides $r$.
Thus, by $(L_7)$, all $\lambda$-blocks in the covering sequences of $\rho_1$ are equal to the corresponding ones in the covering sequences of $\rho_2$,
and hence there is a repetition in $\slam{t-1}$, a contradiction.
\item{} $d=5$.
In this case, the $\lambda$- and $\zeta$-blocks of the covering sequence of $\rho_1$ contain
precisely one term from $\rho_1$ each, with the exception of every fifth block, which is skipped.
Hence, there are two consecutive $\lambda$- or $\zeta$-terms in $\rho_1$ as soon as $r > 4$.
As $r > 3$, the only possible $\rho_1$ with no consecutive terms of the same type has length $4$.
However, in such a case, the terms $y_1$ and $y_{r+1}$ are not of the same type, so $r > 4$.
Suppose first there are no consecutive $\zeta$-terms in $\rho_1$. In that case, there are
two consecutive $\lambda$-terms in $\rho_1$, and hence also in $\rho_2$. Moreover, since
between every pair of consecutive $\lambda$-terms there are two consecutive $\zeta$-terms,
the only possible $r$ for such $\rho$, satisfying also that the type vectors of $\rho_1$ and $\rho_2$
are the same, is $8$. However, then $y_1$ and $y_{r+1}$ are both $\zeta$-terms but the difference
between their indices in the covering sequence is $5\cdot 8 = 40$, meaning that $y_{1} \neq y_{r+1}$.
So, we may assume there are two consecutive $\zeta$-terms in $\rho_1$ and, by Claim~\ref{cl:zeta_consec},
$32$ divides $5r$ (hence $32$ divides $r$ also).
By $(L_7)$, all $\lambda$-blocks in the covering sequence of $\rho_1$ that contain one term from $\rho_1$
are equal to the corresponding $\lambda$-blocks in the covering sequence of $\rho_2$.
Furthermore, since in the covering sequence of $\rho$ three out of every four $\lambda$-blocks contain
one term from $\rho$, also the $\lambda$-block $\gamma_0$ without a term is uniquely determined, unless it is the first
$\lambda$-block of $\hat{\rho_1}$ or the last $\lambda$-block of $\hat{\rho_2}$.
In the case when $\gamma_0$ is determined, the covering sequence of $\rho_1$ contains the same sequence of $\lambda$-blocks as
the covering sequence of $\rho_2$, and so there is a repetition in $\slam{t-1}$, a contradiction.
Hence, we may assume $\gamma_0$ is not uniquely determined, and without loss of generality, suppose it is
the first $\lambda$-block of $\hat{\rho_1}$.
Since $\gamma_0$ is not uniquely determined, it is mapped from either the third or the fourth symbol of some $\lambda$-block $\xi_0$ of $\slam{t-1}$.
In the former case, $\xi_0$ is completely determined, since $r \ge 32$ and one can determine the $\lambda$-block following $\xi_0$ in $\slam{t-1}$,
and hence also $\gamma_0$ is completely determined.
In the latter case, observe that, $y_2\dots y_{r+1}y_{r+2}\dots y_{2r}x_{j+5}$ (with $y_{2r} = x_j$) is also a repetition,
and considering it, we have all $\lambda$-blocks in $\hat{\rho}$ determined, a contradiction.
\item{} $d=6$.
In this case, $\rho_1$ alternately contains two consecutive $\lambda$- and two consecutive $\zeta$-terms,
with a possible shift in the beginning depending on the index of first term in the covering sequence of $\rho$.
Hence, as $r > 3$, there are always two consecutive $\zeta$-terms in $\rho_1$ unless $r=4$ and
the type vector of $\rho_1$ is $CNNC$. However, in that case $y_1 \neq y_{r+1}$ by the construction of circular sequences.
Thus, we may assume there are two consecutive $\zeta$-terms in $\rho_1$ and, by Claim~\ref{cl:zeta_consec},
$32$ divides $6r$, hence $16$ divides $r$.
Let $r = 16t$; then the length of the covering sequence of $\rho_1$ is $6 \cdot 16t = 96t$
and therefore there are $12t$ $\lambda$-blocks, where every two out of three consecutive $\lambda$-blocks contain a term from $\rho_1$.
By $(L_7)$, all $\lambda$-blocks in the covering sequence of $\rho_1$ that contain one term from $\rho_1$
are equal to the corresponding $\lambda$-blocks in the covering sequence of $\rho_2$.
Recall that a $\lambda$-block is not uniquely determined by one term; it can be one of two possible blocks (see $(L_8)$).
Let $\sigma^{t}$ be the covering sequence of $\rho$ with all $\zeta$-blocks removed and let
$\sigma^{t-1} = \lambda^{-1}(\sigma^{t})$. Clearly, $\sigma^{t-1}$ is a subsequence of $\slam{t-1}$.
Let $\sigma_1^{t-1}$ and $\sigma_2^{t-1}$ be the sequences defined analogously for $\rho_1$ and $\rho_2$, respectively.
As we deduced above, $\sigma_1^{t-1} = z_1z_2\dots z_{12t}$ has $12t$ elements.
We consider four subcases regarding the index of $z_1$ in $\slam{t-1}$.
Note that in each of the four cases, for every $z_i$ that is a preimage of some $\lambda$-block with one term from $\rho$,
we can uniquely determine which symbol $z_i$ represents simply by $(L_8)$ and the position of $z_i$ in the $\lambda$-block of $\slam{t-1}$.
Consequently, every ``complete'' $\lambda$-block of $\sigma_1^{t-1}$ is uniquely determined by $(L_3)$, since we know at least two of its terms, and in the case,
when two terms are known, they are at indices $2$ and $3$.
Suppose first $z_1$ is at index $4i+1$ in $\lambda^{t-1}$ for some $i$.
Then, there are $3t$ complete uniquely determined $\lambda$-blocks in $\sigma_1^{t-1}$, and hence by Lemma~\ref{lem:deter},
there is a repetition also in $\slam{t-1}$, a contradiction.
Next, suppose $z_1$ is at index $4i+4$ in $\lambda^{t-1}$. There are $3t-1$ complete uniquely determined $\lambda$-blocks in $\sigma_1^{t-1}$
and one $\lambda$-block, with $3$ terms $z_{12t-2}z_{12t-1}z_{12t}$. However, as argued above, the latter is also uniquely determined, which
means that $z_2\dots z_{24t}w$, where $w$ is the element at index $4i+4+24t+1$ in $\slam{t-1}$, is also a repetition, and hence we may use the
argumentation for $z_1$ being at index $4i+1$.
Suppose $z_1$ is at index $4i+3$ in $\lambda^{t-1}$. In this case, there are $3t-2$ complete uniquely determined $\lambda$-blocks in $\sigma_1^{t-1}$,
and two $\lambda$-blocks having two terms in $\sigma_1^{t-1}$. The second one, $z_{12t-1}z_{12t}z_{12t+1}z_{12t+2}$ has the other two terms in $\sigma_2^{t-1}$.
Now, if $3t$ is divisible by $4$, then the $\lambda$-block $z_{-1}z_{0}z_1z_2$ equals $z_{12t-1}z_{12t}z_{12t+1}z_{12t+2}$ and we again can shift the
sequence to the left as above, obtaining a repetition.
Hence $3t$ is not divisible by $4$.
Consider the $\lambda$-blocks $\alpha_1=z_{3}z_{4}z_{5}z_{6}$ and $\alpha_2=z_{7}z_{8}z_{9}z_{10}$. They are uniquely determined and they must be
equal to the $\lambda$-blocks $z_{3+12t}z_{4+12t}z_{5+12t}z_{6+12t}$ and $z_{7+12t}z_{8+12t}z_{9+12t}z_{10+12t}$, which is not possible due to
the parity condition and $(L_6)$.
Finally, suppose $z_1$ is at index $4i+2$ in $\lambda^{t-1}$. Again, if $3t$ is divisible by $4$, then we shift the sequence by one to the right,
as the first (incomplete) $\lambda$-block in $\sigma_1^{t-1}$ must match the first (incomplete) $\lambda$-block in $\sigma_2^{t-1}$,
and we obtain a repetition in $\slam{t-1}$. Otherwise, $3t$ is not divisible by $4$, and we obtain a contradiction on the equality of first
two complete $\lambda$-blocks in $\sigma_1^{t-1}$ and $\sigma_2^{t-1}$.
\end{itemize}
\end{proof}
\section{Discussion}
In this paper, we improve the current state of Conjecture~\ref{conj:k+2} by showing that it is true for every integer $k$ between $1$ and $8$.
In particular, we present two proof techniques, which are similar in essence but very different in practice.
The technique presented in Section~\ref{sec:pas} is, provided computing resources are available, efficient for confirming Conjecture~\ref{conj:k+2}
for small $k$'s: one can verify the small instances by computer, while the statement of the conjecture then holds almost trivially for larger instances.
However, this approach cannot prove Conjecture~\ref{conj:k+2} in general, nor even for an infinite number of integers.
On the other hand, the method described in Section~\ref{sec:cons} is more promising.
We use a special construction of Thue sequences with properties that allow us to prove that they are also $k$-Thue.
This technique requires more argumentation to prove that the generated sequences are indeed $k$-Thue, but it allows one to establish the property
for a larger, possibly infinite, set of $k$'s.
\bibliographystyle{plain}
Let $G$ be a group and $X$ a $G$-module.
Motivated by topological geometry,
Staic \cite{staic20093} defined symmetric cohomology of groups
by constructing an action of the symmetric group $S_{\bullet +1}$
on the standard resolution $\cpx{C}{}{\bullet}{G}{X}$
which gives the group cohomology $\hcoho{\bullet}{G}{X}$.
That is, by taking cohomology of the subcomplex
$\cpx{CS}{}{\bullet}{G}{X} = \cpx{C}{}{\bullet}{G}{X}^{S_{\bullet+1}}$
of $\cpx{C}{}{\bullet}{G}{X}$ fixed by $S_{\bullet +1}$,
Staic defined the symmetric cohomology
$\shcoho{\bullet}{G}{X} = \coho{\bullet}{\cpx{CS}{}{\ast}{G}{X}}$.
In the same paper, for a topological space $U$
and the $i$-th homotopy group $\pi_i(U)$,
Staic proved that, if $\pi_1(U)$ has no elements of order $2$ or $3$,
then the image of some $\alpha \in \shcoho{3}{\pi_1(U)}{\pi_2(U)}$
in $\hcoho{3}{\pi_1(U)}{\pi_2(U)}$
under the canonical map is the Postnikov invariant.
In \cite{staic2009symmetric}, it was proved
that the second symmetric cohomology group $\shcoho{2}{G}{X}$
corresponds to extensions of groups
which satisfy certain conditions.
Moreover, Staic studied the injectivity of the canonical map
$\shcoho{\bullet}{G}{X} \longrightarrow \hcoho{\bullet}{G}{X}$
induced by the inclusion
$\cpx{CS}{}{\bullet}{G}{X} \longhookrightarrow \cpx{C}{}{\bullet}{G}{X}$.
Singh \cite{singh2013symmetric} defined the symmetric continuous
cohomology of topological groups and the symmetric smooth cohomology
of Lie groups.
Recently, Coconet-Todea \cite{coconet2021symmetric} defined
the symmetric Hochschild cohomology of twisted group algebras,
which are a generalization of group algebras.
The aim of this paper is to study the symmetric cohomology
and the symmetric Hochschild cohomology
of cocommutative Hopf algebras, which are another generalization of group algebras.
This paper is organized as follows:
In Section 2, we recall some properties of Hopf algebras
and the definition of symmetric cohomology of groups.
Let $k$ be a field and $A$ a cocommutative Hopf algebra over $k$.
In Section 3, we define the symmetric cohomology
$\shcoho{\bullet}{A}{M}$ of $A$ with coefficients
in any left $A$-module $M$ by constructing an action
of the symmetric group $S_{\bullet+1}$ on the standard non-homogeneous
complex $\cpx{C}{}{\bullet}{A}{M}$
which gives the Hopf algebra cohomology $\hcoho{\bullet}{A}{M}$.
That is, this action gives the fixed subcomplex
$\cpx{CS}{}{\bullet}{A}{M} = \cpx{C}{}{\bullet}{A}{M}^{S_{\bullet +1}}$
of $\cpx{C}{}{\bullet}{A}{M}$, and this defines the symmetric cohomology
$\shcoho{\bullet}{A}{M} = \coho{\bullet}{\cpx{CS}{}{\ast}{A}{M}}$.
Similarly, we define the symmetric Hochschild cohomology
$\shoch{\bullet}{A}{M}$ of $A$ with coefficients
in any $A$-bimodule $M$ by constructing an action
of the symmetric group $S_{\bullet+1}$ on the standard non-homogeneous
complex $\cpx{C}{e}{\bullet}{A}{M}$
which gives the Hochschild cohomology $\hoch{\bullet}{A}{M}$.
This action gives the fixed subcomplex
$\cpx{CS}{e}{\bullet}{A}{M} = \cpx{C}{e}{\bullet}{A}{M}^{S_{\bullet +1}}$
of $\cpx{C}{e}{\bullet}{A}{M}$.
From this, we define the symmetric Hochschild cohomology
$\shoch{\bullet}{A}{M} = \coho{\bullet}{\cpx{CS}{e}{\ast}{A}{M}}$.
In Section 4, first, we consider the resolution of $k$
which gives the symmetric cohomology and the resolution of $A$
which gives the symmetric Hochschild cohomology.
Eilenberg and MacLane proved an isomorphism
between the group cohomology and the Hochschild cohomology
of group algebras (\cite{eilenberg1947cohomology}, see Theorem \ref{thm.iso-grp-coho}).
Moreover, its isomorphism was generalized to the case of Hopf algebras
by Ginzburg and Kumar (\cite{ginzburg1993cohomology}, see Remark \ref{rem.iso-hopf-coho}).
In Theorem \ref{thm.iso-symcoho-symHoch},
we obtain an isomorphism, which is a symmetric version
of these isomorphisms,
between the symmetric cohomology $\shcoho{n}{A}{{}^\mathrm{ad} M}$
and the symmetric Hochschild cohomology $\shoch{n}{A}{M}$
for any $A$-bimodule $M$ and $n \geq 0$,
where ${}^\mathrm{ad} M$ is a left $A$-module via the left adjoint action.
Also, similar to the case of symmetric cohomology of groups,
there is an isomorphism of $k$-vector spaces
$\shcoho{1}{A}{M} \cong \hcoho{1}{A}{M}$
and the canonical map $\shcoho{2}{A}{M} \longrightarrow \hcoho{2}{A}{M}$
induced by the inclusion
$\cpx{CS}{}{\bullet}{A}{M} \longhookrightarrow \cpx{C}{}{\bullet}{A}{M}$
is injective.
Moreover, we obtain a result on the projectivity of the above resolution
of $k$ in Theorem \ref{thm.res-proj}.
Finally, we calculate the resolution which gives the symmetric cohomology
of group algebras of cyclic groups of odd prime order.
Throughout the paper, let $k$ be a field,
and we write $\otimes$ for $\otimes_k$.
\section{Preliminaries}
In this section, we describe some properties of Hopf algebras
and the definition of symmetric cohomology of groups.
\subsection{Properties of Hopf algebras}
Let $A$ be a Hopf algebra
with a coproduct $\map{\Delta}{A}{A \otimes A}$,
a counit $\map{\varepsilon}{A}{k}$,
and an antipode $\map{S}{A}{A}$.
We say that $A$ is \textit{cocommutative}
if $ \mathrm{tw} \circ \Delta = \Delta$,
where $\map{\mathrm{tw}}{A \otimes A}{A \otimes A}$ is the morphism given
by $\mathrm{tw}(a \otimes b) = b \otimes a$ for any $a,b \in A$.
We will use the standard notation for the coproduct,
the so-called \textit{Sweedler notation}; we write
$\Delta(a) = \sum a^{(1)} \otimes a^{(2)}$,
where the notation $a^{(1)},a^{(2)}$ for tensor factors is symbolic.
Throughout the paper, we omit the summation symbol $\sum$
of Sweedler notation when no confusion occurs.
Next, we recall some properties of Hopf algebras.
\begin{prop}[{\cite[Proposition 4.0.1]{swedler1969hopf}}]
Let $A$ be a Hopf algebra. Then the followings hold.
\begin{enumerate}[{\rm (1)}]
\item $S(ab) = S(b)S(a)$ for any $a,b \in A$,
\item $S(1_A) = 1_A$,
\item $\varepsilon \circ S = \varepsilon$,
\item $(S \otimes S) \circ \mathrm{tw} \circ \Delta = \Delta \circ S$,
\item If $A$ is commutative or cocommutative,
then $S^2 = \id{A}$ holds.
\end{enumerate}
\end{prop}
For left $A$-modules $M$ and $N$,
left $A$-module structures on $M \otimes N$ and $\Hom{k}{M}{N}$
are defined as follows referring
to Witherspoon \cite{witherspoon2019hochschild}.
\begin{defin}[cf. {\cite[Section 9.2]{witherspoon2019hochschild}}]
Let $A$ be a Hopf algebra, and $M$ and $N$ left $A$-modules.
\begin{enumerate}[{\rm (1)}]
\item The $k$-vector space $M \otimes N$ is a left $A$-module via
$a \cdot ( m \otimes n) = a^{(1)}m \otimes a^{(2)}n$
for any $a \in A$, $m \in M$ and $n \in N$.
\item The $k$-vector space $\Hom{k}{M}{N}$ is a left $A$-module via
$(a \cdot f)(m) = a^{(1)} f(S(a^{(2)}) m)$
for any $a \in A$, $m \in M$ and $f \in \Hom{k}{M}{N}$.
In particular, if $N = k$, then $\Hom{k}{M}{k}$
has a left $A$-module structure given by
$(a \cdot f)(m) = f(S(a) m)$
for any $a \in A$ and $m \in M$, where $k$ is a trivial $A$-module via
$a \cdot x = \varepsilon (a)x$
for any $a \in A$ and $x \in k$.
\item Let ${}^A M$ denote the $A$-submodule of $M$ given by
$$
{}^A M = \{ m \in M \mid a \cdot m = \varepsilon(a)m,
\text{ for any } a\in A \},
$$
which is called \textit{the submodule of $A$-invariants of $M$}.
Similarly, we denote the submodule of $A$-invariants of $M$
by $M^A$ when $M$ is a right $A$-module.
\item Let $M$ be an $A$-bimodule.
The left action on $M$ is defined by
$a \cdot m = a^{(1)} m S(a^{(2)})$
for any $a \in A$ and $m \in M$,
which is called \textit{the left adjoint action}.
Also, let ${}^{\mathrm{ad}}M$ denote the left $A$-module $M$
equipped with the left adjoint action.
\end{enumerate}
\end{defin}
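For instance, if $A = kG$ is a group algebra, so that $\Delta(g) = g \otimes g$, $\varepsilon(g) = 1$ and $S(g) = g^{-1}$ for every $g \in G$,
then the structures above take the familiar form:
$g \cdot (m \otimes n) = gm \otimes gn$ on $M \otimes N$,
$(g \cdot f)(m) = g f(g^{-1} m)$ on $\Hom{k}{M}{N}$,
${}^A M$ is the subspace of $G$-invariant elements of $M$,
and the left adjoint action on an $A$-bimodule $M$ is conjugation, $g \cdot m = g m g^{-1}$.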
\begin{lem}[cf. {\cite[Lemma 9.2.2]{witherspoon2019hochschild}}]
\label{lem.hopf-hom-invariant}
Let $A$ be a Hopf algebra, and $M$ and $N$ left $A$-modules.
Then there is an isomorphism
$\Hom{A}{M}{N} \cong {}^A(\Hom{k}{M}{N})$ as $k$-vector spaces.
\end{lem}
We recall the relationships among $A$-modules
that are obtained by taking the tensor product
and the set of homomorphisms.
\begin{lem}[cf. {\cite[Lemma 9.2.5]{witherspoon2019hochschild}}]
\label{lem.hopf-adjunction}
Let $A$ be a Hopf algebra,
and $L,M$ and $N$ left $A$-modules.
Then there is a natural isomorphism
$\Hom{k}{L \otimes M}{N} \cong \Hom{k}{L}{\Hom{k}{M}{N}}$
as left $A$-modules,
and a natural isomorphism
$\Hom{A}{L \otimes M}{N} \cong \Hom{A}{L}{\Hom{k}{M}{N}}$
as $k$-vector spaces.
\end{lem}
\begin{lem}[cf. {\cite[Lemma 9.2.7]{witherspoon2019hochschild}}]
\label{lem.hopf-hom-dual}
Let $A$ be a Hopf algebra,
and $L$ and $M$ left $A$-modules.
If $L$ is finite dimensional as a $k$-vector space,
then there is a natural isomorphism
$\Hom{k}{L}{M} \cong M \otimes \Hom{k}{L}{k}$ as left $A$-modules.
\end{lem}
The following fact concerns the projectivity of tensor products of modules over Hopf algebras.
\begin{lem}[cf. {\cite[Lemma 9.2.9]{witherspoon2019hochschild}}]
\label{lem.hopf-proj}
Let $P$ be a projective $A$-module and $M$ a left $A$-module.
Then $P \otimes M$ is a projective $A$-module.
If the antipode $S$ is bijective,
then $M \otimes P$ is a projective $A$-module.
\end{lem}
\subsection{Symmetric cohomology of groups}
In this subsection, we recall the definition
of the symmetric cohomology of groups
which is introduced by Staic \cite[Section 5]{staic20093}.
Let $n$ be a non-negative integer
and let $G^n$ denote the direct product of $n$ copies of a group $G$;
when $n=0$, $G^0$ is considered to be the trivial group.
For a $G$-module $X$, we put
$\cpx{C}{}{n}{G}{X} = \{ \map{f}{G^n}{X} \}$
and define $\map{\dif{C}{n}}{\cpx{C}{}{n}{G}{X}}{\cpx{C}{}{n+1}{G}{X}}$ by
\begin{align*}
\dif{C}{n}(f)(g_1, \dots ,g_{n+1}) &= g_1 f(g_2,\dots ,g_{n+1}) \\
&\quad\,+ \sum_{i=1}^n (-1)^i f(g_1 , \dots , g_i g_{i+1} , \dots ,g_{n+1}) + (-1)^{n+1} f(g_1 , \dots ,g_n).
\end{align*}
Then $\cpx{C}{}{\bullet}{G}{X}$ is a complex of abelian groups.
Its cohomology is called the \textit{group cohomology}
and is denoted by $\hcoho{\bullet}{G}{X}$.
An action of the symmetric group $S_{n+1}$ is constructed
on $\cpx{C}{}{n}{G}{X}$ for each $n\geq 0$.
Namely, the element $\sigma_i = (i,i+1) \in S_{n+1}$
acts on $\cpx{C}{}{n}{G}{X}$ by
\begin{align*}
(\sigma_1 \cdot f)(g_1 , \dots , g_n)
&= -g_1 f(g_1^{-1},g_1g_2,g_3 , \dots ,g_n), \\
(\sigma_i \cdot f)(g_1 , \dots , g_n)
&= -f(g_1,\dots ,g_{i-1}g_i,g_i^{-1},g_ig_{i+1},\dots ,g_n) ,
\text{ for } 2 \leq i \leq n-1 ,\\
(\sigma_n \cdot f)(g_1 , \dots , g_n)
&= -f(g_1 , \dots ,g_{n-1}g_n , g_n^{-1})
\end{align*}
for $f \in \cpx{C}{}{n}{G}{X}$ and $g_1 , \dots , g_n \in G$.
Note that this action is compatible with the differential $\dif{C}{}$,
and hence $\cpx{CS}{}{\bullet}{G}{X} = \cpx{C}{}{\bullet}{G}{X}^{S_{\bullet +1}}$
is a subcomplex of $\cpx{C}{}{\bullet}{G}{X}$.
Its cohomology is called the \textit{symmetric cohomology}
and is denoted by $\shcoho{\bullet}{G}{X}$.
On the other hand, $\hcoho{\bullet}{G}{X}$ can be defined alternatively
using a homogeneous complex.
For a non-negative integer $n$, let $\mathbb{Z} [G^{n+1}]$
be a $G$-module via
$$
g \cdot (g_0 , \dots , g_n) = (gg_0, \dots ,gg_n)
$$
for any $g,g_0, \dots , g_n \in G$.
There is a projective resolution of $\mathbb{Z}$ as $\mathbb{Z} G$-modules:
$$
\cdots \longrightarrow \mathbb{Z} [G^{n+1}]
\overset{\diff{n}{}}{\longrightarrow} \mathbb{Z} [G^{n}]
\longrightarrow
\cdots \longrightarrow \mathbb{Z} [G^2]
\overset{\diff{1}{}}{\longrightarrow} \mathbb{Z} [G^1]
\overset{\diff{0}{}}{\longrightarrow} \mathbb{Z} \longrightarrow 0 ,
$$
where we set
$\diff{n}{}(g_0 , \dots , g_{n})
= \sum_{i=0}^{n} (-1)^i (g_0 , \dots , g_{i-1} , g_{i+1} , \dots , g_{n})$.
Then $\cpx{K}{}{\bullet}{G}{X}$ and $\cpx{C}{}{\bullet}{G}{X}$
are isomorphic as complexes
where we put $\cpx{K}{}{n}{G}{X} = \Hom{G}{\mathbb{Z} [G^{n+1}]}{X}$
and $\dif{K}{n} = \Hom{G}{\diff{n+1}{}}{X}$. Hence $\hcoho{\bullet}{G}{X}$
is also defined using $\cpx{K}{}{\bullet}{G}{X}$.
We define an action of $S_{n+1}$ on $\cpx{K}{}{n}{G}{X}$
for each $n \geq 0$.
Namely, the element $\sigma_i$ acts on $\cpx{K}{}{n}{G}{X}$ by
$$
(\sigma_i \cdot f)(g_0 , \dots , g_n)
= -f(g_0 , \dots , g_{i} , g_{i-1} , \dots , g_n), \text{ for }1 \leq i \leq n
$$
for $f \in \cpx{K}{}{n}{G}{X}$ and $g_0, \dots , g_n \in G$.
Note that this action is compatible with the differential $\dif{K}{}$,
and hence
$\cpx{KS}{}{\bullet}{G}{X} = \cpx{K}{}{\bullet}{G}{X}^{S_{\bullet +1}}$
is a subcomplex of $\cpx{K}{}{\bullet}{G}{X}$.
Moreover, there is an isomorphism
$\cpx{CS}{}{\bullet}{G}{X} \cong \cpx{KS}{}{\bullet}{G}{X}$
as complexes
(see Bardakov-Neshchadim-Singh \cite{bardakov2018exterior},
Pirashvili \cite{pirashvili2018symmetric}).
Hence, we see that $\shcoho{\bullet}{G}{X}$ is also defined
using $\cpx{KS}{}{\bullet}{G}{X}$.
\section{Symmetric cohomology and symmetric Hochschild cohomology}
In this section, we recall the definition
of the Hopf algebra cohomology
which is a generalization of the group cohomology
and define the symmetric cohomology
for cocommutative Hopf algebras.
\subsection{Definition of symmetric cohomology}
Let us start with the definition of the Hopf algebra cohomology.
\begin{defin}[cf. {\cite[Definition 9.3.5]{witherspoon2019hochschild}}]
Let $A$ be a Hopf algebra and $M$ a left $A$-module.
The \textit{Hopf algebra cohomology} $\hcoho{\bullet}{A}{M}$
of $A$ with coefficients in $M$ is defined by
$$
\hcoho{n}{A}{M} = \ext{A}{n}{k}{M}.
$$
\end{defin}
We construct a standard non-homogeneous complex
which gives the Hopf algebra cohomology.
Let $n$ be a non-negative integer.
For any $b, a_0, \dots , a_n \in A$,
we regard $\res{T}{n}{}{A} = A^{\otimes n+1}$ as a left $A$-module via
$
b \cdot (a_0 \otimes a_1 \otimes \cdots \otimes a_n)
= ba_0 \otimes a_1 \otimes \cdots \otimes a_n.
$
Then there is a projective resolution of $k$ as left $A$-modules:
$$
\cdots \longrightarrow \res{T}{n}{}{A}
\overset{\diff{n}{T}}{\longrightarrow} \res{T}{n-1}{}{A}
\longrightarrow \cdots \longrightarrow
\res{T}{1}{}{A} \overset{\diff{1}{T}}{\longrightarrow}
\res{T}{0}{}{A} \overset{\diff{0}{T}}{\longrightarrow} k \longrightarrow 0,
$$
where we set
\vspace{-0.5em}
\begin{align*}
&\diff{0}{T} = \varepsilon ,\\
&\diff{n}{T}(a_0 \otimes \cdots \otimes a_n)
= \sum_{i=0}^{n-1} (-1)^i a_0 \otimes \cdots
\otimes a_i a_{i+1} \otimes \cdots \otimes a_n \\
&\hspace{35mm} + (-1)^{n} a_0 \otimes \cdots
\otimes a_{n-1} \varepsilon(a_n), \text{ for } n \geq 1.
\end{align*}
Moreover, we denote
$\cpx{C}{}{n}{A}{M} = \Hom{A}{\res{T}{n}{}{A}}{M}$
and $\dif{C}{n} = \Hom{A}{\diff{n+1}{T}}{M}$.
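Note that, under the isomorphism $\Hom{A}{\res{T}{n}{}{A}}{M} \cong \Hom{k}{A^{\otimes n}}{M}$
sending an $A$-linear map $f$ to $\widetilde{f}(a_1 \otimes \cdots \otimes a_n) = f(1 \otimes a_1 \otimes \cdots \otimes a_n)$,
the differential $\dif{C}{n}$ corresponds to the map
\begin{align*}
\widetilde{f} \longmapsto \Big( a_1 \otimes \cdots \otimes a_{n+1} \longmapsto \ & a_1 \widetilde{f}(a_2 \otimes \cdots \otimes a_{n+1})
+ \sum_{i=1}^{n} (-1)^i \widetilde{f}(a_1 \otimes \cdots \otimes a_i a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
& + (-1)^{n+1} \varepsilon(a_{n+1}) \widetilde{f}(a_1 \otimes \cdots \otimes a_n) \Big),
\end{align*}
which, for $A = kG$, recovers the differential of the non-homogeneous complex $\cpx{C}{}{\bullet}{G}{X}$ recalled in Section 2.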
Next, we define an action of the symmetric group $S_{n+1}$
on $\cpx{C}{}{n}{A}{M}$ for each $n \geq 0$.
Let $A$ be a cocommutative Hopf algebra.
Then we define the action of $\sigma_i = (i,i+1) \in S_{n+1}$
on $\cpx{C}{}{n}{A}{M}$ by
\begin{align}
\label{eq:sa1}
\begin{cases}
(\sigma_i f)(a_0 \otimes \cdots \otimes a_n)
= -f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)}
\otimes S(a_i^{(2)}) \otimes a_i^{(3)} a_{i+1} \otimes
\cdots \otimes a_n), \\
\hspace{105.5mm}\text{ for } 1 \leq i \leq n-1, \\
(\sigma_n f)(a_0 \otimes \cdots \otimes a_n)
= -f(a_0 \otimes \cdots \otimes a_{n-1}a_n^{(1)} \otimes S(a_n^{(2)}))
\end{cases}
\end{align}
for $f \in \cpx{C}{}{n}{A}{M}$ and $a_0 , \dots , a_n \in A$.
We show that these formulas \eqref{eq:sa1} are well-defined
as an action, and the action is compatible with the differential.
\begin{prop}
\label{prop.coho-nonhomo-welldef}
The above formulas \eqref{eq:sa1} define an action
of the symmetric group $S_{n+1}$ on $\cpx{C}{}{n}{A}{M}$
which is compatible with the differential $\dif{C}{n}$
for each $n \geq 0$.
\end{prop}
\begin{proof}
First, we check that, for $1 \leq i \leq n$,
the action of $\sigma_i$ satisfies relations of the Coxeter presentation
of $S_{n+1}$ which is
\begin{align*}
S_{n+1} = \langle \sigma_i , i\in \{1,2,\cdots ,n\} \mid \,&\sigma_i^2
= e \ \text{for} \ i\in \{1,\cdots , n\} , \\
&\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}
\ \text{for} \ i \in \{1,\cdots ,n-1\} , \\
&\sigma_i \sigma_j = \sigma_j \sigma_i\
\text{for}\ |i-j|\geq 2 \rangle.
\end{align*}
Let $f \in \cpx{C}{}{n}{A}{M}$, $a_0,\dots ,a_n \in A$
and $i$ an integer such that $1 \leq i \leq n-1$.
For the left hand side of the second relation
$\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}$,
we have
\begin{align*}
&(\sigma_i \sigma_{i+1} \sigma_i f)(a_0 \otimes \cdots \otimes a_{n}) \\
&\quad=-(\sigma_{i+1} \sigma_i f)(a_0 \otimes
\cdots \otimes a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_{n} ) \\
&\quad=(\sigma_i f)(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)})a_i^{(3)} a_{i+1}^{(1)}
\otimes S(a_i^{(4)} a_{i+1}^{(2)}) \otimes a_i^{(5)} a_{i+1}^{(3)} a_{i+2}
\otimes \cdots \otimes a_{n}) \\
&\quad=(\sigma_i f)(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} \otimes \varepsilon(a_i^{(2)}) a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)})S(a_i^{(3)}) \otimes a_i^{(4)} a_{i+1}^{(3)} a_{i+2}
\otimes \cdots \otimes a_{n}) \\
&\quad=(\sigma_i f)(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)}
\otimes a_{i+1}^{(1)} \otimes S(a_{i+1}^{(2)})S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1}^{(3)} a_{i+2} \otimes \cdots \otimes a_{n}) \\
&\quad=-f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)}) \otimes a_{i+1}^{(3)} S(a_{i+1}^{(4)})S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1}^{(5)} a_{i+2} \otimes \cdots \otimes a_{n}) \\
&\quad=-f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)}) \otimes \varepsilon(a_{i+1}^{(3)})S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1}^{(4)} a_{i+2} \otimes \cdots \otimes a_{n}) \\
&\quad=-f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)}) \otimes S(a_i^{(2)}) \otimes a_i^{(3)}
\varepsilon(a_{i+1}^{(3)}) a_{i+1}^{(4)} a_{i+2}
\otimes \cdots \otimes a_{n}) \\
&\quad=-f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)}) \otimes S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1}^{(3)} a_{i+2} \otimes \cdots \otimes a_{n}).
\end{align*}
On the other hand, for the right hand side, we have
\vspace{-0.5em}
\begin{align*}
&(\sigma_{i+1} \sigma_i \sigma_{i+1} f)(a_0 \otimes \cdots \otimes a_{n}) \\
&\quad=-(\sigma_i \sigma_{i+1} f)(a_0 \otimes \cdots
\otimes a_i a_{i+1}^{(1)} \otimes S(a_{i+1}^{(2)})
\otimes a_{i+1}^{(3)} a_{i+2} \otimes \cdots \otimes a_{n}) \\
&\quad=(\sigma_{i+1} f)(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)} \otimes S(a_i^{(2)} a_{i+1}^{(2)})
\otimes a_i^{(3)} a_{i+1}^{(3)} S(a_{i+1}^{(4)})
\otimes a_{i+1}^{(5)} a_{i+2} \otimes \cdots \otimes a_{n}) \\
&\quad=(\sigma_{i+1} f)(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)} \otimes S(a_{i+1}^{(2)})S(a_i^{(2)})
\otimes a_i^{(3)} \varepsilon(a_{i+1}^{(3)}) \otimes a_{i+1}^{(4)} a_{i+2}
\otimes \cdots \otimes a_{n}) \\
&\quad=(\sigma_{i+1} f)(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)} \otimes
S(a_{i+1}^{(2)})S(a_i^{(2)}) \otimes a_i^{(3)} \otimes a_{i+1}^{(3)} a_{i+2}
\otimes \cdots \otimes a_{n}) \\
&\quad= -f(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)})S(a_i^{(2)}) a_i^{(3)}
\otimes S(a_i^{(4)}) \otimes a_i^{(5)} a_{i+1}^{(3)} a_{i+2}
\otimes \cdots \otimes a_{n}) \\
&\quad= -f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)})\varepsilon(a_i^{(2)})
\otimes S(a_i^{(3)}) \otimes a_i^{(4)} a_{i+1}^{(3)} a_{i+2}
\otimes \cdots \otimes a_{n}) \\
&\quad= -f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)} a_{i+1}^{(1)}
\otimes S(a_{i+1}^{(2)}) \otimes S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1}^{(3)} a_{i+2} \otimes \cdots \otimes a_{n}).
\end{align*}
Hence, $\sigma_i \sigma_{i+1} \sigma_i f = \sigma_{i+1} \sigma_i \sigma_{i+1} f$ holds.
Similarly, the other relations can be checked by calculations.
Next, we show that this action is compatible
with the differential $\dif{C}{n}$.
Let $f\in \cpx{C}{}{n}{A}{M}$ be invariant under the action of $S_{n+1}$ and $i$ an integer such that
$3 \leq i \leq n-2$.
We have
\vspace{-0.5em}
\begin{align*}
&(\sigma_i \dif{C}{n}(f))(a_0 \otimes \cdots \otimes a_{n+1}) \\
&\quad= -\dif{C}{n}(f)(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)}
\otimes S(a_i^{(2)}) \otimes a_i^{(3)} a_{i+1} \otimes \cdots
\otimes a_{n+1}) \\
&\quad= -\Bigg\{ \sum_{j=0}^{i-3} (-1)^j
f(a_0 \otimes \cdots \otimes a_j a_{j+1} \otimes \cdots
\otimes a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)}) \otimes a_i^{(3)} a_{i+1}
\otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+(-1)^{i-2} f(a_0 \otimes \cdots
\otimes a_{i-2} a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+(-1)^{i-1}
f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)}S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+(-1)^i f(a_0 \otimes \cdots
\otimes a_{i-1}a_i^{(1)} \otimes S(a_i^{(2)}) a_i^{(3)} a_{i+1}
\otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+(-1)^{i+1} f(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1} a_{i+2} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\, +\sum_{j=i+2}^{n}(-1)^j
f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_j a_{j+1}
\otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+(-1)^{n+1} f(a_0 \otimes \cdots
\otimes a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)}) \otimes a_i^{(3)} a_{i+1}
\otimes \cdots \otimes a_n \varepsilon(a_{n+1})) \Bigg\}.
\end{align*}
Rewriting the third and fourth terms of the above formula, we obtain
\begin{align*}
&(-1)^{i-1} f(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)}S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
&\quad= (-1)^{i-1} f(a_0 \otimes \cdots \otimes a_{i-1}
\varepsilon(a_i^{(1)}) \otimes a_i^{(2)} a_{i+1} \otimes \cdots
\otimes a_{n+1}) \\
&\quad= (-1)^{i-1} f(a_0 \otimes \cdots \otimes a_{i-1}
\otimes a_i a_{i+1} \otimes \cdots \otimes a_{n+1}) ,\\
&(-1)^i f(a_0 \otimes \cdots \otimes a_{i-1}a_i^{(1)}
\otimes S(a_i^{(2)}) a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
&\quad=(-1)^i f(a_0 \otimes \cdots \otimes a_{i-1}a_i^{(1)}
\otimes \varepsilon(a_i^{(2)}) a_{i+1} \otimes
\cdots \otimes a_{n+1}) \\
&\quad=(-1)^i f(a_0 \otimes \cdots \otimes a_{i-1}a_i
\otimes a_{i+1} \otimes \cdots \otimes a_{n+1}).
\end{align*}
Moreover, using that $f$ is invariant under the action of $S_{n+1}$,
the first term can be rewritten as
\begin{align*}
&f(a_0 \otimes \cdots \otimes a_j a_{j+1} \otimes \cdots
\otimes a_{i-1} a_i^{(1)} \otimes S(a_i^{(2)}) \otimes a_i^{(3)} a_{i+1}
\otimes \cdots \otimes a_{n+1}) \\
&\quad= -(\sigma_i f)(a_0 \otimes \cdots \otimes a_j a_{j+1}
\otimes \cdots \otimes a_{i-1} \otimes a_i \otimes a_{i+1}
\otimes \cdots \otimes a_{n+1}) \\
&\quad= -f(a_0 \otimes \cdots \otimes a_j a_{j+1} \otimes
\cdots \otimes a_{i-1} \otimes a_i \otimes a_{i+1} \otimes \cdots \otimes a_{n+1}).
\end{align*}
Similarly, the other terms can be rewritten.
Hence, for $3 \leq i \leq n-2$,
$\sigma_i \dif{C}{n}(f) = \dif{C}{n}(f)$ holds.
By calculations similar to the above,
the same equation holds in the other cases.
\end{proof}
According to Proposition \ref{prop.coho-nonhomo-welldef},
the sequence of subspaces
$\cpx{C}{}{\bullet}{A}{M}^{S_{\bullet +1}}$
of invariants under the action of the symmetric groups becomes a subcomplex
of $\cpx{C}{}{\bullet}{A}{M}$ and is denoted by $\cpx{CS}{}{\bullet}{A}{M}$.
\begin{defin}
\label{def.sym-coho}
Let $A$ be a cocommutative Hopf algebra
and $M$ a left $A$-module.
The \textit{symmetric cohomology} $\shcoho{\bullet}{A}{M}$ of $A$
with coefficients in $M$ is defined by
$$
\shcoho{n}{A}{M} = \coho{n}{\cpx{CS}{}{\bullet}{A}{M}}.
$$
\end{defin}
\begin{rem}
By the structure of group algebras as Hopf algebras,
we note that the symmetric cohomology of cocommutative Hopf algebras
in Definition \ref{def.sym-coho}
is a generalization of the symmetric cohomology of groups.
\end{rem}
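Indeed, for $A = kG$ and group-like elements $g_0, \dots , g_n \in G$, every iterated coproduct of $g_i$ is $g_i \otimes \cdots \otimes g_i$ and $S(g_i) = g_i^{-1}$, so that, for $1 \leq i \leq n-1$, the formulas \eqref{eq:sa1} read
$$
(\sigma_i f)(g_0 \otimes \cdots \otimes g_n)
= -f(g_0 \otimes \cdots \otimes g_{i-1} g_i \otimes g_i^{-1} \otimes g_i g_{i+1} \otimes \cdots \otimes g_n),
$$
and similarly for $i = n$.
Under the identification of $\Hom{A}{\res{T}{n}{}{A}}{M}$ with the set of maps $G^n \rightarrow M$ obtained by restricting to elements of the form $1 \otimes g_1 \otimes \cdots \otimes g_n$,
these are exactly the formulas of Staic recalled in Section 2;
the extra factor $g_1$ appearing there in the formula for $\sigma_1$ comes from the $A$-linearity in the first tensor factor.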
\subsection{Symmetric cohomology constructed by the homogeneous complex}
Similar to group cohomology,
we can construct a homogeneous complex
which gives the Hopf algebra cohomology.
We describe its construction and an action of symmetric groups
on the homogeneous complex.
Let $A$ be a Hopf algebra and $n$ a non-negative integer.
Then $\res{\widetilde{T}}{n}{}{A} = A^{\otimes n+1}$
is a left $A$-module via
$$
b \cdot (a_0 \otimes a_1 \otimes \cdots \otimes a_n)
= b^{(1)} a_0 \otimes b^{(2)} a_1 \otimes \cdots \otimes b^{(n+1)} a_n
$$
for any $b, a_0, \dots , a_n \in A$,
and there is a projective resolution of $k$ as left $A$-modules:
\begin{align*}
\cdots \longrightarrow \res{\widetilde{T}}{n}{}{A}
\overset{\diff{n}{\widetilde{T}}}{\longrightarrow}
\res{\widetilde{T}}{n-1}{}{A} \longrightarrow
\cdots \longrightarrow \res{\widetilde{T}}{1}{}{A}
\overset{\diff{1}{\widetilde{T}}}{\longrightarrow}
\res{\widetilde{T}}{0}{}{A}
\overset{\diff{0}{\widetilde{T}}}{\longrightarrow} k \longrightarrow 0,
\end{align*}
where we set
\begin{align*}
&\diff{0}{\widetilde{T}} = \varepsilon , \\
&\diff{n}{\widetilde{T}}(a_0 \otimes \cdots \otimes a_n)
= \sum_{i=0}^{n}(-1)^i \varepsilon(a_i) a_0 \otimes
\cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots
\otimes a_n , \text{ for } n \geq 1.
\end{align*}
Moreover, we denote
$\cpx{K}{}{n}{A}{M} = \Hom{A}{\res{\widetilde{T}}{n}{}{A}}{M}$
and $\dif{K}{n} = \Hom{A}{\diff{n+1}{\widetilde{T}}}{M}$.
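For $A = kG$, the diagonal action above identifies $\res{\widetilde{T}}{n}{}{A}$ with the permutation module $k[G^{n+1}]$ appearing in Section 2 (with coefficients in $k$ instead of $\mathbb{Z}$), and, since $\varepsilon(g) = 1$ for every $g \in G$, the differential $\diff{n}{\widetilde{T}}$ becomes the differential $\diff{n}{}$ of the homogeneous complex recalled there.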
Next, we define an action of $S_{n+1}$ on $\cpx{K}{}{n}{A}{M}$
for each $n \geq 0$.
Let $A$ be a cocommutative Hopf algebra.
Then, for $f \in \cpx{K}{}{n}{A}{M}$, $a_0, \dots , a_n \in A$
and $1 \leq i \leq n$,
we define the action of $\sigma_i$ on $\cpx{K}{}{n}{A}{M}$ by
\begin{align}
\label{eq:sa2}
(\sigma_i \cdot f)(a_0 \otimes \cdots \otimes a_n)
= - f(a_0 \otimes \cdots \otimes a_i \otimes a_{i-1}
\otimes \cdots \otimes a_n).
\end{align}
We show that these formulas \eqref{eq:sa2}
are well-defined as an action, and the action is compatible with the differential.
\begin{prop}
\label{prop.coho-homo-welldef}
The above formulas \eqref{eq:sa2} define an action
of the symmetric group $S_{n+1}$ on $\cpx{K}{}{n}{A}{M}$
which is compatible with the differential $\dif{K}{n}$.
\end{prop}
\begin{proof}
First, we check that, for $1 \leq i \leq n$,
the action of $\sigma_i$ satisfies relations of the Coxeter presentation
of $S_{n+1}$.
Let $f \in \cpx{K}{}{n}{A}{M}$, $a_0,\dots ,a_n \in A$
and $i$ an integer such that $1 \leq i \leq n-1$.
For the second relation
$\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}$,
we have
\begin{align*}
(\sigma_i \sigma_{i+1} \sigma_i f)(a_0 \otimes \cdots \otimes a_{n})
&=-(\sigma_{i+1} \sigma_i f)(a_0 \otimes \cdots \otimes a_i
\otimes a_{i-1} \otimes \cdots \otimes a_{n}) \\
&=(\sigma_i f)(a_0 \otimes \cdots \otimes a_i \otimes a_{i+1}
\otimes a_{i-1} \otimes \cdots \otimes a_{n}) \\
&=-f(a_0 \otimes \cdots \otimes a_{i+1} \otimes a_i
\otimes a_{i-1} \otimes \cdots \otimes a_{n}) \\
&=(\sigma_{i+1} f)(a_0 \otimes \cdots \otimes a_{i+1}
\otimes a_{i-1} \otimes a_i \otimes \cdots \otimes a_{n}) \\
&=-(\sigma_i \sigma_{i+1} f)(a_0 \otimes \cdots \otimes a_{i-1}
\otimes a_{i+1} \otimes a_i \otimes \cdots \otimes a_{n}) \\
&=(\sigma_{i+1} \sigma_i \sigma_{i+1} f)(a_0 \otimes \cdots
\otimes a_{i-1} \otimes a_i \otimes a_{i+1} \otimes \cdots \otimes a_{n})
\end{align*}
Hence, $\sigma_i \sigma_{i+1} \sigma_i
= \sigma_{i+1} \sigma_i \sigma_{i+1}$ holds.
Similarly, the other relations can be checked by calculations.
Next, we show that this action is compatible
with the differential $\dif{K}{n}$.
Let $f\in \cpx{K}{}{n}{A}{M}$ be invariant under the action of $S_{n+1}$ and $i$ an integer
such that $3 \leq i \leq n$.
Then we have
\begin{align*}
&(\sigma_i \dif{K}{n}(f))(a_0 \otimes \cdots \otimes a_{n+1}) \\
&\quad\,= -\dif{K}{n}(f)(a_0 \otimes \cdots
\otimes a_i \otimes a_{i-1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\,= -\Bigg\{\sum_{j = 0}^{i-3} (-1)^j
f(a_0 \otimes \cdots \otimes \varepsilon(a_j)a_{j+1}
\otimes \cdots \otimes a_i \otimes a_{i-1} \otimes \cdots
\otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+(-1)^{i-2}
f(a_0 \otimes \cdots \otimes \varepsilon(a_{i-2})a_i
\otimes a_{i-1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+ (-1)^{i-1}
f(a_0 \otimes \cdots \otimes \varepsilon(a_i)a_{i-1}
\otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+(-1)^i
f(a_0 \otimes \cdots \otimes a_i \otimes
\varepsilon(a_{i-1})a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\quad\quad\,+\sum_{j = i+1}^{n+1} (-1)^j
f(a_0 \otimes \cdots \otimes a_i \otimes a_{i-1} \otimes \cdots
\otimes \varepsilon(a_j)a_{j+1} \otimes \cdots
\otimes a_{n+1}) \Bigg\} \\
&\quad\,= -\sum_{j = 0}^{i-3} (-1)^j
f(a_0 \otimes \cdots \otimes \varepsilon(a_j)a_{j+1} \otimes
\cdots \otimes a_i \otimes a_{i-1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\,-(-1)^{i-2} f(a_0 \otimes \cdots \otimes a_i
\otimes \varepsilon(a_{i-2})a_{i-1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\,-(-1)^i f(a_0 \otimes \cdots \otimes
\varepsilon(a_{i-1})a_i \otimes a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\,- (-1)^{i-1}
f(a_0 \otimes \cdots \otimes a_{i-1} \otimes
\varepsilon(a_i)a_{i+1} \otimes \cdots \otimes a_{n+1}) \\
&\quad\quad\,-\sum_{j = i+1}^{n+1} (-1)^j
f(a_0 \otimes \cdots \otimes a_i \otimes a_{i-1} \otimes
\cdots \otimes \varepsilon(a_j)a_{j+1} \otimes \cdots \otimes a_{n+1}).
\end{align*}
Here, using that $f$ is invariant under the action of $S_{n+1}$,
each term can be rewritten so that the two expressions agree.
Hence, $\sigma_i \dif{K}{n}(f) = \dif{K}{n}(f)$ holds for $3 \leq i \leq n$.
The remaining cases follow from analogous calculations.
\end{proof}
According to Proposition \ref{prop.coho-homo-welldef},
the sequence of invariant subspaces
$\cpx{K}{}{\bullet}{A}{M}^{S_{\bullet +1}}$
by the action of symmetric groups becomes a subcomplex
of $\cpx{K}{}{\bullet}{A}{M}$ and is denoted by $\cpx{KS}{}{\bullet}{A}{M}$.
Moreover, we show that there is an isomorphism
between the non-homogeneous complex and the homogeneous complex.
\begin{prop}
\label{prop.nonhomo-homo-iso}
Let $A$ be a Hopf algebra and $M$ a left $A$-module.
Then there is an isomorphism
$\cpx{C}{}{\bullet}{A}{M} \cong \cpx{K}{}{\bullet}{A}{M}$
as complexes.
Moreover, if $A$ is cocommutative,
then this isomorphism induces an isomorphism
$\cpx{CS}{}{\bullet}{A}{M} \cong \cpx{KS}{}{\bullet}{A}{M}$
as complexes.
Therefore, $\shcoho{n}{A}{M} = \coho{n}{\cpx{KS}{}{\bullet}{A}{M}}$ holds.
\end{prop}
\begin{proof}
First, we will construct an isomorphism
$\cpx{C}{}{\bullet}{A}{M} \cong \cpx{K}{}{\bullet}{A}{M}$
as complexes.
For any $f \in \cpx{K}{}{n}{A}{M}$ and $a_0, \dots ,a_n \in A$, define the morphism
$\map{\varphi^\bullet}{\cpx{K}{}{\bullet}{A}{M}}{\cpx{C}{}{\bullet}{A}{M}}$
by
$$
\varphi^n(f)(a_0 \otimes \cdots \otimes a_{n})
= f(a_0^{(1)} \otimes a_0^{(2)}a_1^{(1)} \otimes \cdots
\otimes a_0^{(n+1)}a_1^{(n)} \cdots a_{n-1}^{(2)} a_{n}).
$$
Then we show that $\varphi^\bullet$ is compatible with the differential.
Namely, we show
$\varphi^{n+1} \circ \dif{K}{n} = \dif{C}{n} \circ \varphi^n$.
For the left hand side, we have
\begin{align*}
(&\varphi^{n+1} \circ \dif{K}{n})(f)(a_0 \otimes \cdots \otimes a_{n+1}) \\
&= \dif{K}{n}(f)(a_0^{(1)} \otimes a_0^{(2)}a_1^{(1)} \otimes
\cdots \otimes a_0^{(n+2)}a_1^{(n+1)} \cdots a_{n}^{(2)} a_{n+1}) \\
&= \sum_{i = 0}^{n+1} (-1)^i f(a_0^{(1)} \otimes \cdots
\otimes \varepsilon(a_0^{(i+1)} \cdots a_i^{(1)})a_0^{(i+2)}
\cdots a_{i+1}^{(1)} \otimes \cdots \otimes a_0^{(n+2)}a_1^{(n+1)}
\cdots a_{n}^{(2)} a_{n+1}) \\
&= \sum_{i = 0}^{n} (-1)^i f(a_0^{(1)} \otimes \cdots
\otimes \varepsilon(a_0^{(i+1)})a_0^{(i+2)}
\varepsilon(a_1^{(i)})a_1^{(i+1)} \cdots
\varepsilon(a_i^{(1)})a_i^{(2)}a_{i+1}^{(1)} \otimes \cdots \\
&\quad\, \otimes a_0^{(n+2)}a_1^{(n+1)} \cdots
a_{n}^{(2)} a_{n+1}) +(-1)^{n+1} f(a_0^{(1)} \otimes \cdots
\otimes a_0^{(n+1)}a_1^{(n)} \cdots a_{n-1}^{(2)} a_{n}
\varepsilon(a_{n+1}))\\
&= \sum_{i = 0}^{n} (-1)^i f(a_0^{(1)} \otimes \cdots
\otimes a_0^{(i+1)}a_1^{(i)} \cdots a_{i-1}^{(2)} a_i^{(1)} a_{i+1}^{(1)}
\otimes \cdots \otimes a_0^{(n+1)}a_1^{(n)} \cdots \\
&\quad\, \times a_i^{(n-i+1)}a_{i+1}^{(n-i+1)} \cdots
a_{n}^{(2)} a_{n+1})+(-1)^{n+1} f(a_0^{(1)} \otimes \cdots
\otimes a_0^{(n+1)}a_1^{(n)} \cdots a_{n-1}^{(2)} a_{n} \varepsilon(a_{n+1})).
\end{align*}
On the other hand, for the right hand side, we have
\begin{align*}
(&\dif{C}{n} \circ \varphi^n)(f)(a_0 \otimes \cdots \otimes a_{n+1}) \\
&= \sum_{i = 0}^{n} (-1)^i \varphi^n(f)(a_0 \otimes \cdots
\otimes a_ia_{i+1} \otimes \cdots \otimes a_{n+1}) + (-1)^{n+1}
\varphi^n(f)(a_0 \otimes \cdots \otimes a_n \varepsilon(a_{n+1})) \\
&= \sum_{i = 0}^{n} (-1)^i f(a_0^{(1)} \otimes \cdots
\otimes a_0^{(i+1)} a_1^{(i)}\cdots a_{i-1}^{(2)} a_i^{(1)}a_{i+1}^{(1)}
\otimes \cdots \otimes a_0^{(n+1)}a_1^{(n)} \cdots \\
&\quad\, \times a_i^{(n-i+1)}a_{i+1}^{(n-i+1)} \cdots
a_{n}^{(2)}a_{n+1})+(-1)^{n+1} f(a_0^{(1)} \otimes \cdots
\otimes a_0^{(n+1)}a_1^{(n)} \cdots a_{n-1}^{(2)} a_{n} \varepsilon(a_{n+1})).
\end{align*}
Hence, $\varphi^{n+1} \circ \dif{K}{n} = \dif{C}{n} \circ \varphi^n$ holds.
Conversely, for any $g \in \cpx{C}{}{n}{A}{M}$ and $a_0 , \dots , a_n \in A$,
we define the morphism
$\map{\psi^\bullet}{\cpx{C}{}{\bullet}{A}{M}}{\cpx{K}{}{\bullet}{A}{M}}$ by
$$
\psi^n (g)(a_0 \otimes \cdots \otimes a_{n})
= g(a_0^{(1)} \otimes S(a_0^{(2)}) a_1^{(1)}
\otimes S(a_1^{(2)}) a_2^{(1)} \otimes \cdots
\otimes S(a_{n-1}^{(2)}) a_{n}).
$$
Then we show that $\psi^\bullet$ is the inverse of $\varphi^\bullet$.
For all $g \in \cpx{C}{}{n}{A}{M}$, we have
\vspace{-0.5em}
\begin{align*}
(&\varphi^n \circ \psi^n)(g)(a_0 \otimes \cdots \otimes a_{n}) \\
&\quad= \psi^n(g)(a_0^{(1)} \otimes a_0^{(2)} a_1^{(1)}
\otimes \cdots \otimes a_0^{(n+1)} \cdots a_{n-1}^{(2)} a_{n}) \\
&\quad= g(a_0^{(1)} \otimes S(a_0^{(2)})a_0^{(3)} a_1^{(1)}
\otimes S(a_0^{(4)}a_1^{(2)})a_0^{(5)}a_1^{(3)}a_2^{(1)} \otimes \cdots \\
&\quad\quad\otimes S(a_0^{(2n)}a_1^{(2n-2)} \cdots
a_{n-1}^{(2)})a_0^{(2n+1)}a_1^{(2n-1)}\cdots a_{n-1}^{(3)} a_{n}) \\
&\quad= g(a_0^{(1)} \otimes \varepsilon(a_0^{(2)})a_1^{(1)} \otimes
\varepsilon(a_0^{(3)})\varepsilon(a_1^{(2)})a_2^{(1)} \otimes \cdots
\otimes \varepsilon(a_0^{(n+1)})\varepsilon(a_1^{(n)}) \cdots
\varepsilon(a_{n-1}^{(2)}) a_{n}) \\
&\quad= g(a_0^{(1)}\varepsilon(a_0^{(2)}) \cdots
\varepsilon(a_0^{(n+1)}) \otimes a_1^{(1)}\varepsilon(a_1^{(2)})
\cdots \varepsilon(a_1^{(n)}) \otimes \cdots \otimes a_{n-1}^{(1)}
\varepsilon(a_{n-1}^{(2)}) \otimes a_{n}) \\
&\quad= g(a_0 \otimes \cdots \otimes a_{n}).
\end{align*}
Hence, $\varphi^n \circ \psi^n = \id{}$ holds.
On the other hand, for all $f \in \cpx{K}{}{n}{A}{M}$, we have
\begin{align*}
& (\psi^n \circ \varphi^n)(f)(a_0 \otimes \cdots \otimes a_{n}) \\
&\quad= \varphi^n(f)(a_0^{(1)} \otimes S(a_0^{(2)}) a_1^{(1)}
\otimes \cdots \otimes S(a_{n-1}^{(2)}) a_{n}) \\
&\quad= f(a_0^{(1)} \otimes a_0^{(2)} S(a_0^{(3)}) a_1^{(1)}
\otimes a_0^{(4)} S(a_0^{(5)}) a_1^{(2)} S(a_1^{(3)}) a_2^{(1)}
\otimes \cdots \\
&\quad\quad \otimes
a_0^{(2n)}S(a_0^{(2n+1)}) a_1^{(2n-2)}S(a_1^{(2n-1)})
\cdots a_{n-1}^{(2)}S(a_{n-1}^{(3)}) a_{n}) \\
&\quad= f(a_0^{(1)} \otimes \varepsilon(a_0^{(2)})a_1^{(1)}
\otimes \varepsilon(a_0^{(3)}) \varepsilon(a_1^{(2)}) a_2^{(1)}
\otimes \cdots \otimes \varepsilon(a_0^{(n+1)}) \cdots
\varepsilon(a_{n-1}^{(2)}) a_{n}) \\
&\quad= f(a_0^{(1)} \varepsilon(a_0^{(2)}) \cdots
\varepsilon(a_0^{(n+1)}) \otimes \cdots \otimes a_{n-1}^{(1)}
\varepsilon(a_{n-1}^{(2)}) \otimes a_{n}) \\
&\quad= f(a_0 \otimes \cdots \otimes a_{n}).
\end{align*}
Therefore, we have $\psi^n \circ \varphi^n = \id{}$,
and hence $\cpx{C}{}{\bullet}{A}{M}$ and $\cpx{K}{}{\bullet}{A}{M}$
are isomorphic.
Next, if $A$ is cocommutative,
we will show that $\cpx{CS}{}{\bullet}{A}{M}$
and $\cpx{KS}{}{\bullet}{A}{M}$ are isomorphic
by $\varphi^\bullet$ and $\psi^\bullet$.
For all $f \in \cpx{KS}{}{n}{A}{M}$ and $1 \leq i \leq n$, we have
\begin{align*}
(&\sigma_i \varphi^n (f))(a_0 \otimes \cdots \otimes a_{n}) \\
&\quad= - \varphi^n (f)(a_0 \otimes \cdots \otimes a_{i-1} a_i^{(1)}
\otimes S(a_i^{(2)}) \otimes a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_{n}) \\
&\quad= -f(a_0^{(1)} \otimes a_0^{(2)}a_1^{(1)} \otimes \cdots
\otimes a_0^{(i)} \cdots a_{i-2}^{(2)} a_{i-1}^{(1)} a_i^{(1)}
\otimes a_0^{(i+1)} \cdots a_{i-1}^{(2)} a_i^{(2)} S(a_i^{(3)}) \\
&\quad\quad \otimes a_0^{(i+2)}
\cdots a_{i-1}^{(3)} a_i^{(4)} a_{i+1}^{(1)}\otimes \cdots
\otimes a_0^{(n+1)}\cdots a_{n-1}^{(2)}a_{n}) \\
&\quad= -f(a_0^{(1)} \otimes a_0^{(2)}a_1^{(1)} \otimes \cdots
\otimes a_0^{(i)} \cdots a_{i-2}^{(2)} a_{i-1}^{(1)} a_i^{(1)}
\otimes a_0^{(i+1)} \cdots a_{i-1}^{(2)} \varepsilon(a_i^{(2)}) \\
&\quad\quad \otimes a_0^{(i+2)}
\cdots a_{i-1}^{(3)} a_i^{(3)}a_{i+1}^{(1)}\otimes \cdots
\otimes a_0^{(n+1)}\cdots a_{n-1}^{(2)}a_{n}) \\
&\quad= -f(a_0^{(1)} \otimes a_0^{(2)}a_1^{(1)} \otimes
\cdots \otimes a_0^{(i)} \cdots a_{i-2}^{(2)} a_{i-1}^{(1)} a_i^{(1)}
\otimes a_0^{(i+1)} \cdots a_{i-1}^{(2)}\otimes a_0^{(i+2)}
\cdots a_{i-1}^{(3)} a_i^{(2)}a_{i+1}^{(1)} \\
&\quad\quad \otimes \cdots \otimes a_0^{(n+1)}
\cdots a_{n-1}^{(2)}a_{n}) \\
&\quad= -(\sigma_i f)(a_0^{(1)} \otimes a_0^{(2)}a_1^{(1)}
\otimes \cdots \otimes a_0^{(i)} \cdots
a_{i-2}^{(2)} a_{i-1}^{(1)} a_i^{(1)} \otimes a_0^{(i+1)}
\cdots a_{i-1}^{(2)} \\
&\quad\quad \otimes a_0^{(i+2)} \cdots
a_{i-1}^{(3)} a_i^{(2)} a_{i+1}^{(1)}\otimes \cdots \otimes a_0^{(n+1)}
\cdots a_{n-1}^{(2)}a_{n}) \\
&\quad= f (a_0^{(1)} \otimes a_0^{(2)}a_1^{(1)} \otimes
\cdots \otimes a_0^{(i)} \cdots a_{i-1}^{(1)} \otimes
a_0^{(i+1)} \cdots a_{i-2}^{(3)} a_{i-1}^{(2)} a_i^{(1)}
\otimes a_0^{(i+2)} \cdots a_{i-1}^{(3)} a_i^{(2)} a_{i+1}^{(1)} \\
&\quad\quad \otimes \cdots \otimes a_0^{(n+1)}
\cdots a_{n-1}^{(2)}a_{n}) \\
&\quad= \varphi^n (f)(a_0 \otimes \cdots \otimes a_{n}).
\end{align*}
Hence, $\varphi^n(f) \in \cpx{CS}{}{n}{A}{M}$ holds.
Conversely, for all $g \in \cpx{CS}{}{n}{A}{M}$ and $1 \leq i \leq n$,
we have
\begin{align*}
&(\sigma_i \psi^n (g))(a_0 \otimes \cdots \otimes a_{n}) \\
&\quad= - \psi^n (g)(a_0 \otimes \cdots \otimes a_i
\otimes a_{i-1} \otimes \cdots \otimes a_{n}) \\
&\quad= -g(a_0^{(1)} \otimes S(a_0^{(2)})a_1^{(1)}
\otimes \cdots \otimes S(a_{i-2}^{(2)}) a_i^{(1)}
\otimes S(a_i^{(2)}) a_{i-1}^{(1)} \otimes S(a_{i-1}^{(2)})a_{i+1}^{(1)}\\
&\quad\quad \otimes \cdots \otimes S(a_{n-1}^{(2)})a_{n}) \\
&\quad= -(\sigma_i g)(a_0^{(1)} \otimes S(a_0^{(2)})a_1^{(1)}
\otimes \cdots \otimes S(a_{i-2}^{(2)}) a_i^{(1)} \otimes
S(a_i^{(2)}) a_{i-1}^{(1)} \otimes S(a_{i-1}^{(2)})a_{i+1}^{(1)}\\
&\quad\quad \otimes \cdots \otimes S(a_{n-1}^{(2)})a_{n}) \\
&\quad= g(a_0^{(1)} \otimes S(a_0^{(2)})a_1^{(1)} \otimes
\cdots \otimes S(a_{i-2}^{(2)}) a_i^{(1)} S(a_i^{(2)})a_{i-1}^{(1)}
\otimes S(S(a_i^{(3)}) a_{i-1}^{(2)}) \\
&\quad\quad \otimes S(a_i^{(4)}) a_{i-1}^{(3)} S(a_{i-1}^{(4)})
a_{i+1}^{(1)}\otimes \cdots \otimes S(a_{n-1}^{(2)})a_{n}) \\
&\quad= g(a_0^{(1)} \otimes S(a_0^{(2)})a_1^{(1)}
\otimes \cdots \otimes S(a_{i-2}^{(2)})
\varepsilon(a_i^{(1)}) a_{i-1}^{(1)} \otimes S( a_{i-1}^{(2)})a_i^{(2)}
\otimes S(a_i^{(3)}) \varepsilon(a_{i-1}^{(3)}) a_{i+1}^{(1)} \\
&\quad\quad \otimes \cdots \otimes S(a_{n-1}^{(2)})a_{n}) \\
&\quad= g(a_0^{(1)} \otimes S(a_0^{(2)})a_1^{(1)} \otimes
\cdots \otimes S(a_{i-2}^{(2)}) a_{i-1}^{(1)}
\otimes S( a_{i-1}^{(2)})a_i^{(1)} \otimes S(a_i^{(2)}) a_{i+1}^{(1)}
\otimes \cdots \otimes S(a_{n-1}^{(2)})a_{n}) \\
&\quad= \psi^n (g)(a_0 \otimes \cdots \otimes a_{n}).
\end{align*}
So, we have $\psi^n(g) \in \cpx{KS}{}{n}{A}{M}$.
Therefore, $\cpx{CS}{}{\bullet}{A}{M}$ and $\cpx{KS}{}{\bullet}{A}{M}$
are isomorphic as complexes.
Moreover, $\shcoho{n}{A}{M} = \coho{n}{\cpx{KS}{}{\bullet}{A}{M}}$ holds.
\end{proof}
\subsection{Definition of symmetric Hochschild cohomology}
In this subsection, we recall the definition of Hochschild cohomology
and define symmetric Hochschild cohomology
for cocommutative Hopf algebras.
\begin{defin}[cf. {\cite[Section 1.1]{witherspoon2019hochschild}}]
Let $A$ be a Hopf algebra and $M$ an $A$-bimodule.
The \textit{Hochschild cohomology} $\hoch{\bullet}{A}{M}$ of $A$
with coefficients in $M$ is defined by
$$
\hoch{n}{A}{M} = \ext{A^e}{n}{A}{M}.
$$
\end{defin}
We construct a standard non-homogeneous complex
which gives the Hochschild cohomology.
Let $n$ be an integer such that $n \geq 0$,
and let $\res{T}{n}{e}{A} = A^{\otimes n+2}$ be an $A$-bimodule via
$$
(b \otimes c^\mathrm{op}) \cdot (a_0 \otimes a_1 \otimes \cdots \otimes a_{n+1})
= ba_0 \otimes a_1 \otimes \cdots \otimes a_{n+1}c
$$
for any $b,c, a_0, \dots , a_{n+1} \in A$.
Then there is a projective resolution of $A$ as $A^\mathrm{e}$-modules:
$$
\cdots \longrightarrow \res{T}{n}{e}{A}
\xrightarrow{\diff{n}{T^\mathrm{e}}} \res{T}{n-1}{e}{A}
\longrightarrow
\cdots \longrightarrow
\res{T}{1}{e}{A} \xrightarrow{\diff{1}{T^\mathrm{e}}}
\res{T}{0}{e}{A} \xrightarrow{\diff{0}{T^\mathrm{e}}} A \longrightarrow 0,
$$
where we set
\begin{align*}
&\diff{0}{T^\mathrm{e}}(a_0 \otimes a_1) = a_0a_1 , \\
&\diff{n}{T^\mathrm{e}}(a_0 \otimes a_1 \otimes \cdots \otimes a_{n+1})
= \sum_{i=0}^{n} (-1)^i a_0 \otimes a_1 \otimes \cdots
\otimes a_i a_{i+1} \otimes \cdots \otimes a_{n+1}, \text{ for } n \geq 1.
\end{align*}
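For instance, in low degrees we have
$$
\diff{1}{T^\mathrm{e}}(a_0 \otimes a_1 \otimes a_2)
= a_0a_1 \otimes a_2 - a_0 \otimes a_1a_2,
$$
which is the differential of the usual bar resolution.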
Moreover, we denote $\cpx{C}{e}{n}{A}{M} = \Hom{A}{\res{T}{n}{e}{A}}{M}$
and $\dif{C_e}{n} = \Hom{A}{\diff{n+1}{T^e}}{M}$.
Next, we define an action of the symmetric group $S_{n+1}$
on $\cpx{C}{e}{n}{A}{M}$ for each $n \geq 0$.
Let $A$ be a cocommutative Hopf algebra.
Then we define the action of $\sigma_i = (i,i+1) \in S_{n+1}$
on $\cpx{C}{e}{n}{A}{M}$ by
\begin{align}\label{eq:sa3}
(\sigma_i f)(a_0 \otimes \cdots \otimes a_{n+1})
= - f(a_0 \otimes \cdots \otimes a_{i-1}a_i^{(1)} \otimes S(a_i^{(2)})
\otimes a_i^{(3)} a_{i+1} \otimes \cdots \otimes a_{n+1})
\end{align}
for $f \in \cpx{C}{e}{n}{A}{M}$, $a_0 , \dots , a_{n+1} \in A$
and $1 \leq i \leq n$.
Similar to the case of symmetric cohomology,
the above formula \eqref{eq:sa3} is well-defined as an action,
and the action is compatible with the differential.
Hence the sequence of invariant subspaces
$\cpx{C}{e}{\bullet}{A}{M}^{S_{\bullet +1}}$
by the action of symmetric groups
becomes a subcomplex of $\cpx{C}{e}{\bullet}{A}{M}$
and is denoted by $\cpx{CS}{e}{\bullet}{A}{M}$.
\begin{defin}
Let $A$ be a cocommutative Hopf algebra and $M$ an $A$-bimodule.
The \textit{symmetric Hochschild cohomology} $\shoch{\bullet}{A}{M}$ of $A$
with coefficients in $M$ is defined by
\begin{align*}
\shoch{n}{A}{M} = \coho{n}{\cpx{CS}{e}{\bullet}{A}{M}}.
\end{align*}
\end{defin}
Similar to Hopf algebra cohomology,
we can construct a homogeneous complex
which gives the Hochschild cohomology.
We describe its construction and an action of symmetric groups
on the homogeneous complex.
Let $A$ be a Hopf algebra and $n$ a non-negative integer.
Suppose that $\res{\widetilde{T}}{n}{e}{A} = A^{\otimes n+2}$
is an $A$-bimodule via
\begin{align*}
(b \otimes c^\mathrm{op}) \cdot (a_0 \otimes a_1 \otimes \cdots
\otimes a_{n+1}) = b^{(1)} a_0 \otimes b^{(2)} a_1 \otimes \cdots
\otimes b^{(n+2)} a_{n+1} c
\end{align*}
for any $b,c,a_0, \dots ,a_{n+1} \in A$.
Then there is a projective resolution of $A$ as $A^\mathrm{e}$-modules:
\begin{align*}
\cdots \longrightarrow \res{\widetilde{T}}{n}{e}{A}
\xrightarrow{\diff{n}{\widetilde{T}^\mathrm{e}}} \res{\widetilde{T}}{n-1}{e}{A}
\longrightarrow
\cdots \longrightarrow
\res{\widetilde{T}}{1}{e}{A} \xrightarrow{\diff{1}{\widetilde{T}^\mathrm{e}}}
\res{\widetilde{T}}{0}{e}{A} \xrightarrow{\diff{0}{\widetilde{T}^\mathrm{e}}}
A \longrightarrow 0,
\end{align*}
where we set
$\diff{n}{\widetilde{T}^\mathrm{e}}(a_0 \otimes a_1 \otimes \cdots \otimes a_{n+1})
= \sum_{i=0}^{n} (-1)^i a_0 \otimes a_1 \otimes \cdots \otimes
\varepsilon(a_i) a_{i+1} \otimes \cdots \otimes a_{n+1}$.
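For instance,
$\diff{1}{\widetilde{T}^\mathrm{e}}(a_0 \otimes a_1 \otimes a_2)
= \varepsilon(a_0)a_1 \otimes a_2 - a_0 \otimes \varepsilon(a_1)a_2$.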
Moreover, we denote
$\cpx{K}{e}{n}{A}{M} = \Hom{A}{\res{\widetilde{T}}{n}{e}{A}}{M}$
and $\dif{K_e}{n} = \Hom{A}{\diff{n+1}{\widetilde{T^e}}}{M}$.
Next, we define an action of $S_{n+1}$ on $\cpx{K}{e}{n}{A}{M}$
for each $n \geq 0$.
Let $A$ be a cocommutative Hopf algebra.
Then we define the action of $\sigma_i$ on $\cpx{K}{e}{n}{A}{M}$ by
\begin{align}\label{eq:sa4}
(\sigma_i f) (a_0 \otimes \dots \otimes a_{n+1})
= -f(a_0 \otimes \dots \otimes a_i \otimes a_{i-1} \otimes \dots
\otimes a_{n+1}), \text{ for } 1\leq i \leq n
\end{align}
for $f \in \cpx{K}{e}{n}{A}{M}$ and $a_0, \dots , a_{n+1} \in A$.
Similar to the case of symmetric cohomology,
the above formula \eqref{eq:sa4} is well-defined as an action,
and the action is compatible with the differential.
Hence the sequence of invariant subspaces
$\cpx{K}{e}{\bullet}{A}{M}^{S_{\bullet +1}}$
by the action of symmetric groups
becomes a subcomplex of $\cpx{K}{e}{\bullet}{A}{M}$
and is denoted by $\cpx{KS}{e}{\bullet}{A}{M}$.
Moreover, we have the following assertion which is similar to
Proposition \ref{prop.nonhomo-homo-iso}.
\begin{prop}
\label{prop.nonhomo-homo-iso-hoch}
Let $A$ be a Hopf algebra and $M$ an $A$-bimodule.
Then there is an isomorphism $\cpx{C}{e}{\bullet}{A}{M} \cong \cpx{K}{e}{\bullet}{A}{M}$
as complexes. Moreover, if $A$ is cocommutative, then this isomorphism induces an isomorphism
$\cpx{CS}{e}{\bullet}{A}{M} \cong \cpx{KS}{e}{\bullet}{A}{M}$ as complexes.
Therefore, $\shoch{n}{A}{M} = \coho{n}{\cpx{KS}{e}{\bullet}{A}{M}}$ holds.
\end{prop}
\section{The relationships between classical, symmetric and symmetric Hochschild cohomology}
\subsection{Resolutions that give symmetric cohomology and symmetric Hochschild cohomology}
In this subsection, we describe that symmetric cohomology
is given by a resolution of $k$.
For $a_0 , \dots , a_n \in A$ and $1 \leq i \leq n$,
define the right action of $S_{n+1}$ on $\res{\widetilde{T}}{n}{}{A}$ by
$$
(a_0 \otimes \cdots \otimes a_n) \cdot \sigma_i
= -a_0 \otimes \cdots \otimes a_{i-2} \otimes a_i \otimes a_{i-1}
\otimes a_{i+1} \otimes \cdots \otimes a_n.
$$
This action implies that $\res{\widetilde{T}}{n}{}{A}$
is a right $kS_{n+1}$-module.
Furthermore, $\cpx{K}{}{n}{A}{M}$ is a left $kS_{n+1}$-module
induced by the right $kS_{n+1}$-module structure
of $\res{\widetilde{T}}{n}{}{A}$.
According to the structure of the counit on $kS_{n+1}$,
there is a natural isomorphism of $k$-vector spaces
$\cpx{KS}{}{n}{A}{M} \cong {}^{kS_{n+1}}\cpx{K}{}{n}{A}{M}$.
Moreover, we show that $\cpx{KS}{}{\bullet}{A}{M}$
can be rewritten as a space of homomorphisms.
\begin{prop} \label{prop.ks-hom}
Let $A$ be a cocommutative Hopf algebra and $M$ a left $A$-module.
Then, for each $n \geq 0$, there is an isomorphism
$$
\cpx{KS}{}{n}{A}{M} \cong \Hom{A}{\res{\widetilde{T}}{n}{}{A}
\otimes_{kS_{n+1}} k}{M}
$$
as $k$-vector spaces with the trivial left $kS_{n+1}$-module $k$.
\end{prop}
\begin{proof}
According to Lemma \ref{lem.hopf-hom-invariant}
and the adjunction isomorphism of the tensor product
and the set of homomorphisms, we have
\begin{align*}
\Hom{A}{\res{\widetilde{T}}{n}{}{A}}{M}^{S_{n+1}}
&\cong {}^{kS_{n+1}}\Hom{A}{\res{\widetilde{T}}{n}{}{A}}{M}
\cong {}^{kS_{n+1}}\Hom{k}{k}{\Hom{A}{\res{\widetilde{T}}{n}{}{A}}{M}} \\
&\cong \Hom{kS_{n+1}}{k}{\Hom{A}{\res{\widetilde{T}}{n}{}{A}}{M}}
\cong \Hom{A}{\res{\widetilde{T}}{n}{}{A} \otimes_{kS_{n+1}} k}{M}.
\end{align*}
\end{proof}
Next, we define a resolution of $k$.
We take the left $A$-module sequence:
$$
\cdots \longrightarrow \sym{n}{}{A}
\overset{\diff{n}{\widetilde{S}}}{\longrightarrow}
\sym{n-1}{}{A} \longrightarrow
\cdots \longrightarrow \sym{1}{}{A}
\overset{\diff{1}{\widetilde{S}}}{\longrightarrow}
\sym{0}{}{A} \overset{\diff{0}{\widetilde{S}}}{\longrightarrow}
k \longrightarrow 0,
$$
where we set
$\diff{n}{\widetilde{S}}((a_0 \otimes \cdots \otimes a_n) \otimes_{kS_{n+1}} x)
= \diff{n}{\widetilde{T}}(a_0 \otimes \cdots \otimes a_n) \otimes_{kS_{n}} x$
and denote $\sym{n}{}{A} = \res{\widetilde{T}}{n}{}{A} \otimes_{kS_{n+1}}k$.
According to the relation between the action and the differential,
the morphism
$$
\map{\diff{n}{}}{\res{\widetilde{T}}{n}{}{A} \times k}{\sym{n-1}{}{A}};
\ ((a_0 \otimes \cdots \otimes a_n) , x)
\mapsto \diff{n}{\widetilde{T}}(a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_n} x
$$
satisfies the relation
$\diff{n}{}((a_0 \otimes \cdots \otimes a_n) \cdot \sigma_i , x)
= \diff{n}{}((a_0 \otimes \cdots \otimes a_n) , \sigma_i \cdot x)$
for each $1 \leq i \leq n$. Hence, $\diff{n}{\tilde{S}}$ is well-defined.
Moreover, we define the morphism
$\map{h_n^{\widetilde{S}}}{\sym{n}{}{A}}{\sym{n+1}{}{A}}$ by
$h_n^{\widetilde{S}}((a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} x) = (1 \otimes a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_{n+2}} x$.
A direct calculation shows that
$\diff{n+1}{\widetilde{S}} \circ h_n^{\widetilde{S}}
+ h_{n-1}^{\widetilde{S}} \circ \diff{n}{\widetilde{S}} = \id{}$,
that is, $h_\bullet^{\widetilde{S}}$ is a contracting homotopy.
Therefore, $(\sym{\bullet}{}{A} , \diff{\bullet}{\widetilde{S}})$ is exact.
Similarly, we describe that symmetric Hochschild cohomology
is given by a resolution of $A$.
For $a_0 , \dots , a_{n+1} \in A$ and $1 \leq i \leq n$,
define the right action of $S_{n+1}$ on $\res{\widetilde{T}}{n}{e}{A}$ by
$$
(a_0 \otimes \cdots \otimes a_{n+1}) \cdot \sigma_i
= -a_0 \otimes \cdots \otimes a_{i-2} \otimes a_i \otimes a_{i-1}
\otimes a_{i+1} \otimes \cdots \otimes a_{n+1}.
$$
This action implies that $\res{\widetilde{T}}{n}{e}{A}$
is a right $kS_{n+1}$-module.
Furthermore, $\cpx{K}{e}{n}{A}{M}$ is a left $kS_{n+1}$-module
induced by the right $kS_{n+1}$-module structure
of $\res{\widetilde{T}}{n}{e}{A}$.
According to the structure of the counit on $kS_{n+1}$,
there is a natural isomorphism
$\cpx{KS}{e}{n}{A}{M} \cong {}^{kS_{n+1}} \cpx{K}{e}{n}{A}{M}$
as $k$-vector spaces.
Moreover, we get an isomorphism which is similar to
Proposition \ref{prop.ks-hom}.
\begin{prop}
Let $A$ be a cocommutative Hopf algebra and $M$ an $A$-bimodule.
Then, for each $n \geq 0$, there is an isomorphism
$$
\cpx{KS}{e}{n}{A}{M} \cong
\Hom{A}{\res{\widetilde{T}}{n}{e}{A} \otimes_{kS_{n+1}} k}{M}
$$
as $k$-vector spaces, with the trivial left $kS_{n+1}$-module $k$.
\end{prop}
Next, we define a resolution of $A$.
We take the $A$-bimodule sequence:
$$
\cdots \longrightarrow \sym{n}{e}{A}
\overset{\diff{n}{\widetilde{S}^\mathrm{e}}}{\longrightarrow}
\sym{n-1}{e}{A} \longrightarrow
\cdots \longrightarrow \sym{1}{e}{A}
\overset{\diff{1}{\widetilde{S}^\mathrm{e}}}{\longrightarrow}
\sym{0}{e}{A} \overset{\diff{0}{\widetilde{S}^\mathrm{e}}}{\longrightarrow}
A \longrightarrow 0,
$$
where we set
$\diff{n}{\widetilde{S}^\mathrm{e}}((a_0 \otimes \cdots \otimes a_{n+1})
\otimes_{kS_{n+1}} x)
= \diff{n}{\widetilde{T}^\mathrm{e}}(a_0 \otimes \cdots \otimes a_{n+1})
\otimes_{kS_{n}} x$
and denote $\sym{n}{e}{A} = \res{\widetilde{T}}{n}{e}{A} \otimes_{kS_{n+1}}k$.
Then this sequence is a resolution of $A$ as $A^\mathrm{e}$-modules.
Moreover, there is an isomorphism
$\sym{n}{e}{A} \cong \sym{n}{}{A} \otimes A$
as $A$-bimodules where the right $A$-module structure
of $\sym{n}{}{A}$ is trivial.
\subsection{The relationships between symmetric cohomology and symmetric Hochschild cohomology}
In this subsection,
we show that there is an isomorphism
between symmetric cohomology and symmetric Hochschild cohomology
as $k$-vector spaces.
First, we recall a result on group cohomology
due to Eilenberg and MacLane.
Here, ${}^\mathrm{ad} X$ denotes the left module
whose structure is given by the left adjoint action.
According to the structure of the coproduct and the antipode
of group algebras, ${}^\mathrm{ad} X$ is a left $G$-module
via $g \cdot x = g x g^{-1}$ for $g \in G$ and $x \in X$.
\begin{thm}[Eilenberg-MacLane {\cite[Section 5]{eilenberg1947cohomology}}]
\label{thm.iso-grp-coho}
Let $G$ be a group and $X$ a $G$-bimodule.
Then, for each $n \geq 0$, there is an isomorphism
$\hoch{n}{\mathbb{Z} G}{X} \cong \hcoho{n}{G}{{}^\mathrm{ad} X}$
as $\mathbb{Z}$-modules.
\end{thm}
\begin{rem}
\label{rem.iso-hopf-coho}
Theorem \ref{thm.iso-grp-coho} is generalized
to the case of Hopf algebras
by Ginzburg-Kumar \cite[Subsection 5.6]{ginzburg1993cohomology}.
\end{rem}
In this paper, we have the following result
which is a symmetric version of the above isomorphism.
\begin{thm}
\label{thm.iso-symcoho-symHoch}
Let $A$ be a cocommutative Hopf algebra and $M$ an $A$-bimodule.
Then, for each $n \geq 0$, there is an isomorphism
$$
\shoch{n}{A}{M} \cong \shcoho{n}{A}{{}^\mathrm{ad} M}
$$
as $k$-vector spaces.
\end{thm}
\begin{proof}
According to Lemma \ref{lem.hopf-adjunction},
we have the following isomorphisms
\begin{align*}
\cpx{KS}{e}{n}{A}{M} &\cong \Hom{A^e}{\sym{n}{e}{A}}{M} \\
&= \Hom{A^e}{\sym{n}{}{A} \otimes A}{M}
\cong \Hom{A^e}{\sym{n}{}{A}}{\Hom{k}{A}{M}}.
\end{align*}
So, since $\sym{n}{}{A}$ is a trivial right $A$-module,
there is an isomorphism
$$
\Hom{A^e}{\sym{n}{}{A}}{\Hom{k}{A}{M}} \cong
\Hom{l\operatorname{\mathsf{\hspace{-1pt}-\hspace{-1pt}}}A}{\sym{n}{}{A}}{\Hom{k}{A}{M}^A},
$$
where $\mathrm{Hom}_{l\operatorname{\mathsf{\hspace{-1pt}-\hspace{-1pt}}}A}$ (or $\mathrm{Hom}_{r\operatorname{\mathsf{\hspace{-1pt}-\hspace{-1pt}}}A}$) denotes a set of homomorphisms as a left (or right) $A$-module. While, there are isomorphisms of left $A$-modules
$$
\Hom{k}{A}{M}^A \cong
\Hom{r\operatorname{\mathsf{\hspace{-1pt}-\hspace{-1pt}}}A}{A}{M}
\cong {}^{\mathrm{ad}}{M}.
$$
Hence, we have
$\Hom{l\operatorname{\mathsf{\hspace{-1pt}-\hspace{-1pt}}}A}{\sym{n}{}{A}}{\Hom{k}{A}{M}^A}
\cong \Hom{l\operatorname{\mathsf{\hspace{-1pt}-\hspace{-1pt}}}A}{\sym{n}{}{A}}{{}^{\mathrm{ad}}M}
\cong \cpx{KS}{}{n}{A}{{}^{\mathrm{ad}}M}$.
\end{proof}
Moreover, we have the following assertion
from Theorem \ref{thm.iso-symcoho-symHoch}.
\begin{cor}
Let $A$ be a finite-dimensional, commutative
and cocommutative Hopf algebra.
Then, for each $n \geq 0$, there is an isomorphism
$$
\shoch{n}{A}{A} \cong A \otimes \shcoho{n}{A}{k}
$$
as $k$-vector spaces.
\end{cor}
\begin{proof}
According to Theorem \ref{thm.iso-symcoho-symHoch},
there is an isomorphism
$\cpx{KS}{e}{n}{A}{A} \cong \cpx{KS}{}{n}{A}{{}^{\mathrm{ad}}{A}}$.
Furthermore, by Lemma \ref{lem.hopf-hom-invariant}
and \ref{lem.hopf-hom-dual}, there are isomorphisms
\begin{align*}
\cpx{KS}{}{n}{A}{{}^{\mathrm{ad}}A}
&\cong \Hom{A}{\sym{n}{}{A}}{{}^{\mathrm{ad}}A} \\
&\cong {}^{A}\Hom{k}{\sym{n}{}{A}}{{}^{\mathrm{ad}}A}
\cong {}^{A}({}^{\mathrm{ad}}A \otimes \Hom{k}{\sym{n}{}{A}}{k}).
\end{align*}
Since $A$ is commutative, we have
\begin{align*}
{}^{A}({}^{\mathrm{ad}}A \otimes \Hom{k}{\sym{n}{}{A}}{k})
&\cong A \otimes {}^{A}\Hom{k}{\sym{n}{}{A}}{k} \\
&\cong A \otimes \Hom{A}{\sym{n}{}{A}}{k}
\cong A \otimes \cpx{KS}{}{n}{A}{k}.
\end{align*}
\end{proof}
\subsection{The relationships between classical cohomology and symmetric cohomology}
In this subsection, we describe the relationships
between classical cohomology and symmetric cohomology.
First, the results about the morphism
$\shcoho{n}{A}{M} \longrightarrow \hcoho{n}{A}{M}$
induced by the inclusion in low degrees are as follows.
In the case of degree $0$, since $S_1$ is the trivial group,
we have $\cpx{CS}{}{0}{A}{M} = \Hom{A}{\res{\widetilde{T}}{0}{}{A}}{M}$.
In the case of degree $1$, we have the following assertion
by the same proofs as for \cite[Proposition 2.1]{todea2015symmetric}.
\begin{prop}
Let $A$ be a cocommutative Hopf algebra
and $M$ a left $A$-module.
Then there is an isomorphism
$\shcoho{1}{A}{M} \cong \hcoho{1}{A}{M}$ as $k$-vector spaces.
\end{prop}
In the case of degree $2$,
we have the following assertion by the same proofs
as for \cite[Lemma 3.1]{staic2009symmetric}.
\begin{prop}
Let $A$ be a cocommutative Hopf algebra
and $M$ a left $A$-module.
Then the morphism
$\shcoho{2}{A}{M} \longrightarrow \hcoho{2}{A}{M}$
induced by the inclusion
$\cpx{CS}{}{2}{A}{M} \longrightarrow \cpx{C}{}{2}{A}{M}$ is injective.
\end{prop}
Next, we have the following result about the projectivity
of the resolution $\sym{\bullet}{}{A}$.
\begin{thm}\label{thm.res-proj}
Let $A$ be a cocommutative Hopf algebra.
Let $n \geq 1$. If $\ch{k} \nmid n+1$, then $\sym{n}{}{A}$ is a projective
left $A$-module.
\end{thm}
\begin{proof}
First, we show that $\sym{n}{}{A}$ is a direct summand
of $A \otimes \sym{n-1}{}{A}$ as a left $A$-module.
Define the morphism
$\map{\phi}{\sym{n}{}{A}}{A \otimes \sym{n-1}{}{A}}$ by
$$
\phi( (a_0 \otimes \cdots \otimes a_n) \otimes_{kS_{n+1}} x)
= \frac{1}{n+1} \sum_{i = 0}^n (-1)^i a_i \otimes
((a_0 \otimes \cdots \otimes a_{i-1} \otimes a_{i+1}
\otimes \cdots \otimes a_n) \otimes_{kS_n} x)
$$
for any $a_0, \dots , a_n \in A$ and $x \in k$.
Then this morphism is well-defined.
Indeed, we define the morphism
$\map{\phi^\prime}{\res{\widetilde{T}}{n}{}{A} \times k}{A \otimes \sym{n-1}{}{A}}$
by
$$
\phi^\prime( (a_0 \otimes \cdots \otimes a_n) , x)
= \frac{1}{n+1} \sum_{i = 0}^n (-1)^i a_i
\otimes ((a_0 \otimes \cdots \otimes a_{i-1} \otimes a_{i+1}
\otimes \cdots \otimes a_n) \otimes_{kS_n} x).
$$
We show that
$\phi^\prime ((a_0 \otimes \cdots \otimes a_n)\cdot \sigma_j , x)
= \phi^\prime((a_0 \otimes \cdots \otimes a_n) , \sigma_j \cdot x)$
for each $1 \leq j \leq n$.
The left hand side is
\begin{align*}
&\phi^\prime ((a_0 \otimes \cdots \otimes a_n)\cdot \sigma_j , x)
= \phi^\prime ((-a_0 \otimes \cdots \otimes a_j
\otimes a_{j-1} \otimes \cdots \otimes a_n) , x) \\
&= \frac{1}{n+1} \left\{ \sum_{i=0}^{j-2} (-1)^i a_i
\otimes ((-a_0 \otimes \cdots \otimes a_{i-1} \otimes a_{i+1}
\otimes \cdots \otimes a_j \otimes a_{j-1} \otimes \cdots
\otimes a_n) \otimes_{kS_{n}} x)\right. \\
&\hspace{1.3cm} +(-1)^{j-1} a_j \otimes
((-a_0 \otimes \cdots \otimes a_{j-2} \otimes
a_{j-1} \otimes a_{j+1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n}} x) \\
&\hspace{1.3cm} +(-1)^j a_{j-1}
\otimes ((-a_0 \otimes \cdots \otimes a_{j-2}
\otimes a_j \otimes a_{j+1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n}} x) \\
&\hspace{1.3cm} +\left.\sum_{i=j+1}^n (-1)^i a_i
\otimes ((-a_0 \otimes \cdots \otimes a_j \otimes a_{j-1}
\otimes \cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n}} x) \right\}.
\end{align*}
Rewriting the first term of the above formula,
using that $k$ is a trivial left $kS_n$-module, we have
\begin{align*}
&(-a_0 \otimes \cdots \otimes a_{j-1} \otimes a_{j+1}
\otimes \cdots
\otimes a_j \otimes a_{j-1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n}} x \\
&\quad=(-a_0 \otimes \cdots \otimes a_{j-1} \otimes a_{j+1}
\otimes \cdots \otimes a_j \otimes a_{j-1}
\otimes \cdots \otimes a_n) \otimes_{kS_{n}} \sigma_{j-1} \cdot x \\
&\quad=(-a_0 \otimes \cdots \otimes a_{j-1} \otimes a_{j+1}
\otimes \cdots \otimes a_j \otimes a_{j-1} \otimes \cdots
\otimes a_n)\cdot \sigma_{j-1} \otimes_{kS_{n}} x \\
&\quad=(a_0 \otimes \cdots \otimes a_{j-1} \otimes a_{j+1} \otimes
\cdots \otimes a_{j-1} \otimes a_{j} \otimes \cdots \otimes a_n)
\otimes_{kS_{n}} x.
\end{align*}
Similarly, we have
\begin{align*}
&(-a_0 \otimes \cdots \otimes a_j \otimes a_{j-1} \otimes
\cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n}} x \\
&\quad=(a_0 \otimes \cdots \otimes a_{j-1} \otimes a_{j} \otimes
\cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n}} x,
\end{align*}
and so we have
$$
\phi^\prime ((a_0 \otimes \cdots \otimes a_n)\cdot \sigma_j , x)
= \phi^\prime ((a_0 \otimes \cdots \otimes a_n) , x)
=\phi^\prime ((a_0 \otimes \cdots \otimes a_n) , \sigma_j \cdot x).
$$
Therefore, $\phi$ is well-defined.
Next, we show that $\phi$ is an $A$-module homomorphism.
A direct calculation shows that, for all $a, a_0, \dots , a_n \in A$
and $x \in k$,
\vspace{-0.5em}
\begin{align*}
&\phi( a\cdot ((a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} x)) \\
&\quad= \phi( a\cdot (a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} x) \\
&\quad= \phi( (a^{(1)}a_0 \otimes \cdots \otimes a^{(n+1)}a_n)
\otimes_{kS_{n+1}} x) \\
&\quad= \frac{1}{n+1} \sum_{i=0}^n (-1)^i a^{(i+1)}a_i
\otimes ((a^{(1)}a_0 \otimes \cdots \otimes a^{(i)}a_{i-1}
\otimes a^{(i+2)}a_{i+1} \otimes \cdots \otimes a^{(n+1)}a_n)
\otimes_{kS_n} x) \\
&\quad= \frac{1}{n+1} \sum_{i=0}^n (-1)^i a^{(1)}a_i
\otimes ((a^{(2)}a_0 \otimes \cdots \otimes a^{(i+1)}a_{i-1}
\otimes a^{(i+2)}a_{i+1} \otimes \cdots \otimes a^{(n+1)}a_n)
\otimes_{kS_n} x) \\
&\quad= \frac{1}{n+1} \sum_{i=0}^n (-1)^i a^{(1)}a_i
\otimes (a^{(2)}\cdot (a_0 \otimes \cdots \otimes a_{i-1}
\otimes a_{i+1} \otimes \cdots \otimes a_n) \otimes_{kS_n} x) \\
&\quad= \frac{1}{n+1} \sum_{i=0}^n (-1)^i a\cdot
(a_i \otimes ((a_0 \otimes \cdots \otimes a_{i-1} \otimes a_{i+1}
\otimes \cdots \otimes a_n) \otimes_{kS_n} x)) \\
&\quad=a \cdot \phi((a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} x).
\end{align*}
Hence, $\phi$ is an $A$-module homomorphism.
Finally, we construct a retraction of $\phi$.
We define the morphism
$\map{\psi}{A \otimes \sym{n-1}{}{A}}{\sym{n}{}{A}}$ by
$$
\psi(a \otimes ((a_0 \otimes \cdots \otimes a_{n-1}) \otimes_{kS_n} x))
= (a \otimes a_0 \otimes \cdots \otimes a_{n-1}) \otimes_{kS_{n+1}} x.
$$
Since $S_n$ is a subgroup of $S_{n+1}$,
this morphism is well-defined.
Furthermore, we show that $\psi$ is an $A$-module homomorphism.
A direct calculation shows that,
for all $b,a, a_0, \dots , a_n \in A$ and $x \in k$,
\begin{align*}
\psi(b\cdot (a \otimes ((a_0 \otimes \cdots \otimes a_{n-1})
\otimes_{kS_n} x)))
&= \psi(b^{(1)}a \otimes (b^{(2)}
\cdot (a_0 \otimes \cdots \otimes a_{n-1}) \otimes_{kS_n} x)) \\
&= \psi(b^{(1)}a \otimes ((b^{(2)}a_0 \otimes \cdots
\otimes b^{(n+1)}a_{n-1}) \otimes_{kS_n} x)) \\
&= (b^{(1)}a \otimes b^{(2)}a_0 \otimes \cdots
\otimes b^{(n+1)}a_{n-1}) \otimes_{kS_{n+1}} x \\
&= b \cdot (a \otimes a_0 \otimes \cdots \otimes a_{n-1})
\otimes_{kS_{n+1}} x \\
&= b \cdot \psi(a \otimes ((a_0 \otimes \cdots \otimes a_{n-1})
\otimes_{kS_n} x)).
\end{align*}
Hence, $\psi$ is an $A$-module homomorphism.
Moreover, for all $a_0,\dots ,a_n \in A$ and $x \in k$, we have
\vspace{-0.5em}
\begin{align*}
&\psi \circ \phi((a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} x) \\
&\quad= \psi \left( \frac{1}{n+1} \sum_{i = 0}^n (-1)^i a_i
\otimes ((a_0 \otimes \cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots
\otimes a_n) \otimes_{kS_n} x) \right) \\
&\quad= \frac{1}{n+1} \sum_{i = 0}^n (-1)^i (a_i \otimes a_0
\otimes \cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} x \\
&\quad= \frac{1}{n+1} \sum_{i = 0}^n (-1)^i (a_i \otimes a_0
\otimes \cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} (\sigma_1 \cdots \sigma_i) \cdot x \\
&\quad= \frac{1}{n+1} \sum_{i = 0}^n (-1)^i (a_i \otimes a_0
\otimes \cdots \otimes a_{i-1} \otimes a_{i+1} \otimes \cdots \otimes a_n)
\cdot (\sigma_1 \cdots \sigma_i) \otimes_{kS_{n+1}} x \\
&\quad= \frac{1}{n+1} \cdot (n+1) (a_0 \otimes \cdots \otimes a_n)
\otimes_{kS_{n+1}} x \\
&\quad= (a_0 \otimes \cdots \otimes a_n) \otimes_{kS_{n+1}} x.
\end{align*}
So, $\psi \circ \phi = \id{}$ holds.
While, by Lemma \ref{lem.hopf-proj},
$A \otimes \sym{n-1}{}{A}$ is projective as a left $A$-module.
Therefore, $\sym{n}{}{A}$ is projective as a left $A$-module.
\end{proof}
\begin{rem}
By Theorem \ref{thm.res-proj},
if $\ch{k}\nmid (n+1)!$, then, for each $0 \leq m \leq n$,
there is an isomorphism $\hcoho{m}{A}{M} \cong \shcoho{m}{A}{M}$
as $k$-vector spaces.
In particular, if $\ch{k} = 0$, then $\sym{\bullet}{}{A}$
is a projective resolution of $k$, and hence there is an isomorphism
$\hcoho{\bullet}{A}{M} \cong \shcoho{\bullet}{A}{M}$
as $k$-vector spaces.
\end{rem}
\subsection{Example}
In the last subsection, we describe an example
of the resolution which gives symmetric cohomology.
Let $p$ be an odd prime number,
$k$ a field of characteristic $p$ and $C_p$ the cyclic group of order $p$.
Then we calculate the symmetric cohomology of $A = kC_p$.
\begin{prop}
Let $p$ be an odd prime number, $\ch{k} = p$ and $A= kC_p$.
Then $\sym{n}{}{A}$ is a free $A$-module
with rank $\dfrac{{}_p C_{n+1}}{p}$ for each $1 \leq n \leq p-2$.
\end{prop}
\begin{proof}
Take any generator
$(g_0 \otimes \cdots \otimes g_n) \otimes_{kS_{n+1}} 1$
of $\sym{n}{}{A}$ as a left $A$-module.
For each $i$, there exists $0 \leq a_i \leq p-1$ such that $g_i = g^{a_i}$,
where $C_p = \langle g \mid g^p = e \rangle$.
We can change the order of tensor products
by an action of symmetric groups,
and hence we can assume $a_0 < a_1 < \cdots < a_n$
without loss of generality.
First, we show
$$
h \cdot ((g_0 \otimes \cdots \otimes g_n) \otimes_{kS_{n+1}} 1)
\neq \pm (g_0 \otimes \cdots \otimes g_n) \otimes_{kS_{n+1}} 1
$$
for $(e\neq) h=g^a \in C_p$.
Assume that there exists
$h = g^a$ such that $h \cdot ((g_0 \otimes \cdots \otimes g_n)
\otimes_{kS_{n+1}} 1) = \pm (g_0 \otimes \cdots \otimes g_n)
\otimes_{kS_{n+1}} 1$. Then there exists $i$ such that
\begin{align*}
a_j \equiv \begin{cases}
a_{i+j} + a & (i+j \leq n), \\
a_{i+j-(n+1)} + a & (i+j \geq n+1)
\end{cases}
\end{align*}
modulo $p$ for $0 \leq j \leq n$.
In particular, $a_0 \equiv a_i + a \pmod{p}$ and
\begin{align*}
a_i \equiv \begin{cases}
a_{2i} + a & (2i \leq n), \\
a_{2i-(n+1)} +a & (2i \geq n+1)
\end{cases}
\end{align*}
hold. Hence, we have
\begin{align*}
a_0 \equiv \begin{cases}
a_{2i} + 2a & (2i \leq n), \\
a_{2i-(n+1)} +2a & (2i \geq n+1).
\end{cases}
\end{align*}
Repeating this operation,
there exist $1 \leq m \leq n+1$ and $0 \leq l \leq m$
such that $mi-l(n+1)=0$.
Then, since $a_{mi-l(n+1)} = a_0$, we obtain $ma \equiv 0 \pmod{p}$.
This contradicts $1 \leq a \leq p-1$
and $1 \leq m \leq n+1 \leq p-1$.
Therefore,
$h \cdot ((g_0 \otimes \cdots \otimes g_n)
\otimes_{kS_{n+1}} 1) \neq \pm (g_0 \otimes \cdots \otimes g_n)
\otimes_{kS_{n+1}} 1$ for $(e\neq) h=g^a \in C_p$ is obtained.
Next, we show that generators of $\sym{n}{}{A}$
as a left $A$-module are not torsion. Assume that
\begin{align*}
a \cdot ((g_0 \otimes \cdots \otimes g_n) \otimes_{kS_{n+1}} 1)=0
\end{align*}
for any $a\in A$. If $a = \sum_{i=0}^{p-1} x_i g^i$, then
\begin{align*}
a \cdot ((g_0 \otimes \cdots \otimes g_n) \otimes_{kS_{n+1}} 1)
= \sum_{i=0}^{p-1} x_i (g^i \cdot ((g_0 \otimes \cdots \otimes g_n)
\otimes_{kS_{n+1}} 1)).
\end{align*}
By the above discussion,
the elements $g^i \cdot ((g_0 \otimes \cdots \otimes g_n) \otimes_{kS_{n+1}} 1)$,
$0 \leq i \leq p-1$, are linearly independent.
This implies $a = 0$, and hence the generators are not torsion.
Finally, we construct a basis as an $A$-module from a basis
as a $k$-vector space.
Let $\mathcal{B} = \{ b_1,\dots ,b_{{}_pC_{n+1}} \}$
be a basis of $\sym{n}{}{A}$ as a $k$-vector space.
If there exists $j \neq i$ such that
$b_i \in \orb{C_p}{b_j} \cup \orb{C_p}{-b_j}$,
where $\orb{C_p}{b}$ denotes the orbit of $b$ under the action of $C_p$,
then we define $\mathcal{B}_{(i)}$ by removing $b_i$ from $\mathcal{B}$.
We then continue this operation on $\mathcal{B}_{(i)}$.
By repeating the operation, we obtain a set $\mathcal{B}^\prime$
whose elements have pairwise disjoint orbits.
By its construction, $\mathcal{B}^\prime$ generates $\sym{n}{}{A}$
as an $A$-module.
Also, the elements of $\mathcal{B}^\prime$ are linearly independent over $A$
because they are not torsion
and their orbits are pairwise disjoint.
Hence, $\sym{n}{}{A}$ is a free $A$-module.
In particular, we get
$\pm \mathcal{B}
= \bigcup_{b\in \mathcal{B}^\prime} \left( \orb{C_p}{b} \cup
\orb{C_p}{-b} \right)$
where we put $\pm \mathcal{B} = \{ \pm b \mid b\in \mathcal{B} \}$.
While, we have $\left| \mathcal{B} \right| = {}_p C_{n+1}$
and $\left| \orb{C_p}{b} \right| = p$.
Therefore, $\left| \mathcal{B}^\prime \right| = \dfrac{{}_pC_{n+1}}{p}$ holds,
that is, the rank of $\sym{n}{}{A}$ is $\dfrac{{}_pC_{n+1}}{p}$.
\end{proof}
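The rank formula can also be checked numerically. The proof exhibits a $k$-basis
of $\sym{n}{}{A}$ with ${}_pC_{n+1}$ elements, which may be indexed by the
$(n+1)$-element subsets of $\mathbb{Z}/p\mathbb{Z}$ (one tensor of pairwise
distinct group elements per subset, up to sign), and shows that $C_p$ permutes
them freely. The following short Python sketch (independent of the proof, with
our own names) counts the $C_p$-orbits and compares the result with
$\dfrac{{}_pC_{n+1}}{p}$.
\begin{verbatim}
from itertools import combinations
from math import comb

# Sanity check of the rank formula for A = kC_p: the (n+1)-element subsets
# of Z/p index a k-basis of S_n(A), and C_p acts on them by translation;
# the proposition predicts comb(p, n+1) / p free orbits.
def orbit_count(p, n):
    subsets = {frozenset(c) for c in combinations(range(p), n + 1)}
    seen, orbits = set(), 0
    for s in subsets:
        if s in seen:
            continue
        orbits += 1
        for shift in range(p):
            seen.add(frozenset((x + shift) % p for x in s))
    return orbits

p = 7
for n in range(1, p - 1):
    assert orbit_count(p, n) == comb(p, n + 1) // p   # e.g. n = 2: 35 / 7 = 5
\end{verbatim}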
Since $\sym{p-1}{}{A}$ is isomorphic to $k$ as a left $A$-module,
the resolution of $k$ is the following exact sequence
$$
0 \rightarrow k \xrightarrow{\diff{p-1}{\widetilde{S}}}
\sym{p-2}{}{A} \rightarrow \cdots \rightarrow
\sym{1}{}{A} \xrightarrow{\diff{1}{\widetilde{S}}}
\sym{0}{}{A} \xrightarrow{\diff{0}{\widetilde{S}}} k \rightarrow 0,
$$
where $\sym{i}{}{A}$ is a free $A$-module for each $0 \leq i \leq p-2$.
This implies that there is an isomorphism
$\hcoho{n}{A}{M} \cong \shcoho{n}{A}{M}$
for any left $A$-module $M$ and each $0 \leq n \leq p-2$.
Also, in the case of $n=p-1$, the above isomorphism is obtained
by a simple calculation. Summarizing the above, we have
\begin{align*}
\shcoho{n}{A}{M} \cong
\begin{cases}
\hcoho{n}{A}{M} & (0 \leq n \leq p-1), \\
0 & (p \leq n).
\end{cases}
\end{align*}
\section{Introduction}
In this paper we consider the $\gamma$-discounted infinite-horizon constrained Markov decision process (CMDP) \cite{altman1999constrained}. Such problems arise in many practical applications, such as autonomous driving \cite{fisac2018general}, robotics \cite{ono2015chance},
and other systems where the agent must meet safety constraints.
An example of such a problem is an energy-efficient wireless communication system that aims to consume minimum power without violating any quality-of-service constraint \cite{li2016cmdp}. Such Reinforcement Learning (RL) problems are often formulated as CMDPs \cite{garcia2015comprehensive}.
Recently, \cite{ying2022dual,lanarcpo,liu2021fast} proposed algorithms (under various assumptions)
that achieve $\tilde{\mathcal{O}}\left(1/\epsilon\right)$\footnote{For clarity we skip the dependence on $1 - \gamma$ and logarithmic factors.} iteration complexity to find the global optimum, where $\epsilon$ characterizes the optimality gap and the constraint violation. Each iteration of the proposed methods has the same complexity as an iteration of Policy Gradient (PG) methods.
Although the CMDP problem (typically stated as a maximization problem) is nonconcave in the policy $\pi$ (the nonconcavity is inherited from the
MDP problem, which is nonconcave even in the bandit case \cite{mei2020global}), the complexity $\tilde{\mathcal{O}}\left(1/\epsilon\right)$ matches the lower bound for smooth concave problems with a large number of constraints \cite{nemirovsky1992information,ouyang2021lower}.
Nevertheless, if we have only a few constraints $m$, which is typical for most practical applications, these results are not optimal: for concave problems with $m$ constraints one may expect $\tilde{\mathcal{O}}\left(m\right)$ iteration complexity \cite{gasnikov2016universal,gladin2020solving,Gladin2021SolvingSM,xu2020first}, which corresponds to the lower bound for small enough $m$ \cite{nemirovsky1979problem}. In this paper we transfer the $\tilde{\mathcal{O}}\left(m\right)$ iteration complexity result to the nonconcave CMDP problem.
\subsection{Related work}
There is considerable interest in RL / MDP problems \cite{sutton1999policy,puterman2014markov,bertsekas2019reinforcement} and CMDP problems \cite{altman1999constrained}. Over the last ten years there has been great theoretical progress in different directions. For example,
given a generative model with $|\mathcal{S}|$ states and $|\mathcal{A}|$ actions, we can find an $\epsilon$-optimal policy ($\epsilon$ measures the quality in terms of cumulative reward) for the $\gamma$-discounted infinite-horizon MDP problem with
\begin{equation}\label{complexity}
\tilde{\mathcal{O}}\left(\frac{|\mathcal{S}| \cdot |\mathcal{A}|}{(1-\gamma)^{3}\epsilon^{2}}\right)
\end{equation}
samples \cite{sidford2018near,wainwright2019variance,agarwal2020model} (analogously for CMDP, see the arXiv version of \cite{jin2020efficiently}), which corresponds (up to logarithmic factors) to the lower bound from \cite{azar2012sample}. Moreover, the dependence on $\epsilon$ can be improved to $\log\left(1/\epsilon\right)$ at the cost of a worse dependence on $|\mathcal{S}|$.
Unfortunately, in many practical applications these optimal algorithms do not work at all due to the size of $|\mathcal{S}|$.
A popular way to escape the curse of dimensionality is to use PG methods \cite{mnih2015human,schulman2015trust}, where a parameterized (for example, by Deep Neural Networks \cite{li2017deep}) class of policies is considered. At the core of PG-type methods for MDP problems lie gradient-type methods (Mirror Descent \cite{lan2022policy,zhan2021policy}, Natural Policy Gradient (NPG) \cite{kakade2001natural,cen2021fast}, etc.) in the space of parameters, applied to a properly regularized (in a proper proximal setup) cumulative reward maximization problem. The gradient is calculated by using the policy gradient theorem \cite{sutton1999policy}, which reduces gradient calculation to $Q$-function estimation. Under a proper choice of regularizers (proximal setups) these methods require $\tilde{\mathcal{O}}\left((1-\gamma)^{-1}\right)$ iterations (they converge linearly in function value and in policy) and are not sensitive to inexactness $\delta$ of $Q$-value estimation ($\delta \sim \epsilon$); see details in \cite{cen2021fast,lan2022policy,zhan2021policy} and references therein. Given a generative model, from these results we can obtain (see \cite{azar2012sample,agarwal2020model}) analogues of formula \eqref{complexity} for sample complexity that would be worse in terms of the $(1-\gamma)$ dependence \cite{cen2021fast}, but can be better in terms of $|\mathcal{S}|$.
For CMDP problems, PG methods are also well developed, e.g., see surveys in \cite{lanarcpo,liu2021fast}. As we have already mentioned, the best-known iteration complexity bounds \cite{lanarcpo,liu2021fast,ying2022dual} assume that the number of constraints $m$ is big enough.
In \cite{ying2022dual}, under an additional strong assumption (the initial state distribution covers the entire state space), the complexity bound $\tilde{\mathcal{O}}\left(\epsilon^{-1}\right)$ was obtained for the entropy-regularized CMDP (for the true, unregularized problem -- $\tilde{\mathcal{O}}\left(\epsilon^{-2}\right)$). In \cite{lanarcpo} the complexity bound $\tilde{\mathcal{O}}\left(\epsilon^{-1}\right)$ was obtained under a weaker additional assumption (the Markov chain induced by any stationary policy is ergodic) by using a dual approach; see Section~\ref{contr} for the details. For both of these approaches, given a generative model, we can obtain analogs of formula \eqref{complexity} for sample complexity that would be worse not only in terms of the $(1-\gamma)$ dependence but also in terms of $\epsilon$ (but still can be better in terms of $|\mathcal{S}|$).
In \cite{liu2021fast} the complexity bound $\tilde{\mathcal{O}}\left(\epsilon^{-1}\right)$ was obtained without additional assumptions. However, the PG procedure (Policy Mirror Descent) was incorporated into the method directly, so no sensitivity analysis for inexactness in $Q$-values is available.
\subsection{Main contributions}\label{contr}
At the core of our approach lies the paper \cite{lanarcpo}, where the authors introduce an entropy-regularized policy optimizer and solve the regularized dual problem by a proper version of Nesterov's accelerated gradient method.
First of all, they use
strong duality for the CMDP problem, which can be derived \cite{paternain2019constrained} from the compactness and convexity of the set of occupation measures \cite{borkar1988convex} or
from the Linear Programming representation of the CMDP problem in the discounted state-action visitation distribution \cite{altman1999constrained}. The next important step is entropy policy regularization. This regularization solves several tasks at once. First, it allows one to estimate the gradient of the dual function using the NPG method, which has a linear rate of convergence (in policy) and is robust to inexactness in $Q$-function evaluations \cite{cen2021fast}. The linear rate of convergence in policy is crucial since the dual accelerated method is sensitive to inexactness in the gradient, which can be controlled if we have convergence in policy.
Secondly, this regularization allows one to prove smoothness of the dual problem (in the spirit of \cite{nesterov2005smooth}, with an additional neat analysis of Mitrophanov's perturbation bounds \cite{mitrophanov2005sensitivity} showing that the visitation measure is Lipschitz w.r.t. the policy).
The smoothness of the dual problem makes it possible to use Nesterov's accelerated method to solve it and to obtain an optimal rate.
The last step is the regularization of the dual problem, which yields a linear rate of convergence for the dual accelerated method and compensates for the fact that the dual problem has to be solved with higher accuracy in order to obtain the desired accuracy for the primal problem and the constraint violation \cite{devolder2012double,gasnikov2016efficient}. An alternative approach, which was not pursued there, relies on a primal-dual analysis of the method used for the dual problem; see \cite{nemirovski2010accuracy} for convex problems.
In this approach, it is sufficient to solve the dual problem with the same accuracy as we wish to solve the primal one. This (primal-dual) approach may preserve the dependence on $\epsilon^{-2}$ in \eqref{complexity} in the final sample complexity estimate if the method used for the dual problem does not accumulate the error in the gradient over iterations. From \cite{nemirovski2010accuracy,gladin2020solving} it is known that the Ellipsoid method is primal-dual and does not accumulate the error in the gradient.
Our contribution consists in replacing the dual accelerated method in the approach described above with Vaidya's cutting-plane method \cite{vaidya1989new,vaidya1996new}. Vaidya's method has a linear rate of convergence (without any regularization) and outperforms the accelerated method on small-dimensional problems \cite{bubeck_book}. Moreover, Vaidya's method does not accumulate the error in the gradient value \cite{Gladin2021SolvingSM} and hence is more robust than the accelerated method.
We also develop a new way, for CMDP problems, to estimate the quality of the primal solution from the dual one. To the best of our knowledge, the developed technique is also new for standard convex (concave) inequality-constrained problems and is quite different from the technique used in \cite{lanarcpo}. This technique can be applied to any linearly convergent algorithm for the dual problem.
Moreover, we improve the bound on the Lipschitz constant of the gradient of the dual function from \cite{lanarcpo} by a factor of $|\mathcal{A}|$. In Lemma 7 of \cite{lanarcpo} the authors formulate the correct result but in fact prove an $|\mathcal{A}|$-times worse bound than the one stated. We give an accurate proof of Lemma 7 by using a result from the Appendix of \cite{juditsky2005recursive}.
Similarly to \cite{lanarcpo}, our proposed method
can be applied to a wider class of nonconvex/nonconcave constrained problems with strong duality (zero duality gap) and uniqueness of the solution of the auxiliary problem, which relates the primal variables with the dual ones.
Finally, we demonstrate by numerical experiments that our proposed algorithm indeed outperforms AR-CPO from \cite{lanarcpo} when $m$ is not too big.
\section{Preliminaries}\label{sec_prelim}
\subsection{Markov Decision Process}
A Markov decision process (MDP) is determined by a five-tuple $(\mathcal{S}, \mathcal{A}, \mathrm{P}, r, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathrm{P}$ is the transition kernel, $r$ is the reward function and $\gamma \in(0,1)$ is the discount factor. Assume that $\mathcal{S}$ and $\mathcal{A}$ are finite with cardinalities $|\mathcal{S}|$ and $|\mathcal{A}|$, respectively. The initial state $s_{0}$ follows a distribution $\rho$. At any time $t \in \mathbb{N}_{+}$, an agent takes an action $a_{t} \in \mathcal{A}$ at state $s_{t} \in \mathcal{S}$, after which the environment transits to the next state $s_{t+1} \sim \mathrm{P}\left(\cdot \mid s_{t}, a_{t}\right)$ and the agent receives a reward $r\left(s_{t}, a_{t}\right)$. The goal is to maximize the expected accumulated discounted reward: $\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r\left(s_{t}, a_{t}\right)\right]$. \\
A stationary policy maps a state $s \in \mathcal{S}$ to a distribution $\pi(\cdot \mid s)$ over $\mathcal{A}$, which does not depend on time $t$. For a given policy $\pi$, its value function for any initial state $s \in \mathcal{S}$ is defined as $V_{r}^{\pi}(s):=\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r\left(s_{t}, a_{t}\right) \mid s_{0}=\right.$ $\left.s, a_{t} \sim \pi\left(a_{t} \mid s_{t}\right), s_{t+1} \sim \mathrm{P}\left(\cdot \mid s_{t}, a_{t}\right)\right]$. Taking the expectation with respect to the distribution of the initial state, the expected reward obtained by following policy $\pi$ is $V_{r}^{\pi}(\rho):=\mathbb{E}_{s_{0} \sim \rho}\left[V_{r}^{\pi}\left(s_{0}\right)\right]$. The discounted state-action visitation distribution $\nu_{\rho}^{\pi}$ is defined as follows: $\nu_{\rho}^{\pi}(s, a):=(1-\gamma) \sum_{t=0}^{\infty} \gamma^{t} \operatorname{Pr}\left\{s_{t}=s, a_{t}=a \mid s_{0} \sim\right.$ $\left.\rho, a_{t} \sim \pi\left(\cdot \mid s_{t}\right), s_{t+1} \sim \mathrm{P}\left(\cdot \mid s_{t}, a_{t}\right)\right\}$, for any $s \in \mathcal{S}, a \in \mathcal{A}$. The value function thus can be equivalently written as
$$
V_{r}^{\pi}(\rho)=\frac{\sum_{s \in \mathcal{S}, a \in \mathcal{A}} \nu_{\rho}^{\pi}(s, a) r(s, a)}{1-\gamma}=\frac{\left\langle\nu_{\rho}^{\pi}, r\right\rangle_{\mathcal{S} \times \mathcal{A}}}{1-\gamma},
$$
where $\langle\cdot, \cdot\rangle_{\mathcal{S} \times \mathcal{A}}$ denotes the inner product over the space $\mathcal{S} \times \mathcal{A}$, obtained by reshaping $\nu_{\rho}^{\pi}$ and $r$ as $|\mathcal{S}| \cdot |\mathcal{A}|$-dimensional vectors, and we omit the subscripts and use $\langle\cdot, \cdot\rangle$ when there is no confusion.
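For a small tabular MDP the quantities above can be evaluated exactly. The following minimal Python sketch (purely illustrative; the variable names are ours and are not used elsewhere in the paper) computes $\nu_{\rho}^{\pi}$ and $V_{r}^{\pi}(\rho)$ by solving the corresponding linear system.
\begin{verbatim}
import numpy as np

# P: (S, A, S) transition kernel, r: (S, A) reward, pi: (S, A) policy,
# rho: (S,) initial distribution, gamma in (0, 1).
def visitation_and_value(P, r, pi, rho, gamma):
    S = r.shape[0]
    # P_pi(s' | s) = sum_a pi(a | s) P(s' | s, a)
    P_pi = np.einsum('sa,sap->sp', pi, P)
    # discounted state visitation: d = (1 - gamma) (I - gamma P_pi^T)^{-1} rho
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
    nu = d[:, None] * pi                     # nu(s, a) = d(s) pi(a | s)
    return nu, np.sum(nu * r) / (1 - gamma)  # V_r^pi(rho) = <nu, r> / (1 - gamma)
\end{verbatim}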
\subsection{Constrained MDP}
\label{CMDP_setting}
The difference between a CMDP and an MDP is that the reward is an $(m+1)$-dimensional vector: $r(s, a)=$ $\left[r_{0}(s, a), r_{1}(s, a), \ldots, r_{m}(s, a)\right]^{\top}$. Each reward function $r_{i},\ i=0,1, \ldots, m,$ is positive and finite; we set $r_{i, \max }:=\max _{s \in \mathcal{S}, a \in \mathcal{A}}\left\{r_{i}(s, a)\right\}$ for $i=0,1, \ldots, m$ and $R_{\max }:=\sqrt{\sum_{i=1}^{m} r_{i, \max }^{2}}$. Then the value function with respect to the $i$-th component of the reward vector $r$ is defined as $V_{i}^{\pi}(s):=$ $\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{i}\left(s_{t}, a_{t}\right) \mid s_{0}=s, a_{t} \sim \pi\left(a_{t} \mid s_{t}\right), s_{t+1} \sim\right.$ $\left.\mathrm{P}\left(\cdot \mid s_{t}, a_{t}\right)\right]$ and $V_{i}^{\pi}(\rho) = \mathbb{E}_{s_{0} \sim \rho}\left[V_{i}^{\pi} \left(s_{0} \right)\right]$ for $i=0,1, \ldots, m$. The objective of the constrained MDP is to solve the following constrained optimization problem:
\begin{equation}
\begin{aligned}
\max_{\pi \in \Pi}\, & V_{0}^{\pi}(\rho)\\
\text{s.t. } & V_{i}^{\pi}(\rho) \geq c_{i}, \quad i=1, \ldots, m,
\label{optprob}
\end{aligned}
\end{equation}
where $\Pi=\left\{\pi \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}: \sum_{a \in \mathcal{A}} \pi(a \mid s)=1, \pi(a \mid s) \geq 0\right.$, $\forall(s, a) \in \mathcal{S} \times \mathcal{A}\}$ is the set of all stationary policies.\\
Let $\pi^{*}$ denote the optimal policy for the problem \eqref{optprob}. The goal is to
find an $\epsilon$-optimal policy defined as follows.
\begin{definition}
A policy $\tilde{\pi}$ is $\epsilon$-optimal if its corresponding optimality gap and the constraint violation satisfy
$$
V_{0}^{*}(\rho)-V_{0}^{\tilde{\pi}}(\rho) \leq \epsilon ; \text { and }\left\|\left[c-V^{\tilde{\pi}}(\rho)\right]_{+}\right\|_{2} \leq \epsilon,
$$
where $V_{0}^{*}(\rho)$ is the optimal value of \eqref{optprob}, $V^{\tilde{\pi}}(\rho):=$ $\left[V_{1}^{\tilde{\pi}}(\rho), \ldots, V_{m}^{\tilde{\pi}}(\rho) \right]^{\top}$ and $c:=\left[c_{1}, \ldots, c_{m}\right]^{\top}$.
\end{definition}
\subsection{Notation}
Let $I_m$ denote the $m\times m$ identity matrix and $\mathbf{1}_m \in \mathbb{R}^m$ the vector of ones. The set of nonnegative real numbers is denoted by $\mathbb{R}_+$. The notation $\operatorname{int} P$ is used for the interior of a set $P \subseteq \mathbb{R}^m$. Given a vector $x \in \mathbb{R}^m$, let $\| x \|_p$, $p \in [1, \infty]$, denote the $p$-norm of $x$, and let $[x]_+$ be defined by $\left([x]_+\right)_i=\max\{0, x_i\}$, $i=1, \ldots, m$. For two vectors $x,y \in \mathbb{R}^m$, the inner product is denoted by $\langle x, y\rangle$ or $x^\top y$.
Given two functions $f(\epsilon)$ and $g(\epsilon)$, we write $f(\epsilon)=\mathcal{O}(g(\epsilon))$ if there exists some constant $C>0$ such that $f(\epsilon) \leq C g(\epsilon)$ for small enough $\epsilon$. $\tilde{\mathcal{O}}(g(\epsilon))$ means $\mathcal{O}(g(\epsilon))$ up to a logarithmic factor in a small power (usually 1 or 2). The bold symbol $\pmb{\pi}$ denotes the constant ($\pmb{\pi} = 3.1415\ldots$), in contrast to the plain $\pi$, which denotes a policy. $\log(\cdot)$ is the natural logarithm.
\section{Cutting-plane algorithm for CMDP}
In this section, we introduce the Cutting-plane algorithm for CMDP which is presented in Algorithm \ref{alg:vaidya}.
The algorithm assumes access to:
\begin{enumerate}
\item Oracles sufficient to run the
Natural policy gradient (NPG) algorithm (see Appendix \ref{NPGs})
for our MDP with an arbitrary reward vector.
For example, access to the exact gradient of the value function
w.r.t.\ the softmax policy parametrization for any vector
of rewards, and to the exact Fisher information matrix for a
softmax-parametrized policy; in our finite setting, access
to soft Q-functions also suffices.
\item Exact value functions of a policy w.r.t.\
the constraints' reward vectors $r_1, \ldots, r_m$.
\end{enumerate}
In the algorithm, policies may be stored either as full
softmax parametrizations (\ref{softmax_parametrization})
or directly as vectors
$\pi(a\vert s) \,\,\forall (s,a) \in \mathcal{S} \times \mathcal{A}$
(the two representations are equivalent, and each can be computed from the other).
The core idea is to consider the entropy-regularized Lagrange function
\begin{equation*}
\mathcal{L}_{\tau}(\pi, \lambda) := V_{0}^{\pi}(\rho)+\left\langle\lambda, V^{\pi}(\rho)-c\right\rangle + \tau \mathcal{H}(\pi),
\end{equation*}
where $\lambda \in \mathbb{R}^m_+$ is the vector of dual variables, $V^{\pi}(\rho)=\left[V_{1}^{\pi}(\rho), \ldots, V_{m}^{\pi}(\rho)\right]^{\top}$ is the vector of constraints, $c=\left[c_{1}, \ldots, c_{m}\right]^{\top}$ is the vector of constraint thresholds,
\begin{equation*}
\mathcal{H}(\pi)=-\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} \log \left(\pi\left(a_{t} \mid s_{t}\right)\right) \mid s_{0}\sim \rho, a_{t} \sim \pi\left(a_{t} \mid s_{t}\right), s_{t+1} \sim \mathrm{P}\left(\cdot \mid s_{t}, a_{t}\right)\right]
\end{equation*}
is the discounted entropy of the policy $\pi$ and $\tau>0$ is the regularization coefficient. The proposed method is based on two components: Vaidya's cutting-plane method \cite{vaidya1989new, vaidya1996new} for solving the dual problem
\begin{equation}\label{dual_probl}
\min_{\lambda \in \mathbb{R}^m_+} \bigl\{ d_{\tau}(\lambda) := \max_{\pi \in \Pi} \mathcal{L}_{\tau}(\pi, \lambda) \bigr\},
\end{equation}
and an entropy-regularized policy optimizer for solving the inner problem $\max_{\pi \in \Pi} \mathcal{L}_{\tau}(\pi, \lambda)$ on each iteration of the outer loop. Below is the description of both components.
\subsection{Entropy-regularized policy optimizer}
To estimate the gradient of the dual function $\nabla d_{\tau}(\lambda)$, one has to solve the problem $\max_{\pi \in \Pi} \mathcal{L}_{\tau}(\pi, \lambda)$. Note that this is equivalent to maximizing an entropy-regularized value function corresponding to the reward function $r_{\lambda}:=r_{0}+\sum_{i=1}^{m} \lambda_{i} r_{i}$. As mentioned in the introduction, entropy regularization enables the linear rate of convergence of an NPG method. In line \ref{line_NPG} of Algorithm \ref{alg:vaidya}, $\mathrm{NPG}\left(r_{\lambda}, \tau, \delta\right)$ represents a call to the NPG procedure, which learns the optimal policy of the entropy-regularized MDP with regularization coefficient $\tau$ and reward $r_{\lambda}$ up to accuracy $\delta$ in terms of the $\ell_{\infty}$ distance to the unique optimal regularized policy
$\pi_{\tau, \lambda}^*$.
More details on this procedure are provided in Appendix \ref{NPGs}.
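For concreteness, the following Python sketch (toy random data, not the paper's implementation) solves the inner problem $\max_{\pi \in \Pi} \mathcal{L}_{\tau}(\pi, \lambda)$ for a tiny tabular CMDP by exact soft value iteration instead of NPG, and forms the vector $c - V^{\pi}(\rho)$ that plays the role of $\widehat{\nabla}_{t}$ in Algorithm~\ref{alg:vaidya}.
\begin{verbatim}
import numpy as np

# Toy sketch: solve max_pi L_tau(pi, lambda) for a tiny tabular CMDP by exact
# soft value iteration (instead of NPG) and form the cut vector c - V^pi(rho).
S, A, m, gamma, tau = 3, 2, 1, 0.9, 0.1
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(S), size=(S, A))       # P[s, a, s']
r = rng.random((m + 1, S, A))                     # r[0]: objective, r[1..m]: constraints
rho = np.ones(S) / S
c = np.array([2.0])

def soft_value_iteration(reward, n_iter=1000):
    V = np.zeros(S)
    for _ in range(n_iter):
        Q = reward + gamma * P @ V                # soft Bellman backup
        V = tau * np.log(np.exp(Q / tau).sum(axis=1))
    return np.exp((Q - V[:, None]) / tau)         # softmax-optimal policy pi*(a|s)

def values(pi):                                   # [V_0(rho), V_1(rho), ..., V_m(rho)]
    P_pi = np.einsum('sa,sax->sx', pi, P)
    solve = lambda r_i: rho @ np.linalg.solve(
        np.eye(S) - gamma * P_pi, np.einsum('sa,sa->s', pi, r_i))
    return np.array([solve(r[i]) for i in range(m + 1)])

lam = np.array([0.5])
pi_lam = soft_value_iteration(r[0] + np.tensordot(lam, r[1:], axes=1))
anti_grad = c - values(pi_lam)[1:]                # the vector used as the cut direction
\end{verbatim}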
\begin{algorithm}[h!]
\caption{Cutting-plane algorithm for CMDP}
\label{alg:vaidya}
\begin{algorithmic}[1]
\REQUIRE number of outer iterations $T$,
NPG accuracy $\delta$,
pair $(A_0, b_0) \in \mathbb{R}^{k_0\times m} \times \mathbb{R}^{k_0}$,
algorithm parameters $\eta \leq 10^{-4}$,
$\zeta \leq 10^{-3} \cdot \eta$.
\FOR{$t=0,\, \dots, \, T-1$}
\STATE $\lambda_t:=\operatorname{VolCenter}(A_t, b_t)$
\STATE Compute
$H_t^{-1} := \left( H(\lambda_t; A_t,b_t) \right)^{-1}$ and
$\displaystyle \left\{ \sigma_{i}(\lambda_t; A_t,b_t) \right\}_{i=1}^{k_t}$
\STATE $\displaystyle i_t := \argmin_{1 \leq i \leq k_t} \sigma_{i}(\lambda_t; A_t,b_t)$
\IF {$\sigma_{i_t}(\lambda_t; A_t,b_t) < \zeta$}
\STATE Obtain $\left(A_{t+1}, b_{t+1}\right)$ by removing the $i_t$-th row from $\left(A_t, b_t\right)$,
\STATE $k_{t+1} := k_t - 1.$
\ELSE
\IF {$\lambda_t \in \mathbb{R}^m_+$}
\STATE $\pi_t := \mathrm{NPG}\left(r_0 + \langle \lambda_t, r \rangle, \tau, \delta \right)$ \label{line_NPG}
\STATE $\widehat{\nabla}_{t} := c - V^{\pi_t}(\rho)$, \label{line_antigrad}
\ELSE
\STATE Define $\widehat{\nabla}_{t}$ as the vector with components
$$ (\widehat{\nabla}_{t})_{i}= \begin{cases}1, & (\lambda_t)_{i}<0, \\ 0, & (\lambda_t)_{i} \geq 0,\end{cases}\quad i=1,\ldots, m. $$
\ENDIF
\STATE Find such $\beta_t \in \mathbb{R}$ that $\widehat{\nabla}_{t}^\top \lambda_t \geq \beta_t$ from the equation
$$\frac{\widehat{\nabla}_{t}^\top H_t^{-1} \widehat{\nabla}_{t}}{(\widehat{\nabla}_{t}^\top \lambda_t - \beta_t)^2} = \frac{1}{2} \sqrt{\eta \zeta},$$
\STATE $A_{t+1} := \begin{pmatrix}A_t\\ \widehat{\nabla}_{t}^{\top}\end{pmatrix},\;\;b_{t+1} := \begin{pmatrix}b_t\\\beta_t\end{pmatrix},\;\;k_{t+1} = k_t + 1$.
\ENDIF
\ENDFOR
\STATE $\lambda_T = \argmin\limits_{\lambda \in \{\lambda_0, ..., \lambda_{T-1}\}} d_{\tau}(\lambda)$
\STATE $\pi_T := \mathrm{NPG}\left(r_0 + \langle \lambda_T, r \rangle, \tau, \delta \right)$
\ENSURE $\pi_T$.
\end{algorithmic}
\end{algorithm}
\subsection{Vaidya's cutting-plane method}
Vaidya's cutting-plane method \cite{vaidya1989new, vaidya1996new} is an algorithm for convex optimization problems with complexity $\mathcal{O}(m \log \frac{m}{\epsilon})$, which makes it a good choice for formulations with a small or moderate dimensionality, such as the dual problem \eqref{dual_probl}. Moreover, it has been shown that the method can be used with an inexact subgradient and does not accumulate the error \cite{Gladin2021SolvingSM}. This makes it very suitable for the problem \eqref{dual_probl}, since the gradient of the dual function is computed approximately.
We will now introduce necessary formulas and present the proposed method for the problem \eqref{optprob}.
For a matrix $A \in \mathbb{R}^{k \times m}$ with rows $a_{i}^{\top}, i=1, \ldots, k$, and a vector $b \in \mathbb{R}^k$, define
\begin{align}
P(A,b) &:= \{\lambda \in \mathbb{R}^m: \, A\lambda \geq b\}, \\
\label{hess}
H(\lambda; A,b) &:= \sum_{i=1}^{k} \frac{a_{i} a_{i}^{\top}}{\left(a_{i}^{\top} \lambda-b_{i}\right)^{2}},\\
\label{vaidya_sigmas}
\sigma_{i}(\lambda; A,b) &:= \frac{a_{i}^{\top} \left(H(\lambda; A,b)\right)^{-1} a_{i}}{\left(a_{i}^{\top} \lambda-b_{i}\right)^{2}}, \quad 1 \leq i \leq k,\\
\label{vol_center}
\operatorname{VolCenter}(A, b) &:= \argmin_{\lambda \in \operatorname{int} P(A, b)} \left\{ V(\lambda; A,b) := \frac{1}{2} \log \left(\operatorname{det} H(\lambda; A,b)\right) \right\},
\end{align}
where $\operatorname{det}H(\lambda; A,b)$ denotes the determinant of $H(\lambda; A,b)$. Since $V$ is a self-concordant function of $\lambda$, it can be efficiently minimized with Newton-type methods. The algorithm starts with a pair $(A_0, b_0) \in \mathbb{R}^{k_0\times m} \times \mathbb{R}^{k_0}$ such that $P(A_0, b_0)$ contains the search space. We refer the reader to Appendix \ref{vaidya_descr} for more information on the original Vaidya's method and its parameters.
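The following Python fragment (with a toy box-shaped polytope chosen purely for illustration) computes the quantities \eqref{hess}, \eqref{vaidya_sigmas} and the volumetric barrier from \eqref{vol_center} at a given interior point; the volumetric center itself can then be approximated by a few Newton steps on $V(\cdot\,; A, b)$.
\begin{verbatim}
import numpy as np

# Toy illustration of H(lambda; A, b), sigma_i(lambda; A, b) and the
# volumetric barrier V(lambda; A, b) at an interior point of P(A, b).
def barrier_quantities(A, b, lam):
    slack = A @ lam - b                          # a_i^T lam - b_i, must be > 0
    H = (A / slack[:, None] ** 2).T @ A          # sum_i a_i a_i^T / slack_i^2
    Hinv = np.linalg.inv(H)
    sigma = np.einsum('ij,jk,ik->i', A, Hinv, A) / slack ** 2
    V = 0.5 * np.log(np.linalg.det(H))
    return H, sigma, V

m, B = 2, 1.0                                    # box 0 <= lambda_i <= B as {A lam >= b}
A_box = np.vstack([np.eye(m), -np.eye(m)])
b_box = np.concatenate([np.zeros(m), -B * np.ones(m)])
H, sigma, V = barrier_quantities(A_box, b_box, lam=np.full(m, B / 2))
\end{verbatim}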
\section{Convergence results for the proposed algorithm}\label{sec_converg}
First, we introduce technical assumptions on our CMDP instance
$(\mathcal{S}, \mathcal{A}, P, \gamma, r_0, r_1, \ldots, r_m, \rho)$,
that are widely used in reinforcement learning literature.
\begin{assumption}[Slater Condition]\label{assumption_slater}
There exist a constant $\xi > 0$ and at least one policy $\pi_{\xi} \in \Pi$ such that $V_{i}^{\pi_{\xi}}(\rho) \geq c_{i}+\xi$ for all $i=1, \ldots, m$.
\end{assumption}
The Slater condition asserts that there exists a strictly feasible policy. Define the set
\begin{equation}
\Lambda := \{ \lambda \in \mathbb{R}^m_+\; |\; \| \lambda \|_1 \leq B_{\lambda} \}\; \text{ with }
B_{\lambda}:=\frac{r_{0, \max } + \tau \log |\mathcal{A}|}{(1-\gamma) \xi}.
\end{equation}
\begin{assumption}[Regularized optimal policy uniqueness]\label{assumption_tau_optimal_unique}
For any $\tau > 0, \lambda \in \Lambda$ there exists exactly one optimal policy for the problem:
\begin{equation}
\max_{\pi \in \Pi} \mathcal{L}_{\tau}(\pi, \lambda)
\end{equation}
which we call $\pi_{\tau,\lambda}^*$.
\end{assumption}
\begin{remark}
As was proved in \cite{UniqueRegularized}, for any tabular MDP with $\tau > 0$
there is a unique policy that maximizes
$V^{\pi}_{\tau}(s)$ for all $s \in \mathcal{S}$ simultaneously.
This policy is obviously also optimal for the initial distribution $\rho$;
the assumption states that it is the unique such maximizer.
We also use the recursive equations for this policy proved in \cite{UniqueRegularized}.
\end{remark}
\begin{assumption}[Uniform Ergodicity]\label{assumption_ergodicity}
For any $\lambda \in \Lambda$, the Markov chain induced by the policy $\pi_{\tau, \lambda}^{*}$ and the Markov transition kernel is uniformly ergodic, i.e., there exist constants $C_{M}>0$ and $0<\beta<1$ such that for all $t \geq 0$,
$$
\sup _{s \in \mathcal{S}} d_{T V}\left(\mathbb{P}\left(s_{t} \in \cdot \mid s_{0}=s\right), \chi_{\pi_{\tau, \lambda}^{*}}\right) \leq C_{M} \beta^{t},
$$
where $\chi_{\pi_{\tau, \lambda}^{*}}$ is the stationary distribution of the MDP induced by policy $\pi_{\tau, \lambda}^{*}$, and $d_{T V}(\cdot, \cdot)$ is the total variation distance.
\end{assumption}
The convergence rate of Algorithm \ref{alg:vaidya}, in terms of the optimality gap $V_{0}^{*}(\rho)-V_{0}^{\pi_T}(\rho)$ and the constraint violation $\left[c-V^{\pi_T}(\rho)\right]_{+}$, is described by the following theorem. The proof can be found in Appendix \ref{main_proof}.
\begin{theorem}\label{convergence_theorem}
Suppose Assumptions \ref{assumption_slater},
\ref{assumption_tau_optimal_unique} and \ref{assumption_ergodicity}
hold, let $T\in \mathbb{N},\, \delta>0$ be fixed and $\epsilon$ denote the value
\begin{equation}\label{dual_unopt}
\epsilon := \frac{2m^2 B_{\lambda}}{\zeta} \left(\xi + \frac{\sqrt{m} R_{max}}{1-\gamma} \right) \exp \left( \frac{\log \pmb{\pi} - \zeta T}{2m} \right)
\end{equation}
where $\pmb{\pi}$ denotes the constant ($\pmb{\pi} = 3.1415\ldots$).
The Cutting-plane algorithm for CMDP (Algorithm~\ref{alg:vaidya}) with parameters
\begin{equation*}
A_0 := \left[\begin{array}{c}
-I_m \\
1
\end{array}\right],\quad b_0 := \left[\begin{array}{c}
B_{\lambda} \mathbf{1}_m \\
m B_{\lambda}
\end{array}\right],\quad \tau := \sqrt[3]{\epsilon},
\end{equation*}
provides the following convergence guarantee of the optimality gap and the constraint violation:
\begin{align}
\label{opt_gap}
V_{0}^{*}(\rho)-V_{0}^{\pi_T}(\rho) &\leq \frac{B_{\lambda} R_{max} \sqrt{2 m L_{\nu}}}{1-\gamma} \sqrt{\epsilon^{2/3}+6\gamma\delta} + 2\epsilon \nonumber \\
&+18\gamma \delta \sqrt[3]{\epsilon} + \frac{\log |\mathcal{A}|}{1-\gamma} \sqrt[3]{\epsilon} + \sqrt{m} B_{\lambda} \| \hat{\delta} \|_2,\\
\label{constr_viol}
\bigl\| [ c - V^{\pi_T}(\rho) ]_+ \bigr\|_2 &\leq \frac{2 R_{max}^2 L_{\nu}}{1-\gamma} (\epsilon^{2/3}+6\gamma\delta) + \| \hat{\delta} \|_2,
\end{align}
where $\hat{\delta} := V^{\pi_T}(\rho) - V^{\pi^*_{\tau, \lambda_T}}(\rho)$ is a value controlled by the NPG.
\end{theorem}
The value $\epsilon$ in \eqref{dual_unopt} reflects the linear convergence of Algorithm \ref{alg:vaidya} in terms of the dual function. As can be seen from \eqref{opt_gap} and \eqref{constr_viol}, this also implies linear convergence in terms of the value function and the constraint violation, provided that NPG is run to the appropriate accuracy. Thus, the algorithm admits the following complexity bound.
\begin{corollary}\label{main_coroll}
Algorithm \ref{alg:vaidya} outputs an $\epsilon$-optimal policy with respect to both the optimality gap and constraint violation after
\begin{equation}\label{asympt}
T = \mathcal{O}\left( m \log \frac{m}{\epsilon} \right)
\end{equation}
steps. The total number of calls to the policy gradient
oracle made in all NPG calls is:
\begin{equation}\label{asympt_npg}
N_{oracle} = \mathcal{O}\left(
T \cdot \log \epsilon^{-1}
\right)=
\mathcal{O}\left(
m \log{\frac{m}{\epsilon}} \log \epsilon^{-1}
\right)
\end{equation}
\end{corollary}
Proof of the corollary is in Appendix \ref{coroll_proof}.
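For orientation, inverting \eqref{dual_unopt} with respect to $T$ (and treating $B_{\lambda}$, $\xi$, $R_{max}$, $\gamma$ and $\zeta$ as constants) already indicates where the bound \eqref{asympt} comes from:
\begin{equation*}
T \;=\; \frac{1}{\zeta}\left( \log \pmb{\pi} + 2m \log\!\left( \frac{2m^2 B_{\lambda}}{\zeta\,\epsilon} \left(\xi + \frac{\sqrt{m} R_{max}}{1-\gamma} \right) \right) \right) \;=\; \mathcal{O}\left( m \log \frac{m}{\epsilon} \right).
\end{equation*}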
\subsection{Regularization of dual variables}
The proposed approach can also be modified in the following way: the regularized dual problem is written as
\begin{equation}
\min_{\lambda \in \mathbb{R}^m_+} \Bigl\{ d_{\tau,\mu}(\lambda):=d_{\tau}(\lambda) + \frac{\mu}{2} \| \lambda \|_2^2 \Bigr\},
\end{equation}
where $\mu>0$ is the regularization coefficient. In this case, the vector $\widehat{\nabla}_{t}$ in Line~\ref{line_antigrad} of Algorithm~\ref{alg:vaidya} will be replaced with $\widehat{\nabla}_{t} := c - V^{\pi_t}(\rho) - \mu \lambda_t$. If $\mu$ is chosen sufficiently small, the
result of Corollary \ref{main_coroll}
remains true, see Appendix \ref{dual_reg_append} for details.
\section{Experiments}\label{sec:main_experiments}
For our experiments we used the Acrobot-v1 environment from OpenAI Gym \cite{Mei2020OnTG}. This environment contains two links connected linearly to form a chain, with one end of the chain fixed. The joint between the two links is actuated, and the goal is to swing the end of the lower link up to a given height. Two additional constraints are implemented in order to obtain an environment similar to that of \cite{lanarcpo} for comparison purposes.
In Figure \ref{fig:perf1} we compare our cutting-plane algorithm (VMDP) with the state-of-the-art primal-dual optimization (AR-CPO) method for CMDP \cite{lanarcpo}. For a fair comparison, the same neural softmax policy and trust region policy optimization \cite{pmlr-v37-schulman15} are used in both algorithms.
\begin{figure}[h!]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width = 1.0\textwidth ]{figures/Rewards.pdf}
\caption{}
\label{fig:1a}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width = 1.0\textwidth]{figures/Costs1.pdf}
\caption{} \label{fig:1b}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width = 1.0\textwidth]{figures/Costs2.pdf}
\caption{} \label{fig:1c}
\end{subfigure}
\caption{ Average performance for VMDP and AR-CPO; the $x$-axis is training iteration.}
\label{fig:perf1}
\end{figure}
Similarly to \cite{lanarcpo}, we plot the average over 10 randomly initialized seeds, and the translucent error bands have a width of two standard deviations. The hyperparameters of the AR-CPO algorithm are the optimal ones from \cite{lanarcpo}. More information about the experiments and parameter settings can be found in Appendix \ref{sec:expparam}.
Figure \ref{fig:1a} shows the average total reward per episode, while Figures \ref{fig:1b} and \ref{fig:1c} show the constraint values, with dashed lines indicating the constraint thresholds. We use the total reward for a fair comparison with the existing state-of-the-art approach; moreover, in some cases the total reward is more important in practice.
We find that our algorithm achieves a higher total reward with a similar standard deviation. The speed of convergence of both algorithms is similar. Thus, our algorithm achieves better performance within the same training time for CMDP tasks with a small number of constraints.
\section{Conclusion}
In this paper we consider the constrained Markov decision process, where an agent aims to maximize the expected accumulated discounted reward subject to a relatively small number $m$ of constraints. The best known algorithms achieve $\tilde{\mathcal{O}}\left(1/\epsilon\right)$ iteration complexity to find a global optimum, where $\epsilon$ characterizes the optimality gap and the constraint violation. Each iteration of these algorithms has the same complexity as an iteration of Policy Gradient (PG) methods. In this paper we improve this iteration complexity bound (for a relatively small number of constraints $m$) and obtain linear convergence with $\tilde{\mathcal{O}}\left(m\right)$ iterations.
\subsubsection*{Acknowledgments}
The work of E. Gladin is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
The work of A. Gasnikov was supported by a grant for research centers in the field of
artificial intelligence, provided by the Analytical Center for the
Government of the Russian Federation in accordance with the subsidy
agreement (agreement identifier 000000D730321P5Q0002 ) and the agreement
with the Ivannikov Institute for System Programming of the Russian
Academy of Sciences dated November 2, 2021 No. 70-2021-00142.
\bibliographystyle{abbrvnat}
\section{Introduction}\label{s:iintro}
This is the continuation of paper~\cite{part-I}; we also refer to the latter as `Part~I', and to the present paper as `Part~II'. We refer to formulas, definitions, etc.\ from Part~I in the following format: formula~(I.5) means formula~5 in Part~I, etc.
\medskip
A standard way to define a piecewise linear (PL) manifold is via its \emph{triangulation}. In this paper, we will be dealing with \emph{four-dimen\-sion\-al} manifolds, and a triangulation of such a manifold means that it is represented as a union of 4-simplices, also called \emph{pentachora}, glued together in a proper way that can be described purely combinatorially. An \emph{invariant} of such a manifold is a quantity that may need an actual triangulation for its calculation, but must \emph{not} depend on this triangulation.
A theorem of Pachner~\cite{Pachner} states that a triangulation of a PL manifold can be transformed into any other triangulation using a finite sequence of \emph{Pachner moves}; monograph~\cite{Lickorish} can be recommended as a pedagogical introduction to this subject. And, as indicated in~\cite[Section~1]{Lickorish}, in order to construct invariants of PL manifolds, it makes sense to construct algebraic relations corresponding to Pachner moves, also called their \emph{algebraic realizations}.
It turns out that very interesting mathematical structures appear if we begin constructing a realization of four-dimen\-sion\-al Pachner moves by ascribing a \emph{Grassmann--Gaussian weight} to each pentachoron. Here `Gaussian' means that this weight is proportional to the exponential of a quadratic form, and `Grassmann' means that this form depends on \emph{anticommuting} Grassmann variables. Each Grassmann variable is supposed to live on a 3-face (tetrahedron) of a pentachoron, and gluing two pentachora along a 3-face corresponds to \emph{Berezin integration} w.r.t.\ the corresponding variable. A large family of such realizations for Pachner move 3--3 was discovered in paper~\cite{KS2}, and then a full parameterization for (a Zariski open set of) such relations was found in~\cite{full-nonlinear}. A beautiful fact is that this \emph{nonlinear} parameterization goes naturally in terms of a \emph{2-cocycle} given on both initial and final configurations of the Pachner move; we call these respective configurations (clusters of pentachora) the \emph{left-} and \emph{right-hand side} (l.h.s.\ and r.h.s.) of the move.
There was, however, one unsettled problem with the realization of move 3--3 in~\cite{full-nonlinear}: not all involved quantities were provided with their explicit expressions in terms of the 2-cocycle. In particular, Theorem~9 in~\cite{full-nonlinear} was just an existence theorem for the proportionality coefficient between the Berezin integrals representing the l.h.s.\ and r.h.s.\ of the move, while this coefficient is crucial for constructing an invariant for a whole `big' manifold. Also, it remained to find realizations for the rest of Pachner moves, namely 2--4 and 1--5.
One possible solution to the problem with the coefficient was proposed in~\cite{gce}, in a rather complicated way combining computational commutative algebra with a guess-and-try method, and leaving the feeling that the algebra behind it deserves more investigation.
It turns out that a more transparent way to solving the mentioned problem with the coefficient appears if we consider this problem \emph{together} with constructing formulas for moves 2--4 and 1--5. This is what we are doing in the present paper: we provide all necessary formulas for coefficients, and in a multiplicative form suitable for `globalizing', that is, transition to a formula for the whole manifold. To be exact, we construct an invariant of a pair $(M,h)$, where $M$ is a four-dimen\-sional piecewise linear manifold, and $h\in H^2(M,\mathbb C)$ is a given middle cohomology class.
One algebraic problem still remains unsolved though, namely, a formal proof of what we have to call a `Conjecture', see Subsection~\ref{ss:ER}, although it is actually a firmly established mathematical fact. It is simply a formula involving ten indeterminates (over the field~$\mathbb C$), both sides of which are composed using the four arithmetic operations together with square root signs and parentheses. The experience gained in the previous work~\cite{rels,KS2} led the author to the idea that such a formula exists---and indeed, the reader can check it on a computer by substituting any random numerical values for the indeterminates\footnote{For instance, using the program code available from the author and written for the \emph{Maxima} computer algebra system.}. Many such checks have already been carried out, but the available computer capabilities are not enough to check our formula symbolically\footnote{Currently, efforts to solve this problem are being made mainly in the two following directions: write \emph{specialized} software able to handle our specific expressions more effectively, and/or---of course!---discover new properties of these expressions that would enable us to find a `conceptual' proof. Also, it can be proved that if such a formula has been checked (using exact arithmetic) for a large enough set of tuples of arguments, then it is valid for \emph{all} arguments.}.
Below,
\begin{itemize}\itemsep 0pt
\item in Section~\ref{s:P}, we recall the Pachner moves in four dimensions, and also express the moves 2--4 and 1--5 in terms of 3--3 and two kinds of auxiliary moves `0--2'. Introducing these auxiliary moves makes our reasonings and formulas simple and transparent,
\item in Section~\ref{s:u}, we present the general structure of our manifold invariant, and explain how it is expressed in terms of Grassmann--Berezin calculus of anticommuting variables,
\item in Section~\ref{s:33}, we show how it follows from the `local' Grassmann-algebraic relation 3--3 that our proposed `global' invariant is indeed invariant under moves 3--3. This is a general theorem, where some important quantities, called~$\eta_u$, remain, at the moment, unspecified,
\item in Section~\ref{s:oF}, we provide some formulas needed for proofs of two theorems in the next Section~\ref{s:02}. These formulas have also an algebraic beauty of their own,
\item in Section~\ref{s:02}, we consider Grassmann-algebraic realizations of the mentioned moves 0--2. These turn out to have a simple and elegant form, involving Grassmann delta functions. Moreover, it turns out that they produce expressions for the mentioned quantities~$\eta_u$,
\item and in Section~\ref{s:f}, we prove that our invariant---built initially using a 2-cocycle~$\omega$ on a PL manifold~$M$---depends actually only on the cohomology class $h\ni \omega$. Also, we explain its independence from such things as the signs of square roots~$K_t$ (see~(I.36)) that appeared in our calculations. We thus have (assuming our Conjecture) indeed an invariant of the pair $(M,h)$, as was promised in the Introduction to Part~I~\cite{part-I} of this work.
\end{itemize}
\section{Pachner moves and some useful decompositions of them}\label{s:P}
\subsection{Pachner moves}\label{ss:P}
A four-dimensional Pachner move replaces a cluster of pentachora in a manifold triangulation by another cluster occupying the same place in the triangulation. As already said in the Introduction, we call the initial and final clusters of a move its left- and right-hand side, respectively. There are five kinds of Pachner moves in four dimensions.
In this Subsection, pentachora and other simplices are determined by their vertices. We will describe Pachner moves using a fixed numeration of vertices, and we will be using this numeration throughout this paper. All pentachora must be \emph{oriented} consistently; by default, the orientation of a pentachoron corresponds to the order of its vertices, or, if it has the opposite orientation, this is marked by a wide tilde above, as in `pentachoron~$\widetilde{12346}$'.
\begin{remark}
Starting from Subsection~\ref{ss:24}, it will be convenient for us to consider triangulations in a broader sense, where there may be more than one simplex with the same vertices. We will be using tildes or primes to distinguish between such simplices.
\end{remark}
\paragraph{Move 3--3} transforms a cluster of three pentachora into a different cluster, also of three pentachora, as follows:
\begin{equation}\label{P33}
12345, \widetilde{12346}, 12356 \rightarrow 12456, \widetilde{13456}, 23456.
\end{equation}
Also, the l.h.s.\ of this move has 2-face~$123$ not present in the r.h.s., while the r.h.s.\ has 2-face~$456$ not present in the l.h.s. The two sides of this move differ also in their \emph{inner} tetrahedra: these are $1234$, $1235$ and~$1236$ in the l.h.s., and $1456$, $2456$ and~$3456$ in the r.h.s. All this information will be included in our algebraic realization~\eqref{33} of this move.
\paragraph{Move 2--4} transforms a cluster of two pentachora into a cluster of four pentachora, as follows:
\begin{equation}\label{P24}
\widetilde{13456}, 23456 \rightarrow 12345, \widetilde{12346}, 12356, \widetilde{12456}.
\end{equation}
One more (kind of) Pachner move is the inverse to~\eqref{P24}.
\paragraph{Move 1--5} transforms just one pentachoron into a cluster of five pentachora, as follows:
\begin{equation}\label{P15}
23456 \rightarrow 12345, \widetilde{12346}, 12356, \widetilde{12456}, 13456.
\end{equation}
One more (kind of) Pachner move is the inverse to~\eqref{P15}.
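As a purely combinatorial illustration (a toy script, not part of the constructions in this paper; orientations are ignored), one can check on a computer that for each of the moves \eqref{P33}--\eqref{P15} both sides indeed have the same boundary tetrahedra and thus occupy the same place in a triangulation:
\begin{verbatim}
from itertools import combinations
from collections import Counter

def boundary_tetrahedra(pentachora):
    cnt = Counter()
    for p in pentachora:
        for t in combinations(sorted(p), 4):
            cnt[t] += 1
    return {t for t, n in cnt.items() if n == 1}   # inner 3-faces occur twice

move_33 = ([(1,2,3,4,5), (1,2,3,4,6), (1,2,3,5,6)],
           [(1,2,4,5,6), (1,3,4,5,6), (2,3,4,5,6)])
move_24 = ([(1,3,4,5,6), (2,3,4,5,6)],
           [(1,2,3,4,5), (1,2,3,4,6), (1,2,3,5,6), (1,2,4,5,6)])
move_15 = ([(2,3,4,5,6)],
           [(1,2,3,4,5), (1,2,3,4,6), (1,2,3,5,6), (1,2,4,5,6), (1,3,4,5,6)])

for lhs, rhs in (move_33, move_24, move_15):
    assert boundary_tetrahedra(lhs) == boundary_tetrahedra(rhs)
\end{verbatim}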
\bigskip
There are some decompositions of moves 2--4 and 1--5 whose importance will be seen in Section~\ref{s:02}. Namely, we will represent move 2--4 as a composition of what we call `first move 0--2' and Pachner move 3--3. This is explained in Subsection~\ref{ss:24}. Similarly, Pachner move 1--5 will be expressed, in Subsection~\ref{ss:15}, as a composition of `second move 0--2' and Pachner move 2--4.
Hence, in order to prove that a quantity is a PL manifold invariant, it is enough to prove its invariance under moves 3--3 and the mentioned two kinds of moves 0--2.
\subsection{Inflating two adjacent tetrahedra into a four-dimensional pillow, and Pachner move 2--4}\label{ss:24}
\begin{dfn}\label{dfn:1}
The \emph{first move 0--2} is defined as follows. Consider two tetrahedra $1456$ and~$2456$ having the common 2-face~$456$. We are going to glue to them two pentachora with opposite orientations, that is, $\widetilde{12456}$ and $12456$. After gluing the first of these, the ``free part'' of its boundary consists of three tetrahedra $1245$, $1246$ and~$1256$, and then we glue the second pentachoron to these three tetrahedra, thus obtaining a ``pillow'' whose boundary consists of two copies of tetrahedron~$1456$ and two copies of~$2456$, and having the inner edge~$12$.
\end{dfn}
Consider now the left-hand side of Pachner move 2--4~\eqref{P24} consisting of pentachora $\widetilde{13456}$ and~$23456$. We inflate the part $1456 \cup 2456$ of its boundary into the pillow described above, and get the pentachora $\widetilde{12456}$, $12456$, $\widetilde{13456}$ and~$23456$. Then we do the \emph{inverse} to move~\eqref{P33} on the last three of them, and obtain the r.h.s.\ of~\eqref{P24}.
It will also be important for us what happens with \emph{2-faces} when the pillow from Definition~\ref{dfn:1} is inserted into a triangulation instead of just two tetrahedra $1456$ and $2456$. It can be checked that there appear three new 2-faces, namely $124$, $125$ and~$126$, while the face~$456$ is \emph{doubled}. This information (about 2- as well as 3-faces) will be naturally included into our algebraic realization \eqref{P}, \eqref{pw24} of the first move 0--2.
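This count of new 2-faces is also easy to check mechanically; in the same toy Python style as above, the 2-faces of the pillow that are not already faces of the two original tetrahedra are exactly $124$, $125$ and~$126$:
\begin{verbatim}
from itertools import combinations

faces = lambda simplex, k: {frozenset(c) for c in combinations(simplex, k)}

old = faces((1,4,5,6), 3) | faces((2,4,5,6), 3)
new = faces((1,2,4,5,6), 3) - old
assert new == {frozenset({1,2,4}), frozenset({1,2,5}), frozenset({1,2,6})}
\end{verbatim}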
\subsection{Inflating a single tetrahedron into a four-dimensional pillow, and Pachner move 1--5}\label{ss:15}
\begin{dfn}\label{dfn:2}
The \emph{second move 0--2} is defined as the inflation of a single tetrahedron~$3456$ into the pillow made of pentachora $13456$ and $\widetilde{13456}$ glued along four pairs of their same-name 3-faces containing new vertex~$1$.
\end{dfn}
Consider now the l.h.s.\ of~\eqref{P15}, i.e., the pentachoron~$23456$, and inflate its 3-face~$3456$ this way. We get the pentachora $13456$, $\widetilde{13456}$ and~$23456$. Then we apply the move 2--4~\eqref{P24} to the last two of them, and get the r.h.s.\ of~\eqref{P15}.
\section{Structure of the invariant}\label{s:u}
Here, in Subsection~\ref{ss:tor}, we describe our invariant as the \emph{square root of exotic torsion with a correcting multiplier}. In fact, we give it almost full definition below in formula~\eqref{inv}, where we leave undefined (until Subsection~\ref{ss:02_24}) only factors~$\eta_u$ attached to all pentachora~$u$. Our tool for studying the behavior of our invariant under Pachner moves will be Grassmann--Berezin calculus of anticommuting variables, so then, in Subsection~\ref{ss:igb}, we rewrite our invariant in terms of this calculus.
\subsection{Invariant: square root of exotic torsion with a correcting multiplier}\label{ss:tor}
We want to use the \emph{torsion} of chain complex~(I.11) in the construction of our invariant. So, we adopt the following assumption.
\begin{mpt}\label{mpt:e}
The complex~(I.11) is \emph{acyclic}.
\end{mpt}
\begin{remark}
Assumption~\ref{mpt:e} does hold for some manifolds, for instance, for the sphere~$S^4$. Note, however, that a torsion may be constructed even if Assumption~\ref{mpt:e} fails, see~\cite[Subsection~3.1]{Turaev}.
\end{remark}
Recall that all linear spaces in complex~(I.11) are equipped with distinguished bases, hence all linear mappings are identified with their matrices. According to the general theory~\cite[Subsection~2.1]{Turaev}, the torsion of complex~(I.11) is
\begin{equation}\label{tauu}
\tau=\frac{\minor f_1\cdot \minor f_3\cdot \minor f_5}{\minor f_2\cdot \minor f_4},
\end{equation}
where the minors are chosen as follows. For each of the six nonzero linear spaces in~(I.11), a subset~$\mathfrak b_i$ is taken of its basis, $i=1,\ldots,6$, and these subsets must satisfy the following conditions:
\begin{itemize}\itemsep 0pt
\item $\mathfrak b_1$ is the empty set,
\item $\mathfrak b_{i+1}$ contains the same number of basis vectors as $\overline{\mathfrak b_i}$ (the complement to the $i$-th subset),
\item submatrices of $f_i$, \ $i=1,\ldots,5$, whose rows correspond to $\mathfrak b_{i+1}$ and columns to $\overline{\mathfrak b_i}$, are nondegenerate.
\end{itemize}
Such subsets~$\mathfrak b_i$ always exist for an acyclic complex, and minors in~\eqref{tauu} are by definition the determinants of the mentioned submatrices of $f_i$.
\begin{remark}
$\minor f_1$ is of course just one matrix element of one-column matrix~$f_1$; similar statement applies to $\minor f_5$ as well.
\end{remark}
Using the symmetry of complex~(I.11) (see the text right after formula~(I.11)), we can choose all the minors in~\eqref{tauu} in a symmetric way, namely, so that the rows used in $\minor f_1$ have the same numbers as the columns in $\minor f_5$; similarly for $f_2$ and~$f_4$; and the submatrix of~$f_3$ used for the minor will involve rows and columns with the same (subset of) numbers and will thus be skew-symmetric, like~$f_3$ itself. Using also the notion of \emph{Pfaffian}, we can write:
\begin{equation}\label{rt}
\sqrt{\tau} = \frac{\minor f_1\cdot \Pfaffian(\mathrm{submatrix\;of\;} f_3)}{\minor f_2}.
\end{equation}
We will show that a PL manifold invariant can be obtained from our quantity~\eqref{rt} if it is multiplied by a correcting factor, introduced to ensure the invariance under Pachner moves. Namely, our invariant will be
\begin{equation}\label{inv}
I = \prod_{\substack{\text{all}\\ \!\!\!\!\!\text{2-faces }s\!\!\!\!\!}} q_s^{-1} \prod_{\substack{\text{all}\\ \!\!\!\!\!\text{pentachora }u\!\!\!\!\!}} \eta_u \cdot \sqrt{\tau}.
\end{equation}
Recall~(I.34) that $q_s=\sqrt{\omega_s}$ are square roots of 2-cocycle~$\omega$'s values. Likewise, each quantity~$\eta_u$, attached to pentachoron~$u$, will be expressed algebraically in terms of values~$\omega_s$ for $s\subset u$, but the exact definition of~$\eta_u$ needs some preparational work; it will appear in Subsection~\ref{ss:02_24} as formula~\eqref{eta}.
Formula~\eqref{inv} implies that the triangulation vertices have been ordered, see Convention~I.1 (and for independence of~$I$ of this ordering, see Theorem~\ref{th:oe} below). It is also implied that the values of~$q_s$ and~$K_t$~(I.36) must agree in the sense of formula~(I.40); it makes sense to reproduce it here:
\begin{equation}\label{I.40}
K_{ijkl} = q_{ijk}q_{ijl}q_{ikl}q_{jkl},
\end{equation}
for any tetrahedron $t=ijkl$ with $i<j<k<l$.
\begin{remark}\label{r:+-}
As the reader can see, formula~\eqref{inv} contains square roots and, besides, it is not easy to specify the order of rows and/or columns in the minors in formulas like~\eqref{rt}. Below, in terms of Grassmann--Berezin calculus, we will get the same problem disguised as the choice of integration order for multiple Berezin integrals. This leads to the agreement that we consider~$I$ as determined \emph{up to a sign}\footnote{Which is not very surprising for a theory dealing with something like Reidemeister torsion.}, so we will not pay much attention to the signs in our formulas, as long as these signs can affect only the sign of~$I$. A subtler situation---namely with \emph{fourth} roots---is considered in Theorem~\ref{th:oe}.
\end{remark}
\subsection{Invariant in terms of Grassmann--Berezin calculus}\label{ss:igb}
In order to study the behavior of quantity~\eqref{rt} under Pachner moves, it makes sense to express it in terms of Grassmann--Berezin calculus. Recall (Definition~I.1) that our Grassmann algebras are over field~$\mathbb C$, hence `linear' means `$\mathbb C$-linear', etc. As we will see in formulas \eqref{33}, \eqref{P} and~\eqref{pw15}, the algebraic operation corresponding to gluing pentachora together will be, in our construction, the \emph{Berezin integral}, so we recall now its definition.
\begin{dfn}\label{dfn:Bi}
\emph{Berezin integral}~\cite{B1,B2} with respect to a generator~$\vartheta$ of a Grassmann algebra~$\mathcal A$ is a linear functional
\[
f\mapsto \int f\, \mathrm d\vartheta
\]
on~$\mathcal A$ defined as follows. As $\vartheta^2=0$, any algebra element~$f$ can be represented as $f=f_0+f_1\vartheta$, where none of $f_0$ and~$f_1$ contain~$\vartheta$. By definition,
\[
\int f\, \mathrm d\vartheta = f_1.
\]
\emph{Multiple integral} is defined as the iterated one:
\[
\iint f\, \mathrm d\vartheta_1\, \mathrm d\vartheta_2 = \int \left(\int f\, \mathrm d\vartheta_1\right) \mathrm d\vartheta_2.
\]
\end{dfn}
We will need only a few simple properties of Berezin integral; we recall them when necessary. Right now, we mention the following two properties:
\begin{itemize} \itemsep 0pt
\item Berezin integral looks very much like the (left) \emph{derivative} (see Definition~I.3). In fact, there is an operator in Grassmann algebras called \emph{right derivative} that coincides exactly with Berezin integral. Also, some authors simply define the Berezin integral to be the same as the \emph{left} derivative; we will, however, stick to the traditional Definition~\ref{dfn:Bi}. Anyhow, the results of integration and differentiation of either \emph{even} or \emph{odd} (all monomials have even or odd degrees, respectively) Grassmann algebra element can differ at most by a sign---and we will make use of this fact when proving Lemma~\ref{l:GB} below,
\item integral of a Grassmann--Gaussian exponential is
\begin{equation}\label{pfi}
\int \exp(\uptheta^{\mathrm T}A\,\uptheta) \prod_{i=1}^n \mathrm d\vartheta_i = (-2)^{n/2} \Pfaffian A,
\end{equation}
where $A$ is a skew-symmetric matrix (with entries in~$\mathbb C$), and $\uptheta$ is a column of Grassmann generators: $\uptheta=\begin{pmatrix}\vartheta_1 & \dots & \vartheta_n\end{pmatrix}^{\mathrm T}$.
\end{itemize}
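These conventions are easy to model on a computer. The following self-contained Python sketch (a toy implementation, not the author's \emph{Maxima} code mentioned in the Introduction) represents Grassmann algebra elements as dictionaries mapping sorted tuples of generator indices to coefficients, implements Berezin integration according to Definition~\ref{dfn:Bi}, and checks formula~\eqref{pfi} for $n=2$:
\begin{verbatim}
# Toy Grassmann algebra: element = {sorted tuple of generator indices: coeff}.
def perm_sign(seq):                 # parity of the permutation sorting seq
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def gmul(f, g):                     # product in the Grassmann algebra
    out = {}
    for mf, cf in f.items():
        for mg, cg in g.items():
            if set(mf) & set(mg):   # theta^2 = 0
                continue
            mono = mf + mg
            key = tuple(sorted(mono))
            out[key] = out.get(key, 0) + perm_sign(mono) * cf * cg
    return out

def berezin(f, i):                  # Berezin integral  d theta_i  (Definition 3)
    out = {}
    for mono, coeff in f.items():
        if i in mono:
            pos = mono.index(i)     # move theta_i to the last position
            key = mono[:pos] + mono[pos + 1:]
            out[key] = out.get(key, 0) + (-1) ** (len(mono) - 1 - pos) * coeff
    return out

# Gaussian integral check for n = 2, with A = [[0, a], [-a, 0]], Pfaffian A = a
a = 1.7
quad = {}
for (i, j), A_ij in {(1, 2): a, (2, 1): -a}.items():
    for k, v in gmul({(i,): A_ij}, {(j,): 1.0}).items():
        quad[k] = quad.get(k, 0) + v             # theta^T A theta = 2a theta_1 theta_2
exponential = {(): 1.0, **quad}                  # exp(...) is exact: quad squares to 0
lhs = berezin(berezin(exponential, 1), 2)[()]
assert abs(lhs - (-2) * a) < 1e-12               # equals (-2)^{n/2} Pfaffian A = -2a
\end{verbatim}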
Recall now that in Subsection~I.4.2 we have put a Grassmann variable (generator)~$\vartheta_t$ in correspondence to each tetrahedron~$t$ in the triangulation. Below ``Grassmann algebra'' means, by default, the algebra generated by all these~$\vartheta_t$. Moreover, we have introduced, in Subsection~I.6.1, the column~$\Uptheta=\begin{pmatrix} \vartheta_1 & \ldots & \vartheta_{N_3} \end{pmatrix}^{\mathrm T}$ made of all these~$\vartheta_t$.
We now also introduce the \emph{row} consisting of all (left) \emph{differentiations} with respect to variables~$\vartheta_t$:
\begin{equation}\label{dt}
\mathbf D=\begin{pmatrix} \partial_1 & \ldots & \partial_{N_3} \end{pmatrix}, \quad \text{where \ } \partial_t=\frac{\partial}{\partial \vartheta_t}.
\end{equation}
Consider the product $\mathbf D f_2$, where elements of both row~$\mathbf D$ and matrix~$f_2$ are understood as $\mathbb C$-linear operators acting in our Grassmann algebra, that is, differentiations and multiplications by constants, respectively. This way, $\mathbf D f_2$ makes a row of differential operators. Recall now that columns of~$f_2$---and thus elements of row~$\mathbf D f_2$---correspond to basis elements in $Z^2(M,\mathbb C)$---the second space in complex~(I.11), and these include elements corresponding to edges in a set~$\mathsf B$ whose complement~$\overline{\mathsf B}$ makes a maximal tree in the 1-skeleton of our triangulation, see the paragraph right between Remarks I.3 and~I.4. We call the element in~$\mathbf D f_2$, corresponding this way to an edge~$b\in \mathsf B$, \emph{global edge operator} corresponding to~$b$, and denote it~$D_b$. Clearly, $D_b$ does not depend on a specific~$\mathsf B$, as long as $\mathsf B\ni b$.
\begin{remark}
Row~$\mathbf D f_2$ includes also elements corresponding to a pullback of some chosen basis in $H^2(M,\mathbb C)$, see the paragraph right before Remark~I.2.
\end{remark}
\begin{remark}
In the proof of Theorem~I.7, we denoted~$D_c$ the differential operator corresponding to a \emph{2-cochain}~$c$. In the notations of that proof, our operators~$D_b$ correspond to $c=\delta b$ and should be called~$D_{\delta b}$. Hopefully, our less pedantical notations will bring no confusion.
\end{remark}
Denote now the submatrix of matrix~$f_2$ consisting exactly of its columns used in $\minor f_2$ in~\eqref{rt} as~$\tilde f_2$ (these are of course all columns except one whose number is the same as the number of row used in $\minor f_1$), and consider the product $\mathbf D \tilde f_2$, consisting of all operators in~$\mathbf D f_2$ except one. We denote by $\boldsymbol{\partial}$ the \emph{product of all operators in~$\mathbf D \tilde f_2$}.
\begin{lemma}\label{l:GB}
In the notations of formula~\eqref{rt},
\begin{equation}\label{23GB}
\frac{\Pfaffian(\mathrm{submatrix\;of\;} f_3)}{\minor f_2} = 2^{-m_3/2} \idotsint \boldsymbol{\partial}^{-1} 1 \cdot \exp(\Uptheta^{\mathrm T} f_3 \,\Uptheta) \prod_{\substack{\mathrm{all}\\ \!\!\!\!\mathrm{tetrahedra\; }t\!\!\!\!}} \mathrm d\vartheta_t,
\end{equation}
where $\boldsymbol{\partial}^{-1} 1$ is \emph{any} Grassmann algebra element~$w$ such that $\boldsymbol{\partial}w = 1$, and $m_3$ is the size of the submatrix of~$f_3$.
\end{lemma}
\begin{proof}
It is not hard to see that $\boldsymbol{\partial}^{-1} 1$ can be chosen as one monomial
\[
\boldsymbol{\partial}^{-1} 1 = \frac{\prod_{t\in\mathfrak b_3} \vartheta_t}{\minor f_2},
\]
where we use the notations introduced after formula~\eqref{tauu}; in other words, the product in the numerator goes over the tetrahedra corresponding to those rows of~$f_2$ that enter in~$\minor f_2$. With this choice, a small exercise in Grassmann--Berezin calculus using formula~\eqref{pfi} shows that \eqref{23GB} holds indeed.
What remains is to show that the r.h.s.\ of~\eqref{23GB} does \emph{not} depend on a choice of $\boldsymbol{\partial}^{-1} 1$. In other words, we must show that if $w_0$ is such that $\boldsymbol{\partial} w_0 =0$, then
\begin{equation}\label{w0}
\idotsint w_0 \exp(\Uptheta^{\mathrm T} f_3 \,\Uptheta) \prod_{\substack{\mathrm{all}\\ \!\!\!\!\mathrm{tetrahedra\; }t\!\!\!\!}} \mathrm d\vartheta_t = 0.
\end{equation}
First, we note that we defined $\boldsymbol{\partial}$ as a product of differential operators---linear combinations of differentiations w.r.t.\ Grassmann generators~$\vartheta_t$---each of which annihilates $\exp(\Uptheta^{\mathrm T} f_3 \,\Uptheta)$, as a formula proved within the proof of Theorem~I.7 tells us; as it happens to be unnumbered, it makes sense to repeat it here:
\[
D_c \exp(\Uptheta^{\mathrm T} f_3 \,\Uptheta) = D_c \prod_u \mathcal W_u = 0.
\]
Using Leibniz rule~(I.3) repeatedly, we deduce (from the fact that the operators annihilate the exponential) that
\begin{equation}\label{d0}
\boldsymbol{\partial} \bigl(w_0 \exp(\Uptheta^{\mathrm T} f_3 \,\Uptheta)\bigr) = 0.
\end{equation}
Finally, the multiple integral in~\eqref{w0} is equivalent to ($\pm$) the product of anticommuting differentiations~$\partial_t$ for all tetrahedra~$t$. Following our Subsection~I.2.2, we denote $U^*=\linearspan_{\mathbb C}\{\partial_t\}$ the linear space generated by these differentiations. On the other hand, $\boldsymbol{\partial}$ is the product of some linearly independent linear combinations~$D_c$ of~$\partial_t$. Enlarging the set of these~$D_c$ to a basis in~$U^*$ by adding the necessary number of new operators, and denoting the product of these latter as~$\boldsymbol D$, we see that the integration in~\eqref{w0} is equivalent to $\const\cdot \boldsymbol{\partial D}$. Hence, \eqref{w0} immediately follows from~\eqref{d0}.
\end{proof}
\section{Grassmann-algebraic relation 3--3: how it works within a triangulation}\label{s:33}
In this Section we, while still not specifying what exactly the quantities~$\eta_u$ are in formula~\eqref{inv}, prove the following theorem showing that the `local' relation~\eqref{33} implies that the `global' quantity is indeed invariant \emph{under moves 3--3}. Relation~\eqref{33} corresponds to the move 3--3 as it was described in Subsection~\ref{ss:P}.
\begin{remark}
The explicit form of~$\eta_u$ satisfying~\eqref{33} will appear very naturally in Section~\ref{s:02} while examining the moves 0--2 introduced in Subsections \ref{ss:24} and~\ref{ss:15}. The proof of~\eqref{33}, with these~$\eta_u$, turns out to be a specific algebraic problem, as we already said in the Introduction; see Conjecture in Subsection~\ref{ss:ER}.
\end{remark}
\begin{theorem}\label{th:33}
Let pentachora $12345$, $\widetilde{12346}$ and~$12356$ be contained in the triangulation of manifold~$M$, and consider move 3--3~\eqref{P33} done on them. Suppose the following Grassmann-algebraic realization of this move holds:
\begin{multline}\label{33}
\frac{\eta_{12345}\, \eta_{12346}\, \eta_{12356}}{q_{123}} \iiint \mathcal W_{12345} \widetilde{\mathcal W}_{12346} \mathcal W_{12356} \,\mathrm d\vartheta_{1234} \,\mathrm d\vartheta_{1235} \,\mathrm d\vartheta_{1236} \\
= \frac{\eta_{12456}\, \eta_{13456}\, \eta_{23456}}{q_{456}} \iiint \mathcal W_{12456} \widetilde{\mathcal W}_{13456} \mathcal W_{23456} \,\mathrm d\vartheta_{1456} \,\mathrm d\vartheta_{2456} \,\mathrm d\vartheta_{3456} \,,
\end{multline}
where we write $\widetilde{\mathcal W}_{12346}$ instead of $\mathcal W_{\widetilde{12346}}$, etc.
Then, quantities~I~\eqref{inv} calculated for the initial and final triangulations are the same.
\end{theorem}
\begin{proof}
Due to Lemma~\ref{l:GB}, it is enough to show that the multiple integral in the right-hand side of~\eqref{23GB} for the initial or final triangulation can be obtained by making first triple integration in the respective side of~\eqref{33}, and then further integrating in the rest of variables~$\vartheta_t$. Consider the initial triangulation for definiteness.
Consider differentiations $\partial_{1234}$, $\partial_{1235}$ and~$\partial_{1236}$; their product is, up to a possible sign, the same operation as the triple integration in the left-hand side of~\eqref{33}; the sign may appear because our differentiations are, by default, \emph{left}, while integral corresponds to \emph{right} differentiations. Let
\[
\mathbf W=\prod_u \mathcal W_u=\exp(\Uptheta^{\mathrm T} f_3 \,\Uptheta)
\]
be the product of pentachoron weights~(I.16) over all triangulation. Then $\partial_{1234} \partial_{1235} \partial_{1236} \mathbf W \ne 0$.
On the other hand, $\boldsymbol{\partial}$ is the product of differentiations in~$\mathbf D \tilde f_2$, each of which annihilates~$\mathbf W$---see the proof of Theorem~I.7. It follows then that only zero is contained in the intersection of the linear space spanned by differentiations in~$\mathbf D \tilde f_2$ with the linear space spanned by $\partial_{1234}$, $\partial_{1235}$ and~$\partial_{1236}$. It follows further that $\boldsymbol{\partial}^{-1}1$ can be chosen as a single Grassmann monomial containing none of $\vartheta_{1234}$, $\vartheta_{1235}$ and~$\vartheta_{1236}$.
Indeed, make the following matrix~$C$: its columns correspond to all tetrahedra~$t$ in the triangulation, while the $i$-th row consists of the coefficients of~$\partial_t$ in the $i$-th operator in~$\mathbf D \tilde f_2$. Let also tetrahedra $1234$, $1235$ and~$1236$ correspond to the three \emph{last} columns. Reduce then matrix~$C$ to the \emph{echelon} form; the leading element in any row cannot then belong to the three last columns---and the desired monomial can be obtained as the product of variables~$\vartheta_t$ corresponding to these exactly leading elements, with some coefficient.
Now we see that $\boldsymbol{\partial}^{-1}1$ can be chosen not to contain the integration variables in~\eqref{33}, and pentachoron weights~$\mathcal W_u$ for $u$ not present in~\eqref{33} do not contain them either. This means that all these factors can be taken out from under the integration in~\eqref{33}, and this latter can be performed first, as we wanted to show.
\end{proof}
\section[Edge operators and cocycle~$\omega$ from matrix~$F$]{Edge operators and cocycle~$\boldsymbol{\omega}$ from matrix~$\boldsymbol{F}$}\label{s:oF}
In this Section, we write out some formulas needed for proofs of Theorems \ref{th:eta} and~\ref{th:e2} below. Apart from that need, these formulas have their own intriguing algebraic beauty.
We work here within one fixed pentachoron~$u$. Let $b$ and~$t$ denote an edge and a tetrahedron such that $b\subset t\subset u$. Recall that the (local) edge operator has the following form\footnote{\eqref{I.14c} reproduces our formula~(I.14), except for one misprint in the latter: `$\partial_{bt}$' in~(I.14) must be read as~`$\partial_t$'.}:
\begin{equation}\label{I.14c}
d_b = d_b^{(u)} = \sum_{\substack{t\subset u\\ t\supset b}} (\beta_{bt}\partial_t + \gamma_{bt}\vartheta_t),
\end{equation}
and the coefficients in it can be calculated, up to a scalar factor---we call it~$h_b$---directly from the matrix~$F$ corresponding to~$u$. We have already done this calculation in~\cite[formula~(20)]{full-nonlinear}; here we write the result in a slightly different form, namely:
\begin{equation}\label{hb}
\beta_{bt}=h_b \tilde{\beta}_{bt},\qquad \gamma_{bt}=h_b \tilde{\gamma}_{bt},
\end{equation}
where
\begin{align}
\tilde{\beta}_{bt} &= F_{ik}F_{jl}-F_{il}F_{jk}, \label{bF} \\
\tilde{\gamma}_{bt} &= F_{ik}F_{jm}F_{lm}-F_{im}F_{jk}F_{lm}-F_{il}F_{jm}F_{km}+F_{im}F_{jl}F_{km}, \label{gF}
\end{align}
and the notations are as follows: $i,\ldots,m$ are vertices determined by the requirement that they must give our edge, tetrahedron and pentachoron, \emph{together with their respective orientations}, according to
\[
b=ij,\quad t=ijkl,\quad u=ijklm,
\]
(orientations correspond to the order of vertices; orientation of~$t$ is induced from~$u$); and $F_{ik}$ is a shorthand for the matrix element between the tetrahedra \emph{not} containing vertices $i$ and~$k$, respectively, that is,
\[
F_{ik} \stackrel{\mathrm{def}}{=} F_{jklm,ijlm}.
\]
Coefficients~$h_b$ for the ten edges $b\subset u$ must be chosen in such way as to ensure the 1-cycle relations~(I.21) for edge operators. It can be checked that such~$h_b$ are given by the following remarkable formula:
\begin{equation}\label{pf}
h_{ij} \stackrel{\mathrm{def}}{=} p\,(\tilde{\beta}_{ki,t}\,\tilde{\gamma}_{kj,t}+\tilde{\beta}_{kj,t}\,\tilde{\gamma}_{ki,t}),
\end{equation}
where the r.h.s.\ actually does \emph{not} depend on the tetrahedron $t\subset u$, and $p$ is an arbitrary scalar factor.
Even more remarkably, the values~$\omega_s$ of cocycle~$\omega$, determined by homogeneous linear equations~(I.22) (and the unnumbered formula right below~(I.22)), turn out to be proportional to the following triple products over the edges of tetrahedron~$s=ijk$:
\begin{equation}\label{hhh}
\omega_{ijk} \propto h_{ij}^{-1}h_{ik}^{-1}h_{jk}^{-1}.
\end{equation}
The proportionality coefficient in~\eqref{hhh} is then determined by the normalization Convention~I.5 for edge operators; this leads to the formula
\begin{equation}\label{ph}
\omega_{ijk} = \frac{p}{h_{ij}h_{ik}h_{jk}}.
\end{equation}
\begin{remark}
We have already met triple products of the kind~\eqref{hhh} in our elliptic parameterization~\cite[(50)]{full-nonlinear}; the corresponding matrix elements of~$F$ had the elegant form~\cite[(51)]{full-nonlinear}, or could be obtained from those by a simple `gauge transformation'.
\end{remark}
\begin{remark}
It must be stressed again that the above calculations were done within one pentachoron~$u$. In particular, the factor~$p$ in \eqref{pf} and ~\eqref{ph} may not be the same for another pentachoron.
\end{remark}
\section[Relations corresponding to moves 0--2, and the factors~$\eta_u$]{Relations corresponding to moves 0--2, and the factors~$\boldsymbol{\eta_u}$}\label{s:02}
\subsection{Grassmann delta functions}\label{ss:Gd}
Quite naturally (see Theorem~\ref{th:m1}), our Grassmann-algebraic relations corresponding to moves 0--2 introduced in Section~\ref{s:P} will involve \emph{Grassmann delta functions}. The following simple definition will suit our needs well enough.
\begin{dfn}\label{dfn:Gd}
Let $\vartheta$ and~$\vartheta'$ be two Grassmann generators. Then we introduce the Grassmann delta function as follows:
\begin{equation}\label{d}
\delta(\vartheta,\vartheta')\stackrel{\mathrm{def}}{=}\vartheta-\vartheta'.
\end{equation}
\end{dfn}
This definition is justified by the following equality:
\begin{equation}\label{de}
\int f(\vartheta) \, \delta(\vartheta,\vartheta') \,\mathrm d\vartheta = f(\vartheta').
\end{equation}
Here $f(\vartheta)$ is any Grassmann algebra element (that may contain Grassmann generator~$\vartheta$ and/or other generators), while $f(\vartheta')=f(\vartheta)[\vartheta/\vartheta']$ is the result of replacing~$\vartheta$ with~$\vartheta'$. Equality~\eqref{de} is easily checked directly if we write $f(\vartheta)$ as $f(\vartheta)=f_0+f_1\vartheta$, where neither of $f_0$ and~$f_1$ contain~$\vartheta$.
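Explicitly, using $\vartheta\vartheta'=-\vartheta'\vartheta$ and $\vartheta^2=0$, the check is one line:
\[
\int f(\vartheta)\,\delta(\vartheta,\vartheta')\,\mathrm d\vartheta = \int (f_0\vartheta - f_0\vartheta' + f_1\vartheta'\vartheta)\,\mathrm d\vartheta = f_0 + f_1\vartheta' = f(\vartheta').
\]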
\subsection{Relation corresponding to the first move 0--2}\label{ss:02_24}
The first move 0--2 has been described in Subsection~\ref{ss:24}. When we insert the two new pentachora $\widetilde{12456}$ and~$12456$ into the triangulation this way, there appears one new edge, namely~$b^{\mathrm{new}}=12$. Accordingly, the set~$\mathsf B$ of edges introduced in Subsection~\ref{ss:igb} can be changed the following way:
\[
\mathsf B^{\mathrm{new}}=\mathsf B^{\mathrm{old}}\cup\{ b^{\mathrm{new}} \},
\]
the operator~$\boldsymbol{\partial}$ introduced also in Subsection~\ref{ss:igb} changes as follows:
\[
\boldsymbol{\partial}^{\mathrm{new}}=\boldsymbol{\partial}^{\mathrm{old}}D_{b^{\mathrm{new}}},
\]
and the new expression for~$\boldsymbol{\partial}^{-1} 1$ can be chosen as follows:
\begin{equation}\label{1/d}
(\boldsymbol{\partial}^{-1} 1)^{\mathrm{new}}=(\boldsymbol{\partial}^{-1} 1)^{\mathrm{old}} \mathbf w,
\end{equation}
where $\mathbf w$---it can be called \emph{Grassmann weight of the edge~$b^{\mathrm{new}}$}---is any element of the Grassmann algebra obeying
\[
D_{b^{\mathrm{new}}}\mathbf w = 1,
\]
and, in addition, depending only on Grassmann variables \emph{not present in~$(\boldsymbol{\partial}^{-1} 1)^{\mathrm{old}}$}, that is, only on variables $\vartheta_{1245}$, $\vartheta_{1246}$ and/or~$\vartheta_{1256}$ living on the newly created tetrahedra.
\begin{example}\label{xmp:w1}
For instance, we can choose~$\mathbf w$ to be proportional to~$\vartheta_{1256}$, namely,
\begin{equation}\label{w1}
\mathbf w = \frac{\vartheta_{1256}}{\beta_{12,1256}},
\end{equation}
where $\beta_{12,1256}$ is the coefficient of~$\partial_{1256}$ in the edge operator~$d_{12}$, see the definition~\eqref{I.14c}.
\end{example}
We now introduce the ``pillow Grassmann weight'' for our first move 0--2. Its unnormalized version\footnote{Later we will ``normalize'' it, see the l.h.s.\ of~\eqref{pw24}.} reads as follows:
\begin{equation}\label{P}
\mathcal P = \iiint \mathcal W_{12456} \widetilde{\mathcal W}_{12456}\, \mathbf w\,\mathrm d\vartheta_{1245}\,\mathrm d\vartheta_{1246}\,\mathrm d\vartheta_{1256} .
\end{equation}
Both $\mathcal W_{12456}$ and~$\widetilde{\mathcal W}_{12456}$ are Grassmann--Gaussian exponentials~(I.16), and the differences between them are as follows:
\begin{enumerate}\itemsep 0pt
\item as they belong to the pentachora with opposite orientations, their respective edge operators have the same differential parts, but their `$\vartheta$-parts' differ in signs,
\item\label{i:twoF} consequently, the two respective matrices~$F$ also differ in signs\footnote{A bit more detailed explanation: $\gamma_{ij,t}$ in~(I.44) changes its sign together with the orientation of tetrahedron~$t$, hence $\gamma_{tt'}$ in~(I.49) changes its sign together with the orientation of tetrahedron~$t'$, and this latter orientation is induced from the pentachoron.},
\item additionally, while tetrahedra $1245$, $1246$ and~$1256$ lie inside the pillow and are common for our two pentachora, each of these has its own copies of boundary tetrahedra $1456$ and~$2456$, and we have to introduce special notation for the corresponding Grassmann generators. Namely, we denote as $\vartheta_{1456}$ and~$\vartheta_{2456}$ variables entering in the pentachoron weight~$\mathcal W_{12456}$, while $\vartheta_{1456}'$ and~$\vartheta_{2456}'$ enter in~$\widetilde{\mathcal W}_{12456}$; there are also $\vartheta_{1245}$, $\vartheta_{1246}$ and~$\vartheta_{1256}$ that enter in both weights.
\end{enumerate}
A small exercise based mainly on item~\ref{i:twoF} shows that the weight~\eqref{P} must be proportional to the product of two Grassmann delta functions~\eqref{d}:
\begin{equation}\label{Phi}
\mathcal P = \Phi \,\delta(\vartheta_{1456},\vartheta'_{1456})\, \delta(\vartheta_{2456},\vartheta'_{2456}).
\end{equation}
Coefficient~$\Phi$ can be expressed in terms of values~$\omega_s$ for 2-faces~$s$ of our pentachoron~$12456$ in many equivalent ways; one of these is described in the following Example~\ref{xmp:Phi}.
\begin{example}\label{xmp:Phi}
We calculate the coefficient~$\Phi$ by considering the terms proportional to $\vartheta_{1456}\vartheta'_{2456}$ in both sides of~\eqref{Phi}. Also, we take the edge weight in the form~\eqref{w1}. This means that the only term in the product $\mathcal W_{12456} \widetilde{\mathcal W}_{12456}$ in~\eqref{P} that is important for us now is that proportional to $\vartheta_{1456}\vartheta_{1245}\vartheta'_{2456}\vartheta_{1246}$, and the coefficient of proportionality is
\[
F_{1456,1245}\widetilde F_{2456,1246}-F_{1456,1246}\widetilde F_{2456,1245}.
\]
Here we denoted, just for clearness, matrix elements coming from the weight~$\mathcal W_{12456}$ as~$F_{tt'}$, while those coming from the weight~$\widetilde{\mathcal W}_{12456}$ as~$\widetilde F_{tt'}$; as we have already explained, $\widetilde F_{tt'}=-F_{tt'}$. One can also see from formula~(I.16) that the monomial in~$\mathcal W_u$ proportional to $\vartheta_t \vartheta_{t'}$, for two 3-faces $t,t'\subset u$, is $-F_{tt'}\vartheta_t \vartheta_{t'}$. It follows that one possible expression for~$\Phi$ is
\begin{equation}\label{Phii}
\Phi = \frac{F_{1456,1245}F_{2456,1246}-F_{1456,1246}F_{2456,1245}}{\beta_{12,1256}}.
\end{equation}
\end{example}
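Calculations of this kind are easy to automate. The following Python fragment is a minimal sketch of Grassmann multiplication and Berezin integration, with generators labelled, say, by tetrahedra; its conventions (the ordering of generators and the sign of the integral) are fixed arbitrarily and need not coincide with those adopted in this paper, so it should be read only as an illustration of the bookkeeping involved in evaluating weights such as~\eqref{P}.
\begin{verbatim}
# Minimal sketch of Grassmann multiplication and Berezin integration.
# Conventions (generator ordering, sign of the integral) are chosen here
# arbitrarily and need not match those of the paper.
class G:
    def __init__(self, terms):          # terms: {sorted tuple of labels: coeff}
        self.terms = {m: c for m, c in terms.items() if c != 0}
    def __add__(self, other):
        t = dict(self.terms)
        for m, c in other.terms.items():
            t[m] = t.get(m, 0) + c
        return G(t)
    def __mul__(self, other):
        t = {}
        for m1, c1 in self.terms.items():
            for m2, c2 in other.terms.items():
                if set(m1) & set(m2):    # theta^2 = 0
                    continue
                seq, sign = list(m1 + m2), 1
                for i in range(len(seq)):            # bubble sort; every swap
                    for j in range(len(seq) - 1):    # of neighbours flips sign
                        if seq[j] > seq[j + 1]:
                            seq[j], seq[j + 1] = seq[j + 1], seq[j]
                            sign = -sign
                key = tuple(seq)
                t[key] = t.get(key, 0) + sign * c1 * c2
        return G(t)

def gen(label):
    return G({(label,): 1})

def berezin(x, label):
    # d(theta_label)-integral: keep the monomials containing the label,
    # anticommute theta_label to the front (a factor (-1)^position), drop it.
    t = {}
    for m, c in x.terms.items():
        if label in m:
            i = m.index(label)
            rest = m[:i] + m[i + 1:]
            t[rest] = t.get(rest, 0) + ((-1) ** i) * c
    return G(t)

a, b = gen('a'), gen('b')
print((a * b + b * a).terms)        # {}  : theta_a theta_b = -theta_b theta_a
print(berezin(a * b, 'a').terms)    # {('b',): 1}
\end{verbatim}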
As we noted in Subsection~\ref{ss:24}, four new 2-faces, namely $124$, $125$, $126$ and~$456$, appear when our first move 0--2 is done. It turns out that the following quantity:
\[
\mathrm H = \frac{q_{124}q_{125}q_{126}q_{456}}{\Phi},
\]
where the numerator is the product of values~$q_s$ for the mentioned 2-faces, has remarkable symmetry properties. Roughly speaking, $\mathrm H$ is invariant under all permutations of the pentachoron vertices. The exact statement is presented below as Theorem~\ref{th:eta}, but first we introduce one more notational convention.
Recall (Convention~I.1) that all our vertices are always numbered and consequently there is a natural order on them. This order plays an auxiliary---but useful---role in our constructions, as can be seen in Subsection~I.5.2, see especially formula~(I.40). Below in~\eqref{Hu}, $\{ijk\}$ means that $i$, $j$ and~$k$ are taken in this (increasing) order, for instance, $q_{\{635\}}$ means~$q_{356}$.
\begin{theorem}\label{th:eta}
Choose arbitrarily a tetrahedron~$t$ and its edge~$b$, both belonging to a given pentachoron~$u$, that is, $b\subset t\subset u$. Edge $b=ij$ $(\,=-ji)$ is understood as oriented; pentachoron~$u$ is also oriented. The orientation of~$u$ can always be written as given by the order of vertices in $u=ijklm$, where $i$ and~$j$ belong to~$b$, while $m$ is, by definition, the vertex \emph{not} belonging to the tetrahedron~$t$, and the remaining two vertices are denoted $k$ and\/~$l$ in such a way as to give the needed orientation.
Then the quantity
\begin{equation}\label{Hu}
\mathrm H_u = \frac{\beta_{bt}\, q_{\{ijk\}} q_{\{ijl\}} q_{\{ijm\}} q_{\{klm\}}} {F_{iklm,ijkm}F_{jklm,ijlm}-F_{iklm,ijlm}F_{jklm,ijkm}}
\end{equation}
remains the same for all pairs $b\subset t$ and thus belongs only to~$u$ itself.
\end{theorem}
\begin{proof}
This Theorem states some algebraic equalities between quantities expressible in terms of the cocycle~$\omega$. So, in principle, they can be proved by a direct calculation. In reality, however, this turned out to be too hard even for a computer using computer algebra.
Happily, there is a roundabout way. As only one matrix~$F$ is present in~\eqref{Hu}, we can take \emph{its entries as independent variables}, and express everything in terms of them. All necessary formulas are written out in Section~\ref{s:oF} and, with them, the proof becomes an easy exercise.
\end{proof}
Recall that we have not yet given the expression for quantities~$\eta_u$. \emph{Define} now $\eta_u$ as follows:
\begin{equation}\label{eta}
\eta_u = \sqrt{\mathrm H_u}.
\end{equation}
This definition contains, due to~\eqref{Hu}, \emph{fourth} roots of values~$\omega_s$. As the signs of~$\omega_s$ already depend on a vertex ordering: $\omega_{ijk}=-\omega_{jik}$ and so on, taking further roots involves such quantities as~$\sqrt{-1}$, and these must be kept under control. This is what the following theorem is about.
\begin{theorem}\label{th:oe}
Quantity~$I$~\eqref{inv}, taken up to a sign\footnote{Recall Remark~\ref{r:+-}.}, does not depend on a vertex ordering, if the values of square roots~$K_t$~(I.36) are fixed\/\footnote{It is understood, of course, that the conditions (I.37) and~(I.38) are also fulfilled. As for the actual \emph{independence} of the signs of~$K_t$, it will be proved in Lemma~\ref{l:rr}.}\!.
\end{theorem}
\begin{proof}
It is enough to consider the interchange of numbers between two vertices with neighboring numbers $i$ and $j=i+1$. Moreover, anything nontrivial may happen only if there is an edge~$ij$ in the triangulation. In this case, all~$q_s$ with $s\supset ij$ are multiplied by the \emph{same} root of~$-1$: this follows from the fact that, for any tetrahedron~$t$, the product $\prod_{s\subset t} q_s$ must give the quantity~$K_t$ corresponding to the orientation of~$t$ determined by the order of vertices, see~\eqref{I.40} (and this orientation changes when the numbers are permuted).
Denote the number of triangles~$s$ around edge~$ij$ as~$n_2$; then the number of pentachora around~$ij$ is $n_4=2n_2-4$ (this is an easy exercise using the fact that the \emph{link} of an edge is a triangulated two-sphere). As a result of our permutation, $\prod_{\mathrm{all}\;s}q_s$ is multiplied by $(\sqrt{-1})^{n_2}$. On the other hand, $\eta_u$ for each pentachoron $u\supset ij$ gets multiplied by $\pm(\sqrt{-1})^{3/2}$, according to~\eqref{Hu} and the fact that there are exactly three triangles in~$u$ containing~$ij$. The overall factor for $\prod_{\mathrm{all}\;u}\eta_u$ is thus $\pm(\sqrt{-1})^{3n_2-6}$, and this means that, altogether, \eqref{inv} acquires a factor of $\pm(\sqrt{-1})^{2n_2-6}$, which is~$\pm 1$.
\end{proof}
Formulas \eqref{Phi} and~\eqref{eta} imply the following beautiful identity for the pillow Grassmann weight~$\mathcal P$:
\begin{equation}\label{pw24}
\frac{\eta_{12456}^2}{q_{124}q_{125}q_{126}q_{456}} \mathcal P = \delta(\vartheta_{1456},\vartheta'_{1456})\,\delta(\vartheta_{2456},\vartheta'_{2456}),
\end{equation}
with a transparent meaning of each factor in it. Namely, the numerator~$\eta_{12456}^2$ of the fraction before~$\mathcal P$ corresponds to the fact that two new pentachora~$12456$ have appeared, and the denominator---to the fact that new 2-faces $124$, $125$ and~$126$ have appeared, while 2-face~$456$ has doubled.
\begin{theorem}\label{th:m1}
Expression~\eqref{inv} remains the same under our ``first move 0--2''.
\end{theorem}
\begin{proof}
Due to \eqref{1/d} and~\eqref{pw24}, and using the fundamental property~\eqref{de} of the delta function, we can express the integral in~\eqref{23GB}, taken after the move 0--2, as the same integral taken before this move and multiplied by $\frac{q_{124}q_{125}q_{126}q_{456}}{\eta_{12456}^2}$. Then it follows from \eqref{23GB} and~\eqref{rt} that the value~$I$~\eqref{inv} remains the same.
\end{proof}
\subsection{Relation corresponding to the second move 0--2}\label{ss:02_15}
The second move 0--2 has been described in Subsection~\ref{ss:15}. This time, we insert two pentachora~$13456$ in the triangulation, and four new edges are added, namely $13$, $14$, $15$ and~$16$. Any three of them can be added to the set~$\mathsf B$:
\[
\mathsf B^{\mathrm{new}}=\mathsf B^{\mathrm{old}}\cup \{b_1^{\mathrm{new}},b_2^{\mathrm{new}},b_3^{\mathrm{new}}\}.
\]
Accordingly, we choose
\[
(\boldsymbol{\partial}^{-1} 1)^{\mathrm{new}}=(\boldsymbol{\partial}^{-1} 1)^{\mathrm{old}} \mathbf w,
\]
where this time $\mathbf w$ satisfies
\[
D_{b_1^{\mathrm{new}}}D_{b_2^{\mathrm{new}}}D_{b_3^{\mathrm{new}}}\mathbf w=1
\]
and depends only on variables~$x_t$ on newly created tetrahedra~$t$.
For instance, take $b_1^{\mathrm{new}}=13$, \ $b_2^{\mathrm{new}}=14$, \ $b_3^{\mathrm{new}}=15$. Then $\mathbf w$ can be taken proportional to $\vartheta_{1345} \vartheta_{1346} \vartheta_{1356}$, and the proportionality factor is $(\det m)^{-1}$, where $m$ is the matrix of coefficients of $\partial_{1345}$, $\partial_{1346}$ and~$\partial_{1356}$ in edge operators $d_{13}$, $d_{14}$ and~$d_{15}$:
\[
m = \begin{pmatrix} \beta_{13,1345} & \beta_{13,1346} & \beta_{13,1356} \\
\beta_{14,1345} & \beta_{14,1346} & 0 \\
\beta_{15,1345} & 0 & \beta_{15,1356} \end{pmatrix}.
\]
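Expanding $\det m$ along the first row (the two zero entries make this immediate), we get
\[
\det m = \beta_{13,1345}\,\beta_{14,1346}\,\beta_{15,1356}
- \beta_{13,1346}\,\beta_{14,1345}\,\beta_{15,1356}
- \beta_{13,1356}\,\beta_{14,1346}\,\beta_{15,1345}.
\]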
Recall that formulas for coefficients~$\beta_{bt}$ can be found in Subsection~I.5.3.
We would like to have now the following equality for the ``pillow weight'':
\begin{multline}\label{pw15}
\frac{\eta_{13456}^2}{q_{134}q_{135}q_{136}q_{145}q_{146}q_{156}} \iiiint \mathcal W_{13456} \widetilde{\mathcal W}_{13456}\, \mathbf w\,\mathrm d\vartheta_{1345}\,\mathrm d\vartheta_{1346}\,\mathrm d\vartheta_{1356}\,\mathrm d\vartheta_{1456} \\
=\delta(\vartheta_{3456}, \vartheta_{3456}'),
\end{multline}
where $\vartheta_{3456}$ and~$\vartheta_{3456}'$ correspond to the two copies of tetrahedron~$3456$, having Grassmann weights $\mathcal W_{13456}$ and $\widetilde{\mathcal W}_{13456}$, respectively.
Indeed, the quadruple integral in~\eqref{pw15} is easily evaluated and equals
\begin{equation}\label{Psi*}
-\frac{F_{3456,1456}}{\det m}\, \delta(\vartheta_{3456}, \vartheta_{3456}'),
\end{equation}
where the delta function appears, much like in Subsection~\ref{ss:02_24}, from integrating the product of two exponentials of quadratic forms differing in signs. The miracle is the following.
\begin{theorem}\label{th:e2}
The inverse to the coefficient of delta function in~\eqref{Psi*} can again be written in terms of $\eta_u$ and~$q_s$ according to the same principle as in~\eqref{pw24}, that is, as a product of $\eta_u$ for each new pentachoron~$u$, and $q_s^{-1}$ for each new 2-face. Hence, formula~\eqref{pw15} holds indeed.
\end{theorem}
\begin{proof}
Take, like we did in the proof of Theorem~\ref{th:eta}, the entries of~$F$ as independent variables. This makes the proof feasible by a direct computer calculation.
\end{proof}
\begin{theorem}\label{th:m2}
Expression~\eqref{inv} remains the same under our ``second move 0--2''.
\end{theorem}
\begin{proof}
This invariance of~\eqref{inv} is proved in full analogy with Theorem~\ref{th:m1}.
\end{proof}
\subsection[Quantities~$\eta_u$ and move 3--3]{Quantities~$\boldsymbol{\eta_u}$ and move 3--3}\label{ss:ER}
One more miracle is that the quantities~$\eta_u$ introduced in Subsection~\ref{ss:02_24} also work for the Pachner move 3--3.
\begin{conjecture}
Relation~\eqref{33} holds indeed, and with the same~$\eta_u$~\eqref{eta}.
\end{conjecture}
As we have explained in the Introduction, this Conjecture is actually a firmly established mathematical fact, although not yet formally proven.
\section[Dependence of~$I$ only on~$M$ and the cohomology class of~$\omega$]{Dependence of~$\boldsymbol{I}$ only on~$\boldsymbol{M}$ and the cohomology class of~$\boldsymbol{\omega}$}\label{s:f}
We will be able to call our quantity $I$~\eqref{inv} the invariant of a pair ``piecewise linear manifold~$M$, cohomology class~$h\ni \omega$\,''---assuming of course our Conjecture in Subsection~\ref{ss:ER}, and keeping in mind Remark~\ref{r:+-}---if we check its invariance under everything that may be changed in our calculations. Namely, $I$ must be independent of
\begin{enumerate}\itemsep 0pt
\item\label{i:a} a specific triangulation,
\item\label{i:b} vertex ordering,
\item\label{i:c} permitted signs of~$K_t$~(I.36),
\item\label{i:d} and choice of~$\omega$ within its cohomology class.
\end{enumerate}
Item~\ref{i:a} is solved using our formulas corresponding to Pachner moves---this has been the main subject matter in the previous Sections.
Item~\ref{i:b} has been solved in Theorem~\ref{th:oe}.
The two remaining items are solved below in Lemma~\ref{l:rr} and Theorem~\ref{th:inv}.
\begin{lemma}\label{l:rr}
Assuming our Conjecture, the quantity~$I$ is the same for any permitted (see (I.37) and~(I.38)) choice of signs of~$K_t$.
\end{lemma}
\begin{proof}
A permitted way of changing the signs of some~$K_t$ consists in changing the signs of some~$q_s$, due to Assumption~I.1, see Theorem~I.5. Consider just one~$q_s$, and imagine a sequence of Pachner moves where the last of them removes 2-face~$s$ from the triangulation. Then do this sequence backwards, beginning with inserting~$s$ and the corresponding~$q_s$ with the \emph{changed} sign. This can always be done, because our formulas \eqref{33}, \eqref{pw24} and~\eqref{pw15} hold for any choice of the signs of~$q_s$.
\end{proof}
\begin{theorem}\label{th:inv}
Assuming our Conjecture, our quantity~$I$~\eqref{inv} remains the same for all cocycles~$\omega$ within a given cohomology class~$h$ and is thus indeed an invariant of the pair $(M,h)$.
\end{theorem}
\begin{proof}
Recall that we agreed (in Section~I.3) to denote a 1-cochain taking value~$1$ on an edge~$b$ and vanishing on all other edges, simply by the same letter~$b$. Any 2-\emph{coboundary} is a linear combination of edge coboundaries~$\delta b$. We are going to show how to change
\begin{equation}\label{co}
\omega \mapsto \omega + c\,\delta b
\end{equation}
for any edge~$b$ and number~$c$, without changing~$I$.
First, we recall that our pillow Grassmann weights, taken with the corresponding multipliers, are simply delta functions, see \eqref{pw24} and~\eqref{pw15}. Hence, they do \emph{not} depend on the cocycle~$\omega$. If there is such a pillow within a triangulation, and we change~$\omega$ by a multiple of~$\delta b$ for any edge~$b$ lying \emph{inside} the pillow, this will not affect~$I$ either.
Second, there always exists a sequence of Pachner moves whose last move 4--2 or 5--1 takes $b$ away from the triangulation. Now we do such a sequence, and then its inverse, returning to the initial triangulation. But when we do the first move 2--4 or 1--5 in the inverse sequence, we represent it as a composition of moves where a move 0--2 comes first (according to Subsection~\ref{ss:24} or~\ref{ss:15}), and, while doing this 0--2, change $\omega$, with respect to its initial values, according to~\eqref{co}.
\end{proof}
\section{Introduction}
The Submillimeter Common User Bolometer Array (SCUBA) camera
on the James Clerk Maxwell
Telescope (JCMT) (Holland {\it et al.}\ 1999) has provided a new window for
ground-based
studies of the high-$z$ Universe. This brief review summarizes
results from a large campaign of deep
surveys carried out during SCUBA's first two years of operation on
Mauna Kea. Evidence is presented that the submm sources detected in the SCUBA
deep fields must be predominantly luminous infrared galaxies
(LIGs: $L_{\rm ir}${\thinspace}$>${\thinspace}10$^{11}${\thinspace}$L_\odot$)\footnote{$L_{\rm
ir}${\thinspace}$\equiv${\thinspace}$L$(8--1000$\mu$m).
Unless otherwise stated,
$H_{\rm o}${\thinspace}$=${\thinspace}50{\thinspace}km{\thinspace}s$^{-1}$Mpc$^{-1}$,
$q_{\rm o}${\thinspace}$=${\thinspace}0.} at high redshift
($z{\thinspace}\sim${\thinspace}1--5), and by analogy with local LIGs,
that they plausibly represent the building of
spheroids through major mergers of gas-rich disks.
\vspace{-0.25cm}
\section{SCUBA Deep Surveys}
Smail {\it et al.}\ (1997)
were the first to infer a substantial population of luminous
submm galaxies from their SCUBA detections at
850{\thinspace}$\mu$m/450{\thinspace}$\mu$m of background sources amplified
by weak lensing from foreground clusters.
Subsequent blank-field
surveys at 850$\mu$m/450$\mu$m\ (Hughes
{\it et al.}\ 1998; Barger {\it et al.}\ 1998;
Eales {\it et al.}\ 1999), confirmed the surprisingly large space density of faint
submm
sources, and in addition, showed that their optical
{\it and near-infrared} counterparts were often quite faint, as
illustrated below in our own data for the Lockman Hole.
\subsection{The Lockman Hole}
The 850{\thinspace}$\mu$m data for the Lockman Hole deep field
are shown in Figure 1. The two SCUBA
sources, LH\_NW1 and LH\_NW2, detected at 850{\thinspace}$\mu$m
($>${\thinspace}3{\thinspace}$\sigma$), have 850{\thinspace}$\mu$m fluxes of 5.1{\thinspace}mJy,
and 2.7{\thinspace}mJy, respectively, with upper limits at 450{\thinspace}$\mu$m
of $\lsim${\thinspace}50{\thinspace}mJy (5{\thinspace}$\sigma$).
Neither SCUBA source has an ISOCAM 7{\thinspace}$\mu$m counterpart
($< 35${\thinspace}$\mu$Jy; 5{\thinspace}$\sigma$). LH\_NW1 appears to be
centered on a faint K$^\prime$ source ($K_{\rm AB}^\prime = 21.8$)
with disturbed morphology, which is barely detected in the current
B-band image ($B_{\rm AB} = 23.5$). LH\_NW2 is ``blank''
implying that any counterpart has $K_{\rm AB}^\prime > 22.5$
and $B_{\rm AB} > 24.5$.
\begin{figure}[hb]
\vspace{-0.5cm}
\centerline{\epsfig{file=sanders_fig1.eps, width=12cm}}
\caption{SCUBA 850{\thinspace}$\mu$m detections (2 small thick circles: Barger {\it et al.}
\
1998; Barger, Cowie \& Sanders 1999), and ISOCAM 7$\mu$m detections (22 small
thin circles: Taniguchi {\it et al.}\ 1997) in the Lockman Hole northwest
(LH\_NW) Deep Field
(J2000: $RA = 10^h33^m55.5^s, Dec = +57^\circ46^\prime18^{\prime\prime}$)
superimposed on a K$^\prime$ image obtained with the QUick InfraRed
Camera (QUIRC) on the University of Hawaii 2.2-m telescope. The
field-of-view of the ISOCAM detector array and the SCUBA array are
indicated by a long dashed line and large solid circle respectively.
On the right are two zoomed images of the region outlined by the
$45^{\prime\prime} \times 45^{\prime\prime}$ box which is centered
on the strongest SCUBA source. The zoomed K$^\prime$ image was
obtained with the Near InfraRed Camera (NIRC) on the Keck 10-m
telescope, and the zoomed B-band image was obtained with the University
of Hawaii 2.2-m telescope.}
\vspace{-0.5cm}
\end{figure}
\section{ULIGs at High Redshift}
\begin{figure}[hb]
\vspace{-0.25cm}
\centerline{\epsfig{file=sanders_fig2.eps, width=7.5cm}}
\caption{Observed radio-to-UV spectral energy distribution of the nearest
ULIG,
Arp{\thinspace}220 ($z = 0.018$). Labeled tickmarks represent object rest-frame
emission
that will be shifted into the 850{\thinspace}$\mu$m and 2.2{\thinspace}$\mu$m
observed frame for redshifts, $z = 0 - 5$. The insert shows the
corresponding observed-frame 850{\thinspace}$\mu$m flux and $\nu S_\nu(850)/\nu S_
\nu(2.2)$
ratio for Arp{\thinspace}220 at redshifts, $z = 0 - 5$.
}
\end{figure}
From the strength of the 850{\thinspace}$\mu$m detections and the faintness
of the K$^\prime$ counterparts alone, it is relatively straightforward
to show that the SCUBA sources detected in the LH\_NW deep field are most
likely to be ultraluminous infrared galaxies
(ULIGs: $L_{\rm ir}${\thinspace}$>${\thinspace}$10^{12}${\thinspace}$L_\odot$) at high redshift
(i.e. $z >${\thinspace}1). The ``submm excess'',
($\equiv \nu S_\nu (850\mu m) / \nu S_\nu (2.2\mu m)$), for both LH\_NW1 and
LH\_NW2
is larger than 1 (2.4 and $>${\thinspace}3 respectively), which is impossible to
produce
by normal optically selected galaxies at any redshift, or even by the
most extreme infrared selected galaxies at low redshift, but is almost
exactly what would be expected for an ULIG at high redshift. Figure 2
shows that the expected flux for the nearest ULIG Arp{\thinspace}220 when placed at
$z >${\thinspace}1 is on the order of a few mJy at 850{\thinspace}$\mu$m. Also, the
combination of a
large negative K-correction in the submm plus a
relatively flat or positive K-correction in the near-IR
naturally leads to values $\nu S_\nu (850\mu m) / \nu S_\nu (2.2\mu m) > 1$ for
all ULIGs at $z \gsim${\thinspace}1.5{\thinspace}. The observed faintness of the high-$z$
submm sources
in current B-band images and the non-detections at 7{\thinspace}$\mu$m in the deep
ISOCAM images are consistent with the large U--B colors and the
pronounced minimum at $\sim${\thinspace}3-6{\thinspace}$\mu$m respectively, in the
rest-frame SEDs of ULIGs
like Arp{\thinspace}220.
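This K-correction argument is easily reproduced numerically. The short sketch below replaces the full Arp{\thinspace}220 SED by a single-temperature modified blackbody with illustrative parameters ($T = 45${\thinspace}K, $\beta = 1.5$), and uses the $q_{\rm o} = 0$ luminosity distance $D_{\rm L} = (c/H_{\rm o})\,z\,(1 + z/2)$ with $H_{\rm o} = 50${\thinspace}km{\thinspace}s$^{-1}$Mpc$^{-1}$; the dust parameters are assumptions chosen only to illustrate the effect. The resulting 850{\thinspace}$\mu$m flux varies by well under an order of magnitude over $z =$ 1--5, instead of falling by the large inverse-square factor one would expect without the K-correction.
\begin{verbatim}
# Observed 850 micron flux vs. redshift for a dusty galaxy, assuming a
# single-temperature modified blackbody nu^beta * B_nu(T) in place of the
# full Arp 220 SED (T = 45 K, beta = 1.5 are illustrative values only).
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8      # SI units
H0 = 50.0e3 / 3.086e22                       # 50 km/s/Mpc in s^-1

def greybody(nu, T=45.0, beta=1.5):
    # rest-frame spectral shape nu^beta * B_nu(T), arbitrary normalization
    return nu ** (3.0 + beta) / np.expm1(h * nu / (k * T))

def d_lum(z):
    # luminosity distance for q0 = 0:  D_L = (c/H0) z (1 + z/2)
    return (c / H0) * z * (1.0 + 0.5 * z)

def s_obs(z, lam_obs=850e-6):
    # observed flux density (arbitrary units): (1+z) L_nu((1+z) nu_obs) / D_L^2
    nu_obs = c / lam_obs
    return (1.0 + z) * greybody((1.0 + z) * nu_obs) / d_lum(z) ** 2

ref = s_obs(1.0)
for z in (1.0, 2.0, 3.0, 4.0, 5.0):
    print("z = %.0f :  S(850um) / S(z=1) = %.2f" % (z, s_obs(z) / ref))
\end{verbatim}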
\section{Source Counts, the Extragalactic Background, and Luminosity Function
Evolution}
\begin{figure}[hb]
\centerline{\epsfig{file=sanders_fig3a.eps, width=6cm}\
\epsfig{file=sanders_fig3b.eps, width=6cm}}
\caption{{\it (a)}\ Comparison of the 850{\thinspace}$\mu$m source counts
(solid squares: from Barger, Cowie \& Sanders 1999) with semi-analytic model
counts (see text).
\ {\it (b)}\
Comparison of the contribution of the 850{\thinspace}$\mu$m sources
brighter than 3{\thinspace}mJy (solid circle) and extrapolated contribution
of sources brighter than 1{\thinspace}mJy (open circle) to the EBL compared
with the Fixsen {\it et al.}\ (1998) analytic approximation (solid curve) to the
EBL.
The two dashed curves are for {\it observed} source temperatures
of 50{\thinspace}K and 25{\thinspace}K where each is based on a $\lambda$-weighted
Planck function.}
\end{figure}
Figure 3a shows that the cumulative 850{\thinspace}$\mu$m
counts in the range 2--10{\thinspace}mJy can be approximated by a single
power law of the form $N(>S) = 1 \times 10^4\ S^{-2}$ deg$^{-2}$.
Figure 3b compares the contribution of these 850{\thinspace}$\mu$m sources with
the recent model of the extragalactic background light (EBL) determined from COBE data
(Fixsen {\it et al.}\ 1998; Puget {\it et al.}\ 1996, 1999;
Hauser {\it et al.}\ 1998). Approximately 25{\thinspace}\% of the
850{\thinspace}$\mu$m EBL resides in sources brighter than 2{\thinspace}mJy,
and {\it nearly all of the EBL at 850{\thinspace}$\mu$m can be accounted
for by sources brighter than 1{\thinspace}mJy}, assuming the extrapolation down to
1{\thinspace}mJy given by the fit to the SCUBA data in Figure 3a.
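Taking the power-law fit at face value, the 850{\thinspace}$\mu$m background contributed per unit solid angle by sources in a given flux range follows from
$$\int_{S_{\rm min}}^{S_{\rm max}} S \left(-\frac{dN}{dS}\right) dS
= 2 \times 10^4 \left(\frac{1}{S_{\rm min}} - \frac{1}{S_{\rm max}}\right)
{\rm \ mJy\ deg^{-2}}$$
(with $S$ in mJy); for example, the 2--10{\thinspace}mJy sources covered by the fit contribute $\sim${\thinspace}8$\times$10$^3${\thinspace}mJy{\thinspace}deg$^{-2}$.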
The observed cumulative SCUBA counts imply {\it strong evolution} in
the co-moving space density of LIGs and ULIGs.
Figure 3a compares the observed SCUBA counts with
predictions from semi-analytic models using three rather extreme
distributions of ULIGs.
Model 1 is based on the local IRAS 60{\thinspace}$\mu$m luminosity
function of galaxies (i.e. $\sim${\thinspace}0.001 ULIGs deg$^{-2}$ at
$z${\thinspace}$<${\thinspace}0.08:
Kim \& Sanders 1998; see also Soifer {\it et al.}\ 1987; Saunders {\it et al.}\ 1990)
{\it assuming no evolution, which underestimates the observed space
density of SCUBA sources by
nearly 3 orders of magnitude}. Model 2 includes no ULIGs, instead
attempting to account for the fraction of the optical/UV emission absorbed
and reradiated by dust in sources observed in optical/UV deep fields.
Model 2 still underpredicts the 850{\thinspace}$\mu$m source
counts by a factor of $\sim${\thinspace}30. A better fit to the data is provided
by Model 3 (similar to Model E of Guiderdoni {\it et al.}\ 1998), which
includes a strongly evolving
population of ULIGs, constrained only by recent measurements
of the submm extragalactic background light.
Figure 4 graphically illustrates how the high luminosity tail of the
LF for infrared galaxies
must change to match the observed SCUBA counts and inferred redshift
distribution (Barger et al. 1999).
It is interesting to note that
the strong evolution already detected in the 1-Jy sample of ULIGs
over the relatively small range
$z${\thinspace}$\lsim${\thinspace}0.3 [i.e. $\propto (1+z)^{6-7}$: Kim \& Sanders 1998],
if continued out
to $z${\thinspace}$\sim${\thinspace}2, would also provide a good
match to the observed cumulative surface density of SCUBA sources.
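(For example, $(1+z)^{6.5}$ evaluated at $z${\thinspace}$=${\thinspace}2 corresponds to a factor of $\sim${\thinspace}10$^3$ increase in co-moving density, roughly the amount by which the no-evolution Model 1 falls short of the observed counts.)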
\begin{figure}[hb]
\centerline{\epsfig{file=sanders_fig4.eps, width=6.5cm}}
\caption{The local LFs for infrared selected galaxies
from the {\it IRAS\/} Bright Galaxy Sample
(Soifer \& Neugebauer 1991; Sanders \& Mirabel 1996)
and for optically selected ``normal'' galaxies (Schechter 1976),
compared to the LFs for slightly more distant ULIGs
from the {\it IRAS\/} 1-Jy sample (Kim \& Sanders 1998)
and for the high-$z$ submm sources detected in the SCUBA 850{\thinspace}$\mu$m
deep fields.}
\end{figure}
\vspace{-0.25cm}
\section{Identification of IR/Submm Sources}
\vspace{-0.25cm}
\subsection{Low-z LIGs ($z${\thinspace}$\lsim${\thinspace}0.3)}
Substantial progress has been made in understanding the nature of infrared
selected
galaxies in the local Universe. Ground-based follow-up studies of complete
samples
of LIGs discovered by the {\it IRAS} satellite show that
nearly all objects with $L_{\rm ir} > 10^{11.5}${\thinspace}$L_\odot$ appear to be
strongly interacting/merging, gas-rich,
$\sim${\thinspace}$L^*$ spirals (Figure 5).
At the highest luminosities most objects
appear to be advanced mergers powered by a mixture of starburst and AGN, both of
which are
fueled by an enormous concentration of gas that has been funneled
into the merger nucleus. These LIGs appear to represent a primary stage in the
formation
of elliptical galaxy cores, and the ULIG phase also appears to represent an
important stage in the
formation of quasars and powerful radio galaxies (see SM96 for a
complete review).
\begin{figure}[hb]
\centerline{\epsfig{file=sanders_fig5.eps, width=10cm}}
\caption{A representative subsample of R-band images of LIGs (Mazzarella et al.
1999)
from the {\it IRAS\/} RBGS (Sanders et al. 1999), illustrating
the strong interactions/mergers that are characteristic of nearly all objects
with $L_{\rm ir} > 10^{11.5}{\thinspace}L_\odot$. The scale bar represents 10{\thinspace}kpc,
tick marks are at 20$^{\prime\prime}$ intervals, and
log{\thinspace}($L_{\rm ir}/L_\odot$) is indicated in the lower left of each panel.}
\end{figure}
\vspace{-0.5cm}
\subsection{High-z SCUBA sources ($z${\thinspace}$\sim$1--5)}
Progress in identifying optical/near-IR counterparts of the SCUBA
deep-field sources has been frustratingly slow, due in large part to the intrinsic faintness of these counterparts.
However, this is now understood -- from far-UV studies of local ULIGs
(e.g. Trentham, Kormendy \& Sanders 1999) -- as what would be expected for
ULIGs at $z${\thinspace}$>${\thinspace}1. Local ULIGs, if placed at $z${\thinspace}$=$1--4, would
have
apparent magnitudes in the range $m_{\rm B}${\thinspace}$\sim${\thinspace}27--32,
$m_{\rm I}${\thinspace}$\sim${\thinspace}25--29, and $m_{\rm K}${\thinspace}$\sim${\thinspace}21--24{\thinspace}!
Currently only $\sim${\thinspace}25{\thinspace}\%
of the sources with $S_{850}${\thinspace}$>${\thinspace}3{\thinspace}mJy appear to have ``secure''
identifications and redshifts (e.g. Barger et al. 1999).
However, the best studied of these
have properties (e.g. magnitudes, morphology, spectra, gas-content)
similar both to local ULIGs as well as to the small sample of high-$z$
ULIGs discovered in the {\it IRAS} faint source database
(see Scoville, these proceedings).
\vspace{-0.25cm}
\section{Summary: ``Star Formation History" of the Universe}
\begin{figure}[ht]
\vspace{-0.25cm}
\centerline{\epsfig{file=sanders_fig6.eps, width=6cm}}
\vspace{-.75cm}
\caption{The ``star formation rate'' vs. $z$ for optical/UV
and far-IR/submm selected galaxies.
($H_{\rm o} = 50${\thinspace}km{\thinspace}s$^{-1}$Mpc$^{-1}$,
$q_{\rm o} = 0.5$ is used for consistency with previously published
optical versions of this plot).
In the optical/near-UV, the mean co-moving SFR
is determined from the total observed rest-frame
UV luminosity density of galaxies (solid diamond: Trayer {\it et al.}\ 1998;
solid circles: Cowie, Songalia \& Barger 1999;
solid squares: Connolly {\it et al.}\ 1997;
solid stars: Steidel {\it et al.}\ 1999).
The shaded
region and thick solid line represent the maximum contribution to the SFR from
far-IR/submm sources (i.e. assuming all of the far-IR/submm
emission is powered by young stars) using models with a range of
$z$-distributions which are consistent with both the current
observations of 850{\thinspace}$\mu$m SCUBA sources (Blain {\it et al.}\ 1999; Barger et al.
1999),
and the local volume density of LIGs
(Sanders et al. 1999).}
\vspace{-2.25cm}
\end{figure}
\vspace{-0.25cm}
What the SCUBA deep surveys now make abundantly clear is that
a substantial fraction of the ``activity'' in galaxies
at high redshifts ($z >${\thinspace}1) is obscured by dust, and, therefore has
been missed in deep optical/UV surveys. This is graphically illustrated
in Figure 6 using the latest SCUBA redshift distribution estimates of Barger et
al. (1999),
assuming that all of the far-IR/submm luminosity is
powered by star
\vspace{-2.0cm}
\noindent
formation, and then comparing with similar plots derived
for deep optical/UV surveys. Figure 6 suggests that the SCUBA sources
dominate the {\it observed} optical/UV SFR by
at least a factor of 10 at $z${\thinspace}$>${\thinspace}1.
What is the relationship of the
SCUBA sources to the optically selected high-$z$ population of starburst
galaxies?
One view is that the SCUBA sources are indeed just the most heavily reddened
objects
already contained in the optical samples. Favoring this view is the
evidence (summarized by Steidel {\it et al.}\ 1999) that on average the more
luminous objects in optical samples are also redder, such that after correction
for extinction (typically by a mean factor of $\sim${\thinspace}3--5) using
models developed for nearby starburst galaxies
(e.g. Meurer {\it et al.}\ 1997;
Calzetti 1997) they would have intrinsic luminosities equivalent to that of
the SCUBA sources (i.e. $\gsim 10^{12} L_\odot$).
However, there is little current evidence to show that the SCUBA detections
are related to the most heavily reddened optical sources, or that applying a
mean
dust correction to all optical/UV sources is advised.
An alternative view is that the SCUBA sources represent an inherently distinct
population, for example the formation of spheroids and massive
black holes, both of which are triggered by the merger of two large gas-rich
disks
(e.g. Kormendy \& Sanders 1992; Kormendy \& Richstone 1995; SM96).
Favoring this view is the fact that the strong evolution
for {\it IRAS} ULIGs and SCUBA sources
at $z${\thinspace}$<${\thinspace}1, and a possible peak in the range $z${\thinspace}$\sim${\thinspace}1--3,
is similar to what is observed for QSOs
(e.g. Schmidt {\it et al.}\ 1995) and radio
galaxies
(Dunlop 1997). For the UV/starburst population, the more gradual decrease
at $z${\thinspace}$<${\thinspace}1 and the flat redshift distribution
at $z${\thinspace}$>${\thinspace}1 (Steidel {\it et al.}\ 1999) might better
represent
the building of disks over a wider range of cosmic time.
\vspace{-0.5cm}
\section{Introduction}
Hirsch-Smale theory \cite{Hirsch} reduces the problem of regular
homotopy classification of immersions to homotopy theory. However,
this homotopy theoretic problem is usually hard to deal with. In
the case of immersions of oriented 3-manifolds into $\mathbb R^5$ this
homotopy theoretic problem was solved by Wu \cite{Wu} using
algebraic topological methods (also see \cite{Li}).
However, it remains a problem to determine the regular homotopy
class of a given immersion from its geometry. By geometry we mean
the structure of the ``singularities'' of the map. For example,
double points are such ``singularities'', and indeed, Smale
\cite{Smale2} showed that for $n > 1$ the regular homotopy class
of an immersion $S^n \looparrowright \mathbb R^{2n}$ is completely
determined by the number of its double points (modulo 2 if $n$ is
odd). A similar classification was carried out by Ekholm
\cite{Ekholm2} for immersions of $S^k$ into $\mathbb R^{2k-1}$ for $k
\ge 4$.
The whole picture changes when we consider immersions of $S^3$
into $\mathbb R^5$. Hughes and Melvin \cite{Hughes} showed that there
are infinitely many embeddings $S^3\hookrightarrow \mathbb R^5$ that
are pairwise not regularly homotopic to each other. Therefore one
can not determine the regular homotopy class from the
"singularities" since an embedding has no such. Ekholm and
Sz\H{u}cs \cite{ESz} came over this problem using "singular
Seifert surfaces" bounded by the immersions. For an immersion $f
\colon M^3 \looparrowright \mathbb R^5$ a singular Seifert surface is
a generic map $F \colon W^4 \to \mathbb R^5$ of a compact orientable
manifold $W^4$ with boundary $M^3$ such that $\partial F = f$. In
\cite{ESz} it is shown that for $M^3 = S^3$ the Smale invariant of
$f$ can be computed from the singularities of $F$. Later Saeki,
Sz\H{u}cs and Takase \cite{Takase} generalized these results for
immersions $f \colon M^3 \looparrowright \mathbb R^5$ with trivial
normal bundle (for oriented $M^3$). The invariant introduced in
\cite{Takase} corresponds to the 3-dimensional obstruction to a
regular homotopy between two such immersions. Our present paper
generalizes the results of \cite{Takase} to arbitrary immersions
$f \colon M^3 \looparrowright \mathbb R^5$.
We will consider the set $\mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)_{\chi}$ of immersions
with fixed normal Euler class $e(\nu_f) = \chi \in H^2(M^3;
\mathbb{Z})$ and construct a $\mathbb{Z}_{2d(\chi)}$-valued
regular homotopy invariant $i$ for this set of immersions, where
$d(\chi)$ denotes the divisibility of $\chi$. The construction of
the invariant $i$ will also make use of a singular Seifert surface
$F$. In \cite{Takase} $F$ had to be an immersion near the
boundary, but we (have to and) will allow arbitrary generic maps.
If $\chi = 0$ and $F$ is an immersion near the boundary then the
construction of the invariant $i$ agrees with the one introduced
in \cite{Takase}. We will also show that whenever $f, g \colon M^3
\looparrowright \mathbb R^5$ are regularly homotopic on a neighborhood
of the 2-skeleton of $M^3$ then $i(f) = i(g)$ iff $f$ and $g$ are
regularly homotopic. This shows that $i$ corresponds to the
3-dimensional obstruction to a regular homotopy between $f$ and
$g$. (Note that there is an invariant which determines the regular
homotopy class of the restriction of an immersion to a
neighborhood of the 2-skeleton of $M^3$. This invariant was called
the Wu invariant in \cite{Takase}, see below.)
Regular homotopy classes of immersions of oriented 3-manifolds
into $\mathbb R^5$ endowed with the connected sum operation form a
semigroup whose structure we will also determine. Finally, an
exact sequence will be defined that relates $\mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^5]$
to $\mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^6]$ and $[M^3, S^2]$.
\section{Preliminaries}
First let us recall the result of Wu \cite{Wu} that classifies
immersions of an oriented 3-manifold $M^3$ into $\mathbb R^5$ up to
regular homotopy.
\begin{thm} \label{thm:1}
The normal Euler class $\chi$ of an immersion $f \colon M^3
\looparrowright \mathbb R^5$ is of the form $2c$ for some $c \in
H^2(M^3; \mathbb{Z})$ and for any $c \in H^2(M^3; \mathbb{Z})$
there is an immersion $f$ such that $\chi = 2c$. Furthermore,
$$\mathop{\textrm{Imm}}\nolimits[M^3,\mathbb R^5]_{\chi} \approx \coprod_{c \in H^2(M^3;
\mathbb{Z}) \,,\, 2c = \chi} H^3(M^3; \mathbb{Z})/(2\chi \cup
H^1(M^3; \mathbb{Z})),$$ where $\mathop{\textrm{Imm}}\nolimits[M^3,\mathbb R^5]_{\chi}$ is the
set of regular homotopy classes of immersions with normal Euler
class $\chi \in H^2(M^3; \mathbb{Z})$ and $\cup$ represents the
cup product, moreover the symbol $\approx$ denotes a bijection.
\end{thm}
\begin{rem}
For $\chi \in H^2(M^3; \mathbb{Z})$ let $d(\chi) \in \mathbb{Z}$
denote the divisibility of $\chi$, so that $\chi$ equals $d(\chi)$
times a primitive class in $H^2(M^3; \mathbb{Z})$ modulo torsion,
and $d(\chi) = 0$ if $\chi$ is of finite order. Then Poincar\'e
duality implies that $$H^3(M^3; \mathbb{Z})/(2\chi \cup H^1(M^3;
\mathbb{Z})) \approx \mathbb{Z}_{2d(\chi)}.$$ If $f$ is an
immersion of $M^3$ into $\mathbb R^5$ with normal Euler class $\chi$
then let us introduce the notation $d(f)$ for $d(\chi)$.
\end{rem}
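For instance, if $M^3 = S^1 \times S^2$ and $\chi = 2kg$, where $g$ is a generator of $H^2(S^1 \times S^2; \mathbb{Z}) \approx \mathbb{Z}$ and $k \geq 1$, then $d(\chi) = 2k$, there is a unique class $c = kg$ with $2c = \chi$, and Theorem \ref{thm:1} gives $\mathop{\textrm{Imm}}\nolimits[S^1 \times S^2, \mathbb R^5]_{\chi} \approx \mathbb{Z}_{4k}$; for $\chi = 0$ the same formula gives $\mathbb{Z}$.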
\begin{note}
For $\chi \in H^2(M^3; \mathbb{Z})$ let $\Gamma_2(\chi)$ denote
the set $\{\,c \in H^2(M^3; \mathbb{Z}) \colon 2c = \chi \,\}$.
Throughout this paper we will use the notation $M^3_{\circ}$ for
the punctured 3-manifold $M^3 \setminus D^3$, where $D^3 \subset
M^3$ is a closed 3-disc. Then the 2-skeleton $\text{sk}_2(M^3)$ is
a deformation retract of $M^3_{\circ}$.
\end{note}
Theorem \ref{thm:1} can also be applied to the open manifold
$M^3_{\circ}$. Since $H^3(M^3_{\circ}; \mathbb{Z}) = 0$ we obtain
a bijection $$\bar{c} \colon \mathop{\textrm{Imm}}\nolimits[M^3_{\circ}, \mathbb R^5]_{\chi} \to
\Gamma_2(\chi).$$ Thus for an immersion $f \colon M^3
\looparrowright \mathbb R^5$ the invariant $c(f) =
\bar{c}(f|M^3_{\circ}) \in \Gamma_2(\chi)$ describes the regular
homotopy class of $f|M^3_{\circ}$. Following \cite{Takase} we will
call $c(f)$ the Wu invariant of the immersion $f$.
To get a complete description of $\mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^5]_{\chi}$ we
will construct a $\mathbb{Z}_{2d(\chi)}$-valued invariant $i$ such
that the map $$(c, i) \colon \mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^5]_{\chi} \to
\Gamma_2(\chi) \times \mathbb{Z}_{2d(\chi)}$$ will be a bijection.
The invariant $i$ is constructed in a geometric manner and is an
extension of the invariant defined in \cite{Takase} for $\chi =
0$.
Next let us recall Theorem 1.1(a) in \cite{ESz}. Let $f \colon S^3
\looparrowright \mathbb R^5$ be an immersion and $V^4$ an arbitrary
compact oriented 4-manifold with $\partial V^4 = S^3$. The map $f$
extends to a generic map $F \colon V^4 \to \mathbb R^5$ which has no
singular points near the boundary $\partial V^4$ since the normal
bundle $\nu_f$ of $f$ is trivial. This map $F$ has isolated cusps,
each one having a sign. Let us denote by $\#\Sigma^{1,1}(F)$ their
algebraic number and let $\Omega(f)$ be the Smale invariant of
$f$. The following formula was proved in \cite{ESz}.
\begin{thm} \label{thm:2}
$$\Omega(f) = \frac12(3\sigma(V^4) + \#\Sigma^{1,1}(F)).$$
\end{thm}
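For instance, if $f$ is the standard embedding $S^3 \subset \mathbb R^4 \subset \mathbb R^5$, then we may take $V^4 = D^4$ and for $F$ the inclusion $D^4 \subset \mathbb R^4 \subset \mathbb R^5$, which has no singular points at all; then $\sigma(V^4) = 0$ and $\#\Sigma^{1,1}(F) = 0$, and the formula gives $\Omega(f) = 0$, in accordance with the usual normalization of the Smale invariant.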
The proof of this theorem relies on the following proposition
(\cite{Szucs}, Lemma 3).
\begin{lem} \label{lem:0}
Let $X^4$ be a closed oriented 4-manifold and $g \colon X^4 \to
\mathbb R^5$ a generic map. Then $3 \sigma(X^4) + \#\Sigma^{1,1}(g) =
0$.
\end{lem}
For the sake of completeness we will also recall from
\cite{Takase} the definition of the invariant $i$ for immersions
with trivial normal bundle. First we need a preliminary
definition.
\begin{defn}
Let $M^3$ be a closed oriented 3-manifold. We denote by
$\alpha(M^3)$ the dimension of the $\mathbb{Z}_2$ vector space
$\tau H_1(M^3; \mathbb{Z}) \otimes \mathbb{Z}_2$, where $\tau
H_1(M^3; \mathbb{Z})$ is the torsion subgroup of $H_1(M^3;
\mathbb{Z})$.
\end{defn}
\begin{defn} \label{defn:6}
Let $f \colon M^3 \looparrowright \mathbb R^5$ be an immersion with
trivial normal bundle. Let $W^4$ be any compact oriented
4-manifold with $\partial W^4 = M^3$ and $F \colon W^4 \to
\mathbb R^5$ a generic map nonsingular near the boundary such that
$F|\partial W^4 = f$. (We can choose such a generic map $F$ since
$f$ is an immersion with trivial normal bundle.) Denote the
algebraic number of cusps of $F$ by $\#\Sigma^{1,1}(F)$. Then let
$$i(f) = \frac32 (\sigma(W^4)-\alpha(M^3)) + \frac12\#\Sigma^{1,1}(F).$$
It is proved in \cite{Takase} that $i(f)$ is always an integer and
a regular homotopy invariant.
\end{defn}
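Note that for $M^3 = S^3$ we have $\alpha(S^3) = 0$, so the above expression reduces to $\frac12(3\sigma(W^4) + \#\Sigma^{1,1}(F))$, i.e.\ to the formula of Theorem \ref{thm:2}; hence for immersions of $S^3$ the invariant $i$ coincides with the Smale invariant.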
In the following sections we will extend the above regular
homotopy invariant $i$ to arbitrary immersions. If $f \colon M^3
\looparrowright \mathbb R^5$ has non-trivial normal bundle then we
have to give up the assumption that the singular Seifert surface
$F$ is an immersion near the boundary. Thus we will use an
arbitrary generic map $F \colon W^4 \to \mathbb R^5$ such that
$\partial F = f$. The singular set $\Sigma^1(F)$ of such an $F$ is
a 2-dimensional submanifold of $W^4$ with boundary $C(F) =
\partial \Sigma^1(F) \subset M^3$. If we orient $\ker(dF)|C(F)$
so that it points into $W^4$ and project it into $TM^3$ then we
obtain a normal field $\nu(F)$ along $C(F)$. We will define the
rotation of $\nu(F)$ around $C(F)$ modulo $4d(f)$ and denote this
by $R(F)$. The double of the extended invariant will be defined to
be
$$I(f) = 3(\sigma(W^4) - \alpha(M^3)) + \#\Sigma^{1,1}(F) + R(F)
\in \mathbb{Z}_{4d(f)}.$$ We will show that $I(f)$ is always even,
thus it defines an element $i(f) \in \mathbb{Z}_{2d(f)}$ using the
natural embedding $\mathbb{Z}_{2d(f)} \hookrightarrow
\mathbb{Z}_{4d(f)}$.
\section{Rotation}
Throughout this paper $M^3$ will denote a fixed closed connected
and oriented 3-manifold.
\begin{note} \label{note:1}
A pair $(C, \nu)$ will always stand for an oriented 1-dimensional
submanifold $C$ of $M^3$ and a nowhere vanishing normal field
$\nu$ along $C$.
\end{note}
\begin{defn} \label{defn:1}
Let $\chi \in H^2(M^3; \mathbb{Z})$ and let $C_0$ and $C_1$ be
1-dimensional oriented submanifolds of $M^3$ with normal fields
$\nu_0$ and $\nu_1$ such that $PD[C_0] = PD[C_1] = \chi$. (Here
$PD$ denotes Poincar\'e duality.) Then we can define the
\emph{rotation difference} $\text{rd}((C_0,\nu_0),(C_1,\nu_1)) \in
\mathbb{Z}_{2d(\chi)}$ of $(C_0, \nu_0)$ and $(C_1, \nu_1)$ as
follows. Since $[C_0] = [C_1]$ and
$$H_1(M^3; \mathbb{Z}) \approx H^2(M^3; \mathbb{Z}) \approx [M^3,
\mathbb{C}P^{\infty}],$$ there exists an oriented cobordism $K^2
\subset M^3 \times I$ between $C_0 \subset M^3 \times\{0\}$ and
$C_1 \subset M^3 \times \{1\}$. Let $\nu$ be a generic normal
field along $K^2$ that extends $\nu_0$ and $\nu_1$. Then a sign
can be given to each zero of $\nu$ since $M^3$ is oriented. Now we
define $\text{rd}((C_0,\nu_0),(C_1,\nu_1))$ to be the algebraic
number of zeroes of $\nu$ modulo $2d(\chi)$. Equivalently,
$\text{rd}((C_0,\nu_0),(C_1,\nu_1))$ is the self intersection of
$K$ in $M^3 \times I$ modulo $2d(\chi)$ if perturbed in the
direction of $\nu$.
\end{defn}
\begin{rem} \label{rem:3}
The rotation difference is the obstruction to the existence of a
framed cobordism between the framed submanifolds $(C_0, \nu_0)$
and $(C_1, \nu_1)$ of $M^3$. Using the Pontrjagin construction
this corresponds to the obstruction to a homotopy between two maps
of $M^3$ to $S^2$. This situation was first examined in
\cite{Pontrjagin}. It is easy to see that $\text{rd}((C_0,
\nu_0),(C_1, \nu_1)) = 0$ iff $(C_0, \nu_0)$ and $(C_1, \nu_1)$
are framed cobordant. Thus we obtain a bijection $$[M^3, S^2]
\approx \coprod_{\chi \in H^2(M^3; \mathbb{Z})} H^3(M^3;
\mathbb{Z})/(2\chi \cup H^1(M^3; \mathbb{Z})).$$
\end{rem}
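For example, for $M^3 = S^3$ we have $H^2(S^3; \mathbb{Z}) = 0$, so the only possible class is $\chi = 0$ and the right hand side is $H^3(S^3; \mathbb{Z}) \approx \mathbb{Z}$; this recovers the classical isomorphism $[S^3, S^2] = \pi_3(S^2) \approx \mathbb{Z}$ given by the Hopf invariant.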
\begin{prop} \label{prop:1}
In Definition \ref{defn:1} above the rotation difference is well
defined, i.e., it does not depend on the choice of $K$ and $\nu$.
\end{prop}
\begin{proof}
Let $K$, $\nu$ and $K'$, $\nu'$ be as in Definition \ref{defn:1}.
We glue together $M^3 \times I$ and $-M^3 \times I$ along their
boundaries so that we obtain the double $D(M^3 \times I) = M^3
\times S^1$. Place $K$ into the half of $M^3 \times S^1$
corresponding to $M^3 \times I$ and $K'$ into the other half. Then
we obtain a closed oriented surface $F = K \cup -K'$ in $M^3
\times S^1$ and a normal field $\mu = \nu \cup \nu'$ along $F$.
Since $H^*(S^1; \mathbb{Z})$ is a torsion free $\mathbb{Z}$-module
we can apply K\"unneth's theorem and we get that
$$H^2(M^3 \times S^1; \mathbb{Z}) \approx H^1(M^3; \mathbb{Z}) \otimes
H^1(S^1; \mathbb{Z}) \oplus H^2(M^3; \mathbb{Z})\otimes H^0(S^1;
\mathbb{Z}).$$ Thus the Poincar\'e dual of $F$ can be written in
the form $$ PD[F] = x \times \alpha + y \times 1 \in H^2(M^3
\times S^1; \mathbb{Z}),$$ where $x \in H^1(M^3; \mathbb{Z})$ and
$y \in H^2(M^3; \mathbb{Z})$, moreover $\alpha$ denotes the
generator of $H^1(S^1; \mathbb{Z})$ and $1$ the generator of
$H^0(S^1; \mathbb{Z})$ given by the orientation of $S^1$. Note
that for $1 \in S^1$ the dual class of $M^3 \times \{1\} \subset
M^3 \times S^1$ is $$PD[M^3 \times \{1\}] = PD[M^3] \times
PD[\{1\}] = 1 \times \alpha \in H^1(M^3 \times S^1; \mathbb{Z}).$$
Moreover,
$$PD[F \cap (M^3 \times \{1\})] = PD[C_1 \times \{1\}] = PD[C_1]
\times PD[\{1\}] = \chi \times \alpha.$$ On the other hand
$$PD[F \cap (M^3 \times \{1\})] = PD[F] \cup PD[M^3 \times \{1\}] =
(x \times \alpha + y \times 1) \cup (1 \times \alpha) = x \times
\alpha^2 + y \times \alpha.$$ Since $\alpha^2 = 0$ we get that $y
\times \alpha = \chi \times \alpha$. Using K\"unneth's theorem
again we obtain the equality $y = \chi$. Thus we get that $$PD[F]
\cup PD[F] = (x \times \alpha + \chi \times 1)^2 = (2x \cup \chi)
\times \alpha $$ since $\alpha^2 = \chi^2 = 0$ and $x \cup \chi =
\chi \cup x$ because the degree of $\chi$ is 2. So the self
intersection of $F$ in $M^3 \times S^1$ equals $\langle (2\chi
\cup x) \times \alpha, [M^3 \times S^1] \rangle = \langle 2\chi
\cup x, [M^3] \rangle \in 2d(\chi)\mathbb{Z}$. If we perturb $F$
in the direction of $\mu$ we get that the self intersection of $K$
with respect to $\nu$ equals the self intersection of $K'$ with
respect to $\nu'$ modulo $2d(\chi)$.
\end{proof}
\begin{prop}
If $[C_0]=[C_1]=[C_2] \in H_1(M^3, \mathbb{Z})$ then
$$\text{rd}((C_0,\nu_0),(C_1,\nu_1)) + \text{rd}((C_1,\nu_1),(C_2,\nu_2)) =
\text{rd}((C_0,\nu_0),(C_2,\nu_2)).$$
\end{prop}
\begin{defn}
For each $a \in H_1(M^3; \mathbb{Z})$ fix a pair $(C_a, \nu_a)$
such that $[C_a] = a$. Then for $[C] = a$ let $r(C,\nu) =
\text{rd}((C,\nu),(C_a, \nu_a))$.
\end{defn}
\begin{cor}
If $[C_0]=[C_1]$ then $r(C_0,\nu_0) - r(C_1, \nu_1) =
\text{rd}((C_0, \nu_0), (C_1, \nu_1))$.
\end{cor}
\begin{defn} \label{defn:7}
We can define the mod 2 rotation difference $\text{rd}_2((C_0,
\nu_0), (C_1, \nu_1))$ for unoriented $C_0$ and $C_1$ just as in
Definition \ref{defn:1} but allowing the cobordism $K$ to be
non-orientable and counting the self intersection of $K$ in $M^3
\times I$ only modulo 2. The proof that this is well defined is
analogous to the oriented case. It is clear that the epimorphism
$\mathbb{Z}_{2d(\chi)} \to \mathbb{Z}_2$ takes $\text{rd}$ to
$\text{rd}_2$. The mod 2 rotation $r_2$ is defined just like $r$.
\end{defn}
Unfortunately we will have to lift the invariants $\text{rd}$ and
$r$ to $\mathbb{Z}_{4d(\chi)}$. To be able to do this we need more
structure on $M^3$ then just a framed submanifold. We will use
this additional structure to restrict the homology class of the
cobordism $K$ so that the surface $F$ in the proof of Proposition
\ref{prop:1} will represent an even homology class and thus $x$
will always be even (since $\chi$ is even). So the self
intersection of $F$ will be divisible by $4d(\chi)$ instead of
just $2d(\chi)$.
\begin{note}
Fix a cohomology class $\chi \in H^2(M^3; \mathbb{Z})$. Let
$\varepsilon^3_M$ denote the 3-dimensional trivial bundle over $M^3$ and
let $t,v \in \Gamma(\varepsilon^3_M)$ be two generic non-zero sections of
$\varepsilon^3_M$. Furthermore, suppose that the 2-dimensional oriented
subbundle $t^{\perp} < \varepsilon^3_M$ has Euler class $\chi$. If we
project $v$ into $t^{\perp}$ we obtain a section $w \in
\Gamma(t^{\perp})$ that vanishes along a curve $C \subset M^3$ and
we orient $C$ so that $PD[C] = e(t^{\perp})$. In particular, $t$
and $v$ are linearly dependent exactly at the points of $C$.
Finally let $\nu$ be a non-zero normal field along $C$. In the
future we will denote such a structure on $M^3$ by a quadruple
$(C, \nu, t,v)$ and the set of these structures by $N(M^3, \chi)$.
\end{note}
\begin{rem}
Since $PD[C]|_2 = w_2(\varepsilon^3_M) = 0 \in H^2(M^3; \mathbb{Z}_2)$,
the cohomology class $\chi = PD[C]$ is of the form $2c$ for some
$c \in H^2(M^3; \mathbb{Z})$. This can be seen from the long exact
sequence associated to the coefficient sequence $\mathbb{Z} \to
\mathbb{Z} \to \mathbb{Z}_2$. Thus $N(M^3, \chi) = \emptyset$ if
$\chi$ is not of the form $2c$.
\end{rem}
\begin{defn} \label{defn:5}
Suppose that $a_0 = (C_0, \nu_0, t_0, v_0)$ and $a_1 = (C_1,
\nu_1, t_1, v_1)$ are elements of $N(M^3, \chi)$, where $\chi =
2c$. Then we will define their rotation difference
$\text{Rd}(a_0,a_1) \in \mathbb{Z}_{4d(\chi)}$ as follows. We will
consider $a_i$ to be in $N(M^3 \times \{i\}, \chi)$ for $i = 0,1$.
Let $t, v \in \Gamma(\varepsilon^3_{M \times I})$ be generic non-zero
sections extending $t_i$ and $v_i$ for $i = 0,1$. Denote by $K$
the 2-dimensional submanifold of $M^3 \times I$ where $t$ and $v$
are linearly dependent. Let $w$ denote the projection of $v$ into
the 2-dimensional oriented subbundle $t^{\perp} < \varepsilon^3_{M \times
I}$. Then $w$ is zero exactly at the points of $K$, thus it
defines an orientation of $K$. With this orientation $K$ is an
oriented cobordism between $C_0$ and $C_1$. Let $\nu$ denote a
normal field of $K$ that extends both $\nu_0$ and $\nu_1$. Now we
define $\text{Rd}(a_0,a_1)$ to be the algebraic number of zeroes
of $\nu$ modulo $4d(\chi)$. Equivalently, $\text{Rd}(a_0,a_1)$ is
the self intersection of $K$ in $M^3 \times I$ modulo $4d(\chi)$
if perturbed in the direction of $\nu$.
\end{defn}
\begin{prop} \label{prop:2}
In Definition \ref{defn:5} the rotation difference is well
defined. I.e., it does not depend on the extensions $t,v$ and
$\nu$.
\end{prop}
\begin{proof}
Let $t,v,\nu$ and $t',v',\nu'$ be as in Definition \ref{defn:5}.
The sections $t,v$ are linearly dependent over $K$ and $t',v'$ are
dependent over $K'$. Just as in the proof of Proposition
\ref{prop:1} we will place $K, \nu$ and $ K', \nu'$ in the two
halves of the double $D(M^3 \times I) = M^3 \times S^1$ and place
$t,v$ and $t',v'$ in the two halves of the trivial bundle
$\varepsilon^3_{M \times S^1}$. Let $F$ denote the oriented surface $K
\cup -K'$ and by $\mu$ the normal field along $F$ obtained from
$\nu$ and $\nu'$. Moreover, let $T = t \cup t'$ and $V = v \cup
v'$. Then $T,V \in \Gamma(\varepsilon^3_{M \times S^1})$ are linearly
dependent exactly over $F$, thus $PD[F]|_2 = w_2(\varepsilon^3_{M \times
S^1}) = 0 \in H^2(M^3 \times S^1; \mathbb{Z}_2)$. Using the
coefficient sequence $\mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}_2$
we get that $PD[F]$ is of the form $2b$ for some $b \in H^2(M^3
\times S^1; \mathbb{Z})$. Since $PD[F]$ is of the form $x \times
\alpha + \chi \times 1$ where $\chi = 2c$ we get that there exists
an element $z \in H^1(M^3; \mathbb{Z})$ such that $x = 2z$. Thus
$PD[F]^2 = (4z \cup \chi) \times \alpha$ which implies that the
self intersection of $F$ is divisible by $4d(\chi)$. If we
perturb $F$ in the direction of $\mu$ we get that the self
intersection of $K$ with respect to $\nu$ equals the self
intersection of $K'$ with respect to $\nu'$ modulo $4d(\chi)$.
\end{proof}
\begin{rem} \label{rem:1}
The surface $K$ represents the dual of the Stiefel-Whitney class
of the bundle $\varepsilon^3_{M \times I}$ relative to the sections $t_i,
v_i$ given over $M^3 \times \{0,1\}$. I.e., $$PD[K]|_2 =
w_2(\varepsilon^3_{M \times I} ; t_i, v_i) \in H^2(M^3 \times I, ((M^3
\setminus C_0) \times \{0\}) \cup ((M^3 \setminus C_1)\times
\{1\}); \mathbb{Z}_2)$$ since $v$ and $t$ are linearly independent
over $((M^3 \setminus C_0) \times \{0\}) \cup ((M^3 \setminus
C_1)\times \{1\})$. Using Lefschetz duality we get that the
relative homology class $[K]|_2 \in H_2(M^3 \times I, C_0 \times
\{0\} \cup C_1 \times \{1\}; \mathbb{Z}_2)$ is independent of the
choice of $t$ and $v$. If we choose a simplicial subdivision of
$M^3$ so that $\text{sk}_1(M^3) \cap C_i = \emptyset$ for $i =
0,1$ then $w_2$ is the obstruction to extending the map $(t_i,v_i)
\colon \text{sk}_1(M^3 \times \{0,1\}) \to V_2(\mathbb R^3)$ to
$\text{sk}_2(M^3 \times I)$. So the homology class $[K]|_2$ and
thus $\text{Rd}(a_0,a_1)$ depends only on the homotopy class of
the map
$$(t_i,v_i)|\text{sk}_1(M^3) \colon \text{sk}_1(M^3) \to V_2(\mathbb R^3)$$ for $i =
0,1$. For the sake of completeness we note that if the extension
$t$ is given then
$$PD[K] = e(t^{\perp}; w_i) \in H^2(M^3 \times I, (M^3 \setminus
C_0) \times \{0\} \cup (M^3 \setminus C_1)\times \{1\};
\mathbb{Z}).$$ So we have obtained the following proposition.
\end{rem}
\begin{prop} \label{prop:3}
Suppose that $(C_0,\nu_0)$ and $(C_1, \nu_1)$ are framed
submanifolds of $M^3$ and let $a_0,b_0,a_1,b_1 \in N(M^3, \chi)$
be of the form $a_i = (C_i, \nu_i, t^a_i,v^a_i)$ and $b_i = (C_i,
\nu_i, t^b_i, v^b_i)$ for $i = 0,1$. Moreover, suppose that
$\text{sk}_1(M^3) \cap C_i = \emptyset$ and $(t^a_i,
v^a_i)|\text{sk}_1(M^3)$ is homotopic to $(t^b_i,
v^b_i)|\text{sk}_1(M^3)$ as maps into $V_2(\mathbb R^3)$ for $i =
0,1$. Then the following equality holds:
$$\text{Rd}(a_0,a_1) = \text{Rd}(b_0,b_1).$$
\end{prop}
\begin{proof}
Let $t^a$ and $v^a$ be generic extensions of $t_i^a$ and $v_i^a$, respectively, over $M^3 \times I$, and denote by $K^a$ the submanifold of
$M^3 \times I$ where $t^a$ and $v^a$ are linearly dependent. We
obtain the sections $t^b$ and $v^b$ of $\varepsilon^3_{M \times I}$ and
the submanifold $K^b \subset M^3 \times I$ in a similar way. Then,
according to Remark \ref{rem:1}, we get that $$PD[K^a]|_2 =
w_2(\varepsilon^3_{M \times I}; t^a_i,v^a_i) = w_2(\varepsilon^3_{M \times I};
t^b_i,v^b_i) = PD[K^b]|_2,$$ since $w_2$ is the obstruction to
extending a map into $V_2(\mathbb R^3)$ from $\text{sk}_1(M^3 \times
I)$ to $\text{sk}_2(M^3 \times I)$ and $(t^a_i,
v^a_i)|\text{sk}_1(M^3 \times \{i\})$ is homotopic to $(t^b_i,
v^b_i)|\text{sk}_1(M^3 \times \{i\})$. Thus if $F$ denotes the
submanifold of $M^3 \times S^1 = D(M^3 \times I)$ obtained by
piecing together $K^a$ and $K^b$ we get that $PD[F]|_2 =
w_2(\varepsilon^3_{M \times S^1}) = 0$, so we can proceed as in the proof
of Proposition \ref{prop:2}.
\end{proof}
\begin{prop} \label{prop:5}
If $a_i = (C_i,\nu_i, t_i,v_i) \in N(M^3, \chi)$ for $i = 0,1$
then $$\text{rd}((C_0,\nu_0),(C_1,\nu_1)) \equiv
\text{Rd}(a_0,a_1) \mod 2d(\chi).$$
\end{prop}
\begin{prop}
If $a_0,a_1,a_2 \in N(M^3, \chi)$ then $$\text{Rd}(a_0,a_1) +
\text{Rd}(a_1,a_2) = \text{Rd}(a_0,a_2).$$
\end{prop}
\begin{defn}
For each $\chi \in H^2(M^3; \mathbb{Z})$ of the form $\chi = 2c$
fix an element $a_{\chi} \in N(M^3, \chi)$. Then for each $a \in
N(M^3, \chi)$ define the rotation $R(a) \in \mathbb{Z}_{4d(\chi)}$
to be $\text{Rd}(a,a_{\chi})$.
\end{defn}
\begin{cor}
If $a_0, a_1 \in N(M^3, \chi)$ then $\text{Rd}(a_0,a_1) = R(a_0)-
R(a_1)$.
\end{cor}
\section{The orientation of $\Sigma^1$}
Now let us recall a special case of Lemma 6.1 of \cite{ESz}. Let
$F \colon W^4 \to \mathbb R^5$ be a generic map of a compact
orientable manifold. Then the singularity set $\Sigma(F)$ of $F$
is a 2-dimensional submanifold of $W^4$ which is not necessarily
orientable.
\begin{lem} \label{lem:1}
The line bundles $\det(T\Sigma(F))$ and $\ker(dF)$ over
$\Sigma(F)$ are isomorphic.
\end{lem}
\begin{defn}
Let $\pi$ denote the projection of $\mathbb R^{m+1}$ onto $\mathbb R^m$. A
map $f \colon N^n \to \mathbb R^m$ is called \emph{prim} if there
exists an immersion $f' \colon N^n \looparrowright \mathbb R^{m+1}$
such that $\pi \circ f' = f$.
\end{defn}
\begin{cor} \label{cor:1}
If $F \colon W^4 \to \mathbb R^5$ is a generic prim map then
$\Sigma(F) \subset W^4$ is an orientable surface.
\end{cor}
\begin{proof}
Let $s$ denote the sixth coordinate function of $F'$, i.e., $F' =
(F, s)$. Since $F'$ is non-singular, the function $s$ is
non-degenerate along $\ker(dF)$. Thus we can orient $\ker(dF)$ so
that the derivative of $s$ in the positive direction of $\ker(dF)$
is positive. But the orientability of $\ker(dF)$ implies the
orientability of $\Sigma(F)$ by Lemma \ref{lem:1}.
\end{proof}
The following definition, motivated by Corollary \ref{cor:1},
gives an explicit isomorphism $\Psi$ between $\ker(dF)$ and
$\det(T\Sigma(F))$.
\begin{defn} \label{defn:4}
Let $W^4$ be a compact oriented manifold with possibly non-empty
boundary and let $F \colon W^4 \to \mathbb R^5$ be a generic map. For
$p \in \Sigma(F)$ choose a small neighborhood $U_p \subset W^4$ of
$p$ in which $\ker(dF)$ is orientable. Put $F_p = F|U_p$ and
choose an orientation $o_p$ of $\ker(dF_p)$. Then there exists a
smooth function $s \colon U_p \to \mathbb R$ such that the derivative
of $s$ in the direction of $o_p$ is positive. (First construct $s$
along $\Sigma(F_p)$ near $\Sigma^{1,1}(F_p)$ then extend it to a
tubular neighborhood of $\Sigma(F_p)$.) The map $F_p' = (F_p, s)
\colon U_p \looparrowright \mathbb R^6$ is an immersion. If $U_p$ is
chosen sufficiently small then we can even suppose that $F_p'$ is
an embedding. Denote by $e_6$ the sixth coordinate direction in
$\mathbb R^6$ and let $\nu_6 \colon U_p \to T\mathbb R^6$ denote the
vector field along $F_p'$ defined by the formula $\nu_6(x) = e_6
\in T_{F_p'(x)}\mathbb R^6$ for $x \in U_p$. Projecting $\nu_6$ into
the normal bundle of $F_p'$ we obtain a normal field $\mu_6$ along
$F_p'$ that vanishes exactly at the points of $\Sigma(F_p)$.
Perturb $F_p'$ in the direction of $\mu_6$ to obtain an embedding
$F_p''$. Then orient $\Sigma(F_p)$ as the intersection of $F_p'$
and $F_p''$ in $\mathbb R^6$. Here $\mathbb R^6$ is considered with its
standard orientation. This orientation of $\Sigma(F_p)$ does not
depend on the choice of the function $s$, since if $s_1$ and $s_2$
are two such functions then for $0 \le t \le 1$ the convex
combination $(1-t)s_1 + ts_2$ also satisfies the conditions for
$s$.
If we reverse the orientation of $\ker(dF_p)$, i.e. if we orient
it by $-o_p$, then we can choose $-s$ instead of $s$. Thus we
obtain the embedding $(F_p, -s)$, which is the reflection of
$F_p'$ in the hyperplane $\mathbb R^5$. Denote this reflection by $R
\colon \mathbb R^6 \to \mathbb R^6$ (i.e., $R(x_1, \dots, x_5, x_6) =
(x_1, \dots, x_5, -x_6)$). Then $(F_p,-s) = R \circ F_p'$. The
vector field $dR \circ \nu_6$ along $R \circ F_p'$ points in the
direction $-e_6$ and $R \circ F_p''$ is the perturbation of $R
\circ F_p'$ in the direction of $dR \circ \mu_6$. But in this case
we should perturb $R \circ F_p'$ in the direction of $-(dR \circ
\mu_6)$. We obtain the same orientation if we look at the
intersection $(R \circ F_p'') \cap (R \circ F_p')$ instead. Since
the intersection is 2-dimensional and $U_p$ is 4-dimensional we
get that $(R \circ F_p'') \cap (R \circ F_p') = (R \circ F_p')
\cap (R \circ F_p'')$ in the oriented sense. The orientations of
$\Sigma(F_p)$ defined by the intersections $ F_p' \cap F_p''$ and
$(R \circ F_p') \cap (R \circ F_p'')$ are opposite. This can be
seen from the following argument: For $0 \le t \le 1$ denote by
$R_t$ the rotation of the hyperplane $\mathbb R^6$ in $\mathbb R^7$ around
$\mathbb R^5$ by the angle $\pi t$. The orientation of $(R_t \circ
F_p')\cap (R_t \circ F_p'')$ in $R_t(\mathbb R^6)$ changes
continuously as $t$ goes from $0$ to $1$. The orientations of the
hyperplanes $R_1(\mathbb R^6)$ and $\mathbb R^6$ are opposite, thus the
reflection $R$ changes the orientation of the intersection $F_p'
\cap F_p''$.
So we have defined an isomorphism $\Psi_p$ between $\ker(dF_p)$
and $\det(T\Sigma(F_p))$ for every $p \in \Sigma(F)$ in a compatible way
(i.e., $\Psi_p|(U_p \cap U_q) = \Psi_q|(U_p \cap U_q)$ for $p,q
\in \Sigma(F)$). These local isomorphisms define a global isomorphism
$\Psi$ between $\ker(dF)$ and $\det(T\Sigma(F))$.
\end{defn}
\section{The invariant}
In this section we will give a geometric formula for the
3-dimensional obstruction to the existence of a regular homotopy
between two immersions of $M^3$ into $\mathbb{R}^5$. This
generalizes the results of \cite{Takase} to immersions with
non-trivial normal bundle.
\begin{defn} \label{defn:2}
Let $W^4$ be a compact oriented manifold with boundary $M^3$ and
$F \colon W^4 \to \mathbb R^5$ a generic map such that $f = F|M^3$ is
an immersion. Recall that $\Sigma(F)$ denotes the set of singular
points of $F$. Let us denote by $C(F) \subset M^3$ the
1-dimensional submanifold $\partial \Sigma(F)$. Choose a
trivialization $\tau$ of $\ker(dF)|C(F)$ so that it points into
the interior of $W^4$. This is possible since $f$ is non-singular
(and so $\ker(dF)$ never lies in $TM^3$). Then $\tau$ is normal to
$\Sigma(F)$ because $F$ is generic and thus $\Sigma^{1,1}(F) \cap
C(F) = \emptyset$. So if we project $\tau$ into $TM^3$ along
$\Sigma(F)$ we obtain a nowhere vanishing normal field $\nu(F)$ in
$\nu(C(F) \subset M^3)$.
Let $U$ denote a small collar neighborhood of $C(F)$ in
$\Sigma(F)$. Then clearly $U$ is orientable. Using the isomorphism
$\Psi$ of Definition \ref{defn:4} the trivialization $\tau$ of
$\ker(dF)$ induces an orientation of $U$. Thus $C(F) \subset
\partial U$ is also oriented. So we have assigned a pair $(C(F),
\nu(F))$ to $F$ as in Notation \ref{note:1}. Let $r(F) = r(C(F),
\nu(F))$.
\end{defn}
\begin{note}
For $\chi \in H^2(M^3; \mathbb{Z})$ let us denote by $\mathop{\textrm{Imm}}\nolimits(M^3,
\mathbb R^5)_{\chi}$ the space of immersions with normal Euler class
$\chi$.
\end{note}
Fix a cohomology class $\chi \in H^2(M^3; \mathbb{Z})$. Our aim is
to define an invariant $i \colon \pi_0\left(\mathop{\textrm{Imm}}\nolimits(M^3,
\mathbb R^5)_{\chi}\right) \to \mathbb{Z}_{2d(\chi)}$.
\begin{prop}
Let $f \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)_{\chi}$ and let $F \colon W^4 \to
\mathbb R^5$ be a generic map such that $\partial F = f$. Then $[C(F)]
= D\chi$.
\end{prop}
\begin{proof}
Let $\kappa$ denote an inner normal field of $M^3$ in $W^4$ that
extends $\tau$ (see Definition \ref{defn:2}). Then $dF \circ
\kappa$ is a vector field along $f$ that is tangent to $f$ exactly
at the points of $C(F) =
\partial \Sigma(F)$. (If $p \in C(F)$ then the rank of $(dF)_p$ is
$3$, moreover $dF|(T_pM^3) = df$ is non-degenerate. Thus
$dF(\kappa_p) \in df(T_pM^3)$.) So if we project $dF \circ \kappa$
into the normal bundle of $f$ we obtain a normal field of $f$ that
vanishes along $C(F)$. To see that $C(F)$ represents the normal
Euler class of $f$, we have to know that it is oriented suitably.
Using the notations of Definition \ref{defn:2} we choose a
function $s \colon W^4 \to \mathbb R$ such that the derivative of $s$
in the direction of $\kappa$ (and thus $\tau$) is positive and $
s|M^3 \equiv 0 $. Then there exists a collar neighborhood $V$ of
$M^3$ in $W^4$ such that $F' = (F,s)|V$ is an immersion. Denote by
$\nu_{F'}$ the normal bundle of $F'$ in $\mathbb R^6$ and by $\nu_f$
the normal bundle of $f$ in $\mathbb R^5$. Then $\nu_{F'}|M^3 = \nu_f$
as oriented bundles, since $s$ is increasing along $\kappa$ (here
$\mathbb R^5$ and $\mathbb R^6$ are considered with their standard
orientations). By Definition $\ref{defn:4}$ the surface of
singular points $U = \Sigma(F|V)$ is oriented as the
self-intersection of $F'$ in $\mathbb R^6$, or more precisely, as the
intersection of the zero section and a generic section of
$\nu_{F'}$. Moreover, $C(F)$ is oriented as the boundary of $U$.
Thus $C(F)$ is the self-intersection of the zero section of
$\nu_{F'}|M^3 = \nu_f$, so it is dual to the Euler class $e(\nu_f)
= \chi$. (Here we used the naturality of the Euler class.)
\end{proof}
\begin{defn} \label{defn:3}
Let $f \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)_{\chi}$ and let $F \colon W^4 \to
\mathbb R^5$ be generic such that $\partial F = f$. Denote the
algebraic number of cusps of $F$ by $\#\Sigma^{1,1}(F)$ (for the
definition see \cite{ESz}). Then let $j(f) = 3\sigma(W^4) - 3
\alpha(M^3) + \#\Sigma^{1,1}(F) + r(F) \in \mathbb{Z}_{2d(\chi)}$.
\end{defn}
Note that if $\chi = 0$ and $F$ is an immersion near $\partial
W^4$ then $r(F) = 0$. Thus in this case $j(f)$ agrees with the
double of the invariant introduced in \cite{Takase} (see
Definition \ref{defn:6}).
\begin{thm} \label{thm:3}
$j(f)$ is well defined, i.e., it does not depend on the choice of
the generic map $F$. Moreover, if $f_0$ and $f_1$ are regularly
homotopic then $j(f_0) = j(f_1)$.
\end{thm}
\begin{proof}
For $i \in \{0,1\}$ let $F_i \colon W^4_i \to \mathbb R^5$ be a
generic map such that $\partial F_i = f_i$. Choose a regular
homotopy $\{h_t: 0 \le t \le 1\}$ connecting $f_0$ and $f_1$. This
defines an immersion $H \colon M^3 \times I \to \mathbb R^5 \times I$
by the formula $H(x,t) = (h_t(x),t)$. Also choose a closed collar
neighborhood $U_i$ of $M^3$ in $W^4_i$ and a diffeomorphism $d_i
\colon U_i \to M^3 \times [0,\varepsilon]$ for $i = 0,1$. Let $p
\colon M^3 \times [0, \varepsilon] \to [0, \varepsilon ]$ denote
the projection onto the second factor. If $\varepsilon$ (i.e.,
$U_i$) is sufficiently small then $p \circ d_i$ is non-degenerate
along $\ker(dF_i)$ for $i =0,1$ since $\ker(dF_i)$ never lies in
$TM^3$. Let $s_i$ be an arbitrary smooth extension of $p \circ
d_i$ over $W^4_i$. Now let $F_0' = (F_0, -s_0) \colon -W^4_0 \to
\mathbb R^6$ and $F_1' = (F_1, s_1 + 1) \colon W^4_1 \to \mathbb R^6$.
Then $F_i'$ is an immersion on $U_i$. Notice that $H|(M^3 \times
\{0\}) = F_0'| (\partial W^4_0)$ and $H|(M^3 \times \{1\}) = F_1'|
(\partial W^4_1)$.
Denote by $\kappa_i$ the inner normal field of $W^4_i$ along $M^3
= \partial W^4_i$ and by $v_6$ the sixth coordinate direction in
$\mathbb R^6$. Then the inner product $\langle dF_0'(\kappa_0), v_6
\rangle < 0$ and $\langle dF_1'(\kappa_1), v_6 \rangle > 0$.
Furthermore, if $\lambda_i$ denotes the inner normal field of $M^3
\times I$ along $M^3 \times \{i\}$ for $i = 0,1$ then $\langle
dH(\lambda_0), v_6 \rangle > 0$ and $\langle dH(\lambda_1), v_6
\rangle < 0$. So $dH(\lambda_i)$ is homotopic to $-dF_i(\kappa_i)$
in the space of vector fields normal to $H|(M^3 \times \{i\})$.
By Smale's lemma there exists a regular homotopy of $H$ fixed
on the boundary $M^3 \times \{0,1\}$ that induces the above
homotopy of normal fields. Denote by $H'$ the result of this
regular homotopy of $H$. Then $F_0'$, $H'$ and $F_1'$ fit together
to a smooth map $F'$ of $W^4 = -W^4_0 \cup (M^3 \times I) \cup
W^4_1$ into $\mathbb R^6$ that is an immersion on $M^3 \times I$. Let
$\pi \colon \mathbb R^6 \to \mathbb R^5$ denote the projection map. Then
by a small perturbation of $H'$ we can achieve that $F = \pi \circ
F'$ is generic.
Since $G = F | (M^3 \times I) = \pi \circ H'$ is prim, the
singular surface $\Sigma(G)$ is oriented and a trivialization
$\tau$ of $\ker(dG)$ is given. If we project $\tau$ into
$\nu(\Sigma(G) \subset M^3 \times I)$ we obtain a normal field
$\nu$ along $\Sigma(G)$ that vanishes exactly at the cusps of $G$,
i.e., where $\tau$ is tangent to $\Sigma(G)$. So
$\#\Sigma^{1,1}(G)$ is equal to the algebraic number of zeroes of
$\nu$, which in turn is congruent to $\text{rd} ((C(F_0),
\nu(F_0)),(C(F_1), \nu(F_1))) = r(F_0)-r(F_1)$ modulo $2d(\chi)$
by Definition \ref{defn:1}.
Now using the result of Sz\H{u}cs \cite{Szucs} that $3\sigma(W^4)
+ \#\Sigma^{1,1}(F) = 0$ we get that
\begin{multline} \label{eqn:1}
-\left(3\sigma(W_0^4) + \#\Sigma^{1,1}(F_0) + r(F_0)\right) +
\left(3\sigma(W_1^4) + \#\Sigma^{1,1}(F_1) + r(F_1)\right) = \\
3\sigma(W^4) + \#\Sigma^{1,1}(-F_0 \cup G \cup F_1) = 0.
\end{multline}
In the special case $f_0 = f_1 = f$ this implies that $j(f)$ is
well defined, and for $f_0$ and $f_1$ arbitrary (but regularly
homotopic) we get that $j$ is a regular homotopy invariant.
\end{proof}
\begin{prop} \label{prop:4}
For any immersion $f \colon M^3 \looparrowright \mathbb R^5$ the
invariant $j(f)$ is always an even element of
$\mathbb{Z}_{2d(\chi)}$, i.e., it is mapped to $0$ by the
epimorphism $\mathbb{Z}_{2d(\chi)} \to \mathbb{Z}_2$.
\end{prop}
\begin{proof}
Choose an immersion $f_1 \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)_0$ and denote $f$
by $f_0$. Since in \cite{Takase} it is proved that $j(f_1)$ is
always even ($j(f_1) = 2i(f_1)$ for $i$ as in Definition
\ref{defn:6}) it is sufficient to prove that $j(f_0) \equiv j(f_1)
\mod 2$. Choose a singular Seifert surface $F_i \colon W_i \to
\mathbb R^5$ for $f_i$ ($i = 0,1$) and let $G \colon M^3 \times I \to
\mathbb R^5$ be a generic map such that $-F_0 \cup G \cup F_1$ is a
smooth map on $-W_0 \cup (M^3 \times I) \cup W_1$. Then by
equation \ref{eqn:1} above it is sufficient to prove that
$r(F_0)-r(F_1) \equiv \#\Sigma^{1,1}(G) \mod 2$. Since $f_1$ has
trivial normal bundle we may choose $F_1$ to be an immersion in a
neighborhood of $\partial W^4_1$. So $G$ is an immersion in a
neighborhood of $M^3 \times \{1\}$, moreover $r(F_1) = 0$. The
difference between the present situation and the proof of Theorem
\ref{thm:3} is that now $\ker(dG)$ might be non-orientable. Using
Definition \ref{defn:7} we get that $r_2(F_0) - r_2(F_1) \equiv
r(F_0)-r(F_1) \mod 2$. Let $\nu$ denote a generic normal field
along $\Sigma^1(G)$ that extends both $\nu(F_0)$ and $\nu(F_1)$.
By definition $r_2(F_0)-r_2(F_1)$ equals the mod $2$ number of
zeroes of $\nu$. Thus we only have to prove that
$\abs{\nu^{-1}(0)} \equiv \#\Sigma^{1,1}(G) \mod 2$.
From now on we will denote $\Sigma^1(G)$ by $K$ and the line
bundle $\ker(dG) < T(M^3 \times I)|K$ by $l$. Then $l$ is tangent
to $K$ exactly at the points of $\Sigma^{1,1}(G)$. For $\varepsilon >0$
sufficiently small let $\widetilde{K}$ denote the sphere bundle
$S_{\varepsilon}l$. If $\varepsilon$ is sufficiently small then the exponential
map of $M^3 \times I$ defines an immersion $s \colon \widetilde{K}
\to M^3 \times I$ so that the double points of $s$ correspond
exactly to the points of $\Sigma^{1,1}(G)$. So we have to prove
that $\abs{D_2(s)} \equiv \abs{\nu^{-1}(0)} \mod 2$. By Lemma
\ref{lem:1} the surface $\widetilde{K}$ is the orientation double
cover of $K$, in particular $\widetilde{K}$ is oriented and a sign
can be given to each double point of $s$ (here we also use that
$\dim(\widetilde{K})$ is even). The sign of a double point of $s$
is the opposite of the sign of the corresponding cusp of $G$
(since the sign of a cusp is defined as the self intersection of
$K$). Thus $\#D_2(s) = -\#\Sigma^{1,1}(G)$. Let $p \colon
\widetilde{K} \to K$ denote the covering map. Then $p^*\nu_K
\approx \nu_s$, thus $p^*\nu$ defines a section $\widetilde{\nu}$
of $\nu_s$. From the construction of $\widetilde{\nu}$ it is clear
that $\#\widetilde{\nu}^{-1}(0) = 2 \#\nu^{-1}(0)$.
If we perturb $s$ in the direction of $\widetilde{\nu}$ we get a
self intersection point of $s$ for each element of
$\widetilde{\nu}^{-1}(0)$ and two self intersection points for
each double point of $s$. Thus $$s \cap (s + \varepsilon \widetilde{\nu})
= \#\widetilde{\nu}^{-1}(0) + 2\#D_2(s) = 2(\#\nu^{-1}(0) -
\#\Sigma^{1,1}(G)).$$ So we only have to show that the left hand
side is divisible by $4$.
From now on we will work in a fixed tubular neighborhood $T$ of
$C(F_0) \subset M^3$. Note that $\partial (s, \widetilde{\nu}) =
(C(F_0)+ \varepsilon \nu(F_0), \nu(F_0)) \cup (-C(F_0)-\varepsilon \nu(F_0),
\nu(F_0)) \subset T \times \{0\}$. Let us denote by $C$ the
one-dimensional submanifold $(C(F_0)+ \varepsilon \nu(F_0))\cup(-C(F_0)-
\varepsilon \nu(F_0))$ of $M^3$. Then $s \cap (s + \varepsilon\widetilde{\nu}) =
r(C, \nu(F_0)) \in \mathbb{Z}$ since $C \sim C(F_0)-C(F_0)$ is
null homologous in $M^3$. We define an embedding $e \colon C(F_0)
\times [-\varepsilon, \varepsilon] \to T $ by the formula $e(x,t) = x + t
\nu(F_0)$. Then $E=\mathop{\textrm{Im}}\nolimits(e)$ is a 2-dimensional oriented submanifold
of $T$ with boundary $C$. Thus $r(C, \nu(F_0)) = E \cap
(C+\nu(F_0))$ where the right hand side is considered to be a
generic intersection (each fiber of $E$ is parallel to
$\nu(F_0)$). Let $n$ be a small non-zero vector field along
$C(F_0)$ orthogonal to $\nu(F_0)$. Then $E \cap (C + \nu(F_0)+n) =
\emptyset$ (this can be verified by inspecting each fiber of $T$),
thus $r(C, \nu(F_0)) = 0$. So we get that $\#\nu^{-1}(0) =
\#\Sigma^{1,1}(G)$, not just a mod 2 congruence.
\end{proof}
\begin{rem}
A small improvement on the proof of Proposition \ref{prop:4}
yields an interesting result: Let $G \colon M^3 \times [0,1] \to
\mathbb R^5 $ be a generic map connecting the immersions $f_0$ and
$f_1$. Let $K$ denote the singular surface of $G$ and let $\nu_i$
be a trivialization of $\ker(dG)|(M^3 \times \{i\})$ for $i =
0,1$. Then $\#\Sigma^{1,1}(G)$ is equal to the relative twisted
normal Euler class $e(\nu_K; \nu_0, \nu_1)$.
Note that if $\partial K = C_0 \cup C_1$ then by definition
$$\text{rd}_2((C_0, \nu_0),(C_1,\nu_1)) \equiv e(\nu_K; \nu_0,\nu_1) \mod
2.$$ If in particular $K$ is an oriented cobordism between $C_0$
and $C_1$ then $$\text{rd}((C_0, \nu_0),(C_1,\nu_1)) = e(\nu_K;
\nu_0,\nu_1) = \#\Sigma^{1,1}(G);$$ here $C_i$ is oriented by
$\nu_i$ using the isomorphism $\Psi$ (see Definition
\ref{defn:4}).
\end{rem}
Thus $j$ may take only $d(\chi)$ different values if $d(\chi) >
0$. (Since $j$ is additive if a connected sum is taken with an
immersion of a sphere (Lemma \ref{lem:2}) and $j(g)$ can be any
even number for $g \colon S^3 \looparrowright \mathbb R^5$ it follows
that $j$ is an epimorphism onto $2\mathbb{Z}_{2d(\chi)}$.) But
Theorem \ref{thm:1} implies that there are exactly $2d(\chi)$
regular homotopy classes with normal Euler class $\chi$. So $j$
describes the regular homotopy class of $f$ only up to a $2:1$
ambiguity. To resolve this problem we will lift the invariant $j
\in \mathbb{Z}_{2d(\chi)}$ to an invariant $I \in
\mathbb{Z}_{4d(\chi)}$. It follows from Proposition \ref{prop:4}
that $I$ is always an even element, so it determines an invariant
$i \in \mathbb{Z}_{2d(\chi)}$ via the embedding
$\mathbb{Z}_{2d(\chi)} \hookrightarrow \mathbb{Z}_{4d(\chi)}$ given by
multiplication by $2$ (so that $I = 2i$).
\begin{note}
Let $f \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)_{\chi}$ and let $F \colon W^4 \to
\mathbb R^5$ be a singular Seifert surface for $f$. If $\kappa$
denotes the inner normal field along $\partial W^4$ then let
$\bar{w}(F) \in \Gamma(\nu_f)$ be the projection of $dF(\kappa)$
into $\nu_f$. If $\bar{t} \in \Gamma(\varepsilon^1_M)$ denotes a
trivialization of $\varepsilon^1_M$ then we can consider $\bar{t}$ and
$\bar{w}(F)$ to be sections of $\nu_f \oplus \varepsilon^1_M$. Let
$\bar{v}(F) = \bar{w}(F) + \bar{t} \in \Gamma(\nu_f \oplus
\varepsilon^1_M)$.
\end{note}
From now on we will fix a spin structure $s_M \in
\text{Spin}(M^3)$. If we consider $\mathbb R^5$ with its unique spin
structure then for every immersion $f \colon M^3 \looparrowright
\mathbb R^5$ a spin structure $s(f)$ is induced on $\nu_f$ by $s_M$.
Then $s(f)$ is equivalent to a trivialization $\tau(f) \colon
\varepsilon^3_M|\text{sk}_2(M^3) \to \nu_f \oplus
\varepsilon^1_M|\text{sk}_2(M^3)$ up to homotopy. Since $\pi_2(SO(3)) =
0$ the trivialization $\tau(f)$ extends to an isomorphism $\tau(f)
\colon \varepsilon^3_M \to \nu_f \oplus \varepsilon^1_M$, but this extension is
not unique because $\pi_3(SO(3)) \neq 0$.
\begin{defn}
Using the above notations let $t(f),v(F) \in \Gamma(\varepsilon^3_M)$ be
defined by the formulas $t(f) = \tau(f)^{-1} \circ \bar{t}$ and
$v(F) = \tau(f)^{-1} \circ \bar{v}(F)$. Denote by $a(F)$ the
quadruple $(C(F), \nu(F), t(f), v(F)) \in N(M^3, \chi)$. Then
define $R(F) \in \mathbb{Z}_{4d(\chi)}$ to be $R(a(F))$. Since the
homotopy class of the map $(t(f),v(F))|\text{sk}_2(M^3) \colon
\text{sk}_2(M^3) \to V_2(\mathbb R^3)$ is independent of the choice of
the extension of $\tau(f)$ to $\varepsilon^3_M$ Proposition \ref{prop:3}
implies that $R(F)$ is also independent of $\tau(f)$ and depends
only on $s_M$.
\end{defn}
\begin{rem} \label{rem:2}
Proposition \ref{prop:5} implies that $r(F) \equiv R(F) \mod
2d(\chi)$.
\end{rem}
Now we can finally define a complete regular homotopy invariant.
\begin{defn}
For $f \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)_{\chi}$ and a singular Seifert
surface $F$ let $I(f) \in \mathbb{Z}_{4d(\chi)}$ be defined as
$3\sigma(W^4) - 3 \alpha(M^3) + \#\Sigma^{1,1}(F) + R(F)$. (Recall
that we have fixed a spin structure $s_M$ on $M^3$ for the
definition of $R(F)$.) Remark \ref{rem:2} above implies that $j(f)
\equiv I(F) \mod 2d(\chi)$. Thus by Proposition \ref{prop:4} we
get that $I(F)$ is always an even element of
$\mathbb{Z}_{4d(\chi)}$. Let us denote by $\frac12$ the
isomorphism from $2\mathbb{Z}_{4d(\chi)}$ to
$\mathbb{Z}_{2d(\chi)}$. Then let $i(F) = \frac12 I(F)$.
\end{defn}
Clearly $j(f) = 2i(f)$ for every $f \in \mathop{\textrm{Imm}}\nolimits(M^3,
\mathbb R^5)_{\chi}$.
\begin{thm}
$I(f)$ is well defined, i.e., it does not depend on the choice of
the generic map $F$. Moreover, if $f_0$ and $f_1$ are regularly
homotopic then $I(f_0) = I(f_1)$.
\end{thm}
\begin{proof}
Using the notations of the proof of Theorem \ref{thm:3} we only
have to show that the surface $K = \Sigma(G) \subset M^3 \times I$
satisfies Definition \ref{defn:5}. I.e., there exist generic
sections $t$ and $v$ of $\varepsilon^3_{M \times I}$ that extend $t(f_i)$
and $v(F_i)$ for $i = 0,1$ and are linearly dependent exactly over
$K$. The regular homotopy between $f_0$ and $f_1$ defines the
immersion $H \colon M^3 \times I \looparrowright \mathbb R^5 \times
I$. For $i \in \{0,1\}$ there is a canonical isomorphism $\varphi_i
\colon \nu_H|(M^3 \times \{i\}) \to \nu_{f_i}$. Let $\bar{w} \in
\Gamma(\nu_H)$ denote the projection of the sixth coordinate
vector $v_6 \in \mathbb R^6$ into $\nu_H$. Then $\varphi_i \circ
(\bar{w}|M^3 \times \{i\}) = \bar{w}(F_i)$. Moreover, $K =
\bar{w}^{-1}(0)$ and the orientation of $K$ is defined as the self
intersection of $H$ if perturbed in the direction of $\bar{w}$.
Define $\bar{t}$ to be a trivialization of the $\varepsilon^1_{M \times
I}$ component of the bundle $\nu_H \oplus \varepsilon^1_{M \times I}$ and
let $\bar{v} = \bar{w} + \bar{t}$. Then $\bar{t}$ and $\bar{v}$
are linearly dependent exactly at the points of $K$. Note that
$t(f_i) = \tau(f_i)^{-1} \circ (\varphi_i \oplus
\text{id}_{\varepsilon^1})(\bar{t}|M^3 \times \{i\})$ and $v(F_i) =
\tau(f_i)^{-1} \circ (\varphi_i \oplus
\text{id}_{\varepsilon^1})(\bar{v}|M^3 \times \{i\})$. Thus we only have
to define a trivialization $\tau \colon \varepsilon^3_{M \times I} \to
\nu_H \oplus \varepsilon^1_{M \times I}$ such that
\begin{equation} \label{eqn:2}
\tau|(M^3 \times \{i\}) = \varphi_i^{-1} \circ \tau(f_i)
\,\,\,\,\text{for}\,\,\,\, i = 0,1.
\end{equation}
The spin structure $s_M \in \text{Spin}(M^3)$ and the unique spin
structure on $I$ define a spin structure on $M^3 \times I$.
Together with the unique spin structure of $\mathbb R^6$ we get a spin
structure $s_H$ on $\nu_H$. When $s_H$ is restricted to $M^3
\times \{i\}$ we get back the spin structure $s_M$. Thus $s_H$
defines a trivialization $$\tau_H \colon \varepsilon^3_{M \times
I}|\text{sk}_2(M^3 \times I) \to (\nu_H \oplus \varepsilon^1_{M \times
I})|\text{sk}_2(M^3 \times I)$$ satisfying equation \ref{eqn:2}
over the 2-skeleton of $M^3 \times \{0,1\}$. Note that the
trivialization $\tau(f_i)$ is only well defined over
$\text{sk}_2(M^3)$ and that we can choose an arbitrary extension
over $M^3$ in order to define the rotation difference. Thus we
only have to extend $\tau_H$ to a trivialization $\tau$ of $\nu_H
\oplus \varepsilon^1_{M \times I}$ and then define $\tau(f_i)$ by formula
\ref{eqn:2}.
First we extend $\tau_H$ to $\text{sk}_3(M^3 \times I) \setminus
\text{sk}_3(M^3 \times \{1\})$. This is possible since the
obstruction to extending the trivialization over a 3-simplex from
its boundary lies in $\pi_2(SO(3)) = 0$. If $\sigma^3$ is a
3-simplex of $M^3$ then we can extend $\tau_H$ to $\sigma^3 \times
I$ since it is given only on $\partial (\sigma^3 \times I)
\setminus (\sigma^3 \times \{1\})$. Thus we have obtained the
required extension $\tau$ of $\tau_H$.
\end{proof}
\section{Connected sums and completeness of the invariant $i$}
\begin{lem} \label{lem:2}
If $f \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)_{\chi}$ and $g \in \mathop{\textrm{Imm}}\nolimits(S^3,
\mathbb R^5)$ then $c(f \# g) = c(f)$, in particular $e(\nu_{f \# g})
= e(\nu_f)$. Moreover
$$i(f \# g) = i(f) + \left(i(g) \mod 2d(\chi)\right) \in
\mathbb{Z}_{2d(\chi)}.
$$
\end{lem}
\begin{proof}
Since $c(f)$ describes the regular homotopy class of
$f|(M^3_{\circ})$ it is trivial that $c(f \# g) = c(f)$. Let $F$
be a singular Seifert surface of $f$ and $G$ of $g$ such that $G$
is an immersion near the boundary. Then the result follows by
inspecting the boundary connected sum $F \natural G$ and the fact
that $C(G) = \emptyset$.
\end{proof}
\begin{thm}
Suppose that the immersions $f_0, f_1 \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)$ are
regularly homotopic on $M^3 \setminus D$, where $D \subset M^3$ is
diffeomorphic to the closed disc $D^3$ (i.e., $c(f_0) = c(f_1)$).
Then $i(f_0) = i(f_1)$ implies that $f_0$ is regularly homotopic
to $f_1$.
\end{thm}
\begin{proof}
The proof consists of two cases according to the value of
$d(\chi)$.
If $d(\chi) > 0$ then $i$ takes values in $\mathbb{Z}_{2d(\chi)}$
which is a finite group. Theorem \ref{thm:1} implies that there
are exactly $2d(\chi)$ regular homotopy classes with a fixed Wu
invariant $c$. Thus we only have to show that the invariant $i$
restricted to immersions with Wu invariant $c$ is an epimorphism
onto $\mathbb{Z}_{2d(\chi)}$. To this end choose an immersion $f
\in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)$ such that $c(f) = c$. In \cite{ESz} it is
shown that $i \colon \mathop{\textrm{Imm}}\nolimits[S^3, \mathbb R^5] \to \mathbb{Z}$ is a
bijection. Thus Lemma \ref{lem:2} implies that $c(f \# g) = c(f) =
c$ for every $g \in \mathop{\textrm{Imm}}\nolimits(S^3, \mathbb R^5)$, moreover $i \colon \{\, f
\#g \colon g \in \mathop{\textrm{Imm}}\nolimits(S^3, \mathbb R^5) \,\} \to
\mathbb{Z}_{2d(\chi)}$ is surjective.
If $d(\chi) = 0$ then $i$ maps into $\mathbb{Z}$. Using Smale's
lemma we can suppose that $f_0|(M^3 \setminus D) = f_1|(M^3
\setminus D)$. The normal bundles of $f_0|D$ and $f_1|D$ in
$\mathbb R^5$ are trivial, choose a trivialization for both of them.
Let $\tau_0$ be a non-zero normal field along $f_0|D$. Then
$\tau_0| \partial D$ considered in the trivialization of the
normal bundle of $f_1|D$ is a map $(\tau_0|
\partial D) \colon
\partial D \to S^1$. Since $\partial D$ is homeomorphic to $S^2$
and $\pi_2(S^1) = 0$ the normal field $\tau_0|\partial D$ can be
extended to a normal field $\tau_1$ of $f_1|D$. Thus $\tau_i$ is a
normal field of $f_i|D$ for $i = 0,1$ and $\tau_0|\partial D =
\tau_1 | \partial D$.
Next choose an oriented compact manifold $W^4_0$ with boundary
$M^3$. We push $D$ into the interior of $W^4_0$ fixing the
boundary $\partial D$ to obtain a 3-disc $D_1 \subset W^4_0$ so
that $\partial D = \partial D_1$ and $M^3_1 = (M^3 \setminus D)
\cup D_1$ is a smooth submanifold of $W^4_0$. If we throw out the
domain bounded by $D$ and $D_1$ in $W^4_0$ we obtain a
4-dimensional submanifold $W^4_1$ of $W^4_0$ with boundary
$M^3_1$. Clearly $W^4_0$ is diffeomorphic to $W^4_1$.
We can choose a generic map $F_0 \colon W^4_0 \to \mathbb R^5$ with
the following three properties:
\begin{enumerate}
\item $F_0 | M^3 = f_0$ and $F_0 | M^3_1 = f_1$ (where $M^3_1$ is
identified with $M^3$ by a diffeomorphism keeping $M^3 \setminus
D$ fixed).
\item $F_0$ is an immersion in a neighborhood of $D$ and $D_1$.
\item If $\kappa_0$ denotes the inner normal field of $D$ in
$W^4_0$ and $\kappa_1$ denotes the inner normal field of $D_1$ in
$W^4_1$ then $dF_0 \circ \kappa_0 = \tau_0$ and $dF_0 \circ
\kappa_1 = \tau_1$.
\end{enumerate}
Let $F_1 = F_0 | W^4_1$. Then (2) implies that $C(F_0) = C(F_1)
\subset M^3 \setminus D$, moreover $\nu(F_0) = \nu(F_1)$. In
particular, the normal Euler class of $f_0$ and $f_1$ coincide.
Thus $R(F_0) = R(F_1)$. Since $\sigma(W^4_0) = \sigma(W^4_1)$, we
get that $$0 = i(f_0)-i(f_1) = \#\Sigma^{1,1}(F_0|(W^4_0 \setminus
W^4_1)).$$
Choose diffeomorphisms $d_0 \colon S^3_+ \to D$ and $d_1 \colon
S^3_+ \to D_1$, where $S^3_+$ denotes the northern hemisphere of
$S^3$. Then the immersion $F_0 \circ d_i$ can be extended to an
immersion $f_i' \colon S^3 \looparrowright \mathbb R^5$ for $i = 0,1$
so that $f_0'|S^3_- = f_1'|S^3_-$. (This is possible since
$j^1(f_0)|\partial D = j^1(f_1)|\partial D.$) Now repeating the
same argument as above for $f_0'$ and $f_1'$, we obtain that
$$i(f_0')-i(f_1') = \#\Sigma^{1,1}(F_0|(W^4_0 \setminus
W^4_1)).$$ (Note that $\tau_0$ and $\tau_1$ have a common
extension over $S^3_-$.) Thus $i(f_0') -i(f_1') = 0$, so using
\cite{ESz} we get that $f_0'$ and $f_1'$ are regularly homotopic.
But this implies that there exists a regular homotopy between
$f_0'$ and $f_1'$ that is fixed on $S^3_-$ (see \cite{Juhasz2},
Lemma 3.33). So $f_0|D$ and $f_1|D$ are regularly homotopic
keeping the 1-jets on the boundary fixed, which completes the
proof that $f_0$ and $f_1$ are regularly homotopic.
\end{proof}
\begin{cor}
The map $$(c,i) \colon \mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^5] \to \coprod_{c \in
H^2(M^3; \mathbb{Z})} \mathbb{Z}_{4d(c)}$$ is a bijection.
\end{cor}
We get more structure on the set of regular homotopy classes of
immersions of oriented 3-manifolds into $\mathbb R^5$ if we endow it
with the connected sum operation. Let us introduce the notation
$$I(3,5) = \left\{\, [f] \colon [f] \in \mathop{\textrm{Imm}}\nolimits[M^3,\mathbb R^5] \,\,\text{for}\,\,
M^3 \,\,\text{oriented} \,\right\}.$$ Then $(I(3,5), \#)$ is a
semigroup whose structure is described in the following theorem.
\begin{thm}
Let $M^3_1$ and $M^3_2$ be oriented 3-manifolds. Then
\begin{equation} \label{eqn:3}
H^2(M^3_1 \# M^3_2; \mathbb{Z}) \approx H^2(M^3_1;
\mathbb{Z})\oplus H^2(M^3_2; \mathbb{Z}).
\end{equation}
If $f_i \in \mathop{\textrm{Imm}}\nolimits(M^3_i, \mathbb R^5)$ for $i = 1,2$ then
\begin{equation} \label{eqn:4}
c(f_1 \# f_2) = c(f_1) \oplus c(f_2) \in H^2(M^3_1 \# M^3_2;
\mathbb{Z}).
\end{equation}
Moreover, if $\chi_i$ denotes the normal Euler class of $f_i$ and
$\chi$ the normal Euler class of $f_1 \# f_2$ then $d(\chi) =
\gcd(d(\chi_1), d(\chi_2))$. Finally,
\begin{equation} \label{eqn:5}
i(f_1 \# f_2) = (i(f_1) \mod 2d(\chi)) + (i(f_2) \mod 2d(\chi))
\in \mathbb{Z}_{2d(\chi)},
\end{equation}
where $i(f_i) \in \mathbb{Z}_{2d(\chi_i)}$ for $i = 1,2$.
\end{thm}
\begin{proof}
Equation \ref{eqn:3} follows from the fact that $H^2(M^3_i;
\mathbb{Z}) \approx H^2(M^3_i \setminus D^3; \mathbb{Z})$ (see the
long exact sequence of the pair $(M^3_i, M^3_i \setminus D^3)$)
and the Mayer-Vietoris exact sequence for $M^3_1 \# M^3_2 = (M^3_1
\setminus D^3) \cup (M^3_2 \setminus D^3)$.
Equation \ref{eqn:4} can be seen from the description of $c(f_i)$
as the regular homotopy class of $f_i|\text{sk}_2(M^3)$. Since
$\chi = \chi_1 \oplus \chi_2$ the statement about $d(\chi)$ is
trivial.
Finally, equation \ref{eqn:5} is obtained by taking the boundary
connected sum $F_1 \natural F_2$ of singular Seifert surfaces
$F_1$ and $F_2$ for $f_1$, respectively $f_2$.
\end{proof}
\section{Immersions of $M^3$ into $\mathbb R^6$ with a normal field}
Let $\mathop{\textrm{Imm}}\nolimits_1(M^3, \mathbb R^6)$ denote the space of immersions of $M^3$
into $\mathbb R^6$ with a normal field $\nu$. Moreover, let
$\mathop{\textrm{Imm}}\nolimits_1[M^3, \mathbb R^6] = \pi_0(\mathop{\textrm{Imm}}\nolimits_1(M^3, \mathbb R^6))$ be the set of
regular homotopy classes of such immersions with normal fields. If
we fix a trivialization of $TM^3$ then Hirsch's theorem
\cite{Hirsch} implies that the natural map $\mathop{\textrm{Imm}}\nolimits_1(M^3, \mathbb R^6)
\to C(M^3, V_4(\mathbb R^6))$ is a weak homotopy equivalence.
For $f \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)$ let $\iota(f) \in \mathop{\textrm{Imm}}\nolimits_1(M^3,
\mathbb R^6)$ be the immersion $f$ with the constant normal field
defined by the sixth coordinate vector in $\mathbb R^6$. Thus $\iota$
is an embedding of $\mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)$ into $\mathop{\textrm{Imm}}\nolimits_1(M^3,
\mathbb R^6)$. As a special case of Hirsch's compression theorem we
have the following proposition.
\begin{prop}
$\iota_* \colon \mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^5] \to \mathop{\textrm{Imm}}\nolimits_1[M^3, \mathbb R^6]$ is a
bijection.
\end{prop}
\begin{proof}
The embedding $\mathbb R^5 \hookrightarrow \mathbb R^6$ induces an
embedding $V_3(\mathbb R^5) \hookrightarrow V_4(\mathbb R^6)$ and thus a
map $\psi \colon [M^3, V_3(\mathbb R^5)] \to [M^3, V_4(\mathbb R^6)]$ that
makes the following diagram commutative.
\[
\begin{CD}
\mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^5] @>\iota_*>>
\mathop{\textrm{Imm}}\nolimits_1[M^3, \mathbb R^6] \\
@VVV @VVV \\
[M^3, V_3(\mathbb R^5)] @>\psi>> [M^3, V_4(\mathbb R^6)].
\end{CD}
\]
By Hirsch's theorem the vertical arrows are bijections, thus it is
sufficient to prove that $\psi$ is also a bijection. To see this
consider the fibration $V_3(\mathbb R^5) \to V_4(\mathbb R^6) \to S^5$.
Then from the homotopy exact sequence of this fibration we get
that the homomorphism $\pi_i(V_3(\mathbb R^5)) \to
\pi_i(V_4(\mathbb R^6))$ is an isomorphism for $i \le 3$ and this
implies that $\psi$ is a bijection.
\end{proof}
The natural forgetful map $\varphi \colon \mathop{\textrm{Imm}}\nolimits_1(M^3, \mathbb R^6) \to
\mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^6)$ is a Serre fibration.
\begin{prop}
For any immersion $f \colon M^3 \looparrowright \mathbb R^6$ the
normal bundle $\nu_f$ is trivial.
\end{prop}
\begin{proof}
Since $M^3$ is spin the normal bundle $\nu_f$ is also spin, thus
it is trivial over the 2-skeleton of $M^3$. Such a trivialization
can be extended to the 3-simplices of $M^3$ because
$\pi_2(SO(3))=0$.
\end{proof}
So $\varphi$ is surjective and the fiber of $\varphi$ is homotopy
equivalent to $\Gamma(\nu_f) = C(M^3, S^2)$. Thus the end of the
homotopy exact sequence of $\varphi$ is as follows:
\[\pi_1(\mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^6)) \longrightarrow [M^3, S^2] \longrightarrow
\mathop{\textrm{Imm}}\nolimits_1[M^3, \mathbb R^6] \xrightarrow{\,\,\varphi_*} \mathop{\textrm{Imm}}\nolimits[M^3,
\mathbb R^6] \longrightarrow 0.\] By Hirsch's theorem there is a
bijection $\mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^6] \approx [M^3, V_3(\mathbb R^6)]$. Since
$V_3(\mathbb R^6)$ is 2-connected and $\pi_3(V_3(\mathbb R^6)) \approx
\mathbb{Z}_2$ we get from obstruction theory that $[M^3,
V_3(\mathbb R^6)] \approx H^3(M^3; \mathbb{Z}_2) \approx
\mathbb{Z}_2$. It is well known that for $f \in \mathop{\textrm{Imm}}\nolimits(M^3,
\mathbb R^6)$ the regular homotopy class of $f$ is determined by the
number of its double points $D(f)$ modulo 2. This gives a
geometric interpretation of the map $\varphi_*$: for $(f, \nu) \in
\mathop{\textrm{Imm}}\nolimits_1(M^3, \mathbb R^6)$ the regular homotopy invariant $\varphi_*(f,
\nu)$ is equal to $D(f)$ modulo 2.
How can we determine the value of $\varphi_* \circ \iota_*(g)$ for
a generic $g \in \mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^5)$? This question was answered
in \cite{Szucs2}; let us recall that result now. The
self-intersection set $A(g)$ of $g$ is a closed 1-dimensional
submanifold of $M^3$ and $g(A(g))$ is also a closed 1-dimensional
submanifold of $\mathbb R^5$. We say that a component $C$ of $g(A(g))$
is non-trivial if the double cover $g|g^{-1}(C) \colon g^{-1}(C)
\to C$ is non-trivial, i.e., if $g^{-1}(C)$ is connected. Let the
number of non-trivial components be denoted by $\delta(g)$. In
\cite{Szucs2} Sz\H{u}cs proved the following.
\begin{thm}
Suppose that $f \colon M^3 \to \mathbb R^6$ is a generic immersion and
that $\pi \colon \mathbb R^6 \to \mathbb R^5$ is a projection such that $g
= \pi \circ f$ is also a generic immersion. Then $$D(f) \equiv
\delta(g) \mod 2.$$
\end{thm}
Note that the immersions $f$ and $g$ above are regularly homotopic
in $\mathbb R^6$. Thus for any generic $g$ we have $(\varphi_*
\circ \iota_*)(g) = \delta(g) \bmod 2$; in short, $\varphi_* \circ \iota_* = \delta$.
Now we are going to determine the group $\pi_1(\mathop{\textrm{Imm}}\nolimits(M^3,
\mathbb R^6))$. Using Hirsch's theorem we get that it is isomorphic to
$\pi_1(C(M^3, V_3(\mathbb R^6))) = [SM^3, V_3(\mathbb R^6)]$. Here $SM^3$
denotes the suspension of $M^3$ and is a 4-dimensional CW complex.
The space $V_3(\mathbb R^6)$ is 2-connected and $\pi_3(V_3(\mathbb R^6))
\approx \mathbb{Z}_2$. Moreover, $\pi_4(V_3(\mathbb R^6)) \approx 0$.
This can be seen as follows: From the homotopy exact sequence of
the fibration $V_3(\mathbb R^6) \to V_4(\mathbb R^7) \to S^6$ we get that
$\pi_4(V_3(\mathbb R^6)) \approx \pi_4(V_4(\mathbb R^7))$. It was shown by
Paechter \cite{Paechter} that for $k \ge 4$ the isomorphism
$\pi_k(V_k(\mathbb R^{2k-1})) \approx 0$ holds if $k \equiv 0 \mod 4$.
Thus obstruction theory yields that $[SM^3, V_3(\mathbb R^6)] \approx
H^3(SM^3; \mathbb{Z}_2) \approx H^2(M^3; \mathbb{Z}_2)$.
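For ease of reference, we record the chain of identifications just established:
\[
\pi_1(\mathop{\textrm{Imm}}\nolimits(M^3, \mathbb R^6)) \approx [SM^3, V_3(\mathbb R^6)] \approx
H^3(SM^3; \mathbb{Z}_2) \approx H^2(M^3; \mathbb{Z}_2).
\]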
Putting together the above results we obtain the following
theorem.
\begin{thm} \label{thm:4}
The following sequence is exact: $$H^2(M^3; \mathbb{Z}_2)
\longrightarrow [M^3, S^2] \longrightarrow \mathop{\textrm{Imm}}\nolimits[M^3, \mathbb R^5]
\xrightarrow{\,\,\delta\,\,} \mathbb{Z}_2 \longrightarrow 0.
$$
\end{thm}
\begin{rem}
If we fix a trivialization of $TM^3$ then non-zero vector fields
(or equivalently, oriented 2-plane fields) on $M^3$ correspond to
maps $M^3 \to S^2$. Thus the set of homotopy classes of oriented
2-plane fields on $M^3$ is equal to $[M^3, S^2]$ which was
determined in Remark \ref{rem:3}. A geometric classification of
such oriented 2-plane fields, avoiding the use of a trivialization
of $TM^3$, was carried out by Gompf (see \cite{Gompf}, section 4).
A complete set of homotopy invariants, similar to those introduced
in our present paper, was obtained in \cite{Gompf}. Gompf's
result and the regular homotopy classification of immersions of
$M^3$ into $\mathbb R^5$ are related by Theorem \ref{thm:4}.
\end{rem}
\section*{Acknowledgements}
I would like to take this opportunity to thank Professor Andr\'as
Sz\H{u}cs for our long and helpful discussions and for reading
earlier versions of this paper.
\bibliographystyle{amsplain}
\section{Introduction}\label{chap:intro}
\begin{quote}
\hfill `All of mathematics is the study of symmetry' --- H.S.M.~Coxeter.
\end{quote}
The broadest possible setting in which to place the current work is in the study of \emph{mathematical symmetry}. This should practically go without saying, given the above quote, attributed to Coxeter by Roberts in \cite{Roberts2006}. Nevertheless, the algebraic and categorical structures we are concerned with had their origins in the concerted efforts of mathematicians and scientists to pin down the exact meaning of `symmetry', and to devise the best ways to encode and study it.
We could not hope to adequately recount the fascinating history here, but fortunately we can instead refer readers to the introduction of Lawson's monograph \cite{Lawson1998}, and also to the work of Hollings \cite{Hollings2012}. We pick up the story in the late nineteenth century, long after the discovery of group theory, when mathematicians were facing a plethora of new non-Euclidean geometries. Some of these had trivial automorphism groups, yet were intuitively highly symmetric. Two main approaches to capture this \emph{non-global} or \emph{partial symmetry} eventually emerged in the middle of the twentieth century: the \emph{inductive groupoids} of Charles Ehresmann \cite{Ehresmann1957,Ehresmann1960}, and the \emph{inverse semigroups} of Viktor Wagner \cite{Wagner1952,Wagner1953} and Gordon Preston \cite{Preston1954,Preston1954b}. Far from being rival approaches, the two theories were eventually united in one of the landmark results of inverse semigroup theory:
\begin{ESN}
The categories of inverse semigroups (with semigroup homomorphisms) and inductive groupoids (with inductive functors) are isomorphic.
\end{ESN}
This theorem contains too much information to fully unpack just now, but more will be said in Chapter \ref{chap:I}. The ESN Theorem was first formulated by Lawson in \cite[Theorem 4.1.8]{Lawson1998}, who named it after the three mathematicians who had contributed most to its development. We have already encountered Ehresmann. The second named author, Boris Schein, was largely responsible for establishing the explicit connection between inverse semigroups and inductive groupoids \cite{Schein1965}. The third, K.S.~Subramonian Nambooripad \cite{Nambooripad1979}, was credited for his categorical framework (albeit in a different context), wherein the theorem is couched in terms of a \emph{relationship between entire categories (including their morphisms)} of semigroups and groupoids, rather than simply showing how the algebraic and categorical structures are inter-definable.
The ESN Theorem has had many far-reaching consequences, both within the fields of inverse semigroups and inductive groupoids (see \cite{Lawson1998}), and also beyond. Before we comment on the latter, it is worth addressing the former, by confronting an elephant in the room:
\bit
\item Inverse semigroups are relatively simple mathematical objects. In what sense does it `help' to view them through the lens of groupoids, and category theory?
\eit
The precise answer to this question would again take us too far afield, but some brief comments should suffice for the time being. First, an inverse semigroup $S$ can in principle be completely understood in terms of its multiplication table. Each pair of elements $a,b\in S$ gives rise to the product $ab\in S$. This binary operation must be \emph{associative}, i.e.~satisfy the law $(ab)c=a(bc)$. Further, each element $a\in S$ must have a unique \emph{(semigroup-theoretic) inverse} in $S$, i.e.~an element $a'\in S$ satisfying $a=aa'a$ and $a'=a'aa'$. Uniqueness means that we can think of $a\mt a'$ as an additional (unary) operation of~$S$, and we generally write~$a'=a^{-1}$. Among other things, the ESN Theorem says that one can associate a groupoid $\G=\G(S)$ to such an inverse semigroup $S$. We will postpone the full definition of $\G$ for now, but two key points bear emphasising:
\bit
\item the morphisms of $\G$ are precisely the elements of $S$, and
\item the (partial) composition in $\G$ is a restriction of the (total) product in $S$.
\eit
Roughly speaking, the groupoid $\G$ `remembers' only the `easy' products of $S$, and the ESN Theorem tells us that (combined with some order-theoretic data) these easy products are enough to reconstruct the entire multiplication table of $S$. (The exact meaning of `easy' will be explored a little below, and in more detail in Sections \ref{sect:RSS} and \ref{sect:D}.) In a sense, the latter point encapsulates the deepest part of the theorem: Inductive groupoids are precisely the (abstract) categories $\G$ whose composition can be extended in a natural way to construct an inverse semigroup $S=S(\G)$. Moreover, the constructions $S\mt\G(S)$ and $\G\mt S(\G)$ are mutually inverse functors between the categories of inverse semigroups and inductive groupoids. This category isomorphism is not merely a convenient way to say that inverse semigroups and inductive groupoids are `one and the same', however. The fact that the isomorphism sends morphisms to morphisms means that a map $\phi:S\to S'$ between inverse semigroups is a semigroup homomorphism if and only if it is a (so-called) inductive functor $\G(\phi)=\phi:\G(S)\to\G(S')$. Less effort is required to show that $\phi$ is a functor, as opposed to a full semigroup homomorphism, as the former involves checking that
\[
(a\circ b)\phi = (a\phi)\circ(b\phi) \qquad\text{for \emph{composable} pairs $a,b\in\G(S) \equiv S$,}
\]
and we emphasise that the \emph{composable pairs} in $\G$ are precisely the \emph{easy products} in $S$.
The ESN Theorem has had a major impact outside of inverse semigroup theory, with one particularly fruitful direction being the application of inverse semigroups to C$^*$-algebras. For example, the articles \cite{Paterson2002,Li2012,Li2017,LRRW2018, BLS2017,MS2014,KS2002, FMY2005,Lawson2012,BEM2017} display a mixture of semigroup-theoretic and groupoid-based approaches. For more extensive discussions, and many more references, see Lawson's survey \cite{Lawson2020} and Paterson's monograph \cite{Paterson1999}.
Another way to measure the influence of the ESN Theorem is to consider its extensions and generalisations. When faced with an important class of semigroups, one naturally looks for an `ESN-type link' with an equally-natural class of categories, and vice versa. Thus, we have ESN-type theorems for regular semigroups, restriction semigroups, Ehresmann semigroups, concordant semigroups, and others; see for example \cite{Wang2016,Lawson2004,DP2018,FitzGerald2019,GW2012,FitzGerald2010,Armstrong1988,Wang2020,Wang2019 ,Nambooripad1979 ,Lawson1991,Lawson2021 ,Hollings2012,GH2010,Gould2012,Hollings2010}. As with other `dualities' in mathematics \cite{CD1998,Stone1936,Birkhoff1937,Pontrjagin1934a,Pontrjagin1934b,Priestley1970,Priestley1972}, such correspondences allow problems in one field to be translated into another, where they can hopefully be solved, and the solution then reinterpreted in the original context.
Arguably the most significant of the papers just cited was Nambooripad's~1979 memoir~\cite{Nambooripad1979}. As well as pioneering the categorical approach (which led to the `N' in `ESN'), this paper majorly extended the ESN Theorem to the class of \emph{regular} semigroups. These are semigroups in which every element has \emph{at least one} inverse. Dropping the uniqueness restriction on inverses leads to a far more general class of semigroups, which contains many additional natural examples, including semigroups of mappings, linear transformations, and more. The increase in generality of the semigroups led inevitably to an increase in the complexity of the categorical structures modelling them. Although Nambooripad still represented a regular semigroup $S$ by an ordered groupoid $\G=\G(S)$, the groupoid $\G$ was not enough to completely recover the semigroup; an additional layer of structure was required. The need for this extra data is due to the fact that the object set of $\G$ is not a semilattice (as for inverse semigroups), but instead a \emph{(regular) biordered set}: a partial algebra with a pair of intertwined pre-orders satisfying a fairly complex set of axioms. The increase in generality also led to the sacrifice of the category \emph{isomorphism}. Nambooripad's main result, \cite[Theorem~4.14]{Nambooripad1979}, is that the category of regular semigroups is \emph{equivalent} to a certain category of groupoids (which Nambooripad also called \emph{inductive}).
One way to understand this sacrifice is via the \emph{loss of symmetry} when moving from inverse to regular semigroups. As we discussed above, inverse semigroups were devised to model partial symmetries that were unrecognisable by groups. This is formalised in the Wagner--Preston Theorem, which states that any inverse semigroup is (isomorphic to) a semigroup of partial symmetries of some mathematical structure; see \cite[Theorem 1.5.1]{Lawson1998}. But an inverse semigroup is itself an extremely symmetrical mathematical structure in its own right, so much so that the canonical proof of the Wagner--Preston Theorem involves showing that an inverse semigroup is (isomorphic to) a semigroup of partial symmetries of \emph{itself}! This `internal symmetry' is manifested in the properties of the inversion operation $a\mt a^{-1}$, specifically the fact that this is an \emph{involution}, i.e.~satisfies the familiar laws
\begin{equation}\label{eq:I1}
(a^{-1})^{-1} = a \AND (ab)^{-1} = b^{-1}a^{-1},
\end{equation}
beyond the defining property that $a$ and $a^{-1}$ are mutual inverses:
\begin{equation}\label{eq:I2}
a=aa^{-1}a \AND a^{-1}=a^{-1}aa^{-1}.
\end{equation}
Inverse semigroups actually satisfy an additional law:
\begin{equation}\label{eq:I3}
aa^{-1}\cdot bb^{-1} = bb^{-1}\cdot aa^{-1},
\end{equation}
and this is in fact equivalent to the commutativity of idempotents. This in turn means that the set $E=E(S)$ of idempotents of an inverse semigroup $S$ forms a semilattice, with the (order-theoretic) meet of two idempotents being their (semigroup) product: $e\wedge f=ef$. This then feeds into the definition of the groupoid $\G=\G(S)$. The object set of $\G$ is $E$, and each element $a\in S$ is thought of as a morphism from a `domain idempotent' $\bd(a)=aa^{-1}$ to a `range idempotent' $\br(a)=a^{-1}a$. Beyond the obvious law
\begin{equation}\label{eq:a}
\bd(a)\cdot a = a = a\cdot \br(a),
\end{equation}
which follows immediately from~\eqref{eq:I2}, the key point making everything work is that
\begin{equation}\label{eq:ababab}
\br(a)=\bd(b) \IMPLIES \bd(ab)=\bd(a) \text{ \ and \ } \br(ab)=\br(b).
\end{equation}
This allows us to define the composition
\begin{equation}\label{eq:abab}
a\circ b=ab \qquad\text{when $\br(a)=\bd(b)$.}
\end{equation}
These are the `easy' products alluded to above. Going in the reverse direction, the semilattice structure of the object set $E$ of an inductive groupoid $\G$ is crucial in extending the (partial) composition to a (total) product. Given two morphisms $a,b\in\G$, the range $e=\br(a)$ and domain $f=\bd(b)$ have a meet (greatest lower bound) in $E$: $g=e\wedge f$. The ordered structure of $\G$ means that we have \emph{(right and left) restrictions} $a' = a\rest_g$ and $b' = {}_g\corest b$, and since $\br(a') = g = \bd(b')$, these are composable in $\G$, so that we can define
\[
a\pr b = a' \circ b' = a\rest_g \circ {}_g\corest b,
\]
and in this way reconstruct the entire structure of $S=S(\G)$. The operation $\pr$ is often called the \emph{pseudo-product} in the literature.
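A concrete model may help to fix ideas; the following illustration, in the symmetric inverse monoid, is included only for orientation, and uses nothing beyond standard facts about partial bijections. Let $\mathcal I_X$ denote the symmetric inverse monoid of all partial bijections of a set $X$, with maps written to the right of their arguments and composed left to right. For $a\in\mathcal I_X$ with domain $\mathrm{dom}(a)$ and image $\mathrm{im}(a)$ we have
\[
\bd(a) = aa^{-1} = \mathrm{id}_{\mathrm{dom}(a)} \AND \br(a) = a^{-1}a = \mathrm{id}_{\mathrm{im}(a)},
\]
so $a$ and $b$ are composable in $\G(\mathcal I_X)$ precisely when $\mathrm{im}(a) = \mathrm{dom}(b)$. For arbitrary $a,b\in\mathcal I_X$ the meet $g = \br(a)\wedge\bd(b)$ is the identity map on $Y = \mathrm{im}(a)\cap\mathrm{dom}(b)$; the restriction $a\rest_g$ is $a$ restricted to the preimage $Ya^{-1}$, the restriction ${}_g\corest b$ is $b$ restricted to $Y$, and the pseudo-product $a\pr b$ is simply the usual composite of the partial maps $a$ and $b$.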
The major difficulty in extending the above ideas to arbitrary regular semigroups is in the loss of uniqueness of inverses, and hence in the possible expansion of the available domain and range idempotents. Indeed, if an element $a$ of a regular semigroup $S$ possesses distinct inverses ${a',a''\in S}$, then one is faced with a dilemma: whether to think of $a$ as a morphism ${aa'\to a'a}$ or ${aa''\to a''a}$, and in general it is quite possible to have $aa'\not=aa''$ and/or $a'a\not=a''a$. Thus, Nambooripad was led to consider the pairs $(a,a')$ and $(a,a'')$ as separate morphisms, essentially splitting each semigroup element into several morphisms, one for each inverse, and then considering equivalence-classes of such pairs. This also necessitated a far more elaborate pseudo-product. Given semigroup elements $a,b\in S$, one would like to identify suitable domain and range idempotents, $e=\br(a)$ and $f=\bd(b)$, and then to find composable restrictions $a\rest_g$ and ${}_g\corest b$ (with $g$ `below' $e$ and $f$) in order to define $a\pr b = a\rest_g \circ {}_g\corest b$, as above. This is not generally possible, however, and one inevitably has to work with the biordered sets alluded to above. Specifically, Nambooripad located $g$ in the so-called \emph{sandwich set} $\mathcal S(e,f)$. The restrictions $a\rest_g$ and ${}_g\corest b$ still may or may not exist, but $a\rest_{eg}$ and ${}_{gf}\corest b$ do. These latter restrictions still may or may not be composable, but it is possible to find a distinguished element $x$ for which the composition
\[
a\rest_{eg} \circ x\circ {}_{gf}\corest b
\]
does exist, and can be taken as the definition of the pseudo-product $a\pr b$. This element $x$ has the form
\[
x = \ve(eg,g)\circ\ve(g,gf) = \ve(eg,g,gf),
\]
where $\ve$ is a special functor into $\G$ from a certain category of `$E$-chains'. In the end, Nambooripad's `groupoids' are in fact \emph{pairs} $(\G,\ve)$, where $\G$ is a special ordered groupoid with a biordered object set, and where $\ve$ is a special functor from the $E$-chain groupoid into $\G$. The ingredients~$\G$ and~$\ve$ are linked via a certain coherence condition, and a property of so-called \emph{singular squares} in the biordered set. It would be impossible to adequately cover all the details and subtleties of Nambooripad's work, so instead we refer the reader to the survey \cite{MAR2021}. In any case, the step up in complexity from the inverse case should be apparent from our brief treatment.
Let us now take a small step back. Recall that inverse semigroups are the semigroups with a unary operation $a\mt a^{-1}$ satisfying the laws \eqref{eq:I1}, \eqref{eq:I2} and \eqref{eq:I3}. Associated to such an inverse semigroup $S$, one can define a groupoid $\G=\G(S)$ with morphism set $S$, and with composition~\eqref{eq:abab}. As it happens, law \eqref{eq:I3} is not a necessary ingredient in the construction of the groupoid $\G$. In other words, if $S$ is a semigroup satisfying laws \eqref{eq:I1} and \eqref{eq:I2}, then the above procedure works without modification to produce a groupoid $\G=\G(S)$. If law \eqref{eq:I3} does not hold, then the groupoid may or may not be inductive, but it still has many remarkable properties shared by inductive groupoids. This observation is, in essence, the starting point for the current work.
Semigroups satisfying the laws \eqref{eq:I1} and \eqref{eq:I2} are precisely the \emph{regular $*$-semigroups} of Nordahl and Scheiblich \cite{NS1978}. (These semigroups are not to be confused with the \emph{$*$-regular semigroups} of Drazin \cite{Drazin1979}, and are known by some other names in the literature; for example, they are called \emph{special $*$-semigroups} in \cite{NP1985}.) As the terminology suggests, the involution on such a semigroup is generally denoted $a\mt a^*$ (reserving $a^{-1}$ for inverse semigroups), and this must satisfy the laws
\[
(a^*)^* = a = aa^*a \AND (ab)^* = b^*a^*.
\]
The class of regular $*$-semigroups has received a great deal of attention in recent years, partly because of the prototypical examples of the so-called \emph{diagram semigroups}. These semigroups include families such as the Brauer, Temperley-Lieb, Kauffman and partition monoids, and have their origins and applications in a wide range of mathematical and scientific disciplines, from representation theory, low-dimensional topology, statistical mechanics, and many more; see for example \cite{LF2006,FL2011,KV2019,ACHLV2015,Auinger2014,Auinger2012,KM2006,
BDP2002,Jones1994_2,Jones1987,Jones1994_a,Martin1996,Martin1994,Martin2015,EMRT2018,ER2020,ER2022b,ER2022c, MM2007,Maz2002,Wenzl1988,LZ2015,HR2005,Wilcox2007,BH2014,BH2019,LZ2012,DEEFHHLM2019,DEEFHHL2015, MR1998,MM2014,Brauer1937,TL1971,Kauffman1987,Kauffman1990,Jones1983_2,EG2017}. These monoids have provided a strong bridge between semigroup theory and these other branches of mathematics; for a fuller discussion of this fruitful dialogue, and for many more references, see the introduction to~\cite{EG2017}.
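For a quick orientation, we record some elementary examples. Every inverse semigroup is a regular $*$-semigroup with $a^* = a^{-1}$; in particular, in any group we may take $a^* = a^{-1}$, and then
\[
aa^*a = aa^{-1}a = a \AND (ab)^* = (ab)^{-1} = b^{-1}a^{-1} = b^*a^*,
\]
while any semilattice is a regular $*$-semigroup with $a^* = a$. In the diagram monoids mentioned above, the involution $a\mt a^*$ is given by reflecting a diagram in its horizontal axis, and the same laws can be checked directly from the diagrammatic multiplication; see the discussion of partition monoids in Chapter \ref{chap:prelim}.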
As we will see, regular $*$-semigroups occupy something of a `sweet spot' between the classes of inverse and regular semigroups. On the one hand, their involution provides the `internal symmetry' possessed by inverse semigroups, allowing for the natural groupoid representations discussed above. On the other hand, the non-commutativity of idempotents (i.e.~the dropping of law \eqref{eq:I3}) leads to a much richer idempotent structure, which in turn means that the groupoid representation is not faithful, and necessitates a more intricate approach when defining the pseudo-product.
In a way, this `balance' can be summed up in an observation about the proliferation of inverses. In an inverse semigroup, the inverse `picks itself', in the sense that each element has \emph{exactly one}. In a regular semigroup there is some degree of `chaos'; while inverses are guaranteed to exist, there could be an over-abundance (there even exist regular ($*$-)semigroups in which every element is an inverse of every other element). In regular $*$-semigroups, inverses are not necessarily unique, but the involution `picks one for you'. The involution also leads to a canonical way to `pick' domain and range idempotents; for $a\in S$, one can take
\begin{equation}\label{eq:dra}
\bd(a) = aa^* \AND \br(a) = a^*a.
\end{equation}
Such idempotents have the additional property that they are invariant under the involution, as for example $(aa^*)^* = (a^*)^*a^* = aa^*$. Such an idempotent is called a \emph{projection}, and the set of all such projections is denoted
\[
P = P(S) = \set{p\in S}{p^2=p=p^*}.
\]
This is of course contained in the set
\[
E = E(S) = \set{e\in S}{e^2=e}
\]
of \emph{all} idempotents, but we do not have equality in general; in fact, it is well known (and easy to see) that the regular $*$-semigroup $S$ is inverse if and only if $E=P$. Although $P=P(S)$ is not a subsemigroup of the regular $*$-semigroup $S$ in general, it can be given an algebraic structure via the `conjugation action' of projections on each other. Each element $p\in P$ determines a \emph{unary operation}
\[
\th_p:P\to P \GIVENBY q\th_p = pqp \qquad\text{for $q\in P$.}
\]
These so-called \emph{projection algebras} have been used by Imaoka (under a different name) and Jones (in a different, but equivalent, form) in \cite{Imaoka1983} and \cite{Jones2012}, and are the appropriate `$*$-analogues' of semilattices in the theory of regular $*$-semigroups. (Semilattices are indeed a special case, wherein we define ${e\th_f=e\wedge f}$.) In particular, the groupoid $\G=\G(S)$ associated to a regular $*$-semigroup~$S$ has the projection algebra $P=P(S)$ for its object set. Moreover, \eqref{eq:a} and \eqref{eq:ababab} both hold (with respect to the domains and ranges given in \eqref{eq:dra}), allowing us to define the composition in $\G$ exactly as in~\eqref{eq:abab}. Conversely, in attempting to define a pseudo-product~$\pr$ from such a `projection groupoid' $\G$, one first identifies the range $p = \br(a)$ and domain $q = \bd(b)$, calculates two further projections $p'=q\th_p$ and $q'=p\th_q$ (which collectively play the role of a meet $p\wedge q$, which may or may not exist), and then forms the restrictions $a\rest_{p'}$ and~${}_{q'}\corest b$. These are still not composable in general, as we do not always have $p'=q'$, but there always exists a special morphism $e:p'\to q'$, allowing us to define
\[
a\pr b = a\rest_{p'} \circ e \circ {}_{q'}\corest b.
\]
This morphism $e$ is again in the image of a special functor $\ve:\C\to\G$, where here $\C=\C(P)$ is a certain `chain groupoid' associated to the projection algebra $P$. This all results in what we call a \emph{chained projection groupoid}, which is a pair $(\G,\ve)$, consisting of:
\bit
\item an ordered groupoid $\G$, whose object set $P$ is an (abstract) projection algebra, with strong links between the categorical and algebraic structures of $\G$ and $P$, and
\item a functor $\ve:\C=\C(P)\to\G$ obeying a natural coherence condition.
\eit
Roughly speaking, our main result (Theorem \ref{thm:iso}) states that this construction is invertible, and furnishes an isomorphism between the categories of regular $*$-semigroups and (abstract) chained projection groupoids. Specialising to the inverse case, we obtain the ESN Theorem as a corollary.
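To indicate how the inverse case sits inside this framework (we sketch this here only for orientation), suppose that $S$ is inverse. Then $P = E$, and for projections $p,q$ (which are now simply idempotents) we have
\[
p' = q\th_p = pqp = pq = p\wedge q \AND q' = p\th_q = qpq = qp = p\wedge q,
\]
since idempotents commute. Thus $p' = q'$, the connecting morphism $e$ may be taken to be the identity at $p\wedge q$, and the pseudo-product reduces to the formula $a\pr b = a\rest_{p\wedge q}\circ{}_{p\wedge q}\corest b$ for inductive groupoids recalled earlier.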
We emphasise, however, that Theorem \ref{thm:iso} is not a specialisation of Nambooripad's result on regular semigroups \cite{Nambooripad1979}. In fact, such a specialisation would invariably lead to a category \emph{equivalence} (to a very different category of groupoids) rather than an isomorphism, as inverses are generally not unique in regular $*$-semigroups.
Any further discussion would necessitate additional technicalities that are undesirable at this stage. So instead, let us now give a brief summary of the structure of the article. After this introductory chapter, we begin in Chapter \ref{chap:prelim} with preliminaries on (regular $*$-)semigroups and ordered categories, as well as a fairly detailed discussion of the special case of partition monoids. The paper then splits into two main parts:
\bit
\item Part \ref{part:structure} is devoted to the proof of our main result, Theorem~\ref{thm:iso}, which establishes the isomorphism between the categories of regular $*$-semigroups and chained projection groupoids.
\item Part \ref{part:applications} then applies the machinery developed in the first part to (free) idempotent-generated regular $*$-semigroups, fundamental regular $*$-semigroups and inverse semigroups.
\eit
Part \ref{part:structure} contains Chapters \ref{chap:P}--\ref{chap:iso}, which build towards our main structural result, stated in Theorem~\ref{thm:iso}.
\bit
\item Chapter \ref{chap:P} begins with the most basic building block, the so-called \emph{projection algebras}. In particular, we show how to build a number of categorical structures on top of an abstract projection algebra $P$, culminating in the construction of the \emph{chain groupoid} $\C=\C(P)$.
\item In Chapter \ref{chap:G} we turn to the class of \emph{projection groupoids}. These are ordered groupoids $\G$, whose object set $P$ is a projection algebra, with strong links between the structures of $\G$ (as a groupoid) and $P$ (as an algebra). The key idea is that of a \emph{chained projection groupoid}, which is a pair $(\G,\ve)$ consisting of a projection groupoid $\G$, and a special functor $\ve:\C\to\G$ called an \emph{evaluation map}, where here $\C=\C(P)$ is the chain groupoid of the projection algebra~$P$. The main result of the chapter is Theorem \ref{thm:SGve}, which shows how to construct a regular $*$-semigroup $S=\bS(\G,\ve)$ from a chained projection groupoid $(\G,\ve)$.
\item Chapter \ref{chap:S} goes in the opposite direction, and shows how a regular $*$-semigroup $S$ gives rise to a chained projection groupoid $(\G,\ve)=\bG(S)$; see Theorem \ref{thm:GveS}. In a sense, this breaks down the structure of $S$ into simpler parts: the groupoid $\G$ remembers only the `easy products' in~$S$; the projection algebra $P$ remembers the projections of $S$ and their conjugation action; and the evaluation map~$\ve$ tells us how these parts fit together. This decomposition can be thought of as a \emph{structure theorem} for arbitrary regular $*$-semigroups (see Remark \ref{rem:structure}); as far as we are aware it is the first of its kind.
\item We then bring these ideas together in Chapter \ref{chap:iso} in order to complete the proof of Theorem~\ref{thm:iso}. This involves showing that the $\bS$ and $\bG$ constructions are in fact mutually inverse functors between the categories of regular $*$-semigroups and chained projection groupoids.
\eit
Part \ref{part:applications} contains Chapters \ref{chap:E}--\ref{chap:I}, which comprise a number of applications of the theory developed in Part \ref{part:structure}.
\bit
\item Chapter \ref{chap:E} concerns \emph{idempotent-generated} regular $*$-semigroups. The main construction here is the so-called \emph{chain semigroup} associated to an arbitrary (abstract) projection algebra. These semigroups enjoy several categorical `free-ness' properties, as described in Theorems~\ref{thm:IG}, \ref{thm:free} and~\ref{thm:adjoint}. Theorem \ref{thm:pres} gives a presentation by generators and relations.
\item We then consider \emph{fundamental} regular $*$-semigroups in Chapter \ref{chap:F}. These have been previously studied by a number of authors \cite{Imaoka1980,Imaoka1983,Yamada1981,NP1985}, but we show how our general machinery leads to a new way to build these semigroups; see Theorems \ref{thm:SOP} and \ref{thm:fund}. In Theorem \ref{thm:muCP} we show that (up to isomorphism) there is a unique idempotent-generated fundamental regular $*$-semigroup with a given (abstract) projection algebra, and show how to construct it.
\item Finally, in Chapter \ref{chap:I} we show how the entire theory simplifies in the case of \emph{inverse} semigroups. As an important application, we give a new proof of the Ehresmann--Schein--Nambooripad Theorem, stated above.
\eit
Throughout the paper, we consider a number of examples, which serve both to illustrate the general theory, and also to highlight some of the subtleties that arise. For the same reasons, at times we will compare our constructions and results with the regular and inverse cases. We also pose a number of open problems, which we believe are worthy of further study.
\section{Preliminaries}\label{chap:prelim}
In this chapter we gather the preliminary definitions and basic results we require in the rest of the paper, and establish notation. We begin in Section \ref{sect:S} with the main ideas from semigroup theory, and then in Section \ref{sect:C} we discuss (ordered) categories and groupoids. Section~\ref{sect:RSS} contains some basic background on regular $*$-semigroups, as well as a foreshadowing of various ideas that will occur throughout the paper. This material is included to motivate the more abstract results of later chapters. Finally, Section \ref{sect:D} concerns a particularly important concrete class of regular $*$-semigroups, namely the partition monoids. This is again included to give more motivation for our later ideas and results, and to show how these can be understood diagrammatically for these monoids. We also comment on a number of known results, and show how they can be interpreted through the groupoid lens.
For further general background, we refer the reader to texts such as \cite{Howie1995,CPbook} for semigroups,~\cite{Lawson1998} for inverse semigroups, \cite{MacLane1998,Awodey2010} for categories, and \cite{BS1981} for universal algebra, though the latter is only used tangentially.
\subsection{Semigroups}\label{sect:S}
A \emph{semigroup} is a set with an associative binary operation, which will typically be denoted by juxtaposition. A \emph{monoid} is a semigroup with an identity element. As usual we write $S^1$ for the \emph{monoid completion} of the semigroup $S$. So $S^1=S$ if $S$ happens to be a monoid; otherwise, $S^1=S\cup\{1\}$, where $1$ is a symbol not belonging to $S$, acting as an adjoined identity element.
\emph{Green's relations} are five equivalence relations, defined on an arbitrary semigroup $S$ as follows~\cite{Green1951}. First, for $a,b\in S$ we have
\[
a\mathrel{\mathscr R} b \iff aS^1=bS^1 \COMMA a\L b \iff S^1a=S^1b \COMMA a\mathrel{\mathscr J} b \iff S^1aS^1=S^1bS^1.
\]
Note that $a\mathrel{\mathscr R} b$ precisely when either $a=b$ or else $a=bx$ and $b=ay$ for some $x,y\in S$; similar comments apply to $\L$ and $\mathrel{\mathscr J}$. The remaining two of Green's relations are defined by
\[
{\H} = {\mathrel{\mathscr R}}\cap {\L} \AND {\mathrel{\mathscr D}} = {\mathrel{\mathscr R}}\vee{\L},
\]
where the latter is the join in the lattice of equivalences on $S$, i.e.~the least equivalence containing~${\mathrel{\mathscr R}}\cup{\L}$. It is well known that ${\mathrel{\mathscr D}}={\mathrel{\mathscr R}}\circ{\L}={\L}\circ{\mathrel{\mathscr R}}$, and that ${\mathrel{\mathscr D}}={\mathrel{\mathscr J}}$ if $S$ is finite. We denote the $\mathrel{\mathscr R}$-class of $a\in S$ by
\[
R_a = \set{b\in S}{a\mathrel{\mathscr R} b},
\]
and similarly for $\L$-classes $L_a$, and so on.
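Although nothing in the sequel depends on it, these relations are easy to experiment with computationally. The following Python sketch (purely illustrative; the encoding and function names are ours) computes Green's relations for a finite semigroup given by a multiplication function.
\begin{verbatim}
# Illustrative sketch: Green's relations on a finite semigroup, given as a
# list S of elements and a (total) multiplication function mul(a, b).

def right_ideal(a, S, mul):                  # aS^1 = {a} u aS
    return {a} | {mul(a, x) for x in S}

def left_ideal(a, S, mul):                   # S^1a = {a} u Sa
    return {a} | {mul(x, a) for x in S}

def green_R(a, b, S, mul):
    return right_ideal(a, S, mul) == right_ideal(b, S, mul)

def green_L(a, b, S, mul):
    return left_ideal(a, S, mul) == left_ideal(b, S, mul)

def green_H(a, b, S, mul):
    return green_R(a, b, S, mul) and green_L(a, b, S, mul)

def green_D(a, b, S, mul):                   # D = R o L in any semigroup
    return any(green_R(a, c, S, mul) and green_L(c, b, S, mul) for c in S)

# e.g. the two-element semilattice {0, 1} under ordinary multiplication
S = [0, 1]
mul = lambda a, b: a * b
assert not green_R(0, 1, S, mul)             # 0 and 1 lie in different R-classes
\end{verbatim}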
An element $a$ of a semigroup $S$ is \emph{regular} if $a=axa$ for some $x\in S$. The element $y=xax$ then has the property that $a=aya$ and $y=yay$; such an element $y$ is called an \emph{inverse} of $a$. We say $S$ is \emph{regular} if every element of $S$ is regular. The set of \emph{idempotents} of $S$ is denoted
\[
E(S) = \set{e\in S}{e=e^2}.
\]
Idempotents are of course regular. An \emph{inverse semigroup} is a semigroup in which every element has a unique inverse. It is well known that $S$ is inverse if and only if it is regular and its idempotents commute; in this case, $E(S)$ is a \emph{semilattice}, i.e.~a semigroup of commuting idempotents. In this paper we are concerned with a class of semigroups contained strictly between regular and inverse semigroups, the so-called \emph{regular $*$-semigroups} of Nordahl and Scheiblich \cite{NS1978}. We postpone their definition until Section~\ref{sect:RSS}.
A \emph{congruence} on a semigroup $S$ is an equivalence relation $\si$ that is compatible with the product of $S$, meaning that
\begin{align*}
a\mr\si b &\IMPLIES ax\mr\si bx \text{ and } xa\mr\si xb &&\text{for all $a,b,x\in S$,}
\intertext{or equivalently that}
a\mr\si b \text{ and } x\mr\si y &\IMPLIES ax\mr\si by &&\text{for all $a,b,x,y\in S$.}
\end{align*}
Given a congruence $\si$, the \emph{quotient semigroup} $S/\si$ consists of all $\si$-classes under the induced product, $[a][b]=[ab]$. The \emph{kernel} of a semigroup homomorphism $\phi:S\to T$ is the congruence
\[
\ker(\phi) = \set{(a,b)\in S\times S}{a\phi=b\phi}.
\]
The fundamental homomorphism theorem for semigroups says that the quotient $S/\ker(\phi)$ is isomorphic to $\im(\phi)$, the image of $S$ under $\phi$.
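For illustration only, the following sketch checks these facts for a small concrete choice of semigroup and homomorphism (multiplication modulo $6$, and reduction modulo $3$; the choice is ours and plays no role in what follows): the kernel is indeed a congruence, and the number of $\ker(\phi)$-classes agrees with $|\im(\phi)|$.
\begin{verbatim}
# Illustrative sketch: the kernel of a semigroup homomorphism, for
# S = {0,...,5} under multiplication mod 6 and phi(x) = x mod 3.
S = range(6)
mul = lambda a, b: (a * b) % 6
phi = lambda a: a % 3

kernel = {(a, b) for a in S for b in S if phi(a) == phi(b)}

# ker(phi) is compatible with the product on both sides ...
assert all((mul(a, x), mul(b, x)) in kernel and (mul(x, a), mul(x, b)) in kernel
           for (a, b) in kernel for x in S)

# ... and the number of classes of S/ker(phi) equals |im(phi)|
classes = {frozenset(b for b in S if (a, b) in kernel) for a in S}
assert len(classes) == len({phi(a) for a in S})
\end{verbatim}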
\subsection{Categories}\label{sect:C}
Unless stated otherwise, the categories we are concerned with are assumed to be \emph{small}. We typically identify a (small) category $\CC$ with its set of morphisms. We identify the objects of~$\CC$ with the identities, the set of which is denoted $v\CC$. We denote the domain and codomain (a.k.a.~range) of $a\in\CC$ by $\bd(a)$ and $\br(a)$, respectively. We compose morphisms left to right, so $a\circ b$ is defined if and only if $\br(a)=\bd(b)$, in which case $\bd(a\circ b)=\bd(a)$ and $\br(a\circ b)=\br(b)$. (Sometimes functors will be written to the left of their arguments, and so composed right to left; it should always be clear which convention is being used.) For $p,q\in v\CC$, we write $\CC(p,q) = {\set{a\in\CC}{\bd(a)=p,\ \br(a)=q}}$ for the set of all morphisms $p\to q$.
All the (small) categories we study will have an involution and an order, as made precise in the next two standard definitions.
\begin{defn}\label{defn:*cat}
A \emph{$*$-category} is a (small) category $\CC$ with an involution, i.e.~a map ${\CC\to\CC:a\mt a^*}$ satisfying the following, for all $a,b\in\CC$:
\begin{enumerate}[label=\textup{\textsf{(I\arabic*)}},leftmargin=10mm]
\item \label{I1} $\bd(a^*)=\br(a)$ and $\br(a^*)=\bd(a)$.
\item \label{I2} $(a^*)^*=a$.
\item \label{I3} If $\br(a)=\bd(b)$, then $(a\circ b)^*=b^*\circ a^*$.
\end{enumerate}
A \emph{groupoid} is a $*$-category for which we additionally have
\begin{enumerate}[label=\textup{\textsf{(I\arabic*)}},leftmargin=10mm]\addtocounter{enumi}{3}
\item \label{I4} $a\circ a^*=\bd(a)$ (and hence also $a^*\circ a=\br(a)$) for all $a\in\CC$.
\end{enumerate}
In a groupoid, we typically write $a^*=a^{-1}$ for $a\in\CC$.
\end{defn}
It is easy to show that $p^*=p$ for all $p\in v\CC$, when $\CC$ is a $*$-category. It is also clear that \ref{I1} is a consequence of \ref{I4}, so a groupoid is a category with a map $a\mt a^*$ satisfying~\ref{I2}--\ref{I4}.
\begin{defn}\label{defn:OC}
An \emph{ordered $*$-category} is a $*$-category $\CC$ equipped with a partial order $\leq$ satisfying the following, for all $a,b,c,d\in\CC$ and $p\in v\CC$:
\begin{enumerate}[label=\textup{\textsf{(O\arabic*)}},leftmargin=10mm]
\item \label{O1} If $a\leq b$, then $\bd(a)\leq\bd(b)$ and $\br(a)\leq\br(b)$.
\item \label{O2} If $a\leq b$, then $a^*\leq b^*$.
\item \label{O3} If $a\leq b$ and $c\leq d$, and if $\br(a)=\bd(c)$ and $\br(b)=\bd(d)$, then $a\circ c\leq b\circ d$.
\item \label{O4} For all $p\leq\bd(a)$, there exists a unique $u\leq a$ with $\bd(u)=p$.
\end{enumerate}
It is easy to see that \ref{O1}--\ref{O4} imply the following, which is a dual of \ref{O4}:
\begin{enumerate}[label=\textup{\textsf{(O\arabic*)$^*$}},leftmargin=10mm] \addtocounter{enumi}{3}
\item \label{O4*} For all $q\leq\br(a)$, there exists a unique $v\leq a$ with $\br(v)=q$.
\end{enumerate}
(Here we have $q\leq\bd(a^*)$, and if $w$ is the unique element with $w\leq a^*$ and $\bd(w)=q$, then we take $v=w^*$.)
The elements $u$ and $v$ in \ref{O4} and \ref{O4*} are denoted $u={}_p\corest a$ and $v=a\rest_q$, respectively, and called the \emph{left restriction of $a$ to $p$} and the \emph{right restriction of $a$ to $q$}. Some authors call $a\rest_q$ a \emph{restriction}, and ${}_p\corest a$ a \emph{co-restriction}; we prefer the left/right terminology, however, as we feel that it does not `prioritise' one over the other.
An \emph{ordered groupoid} is a groupoid with a partial order satisfying \ref{O1}--\ref{O4}. In fact, when~$\CC$ is a groupoid, \ref{O2}, \ref{O3} and \ref{I4} together imply \ref{O1}.
\end{defn}
It is easy to see that the object set $v\CC$ is an \emph{order ideal} in any ordered $*$-category $\CC$, meaning that the following holds:
\bit
\item For all $a\in\CC$ and $p\in v\CC$, $a\leq p \implies a\in v\CC$.
\eit
Indeed, if $a\leq p$, and if we write $q=\bd(a)\in v\CC$, then by \ref{O1} we have $q\leq\bd(p)=p$. But then $a,q\leq p$ and $\bd(a)=q=\bd(q)$, so by uniqueness in \ref{O4} we have $a=q\in v\CC$. It also follows from this that ${}_q\corest p=q$ for any $q\leq p$. We typically use facts such as these without explicit reference.
In what follows, it is typically more convenient to construct an ordered $*$-category $\CC$ by:
\bit
\item defining an order $\leq$ on the object set $v\CC$,
\item defining left restrictions ${}_p\corest a$, for $a\in\CC$ and $p\leq\bd(a)$,
\item specifying that $a\leq b$ (for morphisms $a,b\in\CC$) if $a$ is a restriction of $b$.
\eit
The next lemma axiomatises the conditions required to ensure that we do indeed obtain an ordered $*$-category in this way.
\begin{lemma}\label{lem:C}
Suppose $\CC$ is a $*$-category for which the following two conditions hold:
\ben
\item \label{*C1} There is a partial order $\leq$ on the object set $v\CC$.
\item \label{*C2} For all $a\in\CC$, and for all $p\leq\bd(a)$, there exists a morphism ${}_p\corest a\in\CC$, such that the following hold, for all $a,b\in\CC$ and $p,q\in v\CC$:
\begin{enumerate}[label=\textup{\textsf{(O\arabic*)$'$}},leftmargin=10mm]
\item \label{O1'} If $p\leq\bd(a)$, then $\bd({}_p\corest a)=p$ and $\br({}_p\corest a)\leq\br(a)$.
\item \label{O2'} If $p\leq\bd(a)$, and if $q=\br({}_p\corest a)$, then $({}_p\corest a)^*={}_q\corest a^*$.
\item \label{O3'} ${}_{\bd(a)}\corest a = a$.
\item \label{O4'} For all $p\leq q\leq\bd(a)$, we have ${}_p\corest{}_q\corest a = {}_p\corest a$.
\item \label{O5'} If $p\leq\bd(a)$ and $\br(a)=\bd(b)$, and if $q=\br({}_p\corest a)$, then ${}_p\corest(a\circ b) = {}_p\corest a\circ{}_q\corest b$.
\end{enumerate}
\een
Then $\CC$ is an ordered $*$-category with order given by
\begin{equation}\label{eq:aleqb}
a\leq b \IFF a={}_p\corest b \qquad\text{for some $p\leq\bd(b)$.}
\end{equation}
Moreover, any ordered $*$-category has the above form.
\end{lemma}
\pf
Beginning with the final assertion, suppose $\CC$ is an ordered $*$-category. Then $v\CC$ is a sub-poset of $\CC$, and hence~\ref{*C1} holds. For \ref{*C2}, we take ${}_p\corest a$ to be the morphism $u\leq a$ from~\ref{O4}, and~\ref{O1'}--\ref{O5'} are all easily checked. For example, to verify \ref{O2'}, suppose $a\in\CC$ and ${p\leq\bd(a)}$, and let $q=\br({}_p\corest a)$. Then since ${}_p\corest a\leq a$, it follows from \ref{O2} that $({}_p\corest a)^*\leq a^*$, and we have $\bd(({}_p\corest a)^*)=\br({}_p\corest a)=q$. But ${}_q\corest a^*$ is the unique element below $a^*$ with domain $q$, so in fact~${({}_p\corest a)^*={}_q\corest a^*}$.
Conversely, suppose conditions \ref{*C1} and \ref{*C2} both hold.
We first check that the relation $\leq$ in~\eqref{eq:aleqb} is a partial order. Reflexivity follows immediately from \ref{O3'}, and transitivity from~\ref{O4'}. For anti-symmetry, suppose $a\leq b$ and $b\leq a$, so that $a={}_p\corest b$ and $b={}_q\corest a$ for some $p\leq\bd(b)$ and $q\leq\bd(a)$. Then
\[
a = {}_p\corest b = {}_p\corest {}_q\corest a = {}_p\corest a \Implies \bd(a) = \bd({}_p\corest a) = p \leq \bd(b) \ANDSIM \bd(b) = q \leq \bd(a).
\]
It follows that $p=q=\bd(a)=\bd(b)$, and so $a = {}_p\corest b = {}_{\bd(b)}\corest b = b$. Now that we know $\leq$ is a partial order, we verify conditions \ref{O1}--\ref{O4}.
\pfitem{\ref{O1} and \ref{O2}} Suppose $a\leq b$, so that $a={}_p\corest b$ for some $p\leq\bd(b)$. Then \ref{O1'} gives
\[
\bd(a)=\bd({}_p\corest b) = p\leq\bd(b) \AND \br(a)=\br({}_p\corest b)\leq\br(b).
\]
We also have $a^* = ({}_p\corest b)^* = {}_q\corest b^*$ by \ref{O2'}, where $q=\br({}_p\corest b)$, so that $a^*\leq b^*$.
\pfitem{\ref{O3}} Suppose $a\leq b$ and $c\leq d$ are such that $\br(a)=\bd(c)$ and $\br(b)=\bd(d)$. So $a={}_p\corest b$ and $c={}_q\corest d$ for some $p\leq\bd(b)$ and $q\leq\bd(d)$. Since
\[
q = \bd({}_q\corest d) = \bd(c) = \br(a) = \br({}_p\corest b),
\]
it follows from \ref{O5'} that $a\circ c = {}_p\corest b\circ {}_q\corest d = {}_p\corest(b\circ d)$, so that $a\circ c \leq b\circ d$.
\pfitem{\ref{O4}} Given $p\leq\bd(a)$, we certainly have $u\leq a$ and $\bd(u)=p$, where $u={}_p\corest a$. For uniqueness, suppose also that $x\leq a$ for some $x\in\CC$ with $\bd(x)=p$. Since $x\leq a$, we have $x={}_q\corest a$ for some $q\leq\bd(a)$. But then $q=\bd({}_q\corest a) = \bd(x) = p$, so in fact $x={}_q\corest a={}_p\corest a=u$.
\epf
\begin{rem}\label{rem:dual}
The previous result referred only to left restrictions. Right restrictions can be defined from the left, using the involution:
\[
a\rest_q = ({}_q\corest a^*)^* \qquad\text{for $a\in\CC$ and $q\leq\br(a)$.}
\]
As explained in Definition \ref{defn:OC}, $a\rest_q$ is the unique element below $a$ with codomain $q$. In particular, the following holds in any ordered $*$-category:
\begin{enumerate}[label=\textup{\textsf{(O\arabic*)$'$}},leftmargin=10mm]\addtocounter{enumi}{5}
\item \label{O6'} If $a\in\CC$ and $p\leq\bd(a)$, and if $q=\br({}_p\corest a)$, then ${}_p\corest a = a\rest_q$.
\end{enumerate}
Each of \ref{O1'}--\ref{O6'} of course have duals. For example, the dual of \ref{O5'} says:
\bit
\item If $q\leq\br(b)$ and $\br(a)=\bd(b)$, and if $p=\bd(b\rest_q)$, then $(a\circ b)\rest_q=a\rest_p\circ b\rest_q$.
\eit
It is also worth noting that for $a,b\in\CC$ we have
\begin{align*}
a\leq b &\iff a={}_p\corest b &&\text{for some $p\leq\bd(b)$}\\
&\iff a=b\rest_q &&\text{for some $q\leq\br(b)$}\\
&\iff a={}_p\corest b=b\rest_q &&\text{for some $p\leq\bd(b)$ and $q\leq\br(b)$.}
\end{align*}
The $p,q\in P$ here are of course $p=\bd(a)$ and $q=\br(a)$.
\end{rem}
\newpage
\begin{defn}\label{defn:cong}
A \emph{$v$-congruence} on a category $\CC$ is an equivalence relation $\approx$ on $\CC$ satisfying the following, for all $a,b,u,v\in\CC$:
\begin{enumerate}[label=\textup{\textsf{(C\arabic*)}},leftmargin=10mm]
\item \label{C1} $a\approx b \implies [\bd(a)=\bd(b)$ and $\br(a)=\br(b)]$,
\item \label{C2} $a\approx b \implies u\circ a\approx u\circ b$, whenever the stated compositions are defined,
\item \label{C3} $a\approx b \implies a\circ v\approx b\circ v$, whenever the stated compositions are defined.
\end{enumerate}
If $\CC$ is a $*$-category, we say that $\approx$ is a \emph{$*$-congruence} if it satisfies \ref{C1}--\ref{C3} and the following, for all $a,b\in\CC$:
\begin{enumerate}[label=\textup{\textsf{(C\arabic*)}},leftmargin=10mm]\addtocounter{enumi}{3}
\item \label{C4} $a\approx b \implies a^*\approx b^*$.
\end{enumerate}
If $\CC$ is an ordered $*$-category, we say that $\approx$ is an \emph{ordered $*$-congruence} if it satisfies \ref{C1}--\ref{C4} and the following, for all $a,b\in\CC$ and $p\in v\CC$:
\begin{enumerate}[label=\textup{\textsf{(C\arabic*)}},leftmargin=10mm]\addtocounter{enumi}{4}
\item \label{C5} $[a\approx b$ and $p\leq\bd(a)] \implies {}_p\corest a \approx {}_p\corest b$.
\end{enumerate}
\end{defn}
Given a $v$-congruence $\approx$ on a category $\CC$, the quotient category $\CC/{\approx}$ consists of all $\approx$-classes, under the induced composition. We typically write $[a]$ for the $\approx$-class of $a\in\CC$. It follows immediately from \ref{C1} that $p\approx q\implies p=q$ for objects $p,q\in v\CC$, so we can identify the object sets of $\CC$ and $\CC/{\approx}$, \emph{viz.}~$p\equiv[p]$. In this way, we have $\bd[a]=\bd(a)$ and $\br[a]=\br(a)$ for all $a\in\CC$, and
\[
[a]\circ[b]=[a\circ b] \qquad\text{whenever $\br[a]=\bd[b]$.}
\]
\begin{lemma}\label{lem:approx}
\ben
\item \label{approx1} If $\approx$ is a $*$-congruence on a $*$-category $\CC$, then $\CC/{\approx}$ is a $*$-category, with involution given by
\begin{equation}
\label{eq:a*}
[a]^* = [a^*] \qquad\text{for all $a\in\CC$.}
\end{equation}
If also $a\circ a^*\approx\bd(a)$ for all $a\in\CC$, then $\CC/{\approx}$ is a groupoid.
\item \label{approx2} If $\approx$ is an ordered $*$-congruence on an ordered $*$-category $\CC$, then $\CC/{\approx}$ is an ordered $*$-category, with involution \eqref{eq:a*}, and order given by
\begin{equation}\label{eq:alleqbe}
\al\leq\be \Iff a\leq b \qquad\text{for some $a\in\al$ and $b\in\be$.}
\end{equation}
\een
\end{lemma}
\pf
Part \ref{approx1} is routine, and \ref{approx2} only a little more difficult, so we just give a sketch for the latter. For this, one begins by showing that $\al\leq\be$ is equivalent to the ostensibly stronger condition:
\bit
\item For all $b\in\be$, there exists $a\in\al$ such that $a\leq b$.
\eit
It is then easy to check that $\leq$ is indeed a partial order on $\CC/{\approx}$. Conditions \ref{O1}--\ref{O4} for $\CC/{\approx}$ follow quickly from the corresponding conditions for $\CC$.
\epf
We will shortly return to congruences in Lemma \ref{lem:Om} below, where we give simpler criteria for checking that a $v$-congruence satisfies \ref{C4} or \ref{C5}. The statement of the lemma will require certain maps defined on any ordered $*$-category. These maps will also be used extensively in the rest of the paper. To define them, fix some such ordered $*$-category $\CC$ with object set $P=v\CC$. For $p\in P$, we write
\[
p^\da = \set{q\in P}{q\leq p}
\]
for the down-set of $p$ in the poset $(P,\leq)$. Consider a morphism $a\in\CC$. A restriction ${}_p\corest a$ is defined precisely when $p\leq\bd(a)$, and we write $p\vt_a = \br({}_p\corest a)$ for the codomain of ${}_p\corest a$; by \ref{O1'}, we have $p\vt_a\leq\br(a)$. In other words, we have a map
\begin{equation}\label{eq:vta}
\vt_a : \bd(a)^\da\to\br(a)^\da \GIVENBY p\vt_a = \br({}_p\corest a).
\end{equation}
Note that for $q\leq\br(a)$, we have
\begin{equation}\label{eq:vta*}
\bd(a\rest_q) = \bd(({}_q\corest a^*)^*) = \br({}_q\corest a^*) = q\vt_{a^*}.
\end{equation}
\begin{lemma}\label{lem:vtavta*}
If $\CC$ is an ordered $*$-category, then for any $a\in\CC$,
\[
\vt_a:\bd(a)^\da\to\br(a)^\da \AND \vt_{a^*}:\br(a)^\da\to\bd(a)^\da
\]
are mutually inverse bijections.
\end{lemma}
\pf
By symmetry, it is enough to show that $\vt_a\vt_{a^*}=\operatorname{id}_{\bd(a)^\da}$. To do so, let $p\leq\bd(a)$, and write $q=p\vt_a=\br({}_p\corest a)$. By \ref{O6'} we have ${}_p\corest a=a\rest_q$. Combining this with \ref{O1'} and \eqref{eq:vta*}, we deduce that
\[
p = \bd({}_p\corest a) = \bd(a\rest_q) = q\vt_{a^*} = p\vt_a\vt_{a^*},
\]
as required.
\epf
Note that property \ref{O5'} and its dual (cf.~Remark \ref{rem:dual}) can be rephrased in terms of the $\vt$ maps. Specifically, if $a,b\in\CC$ (an ordered $*$-category) are such that $\br(a)=\bd(b)$, then (keeping~\eqref{eq:vta*} and Lemma \ref{lem:vtavta*} in mind) we have
\begin{equation}\label{eq:pab}
{}_p\corest(a\circ b) = {}_p\corest a \circ {}_{p\vt_a}\corest b \AND (a\circ b)\rest_q = a\rest _{q\vt_b^{-1}} \circ b\rest_q \qquad\text{for all $p\leq\bd(a)$ and $q\leq\br(b)$.}
\end{equation}
This can be extended to restrictions of compositions with an arbitrary number of terms. In a sense, the first part of the next result is a formalisation of the previous observation.
\begin{lemma}\label{lem:vt}
Let $\CC$ be an ordered $*$-category, and let $a,b\in\CC$ and $p\in v\CC$.
\ben
\item \label{vt1} If $\br(a)=\bd(b)$, then $\vt_{a\circ b} = \vt_a\vt_b$.
\item \label{vt2} $\vt_p=\operatorname{id}_{p^\da}$.
\item \label{vt3} If $p\leq\bd(a)$, then $\vt_{{}_p\corest a} = \vt_a|_{p^\da}$.
\een
\end{lemma}
\pf
\firstpfitem{\ref{vt1}} Write $p=\bd(a)$, $q=\br(a)=\bd(b)$ and $r=\br(b)$. Note that
\[
\vt_a:p^\da\to q^\da \COMMA \vt_b:q^\da\to r^\da \AND \vt_{a\circ b}:p^\da\to r^\da,
\]
so that $\vt_{a\circ b}$ and $\vt_a\vt_b$ are defined on the same domain, i.e.~on $p^\da$. Now let $s\leq p$, and put $t=s\vt_a=\br({}_s\corest a)$. Then by \ref{O5'} we have
\[
s\vt_{a\circ b} = \br({}_s\corest(a\circ b)) = \br({}_s\corest a\circ{}_t\corest b) = \br({}_t\corest b) = t\vt_b = s\vt_a\vt_b.
\]
\pfitem{\ref{vt2}} By part \ref{vt1}, we have $\vt_p = \vt_{p\circ p} = \vt_p\vt_p$, and the result follows since the only idempotent bijection $p^\da\to p^\da$ is the identity.
\pfitem{\ref{vt3}} Both maps have domain $p^\da$, so it suffices to show that $t\vt_{{}_p\corest a} = t\vt_a$ for all $t\leq p$. For this we use \ref{O4'} to calculate
\[
t\vt_{{}_p\corest a} = \br({}_t\corest{}_p\corest a) = \br({}_t\corest a) = t\vt_a. \qedhere
\]
\epf
We will also need the following simple property of the $\vt$ maps:
\begin{lemma}\label{eq:vtaOP}
If $\CC$ is an ordered $*$-category, and if $a\in\CC$, then $\vt_a$ is order-preserving, in the sense that
\[
p\leq q \Implies p\vt_a\leq q\vt_a \qquad\text{for all $p,q\in\bd(a)^\da$.}
\]
\end{lemma}
\pf
If $p\leq q\leq\bd(a)$, then ${}_p\corest a={}_p\corest{}_q\corest a \leq {}_q\corest a$, and so $\br({}_p\corest a)\leq\br({}_q\corest a)$, i.e.~$p\vt_a\leq q\vt_a$.
\epf
In later chapters we will define $v$-congruences by specifying sets of generating pairs:
\begin{defn}\label{defn:Om}
Suppose $\CC$ is a category, and $\Om\sub\CC\times\CC$ a set of pairs satisfying
\bit
\item $(u,v)\in\Om \implies [\bd(u)=\bd(v)$ and $\br(u)=\br(v)]$.
\eit
The \emph{($v$-)congruence generated by $\Om$}, denoted $\Om^\sharp$, is the least congruence on $\CC$ containing $\Om$. Specifically, we have $(a,b)\in\Om^\sharp$ if and only if there is a sequence
\[
a = a_1 \to \cdots \to a_k = b
\]
such that for each $1\leq i<k$ we have
\[
a_i = b_i\circ u_i\circ c_i \AND a_{i+1} = b_i\circ v_i\circ c_i \qquad\text{for some $b_i,c_i\in\CC$ and $(u_i,v_i)\in\Om\cup\Om^{-1}$.}
\]
(Here $\Om^{-1} = \set{(v,u)}{(u,v)\in\Om}$.)
\end{defn}
The next result shows how conditions \ref{C4} and \ref{C5} regarding a congruence ${\approx}=\Om^\sharp$ can be deduced from properties of its generating set $\Om$. In the case of \ref{C5} we utilise the above maps~$\vt_a$.
\begin{lemma}\label{lem:Om}
Suppose ${\approx}=\Om^\sharp$ is a $v$-congruence on an ordered $*$-category $\CC$.
\ben
\item If $\Om$ satisfies the condition
\begin{equation}\label{eq:Om*}
(a,b)\in\Om \Implies a^* \approx b^* ,
\end{equation}
then $\approx$ satisfies condition \ref{C4}.
\item If $\Om$ satisfies the condition
\begin{equation}\label{eq:Om}
(a,b)\in\Om \Implies \vt_a=\vt_b \ANd {}_p\corest a\approx{}_p\corest b \text{ for all $p\leq\bd(a)$,}
\end{equation}
then $\approx$ satisfies condition \ref{C5}.
\een
\end{lemma}
\pf
We just prove the second part, as the first is easier. To show that \ref{C5} holds, suppose $a\approx b$, and let the $a_i,b_i,c_i,u_i,v_i$ be as in Definition \ref{defn:Om}. Of course it suffices to show that ${}_p\corest a_i \approx {}_p\corest a_{i+1}$ for each $i$. For this we define
\[
q = p\vt_{b_i} \AND r = q\vt_{u_i}.
\]
Since $\vt_{u_i}=\vt_{v_i}$ by \eqref{eq:Om}, we also have $r = q\vt_{v_i}$. We then use \ref{O5'} and \eqref{eq:Om} to calculate
\[
{}_p\corest a_i = {}_p\corest(b_i\circ u_i\circ c_i) = {}_p\corest b_i \circ {}_q\corest u_i \circ {}_r\corest c_i \approx {}_p\corest b_i \circ {}_q\corest v_i \circ {}_r\corest c_i = {}_p\corest(b_i\circ v_i\circ c_i) = {}_p\corest a_{i+1}. \qedhere
\]
\epf
\subsection[Regular $*$-semigroups]{\boldmath Regular $*$-semigroups}\label{sect:RSS}
As we have already mentioned, our primary interest is in the class of \emph{regular $*$-semigroups}, as defined by Nordahl and Scheiblich in \cite{NS1978}. Here we revise their definition, and list some of their basic properties. The main new results will be proved later in the paper, but we will give a few `appetisers' here to motivate these later results.
\begin{defn}\label{defn:RSS}
A \emph{regular $*$-semigroup} is a semigroup $S$ with a unary operation ${{}^*:S\to S:s\mt s^*}$ satisfying
\[
(a^*)^* = a = aa^*a \AND (ab)^* = b^*a^* \qquad\text{for all $a,b\in S$.}
\]
\end{defn}
Note that a regular $*$-semigroup $S$ is considered to be an algebra of type $(2,1)$, i.e.~${}^*$ is a basic operation of $S$. There are semigroups admitting several such unary operations, giving rise to pairwise non-isomorphic regular $*$-semigroups \cite{Scheiblich1987}. Any inverse semigroup is a regular $*$-semigroup, with the involution being inversion, $a^*=a^{-1}$. In fact, it has long been known \cite{Schein1963} that inverse semigroups are precisely the regular $*$-semigroups satisfying the additional identity
\begin{equation}\label{eq:aa*bb*}
aa^*bb^* = bb^*aa^* \qquad\text{for all $a,b\in S$.}
\end{equation}
As we will see, inverse semigroups represent a somewhat `degenerate' case in the theory we develop; see Chapter \ref{chap:I}.
Arguably the most important families of non-inverse regular $*$-semigroups are the so-called \emph{diagram monoids}, including the Brauer monoids \cite{Brauer1937}, Temperley-Lieb monoids \cite{TL1971}, partition monoids \cite{Jones1994_2,Martin1994}, Motzkin monoids~\cite{BH2014} and several others. In Section \ref{sect:D} we will look extensively at the partition monoids, but before this we give the following simple example, coming from a standard semigroup-theoretic construction; we will return to it on a number of occasions. Another standard generalisation of the construction is given in Example \ref{eg:Rees} (see also Example~\ref{eg:Brandt}); more examples of this kind can also be found in \cite{Petrich1985}.
\begin{eg}\label{eg:PxP}
Let $P$ be an arbitrary set, and let $S=P\times P=\set{(p,q)}{p,q\in P}$ be the cartesian product of two copies of $P$. We define binary and unary operations on $S$ by:
\[
(p,q)(r,s)=(p,s) \AND (p,q)^*=(q,p).
\]
It is then routine to check that $S$ is a regular $*$-semigroup. This semigroup is known as the \emph{square band} over $P$. (More generally, a \emph{rectangular band} has the form $P\times Q$, for possibly different sets~$P$ and~$Q$, with the same product as above; this is still a semigroup, but need not have an involution.)
\end{eg}
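As noted above, the verification is routine; for readers who prefer to see it done mechanically, the following sketch (illustrative only) checks associativity and the identities of Definition \ref{defn:RSS} for the square band over a three-element set.
\begin{verbatim}
# Illustrative check of the regular *-semigroup identities for the square
# band over P = {0, 1, 2}.
from itertools import product as cartesian

P = range(3)
S = list(cartesian(P, P))                    # elements (p, q)
mul = lambda a, b: (a[0], b[1])              # (p, q)(r, s) = (p, s)
star = lambda a: (a[1], a[0])                # (p, q)* = (q, p)

for a in S:
    assert star(star(a)) == a
    assert mul(mul(a, star(a)), a) == a      # a = a a* a
    for b in S:
        assert star(mul(a, b)) == mul(star(b), star(a))
        for c in S:                          # associativity
            assert mul(mul(a, b), c) == mul(a, mul(b, c))
\end{verbatim}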
For a regular $*$-semigroup $S$, we write
\[
E = E(S) = \set{e\in S}{e^2=e} \AND P = P(S) = \set{p\in S}{p^2=p=p^*}.
\]
So $E$ consists of all idempotents of $S$. The elements of $P$ are called \emph{projections}, and of course we have $P\sub E$. Projections play a very important role in virtually every study of regular $*$-semigroups in the literature, and this is also true in the current work.
The following result gathers some of the basic properties of idempotents and projections. Proofs can be found in many places (e.g., \cite{Imaoka1980}), but we give some simple arguments here to keep the paper self-contained.
\begin{lemma}\label{lem:PS1}
If $S$ is a regular $*$-semigroup, with sets of idempotents $E=E(S)$ and projections $P=P(S)$, then
\ben
\item \label{PS11} $P=\set{aa^*}{a\in S}=\set{a^*a}{a\in S}$,
\item \label{PS12} $E=P^2=\set{pq}{p,q\in P}$, and consequently $\la E\ra = \la P\ra$,
\item \label{PS13} $a^*Pa\sub P$ for all $a\in S$.
\een
\end{lemma}
\pf
\firstpfitem{\ref{PS11}} It is enough to show that $P=\set{aa^*}{a\in S}$. If $p\in P$, then $p=pp=pp^*$. Conversely, if $a\in S$, then $(aa^*)^2=aa^*aa^*=aa^*$, and $(aa^*)^*=(a^*)^*a^*=aa^*$.
\pfitem{\ref{PS12}} If $e\in E$, then $e=ee^*e=e(ee)^*e=ee^*\cdot e^*e$, with $ee^*,e^*e\in P$, by part \ref{PS11}. Conversely, for any $p,q\in P$, we have $pq=pq(pq)^*pq=pqq^*p^*pq=pqqppq=pqpq=(pq)^2$.
\pfitem{\ref{PS13}} If $p\in P$, then $a^*pa=a^*p^*pa=(pa)^*pa\in P$ by part \ref{PS11}.
\epf
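The assertions of Lemma \ref{lem:PS1} can likewise be confirmed directly for the square band of Example \ref{eg:PxP}, where every element is an idempotent and the projections are exactly the pairs $(p,p)$; the sketch below is again purely for illustration.
\begin{verbatim}
# Illustrative check of Lemma lem:PS1 for the square band over {0, 1, 2}.
from itertools import product as cartesian

S = list(cartesian(range(3), range(3)))
mul = lambda a, b: (a[0], b[1])
star = lambda a: (a[1], a[0])

E = {a for a in S if mul(a, a) == a}                 # idempotents
P = {a for a in E if star(a) == a}                   # projections

assert P == {mul(a, star(a)) for a in S}             # part (i)
assert E == {mul(p, q) for p in P for q in P}        # part (ii)
assert all(mul(mul(star(a), p), a) in P              # part (iii): a*Pa in P
           for a in S for p in P)
\end{verbatim}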
Comparing \eqref{eq:aa*bb*} with Lemma \ref{lem:PS1}\ref{PS11}, it follows that inverse semigroups are precisely the regular $*$-semigroups with commuting projections. In fact, it is easy to see that any idempotent of an inverse semigroup is a projection, so another equivalent condition for a regular $*$-semigroup~$S$ to be inverse is that $E(S)=P(S)$.
If $p,q\in P$, then it follows from Lemma \ref{lem:PS1}\ref{PS13} that $pqp = p^*qp\in P$. Thus, for all $p\in P$, we have a well-defined map
\begin{equation}\label{eq:thpS}
\th_p:P\to P \GIVENBY q\th_p = pqp \qquad\text{for $q\in P$.}
\end{equation}
The next result summarises some of the key properties of these maps.
\begin{lemma}\label{lem:PS2}
If $S$ is a regular $*$-semigroup, then for any $p,q\in P$ we have
\[
p\theta_p = p \COMMA
\theta_p\theta_p=\theta_p \COMMA
p\theta_q\theta_p=q\theta_p \COMMA
\theta_p\theta_q\theta_p=\theta_{q\theta_p} \AND
\theta_p\theta_q\theta_p\theta_q=\theta_p\theta_q.
\]
\end{lemma}
\pf
The first two follow from the fact that projections are idempotents.
The third and fifth follow from the fact that the product of two projections is an idempotent (cf.~Lemma~\ref{lem:PS1}\ref{PS12}); for example, $p\th_q\th_p = p(qpq)p = (pq)^2p = pqp = q\th_p$.
For the fourth, given any $t\in P$ we have
\[
t\th_{q\th_p} = t\th_{pqp} = pqp\cdot t\cdot pqp = t\th_p\th_q\th_p. \qedhere
\]
\epf
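The identities of Lemma \ref{lem:PS2} are also easy to check mechanically for any finite regular $*$-semigroup presented by multiplication and involution functions. The sketch below (illustrative only) does this for the square band of the previous sketches; there the resulting projection algebra is rather degenerate, as $q\th_p=p$ for all $p,q$, but the same check applies verbatim to richer examples such as the partition monoids of Section \ref{sect:D}.
\begin{verbatim}
# Illustrative check of the identities in Lemma lem:PS2, instantiated with
# the square band over {0, 1, 2}.
from itertools import product as cartesian

S = list(cartesian(range(3), range(3)))
mul = lambda a, b: (a[0], b[1])
star = lambda a: (a[1], a[0])
P = [a for a in S if mul(a, a) == a and star(a) == a]

def theta(p):
    return lambda q: mul(mul(p, q), p)       # theta(p)(q) computes q theta_p = pqp

for p, q, t in cartesian(P, P, P):
    assert theta(p)(p) == p
    assert theta(p)(theta(p)(t)) == theta(p)(t)
    assert theta(p)(theta(q)(p)) == theta(p)(q)
    assert theta(p)(theta(q)(theta(p)(t))) == theta(theta(p)(q))(t)
    assert theta(q)(theta(p)(theta(q)(theta(p)(t)))) == theta(q)(theta(p)(t))
\end{verbatim}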
When $S$ is inverse, commutativity of projections implies that $q\th_p=p\th_q=pq=qp$ for all~${p,q\in P}$. This then leads to a significant simplification in the theory developed below, as we will explore in detail in Chapter \ref{chap:I}.
Next we define a relation $\leq$ on $P=P(S)$ by
\begin{equation}\label{eq:leq}
p\leq q \Iff p=p\th_q = qpq \qquad\text{for $p,q\in P$}.
\end{equation}
Note that $p\leq q$ precisely when $q$ is a left and right identity for $p$. Using this it is easy to see that $\leq$ is a partial order. It is also possible to deduce this fact from the identities listed in Lemma~\ref{lem:PS2}, as we do in Lemma \ref{lem:leqP} below. In fact, since $p=qpq \iff p=pq=qp$, and since
\begin{equation}\label{eq:p=pq}
p=pq \implies p=p^*=(pq)^*=q^*p^*=qp,
\end{equation}
and conversely by symmetry, it follows that
\begin{equation}\label{eq:leq2}
p\leq q \Iff p=pq \Iff p=qp.
\end{equation}
The above properties of projections, and their associated $\th$ mappings, will be of fundamental importance in all that follows. In particular, even though $P=P(S)$ is generally not a subsemigroup of the regular $*$-semigroup $S$, we can regard it as a \emph{unary algebra}, whose basic (unary) operations are the $\th_p$ ($p\in P$). In Chapter \ref{chap:P}, we take the properties from Lemma \ref{lem:PS2} as the axioms for what we will call a \emph{projection algebra} (see Definition \ref{defn:P}). Ultimately, we will see that (abstract) projection algebras are precisely the unary algebras of projections of regular $*$-semigroups. This has in fact been known for some time (see for example \cite{Imaoka1983}), but our new approach leads to another way to see this. When $S$ is inverse, $P$ is simply a subsemigroup of~$S$, and there is no real advantage in considering $P$ as a unary algebra. In a sense, a projection algebra is the `$*$-analogue' of the semilattice of idempotents of an inverse semigroup.
As a foreshadowing of things to come, we explain now how to use the projections of a regular $*$-semigroup $S$ to construct a groupoid $\G=\G(S)$. The groupoid~$\G$ has the same underlying set as $S$, and the (partially defined) composition in $\G$ is a restriction of the (totally defined) product in $S$. Roughly speaking, $\G$ remembers only the `nice' or `easy' products; see Section \ref{sect:D} for some examples justifying our use of these words.
As it happens, the construction of $\G=\G(S)$ is \emph{exactly} the same as in the inverse case; see \cite[Chapter~4]{Lawson1998}. However, we will see that the groupoid $\G(S)$ is not a \emph{total invariant} of the regular $*$-semigroup $S$, unlike the case for inverse semigroups; see Section \ref{sect:eg} for more details.
The definition of $\G=\G(S)$ begins with the following neat equational (rather than existential) characterisation of Green's relations on a regular $*$-semigroup.
\begin{lemma}\label{lem:Green}
If $S$ is a regular $*$-semigroup, and if $a,b\in S$, then
\[
a\mathrel{\mathscr R} b \iff aa^*=bb^* \AND a\L b \iff a^*a=b^*b.
\]
\end{lemma}
\pf
Even though this has been proved in a number of places, we prove the statement concerning $\mathrel{\mathscr R}$ to keep the paper self-contained. The statement for $\L$ is of course dual.
Suppose first that $a\mathrel{\mathscr R} b$, so that $a=bx$ for some $x\in S$. Using \eqref{eq:leq2}, it then follows that
\[
aa^*=bxa^*=bb^*bxa^*=bb^*aa^* \Implies aa^*\leq bb^*.
\]
By symmetry, we also have $bb^*\leq aa^*$, so that $aa^*=bb^*$.
Conversely, if $aa^*=bb^*$, then $a=aa^*a=b(b^*a)$, and similarly $b=a(a^*b)$, so that $a\mathrel{\mathscr R} b$.
\epf
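Lemma \ref{lem:Green} can also be confirmed computationally on small examples; the sketch below (an informal check only) verifies both equivalences for the square band over a three-element set.
\begin{verbatim}
# Illustrative check of Lemma lem:Green for the square band over {0, 1, 2}:
# a R b iff aa* = bb*, and a L b iff a*a = b*b.
from itertools import product as cartesian

S = list(cartesian(range(3), range(3)))
mul = lambda a, b: (a[0], b[1])
star = lambda a: (a[1], a[0])
right_ideal = lambda a: {a} | {mul(a, x) for x in S}   # aS^1
left_ideal = lambda a: {a} | {mul(x, a) for x in S}    # S^1a

for a, b in cartesian(S, S):
    assert ((right_ideal(a) == right_ideal(b))
            == (mul(a, star(a)) == mul(b, star(b))))
    assert ((left_ideal(a) == left_ideal(b))
            == (mul(star(a), a) == mul(star(b), b)))
\end{verbatim}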
For a regular $*$-semigroup $S$, and for $a\in S$, we write
\[
\bd(a) = aa^* \AND \br(a) = a^*a.
\]
Both of these elements are projections (cf.~Lemma \ref{lem:PS1}\ref{PS11}), and the identity $a=aa^*a$ gives
\[
a = \bd(a)\cdot a = a\cdot \br(a).
\]
Moreover, Lemma \ref{lem:Green} says that
\[
a\mathrel{\mathscr R} b \iff \bd(a)=\bd(b) \AND a\L b \iff \br(a)=\br(b).
\]
Since $p=pp^*=p^*p$ for any projection $p$, we have $\bd(p)=\br(p)=p$ for such $p$. It then follows from Lemma \ref{lem:Green} that the $\mathrel{\mathscr R}$- and $\L$-classes of $p$ are given by
\begin{equation}\label{eq:RpLp}
R_p = \set{a\in S}{aa^*=pp^*} = \set{a\in S}{\bd(a)=p} \ANDSIM L_p = \set{a\in S}{\br(a)=p}.
\end{equation}
\begin{lemma}\label{lem:dab}
If $S$ is a regular $*$-semigroup, and if $a,b\in S$ are such that $\br(a)=\bd(b)$, then
\[
\bd(ab) = \bd(a) \AND \br(ab) = \br(b).
\]
\end{lemma}
\pf
We just prove the first claim, as the second is symmetrical. If $\br(a)=\bd(b)$, then $a^*a = bb^*$, and then
\[
\bd(ab) = ab(ab)^* = abb^*a^* = aa^*aa^* = aa^* = \bd(a). \qedhere
\]
\epf
This allows us to make the following definition.
\begin{defn}\label{defn:GS}
Given a regular $*$-semigroup $S$, we define the category $\G=\G(S)$ as follows.
\bit
\item The object set of $\G$ is $v\G=P=P(S)$.
\item For $a\in\G$ we have $\bd(a) = aa^*$ and $\br(a) = a^*a$.
\item For $a,b\in\G$ with $\br(a)=\bd(b)$, we have $a\circ b = ab$.
\eit
\end{defn}
For projections $p,q\in P$, it follows from \eqref{eq:RpLp} that
\[
\G(p,q) = \set{a\in S}{\bd(a)=p,\ \br(a)=q} = R_p \cap L_q.
\]
Since ${\mathrel{\mathscr D}}={\mathrel{\mathscr R}}\circ{\L}$, such a morphism set is non-empty precisely when $p\mathrel{\mathscr D} q$. It follows that the connected components of $\G=\G(S)$ are in one-one correspondence with the $\mathrel{\mathscr D}$-classes of $S$.
Since the underlying set of $\G=\G(S)$ is $S$, the unary operation ${}^*:S\to S$ can be thought of as a map ${}^*:\G\to\G$. It is a routine matter to verify conditions \ref{I1}--\ref{I4} from Definition \ref{defn:*cat}, so we have the following:
\begin{prop}\label{prop:GS}
If $S$ is a regular $*$-semigroup, then the category $\G=\G(S)$ is a groupoid, with inversion given by ${}^{-1} = {}^*$. \epfres
\end{prop}
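For concreteness, the data of Definition \ref{defn:GS} is easily extracted from a finite regular $*$-semigroup; the sketch below (illustrative only, with the same conventions as the earlier sketches) does so for the square band, where each morphism set $\G(p,q)$ turns out to be a singleton.
\begin{verbatim}
# Illustrative sketch: the groupoid G(S) of Definition defn:GS, for the
# square band over {0, 1, 2}.
from itertools import product as cartesian

S = list(cartesian(range(3), range(3)))
mul = lambda a, b: (a[0], b[1])
star = lambda a: (a[1], a[0])
d = lambda a: mul(a, star(a))                # d(a) = a a*
r = lambda a: mul(star(a), a)                # r(a) = a* a

def compose(a, b):
    if r(a) != d(b):
        return None                          # composition undefined in G(S)
    return mul(a, b)                         # otherwise the semigroup product

objects = {a for a in S if mul(a, a) == a and star(a) == a}
hom = lambda p, q: [a for a in S if d(a) == p and r(a) == q]

assert all(compose(a, star(a)) == d(a) for a in S)        # condition (I4)
assert all(len(hom(p, q)) == 1 for p in objects for q in objects)
\end{verbatim}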
\begin{rem}\label{rem:GS}
We will see later that the groupoid $\G=\G(S)$ associated to a regular $*$-semigroup~$S$ has rather a lot more structure than is evident at this stage. For example, the fact that the object set $v\G=P$ is a (so-called) projection algebra will allow us to construct an order on $\G$. This ordering can then be used to reduce the calculation of an arbitrary product $ab$ in $S$ to an associated composition in $\G$:
\begin{equation}\label{eq:ab}
ab = a' \circ e \circ b',
\end{equation}
where $a'\leq a$ and $b'\leq b$, and where $e\in\G$ is a special morphism $\br(a')\to\bd(b')$. Although we do not need to understand the ordering on $\G$ yet, we can at least verify that a composition of the form \eqref{eq:ab} does indeed exist. Specifically, let us write
\[
p=\br(a)=a^*a \AND q=\bd(b)=bb^*,
\]
noting that $a=ap$ and $b=qb$. We also define the additional projections
\[
p' = q\th_p = pqp \AND q' = p\th_q = qpq.
\]
Then since $pq$ is an idempotent, by Lemma \ref{lem:PS1}\ref{PS12}, we have $p'q'=(pq)^3=pq$, and so
\[
ab = ap \cdot qb = a\cdot p'q'\cdot b = ap' \cdot p'q' \cdot q'b.
\]
Then with $a'=ap'$, $e=p'q'(=pq)$ and $b'=q'b$, one can check that
\[
\br(a')=p' = \bd(e) \AND \br(e)=q'=\bd(b').
\]
For example,
\[
\br(a') = (ap')^*ap' = (apqp)^*apqp = pqpa^*apqp = pqp\cdot p\cdot pqp = pqpqp = pqp = p'.
\]
Thus, continuing from above, it follows that indeed
\[
ab = ap' \cdot p'q' \cdot q'b = a' \cdot e \cdot b' = a' \circ e \circ b'.
\]
In what follows, we will think of $a'=ap'$ and $b'=q'b$ as `restrictions' $a'=a\rest_{p'}$ and $b'={}_{q'}\corest b$ in the groupoid $\G$.
\end{rem}
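The factorisation \eqref{eq:ab} can be tested mechanically as well; the sketch below (illustrative only) confirms, for every pair of elements of the square band, that $ab = ap'\cdot p'q'\cdot q'b$ with $p'$ and $q'$ as in Remark \ref{rem:GS}.
\begin{verbatim}
# Illustrative check of the factorisation ab = ap'. p'q'. q'b, where
# p = r(a), q = d(b), p' = pqp and q' = qpq, for the square band over {0,1,2}.
from itertools import product as cartesian

S = list(cartesian(range(3), range(3)))
mul = lambda a, b: (a[0], b[1])
star = lambda a: (a[1], a[0])

for a, b in cartesian(S, S):
    p, q = mul(star(a), a), mul(b, star(b))           # p = r(a), q = d(b)
    pp, qq = mul(mul(p, q), p), mul(mul(q, p), q)     # p' = pqp, q' = qpq
    e = mul(p, q)                                     # e = pq = p'q'
    assert mul(a, b) == mul(mul(mul(a, pp), e), mul(qq, b))
\end{verbatim}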
\begin{rem}\label{rem:e}
There is a rather subtle point regarding Remark \ref{rem:GS} that is far from obvious at this stage, but ought to be mentioned now. It may seem as if we are implying that the structure of the regular $*$-semigroup $S$ is completely determined by that of the groupoid $\G=\G(S)$. That is, if we are given a complete description of the groupoid $\G$, including its composition, inversion, ordering, and the (so-called) projection algebra structure of its object set $P=v\G$, then it might seem that we ought to be able to construct the entire multiplication table of $S$.
However, this is far from the truth. Indeed, in Section \ref{sect:eg}, we will give examples of non-isomorphic regular $*$-semigroups $S_1$ and~$S_2$ with exactly the same groupoids $\G(S_1)=\G(S_2)$. (We postpone the definition of these semigroups, as we have already gone on a somewhat lengthy tangent, and also since we will only be able to fully appreciate their properties once we have developed more theory.)
If the reader is worried that this contradicts the previous remark, the source of the subtlety is in the precise identity of the element~$e$ from \eqref{eq:ab} that allowed us to reduce a product $ab$ in~$S$ to a composition in $\G$:
\[
ab = a\rest_{p'}\circ e \circ {}_{q'}\corest b.
\]
(On the other hand, the projections $p',q'$ can be found directly using the projection algebra structure of $P=v\G$.) We were able to `locate' this element $e$ in Remark \ref{rem:GS} because we began with full knowledge of the semigroup $S$; we simply took $e=pq=p'q'$. However, if we begin with an ordered groupoid $\G$, it is not so obvious what this element $e$ should be, even if we know that $\G=\G(S)$ for some (unknown) regular $*$-semigroup $S$. Certainly $e$ should be a morphism $p'\to q'$, but when $p'\not=q'$, it is not so easy to distinguish any such morphism. Getting around this problem is one of the main sources of difficulty encountered in the current work. Our eventual solution utilises what we will call the `chain groupoid' $\C=\C(P)$ associated to an (abstract) projection algebra $P$, and a certain `evaluation map' $\ve:\C\to\G$. We will find our element $e$ in the image of this map.
\end{rem}
\begin{rem}\label{rem:trace}
Before moving on, it is worth commenting on another related notion, namely the so-called \emph{trace} of a semigroup. This goes back to a classical result of Miller and Clifford. Among other things, \cite[Theorem 3]{MC1956} says that for elements $a,b$ of a semigroup $S$, we have
\[
ab \in R_a \cap L_b \Iff L_a \cap R_b \text{ contains an idempotent}.
\]
Such products $ab \in R_a \cap L_b$ are often called \emph{trace products} in the literature. We can then define a partial binary operation~$\bc$ on $S$ by
\[
a\bc b = \begin{cases}
ab &\text{if $L_a \cap R_b$ contains an idempotent}\\
\text{undefined} &\text{otherwise,}
\end{cases}
\]
and the resulting partial algebra $(S,{\bc})$ is often called the \emph{trace of $S$}. Following Definition \ref{defn:GS}, it is not hard to see that in a regular $*$-semigroup $S$, a composition $a\circ b$ exists precisely when $L_a\cap R_b$ contains a projection, which must of course be $a^*a=bb^*$, i.e.~$\br(a)=\bd(b)$. Since projections are idempotents, it follows that $a\circ b$ being defined forces $a\bc b$ to be defined, and then of course $a\circ b=a\bc b=ab$, meaning that ${\circ}\sub{\bc}$, i.e.~that $\bc$ is an extension of $\circ$. We have already noted that $S$ is inverse if and only if $P(S)=E(S)$, and it now quickly follows that this is also equivalent to having ${\circ}={\bc}$. In this inverse case, the trace $(S,{\bc})$ is then precisely the groupoid $\G(S)$ from Definition \ref{defn:GS}, but this is not true of arbitrary regular $*$-semigroups.
\end{rem}
As we have already mentioned, projections have played a crucial role in virtually all studies of regular $*$-semigroups. Of particular significance in papers such as \cite{DEG2017,EG2017,JEgrpm} are pairs of projections ${p,q\in P=P(S)}$ that are mutual inverses, i.e.~pairs that satisfy $p=pqp$ and $q=qpq$. This is very much the case in the current work: so much so, in fact, that we define relations $\leqF$ and $\F$ on~$P$ by
\begin{equation}\label{eq:leqFF}
p\leqF q \Iff p=pqp=q\th_p \AND p\F q \Iff[p\leqF q\text{ and }q\leqF p].
\end{equation}
We think of pairs of $\F$-related projections as \emph{friends} (hence the symbol $\F$). It is clear that~$\leqF$ and $\F$ are both reflexive, and that $\F$ is symmetric. But neither $\leqF$ nor $\F$ is transitive in general (and neither is friendship in `real life'); for an example, see Section \ref{sect:D}. It is easy to see that $p\leq q \implies p\leqF q$, where $\leq$ is the partial order in~\eqref{eq:leq}. It follows quickly from Lemma~\ref{lem:Green} that
\begin{equation}\label{eq:pRpqLq}
p\F q \Iff p\mathrel{\mathscr R} pq \L q \Iff p\L qp\mathrel{\mathscr R} q .
\end{equation}
It of course follows from this that
\begin{equation}\label{eq:FD}
p\F q \Implies p\mathrel{\mathscr D} q,
\end{equation}
though the converse does not hold in general; again see Section \ref{sect:D}.
Combining \eqref{eq:pRpqLq} with \cite[Proposition 2.3.7]{Howie1995}, it follows that
\[
p\F q \Iff L_p\cap R_q \text{ contains an idempotent} \Iff R_p\cap L_q \text{ contains an idempotent} ,
\]
and of course these idempotents are
\[
qp \in L_p\cap R_q \AND pq \in R_p\cap L_q.
\]
This all means that a pair of $\F$-related projections $p,q\in P(S)$ generates a $2\times2$ rectangular band in $S$:
\[
\begin{tikzpicture}[scale=1]
\draw(0,0)--(2,0)--(2,2)--(0,2)--(0,0) (0,1)--(2,1) (1,0)--(1,2);
\node()at(.5,1.45){$p$};
\node()at(1.5,1.45){$pq$};
\node()at(.5,.45){$qp$};
\node()at(1.5,.45){$q$};
\end{tikzpicture}
\]
One can define a graph $\Ga(S)$ with vertex set $P=P(S)$, and with (undirected) edges $\{p,q\}$ whenever $p\not=q$ and $p\F q$. Connectivity properties of (certain subgraphs of) these graphs played a vital role in \cite{EG2017}, in studies of idempotent-generated regular $*$-semigroups and their ideals. We will say more about this in Section \ref{sect:D}.
The (standard) proof of Lemma \ref{lem:PS1}\ref{PS12} given above involved showing that any idempotent $e\in E=E(S)$ of a regular $*$-semigroup $S$ satisfies $e = ee^*\cdot e^*e$, with $ee^*,e^*e\in P=P(S)$. It quickly follows that the idempotents $e,e^*\in E(S)$ also generate a $2\times2$ rectangular band in $S$:
\[
\begin{tikzpicture}[scale=1]
\draw(0,0)--(2,0)--(2,2)--(0,2)--(0,0) (0,1)--(2,1) (1,0)--(1,2);
\node()at(.5,1.45){$\phantom{{}_*}e\phantom{{}^*}$};
\node()at(1.5,1.45){$ee^*_{\phantom*}$};
\node()at(.5,.45){$e^*_{\phantom*}e$};
\node()at(1.5,.45){$e^*_{\phantom*}$};
\end{tikzpicture}
\]
This is in fact a characterisation of the biordered sets arising from regular $*$-semigroups, in a sense made precise in \cite[Corollary 2.7]{NP1985}. In any case, it quickly follows that $ee^*\F e^*e$ for any $e\in E$, so this shows that any idempotent is a product of two $\F$-related projections, and it follows that not only do we have $E=P^2$ (cf.~Lemma \ref{lem:PS1}\ref{PS12}), but in fact
\[
E = \set{pq}{(p,q)\in{\F}}.
\]
Actually, such factorisations for idempotents are \emph{unique}. Indeed, if $e=pq$, where $e\in E$ and $(p,q)\in{\F}$, then
\begin{equation}\label{eq:ee*e*e}
ee^* = pq(pq)^* = pqq^*p^* = pqqp = pqp = p \ANDSIM e^*e = q.
\end{equation}
Consequently, we have $|E|=|{\F}|$. A number of studies have enumerated the idempotents in various classes of regular $*$-semigroups \cite{DEEFHHLM2019,DEEFHHL2015,Larsson2006}. The identity $|E|=|{\F}|$ played an implicit role in~\cite{DEEFHHLM2019}.
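The equality $|E|=|{\F}|$ is easily confirmed on small examples; in the sketch below (again purely illustrative) it is checked for the square band, where every ordered pair of projections happens to be $\F$-related.
\begin{verbatim}
# Illustrative check that |E| = |F| for the square band over {0, 1, 2}.
from itertools import product as cartesian

S = list(cartesian(range(3), range(3)))
mul = lambda a, b: (a[0], b[1])
star = lambda a: (a[1], a[0])

E = {a for a in S if mul(a, a) == a}
P = {a for a in E if star(a) == a}
F = {(p, q) for p in P for q in P
     if mul(mul(p, q), p) == p and mul(mul(q, p), q) == q}

assert len(E) == len(F)                      # here both equal 9
assert E == {mul(p, q) for (p, q) in F}
\end{verbatim}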
So idempotents are products of $\F$-related projections. As an application of the theory developed in this paper, we will be able to prove the following result, which extends this to arbitrary products of idempotents, though uniqueness is lost, in general (cf.~\eqref{eq:triangle}); see Proposition~\ref{prop:ES}\ref{ES3} and Remark~\ref{rem:ES}. The result might be known, but we are not aware of its existence in the literature; it does bear some resemblance, however, to a classical result of FitzGerald \cite{FitzGerald1972}. Moreover, we believe it could be useful in studies of idempotent-generated regular $*$-semigroups, and might have even led to some simpler arguments in existing studies such as~\cite{EF2012,EG2017,DEG2017}.
\begin{prop}\label{prop:ERSS}
Let $S$ be a regular $*$-semigroup, with $E=E(S)$ and $P=P(S)$. Then for any $a\in\la E\ra=\la P\ra$ we have
\[
a = p_1\cdots p_k \qquad\text{for some $p_1,\ldots,p_k\in P$ with $p_1\F\cdots\F p_k$.}
\]
\end{prop}
We will leave our preliminary discussion of regular $*$-semigroups here. We will return to them again in Chapter \ref{chap:S}. But for now, we will discuss an important special case.
\subsection{Case study: diagram monoids}\label{sect:D}
The theory of regular $*$-semigroups has seen something of a resurgence in recent years, partly due to the importance of so-called \emph{diagram monoids}. Here we recall the definition of these monoids, and use them to illustrate some of the ideas discussed in the previous section. We will also return to them at various points during the rest of the paper. Here we focus exclusively on the \emph{partition monoids} \cite{Martin1994,Jones1994_2}, although similar comments could be made for other diagram monoids, such as (partial) Brauer, Temperley-Lieb and Motzkin monoids \cite{Brauer1937,TL1971,DEG2017,EG2017,BH2014,MM2014}.
Let $X$ be a set, and let $X'=\set{x'}{x\in X}$ be a disjoint copy of $X$. The \emph{partition monoid over $X$}, denoted $\PP_X$, is defined as follows. The elements of $\PP_X$ are the set partitions of $X\cup X'$, and the operation will be defined shortly. A partition from $\PP_X$ will be identified with any graph on vertex set $X\cup X'$ whose connected components are the blocks of the partition. The elements of $X$ and $X'$ are called \emph{upper} and \emph{lower} vertices, respectively, and are generally displayed in two parallel rows, as in the various figures below.
To define the product on $\PP_X$, consider two partitions $\al,\be\in\PP_X$. Let $X''=\set{x''}{x\in X}$ be another disjoint copy of $X$, and define three new graphs:
\bit
\item $\al^\vee$, the graph on vertex set $X\cup X''$ obtained from $\al$ by changing each lower vertex $x'$ to $x''$,
\item $\be^\wedge$, the graph on vertex set $X''\cup X'$ obtained from $\be$ by changing each upper vertex $x$ to $x''$,
\item $\Pi(\al,\be)$, the graph on vertex set $X\cup X''\cup X'$, whose edge set is the union of the edge sets of~$\al^\vee$ and $\be^\wedge$.
\eit
The graph $\Pi(\al,\be)$ is called the \emph{product graph} of $\al$ and $\be$; it is generally drawn with $X''$ as the middle row of vertices. The product $\al\be$ is defined to be the partition of $X\cup X'$ with the property that $u,v\in X\cup X'$ belong to the same block of $\al\be$ if and only if $u,v$ are connected by a path in~$\Pi(\al,\be)$.
If $X=\{1,\ldots,n\}$ for a non-negative integer $n$, we write $\PP_n=\PP_X$.
\begin{eg}\label{eg:PX1}
To illustrate the product above, consider the partitions $\al,\be\in\PP_6$ defined by
\begin{align*}
\al &= \big\{ \{1,2,3,1'\}, \{4,4',5',6'\}, \{5\},\{6\},\{2',3'\}\big\} , \\
\be &= \big\{\{1,4',6'\},\{2,3\},\{4,5,6,1',2',3'\},\{5'\}\big\}.
\intertext{Figure \ref{fig:PX1} illustrates (graphs representing) $\al$ and $\be$, their product}
\al\be &= \big\{\{1,2,3,4',6'\},\{4,1',2',3'\},\{5\},\{6\},\{5'\}\big\},
\end{align*}
as well as the product graph $\Pi(\al,\be)$. Here and elsewhere, vertices are assumed to increase from left to right, $1,\ldots,n$, and similarly for (double) dashed vertices.
\end{eg}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.53]
\begin{scope}[shift={(0,0)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc12
\uarc23
\darc23
\darc45
\darc56
\stline11
\stline44
\draw(0.6,1)node[left]{$\al=$};
\draw[->](7.5,-1)--(9.5,-1);
\end{scope}
\begin{scope}[shift={(0,-4)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc23
\uarc45
\uarc56
\darc12
\darc23
\darc46
\stline14
\stline43
\draw(0.6,1)node[left]{$\be=$};
\end{scope}
\begin{scope}[shift={(10,-1)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc12
\uarc23
\darc23
\darc45
\darc56
\stline11
\stline44
\draw[->](7.5,0)--(9.5,0);
\end{scope}
\begin{scope}[shift={(10,-3)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc23
\uarc45
\uarc56
\darc12
\darc23
\darc46
\stline14
\stline43
\end{scope}
\begin{scope}[shift={(20,-2)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc12
\uarc23
\darc12
\darc23
\darc46
\stline34
\stline43
\draw(6.4,1)node[right]{$=\al\be$};
\end{scope}
\end{tikzpicture}
\caption{Left to right: partitions $\al,\be\in\PP_6$, the product graph $\Pi(\al,\be)$, and the product $\al\be\in\PP_6$. For more information, see Example \ref{eg:PX1}.}
\label{fig:PX1}
\end{center}
\end{figure}
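The product-graph construction translates readily into code. The following sketch (purely illustrative; the encoding and the name \texttt{partition\_product} are ours) computes products in $\PP_n$ via the connected components of $\Pi(\al,\be)$, and reproduces the product of Example \ref{eg:PX1}.
\begin{verbatim}
# Illustrative sketch: the product in the partition monoid P_n.  A partition
# is a list of blocks; a block is a set of labels, with k standing for the
# upper vertex k and -k for the lower vertex k'.

def partition_product(alpha, beta, n):
    # three rows of vertices: 'u' (upper), 'm' (middle, the set X''), 'l'
    # (lower); alpha contributes edges on rows u/m, and beta on rows m/l
    parent = {(row, k): (row, k) for row in 'uml' for k in range(1, n + 1)}

    def find(v):                             # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(v, w):
        parent[find(v)] = find(w)

    def vertex(label, top, bottom):
        return (top, label) if label > 0 else (bottom, -label)

    for blocks, top, bottom in ((alpha, 'u', 'm'), (beta, 'm', 'l')):
        for block in blocks:
            block = sorted(block)
            for x, y in zip(block, block[1:]):
                union(vertex(x, top, bottom), vertex(y, top, bottom))

    # blocks of alpha.beta: traces of the components on the outer rows u, l
    result = {}
    for k in range(1, n + 1):
        result.setdefault(find(('u', k)), set()).add(k)
        result.setdefault(find(('l', k)), set()).add(-k)
    return [frozenset(b) for b in result.values()]

# the partitions of Example eg:PX1 (n = 6, with -k standing for k')
alpha = [{1, 2, 3, -1}, {4, -4, -5, -6}, {5}, {6}, {-2, -3}]
beta = [{1, -4, -6}, {2, 3}, {4, 5, 6, -1, -2, -3}, {-5}]
expected = [{1, 2, 3, -4, -6}, {4, -1, -2, -3}, {5}, {6}, {-5}]
assert set(partition_product(alpha, beta, 6)) == {frozenset(b) for b in expected}
\end{verbatim}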
A reader familiar with partition monoids might protest that we have `cheated' in Example~\ref{eg:PX1}, and that our choice of $\al$ and $\be$ was too `easy' to fully illustrate the complexities of calculating products of partitions. This is indeed the case, as the bottom half of $\al$ `matches' the top half of $\be$ in a way that can hopefully be understood by examining Figure \ref{fig:PX1}, but which will be made precise shortly. Before this, we briefly consider a more `difficult' product:
\begin{eg}\label{eg:PX2}
Figure \ref{fig:PX2} gives another product, this time with $\al,\be\in\PP_{20}$. One can immediately see that there is no such `matching' between the bottom of $\al$ and the top of $\be$. Rather, to calculate the product $\al\be$, one needs to follow paths in the product graph $\Pi(\al,\be)$, often alternating several times between edges coming from $\al$ or $\be$. For example, to see that $\{1',4'\}$ is a block of $\al\be$, one needs to trace the following path (or its reverse):
\[
1' \xrightarrow{\ \be\ } 1'' \xrightarrow{\ \al\ } 2'' \xrightarrow{\ \be\ } 3'' \xrightarrow{\ \al\ } 4'' \xrightarrow{\ \be\ } 4'.
\]
\end{eg}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.6]
\begin{scope}[shift={(0,0)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{1/2,2/3,4/5,7/8,8/9,9/10,11/12,12/13,13/14,14/15,15/16,16/17,17/18,18/19,19/20}
\darcs{1/2,3/4,7/8,8/9,12/13,14/15,16/17,18/19,19/20}
\stlines{3/5,6/6,10/11}
\draw(0.6,1)node[left]{$\al=$};
\draw(21,0)node[right]{$\Pi(\al,\be)$};
\draw[|-|] (21,2)--(21,-2);
\end{scope}
\begin{scope}[shift={(0,-2)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{2/3,5/6,6/7,9/10,11/12,13/14,17/18}
\darcs{2/3,5/6,6/7,7/8,11/12,12/13,13/14,14/15,16/17,17/18,18/19,19/20}
\stlines{1/1,4/4,8/8,15/15}
\draw(0.6,1)node[left]{$\be=$};
\end{scope}
\begin{scope}[shift={(0,-6)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{1/2,2/3,4/5,7/8,8/9,9/10,11/12,12/13,13/14,14/15,15/16,16/17,17/18,18/19,19/20}
\uarcx36{.7}
\darcs{2/3,5/6,6/7,7/8,11/12,12/13,13/14,14/15,16/17,17/18,18/19,19/20}
\darcx14{.7}
\stlines{6/6,10/11}
\draw(0.6,1)node[left]{$\al\be=$};
\end{scope}
\end{tikzpicture}
\caption{Top: partitions $\al,\be\in\PP_{20}$, already connected to create the product graph $\Pi(\al,\be)$. Bottom: the product $\al\be\in\PP_{20}$. For more information, see Example \ref{eg:PX2}.}
\label{fig:PX2}
\end{center}
\end{figure}
We hope that Example \ref{eg:PX2} convinces the reader that products in $\PP_X$ can be `messy'. Of course things get worse when we increase the number of vertices, even more so when $X$ is infinite~\cite{JE2014}. Nevertheless, it is actually quite easy to see that $\PP_X$ is a regular $*$-semigroup. The involution
\[
{}^*:\PP_X\to\PP_X:\al\mt\al^*
\]
can be defined diagrammatically as a reflection in a horizontal axis; see Figure \ref{fig:PX3}. Formally, $\al^*$ is obtained from $\al$ by swapping dashed and undashed vertices, $x\leftrightarrow x'$. It is not hard to see that
\[
(\al^*)^* = \al = \al\al^*\al \AND (\al\be)^*=\be^*\al^* \qquad\text{for all $\al,\be\in\PP_X$.}
\]
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.53]
\begin{scope}[shift={(0,0)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc12
\uarc23
\darc23
\darc45
\darc56
\stline11
\stline44
\draw(0.6,1)node[left]{$\al=$};
\draw[->](7.5,1)--(9.5,1);
\end{scope}
\begin{scope}[shift={(10,0)}]
\uvs{1,...,6}
\lvs{1,...,6}
\darc12
\darc23
\uarc23
\uarc45
\uarc56
\stline11
\stline44
\draw(6.4,1)node[right]{$=\al^*$};
\end{scope}
\end{tikzpicture}
\caption{A partition $\al\in\PP_6$ (left) and its image $\al^*$ under the involution of $\PP_6$.}
\label{fig:PX3}
\end{center}
\end{figure}
At this point it is convenient to recall some further notation and terminology. First, we say that a non-empty subset $A\sub X\cup X'$ is:
\bit
\item a \emph{transversal} if $A\cap X\not=\es$ and $A\cap X'\not=\es$ (i.e.~if $A$ contains both upper and lower vertices),
\item an \emph{upper non-transversal} if $A\sub X$ (i.e.~if $A$ contains only upper vertices), or
\item a \emph{lower non-transversal} if $A\sub X'$ (i.e.~if $A$ contains only lower vertices).
\eit
We can then describe a partition $\al\in\PP_X$ using a convenient two-line notation from \cite{EF2012}. Specifically, we write
\[
\al = \begin{partn}{2} A_i&C_j\\ \hhline{~|-} B_i&D_k\end{partn}_{i\in I,\ j\in J,\ k\in K}
\]
to indicate that $\al$ has transversals $A_i\cup B_i'$ ($i\in I$), upper non-transversals $C_j$ ($j\in J$) and lower non-transversals $D_k$ ($k\in K$). We often abbreviate this to $\al = \begin{partn}{2} A_i&C_j\\ \hhline{~|-} B_i&D_k\end{partn}$, with the indexing sets~$I$,~$J$ and~$K$ being implied, rather than explicitly listed. When $X$ is finite we list the blocks of partitions, \emph{viz.}~$\al = \begin{partn}{6} A_1&\dots&A_q&C_1&\dots&C_s\\ \hhline{~|~|~|-|-|-} B_1&\dots&B_q&D_1&\dots&D_t\end{partn}$. Thus, for example, the partition $\be\in\PP_6$ from Figure~\ref{fig:PX1} can be expressed as
\[
\be = \begin{partn}{3} 1&4,5,6&2,3\\ \hhline{~|~|-} 4,6&1,2,3&5\end{partn}.
\]
With this notation, the identity of $\PP_X$ is the partition $\operatorname{id}_X = \begin{partn}{1} x\\ \hhline{~} x\end{partn}_{x\in X}$, and the involution is given by
\[
\al = \begin{partn}{2} A_i&C_j\\ \hhline{~|-} B_i&D_k\end{partn}
\mt
\al^* = \begin{partn}{2} B_i&D_k\\ \hhline{~|-} A_i&C_j\end{partn}.
\]
Moreover, one can easily see that with $\al=\begin{partn}{2} A_i&C_j\\ \hhline{~|-} B_i&D_k\end{partn}$, and with the notation of Section \ref{sect:RSS}, we have
\[
\bd(\al) = \al\al^* = \begin{partn}{2} A_i&C_j\\ \hhline{~|-} A_i&C_j\end{partn}
\AND
\br(\al) = \al^*\al = \begin{partn}{2} B_i&D_k\\ \hhline{~|-} B_i&D_k\end{partn}.
\]
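For instance, for the partition $\al\in\PP_6$ from Figure \ref{fig:PX1} we have
\[
\bd(\al) = \begin{partn}{4} 1,2,3&4&5&6\\ \hhline{~|~|-|-} 1,2,3&4&5&6\end{partn}
\AND
\br(\al) = \begin{partn}{3} 1&4,5,6&2,3\\ \hhline{~|~|-} 1&4,5,6&2,3\end{partn},
\]
the latter of which can be seen in Figure \ref{fig:PX4} below.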
Thus, consulting Lemma \ref{lem:Green}, it follows that partitions $\al,\be\in\PP_X$ are $\mathrel{\mathscr R}$-related (or $\L$-related) precisely when the `top halves' (or `bottom halves') of $\al,\be$ `match' in a sense that we again hope is clear. Nevertheless, this `matching' can be formalised by defining some further notation.
For $\al\in\PP_X$, we define the \emph{(co)domain} and \emph{(co)kernel} of $\al$ by:
\begin{align*}
\operatorname{dom}(\al) &= \set{x\in X}{x \text{ belongs to a transversal of }\al}, \\
\operatorname{codom}(\al) &= \set{x\in X}{x' \text{ belongs to a transversal of }\al}, \\
\ker(\al) &= \set{(x,y)\in X\times X}{x\text{ and }y \text{ belong to the same block of }\al},\\
\operatorname{coker}(\al) &= \set{(x,y)\in X\times X}{x'\text{ and }y' \text{ belong to the same block of }\al}.
\end{align*}
The \emph{rank} of $\al$, denoted $\operatorname{rank}(\al)$, is the number of transversals of $\al$.
Thus, $\operatorname{dom}(\al)$ and $\operatorname{codom}(\al)$ are subsets of $X$, $\ker(\al)$ and $\operatorname{coker}(\al)$ are equivalences on $X$, and $\operatorname{rank}(\al)$ is a cardinal between~$0$ and~$|X|$. For example, with $\al\in\PP_6$ from Figure \ref{fig:PX1}, we have
\begin{align*}
\operatorname{dom}(\al) &= \{1,2,3,4\} , & \ker(\al) &= (1,2,3\mr\vert4\mr\vert5\mr\vert6) , \\
\operatorname{codom}(\al) &= \{1,4,5,6\} , & \operatorname{coker}(\al) &= (1\mr\vert2,3\mr\vert4,5,6) , & \operatorname{rank}(\al) &= 2,
\end{align*}
where we have indicated equivalences by listing their classes.
The various parts of the following result are contained in \cite{Wilcox2007,FL2011,ER2022}, though some of those papers use different terminology.
\newpage
\begin{lemma}\label{lem:Green_PX}
For $\al,\be\in\PP_X$, we have
\ben
\item $\al \mathrel{\mathscr R} \be \iff [\operatorname{dom}(\al)=\operatorname{dom}(\be)$ and $\ker(\al)=\ker(\be)]$,
\item $\al \L \be \iff [\operatorname{codom}(\al)=\operatorname{codom}(\be)$ and $\operatorname{coker}(\al)=\operatorname{coker}(\be)]$,
\item $\al \mathrel{\mathscr D} \be \iff \al \mathrel{\mathscr J} \be \iff \operatorname{rank}(\al)=\operatorname{rank}(\be)$.
\een
The ${\mathrel{\mathscr D}}={\mathrel{\mathscr J}}$-classes of $\PP_X$ are the sets
\[
D_\mu = D_\mu(\PP_X) = \set{\al\in\PP_X}{\operatorname{rank}(\al)=\mu} \qquad\text{for cardinals $0\leq \mu\leq|X|$.} \epfreseq
\]
Group $\H$-classes contained in $D_\mu$ are isomorphic to the symmetric group $\mathcal S_\mu$.
\end{lemma}
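For example, the partitions $\al,\be\in\PP_6$ from Figure \ref{fig:PX1} both have rank $2$, so they are $\mathrel{\mathscr D}$-related; on the other hand, $\operatorname{dom}(\al)=\{1,2,3,4\}$ while $\operatorname{dom}(\be)=\{1,4,5,6\}$, so $\al$ and $\be$ are not $\mathrel{\mathscr R}$-related.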
Let us now return to the example products considered earlier.
\begin{eg}\label{eg:PX3}
Consider again the partitions $\al,\be\in\PP_6$ from Example \ref{eg:PX1} and Figure \ref{fig:PX1}. The projections
\[
\br(\al) = \al^*\al \AND \bd(\be) = \be\be^*
\]
are calculated in Figure \ref{fig:PX4}. Since we have $\br(\al)=\bd(\be)$, we see that $\al$ and $\be$ are composable in the groupoid $\G(\PP_6)$ from Definition \ref{defn:GS}, so that in fact $\al\be=\al\circ\be$. This explains the `matching' phenomenon discussed above, and gives the reason for the `ease' of forming the product~$\al\be$.
\end{eg}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.53]
\begin{scope}[shift={(0,0)}]
\uvs{1,...,6}
\lvs{1,...,6}
\darc12
\darc23
\uarc23
\uarc45
\uarc56
\stline11
\stline44
\draw(0.6,1)node[left]{$\al^*=$};
\end{scope}
\begin{scope}[shift={(0,-2)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc12
\uarc23
\darc23
\darc45
\darc56
\stline11
\stline44
\draw(0.6,1)node[left]{$\al=$};
\end{scope}
\begin{scope}[shift={(0,-6)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc23
\uarc45
\uarc56
\darc23
\darc45
\darc56
\stline11
\stline44
\draw(0.6,1)node[left]{$\br(\al)=$};
\end{scope}
\begin{scope}[shift={(15,0)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc23
\uarc45
\uarc56
\darc12
\darc23
\darc46
\stline14
\stline43
\draw(0.6,1)node[left]{$\be=$};
\end{scope}
\begin{scope}[shift={(15,-2)}]
\uvs{1,...,6}
\lvs{1,...,6}
\darc23
\darc45
\darc56
\uarc12
\uarc23
\uarc46
\stline41
\stline34
\draw(0.6,1)node[left]{$\be^*=$};
\end{scope}
\begin{scope}[shift={(15,-6)}]
\uvs{1,...,6}
\lvs{1,...,6}
\uarc23
\uarc45
\uarc56
\darc23
\darc45
\darc56
\stline11
\stline44
\draw(0.6,1)node[left]{$\bd(\be)=$};
\end{scope}
\end{tikzpicture}
\caption{The projections $\br(\al)=\al^*\al$ and $\bd(\be)=\be\be^*$, where $\al,\be\in\PP_6$ are as in Figure \ref{fig:PX1}. For more information, see Examples \ref{eg:PX1} and \ref{eg:PX3}.}
\label{fig:PX4}
\end{center}
\end{figure}
\begin{eg}\label{eg:PX4}
On the other hand, the partitions $\al,\be\in\PP_{20}$ from Example \ref{eg:PX2} and Figure \ref{fig:PX2} are not composable in the groupoid $\G = \G(\PP_{20})$. Indeed, one can check that the projections
\[
\br(\al) = \al^*\al \AND \bd(\be) = \be\be^*
\]
are as shown in Figure \ref{fig:PX5}, and we clearly do not have $\br(\al)=\bd(\be)$. However, it will follow from results of later chapters that there exist certain partitions $\al'\leq\al$ and $\be'\leq\be$ (the $\leq$ relation will be defined later) that can `almost' be composed in $\G$. By this we mean that while we still do not have $\br(\al')=\bd(\be')$, we do have $\br(\al') \F \bd(\be')$. (The $\F$ relation was defined in~\eqref{eq:leqFF}.) It will follow from the general theory that
\[
\al\be = \al' \circ \ve \circ \be',
\]
where $\ve\in\G$ is a special element with $\bd(\ve)=\br(\al')$ and $\br(\ve)=\bd(\be')$. The partitions ${\al',\ve,\be'\in\PP_{20}}$ are shown in Figure \ref{fig:PX6}. The reader might like to check that $\ve$ is an idempotent of $\PP_{20}$, i.e.~that~${\ve^2=\ve}$ (but we note that the composition $\ve\circ\ve$ does not exist in $\G$).
\end{eg}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.6]
\begin{scope}[shift={(0,0)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{1/2,3/4,7/8,8/9,12/13,14/15,16/17,18/19,19/20}
\darcs{1/2,3/4,7/8,8/9,12/13,14/15,16/17,18/19,19/20}
\stlines{5/5,6/6,11/11}
\draw(0.6,1)node[left]{$\br(\al)=$};
\end{scope}
\begin{scope}[shift={(0,-4)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{2/3,5/6,6/7,9/10,11/12,13/14,17/18}
\darcs{2/3,5/6,6/7,9/10,11/12,13/14,17/18}
\stlines{1/1,4/4,8/8,15/15}
\draw(0.6,1)node[left]{$\bd(\be)=$};
\end{scope}
\end{tikzpicture}
\caption{The projections $\br(\al)=\al^*\al$ and $\bd(\be)=\be\be^*$, where $\al,\be\in\PP_{20}$ are as in Figure \ref{fig:PX2}. For more information, see Examples \ref{eg:PX2} and \ref{eg:PX4}.}
\label{fig:PX5}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.6]
\begin{scope}[shift={(0,-2)}]
\darccols{2/3,5/6,6/7,9/10,11/12,13/14,17/18}{red}
\uarccols{1/2,3/4,5/6,7/8,8/9,12/13,14/15,16/17,18/19,19/20}{red}
\stlinecols{6/8,11/15}{red}
\darcxcol14{.7}{red}
\draw(0.6,1)node[left]{$\ve=$};
\end{scope}
\begin{scope}[shift={(0,0)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{1/2,2/3,4/5,7/8,8/9,9/10,11/12,12/13,13/14,14/15,15/16,16/17,17/18,18/19,19/20}
\uarcx36{.7}
\darcs{1/2,3/4,5/6,7/8,8/9,12/13,14/15,16/17,18/19,19/20}
\stlines{6/6,10/11}
\draw(0.6,1)node[left]{$\al'=$};
\draw(21,-1)node[right]{$\Pi(\al',\ve,\be')$};
\draw[|-|] (21,2)--(21,-4);
\end{scope}
\begin{scope}[shift={(0,-4)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{2/3,5/6,6/7,9/10,11/12,13/14,17/18}
\darcs{2/3,5/6,6/7,7/8,11/12,12/13,13/14,14/15,16/17,17/18,18/19,19/20}
\stlines{8/8,15/15}
\uarcx14{.7}
\darcx14{.7}
\draw(0.6,1)node[left]{$\be'=$};
\end{scope}
\begin{scope}[shift={(0,-8)}]
\uvs{1,...,20}
\lvs{1,...,20}
\uarcs{1/2,2/3,4/5,7/8,8/9,9/10,11/12,12/13,13/14,14/15,15/16,16/17,17/18,18/19,19/20}
\uarcx36{.7}
\darcs{2/3,5/6,6/7,7/8,11/12,12/13,13/14,14/15,16/17,17/18,18/19,19/20}
\darcx14{.7}
\stlines{6/6,10/11}
\draw(0.6,1)node[left]{$\al'\circ\ve\circ\be'=$};
\end{scope}
\end{tikzpicture}
\caption{Top: partitions $\al',\ve,\be'\in\PP_{20}$, with the edges of $\ve$ shown in red. Bottom: the composition $\al'\circ\ve\circ\be'$ in the groupoid $\G(\PP_{20})$. Note that $\al'\circ\ve\circ\be' = \al\be$, where $\al,\be\in\PP_{20}$ are as in Figure \ref{fig:PX2}. For more information, see Examples \ref{eg:PX2} and~\ref{eg:PX4}.}
\label{fig:PX6}
\end{center}
\end{figure}
We conclude this section with a brief discussion of the relations $\leq$, $\leqF$ and $\F$ on the set $P=P(\PP_X)$ of projections of a partition monoid $\PP_X$. These relations were defined in \eqref{eq:leq} and~\eqref{eq:leqFF}. First, one can check that the projections of the partition monoid $\PP_2$ are the six partitions shown in Figure \ref{fig:PX11}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.4]
\begin{scope}[shift={(0,0)}]
\tuv1\tuv2\tlv1\tlv2\stline11\stline22
\end{scope}
\begin{scope}[shift={(5,0)}]
\tuv1\tuv2\tlv1\tlv2\stline11
\end{scope}
\begin{scope}[shift={(10,0)}]
\tuv1\tuv2\tlv1\tlv2\stline11\stline22\uarc12\darc12
\end{scope}
\begin{scope}[shift={(15,0)}]
\tuv1\tuv2\tlv1\tlv2\stline22
\end{scope}
\begin{scope}[shift={(20,0)}]
\tuv1\tuv2\tlv1\tlv2
\end{scope}
\begin{scope}[shift={(25,0)}]
\tuv1\tuv2\tlv1\tlv2\uarc12\darc12
\end{scope}
\end{tikzpicture}
\caption{The projections of the partition monoid $\PP_2$.}
\label{fig:PX11}
\end{center}
\end{figure}
Figure \ref{fig:PX7} shows the partial order $\leq$ on $P(\PP_2)$, as defined in \eqref{eq:leq}; as usual for Hasse diagrams, only covering relationships are shown, and the rest can be deduced from transitivity (and reflexivity). Figure \ref{fig:PX8} shows the relations~$\leqF$ and $\F$ on $P(\PP_2)$, as defined in \eqref{eq:leqFF}, and we remind the reader that these are not transitive. For example, we have
\[
\begin{tikzpicture}[scale=.4]
\begin{scope}[shift={(5,0)}]
\tuv1\tuv2\tlv1\tlv2\stline11 \node() at (4,1){$\F$};
\end{scope}
\begin{scope}[shift={(10,0)}]
\tuv1\tuv2\tlv1\tlv2\stline11\stline22\uarc12\darc12 \node() at (4,1){$\F$};
\end{scope}
\begin{scope}[shift={(15,0)}]
\tuv1\tuv2\tlv1\tlv2\stline22 \node() at (2.75,.5){,};
\end{scope}
\end{tikzpicture}
\]
even though the first and third of the above projections are not $\F$-related, or even $\leqF$-related.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\node (a) at (0,4) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\stline22\end{tikzpicture}};
\node (b) at (-2,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\end{tikzpicture}};
\node (c) at (0,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\stline22\uarc12\darc12\end{tikzpicture}};
\node (d) at (2,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline22\end{tikzpicture}};
\node (e) at (-1,0) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\end{tikzpicture}};
\node (f) at (1,0) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\uarc12\darc12\end{tikzpicture}};
\foreach \x/\y in {e/b,e/d,f/c,b/a,c/a,d/a} {\draw[-{latex}] (\x)--(\y);}
\end{tikzpicture}
\caption{Hasse diagram of the poset $P(\PP_2)$, with respect to the order $\leq$ given in \eqref{eq:leq}. An arrow $p\to q$ means $p\leq q$.}
\label{fig:PX7}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\node (a) at (0,4) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\stline22\end{tikzpicture}};
\node (b) at (-2,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\end{tikzpicture}};
\node (c) at (0,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\stline22\uarc12\darc12\end{tikzpicture}};
\node (d) at (2,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline22\end{tikzpicture}};
\node (e) at (-1,0) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\end{tikzpicture}};
\node (f) at (1,0) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\uarc12\darc12\end{tikzpicture}};
\foreach \x/\y in {b/a,c/a,d/a,e/a,f/a,b/c,c/b,c/d,d/c,e/f,f/e,e/b,e/c,e/d,f/b,f/c,f/d} {\draw[-{latex}] (\x)--(\y);}
\begin{scope}[shift={(8,0)}]
\node (a) at (0,4) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\stline22\end{tikzpicture}};
\node (b) at (-2,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\end{tikzpicture}};
\node (c) at (0,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline11\stline22\uarc12\darc12\end{tikzpicture}};
\node (d) at (2,2) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\stline22\end{tikzpicture}};
\node (e) at (-1,0) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\end{tikzpicture}};
\node (f) at (1,0) {\begin{tikzpicture}[scale=.5]\uv1\uv2\lv1\lv2\uarc12\darc12\end{tikzpicture}};
\draw (b)--(c)--(d) (e)--(f);
\end{scope}
\end{tikzpicture}
\caption{Left: the relation $\leqF$ on $P(\PP_2)$; an arrow $p\to q$ means $p\leqF q$. Right: the relation $\F$ on $P(\PP_2)$; an edge $p-q$ means $p\F q$. These relations are given in \eqref{eq:leqFF}. In both diagrams, loops are omitted.}
\label{fig:PX8}
\end{center}
\end{figure}
In both Figures \ref{fig:PX7} and \ref{fig:PX8}, the elements of $P=P(\PP_2)$ have been arranged so that each row consists of $\mathrel{\mathscr D}$-related elements; cf.~Lemma \ref{lem:Green_PX}. So from top to bottom, the rows correspond to the projections from the $\mathrel{\mathscr D}$-classes $D_2$, $D_1$ and $D_0$, where
\[
D_i=D_i(\PP_2)=\set{\al\in\PP_2}{\operatorname{rank}(\al)=i} \qquad\text{for $i=0,1,2$.}
\]
\newpage\noindent
For $i=0,1,2$, we write $P_i = P\cap D_i$ for the set of projections from $D_i$. The right-hand graph in Figure \ref{fig:PX8} is the graph $\Ga(\PP_2)$, as defined at the end of Section \ref{sect:RSS}. We denote this by $\Ga=\Ga(\PP_2)$; so the vertex set of $\Ga$ is $P=P(\PP_2)$, and $\Ga$ has an undirected edge $\{p,q\}$ precisely when $p\not=q$ and $p\F q$.
As in \eqref{eq:FD}, $\F$-related projections are $\mathrel{\mathscr D}$-related. Thus,
\[
\Ga=\Ga_0\sqcup\Ga_1\sqcup\Ga_2
\]
decomposes as the disjoint union of the induced subgraphs $\Ga_i$ on the vertex sets $P_i$, for $i=0,1,2$. It is clear that each $\Ga_i$ is connected, but this is not necessarily the case for an arbitrary regular $*$-semigroup~$S$.
In general, for a regular $*$-semigroup $S$, we still have the graph $\Ga(S)$, and we still have the decomposition
\[
\Ga(S) = \bigsqcup_{D\in S/{\mathrel{\mathscr D}}} \Ga(D),
\]
where $\Ga(D)$ is the induced subgraph of $\Ga(S)$ on vertex set $P(S)\cap D$, for each $\mathrel{\mathscr D}$-class $D$ of $S$. However, the subgraphs $\Ga(D)$ need not be connected in general. For example, if $S$ is inverse, then the $\F$-relation is trivial; it follows that each $\Ga(D)$ is discrete (has empty edge set), and hence is disconnected if $D$ contains more than one idempotent. On the other hand, it follows from results of \cite{EG2017} that the graphs $\Ga(D)$ are connected for any $\mathrel{\mathscr D}$-class $D$ of a finite partition monoid $\PP_n$, and this turns out to be equivalent to certain facts about minimal idempotent-generation of the proper ideals of $\PP_n$. We have seen this connectivity property in the case $n=2$, above. Figure~\ref{fig:PX9} shows the graph $\Ga(\PP_3)$, produced using the Semigroups package for GAP \cite{Semigroups,GAP}. From left to right, the connected components of the graph $\Ga(\PP_3)$ in Figure \ref{fig:PX9} are the induced subgraphs $\Ga(D_3)$, $\Ga(D_2)$, $\Ga(D_1)$ and $\Ga(D_0)$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\textwidth]{FP3_3.pdf}
\caption{The graph $\Ga(\PP_3)$, produced by GAP. }
\label{fig:PX9}
\end{center}
\end{figure}
\newpage
As a special case, the induced subgraph $\Ga(D_{n-1})$ of the graph $\Ga(\PP_n)$ corresponding to the $\mathrel{\mathscr D}$-class $D_{n-1}=D_{n-1}(\PP_n)$ has a very interesting structure. First, one can easily check that the projections of $\PP_n$ contained in $D_{n-1}$ have the form
\[
\begin{tikzpicture}[scale=.5]
\begin{scope}[shift={(0,0)}]
\foreach \x in {1,3,4,5,7} {\uv\x\lv\x}
\foreach \x in {1,3,5,7} {\stline\x\x}
\foreach \x in {2,6} {\node()at(\x,2){$\cdots$};\node()at(\x,0){$\cdots$};}
\foreach \x/\y in {1/1,4/i,7/n} {\node()at(\x,2.5){\footnotesize $\y$};}
\draw(0.6,1)node[left]{$\pi_i=$};
\end{scope}
\begin{scope}[shift={(13,0)}]
\foreach \x in {1,3,4,5,7,8,9,11} {\uv\x\lv\x\stline\x\x}
\foreach \x in {2,6,10} {\node()at(\x,2){$\cdots$};\node()at(\x,0){$\cdots$};}
\foreach \x/\y in {1/1,4/j,8/k,11/n} {\node()at(\x,2.5){\footnotesize $\y$};}
\darc48
\uarc48
\draw(0.6,1)node[left]{and \qquad $\pi_{jk}=$};
\end{scope}
\end{tikzpicture}
\]
for each $1\leq i\leq n$ and $1\leq j<k\leq n$. It is also easy to check that the only non-identical $\F$-relations among these projections are
\[
\pi_i\F\pi_{ij}\F\pi_j.
\]
The graph $\Ga(D_{n-1})$ is shown in Figure \ref{fig:PX10} in the case $n=5$. In the figure, vertices representing projections $\pi_i$ or $\pi_{jk}$ are labelled simply with the subscripts $i$ or $jk$.
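For instance, the relation $\pi_i\F\pi_{ij}$ can be checked directly by multiplying diagrams: one finds that
\[
\pi_i\pi_{ij}\pi_i = \pi_i \AND \pi_{ij}\pi_i\pi_{ij} = \pi_{ij},
\]
and (recalling the conjugation maps of \eqref{eq:thpS}) these two identities are precisely the statement that $\pi_i\F\pi_{ij}$; cf.~\eqref{eq:leqFF}.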
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=3]
\tikzstyle{vertex}=[circle,draw=black, fill=white, inner sep = 0.07cm]
\draw (0,0) circle (1);
\node[vertex] (1) at (90:1) { $1$ };
\node[vertex] (2) at (90-72:1) { $2$ };
\node[vertex] (3) at (90-72-72:1) { $3$ };
\node[vertex] (4) at (90-72-72-72:1) { $4$ };
\node[vertex] (5) at (90-72-72-72-72:1) { $5$ };
\node[vertex] (12) at (90-36:1) {\footnotesize $12$ };
\node[vertex] (13) at (0.293890801 ,0.095490177) {\footnotesize $13$ };
\node[vertex] (14) at (-0.293894022 ,0.095492517) {\footnotesize $14$ };
\node[vertex] (15) at (90-36-72-72-72-72:1) {\footnotesize $15$ };
\node[vertex] (23) at (90-36-72:1) {\footnotesize $23$ };
\node[vertex] (24) at (0.181635098 ,-0.250001636) {\footnotesize $24$ };
\node[vertex] (25) at (0 ,0.30901548) {\footnotesize $25$ };
\node[vertex] (34) at (90-36-72-72:1) {\footnotesize $34$ };
\node[vertex] (35) at (-0.181637088 ,-0.25000019) {\footnotesize $35$ };
\node[vertex] (45) at (90-36-72-72-72:1) {\footnotesize $45$ };
\foreach \x/\y in {1/3,1/4,2/4,2/5,3/5} {\draw(\x)--(\x\y)--(\y);}
\end{tikzpicture}
\end{center}
\caption{The graph $\Ga(D_4)$, where $D_4=D_4(\PP_5)$.}
\label{fig:PX10}
\end{figure}
\newpage
The idempotent-generated subsemigroup $\E(\PP_X)=\la E(\PP_X)\ra=\la P(\PP_X)\ra$ of a partition monoid~$\PP_X$ was described for finite and infinite $X$ in \cite{JEpnsn} and \cite{EF2012}, respectively. One of the main results of~\cite{JEpnsn} is that finite~$\E(\PP_n)$ is (minimally) generated as a monoid by the set
\[
\Om = \set{\pi_i,\pi_{jk}}{1\leq i\leq n,\ 1\leq j<k\leq n}
\]
of all projections of rank $n-1$, and that in fact
\[
\E(\PP_n) = \{\operatorname{id}_n\}\cup\operatorname{Sing}(\PP_n),
\]
where here ${\operatorname{Sing}(\PP_n)=\PP_n\setminus\mathcal S_n}$ is the \emph{singular ideal} of $\PP_n$, consisting of all non-invertible elements. The second main result was a presentation for $\operatorname{Sing}(\PP_n)$ in terms of the above generating set $\Om$. For example, one of the defining relations was
\begin{equation}\label{eq:triangle}
\pi_i \pi_{ij} \pi_j \pi_{jk} \pi_k \pi_{ki} \pi_i = \pi_i \pi_{ik} \pi_k \pi_{kj} \pi_j \pi_{ji} \pi_i \qquad\text{for distinct $1\leq i,j,k\leq n$,}
\end{equation}
where here we use symmetrical notation $\pi_{st}=\pi_{ts}$. Examining Figure \ref{fig:PX10}, one can see that the two words in this equation represent two different triangular paths in the graph $\Ga(D_{n-1})$:
\[
i\rightsquigarrow j\rightsquigarrow k\rightsquigarrow i \AND i\rightsquigarrow k\rightsquigarrow j\rightsquigarrow i.
\]
(In representing the above paths we have omitted the intermediate vertices, so $i\rightsquigarrow j$ is shorthand for $i\to ij\to j$, and so on.) Since paths in this graph correspond to lists of sequentially $\F$-related projections, the above words represent factorisations of the form described in Proposition~\ref{prop:ERSS}, and the corresponding tuples $(\pi_i,\pi_{ij},\pi_j,\ldots)$ and $(\pi_i,\pi_{ik},\pi_k,\ldots)$ are examples of what we will later call \emph{$P$-paths}; see Section \ref{sect:PP}. The two factorisations represent the following partition, pictured in the case~${i<j<k}$:
\[
\begin{tikzpicture}[scale=.5]
\foreach \x in {1,3,4,5,7,8,9,11,12,13,15} {\uv\x\lv\x}
\foreach \x in {1,3,5,7,9,11,13,15} {\stline\x\x}
\stline8{12}
\stline{12}8
\foreach \x in {2,6,10,14} {\node()at(\x,2){$\cdots$};\node()at(\x,0){$\cdots$};}
\foreach \x/\y in {1/1,4/i,8/j,12/k,15/n} {\node()at(\x,2.5){\footnotesize $\y$};}
\end{tikzpicture}
\]
Much more could be said, but we will conclude our preliminary discussion of partition monoids here. We will return to them at various stages throughout the text.
\newpage
\part{Structure}\label{part:structure}
This first part of the paper is devoted to our main structural result, Theorem \ref{thm:iso}. This theorem states that the category of regular $*$-semigroups is isomorphic to the category of so-called \emph{chained projection groupoids}. Such groupoids play the same role in the theory of regular $*$-semigroups that inductive groupoids play in inverse semigroup theory. Roughly speaking, we do not assume that the object sets of our groupoids are semilattices (as per inductive groupoids), but rather \emph{projection algebras}. These are unary algebras that abstractly model the projections of regular $*$-semigroups, together with their `conjugation actions' as in \eqref{eq:thpS}. In fact, we will see that a groupoid simply having a projection algebra for its object set is not enough. Rather, our chained projection groupoids are \emph{pairs} $(\G,\ve)$, where $\G$ is a groupoid with $v\G=P$ a projection algebra, and where $\ve$ is a certain special functor that encapsulates the strong relationship between the groupoid structure of $\G$ and the algebra structure of $P$. The domain of the functor $\ve$ is another groupoid, $\C=\C(P)$, which we call the \emph{chain groupoid} of $P$. In a sense, $\C$ models the behaviour of $P$ at the most `free' level, somehow recording precisely the information that holds in \emph{every} regular $*$-semigroup with projection algebra $P$. The full extent of this `free-ness' will be explored in more detail in the second part of the paper; see Chapter \ref{chap:E}.
This part of the paper contains four chapters:
\bit
\item Chapter \ref{chap:P} introduces projection algebras, and their chain groupoids.
\item Chapter \ref{chap:G} defines chained projection groupoids, and shows how to associate a regular $*$-semigroup to each such groupoid; see Theorem \ref{thm:SGve}.
\item Chapter \ref{chap:S} goes in the reverse direction, and shows how a regular $*$-semigroup gives rise to a chained projection groupoid; see Theorem \ref{thm:GveS}.
\item Chapter \ref{chap:iso} contains the main result, Theorem \ref{thm:iso}, which shows that the constructions of Chapters \ref{chap:G} and \ref{chap:S} are in fact mutually inverse isomorphisms between the categories of regular $*$-semigroups and chained projection groupoids.
\eit
The introduction of each chapter contains a detailed summary of its structure, and of the results it contains.
\section{Projection algebras}\label{chap:P}
One of the key ideas in this paper is that of a \emph{projection algebra}. Such an algebra consists of a set $P$, along with a family $\th_p$ ($p\in P$) of unary operations, one for each element of $P$. These operations are required to satisfy certain axioms, as listed in Definition \ref{defn:P} below, and are meant to abstractly model the unary algebras of projections of regular $*$-semigroups. This is in fact not a new concept. Indeed, projection algebras have appeared in a number of settings, under a variety of names, including the \emph{$P$-groupoids} of Imaoka \cite{Imaoka1983}, the \emph{$P$-sets} of Yamada \cite{Yamada1981}, (certain special) \emph{$P$-sets} of Nambooripad and Pastijn \cite{NP1985}, and the \emph{(left and right) projection algebras} of Jones~\cite{Jones2012}. (We also mention Yamada's \emph{$p$-systems} \cite{Yamada1982}; these are defined in a very different way, but for a similar purpose. Strictly speaking, Yamada's $P$-sets are different to Imaoka's $P$-groupoids, but are ultimately equivalent.) We prefer Jones' terminology `projection algebras', since these are indeed algebras, not just sets, though we note that Jones considered \emph{binary} algebras rather than Imaoka's \emph{unary} algebras, which we use here; see Remark \ref{rem:diamond}. We also only use the term `groupoid' in its usual categorical sense. Imaoka used it in one of its less-common meanings, stemming from the fact that a projection algebra also has a partially defined binary operation; curiously this partial operation plays no role in the current work.
Although the concept of a projection algebra is not new, here we build several new categorical structures on top of such algebras. The most general situation is covered in Chapter \ref{chap:G} below, where we have an ordered groupoid $\G$ whose object set $v\G=P$ is a projection algebra. The current chapter lays the foundation for this, by showing how to construct various categories directly from a projection algebra. The main such construction is the \emph{chain groupoid} $\C=\C(P)$; this will be used in the definition of the groupoids $\G$ in Chapter \ref{chap:G}, but we will uncover its deeper categorical significance in Chapter \ref{chap:E}.
We begin in Section \ref{sect:P} by giving/recalling the definition of projection algebras (see Definition~\ref{defn:P}), and establishing some of their important properties. In Sections~\ref{sect:PP} and~\ref{sect:CP}, respectively, we introduce the \emph{path category} $\P=\P(P)$ and the \emph{chain groupoid} $\C=\C(P)$ of a projection algebra~$P$ (see Definitions \ref{defn:PP} and \ref{defn:CP}). The groupoid $\C$ is defined as a quotient of the category~$\P$ by a certain congruence $\approx$, whose definition requires the somewhat-technical notion of \emph{linked pairs}, which are the subject of Section \ref{sect:LP}.
\subsection{Definitions and basic properties}\label{sect:P}
Here then is the key definition:
\begin{defn}\label{defn:P}
A \emph{projection algebra} is a set $P$, together with a collection of unary operations~$\th_p$~($p\in P$) satisfying the following axioms, for all $p,q\in P$:
\begin{enumerate}[label=\textup{\textsf{(P\arabic*)}},leftmargin=10mm]
\item \label{P1} $p\theta_p = p$,
\item \label{P2} $\theta_p\theta_p=\theta_p$,
\item \label{P3} $p\theta_q\theta_p=q\theta_p$,
\item \label{P4} $\theta_p\theta_q\theta_p=\theta_{q\theta_p}$,
\item \label{P5} $\theta_p\theta_q\theta_p\theta_q=\theta_p\theta_q$.
\end{enumerate}
The elements of a projection algebra are called \emph{projections}.
\end{defn}
Strictly speaking, one should refer to `a projection algebra $(P,\th)$', but we will almost always use the symbol $\th$ to denote the unary operations of such an algebra, and so typically refer to `a projection algebra $P$'. On the small number of occasions we need to refer simultaneously to more than one projection algebra, we will be careful to distinguish the operations.
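Before examining the axioms more closely, it may help to keep a very simple example in mind. Any meet semilattice $(P,\wedge)$ becomes a projection algebra by setting $q\th_p=q\wedge p$ for all $p,q\in P$. For instance, \ref{P4} holds because
\[
r\th_p\th_q\th_p = r\wedge p\wedge q\wedge p = r\wedge(q\wedge p) = r\th_{q\th_p} \qquad\text{for all $p,q,r\in P$,}
\]
and \ref{P1}--\ref{P3} and \ref{P5} are verified just as easily. In this example the relations $\leq$ and $\leqF$ defined in \eqref{eq:leqP} and \eqref{eq:leqF} below both coincide with the semilattice order, and the relation $\F$ of \eqref{eq:F} is simply equality. Projection algebras of this kind arise from inverse semigroups (regarded as regular $*$-semigroups with $x^*=x^{-1}$), where the projections are the idempotents and conjugation reduces to multiplication of commuting idempotents; as observed earlier, the $\F$-relation of an inverse semigroup is trivial.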
\begin{rem}\label{rem:diamond}
It is worth noting at the outset that the collection of unary operations~$\th_p$~($p\in P$) could be replaced by a single binary operation $\diamond$, defined by $q\diamond p = q\th_p$. For example, Jones took this binary approach in \cite{Jones2012}, in his study of the more general $P$-restriction semigroups, though his preferred symbol was $\star$, which we reserve for other uses throughout the paper. We prefer the unary approach of Imaoka \cite{Imaoka1983}, however, for three main reasons:
\bit
\item First, we feel that the meaning of (some of) axioms \ref{P1}--\ref{P5} is clearer in the unary context. For example, the binary form of~\ref{P4} is
\[
r\diamond(q\diamond p) = ((r\diamond p)\diamond q)\diamond p \qquad\text{for all $p,q,r\in P$.}
\]
As we will discuss in more detail in Remark \ref{rem:G1}, axiom \ref{P4} can be thought of as a rule for `iterating' the unary operations, and we feel that this intuition is somewhat lost in the binary form of \ref{P4} above, although this is of course entirely subjective. (We also note that Jones' axioms are different to \ref{P1}--\ref{P5}, though they are equivalent, as he explains in~\cite[Section~7]{Jones2012}.)
\item The non-associativity of $\diamond$ means that brackets are necessary when working with $\diamond$-terms. On the other hand, associativity of function composition allows us to dispense with bracketing when using the $\th$ maps, and this is a significant advantage when dealing with lengthy terms.
\item In later chapters, we will consider groupoids $\G$ whose object sets are projection algebras,~${v\G=P}$. One of the key tools when studying such groupoids are certain maps
\[
\Th_a=\th_{\bd(a)}\vt_a:P\to P
\]
that are built from the unary operations of $P$ and the maps $\vt_a:\bd(a)^\da\to\br(a)^\da$ defined in~\eqref{eq:vta}. These $\Th$ maps could certainly be defined in terms of $\diamond$ and the $\vt$ maps, \emph{viz.}
\[
p\Th_a=(p\diamond\bd(a))\vt_a,
\]
but their definition as a composition $\Th_a=\th_{\bd(a)}\vt_a$ is more direct, and leads to more succinct statements and proofs, as for example with Proposition \ref{prop:G1}. The direct definition as a composition also helps to clarify the very formulation of our so-called (chained) projection groupoids; see Definitions \ref{defn:PG} and \ref{defn:CPG}.
\eit
\end{rem}
As we have already observed, the axioms \ref{P1}--\ref{P5} are abstractions of the properties of projections of regular $*$-semigroups; cf.~Lemma \ref{lem:PS2}. It will transpire (see especially Chapters~\ref{chap:E} and~\ref{chap:F}) that abstract projection algebras are precisely the projection algebras of regular $*$-semigroups, as is already known \cite{Imaoka1983}. For the eager reader, Example \ref{eg:FP} below provides a construction (without proof at this stage) of a regular $*$-semigroup with projection algebra $P$. In any case, the reader can keep in mind that (abstract) projection algebras are supposed to `model' the behaviour of algebras of projections of regular $*$-semigroups; thus, one can interpret every result in this chapter in the context of regular $*$-semigroups, and can gain an intuition for their meaning in that context.
\begin{eg}\label{eg:FP}
Consider a projection algebra $P$. Since an operation $\th_p$ is a map $P\to P$, we can think of it as an element of the \emph{full transformation monoid} $\T_P$. (This is the monoid of \emph{all} maps~$P\to P$, under composition.) The maps $\th_p$ generate the subsemigroup
\[
S_P = \pres{\th_p}{p\in P} = \set{\th_{p_1}\cdots\th_{p_k}}{k\geq1,\ p_1,\ldots,p_k\in P} \leq \T_P.
\]
So $S_P$ is an idempotent-generated semigroup (cf.~\ref{P2}), but it is generally not a regular $*$-semigroup. However, we can alter it to create a regular $*$-semigroup. Indeed, first we write~$\T_P^{\operatorname{op}}$ for the \emph{opposite} semigroup to $\T_P$. So $\T_P^{\operatorname{op}}$ has the same underlying set as $\T_P$, and its product~$\star$ is given by $\al\star\be=\be\al$. We then consider the subsemigroup of the direct product $\T_P\times\T_P^{\operatorname{op}}$ generated by all pairs $(\th_p,\th_p)$:
\[
\FF_P = \pres{(\th_p,\th_p)}{p\in P} = \set{(\th_{p_1}\cdots\th_{p_k},\th_{p_k}\cdots\th_{p_1})}{k\geq1,\ p_1,\ldots,p_k\in P} \leq \T_P\times\T_P^{\operatorname{op}}.
\]
It turns out that $\FF_P$ is a regular $*$-semigroup with projection algebra (isomorphic to) $P$. In fact, in Chapter \ref{chap:F} we will see that (up to isomorphism) $\FF_P$ is the unique idempotent-generated fundamental regular $*$-semigroup with projection algebra $P$; see Theorem \ref{thm:muCP} and Remark \ref{rem:FP}. Note that $\FF_P$ is a subdirect product of $S_P$ and $S_P^{\operatorname{op}}$.
\end{eg}
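For small examples, the semigroup $\FF_P$ can easily be computed by machine. The following Python sketch is again only an illustrative aside, under our own (hypothetical) encoding: the operations $\th_p$ are supplied as dictionaries $P\to P$, maps are composed on the right (so that \texttt{compose(a, c)} means `$a$ then $c$'), and the generating pairs $(\th_p,\th_p)$ are closed under the product $(a,b)(c,d)=(ac,db)$ of $\T_P\times\T_P^{\operatorname{op}}$.
\begin{verbatim}
def compose(f, g):
    """Composite 'f then g' of two maps P -> P, written on the right."""
    return {x: g[f[x]] for x in f}

def ff_p(theta):
    """Elements of FF_P, given the projection algebra operations as a
    dictionary  theta[p] = {q: image of q under theta_p}."""
    def key(pair):                       # hashable form of a pair of maps
        return tuple(tuple(sorted(f.items())) for f in pair)

    gens = [(t, t) for t in theta.values()]
    elements = {key(g): g for g in gens}
    frontier = list(gens)
    while frontier:                      # close under products on both sides
        new = []
        for (a, b) in frontier:
            for (c, d) in list(elements.values()):
                for prod in ((compose(a, c), compose(d, b)),
                             (compose(c, a), compose(b, d))):
                    k = key(prod)
                    if k not in elements:
                        elements[k] = prod
                        new.append(prod)
        frontier = new
    return list(elements.values())

# Example: the two-element semilattice {0, 1} with q theta_p = min(q, p).
# Here  theta = {p: {q: min(p, q) for q in (0, 1)} for p in (0, 1)},  and
# ff_p(theta) returns the two pairs (theta_0, theta_0) and (theta_1, theta_1).
\end{verbatim}
This brute-force closure is of course only feasible for very small $P$.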
For the rest of this section we fix a projection algebra $P$. In what follows, for $x,y\in P$, we write $x=_1y$ to indicate that $x=y$ by an application of \ref{P1}, and similarly for $=_2$, and so on.
\begin{lemma}\label{lem:P}
For any $p_1,\ldots,p_k,q,r\in P$ we have
\[
\th_{q\th_{p_1}\cdots\th_{p_k}} = \th_{p_k}\cdots\th_{p_1}\th_q\th_{p_1}\cdots\th_{p_k}.
\]
\end{lemma}
\pf
This follows by iterating \ref{P4}: for example, for $k=2$ two applications of \ref{P4} give $\th_{q\th_{p_1}\th_{p_2}} = \th_{p_2}\th_{q\th_{p_1}}\th_{p_2} = \th_{p_2}\th_{p_1}\th_q\th_{p_1}\th_{p_2}$.
\epf
A number of relations on $P$ will play a crucial role in all that follows. The first of these is defined by
\begin{align}
\label{eq:leqP} p\leq q &\Iff p=p\th_q.
\intertext{By \ref{P2}, it quickly follows that}
\label{eq:leqP2} p\leq q &\Iff p = r\th_q \qquad\text{for some $r\in P$.}
\end{align}
\begin{lemma}\label{lem:leqP}
$\leq$ is a partial order on $P$.
\end{lemma}
\pf
Reflexivity follows from \ref{P1}. For anti-symmetry, suppose $p\leq q$ and $q\leq p$, so that $p=p\th_q$ and $q=q\th_p$. Then
\[
p =_1 p\th_p = p\th_q\th_p =_3 q\th_p = q.
\]
For transitivity, suppose $p\leq q$ and $q\leq r$, so that $p=p\th_q$ and $q=q\th_r$. Then
\[
p = p\th_q = p\th_{q\th_r} =_4 p\th_r\th_q\th_r.
\]
By \eqref{eq:leqP2}, this implies $p\leq r$.
\epf
By \eqref{eq:leqP2}, the image of $\th_p:P\to P$ is precisely
\begin{equation}\label{eq:imthp}
\im(\th_p) = p^\da = \set{q\in P}{q\leq p},
\end{equation}
the down-set of $p$ in the poset $(P,\leq)$.
\begin{lemma}\label{lem:thpthq}
If $p\leq q$, then $\th_p=\th_p\th_q = \th_q\th_p$.
\end{lemma}
\pf
Since $p=p\th_q$, we have $\th_p = \th_{p\th_q} =_4 \th_q\th_p\th_q$. The claim then quickly follows from~\ref{P2}.
\epf
An equally important role will be played by two further relations, $\leqF$ and $\F$, defined as follows. For $p,q\in P$, we say that
\begin{equation}\label{eq:leqF}
p \leqF q \Iff p = q\th_p .
\end{equation}
By \ref{P1}, $\leqF$ is reflexive, but it need not be transitive. We also define
\begin{equation}\label{eq:F}
{\F} = {\leqF}\cap{\geqF},
\end{equation}
which is the largest symmetric (and reflexive) relation contained in $\leqF$. So
\[
p \F q \Iff p = q\th_p \text{ and } q=p\th_q.
\]
\begin{lemma}\label{lem:pqp}
For any $p,q\in P$,
\ben\bmc2
\item \label{pqP1} $p\th_q\leqF p$,
\item \label{pqP2} $p\leq q\implies p\leqF q$,
\item \label{pqp1} $p\leqF q \implies \th_p=\th_p\th_q\th_p$,
\item \label{pqp2} $p\F q\implies\th_p=\th_p\th_q\th_p$ and $\th_q=\th_q\th_p\th_q$.
\emc\een
\end{lemma}
\pf
\firstpfitem{\ref{pqP1}} We have $p\th_{p\th_q} =_4 p\th_q\th_p\th_q =_3 q\th_p\th_q =_3 p\th_q$.
\pfitem{\ref{pqP2}} If $p\leq q$, then $p=p\th_q$, so $q\th_p = q\th_{p\th_q} =_4 q\th_q\th_p\th_q =_1 q\th_p\th_q =_3 p\th_q = p$.
\pfitem{\ref{pqp1} and \ref{pqp2}} These follow immediately from \ref{P4}.
\epf
\begin{rem}
The partial order $\leq$ from \eqref{eq:leqP} was used by Imaoka in \cite{Imaoka1983}, where it was defined by
\[
p\leq q \Iff p=p\th_q=q\th_p.
\]
Note that part \ref{pqP2} of the lemma just proved means that the `$=q\th_p$' part of Imaoka's definition is superfluous. Jones also used the order $\leq$ in \cite{Jones2012}, defined exactly as in \eqref{eq:leqP}, albeit in binary form, $p\leq q \iff p=p\diamond q$ (cf.~Remark~\ref{rem:diamond}). We are not aware of any previous use of the $\leqF$ relation in the literature, however, despite its central importance in the current work.
\end{rem}
The following simple result will be crucial in later chapters. Roughly speaking, it says that any pair of projections $p,q\in P$ sit above an $\F$-related pair $p',q'\in P$.
\begin{lemma}\label{lem:p'q'}
Let $p,q\in P$ be arbitrary, and let $p'=q\th_p$ and $q'=p\th_q$. Then
\[
p'\leq p \COMMA q'\leq q \AND p'\F q'.
\]
\end{lemma}
\pf
We obtain $p'\leq p$ and $q'\leq q$ directly from \eqref{eq:leqP2}. We also have
\[
p'\th_{q'} = q\th_p\th_{p\th_q} =_4 q\th_p\th_q\th_p\th_q =_5 q\th_p\th_q =_3 p\th_q = q' \ANDSIM q'\th_{p'} = p'. \qedhere
\]
\epf
Although $\leqF$ is not transitive, $\leq$ and $\leqF$ have some `transitivity-like' properties in combination:
\begin{lemma}\label{lem:pqr}
If $p,q,r\in P$ are such that $p\leq q\leqF r$ or $p\leqF q\leq r$, then
\ben\bmc2
\item \label{pqr1} $p\leqF r$,
\item \label{pqr2} $p \F p\th_r$.
\emc\een
\end{lemma}
\pf
Throughout the proof we assume that $p\leq q\leqF r$. The arguments for the case of $p\leqF q\leq r$ are similar, and are omitted.
Since $p\leq q$ and $q\leqF r$, we have $p=p\th_q$ and $q=r\th_q$. Since $p\leqF q$ by Lemma \ref{lem:pqp}\ref{pqP2}, we also have $p=q\th_p$.
\pfitem{\ref{pqr1}} We must show that $p=r\th_p$. Since $p\leq q$, Lemma \ref{lem:thpthq} gives $\th_p=\th_q\th_p$. Combining all of the above gives $r\th_p = r\th_q\th_p = q\th_p = p$.
\pfitem{\ref{pqr2}} We have $p\th_r\leqF p$ by Lemma \ref{lem:pqp}\ref{pqP1}, so we just need to show that $p\leqF p\th_r$, i.e.~that $p = (p\th_r)\th_p$. This follows from the axioms and the above-mentioned consequences of $p\leq q\leqF r$:
\[
p\th_r\th_p =_3 r\th_p = r\th_{p\th_q} =_4 r\th_q\th_p\th_q = q\th_p\th_q =_3 p\th_q = p. \qedhere
\]
\epf
It remains an open problem to determine whether a projection algebra $P$ (including its $\th$ operations) can be somehow characterised by the purely order-theoretic properties of the relations $\leq$ and $\leqF$. The next result is included in case it is of use in such a characterisation.
\begin{lemma}
If $p,q\in P$ are such that $p\leqF q$, then we have $p\F p'\leq q$ for some $p'\in P$.
\end{lemma}
\pf
It is a routine matter to verify that $p'=p\th_q$ has the stated properties.
\epf
\subsection{The path category}\label{sect:PP}
For the duration of this section we fix a projection algebra $P$, as in Definition \ref{defn:P}, including the operations $\th_p$ ($p\in P$), and the relations $\leq$, $\leqF$ and $\F$.
Roughly speaking, we wish to provide an abstract setting in which we can think about `products' of projections $p_1\cdots p_k$, even though~$P$ itself does not have a binary operation. Guided by the (as-yet unproved) Proposition \ref{prop:ERSS}, we are (for now) solely interested in such `products' in the case that $p_1\F\cdots\F p_k$. The \emph{path category} $\P$ (see Definition \ref{defn:PP}) is the first attempt at doing so; it represents such a `product' as a tuple $(p_1,\ldots,p_k)$, which is considered as a morphism~$p_1\to p_k$. In Section \ref{sect:CP}, we will define the \emph{chain groupoid} $\C$ (see Definition~\ref{defn:CP}) as a quotient~$\C=\P/{\approx}$ by a certain congruence $\approx$. The intuition here is that the chain groupoid records more information about such products than the factors alone, but only such information that is present in \emph{any} regular $*$-semigroup with projection algebra $P$. For example,~$(p,p)$ and~$(p)$ should always represent the same `product', since projections are idempotents; so too should~$(p,q,p)$ and~$(p)$ when $p\F q$ (cf.~\eqref{eq:leqFF}). These pairs of paths (and one other family of pairs) are taken as generators for the congruence $\approx$. But before we get ahead of ourselves, here is the main definition for the current section:
\begin{defn}\label{defn:PP}
A \emph{$P$-path} in a projection algebra $P$ (cf.~Definition \ref{defn:P}) is a tuple
\[
\p = (p_1,p_2,\ldots,p_k)\in P^k \qquad\text{for some $k\geq1$, such that $p_1\F p_2\F\cdots\F p_k$.}
\]
(Since $\F$ is not transitive, this does not imply that the $p_i$ are \emph{all} $\F$-related.) We say that $\p$ is a $P$-path from $p_1$ to $p_k$, and write $\bd(\p)=p_1$ and $\br(\p)=p_k$. We identify each $p\in P$ with the path $(p)$ of length $1$.
The \emph{path category} of $P$ is the $*$-category $\P=\P(P)$ of all $P$-paths, with object set $v\P=P$, under the following operations:
\bit
\item For $\p=(p_1,\ldots,p_k)$ and $\q=(q_1,\ldots,q_l)$ with $p_k=q_1$, we define
\[
\p\circ\q = (p_1,\ldots,p_{k-1},p_k=q_1,q_2,\ldots,q_l).
\]
\item For $\p=(p_1,\ldots,p_k)\in\P$, we define
\[
\p^\rev=(p_k,\ldots,p_1),
\]
the reverse of $\p$. (It is convenient to write $\p^\rev$ instead of $\p^*$, and it is clear that conditions \ref{I1}--\ref{I3} from Definition \ref{defn:*cat} all hold.)
\eit
\end{defn}
So a morphism set $\P(p,q)$ consists of all $P$-paths from $p$ to $q$. The identities are the paths of the form $p\equiv(p)$. Although $\P$ is a $*$-category, it is not a groupoid, as when~$\p$ has length $k\geq2$, $\p\circ\p^\rev$ has length $2k-1>k$. However, and as noted above, a very important role will be played by a certain groupoid quotient of $\P$; see Definition \ref{defn:CP}.
\begin{rem}
It is also worth noting that $\P$ is the free $*$-category over the relation $\F$ in the following sense. We define $\Ga=\Ga_P$ to be the graph with vertex set $P$, and an edge $x_{pq}:p\to q$ for each $(p,q)\in{\F}$ with $p\not=q$. As in \cite[p.~49]{MacLane1998}, the \emph{free category} $\CC=\CC(\Ga)$ has object set $v\CC=P$, and its morphisms are the paths in $\Ga$, including the empty path at each vertex $p\in P$ (which are identified with $p$, as usual). Non-empty paths can be thought of as words $x_{p_1p_2}x_{p_2p_3}\cdots x_{p_{k-1}p_k}$ (note the matching subscripts between successive letters), and composition of paths is given by concatenation when the endpoints match. The involution of~$\CC$ is given by reversal of paths; this is well-defined since $\F$ is symmetric. It is then clear that
\[
x_{p_1p_2}x_{p_2p_3}\cdots x_{p_{k-1}p_k}\to(p_1,p_2,\ldots,p_k)
\]
defines a $*$-isomorphism $\CC\to\P$.
\end{rem}
We now wish to show that $\P$ is an \emph{ordered} $*$-category, using Lemma \ref{lem:C}. To apply the lemma, we need a partial order on the object set $v\P=P$, and a collection of (left) restrictions~${}_q\corest\p$. We already have the order $\leq$ on $P$ given in \eqref{eq:leqP}. To define the restrictions, consider a $P$-path $\p=(p_1,\ldots,p_k)$, and suppose $q\in P$ is such that $q\leq \bd(\p) = p_1$. We define a tuple
\begin{equation}\label{eq:rest}
{}_q \corest \p = (q_1,\ldots,q_k) \WHERE q_1=q \ANd q_i = q_{i-1}\th_{p_i} \text{ for $2\leq i\leq k$.}
\end{equation}
Note that $q_i = q\th_{p_2}\cdots\th_{p_i}$ for all $1\leq i\leq k$. In fact, since $q\leq p_1$ we have $q=q\th_{p_1}$, so that
\begin{equation}\label{eq:qi}
q_i = q\th_{p_2}\cdots\th_{p_i} = q\th_{p_1}\cdots\th_{p_i} \qquad\text{for all $1\leq i\leq k$.}
\end{equation}
It follows immediately from \eqref{eq:leqP2} that $q_i\leq p_i$ for all $i$.
\begin{lemma}\label{lem:q|p}
If $\p\in\P$, and if $q\leq\bd(\p)$, then ${}_q\corest\p\in\P$. Moreover, $\bd({}_q\corest\p)=q$ and ${\br({}_q\corest\p)\leq\br(\p)}$.
\end{lemma}
\pf
Write $\p=(p_1,\ldots,p_k)$ and ${}_q\corest\p=(q_1,\ldots,q_k)$, as above. Since $\p\in\P$, we have $p_i\F p_{i+1}$ for all $1\leq i<k$, and we must show that $q_i\F q_{i+1}$ for all such $i$, i.e.~that
\[
q_i\th_{q_{i+1}}=q_{i+1} \AND q_{i+1}\th_{q_i} = q_i.
\]
For the first, we have
\[
q_i \th_{q_{i+1}} = q_i \th_{q_i\th_{p_{i+1}}} =_4 q_i\th_{p_{i+1}}\th_{q_i}\th_{p_{i+1}} =_3 p_{i+1}\th_{q_i}\th_{p_{i+1}} =_3 q_i\th_{p_{i+1}} = q_{i+1}.
\]
For the second, it follows from \eqref{eq:qi} that $q_i=_1q_i\th_{p_i}$. Combining this with $p_i=p_{i+1}\th_{p_i}$ (as~${p_i\F p_{i+1}}$), we have
\[
q_{i+1} \th_{q_i} = q_i\th_{p_{i+1}}\th_{q_i} =_3 p_{i+1}\th_{q_i} = p_{i+1}\th_{q_i\th_{p_i}} =_4 p_{i+1}\th_{p_i}\th_{q_i}\th_{p_i} = p_i\th_{q_i}\th_{p_i} =_3 q_i\th_{p_i} = q_i.
\]
This all shows that ${}_q\corest\p\in\P$.
By definition, we have $\bd({}_q\corest\p)=q_1=q$, and $\br({}_q\corest\p)=q_k\leq p_k=\br(\p)$, where the latter follows from the fact, noted above, that $q_i\leq p_i$ for all $i$.
\epf
\begin{lemma}\label{lem:qp=pr}
If $\p\in\P$ and $q\leq\bd(\p)$, and if $r=\br({}_q\corest\p)$, then $({}_q\corest\p)^\rev={}_r\corest\p^\rev$.
\end{lemma}
\pf
Write $\p=(p_1,\ldots,p_k)$ and ${}_q\corest\p=(q_1,\ldots,q_k)$, so that $r=q_k$. We also of course have $\p^\rev=(p_k,\ldots,p_1)$ and $({}_q\corest\p)^\rev=(q_k,\ldots,q_1)$.
To make the subscripts match up, it is convenient to write ${}_r\corest\p^\rev=(r_k,\ldots,r_1)$. So
\begin{equation}\label{eq:qiri}
q_i = q\th_{p_1}\cdots\th_{p_i} \AND r_i = r\th_{p_k}\cdots\th_{p_i} \qquad\text{for all $1\leq i\leq k$.}
\end{equation}
We show by descending induction that $q_i=r_i$ for all $i$. The $i=k$ case is trivial, as $r_k=r=q_k$. For $1\leq i<k$, we have
\begin{align*}
r_i &= r_{i+1}\th_{p_i} &&\text{by \eqref{eq:qiri}}\\
&= q_{i+1}\th_{p_i} &&\text{by induction}\\
&= q\th_{p_1}\cdots\th_{p_i}\th_{p_{i+1}}\th_{p_i} &&\text{by \eqref{eq:qiri}}\\
&= q\th_{p_1}\cdots\th_{p_i} &&\text{by Lemma \ref{lem:pqp}\ref{pqp2}}\\
&= q_i &&\text{by \eqref{eq:qiri}.} \qedhere
\end{align*}
\epf
\begin{lemma}\label{lem:reflexive}
If $\p\in\P$ and $q=\bd(\p)$, then ${}_q\corest\p=\p$.
\end{lemma}
\pf
Write $\p=(p_1,\ldots,p_k)$, so that $q=p_1$. Then ${}_q\corest\p=(q_1,\ldots,q_k)$, where each $q_i=q\th_{p_1}\cdots\th_{p_i}$, and we must show that $p_i=q_i$ for all $i$. This is clear for $i=1$. For $i\geq2$, we have
\[
q_i = q_{i-1}\th_{p_i} = p_{i-1}\th_{p_i} = p_i,
\]
by definition, induction, and the fact that $p_i\F p_{i-1}$.
\epf
\begin{lemma}\label{lem:transitive}
If $\p\in\P$ and $r\leq q\leq\bd(\p)$, then ${}_r\corest{}_q\corest\p={}_r\corest\p$.
\end{lemma}
\pf
Write
\[
\p=(p_1,\ldots,p_k) \COMMA {}_q\corest\p=(q_1,\ldots,q_k) \COMMA {}_r\corest\p=(r_1,\ldots,r_k) \AND {}_r\corest{}_q\corest\p=(s_1,\ldots,s_k).
\]
So
\begin{equation}\label{eq:pqrsi}
q_i=q\th_{p_1}\cdots\th_{p_i} \COMMA r_i=r\th_{p_1}\cdots\th_{p_i} \AND s_i=r\th_{q_1}\cdots\th_{q_i} \qquad\text{for all $1\leq i\leq k$,}
\end{equation}
and we must show that $r_i=s_i$ for all $i$. For $i=1$ we have $r_1=r=s_1$. For $i\geq2$,
\begin{align*}
s_i &= s_{i-1}\th_{q_i} &&\text{by \eqref{eq:pqrsi}}\\
&= (s_{i-1}\th_{q_{i-1}})\th_{q_{i-1}\th_{p_i}} &&\text{by \eqref{eq:pqrsi}, noting that $s_{i-1}\leq q_{i-1}$}\\
&= s_{i-1}\th_{q_{i-1}}\th_{p_i}\th_{q_{i-1}}\th_{p_i} &&\text{by \ref{P4}}\\
&= s_{i-1}\th_{q_{i-1}}\th_{p_i} &&\text{by \ref{P5}}\\
&= s_{i-1}\th_{p_i} &&\text{by \eqref{eq:pqrsi}}\\
&= r_{i-1}\th_{p_i} &&\text{by induction}\\
&= r_i &&\text{by \eqref{eq:pqrsi}.} \qedhere
\end{align*}
\epf
\begin{lemma}\label{lem:O2}
If $\p,\q\in\P$ with $\br(\p) = \bd(\q)$, and if $r\leq\bd(\p)$ and $s=\br({}_r\corest\p)$, then
\[
{}_r\corest(\p\circ\q) = {}_r\corest\p \circ {}_{s}\corest\q.
\]
\end{lemma}
\pf
Write $\p=(p_1,\ldots,p_k)$ and $\q=(q_1,\ldots,q_l)$, noting that $p_k=q_1$. Then
\begin{align*}
{}_r\corest\p&=(r,r\th_{p_2},r\th_{p_2}\th_{p_3},\ldots,r\th_{p_2}\cdots\th_{p_k}), \qquad\text{so}\qquad s=r\th_{p_2}\cdots\th_{p_k}.
\intertext{It follows that}
{}_r\corest(\p\circ\q) &= {}_r\corest(p_1,\ldots,p_k,q_2,\ldots,q_l) \\
&= (r,r\th_{p_2},r\th_{p_2}\th_{p_3},\ldots,r\th_{p_2}\cdots\th_{p_k}=s,s\th_{q_2},s\th_{q_2}\th_{q_3},\ldots,s\th_{q_2}\cdots\th_{q_l}) \\
&= (r,r\th_{p_2},r\th_{p_2}\th_{p_3},\ldots,r\th_{p_2}\cdots\th_{p_k})\circ(s,s\th_{q_2},s\th_{q_2}\th_{q_3},\ldots,s\th_{q_2}\cdots\th_{q_l}) = {}_r\corest\p \circ {}_{s}\corest\q. \qedhere
\end{align*}
\epf
\begin{prop}
For any projection algebra $P$, the path category $\P=\P(P)$ is an ordered $*$-category, with ordering given by
\[
\p\leq\q \IFF \p={}_r\corest\q \qquad \text{for some $r\leq\bd(\q)$.}
\]
\end{prop}
\pf
This follows from an application of Lemma \ref{lem:C}. The ordering on $v\P=P$ is given in~\eqref{eq:leqP}. Properties \ref{O1'}--\ref{O5'} were established in Lemmas \ref{lem:q|p}--\ref{lem:O2}.
\epf
As usual (cf.~Remark \ref{rem:dual}), we can use the involution to define a right-handed restriction:
\begin{equation}\label{eq:revrev}
\p\rest_q = ( {}_q\corest\p^\rev)^\rev \qquad\text{for $\p\in\P$ and $q\leq\br(\p)$.}
\end{equation}
Specifically, if $\p=(p_1,\ldots,p_k)$ and $q\leq\br(\p)=p_k$, then
\begin{equation}\label{eq:corest}
\p\rest_q = (q_1,\ldots,q_k) \WHERE q_i = q\th_{p_k}\cdots\th_{p_i} \qquad\text{for all $1\leq i\leq k$.}
\end{equation}
\subsection{Linked pairs}\label{sect:LP}
In the next section we will define the \emph{chain groupoid} $\C=\C(P)$ of a projection algebra $P$ as a quotient of the path category $\P=\P(P)$ by a certain congruence $\approx$. For the definition of $\approx$ we require the concept of \emph{linked pairs}:
\begin{defn}\label{defn:CP_P}
Consider a projection $p\in P$. A pair of projections $(e,f)\in P^2$ is said to be \emph{$p$-linked} if
\begin{equation}\label{eq:epf}
f = e\th_p\th_f \AND e = f\th_p\th_e.
\end{equation}
Associated to such a $p$-linked pair $(e,f)$ we define the tuples
\[
\lam(e,p,f) = (e,e\th_p,f) \AND \rho(e,p,f) = (e,f\th_p,f).
\]
\end{defn}
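Before moving on, we record a simple family of examples. For any $p\in P$ and any $e\leqF p$, the pair $(e,e\th_p)$ is $p$-linked. Indeed, writing $f=e\th_p$, we have $f\th_p=e\th_p\th_p=_2e\th_p=f$, so that $e\th_p\th_f=f\th_f=_1f$; and also $f\th_p\th_e=f\th_e=e\th_p\th_e=_3p\th_e=e$, the last equality holding because $e\leqF p$. For such a pair we have $\lam(e,p,f)=\rho(e,p,f)=(e,f,f)$, so this family is degenerate; it does show, however, that every projection $e\leqF p$ belongs to at least one $p$-linked pair (cf.~Lemma \ref{lem:epf}\ref{epf2} below).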
The next two results gather some important basic properties of $p$-linked pairs.
\begin{lemma}\label{lem:epf}
If $(e,f)$ is $p$-linked, then
\ben
\item \label{epf1} $(f,e)$ is also $p$-linked, and we have $\lam(e,p,f)^\rev = \rho(f,p,e)$ and $\rho(e,p,f)^\rev = \lam(f,p,e)$,
\item \label{epf2} $e,f\leqF p$,
\item \label{epf3} $\lam(e,p,f)$ and $\rho(e,p,f)$ both belong to $\P(e,f)$.
\een
\end{lemma}
\pf
\firstpfitem{\ref{epf1}} This follows directly by inspecting Definition~\ref{defn:CP_P}.
\pfitem{\ref{epf2}} By the symmetry afforded by part \ref{epf1}, it suffices to show that $e\leqF p$. For this we use~\eqref{eq:epf}, Lemma \ref{lem:P} and \ref{P3} to calculate
\[
p\th_e = p\th_{f\th_p\th_e} = p\th_e\th_p\th_f\th_p\th_e = e\th_p\th_f\th_p\th_e = f\th_p\th_e = e.
\]
\pfitem{\ref{epf3}} By symmetry, it suffices to show that $\lam(e,p,f)\in\P$, i.e.~that $e \F e\th_p \F f$. From $e\leq e\leqF p$ it follows from Lemma \ref{lem:pqr}\ref{pqr2} that $e\F e\th_p$. By \eqref{eq:epf} and Lemma~\ref{lem:pqp}\ref{pqP1}, we have $f = (e\th_p)\th_f \leqF e\th_p$, so we are left to show that $e\th_p \leqF f$, and for this we use \ref{P4} and \eqref{eq:epf}:
\[
f\th_{e\th_p} = f\th_p\th_e\th_p = e\th_p. \qedhere
\]
\epf
\begin{rem}\label{rem:LPP}
Consider a projection $p\in P$, and a $p$-linked pair $(e,f)$. By Lemma \ref{lem:epf}\ref{epf2} we have $e,f\leqF p$, and of course we also have $e\th_p,f\th_p\leq p$. These relationships are all shown in Figure \ref{fig:plinked}. In the diagram, each arrow $s\to t$ stands for the $P$-path $(s,t)\in\P$. Thus, the upper and lower paths from $e$ to $f$ in the bottom part of the diagram correspond to $\lam(e,p,f)$ and~$\rho(e,p,f)$, respectively.
\end{rem}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[xscale=1]
\tikzstyle{vertex}=[circle,draw=black, fill=white, inner sep = 0.07cm]
\nc\sss{4}
\node (e) at (-1,2){$e$};
\node (p1) at (4,6){$p$};
\node (u1) at (3,2){$e\th_p$};
\node (v1) at (5,-2){$f\th_p$};
\begin{scope}[shift={(\sss,0)}]
\node (v2) at (5,-2){$f$};
\end{scope}
\draw[->-=0.5] (e)--(u1);
\draw[->-=0.5] (u1)--(v2);
\draw[->-=0.5] (e)--(v1);
\draw[->-=0.5] (v1)--(v2);
\draw[white,line width=2mm] (p1)--(v1);
\draw[dashed] (u1)--(p1)--(v1);
\draw[dotted] (e)--(p1) (v2)--(p1);
\end{tikzpicture}
\caption{A projection $p\in P$, and a $p$-linked pair $(e,f)$, as in Definition \ref{defn:CP_P}. Dotted and dashed lines indicate $\leqF$ and~$\leq$ relationships, respectively. See Remark \ref{rem:LPP} for more details.}
\label{fig:plinked}
\end{center}
\end{figure}
The next result has an obvious dual, but we will not state it.
\begin{lemma}\label{lem:e'pf'}
If $(e,f)$ is $p$-linked, and if $e'\leq e$, then $(e',f')$ is $p$-linked, where $f'=e'\th_p\th_f$, and we have
\[
{}_{e'}\corest\lam(e,p,f) = \lam(e',p,f') \AND {}_{e'}\corest\rho(e,p,f) = \rho(e',p,f').
\]
\end{lemma}
\pf
To show that $(e',f')$ is $p$-linked, we must show that
\[
f'=e'\th_p\th_{f'} \AND e'=f'\th_p\th_{e'}.
\]
For the first we use the definition of $f'$ and Lemma \ref{lem:P} several times to calculate
\[
e'\th_p\th_{f'} = e'\th_p\th_{e'\th_p\th_f}
= e'\th_p\th_f\th_p\th_{e'}\th_p\th_f
= f\th_p\th_{e'}\th_p\th_f
= {e'}\th_p\th_f = f'.
\]
For the second, we begin with the projection algebra axioms, and calculate
\begin{align*}
f'\th_p\th_{e'} = e'\th_p\th_f\th_p\th_{e'}
=_4 e'\th_{f\th_p}\th_{e'}
=_3 f\th_p\th_{e'}
&= f\th_p\th_e\th_{e'} &&\text{by Lemma \ref{lem:thpthq}, as $e'\leq e$}\\
&= e\th_{e'} &&\text{by \eqref{eq:epf}}\\
&= e' &&\text{as $e'\leqF e$ by Lemma \ref{lem:pqp}\ref{pqP2}.}
\end{align*}
It remains to show that ${}_{e'}\corest\lam=\lam'$ and ${}_{e'}\corest\rho=\rho'$, where for convenience we write
\[
\lam = \lam(e,p,f) \COMMA \rho = \rho(e,p,f) \COMMA \lam' = \lam(e',p,f') \AND \rho' = \rho(e',p,f').
\]
Using \eqref{eq:rest} and Definition \ref{defn:CP_P}, we have
\begin{align*}
{}_{e'}\corest\lam &= (e',e'\th_{e\th_p},e'\th_{e\th_p}\th_f) , & \lam' &= (e',e'\th_p,f'),\\
{}_{e'}\corest\rho &= (e',e'\th_{f\th_p},e'\th_{f\th_p}\th_f) , & \rho' &= (e',f'\th_p,f'),
\end{align*}
so it remains to check that
\[
e'\th_{e\th_p} = e'\th_p \COMMA e'\th_{e\th_p}\th_f = f' \COMMA e'\th_{f\th_p} = f'\th_p \AND e'\th_{f\th_p}\th_f = f'.
\]
These are all easily dealt with:
\bit
\item $e'\th_{e\th_p} =_4 e'\th_p\th_e\th_p = e'\th_e\th_p\th_e\th_p =_5 e'\th_e\th_p = e'\th_p$, using $e'\leq e$,
\item $(e'\th_{e\th_p})\th_f = e'\th_p\th_f = f'$, using the previous calculation,
\item $e'\th_{f\th_p} =_4 e'\th_p\th_f\th_p = f'\th_p$, and
\item $e'\th_{f\th_p}\th_f =_4 e'\th_p\th_f\th_p\th_f =_5 e'\th_p\th_f = f'$. \qedhere
\eit
\epf
\subsection{The chain groupoid}\label{sect:CP}
We are now almost ready to define the \emph{chain groupoid} $\C=\C(P)$ associated to a projection algebra $P$. This groupoid is defined below as a certain quotient $\C=\P/{\approx}$ of the path category $\P=\P(P)$ from Definition \ref{defn:PP}. The congruence $\approx$ is defined by specifying a generating set:
\begin{defn}\label{defn:approx}
Given a projection algebra $P$ (cf.~Definition \ref{defn:P}), let $\Om=\Om(P)$ be the set of all pairs $(\s,\t)\in\P\times\P$ of the following three forms:
\begin{enumerate}[label=\textup{\textsf{($\mathsf{\Om}$\arabic*)}},leftmargin=10mm]
\item \label{Om1} $\s=(p,p)$ and $\t=(p)\equiv p$, for some $p\in P$,
\item \label{Om2} $\s=(p,q,p)$ and $\t=(p)\equiv p$, for some $(p,q)\in {\F}$,
\item \label{Om3} $\s=\lam(e,p,f)$ and $\t=\rho(e,p,f)$, for some $p\in P$, and some $p$-linked pair $(e,f)$.
\end{enumerate}
We define ${\approx}=\Om^\sharp$ to be the congruence on $\P$ generated by $\Om$.
\end{defn}
Since $\bd(\s)=\bd(\t)$ and $\br(\s)=\br(\t)$ for all $(\s,\t)\in\Om$, it follows that $\approx$ is a $v$-congruence.
\begin{lemma}\label{lem:Om1}
The set $\Om$ satisfies \eqref{eq:Om*}. Consequently, the congruence $\approx$ satisfies \ref{C4}.
\end{lemma}
\pf
By Lemma \ref{lem:Om}, it suffices to prove the first claim. To do so, we must show that $(\s,\t)\in\Om \implies \s^\rev\approx\t^\rev$ for all $(\s,\t)\in\Om$. This is immediate when $(\s,\t)$ has the form \ref{Om1} or~\ref{Om2}. For \ref{Om3}, it follows from Lemma \ref{lem:epf}\ref{epf1}.
\epf
\begin{lemma}\label{lem:Om2}
The set $\Om$ satisfies \eqref{eq:Om}. Consequently, the congruence $\approx$ satisfies \ref{C5}.
\end{lemma}
\pf
By Lemma \ref{lem:Om}, it suffices to prove the first claim. To do so, let $(\s,\t)\in\Om$. We must show that
\[
{}_r\corest\s \approx {}_r\corest\t \AND \br({}_r\corest\s) = \br({}_r\corest\t) \qquad\text{for all $r\leq\bd(\s)$.}
\]
(Note that the equality $\vt_\s=\vt_\t$, which is part of \eqref{eq:Om}, is equivalent to $\br({}_r\corest\s) = \br({}_r\corest\t)$ for all $r\leq\bd(\s)$.)
We do this separately for each of the three forms the pair $(\s,\t)$ can take. This is very easy for~\ref{Om1}, so we just treat the other two cases.
\pfitem{\ref{Om2}} Let $r\leq p$. To calculate ${}_r\corest\s = {}_r\corest(p,q,p)$, we first note that $r\th_q\th_p = r\th_p\th_q\th_p = r\th_p = r$, where we used $r\leq p$ in the first and third steps, and Lemma \ref{lem:pqp}\ref{pqp2} in the second. We then have
\[
{}_r\corest\s = (r,r\th_q,r\th_q\th_p) = (r,r\th_q,r) \qquad\text{and of course}\qquad {}_r\corest\t=(r).
\]
This shows that in fact $({}_r\corest\s,{}_r\corest\t)\in\Om$, so certainly ${}_r\corest\s \approx {}_r\corest\t$.
We also have $\br({}_r\corest\s) = r = \br({}_r\corest\t)$.
\pfitem{\ref{Om3}} Let $e'\leq\bd(\s)=e$, and let $f'=e'\th_p\th_f$. By Lemma \ref{lem:e'pf'} we have
\[
{}_{e'}\corest\s = \lam(e',p,f') \AND {}_{e'}\corest\t = \rho(e',p,f'),
\]
so again $({}_{e'}\corest\s,{}_{e'}\corest\t)\in\Om$, and $\br({}_{e'}\corest\s) = f' = \br({}_{e'}\corest\t)$.
\epf
It follows quickly from iterating \ref{Om2} that
\[
\p\circ\p^\rev\approx\bd(\p) \qquad\text{for all $\p\in\P$.}
\]
Combining this with Lemmas \ref{lem:approx}, \ref{lem:Om1} and \ref{lem:Om2}, it follows that the quotient $\P/{\approx}$ is an ordered groupoid.
\begin{defn}\label{defn:CP}
The \emph{chain groupoid} of a projection algebra $P$ (cf.~Definition \ref{defn:P}) is the quotient
\[
\C = \C(P) = \P/{\approx},
\]
where $\P=\P(P)$ is the path category of $P$ (cf.~Definition \ref{defn:PP}), and where $\approx$ is the congruence given in Definition \ref{defn:approx}.
\bit
\item The elements of $\C$, which are $\approx$-classes of $P$-paths, are called \emph{$P$-chains}. For $\p\in\P$, we write $[\p]\in\C$ for the $\approx$-class of $\p$. If $\p=(p_1,\ldots,p_k)$, then we write $[\p] = [p_1,\ldots,p_k]$. We then have $\bd[\p]=\bd(\p)=p_1$ and $\br[\p]=\br(\p)=p_k$.
\item For $\p=(p_1,\ldots,p_k)$ and $\q=(q_1,\ldots,q_l)$ with $p_k=q_1$, we have
\[
[\p]\circ[\q] = [\p\circ\q] = [p_1,\ldots,p_{k-1},p_k=q_1,q_2,\ldots,q_l].
\]
\item For $\p=(p_1,\ldots,p_k)\in\P$, we have
\[
[\p]^{-1}=[\p^\rev] = [p_k,\ldots,p_1].
\]
\item The order in $\C$ is given by
\[
\c\leq\d \IFF \p\leq\q \qquad\text{for some $\p\in\c$ and $\q\in\d$.}
\]
\eit
\end{defn}
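As a small illustrative computation, fix $p,q\in P$ with $p\F q$, so that $(p,p)$, $(p,q)$ and $(p,q,p)$ are all $P$-paths (cf.~\ref{Om1} and \ref{Om2}). Since $\approx$ is a congruence, we then have for instance
\[
[p,p,q] = [p,p]\circ[p,q] = [p]\circ[p,q] = [p,q] \AND [p,q,p,q] = [p,q,p]\circ[p,q] = [p]\circ[p,q] = [p,q],
\]
so distinct $P$-paths can certainly represent the same $P$-chain.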
Before we move on, it is worth commenting on the definition of the congruence $\approx$. The chain groupoid $\C=\C(P)$ will be used in Chapter \ref{chap:G} to provide an environment in which to `interpret' products of projections in certain abstract groupoids $\G$ with object set $P$. Since distinct projections/objects have distinct co/domains, they cannot be composed in such a groupoid. But a $P$-chain $[p_1,\ldots,p_k]\in\C$ is meant to be thought of as a `product' $p_1\cdots p_k$, and the `interpretation' mentioned above is a functor $\C\to\G$. Thus, $\approx$ is meant to equate such `products' when they `should be equal' in any regular $*$-semigroup with projection algebra $P$.
In this way, it is clear why $[p,p]$ and $[p]$ `should' be equal for any $p\in P$; cf.~\ref{Om1}. Similar comments apply to $[p,q,p]$ and $[p]$ when $p\F q$; cf.~\ref{Om2} and \eqref{eq:leqFF}. Pairs of the form~\ref{Om3} are not quite as obvious. For the purposes of Chapter \ref{chap:G}, one could replace $\approx$ with the congruence~$\sim$ generated by pairs only of the form \ref{Om1} and \ref{Om2}, and hence replace $\C=\P/{\approx}$ with the quotient $\P/{\sim}$. However, as we will see in Chapter \ref{chap:S} (see especially the proof of Lemma \ref{lem:GS4}), pairs of the form \ref{Om3} do indeed become equal when interpreted as products of projections in any regular $*$-semigroup. At the very least, this means that there is `no harm' in including such pairs in the definition of $\approx$.
However, there is a more compelling reason for including such pairs. Specifically, we will see in Chapter \ref{chap:E} that $\C=\P/{\approx}$ gives rise to a `free regular $*$-semigroup' with projection algebra~$P$. For this to work, we need $\C$ to be a (so-called) chained projection groupoid, and for this we require pairs of the form \ref{Om3} to be equivalent; see especially the proof of Proposition \ref{prop:Cid}.
This of course leads to the question of whether the equivalence of pairs of type \ref{Om3} is implied by those of type \ref{Om1} and \ref{Om2}, i.e.~whether the congruences $\approx$ and $\sim$ are in fact equal. But it is actually fairly easy to see that they are not. Indeed, it is easy to check that the rewriting system generated by the rules
\[
(p,p) \to (p) \AND (p,q,p) \to (p) \qquad\text{for $p,q\in P$ with $p\F q$}
\]
is complete and Noetherian, in the sense of \cite{Gerard1980}. It follows that any $P$-path is $\sim$-equivalent to a unique `reduced' path, i.e.~one in which there are no sub-paths of the form $(p,p)$ or $(p,q,p)$. This reduced path can be obtained by repeatedly but arbitrarily replacing any sub-path of the form $(p,p)$ or $(p,q,p)$ by $(p)$. In particular, two $P$-paths are $\sim$-equivalent if and only if they have the same reduced path. Moreover, it is not hard to show that there are pairs $(\s,\t)$ of the form \ref{Om3} in which $\s$ and $\t$ are reduced but distinct, and hence not $\sim$-equivalent.
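For instance (purely by way of illustration), if $p,q,r\in P$ are distinct projections with $p\F q$ and $q\F r$, then the $P$-path $(p,q,p,q,r)$ may be reduced in two different ways, by first replacing either the initial sub-path $(p,q,p)$ by $(p)$ or the internal sub-path $(q,p,q)$ by $(q)$; both choices lead to the same reduced path $(p,q,r)$.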
For a concrete example, consider the projection algebra $P=P(\PP_3)$ of the partition monoid~$\PP_3$, and define the following projections from $P$:
\[
\begin{tikzpicture}[scale=.4]
\begin{scope}[shift={(0,0)}]
\foreach \x in {1,2,3} {\tuv\x \tlv\x \stline\x\x}
\uarc12 \darc12
\node[left] () at (.8,1) {$p=$};
\node() at (3.75,.5){,};
\end{scope}
\begin{scope}[shift={(8,0)}]
\foreach \x in {1,2,3} {\tuv\x \tlv\x}
\stline33
\node[left] () at (.8,1) {$e=$};
\end{scope}
\begin{scope}[shift={(19,0)}]
\foreach \x in {1,2,3} {\tuv\x \tlv\x}
\stline 22 \stline33
\uarc23 \darc23
\node[left] () at (.8,1) {and \qquad $f=$};
\node() at (3.75,.5){.};
\end{scope}
\end{tikzpicture}
\]
One can easily check that $(e,f)$ is $p$-linked, and that
\[
\begin{tikzpicture}[scale=.4]
\begin{scope}[shift={(0,0)}]
\foreach \x in {1,2,3} {\tuv\x \tlv\x}
\stline33
\uarc12 \darc12
\node[left] () at (.8,1) {$e\th_p=pep=$};
\end{scope}
\begin{scope}[shift={(14.5,0)}]
\foreach \x in {1,2,3} {\tuv\x \tlv\x}
\stline11 \stline33
\uarc12\uarc23\darc12\darc23
\node[left] () at (.8,1) {and \qquad $f\th_p=pfp=$};
\node() at (3.75,.5){.};
\end{scope}
\end{tikzpicture}
\]
In particular, the paths $\lam(e,p,f)=(e,e\th_p,f)$ and $\rho(e,p,f)=(e,f\th_p,f)$ are reduced. Since $e\th_p\not=f\th_p$, these reduced paths are not equal, and hence not $\sim$-equivalent, even though they are $\approx$-equivalent by definition; cf.~\ref{Om3}.
Although we will not elaborate on this further, it is worth noting that linked pairs in projection algebras are somewhat akin to singular squares in regular biordered sets; cf.~\cite[p.~20]{Nambooripad1979}. Equating pairs of the form \ref{Om3} is then roughly analogous to requiring singular squares to commute; cf.~\cite[p.~40]{Nambooripad1979}.
\subsection{The category of projection algebras}
Let $\PA$ be the (large) category of projection algebras. A morphism in $\PA$ is a projection algebra morphism $\phi:P\to P'$, by which we mean a map satisfying
\begin{align*}
(p\th_q)\phi &= (p\phi)\th'_{q\phi} &&\hspace{-3cm}\text{for all $p,q\in P$.}
\intertext{Here we use $\th$ and $\th'$ to denote the unary operations on $P$ and $P'$, respectively. One can also think of a projection algebra morphism as a morphism of binary algebras, in the sense discussed in Remark \ref{rem:diamond}. Indeed, using $\diamond$ and $\diamond'$ to denote the binary operations of $P$ and $P'$ (as in Remark~\ref{rem:diamond}), $\phi:P\to P'$ is a projection algebra morphism if and only if}
(p\diamond q)\phi &= (p\phi) \diamond' (q\phi) &&\hspace{-3cm}\text{for all $p,q\in P$.}
\end{align*}
In the next result we also write~$\leq$ and~$\leq'$ for the partial orders on $P$ and $P'$, as in \eqref{eq:leqP}, and similarly for the $\leqF$ and $\F$ relations from \eqref{eq:leqF} and \eqref{eq:F}.
\begin{lemma}\label{lem:PP'}
If $\phi:P\to P'$ is a projection algebra morphism, then for any $p,q\in P$,
\[
p\leq q \implies p\phi \leq' q\phi
\COMMA
p\leqF q \implies p\phi \leqF' q\phi
\AND
p\F q \implies p\phi \F' q\phi.
\]
\end{lemma}
\pf
The proofs are all essentially the same, so we just prove the first. For this we have
\[
p\leq q \Implies p = p\th_q \Implies p\phi = (p\th_q)\phi = (p\phi)\th'_{q\phi} \Implies p\phi \leq' q\phi. \qedhere
\]
\epf
The next result shows that any projection algebra morphism naturally induces a functor between the corresponding chain groupoids.
\begin{prop}\label{prop:CC'}
If $\phi:P\to P'$ is a projection algebra morphism, then there is a well-defined ordered groupoid functor
\[
\Phi:\C(P)\to\C(P') \GIVENBY [p_1,\ldots,p_k]\Phi = [p_1\phi,\ldots,p_k\phi].
\]
\end{prop}
\pf
During the proof we write $\C=\C(P)$ and $\C'=\C(P')$, and similarly for $\P$ and $\P'$. By Lemma \ref{lem:PP'}, we have a well-defined functor
\[
\varphi:\P\to\C' \GIVENBY (p_1,\ldots,p_k)\varphi=[p_1\phi,\ldots,p_k\phi].
\]
We first show that ${\approx}\sub\ker(\varphi)$. To do so, it suffices to show that $\s\varphi=\t\varphi$ for any pair $(\s,\t)\in\Om$. This is clear if $(\s,\t)$ has type \ref{Om1}. We now consider the other two types.
\pfitem{\ref{Om2}} Next suppose $\s=(p,q,p)$ and $\t=(p)$ for some $(p,q)\in{\F}$. By Lemma \ref{lem:PP'}, we have $p\phi\F q\phi$ (in $P'$), and so $\s\varphi=[p\phi,q\phi,p\phi]=[p\phi]=\t\varphi$.
\pfitem{\ref{Om3}} Finally, suppose $\s=\lam(e,p,f)=(e,e\th_p,f)$ and $\t=\rho(e,p,f)=(e,f\th_p,f)$ for some $p\in P$, and some $p$-linked pair $(e,f)$. It is then easy to see that $(e\phi,f\phi)$ is $p\phi$-linked, and $\s\varphi=[\lam(e\phi,p\phi,f\phi)]$ and $\t\varphi=[\rho(e\phi,p\phi,f\phi)]$, so again we have $\s\varphi=\t\varphi$.
\aftercases Now that we know ${\approx}\sub\ker\varphi$, we have an induced functor
\[
\Phi:\C=\P/{\approx} \to \C' \GIVENBY [\p]\Phi = \p\varphi,
\]
and this is of course the map in the statement of the proposition. It is easy to check that $\phi$ being a morphism implies $({}_q\corest\c)\Phi={}_{q\phi}\corest(\c\Phi)$ for all $\c\in\C$ and $q\leq\bd(\c)$, so that $\Phi$ is ordered.
\epf
\section{Projection groupoids}\label{chap:G}
Recall from Section \ref{sect:RSS} (see Definition \ref{defn:GS} and Proposition \ref{prop:GS}) that to any regular $*$-semigroup~$S$ we can associate a groupoid $\G=\G(S)$. The object set of this groupoid is $v\G=P=P(S)$, the projection algebra of~$S$, and the (partial) composition in $\G$ is a restriction of the (total) product in $S$. Nevertheless, as explained in Remark \ref{rem:GS}, the groupoid $\G$ contains enough information to completely recover the entire product, in the sense that an arbitrary product $ab$ in $S$ can be written as a composition in $\G$, \emph{viz.}~$ab=a'\circ e\circ b'$ for suitable $a',e,b'\in\G(=S)$. This is not without subtleties, however, as discussed in Remark \ref{rem:e}.
Roughly speaking, the purpose of the current chapter is to provide an abstract framework for the above ideas. In Section \ref{sect:PG} we introduce \emph{projection groupoids} (see Definition \ref{defn:PG}) as certain ordered groupoids~$\G$ whose object set $v\G=P$ is an (abstract) projection algebra, as in~Definition~\ref{defn:P}. Section~\ref{sect:CPG} introduces the concept of an \emph{evaluation map}, which is a special functor~$\ve:\C\to\G$, where $\C=\C(P)$ is the chain groupoid of $P$, as in Definition \ref{defn:CP}. We then introduce the key idea of a \emph{chained projection groupoid} $(\G,\ve)$; here $\G$ is a projection groupoid, and $\ve:\C\to\G$ is an evaluation map obeying a certain coherence condition; see Definition~\ref{defn:CPG}. In Section \ref{sect:SGve} we show that an arbitrary chained projection groupoid $(\G,\ve)$ gives rise to a regular $*$-semigroup $S=\bS(\G,\ve)$ on the same underlying set as $\G$, with product $\pr$ extending the composition $\circ$, with involution given by groupoid inversion, and with projection algebra $P(S)$ equal to the object set $P=v\G$; see Theorem \ref{thm:SGve}. We conclude, in Section \ref{sect:PoP}, with some basic properties concerning $\pr$ products of projections in $S$, including a characterisation of the idempotent-generated subsemigroup $\E(S)=\la E(S)\ra$ in Proposition \ref{prop:ES}.
In coming chapters we will take this idea much further. Specifically, in Chapter \ref{chap:S} we show, conversely, that any regular $*$-semigroup $S$ gives rise to a chained projection groupoid ${\bG(S)=(\G,\ve)}$. In Chapter \ref{chap:iso} we show that $\bS$ and $\bG$ are in fact mutually inverse functors between the categories of regular $*$-semigroups and chained projection groupoids (with suitable morphisms), so that these categories are isomorphic; see Theorem \ref{thm:iso}.
\subsection{Projection groupoids}\label{sect:PG}
Let $\G$ be an ordered groupoid, and suppose the object set $P=v\G$ is a projection algebra (cf.~Definition \ref{defn:P}), with the ordering on $\G$ inherited from that of $P$ (cf.~\eqref{eq:leqP}) in the sense described in Lemma \ref{lem:C}. That is, for each $a\in\G$ and each $p\leq\bd(a)$ we have a left restriction~${{}_p\corest a\in\G}$, with respect to which conditions \ref{O1'}--\ref{O5'} hold. We also have the right restrictions $a\rest_q=({}_q\corest a^{-1})^{-1}$, defined for $q\leq\br(a)$, and these satisfy the duals of \ref{O1'}--\ref{O5'}. We typically use these conditions without explicit reference, and also \ref{O6'} and its dual, which follow from the others.
At this point, the only `link' between the structures of the groupoid $\G$ and the projection algebra $P=v\G$ is via the order $\leq$ on $P$. This order is defined in terms of the $\th$ operations in~\eqref{eq:leq}. Conversely, however, we cannot recover the $\th$ operations from the order $\leq$; indeed, we will see in Example \ref{eg:Brandt} that different projection algebras (with the same underlying set) can give rise to exactly the same ordering. Thus, we will be particularly interested in groupoids $\G$ (as above) with a stronger link to their object algebra $P=v\G$. Such a link manifests itself in a number of equivalent properties listed in Proposition \ref{prop:G1} below. But before we get to these properties, we first consider the more general situation in which the only assumed link between~$\G$ and $P=v\G$ is via the order $\leq$ on $P$, as above.
Consider a morphism $a\in\G$. As in~\eqref{eq:vta}, we have the map
\[
\vt_a : \bd(a)^\da\to\br(a)^\da \GIVENBY p\vt_a = \br({}_p\corest a).
\]
Since $\bd(a)\in P$, we also have the unary operation $\th_{\bd(a)}$, and by \eqref{eq:imthp} its image is $\bd(a)^\da$. It follows that we can compose $\th_{\bd(a)}$ with $\vt_a$, and we denote this composition by
\begin{equation}\label{eq:Tha}
\Th_a = \th_{\bd(a)}\vt_a : P\to\br(a)^\da.
\end{equation}
As a special case, note that by Lemma \ref{lem:vt}\ref{vt2}, we have
\begin{equation}\label{eq:Thp}
\Th_p = \th_{\bd(p)}\vt_p = \th_p\operatorname{id}_{p^\da} = \th_p \qquad\text{for any projection $p\in P$.}
\end{equation}
We begin by recording some basic properties of the $\Th$ maps.
\begin{lemma}\label{lem:G2}
For any $a\in\G$ we have:
\ben
\item \label{G21} $\vt_a\th_{\br(a)} = \vt_a$,
\item \label{G22} $\Th_a\th_{\br(a)} = \Th_a = \th_{\bd(a)}\Th_a$,
\item \label{G23} $\Th_a\vt_{a^{-1}} = \th_{\bd(a)}$,
\item \label{G24} $\Th_a = \th_{\bd(a)}\vt_b$ for any $a\leq b$,
\item \label{G25} $\Th_{{}_p\corest a} = \th_p\Th_a$ for any $p\leq\bd(a)$.
\een
\end{lemma}
\pf
\firstpfitem{\ref{G21}} Since $\im(\vt_a)=\br(a)^\da$, this follows immediately from the fact that each operation $\th_p$ fixes each element of $\im(\th_p)=p^\da$.
\pfitem{\ref{G22}} Since $\Th_a = \th_{\bd(a)}\vt_a$, we obtain $\Th_a\th_{\br(a)} = \Th_a$ from part \ref{G21}, and $\th_{\bd(a)}\Th_a = \Th_a$ from \ref{P2}.
\pfitem{\ref{G23}} Since $\im(\th_{\bd(a)})=\bd(a)^\da$, it follows from Lemma \ref{lem:vtavta*} that
\[
\Th_a\vt_{a^{-1}} = \th_{\bd(a)}\vt_a\vt_{a^{-1}} = \th_{\bd(a)}\operatorname{id}_{\bd(a)^\da} = \th_{\bd(a)}.
\]
\pfitem{\ref{G24}} Since $a\leq b$, we have $a={}_p\corest b$, where $p=\bd(a)$. Now let $s\in P$ be arbitrary; we must show that $s\Th_a = s\th_p\vt_b$. For this, we write $t=s\th_p$, and use \ref{O4'} to calculate
\[
s\Th_a = s\th_p\vt_a = t\vt_a = \br({}_t\corest a) = \br({}_t\corest {}_p\corest b) = \br({}_t\corest b) = t\vt_b = s\th_p\vt_b.
\]
\pfitem{\ref{G25}} We have
\begin{align*}
\Th_{{}_p\corest a} = \th_{\bd({}_p\corest a)}\vt_a &= \th_p\vt_a &&\text{by part \ref{G24}, as ${}_p\corest a\leq a$}\\
&= \th_p\th_{\bd(a)}\vt_a &&\text{by Lemma \ref{lem:thpthq}, as $p\leq\bd(a)$}\\
&= \th_p\Th_a. &&\qedhere
\end{align*}
\epf
Another important property is that when $a$ and~$b$ are composable in $\G$, the map~$\Th_{a\circ b}$ is the composition of~$\Th_a$ and $\Th_b$.
\begin{lemma}\label{lem:Thab}
If $a,b\in\G$ are such that $\br(a)=\bd(b)$, then $\Th_{a\circ b}=\Th_a\Th_b$.
\end{lemma}
\pf
Write $p=\bd(a)$ and $q=\br(a)=\bd(b)$. Then by Lemmas \ref{lem:vt}\ref{vt1} and \ref{lem:G2}\ref{G21} we have
\[
\Th_a\Th_b = \th_p\vt_a\th_q\vt_b = \th_p\vt_a\th_{\br(a)}\vt_b = \th_p\vt_a\vt_b = \th_p\vt_{a\circ b} = \Th_{a\circ b}. \qedhere
\]
\epf
Part \ref{G25} of Lemma \ref{lem:G2} above shows how the $\Th$ maps interact with left restrictions, but the lemma did not include a corresponding statement concerning right restrictions. Of course if $q\leq\br(a)$, then $a\rest_q = {}_p\corest a$ where $p=q\vt_{a^{-1}}$ (cf.~\eqref{eq:vta*}), and then
\begin{equation}\label{eq:Thaq}
\Th_{a\rest_q} = \Th_{{}_p\corest a} = \th_p\Th_a = \th_{q\vt_{a^{-1}}}\Th_a.
\end{equation}
However, the groupoids we will be concerned with satisfy the neater identity $\Th_{a\rest_q}=\Th_a\th_q$. This is in fact equivalent to a number of additional properties linking the groupoid structure of~$\G$ to the projection algebra structure of $P=v\G$:
\begin{prop}\label{prop:G1}
If $\G$ is an ordered groupoid, whose object set $P=v\G$ is a projection algebra, as above, then the following are equivalent:
\begin{enumerate}[label=\textup{\textsf{(G1\alph*)}},leftmargin=12mm]
\item \label{G1a} $\th_{p\vt_a} = \Th_{a^{-1}}\th_p\Th_a$ for all $a\in\G$ and $p\leq\bd(a)$,
\item \label{G1b} $\th_{p\Th_a} = \Th_{a^{-1}}\th_p\Th_a$ for all $a\in\G$ and $p\in P$,
\item \label{G1c} $\Th_{a\rest_q}=\Th_a\th_q$ for all $a\in\G$ and $q\leq\br(a)$,
\item \label{G1d} $\vt_a$ is a projection algebra morphism (and hence isomorphism) $\bd(a)^\da\to\br(a)^\da$ for all $a\in\G$.
\end{enumerate}
\end{prop}
\pf
For the duration of the proof we fix $a\in\G$, and we write $s=\bd(a)$ and $t=\br(a)$. On a number of occasions we will use the fact that
\begin{equation}\label{eq:Tha*Tha}
\Th_{a^{-1}}\Th_a = \Th_{a^{-1}\circ a} = \Th_{\br(a)} = \Th_t = \th_t.
\end{equation}
In the above calculation we used Lemma \ref{lem:Thab} and \eqref{eq:Thp}. In what follows, we show that \ref{G1a} implies each of \ref{G1b}--\ref{G1d}, and conversely that any of \ref{G1b}--\ref{G1d} implies \ref{G1a}.
\pfitem{\ref{G1a}$\implies$\ref{G1b}--\ref{G1d}} Suppose first that \ref{G1a} holds. To verify \ref{G1b}, let $p\in P$ be arbitrary. Then
\begin{align*}
\th_{p\Th_a} &= \th_{(p\th_s)\vt_a} &&\text{by definition}\\
&= \Th_{a^{-1}}\th_{p\th_s}\Th_a &&\text{by \ref{G1a}, as $p\th_s\leq s=\bd(a)$}\\
&= \Th_{a^{-1}}\th_s\th_p\th_s\Th_a &&\text{by \ref{P4}}\\
&= \Th_{a^{-1}}\th_{\br(a^{-1})}\th_p\th_{\bd(a)}\Th_a \\
&= \Th_{a^{-1}}\th_p\Th_a &&\text{by Lemma \ref{lem:G2}\ref{G22}.}
\intertext{For \ref{G1c}, if $q\leq \br(a)$ then}
\Th_{a\rest_q} &= \th_{q\vt_{a^{-1}}}\Th_a &&\text{by \eqref{eq:Thaq}}\\
&= \Th_a\th_q\Th_{a^{-1}} \Th_a &&\text{by \ref{G1a}}\\
&= \Th_a\th_q\th_{\br(a)} &&\text{by \eqref{eq:Tha*Tha}}\\
&= \Th_a\th_q &&\text{by Lemma \ref{lem:thpthq}, as $q\leq\br(a)$.}
\intertext{For \ref{G1d}, since $\vt_a$ is a bijection $s^\da\to t^\da$ by Lemma \ref{lem:vtavta*}, it is enough to show that $\vt_a$ is a projection algebra morphism, i.e.~that}
(p\th_q)\vt_a &= (p\vt_a)\th_{q\vt_a} &&\text{for all $p,q\leq s=\bd(a)$.}
\intertext{But for any such $p,q$, we have}
(p\vt_a)\th_{q\vt_a} &= p\vt_a\Th_{a^{-1}}\th_q\Th_a &&\text{by \ref{G1a}}\\
&= p\vt_a\th_t\vt_{a^{-1}}\th_q\th_s\vt_a &&\text{by definition of the $\Th$ maps}\\
&= p\vt_a\vt_{a^{-1}}\th_q\vt_a &&\text{as $p\vt_a\leq\br(a) = t$ and $q\leq s$ (cf.~Lemma \ref{lem:thpthq})}\\
&= p\th_q\vt_a &&\text{by Lemma \ref{lem:vtavta*}.}
\end{align*}
\pfitem{\ref{G1b}$\implies$\ref{G1a}} This is immediate, since for any $p\leq\bd(a)=s$ we have $p\vt_a=(p\th_s)\vt_a=p\Th_a$.
\pfitem{\ref{G1c}$\implies$\ref{G1a}} If \ref{G1c} holds, then for any $p\leq \bd(a)=\br(a^{-1})$ we have
\begin{align*}
\Th_{a^{-1}}\th_p\Th_a &= \Th_{a^{-1}\rest_p}\Th_a &&\text{by \ref{G1c}}\\
&= \th_{p\vt_a}\Th_{a^{-1}}\Th_a &&\text{by \eqref{eq:Thaq}}\\
&= \th_{p\vt_a}\th_{\br(a)} &&\text{by \eqref{eq:Tha*Tha}}\\
&= \th_{p\vt_a} &&\text{by Lemma \ref{lem:thpthq}, as $p\vt_a\leq\br(a)$.}
\end{align*}
\pfitem{\ref{G1d}$\implies$\ref{G1a}} Suppose \ref{G1d} holds, and let $p\leq s$. We must show that
\begin{align*}
e\th_{p\vt_a} &= e\Th_{a^{-1}}\th_p\Th_a &&\text{for any $e\in P$,}
\intertext{so fix some such $e$. To make the following calculation easier to read, let $f=e\th_t$. Since ${f\leq t=\bd(a^{-1})}$, we can also define $g=f\vt_{a^{-1}}$. By Lemma \ref{lem:vtavta*} we have $f=g\vt_a$, and we note also that $g\leq\br(a^{-1})=s$. We then have}
e\th_{p\vt_a} = e\th_t\th_{p\vt_a} &= f\th_{p\vt_a} &&\text{by Lemma \ref{lem:thpthq}, as $p\vt_a\leq\br(a)=t$}\\
&= (g\vt_a)\th_{p\vt_a} &&\text{as observed above}\\
&= (g\th_p)\vt_a &&\text{by \ref{G1d}, as $g,p\leq s=\bd(a)$}\\
&= e\th_t\vt_{a^{-1}}\th_p\vt_a &&\text{as $g=f\vt_{a^{-1}}$ and $f=e\th_t$}\\
&= e\th_t\vt_{a^{-1}}\th_p\th_s\vt_a &&\text{by Lemma \ref{lem:thpthq}, as $p\leq s$}\\
&= e\Th_{a^{-1}}\th_p\Th_a &&\text{by definition of the $\Th$ maps.} \qedhere
\end{align*}
\epf
Here then is the main definition of this section:
\begin{defn}\label{defn:PG}
A \emph{projection groupoid} is an ordered groupoid $\G$, whose object set $P=v\G$ is a projection algebra (cf.~Definition \ref{defn:P}), with the ordering on $\G$ inherited from that of $P$ (cf.~\eqref{eq:leqP}) in the sense described in Lemma \ref{lem:C}, and for which:
\begin{enumerate}[label=\textup{\textsf{(G\arabic*)}},leftmargin=10mm]
\item \label{G1} any (and hence all) of the conditions \ref{G1a}--\ref{G1d} hold.
\end{enumerate}
\end{defn}
\begin{rem}\label{rem:G1}
The reader should notice a resemblance between \ref{G1a} and \ref{P4}. In fact, this is more than just superficial. Recall that (abstract) projection algebras are supposed to `model' projections of regular $*$-semigroups, and that the $\th_p$ operations model `conjugation', \emph{viz.}~$q\th_p \equiv pqp$. In this way, axiom \ref{P4} can be thought of as a recipe for `iterating' conjugation by projections.
By the same token, restrictions ${}_p\corest a$ in $\G$ (with $p\leq\bd(a)$) are supposed to model `restrictions'~$pa$ in a regular $*$-semigroup $S$, as in Remark \ref{rem:GS}. `Pretending' that we are working in a regular $*$-semigroup, we can then think of
\[
p\vt_a = \br({}_p\corest a) \equiv \br(pa) \equiv (pa)^*pa = a^*p^*pa = a^*pa
\]
as a kind of `conjugate of $p$ by $a$'. In this sense, the $\vt_a$ maps can be thought of as analogues of conjugation in an arbitrary (abstract) projection groupoid. For arbitrary $t\in P$ (not necessarily below $\bd(a)$), we can then also think of
\[
t\Th_a = t\th_{\bd(a)}\vt_a \equiv a^*\cdot \bd(a)\cdot t\cdot \bd(a)\cdot a \equiv a^*\cdot aa^*\cdot t\cdot aa^*\cdot a = a^*ta,
\]
so that $\Th_a$ also acts as a kind of `conjugation' by morphisms. Condition \ref{G1a} can then be thought of as a way to iterate this kind of conjugation, as can condition \ref{G1b}.
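To illustrate, continuing to `pretend' that we are working in a regular $*$-semigroup, condition \ref{G1a} simply records that these two kinds of conjugation compose in the expected way: for $p\leq\bd(a)$ and $q\in P$ we have
\[
q\th_{p\vt_a} \equiv (a^*pa)\,q\,(a^*pa) = a^*\big(p(aqa^*)p\big)a \equiv q\Th_{a^{-1}}\th_p\Th_a .
\]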
\end{rem}
We close the section with a simple technical result that will help us shorten some later arguments.
\begin{lemma}\label{lem:pa'qap}
If $\G$ is a projection groupoid, and if $a\in\G$ and $p,q\in P$, then
\[
p\Th_{a^{-1}}\th_q\Th_a\th_p = q\Th_a\th_p.
\]
\end{lemma}
\pf
By \ref{G1b} and \ref{P3} we have $p\Th_{a^{-1}}\th_q\Th_a\th_p = p\th_{q\Th_a}\th_p = q\Th_a\th_p$.
\epf
\subsection{Chained projection groupoids}\label{sect:CPG}
At this point it is worth pausing to consider again the broad goal of this chapter, and to draw attention to one of the major hurdles we currently face. Recall that (broadly speaking) we are aiming to give an abstract description of the groupoids that correspond to regular $*$-semigroups. Starting with a regular $*$-semigroup $S$, in Definition \ref{defn:GS} we constructed a groupoid $\G=\G(S)$ with underlying set $S$, and whose composition was a restriction of the product in $S$. As we explained in Remark \ref{rem:GS}, this (partial) composition is enough to reconstruct the entire product of $S$, in the sense that for any $a,b\in S$ we have $ab=a'\circ e\circ b'$ for suitable~$a',e,b'\in\G(=S)$. As noted in that remark, the element $e$ is in fact the idempotent $e=p'q'$, where~$p',q'\in P$ are certain special projections below $\br(a)=a^*a$ and~$\bd(b)=bb^*$. Being an idempotent of course means that~$e^2=e$ in $S$. However, since $\bd(e)=p'$ and $\br(e)=q'$ (cf.~\eqref{eq:ee*e*e}), and since~$p'$ and~$q'$ are not necessarily equal (though always $\F$-related), the composition $e\circ e$ is generally not defined in $\G$. Returning to the abstract setting of projection groupoids, in which we are operating in this chapter, it is not at all obvious at the outset which morphism $p'\to q'$ ought to play the role of this idempotent~$e$.
The precise mechanism for solving this problem (which was explored at greater length in Remark \ref{rem:e}) is encoded in what we will call an \emph{evaluation map}, and in the properties we will require of such maps below.
\begin{defn}\label{defn:ve}
Let $\G$ be a projection groupoid (cf.~Definition \ref{defn:PG}), and let $\C=\C(P)$ be the chain groupoid of $P=v\G$ (cf.~Definition \ref{defn:CP}). An \emph{evaluation map} is an ordered $v$-functor $\ve:\C\to\G$, meaning that the following hold:
\begin{enumerate}[label=\textup{\textsf{(E\arabic*)}},leftmargin=10mm]
\item \label{E1} $\ve(p)=p$ for all $p\in P$ (where as usual we identify $p\equiv[p]\in\C$),
\item \label{E2} $\ve(\c\circ\d)=\ve(\c)\circ\ve(\d)$ if $\br(\c)=\bd(\d)$,
\item \label{E3} $\c\leq\d \implies \ve(\c)\leq\ve(\d)$.
\end{enumerate}
\end{defn}
We will soon define a \emph{chained projection groupoid} to be a projection groupoid with an evaluation map possessing a certain coherence property. But first we note a number of simple consequences of \ref{E1}--\ref{E3}.
\begin{rem}\label{rem:ve}
As with group homomorphisms, it is easy to see that
\begin{enumerate}[label=\textup{\textsf{(E\arabic*)}},leftmargin=10mm]\addtocounter{enumi}{3}
\item \label{E4} $\ve(\c^{-1})=\ve(\c)^{-1}$ for all $\c\in\C$.
\end{enumerate}
It also follows that:
\begin{enumerate}[label=\textup{\textsf{(E\arabic*)}},leftmargin=10mm]\addtocounter{enumi}{4}
\item \label{E5} $\bd(\ve(\c))=\bd(\c)$ and $\br(\ve(\c))=\br(\c)$ for all $\c\in\C$.
\end{enumerate}
For example, $\bd(\ve(\c)) = \ve(\c)\circ\ve(\c)^{-1} = \ve(\c)\circ\ve(\c^{-1}) = \ve(\c\circ\c^{-1}) = \ve(\bd(\c)) = \bd(\c)$, where the last step uses \ref{E1}.
It is also easy to see that \ref{E3} is equivalent (in the presence of the other axioms) to:
\begin{enumerate}[label=\textup{\textsf{(E\arabic*)}},leftmargin=10mm]\addtocounter{enumi}{5}
\item \label{E6} $\ve({}_q\corest\c)={}_q\corest\ve(\c)$ if $q\leq\bd(\c)$.
\end{enumerate}
For example, if \ref{E6} holds, and if $\c\leq\d$, then $\c={}_p\corest\d$ for some $p\leq\bd(\d)$; it follows from this that $\ve(\c) = \ve({}_p\corest\d) = {}_p\corest\ve(\d) \leq\ve(\d)$.
\end{rem}
Consider a projection groupoid $\G$ with $P=v\G$, and let $\ve:\C\to\G$ be an evaluation map.
For $p,q\in P$ with $p\F q$, we have the $P$-chain $[p,q]\in\C$, and the elements $\ve[p,q] \in \G$ will play a crucial role in all that follows. For one thing, these elements generate the image of $\C$ under $\ve$ (as the $[p,q]$ generate $\C$). Specifically, if $\c=[p_1,p_2,\ldots,p_k]\in\C$, then
\begin{equation}\label{eq:vec}
\ve(\c) = \ve[p_1,p_2]\circ\ve[p_2,p_3]\circ\cdots\circ\ve[p_{k-1},p_k].
\end{equation}
The next lemma gathers some important basic properties of the $\ve[p,q]$; in what follows, we will typically use these without explicit reference. We write $f|_A$ for the (set-theoretic) restriction of a function $f$ to a subset $A$ of its domain.
\begin{lemma}\label{lem:ve}
Let $\G$ be a projection groupoid, and $\ve:\C\to\G$ an evaluation map.
\ben
\item \label{ve1} For any $p\in P$ we have $\ve[p,p]=p$.
\item \label{ve2} If $p,q\in P$ are such that $p\F q$, then
\begin{align*}
\bd(\ve[p,q]) &= p ,& \ve[p,q]^{-1} &= \ve[q,p], &\vt_{\ve[p,q]} &= \th_q|_{p^\da},\\
\br(\ve[p,q]) &= q ,& && \Th_{\ve[p,q]} &= \th_p\th_q.
\end{align*}
\item \label{ve3} If $p,q,r,s\in P$ are such that $p\F q$, $r\leq p$ and $s\leq q$, then
\[
{}_r\corest\ve[p,q] = \ve[r,r\th_q] \AND \ve[p,q]\rest_s = \ve[s\th_p,s].
\]
\een
\end{lemma}
\pf
\firstpfitem{\ref{ve1}} This follows quickly from \ref{E1} and $[p,p]=[p]$, keeping in mind the identification of $p\in P$ with the chain $[p]\in\C$.
\pfitem{\ref{ve3}} This follows from \ref{E6}, together with \eqref{eq:rest} and \eqref{eq:corest}.
\pfitem{\ref{ve2}} The claims concerning domains, ranges and inversion follow from \ref{E4} and \ref{E5}, and the fact that $[p,q]^{-1}=[q,p]$ in $\C$.
Since the maps $\vt_{\ve[p,q]}$ and $\th_q|_{p^\da}$ both have domain $p^\da$, we can show the maps are equal by showing that
\[
r\vt_{\ve[p,q]} = r\th_q \qquad\text{for all $r\leq p$.}
\]
But for such $r$, we use the definition of the $\vt$ maps, and parts of the current lemma that have already been proved, to calculate
\[
r\vt_{\ve[p,q]} = \br({}_r\corest\ve[p,q]) = \br(\ve[r,r\th_q]) = r\th_q.
\]
Finally, again using already-proved parts of the current lemma, we have
\[
\Th_{\ve[p,q]} = \th_p\vt_{\ve[p,q]} = \th_p\circ \th_q|_{p^\da} = \th_p\th_q,
\]
where in the last step we used the fact that $p^\da=\im(\th_p)$.
\epf
Before we move on, it will also be convenient to record the following result.
\begin{lemma}\label{lem:aepq}
Let $\G$ be a projection groupoid, and $\ve:\C\to\G$ an evaluation map. If $p,q\in P$ are such that $p\F q$, and if $a\leq\ve[p,q]$, then
\[
a=\ve[r,s] \WHERE r=\bd(a) \ANd s=\br(a).
\]
\end{lemma}
\pf
Noting that $a\leq\ve[p,q]$, we use Lemma \ref{lem:ve} to calculate
\[
a = {}_r\corest\ve[p,q] = \ve[r,r\th_q] \AND r\th_q = \br(\ve[r,r\th_q]) = \br(a)=s. \qedhere
\]
\epf
To state the coherence property alluded to above, we require the concept of \emph{linked pairs}, which we now define. We have reused this term from Section \ref{sect:LP} (where $p$-linked pairs were used to define the congruence $\approx$ on the path category $\P$), because there is a strong tie between the two concepts, as we will see later.
\begin{defn}\label{defn:LP1}
Let $\G$ be a projection groupoid, and consider a morphism $b\in\G$. A pair of projections $(e,f)\in P^2$ is said to be \emph{$b$-linked} if
\begin{equation}\label{eq:LP}
f = e\Th_b\th_f \AND e = f\Th_{b^{-1}}\th_e.
\end{equation}
\end{defn}
We will soon associate two morphisms $\lam(e,b,f)$ and $\rho(e,b,f)$ to the $b$-linked pair $(e,f)$. But first we prove the next result, which will ensure these morphisms are well defined.
\begin{lemma}\label{lem:LP}
Let $\G$ be a projection groupoid, and suppose $(e,f)$ is $b$-linked, where $b\in\G(q,r)$. Define further projections
\begin{equation}\label{eq:e1e2f1f2}
e_1 = e\th_q \COMMA e_2 = f\Th_{b^{-1}} \COMMA f_1 = e\Th_b \AND f_2 = f\th_r.
\end{equation}
Then
\ben\bmc2
\item \label{LP1} $e\leqF q$ and $f\leqF r$,
\item \label{LP2} $e_i\leq q$ and $f_i\leq r$ for $i=1,2$,
\item \label{LP3} $e\F e_i$ and $f\F f_i$ for $i=1,2$,
\item \label{LP4} ${}_{e_i}\corest b=b\rest_{f_i}$ for $i=1,2$.
\emc\een
\end{lemma}
\pf
It is clear from Definition \ref{defn:LP1} that $(e,f)$ is $b$-linked if and only if $(f,e)$ is $b^{-1}$-linked. If we replace $b\leftrightarrow b^{-1}$ and $(e,f)\leftrightarrow(f,e)$, then the projections defined in \eqref{eq:e1e2f1f2} are interchanged accordingly, \emph{viz.}~$e_1\leftrightarrow f_2$ and $e_2\leftrightarrow f_1$.
Because of this symmetry, for parts \ref{LP1}--\ref{LP3}, it suffices to prove the claims concerning $e,e_1,e_2$. (Alternatively, the arguments below can be easily adapted to prove the claims regarding $f,f_1,f_2$.)
\pfitem{\ref{LP2}} This follows from $\im(\Th_{b^{-1}})=\im(\th_q)=q^\da$.
\pfitem{\ref{LP3}} From $e\leq e\leqF q$, it follows from Lemma \ref{lem:pqr}\ref{pqr2} that $e\F e\th_q=e_1$. By Lemma \ref{lem:pqp}\ref{pqP1}, we have $e=e_2\th_e\leqF e_2$. It remains to show that $e_2 \leqF e$, and for this we use \ref{G1b}, \eqref{eq:LP} and~\eqref{eq:e1e2f1f2}:
\[
e\th_{e_2} = e\th_{f\Th_{b^{-1}}} = e\Th_b\th_f\Th_{b^{-1}} = f\Th_{b^{-1}} = e_2.
\]
\pfitem{\ref{LP4}} We must show that $f_i = e_i\vt_b$ ($i=1,2$). Keeping $q=\bd(b)$ and $r=\bd(b^{-1})$ in mind, we have
\[
e_1\vt_b = e\th_q\vt_b = e\Th_b = f_1 \AND e_2\vt_b = f\Th_{b^{-1}}\vt_b = f\th_r = f_2,
\]
where we used Lemma \ref{lem:G2}\ref{G23} in the second calculation.
\pfitem{\ref{LP1}} Combining parts \ref{LP2} and \ref{LP3}, we have $e\leqF e_1\leq q$, and Lemma~\ref{lem:pqr}\ref{pqr1} then gives ${e\leqF q}$.
\epf
\begin{defn}\label{defn:LP2}
Let $\G$ be a projection groupoid (cf.~Definition \ref{defn:PG}), and $\ve:\C\to\G$ an evaluation map (cf.~Definition \ref{defn:ve}). Let $(e,f)$ be a $b$-linked pair, where $b\in\G(q,r)$, and let $e_1,e_2,f_1,f_2\in P$ be as in \eqref{eq:e1e2f1f2}. By Lemma \ref{lem:LP}, we have two well-defined morphisms
\begin{align}
\nonumber \lam(e,b,f) &= \ve[e,e_1]\circ {}_{e_1}\corest b \circ \ve[f_1,f] &\text{and}&& \rho(e,b,f) &= \ve[e,e_2]\circ {}_{e_2}\corest b \circ \ve[f_2,f]\\
\label{eq:G3} &= \ve[e,e_1]\circ b\rest_{f_1} \circ \ve[f_1,f] &&& &= \ve[e,e_2]\circ b\rest_{f_2} \circ \ve[f_2,f].
\end{align}
These morphisms are shown in Figure \ref{fig:G3}.
\end{defn}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[xscale=0.8]
\tikzstyle{vertex}=[circle,draw=black, fill=white, inner sep = 0.07cm]
\node (e) at (0,0){$e$};
\node (e1) at (3,2){$e_1$};
\node (e2) at (5,-2){$e_2$};
\node (f) at (15,0){$f$};
\node (f1) at (10,2){$f_1$};
\node (f2) at (12,-2){$f_2$};
\node (q) at (4,6){$q$};
\node (r) at (11,6){$r$};
\draw[->-=0.5] (q)--(r);
\draw[->-=0.5] (e)--(e1);
\draw[->-=0.5] (e1)--(f1);
\draw[->-=0.5] (f1)--(f);
\draw[->-=0.5] (e)--(e2);
\draw[->-=0.5] (e2)--(f2);
\draw[->-=0.5] (f2)--(f);
\draw[white,line width=2mm] (q)--(e2);
\draw[white,line width=2mm] (r)--(f2);
\draw[dashed] (e1)--(q)--(e2) (f1)--(r)--(f2);
\draw[dotted] (e)--(q) (f)--(r);
\node () at (7.4,6.3) {$b$};
\node () at (6.4,2.3) {${}_{e_1}\corest b = b\rest_{f_1}$};
\node () at (8.4,-1.7) {${}_{e_2}\corest b = b\rest_{f_2}$};
\node () at (2.1,0.7) {{\small $\ve[e,e_1]$}};
\node () at (1.7,-1.2) {{\small $\ve[e,e_2]$}};
\node () at (13.15,1.2) {{\small $\ve[f_1,f]$}};
\node () at (14.15,-1.2) {{\small $\ve[f_2,f]$}};
\end{tikzpicture}
\caption{The projections and morphisms associated to a $b$-linked pair $(e,f)$, for $b\in\G(q,r)$; see Definitions \ref{defn:LP1}, \ref{defn:LP2} and \ref{defn:CPG}, and Lemma \ref{lem:LP}. Dotted and dashed lines indicate $\leqF$ and~$\leq$ relationships, respectively. Axiom \ref{G2} says that the hexagon at the bottom of the diagram commutes.}
\label{fig:G3}
\end{center}
\end{figure}
The above-mentioned coherence property states that the two terms in \eqref{eq:G3} must be equal:
\begin{defn}\label{defn:CPG}
A \emph{chained projection groupoid} is a pair $(\G,\ve)$, where $\G$ is a projection groupoid (cf.~Definition \ref{defn:PG}), and $\ve:\C\to\G$ is an evaluation map (cf.~Definitions \ref{defn:CP} and \ref{defn:ve}) satisfying the following condition:
\begin{enumerate}[label=\textup{\textsf{(G\arabic*)}},leftmargin=10mm]\addtocounter{enumi}{1}
\item \label{G2} For every $b\in\G$, and for every $b$-linked pair $(e,f)$, we have $\lam(e,b,f) = \rho(e,b,f)$, where these morphisms are as in \eqref{eq:G3}.
\end{enumerate}
\end{defn}
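It is instructive to consider \ref{G2} in the special case that $b=p\in P$ is itself a projection. In this case $\Th_p=\Th_{p^{-1}}=\th_p$ by \eqref{eq:Thp}, so \eqref{eq:LP} reduces to $f=e\th_p\th_f$ and $e=f\th_p\th_e$, and one easily checks that $e_1=f_1=e\th_p$ and $e_2=f_2=f\th_p$. Consequently \eqref{eq:G3} becomes
\[
\lam(e,p,f) = \ve[e,e\th_p]\circ e\th_p\circ\ve[e\th_p,f] = \ve[e,e\th_p,f] \AND \rho(e,p,f) = \ve[e,f\th_p,f],
\]
so for projections the equality required by \ref{G2} is closely tied to the identification of the $P$-paths $\lam(e,p,f)$ and $\rho(e,p,f)$ built into the congruence $\approx$; cf.~\ref{Om3}.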
\begin{rem}\label{rem:G2'}
One might wonder if the concept of linked pairs could be avoided when defining chained projection groupoids. Specifically, one might wonder if \ref{G2} is equivalent to:
\begin{enumerate}[label=\textup{\textsf{(G\arabic*)$'$}},leftmargin=10mm]\addtocounter{enumi}{1}
\item \label{G2'} For any morphism $b\in\G(q,r)$, and for any projections $e,e_1,e_2,f,f_1,f_2\in P$ satisfying conditions~\ref{LP1}--\ref{LP4} from Lemma \ref{lem:LP}, we have $\ve[e,e_1]\circ {}_{e_1}\corest b \circ \ve[f_1,f] = \ve[e,e_2]\circ {}_{e_2}\corest b \circ \ve[f_2,f]$.
\end{enumerate}
While \ref{G2'} of course implies \ref{G2}, the converse does not hold. Most significantly, the stronger condition \ref{G2'} does not (generally) hold in the all-important case that $\G=\G(S)$ is the groupoid associated to a regular $*$-semigroup~$S$, as in Definition \ref{defn:GS}. We will say more about this in Remark \ref{rem:G2'2}.
\end{rem}
\subsection[The regular $*$-semigroup associated to a chained projection groupoid]{\boldmath The regular $*$-semigroup associated to a chained projection groupoid}\label{sect:SGve}
For the duration of this section, we fix a chained projection groupoid $(\G,\ve)$, as in Definition~\ref{defn:CPG}, and we continue to write $P=v\G$, $\C=\C(P)$, and so on. Our aim here is to construct a regular $*$-semigroup $S=\bS(\G,\ve)$, built from $\G$ and $\ve$ in a natural way. The underlying set of~$S$ will simply be $\G$, and the involution of $S$ will simply be inversion in $\G$:
\[
a^* = a^{-1} \qquad\text{for $a\in\G$.}
\]
The definition of the product in $S$, which we will denote by $\pr$, is more involved. Its inspiration is drawn from the properties of regular $*$-semigroups discussed in Remark \ref{rem:GS}.
\begin{defn}\label{defn:pr}
Let $(\G,\ve)$ be a chained projection groupoid (cf.~Definition \ref{defn:CPG}), and consider an arbitrary pair of morphisms~$a,b\in\G$. Let $p=\br(a)$ and $q=\bd(b)$, and define the projections
\[
p' = q\th_p \AND q'=p\th_q.
\]
By Lemma \ref{lem:p'q'}, we have $p'\leq p$, $q'\leq q$ and $p'\F q'$. In particular, the morphisms $ a\rest_{p'}$, $\ve[p',q']$ and ${}_{q'}\corest b$ exist, and we define
\begin{equation}\label{eq:pr}
a\pr b = a\rest_{p'} \circ \ve[p',q'] \circ {}_{q'}\corest b.
\end{equation}
This is illustrated in Figure \ref{fig:pr}. We define $\bS(\G,\ve)$ to be the $(2,1)$-algebra with underlying set $\G$, and with binary operation $\pr$ and unary operation ${}^*={}^{-1}$.
\end{defn}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[xscale=.7]
\tikzstyle{vertex}=[circle,draw=black, fill=white, inner sep = 0.07cm]
\node (p) at (5,3){$p$};
\node (q) at (10,4){$q$};
\node (p0) at (0,3){};
\node (q0) at (15,4){};
\node (p') at (5,0){$p'$};
\node (q') at (10,0){$q'$};
\node (p0') at (0,0){};
\node (q0') at (15,0){};
\draw[dashed] (p)--(p')
(q)--(q')
;
\draw[->-=0.5] (p0)--(p);
\draw[->-=0.5] (q)--(q0);
\draw[->-=0.5] (p0')--(p');
\draw[->-=0.5] (q')--(q0');
\draw[->-=0.5] (p')--(q');
\draw[->-=0.5] (p0') to [bend right = 15] (q0');
\node () at (2.4,3.2) {$a$};
\node () at (12.5,4.25) {$b$};
\node () at (2.4,.3) {$a\rest_{p'}$};
\node () at (12.5,.25) {${}_{q'}\corest b$};
\node () at (7.4,.3) {$\ve[p',q']$};
\node () at (7.35,-1.35) {$a\pr b$};
\end{tikzpicture}
\caption{Construction of the product $a\pr b$, as in Definition \ref{defn:pr}.}
\label{fig:pr}
\end{center}
\end{figure}
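As a quick sanity check on Definition \ref{defn:pr}, suppose $a=p$ and $b=q$ are themselves projections (i.e.~identities of $\G$). Then $a\rest_{p'}=p'$ and ${}_{q'}\corest b=q'$, so \eqref{eq:pr} reduces to
\[
p\pr q = p'\circ\ve[p',q']\circ q' = \ve[p',q'] \WHERE p'=q\th_p \ANd q'=p\th_q;
\]
cf.~Lemma \ref{lem:pqvepq} below.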
To show that $S=\bS(\G,\ve)$ is a regular $*$-semigroup, with respect to the binary operation $\pr$ and unary operation ${}^*={}^{-1}$, we must establish the identities:
\[
(a\pr b)\pr c = a\pr(b\pr c) \COMMA (a^*)^* = a = (a\pr a^*)\pr a \AND (a\pr b)^*=b^*\pr a^*.
\]
Associativity of $\pr$ is quite difficult to establish, and is finally achieved in Lemma \ref{lem:abcabc} below. The identities involving ${}^*$ are comparatively easy, however, and we verify these shortly in Lemma \ref{lem:*}. For its proof, we need the following lemma, which shows that the (total) product $\pr$ extends the (partial) composition $\circ$, in the sense that these operations agree whenever the latter is defined.
Before we begin, it is worth noting that the operation $\pr$ from Definition \ref{defn:pr} makes sense in the more general case of $\G$ being a projection groupoid with an evaluation map $\ve$. All of the coming lemmas apart from the final Lemma \ref{lem:abcabc} hold in this more general situation as well; the coherence property~\ref{G2} is needed only at the very last stage of the proof of associativity. Still, for simplicity, we do continue to assume that $(\G,\ve)$ is a chained projection groupoid.
\begin{lemma}\label{lem:prcirc}
If $a,b\in\G$ are such that $\br(a)=\bd(b)$, then $a\pr b=a\circ b$.
\end{lemma}
\pf
In the notation of Definition \ref{defn:pr}, we have $p=q$, so also $p'=q'=p$. But then
\[
a\pr b = a\rest_{p} \circ \ve[p,p] \circ {}_{p}\corest b = a\circ p\circ b = a\circ b. \qedhere
\]
\epf
\begin{lemma}\label{lem:*}
For any $a,b\in\G$, we have
\[
(a^*)^* = a = (a\pr a^*)\pr a \AND (a\pr b)^*=b^*\pr a^*.
\]
\end{lemma}
\pf
Of course $(a^*)^*=(a^{-1})^{-1}=a$. It follows from Lemma \ref{lem:prcirc} that $a\pr a^* = a\circ a^{-1} = \bd(a)$, and so $(a\pr a^*)\pr a = \bd(a)\pr a = \bd(a)\circ a = a$.
Next, let $p,q,p',q'$ be as in Definition \ref{defn:pr}. Since $q=\br(b^*)$ and $p=\bd(a^*)$, we have
\[
a\pr b = a\rest_{p'} \circ \ve[p',q'] \circ {}_{q'}\corest b \AND b^*\pr a^* = b^*\rest_{q'} \circ \ve[q',p'] \circ {}_{p'}\corest a^* ,
\]
and so
\[
(a\pr b)^*= ({}_{q'}\corest b)^* \circ \ve[p',q']^* \circ (a\rest_{p'})^* = b^*\rest_{q'} \circ \ve[q',p'] \circ {}_{p'}\corest a^* = b^*\pr a^*. \qedhere
\]
\epf
The symmetry/duality afforded by the identity $(a\pr b)^*=b^*\pr a^*$ will allow us to shorten several of the proofs to follow. This is the case with the proof of the next result, which identifies the domain and range of a product $a\pr b$.
\begin{lemma}\label{lem:drab}
For any $a,b\in\G$ we have
\[
\bd(a\pr b) = \bd(b)\Th_{a^{-1}} \AND \br(a\pr b) = \br(a)\Th_b.
\]
\end{lemma}
\pf
Let $p,q,p',q'$ be as in Definition \ref{defn:pr}, so that $a\pr b$ is as in \eqref{eq:pr}. Then
\[
\br(a\pr b) = \br({}_{q'}\corest b) = q'\vt_b = p\th_q\vt_b = \br(a) \th_{\bd(b)}\vt_b = \br(a)\Th_b.
\]
The identity involving domains may be proved in similar fashion (using \eqref{eq:vta*}). Alternatively, it follows from the range identity and duality:
\[
\bd(a\pr b) = \br((a\pr b)^*) = \br(b^*\pr a^*) = \br(b^*)\Th_{a^*} = \bd(b)\Th_{a^{-1}}. \qedhere
\]
\epf
Our next result establishes an important identity.
\begin{lemma}\label{lem:Thaprb}
For any $a,b\in\G$ we have $\Th_{a\pr b} = \Th_a\Th_b$.
\end{lemma}
\pf
Let $p,q,p',q'$ be as in Definition \ref{defn:pr}. Then by the projection algebra axioms we have
\[
\th_{p'}\th_{q'} = \th_{q\th_p}\th_{p\th_q} =_4 \th_p\th_q\th_p\th_q\th_p\th_q =_5 \th_p\th_q.
\]
We then calculate
\begin{align*}
\Th_{a\pr b} &= \Th_{a\rest_{p'}} \circ \Th_{\ve[p',q']} \circ \Th_{{}_{q'}\corest b} &&\text{by \eqref{eq:pr} and Lemma \ref{lem:Thab}}\\
&= \Th_a\th_{p'} \circ \th_{p'}\th_{q'} \circ \th_{q'}\Th_b &&\text{by \ref{G1c} and Lemmas \ref{lem:G2}\ref{G25} and \ref{lem:ve}\ref{ve2}}\\
&= \Th_a \circ \th_{p'}\th_{q'} \circ \Th_b &&\text{by \ref{P2}}\\
&= \Th_a \circ \th_p\th_q \circ \Th_b &&\text{by the above observation}\\
&= \Th_a \th_{\br(a)}\circ\th_{\bd(b)} \Th_b\\
&= \Th_a \Th_b &&\text{by Lemma \ref{lem:G2}\ref{G22}.} \qedhere
\end{align*}
\epf
We are almost ready to show that $\pr$ is associative, i.e.~that $(a\pr b)\pr c$ and $a\pr(b\pr c)$ are equal, for all $a,b,c\in\G$. Before we do so, we note that Lemmas \ref{lem:drab} and \ref{lem:Thaprb} can be used to show that these two terms have equal domains, and equal ranges. For example,
\begin{equation}\label{eq:rabc}
\br((a\pr b)\pr c) = \br(a\pr b)\Th_c = \br(a)\Th_b\Th_c = \br(a)\Th_{b\pr c} = \br(a\pr(b\pr c)).
\end{equation}
A similar calculation gives
\begin{equation}\label{eq:dabc}
\bd((a\pr b)\pr c) = \bd(a\pr(b\pr c)) = \bd(c)\Th_{b^{-1}}\Th_{a^{-1}}.
\end{equation}
As a stepping stone to associativity, the next result gives expansions of $(a\pr b)\pr c$ and $a\pr(b\pr c)$, expressing each term as a composition of five morphisms.
\begin{lemma}\label{lem:abc}
If $a,b,c\in\G$, and if $p=\br(a)$, $q=\bd(b)$, $r=\br(b)$ and $s=\bd(c)$, then
\ben
\item \label{abc2} $(a\pr b)\pr c = a\rest_e \circ \ve[e,e_1] \circ {}_{e_1}\corest b \circ \ve[f_1,f] \circ {}_f\corest c$,
where
\[
e = s\Th_{b^{-1}}\th_p \COMMA e_1 = e\th_q \COMMA f_1 = e\Th_b \AND f = p\Th_b\th_s,
\]
\item \label{abc1} $a\pr (b\pr c) = a\rest_e \circ \ve[e,e_2] \circ b\rest_{f_2} \circ \ve[f_2,f] \circ {}_f\corest c$,
where
\[
e = s\Th_{b^{-1}}\th_p \COMMA e_2 = f\Th_{b^{-1}} \COMMA f_2 = f\th_r \AND f = p\Th_b\th_s.
\]
\een
The above factorisations are illustrated in Figure \ref{fig:abc}.
\end{lemma}
\pf
\firstpfitem{\ref{abc2}} Put $p'=q\th_p$ and $q'=p\th_q$, so that
\[
a\pr b = a'\circ\ve'\circ b' \WHERE a' = a\rest_{p'} \COMMa \ve' = \ve[p',q'] \ANd b' = {}_{q'}\corest b.
\]
Keeping Lemma \ref{lem:drab} in mind, we now define
\[
t = \br(a\pr b) = p\Th_b.
\]
Then
\[
(a\pr b)\pr c = (a\pr b)\rest_{t'} \circ \ve[t',s'] \circ {}_{s'}\corest c \WHERE \text{$t'=s\th_t$ and $s'=t\th_s$.}
\]
We now focus on the term $(a\pr b)\rest_{t'}$. By \eqref{eq:pab} we have
\[
(a\pr b)\rest_{t'} = (a'\circ\ve'\circ b')\rest_{t'} = a'\rest_u \circ \ve'\rest_v \circ b'\rest_{t'} \WHERE v = t'\vt_{b'}^{-1} \ANd u = t'\vt_{b'}^{-1}\vt_{\ve'}^{-1}.
\]
Now,
\bit
\item $a'\rest_u = a\rest_{p'}\rest_u = a\rest_u$,
\item $\ve'\rest_v \leq \ve' = \ve[p',q']$, $\bd(\ve'\rest_v) = \br(a'\rest_u) = u$ and $\br(\ve'\rest_v)=v$, so $\ve'\rest_v = \ve[u,v]$ by Lemma \ref{lem:aepq}, and
\item $b'\rest_{t'}\leq b'\leq b$ and $\bd(b'\rest_{t'})=\br(\ve'\rest_v)=v$, so $b'\rest_{t'}={}_v\corest b$. (For later use, since $b'\rest_{t'}$ and $b\rest_{t'}$ are both below $b$ and have codomain $t'$, it also follows that $b\rest_{t'}=b'\rest_{t'}={}_v\corest b$.)
\eit
Putting everything together, it follows that
\begin{align*}
(a\pr b)\pr c &= (a\pr b)\rest_{t'} \circ \ve[t',s'] \circ {}_{s'}\corest c \\
&= a'\rest_u \circ \ve'\rest_v \circ b'\rest_{t'} \circ \ve[t',s'] \circ {}_{s'}\corest c \\
&= a\rest_u \circ \ve[u,v] \circ {}_v\corest b \circ \ve[t',s'] \circ {}_{s'}\corest c .
\end{align*}
\newpage\noindent
It therefore remains to check that
\bena
\bmc4
\item \label{dabc1} $u=e$,
\item \label{dabc2} $v=e_1$,
\item \label{dabc3} $t'=f_1$,
\item \label{dabc4} $s'=f$,
\emc
\een
where $e,e_1,f_1,f$ are as defined in the statement of the lemma.
\pfitem{\ref{dabc1}} First note that $\bd(a\rest_u) = \bd((a\pr b)\pr c) = s\Th_{b^{-1}}\Th_{a^{-1}}$ by \eqref{eq:dabc}, so $a\rest_u = {}_{s\Th_{b^{-1}}\Th_{a^{-1}}}\corest a$. But then
\[
u = \br(a\rest_u) = \br({}_{s\Th_{b^{-1}}\Th_{a^{-1}}}\corest a) = s\Th_{b^{-1}}\Th_{a^{-1}}\vt_a = s\Th_{b^{-1}}\th_p = e,
\]
where we used Lemma \ref{lem:G2}\ref{G23} in the second-last step.
\pfitem{\ref{dabc3}} Here we use \ref{G1b} to calculate $t' = s\th_t = s\th_{p\Th_b} = s\Th_{b^{-1}}\th_p\Th_b = e\Th_b = f_1$.
\pfitem{\ref{dabc2}} We noted above that ${}_v\corest b=b\rest_{t'}$. Combining this with \eqref{eq:vta*}, the just-proved item \ref{dabc3}, and Lemma \ref{lem:G2}\ref{G23}, it follows that
\[
v = \bd({}_v\corest b) = \bd(b\rest_{t'}) = t'\vt_{b^{-1}} = f_1\vt_{b^{-1}} = (e\Th_b)\vt_{b^{-1}} = e\th_q = e_1.
\]
\pfitem{\ref{dabc4}} Finally, $s'=t\th_s=p\Th_b\th_s=f$.
\aftercases As noted above, this completes the proof of \ref{abc2}.
\pfitem{\ref{abc1}} This can be proved in similar fashion to part \ref{abc2}, but in fact we can also deduce it from \ref{abc2}. Indeed, we first use Lemma \ref{lem:*} to expand
\[
(a\pr(b\pr c))^{-1} = (b\pr c)^{-1}\pr a^{-1} = (c^{-1}\pr b^{-1})\pr a^{-1}.
\]
Note that $s=\br(c^{-1})$, $r=\bd(b^{-1})$, $q=\br(b^{-1})$ and $p=\bd(a^{-1})$. It follows from part \ref{abc2} that
\[
(c^{-1}\pr b^{-1})\pr a^{-1} = c^{-1}\rest_{e'} \circ \ve[e',e_1'] \circ {}_{e_1'}\corest b^{-1} \circ \ve[f_1',f'] \circ {}_{f'}\corest a^{-1},
\]
where
\[
e' = p\Th_b\th_s \COMMA e_1' = e'\th_r \COMMA f_1' = e'\Th_{b^{-1}} \AND f' = s\Th_{b^{-1}}\th_p.
\]
But then
\begin{align}
\nonumber a\pr (b\pr c) &= ((c^{-1}\pr b^{-1})\pr a^{-1})^{-1} \\
\nonumber &= (c^{-1}\rest_{e'} \circ \ve[e',e_1'] \circ {}_{e_1'}\corest b^{-1} \circ \ve[f_1',f'] \circ {}_{f'}\corest a^{-1})^{-1} \\
\nonumber &= ({}_{f'}\corest a^{-1})^{-1} \circ \ve[f_1',f']^{-1} \circ ({}_{e_1'}\corest b^{-1})^{-1} \circ \ve[e',e_1']^{-1} \circ (c^{-1}\rest_{e'})^{-1} \\
\label{eq:a(bc)1} &= a\rest_{f'} \circ \ve[f',f_1'] \circ b\rest_{e_1'} \circ \ve[e_1',e'] \circ {}_{e'}\corest c .
\end{align}
We note immediately that $f'=e$ and $e'=f$ (where $e,f$ are as in the statement of the lemma). We also have
\[
e_1' = e'\th_r = f\th_r = f_2 \AND f_1' = e'\Th_{b^{-1}} = f\Th_{b^{-1}} = e_2.
\]
Thus, continuing from \eqref{eq:a(bc)1}, we have
\[
a\pr (b\pr c) = a\rest_e \circ \ve[e,e_2] \circ b\rest_{f_2} \circ \ve[f_2,f] \circ {}_f\corest c,
\]
as required.
\epf
\begin{figure}[h]
\begin{center}
\scalebox{0.9}{
\begin{tikzpicture}[xscale=0.6]
\tikzstyle{vertex}=[circle,draw=black, fill=white, inner sep = 0.07cm]
\node (e) at (0,0){$e$};
\node (e1) at (3,2){$e_1$};
\node (e2) at (5,-2){$e_2$};
\node (f) at (15,0){$f$};
\node (f1) at (10,2){$f_1$};
\node (f2) at (12,-2){$f_2$};
\node (p) at (0,5){$p$};
\node (q) at (4,6){$q$};
\node (r) at (11,6){$r$};
\node (s) at (15,4){$s$};
\node (da) at (-7,5) {};
\node (rc) at (22,4) {};
\node (dae) at (-7,0) {};
\node (rfc) at (22,0) {};
\draw[->-=0.5] (q)--(r);
\draw[->-=0.5] (e)--(e1);
\draw[->-=0.5] (e1)--(f1);
\draw[->-=0.5] (f1)--(f);
\draw[->-=0.5] (e)--(e2);
\draw[->-=0.5] (e2)--(f2);
\draw[->-=0.5] (f2)--(f);
\draw[->-=0.5] (da)--(p);
\draw[->-=0.5] (s)--(rc);
\draw[->-=0.5] (dae)--(e);
\draw[->-=0.5] (f)--(rfc);
\draw[white,line width=2mm] (q)--(e2);
\draw[white,line width=2mm] (r)--(f2);
\draw[dashed] (e1)--(q)--(e2) (f1)--(r)--(f2) (p)--(e) (s)--(f);
\node () at (-3.7,5.2) {$a$};
\node () at (7.4,6.3) {$b$};
\node () at (18.4,4.2) {$c$};
\node () at (6.4,2.3) {${}_{e_1}\corest b = b\rest_{f_1}$};
\node () at (8.4,-1.7) {${}_{e_2}\corest b = b\rest_{f_2}$};
\node () at (2.3,0.7) {{\small $\ve[e,e_1]$}};
\node () at (1.7,-1.2) {{\small $\ve[e,e_2]$}};
\node () at (13.4,1.2) {{\small $\ve[f_1,f]$}};
\node () at (14.4,-1.2) {{\small $\ve[f_2,f]$}};
\node () at (-3.7,.3) {$a\rest_e$};
\node () at (18.4,.3) {${}_f\corest c$};
\end{tikzpicture}
}
\caption{Top: three morphisms $a,b,c\in\G$. Bottom: the upper and lower paths represent $(a\pr b)\pr c$ and $a\pr(b\pr c)$, respectively; cf.~Lemma \ref{lem:abc}. Dashed lines indicate $\leq$ relationships.}
\label{fig:abc}
\end{center}
\end{figure}
\newpage
\begin{lemma}\label{lem:abcabc}
For any $a,b,c\in\G$, we have $(a\pr b)\pr c = a\pr(b\pr c)$.
\end{lemma}
\pf
By Lemma \ref{lem:abc}, it suffices to show (in the notation of that lemma) that
\begin{equation}\label{eq:abcabc}
\ve[e,e_1] \circ {}_{e_1}\corest b \circ \ve[f_1,f] = \ve[e,e_2] \circ b\rest_{f_2} \circ \ve[f_2,f].
\end{equation}
Comparing the definitions of the $e_i,f_i$ from Lemma \ref{lem:abc} (in terms of $e,f,q,r,b$) with \eqref{eq:e1e2f1f2}, an application of \ref{G2} will give us \eqref{eq:abcabc} if we can show that $(e,f)$ is $b$-linked (cf.~Definitions \ref{defn:LP1} and \ref{defn:LP2}). In other words, the proof of the lemma will be complete if we can show that
\[
f = e\Th_b\th_f \AND e = f\Th_{b^{-1}}\th_e.
\]
For the first, we have
\begin{align*}
e\Th_b\th_f &= s\Th_{b^{-1}}\th_p\Th_b\th_{p\Th_b\th_s} &&\text{by definition}\\
&= s\Th_{b^{-1}}\th_p\Th_b\th_s\th_{p\Th_b}\th_s &&\text{by \ref{P4}}\\
&= p\Th_b\th_s\Th_{b^{-1}}\th_{p}\Th_b\th_s &&\text{by \ref{G1b} and Lemma \ref{lem:pa'qap}}\\
&= s\Th_{b^{-1}}\th_{p}\Th_b\th_s &&\text{by Lemma \ref{lem:pa'qap}}\\
&= p\Th_b\th_s = f &&\text{by Lemma \ref{lem:pa'qap}.}
\end{align*}
The proof that $e = f\Th_{b^{-1}}\th_e$ is virtually identical.
\epf
Lemmas \ref{lem:*} and \ref{lem:abcabc} give the following:
\begin{thm}\label{thm:SGve}
If $(\G,\ve)$ is a chained projection groupoid, then $\bS(\G,\ve)$ is a regular $*$-semigroup. \epfres
\end{thm}
\subsection{Products of projections}\label{sect:PoP}
We conclude this chapter with some simple results concerning $\pr$ products involving projections. These will be of use later, but we also apply them here to describe the idempotent-generated subsemigroup of the regular $*$-semigroup $S=\bS(\G,\ve)$. In the following statements we continue to fix the chained projection groupoid $(\G,\ve)$, and write $P=v\G$.
\begin{lemma}\label{lem:apr}
If $a\in\G(p,q)$, and if $t\in P$, then
\ben
\item \label{apr1} $a\pr t = a\rest_{q'}\circ\ve[q',t']$, where $q'=t\th_q$ and $t'=q\th_t$,
\item \label{apr2} $t\pr a = \ve[t',p']\circ{}_{p'}\corest a$, where $p'=t\th_p$ and $t'=p\th_t$,
\item \label{apr3} $a\pr t = a\circ\ve[q,t]$ if $q\F t$,
\item \label{apr4} $t\pr a = \ve[t,p]\circ a$ if $p\F t$,
\item \label{apr5} $a\pr t = a\rest_t$ if $t\leq q$,
\item \label{apr6} $t\pr a = {}_t\corest a$ if $t\leq p$.
\een
\end{lemma}
\pf
By symmetry, it suffices to prove \ref{apr1}, \ref{apr3} and \ref{apr5}.
\pfitem{\ref{apr1}} We have $a\pr t = a\rest_{q'}\circ\ve[q',t']\circ{}_{t'}\corest t = a\rest_{q'}\circ\ve[q',t']\circ t' = a\rest_{q'}\circ\ve[q',t']$.
\pfitem{\ref{apr3}} This follows from part \ref{apr1}, as $q'=q$ and $t'=t$ if $q\F t$.
\pfitem{\ref{apr5}} If $t\leq q$, then we also have $t\leqF q$ by Lemma \ref{lem:pqp}\ref{pqP2}, so that $t=t\th_q=q'$ and $t=q\th_t=t'$. It then follows from part \ref{apr1} that
\[
a\pr t = a\rest_{q'}\circ\ve[q',t'] = a\rest_t\circ\ve[t,t] = a\rest_t\circ t = a\rest_t. \qedhere
\]
\epf
\begin{lemma}\label{lem:pqvepq}
If $p,q\in P$, then
\ben
\item \label{pqvepq1} $p\pr q = \ve[p',q']$, where $p'=q\th_p$ and $q'=p\th_q$,
\item \label{pqvepq2} $p\pr q = \ve[p,q]$ if $p\F q$.
\item \label{pqvepq3} $p\pr q\pr p = q\th_p$.
\een
\end{lemma}
\pf
\firstpfitem{\ref{pqvepq1}} By Lemma \ref{lem:apr}\ref{apr1} we have $p\pr q = p\rest_{p'}\circ\ve[p',q'] = p'\circ\ve[p',q'] = \ve[p',q']$.
\pfitem{\ref{pqvepq2}} This follows from part \ref{pqvepq1}, as $p=p'$ and $q=q'$ when $p\F q$.
\pfitem{\ref{pqvepq3}} Here we have
\begin{align*}
p\pr q\pr p &= \ve[p',q']\pr p &&\text{by part \ref{pqvepq1}, where $p'=q\th_p$ and $q'=p\th_q$}\\
&= \ve[p',q']\rest_{q''}\circ\ve[q'',p''] &&\text{by Lemma \ref{lem:apr}\ref{apr1}, where $q''=p\th_{q'}$ and $p''=q'\th_p$.}
\end{align*}
Using the projection algebra axioms, it is easy to show that in fact $p''=p'$ and $q''=q'$: indeed, $p'' = q'\th_p = p\th_q\th_p =_3 q\th_p = p'$, and $q'' = p\th_{q'} = p\th_{p\th_q} =_4 p\th_q\th_p\th_q =_3 q\th_p\th_q =_3 p\th_q = q'$. Thus, continuing from above, we have
\[
p\pr q\pr p
= \ve[p',q']\rest_{q''} \circ \ve[q'',p'']
= \ve[p',q']\rest_{q'} \circ \ve[q',p']
= \ve[p',q'] \circ \ve[q',p']
= p' = q\th_p. \qedhere
\]
\epf
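In particular, part \ref{pqvepq3} shows that the unary operations of the projection algebra $P$ can be recovered from threefold $\pr$ products of projections, in keeping with the heuristic $q\th_p\equiv pqp$ of Remark \ref{rem:G1}.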
Recall that the sets of idempotents and projections of a regular $*$-semigroup $S$ are denoted
\[
E(S) = \set{e\in S}{e^2=e} \AND P(S) = \set{p\in S}{p^2=p=p^*}.
\]
Although these sets are not equal in general, Lemma \ref{lem:PS1}\ref{PS12} says that they generate the same subsemigroup of $S$, i.e.~$\la E(S)\ra = \la P(S)\ra$. We write $\E(S)$ for this idempotent-generated subsemigroup. The results proved above allow us to give the following description of $\E(S)$ in the case that $S=\bS(\G,\ve)$ arises from a chained projection groupoid $(\G,\ve)$.
\newpage
\begin{prop}\label{prop:ES}
Let $(\G,\ve)$ be a chained projection groupoid, and let $P=v\G$, $\C=\C(P)$ and $S=\bS(\G,\ve)$. Then
\ben
\item \label{ES1} $P(S)=P$,
\item \label{ES2} $E(S)=\set{\ve[p,q]}{(p,q)\in{\F}}$,
\item \label{ES3} $\E(S) = \im(\ve) = \set{\ve(\c)}{\c\in\C}$, and consequently $S$ is idempotent-generated if and only if~$\ve$ is surjective.
\een
\end{prop}
\pf
\firstpfitem{\ref{ES1}} By Lemmas \ref{lem:PS1}\ref{PS11} and \ref{lem:prcirc} we have
\[
P(S) = \set{a\pr a^*}{a\in S} = \set{a\circ a^{-1}}{a\in \G} = \set{\bd(a)}{a\in \G} = v\G = P.
\]
\pfitem{\ref{ES2}} By Lemma \ref{lem:PS1}\ref{PS12}, and part \ref{ES1}, we have $E(S) = \set{p\pr q}{p,q\in P}$. The claim then follows from Lemma \ref{lem:pqvepq}.
\pfitem{\ref{ES3}} If $\c=[p_1,\ldots,p_k]\in\C$, then
\begin{align*}
\ve(\c) &= \ve[p_1,p_2]\circ\ve[p_2,p_3]\circ\cdots\circ\ve[p_{k-1},p_k] &&\text{by \eqref{eq:vec}}\\
&= \ve[p_1,p_2]\pr\ve[p_2,p_3]\pr\cdots\pr\ve[p_{k-1},p_k] &&\text{by Lemma \ref{lem:prcirc}}\\
&= (p_1\pr p_2)\pr(p_2\pr p_3)\pr\cdots\pr(p_{k-1}\pr p_k) &&\text{by Lemma \ref{lem:pqvepq}\ref{pqvepq2}}\\
&= p_1\pr p_2\pr\cdots\pr p_k \in \E(S).
\end{align*}
Conversely, fix some $a\in\E(S)$, so that $a = p_1\pr\cdots\pr p_k$ for some $p_1,\ldots,p_k\in P$. We show that $a\in\im(\ve)$ by induction on $k$. The $k=1$ case being trivial, we assume that $k\geq2$, and let $b=p_1\pr\cdots\pr p_{k-1}$. By induction we have $b=\ve(\c)$ for some $\c\in\C$, and we write $r=\br(b)=\br(\c)$. Then with $r'=p_k\th_r$ and $p_k'=r\th_{p_k}$, it follows from Lemma \ref{lem:apr}\ref{apr1} and properties of $\ve$ from Definition \ref{defn:ve} that
\[
a = b\pr p_k = b\rest_{r'}\circ\ve[r',p_k'] = \ve(\c)\rest_{r'}\circ\ve[r',p_k'] = \ve(\c\rest_{r'})\circ\ve[r',p_k'] = \ve(\c\rest_{r'}\circ[r',p_k']) \in \im(\ve). \qedhere
\]
\epf
\begin{rem}\label{rem:ES}
We will see in Chapter \ref{chap:iso} (see Theorem \ref{thm:iso}) that every regular $*$-semigroup has the form $\bS(\G,\ve)$ for some chained projection groupoid $(\G,\ve)$. Combining this with Proposition~\ref{prop:ES}\ref{ES3}, we obtain a proof of Proposition \ref{prop:ERSS}, stated in Chapter \ref{chap:prelim}.
\end{rem}
\section{\boldmath Regular $*$-semigroups}\label{chap:S}
In the previous chapter we showed that a chained projection groupoid $(\G,\ve)$ gives rise to a regular $*$-semigroup $S=\bS(\G,\ve)$; see Theorem \ref{thm:SGve}. In the current chapter we go in the opposite direction. Starting from a regular $*$-semigroup $S$, we show how to construct a chained projection groupoid $\bG(S)=(\G,\ve)$ from $S$; see Definitions \ref{defn:GS2} and \ref{defn:veS}, and Theorem \ref{thm:GveS}. In the next chapter we will show that the $\bS$ and $\bG$ constructions are inverse processes, and in fact define isomorphisms between the categories of regular $*$-semigroups and chained projection groupoids.
The construction of $\bG(S)$ is given in Section \ref{sect:GS}. We then pause in Section \ref{sect:eg} to consider a number of examples. These are intended to aid understanding of the theory developed thus far, and also to help uncover some subtleties that arise.
\subsection[The chained projection groupoid associated to a regular $*$-semigroup]{\boldmath The chained projection groupoid associated to a regular $*$-semigroup}\label{sect:GS}
For the duration of this chapter, we fix a regular $*$-semigroup $S$, as in Definition \ref{defn:RSS}. As ever, we write
\[
P = P(S) = \set{p\in S}{p^2=p=p^*}
\]
for the set of projections of $S$.
As in Section \ref{sect:RSS}, every projection $p\in P$ gives rise to a map
\begin{equation}\label{eq:thp}
\th_p:P\to P \qquad\text{defined by}\qquad q\th_p = pqp.
\end{equation}
Comparing Lemma \ref{lem:PS2} with Definition \ref{defn:P}, we immediately deduce the following:
\begin{prop}
If $S$ is a regular $*$-semigroup, then $P=P(S)$ is a projection algebra. \epfres
\end{prop}
Consequently, we have the partial order $\leq$ on $P$ given in \eqref{eq:leqP}, and the relations~$\leqF$ and~$\F$ on~$P$ given in \eqref{eq:leqF} and \eqref{eq:F}. In particular (keeping \eqref{eq:p=pq} in mind), we note that
\begin{equation}\label{eq:leqPS}
p \leq q \Iff p = p\th_q = qpq \Iff p = pq = qp \Iff p = pq \Iff p = qp.
\end{equation}
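For instance, the equivalence of the last three conditions follows by applying the involution: if $p=pq$, then $p = p^* = (pq)^* = q^*p^* = qp$, and symmetrically if $p=qp$ then $p=pq$.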
We also have the path category $\P=\P(P)$ and the chain groupoid $\C=\C(P)$, as in Definitions~\ref{defn:PP} and \ref{defn:CP}.
Recall that we wish to construct a chained projection groupoid $\bG(S) = (\G,\ve)$ from $S$. The definition of the groupoid $\G=\G(S)$ has already been given in Section~\ref{sect:RSS}; see Definition~\ref{defn:GS} and Proposition \ref{prop:GS}. Since we wish to show that $\G$ is in fact an \emph{ordered} groupoid, it is convenient to give the following expanded definition.
\begin{defn}\label{defn:GS2}
Given a regular $*$-semigroup $S$, we define the ordered groupoid $\G=\G(S)$ as follows.
\bit
\item The object set of $\G$ is $v\G=P=P(S)$.
\item For $a\in\G$ we have $\bd(a) = aa^*$ and $\br(a) = a^*a$.
\item For $a,b\in\G$ with $\br(a)=\bd(b)$, we have $a\circ b = ab$.
\item For $a\in\G$ we have $a^{-1}=a^*$.
\eit
The ordering on $P=v\G$ is given in \eqref{eq:leqPS}, and left restrictions in $\G$ are defined by
\[
{}_p\corest a = pa \qquad\text{for $p\leq \bd(a)$.}
\]
As usual, right restrictions are given by $a\rest_q = ({}_q\corest a^{-1})^{-1}$, for $q\leq\br(a)$, and in fact we have
\[
a\rest_q = (qa^*)^* = aq
\]
for such $q$. For $a,b\in\G$, we have
\begin{align*}
a\leq b &\IFF a={}_p\corest b &&\hspace{-1.5cm}\text{for some $p\leq\bd(b)$}\\
&\IFF a=b\rest_q &&\hspace{-1.5cm}\text{for some $q\leq\br(b)$}\\
&\IFF a={}_p\corest b=b\rest_q &&\hspace{-1.5cm}\text{for some $p\leq\bd(b)$ and $q\leq\br(b)$,}
\end{align*}
and then of course $p=\bd(a)$ and $q=\br(a)$.
\end{defn}
\begin{lemma}\label{lem:GS2}
If $S$ is a regular $*$-semigroup, then $\G=\G(S)$ is an ordered groupoid.
\end{lemma}
\pf
By Proposition \ref{prop:GS}, we just need to verify conditions \ref{O1'}--\ref{O5'} from Lemma \ref{lem:C}, with respect to the ordering \eqref{eq:leqPS}, and the restrictions given in Definition \ref{defn:GS2}.
\pfitem{\ref{O1'}} Consider a morphism $a\in\G$, and let $p\leq \bd(a)=aa^*$. It follows from $p\leq aa^*$ that $p=paa^*$, and so
\[
\bd({}_p\corest a) = \bd(pa) = pa(pa)^* = paa^*p^* = pp = p .
\]
We also have
\begin{equation}\label{eq:a*pa}
\br({}_p\corest a) = \br(pa) = (pa)^*pa = a^*p^*pa = a^*pa .
\end{equation}
It then follows that
\[
\br({}_p\corest a) = a^*pa = a^*a \cdot a^*pa\cdot a^*a = \br(a) \cdot \br({}_p\corest a) \cdot \br(a).
\]
By \eqref{eq:leqPS}, the previous conclusion says precisely that $\br({}_p\corest a) \leq \br(a)$.
\pfitem{\ref{O2'}} By \eqref{eq:a*pa} we have $q=a^*pa$, and again $p=paa^*$ follows from $p\leq\bd(a)$. It follows that
\[
({}_p\corest a)^* = (pa)^* = a^*p^* = a^*p = a^*paa^* = qa^* = {}_q\corest a^*.
\]
\pfitem{\ref{O3'}} Here ${}_{\bd(a)}\corest a = \bd(a)a = aa^*a = a$.
\pfitem{\ref{O4'}} It follows from $p\leq q$ that $p=pq$, and so ${}_p\corest{}_q\corest a = p(qa) = (pq)a = pa = {}_p\corest a$.
\pfitem{\ref{O5'}} Here we again have $q=a^*pa$. Since $p\leq\bd(a)$ we have $p=aa^*p$, so $pa=aa^*pa=aq$. It follows that
\[
{}_p\corest(a\circ b) = p(ab) = pp(ab) = paqb = {}_p\corest a \circ {}_q\corest b. \qedhere
\]
\epf
\begin{rem}
In particular, the relation $\leq$ on the regular $*$-semigroup $S(=\G)$ given in Definition~\ref{defn:GS2} is a partial order. This order was in fact used by Imaoka in \cite{Imaoka1981}, but in a different form. Imaoka's order, which for clarity we will denote by $\leq'$, was defined by
\begin{equation}\label{eq:leq'}
a\leq' b \Iff a\in Pb\cap bP \qquad\text{where $P=P(S)$.}
\end{equation}
In light of the rules ${}_p\corest b=pb$ and $b\rest_q=bq$, it is clear that $a\leq b\implies a\leq' b$. For the converse, suppose $a\leq'b$, so that
\[
a = pb = bq \qquad\text{for some $p,q\in P$.}
\]
In the following calculations, we make extensive use of \eqref{eq:leqPS}. We first note that
\[
a = bq \Implies a = bb^*a \Implies aa^* = bb^*aa^* \Implies aa^* \leq bb^* .
\]
We also have
\[
a = pb \Implies a = pa \Implies aa^* = paa^* \Implies aa^* \leq p \Implies aa^* = aa^*p.
\]
We then calculate
\[
a = aa^*a = aa^*(pb) = (aa^*p)b = aa^*b.
\]
Writing $r=aa^*\in P$, we have already seen that $r = aa^* \leq bb^* = \bd(b)$, so that in fact $a = aa^*b = rb = {}_r\corest b$, and hence $a\leq b$.
\end{rem}
\begin{rem}
Recall that any regular semigroup $S$ has a so-called \emph{natural partial order}. This order, which for clarity we will denote by $\preceq$, has \emph{many} different formulations; see for example \cite{Hartwig1980,Nambooripad1980,Hickey1983}, and especially \cite{Mitsch1986} for a discussion of the variations, and an extension to arbitrary semigroups. The most straightforward definition of the natural partial order on the regular semigroup $S$ is:
\[
a\preceq b \Iff a\in Eb\cap bE \qquad\text{where $E=E(S)$.}
\]
It is natural to ask how the orders $\leq$ and $\preceq$ are related when $S$ is a regular $*$-semigroup, where~$\leq$ is the order from Definition \ref{defn:GS2}. Since $P\sub E$, and since $\leq$ is equal to $\leq'$ (cf.~\eqref{eq:leq'}), the order~$\leq$ is of course contained in $\preceq$, meaning that $a\leq b\implies a\preceq b$. But the converse does not hold, in general. For example, if $S$ is a regular $*$-\emph{monoid} with identity~$1$, and if $e\in E\setminus P$ is any non-projection idempotent, then $e\preceq1$ but $e\not\leq1$.
\end{rem}
Next we wish to show that $\G$ is a \emph{projection} groupoid, and to do this we need to understand the $\vt$ and $\Th$ maps on $\G$. For this, let~$a\in\G$. For $p\leq\bd(a)=aa^*$, it follows from \eqref{eq:a*pa} that
\begin{equation}\label{eq:vtaS}
p\vt_a = \br({}_p\corest a) = a^*pa .
\end{equation}
It follows that for arbitrary $p\in P$,
\begin{equation}\label{eq:ThaS}
p\Th_a = p\th_{\bd(a)}\vt_a = a^*(\bd(a)\cdot p\cdot \bd(a))a = a^*(aa^*\cdot p\cdot aa^*)a = a^*pa.
\end{equation}
In other words, $\vt_a$ and $\Th_a$ both act by `conjugation' on projections (cf.~Remark \ref{rem:G1}), but we note that these maps have different domains: $\operatorname{dom}(\vt_a)=\bd(a)^\da$ and $\operatorname{dom}(\Th_a)=P$.
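For instance, if $a=p\in P$ is itself a projection, then \eqref{eq:vtaS} and \eqref{eq:ThaS} give $q\vt_p = pqp = q$ for $q\leq p$, and $q\Th_p = pqp = q\th_p$ for arbitrary $q\in P$; that is, $\vt_p$ is the identity map of $p^\da$, while $\Th_p=\th_p$, in agreement with \eqref{eq:Thp}.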
\begin{lemma}\label{lem:GS3}
If $S$ is a regular $*$-semigroup, then $\G=\G(S)$ is a projection groupoid.
\end{lemma}
\pf
By Lemma \ref{lem:GS2}, it remains to check \ref{G1}. Specifically, we show that \ref{G1a} holds, i.e.~that
\[
\th_{p\vt_a} = \Th_{a^*}\th_p\Th_a \qquad\text{for any $a\in\G$ and $p\leq\bd(a)$.}
\]
But for any such $a$ and $p$, and for arbitrary $t\in P$, we use \eqref{eq:vtaS} and \eqref{eq:ThaS} to calculate
\[
t\th_{p\vt_a} = t\th_{a^*pa} = a^*pa\cdot t\cdot a^*pa = t\Th_{a^*}\th_p\Th_a. \qedhere
\]
\epf
Next we wish to construct an evaluation map $\ve=\ve(S):\C\to\G$, where $\C=\C(P)$ is the chain groupoid of $P=P(S)$. To do so, it is first convenient to define a map
\begin{equation}\label{eq:piS}
\pi=\pi(S):\P\to\G(=S) \BY \pi(p_1,\ldots,p_k) = p_1\cdots p_k,
\end{equation}
where $\P=\P(P)$ is the path category of $P$. So $\pi(\p)$ is simply the product (in $S$) of the entries of $\p$ (taken in order). We begin with the following fact:
\begin{lemma}\label{lem:dpip}
For any $\p\in\P$, we have $\bd(\pi(\p)) = \bd(\p)$ and $\br(\pi(\p)) = \br(\p)$.
\end{lemma}
\pf
Write $\p=(p_1,\ldots,p_k)$. A routine calculation shows that
\[
\bd(\pi(\p)) = \pi(\p)\cdot\pi(\p)^* = p_1\cdots p_{k-1}\cdot p_k\cdot p_{k-1}\cdots p_1.
\]
Since $p_1\F\cdots\F p_k$, this reduces to $p_1=\bd(\p)$. We obtain $\br(\pi(\p)) = p_k = \br(\p)$ by symmetry.
\epf
The next result concerns the congruence ${\approx}=\Om^\sharp$ on $\P$ from Definition \ref{defn:approx}.
\begin{lemma}\label{lem:GS4}
The map $\pi=\pi(S):\P\to\G$ is a $v$-functor, and ${\approx}\sub\ker(\pi)$.
\end{lemma}
\pf
For the first statement, it suffices by Lemma \ref{lem:dpip} to show that $\pi$ is a functor. For this, consider composable paths $\p=(p_1,\ldots,p_k)$ and $\q=(p_k,\ldots,p_l)$. Then since $p_k$ is an idempotent, and since $\br(\pi(\p)) = p_k = \bd(\pi(\q))$ by Lemma \ref{lem:dpip}, we have
\begin{align*}
\pi(\p\circ\q) = \pi(p_1,\ldots,p_l) &= p_1\cdots p_kp_{k+1}\cdots p_l \\
&= p_1\cdots p_k\cdot p_kp_{k+1}\cdots p_l = \pi(\p)\pi(\q) = \pi(\p)\circ\pi(\q).
\end{align*}
To show that ${\approx}\sub\ker(\pi)$, it suffices to show that $\Om\sub\ker(\pi)$, i.e.~that $\pi(\s)=\pi(\t)$ for all $(\s,\t)\in\Om$. This is clear if $(\s,\t)$ has the form \ref{Om1} or \ref{Om2}; for the latter, recall that $p=q\th_p=pqp$ when $p\F q$. So suppose now that $(\s,\t)$ has the form \ref{Om3}. Consulting Definitions \ref{defn:approx} and \ref{defn:CP_P}, this means that
\[
\s =(e,e\th_p,f) \AND \t=(e,f\th_p,f) \qquad\text{for some $p\in P$, and some $p$-linked pair $(e,f)$.}
\]
Since $ep,pf\in E(S)$ by Lemma \ref{lem:PS1}\ref{PS12}, it follows that
\[
\pi(\s) = e\cdot pep \cdot f = epf = e\cdot pfp\cdot f = \pi(\t). \qedhere
\]
\epf
\begin{defn}\label{defn:veS}
Since $\C=\P/{\approx}$, it follows from Lemma \ref{lem:GS4} that we have a well-defined $v$-functor
\[
\ve = \ve(S):\C \to \G \GIVENBY \ve[\p] = \pi(\p) \qquad\text{for $\p\in\P$.}
\]
That is, $\ve[p_1,\ldots,p_k] = p_1\cdots p_k$ whenever $p_1\F\cdots\F p_k$.
\end{defn}
\begin{lemma}\label{lem:veS}
The functor $\ve=\ve(S):\C\to\G$ is an evaluation map.
\end{lemma}
\pf
Conditions \ref{E1} and \ref{E2} hold because $\ve$ is a $v$-functor. As explained in Remark \ref{rem:ve}, we can prove \ref{E6} in place of \ref{E3}, and we note that \ref{E6} says
\begin{align}
\nonumber \ve( {}_q\corest[\p]) &= {}_q\corest\ve[\p] &&\text{for all $\p\in\P$ and $q\leq\bd[\p]=\bd(\p)$.}
\intertext{Since $\ve( {}_q\corest[\p]) = \ve[{}_q\corest\p] = \pi({}_q\corest\p)$ and ${}_q\corest\ve[\p]={}_q\corest\pi(\p)$, this is equivalent to}
\label{eq:prodqp}
\pi( {}_q\corest\p) &= {}_q\corest\pi(\p) &&\text{for all $\p\in\P$ and $q\leq\bd(\p)$.}
\end{align}
We prove this by induction on $k$, the length of the path $\p=(p_1,\ldots,p_k)$. When $k=1$, we have
\[
\pi({}_q\corest\p) = \pi({}_q\corest(p_1)) = \pi (q) = q = qp_1 = q\pi(\p) = {}_q\corest\pi(\p),
\]
where we used the fact that $q\leq\bd(\p)=p_1 \implies q=qp_1$. Now suppose $k\geq2$, and let ${}_q\corest\p = (q_1,\ldots,q_k)$ be as in \eqref{eq:rest}. Let $\p'=(p_1,\ldots,p_{k-1})\in\P$, noting that ${}_q\corest\p'=(q_1,\ldots,q_{k-1})$. Then
\begin{align*}
{}_q\corest\pi(\p) = q\cdot p_1\cdots p_k &= q p_1\cdots p_k \cdot (q p_1\cdots p_k)^*\cdot q p_1\cdots p_k \\
&= q p_1\cdots p_k \cdot p_k\cdots p_1q \cdot qp_1\cdots p_k \\
&= q\cdot (p_1\cdots p_{k-1}) \cdot (p_k\cdots p_1\cdot q\cdot p_1\cdots p_k) \\
&= {}_q\corest\pi(\p') \cdot q\th_{p_1}\cdots\th_{p_k} \\
&= \pi({}_q\corest\p') \cdot q_k &&\hspace{-2cm}\text{by induction and \eqref{eq:qi}}\\
&= q_1\cdots q_{k-1}\cdot q_k = \pi({}_q\corest \p).
\end{align*}
This completes the proof of \eqref{eq:prodqp}, and hence of the lemma.
\epf
Note in particular that $\ve[p,q] = pq$ for $p,q\in P$ with $p\F q$. We are now ready to prove the main result of this chapter:
\begin{thm}\label{thm:GveS}
If $S$ is a regular $*$-semigroup, then $\bG(S)=(\G,\ve)$ is a chained projection groupoid.
\end{thm}
\pf
By Lemmas \ref{lem:GS3} and \ref{lem:veS}, it remains to verify \ref{G2}. To do so, let $(e,f)$ be a $b$-linked pair, where $b\in\G(=S)$. Write
\[
q = \bd(b) = bb^* \AND r = \br(b) = b^*b,
\]
and let the $e_i,f_i$ be as in \eqref{eq:e1e2f1f2}. Keeping \eqref{eq:ThaS} in mind, we have
\[
e_1 = qeq \COMMA e_2 = bfb^* \COMMA f_1 = b^*eb \AND f_2 = rfr.
\]
We then calculate $ee_1 = eqeq = eq$ and $bf_1 = bb^*eb = qeb$. Since $e\leqF q$ by Lemma \ref{lem:LP}\ref{LP1}, we also have $e=q\th_e=eqe$. Putting this all together, we have
\[
\lam(e,b,f) = \ve[e,e_1]\circ {}_{e_1}\corest b \circ \ve[f_1,f] = ee_1 \cdot e_1b \cdot f_1f = ee_1\cdot bf_1\cdot f = eq\cdot qeb \cdot f = eqe\cdot bf = ebf.
\]
A similar calculation gives $\rho(e,b,f) = ebf$, and the proof is complete.
\epf
\begin{rem}\label{rem:G2'2}
In the above proof, verification of \ref{G2} boils down to checking that
\[
ee_1bf_1f = ee_2bf_2f,
\]
where $b\in\G(q,r)$, $(e,f)$ is $b$-linked, and the $e_i,f_i$ are as in \eqref{eq:e1e2f1f2}. We are now in a position to see that condition \ref{G2} cannot be replaced by the stronger \ref{G2'} discussed in Remark \ref{rem:G2'}. Indeed, consider the partition monoid $\PP_4$, and the elements defined by
\[
\begin{tikzpicture}[scale=.4]
\begin{scope}[shift={(0,0)}]
\foreach \x in {1,2,3,4} {\tuv\x \tlv\x \stline\x\x}
\node[left] () at (.8,1) {$b=$};
\node() at (4.75,.5){,};
\end{scope}
\begin{scope}[shift={(12,0)}]
\foreach \x in {1,2,3,4} {\tuv\x \tlv\x}
\foreach \x in {3,4} {\stline\x\x}
\darc12\uarc12
\node[left] () at (.8,1) {$e=e_1=f_1=$};
\node() at (4.75,.5){,};
\end{scope}
\begin{scope}[shift={(22,0)}]
\foreach \x in {1,2,3,4} {\tuv\x \tlv\x}
\foreach \x in {2,4} {\stline\x\x}
\darc13\uarc13
\node[left] () at (.8,1) {$e_2=f_2=$};
\node() at (4.75,.5){,};
\end{scope}
\begin{scope}[shift={(30,0)}]
\foreach \x in {1,2,3,4} {\tuv\x \tlv\x}
\foreach \x in {2,3} {\stline\x\x}
\darc14\uarc14
\node[left] () at (.8,1) {$f=$};
\node() at (4.75,.5){.};
\end{scope}
\end{tikzpicture}
\]
These elements are all projections (so in particular $q=r=b$), and they satisfy conditions~\ref{LP1}--\ref{LP4} of Lemma \ref{lem:LP}. However, we have
\[
\begin{tikzpicture}[scale=.4]
\begin{scope}[shift={(0,0)}]
\foreach \x in {1,2,3,4} {\tuv\x \tlv\x}
\uarc12\darc14
\stline33\stline42
\node[left] () at (.5,1) {$ee_1bf_1f=$};
\end{scope}
\begin{scope}[shift={(7,0)}]
\foreach \x in {1,2,3,4} {\tuv\x \tlv\x}
\uarc12\darc14
\stline32\stline43
\node () at (-1,1) {$\not=$};
\node[right] () at (4.5,1) {$=ee_2bf_2f$.};
\end{scope}
\end{tikzpicture}
\]
\end{rem}
\subsection{Examples}\label{sect:eg}
At this point it is worth pausing to consider a number of examples in a fair amount of detail. These are included to illustrate the general theory developed above, but also to demonstrate some subtleties.
Our first example shows that it is possible for two non-isomorphic regular $*$-semigroups~$S_1$ and $S_2$ to have exactly the same ordered groupoids $\G(S_1)=\G(S_2)$.
\begin{eg}\label{eg:Brandt}
Let $P$ be an arbitrary set, and to avoid trivialities we assume that $|P|\geq2$. We will define two semigroups $S_1$ and $S_2$, both with underlying set $(P\times P)\cup\{0\}$. To distinguish the products, we will denote them by $\star_1$ and $\star_2$. The element $0$ will act as a zero element in both semigroups, and the rest of the products are defined by
\[
(p,q)\star_1(r,s) = (p,s) \AND (p,q)\star_2(r,s) = \begin{cases}
(p,s) &\text{if $q=r$}\\
0 &\text{otherwise.}
\end{cases}
\]
So these are in fact special cases of some basic semigroup constructions:
\bit
\item $S_1$ is a $P\times P$ square band with a zero attached, and
\item $S_2$ is a $P\times P$ combinatorial Brandt semigroup.
\eit
Both $S_1$ and $S_2$ are regular $*$-semigroups with respect to the involution
\[
(p,q)^*=(q,p) \AND 0^*=0.
\]
In fact, $S_2$ is \emph{inverse} (as a Brandt semigroup), while $S_1$ is not (as the idempotents $(p,p)$ and $(q,q)$ do not commute, for distinct $p,q\in P$). In particular, $S_1$ and~$S_2$ are not isomorphic; this also follows from the fact that $S_1$ has no non-trivial zero divisors, while every element of $S_2$ is a zero divisor.
The regular $*$-semigroups $S_1$ and $S_2$ have the same projection sets, and the projections are precisely $0$ and the pairs $(p,p)$, for $p\in P$. So, identifying $(p,p)\equiv p$, we have
\[
P(S_1)=P(S_2)\equiv P\cup\{0\}.
\]
On the other hand, the idempotents are the products of two projections (cf.~Lemma~\ref{lem:PS1}\ref{PS12}). Of course any product involving $0$ is $0$, while for $p,q\in P$ we have
\[
p\star_1q = (p,q) \AND p\star_2q = \begin{cases}
p &\text{if $p=q$}\\
0 &\text{otherwise.}
\end{cases}
\]
This shows that
\[
E(S_1)=S_1 \AND E(S_2)=P(S_2)=P\cup\{0\}.
\]
(Of course these facts are also easy to see directly.) This difference is also reflected at the level of projection algebras. In what follows, we use $\th^i$ to denote the projection algebra operations on~$S_i$ ($i=1,2$), as in~\eqref{eq:thp}. Of course $\th_0^1=\th_0^2$ is the constant map with image $\{0\}$, while for $p\in P$ we have
\begin{equation}\label{eq:x}
x\th_p^1 = \begin{cases}
p &\text{if $x\in P$}\\
0 &\text{if $x=0$}
\end{cases}
\AND
x\th_p^2 = \begin{cases}
p &\text{if $x=p$}\\
0 &\text{if $x=0$ or $x\in P\setminus\{p\}$.}
\end{cases}
\end{equation}
Thus, the projection algebras $P_1=P(S_1)$ and $P_2=P(S_2)$ are different, despite having the same underlying sets. It is also easy to see that the $\F$ relations on $P_1$ and $P_2$ are very different. Indeed, denoting these by $\F_1$ and $\F_2$, we have
\[
{\F_1} = \nab_P \cup \{(0,0)\} \AND {\F_2} = \De_{P\cup\{0\}} .
\]
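Indeed, for distinct $p,q\in P$, \eqref{eq:x} gives
\[
q\th_p^1 = p \AND p\th_q^1 = q, \qquad\text{while}\qquad q\th_p^2 = 0 \not= p,
\]
so that $p$ and $q$ are $\F_1$-related, but not $\F_2$-related.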
On the other hand, the partial orders on $P_1$ and $P_2$ (see \eqref{eq:leqPS}) are the same. These orders have the following Hasse diagram, shown in the case $P=\{p_1,\ldots,p_5\}$:
\begin{equation}\label{eq:p1p5}
\begin{tikzpicture}
\foreach \x in {1,...,5} {\node (p\x) at (\x,2) {$p_\x$};}
\node (0) at (3,0) {$0$};
\foreach \x in {1,...,5} {\draw (p\x)--(0);}
\end{tikzpicture}
\end{equation}
It turns out that the groupoids $\G_1=\G(S_1)$ and $\G_2=\G(S_2)$ are exactly the same as well. Indeed, to see this, we note that the object sets are equal: $v\G_1=v\G_2=P\cup\{0\}$. Next we note that Green's $\mathrel{\mathscr R}$ and $\L$ relations are the same on $S_1$ and $S_2$. Indeed, in both semigroups we have $R_0=L_0=\{0\}$, and
\[
(p,q) \mathrel{\mathscr R} (r,s) \Iff p=r \AND (p,q) \L (r,s) \Iff q=s.
\]
It then follows that the non-empty morphism sets in $\G_i$ ($i=1,2$) are
\[
\G_i(0,0)=\{0\} \AND \G_i(p,q)=R_p\cap L_q=\{(p,q)\} \quad\text{for $p,q\in P$.}
\]
The operations in the groupoids $\G_i$ ($i=1,2$) are given by
\[
(p,q)\circ(q,r) = (p,r) \AND (p,q)^{-1} = (p,q)^* = (q,p).
\]
Moreover, the orders in the two groupoids are the same. Indeed, keeping the order on $P_1=P_2$ in mind (cf.~\eqref{eq:p1p5}), the morphism $0$ is below every other morphism, and there are no other non-trivial relations.
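To spell out the last assertion: for $p,q\in P$ we have $\bd((p,q))=(p,p)$, and the only projections below $(p,p)$ are $(p,p)$ itself and $0$; since ${}_{(p,p)}\corest(p,q)=(p,q)$ and ${}_0\corest(p,q)=0$ in both groupoids, the only morphisms below $(p,q)$ are $(p,q)$ itself and $0$.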
\end{eg}
We have just seen that it is possible for two non-isomorphic regular $*$-semigroups $S_1\not\cong S_2$ to have exactly the same ordered groupoids, $\G(S_1)=\G(S_2)$. It follows from this that the groupoid alone is not enough information to distinguish regular $*$-semigroups; in other words, the groupoid is not a \emph{total invariant} of a regular $*$-semigroup. However, the semigroups~$S_1$ and~$S_2$ in Example~\ref{eg:Brandt} had different projection algebra structures. Thus, even though the groupoids~$\G_1$ and $\G_2$ are identical (in terms of their compositions, inversions and orderings), we can still distinguish them by means of the underlying projection algebra structure of their object sets, $v\G_1\not\cong v\G_2$.
This may seem like a strange point, as the projection algebra structure is somewhat `invisible' in the structure of the groupoid $\G$. In a sense, the projection algebra is merely used as a means to define the order on $v\G$, which then feeds in to the order on $\G$ itself. However, $\G$ is not only an ordered groupoid, but in fact a \emph{projection groupoid} (cf.~Definition~\ref{defn:PG}), and this means that there is a stronger link between the structures of $\G$ and $v\G$ than merely the order-theoretic one just described. Indeed, the defining properties \ref{G1a}--\ref{G1d} from Definition \ref{defn:PG} involve the $\Th_a$ maps from~\eqref{eq:Tha}, which are defined in terms of the order-theoretic properties of $\G$ (specifically, the $\vt_a$ maps) and the projection algebra structure of $v\G$ (specifically, the $\th_p$ maps). And it is easy to see that the groupoids $\G_1=\G(S_1)$ and $\G_2=\G(S_2)$ have different $\Th$ maps. Indeed, again distinguishing these by using the symbols $\Th^1$ and $\Th^2$, we even have $\Th_p^1=\th_p^1\not=\th_p^2=\Th_p^2$ for $p\in P$; cf.~\eqref{eq:Thp} and \eqref{eq:x}.
In any case, one may wonder if the pair $(\G,P)$ consisting of the groupoid $\G=\G(S)$ and the projection algebra $P=P(S)$, including its structure as determined by the $\th$ operations, might be a total invariant for a regular $*$-semigroup. The next example shows that this is still not the case, and illustrates the need for the evaluation map in the chained projection groupoid~$(\G,\ve)$.
\begin{eg}\label{eg:Rees}
Fix a group $G$, and an arbitrary set $P$. Also let $M=(m_{pq})_{p,q\in P}$ be a $P\times P$ `sandwich matrix' over $G$ with the property that
\begin{equation}\label{eq:M}
m_{pp} = 1 \AND m_{qp} = m_{pq}^{-1} \qquad\text{for all $p,q\in P$,}
\end{equation}
where here $1$ denotes the identity of the group $G$. (Note that the second condition in \eqref{eq:M} gives only $m_{pp}=m_{pp}^{-1}$; since this does not force $m_{pp}=1$ when $G$ contains elements of order $2$, we include the first condition explicitly.)
The \emph{Rees matrix semigroup} $S_M = \mathscr M(G,P,M)$ has underlying set $P\times G\times P$, and product
\[
(p,g,q)(r,h,s) = (p,gm_{qr}h,s).
\]
One can easily check (using \eqref{eq:M}) that $S_M$ becomes a regular $*$-semigroup under the involution given by
\[
(p,g,q)^* = (q,g^{-1},p).
\]
The projections of $S_M$ are the elements $(p,1,p)$ for each $p\in P$, and we identify $p\equiv(p,1,p)$. With this identification, one can easily check that $pqp=p$ for all $p,q\in P$, so that each $\th_p$ map (as in~\eqref{eq:thp}) is the constant map with image $\{p\}$. It follows that the projection algebra structure of $P(S_M)$ is independent of the sandwich matrix $M$, and depends only on the set $P$. Note also that the law $p=pqp=q\th_p$ gives $p\leqF q$ for all $p,q\in P$, which means that ${\F}=\nab_P$ is the universal relation on $P$. On the other hand, the order $\leq$ on $P$ is trivial, as $p\leq q\iff p=p\th_q=q$.
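To spell out the verification that $pqp=p$: using \eqref{eq:M}, we have
\[
pqp \equiv (p,1,p)(q,1,q)(p,1,p) = (p,m_{pq},q)(p,1,p) = (p,m_{pq}m_{qp},p) = (p,1,p) \equiv p.
\]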
As ever, the idempotents of $S_M$ are the products of two projections:
\[
pq \equiv(p,1,p)(q,1,q) = (p,m_{pq},q) \qquad\text{for $p,q\in P$.}
\]
In particular, the idempotents of $S_M$ depend on the sandwich matrix $M$.
We now consider the groupoid $\G = \G(S_M)$. (The reason for not including an $M$-subscript on~$\G$ will become clear shortly.) As ever, the object set of $\G$ is $v\G=P$. Green's $\mathrel{\mathscr R}$ and $\L$ relations on $S_M$ are given by
\[
(p,g,q) \mathrel{\mathscr R} (r,h,s) \Iff p=r \AND (p,g,q) \L (r,h,s) \Iff q=s.
\]
In particular, the morphism sets of $\G$ are given by
\[
\G(p,q) = R_p\cap L_q = \{p\}\times G\times\{q\} = \set{(p,g,q)}{g\in G} \qquad\text{for each $p,q\in P$.}
\]
The operations in the groupoid $\G$ are given by
\[
(p,g,q)\circ(q,h,r) = (p,gh,r) \AND (p,g,q)^{-1} = (p,g,q)^* = (q,g^{-1},p).
\]
Note that the above rule for composition in $\G$ holds because of the identity $m_{qq}=1$ (cf.~\eqref{eq:M}). In particular, these operations are independent of the sandwich matrix $M$. Since the ordering on~$P$ is trivial, so too is the ordering on $\G$.
Thus, given distinct matrices $M_1$ and $M_2$ satisfying \eqref{eq:M}, the semigroups $S_1=S_{M_1}$ and $S_2=S_{M_2}$ will give rise to identical projection algebras $P(S_1)=P(S_2)$, including the $\th$ operations, and identical groupoids $\G(S_1)=\G(S_2)$. However, the semigroups $S_1$ and $S_2$ are of course different, as they will have different products (since $M_1\not=M_2$). Moreover, $S_1$ and $S_2$ need not even be isomorphic. Indeed, if every entry of $M_1$ is $1$ (the identity of $G$), then $S_1$ is isomorphic to the direct product of $G$ with the square band $P\times P$, and in particular the idempotents $E(S_1)$ form a subsemigroup of $S_1$. But this need not be the case in general; indeed, if $|G|\geq2$ and $|P|\geq3$, and if we choose the entries of $M_2=(m_{pq})$ in such a way that $m_{pq}=m_{qr}=1\not=m_{pr}$ for distinct $p,q,r\in P$, then $(p,1,q)$ and $(q,1,r)$ are both idempotents of $S_2$, but their product
\[
(p,1,q)(q,1,r)=(p,1,r)\not=(p,m_{pr},r)
\]
is not. Thus, $S_1\not\cong S_2$ in this case.
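For a concrete instance of the above, one may for example take $G=\{1,z\}$ to be the two-element group, $P=\{p,q,r\}$, and let $M_2$ have $m_{pr}=m_{rp}=z$ and all other entries equal to $1$. Then \eqref{eq:M} holds, the elements $(p,1,q)$ and $(q,1,r)$ are idempotents of $S_2=S_{M_2}$, and
\[
(p,1,q)(q,1,r) = (p,m_{qq},r) = (p,1,r), \qquad\text{while}\qquad (p,1,r)(p,1,r) = (p,m_{rp},r) = (p,z,r) \not= (p,1,r),
\]
so that $(p,1,r)$ is not an idempotent.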
It follows from this that the pair $(\G,P)$ consisting of the groupoid $\G=\G(S)$ and projection algebra $P=P(S)$ is not a total invariant of a regular $*$-semigroup. It turns out that the missing ingredient is the evaluation map $\ve=\ve(S)$ from Definition \ref{defn:veS}. Indeed, we will see that the pair~$(\G,\ve)$, i.e.~the chained projection groupoid associated to $S$, \emph{is} a total invariant.
Going back to the Rees matrix semigroup $S_M$, we now consider the evaluation map
\[
\ve_M = \ve(S_M) : \C\to\G,
\]
where here $\C=\C(P)$ is the chain groupoid of $P$. As in \eqref{eq:vec}, and keeping ${\F}=\nab_P$ in mind,~$\ve_M$ is determined entirely by the elements $\ve_M[p,q]$, for $p,q\in P$. Following Definition \ref{defn:veS}, we have
\[
\ve_M[p,q] = pq \equiv (p,m_{pq},q).
\]
In particular, the evaluation map $\ve_M$ depends on the sandwich matrix $M$.
In a sense, the previous conclusion is reversible. Indeed, if we start from the pair $(\G,P)$, consisting of the groupoid $\G$, and the projection algebra $P$ (with each $\th_p$ being the constant map with image $\{p\}$), we have immense freedom in defining an evaluation map $\ve:\C\to\G$. Indeed, we can do so by defining the elements $\ve[p,q]$, for $p,q\in P$, to be arbitrary morphisms from $\G(p,q)$, as long as we ensure that $\ve[q,p]=\ve[p,q]^{-1}$ for all $p,q$. Since $\ve[p,q]$ must be of the form $(p,m_{pq},q)$ for some $m_{pq}\in G$, and since then $\ve[p,q]^{-1}=(q,m_{pq}^{-1},p)$, this condition on inverses is of course equivalent to \eqref{eq:M}. Thus, roughly speaking, evaluation maps $\C\to\G$ and sandwich matrices satisfying \eqref{eq:M} are one and the same thing.
On the other hand, the idempotent \emph{structure} of $S_M$, as determined by the \emph{biordered set}~$E(S_M)$, is independent of $M$. The biordered set of a semigroup $S$ consists of the set $E(S)$ of idempotents, together with a partial binary operation $(e,f)\mt ef$ (a restriction of the product in $S$), defined precisely when $ef$ and/or $fe$ is equal to one of $e$ or $f$. These `basic products' can be conveniently depicted using Easdown's arrow notation from \cite{Easdown1985}; we write
\[
\begin{tikzpicture}
\node(e)at(0,0){$e$};\node(f)at(1,0){$f$};\draw[>-](e)--(f);
\node()at(3,0){or};
\node(ee)at(5,0){$e$};\node(ff)at(6,0){$f$};\draw[->](ee)--(ff);\end{tikzpicture}
\]
to indicate that $fe=e$ ($e$ is a right zero for $f$) or $ef=e$ ($e$ is a left zero for $f$), respectively. In the case that $S=S_M$, we have seen that $E=E(S)$ consists of the elements $e_{pq}=(p,m_{pq},q)$, for $p,q\in P$. One can easily check that
\[
\begin{tikzpicture}
\node(e)at(0,0){$e_{pq}$};\node(f)at(1.5,0){$e_{rs}$};\draw[>-](e)--(f);
\node()at(2.5,0){$\iff$};
\node()at(3.8,0){$p=r$,};
\begin{scope}[shift={(0,-1)}]
\node(e)at(0,0){$e_{pq}$};\node(f)at(1.5,0){$e_{rs}$};\draw[->](e)--(f);
\node()at(2.5,0){$\iff$};
\node()at(3.8,0){$q=s$.};
\end{scope}
\end{tikzpicture}
\]
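For instance, if $p=r$, then \eqref{eq:M} gives
\[
e_{rs}e_{pq} = (p,m_{ps}m_{sp}m_{pq},q) = e_{pq} \AND e_{pq}e_{rs} = (p,m_{pq}m_{qp}m_{ps},s) = e_{rs},
\]
and comparing first coordinates shows that neither equality can hold when $p\not=r$; the case $q=s$ is dual.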
The entire biordered structure of $E(S)$ is then as follows, in the case $P=\{1,2,3,4\}$, and remembering that the $\begin{tikzpicture}\draw[>-](0,0)--(.5,0);\end{tikzpicture}$ and $\begin{tikzpicture}\draw[->](0,0)--(.5,0);\end{tikzpicture}$ relations are transitive:
\[
\begin{tikzpicture}[scale=1.8]
\foreach[evaluate={\z=int(5-\x)}] \x in {1,...,4} \foreach \y in {1,...,4} {\node (\y\x) at (\y,\z) {$e_{\y\x}$};}
\foreach \x in {1,...,4} \foreach[evaluate={\z=int(\y+1)}] \y in {1,...,3} {\draw[>-<] (\x\y)--(\x\z);}
\foreach \x in {1,...,4} \foreach[evaluate={\z=int(\y+1)}] \y in {1,...,3} {\draw[<->] (\y\x)--(\z\x);}
\end{tikzpicture}
\]
\end{eg}
\section{The category isomorphism}\label{chap:iso}
We are now ready to bring all of the above ideas together. The previous two chapters gave constructions for:
\bit
\item building a regular $*$-semigroup $\bS(\G,\ve)$ from a chained projection groupoid $(\G,\ve)$, and
\item building a chained projection groupoid $\bG(S)$ from a regular $*$-semigroup $S$.
\eit
As we have already discussed, one of the goals of the current chapter is to show that the~$\bS$ and~$\bG$ constructions are inverse processes, meaning that
\[
\bG(\bS(\G,\ve)) = (\G,\ve) \AND \bS(\bG(S)) = S
\]
for any chained projection groupoid $(\G,\ve)$, and any regular $*$-semigroup $S$.
In fact, we go further than this, and show in Theorem~\ref{thm:iso} below that $\bS$ and $\bG$ are mutually inverse isomorphisms between the categories of regular $*$-semigroups and chained projection groupoids.
Let $\RSS$ be the (large) category of all regular $*$-semigroups. Morphisms in $\RSS$ are the $*$-semigroup homomorphisms, i.e.~the maps
\[
\phi:S_1\to S_2 \qquad\text{such that}\qquad (ab)\phi=(a\phi)(b\phi) \AND (a^*)\phi = (a\phi)^* \qquad\text{for all $a,b\in S_1$.}
\]
We also define $\CPG$ to be the (large) category of all chained projection groupoids. Morphisms in~$\CPG$ are what we call \emph{chained projection functors}:
\newpage
\begin{defn}\label{defn:CPF}
Consider two chained projection groupoids $(\G_1,\ve_1)$ and $(\G_2,\ve_2)$. For $i=1,2$, we write $P_i=v\G_i$, $\C_i=\C(P_i)$, and so on.
A \emph{chained projection functor} from $(\G_1,\ve_1)$ to $(\G_2,\ve_2)$ is an ordered functor $\phi:\G_1\to\G_2$ satisfying the following two conditions:
\begin{enumerate}[label=\textup{\textsf{(F\arabic*)}},leftmargin=10mm]
\item \label{F1} The object map $v\phi:P_1\to P_2$ (i.e.~the restriction of $\phi$ to $P_1=v\G_1$) is a projection algebra morphism, meaning that
\[
(p\th_q)\phi = (p\phi) \th_{q\phi} \qquad\text{for all $p,q\in P_1$.}
\]
(Here we use $\th$ to denote the projection algebra operations on both $P_1$ and $P_2$.)
\item \label{F2} The functor $\phi$ respects the evaluation maps, meaning that the following diagram commutes:
\[
\begin{tikzcd}
\C_1 \arrow{rr}{\Phi} \arrow[swap]{dd}{\ve_1} & ~ & \C_2 \arrow{dd}{\ve_2} \\%
~&~&~\\
\G_1 \arrow{rr}{\phi}& ~ & \G_2,
\end{tikzcd}
\]
where here $\Phi:\C_1\to\C_2$ is the induced functor given by $[p_1,\ldots,p_k]\Phi = [p_1\phi,\ldots,p_k\phi]$, as in Proposition \ref{prop:CC'}. Commutativity of the diagram means that
\[
(\ve_1(\c))\phi = \ve_2(\c\Phi) \qquad\text{for all $\c\in\C_1$.}
\]
\end{enumerate}
\end{defn}
The main result of this chapter, Theorem \ref{thm:iso} below, states that the categories $\RSS$ and $\CPG$ are isomorphic. We build towards the theorem with a series of lemmas.
\begin{lemma}\label{lem:S1S2}
If $S_1$ and $S_2$ are regular $*$-semigroups, then any $*$-semigroup homomorphism ${S_1\to S_2}$ is a chained projection functor $\bG(S_1)\to\bG(S_2)$.
\end{lemma}
\pf
Fix a $*$-semigroup homomorphism $\phi:S_1\to S_2$, and write $\bG(S_i)=(\G_i,\ve_i)$, ${P_i=P(S_i)=v\G_i}$, $\C_i=\C(P_i)$, and so on. Since the underlying sets of $\G_1$ and $\G_2$ are $S_1$ and $S_2$, we can think of~$\phi$ as a map $\G_1\to\G_2$. Since $\phi$ is a $*$-semigroup homomorphism, it follows that for $a\in\G_1$ we have
\[
\bd(a\phi) = (a\phi)(a\phi)^* = (a\phi)(a^*\phi) = (aa^*)\phi = \bd(a)\phi \ANDSIM \br(a\phi) = \br(a)\phi.
\]
But then if $a,b\in\G_1$ are such that $\br(a)=\bd(b)$, then $\br(a\phi)=\bd(b\phi)$, so we have
\[
(a\circ b)\phi = (ab)\phi = (a\phi)(b\phi) = (a\phi)\circ(b\phi).
\]
Thus, $\phi$ is a functor.
Before we check that $\phi$ is order-preserving, we note that \ref{F1} holds, as for any $p,q\in P_1$,
\[
(p\th_q)\phi = (qpq)\phi = (q\phi)(p\phi)(q\phi) = (p\phi) \th_{q\phi}.
\]
To see that $\phi$ preserves the order, suppose $a,b\in\G_1$ are such that $a\leq b$. Then $a={}_p\corest b = pb$ for some $p\leq\bd(b)$. Note that
\[
p\leq\bd(b) \Implies p = p\th_{\bd(b)} \Implies p\phi = (p\phi)\th_{\bd(b)\phi} \Implies p\phi \leq \bd(b)\phi = \bd(b\phi).
\]
It then follows that
\[
a\phi = (pb)\phi = (p\phi)(b\phi) = {}_{p\phi}\corest(b\phi) \leq b\phi.
\]
It remains to check that \ref{F2} holds. For this, let $\c=[p_1,\ldots,p_k]\in\C_1$. Then
\[
(\ve_1(\c))\phi = (p_1\cdots p_k)\phi = (p_1\phi)\cdots(p_k\phi) = \ve_2[p_1\phi,\ldots,p_k\phi] = \ve_2(\c\Phi). \qedhere
\]
\epf
\begin{lemma}\label{lem:G1G2}
If $(\G_1,\ve_1)$ and $(\G_2,\ve_2)$ are chained projection groupoids, then any chained projection functor $(\G_1,\ve_1)\to(\G_2,\ve_2)$ is a $*$-semigroup homomorphism $\bS(\G_1,\ve_1)\to\bS(\G_2,\ve_2)$.
\end{lemma}
\pf
Fix a chained projection functor $\phi:\G_1\to\G_2$. Also write $S_1=\bS(\G_1,\ve_1)$ and ${S_2=\bS(\G_2,\ve_2)}$. Again we can think of $\phi$ as a map $S_1\to S_2$. Since $\phi$ is a groupoid functor, we have
\[
(a^*)\phi = (a^{-1})\phi = (a\phi)^{-1} = (a\phi)^* \qquad\text{for all $a\in S_1$.}
\]
To make the rest of this proof notationally concise, we will write $\ol a = a\phi$ for $a\in S_1$. It remains to show that $\phi$ is a semigroup homomorphism, i.e.~that $\ol{a\pr b}=\ol a\pr \ol b$ for all $a,b\in S_1$. (Here we write~$\pr$ for the product on both $S_1$ and $S_2$.) For this, write as usual
\[
p = \br(a) \COMMA q = \bd(b) \COMMA p' = q\th_p \AND q' = p\th_q,
\]
so that
\[
a\pr b = a\rest_{p'} \circ \ve_1[p',q'] \circ {}_{q'}\corest b.
\]
Since $\phi$ is a functor, we have
\[
\br(\ol a) = \br(\ol{a\circ p}) = \br(\ol a\circ \ol p) = \br(\ol p) = \ol p \ANDSIM \bd(\ol b) = \ol q.
\]
Thus,
\[
\ol a \pr \ol b = \ol a\rest_{\ol p'} \circ \ve_2[\ol p',\ol q'] \circ {}_{\ol q'}\corest \ol b \WHERE \ol p' = \ol q\th_{\ol p} \AND \ol q' = \ol p\th_{\ol q}.
\]
By \ref{F1} we have
\[
\ol{p'} = \ol{q\th_p} = \ol q\th_{\ol p} = \ol p' \ANDSIM \ol{q'} = \ol q'.
\]
By \ref{F2} we have
\[
\ol{\ve_1[p',q']} = (\ve_1[p',q'])\phi = \ve_2([p',q']\Phi) = \ve_2[\ol{p'},\ol{q'}] = \ve_2[\ol p',\ol q'].
\]
Since $\phi$ is an ordered functor, we have
\[
\ol{a\rest_{p'}} = \ol a\rest_{\ol{p'}} = \ol a\rest_{\ol p'} \ANDSIM \ol{{}_{q'}\corest b} = {}_{\ol q'}\corest\ol b.
\]
Putting everything together, we have
\[
\ol{a\pr b} = \ol{a\rest_{p'} \circ \ve_1[p',q'] \circ {}_{q'}\corest b} = \ol{a\rest_{p'}} \circ \ol{\ve_1[p',q']} \circ \ol{{}_{q'}\corest b} = \ol a\rest_{\ol p'} \circ \ve_2[\ol p',\ol q'] \circ {}_{\ol q'}\corest \ol b = \ol a\pr\ol b. \qedhere
\]
\epf
\begin{lemma}\label{lem:GS}
If $(\G,\ve)$ is a chained projection groupoid, then $\bG(\bS(\G,\ve))=(\G,\ve)$.
\end{lemma}
\pf
Write
\[
S = \bS(\G,\ve) \AND (\G',\ve') = \bG(S).
\]
We must show that $\G'=\G$ and $\ve'=\ve$. By the equality $\G'=\G$, we mean that these have the same underlying sets, and all the same operations and relations (composition, inversion, order and projection algebra operations).
The underlying sets of $S$ and $\G'$ are simply $\G$, and the unary operations of $S$ and $\G'$ are simply inversion in $\G$. We also have $v\G=P(S)=v\G'$ (cf.~Proposition~\ref{prop:ES}\ref{ES1}), and we denote this set of projections by $P$.
Recall that the product $\pr$ in $S$ is given by Definition \ref{defn:pr}. The groupoid $\G'$ is constructed from $S$, as outlined in Definition \ref{defn:GS2}. To avoid ambiguity, we will denote the composition on $\G'$ by $\ccirc$ in order to distinguish it from the original composition $\circ$ in $\G$. For the same reason, we also denote the domain and range maps in $\G'$ by $\bd'$ and $\br'$. So for $a,b\in\G'(=\G)$, we have
\[
\bd'(a) = a\pr a^{-1} \COMMA \br'(a) = a^{-1}\pr a \AND a\ccirc b=a\pr b \text{ when $\br'(a)=\bd'(b)$.}
\]
By Lemma \ref{lem:prcirc}, and since $\br(a)=\bd(a^{-1})$, it follows that
\begin{equation}\label{eq:d'r'}
\bd'(a) = a\pr a^{-1} = a \circ a^{-1} = \bd(a) \ANDSIM \br'(a) = \br(a).
\end{equation}
It also follows that
\[
\br'(a)=\bd'(b) \Iff \br(a)=\bd(b) \qquad\text{and then}\qquad a\ccirc b = a\pr b = a\circ b.
\]
Thus, the compositions in the categories $\G$ and $\G'$ coincide.
Next we consider the projection algebra operations. We denote those in $\G$ as usual by $\th_p$ ($p\in P$), and those in $\G'$ by $\th'_p$ ($p\in P$), remembering that $v\G'=v\G=P$. We must show that $q\th_p=q\th'_p$ for all $p,q\in P$, and this in fact follows from Lemma \ref{lem:pqvepq}\ref{pqvepq3}:
\[
q\th'_p = p\pr q\pr p = q\th_p.
\]
Since $\th_p=\th_p'$ for all $p\in P=v\G=v\G'$, it follows that $p\leq q$ in $\G$ precisely when $p\leq q$ in $\G'$.
We now move on to the orders on $\G$ and $\G'$. We use the symbols $\corest$ and $\rest$ to denote restrictions in $\G$ as usual, and $\corestt$ and $\restt$ for restrictions in $\G'$ (note the placement of the arrow heads). Let $a\in\G$ and write $q=\bd(a)$. By \eqref{eq:d'r'}, ${}_p\corest a$ is defined in $\G$ precisely when ${}_p\corestt a$ is defined in $\G'$, i.e.~when $p\leq q$. And for such $p$, it follows from Definition \ref{defn:GS2} and Lemma \ref{lem:apr}\ref{apr6} that
\[
{}_p\corestt a = p\pr a = {}_p\corest a .
\]
But then for $a,b\in\G$, and writing $\leq$ and $\leq'$ for the orders in $\G$ and $\G'$, we have
\[
a\leq b \Iff a={}_p\corest b\text{ for some $p\leq\bd(b)$} \Iff a={}_p\corestt b\text{ for some $p\leq\bd'(b)$} \Iff a\leq' b.
\]
All of the above shows that the ordered groupoids $\G$ and $\G'$ coincide. It remains to show that $\ve=\ve'$. To do so, fix some $\c=[p_1,\ldots,p_k]\in\C$. By Definition \ref{defn:veS} we have
\[
\ve'(\c) = p_1\pr\cdots\pr p_k,
\]
and we show that $\ve(\c)=\ve'(\c)$ by induction on $k$. The $k=1$ case being clear, suppose $k\geq2$, and write $\d=[p_1,\ldots,p_{k-1}]$ and $a=\ve(\d)$. Note that $\c=\d\circ[p_{k-1},p_k]$, and by induction we have
\[
a = \ve(\d) = \ve'(\d) = p_1\pr\cdots\pr p_{k-1}.
\]
It follows that $\ve'(\c)=a\pr p_k$. Since $\br(a)=\br(\d)=p_{k-1}$ (as $\ve'$ is a $v$-functor), and since $p_{k-1}\F p_k$ (as $\c\in\C$), it follows from Lemma \ref{lem:apr}\ref{apr3} that
\[
\ve'(\c) = a\pr p_k = a\circ\ve[p_{k-1},p_k] = \ve(\d)\circ\ve[p_{k-1},p_k] = \ve(\d\circ[p_{k-1},p_k]) = \ve(\c),
\]
as required.
\epf
\begin{lemma}\label{lem:SG}
If $S$ is a regular $*$-semigroup, then $\bS(\bG(S))=S$.
\end{lemma}
\pf
Write
\[
(\G,\ve) = \bG(S) \AND S' = \bS(\G,\ve).
\]
Again, by construction the underlying sets of $S$ and $S'$ coincide, and so do the involutions. It therefore remains to show that the products in $S$ and $S'$ coincide as well. We denote the product in $S$ by juxtaposition, as usual. The product in $S'$ is $\pr$, which is constructed from $(\G,\ve)$ as in Definition \ref{defn:pr}, while $(\G,\ve)$ is in turn constructed from $S$ as in Definitions \ref{defn:GS2} and \ref{defn:veS}.
So let $a,b\in S$; we must show that $a\pr b=ab$. Write $p=\br(a)$ and $q=\bd(b)$, so that
\[
a\pr b = a\rest_{p'} \circ \ve[p',q'] \circ {}_{q'}\corest b \WHERE p' = q\th_p \ANd q' = p\th_q.
\]
Following the definitions in Chapter \ref{chap:S}, and using Lemma \ref{lem:PS1}, we have
\begin{align*}
p &= a^*a, & p' &= pqp, & a\rest_{p'} &= ap' = apqp, \\
q &= bb^*, & q' &= qpq, & {}_{q'}\corest b &= q'b = qpqb, & \ve[p',q'] = p'q' = pqpqpq = pq.
\end{align*}
Combining all of the above, and keeping Definition \ref{defn:GS2} and Lemma \ref{lem:PS1} in mind, it follows that
\[
a\pr b = apqp \circ pq \circ qpqb = apqpqpqb = apqb = aa^*abb^*b = ab,
\]
as required.
\epf
We now have all we need to prove the main result.
\begin{thm}\label{thm:iso}
The category $\RSS$ of regular $*$-semigroups (with $*$-semigroup homomorphisms) is isomorphic to the category $\CPG$ of chained projection groupoids (with chained projection functors).
\end{thm}
\pf
By Theorems \ref{thm:SGve} and \ref{thm:GveS} we have maps
\[
\bS:\CPG\to\RSS:(\G,\ve)\mt\bS(\G,\ve) \AND \bG:\RSS\to\CPG:S\mt\bG(S).
\]
By Lemmas \ref{lem:S1S2} and \ref{lem:G1G2}, $\bS$ and $\bG$ are functors, and by Lemmas \ref{lem:GS} and \ref{lem:SG} they are mutually inverse.
\epf
\begin{rem}\label{rem:structure}
We can use Theorem \ref{thm:iso} to deduce a \emph{structure theorem} for regular $*$-semigroups, in the sense that it shows how the entire structure of such a semigroup can be entirely, and uniquely, determined by `simpler' objects. Specifically, given a regular $*$-semigroup $S$, one constructs:
\bit
\item the projection groupoid $\G=\G(S)$, as in Definition \ref{defn:GS2}, including its object set $v\G=P(S)$, which is the projection algebra of $S$, and
\item the evaluation map $\ve=\ve(S)$, as in Definition \ref{defn:veS}.
\eit
Lemma \ref{lem:SG} tells us that $S$ is completely determined by these two simpler objects, in the sense that $S=\bS(\G,\ve)$ can be constructed from $\G$ and $\ve$ in the manner described in Definition~\ref{defn:pr}. Examples \ref{eg:Brandt} and \ref{eg:Rees} demonstrate the subtlety of the decomposition, showing how seemingly-small changes to $\G$ and/or $\ve$ can result in very different semigroups.
\end{rem}
Theorem \ref{thm:iso} shows that the categories of regular $*$-semigroups and chained projection groupoids are in a sense the same. It also follows from the definition of the functors $\bS$ and $\bG$ that the algebras of projections of regular $*$-semigroups are precisely the object sets of chained projection groupoids. However, we are still left with the following two (equivalent) questions, which we will explore in some depth in the second part of the paper:
\begin{que}\label{qu:P}
Given an (abstract) projection algebra $P$, as in Definition \ref{defn:P}:
\bit
\item does there exist a regular $*$-semigroup $S$ with $P(S)=P$?
\item does there exist a chained projection groupoid $(\G,\ve)$ with $v\G=P$?
\eit
\end{que}
Question \ref{qu:P} (or at least the first version of it) is already known to have an affirmative answer. For example, Imaoka \cite{Imaoka1980,Imaoka1983} and Yamada \cite{Yamada1981} have both shown how to construct a \emph{maximum fundamental regular $*$-semigroup} with a given projection algebra $P$ (definitions are given below), though Yamada worked with a somewhat different formulation of projection algebras. This fundamental semigroup was also studied by Jones in \cite[Section 4]{Jones2012}, but he was not concerned with maximality. As an application of our results, we will give another (ultimately equivalent) construction of this maximum fundamental semigroup in Chapter \ref{chap:F}. We also answer Question~\ref{qu:P} by an alternative route in Chapter~\ref{chap:E}, by constructing a \emph{free (projection-generated) regular $*$-semigroup} with projection algebra $P$. As the name suggests, these semigroups have many remarkable (categorical) `free-ness' properties, which will be elaborated upon further in Chapter \ref{chap:E}, and can be constructed in a variety of ways, including by a presentation by generators and relations (see Theorem \ref{thm:pres}).
\newpage
\part{Applications}\label{part:applications}
The first part of this paper was devoted to a proof of our main structural result, Theorem~\ref{thm:iso}, which established the isomorphism of the categories of regular $*$-semigroups and chained projection groupoids. In this second part, we explore a number of consequences of this theorem:
\bit
\item Chapter \ref{chap:E} uses chain groupoids to construct the \emph{free regular $*$-semigroup} over an arbitrary projection algebra; see Theorems \ref{thm:IG},~\ref{thm:pres},~\ref{thm:free} and~\ref{thm:adjoint}.
\item Chapter \ref{chap:F} uses projection groupoids to give a new, transparent construction of \emph{fundamental} regular $*$-semigroups; see Theorems \ref{thm:SOP}, \ref{thm:fund} and \ref{thm:muCP}.
\item Chapter \ref{chap:I} applies our theory to the special case of \emph{inverse} semigroups, leading to a new proof of the celebrated Ehresmann--Schein--Nambooripad Theorem, stated in Theorem~\ref{thm:ESN}.
\eit
Again, the introduction of each chapter contains a detailed outline of its structure, and a summary of the main results.
\section{\boldmath Idempotent-generated regular $*$-semigroups}\label{chap:E}
As discussed at the end of Chapter \ref{chap:iso}, one of our motivations at this point is to show how to construct, from an arbitrary projection algebra $P$ (cf.~Definition \ref{defn:P}), a regular $*$-semigroup~$S$ with projection algebra $P(S)=P$ or, equivalently, a chained projection groupoid $(\G,\ve)$ with object set $v\G=P$. As it happens, we have already constructed such a groupoid, namely the chain groupoid $\C=\C(P)$ from Definition \ref{defn:CP}, with evaluation map simply the identity~$\operatorname{id}_\C$, although we still have to verify quite a few non-trivial details. The chained projection groupoid $(\C,\operatorname{id}_\C)$ leads as usual to the regular $*$-semigroup $\bS(\C,\operatorname{id}_\C)$, which we will call the \emph{chain semigroup of $P$}, and denote by $\C_P$.
As we will see, this semigroup possesses many important properties, beyond simply being a regular $*$-semigroup with projection algebra $P(\C_P)=P$.
In Section \ref{sect:C_P} we prove Theorem~\ref{thm:IG}, which shows that $\C_P$ is universal among \emph{idempotent-generated} regular $*$-semigroups in the sense that:
\bit
\item $\C_P$ is an idempotent-generated regular $*$-semigroup with projection algebra $P$, and
\item any idempotent-generated regular $*$-semigroup with projection algebra $P$ is a homomorphic image of $\C_P$.
\eit
(Recall from Lemma \ref{lem:PS1}\ref{PS12} that a regular $*$-semigroup is idempotent-generated if and only if it is projection-generated.)
As explained in Remark \ref{rem:IGRSSP}, this shows that $\C_P$ is a free/initial object in the category $\IGRSSP$ of idempotent-generated regular $*$-semigroups with (fixed) projection algebra $P$.
In Section \ref{sect:gr} we give a presentation for $\C_P$ in terms of generators and relations; see Theorem~\ref{thm:pres}. We then return in Section \ref{sect:freeness} to our discussion of the `free-ness' of~$\C_P$. Theorem~\ref{thm:free} demonstrates another universal property of $\C_P$ among all regular $*$-semigroups:
\bit
\item For any projection algebra $P$ and regular $*$-semigroup $S$, any projection algebra morphism $\phi:P\to P(S)$ extends uniquely to a $*$-semigroup morphism $\C_P\to S$.
\eit
This is then used to prove Theorem \ref{thm:adjoint}, which says that $\C_P$ is the free regular $*$-semigroup with projection algebra~$P$, in the sense that the semigroups $\C_P$ are precisely the objects in the image of a left adjoint of a suitable forgetful functor. (The relevant definitions are given below.) We then conclude the chapter with some open questions.
\subsection{The chain semigroup}\label{sect:C_P}
For the duration of this section we fix a projection algebra $P$, as in Definition \ref{defn:P}, and all of the data that comes with it, i.e.~the unary operations $\th_p$, the relations $\leq$, $\leqF$ and $\F$, and so on. We also write $\P=\P(P)$ and $\C=\C(P)$ for the path category and chain groupoid of $P$; cf.~Definitions \ref{defn:PP} and \ref{defn:CP}.
Our first goal is to show that $\C$ is a projection groupoid, for which it remains to verify~\ref{G1}, specifically \ref{G1a}. To do so, we need to understand the maps $\vt_\c$ and $\Th_\c$, for a $P$-chain ${\c=[p_1,\ldots,p_k]\in\C}$. Keeping $\bd(\c)=p_1$ and $\br(\c)=p_k$ in mind, and using \eqref{eq:qi}, the map $\vt_\c:p_1^\da\to p_k^\da$ is given by
\begin{align}
\label{eq:vtc} q\vt_\c = \br({}_q\corest\c) = q\th_{p_2}\cdots\th_{p_k} &= q\th_{p_1}\cdots\th_{p_k} &&\text{for $q\leq p_1$.}
\intertext{It follows from this that $q\Th_\c = q\th_{\bd(\c)}\vt_\c = q\th_{p_1}\th_{p_2}\cdots\th_{p_k}$ for arbitrary $q\in P$. This all shows that}
\label{eq:Thc} \Th_\c &= \th_{p_1}\cdots\th_{p_k} &&\text{for $\c=[p_1,\ldots,p_k]\in\C$.}
\end{align}
\begin{lemma}
For any projection algebra $P$, the chain groupoid $\C=\C(P)$ is a projection groupoid.
\end{lemma}
\pf
To verify \ref{G1a}, consider a $P$-chain $\c=[p_1,\ldots,p_k]\in\C$, and let $q\leq\bd(\c)=p_1$. We must show that
\[
\th_{q\vt_\c} = \Th_{\c^{-1}}\th_q\Th_\c.
\]
For this we use \eqref{eq:vtc}, Lemma \ref{lem:P} and \eqref{eq:Thc} to calculate
\[
\th_{q\vt_\c} = \th_{q\th_{p_1}\cdots\th_{p_k}} = \th_{p_k}\cdots\th_{p_1}\th_q\th_{p_1}\cdots\th_{p_k} = \Th_{\c^{-1}}\th_q\Th_\c ,
\]
where in the last step we recall that $\c^{-1}=[p_k,\ldots,p_1]$.
\epf
Next, it is clear that the identity map $\ve=\operatorname{id}_\C$ is an evaluation map (cf.~Definition \ref{defn:ve}).
\begin{prop}\label{prop:Cid}
For any projection algebra $P$, $(\C,\operatorname{id}_\C)$ is a chained projection groupoid, where $\C=\C(P)$ is the chain groupoid of $P$.
\end{prop}
\pf
It remains to verify \ref{G2}. To do so, fix a $\c$-linked pair $(e,f)$, where $\c=[\p]\in\C$ for $\p=(p_1,\ldots,p_k)\in\P$. Following Definitions \ref{defn:LP1}, \ref{defn:LP2} and \ref{defn:CPG} this means that
\begin{equation}\label{eq:ecf}
f = e\Th_\c\th_f \AND e = f\Th_{\c^{-1}}\th_e,
\end{equation}
and we must show that $\lam(e,\c,f) = \rho(e,\c,f)$. Keeping $\ve=\operatorname{id}_\C$ in mind, these morphisms are defined by
\begin{equation}\label{eq:lamecf}
\lam(e,\c,f) = [e,e_1]\circ {}_{e_1}\corest \c \circ [f_1,f] \AND \rho(e,\c,f) = [e,e_2]\circ \c\rest_{f_2} \circ [f_2,f],
\end{equation}
in terms of the projections
\[
e_1 = e\th_{p_1} \COMMA e_2 = f\Th_{\c^{-1}} \COMMA f_1 = e\Th_\c \AND f_2 = f\th_{p_k}.
\]
For convenience, we will write
\[
{}_{e_1}\corest \c = [ u_1,\ldots,u_k] \AND \c\rest_{f_2} = [v_1,\ldots,v_k].
\]
Using \eqref{eq:rest} and \eqref{eq:corest}, and remembering that $e_1=e\th_{p_1}$ and $f_2 = f\th_{p_k}$, we have
\[
u_i = e_1\th_{p_2}\cdots\th_{p_i} = e\th_{p_1}\cdots\th_{p_i} \AND v_i = f_2\th_{p_{k-1}}\cdots\th_{p_i} = f\th_{p_k}\cdots\th_{p_i} ,
\]
for each $1\leq i\leq k$. Keeping in mind that the compositions in \eqref{eq:lamecf} exist, it follows from all this that
\[
\lam(e,\c,f) = [e,u_1,\ldots,u_k,f] \AND \rho(e,\c,f) = [e,v_1,\ldots,v_k,f],
\]
and we must show that these chains are equal. It will also be convenient to write $u_0=e$ and $v_{k+1}=f$. To assist with the coming arguments, these projections are shown in Figure \ref{fig:clinked} (in the case $k=4$). Our task is essentially to show that the large rectangle at the bottom of the diagram commutes, modulo $\approx$.
Next we claim that $(u_{i-1},v_{i+1})$ is $p_i$-linked for each $1\leq i\leq k$, as in Definition \ref{defn:CP_P}. To prove this, we must show that
\[
v_{i+1} = u_{i-1}\th_{p_i}\th_{v_{i+1}} \AND u_{i-1} = v_{i+1}\th_{p_i}\th_{u_{i-1}}.
\]
First we note that \eqref{eq:Thc} and \eqref{eq:ecf} give
\[
f = e\th_{p_1}\cdots\th_{p_k}\th_f \AND e = f\th_{p_k}\cdots\th_{p_1}\th_e.
\]
Combining this with Lemma \ref{lem:P}, we obtain
\begin{align*}
u_{i-1}\th_{p_i}\th_{v_{i+1}} &= e\th_{p_1}\cdots\th_{p_{i-1}} \cdot \th_{p_i} \cdot \th_{f\th_{p_k}\cdots\th_{p_{i+1}}} \\
&= e\th_{p_1}\cdots\th_{p_{i-1}} \cdot \th_{p_i} \cdot \th_{p_{i+1}}\cdots\th_{p_k}\th_f\th_{p_k}\cdots\th_{p_{i+1}} = f\th_{p_k}\cdots\th_{p_{i+1}} = v_{i+1}.
\end{align*}
The proof that $u_{i-1} = v_{i+1}\th_{p_i}\th_{u_{i-1}}$ is essentially identical.
Now that we have proved the claim, these linked pairs lead to the paths
\begin{align*}
\lam(u_{i-1},p_i,v_{i+1}) &= (u_{i-1},u_{i-1}\th_{p_i},v_{i+1}) &\text{and}&& \rho(u_{i-1},p_i,v_{i+1}) &= (u_{i-1},v_{i+1}\th_{p_i},v_{i+1})\\
&= (u_{i-1},u_i,v_{i+1}) &&&&=(u_{i-1},v_i,v_{i+1}),
\end{align*}
as in Definition \ref{defn:CP_P}. It then follows by the form of $\Om$ in Definition \ref{defn:approx} (see \ref{Om3}) that
\begin{equation}\label{eq:uvv}
[u_{i-1},v_i,v_{i+1}] = [u_{i-1},u_i,v_{i+1}] \qquad\text{for each $1\leq i\leq k$.}
\end{equation}
In other words, each of the small rectangles at the bottom of Figure \ref{fig:clinked} commutes. It follows that the large rectangle commutes. Formally, we repeatedly use \eqref{eq:uvv} in the indicated places, as follows:
\begin{align*}
\rho(e,\c,f) = [e,v_1,\ldots,v_k,f]
&= [\ul{u_0,v_1,v_2},v_3,v_4,\ldots,v_k,v_{k+1}] \\
&= [u_0,\ul{u_1,v_2,v_3},v_4,\ldots,v_k,v_{k+1}] \\
&= [u_0,u_1,\ul{u_2,v_3,v_4},\ldots,v_k,v_{k+1}] \\
& \hspace{1.3mm} \vdots\\
&= [u_0,u_1,u_2,u_3,u_4,\ldots,u_k,v_{k+1}] = [e,u_1,\ldots,u_k,f] = \lam(e,\c,f),
\end{align*}
and the proof is complete.
\epf
\begin{figure}[h]
\begin{center}
\scalebox{0.85}{
\begin{tikzpicture}[xscale=0.75]
\tikzstyle{vertex}=[circle,draw=black, fill=white, inner sep = 0.07cm]
\nc\sss{4}
\node[left] () at (-1.2,2){${}_{\phantom{0}}e=$};
\node (e) at (-1,2){$u_0$};
\node (f) at (1+5*\sss,-2){$v_5$};
\node[right] () at (1+5*\sss+.2,-2){$=f_{\phantom{0}}$};
\node (p1) at (4,6){$p_1$};
\node (u1) at (3,2){$u_1$};
\node (v1) at (5,-2){$v_1$};
\begin{scope}[shift={(\sss,0)}]
\node (p2) at (4,6){$p_2$};
\node (u2) at (3,2){$u_2$};
\node (v2) at (5,-2){$v_2$};
\end{scope}
\begin{scope}[shift={(2*\sss,0)}]
\node (p3) at (4,6){$p_3$};
\node (u3) at (3,2){$u_3$};
\node (v3) at (5,-2){$v_3$};
\end{scope}
\begin{scope}[shift={(3*\sss,0)}]
\node (p4) at (4,6){$p_4$};
\node (u4) at (3,2){$u_4$};
\node (v4) at (5,-2){$v_4$};
\end{scope}
\draw[->-=0.5] (p1)--(p2);
\draw[->-=0.5] (p2)--(p3);
\draw[->-=0.5] (p3)--(p4);
\draw[->-=0.5] (e)--(u1);
\draw[->-=0.5] (u1)--(u2);
\draw[->-=0.5] (u2)--(u3);
\draw[->-=0.5] (u3)--(u4);
\draw[->-=0.5] (u4)--(f);
\draw[->-=0.5] (e)--(v1);
\draw[->-=0.5] (v1)--(v2);
\draw[->-=0.5] (v2)--(v3);
\draw[->-=0.5] (v3)--(v4);
\draw[->-=0.5] (v4)--(f);
\draw[->-=0.5] (u1)--(v2);
\draw[->-=0.5] (u2)--(v3);
\draw[->-=0.5] (u3)--(v4);
\draw[white,line width=2mm] (p1)--(v1) (p2)--(v2) (p3)--(v3) (p4)--(v4);
\draw[dashed] (u1)--(p1)--(v1) (u2)--(p2)--(v2) (u3)--(p3)--(v3) (u4)--(p4)--(v4);
\end{tikzpicture}
}
\caption{The projections $e,f,p_i,u_i,v_i$ from the proof of Proposition \ref{prop:Cid}, shown here in the case $k=4$. Dashed lines indicate $\leq$ relationships. Each arrow $s\to t$ represents the $P$-path $(s,t)\in\P$, so the upper and lower paths $e\to f$ represent $(e,u_1,\ldots,u_k,f)$ and $(e,v_1,\ldots,v_k,f)$, respectively.}
\label{fig:clinked}
\end{center}
\end{figure}
As a result of Proposition \ref{prop:Cid} and Theorem \ref{thm:SGve}, we have a regular $*$-semigroup $\bS(\C,\operatorname{id}_\C)$, which we will denote by $\C_P$. To give an explicit description of the $\pr$ product in $\C_P$, consider an arbitrary pair of $P$-chains $\c=[p_1,\ldots,p_k]$ and $\d=[q_1,\ldots,q_l]$. Also write $p=\br(\c)=p_k$ and $q=\bd(\d)=q_1$. Following Definition \ref{defn:pr}, and remembering that the evaluation map is $\operatorname{id}_\C$, we have
\[
\c\pr\d = \c\rest_{p'}\circ[p',q']\circ{}_{q'}\corest\d \WHERE p'=q\th_p \AND q'=p\th_q.
\]
As in \eqref{eq:rest} and \eqref{eq:corest}, we have $\c\rest_{p'} = [p_1',\ldots,p_k']$ and ${}_{q'}\corest\d = [q_1',\ldots,q_l']$, where
\[
p_i' = p'\th_{p_k}\cdots\th_{p_i} \AND q_j' = q'\th_{q_1}\cdots\th_{q_j} \qquad\text{for $1\leq i\leq k$ and $1\leq j\leq l$,}
\]
and where $p'=p_k'$ and $q'=q_1'$. Then
\[
\c\pr\d = \c\rest_{p'}\circ[p',q']\circ{}_{q'}\corest\d = [p_1',\ldots,p_k']\circ[p_k',q_1']\circ[q_1',\ldots,q_l'] = [p_1',\ldots,p_k',q_1',\ldots,q_l']
\]
is simply the \emph{concatenation} of $\c\rest_{p'}$ and ${}_{q'}\corest\d$. We denote the concatenation of $\a,\b\in\C$ by $\a\op\b$, but we note that this only belongs to $\C$ if $\br(\a)\F\bd(\b)$. As special cases we have
\[
\text{$\c\pr\d = \c\op\d$ if $\br(\c) \F \bd(\d)$ \AND $\c\pr\d = \c\circ\d$ if $\br(\c) = \bd(\d)$.}
\]
\begin{defn}\label{defn:C_P}
Given a projection algebra $P$ (cf.~Definition \ref{defn:P}), we define the regular $*$-semigroup
\[
\C_P = \bS(\C,\operatorname{id}_\C),
\]
where $\C=\C(P)$ is the chain groupoid of $P$ (cf.~Definition \ref{defn:CP}). Explicitly:
\bit
\item The elements of $\C_P$ are the $P$-chains, $[p_1,\ldots,p_k]$, as in Definition \ref{defn:CP}.
\item The product $\pr$ in $\C_P$ is given, for $\c,\d\in\C_P$ with $\br(\c)=p$ and $\bd(\d)=q$, by
\[
\c\pr\d = \c\rest_{p'} \op {}_{q'}\corest\d \WHERE p'=q\th_p \ANd q'=p\th_q,
\]
and where $\op$ denotes concatenation, as above.
\item The involution in $\C_P$ is given by reversal of $P$-chains, $[p_1,\ldots,p_k]^*=[p_k,\ldots,p_1]$.
\item The projections of $\C_P$ are the trivial chains $[p]\equiv p$, for $p\in P$, and consequently $P(\C_P)\equiv P$.
\item The idempotents of $\C_P$ are the chains $[p,q]$, for $(p,q)\in{\F}$.
\eit
We call $\C_P$ the \emph{chain semigroup of $P$}.
\end{defn}
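To illustrate Definition \ref{defn:C_P}, consider two chains of length two, say $\c=[p_1,p_2]$ and $\d=[q_1,q_2]$, and write $p'=q_1\th_{p_2}$ and $q'=p_2\th_{q_1}$. Unravelling the above formulae (cf.~\eqref{eq:rest} and \eqref{eq:corest}), and noting that $p'\th_{p_2}=p'$ and $q'\th_{q_1}=q'$ (as $p'\leq p_2$ and $q'\leq q_1$), we obtain
\[
[p_1,p_2]\pr[q_1,q_2] = [p'\th_{p_1},\ p',\ q',\ q'\th_{q_2}].
\]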
For the coming results of this chapter, and for later use, we require the following definition:
\begin{defn}\label{defn:canonical}
Let $S$ and $S'$ be regular $*$-semigroups with the same projection algebras, ${P(S)=P(S')=P}$. A $*$-semigroup homomorphism $\phi:S\to S'$ is said to be \emph{canonical} if $\phi|_P=\operatorname{id}_P$, i.e.~if $p\phi=p$ for all $p\in P$.
\end{defn}
\begin{prop}\label{prop:CP}
For any chained projection groupoid $(\G,\ve)$, the evaluation map $\ve$ is a chained projection functor $(\C,\operatorname{id}_\C)\to(\G,\ve)$, and (consequently) a canonical $*$-semigroup homomorphism ${\C_P\to\bS(\G,\ve)}$.
\end{prop}
\pf
By Lemma \ref{lem:G1G2} (and since the $v$-functor $\ve:\C\to\G$ has object map $v\ve=\operatorname{id}_P$), it suffices to prove the first claim. Since $\ve$ is an ordered $v$-functor $\C\to\G$ (cf.~Definition \ref{defn:ve}), we just need to check that conditions \ref{F1} and \ref{F2} both hold. The first follows immediately from $v\ve=\operatorname{id}_P$. For \ref{F2}, $v\ve=\operatorname{id}_P$ means that the induced map $\Phi:\C\to\C$ at the top of the diagram in Definition \ref{defn:CPF} is the identity map. Thus, this diagram is
\[
\begin{tikzcd}
\C \arrow{rr}{\operatorname{id}_\C} \arrow[swap]{dd}{\operatorname{id}_\C} & ~ & \C \arrow{dd}{\ve} \\%
~&~&~\\
\C \arrow{rr}{\ve}& ~ & \G,
\end{tikzcd}
\]
which obviously commutes.
\epf
\begin{thm}\label{thm:IG}
If $P$ is a projection algebra, then
\ben
\item \label{IG1} $\C_P$ is an idempotent-generated regular $*$-semigroup with projection algebra $P$,
\item \label{IG2} any idempotent-generated regular $*$-semigroup with projection algebra $P$ is a canonical image of~$\C_P$.
\een
\end{thm}
\pf
\firstpfitem{\ref{IG1}} By Theorem \ref{thm:SGve}, $\C_P=\bS(\C,\operatorname{id}_\C)$ is a regular $*$-semigroup. It is idempotent-generated by Proposition \ref{prop:ES}\ref{ES3}, since the evaluation map $\operatorname{id}_\C:\C\to\C$ is obviously surjective. We have already observed that $P(\C_P)\equiv P$.
\pfitem{\ref{IG2}} Let $S$ be an idempotent-generated regular $*$-semigroup with projection algebra $P$. By Proposition \ref{prop:CP} (and Theorem \ref{thm:iso}), $\ve$ is a canonical $*$-semigroup homomorphism $\C_P\to \bS(\bG(S)) = S$, and by Proposition~\ref{prop:ES}\ref{ES3}, it is surjective.
\epf
\begin{rem}\label{rem:IGRSSP}
We now briefly comment on a categorical interpretation of Theorem \ref{thm:IG}.
For a fixed projection algebra $P$, we denote by $\IGRSSP$ the (typically large) category of idempotent-generated regular $*$-semigroups with projection algebra $P$. The morphisms in $\IGRSSP$ are the canonical $*$-homomorphisms from Definition \ref{defn:canonical}. Since every object in $\IGRSSP$ is generated by~$P$ (cf.~Lemma \ref{lem:PS1}\ref{PS12}), and since every morphism in $\IGRSSP$ maps $P$ identically, there can be at most one such morphism~${S\to S'}$, for any pair of objects $S,S'$ of $\IGRSSP$, which must then be surjective.
Theorem \ref{thm:IG} essentially states that $\C_P$ is a free/initial object in $\IGRSSP$. We will say more about the free-ness of $\C_P$ (in the full category $\RSS$) in Section \ref{sect:freeness}; see in particular Theorems~\ref{thm:free} and \ref{thm:adjoint}.
\end{rem}
Before we move on, we record the following simple consequence of our previous results.
\begin{prop}\label{prop:CC'2}
Any projection algebra morphism $\phi:P\to P'$ extends to a (unique) well-defined $*$-semigroup homomorphism
\[
\Phi:\C_P\to\C_{P'} \GIVENBY [p_1,\ldots,p_k]\Phi = [p_1\phi,\ldots,p_k\phi].
\]
\end{prop}
\pf
By Proposition \ref{prop:CC'}, the mapping $\Phi$ is an ordered groupoid functor $\C\to\C'$, where as usual we write $\C=\C(P)$ and $\C'=\C(P')$. By Lemma \ref{lem:G1G2}, it suffices to show that $\Phi$ is in fact a chained projection functor $(\C,\operatorname{id}_\C)\to(\C',\operatorname{id}_{\C'})$, as it will then also be a $*$-semigroup homomorphism $\C_P=\bS(\C,\operatorname{id}_\C)\to\bS(\C',\operatorname{id}_{\C'})=\C_{P'}$. Consulting Definition \ref{defn:CPF}, we are left to show that $\Phi$ satisfies conditions \ref{F1} and \ref{F2}.
\pfitem{\ref{F1}} By definition we have $v\Phi=\phi$, and this is a projection algebra morphism by assumption.
\pfitem{\ref{F2}} Keeping in mind that the evaluation maps are the identities, the diagram in Definition \ref{defn:CPF} becomes
\[
\begin{tikzcd}
\C \arrow{rr}{\Phi} \arrow[swap]{dd}{\operatorname{id}_\C} & ~ & \C' \arrow{dd}{\operatorname{id}_{\C'}} \\%
~&~&~\\
\C \arrow{rr}{\Phi}& ~ & \C',
\end{tikzcd}
\]
and this obviously commutes.
\epf
\subsection{Presentation by generators and relations}\label{sect:gr}
Our next result is a presentation by generators and relations for the chain semigroup $\C_P$. Roughly speaking, we take an abstract copy of $P$ as a generating set, and impose the bare minimum of relations to ensure that the quotient of the free semigroup on $P$ is a regular $*$-semigroup with projection algebra $P$. This idea is akin to the constructions of Pastijn \cite{Pastijn1980} and Easdown \cite{Easdown1985} of the free (regular) idempotent-generated semigroup over an arbitrary (regular) biordered set~$E$. Both constructions define the semigroup by a presentation whose generating set is (an abstract copy of)~$E$, and whose relations come from the `basic products' (and, in the regular case, from the `sandwich sets').
Before we state the result, we briefly establish some notation. For a set $X$, we denote by $X^+$ the free semigroup over $X$, which consists of all non-empty words over $X$, under concatenation. For a set $R\sub X^+\times X^+$ of pairs of words, we write $R^\sharp$ for the congruence on $X^+$ generated by~$R$. We say a semigroup $S$ has \emph{presentation} $\pres XR$ if $S\cong X^+/R^\sharp$, i.e.~if there is a surjective semigroup homomorphism $X^+\to S$ with kernel $R^\sharp$. At times we identify $\pres XR$ with the semigroup $X^+/R^\sharp$ itself. The elements of $X$ and $R$ are called generators and relations, respectively. A relation $(u,v)\in R$ is typically displayed as an equation: $u=v$.
\begin{thm}\label{thm:pres}
For any projection algebra $P$, the chain semigroup $\C_P$ has presentation
\[
\C_P \cong \pres{X_P}{R_P},
\]
where $X_P = \set{x_p}{p\in P}$ is an alphabet in one-one correspondence with $P$, and where $R_P$ is the set of relations
\begin{align}
\tag*{\textsf{(R1)}} \label{R1} x_p^2 &= x_p &&\text{for all $p\in P$,}\\
\tag*{\textsf{(R2)}} \label{R2} (x_px_q)^2 &= x_px_q &&\text{for all $p,q\in P$,}\\
\tag*{\textsf{(R3)}} \label{R3} x_px_qx_p &= x_{q\th_p} &&\text{for all $p,q\in P$.}
\end{align}
\end{thm}
To prove the theorem, we require some technical lemmas. But first, it is worth observing that relations \ref{R1}--\ref{R3} closely resemble projection algebra axioms \ref{P2}, \ref{P4} and \ref{P5}, i.e.~those that are stated purely in terms of the $\th$ maps.
For the rest of this section, we fix the projection algebra $P$, and also the alphabet $X_P$ and relations $R_P$ from Theorem \ref{thm:pres}. We also write ${\sim}=R_P^\sharp$ for the congruence on $X_P^+$ generated by relations \ref{R1}--\ref{R3}. For words $u,v\in X_P^+$, we write $u\sim_1v$ to indicate that $u$ and $v$ differ by one or more applications of \ref{R1}, and similarly for $\sim_2$ and $\sim_3$.
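As a first illustration of how the relations interact, note that for any $p,q\in P$ we have
\[
x_px_q \sim_2 x_px_qx_px_qx_px_q = (x_px_qx_p)(x_qx_px_q) \sim_3 x_{q\th_p}x_{p\th_q},
\]
so that every word of length two is $\sim$-equivalent to a word whose subscripts form a $P$-path (cf.~Lemma \ref{lem:p'q'}); Lemma \ref{lem:pres2} below extends this observation to words of arbitrary length.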
\begin{lemma}\label{lem:pres1}
If $p_1,\ldots,p_k,q\in P$ are such that $p_1\F\cdots\F p_k$ and $q\leq p_k$, then
\[
x_{p_1}\cdots x_{p_{k-1}}x_q \sim x_{q_1}\cdots x_{q_{k-1}}x_q
\]
for some $q_1,\ldots,q_{k-1}\in P$ with $q_1\F\cdots\F q_{k-1}\F q$.
\end{lemma}
\pf
The proof is by induction on $k$. When $k=1$ the result is vacuously true, so we assume~$k\geq2$. Then with $q_{k-1} = q\th_{p_{k-1}}$, we have
\[
x_{p_{k-1}}x_q \sim_2 x_{p_{k-1}}x_qx_{p_{k-1}}x_q \sim_3 x_{q\th_{p_{k-1}}}x_q = x_{q_{k-1}} x_q.
\]
By induction, noting that $q_{k-1}\leq p_{k-1}$, we have
\[
x_{p_1}\cdots x_{p_{k-2}}x_{q_{k-1}} \sim x_{q_1}\cdots x_{q_{k-2}}x_{q_{k-1}}
\]
for some $q_1,\ldots,q_{k-2}\in P$ with $q_1\F\cdots\F q_{k-2}\F q_{k-1}$. It follows that
\[
x_{p_1}\cdots x_{p_{k-2}}x_{p_{k-1}}x_q \sim x_{p_1}\cdots x_{p_{k-2}}x_{q_{k-1}}x_q \sim x_{q_1}\cdots x_{q_{k-2}}x_{q_{k-1}}x_q,
\]
and it remains only to show that $q_{k-1}\F q$. But from $q\leq p_k\F p_{k-1}$ it follows from Lemma~\ref{lem:pqr}\ref{pqr2} that $q\F q\th_{p_{k-1}} = q_{k-1}$.
\epf
\begin{lemma}\label{lem:pres2}
For any $w\in X_P^+$ of length $k$, we have
\[
w \sim x_{p_1}\cdots x_{p_k}
\]
for some $p_1,\ldots,p_k\in P$ with $p_1\F\cdots\F p_k$.
\end{lemma}
\pf
We prove the lemma by induction on $k$. The $k=1$ case being trivial, we assume $k\geq2$, and we write $w = x_{q_1}\cdots x_{q_k}$. By induction we have
\[
x_{q_1}\cdots x_{q_{k-1}} \sim x_{r_1}\cdots x_{r_{k-1}}
\]
for some $r_1,\ldots,r_{k-1}\in P$ with $r_1\F\cdots\F r_{k-1}$. Next we define
\[
p_{k-1} = q_k\th_{r_{k-1}} \AND p_k = r_{k-1}\th_{q_k},
\]
noting that $p_{k-1}\F p_k$, by Lemma \ref{lem:p'q'}. We then calculate
\[
x_{r_{k-1}}x_{q_k} \sim_2 x_{r_{k-1}}x_{q_k}x_{r_{k-1}}x_{q_k}x_{r_{k-1}}x_{q_k} \sim_3 x_{q_k\th_{r_{k-1}}}x_{r_{k-1}\th_{q_k}} = x_{p_{k-1}}x_{p_k}.
\]
Since $p_{k-1}\leq r_{k-1}$, it follows from Lemma~\ref{lem:pres1} that
\[
x_{r_1}\cdots x_{r_{k-2}} x_{p_{k-1}} \sim x_{p_1}\cdots x_{p_{k-2}} x_{p_{k-1}}
\]
for some $p_1,\ldots,p_{k-2}\in P$ with $p_1\F\cdots\F p_{k-2}\F p_{k-1}$. Putting everything together, we have
\[
w = x_{q_1}\cdots x_{q_{k-2}}x_{q_{k-1}}x_{q_k}
\sim x_{r_1}\cdots x_{r_{k-2}}x_{r_{k-1}}x_{q_k}
\sim x_{r_1}\cdots x_{r_{k-2}}x_{p_{k-1}}x_{p_k}
\sim x_{p_1}\cdots x_{p_{k-2}}x_{p_{k-1}}x_{p_k},
\]
and the proof is complete.
\epf
Given a $P$-path $\p=(p_1,\ldots,p_k)\in\P=\P(P)$, we define the word
\[
w_\p = x_{p_1}\cdots x_{p_k} \in X_P^+.
\]
So Lemma \ref{lem:pres2} says that every word over $X_P$ is $\sim$-equivalent to some $w_\p$. Using \ref{R1}, it is easy to see that
\begin{equation}\label{eq:xpq}
w_\p w_\q \sim w_{\p\circ\q} \qquad\text{for any $\p,\q\in\P$ with $\br(\p)=\bd(\q)$.}
\end{equation}
(In fact, we have $w_\p w_\q=w_{\p\op\q}$ when $\br(\p)\F\bd(\q)$, where $\op$ again denotes the concatenation operation.)
The next result refers to the congruence $\approx$ on $\P$ from Definition \ref{defn:approx}.
\begin{lemma}\label{lem:pres3}
For any $\p,\q\in\P$, we have $\p\approx\q \implies w_\p \sim w_\q$.
\end{lemma}
\pf
It suffices to assume that $\p$ and $\q$ differ by a single application of \ref{Om1}--\ref{Om3}, i.e.~that
\[
\p = \p'\circ\s\circ\p'' \AND \q = \p'\circ\t\circ\p'' \qquad\text{for some $\p',\p''\in\P$ and $(\s,\t)\in\Om\cup\Om^{-1}$.}
\]
Since $w_\p \sim w_{\p'}w_\s w_{\p''}$ and $w_\q \sim w_{\p'}w_\t w_{\p''}$, by \eqref{eq:xpq}, it is in fact enough to prove that
\[
w_\s\sim w_\t \qquad\text{for all $(\s,\t)\in\Om$.}
\]
We consider the three forms the pair $(\s,\t)\in\Om$ can take.
\pfitem{\ref{Om1}} This follows immediately from \ref{R1}.
\pfitem{\ref{Om2}} If $\s=(p,q,p)$ and $\t=(p)$ for some $(p,q)\in{\F}$, then $w_\s = x_px_qx_p \sim_3 x_{q\th_p} = x_p = w_\t$.
\pfitem{\ref{Om3}} Finally, suppose $\s=\lam(e,p,f)=(e,e\th_p,f)$ and $\t=\rho(e,p,f)=(e,f\th_p,f)$ for some $p\in P$, and some $p$-linked pair $(e,f)$. Then
\[
w_\s = x_ex_{e\th_p}x_f \sim_3 x_ex_px_ex_px_f \sim_2 x_ex_px_f \sim_2 x_ex_px_fx_px_f \sim_3 x_ex_{f\th_p}x_f = w_\t. \qedhere
\]
\epf
We can now tie together the loose ends.
\pf[{\bf Proof of Theorem \ref{thm:pres}.}]
Define the homomorphism
\[
\Psi:X_P^+\to\C_P \BY x_p\Psi = p \equiv[p] \qquad\text{for $p\in P$.}
\]
To see that $\Psi$ is surjective, let $\c\in\C_P$, so that $\c=[\p]$ for some $\p=(p_1,\ldots,p_k)\in\P$. Then as in the proof of Proposition~\ref{prop:ES}\ref{ES3}, and remembering that $\C_P=\bS(\C,\operatorname{id}_\C)$, we have
\begin{equation}\label{eq:xpPsi}
\c = \operatorname{id}_\C(\c) = p_1\pr\cdots\pr p_k = (x_{p_1}\cdots x_{p_k})\Psi = w_\p\Psi.
\end{equation}
Next, we note that $\Psi$ preserves the relations $R_P$, meaning that
\[
(u,v)\in R_P \Implies u\Psi=v\Psi \text{ \ (in~$\C_P$).}
\]
Indeed, this follows immediately from Lemma \ref{lem:PS1}\ref{PS12} when $(u,v)$ has type~\ref{R1} or~\ref{R2}, and from Lemma \ref{lem:pqvepq}\ref{pqvepq3} for type \ref{R3}. It follows from this that ${R_P^\sharp\sub\ker(\Psi)}$.
It remains to show that $\ker(\Psi)\sub R_P^\sharp$. To do so, fix some $(u,v)\in\ker(\Psi)$, so that $u,v\in X_P^+$ and $u\Psi=v\Psi$; we must show that $u\sim v$. By Lemma \ref{lem:pres2}, we have $u\sim w_\p$ and $v\sim w_\q$ for some~$\p,\q\in\P$. Using \eqref{eq:xpPsi}, and remembering that ${\sim}\sub\ker(\Psi)$, we have
\[
[\p] = w_\p\Psi = u\Psi = v\Psi = w_\q\Psi = [\q],
\]
meaning that $\p\approx\q$. But then $w_\p\sim w_\q$ by Lemma \ref{lem:pres3}, so $u\sim w_\p\sim w_\q\sim v$, as required.
\epf
In what follows, we will typically denote the $R_P^\sharp$-class of a word $w\in X_P^+$ by $[w]$. In this way, we have
\[
\pres{X_P}{R_P} = X_P^+/R_P^\sharp = \set{[w]}{w\in X_P^+}.
\]
The product in $\pres{X_P}{R_P}$ is given by $[u][v]=[uv]$ for $u,v\in X_P^+$, and the involution by $[x_{p_1}\cdots x_{p_k}]^*=[x_{p_k}\cdots x_{p_1}]$ for $p_1,\ldots,p_k\in P$. The projections of $\pres{X_P}{R_P}$ are the $R_P^\sharp$-classes $[x_p] \equiv p$ ($p\in P$), so the projection algebra of $\pres{X_P}{R_P}$ is (isomorphic to) $P$.
\subsection{Chain semigroups as free objects}\label{sect:freeness}
In Remark \ref{rem:IGRSSP} we discussed a certain `free-ness' property of the chain semigroup~$\C_P$ in the category of idempotent-generated regular $*$-semigroups with (fixed) projection algebra~$P$. In category theory, `free' objects are more formally defined as the objects in the image of a `left adjoint to a forgetful functor'. We will soon recall the meaning of these terms, and show that our chain semigroups fit this definition, with respect to an appropriate forgetful functor.
We begin by proving the following result. It will be used to verify the above-mentioned categorical condition, but we hope that the reader can already sense a `flavour' of free-ness from the statement, and the universal property it describes.
\begin{thm}\label{thm:free}
If $P$ is a projection algebra and $S$ is a regular $*$-semigroup, then for any projection algebra morphism $\phi:P\to P(S)$, there is a unique $*$-semigroup homomorphism~${\Phi:\C_P\to S}$ such that the following diagram commutes (where both `$\io$'s denote inclusion maps):
\[
\begin{tikzcd}
P \arrow{rr}{\phi} \arrow[swap,hookrightarrow]{dd}{\io} & ~ & P(S) \arrow[hookrightarrow]{dd}{\io} \\%
~&~&~\\
\C_P \arrow{rr}{\Phi}& ~ & S.
\end{tikzcd}
\]
\end{thm}
\pf
This could be proved by constructing a suitable chained projection functor ${(\C,\operatorname{id}_\C)\to\bG(S)}$, where $\C=\C(P)$ is the chain groupoid, and then applying Lemma \ref{lem:G1G2}. Alternatively, we can use Theorem \ref{thm:pres}, and prove the result with $\pres{X_P}{R_P} = X_P^+/R_P^\sharp$ in place of $\C_P$. Taking this second route, we begin by defining a semigroup homomorphism
\[
\varphi:X_P^+\to S \BY x_p\varphi = p\phi \qquad\text{for $p\in P$.}
\]
Next we check that $R_P\sub\ker(\varphi)$, meaning that $u\varphi=v\varphi$ for all $(u,v)\in R_P$. Indeed, this is essentially trivial (modulo Lemma \ref{lem:PS1}\ref{PS12}) when $(u,v)$ has type \ref{R1} or \ref{R2}. For type \ref{R3}, we must show that ${(x_px_qx_p)\varphi = x_{q\th_p}\varphi}$ for any $p,q\in P$. But for any such $p,q$, and using $\th'$ to denote the unary operations in $P(S)$, as in \eqref{eq:thp}, we have
\[
(x_px_qx_p)\varphi = (p\phi)(q\phi)(p\phi) = (q\phi)\th'_{p\phi} = (q\th_p)\phi = x_{q\th_p}\varphi.
\]
(In the penultimate step we used the fact that $\phi$ is a projection algebra morphism.)
It follows from $R_P\sub\ker(\varphi)$ that $R_P^\sharp\sub\ker(\varphi)$. We therefore have a well-defined semigroup homomorphism
\[
\Phi:\pres{X_P}{R_P}\to S \GIVENBY [w]\Phi = w\varphi \qquad\text{for $w\in X_P^+$.}
\]
This is a $*$-homomorphism, as for any word $w=x_{p_1}\cdots x_{p_k}\in X_P^+$,
\[
[w]^*\Phi = [x_{p_k}\cdots x_{p_1}]\Phi = (p_k\phi)\cdots(p_1\phi) = (p_k\phi)^*\cdots(p_1\phi)^* = ((p_1\phi)\cdots(p_k\phi))^* = ([w]\Phi)^*.
\]
Commutativity of the diagram amounts to the fact that $\Phi|_P=\phi$ (where as above we identify $p\equiv[x_p]$ for $p\in P$). This also implies uniqueness, as $\C_P$ is generated by $P\equiv\set{[x_p]}{p\in P}$.
\epf
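For example, taking $S$ to be a regular $*$-semigroup with $P(S)=P$, and $\phi$ to be the identity map $\operatorname{id}_P$, Theorem \ref{thm:free} yields a unique canonical $*$-semigroup homomorphism $\C_P\to S$; by uniqueness, this is precisely the evaluation map of $\bG(S)$, regarded as a canonical homomorphism $\C_P\to S$ as in Proposition \ref{prop:CP}.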
Consider again the two (large) categories:
\bit
\item $\RSS$, of regular $*$-semigroups with $*$-semigroup homomorphisms, and
\item $\PA$, of projection algebras with projection algebra morphisms.
\eit
The process of taking the projection algebra $P(S)$ of a regular $*$-semigroup $S$ can be thought of as a `forgetful functor'; we forget the non-projections of $S$, and remember only the `conjugation' actions of projections on each other. Guided by standard letter usage (see for example \cite[p.~79]{MacLane1998}), we denote this functor by
\[
\U:\RSS\to\PA.
\]
So the action of $\U$ on an object $S\in v\RSS$ and morphism $\phi\in\RSS(S,S')$ is given by:
\bit
\item $\U(S)=P(S)$ is the projection algebra of $S$, with unary operations as in \eqref{eq:thp}, and
\item $\U(\phi):\U(S)\to\U(S')$ is the (set-theoretic) restriction of $\phi$ to $\U(S)$, which is easily seen to be a projection algebra morphism.
\eit
In the other direction, the process of constructing the chain semigroup $\C_P$ of a projection algebra~$P$ is a functor
\[
\FF:\PA\to\RSS.
\]
The action of $\FF$ on an object $P\in v\PA$ and morphism $\phi\in\PA(P,P')$ is given by:
\bit
\item $\FF(P)=\C_P$ is the chain semigroup of $P$, as in Definition \ref{defn:CP}, and
\item $\FF(\phi):\FF(P)\to\FF(P')$ is the induced $*$-semigroup homomorphism $\Phi:\C_P\to\C_{P'}$ from Proposition~\ref{prop:CC'2}.
\eit
So we have the two functors
\[
\U:\RSS\to\PA \AND \FF:\PA\to\RSS.
\]
We have already observed that $\U(\FF(P))=P(\C_P)=P$ for any projection algebra $P$. It is also clear that $\U(\FF(\phi))=\phi$ for any morphism $\phi$ in $\PA$ (this is precisely what it means for~$\Phi$ to \emph{extend}~$\phi$ in Proposition \ref{prop:CC'2}). In other words, $\U\FF=\operatorname{id}_{\PA}$ is the identity functor $\PA\to\PA$. The other composition $\FF\U:\RSS\to\RSS$ is not the identity. However, we will show in Theorem~\ref{thm:adjoint} that $\FF$ is a \emph{left adjoint} to $\U$.
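(To see that $\FF\U$ is not the identity, note for instance that any non-trivial group $G$ is a regular $*$-semigroup under $g^*=g^{-1}$, being an inverse semigroup, and its only projection is the identity element $1$; consequently $\FF(\U(G))=\C_{\{1\}}$ is the trivial semigroup $\{[1]\}$, which is certainly not isomorphic to $G$.)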
To make the claim about adjointness precise, we need to recall some definitions. There are \emph{many} equivalent definitions of adjunctions, but for our purposes the most convenient is that of \cite[Definition 9.1]{Awodey2010}.
\begin{defn}\label{defn:nt}
Consider two (possibly large) categories $\bC$ and $\bD$, and a pair of functors ${\FF,\G:\bC\to\bD}$. A \emph{natural transformation} $\eta:\FF\to \G$ is a family $\eta = (\eta_C)_{C\in v\bC}$, where each $\eta_C:\FF(C)\to \G(C)$ is a morphism in $\bD$, and such that the following condition holds:
\bit
\item For every pair of objects $C,C'\in v\bC$, and for every morphism $\phi:C\to C'$ in $\bC$, the following diagram commutes:
\[
\begin{tikzcd}
\FF(C) \arrow{rr}{\eta_C} \arrow[swap]{dd}{\FF(\phi)} & ~ & \G(C) \arrow{dd}{\G(\phi)} \\%
~&~&~\\
\FF(C') \arrow{rr}{\eta_{C'}}& ~ & \G(C').
\end{tikzcd}
\]
\eit
\end{defn}
\newpage
\begin{defn}\label{defn:ad}
Consider two (possibly large) categories $\bC$ and $\bD$. An \emph{adjunction} $\bC\to\bD$ is a triple $(\FF,\U,\eta)$, where $\FF:\bC\to\bD$ and $\U:\bD\to\bC$ are functors, and $\eta$ is a natural transformation $\operatorname{id}_\bC\to\U\FF$, such that the following condition holds:
\bit
\item For every pair of objects $C\in v\bC$ and $D\in v\bD$, and for every morphism $\phi:C\to\U(D)$, there exists a unique morphism $\ol \phi:\FF(C)\to D$ such that the following diagram commutes:
\[
\begin{tikzcd}
C \arrow{rrdd}{\phi} \arrow[swap]{dd}{\eta_C} \\%
~&~&~\\
\U(\FF(C)) \arrow{rr}{\U(\ol\phi)}& ~ & \U(D).
\end{tikzcd}
\]
\eit
In this set-up, $\FF$ and $\U$ are called the \emph{left} and \emph{right adjoints}, respectively, and $\eta$ is the \emph{unit} of the adjunction. The \emph{$\bC$-free objects in $\bD$} are the objects in the image of $\FF$, i.e.~those of the form $\FF(C)$ for $C\in v\bC$.
\end{defn}
\begin{thm}\label{thm:adjoint}
The functor
\[
\FF:\PA\to\RSS:P\mt\C_P
\]
is a left adjoint to the forgetful functor
\[
\U:\RSS\to\PA:S\mt P(S).
\]
Consequently, the $\PA$-free objects in the category $\RSS$ (in the sense of Definition \ref{defn:ad}) are precisely the chain semigroups.
\end{thm}
\pf
Consulting Definition \ref{defn:ad}, we need a natural transformation $\eta:\operatorname{id}_\PA\to\U\FF$. Since we have already observed that $\U\FF=\operatorname{id}_\PA$, we can take $\eta=(\operatorname{id}_P)_{P\in v\PA}$. For $P,P'\in v\PA$ and $\phi:P\to P'$, the diagram in Definition \ref{defn:nt} becomes
\[
\begin{tikzcd}
P \arrow{rr}{\operatorname{id}_P} \arrow[swap]{dd}{\phi} & ~ & P \arrow{dd}{\phi} \\%
~&~&~\\
P' \arrow{rr}{\operatorname{id}_{P'}}& ~ & P',
\end{tikzcd}
\]
and this obviously commutes.
To verify that $(\FF,\U,\eta)$ is an adjunction $\PA\to\RSS$, we need to show that:
\bit
\item For every projection algebra $P\in v\PA$ and regular $*$-semigroup $S\in v\RSS$, and for every morphism $\phi:P\to P(S)$, there exists a unique $*$-semigroup homomorphism $\ol\phi:\C_P\to S$ such that the following diagram commutes:
\[
\begin{tikzcd}
P \arrow{rrdd}{\phi} \arrow[swap]{dd}{\operatorname{id}_P} \\%
~&~&~\\
P \arrow{rr}{\ol\phi|_P}& ~ & P(S).
\end{tikzcd}
\]
\eit
For such $P$, $S$ and $\phi$, we take $\ol\phi=\Phi:\C_P\to S$, as in Theorem \ref{thm:free}, and the required properties (uniqueness and commutativity) follow from the theorem.
\epf
It follows from Theorem \ref{thm:adjoint} that we may rightly speak of $\C_P$ as `the free regular $*$-semigroup with projection algebra~$P$'.
There is a vast literature on \emph{free idempotent-generated semigroups} $\operatorname{FIG}(E)$ over \emph{biordered sets}~$E$; see for example \cite{Easdown1984,Easdown1984b,Easdown1984c,Easdown1984d,Easdown1989, NP1980,Nambooripad1979,GR2012,GY2014,DGR2017, Easdown1985,BMM2009, GR2012b,DR2013,DG2014,YDG2015,DDG2019,Dolinka2021}. A long-standing folklore conjecture was that the maximal subgroups of any $\operatorname{FIG}(E)$ are all free groups. The first counterexample was constructed in~\cite{BMM2009}, and then the conjecture was proven to be \emph{maximally} false in~\cite{GR2012}, where it was shown that \emph{every} group appears as the maximal subgroup of some free idempotent-generated semigroup; this result has been reproved a number of times \cite{GY2014,DR2013,YDG2015}. Since then, a number of important studies have computed maximal subgroups arising from biordered sets of natural families of semigroups, such as transformation monoids \cite{GR2012b} and linear monoids~\cite{DG2014}. We believe it would be very interesting to study the free (idempotent/projection-generated) regular $*$-semigroups~$\C_P$ along similar lines. Natural questions include the following:
\begin{prob}\label{prob:CP}
\ben
\item \label{Q1} Are maximal subgroups of $\C_P$ always free?
\item \label{Q2} Can any group appear as the maximal subgroup of some $\C_P$?
\item \label{Q3} What are the maximal subgroups of $\C_P$, when $P$ is the projection algebra of some natural family of regular $*$-semigroups (e.g.~partition, Brauer or Temperley-Lieb monoids)?
\item \label{Q4} Given a regular $*$-semigroup $S$, with projection algebra $P=P(S)$ and biordered set $E=E(S)$, how are the free semigroups $\C_P$ and $\operatorname{FIG}(E)$ related? Since $E$ is a \emph{regular} biordered set, the same question can be asked of the free \emph{regular} idempotent-generated semigroup $\operatorname{FRIG}(E)$; cf.~\cite{Pastijn1980}.
\een
\end{prob}
Some of these questions will be explored in \cite{AJN}. We will say a little about the last question in Example \ref{eg:2x2} below. One could also study structural properties of $\C_P$, or consider decision problems, as in \cite{Dolinka2021,DDG2019,DGR2017}. The above papers typically study free idempotent-generated semigroups via Easdown's presentation~\cite{Easdown1985}. To study~$\C_P$, one has the option of using its presentation from Theorem \ref{thm:pres} or its direct definition as the chain semigroup of $P$.
\begin{eg}\label{eg:2x2}
Let $P$ be an arbitrary set, and consider again the square band over $P$, i.e.~the regular $*$-semigroup $S=P\times P$, with operations
\[
(p,q)(r,s)=(p,s) \AND (p,q)^*=(q,p).
\]
The projections of $S$ are of the form $(p,p)$, for $p\in P$. Identifying $(p,p)\equiv p$, we see that $P(S)\equiv P$. For each $p\in P$, the operation $\th_p$ is the constant map with image $\{p\}$, and ${\F}=\nab_P$ is the universal relation.
In the special case that $P=\{p,q\}$ has size $2$, the chain semigroup
\[
\C_P = \{[p] , [q] , [p,q] , [q,p] \}
\]
has size $4$, and of course $\C_P\cong S$. This can also be seen by writing down the presentation from Theorem \ref{thm:pres}, which simplifies (writing $x_p\equiv p$ and $x_q\equiv q$) to
\[
\pres{p,q}{p^2=p,\ q^2=q,\ pqp=p,\ qpq=q}.
\]
It is clear that $\{p,q,pq,qp\}$ is a set of normal forms, i.e.~a set of representatives of equivalence-classes of words.
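For instance, modulo the relations we have $pq\cdot qp=pq^2p=pqp=p$ and $pq\cdot pq=(pqp)q=pq$, in agreement with the square band products $(p,q)(q,p)=(p,p)\equiv p$ and $(p,q)(p,q)=(p,q)$.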
On the other hand, the biordered set $E=E(S)$ of $S$ is simply $E=S$, and it is easy to see that $\operatorname{FIG}(E)$ is infinite; indeed, simple computations in GAP \cite{GAP,Semigroups} show that maximal subgroups of $\operatorname{FIG}(E)$ are infinite cyclic. This all shows that the answer to the fourth question in Problem \ref{prob:CP} will not simply be that $\C_P$ and $\operatorname{FIG}(E)$ are always isomorphic.
\end{eg}
\section{\boldmath Fundamental regular $*$-semigroups}\label{chap:F}
In this chapter we turn our attention to \emph{fundamental} regular $*$-semigroups. Many of the ideas discussed here exist in the literature in a different form \cite{Imaoka1980,Imaoka1983,Yamada1981,Jones2012}, but we feel that our groupoid approach leads to some clearer constructions and proofs.
We begin in Section \ref{sect:muS} by obtaining a number of formulations for the maximum projection-separating congruence $\mu_S$ on a regular $*$-semigroup~$S$; see Proposition \ref{prop:muS} and Corollary \ref{cor:muS}. In Section \ref{sect:O} we construct the maximum fundamental regular $*$-semigroup $\M_P$ with a given projection algebra $P$; see Definition \ref{defn:OP}.
Theorems \ref{thm:SOP} and \ref{thm:fund} establish various universal properties of the semigroup $\M_P$. Specifically, we use the $\vt$ maps on chained projection groupoids to show that for any regular $*$-semigroup $S$ with projection algebra $P(S)=P$, there is a canonical $*$-semigroup homomorphism ${\phi_S:S\to\M_P}$, and that $\im(\phi_S)\cong S/\mu_S$ is the fundamental image of $S$. In particular, if $S$ happens to be fundamental itself, then $\phi_S$ is an embedding of $S$ in $\M_P$. At times we compare our construction of~$\M_P$ with others from the literature. In Section \ref{sect:FIG} we consider idempotent-generated fundamental regular $*$-semigroups. In particular, Theorem \ref{thm:muCP} shows that (up to isomorphism) there is exactly one such semigroup with a given projection algebra, and gives a number of ways to construct it.
\subsection{The maximum idempotent-separating congruence}\label{sect:muS}
Recall that a congruence $\si$ on a semigroup $S$ is \emph{idempotent-separating} if
\begin{align*}
e\mr\si f &\Implies e=f &&\text{for all $e,f\in E(S)$.}
\intertext{A \emph{$*$-congruence} on a regular $*$-semigroup $S$ is a congruence $\si$ that is also compatible with $*$:}
a\mr\si b &\Implies a^*\mr\si b^* &&\text{for all $a,b\in S$.}
\intertext{We say that such a $\si$ is \emph{projection-separating} if}
p\mr\si q &\Implies p=q &&\text{for all $p,q\in P(S)$.}
\end{align*}
Any such projection-separating $*$-congruence $\si$ is idempotent-separating, as for $e,f\in E(S)$,
\[
e\mr\si f \implies e^*\mr\si f^* \implies ee^*\mr\si ff^* \implies ee^* = ff^* \implies e\mathrel{\mathscr R} f \ANDSIM e\L f.
\]
It follows that $e\mr\si f\implies e\H f \implies e=f$, since an $\H$-class can contain at most one idempotent.
\begin{defn}\label{defn:F}
A regular $*$-semigroup $S$ is \emph{fundamental} if it has no non-trivial idempotent-separating (equivalently, projection-separating) $*$-congruences, i.e.~if the only idempotent-separating $*$-congruence is the trivial relation $\De_S=\set{(a,a)}{a\in S}$.
\end{defn}
Given a regular $*$-semigroup $S$, it was shown in \cite[Theorem 4]{Imaoka1980} and \cite[Theorem 2.4]{Yamada1981} that the relation
\begin{equation}\label{eq:muS}
\mu_S = \set{(a,b)\in S\times S}{a^*pa=b^*pb\text{ and }apa^*=bpb^*\text{ for all }p\in P}
\end{equation}
is the maximum projection-separating $*$-congruence on $S$, and also the maximum idempotent-separating (ordinary semigroup) congruence on $S$. It follows that a regular $*$-semigroup is fundamental as a $*$-semigroup (as in Definition \ref{defn:F}) if and only if it is fundamental as a semigroup (i.e.~has no non-trivial idempotent-separating (semigroup) congruences). It also follows that~$S/\mu_S$ is the unique fundamental quotient of $S$ with the same projection algebra (up to isomorphism); we call $S/\mu_S$ the \emph{fundamental image} of $S$.
To keep the paper self-contained, it is worth briefly sketching why \eqref{eq:muS} holds, i.e.~why the stated relation is the maximum projection-separating $*$-congruence:
\bit
\item First, it is easy to check that $\mu_S$ is a $*$-congruence.
\item It is projection-separating because for $p,q\in P$,
\[
(p,q)\in\mu_S \implies p=p^*pp = q^*pq=qpq \implies p\leq q \ANDSIM q\leq p,
\]
which shows that $(p,q)\in\mu_S\implies p=q$.
\item It is the \emph{maximum} such, because if $\si$ is an arbitrary projection-separating $*$-congruence, then for any $a,b\in S$ and $p\in P$,
\[
a\mr\si b \implies a^*\mr\si b^* \implies a^*pa\mr\si b^*pb \implies a^*pa=b^*pb \ANDSIM apa^*=bpb^* ,
\]
which shows that $(a,b)\in\si \implies (a,b)\in\mu_S$, i.e.~that $\si\sub\mu_S$.
\eit
It turns out that the congruence $\mu_S$ has a useful alternative description, in terms of our $\vt$ and~$\Th$ maps:
\begin{prop}\label{prop:muS}
If $S$ is a regular $*$-semigroup, then the maximum idempotent-separating congruence of $S$ is the relation
\begin{align*}
\mu_S = \set{(a,b)\in S\times S}{\vt_a=\vt_b} &= \set{(a,b)\in S\times S}{\Th_a=\Th_b \text{ and }\Th_{a^*}=\Th_{b^*}}\\
&= \set{(a,b)\in S\times S}{\bd(a)=\bd(b) \text{ and }\Th_a=\Th_b}.
\end{align*}
\end{prop}
\pf
It follows immediately from \eqref{eq:ThaS} that $\Th_a=\Th_b \iff a^*pa=b^*pb$ for all $p\in P$. Combining this with the definition of $\mu_S$ in \eqref{eq:muS}, we have
\[
\mu_S = \set{(a,b)\in S\times S}{\Th_a=\Th_b \text{ and }\Th_{a^*}=\Th_{b^*}}.
\]
It therefore remains to show that the following are equivalent, for all $a,b\in S$:
\ben
\item \label{mu1} $\vt_a=\vt_b$
\item \label{mu2} $\Th_a=\Th_b$ and $\Th_{a^*}=\Th_{b^*}$,
\item \label{mu3} $\bd(a)=\bd(b)$ and $\Th_a=\Th_b$.
\een
Equivalence of \ref{mu1} and \ref{mu3} follows quickly from the fact that $\vt_a=\Th_a|_{\bd(a)^\da}$ and $\Th_a=\th_{\bd(a)}\vt_a$.
\pfitem{\ref{mu1}$\implies$\ref{mu2}} Suppose $\vt_a=\vt_b$. Considering the domains of these mappings (see \eqref{eq:vta}), it follows that $\bd(a)=\bd(b)$, and then
\[
\Th_a = \th_{\bd(a)}\vt_a = \th_{\bd(b)}\vt_b = \Th_b.
\]
By Lemma \ref{lem:vtavta*}, we also have $\vt_{a^*}=\vt_a^{-1}=\vt_b^{-1}=\vt_{b^*}$. But then the previous argument also gives $\Th_{a^*}=\Th_{b^*}$.
\pfitem{\ref{mu2}$\implies$\ref{mu3}} Now suppose $\Th_a=\Th_b$ and $\Th_{a^*}=\Th_{b^*}$. Considering the range of $\Th_{a^*}=\Th_{b^*}$ (see~\eqref{eq:Tha}), it follows that $\bd(a)=\bd(b)$.
\epf
This also leads to a somewhat simpler equational characterisation of $\mu_S$, as compared to the original in~\eqref{eq:muS}.
For another equivalent formulation, see for example \cite[Corollary 4.6]{NS1978}.
\begin{cor}\label{cor:muS}
If $S$ is a regular $*$-semigroup, then the maximum idempotent-separating congruence of $S$ is the relation
\begin{align*}
\mu_S &= \set{(a,b)\in S\times S}{aa^*=bb^*,\ a^*pa=b^*pb \text{ for all } p\leq aa^*} \\
&= \set{(a,b)\in S\times S}{a^*a=b^*b,\ apa^*=bpb^* \text{ for all } p\leq a^*a} .
\end{align*}
\end{cor}
\pf
By Proposition \ref{prop:muS} we have $(a,b)\in\mu_S \iff \vt_a=\vt_b$. This is of course equivalent to the following two conditions:
\ben
\item \label{muS1} $\operatorname{dom}(\vt_a)=\operatorname{dom}(\vt_b)$, and
\item \label{muS2} $p\vt_a=p\vt_b$ for all $p\in\operatorname{dom}(\vt_a)$.
\een
Since $\operatorname{dom}(\vt_a)=(aa^*)^\da$ and $\operatorname{dom}(\vt_b)=(bb^*)^\da$, \ref{muS1} is equivalent to $aa^*=bb^*$. Combining this with~\eqref{eq:vtaS}, it follows that \ref{muS2} is equivalent to $a^*pa=b^*pb$ for all $p\leq aa^*$.
This gives the first claimed expression for $\mu_S$. The second follows from the first because $\mu_S$ is a $*$-congruence, or from the fact that $\vt_a=\vt_b\iff\vt_{a^*}=\vt_{b^*}$ (cf.~Lemma \ref{lem:vtavta*}).
\epf
\subsection[Maximum fundamental regular $*$-semigroups]{\boldmath Maximum fundamental regular $*$-semigroups}\label{sect:O}
Coming from the other direction, it has also long been known that for any projection algebra~$P$, there is a unique `maximum' fundamental regular $*$-semigroup with projection algebra~$P$ (up to isomorphism). This maximum semigroup has been constructed in a number of ways; the papers \cite{Jones2012,Yamada1981,Imaoka1983,Imaoka1980,NP1985} start from projection algebras (or similar structures), while \cite{NP1985} also contains an alternative approach using certain special biordered sets. As an application of our groupoid approach, in this section we provide an alternative construction. Although we of course arrive at the same semigroup (up to isomorphism), we believe our approach is a little more transparent. See Definition \ref{defn:OP} and Theorems~\ref{thm:SOP} and~\ref{thm:fund}, and also Remark~\ref{rem:TPTPopp}.
For the rest of this section we fix a projection algebra $P$, as in Definition \ref{defn:P}, and all of the data that comes with it, i.e.~the unary operations $\th_p$, the relations $\leq$, $\leqF$ and $\F$, and so on. We also write $\P=\P(P)$ and $\C=\C(P)$ for the path category and chain groupoid of $P$; cf.~Definitions~\ref{defn:PP} and \ref{defn:CP}.
Recall that for $p\in P$, we have the down-set
\[
p^\da = \set{q\in P}{q\leq p}.
\]
If $s,t\in p^\da$, then by Lemma \ref{lem:thpthq} we have $s\th_t = s\th_t\th_p\leq p$. This shows that $p^\da$ is closed under each operation $\th_t$ ($t\in p^\da$), and is hence a projection algebra in its own right. (This is not to say it is a subalgebra of $P$, as it might not be closed under some $\th_q$ for $q\in P\setminus p^\da$. On the other hand, any down-set $p^\da$ \emph{is} a $\diamond$-subalgebra of $P$; cf.~Remark \ref{rem:diamond}.)
\begin{defn}\label{defn:Piso}
For $p,q\in P$, a \emph{$P$-isomorphism} $p\to q$ is a projection algebra isomorphism~${p^\da\to q^\da}$, i.e.~a bijection $\al:p^\da\to q^\da$ satisfying
\begin{equation}\label{eq:Piso}
(s\th_t)\al = (s\al)\th_{t\al} \qquad\text{for all $s,t\in p^\da$.}
\end{equation}
We write $\M(p,q)$ for the set of all such $P$-isomorphisms $p\to q$, and we set
\[
\M=\M(P)=\bigcup_{p,q\in P}\M(p,q).
\]
It is easy to see that $\M$ is a groupoid, under ordinary function composition and inversion, and with objects/identities $v\M = \set{\operatorname{id}_{p^\da}}{p\in P}$. As usual we identify $v\M\equiv P$, \emph{viz.}~${p\equiv\operatorname{id}_{p^\da}}$. We call $\M$ the \emph{Munn groupoid of $P$}, in honour of the early work of Douglas Munn on fundamental inverse semigroups \cite{Munn1970}. Munn's semigroups were constructed using order-isomorphisms of principal ideals/down-sets of semilattices, and this approach has been highly influential in studies of inverse semigroups and their various generalisations.
\end{defn}
We begin with the following simple lemma.
\begin{lemma}\label{lem:OP}
Every $\al\in\M$ is order-preserving, in the sense that
\[
s\leq t \Implies s\al\leq t\al \qquad\text{for all $s,t\leq\bd(\al)$.}
\]
\end{lemma}
\pf
We have $s\leq t \implies s=s\th_t \implies s\al = (s\th_t)\al = (s\al)\th_{t\al} \implies s\al \leq t\al$.
\epf
For $\al\in\M$, and for $p\leq\bd(\al)$, we have $p^\da\sub\bd(\al)^\da=\operatorname{dom}(\al)$, so we can define the restriction
\[
{}_p\corest\al = \al|_{p^\da} .
\]
Here as usual we write $f|_A$ for the restriction of a function $f$ to a subset $A$ of its domain.
\begin{lemma}\label{lem:pal}
If $\al\in\M$, and if $p\leq\bd(\al)$, then ${}_p\corest\al\in\M(p,p\al)$. Consequently, $p^\da\al = (p\al)^\da$.
\end{lemma}
\pf
Of course it suffices to prove the first claim. By definition we have ${}_p\corest\al = \al|_{p^\da}$. For any $t\in p^\da$, we have $t\leq p$, and so $t\al\leq p\al$ by Lemma \ref{lem:OP}, and this says that $t\al\in(p\al)^\da$. Thus,~${}_p\corest\al$ maps $p^\da$ into $(p\al)^\da$. Since $\al$ is injective and satisfies~\eqref{eq:Piso}, so too does ${}_p\corest\al$. Finally, ${}_p\corest\al$ is surjective onto $(p\al)^\da$, since for any $t\leq p\al$ we have $t = (t\al^{-1})\al$, with $t\al^{-1}\leq(p\al)\al^{-1}=p$ since $\al^{-1}\in\M$ is order-preserving by Lemma \ref{lem:OP}.
\epf
\begin{lemma}\label{lem:F1}
For any projection algebra $P$, $\M=\M(P)$ is an ordered groupoid.
\end{lemma}
\pf
Again the proof is by an application of Lemma \ref{lem:C}. We have the usual order $\leq$ on $v\M=P$ (cf.~\eqref{eq:leqP}), and we have already defined the restrictions ${}_p\corest\al=\al|_{p^\da}$. It is then routine to check that properties \ref{O1'}--\ref{O5'} all hold.
\epf
As usual, the right-handed restrictions are defined by
\[
\al\rest_q = ({}_q\corest\al^{-1})^{-1} = (\al^{-1}|_{q^\da})^{-1} \qquad\text{for $\al\in\M$ and $q\leq\br(\al)$.}
\]
Next we wish to verify that $\M=\M(P)$ is a projection groupoid. To do so, we need to understand the $\vt$ and $\Th$ maps. It follows from Lemma \ref{lem:pal} that for $\al\in\M$, the map
\[
\vt_\al:\bd(\al)^\da\to\br(\al)^\da \qquad\text{is given by}\qquad p\vt_\al = \br({}_p\corest\al) = p\al \qquad\text{for all $p\leq\bd(\al)$.}
\]
In other words, we have $\vt_\al = \al$ (!) for all $\al\in\M$, so also $\Th_\al=\th_{\bd(\al)}\vt_\al=\th_{\bd(\al)}\al$. To summarise:
\begin{equation}\label{eq:Thal}
\vt_\al = \al \AND \Th_\al = \th_{\bd(\al)}\al \qquad\text{for all $\al\in\M$.}
\end{equation}
\begin{lemma}\label{lem:F2}
For any projection algebra $P$, $\M=\M(P)$ is a projection groupoid.
\end{lemma}
\pf
By Lemma \ref{lem:F1} we just need to verify \ref{G1}. For this, it is essentially trivial to see that~\ref{G1d} holds, as $\vt_\al=\al$ is a projection algebra morphism by definition, for all $\al\in\M$.
\epf
Consider a pair $p,q\in P$ with $p\F q$. Since the operation $\th_q$ maps into $q^\da$, so too does the restriction
\[
\ga_{pq} = \th_q|_{p^\da}.
\]
\begin{lemma}\label{lem:gapq}
For any $p,q\in P$ with $p\F q$, we have $\ga_{pq}\in\M(p,q)$, and $\ga_{pq}^{-1}=\ga_{qp}$.
\end{lemma}
\pf
This can be proved directly, but it also follows by combining results of earlier chapters. Recall from Proposition \ref{prop:Cid} that $(\C,\operatorname{id}_\C)$ is a chained projection groupoid, where $\C=\C(P)$ is the chain groupoid of $P$. Applying Lemma \ref{lem:ve}\ref{ve2} to this groupoid, and remembering that the evaluation map is $\ve=\operatorname{id}_\C$, we have
\[
\vt_{[p,q]} = \vt_{\operatorname{id}_\C[p,q]} = \th_q|_{p^\da} = \ga_{pq}.
\]
It follows from property \ref{G1d} that $\ga_{pq} = \vt_{[p,q]}$ is a projection algebra isomorphism $p^\da\to q^\da$, i.e.~that $\ga_{pq}\in\M(p,q)$. Combining the above with Lemma \ref{lem:vtavta*}, it follows that
\[
\ga_{qp} = \vt_{[q,p]} = \vt_{[p,q]^{-1}} = \vt_{[p,q]}^{-1} = \ga_{pq}^{-1}. \qedhere
\]
\epf
To show that $\M$ is a chained projection groupoid, we need to define an evaluation map $\ve=\ve(P):\C\to\M$. As usual, it is convenient to first define a functor
\[
\pi=\pi(P):\P\to\M: (p_1,\ldots,p_k) \mt \ga_{p_1p_2}\cdots\ga_{p_{k-1}p_k}.
\]
By convention, when $k=1$, we interpret this last expression simply as $\operatorname{id}_{p_1^\da}\equiv p_1$. This means that ${\pi(p)=\operatorname{id}_{p^\da}}\equiv p$ for all $p\in P$, i.e.~that $\pi$ is a $v$-functor. The next result concerns the congruence ${\approx}=\Om^\sharp$ on $\P$ from Definition~\ref{defn:approx}.
\begin{lemma}\label{lem:piI}
We have ${\approx}\sub\ker(\pi)$.
\end{lemma}
\pf
We need to check that $\pi(\s)=\pi(\t)$ for each pair $(\s,\t)\in\Om$. As usual this is clear when the pair has one of the forms \ref{Om1} or \ref{Om2}. So suppose instead that
\[
\s=\lam(e,p,f) = (e,e\th_p,f) \AND \t=\rho(e,p,f) = (e,f\th_p,f)
\]
for some projection $p\in P$, and some $p$-linked pair $(e,f)$, as in Definition \ref{defn:CP_P}. Then
\[
\pi(\s) = \ga_{e,e\th_p}\ga_{e\th_p,f} \AND \pi(\t) = \ga_{e,f\th_p}\ga_{f\th_p,f}.
\]
Both $\pi(\s)$ and $\pi(\t)$ map $e^\da\to f^\da$, and we must show that these maps are equal, i.e.~that $q\pi(\s)=q\pi(\t)$ for any $q\leq e$. By definition of the $\ga_{uv}$, and keeping $q=q\th_e$ in mind (as $q\leq e$), we calculate
\[
q\pi(\s) = q\th_{e\th_p}\th_f =_4 (q\th_e)\th_p\th_e\th_p\th_f =_5 q\th_e\th_p\th_f = q\th_p\th_f.
\]
We also have $q\pi(\t) = q\th_{f\th_p}\th_f =_4 q\th_p\th_f\th_p\th_f =_5 q\th_p\th_f$, so the proof is complete.
\epf
\begin{defn}\label{defn:veI}
Since $\C=\P/{\approx}$, it follows from Lemma \ref{lem:piI} that we have a well-defined functor
\[
\ve=\ve(P) :\C \to \M \GIVENBY \ve[\p] = \pi(\p) \qquad\text{for $\p\in\P$.}
\]
That is, $\ve[p_1,\ldots,p_k] = \ga_{p_1p_2}\cdots\ga_{p_{k-1}p_k}$ whenever $p_1\F\cdots\F p_k$ (and $\ve[p]=p\equiv\operatorname{id}_{p^\da}$ for $p\in P$).
\end{defn}
The proof of the next result makes use of the basic fact that for bijections $f:A\to B$ and $g:B\to C$, and for $X\sub A$, we have
\begin{equation}\label{eq:fgX}
(f\circ g)|_X=f|_X\circ g|_Y \qquad\text{where $Y=Xf$.}
\end{equation}
(This is essentially \ref{O5'} in the category of bijections.)
\begin{lemma}\label{lem:veI}
The functor $\ve=\ve(P):\C\to\M$ is an evaluation map.
\end{lemma}
\pf
As with Lemma \ref{lem:veS}, the proof boils down to showing that
\[
\pi( {}_q\corest\p) = {}_q\corest\pi(\p) \qquad\text{for all $\p\in\P$ and $q\leq\bd(\p)$.}
\]
We prove this by induction on $k$, the length of the path $\p=(p_1,\ldots,p_k)$. When $k=1$, both sides evaluate to $q\equiv\operatorname{id}_{q^\da}$. We now assume that $k\geq2$, and we write ${}_q\corest\p=(q_1,\ldots,q_k)$ as in~\eqref{eq:rest}. Also write $\p'=(p_1,\ldots,p_{k-1})$, noting that ${}_q\corest\p'=(q_1,\ldots,q_{k-1})$. By induction we have
\begin{align*}
\pi( {}_q\corest\p) = \pi(q_1,\ldots,q_k) = \ga_{q_1q_2}\cdots\ga_{q_{k-2}q_{k-1}}\ga_{q_{k-1}q_k} = \pi({}_q\corest\p')\circ\ga_{q_{k-1}q_k} &= {}_q\corest\pi(\p')\circ\ga_{q_{k-1}q_k}.
\intertext{On the other hand, using \eqref{eq:fgX}, and writing $Y = q^\da\ga_{p_1p_2}\cdots\ga_{p_{k-2}p_{k-1}}$, we have}
{}_q\corest\pi(\p) = (\ga_{p_1p_2}\cdots\ga_{p_{k-1}p_k}) |_{q^\da} = (\ga_{p_1p_2}\cdots\ga_{p_{k-2}p_{k-1}}) |_{q^\da} \circ \ga_{p_{k-1}p_k} |_Y &= {}_q\corest\pi(\p') \circ \ga_{p_{k-1}p_k} |_Y.
\end{align*}
Examining the last two conclusions, it remains to show that $\ga_{p_{k-1}p_k} |_Y = \ga_{q_{k-1}q_k}$. By Lemma \ref{lem:pal} and \eqref{eq:rest}, we have
\[
Y = q^\da(\ga_{p_1p_2}\cdots\ga_{p_{k-2}p_{k-1}}) = (q\ga_{p_1p_2}\cdots\ga_{p_{k-2}p_{k-1}})^\da = (q\th_{p_2}\cdots\th_{p_{k-1}})^\da = q_{k-1}^\da,
\]
and so
\[
\ga_{p_{k-1}p_k} |_Y = (\th_{p_k}|_{p_{k-1}^\da}) |_{q_{k-1}^\da} = \th_{p_k}|_{q_{k-1}^\da}.
\]
Since $\ga_{q_{k-1}q_k} = \th_{q_k}|_{q_{k-1}^\da}$, it remains to show that $t\th_{p_k}=t\th_{q_k}$ for all $t\in q_{k-1}^\da$. For this we use $t=t\th_{q_{k-1}}$ (as $t\leq q_{k-1}$), $q_k=q_{k-1}\th_{p_k}$ (by \eqref{eq:rest}), and the projection algebra axioms to calculate
\[
t\th_{q_k} = (t\th_{q_{k-1}})\th_{q_{k-1}\th_{p_k}} =_4 t\th_{q_{k-1}}\th_{p_k}\th_{q_{k-1}}\th_{p_k} =_5 t\th_{q_{k-1}}\th_{p_k} = t\th_{p_k}. \qedhere
\]
\epf
Note in particular that $\ve[p,q] = \ga_{pq}$ for $p,q\in P$ with $p\F q$.
\begin{prop}\label{prop:I}
If $P$ is a projection algebra, then $(\M,\ve)$ is a chained projection groupoid.
\end{prop}
\pf
By Lemmas \ref{lem:F2} and \ref{lem:veI}, it remains to verify \ref{G2}. To do so, let $(e,f)$ be a $\be$-linked pair, where $\be\in\M(q,r)$. Let the $e_i,f_i$ be as in \eqref{eq:e1e2f1f2}. We must show that $\lam=\rho$, where
\[
\lam = \lam(e,\be,f) = \ga_{ee_1}\circ{}_{e_1}\corest\be\circ\ga_{f_1f} \AND \rho = \rho(e,\be,f) = \ga_{ee_2}\circ{}_{e_2}\corest\be\circ\ga_{f_2f}.
\]
Keeping $\ga_{uv}=\th_v|_{u^\da}$ in mind, this is equivalent to showing that
\begin{equation}\label{eq:tebef}
t\th_{e_1}\be\th_f = t\th_{e_2}\be\th_f \qquad\text{for all $t\leq e$.}
\end{equation}
Starting with the left-hand side, we use $t=t\th_e$ (as $t\leq e$), $e_1=e\th_q$ (by \eqref{eq:e1e2f1f2}), and the projection algebra axioms to calculate
\[
t\th_{e_1}\be\th_f = t\th_e\th_{e\th_q}\be\th_f =_4 t\th_e\th_q\th_e\th_q\be\th_f =_5 t\th_e\th_q\be\th_f = t\th_q\be\th_f.
\]
On the other hand we have
\begin{align*}
t\th_{e_2}\be\th_f &= t\th_{f\Th_{\be^{-1}}}\be\th_f &&\text{by \eqref{eq:e1e2f1f2}}\\
&= t\Th_\be\th_f\Th_{\be^{-1}}\be\th_f &&\text{by \ref{G1b}}\\
&= t\th_q\be\th_f\th_r\be^{-1}\be\th_f &&\text{by \eqref{eq:Thal}}\\
&= t\th_q\be\th_f\th_r\th_f \\
&= t\th_q\be\th_f &&\text{by Lemma \ref{lem:pqp}\ref{pqp1}, as $f\leqF r$ (cf.~Lemma \ref{lem:LP}).}
\end{align*}
Examining the previous two conclusions, we have completed the proof of \eqref{eq:tebef}, and hence of the proposition.
\epf
Now that we know $(\M,\ve)$ is a chained projection groupoid for any projection algebra $P$, we can apply the functor $\bS$ to obtain a regular $*$-semigroup (cf.~Theorems \ref{thm:SGve} and~\ref{thm:iso}):
\begin{defn}\label{defn:OP}
For a projection algebra $P$, we define the regular $*$-semigroup
\[
\M_P = \bS(\M,\ve),
\]
where $\M=\M(P)$ and $\ve=\ve(P)$ are as in Definitions \ref{defn:Piso} and \ref{defn:veI}. Explicitly:
\bit
\item The elements of $\M_P$ are the $P$-isomorphisms $p^\da\to q^\da$ ($p,q\in P$), as in Definition \ref{defn:Piso}.
\item The product $\pr$ in $\M_P$ is given, for $\al,\be\in\M_P$ with $\br(\al)=p$ and $\bd(\be)=q$, by
\[
\al\pr\be = \al\rest_{p'} \circ \ga_{p'q'} \circ {}_{q'}\corest\be = (\al\th_{q'}\be)|_{(p'\al^{-1})^\da} \WHERE p'=q\th_p \ANd q'=p\th_q.
\]
\item The involution in $\M_P$ is given by ordinary inversion of bijections, $\al^*=\al^{-1}$.
\item The projections of $\M_P$ are the identity maps $\operatorname{id}_{p^\da}\equiv p$, for $p\in P$, and consequently ${P(\M_P)\equiv P}$.
\item The idempotents of $\M_P$ are the maps $p\pr q = \ga_{pq}$, for $(p,q)\in{\F}$.
\eit
We call $\M_P$ the \emph{Munn semigroup of $P$}.
\end{defn}
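To give a quick illustration, suppose $P$ is the projection algebra of a square band, as in Example \ref{eg:2x2}, so that each $\th_p$ is the constant map with image $\{p\}$. Then $q\leq p$ forces $q=q\th_p=p$, so every down-set $p^\da=\{p\}$ is a singleton, and each $\M(p,q)$ consists of the unique bijection $\{p\}\to\{q\}$. Identifying this bijection with the pair $(p,q)$, the product of Definition \ref{defn:OP} reduces to $(p_1,p_2)\pr(q_1,q_2)=(p_1,q_2)$, and the involution to $(p,q)^*=(q,p)$, so $\M_P$ is (isomorphic to) the square band $P\times P$ itself. In particular, it will follow from Theorem \ref{thm:fund} that the square band is fundamental.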
An ostensibly weaker version of the next result was proved in \cite[Theorem 5]{Imaoka1980}, albeit with a different formulation of $\M_P$; cf.~Remark \ref{rem:FP}.
Its statement, and that of the following result, refer to the canonical homomorphisms of Definition \ref{defn:canonical}.
\begin{thm}\label{thm:SOP}
If $S$ is a regular $*$-semigroup with projection algebra $P(S)=P$, then
\ben
\item \label{SOP1} there is a canonical $*$-semigroup homomorphism
\[
\phi_S:S\to\M_P \GIVENBY a\phi_S = \vt_a \qquad\text{for all $a\in S$,}
\]
\item \label{SOP2} $\ker(\phi_S) = \mu_S$ is the maximum idempotent-separating congruence of $S$, and consequently $S$ is fundamental if and only if $\phi_S$ is injective,
\item \label{SOP3} $\im(\phi_S)\cong S/\mu_S$ is the fundamental image of $S$.
\een
\end{thm}
\pf
Throughout the proof we write $\M=\M(P)$ and $\ve=\ve(P)$, as in Definitions \ref{defn:Piso} and~\ref{defn:veI}. We also write ${\bG(S)=(\G,\ve')}$.
\pfitem{\ref{SOP1}} By \ref{G1d}, we have $\vt_a\in\M(p,q)$ for any $a\in\G(p,q)$. It follows that we have a well-defined mapping
\[
\phi:\G\to\M:a\mt\vt_a.
\]
Our first goal is to show that $\phi$ is a chained projection functor $(\G,\ve')\to(\M,\ve)$; cf.~Definition \ref{defn:CPF}. It follows quickly from Lemma~\ref{lem:vt} that $\phi$ is an ordered $v$-functor. Thus, \ref{F1} holds trivially. It also follows from $v\phi=\operatorname{id}_P$ that the induced map $\Phi:\C\to\C$ at the top of the diagram in \ref{F2} is also an identity map. Thus, the diagram becomes:
\[
\begin{tikzcd}
\C \arrow{rr}{\operatorname{id}_\C} \arrow[swap]{dd}{\ve'} & ~ & \C \arrow{dd}{\ve} \\%
~&~&~\\
\G \arrow{rr}{\phi}& ~ & \M,
\end{tikzcd}
\]
and we must show that this commutes, i.e.~that
\[
(\ve'(\c))\phi = \ve(\c) \qquad\text{for all $\c\in\C$.}
\]
This is clear if $\c=[p]\equiv p$ for some $p\in P$, so suppose instead that $\c=[p_1,\ldots,p_k]$ with $k\geq2$. Then using the definitions, and the fact that each of the maps in the diagram is a functor, we calculate
\begin{align*}
(\ve'(\c))\phi &= (\ve'[p_1,p_2]\circ\cdots\circ\ve'[p_{k-1},p_k])\phi \\
&= (\ve'[p_1,p_2])\phi\circ\cdots\circ(\ve'[p_{k-1},p_k])\phi \\
&= \vt_{\ve'[p_1,p_2]}\circ\cdots\circ\vt_{\ve'[p_{k-1},p_k]} \\
&= \th_{p_2}|_{p_1^\da}\circ\cdots\circ\th_{p_k}|_{p_{k-1}^\da} &&\text{by Lemma \ref{lem:ve}\ref{ve2}}\\
&= \ga_{p_1p_2}\circ\cdots\circ\ga_{p_{k-1}p_k} \\
&= \ve[p_1,p_2]\circ\cdots\circ\ve[p_{k-1},p_k] = \ve(\c) ,
\end{align*}
as required.
We now know that $\phi$ is a chained projection functor $\bG(S)=(\G,\ve')\to(\M,\ve)$. By Lemma~\ref{lem:G1G2} (and Theorem \ref{thm:iso}), $\phi$ is also a $*$-semigroup homomorphism ${S=\bS(\bG(S)) \to \bS(\M,\ve) = \M_P}$, and of course $\phi=\phi_S$ is the map in the statement of the theorem. It is canonical because $\phi$ is a $v$-functor.
\pfitem{\ref{SOP2}} This follows immediately from Proposition \ref{prop:muS} and the definition of $\phi_S$.
\pfitem{\ref{SOP3}} This follows from the fundamental homomorphism theorem.
\epf
We can now quickly deduce the next result, which is essentially \cite[Theorem 3.2]{Imaoka1983}. It says that not only is $\M_P$ fundamental, but it is in fact the \emph{maximum} fundamental regular $*$-semigroup with projection algebra $P$.
\begin{thm}\label{thm:fund}
If $P$ is a projection algebra, then
\ben
\item \label{thmfund1} $\M_P$ is a fundamental regular $*$-semigroup with projection algebra $P$,
\item \label{thmfund2} any fundamental regular $*$-semigroup with projection algebra $P$ embeds canonically in~$\M_P$.
\een
\end{thm}
\pf
\firstpfitem{\ref{thmfund1}} By \eqref{eq:Thal}, the homomorphism $\phi_{\M_P}:\M_P\to\M_P:\al\mt\vt_\al$ from Theorem \ref{thm:SOP}\ref{SOP1} is simply the identity map, and is therefore of course injective. Consequently, $\M_P$ is fundamental by Theorem \ref{thm:SOP}\ref{SOP2}. We have already observed that $P(\M_P)$ is (isomorphic to) $P$.
\pfitem{\ref{thmfund2}} This follows immediately from Theorem \ref{thm:SOP}\ref{SOP2}.
\epf
\begin{rem}\label{rem:TPTPopp}
The above construction of $\M_P$ using $P$-isomorphisms is closest in spirit to the work of Yamada \cite{Yamada1981} and Jones \cite{Jones2012}, although Yamada's basic set-up was quite different; in place of projection algebras, he used `$P$-sets in fundamental regular warps'.
An alternative construction of $\M_P$ was given by Imaoka \cite{Imaoka1980}, and we briefly comment here on how to interpret this in our set-up. Let $\T_P$ denote the full transformation semigroup over $P$, i.e.~the semigroup of all (totally-defined) functions $P\to P$, under composition. For any regular $*$-semigroup $S$ with $P(S)=P$, Lemma \ref{lem:Thaprb} guarantees the existence of an (ordinary semigroup) homomorphism
\[
S\to\T_P:a\mt\Th_a.
\]
(Recall that the $\pr$ operation in $\bS(\bG(S))=S$ is simply the original product in $S$.) Since the involution of $S$ is an antihomomorphism, the map $a\mt\Th_{a^*}$ is an antihomomorphism $S\to\T_P$, but an (ordinary) homomorphism $S\to\T_P^{\operatorname{op}}$. The latter denotes the \emph{opposite semigroup} to $\T_P$; the product $\star$ in $\T_P^{\operatorname{op}}$ is given by $\al\star\be=\be\al$. Thus, we have a homomorphism into the direct product
\[
\xi_S:S\to\T_P\times\T_P^{\operatorname{op}} \GIVENBY a\xi_S = (\Th_a,\Th_{a^*}) \qquad\text{for $a\in S$.}
\]
By Proposition \ref{prop:muS}, $\ker(\xi_S)=\mu_S$ is the maximum idempotent-separating congruence of $S$, and $\im(\xi_S)\cong S/\mu_S$ is the fundamental image of $S$. In particular, when $S=\M_P$, the image of $\xi_{\M_P}$ is isomorphic to $\M_P$. Keeping \eqref{eq:Thal} in mind, this copy of $\M_P$, which we will denote by $\ol\M_P$, is precisely the subsemigroup
\[
\ol\M_P = \im(\xi_{\M_P}) = \set{\ol\al}{\al\in\M_P} \leq \T_P\times\T_P^{\operatorname{op}},
\]
where we write $\ol\al = (\th_p\al,\th_q\al^{-1})$ for $\al\in\M(p,q)$. The product in $\ol\M_P$ is simply composition (and reverse composition) of transformations. While $\T_P$ and $\T_P^{\operatorname{op}}$ are not regular $*$-semigroups for $|P|\geq2$ (as their $\mathrel{\mathscr D}$-classes are not square), of course $\ol\M_P$ is, and its involution is given by~${\ol\al^*=\ol{\al^{-1}}}$. Returning to the general case, the image of $\xi_S$ (where $S$ is an arbitrary regular $*$-semigroup with $P(S)=P$) is contained in $\ol\M_P$, and $\xi_S$ is an embedding $S\to\ol\M_P$ if and only if $S$ is fundamental.
Another, very different, way to construct $\M_P$ can be found in \cite[Section 3]{NP1985}, wherein the projection algebra $P$ is characterised as a specialised poset with a connection to a certain \emph{dual} poset $P^\circ$ (in the sense of Grillet \cite{Grillet1974a,Grillet1974b}), and $\M_P$ is realised as a subdirect product, similar to Imaoka's construction.
\end{rem}
We believe the following problem is of considerable interest:
\begin{prob}
Given a projection algebra $P$, describe the maximum fundamental regular $*$-semigroup $\M_P$.
\end{prob}
In particular, one could attempt to do this in the case that $P=P(S)$ arises from some natural regular $*$-semigroup $S$, or for some family of such semigroups. For example, the case that $S$ is a finite diagram monoid (including partition monoids) is the subject of an ongoing work by the current authors \cite{AJfund}.
\subsection[Idempotent-generated fundamental regular $*$-semigroups]{\boldmath Idempotent-generated fundamental regular $*$-semigroups}\label{sect:FIG}
Finally, we combine the ideas of this chapter and the previous one. The next result concerns the chain semigroup $\C_P$, as in Definition \ref{defn:C_P}, and the proof utilises the map ${\phi_{\C_P}:\C_P\to\M_P}$ from Theorem~\ref{thm:SOP}.
The fundamental image of $\C_P$ is the quotient $\C_P/\mu$, where $\mu=\mu_{\C_P}$ is the maximum idempotent-separating congruence. This quotient is of course idempotent-generated, as $\C_P$ is. For the following statement, recall that $\E(S)$ denotes the idempotent-generated subsemigroup of the semigroup $S$. To the best of our knowledge, no general results exist in the literature concerning idempotent-generated fundamental \emph{regular} semigroups, though they were used tangentially in the proof of \cite[Theorem 7.2]{Nambooripad1979} in the case of so-called \emph{solid} biordered sets.
\begin{thm}\label{thm:muCP}
Let $P$ be a projection algebra, and let $\mu=\mu_{\C_P}$ be the maximum idempotent-separating congruence on the chain semigroup $\C_P$.
\ben
\item \label{muCP1} Up to isomorphism, there is exactly one idempotent-generated fundamental regular $*$-semigroup with projection algebra $P$, namely $\C_P/\mu$.
\item \label{muCP2} For any fundamental regular $*$-semigroup $S$ with projection algebra $P=P(S)$, we have $\E(S)\cong\C_P/\mu$.
\een
\end{thm}
\pf
We prove both parts together. Suppose $S$ is an arbitrary fundamental regular $*$-semigroup with projection algebra $P=P(S)$. By Proposition \ref{prop:CP} and Theorem \ref{thm:SOP}, we have the two $*$-semigroup homomorphisms
\[
\begin{tikzcd}
\C_P \arrow{rr}{\ve} & ~ & S \arrow[hookrightarrow]{rr}{\phi_S} & ~ & \M_P.
\end{tikzcd}
\]
Here, $\ve$ is the evaluation map $\C\to\G(S)$, and $\phi_S$ is injective by Theorem \ref{thm:SOP}\ref{SOP2}. Since all three of the above semigroups have projection algebra~$P$, and since both of the above maps are canonical, it follows that the composition
\[
\ve\phi_S:\C_P\to\M_P
\]
is also canonical. But $\C_P$ is projection-generated, so there can be at most one canonical $*$-semigroup homomorphism $\C_P\to\M_P$. It follows that in fact $\ve\phi_S=\phi_{\C_P}$. But then
\begin{align*}
\ker(\ve) &= \ker(\ve\phi_S) &&\text{as $\phi_S$ is injective}\\
&= \ker(\phi_{\C_P}) &&\text{as $\ve\phi_S=\phi_{\C_P}$}\\
&= \mu_{\C_P} = \mu &&\text{by Theorem \ref{thm:SOP}\ref{SOP2}.}
\end{align*}
Combining this with Proposition \ref{prop:ES}\ref{ES3} and the fundamental homomorphism theorem, it follows that
\[
\E(S) = \im(\ve) \cong \C_P/\ker(\ve) = \C_P/\mu.
\]
This of course gives \ref{muCP2}. Part \ref{muCP1} also follows, since if $S$ happens to be idempotent-generated (and fundamental, with $P(S)=P$), then $S=\E(S)\cong\C_P/\mu$.
\epf
\begin{rem}\label{rem:FP}
Theorem \ref{thm:muCP} shows that (up to isomorphism) there is a unique idempotent-generated fundamental regular $*$-semigroup with a given projection algebra $P$. The theorem also gives a number of ways to get hold of such a semigroup, which for now we will denote by $\FF_P$. First, we can construct $\FF_P$ as the quotient $\C_P/\mu$, where $\mu=\mu_{\C_P}$. The elements of this quotient are $\mu$-classes of $P$-chains. For two $P$-chains $\c=[p_1,\ldots,p_k]$ and $\d=[q_1,\ldots,q_l]$, we combine Proposition \ref{prop:muS} with \eqref{eq:vtc} and \eqref{eq:Thc} to obtain
\begin{align*}
(\c,\d)\in\mu &\iff \th_{p_1}\cdots\th_{p_k} = \th_{q_1}\cdots\th_{q_l} \ANd \th_{p_k}\cdots\th_{p_1} = \th_{q_l}\cdots\th_{q_1}\\
&\iff \th_{p_1}\cdots\th_{p_k} = \th_{q_1}\cdots\th_{q_l} \ANd p_1=q_1\\
&\iff \th_{p_2}\cdots\th_{p_k} = \th_{q_2}\cdots\th_{q_l} \ANd p_1=q_1.
\end{align*}
Alternatively, we could take $\FF_P$ to be the idempotent-generated subsemigroup $\E(\M_P)$ of~$\M_P$. Elements of $\E(\M_P)$ are of the form
\[
\operatorname{id}_{p_1^\da} \pr\cdots\pr \operatorname{id}_{p_k^\da} \qquad\text{for $p_1,\ldots,p_k\in P$ with $p_1\F\cdots\F p_k$.}
\]
As in Definition \ref{defn:OP}, for such $p_i$ we have
\[
\operatorname{id}_{p_1^\da} \pr\operatorname{id}_{p_2^\da} \pr\cdots\pr \operatorname{id}_{p_k^\da}= \ga_{p_1p_2}\ga_{p_2p_3}\cdots\ga_{p_{k-1}p_k} = (\th_{p_2}\cdots\th_{p_k})|_{p_1^\da},
\]
so $\E(\M_P)$ consists of all such maps.
As another possibility, we could take $\FF_P=\E(\ol\M_P)$, where $\ol\M_P\leq\T_P\times\T_P^{\operatorname{op}}$ is the isomorphic copy of $\M_P$ discussed in Remark \ref{rem:TPTPopp}. Using the over-line notation from that remark, and using~\eqref{eq:Thp}, the projections of $\ol\M_P$ are of the form
\[
\ol p = (\Th_p,\Th_{p^*}) = (\th_p,\th_p) \qquad\text{for $p\in P$.}
\]
Keeping in mind that the operation in $\ol\M_P$ is (ordinary and reverse) composition, it follows that a general element of $\E(\ol\M_P)$ has the form
\[
(\th_{p_1}\cdots\th_{p_k},\th_{p_k}\cdots\th_{p_1}) \qquad\text{for $p_1,\ldots,p_k\in P$ with $p_1\F\cdots\F p_k$.}
\]
\end{rem}
\section{Inverse semigroups and inductive groupoids}\label{chap:I}
We have noted on a number of occasions that any inverse semigroup is a regular $*$-semigroup (with $a^*=a^{-1}$). In this final chapter, we look at how the theory developed above simplifies in the case of inverse semigroups. In particular, we will see that our results allow us to deduce the celebrated Ehresmann--Schein--Nambooripad (ESN) Theorem, stated below as Theorem \ref{thm:ESN}. This theorem was first explicitly formulated by Lawson in \cite[Theorem 4.1.8]{Lawson1998}, who named it as such in order to honour the contributions of the three stated mathematicians to the development of the result. We discussed this in some detail in Chapter \ref{chap:intro}, but see \cite[Chapter 4]{Lawson1998} and \cite{Hollings2012,HL2017} for more.
In what follows, we write $\IS$ for the category of inverse semigroups. Morphisms in $\IS$ are simply the semigroup homomorphisms. (Any semigroup homomorphism between inverse semigroups automatically respects the involutions.) We also write $\IG$ for the category of \emph{inductive groupoids}, i.e.~the ordered groupoids whose object set is a semilattice (under the order inherited from the containing groupoid). Morphisms in $\IG$ are the \emph{inductive functors}, i.e.~the ordered groupoid functors whose object maps are semilattice morphisms. The ESN Theorem is as follows:
\begin{thm}\label{thm:ESN}
The category $\IS$ of inverse semigroups (with semigroup homomorphisms) is isomorphic to the category $\IG$ of inductive groupoids (with inductive functors).
\end{thm}
We will give a proof of this theorem in Section \ref{sect:ESN}, relying on our above results on regular $*$-semigroups. But before we do, in Section \ref{sect:I} we show how we are somewhat-inevitably led to the ESN Theorem by considering the simplifications that arise when we specialise our general theory to inverse semigroups.
\subsection{The chained projection groupoid associated to an inverse semigroup}\label{sect:I}
In Theorem \ref{thm:iso} we showed that the functors
\[
\bG:\RSS\to\CPG \AND \bS:\CPG\to\RSS
\]
are mutually inverse isomorphisms between
\bit
\item the category $\RSS$ of regular $*$-semigroups, with $*$-semigroup homomorphisms, and
\item the category $\CPG$ of chained projection groupoids, with chained projection functors.
\eit
It follows that for any subcategory $\CC$ of $\RSS$, the functor $\bG$ restricts to an isomorphism from $\CC$ onto its image $\bG(\CC)$ in $\CPG$:
\begin{center}
\scalebox{0.8}{
\begin{tikzpicture}
\draw [fill=blue!20,rounded corners] (0,0)--(4,0)--(4,5)--(0,5)--cycle; \node (RSS) at (2,4) {$\RSS$};
\draw [fill=red!30,rounded corners] (1,1)--(3,1)--(3,3)--(1,3)--cycle; \node (IS) at (2,2) {$\CC$};
\begin{scope}[shift={(8,0)}]
\draw [fill=blue!20,rounded corners] (0,0)--(4,0)--(4,5)--(0,5)--cycle; \node (CPG) at (2,4) {$\CPG$};
\draw [fill=red!30,rounded corners] (1,1)--(3,1)--(3,3)--(1,3)--cycle; \node (???) at (2,2) {$\bG(\CC)$};
\end{scope}
\draw[-{latex}] (RSS) to [bend left = 10] (CPG);
\draw[-{latex}] (CPG) to [bend left = 10] (RSS);
\draw[dashed,{latex}-{latex}] (IS) to (???);
\node () at (6,4.7) {$\bG$};
\node () at (6,3.3) {$\bS$};
\end{tikzpicture}
}
\end{center}
In particular, the category $\IS$ of inverse semigroups is isomorphic to its image $\bG(\IS)$.
To understand this image, fix an inverse semigroup $S$, and let $(\G,\ve)=\bG(S)$ be the chained projection groupoid associated to $S$, as in Definitions \ref{defn:GS2} and \ref{defn:veS}. As usual we write
\[
E=E(S)=\set{e\in S}{e^2=e} \AND P=P(S)=\set{p\in S}{p^2=p=p^{-1}}
\]
for the semilattice of idempotents, and the projection algebra of $S$, respectively. Since each idempotent is self-inverse, it follows that in fact $P=E$. As there will be a number of orders in play, we will (temporarily) write $\leq'$ for the natural partial order on $E$, defined by ${e\leq'f \iff e=ef(=fe)}$. Note then that the (order-theoretic) meet of two idempotents is their product (in $S$), $e\wedge f=ef$, and so $e\leq' f\iff e=e\wedge f$.
As in \eqref{eq:thp}, and remembering that idempotents commute, the $\th$ maps on $E(=P)$ are given by:
\begin{equation}\label{eq:effe}
e\th_f = fef = ef = e\wedge f = f\th_e \qquad\text{for all $e,f\in E$.}
\end{equation}
It follows quickly from this that the $\leq$ and $\leqF$ relations, defined on $E(=P)$ in \eqref{eq:leqP} and \eqref{eq:leqF}, coincide with $\leq'$ defined above. In particular, $\leqF$ is a partial order, and so ${\F}={\leqF}\cap{\geqF}=\De_E$ is the trivial relation on $E$. It follows that the only $E$-paths are of the form $(e,e,\ldots,e)$, and each such path is $\approx$-equivalent to $(e)\equiv e$. This all shows that the chain groupoid $\C=\C(E)$ is \emph{trivial}, meaning that it consists entirely of its identity morphisms. That is, we have $\C=v\C=E$, and $e,f\in E$ are composable in $\C$ precisely when $e=f$. It follows that the evaluation map $\ve:\C\to\G$ (being a $v$-functor) is just the inclusion $\io:E\hookrightarrow\G(=S)$. This means that all the information contained in the pair $(\G,\ve)=(\G,\io)$ is in fact contained in the groupoid $\G=\G(S)$ alone.
Roughly speaking, the next result shows that the above situation is reversible.
\begin{prop}\label{prop:invGve}
Let $(\G,\ve)$ be a chained projection groupoid, and let $P=v\G$ and $\C=\C(P)$. Then $(\G,\ve)=\bG(S)$ for some inverse semigroup $S$ if and only if $\C=P$ is trivial (in which case~$\ve$ is the inclusion $\ve=\io:P\hookrightarrow\G$).
\end{prop}
\pf
We have already proved the forwards implication. Conversely, assume $\C=P$ is trivial, and (hence) $\ve$ is the inclusion $\ve=\io:P\hookrightarrow\G$. Let $S=\bS(\G,\io)$, so that $S$ is a regular $*$-semigroup with projection algebra $P(S)=P$, and $(\G,\io)=\bG(S)$; cf.~Theorem \ref{thm:iso}. We must show that $S$ is inverse, and we can do this by showing that idempotents commute. As usual, we write $E=E(S)$ for the set of idempotents of $S$. By Proposition \ref{prop:ES}\ref{ES2}, $E$ is contained in $\im(\ve)=\im(\io)$, and by assumption the latter is equal to $P$. Since $P\sub E$ for any regular $*$-semigroup, it follows that $E=P$. But also $E=P^2$ by Lemma~\ref{lem:PS1}\ref{PS12}, so it follows that $P=P^2$, i.e.~that $P$ is a subsemigroup of $S$. Now let $e,f\in E=P$. Since $e$, $f$ and $ef$ are all projections, we have $ef = (ef)^* = f^*e^* = fe$, as required.
\epf
\begin{rem}
Given an inverse semigroup $S$, the groupoid $\G=\G(S)$ is \emph{inductive}, as the object set $v\G=E=E(S)$ is a semilattice.
At the outset, one might expect there to also be a version of Proposition \ref{prop:invGve} stating that a chained projection groupoid $(\G,\ve)$ corresponds to an inverse semigroup if and only if $\G$ is inductive. However, it follows from Example \ref{eg:Brandt} that this is not the case. Indeed, there we considered two regular $*$-semigroups $S_1$ and $S_2$, with $S_2$ inverse and~$S_1$ non-inverse, and with the same groupoids $\G(S_1)=\G(S_2)$. Since $S_2$ is inverse, this groupoid is inductive. It follows that the non-inverse regular $*$-semigroup $S_1$ gives rise to an inductive groupoid $\G(S_1)$.
\end{rem}
The next result is essentially a reformulation of Proposition \ref{prop:invGve}, but it seems to be worth recording.
\begin{prop}
Let $S$ be a regular $*$-semigroup, with projection algebra $P=P(S)$. Then $S$ is inverse if and only if ${\F}=\De_P$.
\end{prop}
\pf
Clearly ${\F}=\De_P$ is equivalent to $\C=P$, and the result then follows from Proposition~\ref{prop:invGve} (and Theorem \ref{thm:iso}).
\epf
Now that we have some understanding of the image $\bG(\IS)$ of $\IS$ in $\CPG$ (cf.~Proposition~\ref{prop:invGve}), we are a step closer to proving Theorem \ref{thm:ESN}. One might now go on to show that $\bG(\IS)$ is isomorphic to~$\IG$, the category of inductive groupoids. However, there are still some subtle points remaining to be dealt with, some so subtle as to be almost invisible at this point. Rather than continue on this path, we take a more direct/canonical route in the next section, though we will make use of Proposition~\ref{prop:invGve} on a number of occasions.
\subsection{The Ehresmann--Schein--Nambooripad Theorem}\label{sect:ESN}
In Proposition \ref{prop:invGve} we characterised the chained projection groupoids $(\G,\ve)$ corresponding to inverse semigroups; essentially the proposition shows that these are precisely those in which the evaluation map $\ve$ carries no information whatsoever. Consequently, if we wish to prove the ESN Theorem (stated above as Theorem \ref{thm:ESN}) we may as well dispense with evaluation maps altogether (when dealing with inverse semigroups), and consider the groupoid construction $S\mt\G(S)$ as a functor $\IS\to\IG$, and prove that \emph{this} is a category isomorphism. This is of course the standard/canonical approach to the ESN Theorem, but the details required for the proof will be taken from our general theory developed in earlier chapters.
Following this canonical approach, we proceed by defining two functors
\[
\bbG:\IS\to\IG \AND \bbS:\IG\to\IS,
\]
and showing that they are mutually inverse isomorphisms.
First, for an inverse semigroup $S$, we take $\bbG(S)=\G=\G(S)$ to be the ordered groupoid constructed in Definition~\ref{defn:GS2}. As in Section \ref{sect:I}, we have $E=P$ (where as usual $E=E(S)$ and $P=P(S)$). Moreover, following Definition \ref{defn:GS2}, for $e,f\in E$ we have
\[
\text{$e\leq f$ in $\G$ $\IFF$ $e=ef$ in $S$ $\IFF$ $e\leq f$ in $E$.}
\]
In particular, $E=v\G$ is a semilattice (under the order inherited from $\G$), and so $\bbG(S)=\G$ is indeed an inductive groupoid.
Next, it is easy to see that any morphism $\phi:S\to S'$ in $\IS$ is an inductive functor $\bbG(S)\to\bbG(S')$. Indeed, $\phi$ is an ordered functor by Lemma \ref{lem:S1S2}, and the object map $v\phi$ is a semilattice morphism since for any $e,f\in v\bbG(S)=E(S)$ we have
\[
(e\wedge f)\phi=(ef)\phi=(e\phi)(f\phi)=e\phi\wedge f\phi.
\]
So we can simply take $\bbG(\phi)=\phi$ for any morphism $\phi$ from $\IS$.
To define the functor $\bbS$, consider an inductive groupoid $\G$, with object semilattice $E=v\G$. The semigroup $S=\bbS(\G)$ will have the same underlying set as $\G$. To define the product on~$S$, which we will denote by $\star$, consider two morphisms $a,b\in\G(=S)$, and write $e=\br(a)$ and $f=\bd(b)$. Since $E$ is a semilattice, the meet $g=e\wedge f$ exists, and since $g\leq e,f$ we can define $a\star b = a\rest_g\circ{}_g\corest b$. At this point it is not at all clear that $S=\bbS(\G)=(\G,\star)$ is a semigroup, let alone inverse, but we will deal with this shortly. For any morphism $\phi:\G\to\G'$ in $\IG$, we again define $\bbS(\phi)=\phi$.
\begin{lemma}\label{lem:IGPG}
If $\G$ is an inductive groupoid, with object semilattice $v\G=E$, then
\ben
\item \label{IGPG1} $E$ is a projection algebra with respect to the maps
\[
\th_e:E\to E \GIVENBY f\th_e = e\wedge f \qquad\text{for $e,f\in E$,}
\]
\item \label{IGPG2} $\G$ is a projection groupoid.
\een
\end{lemma}
\pf
\firstpfitem{\ref{IGPG1}} It is a routine matter to check that axioms \ref{P1}--\ref{P5} hold.
\pfitem{\ref{IGPG2}} It remains to show that condition \ref{G1} from Definition \ref{defn:PG} holds, and for this we verify~\ref{G1d}. To do so, fix some $a\in\G$, and write $p=\bd(a)$ and $q=\br(a)$. We must show that $\vt_a:p^\da\to q^\da$ is a projection algebra morphism, i.e.~that
\begin{align*}
(e\th_f)\vt_a &= (e\vt_a) \th_{f\vt_a} &&\hspace{-3cm}\text{for all $e,f\leq p$.}
\intertext{By the definition of the $\th$ maps, this amounts to showing that}
(e\wedge f)\vt_a &= e\vt_a\wedge f\vt_a &&\hspace{-3cm}\text{for all $e,f\leq p$,}
\end{align*}
i.e.~that $\vt_a$ is a semilattice morphism. But this follows quickly from basic order-theoretic facts:
\bit
\item $p^\da$ and $q^\da$ are both meet semilattices, as order ideals of the semilattice $E$, and
\item $\vt_a:p^\da\to q^\da$ and its inverse $\vt_a^{-1}=\vt_{a^{-1}}:q^\da\to p^\da$ are both order-preserving, by Lemma \ref{eq:vtaOP} (cf.~Lemma~\ref{lem:vtavta*}).
\eit
It follows that $\vt_a$ preserves meets.
\epf
\newpage
\begin{lemma}\label{lem:IGCPG}
If $\G$ is an inductive groupoid, with object semilattice $v\G=E$, then
\ben
\item \label{IGCPG1} $\C(E)=E$ is trivial (where the projection algebra structure on $E$ is as in Lemma \ref{lem:IGPG}),
\item \label{IGCPG2} $(\G,\io)$ is a chained projection groupoid, where $\io:E\hookrightarrow\G$ is the inclusion,
\item \label{IGCPG3} $\bbS(\G)=\bS(\G,\io)$ is an inverse semigroup.
\een
\end{lemma}
\pf
\firstpfitem{\ref{IGCPG1}} For any $e,f\in E$ we have $f\th_e = e\wedge f = f\wedge e = e\th_f$. We then obtain ${\F}=\De_E$, and hence $\C(E)=E$, in exactly the same way as in the discussion following \eqref{eq:effe}.
\pfitem{\ref{IGCPG2}} By Lemma \ref{lem:IGPG}, it remains to check that \ref{G2} holds, and this is essentially trivial. Indeed, consider some $b$-linked pair $(e,f)$, where $b\in\G$, and let $e_1,e_2,f_1,f_2\in E$ be as in \eqref{eq:e1e2f1f2}. Since ${\F}=\De_E$, it follows from Lemma \ref{lem:LP}\ref{LP3} that in fact $e_1=e_2=e$ and $f_1=f_2=f$. But then
\[
\lam(e,b,f) = \io[e,e]\circ{}_e\corest b\circ\io[f,f] = e\circ{}_e\corest b\circ f = {}_e\corest b \ANDSIM \rho(e,b,f) = {}_e\corest b.
\]
\pfitem{\ref{IGCPG3}} The underlying sets of $\bbS(\G)$ and $\bS(\G,\io)$ are both $\G$. To check that the operations~$\star$ and~$\pr$ coincide, let $a,b\in\G$. Write $e=\br(a)$ and $f=\bd(b)$, and also set $g=e\wedge f$. Following Definition~\ref{defn:pr} (applied to the chained projection groupoid $(\G,\io)$), we have
\[
a\pr b = a\rest_{e'} \circ \io[e',f'] \circ {}_{f'}\corest b \WHERE e'=f\th_e \ANd f'=e\th_f.
\]
But $e'=f\th_e=e\wedge f=g$, and similarly $f'=g$, so in fact
\[
a\pr b = a\rest_{g} \circ \io[g,g] \circ {}_{g}\corest b = a\rest_{g} \circ g \circ {}_{g}\corest b = a\rest_{g}\circ {}_{g}\corest b = a\star b.
\]
It remains to check that the regular $*$-semigroup $S=\bS(\G,\io)$ is inverse, and this follows from Proposition \ref{prop:invGve} and part \ref{IGCPG1} of the current lemma, as $(\G,\io)=\bG(S)$.
\epf
We call $(\G,\io)$, as in Lemma \ref{lem:IGCPG}\ref{IGCPG2}, the \emph{trivial chained projection groupoid} associated to the inductive groupoid $\G$.
\begin{lemma}\label{lem:IF}
If $\G_1$ and $\G_2$ are inductive groupoids, then any inductive functor $\G_1\to\G_2$ is a semigroup homomorphism $\bbS(\G_1)\to\bbS(\G_2)$.
\end{lemma}
\pf
Let $\phi:\G_1\to\G_2$ be an inductive functor. By Lemmas \ref{lem:G1G2} and \ref{lem:IGCPG}\ref{IGCPG3}, it suffices to show that $\phi$ is a chained projection functor $(\G_1,\io_1)\to(\G_2,\io_2)$, where these are the trivial chained projection groupoids associated to $\G_1$ and $\G_2$. Following Definition \ref{defn:CPF}, it remains to verify conditions \ref{F1} and \ref{F2}.
\pfitem{\ref{F1}} This follows from the fact that $v\phi$ is a semilattice morphism (as $\phi$ is inductive), and keeping the rule $e\th_f=e\wedge f$ in mind.
\pfitem{\ref{F2}} Writing $E_i=v\G_i$ ($i=1,2$), each $\C_i=E_i$ is trivial, so the induced map $\Phi$ at the top of the diagram in Definition \ref{defn:CPF} is just the object map $\Phi=v\phi$. The diagram then becomes:
\[
\begin{tikzcd}
E_1 \arrow{rr}{v\phi} \arrow[hook,swap]{dd}{\io_1} & ~ & E_2 \arrow[hook]{dd}{\io_2} \\%
~&~&~\\
\G_1 \arrow{rr}{\phi}& ~ & \G_2,
\end{tikzcd}
\]
and this obviously commutes.
\epf
It follows immediately from Lemmas \ref{lem:IGCPG}\ref{IGCPG3} and \ref{lem:IF} that $\bbS$ is indeed a functor $\IG\to\IS$. We can now tie together the loose ends:
\pf[\bf Proof of Theorem \ref{thm:ESN}.]
It remains to show that $\bbS$ and $\bbG$ are mutually inverse, i.e.~that
\ben
\item \label{SGS} $\bbS(\bbG(S)) = S$ for any inverse semigroup $S$, and
\item \label{GSG} $\bbG(\bbS(\G)) = \G$ for any inductive groupoid $\G$.
\een
Beginning with \ref{SGS}, fix an inverse semigroup $S$, and let $\G=\bbG(S)=\G(S)$. By Proposition \ref{prop:invGve}, ${\bG(S)=(\G,\io)}$ is the trivial chained projection groupoid associated to $\G$. Combining this with Lemmas \ref{lem:IGCPG}\ref{IGCPG3} and \ref{lem:SG}, it follows that
\[
\bbS(\bbG(S)) = \bbS(\G) = \bS(\G,\io) = \bS(\bG(S)) = S.
\]
For \ref{GSG}, fix an inductive groupoid $\G$, and write $S=\bbS(\G)$. By Lemma \ref{lem:IGCPG}\ref{IGCPG3} we have $S=\bS(\G,\io)$, where $(\G,\io)$ is the trivial chained projection groupoid associated to $\G$. Combining this with Lemma \ref{lem:GS} (and Definitions \ref{defn:GS2} and \ref{defn:veS}), it follows that
\[
(\G,\io) = \bG(\bS(\G,\io)) = \bG(S) = (\G(S),\ve(S)),
\]
and so $\G=\G(S) = \bbG(S) = \bbG(\bbS(\G))$.
\epf
We have now seen that the Ehresmann--Schein--Nambooripad Theorem (Theorem \ref{thm:ESN}) follows from our Theorem \ref{thm:iso}. Alternatively, one could also obtain a direct (and quite transparent) proof of the ESN Theorem by specialising our proof of Theorem \ref{thm:iso} to the inverse case. We will not give the full outline of this, but will comment briefly on the most involved step, which is to show that the product $\pr$ defined on the groupoid $\G$ is associative. In the inverse/inductive case, this product is defined by
\[
a\pr b = a\rest_{e} \circ {}_{e}\corest b \qquad\text{for $a,b\in\G$, where $e=\br(a)\wedge \bd(b)$.}
\]
It follows that for $a,b,c\in\G$, we have
\[
(a\pr b)\pr c = a\rest_f\circ b\rest_g\circ c\rest_h \AND a\pr(b\pr c) = a\rest_{f'}\circ b\rest_{g'}\circ c\rest_{h'}
\]
for some $f,g,h,f',g',h'\in E=v\G$. We can then show that $(a\pr b)\pr c$ and $a\pr(b\pr c)$ are equal by showing that they have equal domains and equal ranges. This follows from \eqref{eq:rabc} and \eqref{eq:dabc}, which were proved quite early in Section \ref{sect:SGve}.
\subsection{Fundamental inverse semigroups}\label{sect:FI}
So far we have examined the way that the general theory developed in Chapters \ref{chap:P}--\ref{chap:iso} simplifies in the case of inverse semigroups. Specifically, we were able to deduce the Ehresmann--Schein--Nambooripad Theorem (stated in Theorem \ref{thm:ESN}) from our results. It is then of course natural to follow the same program for the results of Chapters \ref{chap:E} and \ref{chap:F}.
The results of Chapter \ref{chap:E} become trivial/vacuous in the context of inverse semigroups, as idempotent-generated inverse semigroups are simply inverse semigroups consisting entirely of idempotents, i.e.~semilattices. It is therefore no surprise that the free (projection-generated) regular $*$-semigroup over a semilattice is simply the semilattice itself. There is another obvious way to think about this. Given a semilattice $E$, regarded as a projection algebra with $\th$ maps as in \eqref{eq:effe}, we have already seen that the chain groupoid $\C=\C(E)$ is trivial, in the sense that $\C=v\C=E$, with $e,f\in E$ being composable in $\C$ precisely when $e=f$. Recall from Definition \ref{defn:CP} that the chain semigroup of $E$ is the semigroup $\C_E=\bS(\C,\operatorname{id}_\C)$. Since~$\operatorname{id}_\C$ can also be seen as the inclusion $\operatorname{id}_\C=\io:E=\C\hookrightarrow\C$, it follows from Lemma \ref{lem:IGCPG}\ref{IGCPG3} that $\C_E=\bS(\C,\operatorname{id}_\C)=\bS(\C,\io)=\bbS(\C)$. This semigroup has underlying set $\C=E$, and the product~${\pr}={\star}$ is given by
\[
e\pr f = e\star f = e\rest_{e\wedge f}\circ{}_{e\wedge f}\corest f = (e\wedge f)\circ(e\wedge f) = e\wedge f.
\]
In other words, $\C_E$ is just $E$ itself.
On the other hand, the results of Chapter \ref{chap:F} do have interesting specialisations to inverse semigroups, as we now briefly discuss. Fix a semilattice $E$, which we again consider as a projection algebra, with $\th$ maps given in \eqref{eq:effe}. As in the proof of Lemma \ref{lem:IGPG}\ref{IGPG2}, the projection algebra morphisms $e^\da\to f^\da$ (for $e,f\in E$) are precisely the semilattice morphisms $e^\da\to f^\da$. Since the order and meet operations are inter-definable in $E$ (\emph{viz.}~$s\leq t \iff s=s\wedge t$), it follows that these morphisms are also precisely the order isomorphisms $e^\da\to f^\da$. It follows that the (inductive) groupoid $\M=\M(E)$ from Definition \ref{defn:Piso} consists precisely of all such order isomorphisms $e^\da\to f^\da$ ($e,f\in E$). The resulting semigroup $\M_E=\bbS(\M)$ has underlying set $\M$, and product~${\pr}={\star}$ given as follows. For $\al,\be\in\M_E$ with $\br(\al)=e$ and $\bd(\be)=f$, we have
\[
\al\pr\be = \al \star \be = \al\rest_{e\wedge f}\circ{}_{e\wedge f}\corest\be.
\]
This last expression is the composition in the groupoid $\M$ of the bijections $\al\rest_{e\wedge f}$ and ${}_{e\wedge f}\corest\be$; the (set-theoretic) range of the former is equal to the (set-theoretic) domain of the latter, and this is precisely the order ideal $e^\da\cap f^\da=(e\wedge f)^\da$. However, we can also think of $\al$ and $\be$ as partial bijections of $E$, i.e.~as elements of the \emph{symmetric inverse monoid} $\I_E$, and the above product $\al\pr\be=\al\star\be$ is simply the product of $\al$ and $\be$ in $\I_E$, i.e.~their composition as binary relations:
\[
\al\pr\be = \al\star\be = \set{(s,t)\in E\times E}{(s,g)\in\al \text{ and }(g,t)\in\be\ \text{ for some } g\in E}.
\]
In this way, we can think of $\M_E$ as an inverse subsemigroup of the symmetric inverse monoid~$\I_E$. This is in fact the original approach taken by Munn in his seminal paper \cite{Munn1970}, where the semigroup~$\M_E$ (considered as a subsemigroup of $\I_E$) was denoted $T_E$.
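For a simple illustration, suppose $E$ is a finite chain $e_1<e_2<\cdots<e_n$. The principal ideals $e_1^\da,\ldots,e_n^\da$ then have pairwise distinct sizes, so the only order isomorphisms between principal ideals are the identity maps $\operatorname{id}_{e_i^\da}$, and the above product reduces to $\operatorname{id}_{e_i^\da}\pr\operatorname{id}_{e_j^\da}=\operatorname{id}_{(e_i\wedge e_j)^\da}$. It follows that $\M_E\cong E$ in this case, so any fundamental inverse semigroup whose semilattice of idempotents is a finite chain is itself just that semilattice.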
\section{Introduction}
The field of deep reinforcement learning (RL) has showcased amazing results in recent times, solving tasks in robotic control \cite{duan2016benchmarking, lillicrap2015continuous}, games \cite{mnih2015human} and other complex environments. Despite such successes, deep RL algorithms are sample inefficient and sometimes unstable. Furthermore, they usually perform sub-optimally when dealing with sparse reward and partially observable environments. A further limitation of deep RL arises when rapid adaptation to changing tasks (dynamic goals) is required. Established methods only work well in fixed task environments. In an attempt to solve this problem, deep meta-reinforcement learning (meta-RL) methods \cite{finn2017model, rothfuss2018promp, zintgraf2019fast, duan2016rl, wang1611learning} were specifically devised. However, these methods are largely evaluated on dense reward, fully observable MDP environments, and perform sub-optimally in sparse reward, partially observable environments.
One key aspect in achieving fast adaptation in dynamic partially observable environments is the presence of appropriate learning structures and memory units that fit the specific class of learning problems. Therefore, standard model-free RL algorithms do not perform well in dynamic environments because they are tabula-rasa systems. They hold no knowledge in their architectures to allow fast and targeted learning when a change in the environment occurs. Upon a task change, these algorithms will try to randomly explore the action space to relearn from scratch a different, new policy. On the other hand, model-based RL holds knowledge of the structure of the environment, which in turn allows for rapid adaptation to changes in the environment, but such knowledge needs to be built manually into the system.
In this paper, we investigate the use of neuroevolution to autonomously evolve inborn knowledge \cite{soltoggio2018born} in the form of neural structures and plasticity rules with a specific focus on dynamic POMDPs that have posed challenges to current RL approaches. The neuroevolutionary approach that we propose is designed to solve rapid adaptation to changing tasks \cite{soltoggio2018born} in complex high dimensional partially observable environments. The idea is to test the ability of evolution to build an unconstrained neuromodulated network architecture with problem-specific learning skills that can exploit the latent space provided by an autoencoder. Thus, in the proposed system, an autoencoder serves as a feature extractor that produces low dimensional latent features from high dimensional environment observations. A neuromodulated network \cite{soltoggio2008evolutionary} receives the low dimensional latent features as input and produces the output of the system, effectively acting as high level controller. Evolved neuromodulated networks have shown computational advantages in various dynamic task scenarios \cite{soltoggio2008evolutionary, soltoggio2018born}.
The proposed approach is similar to that proposed in \cite{alvernaz2017autoencoder}. One key novelty is that our approach seeks to evolve selective plasticity with the use of modulatory neurons, and therefore, to evolve problem-specific neuromodulated adaptive systems. The relationships between image-pixel inputs and control actions in POMDPs are highly nonlinear and history dependent; therefore, an open question is whether neuroevolution can exploit latent features to evolve learning systems with inborn knowledge. Thus, we test the hypothesis that a neuromodulated evolved network can discover neural structures and their related plasticity rules to encode the required memory and fast adaptation mechanisms to compete with current deep meta-RL approaches.
We call the proposed system a Plastic Evolved Neuromodulated Network with Autoencoder (PENN-A), denoting the combination of the two neural components. We evaluate our proposed method in a POMDP environment where we show better performance in comparison to some non-evolutionary deep meta-reinforcement learning methods. Also, we evaluated the proposed method in the Malmo Minecraft environment to test its general applicability.
Two interesting findings from our experiments are that (i) the networks acquire through evolution the ability to recognise reward cues (i.e. environment cues that are associated with survival even when reward signals are not given) and (ii) the networks can evolve \emph{location} neurons that help solve the problem by detecting, and becoming active at, specific locations of the partially observable MDP. The evolved network topology allows for richer dynamics in comparison to fixed architectures such as hand-designed feed-forward or recurrent networks.
The next section reviews the related work. Following that, a formal task definition is presented. Next is the description of the proposed method employed in this work, followed by the evaluation of results. The PENN-A source code is made available at: \url{https://github.com/dlpbc/penn-a}.
\section{Related Work}
In reinforcement learning (RL) literature, meta-RL methods seek to develop agents that adapt to changing tasks in an environment or a set of related environments. Meta-RL \cite{schmidhuber1996simple, schweighofer2003meta} is based on the general idea of meta-learning \cite{bengio1992optimization, thrun1998learning, hochreiter2001learning} applied to the RL domain.
Recently, deep meta-RL has been used to tackle the problem of rapid adaptation in dynamic environments. Methods such as \cite{finn2017model, duan2016rl, wang1611learning, zintgraf2019fast, mishra2018a, rothfuss2018promp, rakelly2019efficient} use deep RL methods to train a meta-learner agent that adapts to changing tasks. These methods are mostly evaluated in dense reward, fully observable MDP environments. Furthermore, most methods are either memory based \cite{duan2016rl, wang1611learning, mishra2018a} or optimization based \cite{finn2017model, zintgraf2019fast}. Optimization based methods seek to find an optimal initial set of parameters (e.g. for an agent network) across tasks, which can be fine-tuned with a few gradient steps for each specific task presented to it. Therefore, a small amount of re-training is required to enable adaptation to every change in task. Memory based methods (implemented using a recurrent network or temporal convolution attention network) do not necessarily require fine tuning after initial training to enable adaptation. This is because memory-based agents learn to build a memory of past sequence of tasks and interactions, thus enabling them to identify change in task and adapt accordingly.
In the past, neuroevolution methods have been employed to solve RL tasks \cite{stanley2002evolving, mchale2004gasnets}, including adapting to changing tasks \cite{soltoggio2008evolutionary, blynel2002levels} in partially observable environments. These methods were evaluated in environments with high level feature observations. Recently, several approaches have been introduced that combine deep neural networks and neuroevolution to tackle high dimensional deep RL tasks \cite{alvernaz2017autoencoder, poulsen2017dlne, ha2018recurrent, salimans2017evolution, such2017deep}. These approaches can be divided into two major categories. The first category uses neuroevolution to optimize the entire deep network end to end \cite{salimans2017evolution, such2017deep, risi2019deep, risi2019improving}. The second category splits the network into parts (for example, a body and controller) where some part(s) (e.g. body) are optimized using gradient based methods and other part(s) (e.g. controller) are evolved using neuroevolution methods \cite{alvernaz2017autoencoder, poulsen2017dlne, ha2018recurrent}. Current deep neuroevolution methods are usually evaluated in fully observable MDP environments, where the task is fixed. Furthermore, after the training phase is completed, the weights of a trained network are fixed (the same is true for standard deep RL). The recent attention to neuroevolution for deep RL aims to present such approaches as a competitive alternative to standard gradient based deep RL methods for fixed task problems.
In the past, neural network based agents employing Hebbian-based local synaptic plasticity have been used to achieve behavioural adaptation with changing tasks \cite{floreano1996evolution, blynel2002levels, soltoggio2008evolutionary}. Such methods use a neuroevolution algorithm to optimize the parameters of the network when producing a new generation of agents. As an agent interacts with an environment during its lifetime in training or testing, the weights are adjusted in an online fashion (via a local plasticity rule), enabling adaptation to changing tasks. In \cite{floreano1996evolution, blynel2002levels} this technique was employed, and further extended to include a mechanism of gating plasticity via neuromodulation in \cite{soltoggio2008evolutionary}. These methods were evaluated in environments with low dimensional observations (with high level features) and not compared with deep (meta-)RL algorithms.
\section{Task Definition}
\label{sec:task-definition}
A POMDP environment $E$, defined by a sextuple ($\mathcal{S}$, $\mathcal{A}$, $\mathcal{P}$, $\mathcal{R}$, $\mathcal{O}$, $\Omega$), is employed in this work. $\mathcal{S}$ defines the state set, $\mathcal{A}$ the action set, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ the environment dynamics, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ the reward function, $\mathcal{O}$ the observation set, and $\Omega$ the function that maps states to observations.
The environment $E$ contains a number of related tasks. A task $\mathcal{T}_{i}$ is sampled from a distribution of tasks $\mathcal{T}$. The task distribution $\mathcal{T}$ can either be discrete or continuous. A sampled task is an instance of the partially observable environment $E$. The configuration of the environment (for example, the goal or reward function) varies across each task instance. An optimal agent is required to adapt its behaviour to task changes in the environment (and maximize the accumulated reward) from only a few interactions with the environment. When presented with a task $\mathcal{T}_i$, an optimal agent should initially explore, and subsequently exploit when the task is understood. When the task is changed (a new task $\mathcal{T}_j$ sampled from $\mathcal{T}$), the agent needs to re-explore the environment within a few episodes, and then start exploiting again once the new task has been understood.
In each task, an episode is defined as the trajectory $\tau$ of an agent's interactions in the environment, terminating at a terminal state. A trial consists of two or more tasks sampled from $\mathcal{T}$. The total number of episodes in a trial is kept fixed. A trial starts with an initial task $\mathcal{T}_i$ that runs for a number of episodes, and then the task is changed to other tasks (one after another) at different points within the trial (see Figure \ref{fig:fig-task-change}). The points at which a task change occurs are stochastically generated, and the task is changed before the start of the next episode. For example, when the number of tasks is set as $2$ (i.e. $\mathcal{T}_i$ and $\mathcal{T}_j$), the trial starts with task $\mathcal{T}_i$ which runs for a number of episodes, and it is replaced by task $\mathcal{T}_j$ for the remaining episodes in the trial. An agent is iteratively trained, with each iteration consisting of a fixed number of trials. The subsections below describe two environments in which the proposed system is evaluated.
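Before turning to the specific environments, the trial structure above can be made concrete with a minimal Python sketch of a single trial containing one task change. The interfaces used here (\texttt{env.set\_task}, \texttt{agent.act}, and so on) are illustrative assumptions and are not taken from the released PENN-A implementation.
\begin{verbatim}
import random

def run_trial(agent, env, task_distribution, num_episodes=100,
              change_window=(35, 65)):
    # Minimal sketch of one trial with a single task change; the change
    # point is drawn stochastically from change_window. All interfaces
    # (env.set_task, agent.act, ...) are illustrative assumptions.
    change_point = random.randint(*change_window)
    env.set_task(task_distribution.sample())
    total_reward = 0.0
    for episode in range(num_episodes):
        if episode == change_point:
            env.set_task(task_distribution.sample())  # switch task mid-trial
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)      # the agent adapts online (no reset)
            obs, reward, done, _ = env.step(action)
            total_reward += reward
    return total_reward                  # contributes to the fitness score
\end{verbatim}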
\begin{figure}
\centering
\includegraphics[scale=0.6]{fig-task-change.png}
\caption{Illustration of a dynamic environment and required behavior of a learning agent. An agent is required to learn to perform optimally and then exploit the learned policy until a change in the environment occurs, at which point the agent needs to learn again before exploiting.}
\label{fig:fig-task-change}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{fig-ctgraph-minecraft-env.png}
\caption{Environments (note, during execution, goal location is dynamic across episodes). (A) CT-graph instance, $b=2$ and $d=2$. (B) CT-graph instance, $b=2$ and $d=3$. (C) Malmo Minecraft instance (a double T-Maze), bird's eye view on top, with some sample observations at the bottom. The maze-end with the teal colour is the goal location.}
\label{fig:fig-ctgraph-minecraft-env}
\end{figure*}
\subsection{The Configurable Tree Graph Environment}
The configurable tree graph (CT-graph) environment is a graph abstraction of a decision making process. The complexity of the environment is specified via configuration parameters: a branching factor $b$ and a depth $d$, controlling the width and height of the graph. Additionally, it can be configured to be fully or partially observable. It contains the following types of states: start, wait, decision, end (leaf node of the graph) and crash. Each observation $o \in \mathcal{O}$ is a $12\times12$ grey-scale image. The total number of end states grows exponentially as the depth $d$ of the graph increases (see Figure \ref{fig:fig-ctgraph-minecraft-env}A and B).
In the experiments in this study, partial observability is configured by mapping all wait states to the same observation, and all decision states to the same observation. Also, $b$ is set to 2. Therefore, each decision state has two choices, splitting into two sub-graphs. The discrete action space is defined as \textit{choice 1}, \textit{choice 2} and \textit{wait action}. The \textit{wait action} is the correct action in a wait state. In a decision state, the correct action is one of \textit{choice 1} or \textit{choice 2}, each of which leads into one of the two sub-graphs. All incorrect actions lead to the crash state and episode termination.
An agent starts an episode in the start state, and the episode is completed when the agent traverses the graph to an end state or takes a wrong action in a state. Once an agent transitions from one state to the next, it cannot go back. In a task instance, one of the end states is set as the goal location. An agent receives a positive reward when it traverses to the goal location, and a reward of 0 at non-goal states. The agent may receive a negative reward in a crash state.
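For illustration only, a heavily simplified CT-graph with symbolic states (rather than image observations) and branching factor $b=2$ could be sketched in Python as follows. The real environment also has a distinct start state, a configurable wait-state length and image observations, and the crash penalty used below is an arbitrary placeholder.
\begin{verbatim}
import random

class MiniCTGraph:
    # Heavily simplified sketch of a branching-2 CT-graph with symbolic
    # states; the real environment emits 12x12 grey-scale images, has a
    # distinct start state and a configurable wait-state length.
    WAIT, CHOICE_1, CHOICE_2 = 0, 1, 2

    def __init__(self, depth=2, seed=None):
        self.depth = depth
        self.rng = random.Random(seed)
        self.set_task()

    def set_task(self):
        # one of the 2**depth end states becomes the goal location
        self.goal = tuple(self.rng.randint(0, 1) for _ in range(self.depth))

    def reset(self):
        self.path, self.state = [], 'wait'
        return self.state

    def step(self, action):
        if self.state == 'wait':
            if action != self.WAIT:
                return 'crash', -0.1, True           # wrong action: crash
            self.state = 'decision'
            return self.state, 0.0, False
        if action == self.WAIT:                      # wrong action at decision
            return 'crash', -0.1, True
        self.path.append(action - 1)                 # choice 1 -> 0, choice 2 -> 1
        if len(self.path) == self.depth:             # reached an end state
            reward = 1.0 if tuple(self.path) == self.goal else 0.0
            return 'end', reward, True
        self.state = 'wait'
        return self.state, 0.0, False
\end{verbatim}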
\subsection{Malmo Minecraft Environment}
Malmo \cite{johnson2016malmo} is an AI research platform built on top of the Minecraft video game. The platform is configurable, and it enables the construction of various worlds in which AI agents can be evaluated. In this work, a double T-maze was constructed, with the discrete action space \textit{left turn}, \textit{right turn} and \textit{forward action}. A task is defined based on the maze ends, requiring the agent to navigate to a specific maze end (goal location). The maze end that is set as the goal location varies across tasks. The agent only receives a positive reward when it navigates to the maze end that is the goal location. It receives a reward of 0 at every other time step. If the agent runs into a wall, the episode is terminated and it receives a negative reward. The agent receives a visual observation of its current view at each time step (hence it does not fully observe the entire environment). Each observation is a $32\times32$ RGB image based on a first-person view of the agent at each time step.
\section{Methods}
\label{sec:methods}
\begin{figure}
\includegraphics[scale=1.2]{fig-system-overview.pdf}
\caption{System overview, showcasing the feature extractor and controller components. In the controller, white and blue nodes are standard and modulatory neurons respectively. Modulatory connections facilitate selective plasticity in the network.}
\label{fig:fig-system-overview}
\end{figure}
We seek to develop an agent that is capable of continual adaptation through its life time (across episodes) - exploring, exploiting, re-exploring when the task changes and exploiting again. The system (specifically the controller or decision maker) is evolved to acquire knowledge about both the invariant and variant aspects of an environment (e.g. changing tasks).
The agent is modelled using two neural components with separate parameters and objectives: a deep network $F_\theta$ (used as a feature extractor and parameterized by $\theta$) and a neuromodulated network $G_\phi$ (serving as a controller and parameterized by $\phi$). Both components make up the overall system model $\mathcal{M}_{\theta, \phi}$. See Figure \ref{fig:fig-system-overview} for a general system overview. The presented architectural style is similar to a standard deep RL setup. However, it differs on two fronts: (i) the controller is a neuromodulated network (described in Section \ref{sec:controller}) rather than a standard neural network, and (ii) the training setup combines a gradient-based optimization method (backpropagation \cite{werbos1982applications, rumelhart1988learning}), a gradient-free optimization method (neuroevolution \cite{yao1999evolving, stanley2019designing}), and Hebbian-based synaptic plasticity to train the system. Using this setup, each neural component therefore has its own objective function. An autoencoder network was employed as the feature extractor, thus enabling the use of a Mean Squared Error (MSE) or Binary Cross Entropy (BCE) objective function:
\begin{displaymath}
\argmin_{\theta} \frac{1}{n} \sum_{i=1}^{n} (F_{\theta}(o_i) - o_i)^2
\end{displaymath}
\begin{displaymath}
\argmin_{\theta} - \frac{1}{n} \sum_{i=1}^{n} \left[ o_i \cdot \log(F_{\theta}(o_i)) + (1 - o_i) \cdot \log(1 - F_{\theta}(o_i)) \right]
\end{displaymath}
where $n$ is the number of training observations and $F_{\theta}(o_i)$ is the output of the autoencoder for observation $i$ (reconstructed observation). Each agent in the population uses the same feature extractor. The fitness function of the evolutionary algorithm is given by:
\begin{displaymath}
\argmax_{\phi} \sum_{\mathcal{T}_i \sim \mathcal{T}} \sum_{ep=1}^{z} R(\tau_{ep})
\label{eq:fitness}
\end{displaymath}
$\mathcal{T}_i$ represents a task sampled from the task distribution $\mathcal{T}$, and a single trial consists of two tasks as defined in Section \ref{sec:task-definition}. Also, $z$ is the number of episodes in which a task is kept fixed within a trial. It is stochastically generated and may differ between tasks in a trial within an interval. $R(\tau_{ep})$ is the accumulated reward of a trajectory of an episode $ep$, defined as:
\begin{equation}
R(\tau_{ep}) = \sum_{t=0}^{k} \mathcal{R}(s_t, G_\phi(F_\theta^{enc}(o_t)))
\end{equation}
where $\mathcal{R}(s, a)$ is the reward function that takes state and action as arguments and produces a scalar reward value. $F_{\theta}^{enc}$ is the same autoencoder feature extractor network earlier described, but denoting that we only want the output from the encoder (the latent features). Also, $t$ represents discrete time steps and $k$ is the length of the trajectory of an episode.
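A minimal Python sketch of how $R(\tau_{ep})$ is accumulated is given below: the controller acts on the latent features produced by the encoder, and only the summed reward is returned for the fitness computation. The method names are illustrative assumptions, not the actual PENN-A API.
\begin{verbatim}
def episode_return(controller, encoder, env):
    # Sketch of R(tau_ep): the controller acts on the latent features
    # produced by the encoder; only the summed reward is returned (it is
    # used for the fitness and never fed to the controller). Method names
    # (encoder.encode, controller.forward) are illustrative assumptions.
    obs, done, total = env.reset(), False, 0.0
    while not done:
        latent = encoder.encode(obs)         # F_theta^enc(o_t)
        action = controller.forward(latent)  # G_phi(latent)
        obs, reward, done, _ = env.step(action)
        total += reward
    return total
\end{verbatim}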
\subsection{Feature Extractor}
\label{sec:feature_extractor}
This neural component of the system is tasked with learning a good latent representation of the observations from the environment, which can be fed to the controller as input. In the CT-graph experiments, a fully connected autoencoder was employed (with a two-layer encoder and a two-layer decoder). In the Malmo Minecraft experiments, a convolutional autoencoder was employed (with a four-layer encoder and a four-layer decoder).
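For concreteness, a possible PyTorch sketch of the fully connected autoencoder used for the CT-graph observations is shown below; the hidden and latent dimensions are illustrative guesses, as they are not fixed here.
\begin{verbatim}
import torch.nn as nn

class FCAutoencoder(nn.Module):
    # Sketch of the fully connected autoencoder used for CT-graph
    # observations; the hidden and latent sizes are illustrative guesses.
    def __init__(self, obs_dim=12 * 12, hidden_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, obs_dim), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)       # latent features passed to the controller
        return self.decoder(z)    # reconstruction, trained with MSE or BCE
\end{verbatim}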
\subsection{Control Network (Decision Maker)}
\label{sec:controller}
This neural component takes the latent features of the feature extractor as its input, and produces an output which serves as the final output of the system (the action or behaviour of the system). It is a neuromodulated network (see Section \ref{sec:neuromodulated-network-dynamics}) that reproduces the model introduced in \cite{soltoggio2008evolutionary}. The network can evolve two neuron types: standard neurons and modulatory neurons. The output neuron(s) always belong to the standard neuron type.
The control network is parameterized by $\phi$. Unlike $\theta$ (which represents only the weights of the feature extractor network), $\phi$ consists of the weights, the architecture and the coefficients of the Hebbian-based plasticity rule of the network (described in Section \ref{sec:neuromodulated-hebbian-plasticity}), and it is evolved. Therefore, evolution is tasked with finding the architecture and plasticity rules, including the selective plasticity enabled by modulatory neurons acting on target neurons. The large search space that is granted to evolution allows for rich dynamics that include memory in the form of both recurrent connections and temporary values of rapidly changing modulated weights.
The agent is never fed the reward signal explicitly. The reward signal is only used by the evolutionary process for the fitness evaluation, which in turn drives the selection process. Therefore, the network is tasked with discovering reward cues implicitly from the visual observations in the environment.
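The outer neuroevolution loop can be sketched generically as follows (tournament selection is the scheme used in the experiments reported later); the mutation operator, which would alter weights, topology and plasticity coefficients, is assumed to be provided by each genome, and crossover and elitism are omitted for brevity.
\begin{verbatim}
import random

def evolve(population, fitness_fn, generations, tournament_size=5):
    # Generic sketch of the outer neuroevolution loop: evaluate fitness,
    # then build the next generation by tournament selection and mutation.
    # The mutation operator (altering weights, topology and plasticity
    # coefficients) is assumed to be provided by each genome object.
    for _ in range(generations):
        scores = [fitness_fn(genome) for genome in population]
        offspring = []
        while len(offspring) < len(population):
            contenders = random.sample(range(len(population)), tournament_size)
            winner = max(contenders, key=lambda i: scores[i])
            offspring.append(population[winner].mutate())
        population = offspring
    return max(population, key=fitness_fn)
\end{verbatim}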
\subsubsection{Neuromodulated Network Dynamics}
\label{sec:neuromodulated-network-dynamics}
Though processing is distributed across neurons, a standard neural network usually contains a single type of neuron, whose dynamics are homogeneous across the network. In a neuromodulated network, there can be two types of neurons, each type having different dynamics; the network is thus heterogeneous. The two types of neurons are standard neurons and modulatory neurons \cite{soltoggio2008evolutionary}. The standard neurons have the same dynamics as those in a standard neural network. The modulatory neurons are used to dynamically regulate plasticity in the network.
Each neuron $i$ has one standard and one modulatory activation value that represent the weighted amount of standard and modulatory activity they receive from other neurons (see Equations \ref{eq:eq-activation-std} and \ref{eq:eq-activation-mod}). $a_{std,i}$ is the output signal of neuron $i$ that is propagated to other neurons in its outgoing connections (this is true for both standard and modulatory neurons). $a_{mod,i}$ is used internally by the neuron itself to regulate the Hebbian-based plasticity of the incoming connections from other standard neurons, as described in Section \ref{sec:neuromodulated-hebbian-plasticity}. The framework allows for selective plasticity in the network, as parts of the network may become plastic or not plastic depending on the change of the modulatory activation signals over time. In turn, the final action of the network is affected in the current and future time steps - thus enabling adaptation.
\begin{equation}
a_{\mathrm{std},i} = \tanh{ \frac{\sum_{j \in \mathrm{std}} w_{ji}a_{\mathrm{std},j}}{2} }
\label{eq:eq-activation-std}
\end{equation}
\begin{equation}
a_{\mathrm{mod},i} = \tanh{ \frac{\sum_{j \in \mathrm{mod}} w_{ji}a_{\mathrm{std},j}}{2} }
\label{eq:eq-activation-mod}
\end{equation}
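A direct Python transcription of Equations \ref{eq:eq-activation-std} and \ref{eq:eq-activation-mod} for a single post-synaptic neuron might look as follows (a sketch only; the evolved network applies this over whatever topology evolution has produced).
\begin{verbatim}
import numpy as np

def neuron_activations(w_from_std, w_from_mod, a_std_of_std, a_std_of_mod):
    # Activations of one post-synaptic neuron i. w_from_std / w_from_mod
    # are its incoming weights from standard / modulatory pre-synaptic
    # neurons; a_std_of_* are the standard output signals of those neurons.
    a_std = np.tanh(0.5 * np.dot(w_from_std, a_std_of_std))  # standard
    a_mod = np.tanh(0.5 * np.dot(w_from_mod, a_std_of_mod))  # modulatory
    return a_std, a_mod
\end{verbatim}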
\subsubsection{Neuromodulated Hebbian Plasticity}
\label{sec:neuromodulated-hebbian-plasticity}
The Hebbian synaptic plasticity of the control network is governed by Equations \ref{eq:eq-weight-update}, \ref{eq:eq-modulated-weight-delta} and \ref{eq:eq-weight-delta}. $A, B, C, D, \alpha$ are the coefficients of the plasticity rule. The update of a weight depends on the pre-synaptic and post-synaptic standard activations, the plasticity coefficients, and the post-synaptic modulatory activation. This is true for all weights in the neuromodulated network.
\begin{equation}
w_{ij} = w_{ij} + \Delta w_{ij}
\label{eq:eq-weight-update}
\end{equation}
\begin{equation}
\Delta w_{ij} = a_{\mathrm{mod}, j} \cdot \delta w_{ij}
\label{eq:eq-modulated-weight-delta}
\end{equation}
\begin{equation}
\delta w_{ij} = \alpha \cdot (A \cdot a_{\mathrm{std}, i} \cdot a_{\mathrm{std}, j} + B \cdot a_{\mathrm{std}, i} + C \cdot a_{\mathrm{std}, j} + D)
\label{eq:eq-weight-delta}
\end{equation}
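Equations \ref{eq:eq-weight-update}--\ref{eq:eq-weight-delta} translate directly into the following sketch of a single-weight update, in which the raw Hebbian term is gated by the post-synaptic modulatory activation.
\begin{verbatim}
def hebbian_update(w, a_std_pre, a_std_post, a_mod_post, A, B, C, D, alpha):
    # The raw Hebbian term (delta w) is gated by the post-synaptic
    # modulatory activation before being added to the weight.
    delta = alpha * (A * a_std_pre * a_std_post
                     + B * a_std_pre
                     + C * a_std_post
                     + D)
    return w + a_mod_post * delta
\end{verbatim}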
\section{Results and Analysis}
Figures \ref{fig:ctgraph-depth2-results} and \ref{fig:ctgraph-depth3-results} show the results of the experiments in the CT-graph environment, while Figure \ref{fig:minecraft-double-tmaze-results} shows the results obtained in the Malmo Minecraft environment, which evaluate the general applicability of PENN-A.
\subsection{Performance in CT-graph Environments}
The proposed method (PENN-A) was evaluated on depth 2 and 3 CT-graph environments, with a branching factor of 2. The controller was evolved for 200 generations, with populations of 600 and 800 for the depth 2 and 3 experiments respectively. Tournament selection with a segment size of 5 was employed. Each controller was evaluated for 4 trials, with 100 episodes and 2 tasks per trial. The initial task is changed between episodes 35 and 65, determined stochastically for each trial. The depth 2 CT-graph experiment was employed as a baseline, and we compared PENN-A against some recent deep meta-RL methods (each with its own experimental setup). The depth 3 CT-graph experiment was employed to evaluate PENN-A in a more complex configuration of the environment.
In order to ensure comparability of the results across all methods, the number of evaluations (horizontal axis) was scaled to the approximate equivalent number of episodes. Additionally, the vertical axis is the average accumulated reward across all trials and episodes. In the depth 2 CT-graph result (Figure \ref{fig:ctgraph-depth2-results}), we see that PENN-A performs optimally when compared to deep meta-RL methods: optimization-based (MAML \cite{finn2017model} and CAVIA \cite{zintgraf2019fast}) and memory-based ($\text{RL}^2$ \cite{duan2016rl} without extra input). Only the observations were fed as input to the neural network for all methods, including PENN-A. We hypothesize that the deep meta-RL methods perform sub-optimally due to the partial observability of the environment. When extra inputs (the reward, the previous time step action and the done state) are concatenated to the observation and fed to the $\text{RL}^2$ method (which is its vanilla setup), it is able to perform optimally (see Figure \ref{fig:ctgraph-depth2-results-rl2-with-extra-input}). We hypothesize that $\text{RL}^2$ exploits the actions fed as input to the network, ignoring the observations and other parts of the input. This reduces the problem complexity in comparison to conditions where only the observations are fed as input.
Figure \ref{fig:ctgraph-depth3-results} presents the result for a depth 3 CT-graph. We present results only for PENN-A in the depth 3 CT-graph (a more difficult problem than the depth 2 CT-graph), since the other methods already performed sub-optimally in the depth 2 CT-graph. We again observe PENN-A performing optimally in the more difficult CT-graph setting.
\begin{figure}
\includegraphics[scale=0.45]{fig-results-ctgraph-depth2.pdf}
\caption{Results for a CT-graph with depth 2. PENN-A is compared against non-evolutionary meta-RL methods.}
\label{fig:ctgraph-depth2-results}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{fig-results-ctgraph-depth2-rl2-with-extra-input.pdf}
\caption{$\text{RL}^2$ in the CT-graph with depth 2. The method is run with extra input to the network (reward, done state, and previous time step action concatenated with current observation to form input).}
\label{fig:ctgraph-depth2-results-rl2-with-extra-input}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{fig-results-ctgraph-depth3.pdf}
\caption{PENN-A performance in a CT-graph with depth 3.}
\label{fig:ctgraph-depth3-results}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{fig-results-minecraft.pdf}
\caption{Malmo Minecraft result.}
\label{fig:minecraft-double-tmaze-results}
\end{figure}
\subsubsection{Network Analysis}
\begin{figure}
\includegraphics[scale=0.45]{fig-average-absolute-neurons-activation.png}
\caption{Absolute activation values distribution (across trials and episodes) per time step of a sample evolved controller. (A) This neuron is active specifically at decision states (steps 3 and 5), while it remains low at wait states. (B) This neuron clearly identifies wait states (steps 2, 4 and 6) and remains inactive otherwise.}
\label{fig:fig-average-absolute-neurons-activation}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{fig-reward-cues-neurons-activation.pdf}
\caption{Distributions of the activation values for each neuron in a sample network when the goal location (reward) is found and vice versa. The neurons highlighted in green bounding box react differently to the presence or absence of reward cues from observation. (A) heat maps of grey-scale CT-graph observations. The top image is the observation presented when the goal location is found, with a bright square reward cue. The bottom image is the observation when the goal location is not found, with the reward cue absent. (B) Neurons 11 and 13 show complementary firing patterns based on reward cues. (C) Neurons 1, 2, 5 and 6 are active when a reward cue is observed, and have little or no activity when the reward cue is not observed.}
\label{fig:fig-reward-cues-neurons-activation}
\end{figure*}
To better understand the evolved solution and how the network implements policies, we analyzed the best performing networks after evolution in a depth 2 CT-graph environment. While different evolutionary runs produced highly different networks, we observed interesting patterns in the neural activations. For one network of 11 neurons (including the output neuron), the absolute activation value distribution (across trials and episodes per time step) is plotted for each neuron in Figure \ref{fig:fig-average-absolute-neurons-activation}. We see that the absolute activation distribution of some neurons are high at specific time steps, i.e., at specific points within the graph environment (see Figure \ref{fig:fig-average-absolute-neurons-activation}A and B) - and therefore function as \emph{location neurons}. Such kind of location neurons had been previously discovered in an evolutionary setting in \cite{floreano2010evolution}. In the current experiments, it is worth noting that location neurons are designed by evolution to exploit latent features and possibly help action-selection in a high-dimensional dynamic POMDP. In particular, the neuron in Figure \ref{fig:fig-average-absolute-neurons-activation}A is active at decision states, while the neuron in Figure \ref{fig:fig-average-absolute-neurons-activation}B is active at wait states.
One aspect of our experimental setting is that the reward signal is not fed to the network, but the environment provides reward cues embedded in the observations, as shown in Figure \ref{fig:fig-reward-cues-neurons-activation}A, where a bright square represents a reward. The actual reward value is only accumulated in the fitness function, and is therefore not explicitly visible to the network. The surprising result that networks evolved to explore the environment and find the reward, even though no reward signal was given, suggests that the reward cue was recognised. In fact, in the example shown in Figure \ref{fig:fig-reward-cues-neurons-activation}B, some neurons fire positively when a reward cue is observed and negatively when not observed, or vice versa. Other neurons fire when a reward cue is observed and have little or no firing when not observed (see Figure \ref{fig:fig-reward-cues-neurons-activation}C). Not all evolved networks appeared to have \emph{reward neurons}. Nevertheless, the examples that evolved such reward cue detectors demonstrate that evolution is able to incorporate invariant knowledge of the environment to optimize the policy, in this case, reward seeking behaviour and fast adaptation to task changes.
\subsection{Performance in Malmo Minecraft}
To further assess the validity of our method, it is important to use a different benchmark environment with a larger input and RGB observations that offer a different feature space, hence the Malmo Minecraft environment. The controller was evolved with a population size of 800, over 400 generations. The same selection strategy as used in the CT-graph was employed. Each controller was evaluated for 8 trials, with 50 episodes and 3 tasks per trial. The task is changed at two stochastically generated points within the trial. The result is presented in Figure \ref{fig:minecraft-double-tmaze-results}, keeping the same axes format as the results presented for the CT-graph environment. Again, the proposed method was able to perform optimally with a high average reward score, demonstrating its capability to scale to other high dimensional, less abstract environments.
\section{Conclusion}
\label{sec:conclusion}
This paper introduced an evolutionary design method for fast adaptation in POMDP environments. The system combines a feature extractor network and an evolved neuromodulated network with the aim of acquiring specific inborn knowledge and structure via evolution. While the suitability of evolved neuromodulated networks to solve environments with changing task was known \cite{soltoggio2008evolutionary, soltoggio2018born}, we demonstrated that such advantages are scalable to high dimensional input spaces, and can be used in combination with an autoenconder. The results showed performance that compare or surpass some deep meta-RL algorithms. Interestingly, the evolved networks were capable of learning to recognise implicit reward cues, and therefore could explore the environment in search for the goal location without an explicit reward signal. This ability that was acquired by the networks through evolution is an example of inborn knowledge that allow networks to be born with the knowledge of what are reward cues. Subsequently, this information can be used to direct fast adaptation when the optimal policy changes (e.g. the task change). The networks also evolved location neurons to help the deployment of a policy by distinguishing different states in the underlying MDP. We speculate that this approach might be promising when a combination of inborn knowledge and online learning are required to perform optimally in rapidly changing environments.
\section{Introduction}
The field of deep reinforcement learning (RL) has showcased amazing results in recent times, solving tasks in robotic control \cite{duan2016benchmarking, lillicrap2015continuous}, games \cite{mnih2015human} and other complex environments. Despite such successes, deep RL algorithms are sample inefficient and sometimes unstable. Furthermore, they usually perform sub-optimally when dealing with sparse reward and partially observable environments. One further limitation of deep RL is when rapid adaptation to changing tasks (dynamic goals) is required. Established methods only work well in fixed task environments. In an attempt to solve this problem, deep meta-reinforcement learning (meta-RL) methods \cite{finn2017model, rothfuss2018promp, zintgraf2019fast, duan2016rl, wang1611learning} were specifically devised. However, these methods are largely evaluated on dense reward, fully observable MDP environments, and perform sub-optimally in sparse reward, partially observable environments.
One key aspect in achieving fast adaptation in dynamic partially observable environments is the presence of appropriate learning structures and memory units that fit the specific class of learning problems. Therefore, standard model-free RL algorithms do not perform well in dynamic environments because they are tabula-rasa systems. They hold no knowledge in their architectures to allow a fast and targeted learning when a change in the environment occurs. Upon a task change, these algorithms will try to randomly explore the action space to relearn from scratch a different, new policy. On the other hand, model-based RL holds knowledge of the structure of the environment, which in turn allows for rapid adaptation to changes in the environment, but such knowledge needs to be built manually into the system.
In this paper, we investigate the use of neuroevolution to autonomously evolve inborn knowledge \cite{soltoggio2018born} in the form of neural structures and plasticity rules with a specific focus on dynamic POMDPs that have posed challenges to current RL approaches. The neuroevolutionary approach that we propose is designed to solve rapid adaptation to changing tasks \cite{soltoggio2018born} in complex high dimensional partially observable environments. The idea is to test the ability of evolution to build an unconstrained neuromodulated network architecture with problem-specific learning skills that can exploit the latent space provided by an autoencoder. Thus, in the proposed system, an autoencoder serves as a feature extractor that produces low dimensional latent features from high dimensional environment observations. A neuromodulated network \cite{soltoggio2008evolutionary} receives the low dimensional latent features as input and produces the output of the system, effectively acting as a high-level controller. Evolved neuromodulated networks have shown computational advantages in various dynamic task scenarios \cite{soltoggio2008evolutionary, soltoggio2018born}.
The proposed approach is similar to that proposed in \cite{alvernaz2017autoencoder}. One key novelty is that our approach seeks to evolve selective plasticity with the use of modulatory neurons, and therefore, to evolve problem-specific neuromodulated adaptive systems. The relationship between image-pixel inputs and control actions in POMDPs is highly nonlinear and history dependent; therefore, an open question is whether neuroevolution can exploit latent features to evolve learning systems with inborn knowledge. Thus, we test the hypothesis that an evolved neuromodulated network can discover neural structures and their related plasticity rules to encode the required memory and fast adaptation mechanisms to compete with current deep meta-RL approaches.
We call the proposed system a Plastic Evolved Neuromodulated Network with Autoencoder (PENN-A), denoting the combination of the two neural components. We evaluate our proposed method in a POMDP environment where we show better performance in comparison to some non-evolutionary deep meta-reinforcement learning methods. Also, we evaluated the proposed method in the Malmo Minecraft environment to test its general applicability.
Two interesting findings from our experiments are that (i) the networks acquire through evolution the ability to recognise reward cues (i.e. environment cues that are associated with survival even when reward signals are not given) and (ii) the networks can evolve \emph{location} neurons that help solve the problem by detecting, and becoming active at, specific locations of the partially observable MDP. The evolved network topology allows for richer dynamics in comparison to fixed architectures such as hand-designed feed-forward or recurrent networks.
The next section reviews the related work. Following that, a formal task definition is presented. Next is the description of the proposed method employed in this work, followed by the evaluation of results. The PENN-A source code is made available at: \url{https://github.com/dlpbc/penn-a}.
\section{Related Work}
In reinforcement learning (RL) literature, meta-RL methods seek to develop agents that adapt to changing tasks in an environment or a set of related environments. Meta-RL \cite{schmidhuber1996simple, schweighofer2003meta} is based on the general idea of meta-learning \cite{bengio1992optimization, thrun1998learning, hochreiter2001learning} applied to the RL domain.
Recently, deep meta-RL has been used to tackle the problem of rapid adaptation in dynamic environments. Methods such as \cite{finn2017model, duan2016rl, wang1611learning, zintgraf2019fast, mishra2018a, rothfuss2018promp, rakelly2019efficient} use deep RL methods to train a meta-learner agent that adapts to changing tasks. These methods are mostly evaluated in dense reward, fully observable MDP environments. Furthermore, most methods are either memory based \cite{duan2016rl, wang1611learning, mishra2018a} or optimization based \cite{finn2017model, zintgraf2019fast}. Optimization based methods seek to find an optimal initial set of parameters (e.g. for an agent network) across tasks, which can be fine-tuned with a few gradient steps for each specific task presented to it. Therefore, a small amount of re-training is required to enable adaptation to every change in task. Memory based methods (implemented using a recurrent network or temporal convolution attention network) do not necessarily require fine tuning after initial training to enable adaptation. This is because memory-based agents learn to build a memory of past sequence of tasks and interactions, thus enabling them to identify change in task and adapt accordingly.
In the past, neuroevolution methods have been employed to solve RL tasks \cite{stanley2002evolving, mchale2004gasnets}, including adapting to changing tasks \cite{soltoggio2008evolutionary, blynel2002levels} in partially observable environments. These methods were evaluated in environments with high level feature observations. Recently, several approaches have been introduced that combine deep neural networks and neuroevolution to tackle high dimensional deep RL tasks \cite{alvernaz2017autoencoder, poulsen2017dlne, ha2018recurrent, salimans2017evolution, such2017deep}. These approaches can be divided into two major categories. The first category uses neuroevolution to optimize the entire deep network end to end \cite{salimans2017evolution, such2017deep, risi2019deep, risi2019improving}. The second category splits the network into parts (for example, a body and controller) where some part(s) (e.g. body) are optimized using gradient based methods and other part(s) (e.g. controller) are evolved using neuroevolution methods \cite{alvernaz2017autoencoder, poulsen2017dlne, ha2018recurrent}. Current deep neuroevolution methods are usually evaluated in fully observable MDP environments, where the task is fixed. Furthermore, after the training phase is completed, the weights of a trained network are fixed (the same is true for standard deep RL). The recent attention to neuroevolution for deep RL aims to present such approaches as a competitive alternative to standard gradient based deep RL methods for fixed task problems.
In the past, neural network based agents employing Hebbian-based local synaptic plasticity have been used to achieve behavioural adaptation with changing tasks \cite{floreano1996evolution, blynel2002levels, soltoggio2008evolutionary}. Such methods use a neuroevolution algorithm to optimize the parameters of the network when producing a new generation of agents. As an agent interacts with an environment during its lifetime in training or testing, the weights are adjusted in an online fashion (via a local plasticity rule), enabling adaptation to changing tasks. In \cite{floreano1996evolution, blynel2002levels} this technique was employed, and further extended to include a mechanism of gating plasticity via neuromodulation in \cite{soltoggio2008evolutionary}. These methods were evaluated in environments with low dimensional observations (with high level features) and not compared with deep (meta-)RL algorithms.
\section{Task Definition}
\label{sec:task-definition}
A POMDP environment $E$, defined by a sextuple ($\mathcal{S}$, $\mathcal{A}$, $\mathcal{P}$, $\mathcal{R}$, $\mathcal{O}$, $\Omega$) is employed in this work. $\mathcal{S}$ defines the state set, $\mathcal{A}$ the action set, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ the environment dynamics, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ the reward function, $\mathcal{O}$ the observation set, and $\Omega$ the function that maps states to observations.
The environment $E$ contains a number of related tasks. A task $\mathcal{T}_{i}$ is sampled from a distribution of tasks $\mathcal{T}$. The task distribution $\mathcal{T}$ can either be discrete or continuous. A sampled task is an instance of the partially observable environment $E$. The configuration of the environment (for example, the goal or reward function) varies across each task instance. An optimal agent is required to adapt its behaviour to task changes in the environment (and maximize accumulated reward) using only a few interactions with the environment. When presented with a task $\mathcal{T}_i$, an optimal agent should initially explore, and subsequently exploit when the task is understood. When the task is changed (a new task $\mathcal{T}_j$ sampled from $\mathcal{T}$), the agent needs to re-explore the environment in a few shots, and then to start exploiting again when the new task has been understood.
In each task, an episode is defined as the trajectory $\tau$ of an agent's interactions in the environment, terminating at a terminal state. A trial consists of two or more tasks sampled from $\mathcal{T}$. The total number of episodes in a trial is kept fixed. A trial starts with an initial task $\mathcal{T}_i$ that runs for a number of episodes, and then the task is changed to other tasks (one after another) at different points within the trial (see Figure \ref{fig:fig-task-change}). The points at which a task change occurs are stochastically generated, and the task is changed before the start of the next episode. For example, when the number of tasks is set as $2$ (i.e. $\mathcal{T}_i$ and $\mathcal{T}_j$), the trial starts with task $\mathcal{T}_i$ which runs for a number of episodes, and it is replaced by task $\mathcal{T}_j$ for the remaining episodes in the trial. An agent is iteratively trained, with each iteration consisting of a fixed number of trials. The subsections below describe two environments where the proposed system is evaluated.
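To make the trial structure above concrete, the following Python sketch (illustrative only; names such as \texttt{num\_episodes}, \texttt{num\_tasks} and \texttt{change\_window} are not taken from our implementation) generates the stochastic task-change points of a single trial and the resulting per-episode task schedule.
\begin{verbatim}
import random

def trial_schedule(num_episodes=100, num_tasks=2, change_window=(35, 65), seed=0):
    """Assign a task index to each episode of one trial.

    The trial starts with task 0; task changes happen at stochastically
    generated episode indices (here drawn uniformly from change_window),
    and the task is switched before the start of the chosen episode.
    """
    rng = random.Random(seed)
    # one change point per additional task, sorted so tasks appear in order
    change_points = sorted(rng.randint(*change_window) for _ in range(num_tasks - 1))
    schedule, task = [], 0
    for ep in range(num_episodes):
        while task < num_tasks - 1 and ep >= change_points[task]:
            task += 1
        schedule.append(task)
    return change_points, schedule

points, schedule = trial_schedule()
print("task changes before episodes:", points)
\end{verbatim}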
\begin{figure}
\centering
\includegraphics[scale=0.6]{fig-task-change.png}
\caption{Illustration of a dynamic environment and required behavior of a learning agent. An agent is required to learn to perform optimally and then exploit the learned policy until a change in the environment occurs, at which point the agent needs to learn again before exploiting.}
\label{fig:fig-task-change}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{fig-ctgraph-minecraft-env.png}
\caption{Environments (note, during execution, goal location is dynamic across episodes). (A) CT-graph instance, $b=2$ and $d=2$. (B) CT-graph instance, $b=2$ and $d=3$. (C) Malmo Minecraft instance (a double T-Maze), bird's eye view on top, with some sample observations at the bottom. The maze-end with the teal colour is the goal location.}
\label{fig:fig-ctgraph-minecraft-env}
\end{figure*}
\subsection{The Configurable Tree Graph Environment}
The configurable tree graph (CT-graph) environment is a graph abstraction of a decision making process. The complexity of the environment is specified via configuration parameters: the branching factor $b$ and the depth $d$, controlling the width and height of the graph. Additionally, it can be configured to be fully or partially observable. It contains the following types of states: start, wait, decision, end (leaf node of the graph), and crash. Each observation $o \in \mathcal{O}$ is a $12\times12$ grey-scale image. The total number of end states grows exponentially as the depth $d$ of the graph increases (see Figure \ref{fig:fig-ctgraph-minecraft-env}A and B).
In the experiments in this study, partial observability is configured by mapping all wait states to the same observation, and all decision states to the same observation. Also, $b$ is set to 2. Therefore, each decision state has two choices, splitting into two sub-graphs. The discrete action space is defined as \textit{choice 1}, \textit{choice 2}, and \textit{wait action}. The \textit{wait action} is the correct action in a wait state. In a decision state, the correct action is either \textit{choice 1} or \textit{choice 2}. All incorrect actions lead to the crash state and episode termination.
An agent starts an episode in the start state, and the episode is completed when the agent traverses the graph to an end state or takes a wrong action in a state. Once an agent transitions from one state to the next, it cannot go back. In a task instance, one of the end states is set as the goal location. An agent receives a positive reward when it traverses to the goal location, and a reward of 0 at non-goal states. The agent may receive a negative reward in a crash state.
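For illustration only, the following Python sketch mimics an episode of a depth 2, branching factor 2 CT-graph-like process with the state sequence start, wait, decision, wait, decision, wait, end; the exact number of wait states per level and the reward and penalty values are assumptions of the sketch and do not reproduce the actual CT-graph implementation.
\begin{verbatim}
WAIT, CHOICE_1, CHOICE_2 = 0, 1, 2

def ct_graph_episode(actions, goal_leaf=(0, 1), depth=2):
    """Roll out one episode of a toy depth-2 CT-graph-like process.

    actions is the agent's action sequence; the episode terminates at an
    end state (leaf) or at a crash caused by a wrong action.  Returns the
    accumulated reward and the reached leaf (None on a crash).
    """
    path = []
    it = iter(actions)
    for level in range(depth):
        if next(it) != WAIT:                 # wait state: only WAIT is correct
            return -0.1, None                # crash (penalty value is assumed)
        a = next(it)
        if a not in (CHOICE_1, CHOICE_2):    # decision state
            return -0.1, None
        path.append(a - 1)
    if next(it) != WAIT:                     # final wait state before the leaf
        return -0.1, None
    reward = 1.0 if tuple(path) == goal_leaf else 0.0   # reward only at the goal
    return reward, tuple(path)

print(ct_graph_episode([WAIT, CHOICE_1, WAIT, CHOICE_2, WAIT]))
\end{verbatim}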
\subsection{Malmo Minecraft Environment}
Malmo \cite{johnson2016malmo} is an AI research platform built on top of the Minecraft video game. The platform is configurable, and it enables the construction of various worlds in which AI agents can be evaluated. In this work, a double T-maze was constructed, with the discrete action space \textit{left turn}, \textit{right turn} and \textit{forward action}. A task is defined based on the maze ends, requiring the agent to navigate to a specific maze end (goal location). The maze end that is set as the goal location varies across tasks. The agent only receives a positive reward when it navigates to the maze end that is the goal location. It receives a reward of 0 at every other time step. If the agent runs into a wall, the episode is terminated and it receives a negative reward. The agent receives a visual observation of its current view at each time step (hence it does not fully observe the entire environment). Each observation is a $32 \times 32$ RGB image based on a first-person view of the agent at each time step.
\section{Methods}
\label{sec:methods}
\begin{figure}
\includegraphics[scale=1.2]{fig-system-overview.pdf}
\caption{System overview, showcasing the feature extractor and controller components. In the controller, white and blue nodes are standard and modulatory neurons respectively. Modulatory connections facilitate selective plasticity in the network.}
\label{fig:fig-system-overview}
\end{figure}
We seek to develop an agent that is capable of continual adaptation through its lifetime (across episodes): exploring, exploiting, re-exploring when the task changes, and exploiting again. The system (specifically the controller or decision maker) is evolved to acquire knowledge about both the invariant and variant aspects of an environment (e.g. changing tasks).
The agent is modelled using two neural components with separate parameters and objectives: a deep network $F_\theta$ (used as a feature extractor and parameterized by $\theta$) and a neuromodulated network $G_\phi$ (serving as a controller and parameterized by $\phi$). Both components make up the overall system model $\mathcal{M_{\theta, \phi}}$. See Figure \ref{fig:fig-system-overview} for a general system overview. The presented architectural style is similar to a standard deep RL setup. However, it differs on two fronts: (i) the controller is a neuromodulated network (described in Section \ref{sec:controller}) rather than a standard neural network, and (ii) the training setup combines a gradient-based optimization method (backpropagation \cite{werbos1982applications, rumelhart1988learning}), a gradient-free optimization method (neuroevolution \cite{yao1999evolving, stanley2019designing}), and Hebbian-based synaptic plasticity to train the system. Using this setup, each neural component therefore contains its own objective function. An autoencoder network was employed as the feature extractor, thus enabling the use of the Mean Squared Error (MSE) or Binary Cross Entropy (BCE) objective functions:
\begin{displaymath}
\argmin_{\theta} \frac{1}{n} \sum_{i=1}^{n} (F_{\theta}(o_i) - o_i)^2
\end{displaymath}
\begin{displaymath}
\argmin_{\theta} - \frac{1}{n} \sum_{i=1}^{n} \left[ o_i \cdot \log({F_{\theta}(o_i)}) + (1 - o_i) \cdot \log{(1 - F_{\theta}(o_i))} \right]
\end{displaymath}
where $n$ is the number of training observations and $F_{\theta}(o_i)$ is the output of the autoencoder for observation $i$ (reconstructed observation). Each agent in the population uses the same feature extractor. The fitness function of the evolutionary algorithm is given by:
\begin{displaymath}
\argmax_{\phi} \sum_{\mathcal{T}_i \sim \mathcal{T}} \sum_{ep=1}^{z} R(\tau_{ep})
\label{eq:fitness}
\end{displaymath}
$\mathcal{T}_i$ represents a task sampled from the task distribution $\mathcal{T}$, and a single trial consists of two tasks as defined in Section \ref{sec:task-definition}. Also, $z$ is the number of episodes in which a task is kept fixed within a trial. It is stochastically generated within an interval and may differ between tasks in a trial. $R(\tau_{ep})$ is the accumulated reward of the trajectory of an episode $ep$, defined as:
\begin{equation}
R(\tau_{ep}) = \sum_{t=0}^{k} \mathcal{R}(s_t, G_\phi(F_\theta^{enc}(o_t)))
\end{equation}
where $\mathcal{R}(s, a)$ is the reward function that takes a state and an action as arguments and produces a scalar reward value. $F_{\theta}^{enc}$ is the same autoencoder feature extractor network described earlier, restricted to the output of the encoder (the latent features). Also, $t$ represents discrete time steps and $k$ is the length of the trajectory of an episode.
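The following Python sketch illustrates how the fitness and the accumulated episode rewards defined above could be evaluated for one candidate controller; \texttt{encode}, \texttt{controller\_act} and \texttt{env} are hypothetical placeholders for the trained encoder $F_\theta^{enc}$, the evolved neuromodulated network $G_\phi$ and the environment interface, and their signatures are assumptions of the sketch rather than our actual implementation.
\begin{verbatim}
def evaluate_fitness(encode, controller_act, env, tasks, episodes_per_task):
    """Accumulate episode returns over the tasks of one trial.

    Assumed (hypothetical) interfaces:
      encode(obs)       -> latent features  (F_theta^enc)
      controller_act(z) -> action           (G_phi)
      env.reset(task)   -> first observation of an episode
      env.step(action)  -> (observation, reward, done)
    """
    fitness = 0.0
    for task, z_episodes in zip(tasks, episodes_per_task):
        for _ in range(z_episodes):
            obs, done = env.reset(task), False
            while not done:                        # one trajectory tau_ep
                action = controller_act(encode(obs))
                obs, reward, done = env.step(action)
                fitness += reward                  # R(tau_ep) summed over t
    return fitness
\end{verbatim}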
\subsection{Feature Extractor}
\label{sec:feature_extractor}
This neural component of the system is tasked with learning a good latent representation of the observations from the environment, which can be fed to the controller as input. In the CT-graph experiments, a fully connected autoencoder was employed (a two-layer encoder and decoder, respectively). In the Malmo Minecraft experiments, a convolutional autoencoder was employed (a four-layer encoder and decoder, respectively).
\subsection{Control Network (Decision Maker)}
\label{sec:controller}
This neural component takes the latent features of the feature extractor as its input, and produces an output which serves as the final output of the system (the action or behaviour of the system). It is a neuromodulated network (see Section \ref{sec:neuromodulated-network-dynamics}) that reproduces the model introduced in \cite{soltoggio2008evolutionary}. The network can evolve two neuron types - a standard and a modulatory neuron. The output neuron(s) always belong to the standard neuron type.
The control network is parameterized by $\phi$. Unlike $\theta$ (which represents only the weights of the feature extractor network), $\phi$ consists of the weights, the architecture and the coefficients of the Hebbian-based plasticity rule (described in Section \ref{sec:neuromodulated-hebbian-plasticity}) of the network, and it is evolved. Therefore, evolution is tasked with finding the architecture and plasticity rules, including selective plasticity enabled by modulatory neurons targeting specific neurons. The large search space that is granted to evolution allows for rich dynamics that include memory in the form of both recurrent connections and temporary values of rapidly changing modulated weights.
The agent is never fed the reward signal explicitly. The reward signal is only used by the evolutionary process for the fitness evaluation, which in turn drives the selection process. Therefore, the network is tasked to learn the discovery of reward cues implicitly from the visual observations in the environment.
\subsubsection{Neuromodulated Network Dynamics}
\label{sec:neuromodulated-network-dynamics}
Though processing is distributed across neurons, a standard neural network usually contains one type of neuron, whose dynamics are homogeneous across the network. In a neuromodulated network, there can be two types of neurons, each type having different dynamics, i.e., the dynamics are heterogeneous. The two types of neurons are standard neurons and modulatory neurons \cite{soltoggio2008evolutionary}. The standard neurons have the same dynamics as the ones in a standard neural network. The modulatory neurons are used to dynamically regulate plasticity in the network.
Each neuron $i$ has one standard and one modulatory activation value that represent the weighted amount of standard and modulatory activity they receive from other neurons (see Equations \ref{eq:eq-activation-std} and \ref{eq:eq-activation-mod}). $a_{std,i}$ is the output signal of neuron $i$ that is propagated to other neurons in its outgoing connections (this is true for both standard and modulatory neurons). $a_{mod,i}$ is used internally by the neuron itself to regulate the Hebbian-based plasticity of the incoming connections from other standard neurons, as described in Section \ref{sec:neuromodulated-hebbian-plasticity}. The framework allows for selective plasticity in the network, as parts of the network may become plastic or not plastic depending on the change of the modulatory activation signals over time. In turn, the final action of the network is affected in the current and future time steps - thus enabling adaptation.
\begin{equation}
a_{\mathrm{std},i} = \tanh{ \frac{\sum_{j \in \mathrm{std}} w_{ji}a_{\mathrm{std},j}}{2} }
\label{eq:eq-activation-std}
\end{equation}
\begin{equation}
a_{\mathrm{mod},i} = \tanh{ \frac{\sum_{j \in \mathrm{mod}} w_{ji}a_{\mathrm{std},j}}{2} }
\label{eq:eq-activation-mod}
\end{equation}
\subsubsection{Neuromodulated Hebbian Plasticity}
\label{sec:neuromodulated-hebbian-plasticity}
The Hebbian synaptic plasticity of the control network is governed by Equations \ref{eq:eq-weight-update}, \ref{eq:eq-modulated-weight-delta} and \ref{eq:eq-weight-delta}. $A, B, C, D, \alpha$ are the coefficients of the plasticity rule. The update of a weight depends on the pre-synaptic and post-synaptic standard activations, the plasticity coefficients, and the post-synaptic modulatory activation. This is true for all weights in the neuromodulated network.
\begin{equation}
w_{ij} = w_{ij} + \Delta w_{ij}
\label{eq:eq-weight-update}
\end{equation}
\begin{equation}
\Delta w_{ij} = a_{\mathrm{mod}, j} \cdot \delta w_{ij}
\label{eq:eq-modulated-weight-delta}
\end{equation}
\begin{equation}
\delta w_{ij} = \alpha \cdot (A \cdot a_{\mathrm{std}, i} \cdot a_{\mathrm{std}, j} + B \cdot a_{\mathrm{std}, i} + C \cdot a_{\mathrm{std}, j} + D)
\label{eq:eq-weight-delta}
\end{equation}
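As an illustration, the following Python/NumPy sketch combines Equations \ref{eq:eq-activation-std}--\ref{eq:eq-weight-delta} into a single forward-and-update step of a small, densely connected neuromodulated network; the dense weight layout, the update ordering (pre-synaptic activations taken before the step, post-synaptic activations after it), and the coefficient values are assumptions of the sketch and do not correspond to the evolved networks used in the experiments.
\begin{verbatim}
import numpy as np

def neuromodulated_step(a_std, W, is_mod, coeffs):
    """One forward-and-update step of a small neuromodulated network.

    a_std  : (n,)   current standard activations (outputs) of all neurons
    W      : (n, n) weights, W[j, i] is the connection from neuron j to i
    is_mod : (n,)   boolean mask, True for modulatory neurons
    coeffs : (alpha, A, B, C, D) Hebbian plasticity coefficients
    """
    alpha, A, B, C, D = coeffs
    std = ~is_mod
    # standard / modulatory activations: tanh of the weighted input
    # received from standard / modulatory neurons, scaled by 1/2
    a_std_new = np.tanh(W[std, :].T @ a_std[std] / 2.0)
    a_mod_new = np.tanh(W[is_mod, :].T @ a_std[is_mod] / 2.0)
    # modulated Hebbian update of incoming connections from standard neurons
    for i in range(a_std.size):
        for j in np.flatnonzero(std):
            dw = alpha * (A * a_std[j] * a_std_new[i]
                          + B * a_std[j] + C * a_std_new[i] + D)
            W[j, i] += a_mod_new[i] * dw
    return a_std_new, W

rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal(n)
W = 0.1 * rng.standard_normal((n, n))
is_mod = np.array([False, False, False, True, False])
a, W = neuromodulated_step(a, W, is_mod, coeffs=(0.1, 1.0, 0.0, 0.0, 0.0))
\end{verbatim}
In the evolved networks the connectivity is sparse and discovered by evolution, so only the evolved connections would carry such activations and updates.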
\section{Results and Analysis}
Figures \ref{fig:ctgraph-depth2-results} and \ref{fig:ctgraph-depth3-results} show the results of the experiments in the CT-graph environment, while Figure \ref{fig:minecraft-double-tmaze-results} shows the result of the experiment in the Malmo Minecraft environment, which evaluates the general applicability of PENN-A.
\subsection{Performance in CT-graph Environments}
The proposed method (PENN-A) was evaluated on depth 2 and 3 CT-graph environments, with a branching factor of 2. The controller was evolved for 200 generations, with populations of 600 and 800 for the depth 2 and 3 experiments, respectively. Tournament selection with a segment size of 5 was employed. Each controller was evaluated for 4 trials, with 100 episodes and 2 tasks per trial. The initial task is changed between episodes 35 and 65, determined stochastically for each trial. The depth 2 CT-graph experiment was employed as a baseline, and we compared PENN-A against some recent deep meta-RL methods (each with its own experimental setup). The depth 3 CT-graph experiment was employed to evaluate PENN-A in a more complex configuration of the environment.
In order to ensure comparability of the results presented across all methods, the number of evaluations (horizontal axis) was scaled to the approximate equivalent number of episodes. Additionally, the vertical axis is the average accumulated reward across all trials and episodes. In the depth 2 CT-graph result (Figure \ref{fig:ctgraph-depth2-results}), we see that PENN-A performs optimally when compared to deep meta-RL methods: optimization-based (MAML \cite{finn2017model} and CAVIA \cite{zintgraf2019fast}) and memory-based ($\text{RL}^2$ \cite{duan2016rl} without extra input). Only the observations were fed as input to the neural network for all methods including PENN-A. We hypothesize that the deep meta-RL methods perform sub-optimally due to the partial observability of the environment. When extra inputs (the reward, the previous time step action and the done state) are concatenated to the observation and fed to the $\text{RL}^2$ method (which is its vanilla setup), it is able to perform optimally (see Figure \ref{fig:ctgraph-depth2-results-rl2-with-extra-input}). We hypothesize that $\text{RL}^2$ exploits the actions fed as input to the network, ignoring the observations and other parts of the input. This reduces the problem complexity in comparison to conditions where only the observations are fed as input.
Figure \ref{fig:ctgraph-depth3-results} presents the result for a depth 3 CT-graph. We present results only for PENN-A in the depth 3 CT-graph (a more difficult problem than the depth 2 CT-graph) since the other methods performed sub-optimally in the depth 2 CT-graph. We again observe PENN-A performing optimally in the more difficult CT-graph setting.
\begin{figure}
\includegraphics[scale=0.45]{fig-results-ctgraph-depth2.pdf}
\caption{Results for a CT-graph with depth 2. PENN-A is compared against non-evolutionary meta-RL methods.}
\label{fig:ctgraph-depth2-results}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{fig-results-ctgraph-depth2-rl2-with-extra-input.pdf}
\caption{$\text{RL}^2$ in the CT-graph with depth 2. The method is run with extra input to the network (reward, done state, and previous time step action concatenated with current observation to form input).}
\label{fig:ctgraph-depth2-results-rl2-with-extra-input}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{fig-results-ctgraph-depth3.pdf}
\caption{PENN-A performance in a CT-graph with depth 3.}
\label{fig:ctgraph-depth3-results}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{fig-results-minecraft.pdf}
\caption{Malmo Minecraft result.}
\label{fig:minecraft-double-tmaze-results}
\end{figure}
\subsubsection{Network Analysis}
\begin{figure}
\includegraphics[scale=0.45]{fig-average-absolute-neurons-activation.png}
\caption{Absolute activation values distribution (across trials and episodes) per time step of a sample evolved controller. (A) This neuron is active specifically at decision states (steps 3 and 5), while it remains low at wait states. (B) This neuron clearly identifies wait states (steps 2, 4 and 6) and remains inactive otherwise.}
\label{fig:fig-average-absolute-neurons-activation}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{fig-reward-cues-neurons-activation.pdf}
\caption{Distributions of the activation values for each neuron in a sample network when the goal location (reward) is found and when it is not. The neurons highlighted in the green bounding box react differently to the presence or absence of reward cues from observation. (A) heat maps of grey-scale CT-graph observations. The top image is the observation presented when the goal location is found, with a bright square reward cue. The bottom image is the observation when the goal location is not found, with the reward cue absent. (B) Neurons 11 and 13 show complementary firing patterns based on reward cues. (C) Neurons 1, 2, 5 and 6 are active when a reward cue is observed, and have little or no activity when the reward cue is not observed.}
\label{fig:fig-reward-cues-neurons-activation}
\end{figure*}
To better understand the evolved solution and how the network implements policies, we analyzed the best performing networks after evolution in a depth 2 CT-graph environment. While different evolutionary runs produced highly different networks, we observed interesting patterns in the neural activations. For one network of 11 neurons (including the output neuron), the absolute activation value distribution (across trials and episodes per time step) is plotted for each neuron in Figure \ref{fig:fig-average-absolute-neurons-activation}. We see that the absolute activation distributions of some neurons are high at specific time steps, i.e., at specific points within the graph environment (see Figure \ref{fig:fig-average-absolute-neurons-activation}A and B), and these neurons therefore function as \emph{location neurons}. Such location neurons had previously been discovered in an evolutionary setting in \cite{floreano2010evolution}. In the current experiments, it is worth noting that location neurons are designed by evolution to exploit latent features and possibly help action-selection in a high-dimensional dynamic POMDP. In particular, the neuron in Figure \ref{fig:fig-average-absolute-neurons-activation}A is active at decision states, while the neuron in Figure \ref{fig:fig-average-absolute-neurons-activation}B is active at wait states.
One aspect of our experimental setting is that the reward signal is not fed to the network, but the environment provides reward cues embedded in the observations, as shown in Figure \ref{fig:fig-reward-cues-neurons-activation}A, where a bright square represents a reward. The actual reward value is only accumulated in the fitness function, and is therefore not explicitly visible to the network. The surprising result that networks evolved to explore the environment and find the reward even though no reward signal was given suggests that the reward cue was recognised. In fact, in the example shown in Figure \ref{fig:fig-reward-cues-neurons-activation}B, some neurons fire positively when a reward cue is observed and negatively when not observed or vice versa. Other neurons fire when a reward cue is observed and have little or no firing when not observed (see Figure \ref{fig:fig-reward-cues-neurons-activation}C). Not all evolved networks appeared to have \emph{reward neurons}. Nevertheless, the examples that evolved such reward-cue detectors demonstrate that evolution is able to incorporate invariant knowledge of the environment to optimize the policy, in this case, reward-seeking behaviour and fast adaptation to changing tasks.
\subsection{Performance in Malmo Minecraft}
To further assess the validity of our method, it is important to use a different benchmark environment with a larger input and RGB observations that offer a different feature space; hence, the Malmo Minecraft environment was used. The controller was evolved with a population size of 800 over 400 generations. The same selection strategy as used in the CT-graph was employed. Each controller was evaluated for 8 trials, with 50 episodes and 3 tasks per trial. The task is changed at two stochastically generated points within the trial. The result is presented in Figure \ref{fig:minecraft-double-tmaze-results}, keeping the same axes format as with the results presented for the CT-graph environment. Again, the proposed method was able to perform optimally with a high average reward score, demonstrating its capability to scale to other high dimensional, less abstract environments.
\section{Conclusion}
\label{sec:conclusion}
This paper introduced an evolutionary design method for fast adaptation in POMDP environments. The system combines a feature extractor network and an evolved neuromodulated network with the aim of acquiring specific inborn knowledge and structure via evolution. While the suitability of evolved neuromodulated networks to solve environments with changing tasks was known \cite{soltoggio2008evolutionary, soltoggio2018born}, we demonstrated that such advantages are scalable to high dimensional input spaces, and can be used in combination with an autoencoder. The results showed performance that compares with or surpasses that of some deep meta-RL algorithms. Interestingly, the evolved networks were capable of learning to recognise implicit reward cues, and therefore could explore the environment in search of the goal location without an explicit reward signal. This ability, acquired by the networks through evolution, is an example of inborn knowledge that allows networks to be born knowing what the reward cues are. Subsequently, this information can be used to direct fast adaptation when the optimal policy changes (e.g., when the task changes). The networks also evolved location neurons to help the deployment of a policy by distinguishing different states in the underlying MDP. We speculate that this approach might be promising when a combination of inborn knowledge and online learning is required to perform optimally in rapidly changing environments.
\section{Introduction}\label{sec:intro}
Physical simulations have influenced developments in engineering, technology, and science more rapidly than ever before. The widespread application of simulations as digital twins is one recent example. Advances in optimization and control theory have also increased simulation predictability and practicality. Physical simulations can reproduce responses in a variety of systems, from quantum mechanics to astrophysics, with considerable detail and accuracy, and they provide information that often cannot be obtained from experiments or in-situ measurements due to associated high risks and measurement difficulties. However, high-fidelity forward physical simulations are computationally expensive and, thus, make intractable any decision-making applications, such as inverse problems, design optimization, optimal controls, uncertainty quantification, and parameter studies, where many forward simulations are required.
To compensate for the computational expense issue, researchers have developed various surrogate models to accelerate the physical simulations with high accuracy. One particular type is the \textit{projection-based reduced order model} (pROM), in which the full state fields are approximated by applying linear or nonlinear compression techniques. Popular linear compression techniques include \textit{proper orthogonal decomposition} (POD) \cite{berkooz1993proper}, the reduced basis method \cite{patera2007reduced}, and the balanced truncation method \cite{safonov1989schur}, while a popular nonlinear compression technique is the \textit{auto-encoder} (AE) \cite{kim2021fast, kim2020efficient, maulik2021reduced, lee2020model}. Linear compression-based pROMs have been successfully applied to many different problems, such as Lagrangian hydrodynamics \cite{copeland2022reduced, cheung2022local}, Burgers equations \cite{choi2020sns, choi2019space, carlberg2018conservative}, nonlinear heat conduction problems \cite{hoang2021domain}, an aero-elastic wing design problem \cite{choi2020gradient}, the Navier--Stokes equation \cite{iliescu2014variational}, computational fluid dynamics simulation for aircraft \cite{amsallem2008interpolation}, the convection--diffusion equation \cite{kim2021efficient}, Boltzmann transport problems \cite{choi2021space}, topology optimization \cite{choi2019accelerating}, the shape optimization of a simplified nozzle inlet model and the design optimization of a chemical reaction \cite{amsallem2015design}, and lattice-type structure design \cite{mcbane2021component}. These projection-based ROMs are further categorized by their intrusiveness. The \textit{intrusive ROMs} plug the reduced solution representation into the underlying physics law, governing equations, and the corresponding numerical discretization methods, such as the finite element, finite volume, and finite difference methods. Therefore, they are physics-constrained data-driven approaches and require less data than purely data-driven methods to achieve the same level of accuracy. However, one needs to understand the underlying numerical methods of solving the high-fidelity simulation to implement the intrusive ROMs. Furthermore, the intrusive ROMs are only applicable when the source code for a high-fidelity physics solver is available, which is not the case for certain applications. On the other hand, the non-intrusive ROMs do not require access to the source code of a high-fidelity physics solver. Therefore, we will focus on developing \textit{non-intrusive ROMs}, which use \textit{only} data to approximate the full state fields.
Among many non-intrusive methods, various interpolation techniques are used to build a nonlinear map that predicts new outputs for new inputs. The interpolation techniques include, but are not limited to, Gaussian processes \cite{tapia2018gaussian, qian2006building}, radial basis functions \cite{daniel2007hydraulic,huang2015hull}, Kriging \cite{han2013improving,han2012hierarchical}, and convolutional neural networks \cite{guo2016convolutional,zhang2018application}. Among them, neural networks have been the most popular framework because of their rich representation capability supported by the universal approximation theorem. Such surrogate models have been applied to various physical simulations, including, but not limited to, particle simulation \cite{paganini2018calogan}, nanophotonic particle design \cite{peurifoy2018nanophotonic}, porous media flow \cite{kadeethum2021framework,kadeethum2021continuous,kadeethum2021nonintrusive, zhu2018bayesian}, storm prediction \cite{kim2015time}, fluid dynamics \cite{kutz2017deep}, hydrology \cite{marccais2017prospective,chan2018machine}, bioinformatics \cite{min2017deep}, high-energy physics \cite{baldi2014searching}, turbulence modeling \cite{wang2017comprehensive,parish2016paradigm,duraisamy2015new,ling2016reynolds}, uncertainty propagation in a stochastic elliptic partial differential equation \cite{tripathy2018deep}, a bioreactor with unknown reaction rate \cite{hagge2017solving}, barotropic climate models \cite{vlachas2018data}, and deep Koopman dynamical models \cite{morton2018deep}. However, these methods lack \textit{interpretability} due to their black-box nature, caused by the complex underlying structure of the interpolators, e.g., neural networks. Furthermore, they do not aim to predict the state field solution in general. Therefore, if the target output quantity of interest changes, the model needs to go through the expensive training phase again, which shows a lack of \textit{generalizability}.
To improve the \textit{interpretability} and \textit{generalizability}, we focus on developing a latent space dynamics learning algorithm, where the whole state field data is compressed into a reduced space and the dynamics in the reduced space is learned. Two different types of compression are possible, i.e., linear and nonlinear compression. The popular linear compression can be realized by the singular value decomposition (SVD), justified by the proper orthogonal decomposition (POD) framework. The popular nonlinear compression can be accomplished by the auto-encoder (AE), where the encoder and decoder are designed with neural networks. After the compression, the data size is reduced tremendously. Moreover, the dynamics of the data within the reduced space is often much simpler than the dynamics of the full space. For example, Figure~\ref{fig:simplicityOfLatentSpaceDynamics} shows the reduced space dynamics corresponding to 2D radial advection problems. This motivates our current work to identify the simplified and reduced but latent dynamics with a system identification regression technique.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex9_latent_space.png}
\caption{Four snapshots of 2D radial advection simulation are shown, i.e., at $t=0$, $1.0$, $2.0$, and $3.0$. While the full order model involves 9,216 degrees of freedom, the dynamics of the reduced space with five latent variables from a linear compression technique, i.e., proper orthogonal decomposition, and three latent variables from nonlinear compression technique, i.e., auto-encoder, are much simpler than the ones of the full order model.}
\label{fig:simplicityOfLatentSpaceDynamics}
\end{figure}
There are many existing latent space learning algorithms. DeepFluids \cite{kim2019deep} uses an auto-encoder for nonlinear compression and applies a latent space time integrator to propagate the solution in the reduced space. The authors in \cite{kadeethum2021nonintrusive} and \cite{fresca2021comprehensive} use both linear and nonlinear compressions and apply some interpolation techniques, such as artificial neural networks (ANNs) and radial basis function (RBF) interpolations, within the reduced space to predict the solution for a new parameter value. Xie, et al., in \cite{xie2019non} use POD as a linear compression technique and apply a linear multi-step neural network to predict and propagate the latent space dynamical solutions. Hoang, et al., in \cite{hoang2021projection} compress the solution space in both space and time with POD, which was first introduced in \cite{choi2019space}. Then they use several interpolation techniques, such as polynomial regression, $k$-nearest neighbor regression, random forest regression, and neural network regression, within the reduced space formed by the space--time POD. However, all the aforementioned methods use a complex form as a latent space dynamics model, whose structure is not well understood and which lacks interpretability. On the other hand, Champion, et al., in \cite{champion2019data} use an auto-encoder for the compression and the sparse identification of nonlinear dynamics (SINDy) to identify \textit{ordinary differential equations} (ODEs) for the latent space dynamics, which is simpler than neural networks, improving interpretability. However, they fail to generalize the method to achieve a {\it robust parametric model}. This is partly because one system of ODEs is not enough to cover all the variations within the latent space due to parameter change.
Therefore, we propose to identify latent space dynamics with a set of local systems of ODEs, each tailored to improve the accuracy on a local area of the parameter space. This implies that each local model has a trust region, which covers a sub-region of the whole parameter space. If the set of trust regions covers the whole parameter space, then our model is guaranteed to achieve a certain accuracy level everywhere, achieving a truly parametric model based on a latent space learning algorithm. At the same time, the framework enables a faster calculation than the corresponding high-fidelity model. We call this framework LaSDI, which stands for Latent Space Dynamics Identification.
The procedure of LaSDI is summarized by four distinct steps below and its schematics are depicted in Figure~\ref{fig:procedureLaSDI}:
\begin{enumerate}
\item \textbf{Data Generation}: Generate full order model (FOM) simulation data, i.e., parametric time-dependent solution data.
\item \textbf{Compression}: Apply compression on the simulation data, through either singular value decomposition or auto-encoder to form latent space data.
\item \textbf{Dynamics Identification}: Identify the governing equations that best matches the latent space data in a least-squares sense.
\item \textbf{Prediction}: Use the identified governing equation to predict the latent space solution for a new parameter point. In this paper, we assume that the parameter affects only the initial condition. The predicted latent space solution is reconstructed to the full state by decompression.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width = \linewidth]{Final Figures/LaSDI_Figures.png}
\caption{Schematic of LaSDI algorithm applied to 2D Burgers simulations, consisting of four steps: (1) data generation, (2) compression, (3) dynamics identification, and (4) prediction. The full order model simulation data of 2D Burgers problem are collected in Data Generation step (see Section~\ref{sec:data-generation}). Either SVD or autoencoder is used to compress the simulation data to form latent space data in Compression step (see Section~\ref{sec:compression}). Governing equations that best fits the latent space data are identified in Dynamics Identification step (see Section~\ref{sec:dynamicsIndentification}). The identified governing equation is solved with a predictive latent space initial condition and the full state is reconstructed in Prediction step (see Section~\ref{sec:prediction}).}
\label{fig:procedureLaSDI}
\end{figure}
The main contributions of this paper include
\begin{itemize}
\item A novel local latent space learning algorithm, i.e., LaSDI, that demonstrates robust parametric prediction, is introduced.
\item Three novel dynamics identification algorithms are introduced, i.e., global, local, and interpolated ones.
\item Both good accuracy and speed-up are demonstrated with several numerical experiments.
\end{itemize}
The paper is organized in the following way: Section~\ref{sec:LaSDI} technically describes LaSDI, where each of the four distinct steps is explained in detail. Section~\ref{sec:data-generation} describes how to generate and store the simulation data. Section~\ref{sec:compression} discusses two different types of compression techniques. Section~\ref{sec:dynamicsIndentification} elaborates the procedure of the dynamics identification and subsequently describes three different types. Section~\ref{sec:prediction} elucidates the prediction step. We demonstrate the performance of LaSDI for four different numerical experiments in Section~\ref{sec:results}, where global, local, and interpolated dynamics identification algorithms are compared and analyzed. Finally, we summarize and discuss the implications, limitations, and potential future directions for LaSDI in Section~\ref{sec:conclusion}.
\section{Dynamical system of equations}\label{sec:dynamicalSystem}
We formally state a parameterized dynamical system, characterized by the following time dependent ordinary differential equations (ODEs):
\begin{equation}\label{eq:dynamics}
\frac{d\boldsymbol \solSymb}{dt} = \boldsymbol \rhsSymb(\boldsymbol \solSymb,t),\quad\quad
\boldsymbol \solSymb(0;\boldsymbol{\paramSymb}) = \sol_0(\boldsymbol{\paramSymb}),
\end{equation}
where $t\in[0,\timeSymb_f]$ denotes time with the final time $\timeSymb_f\in\RRplus{}$, and $\boldsymbol \solSymb(t;\boldsymbol{\paramSymb})$ denotes the time-dependent, parameterized state implicitly defined as the solution to System~\eqref{eq:dynamics} with $\boldsymbol \solSymb:[0,\finaltime]\times \mathcal D\rightarrow \RR{\nbig_\spaceSymb}$. Further, $\boldsymbol \rhsSymb: \RR{\nbig_\spaceSymb} \times [0,\timeSymb_f] \rightarrow \RR{\nbig_\spaceSymb}$ with $(\boldsymbol w,\tau)\rightarrow\boldsymbol \rhsSymb(\boldsymbol w, \tau) $ denotes the scaled velocity of $\boldsymbol \solSymb$, which can be either linear or nonlinear in its first argument. The initial state is denoted by $\sol_0:\mathcal D\rightarrow \RR{\nbig_\spaceSymb}$, and $\boldsymbol{\paramSymb} \in \mathcal D$ denotes the parameters with parameter domain $\mathcal D\subseteq\RR{\nsmall_\mu}$. We assume that the parameter affects only the initial condition. System~\eqref{eq:dynamics} can be considered as a semi-discretized version of a system of partial differential equations (PDEs), whose spatial domain is denoted as $\Omega\subset\RR{d}$, $d\innat{3}$, where $\nat{N}:=\{1,\ldots,N\}$. The spatial discretization can be done through many different numerical methods, such as the finite difference, finite element, and finite volume methods.
Many different time integrators are available to approximate the time derivative term, $d\boldsymbol \solSymb/dt$, e.g., explicit and implicit time integrators. A uniform time discretization is assumed throughout the paper, characterized by time step $\Delta \timeSymb\in\RRplus{}$ and time instances $\timeArg{n} = \timeArg{n-1} + \Delta \timeSymb$ for $n\innat{{\nbig_\timeSymb}}$ with $\timeArg{0} = 0$, ${\nbig_\timeSymb}\in\mathbb{N}$. To avoid notational clutter, we introduce the following time discretization-related notations: $\solArg{n} := \solFuncArg{n}$, $\solapproxArg{n} :=\solapproxFuncArg{n}$, $\redsolapproxArg{n} := \redsolapproxFuncArg{n}$, and $\rhsArg{n} := \boldsymbol \rhsSymb(\solFuncArg{n},t^{n}; \boldsymbol{\paramSymb})$.
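As a concrete, illustrative example of the uniform time discretization above, the following Python/NumPy sketch advances a generic system of the form \eqref{eq:dynamics} with the classical fourth-order Runge--Kutta method and stores the solution snapshots column-wise; the right-hand side used in the example is only a placeholder.
\begin{verbatim}
import numpy as np

def integrate_rk4(f, u0, dt, n_steps):
    """Explicit RK4 time stepping for du/dt = f(u, t) on a uniform grid.

    Returns a snapshot matrix of shape (len(u0), n_steps + 1) whose
    n-th column is the state at time t^n = n * dt.
    """
    U = np.empty((u0.size, n_steps + 1))
    U[:, 0], t = u0, 0.0
    for n in range(n_steps):
        u = U[:, n]
        k1 = f(u, t)
        k2 = f(u + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(u + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(u + dt * k3, t + dt)
        U[:, n + 1] = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return U

# placeholder right-hand side: linear decay, u' = -u
U = integrate_rk4(lambda u, t: -u, np.ones(4), dt=0.01, n_steps=100)
\end{verbatim}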
Although many advanced numerical methods are available to solve System~\eqref{eq:dynamics}, as the problem size increases, i.e., a large $\nbig_\spaceSymb$, and the computational domain $\Omega$ gets geometrically complicated, the overall solution time becomes impractically slow, e.g., taking a day or week for one forward simulation. The proposed method, i.e., LaSDI, accelerates the computationally expensive simulations.
\section{LaSDI}\label{sec:LaSDI}
This section mathematically describes LaSDI. As outlined in the Introduction (Section~\ref{sec:intro}), LaSDI consists of four steps, i.e., data generation, compression, identification, and prediction. Each step is described in the subsequent sections.
\subsection{Data generation}\label{sec:data-generation}
LaSDI first generates full order model simulation data by solving the dynamical system \eqref{eq:dynamics} at sampled points of the parameter space, $\mathcal D$. We denote the sampling points by $\boldsymbol{\paramSymb}_{k}\in\mathcal{S}\subset\mathcal D$, $k\innat{\nsmall_\mu}$ where $\mathcal{S}$ denotes a training set. We also denote $\solArgTwo{n}{k}\in\RR{\nbig_\spaceSymb}$ for the $n$-th time step solution of \eqref{eq:dynamics} with $\boldsymbol{\paramSymb} = \boldsymbol{\paramSymb}_{k}$ and arrange the snapshot matrix $\snapshotMatArg{k} = \bmat{\solArgTwo{0}{k} & \cdots & \solArgTwo{{\nbig_\timeSymb}}{k}} \in \RR{\nbig_\spaceSymb \times ({\nbig_\timeSymb}+1)}$ for $\boldsymbol{\paramSymb} = \boldsymbol{\paramSymb}_{k}$. Concatenating all the snapshot matrices side by side, the whole snapshot matrix $\boldsymbol{\snapshotMatSymb}\in\RR{\nbig_\spaceSymb \times ({\nbig_\timeSymb}+1)\nsmall_\mu}$ is defined as
\begin{equation}\label{eq:snapshotMat}
\boldsymbol{\snapshotMatSymb} = \bmat{\snapshotMatArg{1} & \cdots & \snapshotMatArg{\nsmall_\mu}}.
\end{equation}
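A minimal sketch of this bookkeeping step is given below: each training parameter is simulated with the full order model and the per-parameter snapshot matrices are concatenated column-wise, as in Equation \eqref{eq:snapshotMat}; \texttt{fom\_solve} and the toy solver in the usage example are hypothetical placeholders for an actual full order model solver.
\begin{verbatim}
import numpy as np

def build_snapshot_matrix(fom_solve, train_params):
    """Concatenate the per-parameter snapshot matrices X_k into X.

    fom_solve(mu) is assumed to return an array of shape (N_s, N_t + 1)
    holding the full order solution for parameter mu, one state per column.
    """
    snapshots = [fom_solve(mu) for mu in train_params]
    return np.concatenate(snapshots, axis=1)

# toy example: decaying states u(t) = mu * exp(-t) on a uniform time grid,
# where the parameter mu only changes the initial condition
def toy_fom(mu, N_s=4, N_t=10, dt=0.1):
    t = dt * np.arange(N_t + 1)
    return mu * np.exp(-t)[None, :] * np.ones((N_s, 1))

X = build_snapshot_matrix(toy_fom, train_params=[0.8, 1.0, 1.2])
print(X.shape)   # (4, 33)
\end{verbatim}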
\subsection{Compression}\label{sec:compression}
The second step of LaSDI is to compress the snapshot matrix $\boldsymbol{\snapshotMatSymb}$ using either linear or nonlinear compression techniques. The choice between linear and nonlinear compression can be guided by the Kolmogorov $n$-width, which quantifies the error of the best possible approximation by an $n$-dimensional linear subspace. It is defined as:
\begin{equation}\label{eq:kolmogorov_nwidth}
d_n(\mathcal{M}) := \inf_{\mathcal{L}_n} \sup_{f\in\mathcal{M}}\inf_{g\in\mathcal{L}_n} \| f-g \|,
\end{equation}
where $\mathcal{M}$ denotes the manifold of solutions over all time and parameters and $\mathcal{L}_n$ denotes all $n$-dimensional subspaces. If a problem has a solution space whose Kolmogorov $n$-width decays fast, then linear compression will provide an efficient subspace that accurately approximates the true solution. However, if a problem has a solution space whose Kolmogorov $n$-width decays slowly, then linear compression will not be sufficient for good accuracy, and nonlinear compression is necessary. The rate of decay in Kolmogorov $n$-width can be well indicated by the singular value decay (see Figure~\ref{fig: SV Decay}).
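As a practical indicator of this decay, one can count how many POD modes are needed to retain a prescribed fraction of the snapshot energy; the following short NumPy sketch (with an illustrative tolerance) implements such a criterion.
\begin{verbatim}
import numpy as np

def modes_for_energy(X, tol=1e-4):
    """Smallest n such that the first n squared singular values of the
    snapshot matrix X capture a fraction (1 - tol) of the total energy."""
    s = np.linalg.svd(X, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(energy, 1.0 - tol) + 1)
\end{verbatim}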
Section~\ref{sec:POD} describes a linear compression technique, i.e., proper orthogonal decomposition, which leads to \textbf{LaSDI-LS} where LS stands for ``Linear Subspace." Section~\ref{sec:auto-encoder} describes a nonlinear compression technique, i.e., auto-encoder, which leads to \textbf{LaSDI-NM} where NM stands for ``Nonlinear Manifold."
\subsubsection{Proper orthogonal decomposition: LaSDI-LS}\label{sec:POD}
We follow the method of snapshots first introduced by Sirovich \cite{sirovich1987turbulence}. The spatial basis from POD is an optimally compressed representation of $\boldsymbol{\snapshotMatSymb}$ in the sense that it minimizes the projection error, i.e., the difference between the original snapshot matrix and its projection onto the subspace spanned by the basis, $\boldsymbol{\basisSymb}$:
\begin{equation}\label{eq:POD}
\begin{aligned}
\boldsymbol{\basisSymb} &:= \underset{\boldsymbol{\basisDummySymb}\in\RR{\nbig_\spaceSymb\times\nsmall_\spaceSymb},
\boldsymbol{\basisDummySymb}^T\boldsymbol{\basisDummySymb}=\identityArg{\nsmall_\spaceSymb}}{\arg\min} & & \left \|\boldsymbol{\snapshotMatSymb} -
\boldsymbol{\basisDummySymb}\basisDummy^T{\boldsymbol{\snapshotMatSymb}} \right \|_F^2,
\end{aligned}
\end{equation}
where $\|\cdot\|_F$ denotes the Frobenius norm. The solution of POD can be obtained by setting $\boldsymbol{\basisSymb} = \bmat{\leftSingularVecArg{1} & \cdots & \leftSingularVecArg{\nsmall_\spaceSymb}}$, $\nsmall_\spaceSymb < \nsmall_\mu({\nbig_\timeSymb}+1)$, where $\leftSingularVecArg{k}$ is the $k$th column vector of the left singular matrix, $\boldsymbol{\leftSingularMatSymb}$, of the following Singular Value Decomposition (SVD),
\begin{equation}\label{eq:SVD}
\boldsymbol{\snapshotMatSymb} = \boldsymbol{\leftSingularMatSymb} \boldsymbol{\singularValueMatSymb} \boldsymbol{\rightSingularMatSymb}^T.
\end{equation}
Once the basis $\boldsymbol{\basisSymb}$ is built, the snapshot matrix $\boldsymbol{\snapshotMatSymb}\in\RR{\nbig_\spaceSymb\times({\nbig_\timeSymb}+1)\nsmall_\mu}$ can be reduced to the generalized coordinate matrix, $\hat{\snapshotMat}\in\RR{\nsmall_\spaceSymb\times({\nbig_\timeSymb}+1)\nsmall_\mu}$, the so-called reduced snapshot matrix, in the subspace spanned by the column vectors of $\boldsymbol{\basisSymb}$, i.e.,
\begin{equation}\label{eq:reducedSnapshotMat}
\hat{\snapshotMat} := \boldsymbol{\basisSymb}^T\boldsymbol{\snapshotMatSymb}
\end{equation}
Naturally, we can define the reduced snapshot matrix, $\reducedSnapshotMatArg{k}$, tailored for $\boldsymbol{\paramSymb}_{k}$ by extracting the $((k-1)({\nbig_\timeSymb}+1)+1)$th to $k({\nbig_\timeSymb}+1)$th column vectors of $\hat{\snapshotMat}$, i.e.,
\begin{equation}\label{eq:reducedSnapshotMatArg}
\reducedSnapshotMatArg{k} := \bmat{\redsolapproxArgTwo{0}{k} & \cdots & \redsolapproxArgTwo{{\nbig_\timeSymb}}{k}},
\end{equation}
where $\redsolapproxArgTwo{j}{k}\in\RR{\nsmall_\spaceSymb}$ denotes the reduced coordinates at the $j$th time step for $\boldsymbol{\paramSymb} = \boldsymbol{\paramSymb}_{k}$. The matrix $\reducedSnapshotMatArg{k}$ describes the latent space trajectory corresponding to $\boldsymbol{\paramSymb}_{k}$. For example, Step 2 of Figure~\ref{fig:procedureLaSDI} shows a graph of the latent space trajectory for the last parameter value, $\boldsymbol{\paramSymb}_{\nsmall_\mu}$. These reduced-coordinate data will be used to train either global or local DI models to identify the dynamics of the latent space (see Section~\ref{sec:dynamicsIndentification}).
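A minimal Python sketch of this linear compression step is given below. It is meant only to illustrate Eqs.~\eqref{eq:POD}--\eqref{eq:reducedSnapshotMatArg}; the function names (e.g., \texttt{pod\_compress}) and array shapes are assumptions made for the example and not the implementation used in this work.
\begin{verbatim}
# Illustrative sketch of LaSDI-LS compression: POD basis via truncated SVD and
# projection of the snapshots onto the reduced space.
import numpy as np

def pod_compress(X, n_s):
    # X: snapshot matrix of shape (N_s, N_mu*(N_t+1)); n_s: latent dimension
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Phi = U[:, :n_s]        # POD basis (first n_s left singular vectors)
    X_hat = Phi.T @ X       # reduced snapshot matrix
    return Phi, X_hat

def extract_trajectory(X_hat, k, N_t):
    # columns ((k-1)(N_t+1)+1), ..., k(N_t+1) in 1-based indexing
    return X_hat[:, (k - 1) * (N_t + 1) : k * (N_t + 1)]
\end{verbatim}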
\begin{figure}
\centering
\includegraphics[width = .6\textwidth]{Final Figures/sv_full.png}
\caption{The decay of the singular values for all the problems considered in the paper. It shows that 2D Burgers and radial advection problems have slow singular value decay, indicating that nonlinear compression would work better than linear compression. On the other hand, 1D Burgers and heat conduction problems have fast singular value decay, indicating that linear compression is sufficient to achieve a good accuracy.}
\label{fig: SV Decay}
\end{figure}
\subsubsection{Auto-encoder: LaSDI-NM} \label{sec:auto-encoder}
Auto-encoders act as a nonlinear analogue to POD. As illustrated in \cite{kim2021fast}, the nonlinear manifolds generated by auto-encoders can outperform the linear subspaces generated by POD. In general, we train two neural networks, $\mathcal{G}_{\text{en}}: \RR{N_s} \rightarrow \RR{n_s}$ and $\mathcal{G}_{\text{de}}: \RR{n_s} \rightarrow \RR{N_s}$, to minimize
\begin{equation}
\text{MSE}\left(\boldsymbol{\snapshotMatSymb} - \mathcal{G}_{\text{de}}(\mathcal{G}_{\text{en}}(\boldsymbol{\snapshotMatSymb}))\right),
\end{equation}
where MSE denotes the mean-squared error. As above, we can define
\begin{equation}
\hat{\snapshotMat} := \mathcal{G}_{\text{en}}(\boldsymbol{\snapshotMatSymb})
\end{equation}
and can extract the reduced snapshot matrix $\hat{\snapshotMat}_{k}$ by considering the $((k-1)(N_t+1) + 1)$th to $k(N_t+1)$th columns of $\hat{\snapshotMat}$.
The general architecture of $\mathcal{G}_{\text{en}}$ and $\mathcal{G}_{\text{de}}$ may vary. For the purposes of this paper, we use a masked shallow network as described in \cite{kim2021fast}. This network architecture allows for universality across various PDEs and data shapes. The masking of the network allows for increased efficiency by not including full linear layers. However, the masking requires that we take care when constructing neural networks for higher dimensional simulations because the organization of spatial data must match the masking. Specifically, the architecture of both $\mathcal{G}_{\text{en}}$ and $\mathcal{G}_{\text{de}}$ consists of three layers, i.e., input, output, and one hidden layer. The first layers are fully connected with nonlinear activation functions, whereas the final layer has no nonlinear activation function, i.e., it is fully linear. Then, a masking operator is applied to make $\mathcal{G}_{\text{de}}$ sparse. We make the sparsity of the mask matrix similar to the one in the mass matrix to respect the sparsity induced by the underlying numerical scheme. Further details on the architecture of the neural networks can be found in \cite{kim2021fast}.
When training the auto-encoder, $n_s$ becomes a key hyper-parameter. Similar to POD, if $n_s$ is too small, the compression-decompression technique incurs a significant loss of information, so $n_s$ cannot be chosen too small. In contrast to POD, where increasing the basis size improves results, this is not always the case with an auto-encoder: for larger $n_s$, a significant improvement in the error might not be seen. Further, when applying the DI techniques described below, high-dimensional complex nonlinear systems are harder to approximate than those with simpler dynamics. Thus, we tune $n_s$ to be the smallest possible dimension that does not compromise the accuracy of the reconstructed snapshot matrix.
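A simplified PyTorch sketch of such an auto-encoder is shown below. It omits the masking of the decoder and uses illustrative layer widths, activation functions, and data, so it should be read as a schematic of the encoder--decoder training rather than the exact architecture of \cite{kim2021fast}.
\begin{verbatim}
# Simplified sketch of the LaSDI-NM compression: a shallow auto-encoder trained
# to minimize the mean-squared reconstruction error (decoder masking omitted).
import torch
import torch.nn as nn

class ShallowAutoencoder(nn.Module):
    def __init__(self, N_s, n_s, hidden=1000):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_s, hidden), nn.Sigmoid(),
                                     nn.Linear(hidden, n_s))
        self.decoder = nn.Sequential(nn.Linear(n_s, hidden), nn.Sigmoid(),
                                     nn.Linear(hidden, N_s))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ShallowAutoencoder(N_s=1000, n_s=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
X = torch.rand(606, 1000)   # snapshots stored row-wise (illustrative data)
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), X)
    loss.backward()
    opt.step()
\end{verbatim}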
\subsection{Dynamics Identification} \label{sec:dynamicsIndentification}
To identify the latent space dynamics, we employ regression methods inspired by SINDy \cite{SINDy2016}. SINDy uses sparse regression tools, such as sequential threshold least-squares, Lasso, Ridge, and Elastic Net. However, in LaSDI we do not require our discovered dynamical system to be sparse. Instead, we want to generate the dynamical system that, when integrated, produces the most numerically accurate results. Because we do not require our solution to be sparse, we refer to this identification step as \textit{dynamics identification} (DI).
In general, we aim to find a dynamical system $\dot{\hat\sol}(t) = f(\hat\sol(t))$ whose discrete solution best-matches the latent space discrete trajectory data, $\hat{\snapshotMat}_{k}\in \RR{n_s\times (N_t+1)}$ in a least-squares sense. The goal is to identify a good $f(\hat\sol(t))$. To formally describe the DI procedure, we take the transpose of the latent space trajectory data, i.e., ${\hat{\snapshotMat}_{k}}^T$, to arrange the temporal evolution in a downward direction:
\begin{equation} \label{eq:temporaldownward}
{\hat{\snapshotMat}_{k}}^T = \bmat{\redsolapproxArg{0}^T \\ \vdots \\ \redsolapproxArg{N_t}^T} = \bmat{\hat{\solSymb}_0(t_0) & \hdots & \hat{\solSymb}_{n_s}(t_0) \\ \vdots & \ddots & \vdots \\ \hat{\solSymb}_0(t_{N_t}) & \hdots & \hat{\solSymb}_{n_s}(t_{N_t}) }
\end{equation}
Then we approximate the time derivative term with a finite difference and arrange the temporal evolution in a downward direction as well:
\begin{equation} \label{eq:dt_downward}
{\dot{\hat{\snapshotMat}}_{k}}^T = \bmat{\reddotsolapproxArg{0}^T \\ \vdots \\ \reddotsolapproxArg{N_t}^T} = \bmat{\dot{\redsolSymb}_0(t_0) & \hdots & \dot{\redsolSymb}_{n_s}(t_0) \\ \vdots & \ddots & \vdots \\ \dot{\redsolSymb}_0(t_{N_t}) & \hdots & \dot{\redsolSymb}_{n_s}(t_{N_t}) }
\end{equation}
We prescribe a library of functions, i.e., $\Theta \left ({\hat{\snapshotMat}_{k}}^T \right )\in\RR{({\nbig_\timeSymb}+1)\times n_{\ell}}$, to approximate the right-hand side of the dynamical system, i.e., $f\left (\hat\sol(t) \right )$. For example, $\Theta \left ({\hat{\snapshotMat}_{k}}^T \right )$ may take the form of constant, polynomial, trigonometric, and exponential functions:
\begin{equation} \label{eq:library}
\Theta \left ({\hat{\snapshotMat}_{k}}^T \right ) = \bmat{1 & {\hat{\snapshotMat}_{k}}^T & {\hat{\snapshotMat}_{k,P_2}}^T & \hdots & {\hat{\snapshotMat}_{k,P_{\ell}}}^T & \hdots & \sin({\hat{\snapshotMat}_{k}}^T) & \cos({\hat{\snapshotMat}_{k}}^T) & \hdots & \exp({\hat{\snapshotMat}_{k}}^T) },
\end{equation}
where $n_{\ell}$ denotes the number of columns, which is determined by the choice of the library of functions, and $P_{\ell}$ represents all the polynomials of order $\ell$. For the numerical examples in this paper, $\ell\leq 5$ is sufficient for an accurate approximation of the dynamical system. For example,
${\hat{\snapshotMat}_{k,P_2}}^T$ and $\exp({\hat{\snapshotMat}_{k}}^T)$ are defined as
\begin{equation} \label{eq:P2}
{\hat{\snapshotMat}_{k,P_2}}^T = \bmat{\hat{\solSymb}_0^2(t_0) & \hat{\solSymb}_0(t_0)\hat{\solSymb}_1(t_0) & \hdots & \hat{\solSymb}_1^2(t_0) & \hdots & \hat{\solSymb}_{n_s}^2(t_0) \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
\hat{\solSymb}_0^2(t_{N_t}) & \hat{\solSymb}_0(t_{N_t})\hat{\solSymb}_1(t_{N_t}) & \hdots & \hat{\solSymb}_1^2(t_{N_t}) & \hdots & \hat{\solSymb}_{n_s}^2(t_{N_t})
}
\end{equation}
and
\begin{equation} \label{eq:exo}
\exp({\hat{\snapshotMat}_{k}}^T) =
\bmat{\exp(\hat{\solSymb}_0(t_0)) & \hdots & \exp(\hat{\solSymb}_{n_s}(t_0)) \\
\vdots & \ddots & \vdots \\
\exp(\hat{\solSymb}_0(t_{N_t})) & \hdots & \exp(\hat{\solSymb}_{n_s}(t_{N_t}))}.
\end{equation}
The introduction of ${\hat{\snapshotMat}_{k}}^T$, ${\dot{\hat{\snapshotMat}}_{k}}^T$, and $\Theta \left ({\hat{\snapshotMat}_{k}}^T \right )$ in Eqs.~\eqref{eq:temporaldownward}, \eqref{eq:dt_downward}, and \eqref{eq:library} allows us to write the following regression problem:
\begin{equation}\label{eq:DI_single}
\underset{\Xi_k\in\RR{n_{\ell}\times\nsmall_\spaceSymb}}{\text{minimize}} \quad \left \| \dot{\hat{\snapshotMat}}_{k}^T - \Theta(\hat{\snapshotMat}_k^T) \Xi_k \right \|^2_F
\end{equation}
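The following Python sketch illustrates Eqs.~\eqref{eq:temporaldownward}--\eqref{eq:DI_single} for one training parameter; the library shown contains only constant, linear, and quadratic terms (trigonometric and exponential columns can be appended analogously), and the helper names (e.g., \texttt{build\_library}) are assumptions made for the example.
\begin{verbatim}
# Illustrative sketch of the DI regression for a single training parameter.
import numpy as np

def build_library(Z, degree=1):
    # Z: latent trajectory arranged row-wise in time, shape (N_t+1, n_s)
    cols = [np.ones((Z.shape[0], 1))]                  # constant term
    if degree >= 1:
        cols.append(Z)                                 # linear terms
    if degree >= 2:
        n = Z.shape[1]
        cols += [(Z[:, i] * Z[:, j])[:, None]          # quadratic (incl. cross) terms
                 for i in range(n) for j in range(i, n)]
    return np.hstack(cols)

def fit_di(Z, dt, degree=1):
    Zdot = np.gradient(Z, dt, axis=0)                  # finite-difference derivative
    Theta = build_library(Z, degree)
    Xi, *_ = np.linalg.lstsq(Theta, Zdot, rcond=None)  # least-squares, no sparsity
    return Xi                                          # shape (n_ell, n_s)
\end{verbatim}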
So far, we have shown how to build a local DI for a specific sample parameter, i.e., $\boldsymbol{\paramSymb}_k$. However, this is not the only option. As a matter of fact, one can build a DI model for multiple parameter values. This observation leads to two types of DI: one using all the training data to inform the underlying dynamical system, and one using only training data close to a query parameter point to model the dynamics. The former we refer to as \textit{Global DI} and the latter as \textit{Local DI}. Note that both global and local DIs can be considered \textbf{region-based} because they set a unified DI for each region.
Alternatively, we can consider \textbf{point-wise} DIs, where the coefficients $\Xi_k$ of the DI models built for single training points $\boldsymbol{\paramSymb}_k$ by Eq.~\eqref{eq:DI_single} are interpolated at a new query point, $\boldsymbol{\paramSymb}^*$. We refer to this approach as \textit{interpolated DI} and explain the interpolation in detail in Section~\ref{sec:interpolatedDI}.
\subsubsection{Region-based global DI}\label{sec:globalDI}
Global DI has a straightforward implementation. We include all the training snapshots in the identification of the latent space dynamics. The discovered dynamics then come from modifying Eq.~\eqref{eq:DI_single} to
\begin{equation}\label{eq:globalDI}
\underset{\Xi\in\RR{n_{\ell}\times\nsmall_\spaceSymb}}{\text{minimize}} \quad \sum_{k=1}^{\nsmall_\mu}\left \| \dot{\hat{\snapshotMat}}_k^T - \Theta(\hat{\snapshotMat}_k^T) \Xi \right \|^2_F
\end{equation}
This gives one universal DI model that covers the whole parameter space, which might be a good model if the latent space dynamics do not drastically change based on parameter values. This might be valid for a small parameter space. However, for a larger parameter space, the latent space dynamics might change drastically. Therefore, we introduce local DI.
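As a sketch, the global DI of Eq.~\eqref{eq:globalDI} can be obtained by stacking the libraries and time derivatives of all training trajectories and solving a single least-squares problem; the snippet below reuses the hypothetical \texttt{build\_library} helper from the previous sketch and is illustrative rather than the implementation used here.
\begin{verbatim}
# Illustrative sketch of the region-based global DI: one coefficient matrix Xi
# fit to all training trajectories at once.
import numpy as np

def fit_global_di(trajectories, dt, degree=1):
    # trajectories: list of latent trajectories, each of shape (N_t+1, n_s)
    Theta_all = np.vstack([build_library(Z, degree) for Z in trajectories])
    Zdot_all = np.vstack([np.gradient(Z, dt, axis=0) for Z in trajectories])
    Xi, *_ = np.linalg.lstsq(Theta_all, Zdot_all, rcond=None)
    return Xi
\end{verbatim}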
\subsubsection{Region-based local DI}\label{sec:localDI}
For a large parameter space, we should not expect a single dynamical system to govern all the latent space trajectories from the whole training data. However, we might expect the latent space dynamics to behave similarly within local regions of the parameter space. This observation allows us to build a local DI which provides a good model for a local region. The local DI takes training simulation data from $n_{\text{DI}}$ neighboring points. Therefore, the least-squares regression problem for the $r$th local DI becomes
\begin{equation}\label{eq:localDI}
\underset{\Xi_{r}\in\RR{n_{\ell}\times\nsmall_\spaceSymb}}{\text{minimize}} \quad \sum_{k\in\mathcal{I}_{\numDI}(r)}\left \| \dot{\hat{\snapshotMat}}_k^T - \Theta(\hat{\snapshotMat}_k^T) \Xi_{r} \right \|^2_F,
\end{equation}
where $\trainingSet_{\numDI}(r)\in\mathcal{S}$ denotes a set of $n_{\text{DI}}$ training parameters that determine $r$th local parameter region and its corresponding set of indices is denoted as $\mathcal{I}_{\numDI}(r)$. The $r$th local parameter region, $\region{r}$, is defined by the set of points that are closest to parameters in $\trainingSet_{\numDI}(r)$ with some distance measure, i.e.,
\begin{equation}
\region{r} \equiv \{ \boldsymbol{\paramSymb}^*\in\mathcal D \mid \dist{\boldsymbol{\paramSymb}^*}{\boldsymbol{\paramSymb}_j} \leq \dist{\boldsymbol{\paramSymb}^*}{\boldsymbol{\paramSymb}_k}, \boldsymbol{\paramSymb}_j\in\trainingSet_{\numDI}(r), \boldsymbol{\paramSymb}_k\in\mathcal{S}\setminus\trainingSet_{\numDI}(r) \},
\end{equation}
where $\dist{\cdot}{\cdot}$ denotes a distance between two points in $\mathcal D$. We use Euclidean distance that is defined as
\begin{equation}\label{eq:nn}
d(\boldsymbol{\paramSymb}_i, \boldsymbol{\paramSymb}_j) := ||\boldsymbol{\paramSymb}_i-\boldsymbol{\paramSymb}_j||_2,
\end{equation}
but other distance functions can be used as well. Note that $n_{\text{DI}}$ is a parameter that can be tuned. If $n_{\text{DI}}=\nsmall_\mu$, we recover the global DI; therefore, the global DI can be considered a special case of the local DIs. Figure \ref{fig: method local global} illustrates several examples of global and local DIs in a 2D parameter space, together with the local parameter regions relevant to a specific $\boldsymbol{\paramSymb}^*$, denoted by pink dots.
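A minimal sketch of the neighbor selection underlying the local DI is shown below; the $n_{\text{DI}}$ training parameters nearest to a query point (in the Euclidean distance of Eq.~\eqref{eq:nn}) are selected, and the corresponding trajectories are passed to the regression of Eq.~\eqref{eq:localDI}. The helper names are assumptions made for the example.
\begin{verbatim}
# Illustrative sketch: select the n_DI training parameters nearest to a query
# point; their trajectories define the local DI regression.
import numpy as np

def local_indices(mu_star, training_params, n_di):
    # training_params: array of shape (N_mu, d); mu_star: array of shape (d,)
    dist = np.linalg.norm(training_params - mu_star, axis=1)
    return np.argsort(dist)[:n_di]

# e.g., Xi_r = fit_global_di([trajectories[k] for k in
#                             local_indices(mu_star, training_params, 4)], dt)
\end{verbatim}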
\begin{figure}
\centering
\includegraphics[width = .7\textwidth]{Final Figures/global-local.png}
\caption{An example of a 2-dimensional parameter space with nine training values uniformly spaced throughout the space. For an arbitrary parameter point, $\boldsymbol{\paramSymb}^*\in\mathcal D$, we illustrate three different latent space dynamics identification regions, $\region{r}$, colored by blue, for $n_{\text{DI}}=9, 2$, and $4$. The case of $n_{\text{DI}}=9$ is the global DI, while the cases of $n_{\text{DI}}=2$ and $4$ are local DIs.}
\label{fig: method local global}
\end{figure}
\subsubsection{Point-wise interpolated DI}\label{sec:interpolatedDI}
For $\boldsymbol{\paramSymb}^*\in\region{r}$, the interpolated DI interpolates the coefficients $\Xi_k$, $k\in\mathcal{I}_{\numDI}(r)$, where the $\Xi_k$ are obtained by solving Eq.~\eqref{eq:DI_single}, to obtain the interpolated coefficients, $\Xi^*$, which are used for the LaSDI prediction in Eq.~\eqref{eq:LaSDI_ODE}. Many interpolation techniques can be utilized here, but we use two: radial basis functions (RBFs) and linear bi-variate splines. In each case, we interpolate element-wise in $\Xi^*$. For the $n_{\ell}\cdot \nsmall_\spaceSymb$ elements in $\Xi^*$, the corresponding interpolation functions are defined below.
The RBF interpolation function for each element of $\Xi^*$ for $\boldsymbol{\paramSymb}^*\in\region{r}$, is defined as
\begin{equation}\label{eq:RBF}
\psi(\boldsymbol{\paramSymb}^*; \{\boldsymbol{\paramSymb}_k\}_{k\in \mathcal{I}_{\numDI}(r)}) = \sum_{k\in\mathcal{I}_{\numDI}(r)} \rbfcoef_{k} \ \psi_k \left (\dist{\boldsymbol{\paramSymb}^*}{\boldsymbol{\paramSymb}_k} \right )
\end{equation}
where $\psi_k$ are radial basis functions; $\rbfcoef_{k}\in\RR{}$ denotes $k$th RBF interpolation coefficient, which is obtained by solving a least-squares problem; $\dist{\cdot}{\cdot}$ is the Euclidean distance as defined in Eq.~\eqref{eq:nn}; and $\boldsymbol{\paramSymb}_k\in\region{r} \cap \mathcal{S}$. Many radial basis functions are available. For our numerical experiments, we use the multiquadric radial basis functions defined as
\begin{equation}
\psi_k (\boldsymbol{\paramSymb}^*;\{\boldsymbol{\paramSymb}_k\}_{k\in \mathcal{I}_{\numDI}(r)}) = \sqrt{\frac{\dist{\boldsymbol{\paramSymb}^*}{\boldsymbol{\paramSymb}_k}^2}{\epsilon^2} + 1},
\end{equation}
where $\epsilon$ approximates average distance between $\{\boldsymbol{\paramSymb}_k\}_{k\in\mathcal{I}_{\numDI}(r)}$.
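A sketch of the entry-wise multiquadric RBF interpolation of the DI coefficients is given below, using SciPy's \texttt{Rbf} class (whose default shape parameter approximates the average distance between the nodes); the array shapes and variable names are assumptions for the example.
\begin{verbatim}
# Illustrative sketch: point-wise interpolated DI with multiquadric RBFs,
# applied entry-wise to the coefficient matrices.
import numpy as np
from scipy.interpolate import Rbf

def interpolate_xi_rbf(mu_star, mus, Xis):
    # mus: (m, 2) training parameters in the local region
    # Xis: (m, n_ell, n_s) DI coefficients fit per training point
    m, n_ell, n_s = Xis.shape
    Xi_star = np.empty((n_ell, n_s))
    for i in range(n_ell):
        for j in range(n_s):
            f = Rbf(mus[:, 0], mus[:, 1], Xis[:, i, j], function='multiquadric')
            Xi_star[i, j] = f(mu_star[0], mu_star[1])
    return Xi_star
\end{verbatim}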
The linear bi-variate spline approximates $\Xi^*$ using a linear combination of the $\Xi_k$ based on $\dist{\boldsymbol{\paramSymb}^*}{\boldsymbol{\paramSymb}_k}$ with no smoothing. If the $\boldsymbol{\paramSymb}_k$ fall on a uniform grid, as in our numerical experiments, the bi-linear interpolation function for each entry of $\Xi^*$ can be defined in rectangular regions. For example, in a two-dimensional parameter space, it is defined by the four corner points of a specific region (e.g., $\region{r}$), $\boldsymbol{\paramSymb}_0 = \bmat{\mu_{0,0} & \mu_{0,1}}^T$, $\boldsymbol{\paramSymb}_1 = \bmat{\mu_{1,0} & \mu_{0,1}}^T$, $\boldsymbol{\paramSymb}_2 = \bmat{\mu_{0,0} & \mu_{1,1}}^T$, $\boldsymbol{\paramSymb}_3 = \bmat{\mu_{1,0} & \mu_{1,1}}^T$, and the corresponding $\Xi_k$s. For $\boldsymbol{\paramSymb}^* = \bmat{\mu^*_0 & \mu^*_1}^T\in \region{r}\setminus\partial\region{r}$, each entry of $\Xi^*$ can be obtained by
\begin{equation}\label{eq:bilinear}
\psi(\boldsymbol{\paramSymb}^*;\{\boldsymbol{\paramSymb}_k\}_{k\in \mathcal{I}_{\numDI}(r)}) = \frac{1}{(\mu_{1,0}-\mu_{0,0})(\mu_{1,1}-\mu_{0,1})}[\mu_{1,0}-\mu^*_0\;\; \mu^*_0-\mu_{0,0}]\begin{bmatrix} \Xi_{0} & \Xi_{1} \\ \Xi_{2} & \Xi_{3}\end{bmatrix}\begin{bmatrix} \mu_{1,1}-\mu^*_1 \\ \mu^*_1-\mu_{0,1}\end{bmatrix}
\end{equation}
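For a rectangular cell, Eq.~\eqref{eq:bilinear} can be evaluated entry-wise as in the following sketch; the corner ordering mirrors the equation above, and the variable names are illustrative assumptions rather than part of the original formulation.
\begin{verbatim}
# Illustrative sketch of the bi-linear interpolation of Eq. (bilinear),
# applied entry-wise to the corner coefficient matrices Xi_0, ..., Xi_3.
import numpy as np

def bilinear_xi(mu_star, mu00, mu10, mu01, mu11, Xi0, Xi1, Xi2, Xi3):
    # mu00 < mu10 bound the first parameter; mu01 < mu11 bound the second
    wx = np.array([mu10 - mu_star[0], mu_star[0] - mu00])
    wy = np.array([mu11 - mu_star[1], mu_star[1] - mu01])
    denom = (mu10 - mu00) * (mu11 - mu01)
    return (wx[0] * (Xi0 * wy[0] + Xi1 * wy[1])
            + wx[1] * (Xi2 * wy[0] + Xi3 * wy[1])) / denom
\end{verbatim}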
\subsubsection{Notes on DIs}\label{sec:notesOnDI}
In general, finding an accurate and numerically stable approximation can be difficult for unknown nonlinear problems. Because of this, the set of terms included in the regression becomes a hyper-parameter that must be tuned. As with tuning $n_s$, we strive to use the fewest terms that yield the best accuracy. Therefore, we now discuss the procedure used to obtain the best dynamics approximation given a set of trajectories in a latent space:
\begin{enumerate}
\item \textbf{Start simple}: start with polynomial order one. This generates the linear dynamical system that best approximates the latent space dynamics.
\item \textbf{Verification}: Visually verify that the new trajectories approximate the training data.
\item If a better fit is required, we include the following modifications in order of importance:
\begin{itemize}
\item \textbf{Rescale}: rescale the data so that $\max\left(\left|\hat{\snapshotMat}_{k}\right|\right)\leq 1$.
\item \textbf{Enrich}: enrich the library with more terms, such as polynomials with a higher order, mixed polynomial terms (e.g., $\hat{\solSymb}_0\hat{\solSymb}_1$), trigonometric and exponential terms.
\end{itemize}
\end{enumerate}
Solving one of the least-squares problems, i.e., Eqs.~\eqref{eq:DI_single}, \eqref{eq:globalDI}, and \eqref{eq:localDI}, or utilizing the point-wise approach in Section~\ref{sec:interpolatedDI}, gives a latent space dynamical system in the form of a system of ordinary differential equations (ODEs):
\begin{equation} \label{eq:LaSDI_ODE}
\dot{\hat\sol}(t) = \rhsArg{n_{\text{DI}},r}(\hat\sol(t);\Xi),
\end{equation}
where the right-hand side, $\rhsArg{n_{\text{DI}},r}:\RR{n_s}\times\RR{n_{\ell}}\rightarrow\RR{n_s}$, is determined by the library of functions specified in Eq.~\eqref{eq:library} and the coefficients identified from the least-squares problems or interpolations.
\subsection{Prediction}\label{sec:prediction}
After global, local, or interpolated DIs are set in the dynamics identification step, one can solve them to predict new latent space dynamics trajectories corresponding to a new parameter $\boldsymbol{\paramSymb}^*\in\mathcal D$. If the global DI is used, there is no ambiguity. However, if local DIs are used, then one needs to determine which local DI should be solved. This can be done by checking whether $\boldsymbol{\paramSymb}^*\in\region{r}$. However, pre-defining $\region{r}$ over the whole parameter space, $\mathcal D$, can be a daunting process for non-uniform training points. Therefore, the interpolated DIs might be more practical for non-uniform training points.
\subsubsection{Latent space dynamics solve and reconstruction}\label{sec:latentspace_solve}
Once an appropriate DI is identified for a given $\boldsymbol{\paramSymb}^*$, an appropriate initial condition $\redsolapproxArg{0}^*$ is required. The latent space initial condition corresponding to $\boldsymbol{\paramSymb}^*$ is obtained by applying the compression step to the full order model initial condition $\solArg{0}^*$ corresponding to $\boldsymbol{\paramSymb}^*$, as described in Section~\ref{sec:compression}. For the linear compression,
\begin{equation} \label{eq:linear_init}
\redsolapproxArg{0}^* \equiv \boldsymbol{\basisSymb}^T\solArg{0}^*
\end{equation}
and for the nonlinear compression,
\begin{equation} \label{eq:nonlinear_init}
\redsolapproxArg{0}^* \equiv \mathcal{G}_{\text{en}}(\solArg{0}^*)
\end{equation}
are used. Using this initial condition, Eq.~\eqref{eq:LaSDI_ODE} is solved to predict the latent space dynamics trajectory, $\redsolapproxArg{n}^*$, using the Dormand-Prince pair of formulas \cite{dormand1980family}. At each iteration, the local error is controlled using a fourth-order method, and the step is then taken using a fifth-order formulation via local extrapolation. Then, the approximated full state trajectories, $\solapproxArg{n}^*$, are restored by $\solapproxArg{n}^* \equiv \boldsymbol{\basisSymb}\redsolapproxArg{n}^*$ if the linear compression was used and by $\solapproxArg{n}^* \equiv \mathcal{G}_{\text{de}}(\redsolapproxArg{n}^*)$ if the auto-encoder is used as a nonlinear compression.
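A compact Python sketch of the prediction step for the linear-compression case is shown below; it uses SciPy's \texttt{solve\_ivp} with the \texttt{RK45} method (an implementation of the Dormand-Prince pair) and reuses the hypothetical \texttt{build\_library} helper introduced earlier, so it is illustrative rather than the solver used in this work.
\begin{verbatim}
# Illustrative sketch of the LaSDI prediction step (linear compression case):
# compress the FOM initial condition, integrate the identified latent dynamics
# with the Dormand-Prince pair, and decompress the latent trajectory.
import numpy as np
from scipy.integrate import solve_ivp

def lasdi_predict(u0, Phi, Xi, t_eval, degree=1):
    z0 = Phi.T @ u0                                    # latent initial condition
    def rhs(t, z):
        theta = build_library(z[None, :], degree)      # 1 x n_ell library row
        return (theta @ Xi).ravel()
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), z0,
                    t_eval=t_eval, method='RK45')
    return Phi @ sol.y                                 # full states, (N_s, N_t+1)
\end{verbatim}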
\section{Numerical Results}\label{sec:results}
\label{sec: Results}
We demonstrate the performance of LaSDI for four different numerical examples, i.e., 1D and 2D Burgers equations, heat conduction, and radial advection problems. The governing equations, initial conditions, boundary conditions, parameter space, and domain are specified in Table \ref{tab:examples}.
\begin{table}[ht]
\centering
\def1.5{1.3}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Equation} & \textbf{Initial Condition} & \textbf{Domain/Boundary} \\ \hline
\begin{tabular}{c}
\underline{1D Burgers} \\
$u_t = -u u_x$ \end{tabular} & $\begin{array}{c} u(0,x;a,w) = a\cdot \exp(-x^2/w) \\ a\in [0.7, 0.9] \\ w\in [0.9,1.1] \end{array} $ & $\begin{array}{c} \Omega =[-3,3]\\ t\in [0,1]\\ u(-3,t) = u(3,t) = 0 \end{array}$\\
\hline
\begin{tabular}{c}
\underline{2D Burgers}\\
$\mathbf{u}_t = - \mathbf{u}\cdot \nabla \mathbf{u} + \frac{1}{10000}\Delta \mathbf{u}$ \end{tabular} & $\begin{array}{c} \mathbf{u}(0,\mathbf{x};a,w) = \begin{bmatrix} a\cdot \exp(-||\mathbf{x}||_2^2/w) \\ a\cdot \exp(-||\mathbf{x}||_2^2/w) \end{bmatrix} \\ a\in [0.7, 0.9]\\w\in [0.9,1.1]\end{array}$ & $\begin{array}{c}\Omega =[-3,3]\times [-3,3] \\ t\in [0,2] \\ \mathbf{u}(\mathbf{x},t) = 0\text{ on } \partial\Omega \end{array}$\\
\hline
\begin{tabular}{c}
\underline{Heat Conduction}\\
$u_t= \nabla\cdot (1+u)\nabla u$ \end{tabular} & $\begin{array}{c} u(0,
\mathbf{x}; \omega, a) = a\sin(\omega(x_1+x_2))+a \\ \omega \in [0.2,5.0]\\a\in [1.8,2.2]\end{array}$ & $\begin{array}{c}\Omega = [0,1]\times [0,1] \\ t\in [0,1] \\ \frac{\partial u}{\partial n} = 0\text{ on } \partial \Omega \end{array}$\\
\hline
\begin{tabular}{c}
\underline{Radial Advection}\\
$u_t =- \mathbf{v}\cdot \nabla u$ \\ $\mathbf{v} = \frac{\pi}{2}d\begin{bmatrix} x_2 \\ -x_1 \end{bmatrix}$ \\ $
d =((1-x_1^2)(1-x_2^2))^2$
\end{tabular} & $\begin{array}{c} u(0,\mathbf{x};\omega) = \sin(\pi \omega x_1)\sin(\pi\omega x_2) \\ \omega \in [0.6,1.0] \text{ or } \omega \in [0.6,1.4]\end{array}$ & $\begin{array}{c}\Omega = [-1,1]\times [-1,1] \\ t\in [0,3] \\ u(\mathbf{x},t) = 0\text{ on } \partial \Omega \\ \end{array}$\\
\hline
\end{tabular}
\caption{The four PDEs used to generate FOMs for the application of LaSDI.}
\label{tab:examples}
\end{table}
For both 1D and 2D Burgers problems, a uniform space discretization (i.e., $dx = 6/1000$ for 1D and $dx = dy = 1/10$ for 2D Burgers problems) is used for the full order model solve. The first order spatial derivative term is approximated by the backward difference scheme, while the diffusion terms are approximated by the central difference scheme. This generates a semi-discretized PDE characterized by a system of nonlinear ordinary differential equations specified in Eq.~\eqref{eq:dynamics}. We integrate each of these systems over a uniform time-step (i.e., $\Delta t = 1/1000$ for 1D and $\Delta t = 2/1500$ for 2D Burgers problems) using the implicit backward Euler time integrator, i.e., $\solArg{n} - \solArg{n-1} = \Delta t \rhsArg{n}$.
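For illustration, a minimal Python sketch of this 1D Burgers full order model is given below; it uses a coarser grid and time step than those quoted above and a generic nonlinear solver for the backward Euler step, so it is a schematic of the discretization rather than the solver used to generate the training data.
\begin{verbatim}
# Illustrative sketch of the 1D Burgers FOM: backward-difference advection and
# implicit backward Euler (coarser grid/time step than in the paper).
import numpy as np
from scipy.optimize import fsolve

nx = 201                                   # paper uses dx = 6/1000 (finer grid)
x = np.linspace(-3.0, 3.0, nx)
dx = x[1] - x[0]
dt = 1.0 / 100                             # paper uses dt = 1/1000

def rhs(u):
    dudx = np.zeros_like(u)
    dudx[1:] = (u[1:] - u[:-1]) / dx       # backward difference
    f = -u * dudx
    f[0] = f[-1] = 0.0                     # u(-3, t) = u(3, t) = 0
    return f

def backward_euler_step(u_prev):
    # solve u_n - u_{n-1} = dt * f(u_n) for u_n
    return fsolve(lambda u: u - u_prev - dt * rhs(u), u_prev)

a, w = 0.8, 1.0                            # one sample of the initial condition
u = a * np.exp(-x**2 / w)
for _ in range(100):                       # integrate to t = 1
    u = backward_euler_step(u)
\end{verbatim}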
For the full order model of the nonlinear heat conduction problem, we use the second order finite element to discretize the spatial domain. The spatial discretization starts with $64\times64$ uniform squares, which are divided into triangular elements, resulting in 8192 triangular elements in total. A uniform time step of $\Delta t = .01$ is used with the SDIRK3 implicit L-stable time integrator (i.e., see Eq.~(229) of \cite{kennedy2016diagonally}). For the temperature field dependent conductivity coefficient, we linearize the problem by using the temperature field $\boldsymbol \solSymb$ from the previous time step.\footnote{The source code of the full order model for the nonlinear heat conduction problem can be found at https://github.com/mfem/mfem/blob/master/examples/ex16.cpp}
For the full order model of the radial advection problem, we use the third order discontinuous finite element to discretize the spatial domain. The spatial discretization is dictated by a square-mesh with periodic boundary conditions. We use $24\times24$ grid of square finite elements. The finite element data is interpolated to generate a $64\times64$ uniform grid across the spatial domain. We use a uniform time step of $\Delta t = .0025$ with the RK4 explicit time integrator.\footnote{The source code of the full order model for the radial advection problem can be found at https://github.com/mfem/mfem/blob/master/examples/ex9.cpp}
These full order models are used to generate training data. Table~\ref{tab:training} shows the testing parameter set, $\mathcal{T}$, as well as all the training samples, $\mathcal{S}$, for the four problems. Note that the testing points include the training points. The accuracy is measured by the maximum relative error, $e:\mathcal D\rightarrow\RRplus{}$, at each testing parameter point, which is defined as
\begin{equation}
\label{eq: error}
e(\boldsymbol{\paramSymb}^*) = \max_{n\innat{{\nbig_\timeSymb}}} \left(\frac{||\solapproxArg{n}(\boldsymbol{\paramSymb}^*)-\solArg{n}(\boldsymbol{\paramSymb}^*)||_2}{||\solArg{n}(\boldsymbol{\paramSymb}^*)||_2}\right),
\end{equation}
where $\boldsymbol{\paramSymb}^*$ is a testing parameter point.
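In code, this metric can be evaluated as in the short sketch below, where the predicted and FOM solutions are assumed to be stored column-wise in time.
\begin{verbatim}
# Illustrative sketch of the maximum relative error of Eq. (error).
import numpy as np

def max_relative_error(U_pred, U_fom):
    # both arrays of shape (N_s, N_t+1), one column per time step
    rel = np.linalg.norm(U_pred - U_fom, axis=0) / np.linalg.norm(U_fom, axis=0)
    return rel.max()
\end{verbatim}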
\begin{table}[ht]
\centering
\def1.5{1.5}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Example} & \textbf{Training set}, $\mathcal{S}$ & \textbf{Testing set}, $\mathcal{T}$ \\ \hline
1D Burgers & $\begin{array}{c} a\in \{0.7, 0.9\};\; w\in \{0.9,1.1\}, \\ a\in \{0.7, 0.8, 0.9\};\; w\in \{0.9,1.0,1.1\}, \\ \text{ or } \\ a\in \{0.7,0.75, \dots, 0.9\}; \\ w\in \{0.9,0.95, \dots, 1.1\}\end{array}$ & $ \begin{array}{c} a\in \{0.7,0.71,\dots, 0.9\} \\ w \in \{0.9, 0.91, \dots 1.1\} \end{array}$\\
\hline
2D Burgers & $\begin{array}{c} a\in \{0.7,0.75, \dots, 0.9\}; \\ w\in \{0.9,0.95, \dots, 1.1\}\end{array}$ & $ \begin{array}{c} a\in \{0.7,0.72,\dots, 0.9\} \\ w \in \{0.9, 0.92, \dots 1.1\} \end{array}$\\
\hline
Heat Conduction & $\begin{array}{c} \omega \in \{0.2,0.8,\dots, 5.0\}; \\ a\in \{1.8,2.0,2.2\}\end{array}$ & $ \begin{array}{c} \omega\in \{0.2,0.24,\dots, 5.0\} \\ a \in \{1.8, 1.81, \dots, 2.2\} \end{array}$\\
\hline
Radial Advection & $\begin{array}{c}\omega \in \{0.6,0.7,\dots, 1.0\}, \\ \omega \in \{0.6,0.65,\dots, 1.0\}, \\ \omega \in \{0.6,0.7,\dots, 1.4\}, \\ \text{ or } \\ \omega \in \{0.6,0.65,\dots, 1.4\}
\end{array}$ & $ \begin{array}{c} \omega\in \{0.6,0.61,\dots, 1.0\} \\ \omega \in \{0.6, 0.61, \dots, 1.4\} \end{array}$\\
\hline
\end{tabular}
\caption{The specific parameter values used in training and testing LaSDI throughout the paper. Each subsequent figure makes clear the number of training values used for training the data-compression technique as well as the latent space DI.}
\label{tab:training}
\end{table}
The computational cost is measured in terms of the wallclock time. Specifically, timing is obtained by performing calculations on an IBM Power9 @ 3.50 GHz with DDR4 memory @ 1866 MT/s. The auto-encoders are trained on an NVIDIA V100 (Volta) GPU with 3168 NVIDIA CUDA cores and 64 GB GDDR5 GPU memory using PyTorch [55], an open-source machine learning framework.
\subsection{1D Burgers}\label{sec:1dB}
First, for both LaSDI-LS and LaSDI-NM, we approximate the dynamics with linear dynamical systems in global DI. In Figure \ref{fig:1D single}, the last time step solutions for $\mu^* = (0.8, 1.01)$ are compared with the corresponding FOM solution, showing that they are almost identical. This shows that LaSDIs are able to accurately predict solutions. Figure \ref{fig: 1db errors} illustrates that the relative error of a LaSDI model is bounded below by the projection error of the compression technique.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{Final Figures/1db.png}
\caption{LaSDI ROMs at the final time step for the 1D Burgers problem generated using either the POD or nonlinear data-compression techniques for $\mu^* = (0.8,1.01)$. In both instances, we use linear dynamical systems in global DI. LaSDI-NM and LaSDI-LS use a latent-space dimension of four and five respectively.}
\label{fig:1D single}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Final Figures/1db_full_error.png}
\caption{The illustration of relative errors of LaSDI models being bounded below by the projection error of each compression technique, i.e., proper orthogonal decomposition and auto-encoder. The relative errors are computed from the simulation illustrated in Figure~\ref{fig:1D single}. The projection errors are computed by compressing and de-compressing the corresponding full order model solutions.}
\label{fig: 1db errors}
\end{figure}
To see the accuracy over the whole parameter domain, Figure~\ref{fig: 1db heat 4} shows two heat maps of the maximum relative errors for each parameter case: one for LaSDI-NM and the other for LaSDI-LS. Both LaSDIs are trained using four training points. For this particular experiment, LaSDI-LS outperforms LaSDI-NM in terms of accuracy, i.e., the maximum relative error for LaSDI-NM is $9.2\%$, while the maximum relative error for LaSDI-LS is $2.5\%$.
\begin{figure}
\centering
\begin{subfigure}[b]{.45\linewidth} \includegraphics[width=\textwidth]{Final Figures/1DB_4_global_NM.png}
\caption{LaSDI-NM with latent-space dimension of 4.}
\label{fig: 1db heat 4 NM}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\includegraphics[width=\textwidth]{Final Figures/1DB_4_global_LS.png}
\caption{LaSDI-LS with latent-space dimension of 5.}
\label{fig: 1db heat 4 ls}
\end{subfigure}
\caption{Heat map of maximum relative errors on the 1D Burgers parameter space for both LaSDI-NM and LaSDI-LS. Both LaSDI-NM and LaSDI-LS use global DI. Four training points are used to generate simulation data and global DI is used to predict the solution at each testing point. Both maximum and minimum relative errors are indicated.}
\label{fig: 1db heat 4}
\end{figure}
For a more extensive analysis, we report accuracy (i.e., the relative error range) and speedup for various LaSDI models in Figure~\ref{fig: 1db compare}. We train LaSDI-NM and LaSDI-LS models with nine training points. Then both the region-based and interpolation-based local LaSDIs are constructed for various $n_{\text{DI}}$s, labeled as LaSDI-NM, interpolated LaSDI-NM with RBF, interpolated LaSDI-NM with B-Spline, LaSDI-LS, interpolated LaSDI-LS with RBF, and interpolated LaSDI-LS with B-Spline. Furthermore, we compare both degree 0 and degree 1 dynamical systems in DI. There are several things to note. \textbf{First}, the relative error ranges for degree 1 dynamical systems tend to be larger than the ones for degree 0 dynamical systems. This makes sense because many more variations are possible with a higher degree, which gives a higher chance of producing both the best and the worst model. As a matter of fact, the best model with the minimum relative error range is found with a degree 1 dynamical system (i.e., the region-based LaSDI-LS with $n_{\text{DI}} = 3$). \textbf{Second}, the interpolated LaSDIs tend to give a larger relative error range than the region-based local LaSDIs. We speculate that this is because the training points were not optimally chosen for the interpolation. This will be further investigated in a follow-up paper. \textbf{Third}, a higher speed-up is achieved by the degree 0 dynamical systems. This also makes sense because the solution process of the latent space dynamics with fewer terms is faster.
\begin{figure}
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_legend.png}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_deg0_error.png}
\caption{Error for degree 0 dynamical system in DI.}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_deg1_error.png}
\caption{Error for degree 1 dynamical system in DI.}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_deg0_speedup.png}
\caption{Speedup for degree 0 dynamical system in DI.}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_deg1_speedup.png}
\caption{Speedup for degree 1 dynamical system in DI.}
\end{subfigure}
\caption{Performance comparison among various models and the degree of polynomial in the DI. Nine training points are used in total. Both constant dynamical systems (left) and linear dynamical systems (right) are used in DI. The latent space dimension of four is used for LaSDI-NM models and the latent space dimension of five is used for LaSDI-LS models. }
\label{fig: 1db compare}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_nm_global_9.png}
\caption{LaSDI-NM Global DI}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_ls_local3_9.png}
\caption{LaSDI-LS Local (3) DI}
\end{subfigure}
\caption{Heat maps from both LaSDI-NM with global DI and LaSDI-LS with local(3) DI that are identified from Figure~\ref{fig: 1db compare} in terms of accuracy for 1D Burgers problem. For both models, nine training points are used.}
\label{fig:1db sum 9}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_nm_local16_25.png}
\caption{LaSDI-NM Local (16) DI}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_ls_local5_25.png}
\caption{LaSDI-LS Local (5) DI}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/1db_rel_error_25.png}
\caption{Relative Error Range and Speedup}
\end{subfigure}
\caption{Comparison between LaSDI-NM and LaSDI-LS on the 1D Burgers problem with 25 training points. The region-based models are used for both LaSDI-NM and LaSDI-LS. For each case, the range of relative errors over all testing points in the parameter space is reported. We also show the heat maps of relative errors from both LaSDI-NM and LaSDI-LS for the corresponding best-case scenarios.}
\label{fig:1db sum}
\end{figure}
In Figure~\ref{fig:1db sum 9}, we plot the heat maps of the relative error for the best LaSDI-LS and LaSDI-NM models that are identified from Figure~\ref{fig: 1db compare}, i.e., the LaSDI-NM Global DI and the region-based LaSDI-LS local DI with $n_{\text{DI}}=3$. The plot reveals that the LaSDI-LS local DI is better than the LaSDI-NM global DI model for this particular case.
Figure~\ref{fig:1db sum} compares the performance of LaSDI-LS and LaSDI-NM with 25 total training points to see the effect of increasing the number of training FOMs. First, the accuracy of LaSDI-NM is improved when compared with the cases of 4 or 9 training points (see Figures~\ref{fig: 1db heat 4} and \ref{fig:1db sum 9}). For example, the maximum relative error of LaSDI-NM decreases from $9.215\%$ to $3.78\%$ and to $0.991\%$ as the number of training points increases from 4 to 9 and to 25. This implies that the more data is used, the better the accuracy of LaSDI-NM becomes. Second, while LaSDI-NM gives a larger range of maximum relative errors when 9 training FOMs are used, it consistently gives the smallest relative error of all predictive LaSDIs. Finally, we notice no reduction in error in LaSDI-LS from 9 to 25 training values; this highlights the limitations of POD in advection-dominated problems.
\subsection{2D Burgers} \label{sec:2dburgersResults}
We expect that LaSDI-NM will perform better than LaSDI-LS for the 2D Burgers problem based on the singular value decay shown in Figure~\ref{fig: SV Decay}. Figure~\ref{fig:2D single} illustrates the accuracy level achieved by LaSDI models for both the $u$ and $v$ components of the velocity as well as the relative error at the last time step. The figure shows the clear advantage of LaSDI-NM over LaSDI-LS for this advection-dominated problem, as expected. Of course, the projection error of the linear compression can be improved by increasing the latent space dimension. However, a larger latent space dimension implies more terms to be identified in the dynamics identification step.
\begin{figure}
\centering
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/2db_full_nm.png}
\caption{LaSDI-NM}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/2db_full_ls.png}
\caption{LaSDI-LS}
\end{subfigure}
\caption{The relative error of LaSDI models at the final time-step for 2D Burgers problem with $a = 0.8$ and $w = 1.02$. A latent space dimension of three is used for LaSDI-NM and a latent space dimension of five is used for LaSDI-LS.}
\label{fig:2D single}
\end{figure}
Figure \ref{fig:2d ae error} illustrates that the relative error of a LaSDI model is bounded below by the projection error of the compression technique.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{Final Figures/2db_full_error.png}
\caption{The illustration of relative errors of LaSDI models for the 2D Burgers simulation shown in Figure~\ref{fig:2D single}. Note the relative errors are bounded below by the projection error of each compression technique, i.e., proper orthogonal decomposition and auto-encoder. The projection errors are computed by compressing and de-compressing the corresponding full order model solutions either by POD basis or auto-encoder.}
\label{fig:2d ae error}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/2db_nm_local6_25.png}
\caption{LaSDI-NM Local (6) DI}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/2db_ls_global_25.png}
\caption{LaSDI-LS Global DI}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/2db_rel_error.png}
\caption{Relative Error Range and Speedup}
\end{subfigure}
\caption{Comparison between LaSDI-NM and LaSDI-LS on 2D Burgers problem with 25 training points. We include two heat maps of the relative error of the LaSDIs for the prescribed parameter values. LaSDI-NM uses a latent-space dimension of three and a cubic dynamical system in the DI. LaSDI-LS uses a five dimensional latent-space with a linear dynamical system. }
\label{fig:2d GL compare}
\end{figure}
The comparison between LaSDI-LS and LaSDI-NM becomes even more vivid in Figure \ref{fig:2d GL compare}, which shows the performance of LaSDI models with 25 training points. For this example, we note that the region-based local DI outperforms global DI for LaSDI-NM. This implies that the latent-space dynamics are heavily localized within the parameter space. That is, for example, the dynamical system that best approximates the case of $a = 0.7$, $w = 0.9$ is not a good model for the case of $a = 0.9$, $w = 1.1$.
For this particular problem, a speed-up of around 800 is achieved by LaSDI-NM. We use a cubic dynamical system, with cross-terms, to improve the expressivity and therefore accurately represent the three-dimensional latent space generated in LaSDI-NM. However, the higher-degree dynamical systems in DI can be numerically unstable. Furthermore, a lower degree approximation can lead to a higher speed-up. Thus, a balance among expressivity, numerical stability, and speed-up must be sought when LaSDI models are applied.
\subsection{Time-Dependent Heat Conduction}
Based on the singular value decay in Figure~\ref{fig: SV Decay}, we expect that LaSDI-LS will perform better than LaSDI-NM. Indeed, Figure~\ref{fig:Ex16 single} shows that LaSDI-LS achieves a much smaller relative error than LaSDI-NM for $\omega=1.0$ and $a=2.0$.
\begin{figure}
\centering
\begin{subfigure}[b]{.435\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex16_full_nm.png}
\caption{LaSDI-NM}
\end{subfigure}
\begin{subfigure}[b]{.465\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex16_full_ls.png}
\caption{LaSDI-LS}
\end{subfigure}
\caption{Relative error at the final time step for $\omega=1.0$ and $a=2.0$ in the heat conduction problem generated using both LaSDI-NM and LaSDI-LS. LaSDI-NM uses a three dimensional latent-space whereas LaSDI-LS uses a four dimensional latent-space. In both cases, a linear dynamical system is used in DI.}
\label{fig:Ex16 single}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = .5\linewidth]{Final Figures/ex16_21_svmass.png}
\caption{Relative error ranges for LaSDI-LS applied to the heat conduction problem with various latent space dimensions. The training parameters, $\mathcal{S}$, are used to build LaSDI-LS models and testing parameters, $\mathcal{T}$, are used to compute the relative error range, as prescribed in Table~\ref{tab:training}. In each case, we use Global DI with a linear dynamical system.}
\label{fig:Ex16 sv error}
\end{figure}
As discussed in Section~\ref{sec:POD}, increasing the dimension of the latent space necessarily increases the amount of information retained in the data-compression. To quantify this, we track the sum of the first $n_s$ singular values as a proportion of the sum of all singular values:
\begin{equation}\label{eq:svmass}
m_{\text{sv}} = \sum_{i = 1}^{n_s} \sigma_i/\sum_{i=1}^{N_s}\sigma_i,
\end{equation}
where $m_{\text{sv}}$ serves as an indicator for the projection error of the POD-based reduced space, since the projection error decreases as $m_{\text{sv}}$ increases. As seen in Figures~\ref{fig: 1db errors} and \ref{fig:2d ae error}, the projection error of the POD also serves as the lower bound for the LaSDI-LS error. Therefore, it is desirable for the accuracy of the LaSDI-LS to follow the trend of the projection error of the POD-based compression. Figure~\ref{fig:Ex16 sv error} indeed illustrates that the relative error range decreases as the latent space dimension increases, which implies that the accuracy of LaSDI-LS follows the trend of the projection error for this particular problem.
The decrease in the projection error reflects the improvement in the quality of the latent space accomplished by the POD. However, the overall accuracy of LaSDI models depends not only on the quality of the latent space, but also on the quality of the DI model. Therefore, the decay of the relative error range in Figure~\ref{fig:Ex16 sv error} is possible only because the DI models are of good quality as well.
Accurate LaSDI-LS requires fast singular value decay and, as we present below, simple latent space dynamics are required to maintain this accuracy (i.e., Figure~\ref{fig:ex16 GL compare} demonstrates that a degree 0 dynamical system in DI is enough to produce good accuracy for LaSDI-LS). Although the computational cost of integrating the ODE and of the matrix multiplication that decompresses the data increases as we increase $n_s$, the decrease in the LaSDI-LS relative error at the expense of a decreased speedup clearly benefits applications that require high accuracy.
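The quantity $m_{\text{sv}}$ of Eq.~\eqref{eq:svmass} can be computed directly from the singular values, as in the short sketch below (array names are illustrative).
\begin{verbatim}
# Illustrative sketch of the retained singular value mass m_sv of Eq. (svmass).
import numpy as np

def singular_value_mass(X, n_s):
    s = np.linalg.svd(X, compute_uv=False)
    return s[:n_s].sum() / s.sum()
\end{verbatim}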
\begin{figure}
\centering
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex16_nm_global_21.png}
\caption{LaSDI-NM Global DI}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex16_ls_global_25.png}
\caption{LaSDI-LS Global DI}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex16_rel_error.png}
\caption{Relative Error Range and Speedup}
\end{subfigure}
\caption{Comparison between LaSDI-NM and LaSDI-LS for the Heat Conduction problem with 21 training values. We include two heat maps of the maximum relative error of the LaSDI models for the prescribed parameter values when Global DI is implemented. For LaSDI-NM and LaSDI-LS, we use a constant and linear dynamical system, respectively, during the DI process.}
\label{fig:ex16 GL compare}
\end{figure}
When expanding across the full parameter space, LaSDI-LS continues to produce lower errors than LaSDI-NM, as shown in Figure \ref{fig:ex16 GL compare}. LaSDI-NM does not produce accurate results, leading to a maximum error of $88\%$, while LaSDI-LS achieves a maximum error of $3.16\%$. Due to the simplicity of both the linear and nonlinear latent spaces, we see consistent results between local and global DI. As in Section \ref{sec:2dburgersResults}, simpler dynamics (i.e., a degree 0 dynamical system) were enough to approximate the nonlinear latent space dynamics accurately, also leading to a speed-up of much higher than $100$x.
While further neural network architectures could be explored, we restrict ourselves to the shallow masked network, which gave good performance for the 2D Burgers problem in Section~\ref{sec:2dburgersResults}. Because we expect diffusion-dominated problems to be well-represented in a latent space that is constructed by a linear compression, as suggested in Figure~\ref{fig: SV Decay}, exploring various neural network structures for this specific problem would be counter-productive. Furthermore, nonlinear compression techniques, such as auto-encoders, require a more computationally expensive training phase than linear compression techniques, such as POD. We have included the results of LaSDI-NM only to highlight the importance of data compression selection when implementing LaSDI.
\subsection{Radial Advection}\label{sec:radAdvection}
We show the numerical results for the radial advection problem. As in the previous sections, we first present results of LaSDI models for one specific test point to illustrate the viability of the method. Figure \ref{fig:ex9 single} presents the heat maps of relative errors produced by both LaSDI-LS and LaSDI-NM. As expected, LaSDI-NM performs significantly better than LaSDI-LS in terms of accuracy because the problem is advection-dominated and the singular value decay is slow as seen in Figure~\ref{fig: SV Decay}.
\begin{figure}
\centering
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex9_full_nm.png}
\caption{LaSDI-NM}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex9_full_ls.png}
\caption{LaSDI-LS}
\end{subfigure}
\caption{Heat maps of the relative error for LaSDI-NM and LaSDI-LS on the radial advection problem for $\omega = 1.01$. For LaSDI-LS, we use a latent-space dimension of four with a linear dynamical system in DI. For LaSDI-NM, we prescribe a latent-space dimension of three with a quadratic dynamical system and no cross terms in DI. In both cases, global DI is used.}
\label{fig:ex9 single}
\end{figure}
We now consider two different parameter spaces: a smaller one ($\omega\in[0.6,1.0]$) and a larger one ($\omega\in[0.6,1.4]$), as indicated in Table~\ref{tab:training}. We seek to analyze how flexible LaSDI can be in larger parameter spaces with potentially nonlinear latent space dynamics. Note that for this example, we only vary one parameter in the initial condition. Thus, we do not include heat maps but rather graphs of the maximum relative error against the parameter value. We also vary the number of training points in each of these examples. Figure~\ref{fig:ex9 error range} shows the comparison of global and local DI models that are trained on $\mathcal{S}$ and predicted on $\mathcal{T}$.
Figure \ref{fig:ex9 full} depicts the best-case scenario for each of these cases in terms of accuracy. We note that the most accurate regime in each case is local DI. Among the best local LaSDI models, as expected, the smaller parameter space with the larger number of training points performs the best, with a maximum relative error of $\approx 3\%$. Likewise, the worst result appears in the larger parameter space with the smaller number of training points. In this case, LaSDI-NM reaches an error of approximately 20\%.
Interestingly, the local LaSDI (2) DI model on $\mathcal{S} = \{0.60,0.65,\dots, 1.40\}$ (i.e., Figure~\ref{fig:ex9 full}(b)) outperforms the local LaSDI (3) DI model on $\mathcal{S}= \{0.60,0.70,\dots,1.00\}$ (i.e., Figure~\ref{fig:ex9 full}(c)) when using LaSDI-NM. This implies that local LaSDI models will perform well as long as there are enough training points, whether the parameter space is large or small. On the other hand, the performance of the global LaSDI-NM models is affected by the parameter space size. For example, the worst error of the global LaSDI-NM for $\mathcal{S} = \{0.60,0.65,\dots, 1.40\}$ is $40.46\%$, while the worst error of the global LaSDI-NM for $\mathcal{S}= \{0.60,0.70,\dots,1.00\}$ is $17.18\%$.
\begin{figure}
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex9_legend.png}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex9_error_long_10.png}
\captionsetup{justification=centering}
\caption{$\mathcal{S} = \{0.60,0.70,...,1.40\}$\\ $\mathcal{T} = \{0.60,0.61,...,1.40\}$}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width = \linewidth]{Final Figures/ex9_error_long_5.png}
\caption{$\mathcal{S} = \{0.60,0.65,...,1.40\}$\\ $\mathcal{T} = \{0.60,0.61,...,1.40\}$}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width = \linewidth]{Final Figures/ex9_error_short_10.png}
\caption{$\mathcal{S} = \{0.60,0.70,...,1.00\}$\\ $\mathcal{T} = \{0.60,0.61,...,1.00\}$}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width = \linewidth]{Final Figures/ex9_error_short_5.png}
\caption{$\mathcal{S} = \{0.60,0.65,...,1.00\}$\\ $\mathcal{T} = \{0.60,0.61,...,1.00\}$}
\end{subfigure}
\caption{The relative error ranges and speedups for the radial advection problem for all four sets of $\mathcal{S}$ described in Table~\ref{tab:training}. LaSDI-LS and LaSDI-NM use five and four latent-space dimensions, respectively. For $\mathcal{S} = \{0.60,0.65,\dots,1.0\}$, a quadratic dynamical system without cross-terms is used in DI. All other cases use the linear dynamical systems. For simplicity, we shorten the $y$-axis of each figure. The high error ranges correspond to numerically unstable dynamical systems found in DI.}
\label{fig:ex9 error range}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{.7\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex9_heat_legend.png}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\includegraphics[width = \linewidth]{Final Figures/ex9_heat_long_10.png}
\captionsetup{justification=centering}
\caption{Local (3) DI}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width = \linewidth]{Final Figures/ex9_heat_long_5.png}
\caption{Local (2) DI}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width = \linewidth]{Final Figures/ex9_heat_short_10.png}
\caption{Local (3) DI}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width = \linewidth]{Final Figures/ex9_heat_short_5.png}
\caption{Local (4) DI}
\end{subfigure}
\caption{The relative error for the radial advection problem across the respective $\mathcal{T}$ for both LaSDI-NM and LaSDI-LS with various local DI. Each example shows the best-case scenario from the error ranges illustrated in Figure~\ref{fig:ex9 error range}.}
\label{fig:ex9 full}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
We have introduced a framework for latent space dynamics identification (LaSDI) that builds efficient parametric models by exploiting both local and global regression techniques. Two types of local dynamics identification models are introduced: (i) region-based and (ii) point-wise interpolated DIs. Latent spaces are generated in two different ways as well, i.e., using linear and nonlinear compression techniques. The compression techniques, such as proper orthogonal decomposition and auto-encoder, reduce large-scale simulation data to small-scale data whose size is manageable for regression techniques. The framework is general enough to be applicable for any physical simulations. We have applied various LaSDI models to four different problems, i.e., 1D and 2D Burgers, heat conduction, and radial advection problems.
We observe that the choice between local and global DI models needs to be determined case by case. For example, a local DI model outperforms global DI for LaSDI-NM on the 1D Burgers problem with nine training points (see the case of the degree 0 dynamical system in Figure~\ref{fig: 1db compare}), on the 2D Burgers problem with 25 training points (see Figure~\ref{fig:2d GL compare}), and in all the cases for the radial advection problem (see Figure~\ref{fig:ex9 error range}), while a global DI or a local DI with a larger number of points outperforms a local DI with a smaller number of points for LaSDI-NM on the 1D Burgers problem with 25 training points (see Figure~\ref{fig:1db sum}).
Nonlinear compression techniques, such as the auto-encoder, outperform linear compression techniques for problems with slowly decaying Kolmogorov $n$-width, while linear compression techniques, such as proper orthogonal decomposition, outperform nonlinear compression techniques for problems with fast decaying Kolmogorov $n$-width. The nonlinear compression was able to bring the projection error down with a small latent space dimension even for the problems with slowly decaying Kolmogorov $n$-width. However, the computational cost of the nonlinear manifold compression is much higher than that of the linear compression, so we recommend using linear compression if the problem shows fast singular value decay.
There are several future directions to consider in the context of the latent space dynamics identification framework. First, we have only considered predefined uniformly distributed training parameters, which is fine for a small parameter space dimension of one or two. However, the number of uniformly distributed training points increases exponentially as the dimension of the parameter space increases, which makes the generation of the simulation data extremely challenging. Therefore, adaptive and sparse parameter sampling for LaSDI needs to be developed, which will lead to an optimal sampling. Second, we have only considered parameters that affect the initial condition. However, a parameter of interest may not alter the initial condition, e.g., material properties. The LaSDI framework introduced in this paper cannot handle this case. Therefore, parameterizations other than the initial condition need to be developed. Third, we have extensively used a regression technique to identify dynamics in a latent space. However, this is not the only option. For example, other system identification techniques or operator learning algorithms are applicable. Fourth, the architecture of the neural network for LaSDI-NM may not have been optimally chosen. A more thorough neural architecture search would lead to better performance of LaSDI-NM models.
With these future directions, the potential for LaSDI is to offer fast and accurate data-driven simulation capability. We expect to apply LaSDI to various important applications, such as climate science, manufacturing, and fusion energy.
\section*{Acknowledgments}
This work was performed at Lawrence Livermore National Laboratory and partially funded by two LDRDs (21-FS-042 and 21-SI-006). Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344 and LLNL-JRNL-831849.
This research was supported in part by an appointment with the National Science Foundation (NSF) Mathematical Sciences
Graduate Internship (MSGI) Program sponsored by the NSF Division of Mathematical Sciences. This program is administered by the Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the U.S. Department of Energy (DOE) and NSF. ORISE is managed for DOE by ORAU. All opinions expressed in this paper are the author's and do not necessarily reflect the policies and views of NSF, ORAU/ORISE, or DOE.
\bibliographystyle{ieeetr}
\subsection{Data-Compression Techniques}
\section{Introduction}
Due to the prevalence of huge networks, scalable algorithms for fundamental graph problems recently have gained a lot of importance in the area of parallel computing.
The \emph{Massively Parallel Computation ($\mathsf{MPC}$\xspace)} model \cite{karloff2010model,goodrich2011sorting, beame2014skew, Andoni2014, beame2017communication, czumaj2017round} constitutes a common abstraction of several popular large-scale computation frameworks---such as MapReduce \cite{dean2008mapreduce}, Dryad \cite{isard2007dryad}, Spark \cite{zaharia2010spark}, and Hadoop \cite{white2012hadoop}---and thus serves as the basis for the systematic study of massively parallel algorithms.
While classic parallel (e.g., $\mathsf{PRAM}$\xspace) or distributed (e.g., $\mathsf{LOCAL}$\xspace) algorithms can usually be implemented straightforwardly in $\mathsf{MPC}$\xspace in the same number of rounds \cite{karloff2010model,goodrich2011sorting},
the additional power of local computation (compared to $\mathsf{PRAM}$\xspace) or of global communication (compared to $\mathsf{LOCAL}$\xspace) could potentially be exploited to obtain faster $\mathsf{MPC}$\xspace algorithms.
Czumaj et al. \cite{czumaj2017round} thus ask:
\begin{center}
\begin{minipage}{0.95\linewidth}
\begin{mdframed}[hidealllines=true, backgroundcolor=gray!00]
\vspace{-0.5pt}
\emph{``Are the $\mathsf{MPC}$\xspace parallel round bounds ``inherited'' from the $\mathsf{PRAM}$\xspace model
tight? \\ In particular, which problems can be solved in significantly smaller number of $\mathsf{MPC}$\xspace rounds
\\ than
what the [...] $\mathsf{PRAM}$\xspace model suggest[s]?''}
\vspace{-0.5pt}
\end{mdframed}
\end{minipage}
\end{center}
Surprisingly, for \emph{maximal matching}, one of the most central graph problems in parallel and distributed computing that goes back to the very beginning of the area, the answer to this question is not known. In fact, even worse,
our understanding of the maximal matching problem in the $\mathsf{MPC}$\xspace model is rather bleak.
Indeed, the only subpolylogarithmic\footnote{We are aware of an $O(\sqrt{\log n})$-round low-memory $\mathsf{MPC}$\xspace algorithm by Ghaffari and Uitto in a concurrent work \cite{GU18}. However, only algorithms that are significantly faster than their $\mathsf{PRAM}$\xspace counterparts---that is, smaller than any polynomial in $\log n$, and ideally at most a polynomial in $\log \log n$---are considered efficient.} algorithm by Lattanzi, Moseley, Suri, and Vassilvitskii \cite{lattanzi2011filtering} requires the memory per machine to be substantially superlinear in the number $n$ of nodes in the graph. This is not only prohibitively large and hence impractical for massive graphs, but also allows an easy or even trivial solution for sparse graphs, both of which are often assumed to be the case for many practical graphs \cite{karloff2010model,czumaj2017round}. When the local memory is restricted to be (nearly) linear in $n$, the round complexity of Lattanzi et al.'s algorithm drastically degrades, falling back to the trivial bound attained by the simulation of the $O(\log n)$-round $\mathsf{PRAM}$\xspace algorithm due to Luby \cite{luby1985simple} and, independently, Alon, Babai, and Itai \cite{alon1986fast}.
For other classic problems, such as \emph{Maximal Independent Set (MIS)} or \emph{approximate maximum matching}, we have a slightly better understanding. There, the local memory can be reduced to be $\widetilde{O}(n)$ while still having $\operatorname{\text{{\rm poly}}} (\log \log n)$-round algorithms \cite{assadi2017coresets,czumaj2017round,MPCMIS}. Yet, all these algorithms fail to go (substantially) below linear space without an (almost) exponential blow-up in the running time.
It is thus natural to ask
whether there is a fundamental reason why the
known techniques get stuck at the linear-memory barrier
and study the \emph{low-memory} $\mathsf{MPC}$\xspace model with strongly sublinear space to address this question.
\paragraph{Low-Memory $\mathsf{MPC}$\xspace Model for Graph Problems} We have $M$ machines with local memory of
$S=O\left(n^{\delta}\right)$ words each, for some $0<\delta\leq 1$\footnote{Note that we do not require $\delta$ to be a constant. For the sake of simplicity of presentation, we decided to omit the $\delta$-dependency, which is a multiplicative factor of $O\left(\frac{1}{\delta}\right)$, in the analysis of the running time of our algorithms.}. A graph with $n$ nodes, $m$ edges, and maximum degree $\Delta$ is distributed arbitrarily across the machines.
We assume the total memory in the system to be (nearly) linear in the input, i.e., $M\cdot S = \widetilde{\Theta}(m)$.
The computation proceeds in rounds consisting of \emph{local computation} in all machines in parallel, followed by \emph{global communication} between the machines. We require that the total size of sent and received messages of a machine in every communication phase does not exceed its local memory capacity. The main interest lies in minimizing the number of rounds, aiming for $\operatorname{\text{{\rm poly}}} (\log \log n)$.
\newpage
In this low-memory $\mathsf{MPC}$\xspace model,
the best known algorithms usually stem from straightforward simulations of $\mathsf{PRAM}$\xspace and $\mathsf{LOCAL}$\xspace algorithms, thus requiring at least polylogarithmic rounds. In the special case of trees, \cite{brandt2018breaking} managed to beat this bound
by providing an $O(\log^3 \log n)$-round algorithm for MIS. The authors left it as a main open question whether there is a low-memory $\mathsf{MPC}$\xspace algorithm for general graphs in $\operatorname{\text{{\rm poly}}} (\log \log n)$ rounds.
We make a step towards answering this question by devising a degree reduction technique that reduces the problems of maximal matching and maximal independent set in a graph with arboricity $\lambda$ to the corresponding problems in graphs with maximum degree $\operatorname{\text{{\rm poly}}}(\lambda)$ in $O\left(\log^2 \log n\right)$ rounds.
\begin{theorem}\label{thmDegRed}There is an $O\left(\log \log_{\Delta} n \cdot \log \log_{\lambda} \Delta\right)$-round low-memory $\mathsf{MPC}$\xspace algorithm that w.h.p.\footnote{As usual, w.h.p.\ stands for \emph{with high probability}, and means with probability at least $1-n^{-c}$ for any constant $c\geq 1$.} reduces maximal matching and maximal independent set in graphs with arboricity $\lambda=o(\operatorname{\text{{\rm poly}}} (n))$\footnote{With $o(\operatorname{\text{{\rm poly}}}(n))$ we mean subpolynomial in $n$, i.e., $o(n^c)$ for any constant $c>0$. Note that for $\lambda=\operatorname{\text{{\rm poly}}}(n)$, our results for MIS and maximal matching follow directly, without any need for degree reduction.} to the respective problems in graphs with maximum degree $O\left(\max\{\lambda^{20}, \log^{20} n\}\right)$\footnote{The purpose of the choice of all the constants in this work is merely to simplify presentation.}.
\end{theorem}
This improves over the degree reduction algorithm by Barenboim, Elkin, Pettie, and Schneider \cite[Theorem 7.2]{barenboim2012locality}, which runs in $O(\log_{\lambda} \Delta)$ rounds
in the $\mathsf{LOCAL}$\xspace model and can be straightforwardly implemented in the low-memory $\mathsf{MPC}$\xspace model.
Our degree reduction technique, combined with the state-of-the-art algorithms for maximal matching and maximal independent set,
gives rise to a number of low-memory $\mathsf{MPC}$\xspace algorithms, as overviewed next. Throughout, we state our running times based on a concurrent work by Ghaffari and Uitto \cite{GU18}, in which they prove that maximal matching and maximal independent set can be solved in $O\left(\sqrt{\log\Delta}+\log \log \log n\right)$ and $O\left(\sqrt{\log\Delta}+\sqrt{\log\log n}\right)$ low-memory $\mathsf{MPC}$\xspace rounds, respectively, improving over $O\left(\log\Delta+\log \log \log n\right)$ and $O\left(\log\Delta+\sqrt{\log\log n}\right)$, respectively, which can be obtained by a sped up simulation \cite{diam} of state-of-the-art $\mathsf{LOCAL}$\xspace algorithms \cite{barenboim2012locality,Ghaffari-MIS}. We apply these algorithms by Ghaffari and Uitto as a black box.
By using earlier results, the term $O(\sqrt{\log \lambda})$ in the running times of our theorem statements would get replaced by $O(\log \lambda)$.
\begin{theorem}\label{thmMM}
There is an $O\left(\sqrt{\log \lambda}+\log\log n \cdot \log \log \Delta\right)$-round low-memory $\mathsf{MPC}$\xspace algorithm that w.h.p.\ computes a maximal matching in a graph with arboricity $\lambda$.
\end{theorem}
This improves over the $O\left(\log\lambda+\sqrt{\log n}\right)$-round $\mathsf{LOCAL}$\xspace algorithm by \cite{barenboim2012locality} and the $O(\sqrt{\log \Delta}+\log \log \log n)$-round algorithm by \cite{GU18}.
We get the first $\operatorname{\text{{\rm poly}}} (\log \log n)$-round algorithm---and hence an almost exponential improvement over the state of the art, for linear as well as strongly sublinear space per machine---for all graphs with arboricity $\lambda=\operatorname{\text{{\rm poly}}} (\log n)$. This family of \emph{uniformly sparse} graphs, also known as \emph{sparse everywhere} graphs, includes but is not restricted to graphs with maximum degree $\operatorname{\text{{\rm poly}}} (\log n)$, minor-closed graphs (e.g., planar graphs and graphs with bounded treewidth), and preferential attachment graphs, and thus arguably contains most graphs of practical relevance \cite{goel2006bounded,OnakFullyDynamicMIC}.
The previously known $\operatorname{\text{{\rm poly}}}(\log \log n)$-round $\mathsf{MPC}$\xspace algorithms either only worked in the special case of $\operatorname{\text{{\rm poly}}} (\log n)$-degree graphs \cite{GU18}, or required the local memory to be strongly superlinear \cite{lattanzi2011filtering}.
To the best of our knowledge, all $\operatorname{\text{{\rm poly}}}(\log \log n)$-round $\mathsf{MPC}$\xspace matching approximation algorithms for general graphs (even if we allow linear memory) \cite{czumaj2017round,assadi2017coresets,MPCMIS} heavily make use of subsampling,
inevitably leading to a loss of information. It is thus unlikely that these techniques are applicable for maximal matching, at least not without a factor $\Omega(\log n)$ overhead. In fact, the problem of finding a maximal matching seems to be more difficult than finding a $(1+\varepsilon)$-approximate maximum matching. Indeed, currently, an $O(1)$-approximation can be found almost exponentially faster than a maximal matching, and the approximation ratio can be easily improved from any constant to $1+\varepsilon$ using a reduction of McGregor \cite{mcgregor2005finding}.
Our result becomes particularly instructive when viewed in this context. It is not only the first maximal matching algorithm that breaks the linear-memory barrier for a large range of graphs, but it also enriches the bleak pool of techniques for maximal matching in the presence of low memory by one particularly simple technique.
\begin{theorem}\label{thmMIS}
There is an $O\left(\sqrt{\log \lambda}+\log\log n \cdot \log \log \Delta\right)$-round low-memory $\mathsf{MPC}$\xspace algorithm that w.h.p.\ computes a maximal independent set in a graph with arboricity $\lambda$.
\end{theorem}
This algorithm improves over the $O\left(\log\lambda+\sqrt{\log n}\right)$-round algorithm that is obtained by simulating the $\mathsf{LOCAL}$\xspace algorithm of \cite{barenboim2012locality,Ghaffari-MIS} and over the $O\left(\sqrt{\log \Delta} + \sqrt{\log \log n}\right)$-round algorithm in the concurrent work by Ghaffari and Uitto \cite{GU18}.
Moreover, for graphs with arboricity $\lambda=\operatorname{\text{{\rm poly}}} (\log n)$, our algorithm is the first $\operatorname{\text{{\rm poly}}} (\log \log n)$-round low-memory $\mathsf{MPC}$\xspace algorithm. The previously known $\operatorname{\text{{\rm poly}}}(\log \log n)$-round $\mathsf{MPC}$\xspace algorithms for MIS either only worked in the special case of $\operatorname{\text{{\rm poly}}} (\log n)$-degree graphs \cite{GU18} and trees \cite{brandt2018breaking}, or required the local memory to be $\widetilde{\Omega}(n)$ \cite{MPCMIS}.
\vspace{0.5cm}
As a maximal matching automatically provides $2$-approximations for maximum matching and minimum vertex cover, \Cref{thmMM} directly implies the following result.
\begin{corollary}\label{corMandVC}
There is an $O\left(\sqrt{\log \lambda}+\log\log n \cdot \log \log \Delta\right)$-round low-memory $\mathsf{MPC}$\xspace algorithm that w.h.p.\ computes a $2$-approximate maximum matching and a $2$-approximate minimum vertex cover in a graph with arboricity $\lambda$.
\end{corollary}
This is the first constant-approximation $\mathsf{MPC}$\xspace algorithm for matching and vertex cover---except for the $\mathsf{LOCAL}$\xspace and $\mathsf{PRAM}$\xspace simulations by \cite{barenboim2012locality} and \cite{luby1985simple,alon1986fast}, respectively, as well as the concurrent work in \cite{GU18}---that work with low memory. All the other algorithms require the space per machine to be either $\widetilde{\Omega}(n)$ \cite{lattanzi2011filtering,czumaj2017round,assadi2017coresets,MPCMIS} or even strongly superlinear \cite{lattanzi2011filtering,AssadiK17}. \Cref{corMandVC} generalizes the range of graphs that admit an efficient constant-approximation for matching and vertex cover in the low-memory $\mathsf{MPC}$\xspace model from graphs with maximum degree $\operatorname{\text{{\rm poly}}}(\log n)$ \cite{GU18} to uniformly sparse graphs with arboricity $\lambda=\operatorname{\text{{\rm poly}}}(\log n)$.
\vspace{0.2cm}
McGregor's reduction \cite{mcgregor2005finding} allows us to further improve the approximation to $1+\varepsilon$.
\begin{corollary}\label{cor1+eps}
There is an $O\left(\left(\frac{1}{\varepsilon}\right)^{O(1/\varepsilon)}\cdot \left( \sqrt{\log \lambda}+\log\log n \cdot \log \log \Delta\right)\right)$-round low-memory $\mathsf{MPC}$\xspace algorithm that w.h.p.\ computes a $(1+\varepsilon)$-approximate maximum matching, for any $\varepsilon>0$.
\end{corollary}
\vspace{0.2cm}
Due to a reduction by Lotker, Patt-Shamir, and Ros\'en \cite{lotkerMatching}, our constant-approximate matching algorithm can be employed to find a $(2+\varepsilon)$-approximate maximum weighted matching.
\begin{corollary}\label{corW2+eps}
There is an $O\left(\frac{1}{\varepsilon}\cdot \left(\sqrt{\log \lambda}+\log\log n \cdot \log \log \Delta\right)\right)$-round low-memory $\mathsf{MPC}$\xspace algorithm that w.h.p.\ computes a $(2+\varepsilon)$-approximate maximum weighted matching, for any $\varepsilon>0$.
\end{corollary}
\vspace{0.2cm}
\subsection*{Concurrent Work}
In this section, we briefly discuss an independent and concurrent work by Behnezhad, Derakhshan, Hajiaghayi, and Karp \cite{concurr}. The authors there arrive at the same results, with the same round complexities and the same memory requirements. As in our work, their key ingredient is a degree reduction technique that reduces the maximum degree of a graph from $\Delta$ to $\operatorname{\text{{\rm poly}}} (\lambda, \log n)$ in $O(\log \log \Delta \cdot \log \log n)$ rounds. While our degree reduction algorithm is based on (a variant of) the $H$-partition that partitions the vertices according to their degrees, the authors in \cite{concurr} show how to implement (and speed up) the $\mathsf{LOCAL}$\xspace degree reduction algorithm by Barenboim, Elkin, Pettie, and Schneider \cite[Theorem 7.2]{barenboim2012locality} in the low-memory $\mathsf{MPC}$\xspace model. Note that neither algorithm requires knowledge of the arboricity $\lambda$, as further discussed in \Cref{remarkKnowledgeArb}.
\newpage
\section{Algorithm Outline and Roadmap}
In the low-memory setting, one is inevitably confronted with
the challenge of locality:
as the space of a machine is strongly sublinear, it will never be able to see a significant fraction of the nodes, regardless of how sparse the graph is.
Further building on the ideas by \cite{brandt2018breaking}, we cope with this imposed locality by adopting local techniques---mainly inspired by the $\mathsf{LOCAL}$\xspace model \cite{linial1992locality}---and enhancing them with the additional power of global communication, in order to achieve an improvement in the round complexity compared to the $\mathsf{LOCAL}$\xspace algorithms while still being able to guarantee applicability
in the presence of strongly sublinear memory.
The main observation behind our algorithms is the following: If the maximum degree in the graph is small, $\mathsf{LOCAL}$\xspace algorithms can be simulated efficiently in the low-memory $\mathsf{MPC}$\xspace model.
Our method thus basically boils down to reducing the maximum degree of the input graph, as described in \Cref{thmDegRed}, and
correspondingly consists of two parts: a \emph{degree reduction} part followed by a \emph{$\mathsf{LOCAL}$\xspace simulation} part.
In the degree reduction part, which constitutes the key ingredient of our algorithm, we want to find a partial solution (that is, either a matching or an independent set) so that the \emph{remainder graph}---i.e., the graph after the removal of this partial solution (that is, after removing all matched nodes or after removing the independent set nodes along with all their neighbors)---has smaller degree.
\begin{lemma}\label{degred}
There are $O\left(\log \log n \cdot \log \log \Delta\right)$-round low-memory $\mathsf{MPC}$\xspace algorithms that compute a matching and an independent set in a graph with arboricity $\lambda=o(\operatorname{\text{{\rm poly}}} (n))$ so that the remainder graph w.h.p.\ has maximum degree $O\left(\left(\max\{\lambda, \log n\}\right)^{20}\right)$.
\end{lemma}
Note that this directly implies \Cref{thmDegRed}. Next, we show how \Cref{thmMIS,thmMM} follow from \Cref{degred} as well as from an efficient simulation of $\mathsf{LOCAL}$\xspace algorithms due to \cite{GU18}.
\begin{proof}[Proof of \Cref{thmMIS,thmMM}]
If $\lambda$, and hence $\Delta$, is at least polynomial in $n$, we directly apply the algorithm by \cite{GU18}, which runs in $O(\sqrt{\log \Delta}+\sqrt{\log \log n})=O(\sqrt{\log n})$ rounds.
Otherwise, we first apply the algorithm of \Cref{degred} to obtain a partial solution that reduces the degree in the remainder graph to $\Delta'=O(\lambda^{20})$ if $\lambda\geq\log n$, or to $\Delta'=O(\log^{20} n)$ if $\lambda\leq\log n$. It runs in $O(\log \log n \cdot \log \log \Delta)$ rounds. We then apply the algorithm by \cite{GU18} on the remainder graph. This takes $O(\sqrt{\log \Delta'}+\sqrt{\log\log n})=O(\sqrt{\log \lambda}+\sqrt{\log\log n})$ rounds.
\end{proof}
\begin{remark}\label{remarkKnowledgeArb}
While our algorithms, at first sight, seem to need to know $\lambda$, we can employ the standard technique \cite{knowledge} of running the algorithm with doubly-exponentially increasing estimates for $\lambda$.
\end{remark}
Our degree reduction algorithm in \Cref{degred} consists of several phases, each reducing the maximum degree by a polynomial factor, as long as the degree is still large enough.
\begin{lemma}\label{poldegred}
There are $O\left( \log \log n\right)$-round low-memory $\mathsf{MPC}$\xspace algorithms that compute a matching and an independent set, respectively, in a graph with arboricity $\lambda=o(\operatorname{\text{{\rm poly}}}(n))$ and maximum degree $\Delta=\Omega\left(\left(\max\{\lambda, \log n\}\right)^{20}\right)$ so that the remainder graph w.h.p.\ has maximum degree $O(\Delta^{0.4})$.\end{lemma}
We first show that indeed iterated applications of this polynomial degree reduction lead to the desired degree reduction in \Cref{degred}.
\begin{proof}[Proof of \Cref{degred}]
We iteratively apply the polynomial degree reduction from \Cref{poldegred}, observing that as long as the maximum degree is still in $\Omega(\lambda^{20})$ and $\Omega(\log^{20}n)$, we reduce the maximum degree by a polynomial factor from $\Delta$ to $O(\Delta^{0.4})$ in each phase, resulting in at most $O(\log \log \Delta)$ phases.
\end{proof}
It remains to show that such a polynomial degree reduction, as claimed in \Cref{poldegred}, indeed is possible. This is done in two parts. First, in \Cref{sec:seqDegRed}, we provide a centralized algorithm for a polynomial degree reduction, and then, in \Cref{sec:exp}, we show how to implement this centralized algorithm efficiently in the low-memory $\mathsf{MPC}$\xspace model.
\section{A Centralized Degree Reduction Algorithm}\label{sec:seqDegRed}
In this section, we present a centralized algorithm for the polynomial degree reduction as stated in \Cref{poldegred}. For details on how this algorithm can be implemented in the low-memory $\mathsf{MPC}$\xspace model, we refer to \Cref{sec:exp}.
In \Cref{algoDescription}, we give a formal description of the (centralized) algorithm. Then, in \Cref{correctness}, we prove that this algorithm indeed leads to a polynomial degree reduction.
\subsection{Algorithm Description}\label{algoDescription}
In the following, we set $d=\Delta^{1/10}$, and observe that $d=\Omega(\lambda^2)$ as well as $d=\Omega(\log^{2} n)$, due to the assumptions on $\Delta$ in the lemma statement.
We present an algorithm that reduces the maximum degree to $O(d^4)$.
This algorithm consists of three phases: a \emph{partition} phase, in which the vertices are partitioned into layers so that every node has at most $d$ neighbors in higher-index layers,
a \emph{mark-and-propose} phase in which a random set of candidates is proposed independently in every layer, and a \emph{selection} phase in which a valid subset of the candidate set is selected as partial solution by resolving potential conflicts across layers.
\paragraph{Partition Phase}
We compute an $H$-partition, that is, a partition of the vertices into layers so that every vertex has at most $d$ neighbors in layers with higher (or equal) index \cite{nash1961edge,nash1964decomposition,barenboim2010sublogarithmic}.
\begin{definition}[$H$-Partition]
An \emph{$H$-partition} with out-degree $d$, defined for any $d>2\lambda$, is a partition of the vertices into $\ell$ \emph{layers} $L_1, \dotsc, L_{\ell}$ with the property that a vertex $v\in L_i$ has at most $d$ neighbors in $\bigcup_{j= i}^{\ell} L_j$. We call $i$ the \emph{layer index} of $v$ if $v \in L_i$. For neighbors $u\in L_i$ and $v\in L_j$ for $i<j$, we call $v$ a \emph{parent} of $u$ and $u$ a \emph{child} of $v$. If we think of the edges as being directed from children to parents, this gives rise to a partial orientation of the edges, with no orientation of the edges connecting vertices in the same layer.
\end{definition}
Note that, for $d> 2\lambda$, such a partition can be computed easily by the following sequential greedy algorithm, also known as the \emph{peeling} algorithm: Iteratively, for $i\geq 1$, put all remaining nodes with remaining degree at most $d$ into layer $i$, and remove them from the graph.
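For concreteness, the following Python sketch spells out this peeling procedure in a purely sequential fashion; the adjacency-list representation and all identifiers are illustrative only, and no attempt is made to reflect the $\mathsf{MPC}$\xspace setting.
\begin{verbatim}
def h_partition(adj, d):
    """Greedy peeling: map every node to its layer index.

    adj: dict mapping each node to the set of its neighbours.
    d:   out-degree parameter, assumed to satisfy d > 2*arboricity,
         so that some node of degree at most d always exists.
    """
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    layer = {}
    i = 1
    while remaining:
        # all remaining nodes of remaining degree at most d form layer i
        peeled = {v for v, nbrs in remaining.items() if len(nbrs) <= d}
        for v in peeled:
            layer[v] = i
            del remaining[v]
        for nbrs in remaining.values():
            nbrs -= peeled
        i += 1
    return layer
\end{verbatim}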
\paragraph{Mark-and-Propose Phase} We first mark a random set of candidates (either edges or vertices) and then propose a subset of these marked candidates for the partial solution as follows.
In the case of maximal matching, every node first marks an outgoing edge chosen uniformly at random and then proposes one of its incoming marked edges, if any, uniformly at random.
In the case of maximal independent set, every node marks itself independently with probability $p=d^{-2}$. Then, if a node is marked and none of its neighbors in the same layer is marked, this node is proposed.
Note that whether a marked node gets proposed only depends on nodes in the same layer, thus on neighbors with respect to unoriented edges.
\paragraph{Selection Phase} The set of proposed candidates might not be a valid solution, meaning that it might have some conflicts (i.e., two incident edges or two neighboring vertices). In the selection phase, possible conflicts are resolved (deterministically) by picking an appropriate subset of the proposed candidates, as follows. Iteratively, for $i=\ell, \dotsc, 1$, all (remaining) proposed candidates in layer $i$ are added to the partial solution and then removed from the graph.
In the case of maximal matching, we add all (remaining) proposed edges directed to a node in layer $i$ to the matching and remove both their endpoints from the graph.
In the case of maximal independent set, we add all (remaining) proposed nodes in layer $i$ to the independent set and remove them along with their neighbors from the graph.
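The following Python sketch summarizes the three phases for the matching case in a purely centralized fashion; it is meant as a compact restatement of the description above (all data structures and identifiers are illustrative), not as an $\mathsf{MPC}$\xspace implementation.
\begin{verbatim}
import random

def reduce_degree_matching(adj, layer):
    """One mark-and-propose plus selection pass for matching.

    adj:   dict node -> set of neighbours.
    layer: dict node -> layer index of an H-partition of the graph.
    Returns the selected (matched) edges as frozensets {child, parent}.
    """
    # mark phase: every node marks one outgoing edge (towards a parent)
    marked = {}
    for v in adj:
        parents = [u for u in adj[v] if layer[u] > layer[v]]
        if parents:
            marked[v] = random.choice(parents)

    # propose phase: every node proposes one of its marked incoming edges
    proposed = {}
    for v in adj:
        children = [u for u in adj[v] if marked.get(u) == v]
        if children:
            proposed[v] = random.choice(children)

    # selection phase: process layers from the highest index downwards
    matching, removed = set(), set()
    for i in sorted(set(layer.values()), reverse=True):
        for v, u in proposed.items():
            if layer[v] == i and v not in removed and u not in removed:
                matching.add(frozenset((u, v)))
                removed.update((u, v))
    return matching
\end{verbatim}
The analogous sketch for the independent set case would replace the marking of edges by the marking of nodes with probability $d^{-2}$ and would remove each selected node together with its neighbors.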
\subsection{Proof of Correctness}\label{correctness}
\begin{figure}[!htb]
\caption{Illustration of the mark-and-propose and the selection phase for matching in (a) and independent set in (b). Blue indicates marked but not proposed, green stands for (marked and) proposed but not selected, and red means (marked and proposed and) selected. Note that we omitted all (but a few) irrelevant edges from the figure; the partition into layers thus might not correspond to a valid $H$-partition.}
\centering
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[scale=1.1]{matching.pdf}
\caption{An (oriented) edge $e=(u,v)$ that is selected to be added to the matching cannot have an incident edge that is also selected: an unoriented incident edge cannot be marked as only oriented edges are marked; an oriented edge with the same starting point $u$ cannot be marked as $u$ marks only one outgoing edge; an oriented edge with the same endpoint $v$ cannot be proposed as $v$ proposes only one incoming edge; all other oriented edges $f$ are either processed before (in the case of an outgoing edge from $v$) or after (in the case of an incoming edge to $u$) edge $e$ in the selection phase. In the former case, the selection of $f$ would lead to the removal of $e$ before $e$ is processed; $e$ thus would not be selected. In the latter case, the edge $f$ is removed immediately after $e$ is selected (and thus before $f$ is processed), and thus cannot be selected.
}\label{fig:matching}
\end{subfigure}
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[scale=1.1]{MIS.pdf}
\caption{If two neighboring nodes are marked, none of them will be proposed, and consequently, none of them will be selected. A node $v$ that is selected to be added to the independent set cannot have a neighbor that is also selected: a neighbor in the same layer cannot be marked as otherwise $v$ would not be proposed; a neighbor in a lower-index layer is removed from the graph when $v$ joins the independent set, and hence before it potentially could be selected; a selected neighbor in a higher-index layer would lead to $v$'s immediate removal from the graph; when $v$'s layer is processed, $v$ is not part of the graph anymore, and thus could not be selected.
}\label{fig:MIS}
\end{subfigure}
\label{fig:mps}
\end{figure}
It is easy to see that the selected solution is a valid partial solution, that is, that there are no conflicts. It remains to be shown that the degree indeed drops to $O(d^4)$. As the out-degree is bounded by $d$, it is enough to show the following.
\begin{lemma}\label{lemmaHeavyDisappear}
Every vertex with in-degree at least $d^{4}$ gets removed or all but $d^4$ of its incoming edges get removed, with high probability.
\end{lemma}
\begin{proof}[Proof of \Cref{lemmaHeavyDisappear} for matching]
Let $v$ be a node with degree at least $d^4$. First, observe that if at least one incoming edge of $v$ is proposed, then an edge incident to $v$ (not necessarily incoming) will be selected to be added to the matching. This is because the only reason why $v$ would not select a proposed incoming edge is that $v$ has already been removed from the graph, and this happens only if the outgoing edge marked by $v$ has been selected to be added to the matching in a previous step. It thus remains to show that every vertex $v$ with in-degree at least $d^4$ with high probability has at least one incoming edge that has been marked by the respective child, in which case $v$ proposes one of its marked incoming edges. As every incoming edge of $v$ is marked independently with probability at least $1/d$, the probability of $v$ not having a marked incoming edge is at most $\left(1-1/d\right)^{d^{4}}\leq e^{-d^{3}}=e^{-\Omega(\log^{6}n)}=o\left(\frac{1}{\operatorname{\text{{\rm poly}}}(n)}\right)$. A union bound over all vertices with degree at least $d^4$ concludes the proof.
\end{proof}
\begin{proof}[Proof of \Cref{lemmaHeavyDisappear} for independent set]
Let $v$ be a vertex in layer $i$ that is still in the graph and has at least $d^4$ children after all layers with index $\geq i$ have been processed. We show that then at least one of these children will be selected to join the independent set with high probability. Note that this then concludes the proof, as in all the cases either $v$ will be removed from the graph or will not have high in-degree anymore.
Moreover, observe that such a child $u$ of $v$ (that is still there after having processed layers $\geq i$) will be selected to join the independent set iff it is proposed. This is because if it did not join even though it is proposed, then a parent of $u$ would have been selected to join the independent set, in which case $u$ would not have been part of the graph anymore, at the latest after layer $i$ has been processed, and thus would not count towards $v$'s high degree at that point.
Every such child $u$ of $v$ is marked independently with probability $p=d^{-2}$. The probability of $u$ being proposed and hence joining the independent set is at least $p(1-p)^{d}$, as it has at most $d$ neighbors in its layer, and it is proposed iff it is marked and none of its neighbors in the same layer is marked. Vertex $v$ thus in expectation has at least $\mu:=d^{4} p(1-p)^{d}\geq d^{2}e^{-2/d}=\Omega(d^{2})$ children that join the independent set.
Since whether a node $u$ proposes and hence joins the independent set depends on at most $d$ other nodes (namely $u$'s neighbors in the same layer), it thus follows from a variant of the Chernoff bounds for bounded dependence, e.g., from Theorem 2.1 in \cite{pemmaraju2001equitable}, that the probability of $v$ having fewer than, say, $0.5\mu$ children that join the independent set is at most $e^{-\Omega(\mu/d)}=e^{-\Omega(d)}\leq e^{-\Omega\left(\log^{2}n\right)}= o\left(\frac{1}{\operatorname{\text{{\rm poly}}}(n)}\right)$. A union bound over all vertices $v$ with degree at least $d^4$ concludes the proof.\end{proof}
\section{Implementation of the Degree Reduction Algorithm in $\mathsf{MPC}$\xspace}\label{sec:exp}
In this section, we show how to simulate the centralized degree reduction algorithm from \cref{sec:seqDegRed} in the low-memory $\mathsf{MPC}$\xspace model.
In \Cref{sec:construction}, we show how to implement the partition phase efficiently in the low-memory $\mathsf{MPC}$\xspace model. Then, in \cref{sec:simul}, we describe how to perform the simulation of the mark-and-propose as well as the selection phase.
Together with the correctness proof established in \Cref{correctness}, this will conclude the proof of \Cref{poldegred}.
The main idea behind the simulation is to use the well-known \emph{graph exponentiation} technique ~\cite{Lenzen2010brief}, which can be summarized as follows:
Suppose that every node knows its $2^{i - 1}$-hop neighborhood in iteration $i - 1$.
Then, in iteration $i$, each node can inform the nodes in its $2^{i - 1}$-hop neighborhood of the topology of its $2^{i - 1}$-hop neighborhood.
Hence, every node can learn of its $2^{i}$-hop neighborhood in iteration $i$, allowing it to simulate any $2^{i}$-round $\mathsf{LOCAL}$\xspace algorithm in $0$ rounds. Using this exponentiation technique, in principle, every $t$-round $\mathsf{LOCAL}$\xspace algorithm can be simulated in $O(\log t)$ $\mathsf{MPC}$\xspace rounds. We have to be careful about the memory restrictions though.
In iteration $i$ of the exponentiation process, we need to store a copy of the $2^{i}$-hop neighborhood of every node.
If the neighborhood contains more than $n^{\delta}$ edges, we violate the local memory constraint.
Similarly, the total memory of $\widetilde \Theta(m)$ might not be enough to store each of these copies, even if every single copy fits to a machine.
In order to deal with these issues, the key observation is that a large fraction of the nodes in the graph are contained in the first layers of the $H$-partition.
In particular, we show that if we focus on the graph remaining after $\ell/2$ iterations of peeling, we can perform roughly $\log \ell$ exponentiation steps without violating the memory constraints.
Hence, we can perform $2^i$ steps of the degree reduction process in roughly $i$ communication rounds.
In case the maximum degree $\Delta$ is larger than the local memory $S=O(n^{\delta})$, one needs to pay attention to how the graph is distributed.
One explicit way is to split high-degree nodes into many copies and distribute the copies among many machines.
For the communication between the copies, one can imagine a (virtual) balanced tree of depth $1 / \delta$ rooted at one of the copies.
Through this tree, the copies can exchange information in $O(1/\delta)$ communication rounds.
For the sake of simplicity, our write-up assumes that $\Delta \ll n^{\delta}$.
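Before turning to the implementation details, the following sequential Python mock-up illustrates the neighborhood-doubling step behind graph exponentiation; it only tracks which nodes each node knows (the actual algorithm additionally stores the topology among them) and ignores all machine and memory book-keeping; all identifiers are illustrative.
\begin{verbatim}
def doubled_neighbourhoods(adj, iterations):
    """Sequential mock-up of graph exponentiation.

    adj: dict node -> set of neighbours.  After i iterations,
    known[v] is the set of nodes within distance 2**i of v.
    """
    known = {v: set(adj[v]) | {v} for v in adj}
    for _ in range(iterations):
        # every node learns everything known by the nodes it already
        # knows, which doubles the radius of its known ball
        known = {v: set().union(*(known[u] for u in known[v]))
                 for v in adj}
    return known
\end{verbatim}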
\subsection{Partition Phase}\label{sec:construction}
We first prove some properties of the $H$-partition constructed by the greedy peeling algorithm that will be useful for an efficient implementation in the low-memory $\mathsf{MPC}$\xspace model.
\begin{lemma}\label{HDecompProp}
The $H$-partition with out-degree $d$, constructed by the greedy peeling algorithm, satisfies the following properties.
\begin{enumerate}[(i)]
\item For all $0\leq i \leq \ell$, the number $\left|\bigcup_{j=i}^{\ell}L_j\right|$ of nodes in layers with index $\geq i$ is at most $n \left(\frac{2\lambda}{d}\right)^{i-1}$. In other words, if we remove all nodes in layer $i$ from the set of nodes in layers $\geq i$, then the number of nodes drops by a factor of $\frac{2\lambda}{d}$, i.e., $\left|\bigcup_{j=i+1}^{\ell}L_j\right|\leq \frac{2\lambda}{d}\left|\bigcup_{j=i}^{\ell}L_j\right|$ for all $0\leq i \leq \ell$.
\item There are at most $\ell=O\left(\log_{\frac{d}{\lambda}}n\right)$ layers.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (i) by induction, thus assume that there are $n_{i}\leq n \left(\frac{2\lambda}{d}\right)^{i-1}$ nodes in the graph $H_i$ induced by vertices in layers $\geq i$. Towards a contradiction, suppose that there are $n_{i+1}>n \left(\frac{2\lambda}{d}\right)^{i}$ nodes in layers $\geq i+1$. By construction, all these nodes must have had degree larger than $d$ in $H_i$, as otherwise they would have been added to layer $i$. This results in an average degree of more than $\frac{n_{i+1} d}{n_i} =2\lambda$ in $H_i$, which contradicts the well-known upper bound of $2\lambda$ on the average degree in a graph that has arboricty at most $\lambda$. Note that (ii) is a direct consequence of (i).
\end{proof}
In the following, we describe how to compute the $H$-partition with parameter $d = \Delta^{1/10}$ in the low-memory $\mathsf{MPC}$\xspace model\footnote{Note that it is easy to learn the maximum degree, and hence $d$, in $O(1)$ rounds of communications.}.
Throughout this section, we assume that $\Delta \geq (2\lambda)^{20}$, i.e., that $d \geq (2\lambda)^2$.
Observe that if $\Delta^2 > n^\delta$, then the $H$-partition with parameter $d = \Delta^{1/10}$ consists of $O(\log_{d/\lambda}n) = O(\log_{\Delta}n) = O(1/\delta)$ layers, in which case the arguments in this section imply that going through the layers one by one will easily yield at least as good runtimes as for the more difficult case of $\Delta^2 \leq n^\delta$.
Hence, throughout this section, we assume that $\Delta^2 \leq n^\delta$.
The goal of the algorithm for computing the $H$-partition is that each node (or, more formally, the machine storing the node) knows in which layer of the $H$-partition it is contained.
The algorithm proceeds in iterations, where each iteration consists of two parts: first, the output, i.e., the layer index, is determined for a large fraction of the nodes, and second, these nodes are removed for the remainder of the computation. The latter ensures that the remaining small fraction of nodes can use essentially all of the total available memory in the next iteration, resulting in a larger memory budget per node.
However, there is a caveat: When the memory budget per node exceeds $\Theta(n^\delta)$, i.e., the memory capacity of a single machine, then it is not sufficient anymore to merely argue that the used memory of all nodes together does not exceed the total memory of all machines together\footnote{Furthermore, in the ``shuffle'' step of every MPC round~\cite{karloff2010model}, we assume that the nodes are stored in the machines in a balanced way, i.e., as long as a single node fits onto a single machine and the total memory is not exceeded, the underlying system takes care of load-balancing.}.
We circumvent this issue by repeatedly restarting the above process from scratch (in the remaining graph) each time the memory requirement per node reaches the memory capacity of a single machine.
As we will see, the number of repetitions, called \emph{phases}, is bounded by $O(1/\delta)$.
\newpage
In the following, we examine the phases and iterations in more detail.
\paragraph{Algorithm Details}
Let $k$ be the largest integer s.t.\ $\Delta^{2^k+1}\leq n^{\delta}$ (which implies that $k \geq 0$).
The algorithm consists of phases and each phase consists of $k+1$ iterations.
Next, we describe our implementation of the graph exponentiation in more detail.
\begin{itemize}
\item In each iteration $i = 0, 1, \ldots, k$, we do the following.
\begin{itemize}
\item Let $G_i = G_i^{(0)}$ be the graph at the beginning of iteration $i$. Each node connects its current $1$-hop neighborhood to a clique by adding virtual edges to $G_i$; if $i = 0$, omit this step.
Perform $20$ repetitions of the following process if $i \geq 1$, and $60$ repetitions if $i=0$:
\begin{itemize}
\item In repetition $0 \leq j \leq 19$ (resp.\ $0 \leq j \leq 59$), each node computes its layer index in the $H$-partition of $G_i^{(j)}$ (with parameter $d$) or determines that its layer index is strictly larger than $2^i$, upon which all nodes in layers with index at most $2^i$ (and all their incident edges) are removed from the graph, resulting in a graph $G_i^{(j+1)}$.
\end{itemize}
Set $G_{i+1} = G_i^{(20)}$ (resp.\ $G_{i+1} = G_i^{(60)}$ if $i=0$).
\end{itemize}
At the end of the phase remove all added edges.
\end{itemize}
The algorithm terminates when each node knows its layer.
Note that each time a node is removed from the graph, the whole layer that contains this node is removed, and each time such a layer is removed, all layers with smaller index are removed at the same time or before.
By the definition of the $H$-partition, if we remove the $\ell$ layers with smallest index from a graph, then there is a 1-to-1 correspondence between the layers of the resulting graph and the layers with layer index at least $\ell+1$ of the original graph.
More specifically, layer $\ell'$ of the resulting graph contains exactly the same nodes as layer $\ell+\ell'$ of the original graph.
Hence, if a node knows its layer index in some $G_i^{(j)}$, it can easily compute its layer index in our original input graph $G$, by keeping track of the number of deleted layers, which is uniquely defined by $i$, $j$ and the number of the phase.
We implicitly assume that each node performs this computation upon determining its layer index in some $G_i^{(j)}$ and in the following only consider how to determine the layer index in the current graph.
\paragraph{Implementation in the $\mathsf{MPC}$\xspace Model}
Let us take a look at one iteration.
Connecting the $1$-hop neighborhoods to cliques is done by adding the edges that are missing.
Edges that are added by multiple nodes are only added once (since the edge in question is stored by the machines that contain an endpoint of the edge, this is straightforward to realize).
Note that during a phase, the $1$-hop neighborhoods of the nodes grow in each iteration (if not too many close-by nodes are removed from the graph); more specifically, after $i$ iterations of connecting $1$-hop neighborhoods to cliques, the new $1$-hop neighborhood of a node contains exactly the nodes that were contained in its $2^i$-hop neighborhood at the beginning of the phase (and were not removed so far).
In iteration $i$, the layer of a node is computed as follows: First each node locally gathers the topology of its $2^i$-hop neighborhood (without any added edges).\footnote{Note that it is easy to keep track of which edges are original and which are added, incurring only a small constant memory overhead; later we will also argue why storing the added edges does not violate our memory constraints.}
Since this step is performed after connecting the $2^{i-1}$-hop neighborhood of each node to a clique (by repeatedly connecting $1$-hop neighborhoods to cliques), i.e., after connecting each node to any other node in its $2^i$-hop neighborhood, only $1$ round of communication is required for gathering the topology.
Moreover, since a node that knows the topology of its $2^i$-hop neighborhood can simulate any $(2^i - 1)$-round distributed process locally, it follows from the definition of the $H$-partition, that knowledge of the topology of the $2^i$-hop neighborhood is sufficient for a node to determine whether its layer index is at most $2^i$ and, if this is the case, in exactly which layer it is contained.
Thus, the only tasks remaining are to bound the runtime of our algorithm and to show that the memory restrictions of our model are not violated by the algorithm.
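One possible way to carry out this local decision is sketched below in Python: the node simulates $t=2^i$ peeling rounds inside the collected $t$-hop ball, only ever trusting the simulated degree of a node whose entire remaining neighborhood is guaranteed to lie inside the ball. The routine and all identifiers are purely illustrative.
\begin{verbatim}
def local_layer_index(center, ball_adj, dist, d, t):
    """Decide whether `center` lies in one of the first t layers.

    ball_adj: adjacency restricted to the t-hop ball around `center`
              (original edges only); dist: node -> hop distance from
              `center`; d: H-partition parameter; t: rounds to simulate.
    Returns the layer index of `center` if it is at most t, else None.
    """
    alive = set(ball_adj)
    degree = {v: len(ball_adj[v]) for v in ball_adj}
    for rnd in range(1, t + 1):
        # a node's simulated degree is only trusted while all of its
        # neighbours are guaranteed to lie inside the collected ball
        peeled = {v for v in alive
                  if dist[v] <= t - rnd and degree[v] <= d}
        if center in peeled:
            return rnd
        for v in peeled:
            for u in ball_adj[v]:
                if u in alive:
                    degree[u] -= 1
        alive -= peeled
    return None
\end{verbatim}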
\paragraph{Runtime}
It is easy to see that every iteration takes $O(1)$ rounds.
Thus, in order to bound the runtime of our algorithm, it is sufficient to bound the number of iterations by $O(1/\delta \cdot\log \log n)$.
By \cref{HDecompProp} (ii) the number of layers in the $H$-partition of our original input graph $G$ is $O(\log_{d/\lambda}n)$, which is $O(\log_{\Delta}n)$ since $d/(2\lambda) \geq \sqrt{d} = \Delta^{1/20}$.
Consider an arbitrary phase.
According to the algorithm description, in iteration $i \geq 1$, all nodes in the $20 \cdot 2^i$ lowest layers are removed from the current graph.
Hence, ignoring iteration $0$, the number of removed layers doubles in each iteration, and we obtain that the number of layers removed in the $k+1$ iterations of our phase is $\Omega(2^k)$.
By the definition of $k$, we have $\Delta^{2^{k+1}+1} > n^{\delta}$, which implies $2^k > 1/3 \cdot \delta \cdot \log_{\Delta}n$.
Combining this inequality with the observations about the total number of layers and the number of layers removed per phase, we see that the algorithm terminates after $O(1/\delta)$ phases.
Since there are $k+1 = O(\log \log n)$ iterations per phase, the bound on the number of iterations follows.
\paragraph{Memory Footprint}
As during the course of the algorithm edges are added and nodes collect the topology of certain neighborhoods, we have to show that adding these edges and collecting these neighborhoods does not violate our memory constraints of $O(n^\delta)$ per machine.
As a first step towards this end, the following lemma bounds the number of nodes contained in graph $G_i$.
\begin{lemma} \label{lem:nodedecrease}
Consider an arbitrary phase. Graph $G_i$ from that phase contains at most $n'/(\Delta^{2^i})$ nodes, for all $i \geq 1$, where $n' = n/\Delta$.
\end{lemma}
\begin{proof}
By \cref{HDecompProp} (i), removing the nodes in the layer with smallest index from the current graph decreases the number of nodes by a factor of at least $d/(2\lambda) \geq d^{1/2} = \Delta^{1/20}$.
We show the lemma statement by induction.
Since in iteration $0$ the nodes in the $60$ layers with smallest index are removed, we know that $G_1$ contains at most $n/(\Delta^3) = n'/(\Delta^{2^1})$ nodes.
Now assume that $G_i$ contains at most $n'/(\Delta^{2^i})$ nodes, for an arbitrary $i \geq 1$.
According to the design of our algorithm, $G_{i+1}$ is obtained from $G_i$ by removing the nodes in the $20 \cdot 2^i$ layers with smallest index.
Combining this fact with our observation about the decrease in the number of nodes per removed layer, we obtain that $G_{i+1}$ contains at most $n'/(\Delta^{2^i}) \cdot 1/(\Delta^{2^i}) = n'/(\Delta^{2^{i+1}})$ nodes.
\end{proof}
Using \cref{lem:nodedecrease}, we now show that the memory constraints of the low-memory $\mathsf{MPC}$\xspace model are not violated by our algorithm.
Consider an arbitrary phase and an arbitrary iteration $i$ during that phase.
If $i=0$, then no edges are added and each node already knows the topology of its $2^i$-hop neighborhood, so no additional memory is required.
Hence, assume that $i \geq 1$.
Due to \cref{lem:nodedecrease}, the number of nodes considered in iteration $i$ is at most $n'/(\Delta^{2^i})$, where, again, $n' = n/\Delta$.
After the initial step of connecting $1$-hop neighborhoods to cliques in iteration $i$, each remaining node is connected to all nodes that were contained in its $2^i$-hop neighborhood in the original graph $G$ (and were not removed so far).
Hence, each remaining node is connected to at most $O(\Delta^{2^i})$ other nodes, resulting in a memory requirement of $O(\Delta^{2^i})$ per node, or $O(n/\Delta)$ in total.
Similarly, when collecting the topology of its $2^i$-hop neighborhood, each node has to store $O(\Delta^{2^i} \cdot \Delta)$ edges, which requires at most $O(\Delta^{2^i} \cdot \Delta)$ memory, resulting in a total memory requirement of $O(n)$.
Hence, the described algorithm does not exceed the total memory available in the low-memory $\mathsf{MPC}$\xspace model.
Moreover, due to the choice of $k$, the memory requirement of each single node does not exceed the memory capacity of a single machine.
\newpage
\subsection{Simulation of the Mark-and-Propose and Selection Phase}\label{sec:simul}
For the simulation of the mark-and-propose and selection phase, we rely heavily on the approach of \cref{sec:construction}.
Recall that nodes were removed in \emph{chunks} consisting of several consecutive layers and that before a node $v$ was removed, $v$ was directly connected to all nodes contained in a large neighborhood around $v$ by adding the respective edges.
For the simulation, we go through these chunks in the reverse order in which they were removed.
Note that in which chunk a node is contained is uniquely determined by the layer index of the node.
As each node computes its layer index during the construction of the $H$-partition, each node can easily determine in which part of the simulation it will actively participate.
However, there is a problem we need to address:
For communication, we would like the edges that we added during the construction of the $H$-partition to be available also for the simulation.
Unfortunately, during the course of the construction, we removed added edges again to free memory for the adding of other edges.
Fortunately, there is a conceptually simple way to circumvent this problem: in the construction of the $H$-partition, add a pre-processing step in the beginning, in which we remove the lowest $c \cdot \log(1/\delta \cdot \log \log n)$ layers (where $c$ is a sufficiently large constant) one by one in $\log(1/\delta \cdot \log \log n)$ rounds, which increases the available memory (compared to the number of (remaining) nodes) by a factor of $\Omega(1/\delta \cdot \log \log n)$, by \cref{HDecompProp}.
Since the algorithm for constructing the $H$-partition consist of $O(1/\delta \cdot \log \log n)$ iterations, this implies that we can store all edges that we add during the further course of the construction simultaneously without violating the memory restriction, by an argument similar to the analogous statement for the old construction of the $H$-partition.
Similarly, also the number of added edges incident to one particular node does not exceed the memory capacity of a single machine.
In the following, we assume that this pre-processing step took place and all edges added during the construction of the $H$-partition are also available for the simulation.
\paragraph{Matching Algorithm}
As mentioned above, we process the chunks one by one, in decreasing order w.r.t.\ the indices of the contained layers.
After processing a chunk, we want each node contained in the chunk to know the output of all incident edges according to the centralized matching algorithm. In the following, we describe how to process a chunk, after some preliminary ``global" steps.
The mark-and-propose phase of the algorithm is straightforward to implement in the low-memory $\mathsf{MPC}$\xspace model: each node (in each chunk at the same time) performs the marking of an outgoing edge as specified in the algorithm description.\footnote{Note that, formally, the algorithm for constructing the $H$-partition only returns the layer index for each node; however, from this information each node can easily determine which edges are outgoing, unoriented, or incoming according to the partial orientation induced by the $H$-partition.}
The proposing is performed for all nodes before going through the chunks sequentially: each node proposes one of its marked incoming edges (if there is at least one) uniformly at random.
Note that proposing an edge does not necessarily indicate that this edge will be added to the matching; more specifically, an edge proposed by some node $v$ will be added to the matching iff the edge that $v$ marked is not selected to be added to the matching.\footnote{In other words, only proposed edges can go into the matching and whether such an edge indeed goes into the matching can be determined by going through the layers in decreasing order and only adding a proposed edge if there is no conflict.}
After this mark-and-propose phase, the processing of the chunks begins.
Consider an arbitrary chunk.
Let $i$ be the iteration (in some phase) in which this chunk was removed in the construction of the $H$-partition, i.e., the chunk consists of $2^i$ layers.
Each node in the chunk collects the topology of its $2^i$-hop neighborhood in the chunk including the information contained therein about proposed edges.
Due to the edges added during the construction of the $H$-partition, this can be achieved in a constant number of rounds, and by an analogous argument to the one at the end of \cref{sec:construction}, collecting the indicated information does not violate the memory restrictions of our model.
\cref{lem:matchingimp} shows that the information contained in the $2^i$-hop neighborhood of a node is sufficient for the node to determine the output for each incident edge in the centralized matching algorithm.
\begin{lemma} \label{lem:matchingimp}
The information about which edges are proposed in the $2^i$-hop neighborhood of a node $v$ uniquely determines the output of all edges incident to $v$ according to the centralized matching algorithm.
\end{lemma}
\begin{proof}
From the design of the centralized matching algorithm, it follows that an edge is part of the matching iff 1) the edge is proposed and 2) either the higher-layer endpoint of the edge has no outgoing edges or the outgoing edge marked by the higher-layer endpoint is not part of the matching.
Hence, in order to check whether an incident edge is in the matching, node $v$ only has to consider the unique directed chain of proposed edges (in the chunk) starting in $v$.
Clearly, the information which of the edges in this chain are proposed uniquely defines the output of the first edge in the chain, from which $v$ can infer the output of all other incident edges.
Since the number of edges in the chain is bounded by $2^i - 1$ as the chain is directed, the lemma statement follows.
\end{proof}
It thus follows from the bound on the number of iterations that the simulation of the selection phase for the matching algorithm can be performed in $O(1/\delta \cdot\log \log n)$ rounds of communication.
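For illustration, the local resolution described in the proof of \cref{lem:matchingimp} can be phrased as the following small Python routine, where the chain length and the existence of a proposed incoming edge are assumed to have been read off the collected neighborhood; the statuses along the chain simply alternate, starting from its topmost edge, which is always matched.
\begin{verbatim}
def resolve_via_chain(chain_length, has_proposed_incoming):
    """Resolve v's incident edges from its chain of proposed edges.

    chain_length: number of edges on the directed chain of proposed
    edges starting at v (0 if v's marked edge was not proposed).
    Returns (v_matched_upwards, v_incoming_proposal_matched).
    """
    # the topmost chain edge is matched and statuses alternate below,
    # so only the parity of the chain length matters for v
    matched_upwards = chain_length % 2 == 1
    incoming_matched = has_proposed_incoming and not matched_upwards
    return matched_upwards, incoming_matched
\end{verbatim}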
\paragraph{Independent Set Algorithm}
The simulation of the independent set algorithm proceeds analogously to the case of the matching algorithm.
First, each node performs the marking and proposing in a distributed fashion in a constant number of rounds.
Then, the chunks are processed one by one, as above, where during the processing of a chunk removed in iteration $i$, each node contained in the chunk collects its $2^i$-hop neighborhood, including the information about which nodes are proposed, and then computes its own output locally.
By analogous arguments to the ones presented in the case of the matching algorithm, the algorithm adheres to the memory constraints of our model and the total number of communication rounds is $O(1/\delta \cdot\log \log n)$.
The only part of the argumentation where a bit of care is required is the analogue of \cref{lem:matchingimp}:
In the case of the independent set algorithm the output of a node $v$ may depend on \emph{each} of its parents since each of those could be part of the independent set, which would prevent $v$ from joining the independent set.
However, \emph{all} nodes in the chunk that can be reached from $v$ via a directed chain of edges are contained in $v$'s $2^i$-hop neighborhood; therefore, collecting the own $2^i$-hop neighborhood is sufficient for determining one's output.
Note that at the end of processing a chunk, if we follow the above implementation, we have to spend an extra round for removing the neighbors of all selected independent set nodes since these may be contained in another chunk.
\bibliographystyle{alpha}
\section{Introduction and Motivations}
The systematic investigation of strings in curved spacetimes,
started in \cite{plb}, has uncovered a variety of new physical phenomena
(see \cite{erice,nueview} for a general review). These results are
relevant both for
fundamental (quantum) strings and for cosmic strings, which behave in
an essentially classical way.
The study of classical and semiclassical strings in curved backgrounds will
provide and is indeed providing us with a better comprehension of what a
consistent string theory and gravity theory entail. In this
context we place
the present paper, which continues the line of research set by
\cite{plb}.
Among the heretofore existing analyses of the motion of classical strings in
gravitational backgrounds, a special place is to be granted to exact solutions,
usually obtained by means of separable ans\"atze
(non-separable exact solutions were systematically constructed for de
Sitter spacetime \cite{cdms}).
Such are the circular string
ansatz \cite{letelier,hjile}, which for stationary axially symmetric spacetimes
reduces the nonlinear equations of string motion to an equivalent
one-dimensional dynamical system
\cite{schwads}, and the stationary string ansatz \cite{demirm953}.
In this paper we examine a different ansatz, which we have called the planetoid
ansatz, in stationary axisymmetric $ 3 + 1 $ spacetime backgrounds.
The planetoid solutions are straight non-oscillating string solutions
that rotate uniformly around the symmetry axis of the spacetime.
In Schwarzschild black holes,
they are permanently pointing towards $ r = 0 $ while
they rotate outside the horizon. In de Sitter spacetime the planetoid
rotates around its center.
We call our ansatz planetoid since it generalizes to strings the
bounded circular orbits of point particles in such spacetimes. In the
case of the
Schwarzschild geometry the planetoid string solutions presented here
are generalizations of the circular orbits of planets.
We will show how our planetoid
ansatz produces either orbiting strings with bounded world-sheet and
length or strings of unbounded length.
The main competing physical forces in the context of this ansatz are the
attraction of gravity, the centrifugal force, and the string tension. The
combination of these three causes in different proportions produces
different effects, as we will now see.
It should be noted that the effects mentioned take place even when the
gravitational field acting on the
string is not strong. They are due to the non-local character of the string.
We quantize semiclassically the planetoid string solutions using the
WKB method adapted to periodic string solutions \cite{dvls}.
We obtain in this way their masses as a function of the angular momentum. Such
relations are non-linear and can be considered as a (generalized)
Regge trajectory [See figs. 1 and 2].
\section{Equatorial planetoid ansatz}
\subsection{The ansatz and the string equations of motion}
We consider our classical strings propagating in $ 3 + 1 $ dimensional
stationary axisymmetric spacetime. For simplicity, we restrict ourselves in
this paper to strings propagating in the
equatorial plane $ z = 0 $. We can thus restrict ourselves to the
$2+1$ metric with line element of the form
\begin{equation}
{\rm d}s^2=g_{tt}(r)\; {\rm d}t^2 +g_{rr}(r)\;{\rm d}r^2 + 2 \;
g_{t\phi}(r)\; {\rm d}t\, {\rm d}\phi + g_{\phi\phi}(r)\; {\rm d}\phi^2\,.
\end{equation}
Let $\tau$ and $\sigma$ be the time-like and
space-like world-sheet coordinates, respectively, in the conformal
gauge. Under the ansatz
\begin{eqnarray}
t & = & t_0 + \alpha\, \tau\,,\nonumber\\
\phi & = & \phi_0 + \beta\,\tau\,,\nonumber\\
r & = & r(\sigma)\,,\label{ansatz}
\end{eqnarray}
the equations of motion for a string in this background are given by the
following one-dimensional equivalent system:
\begin{eqnarray}
\left({{{\rm d}r}\over{{\rm d}\sigma}}\right)^2 + g^{rr}\big[\alpha^2
\; g_{tt}
+2 \, \alpha \,\beta \, g_{t\phi} & + &
\beta^2 \, g_{\phi\phi}\big]=\nonumber\\
\left({{{\rm d}r}\over{{\rm d}\sigma}}\right)^2
+V(r) & = & 0\,.\label{laecuacion}
\end{eqnarray}
The function
$r(\sigma)$ will then be given by the zero energy motion in
$\sigma$ ``time'' of $r$ under the potential $V(r)=\alpha^2 \,
g^{rr}\left[g_{tt}
+ 2 \,\lambda \,g_{t\phi}+
\lambda^2 \, g_{\phi\phi}\right]$, with $\lambda=\beta/\alpha$.
Quite obviously,
the movement of the string will be periodic. The physical period
$T$ in coordinate time $t$ relates to
$\lambda$ through
$$
T={{2\pi}\over {\lambda}} \; .
$$
It will prove useful to introduce the `physical' potential
$$
\tilde V(r)=V(r)/\alpha^2=g^{rr}\left[g_{tt}
+ {{4\pi}\over T }\, g_{t\phi}+
{{4\pi^2}\over T^2}\; g_{\phi\phi} \right] \; ,
$$
since it only depends on the physical parameter $T$.
The boundary conditions for open strings, namely,
$\partial X_\mu/\partial\sigma=0$ at the ends of the string, are naturally
fulfilled by this ansatz.
Following
\cite{chgordon}, we see that this is the only ansatz that separates
variables,
lets strings be dynamical, and respects the open string
boundary conditions, when $r=r(\sigma)$ is chosen.
Note also that this ansatz differs from the circular string ansatz (i.e.,
$t=t(\tau)$,
$\phi=\phi_0 +\nu\sigma$, $r=r(\tau)$) in the dependence of $r$ on the
space-like world-sheet (conformal) coordinate and in the form of the equivalent
one-dimensional energy equation, which for this latter case reads $\dot r^2+
g^{rr}\left[\mu^2 g^{tt} +\nu^2 g_{\phi\phi}\right]$, where the dot stands for
the derivative with respect to $\tau$.
The invariant size of the planetoid string is given by the substitution of the
ansatz in the line element:
\begin{equation}\label{talla}
{\rm d}s^2=g_{rr}\left({{{\rm d}r}\over{{\rm d}\sigma}}\right)^2\left(-{\rm
d}\tau^2+{\rm d}\sigma^2\right)\,.
\end{equation}
\subsection{Energy and angular momentum}
It is well known that the definition of a stress-energy tensor for an extended
object in general relativity is no mean task \cite{dixon}. In the case at hand,
however, there exists a favored time coordinate, for which a Killing vector
exists ($\partial/\partial t$). This allows us to define clearly what is meant
by energy \cite{dixon}: ${\cal E}=\alpha/\alpha'$.
Similarly, the existence of the Killing
vector $\partial/\partial\phi$, associated with the rotational symmetry, allows
for the definition of an angular momentum about the axis. In particular, this
is performed as follows: the function $\phi(\sigma,\tau)$ appears in the string
Lagrangian only through its derivatives, whence the conserved
world-sheet current is obtained by Noether's theorem
$$
J_{\mu} = {2 \over {\pi \alpha'}}\; \left[ g_{t\phi} \; \partial_{\mu}t
+ g_{\phi\phi} \; \partial_{\mu}\phi \right] \; .
$$
The integration of this current provides us with the string angular
momentum $J$,
$$
J \equiv \int J_{\tau} \; d\sigma = {2 \over {\pi \alpha'}}\;
\int_{{r_{\rm min}}}^{{r_{\rm max}}}{\rm d}r\,
{{g_{t\phi} + {{2\pi}\over T}\; g_{\phi\phi}}\over{\sqrt{-\tilde
V(r)}}}\, ,
$$
where we used eq.(\ref{laecuacion}) and $r_{\rm min}$ and $r_{\rm
max}$ denote the minimum and maximum radius reached by the string,
respectively.
\subsection{General expressions and quantization condition}
We will collect here the expressions for the physical string magnitudes:
angular momentum $J$, classical
action for solutions $S_{\rm cl}$, mass $ m $, and reduced action $
W(m) $. The mass will be defined as
$m:=-{\rm d}S_{\rm cl}/{\rm d}T$, with $T$ the period. The reduced action
\cite{dvls} is thus obtained as $W(m)=mT(m) +S_{\rm
cl}\left(T(m) \right)$.
The quantization condition will read $W(m)=2\pi n$ (in units with $\hbar
=1$).
For the case at hand, closed expressions in terms of quadratures can be
obtained for all these quantities, as follows:
\begin{eqnarray}\label{expresiones}
S_{\rm cl}(T) & = & -{{2T}\over{\pi\alpha'}} \int_{{r_{\rm min}}}^{{r_{\rm
max}}}{\rm d}r\,g_{rr}\sqrt{-\tilde V(r)}\,,\cr\cr
W & = & {{4}\over{T\alpha'}}\int_{{r_{\rm min}}}^{{r_{\rm
max}}}{\rm d}r\,{{Tg_{t\phi} + 2\pi g_{\phi\phi}}\over{\sqrt{-\tilde
V(r)}}}\,,\cr\cr
m & = & {{W-S_{\rm cl}}\over{T}}\,,\cr\cr
J & = & {{W}\over{2\pi}}\,.
\end{eqnarray}
As is immediately obvious from these
expressions, it is not necessary to have the solution $r(\sigma)$ in a closed
form for the quantities indicated to be evaluated, and in what follows we will
not use the explicit expressions for $r=r(\sigma)$,
which, after all, is dependent on the parametrization of the world-sheet. It
should be noted that the previously mentioned quantization condition
[$ W(m)=2\pi n $] is equivalent for this class of solutions to
$J=n$. This should be interpreted as a consistency check of the
semiclassical quantization being performed.
The invariant string length at a fixed time $ t $ follows from
eq.(\ref{talla})
\begin{equation}\label{loninv}
s = \int_{{r_{\rm min}}}^{{r_{\rm max}}}{\rm d}r\,\sqrt{g_{rr}} \; .
\end{equation}
\section{Explicit solutions and their analysis}
\subsection{Minkowski spacetime}
In order to improve our understanding of the physical meaning of the
solutions being examined, let us first take the simple Minkowski case, for which
$g_{tt}=-1$,
$g_{rr}=1$,
$g_{t\phi}=0$, and
$g_{\phi\phi}=r^2$. Equation (\ref{laecuacion}) then becomes
$$
\left({{dr}\over {d\sigma}}\right)^2+ \lambda^2 r^2-1=0\,.
$$
The solution is immediate:
$$
r={T \over {2\pi}}\; \left|\cos\left({{2\pi}\over T} \sigma \right)\right| \; ,
$$
where $ 0 \leq \sigma \leq T/2 $.
In Cartesian coordinates,
$$
x = {T \over {2\pi}}\; \cos\left({{2\pi\tau}\over T} \right)\, \cos
\left({{2\pi\sigma}\over T} \right)\,
$$
$$
y = {T \over {2\pi}}\; \sin\left({{2\pi\tau}\over T} \right)\, \cos
\left({{2\pi\sigma}\over T} \right)\, .
$$
It is easy to see that this is a string of length $ T/\pi $
rotating around its middle point which coincides with the origin of
coordinates.
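As a quick cross-check (ours, not part of the original derivation), one can verify symbolically that this configuration, with $\alpha=1$ so that $t=\tau$, satisfies the flat-space string equation of motion $\ddot X^\mu - X''^\mu = 0$ and the conformal-gauge constraints; the short Python/sympy snippet below uses our own variable names.
\begin{verbatim}
import sympy as sp

tau, sigma, T = sp.symbols('tau sigma T', positive=True)
w = 2 * sp.pi / T
X = sp.Matrix([tau,
               sp.cos(w * tau) * sp.cos(w * sigma) / w,
               sp.sin(w * tau) * sp.cos(w * sigma) / w])
eta = sp.diag(-1, 1, 1)
Xd, Xp = X.diff(tau), X.diff(sigma)
print(sp.simplify(X.diff(tau, 2) - X.diff(sigma, 2)))            # zero vector
print(sp.simplify((Xd.T * eta * Xp)[0]))                         # \dot X . X' = 0
print(sp.simplify((Xd.T * eta * Xd)[0] + (Xp.T * eta * Xp)[0]))  # \dot X^2 + X'^2 = 0
\end{verbatim}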
The action, reduced action, mass and angular momentum are therefore
\begin{eqnarray}
S_{\rm cl}(T) & = & -W=-{{T^2}\over{2\pi\alpha'}}\,,\cr \cr
m & = & {T \over {\pi\alpha'}} \, ,\cr \cr
J & = & {{T^2}\over {4\pi^2\alpha'}}\,,
\end{eqnarray}
from which the relation follows
\begin{equation}
\alpha' m^2 = 4J\,.
\end{equation}
It should be noted that this relation differs from the standard one by a factor
4. This is due to the different normalization of the string tension parameter
$\alpha'$.
\subsection{Static Robertson-Walker spacetimes}
As a first curved spacetime, we examine the static Robertson-Walker
universe, with line element
\begin{equation}
{\rm d}s^2= -{\rm d}t^2 + {{{\rm d}r^2}\over{1-\kappa r^2}} + r^2 {\rm
d}\phi^2\,.
\end{equation}
If $\kappa<0$, the potential
$$
{\tilde V} = {{\left({{2\pi r}\over T}\right)^2 - 1 } \over {1 -\kappa r^2}}
$$
is smaller than zero if $0<r<T/2\pi$, as in
Minkowski spacetime. On the other hand, were we to take $\kappa>0$, the
number of
possible types of solutions increases. Consider first $\kappa<0$. Let
$\nu=T\sqrt{-\kappa}/2\pi$, and $\mu=\nu/\sqrt{\nu^2+1}$.
Our computations result in
\begin{eqnarray}
S_{\rm cl}(T) & = &
-{{4T}\over{\pi\alpha'\sqrt{-\kappa}}}\; {1\over{\mu}}\; \left[K(\mu)
-E(\mu)\right]\,,\cr \cr
W & = & {{4T}\over{\pi\alpha'\sqrt{-\kappa}}}\; \left[{1\over{\mu}}\; E(\mu) +
{{\mu^2-1}\over{\mu}}K(\mu)\right]\,,\cr \cr
m & = & {{4}\over{\pi\alpha'\sqrt{-\kappa}}}\; \mu \, K(\mu)\,,
\end{eqnarray}
where $K$ and $E$ are complete elliptic integrals of the first and second kind
respectively, with the elliptic modulus as their argument.
Let us now pass to the $\kappa>0$ situation. There are two classes of
solutions: those that extend from $0$ to ${\rm min}(r_T,r_\kappa)$, and those
from ${\rm max}(r_T,r_\kappa)$ to infinity, where $r_T=T/2\pi$ and
$r_\kappa=1/\sqrt{\kappa}$. The second class of solutions leads to infinite
reduced action. As to the first class, computations yield
\begin{eqnarray}
S_{\rm cl}(T) & = & -{{8r_\kappa^2}\over{\alpha'}}\left[
E\left({{r_T}\over{r_\kappa}}\right) +
\left({{r_T^2}\over{r_\kappa^2}}-1\right)
K\left({{r_T}\over{r_\kappa}}\right)\right]\,,\cr \cr
W & = &
{{8r_\kappa^2}\over{\alpha'}}\left[K\left({{r_T}\over{r_\kappa}}\right)
-E\left({{r_T}\over{r_\kappa}}\right)\right]\,,\cr \cr
m & = & {{4 r_T}\over{\pi\alpha'}}\; K\left({{r_T}\over{r_\kappa}}\right)\,,
\end{eqnarray}
for the case $r_T<r_\kappa$, and
\begin{eqnarray}
S_{\rm cl}(T) & = & -{{8r_\kappa r_T}\over{\alpha'}}
E\left({{r_\kappa}\over{r_T}}\right)\,,\cr \cr
W & = & {{8r_\kappa
r_T}\over{\alpha'}}\; \left[K\!\left({{r_\kappa}\over{r_T}}\right)
-E\left({{r_\kappa}\over{r_T}}\right)\right]\,,\cr \cr
m & = & {{4
r_\kappa}\over{\pi\alpha'}}\; K\!\left({{r_\kappa}\over{r_T}}\right)\,,
\end{eqnarray}
for the case $r_T>r_\kappa$.
We see that the string angular momentum $ J = W/(2\pi) $ {\bf is not}
proportional to $ m^2 $, yielding a non-linear Regge trajectory.
In the $\kappa\to 0^+$ limit, we have
\begin{eqnarray}
W & \buildrel{ \kappa \to 0 }\over = & {{T^2}\over{2\pi\alpha'}}\left(1+
{{3 T^2}\over{32\pi^2}}\; \kappa + \ldots\right)\,,\nonumber\\
m & \buildrel{ \kappa \to 0 }\over = & {{T}\over{\pi\alpha'}}\left(1+
{{T^2}\over{16\pi^2}}\;\kappa + \ldots\right)\,,\label{aproximado}
\end{eqnarray}
and, consequently,
$$\alpha' m^2 \buildrel{ \kappa \to 0 }\over = 4n \left(1+ {{n \alpha'}\over8}
\; \kappa +\ldots\right)\,.$$
In the $\kappa\to 0^+$ limit we find a linear Regge trajectory,
recovering the previous results for Minkowski spacetime.
\subsection{Cosmological and black hole spacetimes}
Let us consider spacetimes with the generic form $g_{rr}=1/a(r)=-1/g_{tt}$,
$g_{t\phi}=0$. The potential $\tilde V$ is then given
by
$\tilde V(r)=a(r)\left(\lambda^2 g_{\phi\phi}-a(r)\right)$. Since the
``motion'' of
$r$ in $\sigma$ can only take place when $\tilde V(r)<0$, we have to
determine the
zeroes of $a(r)$ and of $\lambda^2 g_{\phi\phi}-a(r)$, together with the
asymptotics in the different physical regions.
\subsubsection{de Sitter spacetime}
Included within this set of metrics we find the de
Sitter metric, for which $a(r)=1-H^2r^2$ and $g_{\phi\phi}=r^2$. The radius of
the horizon, $r_H$, is given by $ r_H=1/H $. Thus,
$$
{\tilde V}(r) = (1-H^2r^2)\left\{ \left[ H^2 + \left( {{2\pi}\over T}
\right)^2 \right] \; r^2 - 1 \right\} \; .
$$
The zeroes of the potential $V$ in this case are
$r_H$ and $r_H/\sqrt{1+\left( {{2\pi}\over {HT}}\right)^2}$.
There are two types of planetoid
strings: those of infinite length that are to be found outside the horizon,
and those completely within the horizon, that are of finite length. Let us
concentrate on the latter. The maximum radius is
$ r_{\rm max} = r_H/\sqrt{1+(\lambda/H)^2} $ and $ r_{\rm min} = 0 $.
This is a string rotating around its middle point located precisely
at $ r = 0 $.
The integrals to be performed are complete elliptic integrals, with elliptic
modulus
$$
k=HT/\sqrt{H^2T^2 + 4\pi^2} \; .
$$
Let $ k'=\sqrt{1-k^2} $. Our computations result in the following:
\begin{eqnarray}\label{dS}
S_{\rm cl}(T) & = & -{{8 }\over{k'\alpha'\, H^2}}\left[E(k) -
{k'}^2K(k)\right]\,,\cr \cr
W & = & 2 \pi \, J ={{8 }\over{\alpha'\, H^2}}\;k'\, \left[K(k) -
E(k)\right]\,,\cr \cr
m & = & {4\over{\pi\, H \, \alpha'}}\, k \,E(k)\,.
\end{eqnarray}
It is here obvious that $ J $ is not proportional to $ m^2 $.
For small $HT$, the quantization condition reads $T^2\sim
4\pi^2 n\alpha'$, as in flat spacetime, and the mass of the string is in this
case (compare with \cite{dvls})
\begin{equation}
\alpha'm^2\simeq 4n -7\, H^2 \alpha' n^2 +\cdots \; .
\end{equation}
It follows from eqs.(\ref{dS}) that $ k $ is a two-valued function of
$ W $ and hence of $ n $. Therefore, there are {\bf two } values of $
m $ for each $ n $. This is easy to see from the behaviour of $ W $
for $ k \to 0 $ and for $ k \to 1$. $ W $ vanishes in both cases.
$$
W \buildrel{ k \to 1 }\over = { 8 \over { \alpha'\, H^2}} \; k' \;
\left( \log{4 \over {k'}} - 1 \right) + O(k'^3 \, \log k' ) \; ,
$$
$$
W \buildrel{ k \to 0 }\over = {{ 2\pi } \over { \alpha'\, H^2}} \;
k^2 + O(k^4) \; .
$$
There is a maximum on the values $n$ can take, given by
\begin{equation}
n\leq n_{max} \equiv 0.616\,{1\over{\alpha'\, H^2}} \,.
\end{equation}
This $ n_{max} $ corresponds to the maximal planetoid mass.
The first branch yields masses in the range
$$
0 \leq m \leq m_{\rm max}=1.343\ldots\; {1 \over { H \, \alpha'}} \;,
$$
and the second branch in the range
$$
{4\over {\pi\, \alpha'\, H}} \leq m \leq m_{\rm max}=1.343\ldots\; {1
\over { H \, \alpha'}} \; .
$$
[Notice that $ \frac4{\pi} = 1.2733\ldots $].
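The two constants quoted above can be reproduced numerically from the closed forms for $ W $ and $ m $ in eqs.(\ref{dS}). The short check below is ours (with $\alpha'=H=1$) and uses scipy, whose \texttt{ellipk}/\texttt{ellipe} take as argument the parameter $k^2$ rather than the modulus $k$.
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe   # argument is the parameter k^2

k  = np.linspace(1e-4, 1 - 1e-9, 200001)
kp = np.sqrt(1.0 - k**2)
W  = 8.0 * kp * (ellipk(k**2) - ellipe(k**2))
m  = (4.0 / np.pi) * k * ellipe(k**2)

i = np.argmax(W)
print(W[i] / (2 * np.pi), m[i])   # ~0.616 and ~1.343, the quoted n_max and m_max
# W vanishes both as k -> 0 and as k -> 1, so every allowed n < n_max is
# attained at two values of k, i.e. there are two masses per n.
\end{verbatim}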
We find from eq.(\ref{loninv}) for the invariant string length
$$
s = \frac2{H}\; {\rm arcsin}{1 \over {\sqrt{1 +
\left({{2\pi}\over{HT}}\right)^2 }}}\; .
$$
$s$ takes its maximum value $ {{\pi}\over {H}} $ for the lightest
states in the second branch $ k \to 1 , \; m \to {4\over {\pi\, \alpha'\,
H}}$. The shorter planetoids $ s \simeq {T \over {\pi}} , \; k \to
0 $ correspond to the lightest states in the first branch.
With respect to the infinite length planetoid solutions (that is to say, those
restricted to be outside the horizon), the corresponding action, reduced action
and mass are all infinite.
\subsubsection{Anti-de Sitter spacetime}
In this case, $a(r)=1+H^2r^2$ and $g_{\phi\phi}=r^2$. Only for a
restricted set
of values of $\lambda$ will there be a change of sign in $V$, since only if
$\lambda^2>H^2$ will there be a zero of $V(r)$, namely at
$1/\sqrt{\lambda^2-H^2}$. Therefore, lower values of $\lambda$ correspond to
strings of infinite length, whereas those strings for which $\lambda^2>H^2$
will be of finite length. They will rotate around their middle point
located precisely at $ r = 0 $ with period $ T \; , \; 0 < T <
{{2\pi}\over H} $.
The results for this spacetime are as follows, where
$k=HT/(2\pi)=H/\lambda$:
\begin{eqnarray}
S_{\rm cl}(T) & = & -{{8 }\over{H^2\, \alpha'}}\left[K(k) -
E(k)\right]\,,\cr \cr
W & = & {{8}\over{(H\, k')^2\alpha'}}\left[E(k) - (k')^2
K(k)\right]\,, \cr \cr
m & = & {{4 }\over{\pi\, H \, \alpha'}}{{k}\over{(k')^2}} \, E(k)\,.
\end{eqnarray}
In this case $W$ is a monotonic function of $T$, and so is $m$, so the
doubling of mass eigenvalues found in de Sitter spacetime is not present
here.
For the low-lying mass states we find
$$
\alpha'm^2\simeq 4n -H^2 \alpha' n^2 +\cdots \; .
$$
There is no upper bound in the mass spectrum for anti-de Sitter
spacetime. For large masses we find
$$
m \simeq 2 n H \; , \; n \gg 1 \; .
$$
The spacing of the heavy states is given by $ H $, whereas the small-mass
spacing is determined by $ ( \alpha' )^{-\frac12} $.
\subsubsection{Schwarzschild black hole}
For the Schwarzschild black hole $a(r)=1-2M/r $ and $g_{\phi\phi}=r^2
$, where $ 2 M $ stands for the Schwarzschild radius.
There will be positive zeroes of $V$ other than that at
$2M$ if and only if $16 \pi^2 M^2/T^2\leq4/27$. Of the two additional zeroes
in
this case, one will be placed between $1$ and $1.5$, and the other will be
larger
than $1.5$ in units of $ 2 M $. In the extreme case $16 \pi^2
M^2/T^2=4/27$ the two will coalesce onto
$r_0=1.5$, which is the minimal (unstable) radius for a circular null
geodesic \cite{chandra}. As we increase $T$, one of
the zeroes runs to $1$, and the other out to infinity, these extreme values
being reached for
$2\pi/T=0$, thus corresponding to an infinite static string from the horizon
to infinity \cite{fszh}.
Let us choose the following parametrization for $T$, and consequently for the
roots $r_i$ of $V$, with $r_i=2Mx_i$:
\begin{eqnarray}\label{raices}
T & = & {{6\pi M\sqrt{3}}\over{\cos(3s)}}\,,\\ \cr
x_1 & = & -{{3\cos s}\over{\cos(3s)}}\,,\cr\cr
x_2 & = & {{3}\over{2\cos(3s)}}\left(\cos s -\sqrt{3}\sin s\right)\,,\cr\cr
x_3 & = & {{3}\over{2\cos(3s)}}\left(\cos s +\sqrt{3}\sin
s\right)\,,\nonumber
\end{eqnarray}
whence $(r=2M x)$
\begin{eqnarray}
\tilde V(r) & = & {1\over{x^2}}(x-1)\left( {{4\cos^2(3s) x^3}\over{27}} -
x + 1\right)\cr
& = & {{4\cos^2(3s)}\over{27x^2}}(x-1)(x-x_3)(x-x_2)(x-x_1) \,.
\end{eqnarray}
The parameter $ s $ is a function of $ T/M $ as defined by
eq.(\ref{raices}). $ s $ runs from $0$ to $\pi/6$, and the roots are ordered as
$x_1<1<x_2<x_3$. The planetoid string extends from $ r = 2 M x_2 $ to
$ r = 2 M x_3 $. Its invariant length follows from eq.(\ref{talla})
$$
s = 2M [ f(x_3) - f(x_2)] \; ,
$$
where
$$
f(x) = \sqrt{x(x-1)} -{\rm ArgTh}\sqrt{1 - \frac1{x}}\;.
$$
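As a sanity check (ours, with illustrative naming), the parametrization (\ref{raices}) can be evaluated numerically: the snippet below recovers $ T $, the turning points and the invariant length from the parameter $ s $, and verifies that $ x_2 $ and $ x_3 $ are indeed zeros of the cubic factor of $ \tilde V $.
\begin{verbatim}
import numpy as np

def schwarzschild_planetoid(s, M=1.0):
    c3 = np.cos(3 * s)
    T  = 6 * np.pi * M * np.sqrt(3) / c3
    x2 = 1.5 * (np.cos(s) - np.sqrt(3) * np.sin(s)) / c3
    x3 = 1.5 * (np.cos(s) + np.sqrt(3) * np.sin(s)) / c3
    cubic = lambda x: 4 * c3**2 * x**3 / 27 - x + 1      # cubic factor of tilde V
    assert abs(cubic(x2)) < 1e-9 and abs(cubic(x3)) < 1e-9
    f = lambda x: np.sqrt(x * (x - 1)) - np.arctanh(np.sqrt(1 - 1 / x))
    return T, 2 * M * x2, 2 * M * x3, 2 * M * (f(x3) - f(x2))

print(schwarzschild_planetoid(np.pi / 12))  # a generic case
print(schwarzschild_planetoid(1e-6))        # s -> 0: T -> 6 pi sqrt(3) M, r_2 = r_3 = 3M
\end{verbatim}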
The classical, reduced action and mass are then integrals expressible in
terms of elliptic integrals of modulus
$$
k^2={{(x_3-x_2)(1-x_1)}\over{(x_3-1)(x_2-x_1)}} \; .
$$
The
explicit expressions are not by themselves very illuminating, since they
involve combinations of elliptic integrals of different kinds; as a
simple example, we have
\begin{eqnarray}
m & =
&{{T}\over{\pi^2\alpha'}}{{2(x_2-1)}\over{\sqrt{(x_3-1)(x_2-x_1)}}}\times\cr
& & \quad\Pi\left({{x_3-x_2}\over{x_3-1}},
\sqrt{{{(x_3-x_2)(1-x_1)}\over{(x_3-1)(x_2-x_1)}}}\right)\,.
\end{eqnarray}
We use, as before, the notation of ref.\cite{gr}.
An important point is that there is a minimum value for the reduced
action and for the mass, corresponding to
$T_{\rm min}=6\pi\sqrt{3} M$, as follows:
\begin{eqnarray}
W_{\rm min} & = & {{36\pi M^2}\over{\alpha'}}\,,\cr \cr
m_{\rm min} & = & {{2\sqrt{3} M}\over{\alpha'}}\,.
\end{eqnarray}
The classical action for this configuration vanishes.
The period $ T $ has no upper bound. For large $ T $ we find
very long strings with
$$
W \buildrel{ T \to \infty }\over = { 1 \over {6 \pi^3 \, \alpha'
M}}\; T^3 \quad , \quad
m \buildrel{ T \to \infty }\over = { 1 \over {4 \pi^3 \, \alpha' M}}\; T^2
\quad , \quad s \buildrel{ T \to \infty }\over = { T \over {2 \pi}}
- M \log{T \over {\pi M}} + O(1)
$$
and the mass spectrum
$$
(\alpha' M)^{1/3} m \buildrel{ n \to \infty }\over =
\left(\frac9{4\pi}\right)^{1/3}\; n^{2/3} \; .
$$
The Regge trajectory $W(m)$ is well behaved, and we portray it in
Fig. 1.
\subsubsection{Schwarzschild black hole in de Sitter spacetime}
We shall now find competing effects due to the presence of one cosmological
and one black hole horizons. The function $ a(r) $ equals $1-2M/r -
H^2 r^2$, and $g_{\phi\phi}=r^2$. We are presented with three cases:
\begin{itemize}
\item{}
$1/(27 M^2)>(4\pi^2/T^2+H^2)$ (and, a fortiori, $1/(27 M^2)>H^2$); the positive
roots of the potential are the cosmological horizon, the black hole
horizon, and two others, which we examine later.
\item{} $(4\pi^2/T^2+H^2)>1/(27 M^2)>H^2$, when only strings inside
the black hole horizon and outside the cosmological horizon are present within
our ansatz.
\item{} $H^2>1/(27 M^2)$, which entails that there is no horizon and no
strings of the form of our ansatz.
\end{itemize}
We shall now study the first of these cases, when there are four positive
roots of the potential $V$, using a parametrization analogous to the one
before. Let
\begin{eqnarray}
x & = & r/(2M)\,,\cr \cr
H & = & {{\cos(3s_1)}\over{3\sqrt{3}M}}\,,\cr \cr
T & = & {{6\sqrt{3}\pi M}\over{\sqrt{\cos^2(3s_2)-\cos^2(3s_1)}}}\,,\cr \cr
x_{\rm neg} & = & -{{3\cos(s_1)}\over{\cos(3 s_1)}}\,,\cr \cr
x_S & = & {{3}\over{2\cos(3
s_1)}}\left(\cos(s_1)-\sqrt{3}\sin(s_1)\right)\,,\cr \cr
x_H & = & {{3}\over{2\cos(3
s_1)}}\left(\cos(s_1)+\sqrt{3}\sin(s_1)\right)\,,\cr \cr
x_{\rm nn} & = & -{{3\cos(s_2)}\over{\cos(3 s_2)}}\,,\cr \cr
x_2 & = & {{3}\over{2\cos(3
s_2)}}\left(\cos(s_2)-\sqrt{3}\sin(s_2)\right)\,,\cr \cr
x_3 & = & {{3}\over{2\cos(3
s_2)}}\left(\cos(s_2)+\sqrt{3}\sin(s_2)\right)\,,
\end{eqnarray}
with $0\leq s_2\leq s_1\leq\pi/6$. It follows that
\begin{eqnarray}
\tilde V(r) & = & -{{16 \cos^2(3 s_1)\,\cos^2(3 s_2)}\over{729 x^2}}
(x-x_{\rm neg})(x-x_S)\times\cr \cr
& &\quad (x-x_H)
(x-x_{\rm nn})(x-x_2)(x-x_3)\,.
\end{eqnarray}
Take $r_i= 2M x_i$. The four positive roots are ordered as
follows: $r_S\leq r_2\leq r_3\leq r_H$. There are thus strings of the form of
our ansatz extending from $r_2$ to $r_3$, and outside the cosmological horizon
and inside the black hole horizon. The strings outside the cosmological horizon
are of infinite length, mass and action. The really relevant ones for our
purposes are those extending from $r_2$ to $r_3$, in complete analogy with the
results for Schwarzschild's black hole. We portray a numerical computation of
the classical Regge trajectory $W(m)$ in Fig. 2 for the case
$s_1=\pi/12$, that is, $H=\frac{1}{3\sqrt{6}\, M} $. Clearly to be seen are the
two branches
which had previously appeared for the rotating string in de Sitter spacetime.
Surprisingly enough, there is no minimum value for $W$ and $m$ greater than
zero in one of the branches, although it does appear in the second one. This
is
due to the numerical integration, which is very inexact in the limit $T\to
T_{\rm min}(H)=6\pi M\sqrt{3}/\sin(3 s_1)$, and the fact is that there
{\it is} a minimum value for $W$, independent of $H$ and given by
$W_{\rm min}= 36\pi
M^2/\alpha'$, as can be found by computing the adequate limit $s_2\to0$;
the mass
$m$ also has a minimum value, but this time
$H$ dependent: $m_{\rm min}= 2\sqrt{3} M\sin(3 s_1) /\alpha'$. Notice that
we
recover the results previously obtained for Schwarzschild spacetime.
\section{Conclusions}
We have seen that the study of the planetoid solutions to the classical
equations of motion of a string provides us with a variety of effects due to
the structure of the target spacetime. In particular there are two main effects
that we have uncovered:
\begin{enumerate}
\item the existence of a {\sl maximum} value for the angular
momentum of (equatorially) moving strings in spacetimes with particle horizons
(de Sitter and Schwarzschild-de Sitter in particular), which reflects itself in
the existence of two branches in the Regge plot. This means that
the number of bound states is finite in the semiclassical
quantization. (This finiteness should hold exactly, beyond the
semiclassical approximation.)
\item the presence of a minimum value for the angular momentum in the case
of a
black hole event horizon, as in Schwarzschild and Schwarzschild-de
Sitter spacetimes.
\end{enumerate}
It is not difficult to understand this phenomenon in the light of elementary
quantum mechanics. In spacetimes with particle horizons, the preservation of
causality requires that a string extending beyond the horizon be infinite.
The length is quantized in the same manner as the angular momentum,
as can be read off from eq.(\ref{expresiones}); it is thus the case that there
is only a finite number of possible quantum planetoid strings.
As to the minimum value: since a string that penetrates a (Schwarzschild) event
horizon and is to maintain its linearity must extend to infinity, we see that
the ``cutting out'' of part of the spacetime is what forces a minimum value
already at the classical level (quantum mechanically that was only to be
expected).
String solutions that generalize non-circular point particle
trajectories should also exist in the spacetimes considered
here. However, the $\sigma$ and $\tau$ dependence probably cannot
be separated as we did in the planetoid strings presented in this paper.
We
want to stress that the Regge trajectories are no longer linear (even for weak
curvature) in the spacetimes considered here. We thus infer from these classical
test string calculations that the fundamental string spectrum will get strongly
modified in these non-trivial gravitational backgrounds.
\acknowledgments
ILE has to thank the LPTHE for their hospitality on several occasions.
\newpage
\begin{center}
{\bf Figure Captions}
\end{center}
\bigskip
{\bf Fig.1}: The reduced action $ W = 2\pi J $
(where $ J $ is the angular momentum) in units of $\pi M^2 / \alpha' $
as a function of the string mass $ m $ in Schwarzschild spacetime.
\bigskip
{\bf Fig. 2}: The reduced action $ W = 2\pi J $
(where $ J $ is the angular
momentum) as a function of the string mass $ m $ in Schwarzschild-de
Sitter spacetime.
|
1,108,101,563,633 | arxiv | \section{An Analysis}
\label{sec:anal}
Before we present the main result, let us remark on the convergence properties and run-time of both Algorithm \ref{alg:SCDM} and Algorithm \ref{alg:SPC}.
Algorithm \ref{alg:SCDM} has been proposed by \cite{marecek2017matrix}.
They present a convergence result,
which states that the method is monotonic and, with probability 1,
$\liminf_{k\to \infty} \|\nabla_L f(L^{(k)},R^{(k)})\| = 0$
and
$\liminf_{k\to \infty} \|\nabla_R f(L^{(k)},R^{(k)})\| = 0$.
This applies in our case as well.
One could provide a more refined analysis under some assumptions on the initialization \cite{keshavan2010matrix}.
Our main analytical result concerns the statistical performance
of the point-to-subspace query.
Informally,
the randomized point-to-subspace distance query in Algorithm \ref{alg:SPC}
has one-sided error:
If the distance between the vector $x$ and $\textrm{span}(R)$
is no more than $\Delta$ in $\ell^\infty$, we never report otherwise.
If however, the distance actually is more than $\Delta$ in $\ell^\infty$,
considering only a subset $S$ of coordinates may ignore a coordinate where the distance is larger,
and hence mis-report that the vector is within distance $\Delta$ in $\ell^\infty$,
with a certain probability, depending on the number of constraints that
are actually violated.
For example, to achieve the one-sided error of $\epsilon$ with probability of 1/3 or less, this test needs to solve a linear program in dimension
$O(\frac{r \log r}{\epsilon} \log\frac{r \log r}{\epsilon})$.
Notice that this bound is independent of the ``ambient'' dimension $n$.
Formally:
\begin{theorem}
\label{ourThm}
(i) When the distance \eqref{eq:infty} is $D \le \Delta$, Algorithm \ref{alg:SPC} never reports the point is outside the sub-space.
(ii) When the distance \eqref{eq:infty} is $D > \Delta$ because there are $\epsilon n$ coordinates $i$ such that $|x_i - (\hat c R)_i| \ge \Delta$ for all $\hat c$,
then for any $\delta \in (0, 1)$,
when Algorithm \ref{alg:SPC}
considers $s$ coordinates,
$$s = O\left( \frac{1}{\epsilon} \log \frac{1}{\delta} + \frac{r \log r}{\epsilon} \log \frac{r \log r}{\epsilon} \right),$$
sampled independently uniformly at random, it reports that the point is outside the subspace with probability at least $1 - \delta$.
\end{theorem}
\begin{proof} \emph{(Sketch)} \;
To see (i), consider the linear program constructed in Algorithm \ref{alg:SPC} and
notice that its constraints are a subset of those in \eqref{eq:inftyLP}.
If \eqref{eq:inftyLP} is feasible, then any subset of constraints will be feasible.
To see (ii),
we use standard tools from computational geometry.
In particular, we show that
a certain set related to the polyhedron of feasible $x$, which is known as range space, has a small Vapnik-Chervonenkis (VC) dimension $d$.
Subsequently, we apply the celebrated result of \cite{Haussler1987}, which states that for any range space of VC dimension $d$ and $\epsilon, \delta \in (0, 1)$, if $$O\left( \frac{1}{\epsilon} \log \frac{1}{\delta} + \frac{d}{\epsilon} \log \frac{d}{\epsilon} \right)$$
coordinates are sampled independently, we obtain an $\epsilon$-net with probability at least $1 - \delta$.
We refer to Appendix A for the details.
\end{proof}
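To give a feeling for the numbers, with a placeholder constant of our own (the theorem only fixes the asymptotics), the required sample size for, say, $r=10$, $\epsilon=0.1$ and $\delta=0.05$ is on the order of a thousand coordinates, independently of the ambient dimension $n$:
\begin{verbatim}
import math

def sample_size(r, eps, delta, C=1.0):        # C is a placeholder constant
    d = r * math.log(r)                       # VC-dimension scale, O(r log r)
    return math.ceil(C * (math.log(1 / delta) / eps
                          + (d / eps) * math.log(d / eps)))

print(sample_size(10, 0.1, 0.05))             # roughly 1300, for any n
\end{verbatim}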
Next, let us consider the run-time of
Algorithm \ref{alg:SPC}, which is dominated by the feasibility test of a linear program $P$ in Line \ref{line:solveLP}.
Using standard interior-point methods \cite{GONDZIO2012},
if there is a feasible solution to the linear program $P$, an $\epsilon$-accurate
approximation to an optimal solution can be obtained in $O(\sqrt{s} \ln(1/\epsilon))$
iterations, wherein each iteration
amounts to solving a linear system.
This yields an upper bound on the run-time of
$$O \left( \frac{r^{3.5} \log^{3.5} r}{\epsilon^{3.5}} \log^{3.5}\frac{r}{\epsilon} \right),$$
which could be improved considerably, by exploiting the sparsity in the
linear program's constraint matrix.
The same iterations make it possible to detect
infeasibility using the arguments of \cite{kojima1993general},
although the homogeneous self-dual approach of \cite{Ye1994}
with a somewhat worse iteration complexity may be preferable in practice.
Either way, a solver-generator \cite{mattingley2010real,mattingley2012cvxgen} allows for excellent performance.
Alternatively, however, one may consider:
\begin{theorem}
\label{efficient}
There is an algorithm
that can pre-process a sample of $s$ coordinates such that the point-in-subspace membership query can be answered in time $O(\log s)$ in the worst case.
The expected run-time of the pre-processing is $O(s^{r+\epsilon}), \epsilon \ge 0$, where the expectation is with respect to the random behaviour of the algorithm, and remains valid for any input.
\end{theorem}
\begin{proof} \emph{(Sketch)} \;
Notice that one can replace the test of feasibility of a linear program $P$ with a point-location problem in a hyperplane arrangement.
We refer to \cite{deBerg2000,stanley2004} for a very good introduction to hyperplane arrangements, but to provide an elementary intuition:
An alternative geometric view of Algorithm \ref{alg:SPC} is that we have a subspace $P \subseteq \mathbb{R}^s$,
initialise it to $P = \mathbb{R}^s$ in Line \ref{line:init},
and then intersect it with hyperplanes on Lines \ref{constraint1}--\ref{constraint2}.
Equally well, one may consider a hyper-plane arrangement $P$,
initialise it to an empty set in Line \ref{line:init},
and then add hyperplanes on Lines \ref{constraint1}--\ref{constraint2}.
Our goal is not to optimise a linear function over $P$, but rather to decide whether there exists a point within $P$, the intersection of the corresponding half-spaces, which corresponds to one cell of the arrangement.
The actual result follows from the work of \cite{Clarkson1986,Clarkson1987} on hyperplane arrangements.
\end{proof}
While we do not necessarily advocate the use of the algorithm of \cite{Clarkson1987} in ``pedestrian'' applications, there may be large-scale use cases, where the asymptotics do matter and the sampling of the coordinates may be reused.
\section{Proof of the Main Theorem}
As suggested earlier, our goal is to prove Theorem \ref{ourThm}, which we restate here for convenience:
\begin{theorem*}
(i) When the distance \eqref{eq:infty} is $D \le \Delta$, Algorithm \ref{alg:SPC} never reports the point is outside the sub-space.
(ii) When the distance \eqref{eq:infty} is $D > \Delta$ because there are $\epsilon n$ coordinates $i$ such that $|x_i - (\hat c R)_i| \ge \Delta$ for all $\hat c$,
then for any $\delta \in (0, 1)$,
when Algorithm \ref{alg:SPC}
considers $s$ coordinates,
$$s = O\left( \frac{1}{\epsilon} \log \frac{1}{\delta} + \frac{r \log r}{\epsilon} \log \frac{r \log r}{\epsilon} \right),$$
sampled independently uniformly at random, it reports that the point is outside the subspace with probability at least $1 - \delta$.
\end{theorem*}
To see (i), consider the linear program constructed in Algorithm \ref{alg:SPC} and
notice that its constraints are a subset of those in \eqref{eq:inftyLP}.
If \eqref{eq:inftyLP} is feasible, then any subset of constraints will be feasible.
To see (ii), we show that a set related to the polyhedron of feasible $x$ has a small Vapnik-Chervonenkis (VC) dimension and apply classical results from discrete geometry.
In particular, we proceed in four steps:
\begin{enumerate}
\item denote by $\mathcal{S}_1$ the range space for all possible constraints added in Line \ref{constraint1} and by $\mathcal{S}_2$ the range space for all possible constraints added in Line
\ref{constraint2}.
\item The VC dimension of each of $\mathcal{S}_1, \mathcal{S}_2$ is at most $r + 1$.
\item The VC dimension of $\mathcal{S}_1 \cup \mathcal{S}_2$ is $O(r \log r)$.
\item Subsequently, we apply the celebrated result:
\end{enumerate}
\begin{theorem}[\cite{Haussler1986,Haussler1987,Clarkson1986,Clarkson1987}]
\label{Haussler}
Let $(\mathcal{X},\mathcal{R})$ be a range space of Vapnik-Chervonenkis dimension $d$. Let $\epsilon, \delta \in (0, 1)$.
If $\mathcal{S}$ is a set of $$O\left( \frac{1}{\epsilon} \log \frac{1}{\delta} + \frac{d}{\epsilon} \log \frac{d}{\epsilon} \right)$$ points
sampled independently from a finite subset of $\mathcal{X}$, then $\mathcal{S}$ is an $\epsilon$-net for the finite subset with probability at least
$1 - \delta$.
\end{theorem}
To develop these ideas formally, let us reiterate the usual definitions of discrete geometry using the notation of \cite{vapnik1971uniform} and \cite{Haussler1987}, which partly overlaps with the notation used in the paper. We use calligraphic fonts in this appendix to distinguish S of the main body of the paper from $\mathcal{S}$ of the appendix, etc.
\begin{definition}[Range space of \cite{vapnik1971uniform}]
A range space $\mathcal{S}$ is a pair
$(\mathcal{X},\mathcal{R})$, where $\mathcal{X}$ is a set and $\mathcal{R}$ is a family of
subsets of $\mathcal{X}$, $\mathcal{R} \subseteq 2^{\mathcal{X}}$. Members of $\mathcal{X}$ are called
elements or points of $\mathcal{S}$ and members of
$\mathcal{R}$ are called ranges of $\mathcal{S}$. $\mathcal{S}$ is finite if
$\mathcal{X}$ is finite.
\end{definition}
Notice that the range space is a (possibly infinite) hypergraph.
\begin{definition}[Shattering of \cite{vapnik1971uniform}]
Let $\mathcal{S} = (\mathcal{X},\mathcal{R})$ be a range
space and let $\mathcal{A} \subset \mathcal{X}$ be a finite set. Then $\Pi_\mathcal{R}(\mathcal{A})$ denotes the set
of all subsets of $\mathcal{A}$ that can be obtained
by intersecting $\mathcal{A}$ with a range of $\mathcal{S}$.
If $\Pi_{\mathcal{R}}(\mathcal{A}) = 2^{\mathcal{A}}$, we say that $\mathcal{A}$ is shattered
by $\mathcal{R}$.
\end{definition}
\begin{definition}[Dimension of \cite{vapnik1971uniform}]
The Vapnik-Chervonenkis
dimension of $\mathcal{S}$ is the smallest integer $d$ such that no $\mathcal{A} \subset \mathcal{X}$ of cardinality $d + 1$ is shattered
by $\mathcal{R}$. If no such $d$ exists, we say the
dimension of $\mathcal{S}$ is infinite.
\end{definition}
\begin{definition}[$\epsilon$-net of \cite{Haussler1986}]
An $\epsilon$-net of a finite subset of points $P \subseteq \mathcal{X}$ is a subset $\mathcal{N} \subseteq P$ such that any range $\rho \in \mathcal{R}$ with $|\rho \cap P| \ge \epsilon |P|$ has a non-empty intersection with $\mathcal{N}$.
\end{definition}
\paragraph{Step 1.} The range spaces $\mathcal{S}_1$ and $\mathcal{S}_2$ will
share the same set of points, namely $[n] := \{1, 2, \ldots, n\}$,
and feature very similar ranges: $\mathcal{S}_1$ will feature the
hyperplanes
$ x_i - (\hat c R)_i \le \Delta $
corresponding to the first set of constraints in the LP \eqref{eq:inftyLP},
while $\mathcal{S}_2$ will feature the
hyperplanes
$(\hat c R)_i - x_i \le \Delta$.
We keep them separate, so as to allow for the hyperplanes to be
in a generic position.
Alternatively, one could construct a single range space, with the
same set of points and with ranges given by the intersections of $ x_i - (\hat c R)_i \le \Delta $
and
$(\hat c R)_i - x_i \le \Delta$
for $i \in [n]$.
This would, however, complicate the analysis, somewhat.
\paragraph{Step 2.}
The VC dimension of each of $\mathcal{S}_1, \mathcal{S}_2$ is at most $r + 1$.
For range spaces, where the ranges are hyper-planes, this is a standard result. We refer to Section 15.5.1 of \cite{Gaertner2012} for a very elegant proof using Radon's theorem.
Notice that $r$ would suffice, if there were no vertical hyperplanes.
\paragraph{Step 3.}
The VC dimension of $\mathcal{S}_1 \cup \mathcal{S}_2 $ is $O(r \log r)$.
This follows by the counting of the possible ranges and Sauer-Shelah lemma, a standard result. We refer to Lemma 15.6 in \cite{Gaertner2012}.
\paragraph{Step 4.}
The intuition is that if there is a large-enough subset,
a large-enough random sample will intersect with it.
The surprising part of Theorem~\ref{Haussler} on the existence of $\epsilon$-nets is that the bound on the required sample size
does not depend on the number of points of the ground
set, but only on the VC dimension established above.
In particular,
we sample coordinates $S, |S| = s$ in Line \ref{line:sample}. This corresponds to sampling from $\mathcal{X}$ in
$\mathcal{S}_1 \cup \mathcal{S}_2$.
Because we assume there are $\epsilon n$ coordinates $i$
such that for all $\hat c$, there is $|x_i - (\hat c R)_i| \ge \Delta$, an $\epsilon$-net will intersect these coordinates by Theorem~\ref{Haussler}.
\section{Experimental evaluation}
\label{sec:exp}
In order to evaluate our approach, we have implemented both algorithms from scratch in Python, using Numpy~\cite{stefan2011numpy} for numerical linear algebra and \texttt{multiprocess} for
parallel processing.
The experiments were executed on a standard PC equipped with an Intel i7-7820X CPU and 64~GB of RAM.
\subsection{The Data}
In our experiments, we have started with traffic-monitoring data collected at intersections in Dublin, Ireland, between January 1 and November 30, 2017.
Therein, each time series is obtained by one induction loop at an approach to an intersection, with sensors at stop-lines and irregular intervals from stop-lines.
Overall, our data contains readings from $3432$ such sensors, distributed across the city.
In order to use a realistic data set, reflecting the asynchronous operations of the system,
we record the samples as they arrive asynchronously and do not impute
any missing values.
In particular, each intersection operates asynchronously, with all pre-defined phases changing, in turn, within a cycle time varying between 50 and 120 seconds both across the intersections and over time.
Whenever an intersection's cycle time finishes,
we record the flow over the cycle time.
Within any given time period, e.g., 2 minutes, we receive vehicle count data from only a fraction of the sensors.
For each day, we consider data between 7 a.m. and 10 p.m.,
which are of particular interest to traffic operators.
\begin{figure*}[h!]
\begin{minipage}[t]{\textwidth}
\centering
\subfigure[]{%
\label{fig:deviation}
\includegraphics*[width=0.66\textwidth]{./images/final_experiment/values_distribution.pdf}}
\\
\subfigure[]{%
\includegraphics*[width=0.66\textwidth]{./images/final_experiment/DistanceHistoricalalpha.pdf}
\label{fig:distance_historical}}
\caption{~\ref{fig:deviation} The frequencies of the values reported from sensors, with and without additional Gaussian noise.~\ref{fig:distance_historical} An illustration of the mean and standard deviation of the historical flow data at all available sensors (grey), plus the mean values for events (yellow for $\mu = 5$, red for $\mu =35$) and non-events (green). }
\end{minipage}
\end{figure*}
\begin{figure*}[h!]
\begin{minipage}[t]{\textwidth}
\centering
\subfigure[]{%
\includegraphics*[width=0.66\textwidth]{./images/final_experiment/Iterations.eps}
\label{fig:iterations}}
\\
\subfigure[]{%
\includegraphics*[width=0.66\textwidth]{./images/final_experiment/time_error_comparison.pdf}
\label{fig:time_error_comparison}}
\caption{
\ref{fig:iterations}
A sample evolution of the reconstruction error (RMSE) over time for $r =10$.
\ref{fig:time_error_comparison}
Training time until improvement in the error falls below $10^{-4}$ and reconstruction error, both plotted
as a function of dimension $r$.
Notice that the approach seems rather robust to the choice of $r$.
}
\label{fig:time_error_comparison_iterations}
\end{minipage}
\end{figure*}
\subsection{The Results}
\begin{figure*}[h!]
\begin{minipage}[t]{\textwidth}
\centering
\subfigure[Recall]{%
\label{fig:recall_synth}
\includegraphics*[width=0.66\textwidth]{./images/final_experiment/recall_error_mean.pdf}}
\\
\subfigure[Precision]{%
\label{fig:precision_synth}
\includegraphics[width = 0.66\textwidth]{./images/final_experiment/precision_error_mean.pdf}}
\\
\subfigure[F1-Score]{%
\label{fig:f1_synth}
\includegraphics*[width=0.66\textwidth]{./images/final_experiment/f1_score_error_mean.pdf}}
\caption{Recall, precision and F1-score as a function of $\Delta$. For noise of large-enough magnitude ($\mu = 25$, $\mu = 35$), one can obtain an F1 score of 1 for a modest range of $\Delta$.}
\label{fig:syth_recall_precision_f1}
\end{minipage}
\vspace{0.5cm}
\end{figure*}
As described above, the data for our experiments
are from $3432$ sensors distributed across Dublin, Ireland,
as recorded between January and November 2017 at $2$ minute intervals,
if available.
In the flattened matrix $X\in \mathbb{R}^{304\times299430}$, there were $38767895$ zeros out of the $91026720$ elements, representing $42\%$ sparsity.
This is due in large part to the asynchronicity of the sensor readings,
and to a lesser extent to sensor failures.
In order to evaluate our approach, we have created several matrices from $X$; one similar to matrix $X$ with a small amount of noise to represent normal behaviour
and further matrices with more noise to represent events.
Specifically, using rows of matrix $X$, we have created matrices $Y$ and $G$ in the following way.
First, we have generated $1200$ random scalars uniformly over $[0, 2]$ and we multiplied these scalars with rows sampled uniformly at random from matrix $X$ (with repetition). This way, we have obtained $1200$ new non-event time series, each
of which was perturbed by independently
identically uniformly distributed noise on $[- 0.8 \Delta , 0.8 \Delta]$. This way, we obtain matrix $Y\in \mathbb{R}^{1200 \times 299430}$
representing normal behaviour.
In order to validate the event detection, we have also added Gaussian noise to $200$ rows of matrix $Y$ sampled uniformly at random, with the mean $\mu=5,15,25,35$ of the Gaussian noise in four variants of $G \in \mathbb{R}^{200 \times 299430}$.
Figure~\ref{fig:system_model} illustrates the remainder of the process.
We have trained our model using the first $1000$ rows of matrix $Y$ and we left the last $200$ as ground truth for testing.
We have evaluated our model using the last $200$ rows of matrix $Y$ and the $200$ rows of matrix $G$ with respect to recall, precision, and the so-called F1 score, which is the harmonic mean of the former two measures.
Figure~\ref{fig:time_error_comparison} presents a trade-off between time required for training and reconstruction error in the choice of the dimension $r$.
Notice that the reconstruction error is the usual extension of root mean square error to matrices, i.e., the Frobenius norm of the difference between matrix $Y$ and $LR$.
It is clear that increasing dimension value above $10$ leads to marginal improvements in the reconstruction error, but increasing it above $40$ leads to a sharp increase in training time.
We chose $r=10$ for our experiments.
Figure~\ref{fig:deviation} compares the readings of sensors from the non-event matrix $Y$ with events in $G$, while omitting zeros. We can observe that $G$ with $\mu=5$ is hard to distinguish from $Y$.
Figure~\ref{fig:distance_historical} presents the distribution of the values of the samples used for training: the average values of normal samples we used as input plotted in green and the average values of the samples of events (\emph{i.e.}, sets with all Gaussian Noise mean values we used) plotted in red and yellow. We can observe that all supports of the distributions overlap, and especially in the case of $\mu=5$ are very hard to distinguish.
In order to evaluate the performance of our subspace proximity tester, we have measured recall, precision and F1-score using different values of $\Delta$ on the $4$ matrices $G$.
Figures~\ref{fig:recall_synth},\ref{fig:precision_synth},~\ref{fig:f1_synth} and~\ref{fig:syth_recall_precision_f1} present the evolution of recall, precision and F1-score as a function of $\Delta$ for 4 different values of $\mu$.
We ran the experiment $5$ times and present the mean.
As can be observed, for small values of $\Delta$, precision is high, while recall is low across all four $G$.
This is the case, because small values of $\Delta$ lead
to infeasibility of the LP.
As we increase $\Delta$, we observe that our approach identifies more of the input as $Normal$.
On $G$ matrices with $\mu = 15,25,35$, we can observe that
values $\Delta > 20$ lead to the perfect performance
with F1-Score of $1.0$.
We can also observe that for noise of a lesser magnitude ($\mu=5$),
the subspace proximity tester is able to identify samples from $G$ with a maximum F1-score of $\sim 0.7$.
By increasing $\Delta$ beyond this value, precision falls to $50\%$,
which is due to the fact that too many input samples are classified as non-event. This behaviour is to be expected, because by increasing the value of $\Delta$, we are ``relaxing'' the constraints of the linear program, which in turn leads to the non-event outcome being more common.
Finally, we note that in order to classify a new sample in $\mathbb{R}^{1\times 299430}$, our subspace proximity tester requires approximately $0.009$ seconds for a subset of cardinality $s=\log{r} \log{(r/e)}$ with $e=0.1$.
We note that this does not use the algorithm of \cite{Clarkson1987},
and hence can be improved by many orders of magnitude, if needed.
\section{Introduction}
With the rise of Smart Cities and the Internet of Things more generally,
many cities have been instrumented with a large number of sensors
capable of capturing important statistics, such as volumes of traffic and average speeds of cars passing through urban intersections.
Although the information from each of the sensors can be useful in isolation
(\emph{e.g.,} maintaining statistics about traffic, pollution, bus speeds, etc.),
the combination of the information across multiple sensor types could provide more value.
However, processing heterogeneous sensors data poses several challenges.
One of the main challenges is dealing with the velocity and, when accumulated, volume of the data.
A city can have thousands of sensors sampling at kHz rates.
For example, in a network of $10,000$ sensors, sampling with 1 byte resolution at 1 kHz,
one obtains close to 311 TB of data per year that needs to be analyzed to estimate what is normal.
The second challenge involves detecting an event in real-time.
Automated event detection is useful when the event is detected within seconds after it occurs, such as when a road is completely blocked, before people start venting their frustration on social media
or dialing their phones.
Other common challenges are missing values and failures of sensors.
It is very common for sensors to stop working or start reporting wrong values (\emph{e.g.}, negative car flow).
Finally, there is measurement noise.
To overcome these challenges, we propose novel low-rank methods for event detection.
Throughout, we consider uniformly distributed noise, but let us
present the model first in the noise-free case.
There, events correspond to points lying outside a certain subspace.
To estimate the sub-space, we flatten the input data to a matrix and apply state-of-the-art low-rank matrix-factorization techniques.
In particular, we factorize the original matrix into two smaller matrices, whose product approximates
the original matrix.
Subsequently, we develop a point-in-subspace membership test
capable of detecting whether
new samples are within the subspace spanned by the columns of one
of the factors (smaller matrices).
An affirmative answer is interpreted as an indication that
the samples from the sensors present normal behaviour.
In the case of a negative answer, a point-to-subspace distance query
can estimate the extent of abnormality of an event.
To our knowledge, this is the first application of these techniques in event detection.
Our contributions are:
\begin{itemize}
\item a general framework for representing what is an event and what is a non-event, considering \emph{heterogeneous data}, which are possibly \emph{not sampled uniformly}, with \emph{missing values}
and \emph{measurement errors}, based on matrix factorization,
\item a novel randomized event detection technique, implemented via a \emph{point-to-subspace distance query}, with approximation guarantees,
\item an experimental evaluation showing that a year of history from thousands of sensors is possible to process
in minutes to answer point-to-subspace distance queries in milliseconds.
\end{itemize}
\section{Conclusions}
We have presented a novel approach to event detection in high-dimensional data.
The estimation of the normal subspace has been shown to scale to settings with tens of millions of observations across thousands of time series, while the event-or-not query allows for near-real-time response.
Due to the use of matrix-completion techniques in the training, as well as the use of a subset of coordinates in the point-to-subspace distance query,
our approach is robust to missing values in the training data,
and hence to sensor failures, clock failures, non-uniform sampling, and asynchronous operations, more generally.
The particular matrix-completion algorithm we employ is robust to uniformly distributed noise,
and the use of the particular point-to-subspace query is optimal when uniformly distributed noise is present,
which makes the approach robust to measurement errors.
Due to the non-parametric nature of the tester,
tuning of the approach is limited to the amount of measurement error expected
and the rank of the low-rank approximation.
In our experimental evaluation, we have tested the ability of the proposed algorithms to identify samples with abnormal behavior, even when the input samples were very hard to distinguish from normal data for a human observer or for simple approaches.
We envision this approach may have wide-ranging applications,
wherever asynchronous high-dimensional data streams are present.
Outside of traffic management,
monitoring cloud computing facilities
and electric power- and water-distribution networks,
are prime examples.
\newpage
\bibliographystyle{plain}
\section{A Model For Events}
\label{sec:problem_statement}
Our goal in this paper is to build a model of what is a non-event across many time-series,
possibly with \emph{non-uniform sampling} across the time series, \emph{missing values},
and \emph{measurement errors} present in the values.
For example, one could consider applications in urban traffic management, where the number of vehicles
passing over induction loops are measured, but often prove to be noisy, with the reliability
of the induction loops and the related communication infrastructure limited.
Subsequently,
we aim at an on-line event detection mechanism,
which would be able to decide whether multiple fragments of multiple incoming time-series present an event (abnormal behaviour) or not.
In urban traffic management, for example, one aims at detecting a road accident, based on the evolution
of the traffic volumes across a network of induction loops.
Notice that an accident will manifest itself by some readings being low,
due to roads being blocked, and other readings being high, due to re-routing, while no induction loop needs to have its readings more than a standard deviation away from the long-run average,
which renders uni-variate methods difficult to use.
Such monitoring problems are central to many Internet-of-Things applications.
Our model is based on the observation that there is a daily recurrent pattern in time series across many applications.
Consider, for example, the volume of vehicles travelling over a road segment when computing
the daily variation of demand for transportation,
the current injected into a branch of a power system in the case of electric power,
or the flow through a pipe in the case of a water system.
These follow a bi-modal pattern with morning and evening peak.
This pattern can be exploited by storing each day worth of data as a row in a matrix,
possibly with many missing values.
For multiple time-series, we obtain multiple partial matrices, or a partial tensor.
These can be flattened by concatenating the matrices row-wise to obtain one large matrix, as suggested in Figure~\ref{fig:system_model}.
For $D$ days discretised to $T$ periods each, with up to $S$ sensors available,
the flattened matrix $M$ has $n = T S$ columns and $m = D$ rows.
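As a toy illustration of this layout (the shapes below are made up, not those of our data set), the per-sensor day-by-period matrices are concatenated along the column axis, which yields the stated $m = D$ rows and $n = T S$ columns:
\begin{verbatim}
import numpy as np

D, T, S = 5, 8, 3                                       # toy sizes only
per_sensor = [np.random.rand(D, T) for _ in range(S)]   # one (D, T) matrix per sensor
M = np.concatenate(per_sensor, axis=1)                  # shape (D, T*S) = (5, 24)
\end{verbatim}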
Considering this flattened representation, it is natural to assume that each new day looks like a linear combination
of $r$ prototypical days, or rows in the flattened matrix in dimension $m \gg r$.
Formally, we assume that there exists $R\in \mathbb{R}^{r \times n}$, such that our observations $x \in \mathbb{R}^n$ are
\begin{align}
\label{assumption1}
x = c R + \mathcal{U}(- \Delta, \Delta),
\end{align}
possibly with many values missing,
for some coefficients $c \in \mathbb{R}^r$
weighting the $r$ rows $\{ e_1, e_2, \ldots, e_r \}$ of $R$, with uniformly-distributed noise $\mathcal{U}$
between $-\Delta$ and $\Delta$.
We compute matrix $R$, using low-rank approximation of the flattened matrix with an explicit consideration of the uniformly-distributed
error in the measurements $M_{ij}$ for $(i,j) \in M$.
Considering the interval uncertainty set $[M_{ij} -\Delta, M_{ij} + \Delta]$ around each observation, this can be seen
as matrix completion with
element-wise lower bounds $X^{\mathcal{L}}_{ij} := M_{ij} -\Delta$ for $(i,j) \in M$ and
element-wise upper bounds $X^{\mathcal{U}}_{ij} := M_{ij} + \Delta$ for $(i,j) \in M$.
(Henceforth we use calligraphic $\mathcal{L}$ for bounds from below and $\mathcal{U}$ for bounds from above.)
To formalise the factorisation $M \approx LR$,
let $L_{i:}$ and $R_{:j}$ be the $i$-th row and $j$-th column of $L$ and $R$, respectively.
With Frobenius-norm regularisation, the completion problem we solve is:
\begin{equation}\label{eq:Specific}
\min_{L\in \mathbb{R}^{m\times r}, \; R\in \mathbb{R}^{r\times n}} f_{\mathcal{L}}(L,R) + f_{\mathcal{U}}(L,R) + \tfrac{\mu}{2}\|L\|_{F}^2 + \tfrac{\mu}{2}\|R\|_{F}^2
\end{equation}
where
\begin{align}\label{defSpecific}
f_{\mathcal{L}}(L,R) &:= \textstyle{\tfrac{1}{2}\sum_{(ij)\in \mathcal{L}}(X^{\mathcal{L}}_{ij}-L_{i:}R_{:j})_+^2},\\
f_{\mathcal{U}}(L,R) &:= \textstyle{\tfrac{1}{2}\sum_{(ij)\in \mathcal{U}}(L_{i:}R_{:j}-X^{\mathcal{U}}_{ij})_+^2},
\end{align}
where
$\xi_+ = \max\{0,\xi\}$.
Notice that this is a smooth, non-convex problem, whose special case of $\Delta = 0$ is
NP-hard~\cite{MR1320206,harvey2006complexity}.
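For illustration, the objective can be evaluated directly as follows; this is a straightforward, unoptimised sketch with our own naming, in which the boolean masks mark the index sets $\mathcal{L}$ and $\mathcal{U}$.
\begin{verbatim}
import numpy as np

def objective(L, R, X_lo, X_up, mask_lo, mask_up, mu):
    P = L @ R
    f_lo = 0.5 * np.sum(np.maximum(X_lo - P, 0.0)[mask_lo] ** 2)
    f_up = 0.5 * np.sum(np.maximum(P - X_up, 0.0)[mask_up] ** 2)
    reg  = 0.5 * mu * (np.linalg.norm(L, 'fro')**2 + np.linalg.norm(R, 'fro')**2)
    return f_lo + f_up + reg
\end{verbatim}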
%
%
Considering the factorization $L R$, where $L \in \mathbb{R}^{m \times r}$ and $R\in \mathbb{R}^{r \times n}$ obtained above \eqref{eq:NONCREF},
given an incoming $x \in \mathbb{R}^n$, the maximum likelihood estimate $\hat c \in \mathbb{R}^r$ of $c$ in \eqref{assumption1} is precisely the minimizer of the $\ell^\infty$ residual:
\begin{align}
\label{implicitLP}
\min_{\hat c \in \mathbb{R}^r} \max_i |x_i - (\hat c R)_i|
\end{align}
whenever the minimum in \eqref{implicitLP} is at most $\Delta$.
We refer to Section 7.1.1 of \cite{Boyd2004} for a discussion.
In a linear program corresponding to \eqref{implicitLP}, we consider a subset of coordinates of $\mathbb{R}^n$ and prove a bound on the one-sided error when using the subset.
This is the first use of a point-to-subspace query in $\ell^\infty$ in event detection.
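A minimal sketch of this test is given below; it is not a verbatim transcript of Algorithm \ref{alg:SPC}, and the helper name, the toy data and the use of scipy's \texttt{linprog} are ours. The idea is to sample $s$ coordinates and decide feasibility of the resulting linear program.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def within_subspace(x, R, Delta, s, rng):
    r, n = R.shape
    S = rng.choice(n, size=min(s, n), replace=False)      # sampled coordinates
    A_ub = np.vstack([R[:, S].T, -R[:, S].T])             # |x_i - (cR)_i| <= Delta, i in S
    b_ub = np.concatenate([x[S] + Delta, Delta - x[S]])
    res = linprog(np.zeros(r), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * r, method="highs")
    return res.status == 0                                # feasible => "non-event"

rng = np.random.default_rng(1)
R = rng.standard_normal((10, 3000))                        # toy "normal" subspace
x_ok = rng.standard_normal(10) @ R + rng.uniform(-0.1, 0.1, 3000)
x_ev = x_ok + 5.0 * (rng.random(3000) < 0.05)              # 5% perturbed coordinates
print(within_subspace(x_ok, R, 0.1, 400, rng),
      within_subspace(x_ev, R, 0.1, 400, rng))             # True False (w.h.p.)
\end{verbatim}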
\section{The Algorithms}
\label{sec:method}
In this section, we present the algorithms that we use with the above model.
As outlined above, two algorithms are needed for the two problems.
The first one is an \emph{inequality-constrained matrix completion} algorithm, which estimates the low-rank approximation
of the input matrix. In our experiments, this algorithm is run in an off-line fashion.
The second algorithm is the \emph{point-to-subspace proximity query}.
As an input, it uses the output of the inequality-constrained matrix completion algorithm
and it is able to predict whether a newly arriving time-series vector presents normal or abnormal behavior.
The second algorithm is run in an on-line fashion.
We describe the two algorithms in turn.
\subsection{Matrix completion under interval uncertainty}
\begin{algorithm}[t]
\caption{MACO: Matrix Completion via Alternating Parallel Coordinate Descent of \cite{marecek2017matrix}}
\label{alg:SCDM}
\begin{algorithmic}[1]
\item[] \textbf{Input}: $\mathcal{E}, \mathcal{L}, \mathcal{U}, X^{\mathcal{E}}, X^{\mathcal{L}}, X^{\mathcal{U}}$, rank $r$
\item[] \textbf{Output}: $m \times n$ matrix
\STATE \label{alg:stp:initialpoint} choose $L\in \mathbb{R}^{m\times r}$ and $R\in \mathbb{R}^{r\times n}$
\FOR{$k=0,1,2,\dots$}
\STATE choose random subset $\hat{S}_{\rm row} \subset \{1,\dots,m\}$
\FOR{$i\in \hat{S}_{\rm row}$ {\bf in parallel}}
\STATE choose $\hat r \in \{1,\dots,r\}$ uniformly at random
\STATE compute $\delta_{i \hat r}$ using formula \eqref{eq:delta_L}
\STATE update $L_{i \hat r} \leftarrow L_{i \hat r} + \delta_{i \hat r}$
\ENDFOR
\STATE choose random subset $\hat{S}_{\rm column} \subset \{1,\dots,n\}$
\FOR{$j\in \hat{S}_{\rm column}$ {\bf in parallel}}
\STATE choose $\hat r \in \{1,\dots,r\}$ uniformly at random
\STATE compute $\delta_{\hat{r}j}$ using \eqref{eq:delta_R}
\STATE update $R_{\hat{r}j} \leftarrow R_{\hat{r}j} + \delta_{\hat{r} j}$
\ENDFOR
\ENDFOR
\STATE \textbf{return} $LR$
\end{algorithmic}
\end{algorithm}
The matrix completion under interval uncertainty can be seen as a special case of the inequality-constrained matrix completion of \cite{marecek2017matrix}:
\begin{equation}\label{eq:NONCREF}
\min \{f(L,R)\;:\; L\in \mathbb{R}^{m\times r}, \; R\in \mathbb{R}^{r\times n}\},
\end{equation}
where
\begin{align}\label{defOff}
f(L,R) &:= f_{\mathcal{E}}(L,R) + f_{\mathcal{L}}(L,R) + f_{\mathcal{U}}(L,R) \\
& \; \; + \tfrac{\mu}{2}\|L\|_{F}^2
+ \tfrac{\mu}{2}\|R\|_{F}^2 \notag \\
f_{\mathcal{E}}(L,R) &:= \textstyle{\tfrac{1}{2}\sum_{(ij)\in \mathcal{E}}(L_{i:}R_{:j}-X^{\mathcal{E}}_{ij})^2},\\
f_{\mathcal{L}}(L,R) &:= \textstyle{\tfrac{1}{2}\sum_{(ij)\in \mathcal{L}}(X^{\mathcal{L}}_{ij}-L_{i:}R_{:j})_+^2},\\
f_{\mathcal{U}}(L,R) &:= \textstyle{\tfrac{1}{2}\sum_{(ij)\in \mathcal{U}}(L_{i:}R_{:j}-X^{\mathcal{U}}_{ij})_+^2},
\end{align}
where
for $(i,j) \in \mathcal{U}$ we have an element-wise upper bound $X^{\mathcal{U}}_{ij}$,
for $(i,j) \in \mathcal{L}$ we have an element-wise lower bound $X^{\mathcal{L}}_{ij}$,
for $(i,j) \in \mathcal{E}$ we know the exact value $X^{\mathcal{E}}_{ij}$,
and $\xi_+ = \max\{0,\xi\}$.
A popular heuristic for matrix completion considers a product of
two matrices, $L \in \mathbb{R}^{m \times r}$ and $R\in \mathbb{R}^{r \times n}$, so that
$X = LR$ has rank at most $r$, cf. \cite{tanner2013normalized}.
In particular, we use a variant of the alternating parallel coordinate descent method for matrix completion introduced by
\cite{marecek2017matrix} under the name of ``MACO'',
summarized in Algorithm~\ref{alg:SCDM}.
It is based on the observation that while $f$ is not convex jointly
in $(L,R)$, it is convex in $L$ for fixed $R$ and in $R$ for fixed $L$.
In Steps 3--8 of the algorithm, we fix $R$, choose random $\hat{r}$ and a random set $\hat{S}_{\rm row}$ of rows of $L$, and
update, in parallel, for $i \in \hat{S}_{\rm row}$: $L_{i\hat{r}} \leftarrow L_{i\hat{r}} + \delta_{i\hat{r}}$. Following~\cite{marecek2017matrix}, we use
\begin{equation} \label{eq:delta_L}\delta_{i\hat{r}}:= - \langle \nabla_L f(L,R), E_{i\hat{r}}\rangle / W_{i\hat{r}},\end{equation}
where the computation of $\langle \nabla_L f(L,R), E_{i\hat{r}}\rangle$ can be simplified considerably, as explained in \cite{marecek2017matrix}.
In Steps 9--14, we fix $L$, choose random $\hat{r}$ and a random set $\hat{S}_{\rm column}$ of columns of $R$, and update, in parallel
for $j \in \hat{S}_{\rm column}$: $R_{\hat{r}j}\leftarrow R_{\hat{r}j} + \delta_{\hat{r}j}$. Analogously, we use
\begin{equation} \label{eq:delta_R}\delta_{\hat{r}j}:= - \langle \nabla_R f(L,R), E_{\hat{r}j}\rangle / V_{\hat{r}j},\end{equation}
where the computation of $\langle \nabla_R f(L,R), E_{\hat{r}j}\rangle$ can, again, be simplified as suggested in Figure \ref{fig:gradients}.
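For illustration, the following Python sketch implements a serial, single-coordinate variant of this scheme for the specific problem \eqref{eq:Specific}, i.e., without the exact-value term $f_{\mathcal{E}}$ and without the parallel sampling of Algorithm~\ref{alg:SCDM}; the step-size denominators below are a standard curvature bound and need not coincide with the exact constants $W_{i\hat r}$ and $V_{\hat r j}$ of \cite{marecek2017matrix}, and all names are hypothetical.
\begin{verbatim}
import numpy as np

def maco_sketch(X_lower, X_upper, observed, r, mu=0.1, iters=2000, seed=0):
    # Alternating coordinate descent on f_L + f_U + (mu/2)(|L|_F^2 + |R|_F^2).
    rng = np.random.default_rng(seed)
    m, n = X_lower.shape
    L = 0.1 * rng.standard_normal((m, r))
    R = 0.1 * rng.standard_normal((r, n))

    def dP(L, R):
        # Derivative of the two hinge-squared penalties with respect to P = L R.
        P = L @ R
        under = np.where(observed, np.maximum(X_lower - P, 0.0), 0.0)
        over = np.where(observed, np.maximum(P - X_upper, 0.0), 0.0)
        return over - under

    for _ in range(iters):
        G = dP(L, R)
        i, k = rng.integers(m), rng.integers(r)             # one random coordinate of L
        grad = G[i, :] @ R[k, :] + mu * L[i, k]
        curv = mu + np.sum((R[k, :] ** 2)[observed[i, :]])  # curvature bound
        L[i, k] -= grad / curv

        G = dP(L, R)
        j, k = rng.integers(n), rng.integers(r)             # one random coordinate of R
        grad = L[:, k] @ G[:, j] + mu * R[k, j]
        curv = mu + np.sum((L[:, k] ** 2)[observed[:, j]])
        R[k, j] -= grad / curv
    return L, R
\end{verbatim}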
We would also like to comment on the choice of $\Delta$ and $\epsilon$.
A sensible approach seems to be based on cross-validation: out of the historical data (or out of $L$),
one can hold out one row and compute the $\Delta$ needed to explain it. The maximum of these per-row values of $\Delta$ seems to be a good choice.
We refer to~\cite{marecek2017matrix} for a discussion
of the choice of the parameter $\mu>0$.
\subsection{Point-to-subspace queries in $\ell^\infty$}
\label{sec:subspace}
\begin{algorithm}[t]
\caption{Randomised point-in-subspace membership
}
\label{alg:SPC}
\begin{algorithmic}[1]
\item[] \textbf{Input}: $R \in \mathbb{R}^{r \times n}$,
$x \in \mathbb{R}^n$,
$s \in \mathbb{N}$, $\Delta \in \mathbb{R}$
\item[] \textbf{Output}: true/false
\STATE
\label{line:sample}
choose $S \subset \{1,\dots,n\}, |S| = s$, uniformly at random
\STATE initialise a linear program $P$ in variable $v \in \mathbb{R}^r$
\label{line:init}
\FOR{$i\in S$ }
\STATE
\label{constraint1}
add constraint $x_i - (v \, \textrm{proj}_S(R))_i \le \Delta$
\STATE
\label{constraint2}
add constraint $x_i - (v \, \textrm{proj}_S(R))_i \ge - \Delta$
\ENDFOR
\IF{$\exists v \in \mathbb{R}^r$ such that the constraints are satisfied}
\label{line:solveLP}
\STATE \textbf{return} true
\ELSE
\STATE \textbf{return} false
\ENDIF
\end{algorithmic}
\end{algorithm}
As suggested previously, instead of computing the distance of an incoming time-series to each of the already available per-day time-series, classified as event or non-event,
we consider a point-to-subspace query in the infinity norm:
\begin{align}
\label{eq:infty}
\min_{\hat c \in \mathbb{R}^r} \max_i |x_i - (\hat c R)_i|,
\end{align}
and specifically the test whether the distance \eqref{eq:infty} is less than or equal to $\Delta$.
As we described in Section~\ref{sec:problem_statement}, for uniform noise the $\ell^\infty$ norm gives the maximum likelihood estimate. The infinity norm is sometimes seen as difficult to work with, due to the lack of differentiability.
Note, however, that \eqref{eq:infty} can be recast as a feasibility test for a linear program:
\begin{align}
\label{eq:inftyLP}
\min_{\hat c \in \mathbb{R}^r} 1 \; \textrm{s.t. } x_i - (\hat c R)_i & \le \Delta, \\
(\hat c R)_i - x_i & \le \Delta. \label{eq:inftyLP2}
\end{align}
Geometrically, the feasible set is an intersection of half-spaces, whose bounding hyperplanes form a hyperplane arrangement. As we will show in the following section, this geometric intuition is useful in the analysis of the algorithms.
In Algorithm \ref{alg:SPC} we present a test, which considers only a subset $S, |S| \ll n$ of coordinates, picked uniformly at random.
As we show in the following section,
this test has only a modest one-sided error.
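A minimal sketch of this test could use any off-the-shelf LP solver; the following Python snippet relies on SciPy's \texttt{linprog}, which is an implementation choice of ours rather than part of the method, and all names are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def subspace_membership(R, x, s, delta, rng=None):
    # Randomised point-to-subspace test in the infinity norm:
    # sample s coordinates and check feasibility of the corresponding LP.
    rng = np.random.default_rng() if rng is None else rng
    r, n = R.shape
    S = rng.choice(n, size=s, replace=False)
    RS = R[:, S]                                       # r x s restriction of R to S
    # constraints: -delta <= x_i - (v R)_i <= delta for i in S
    A_ub = np.vstack([-RS.T, RS.T])
    b_ub = np.concatenate([delta - x[S], delta + x[S]])
    res = linprog(c=np.zeros(r), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * r, method="highs")
    return res.status == 0                             # 0 means a feasible point was found
\end{verbatim}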
\section{Related work}
\label{sec:related}
Our particular method builds upon a rich history of research in low-rank matrix completion.
There, \cite{fazel2002matrix} suggested to replace the rank with the nuclear norm in the
objective.
The corresponding use of semidefinite programming (SDP) has been very successful in theory \cite{candes2009exact},
while augmented Lagrangian methods \cite{jaggi2010simple,Lee2010,shalev2011large,wang2014rank} and
alternating least squares (ALS) algorithms \cite{srebro2004maximum,Rennie2005}
have been widely used in practice
\cite{srebro2004maximum,Rennie2005,mnih2007probabilistic,4470228,4803763}.
As it turns out \cite{keshavan2010matrix,Jain2013},
they also allow for perfect recovery in some settings.
Matrix completion has a number of applications in statistics, incl. functional data analysis
\cite{Descary2018}.
The inequality-constrained variant of matrix completion, which we employ,
has been introduced by \cite{marecek2017matrix}.
The randomised subspace proximity testers are novel,
as far as we can tell.
There is much related work in change-point and event detection.
Since the work of Lorden \cite{lorden1971procedures},
there has been much work on change-point detection in univariate time-series.
See \cite{basseville1993detection} for a book-length treatment.
There are far fewer papers on the multi-variate problem
\cite{xie2013change,aston2014change,cho2015multiple,zou2015efficient,Wang2018}
and fewer still that can cope with missing data \cite{xie2013change,Soh2015}.
From those, we differ in our assumptions (uniform noise) and the
focus on efficient algorithms for the test.
In terms of the related work in urban traffic management,
\cite{6763098} propose a method for detecting traffic events
that have impact on the road traffic conditions by extending the Bayesian Robust Principal Component Analysis. They create a sparse structure composed of multiple traffic data-streams (\emph{e.g.}, traffic flow and road occupancy) and use it to localise traffic events in space and time. The data streams are subsequently processed so that with little computational cost they are able to detect events in an on-line and real-time fashion.
\cite{RSSA:RSSA12178} analyse road traffic accidents based on their severity using a space-time multivariate Bayesian model. They include both multivariate spatially structured and unstructured effects, as well as a temporal component, to capture the
dependencies between the severity and time effects, within a Bayesian hierarchical formulation.
\section{Introduction}
\label{sec:intro}
\subsection{Background and motivation}
The emergence of computational and experimental engineering has led to a spectrum of new mathematical questions on how
to best merge
{\em data driven} and {\em model based} approaches. The development of corresponding {\em data-assimilation} methodologies has been
originally driven mainly
by meteorological research (see e.g. \cite{Daley,Lewis}) but has meanwhile entered numerous areas in science and engineering bringing, in particular,
the role of reduced order modeling into the focus of attention \cite{Bash}.
The present paper addresses some principal mathematical aspects that arise when trying to numerically capture
a function $u$ which is a state of a physical process with a known law, however with unknown parameters.
We are given measurements of this state and the question is how to best merge these measurements
with the model information to come up with a good approximation to $u$.
A typical setting of this type occurs when {\em all states} of the physical process are described by a specific {\em parametric family of PDEs} which is known to us, in a form
\begin{eqnarray}
\nonumber
{\cal P}(u,\mu)=0,
\end{eqnarray}
where $\mu$ is a vector of parameters ranging in a finite or infinite dimensional set ${\cal P}$. Instead of
knowing the exact value of $\mu$ which would allow us to compute the state $u=u(\mu)$
by solving the equation, we observe one of these states
through some collection of measurements and we want to use these measurements, together with the known parametric PDE, to numerically capture the state, or perhaps even more ambitiously to capture the parameters. Since the solution manifold
\begin{eqnarray}
\nonumber
{\cal M}:=\{u(\mu)\; : \; \mu\in{\cal P}\},
\end{eqnarray}
to a parametric PDE is generally quite complicated, it is usually seen through a sequence of
nested finite dimensional spaces
\begin{eqnarray}
\nonumber
V_0\subset V_1\subset \cdots \subset V_n,\quad \dim(V_j)=j,
\end{eqnarray}
such that each $V_j$ approximates ${\cal M}$ to a known tolerance $\varepsilon_j$. Construction
of such spaces is sometimes referred to as {\em model reduction}.
Various algorithms for generating such spaces, together with error bounds $\varepsilon_j$, have been derived and analyzed.
One of the most prominent of these is the {\em reduced basis method} where the spaces
are generated through particular solution instances $u(\mu^i)$ picked from ${\cal M}$, see \cite{BMPPT,BCDDPW,DPW2,PW}.
Other algorithms with known error bounds are based on polynomial approximations in the parametric variable, see \cite{CDS1,CCDS}.
Thus, the information that the state $u$ we wish to approximate is on the manifold is replaced by the information of how well $u$ can be approximated by the spaces $V_j$. Of course, this is not enough information to pin down $u$ since we do not know where $u$ is on the manifold, or in the new formulation, which particular element of $V_j$ provides a good approximation to $u$. However, additional information about $u$ is given by physical measurements which hopefully are enough to approximately locate $u$. This type of recovery problem was formulated and analyzed in \cite{MPPY}
using an infinite dimensional Hilbert space setting which allows one to properly exploit
the nature of the continuous background model when assimilating observations. This is also the setting adopted in the present paper.
The achievements of the present paper are two-fold. First, we establish that the algorithm proposed in \cite{MPPY} for estimating a state from a given set of observations and the knowledge of its approximability from a space $V_n$ is {\em best possible in the sense of optimal recovery}.
Second, and more importantly, we demonstrate the potential gain in accuracy for state recovery
when combining the approximability by {\em each} of the
subspaces $V_j$ in the given hierarchy. We refer to this
as the {\em multi-space setting} which will be seen to better exploit the information given by reduced bases or polynomial constructions. We give algorithms and performance bounds for these recovery algorithms in the multi-space setting when the observations are fixed and given to us.
These algorithms are online implementable, similar to the ones discussed in \cite{MPPY}.
Let us mention that one emphasis in \cite{MPPY} is on the selection of the measurement functionals in order to
optimize the recovery process, while in the present paper we consider such functionals as given and focus on
optimal recovery as explained above.
\subsection{Conceptual preview}
We study the above problems in the general framework of {\it optimal recovery } in a Hilbert space $\cH$ with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot \|$.
Under this setting, we are wanting to recover a function $u\in \cH$ from its measurements $\ell_i(u)=\langle u,\omega_i\rangle$, where the $\omega_i$ are known elements of $\cH$, $i=1,\dots,m$. If we denote by $W$
the space spanned by the $\omega_i$, $i=1,\dots,m$, then, the measurements determine
{$w=P_W u$}
where throughout this paper
$P_X$ denotes the orthogonal projection onto $X$ for any closed subspace $X\subset \cH$. In going further, we think of measurements as simply providing the knowledge of this projection. In particular, we assume that the $\omega_j$'s are linearly independent i.e., $\dim W=m$. Therefore, our problem is to find an approximation $\hat u(w)$ to $u$ from the information $w\in W$. This is equivalent to constructing a mapping
$A:W\to \cH$ and setting $\hat u(w)=A(w)=A(P_Wu)$.
All elements of the orthogonal complement $W^\perp$ of $W$ have zero measurements. A first observation is that if all the information we have about $u$ is that
{$P_W u= w$},
then we cannot
recover $u$ to any guaranteed accuracy. Indeed, if $u_0$ satisfies the measurements then $u$ could be any of the functions $u_0+\eta$, with $\eta\in W^\perp$, and each of these functions would be assigned the same approximation $\hat u=\hat u(w)$. Therefore, we need additional information about $u$ to have a meaningful problem. A typical assumption is that $u$ is in some known compact set ${\cal S}\subset \cH$. The recovery problem in this case is known as {\it optimal recovery}. A classical setting is that $\cH$ is the space $L_2$ and ${\cal S}$ is a finite ball in a Sobolev or Besov space,
see e.g. \cite{BB, MR, MRW}.
In contrast to the case where ${\cal S}$ is a known Sobolev or Besov ball, our interest is in the setting where ${\cal S}$ is the solution manifold ${\cal M}$ of a parametric PDE. As noted above, the typical way of resolving ${\cal M}$ is through a finite sequence of spaces $\{V_0,\dots,V_n\}$ with $V_k$ of dimension $k$ where the spaces are known to approximate ${\cal M}$ to some known accuracy.
This leads us to the following two settings:
\vskip .1in
\noindent
{\bf The one-space problem:} We assume that all what we know about ${\cal M}$ is that there is a space $ V_n$ of dimension $n$ which is an approximation to ${\cal M}$ with accuracy $\varepsilon_n$. Accordingly, we define
\begin{equation}
\label{defK1}
{\cal K}:={\cal K}^{\mathop{\rm one}}:=\{u\in\cH:
{ \dist (u,V_n)}
\le \varepsilon_n\},
\end{equation}
and consider $u\in{\cal K}$ to be the only information we have about ${\cal M}$. In this case, the information \eref{defK1} is the additional knowledge we have about $u$. We want to combine this knowledge with our measurements
{$P_Wu$}
to construct a good approximation $\hat u$ to $u$. So in this case, the spaces $V_n$ and $W$ are known and fixed.
\vskip .1in
\noindent
{\bf The multi-space problem:} We assume that what we know about ${\cal M}$ is that there is a sequence of spaces $V_0\subset V_1\subset \cdots\subset V_n$ such that each $V_k$ has dimension $k$ and approximates ${\cal M}$ with accuracy $\varepsilon_k$,
where $\varepsilon_0\ge \varepsilon_1\ge\cdots\ge\varepsilon_n>0$. This leads us to define %
\begin{equation}
\label{defKm}
{\cal K}:={\cal K}^{\mathop{\rm mult}}:=\bigcap_{j=0}^n {\cal K}^j,
\end{equation}
where
\begin{eqnarray}
\nonumber
{\cal K}^j:=\{u\in \cH:
{\dist (u,V_j)}
\le \varepsilon_j\}, \quad j=0,\dots,n.
\label{cKj}
\end{eqnarray}
In this case, the information $u\in {\cal K}$ is the additional knowledge we have about $u$. We want to combine this knowledge with our measurements to construct a good approximation $\hat u$ to $u$. As already noted, the multi-space problem is typical when applying reduced bases
or polynomial methods to parametric PDEs.
\subsection{Performance criteria}
This paper is concerned with approximating a function $u\in\cH$ from the information that $u\in {\cal K}$ and $P_Wu=w$
in the two above settings. Note that in both settings, the set ${\cal K}$ is not compact.
The additional information provided by the measurements gives that
$u$ is in the class
\begin{eqnarray}
\nonumber
{\cal K}_w:=\{u\in{\cal K}: P_Wu=w\}.
\end{eqnarray}
This set is the intersection of ${\cal K}$ with the affine space
\begin{eqnarray}
\nonumber
\cH_w:=\{u\in\cH: P_Wu=w\}=w+W^\perp.
\end{eqnarray}
Note that ${\cal K}_w$ may be an empty set for certain $w\in W$.
Recall that an algorithm is a mapping $A: W\to \cH$ which assigns to any $w\in W$
the approximation $\hat u(w)=A(P_Wu)$. In designing an algorithm, we are given the information
of the spaces $(V_k)_{k=0,\dots,n}$ and the error bounds $(\varepsilon_k)_{k=0,\dots,n}$.
There are several ways in which we can measure the performance of an algorithm. Consider first the one-space problem. A first way of measuring the performance of an algorithm is to ask for
an estimate of the form
\begin{equation}
\label{instopt}
\|u-A(P_Wu)\|\le C_A(w)\dist(u,V_n),\quad u\in{\cal K}_w.
\end{equation}
The best algorithm $A$, for a given fixed value of $w$, would give the smallest constant $C_A(w)$ and the algorithm which gives this smallest constant is said to be {\it instance optimal} with constant $C_A(w)$. In this case, the performance bound given by the right side of \eref{instopt}
depends not only on $w$ but on the particular $u$ from ${\cal K}_w$.
The estimate \eref{instopt} also gives a performance bound for the entire class ${\cal K}_w$ in the form
\begin{eqnarray}
\nonumber
\sup_{u\in{\cal K}_w} \|u-A(P_Wu)\|\le C_A(w)\varepsilon_n.
\end{eqnarray}
This leads us to the notion of performance of a recovery algorithm $A$ on any set ${\cal S}\subset \cH$ which is defined by
\begin{eqnarray}
\nonumber
E_A({\cal S}):= \sup_{u\in {\cal S}} \|u-A(P_Wu)\|.
\end{eqnarray}
The {\it class optimal performance} on the set ${\cal S}$ is given by
\begin{equation}
\label{optalg}
E({\cal S}):=\inf_{A}E_A({\cal S}),
\end{equation}
where the infimum is taken over all possible algorithms, i.e., all maps $A:W\to \cH$.
In particular, class optimal performance is defined for both the single space and multi-space settings, and for both
the sets ${\cal K}_w$ for each individual $w$, which gives the measure $E({\cal K}_w)$, or
the entire class ${\cal K}$, which gives the performance $E({\cal K})$. The latter notion is the most meaningful in applications where it is not known
which measurements $w\in W$ will appear or be available.
The present paper studies each of the above problems with the goal of determining the best algorithms.
For this purpose, we
introduce for any closed subspaces $V$ and $W$ of $\cH$ the quantity
\begin{equation}
\label{mu}
\mu(V,W):= \sup_{\eta\in W^\perp} \frac{\|\eta\|}{\|\eta-P_V\eta\|}=\sup_{\eta\in W^\perp} \frac{\|\eta\|}{\|P_{V^\perp} \eta\|}.
\end{equation}
A simple calculation shows that $\mu(V,W)=\beta(V,W)^{-1}$ where
\begin{eqnarray}
\nonumber
\beta(V,W):=\inf_{v\in V}\frac{\|P_Wv\|}{\|v\|} =\inf_{v\in V}\sup_{w\in W}\frac{ \langle v,w\rangle}{\|v\|\|w\|} .
\end{eqnarray}
Note that in the case where $V=\{0\}$ we have $\mu(V,W)=1$.
In \S \ref{sec:one1} of the paper, we analyze the one space problem, that is, ${\cal K}={\cal K}^{\rm one}$.
The inf-sup constant $\beta$ was used in \cite{MPPY}, for the study of this problem,
where the authors proposed an algorithm, in the form of a certain linear mapping $A^*:w\to A^*(w)$, then analyze its performance.
While the approach in \cite{MPPY}
is based on variational arguments, ours is quite different and geometric in nature. Our first goal is to establish that the algorithm proposed in
\cite{MPPY} is both instance optimal and class optimal. We show that for any function $u\in\cH$ %
\begin{equation}
\label{errorbound1}
\|u-A^*(P_Wu)\|\le \mu(V_n,W)\dist(u,V_n) .
\end{equation}
Notice that if $\beta(V_n,W)=0$, the above estimate would give no bound on approximation as is to be
expected since $V_n$ would contain elements of $W^\perp$ and these cannot be distinguished by the measurements. This would always be the case if $n>m$ and so in going further we always work under the assumption that $n\le m$.
Let us note that this is a modest improvement on the estimate in \cite{MPPY}
which has the constant $\mu(V_n,W)+1$ rather than $\mu(V_n,W)$ on the right side of \eref{errorbound1}.
More importantly, we show that the estimate \eref{errorbound1} is best possible in the sense that the constant $\mu(V_n,W)$ cannot be replaced by a smaller
constant. Another important remark, observed in \cite{MPPY}, is that in \eref{errorbound1}, $\dist(u, V_n)$ can be replaced by the smaller quantity $\dist(u,V_n\oplus(W\cap V_n^\perp))$. We establish, with our approach, the estimate
\begin{equation}
\label{errorbound12}
\|u-A^*(P_Wu)\|\le \mu(V_n,W)\dist(u,V_n\oplus(W\cap V_n^\perp)) ,
\end{equation}
which improves the constant given in \cite{MPPY}.
We again show that $\mu(V_n,W)$ is the best constant in estimates of this form.
In view of \eref{errorbound1}, the algorithm $A^*$ provides the class estimate
\begin{equation}
\label{errorbound13}
E_{A^*}({\cal K})\le \mu(V_n,W)\varepsilon_n.
\end{equation}
We again show that this algorithm is {\it class optimal} in the sense that for the single space problem
\begin{eqnarray}
\nonumber
E ({\cal K})= \mu(V_n,W)\varepsilon_n.
\end{eqnarray}
Our analysis is based on proving lower bounds which show that the upper
{estimates \eref{errorbound12}}
and \eref{errorbound13} cannot be improved.
These lower bounds apply to both linear and nonlinear algorithms, that is,
\eref{errorbound12} and \eref{errorbound13} cannot be improved also using
nonlinear mappings.
Another goal of our analysis of the one-space problem is to simplify the description of the optimal solution
through the choice of, what we call, {\em favorable bases} for
the spaces $V_n$ and $W$. These favorable bases are then used in our analysis of the multi-space problem
which is the object of \S \ref{sec:multiple}. One possible way of proceeding, in the multi-space case, is to examine
the right side of \eref{errorbound13} for each of the spaces $(V_k)_{k=0,\dots,n}$, and choose the one which gives the minimum value. This would produce an algorithm $A$ with the error bound
\begin{equation}
\label{olderror}
E_A({\cal K})\le \min_{0\le k\le n} \mu(V_k,W)\varepsilon_k.
\end{equation}
Notice that the $\varepsilon_k$ are decreasing but
the $\mu(V_k,W)$ are increasing as $k$ gets larger. So these two quantities are working against one another and the minimum may be assumed for an intermediate value of $k$.
It turns out that the algorithm giving the bound \eref{olderror} may be far from optimal and our main achievements
in \S \ref{sec:multiple} are to produce both algorithms and a priori performance bounds
which in general are better than that of \eref{olderror}. We show how the multi-space problem is connected to finding
a point in the intersection of a family of ellipsoids in $\cH$ and propose an algorithm based on this intersection property.
Then, we give {\em a priori bounds} on the performance of our numerical algorithm,
which are shown to be, in general, better than \eref{olderror}.
\section{The one-space problem}
\label{sec:one1}
\subsection{Preliminary remarks}\label{sec:remarks}
\label{sec:prel}
We begin with some general remarks which can be applied to our specific problem. If ${\cal S}\subset \cH$ is a bounded set and we wish to simultaneously
approximate
all of the elements in ${\cal S}$,
then the best approximation is described by the center of the {\it Chebyshev ball} of ${\cal S}$, which is defined as the smallest closed ball that contains
${\cal S}$. To describe this ball, we first define the {\em Chebyshev radius}
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal S}) := \inf \{r: {\cal S}\subset B(v,r)\ {\rm for \ some} \ v\in\cH\}.
\end{eqnarray}
The following well known lemma says that the Chebyshev ball exists and is unique.
\begin{lemma}
\label{lemma:radassumed}
If ${\cal S}$ is any bounded set in $\cH$ with $R:=\mathop{\rm rad}({\cal S})$, then there exists a unique $v^*\in \cH$ such that
\begin{equation}
\label{assumed}
{\cal S}\subset B(v^*, R).
\end{equation}
\end{lemma}
\noindent
{\bf Proof:}
For any $v\in\cH$, we define
\begin{eqnarray}
\nonumber
R_{\cal S}(v):= \inf\,\{r: {\cal S}\subset B(v,r)\},
\end{eqnarray}
which is a well-defined function from $\cH$ to $\mathbb{R}$.
It follows from triangle inequality that $R_{\cal S} : \cH \to \mathbb{R}$ is continuous. It is also easily seen that
\begin{eqnarray}
\nonumber
{\cal S}\subset B(v,R_{\cal S}(v)).
\end{eqnarray}
By definition, $\mathop{\rm rad}({\cal S})= \inf_{v\in\cH}
R_{{\cal S}}(v)$. Now, consider any infimizing sequence $(v_j)_{j\in\mathbb{N}}$, i.e.,
\begin{eqnarray}
\nonumber
\lim_{j\to\infty} R_{{\cal S}}(v_j)= \mathop{\rm rad}({\cal S}).
\end{eqnarray}
We claim that $(v_j)_{j\in\mathbb{N}}$ is a Cauchy sequence. To see this, define $r_j:= R_{{\cal S}}(v_j)$.
For any fixed $j$ and $k$ and any $z \in {\cal S}$ we define
$d_j:= v_j-z$ and $d_k:=v_k-z$. Then, $\|d_j\|\le r_j$, and $\|d_k\|\le r_k$. Therefore,
\begin{eqnarray*}
\|v_j - v_{k}\|^2 &=&\|d_j -d_{k}\|^2 = \langle d_j -d_{k},d_j -d_{k}\rangle\\
&=& 2\langle d_j,d_j\rangle + 2 \langle d_{k},d_{k}\rangle -\langle d_j + d_k,d_j+ d_k\rangle\\
&=& 2 \|d_j\|^2 + 2 \|d_k\|^2 - 4\Big\|\frac 12 (d_j + d_k)\Big\|^2\\
&\leq& 2 r_j^2 + 2 r_k^2 -4\Big\|\frac 12 (v_j + v_k)-z\Big\|^2.
\end{eqnarray*}
Since $z\in {\cal S}$ is arbitrary we get
$$
\|v_j - v_{k}\|^2 \leq 2 r_j^2 + 2 r_k^2 -4 \[ R_{{\cal S}}\( \frac 12 (v_j+v_k)\) \]^2 \leq 2 r_j^2 + 2 r_k^2 -4\mathop{\rm rad}({\cal S})^2.
$$
Since $r_j, r_k \to\mathop{\rm rad}({\cal S})$, this shows that $(v_j)_{j\in\mathbb{N}}$ is a Cauchy sequence and has a limit $v^*$, which
by the continuity of $v\mapsto R_{{\cal S}}(v)$ satisfies $R_{{\cal S}}(v^*)=\mathop{\rm rad}({\cal S})$. The uniqueness of $v^*$
also follows from the above inequality by contradiction. By using the continuity of $v\mapsto R_{{\cal S}}(v)$ one easily shows that \eref{assumed} holds.
\hfill $\Box$
\newline
We sometimes say that $v^*$ in the above lemma is the {\it center} of ${\cal S}$.
For any bounded set ${\cal S}$, the diameter of ${\cal S}$ is related to its Chebyshev radius $\mathop{\rm rad}({\cal S})$ by the inequalities
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal S})\le \diam({\cal S})\le 2 \mathop{\rm rad}({\cal S}).
\end{eqnarray}
For general sets ${\cal S}$ these inequalities cannot be improved. However, we have the following remark.
\begin{remark}
\label{rem:diam2r}
Let ${\cal S}$ be symmetric about a point $z$, i.e. whenever $v\in {\cal S}$, then $2z-v\in {\cal S}$. Then,
the Chebyshev radius of ${\cal S}$ equals half its diameter, that is, $\diam({\cal S})=2\mathop{\rm rad}({\cal S})$ and its center is $z$.
\end{remark}
\begin{remark}
\label{rem:opt}
In the particular setting of this paper, for any given $w\in W$ such that ${\cal K}_w$ is non-empty,
the optimal recovery $u^*(w)$ over the class ${\cal K}_w$ is obviously
given by the center of ${\cal K}_w$, and the class optimal performance is given by
\begin{eqnarray}
\nonumber
E ({\cal K}_w)=\mathop{\rm rad}({\cal K}_w).
\end{eqnarray}
\end{remark}
\begin{remark}
\label{rem:center}
For a bounded, closed, convex set ${\cal S}\subset \cH$ (which is always the case in this paper) its center $u$ is in ${\cal S}$. In fact, if this was not true,
by translating ${\cal S}$, we can assume $u=0$. Let $s_0=\mathop{\rm argmin}_{s\in {\cal S}}\|s\|$. By convexity $s_0$ exists, $s_0\neq 0$, and $\<s,s_0\>\geq \<s_0,s_0\>$,
$s\in {\cal S}$. Thus
$$
\sup_{s\in {\cal S}}\|s-s_0\|^2=\sup_{s\in {\cal S}} ( \<s,s\>-2\<s,s_0\>+\<s_0,s_0\>)\leq \sup_{s\in {\cal S}} \|s\|^2-\|s_0\|^2
$$
which contradicts the assumption that $0$ is the center of ${\cal S}$.
\end{remark}
\subsection{Optimal bounds for the one-space problem}
\label{sec:one}
We next consider the case where the set
${\cal K}={\cal K}^{\mathop{\rm one}}$ is given by \iref{defK1},
where $V_n$ is a fixed and known $n$ dimensional space.
In this section, we derive the algorithm proposed in \cite{MPPY}, however from a different
point of view emphasizing more the optimal recovery and geometric aspects of the problem. This
allows us to improve on their estimates some but, more importantly, it is also useful when treating the multi-space problem.
\newline
{\em In the event that $\beta(V_n,W)=0$, the space $V_n$ contains elements from
$W^\perp$ which implies that if $w\in W$ is such that ${\cal K}_w$ is non-empty, then
${\cal K}_w$ is unbounded, or equivalently $\mathop{\rm rad}({\cal K}_w)$ is infinite, which means
that we cannot hope for any guaranteed performance over ${\cal K}_w$. This is the case
in particular when $n>m$. For this reason, in the rest of the paper, we always assume
that $\beta(V_n,W)>0$, which means in particular that $n\le m$.}
\newline
Let $w$ be any element from $W$.
We claim that the map
\begin{eqnarray}
\nonumber
u\mapsto \|u-P_{V_n}u\|= \|P_{V_n^\perp}u\|,
\end{eqnarray}
admits a unique minimizer over the affine space $\cH_w$. To see this, we let $u_0$ be any element from $\cH_w$. It follows that
every $u\in \cH_w$ can be written as
$u=u_0+\eta$ for some $\eta \in W^\perp$.
Minimizing $\|P_{V_n^\perp}u\|$ over $\cH_w$ therefore amounts to minimizing the function
\begin{eqnarray}
\nonumber
\eta \mapsto f (\eta):=\|P_{V_n^\perp}u_0+P_{V_n^\perp}\eta\|^2,
\end{eqnarray}
over $W^\perp$. We may write
\begin{eqnarray}
\nonumber
f(\eta):=g(\eta)+\|P_{V_n^\perp}\eta\|^2,
\end{eqnarray}
where $g$ is an affine function. Since we have assumed that $\beta(V_n,W)>0$, the inequalities
\begin{eqnarray}
\nonumber
\beta(V_n,W) \|\eta\| \leq \|P_{V_n^\perp}\eta\| \leq \|\eta\|,\quad \eta \in W^\perp.
\end{eqnarray}
show that $\eta\mapsto \|P_{V_n^\perp}\eta\|$ is an equivalent norm over $W^\perp$. Therefore
$\eta \mapsto f(\eta)$ is strongly convex over $W^\perp$ and therefore admits a unique minimizer
\begin{eqnarray}
\nonumber
\eta^* :=\mathop{\rm argmin}_{\eta\in W^\perp} f(\eta).
\end{eqnarray}
It follows that $u^*=u_0+\eta^*$ satisfies
\begin{eqnarray}
\nonumber
u^*=u^* (w) :=\mathop{\rm argmin}_{u\in\cH_w} \|u-P_{V_n}u\|
\end{eqnarray}
and that this minimizer is unique.
\begin{remark}
\label{rem:nonempty}
If $w$ is such that ${\cal K}_w$ is non-empty, there exists a $u\in \cH_w$ such that $\|u-P_{V_n}u\|\leq \varepsilon_n$.
Therefore $\|u^*-P_{V_n}u^*\|\leq \varepsilon_n$, that is, $u^*\in{\cal K}_w$. In particular, $u^*$ minimizes $\|u-P_{V_n}u\|$ over all $u\in {\cal K}_w$.
\end{remark}
We next define
\begin{eqnarray}
\nonumber
v^*:=v^*(w):=P_{V_n}u^*.
\end{eqnarray}
From the definition of $u^*$, it follows that the pair $(u^*,v^*)$ is characterized by the minimization property
\begin{equation}
\|u^*-v^*\|=\min_{u\in \cH_w, \,v\in V_n} \|u-v\|.
\label{doublemin}
\end{equation}
As the following remark shows, $u^*-v^*$ has a certain double orthogonality property.
\begin{remark}
\label{rem:orthogonal} The element $u^*-v^*$ is orthogonal to both spaces $V_n$ and $W^\perp$.
The orthogonality to $V_n$ follows from the fact that $v^*=P_{V_n}u^*$. On the other hand, for any
$\eta\in W^\perp$ and $\alpha\in \mathbb{R}$, we have
\begin{eqnarray}
\nonumber
\|u^*-v^*\|^2\leq \|u-P_{V_n}u\|^2,\quad u:=u^*+\alpha \eta,
\end{eqnarray}
and thus
\begin{eqnarray}
\nonumber
\|u^*-v^*\|^2\le \|u^*-v^*+\alpha(\eta -P_{V_n}\eta) \|^2= \|u^*-v^*\|^2+2\alpha \langle u^*-v^*,\eta\rangle +\alpha^2\|\eta -P_{V_n}\eta\|^2.
\end{eqnarray}
This shows that $u^*-v^*$ is orthogonal to $W^\perp$.
\end{remark}
\begin{remark}
\label{rem:uniquepair} Conversely, if $u\in \cH_w$ and $v\in V_n$ are such that $u-v$ is orthogonal to both spaces $V_n$ and $W^\perp$,
then $u=u^*$ and $v=v^*$. Indeed, from this orthogonality
\begin{eqnarray}
\nonumber
\|u^*-v^*\|^2 = \|u-v\|^2+\|u^*-v^*-(u-v) \|^2.
\end{eqnarray}
This gives that $u,v$ is also a minimizing pair and from uniqueness of the minimizing pair $u=u^*$ and $v=v^*$.
\end{remark}
The next theorem describes the smallest ball that contains ${\cal K}_w$, i.e., the Chebyshev ball for this set,
and shows that the center of this ball is $u^*(w)$.
\begin{theorem}
\label{theo:BA} Let $W$ and $V_n$ be such that $\beta(V_n,W)>0$.
\noindent
{\rm (i)}
For any $w\in W$ such that ${\cal K}_w$ is non-empty, the Chebyshev ball for ${\cal K}_w$ is the ball centered at $u^*(w)$ of radius
\begin{equation}
\label{Rstar}
R^*=R^*(w):=\mu(V_n,W) (\varepsilon _n^2-\|u^*(w)-v^*(w)\|^2)^{1/2}.
\end{equation}
\noindent
{\rm (ii)} The optimal algorithm in the sense of \eref{optalg} for recovering ${\cal K}_w$ from the measurement $w$ is given by the mapping $A^*: w\mapsto u^*(w)$ and gives the performance bound
\begin{equation}
\label{perfba}
E_{A^*}({\cal K}_w)= E({\cal K}_w)=\mu(V_n,W)(\varepsilon_n^2-\|u^*(w)-v^*(w)\|^2)^{1/2}.
\end{equation}
{\rm (iii)} The optimal algorithm in the sense of \eref{optalg} for recovering ${\cal K}$ is given by the mapping $A^*: w\mapsto u^*(w)$ and gives the performance bound
\begin{equation}
\label{perfba1}
E_{A^*}({\cal K})=E({\cal K})=\mu(V_n,W)\varepsilon_n.
\end{equation}
\end{theorem}
\noindent
{\bf Proof:} In order for ${\cal K}_w$ to be nonempty, we need that $\|u^*-v^*\|\le \varepsilon_n$. Any $u\in\cH_w$ can be written as $u=u^*+\eta$ where $\eta\in W^\perp$. Therefore,
\begin{eqnarray}
\nonumber
u-P_{V_n}u=
u^*-v^*+\eta-P_{V_n}\eta.
\end{eqnarray}
Because of the orthogonality in Remark \ref{rem:orthogonal}, we have
\begin{equation}
\label{min3}
\|u-
{P_{V_n}u}
\|^2=\|u^*-v^*\|^2+\|\eta-
{P_{V_n}\eta}
\|^2.
\end{equation}
Thus a necessary and sufficient condition for $u$ to be in ${\cal K}_w$ is that
\begin{eqnarray}
\nonumber
\|P_{V_n^\perp}\eta\|^2= \|\eta-
{P_{V_n}\eta}
\|^2\le \varepsilon_n^2-\|u^*-v^*\|^2.
\end{eqnarray}
From the definition of $\mu(V_n,W)$, this means that any $u\in{\cal K}_w$ is contained in the ball $B(u^*(w),R^*(w))$. Now, if $\eta$ is any element in $W^\perp$ with norm $R^*(w)$ which achieves
the maximum in the definition of $\mu(V_n,W)$, then $u^*\pm \eta$ is in ${\cal K}_w$ and since $\|\eta\| =R^*(w)$ we see that the diameter of ${\cal K}_w$ is at least as large as $2R^*(w)$. Since ${\cal K}_w$ is the translation of a symmetric set, we thus obtain (i) from Remark \ref{rem:diam2r}. The claim (ii) about $A^*$ being the optimal algorithm follows from Remark \ref{rem:opt}. Finally, the
performance bound \iref{perfba1} in the claim (iii) holds because the maximum of $R^*(w)$ is achieved when $w=0$.
\hfill $\Box$
\begin{remark}
The optimal mapping $w\mapsto A^*(w)=u^*(w)$ is independent of $\varepsilon_n$
and the knowledge of $\varepsilon_n$ is not needed in order to compute $A^*(w)$.
\end{remark}
\begin{remark}
\label{rem:character}
Since ${\cal K}_w$ is the intersection of the cylinder ${\cal K}$ with the affine space $\cH_w$, it has
the shape of an ellipsoid. The above analysis describes this ellipsoid as follows:
a point $u^*+\eta$ is in ${\cal K}_w$ if and only if $\|P_{V_n^\perp}\eta\|^2\le \varepsilon_n^2-\|u^*-v^*\|^2$.
In the following section, we give a parametric description of this ellipsoid using certain coordinate systems,
see Lemma \ref{lem:ellipsoids}.
\end{remark}
\begin{remark}
\label{rem:MP} The elements $u^*$ and $v^*$ were introduced in \cite{MPPY} and used
to define the algorithm $A^*$ given in the above theorem. The analysis from \cite{MPPY} establishes the error bound
\begin{eqnarray}
\nonumber
\|u-u^*(w)\|\le (\mu(V_n,W)+1)\dist(u,V_n\oplus (V_n^\perp\cap W)).
\end{eqnarray}
A sharper form of this inequality can be derived from our results. Namely, if $u$ is any element in $\cH$ then we can define $\varepsilon_n:=\|u-P_{V_n}u\|$ and $w:=P_Wu$. Then, $u\in {\cal K}_w$, for this choice of $\varepsilon_n$, and so Theorem \ref{theo:BA} applies
and gives a recovery of $u$ with the bound
\begin{equation}
\label{min5}
\|u-u^*(w)\|\le \mu(V_n,W)(\varepsilon _n^2-\|u^*-v^*\|^2)^{1/2}
= \mu(V_n,W)\|u-P_{V_n} u-(u^*-v^*)\|,
\end{equation}
where the second equality follows from \iref{min3}. We have noticed in Remark \ref{rem:orthogonal} that $u^*-v^*\in V_n^\perp\cap W$,
and on the other hand we have that
{$u-(u^*-v^*) \in V_n+W^\perp$},
which shows that
\begin{eqnarray}
\nonumber
u^*-v^*=P_{V_n^\perp\cap W}u.
\end{eqnarray}
Therefore
\begin{eqnarray}
\nonumber
P_{V_n} u+u^*-v^*=P_{V_n\oplus (V_n^\perp\cap W)} u,
\end{eqnarray}
and \iref{min5} gives
\begin{eqnarray}
\nonumber
\|u-u^*(w)\|\le \mu(V_n,W)\dist(u,V_n\oplus (V_n^\perp\cap W)).
\end{eqnarray}
\end{remark}
\begin{remark}
\label{rem1}
Let us observe that given a space $V_n$ with $n<m$ we have $(W\cap V_n^\perp)\neq \{0\}$, thus the space $\bar V_n:=
V_n\oplus (W\cap V_n^\perp)$ is strictly larger than $V_n$. However $\mu(\bar V_n,W)= \mu( V_n,W) $ because for any $\eta\in W^\perp$, the projection of $\eta$ onto $W\cap V^\perp_n$ is zero. In other words we can enlarge $V_n$ preserving the estimate \iref{perfba1} for class optimality performance as long as we add parts of $W$ that are
orthogonal to $V_n$.
\end{remark}
\subsection{The numerical implementation of the optimal algorithm}
\label{ss:numericalone-space}
Let us next discuss the numerical implementation of the optimal algorithm for the {one-space}
problem. Let $\omega_1,\dots,\omega_m$ be any orthonormal basis for $W$. For theoretical reasons only, we complete it to an orthonormal basis for $\cH$. So $\{\omega_i\}_{i>m}$ is a complete orthonormal system for $W^\perp$. We can write down explicit formulas for $u^*$ and $v^*$. Indeed, any $u\in \cH_w$ can be written
\begin{eqnarray}
\nonumber
u= \sum_{i=1}^m w_i\omega_i+\sum_{i=m+1}^\infty x_i\omega_i,
\end{eqnarray}
where $w_i:=\langle w,\omega_i\rangle$, and $(x_i)_{i>m}$ is any $\ell_2$ sequence. So, for any $v\in V_n$ and $u\in \cH_w$, we have
\begin{eqnarray}
\nonumber
\|u-v\|^2= \sum_{i=1}^m(w_i-v_i)^2+\sum_{i=m+1}^\infty (x_i-v_i)^2,
\end{eqnarray}
where $v_i:=\langle v,\omega_i\rangle$.
Thus, for any $v\in V_n$, its best approximation $u(v)$ from $ \cH_w$ is
\begin{equation}
u(v):=\sum_{i=1}^mw_i\omega_i+\sum_{i=m+1}^\infty v_i \omega_i,
\label{uv}
\end{equation}
and its error of approximation is
\begin{eqnarray}
\nonumber
\|v-u(v)\|^2=\sum_{i=1}^m(w_i-v_i)^2.
\end{eqnarray}
In view of \iref{doublemin} we have
\begin{eqnarray}
\nonumber
v^*=\mathop{\rm argmin}_{v\in V_n}\|v-u(v)\|^2 =\mathop{\rm argmin}_{v\in V_n}\sum_{i=1}^m(w_i-v_i)^2=\mathop{\rm argmin}_{v\in V_n}\|w-P_Wv\|^2.
\end{eqnarray}
For any given orthonormal basis $\{\phi_1,\cdots,\phi_n\}$ for $V_n$, we can find the coordinates of
$v^*\in V_n$ in this basis by solving the $n\times n$ linear system associated to the above least squares problem. Once $v^*$ is found,
the optimal recovery $u^*=u^*(w)$ is given, according to \iref{uv}, by
\begin{eqnarray}
\nonumber
u^*=v^* +\sum_{i=1}^m(w_i-v_i^*)\omega_i,
\end{eqnarray}
where $v_i^*=\<v^*,
{\omega_i}\>$. Note that we may also write
\begin{equation}
\label{lsu}
u^*=\sum_{i=1}^mw_i\omega_i+\sum_{i=m+1}^\infty \langle v^*,\omega_i\rangle \omega_i = w+P_{W^\perp}v^*.
\end{equation}
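In a discretised setting where $\cH$ is identified with $\mathbb{R}^N$, the above computation can be sketched in a few lines of Python; the matrices below, assumed to have orthonormal columns, and all names are ours and serve only as an illustration.
\begin{verbatim}
import numpy as np

def one_space_recovery(W_basis, V_basis, w_coef):
    # W_basis: N x m, V_basis: N x n, both with orthonormal columns;
    # w_coef:  the m measurements <u, omega_i>.
    G = W_basis.T @ V_basis                           # m x n cross-Gramian <omega_i, phi_j>
    c, *_ = np.linalg.lstsq(G, w_coef, rcond=None)    # coordinates of v* in the phi-basis
    v_star = V_basis @ c
    w_vec = W_basis @ w_coef                          # the measured component P_W u = w
    u_star = w_vec + v_star - W_basis @ (W_basis.T @ v_star)   # u* = w + P_{W^perp} v*
    return u_star, v_star
\end{verbatim}
Forming the normal equations of the least squares step recovers the $n\times n$ linear system mentioned above.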
\subsection{Liftings and favorable bases for $V_n$ and $W$}
\label{ss:favorable}
It turns out that the above optimal algorithm has an even simpler description if we choose suitable bases for $V_n$ and $W$,
which we call {\em favorable bases}. These bases will also be important in our analysis of the multi-space problem. To describe this new geometric view, we introduce the description of algorithms through liftings and see how the best algorithm of the previous section arises in this context.
As noted earlier, any algorithm is a mapping $A:W\to\cH$ which takes $w=P_Wu$ into $\hat u(w)=A(w)=A(P_Wu)$. This image serves as the approximant of all of the $u\in{\cal K}_w$. We can write any $u\in {\cal K}_w$ as $u=w+P_{W^\perp} u$.
So the problem is to find an appropriate mapping $F:W\to {W^\perp}$ and take as the approximation
\begin{eqnarray}
\nonumber
\hat u (w):= A(w):=w+F(w).
\end{eqnarray}
At this stage $F$ can be any linear or nonlinear mapping from $W$ into $W^\perp$. We call such mappings $F$ {\it liftings}.
According to \eref{lsu}, the optimal lifting $F^*$ is defined by
\begin{eqnarray}
\nonumber
F^*(w)= P_{W^\perp}v^*(w)\in P_{W^\perp}V_n,
\end{eqnarray}
which is actually a linear mapping since $v^*$ depends linearly on $w$. The algorithm $A^*(w)=w+F^*(w)$ was shown in the previous section to be optimal for each class ${\cal K}_w$ as well as for ${\cal K}$. Note that this optimality holds even if we open the competition to nonlinear maps $F$, respectively $A$.
We next show that $F^*$ has a simple description as a linear mapping by introducing favorable bases.
We shall make use of the following elementary facts from linear algebra: if $X$ and $Y$ are closed subspaces of a Hilbert space $\cH$,
then:
\begin{itemize}
\item
We have the equality
\begin{eqnarray}
\nonumber
\dim({P_X Y})=\dim({P_YX}).
\end{eqnarray}
This can be seen by introducing the cross-Gramian matrix $G=(\<x_i,y_j\>)$, where $(x_i)$ and $(y_j)$ are orthonormal
bases for $X$ and $Y$. Then $G$ is the matrix representation of
the projection operator $P_X$ from $Y$ onto $X$
with respect to these bases and $G^t$ is the corresponding representation of
the projection operator $P_Y$ from $X$ onto $Y$.
Hence,
\begin{eqnarray}
\nonumber\dim( P_X Y)={\rm rank}(G)={\rm rank}(G^t)=\dim(P_Y X).
\end{eqnarray}
\item
The space $Y$ can be decomposed into a direct orthogonal sum
\begin{equation}
Y= P_Y X \oplus (Y\cap X^\perp).
\label{sum}
\end{equation}
For this, we need to show that $Y\cap X^\perp=Z$ where $Z\subset Y$ is the orthogonal complement of {$P_YX$}
in $Y$. If $y\in Z$, then $\<y,P_Yx\>=0$ for all $x\in X$.
Since $\<y,x-P_Yx\>=0$, if follows that $\<y,x\>=0$, for all $x\in X$, and thus $y\in Y\cap X^\perp$. Conversely
if $y\in Y\cap X^\perp$, then for any $x\in X$
{$\<y,P_Yx\>=-\<y,x-P_Yx\>=0$,}
which shows that $y\in Z$.
\end{itemize}
Now to construct the favorable bases we want, we begin with any orthonormal
{basis}
$\{\phi_1,\dots,\phi_n\}$ of $V_n$ and any orthonormal basis $\{\omega_1,\dots,\omega_m\}$ of $W$.
We consider the $m\times n$ cross-Gramian matrix
\begin{eqnarray}
\nonumber
G:=(\<\omega_i,\phi_j\>),
\end{eqnarray}
which may be viewed as the matrix representation of the projection operator $P_W$ from $V_n$ onto $W$
using these bases since $P_W(\phi_j)=\sum_{i=1}^m\<\omega_i,\phi_j\> \omega_i$. Note that the inf-sup condition $\beta(V_n,W)>0$ means that
\begin{eqnarray}
\nonumber
\dim({P_W V_n})=n,
\end{eqnarray}
or equivalently, the rank of $G$ is equal to $n$. We perform a singular value decomposition of $G$, which gives
\begin{eqnarray}
\nonumber
G=USV^t
\end{eqnarray}
where $U=(u_{i,j})$ and $V=(v_{i,j})$ are unitary $m\times m$ and $n\times n$ matrices, respectively,
and where $S$ is an $m\times n$ matrix with entries $s_i>0$ on the diagonal $i=j$, $i=1,\dots,n$, and zero entries elsewhere.
This allows us to define new orthonormal bases $\{\phi_1^*,\dots,\phi_n^*\}$ for $V_n$ and
$\{\omega_1^*,\dots,\omega_m^*\}$ for $W$ by
\begin{eqnarray}
\nonumber
\phi_j^*=\sum_{i=1}^n v_{i,j}\phi_i\quad {\rm and} \quad \omega_j^*=\sum_{i=1}^m u_{i,j}\omega_i.
\end{eqnarray}
These new bases are such that
\begin{eqnarray}
\nonumber
P_W(\phi_j^*)=s_j \omega_j^*, \quad j=1,\dots, n,
\end{eqnarray}
and have diagonal
cross-Gramian, namely
\begin{eqnarray}
\nonumber
\<\omega_i^*,\phi_j^*\>= s_j\delta_{i,j}.
\label{cross}
\end{eqnarray}
Therefore
$\{\omega_1^*,\dots,\omega_n^*\}$ and $\{\omega_{n+1}^*,\dots,\omega_m^*\}$
are orthonormal bases for the
$n$-dimensional space
{$P_W V_n$}
and respectively its orthogonal complement in $W$
which is $V_n^\perp\cap W$ according to \iref{sum}.
By convention, we organize the singular values in decreasing order
\begin{eqnarray}
\nonumber
0<s_n\leq s_{n-1} \leq \cdots \leq s_1.
\end{eqnarray}
Since $P_W$ is an orthogonal projector, all of them are at most $1$
and in the event where
\begin{eqnarray}
\nonumber
s_1=s_2=\cdots=s_{p}=1,
\end{eqnarray}
for some $0<p\leq n$,
then we must have
\begin{eqnarray}
\nonumber
\omega_j^*=\phi_j^*, \quad j=1,\dots,p.
\end{eqnarray}
This corresponds to the case where $V_n\cap W$ is non-trivial
and $\{\omega_1^*,\dots,\omega_p^*\}$ forms an orthonormal basis for $V_n\cap W$.
We define $p=0$ in the case where $V_n\cap W=\{0\}$.
We may now give a simple description of
the optimal algorithm $A^*$ and lifting $F^*$, in terms of their action
on the basis elements $\omega_j^*$. For $j=n+1,\dots,m$,
we know that $\omega_j^*\in V_n^\perp\cap W$. From Remark \ref{rem:uniquepair}, it follows that the
optimal pair $(u^*,v^*)$ which solves \iref{doublemin} for $w=\omega_j^*$ is
\begin{eqnarray}
\nonumber
u^*=\omega_j^*\quad {\rm and} \quad v^*=0,
\end{eqnarray}
and therefore
\begin{eqnarray}
\nonumber
A^*(\omega_j^*)=\omega_j^*\quad {\rm and}\quad F^*(\omega_j^*)=0
,\quad j=n+1,\dots,m.
\end{eqnarray}
For $j=1,\dots,n$, we know that $\omega_j^*=P_W(s_j^{-1}\phi_j^*)$. It follows that the
optimal pair $(u^*,v^*)$ which solves \iref{doublemin} for $w=\omega_j^*$ is
\begin{eqnarray}
\nonumber
u^*=v^*=s_j^{-1}\phi_j^*.
\end{eqnarray}
Indeed, this follows from Remark \ref{rem:uniquepair} since this pair has $u^*-v^*=0$ and hence has the double orthogonality property.
So, in this case,
\begin{eqnarray}
\nonumber
A^*(\omega_j^*)=s_j^{-1}\phi_j^*\quad {\rm and}\quad F^*(\omega_j^*)=s_j^{-1}\phi_j^*-\omega_j^*.
\end{eqnarray}
Note in particular that $F^*(\omega_j^*)=0$ for $j=1,\dots,p$.
\begin{remark}
\label{infsuprem}
The favorable bases are useful when computing the inf-sup constant $\beta(V_n,W)$. Namely,
for an element $v=\sum_{j=1}^n v_j\phi_j^*\in V_n$
we find that $P_Wv=\sum_{j=1}^n s_jv_j\omega_j^*$
and so
\begin{eqnarray}
\nonumber
\beta(V_n,W)=\min_{v\in V_n} \frac{\|P_Wv\|}{\| v\|}=\min_{v\in V_n} \(\frac{\sum_{j=1}^n s_j^2 v_j^2}{\sum_{j=1}^n v_j^2}\)^{1/2}=\min_{j=1,\dots,n} s_j=s_n.
\end{eqnarray}
Correspondingly,
\begin{eqnarray}
\nonumber
\mu(V_n,W)=s_n^{-1}.
\label{munew}
\end{eqnarray}
Recall that for the trivial space $V_0=\{0\}$, we have $\mu(V_0,W)=1$.
\end{remark}
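In the same discretised setting as in \S \ref{ss:numericalone-space}, the favorable bases and the constants $\beta(V_n,W)=s_n$ and $\mu(V_n,W)=s_n^{-1}$ can be obtained from a single SVD of the cross-Gramian; the following Python sketch, with hypothetical names, is given only as an illustration.
\begin{verbatim}
import numpy as np

def favorable_bases(W_basis, V_basis):
    # W_basis: N x m, V_basis: N x n, orthonormal columns, n <= m.
    G = W_basis.T @ V_basis                   # m x n cross-Gramian
    U, s, Vt = np.linalg.svd(G, full_matrices=True)
    Phi_star = V_basis @ Vt.T                 # favorable basis of V_n
    Omega_star = W_basis @ U                  # favorable basis of W
    beta, mu = s.min(), 1.0 / s.min()         # beta(V_n,W) = s_n and mu(V_n,W) = 1/s_n
    # sanity check: P_W(phi_j*) = s_j omega_j*, j = 1, ..., n
    PW_Phi = W_basis @ (W_basis.T @ Phi_star)
    assert np.allclose(PW_Phi, Omega_star[:, :len(s)] * s)
    return Phi_star, Omega_star, s, beta, mu
\end{verbatim}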
For further purposes, we complete the favorable bases into orthonormal bases
of $\cH$ by constructing particular orthonormal bases for $V_n^\perp$ and $W^\perp$. According to \iref{sum}
we may write these spaces as direct orthogonal sums
\begin{eqnarray}
\nonumber
V_n^\perp=P_{V_n^\perp} (W) \oplus (V_n^\perp \cap W^\perp),
\end{eqnarray}
and
\begin{eqnarray}
\nonumber
W^\perp=P_{W^\perp}(V_n) \oplus (V_n^\perp \cap W^\perp).
\end{eqnarray}
The second space $V_n^\perp \cap W^\perp$ in the above decompositions may be of infinite dimension and we consider
an arbitrary orthonormal basis $(\psi_i^*)_{i\geq 1}$ for this space. For the first spaces in the above decompositions,
we can build orthonormal bases from the already constructed favorable bases.
For the space $P_{V_n^\perp}(W)$ we first consider the functions
\begin{eqnarray}
\nonumber
P_{V_n^\perp} \omega_i^*, \quad i=1,\dots, m
\end{eqnarray}
These functions are $0$ for $i=1,\dots,p$ since $\omega_i^*\in V_n$ for these values of $i$.
They are equal to $\omega_i^*$ for $i=n+1,\dots,m$ and to $\omega_i^*-s_i\phi_i^*$
for $i=p+1,\dots,n$, and these $m-p$ functions are non-zero and pairwise orthogonal. Therefore an orthonormal basis of $P_{V_n^\perp}(W)$
is given by the normalized functions
\begin{eqnarray}
\nonumber
(1-s_i^2)^{-1/2}(\omega_i^*-s_i\phi_i^*) , \quad i=p+1,\dots,n,\quad {\rm and} \quad \omega_i^*, \quad i=n+1,\dots,m.
\end{eqnarray}
By a similar construction, we find that an orthonormal basis of $P_{W^\perp}(V_n)$ is given by the normalized functions
\begin{eqnarray}
\nonumber
(1-s_i^2)^{-1/2}(\phi_i^*-s_i\omega_i^*), \quad i=p+1,\dots,n.
\end{eqnarray}
Therefore bases for $V_n^\perp$ and $W^\perp$ are defined as union of these bases with the basis $(\psi_i^*)_{i\geq 1}$
for $V_n^\perp \cap W^\perp$.
Finally, we close out this section by giving a parametric description of the set ${\cal K}_w={\cal K}_w(V_n)$ for the single space problem which shows in particular that this set is an ellipsoid.
\begin{lemma}
\label{lem:ellipsoids}
Given a single space $V_n\subset \cH$, the body
$${\cal K}_w:={\cal K}_w(V_n):={\cal K}_w^{\mathop{\rm one}}(V_n):=\{u\in{\cal K}^{\mathop{\rm one}}(V_n):\ P_Wu=w\}$$
is a non-degenerate ellipsoid contained in the affine space $\cH_w$.
\end{lemma}
\noindent
{\bf Proof:} Using the favorable bases for $W$ and $W^\perp$, we can write any $u\in \cH_w$ as
\begin{eqnarray}
\nonumber
u=\sum_{j=1}^m w_j\omega_j^* +\sum_{j=p+1}^n x_j (1-s_j^2)^{-1/2}(\phi_j^*-s_j\omega_j^*)+\sum_{j\geq 1} y_j \psi_j^*,
\end{eqnarray}
where the $w_j=\<w,\omega_j^*\>$ for $j=1,\dots,m$, are given, and the $x_j$ and $y_j$ are the coordinates of $u-w$ in the
favorable basis of $W^\perp$. We may now write
$$
\begin{disarray}{ll}
P_{V_n^\perp} u &=\sum_{j=1}^m w_jP_{V_n^\perp} \omega_j^*+\sum_{j=p+1}^n x_j (1-s_j^2)^{-1/2}P_{V_n^\perp} (\phi_j^*-s_j\omega_j^*)
+\sum_{j\geq 1} y_j \psi_j^*\\
& = \sum_{j=p+1}^m w_j(\omega_j^*-s_j \phi_j^*)-\sum_{j=p+1}^n x_j (1-s_j^2)^{-1/2}s_j(\omega_j^*-s_j \phi_j^*)
+\sum_{j\geq 1} y_j \psi_j^*\\
& = \sum_{j=n+1}^m w_j(\omega_j^*-s_j \phi_j^*)+\sum_{j=p+1}^n(w_j- x_j s_j(1-s_j^2)^{-1/2})(\omega_j^*-s_j \phi_j^*)
+\sum_{j\geq 1} y_j \psi_j^*.
\end{disarray}
$$
All terms in the last sum are pairwise orthogonal and therefore
\begin{eqnarray}
\nonumber
\|P_{V_n^\perp} u\|^2 =\sum_{j=n+1}^m (1-s_j^2)w_j^2+\sum_{j=p+1}^n (1-s_j^2)(w_j- x_j s_j(1-s_j^2)^{-1/2})^2+\sum_{j\geq 1} y_j^2.
\end{eqnarray}
Now $u\in {\cal K}_w$ if and only if $\|P_{V_n^\perp} u\|^2 \leq \varepsilon_n^2$, or equivalently
\begin{equation}
\sum_{j=p+1}^n s_j^2(x_j-a_j)^2+\sum_{j\geq 1} y_j^2 \leq C,
\label{ellips}
\end{equation}
with $C:=\varepsilon_n^2-\sum_{j=n+1}^m (1-s_j^2)w_j^2$ and $a_j:=(1-s_j^2)^{1/2}s_j^{-1} w_j$
which is the equation of a non-degenerate ellipsoid in $\cH_w$.
\hfill $\Box$
\begin{remark}
The above equation \iref{ellips} directly shows that the radius of ${\cal K}_w$ is equal to $s_n^{-1}C^{1/2}$
which is an equivalent expression of \iref{Rstar}.
\end{remark}
\section{The multi-space problem}
\label{sec:multiple}
In this section, we consider the multi-space problem as described in the introduction. We are interested in the optimal recovery of the elements in the set ${\cal K}:={\cal K}^{\mathop{\rm mult}}$ as described
by \eref{defKm}. For any given $w\in W$, we consider the set %
\begin{eqnarray}
\nonumber
{\cal K}_w:={\cal K}_w^{\mathop{\rm mult}}:={\cal K}^{\mathop{\rm mult}}\cap
{\cH_w}
=\bigcap_{j=0}^n {\cal K}^j_w,
\end{eqnarray}
where
\begin{eqnarray}
\nonumber
{\cal K}^j_w:={\cal K}^j\cap \cH_w:=\{ u\in \cH_w\; : \; \dist(u,V_j)\le \varepsilon_j\}.
\end{eqnarray}
In other words, ${\cal K}^j_w$ is the set in the
{one-space}
problem
considered in the previous section.
We have seen that ${\cal K}^j_w$ is an ellipsoid with known center $u^*_j=u_j^*(w)$
and known Chebyshev radius given by \iref{Rstar} with $n$ replaced by $j$,
{and $u^*$ and $v^*$ replaced by $u_j^*$ and $v_j^*$}
in that formula.
Thus, ${\cal K}_w $ is now the intersection of $n+1$ ellipsoids.
The optimal algorithm $A^*$, for the recovery of ${\cal K}_w$, is the one that would find the center of the Chebyshev ball of
this set and its performance would then be given by its Chebyshev radius.
In contrast to the one-space problem, this center and radius do not have simple
computable expressions. The first results of this section provide an a priori estimate of the Chebyshev radius in the multi-space setting by exploiting favorable bases. This a priori analysis illustrates when a gain in performance is guaranteed to occur, although the a priori estimates we provide may be pessimistic.
We then give examples which show that the Chebyshev radius in the multi-space case can be far smaller than the
minimum of the Chebyshev radii of the ${\cal K}^j_w$ for $j=0,\dots,n$. These examples are intended to illustrate that exploiting the multi-space case can be much more advantageous than simply executing the one-space algorithms and taking the one with best performance, see \eref{perfba}.
The latter part of this section proposes two simple algorithmic strategies, each of them converging to a point in ${\cal K}_w$.
These algorithms thus produce a near optimal solution, in the sense that if $A$ is the map
corresponding to either one of them, we have
\begin{equation}
E_A({\cal K}_w)\leq 2E_{A^*}({\cal K}_w)=2E({\cal K}_w), \quad w\in W,
\label{nearopt1}
\end{equation}
and in particular
\begin{equation}
E_A({\cal K})\leq 2E({\cal K}).
\label{nearopt2}
\end{equation}
Both of these algorithms are iterative and based on alternating projections. An a posteriori estimate for the distance between a given
iterate and the intersection of the ellipsoids is given and used both as a stopping criterion and to analyze the convergence rates of the
algorithms.
\subsection{A priori bounds for the radius of ${\cal K}_w$}
\label{ss;boundrad}
In this section, we derive a priori bounds for $\mathop{\rm rad}({\cal K}_w^{\mathop{\rm mult}})$. Although these bounds may overestimate $\mathop{\rm rad}({\cal K}_w^{\mathop{\rm mult}})$, they allow us to show examples where the multi-space algorithm
is significantly better than simply choosing one space and using the one-space algorithm. Recall that for the one-space problem,
we observed that $\mathop{\rm rad}({\cal K}_w^{\mathop{\rm one}})$ is largest when $w=0$. The following results
{show} that for the multi-space problem
$\mathop{\rm rad}({\cal K}_w^{\mathop{\rm mult}})$ is also controlled by $\mathop{\rm rad}({\cal K}_0^{\mathop{\rm mult}})$, up to a multiplicative constant. Note that ${\cal K}_w^{\mathop{\rm mult}}$ is generally not a symmetric set,
except for $w=0$. In going further in this section ${\cal K}$ and ${\cal K}_w$ will refer to the multi-space sets.
\begin{lemma}
\label{lem:rad}
For the multi-space problem, one has
\begin{equation}
\label{zerobiggest}
\mathop{\rm rad}({\cal K}_w)\le 2\mathop{\rm rad}({\cal K}_0), \quad w\in W.
\end{equation}
Therefore,
\begin{equation}
\label{zerobiggest1}
E({\cal K})\le 2\mathop{\rm rad}({\cal K}_0).
\end{equation}
\end{lemma}
\noindent
{\bf Proof:}
Fix $w\in W$ and let $\tilde u:=\tilde u(w)$ be the center of the Chebyshev ball for ${\cal K}_w$ which by Remark \ref{rem:center}, belongs to ${\cal K}_w$. For
any $u\in {\cal K}_w$ we have $\eta:=\frac{1}{2}(u-\tilde u)$ is in $W^\perp$ and
also
$$
\dist (\eta,V_k)\le \frac{1}{2}(\dist(u,V_k)+\dist(\tilde u,V_k))\le \varepsilon_k,\quad k=0,1,\dots,n.
$$
Hence, $\eta\in {\cal K}_0$ which gives
\begin{eqnarray}
\nonumber
\|u-\tilde u\|=2\|\eta\|\le 2\mathop{\rm rad}({\cal K}_0),
\end{eqnarray}
where we have used the fact that, by Remark \ref{rem:diam2r}, the best Chebyshev ball for ${\cal K}_0$ is centered at $0$. This proves
\eref{zerobiggest}. The estimate \eref{zerobiggest1} follows from the definition of $E({\cal K})$.
\hfill $\Box$
\newline
In view of the above Lemma \ref{lem:rad}, we concentrate on deriving a priori bounds for the radius of the set ${\cal K}_0$.
We know that ${\cal K}_0$ is the intersection of the ellipsoids ${\cal K}_0^{j}$ for ${j}=0,1,\dots,n$, each of which is centered at zero. We also know that the Chebyshev ball for ${\cal K}_0^{j}$ is $B(0,\mathop{\rm rad}({\cal K}_0^{j}))$ and we know from \eref{perfba} that
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal K}_0^{j})= \mu(V_{j},W) \varepsilon _{j}, \quad { j}=0,1,\dots,n,
\end{eqnarray}
which is a computable quantity.
This gives the obvious bound
\begin{equation}
\label{oldestimate}
\mathop{\rm rad}({\cal K}_0)\le \min_{0\le k\le n} \mu(V_k,W)\varepsilon_k.
\end{equation}
In the following, we show that we can improve on this bound considerably. Since ${\cal K}_0$ is symmetric around the origin, we have
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal K}_0)=\sup_{\eta\in {\cal K}_0}\|\eta\|.
\end{eqnarray}
So we are interested in bounding $\|\eta\|$ for each $\eta\in {\cal K}_0$.
Since the spaces $V_j$ are nested,
we can consider an orthonormal basis $\{\phi_1,\dots,\phi_n\}$ for $V_n$ such that
$\{\phi_1,\dots, \phi_j\}$ is an orthonormal basis of $V_j$ for each $j=1,\dots,n$.
We will use the favorable bases constructed in the previous section
in the case of the particular space $V_n$. Note that if $\{\phi_1^*,\dots,\phi_n^*\}$ is the favorable basis
for $V_n$, we do not generally have that $\{\phi_1^*,\dots,\phi_j^*\}$ is a basis of $V_j$.
Let $\eta$ be any element from ${\cal K}_0$.
Since ${\rm dist}(\eta,V_n)\leq \varepsilon_{n}$, we may express
$\eta$ as
\begin{eqnarray}
\nonumber
\eta=\sum_{j=1}^n \eta_j \phi_j^* +e=\sum_{j=1}^n \alpha_j \phi_j +e ,\quad e\in V_n^\perp \ {\rm and} \ \|e\|\le \varepsilon _n.
\end{eqnarray}
So,
\begin{eqnarray}
\nonumber
\|\eta\|^2=\sum_{j=1}^n \eta_j^2 +\|e\|^2=\sum_{j=1}^n \alpha_j^2 +\|e\|^2.
\end{eqnarray}
The $\alpha_j$ and $\eta_j$ are related by the equations
\begin{eqnarray}
\nonumber
\sum_{j=1}^n \lambda_{i,j} \alpha_j=\eta_i, \quad i=1,\dots,n,
\end{eqnarray}
where
\begin{eqnarray}
\nonumber
\lambda_{i,j}:= \langle \phi_j,\phi_i^*\rangle ,\quad 1\le i,j\le n.
\end{eqnarray}
The fact that ${\rm dist}(\eta,V_k)\leq \varepsilon_k$ for $k=0,\dots,n$ is expressed by the inequalities
\begin{eqnarray}
\nonumber
\sum_{j=k+1}^n \alpha_j^2+ \|e\|^2\leq \varepsilon_k^2, \quad k=0,\dots,n.
\end{eqnarray}
Since $\eta\in W^\perp$, we have that
\begin{eqnarray}
\nonumber
0=P_W\eta=\sum_{j=1}^n s_j\eta_j \omega_j^*+P_W e \,.
\end{eqnarray}
It follows that
\begin{eqnarray}
\nonumber
\sum_{j=1}^n s_j^2\eta_j^2 = \|P_We\|^2 \leq \|e\|^2 \leq \varepsilon_n^2.
\end{eqnarray}
We now return to the representation of
{${\cal K}_0$}
in the $\phi_j$ coordinate system. We know that all $\alpha_j$ satisfy $|\alpha_j|\le \varepsilon _{j-1} $. This means that the coordinates $\{\alpha_1,\dots,\alpha_n\}$ of any point
in
{${\cal K}_0$}
are in the $n$-dimensional rectangle
\begin{eqnarray}
\nonumber
R=[-\varepsilon _0,\varepsilon _0]\times \cdots \times [-\varepsilon _{n-1},\varepsilon _{n-1}].
\end{eqnarray}
It follows that each
$\eta_i$
satisfies the crude estimate
\begin{equation}
\label{betaestimate}
|\eta_i|\le \sum_{j=1}^n |\lambda_{i,j}||\alpha_j|\le \sum_{j=1}^n |\lambda_{i,j}|\varepsilon _{j-1}=:\theta_i \quad i=1,\dots,n .
\end{equation}
The numbers
$\theta_i$
are computable. The bound \eref{betaestimate} allows us to estimate
\begin{eqnarray}
\nonumber
\mathop{\rm rad} ({\cal K}_0)^2= \sup_{\eta\in{\cal K}_0}\|\eta\| ^2\le \varepsilon _n^2+\sup\Big \{ \sum_{j=1}^n\eta_j^2: \ |\eta_j|\le \theta_j\quad {\rm and} \quad \sum_{j=1}^n s_j^2\eta_j^2 \le \varepsilon _n^2\Big\}
\end{eqnarray}
Since the $s_{j}$ are non-increasing, the supremum on the right side takes the form
\begin{eqnarray}
\nonumber
\delta\theta_{k}^2+ \sum_{j=k+1}^n\theta_{j}^2,\quad 0<\delta\le 1,
\end{eqnarray}
where $k$ is the largest integer such that
\begin{equation}
\label{right1}
\sum_{j=k}^ns_j^2\theta_{j}^2\ge \varepsilon _n^2,
\end{equation}
and $\delta$ is chosen so that
\begin{equation}
\label{right2}
\delta s_k^2\theta_k^2 + \sum_{j=k+1}^n s_j^2\theta_{j}^2= \varepsilon _n^2.
\end{equation}
This gives us the following bound on the Chebyshev radius of ${\cal K}_0$.
\begin{equation}
\label{bounddiameter}
\mathop{\rm rad}({\cal K}_0)^2\le \varepsilon _n^2+ \delta \theta^2_{k}+ \sum_{j=k+1}^n \theta_{j}^2 :=E_n^2.
\end{equation}
Using this estimate together with Lemma \ref{lem:rad}, we have proven the following theorem.
\begin{theorem}
\label{thm:ms}
For the multi-space problem, we have the following estimates for Chebyshev radii. For ${\cal K}_0$, we have
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal K}_0)\le E_n,
\end{eqnarray}
where $E_n:=\Big(\varepsilon _n^2+ \delta \theta^2_{k}+ \sum_{j=k+1}^n \theta_{j}^2\Big)^{1/2}$. For any $w\in W$, we have
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal K}_w)\le 2 E_n.
\end{eqnarray}
For ${\cal K}$, we have the bound
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal K})\le 2 E_n.
\end{eqnarray}
\end{theorem}
We next compare the bound in \eref{bounddiameter} with the one space bound %
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal K}_{0})\le\mu(V_n,W)\varepsilon _n= s_n^{-1}\varepsilon _n ,
\end{eqnarray}
which is obtained by considering only the approximation property of $V_n$ and not exploiting the other spaces $V_j$, $j<n$, see \eref{oldestimate}. For this, we return to
the definition of $k$ from \eref{right1}. We can write each term that appears in \eref{right2} as $\gamma_j\varepsilon _n^2$ where
{$\sum_{j=k}^n\gamma_j=1$}. In other words,
\begin{eqnarray}
\nonumber
\theta_{j}^2=\gamma_js_j^{-2}\varepsilon _n^2, \quad k<j\le n,\quad \theta_{k}^2=\delta^{-1}\gamma_{k}s_k^{-2}\varepsilon _n^2.
\end{eqnarray}
Hence,
\begin{eqnarray}
\nonumber
E_n^2\le \varepsilon _n^2+s_n^{-2}\varepsilon _n^2 \leq 2s_n^{-2} \varepsilon_n^2,
\end{eqnarray}
which is at least as good as the old bound up to a multiplicative constant $\sqrt 2$.
We finally observe that the bound $E_n$ is obtained by using
the entire sequence $\{V_0,\dots,V_n\}$. Similar bounds $E_\Gamma$ are obtained
when using a subsequence $\{V_j \; : \; j\in \Gamma\}$ for any $\Gamma \subset \{0,\dots,n\}$.
This leads to the improved bound
\begin{eqnarray}
\nonumber
\mathop{\rm rad}({\cal K}_0)\le E_n^*:=
{\min} \{E_\Gamma\; : \; \Gamma\subset \{0,\dots,n\}\}.
\end{eqnarray}
In particular, defining $E_j=E_\Gamma$ for $\Gamma=\{0,\dots,j\}$, we find that
\begin{eqnarray}
\nonumber
E_j^2\le 2\mu(V_j,W)^2\varepsilon _j^2.
\end{eqnarray}
Therefore
\begin{eqnarray}
\nonumber
E_n^* \leq \sqrt 2 \min_{j=0,\dots,n} \mu(V_j,W)\varepsilon _j,
\end{eqnarray}
which shows that the new estimate is as good as \iref{oldestimate} up to the multiplicative
constant $\sqrt 2$.
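As an aside, all quantities entering the bound $E_n$ are computable, and it may help to see the computation spelled out. The following sketch is ours and purely illustrative (it is not part of the analysis); it assumes that the entries $\lambda_{i,j}$, the singular values $s_j$ and the tolerances $\varepsilon_j$ are available as arrays, and all variable names are our own. The short demonstration at the end uses the configuration of Example 1 below, with $n=5$ chosen only for the demonstration.
\begin{verbatim}
import numpy as np

def a_priori_bound(lmbda, s, eps):
    """Bound E_n of the text:  rad(K_0) <= E_n,  rad(K_w) <= 2*E_n.
    lmbda : (n, n) array, lmbda[i, j] = <phi_{j+1}, phi*_{i+1}>
    s     : (n,) non-increasing singular values s_1 >= ... >= s_n > 0
    eps   : (n+1,) tolerances eps_0, ..., eps_n"""
    n = len(s)
    theta = np.abs(lmbda) @ eps[:n]          # crude bound |eta_i| <= theta_i
    eps_n2 = eps[n] ** 2
    terms = s ** 2 * theta ** 2
    tail = np.cumsum(terms[::-1])[::-1]      # tail[j] = sum_{i >= j} terms[i]
    idx = np.where(tail >= eps_n2)[0]
    if len(idx) == 0:                        # constraint never becomes active
        return np.sqrt(eps_n2 + np.sum(theta ** 2))
    k = idx[-1]                              # 0-based index of "k" in the text
    rest = tail[k + 1] if k + 1 < n else 0.0
    delta = (eps_n2 - rest) / terms[k]       # 0 < delta <= 1 by the choice of k
    return np.sqrt(eps_n2 + delta * theta[k] ** 2 + np.sum(theta[k + 1:] ** 2))

# configuration of Example 1 below, with n = 5 and a small epsilon
e = 1e-6
s   = np.array([1.0, 1.0, e ** 0.5, e ** 0.5, e])
eps = np.array([1.0, 1.0, 1.0, e ** 0.5, e ** 0.5, e])
print(a_priori_bound(np.eye(5), s, eps))   # about sqrt(2*e), versus 1 for one space
\end{verbatim}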
\subsection{Examples}
\label{ss:examples}
One can easily find examples for which the Chebyshev radius of ${\cal K}_w$ is substantially smaller than
the minimum of the Chebyshev radii of the ${\cal K}^j_w$, therefore giving higher potential accuracy
in the multi-space approach. As a simple example to begin this discussion, consider the case where
\begin{eqnarray}
\nonumber
\cH=\mathbb{R}^2 , \quad V_0=\{0\}, \quad V_1=\mathbb{R} e_1, \quad W=\mathbb{R} (e_1+e_2)
\end{eqnarray}
where $e_1=(1,0)$ and $e_2=(0,1)$. So, $V_1$ and $W$ are one dimensional spaces. Then, with the choices
\begin{eqnarray}
\nonumber
\varepsilon_0=1,\quad \varepsilon_1=\frac 1 2, \quad w=\(\frac {\sqrt 3 +1} 4,\frac {\sqrt 3 +1} 4\) ,
\end{eqnarray}
it is easily seen that ${\cal K}_w$ is the single point $\(\frac {\sqrt 3} 2, \frac 1 2\)$: indeed, $P_Wu=w$ forces $u_1+u_2=\frac{\sqrt 3+1}{2}$, and among such points the constraints $\|u\|\le \varepsilon_0=1$ and $|u_2|=\dist(u,V_1)\le \varepsilon_1=\frac12$ hold simultaneously only at this point. Hence ${\cal K}_w$ has
zero Chebyshev radius, while ${\cal K}^0_w$ and ${\cal K}^1_w$ have positive Chebyshev radii.
In more general settings we do not have such a simple description of ${\cal K}_w$, however we
now give some additional examples that show that even the a priori estimates of the previous
section can be significantly better than the one space estimate as well as the estimate \eref{oldestimate}. We consider the two
extremes in the compatibility between the favorable basis $\{\phi_1^*,\dots,\phi_n^*\}$ and the basis
$\{\phi_1,\dots,\phi_n\}$ which describes the approximation properties of the sequence $\{V_0,\dots,V_n\}$.
\newline
\newline
{\bf Example 1:} In this example we consider the case where the two bases coincide,
\begin{eqnarray}
\nonumber
\phi_i^*=\phi_i, \quad i=1,\dots,n.
\end{eqnarray}
Note that in this case the singular values $\{s_1,\dots,s_k\}$ for the pair $\{V_k,W\}$ coincide with the
first $k$ singular values for the pair $\{V_n,W\}$. Therefore
\begin{eqnarray}
\nonumber
\mu(V_k,W)=s_k^{-1}, \quad k=0,\dots,n,
\end{eqnarray}
where we have set $s_0:=1$. We also have
\begin{eqnarray}
\nonumber
\theta_k=\varepsilon _{k-1},\quad k=1,\dots,n.
\end{eqnarray}
We fix $\varepsilon_n:=\varepsilon$ and $\varepsilon_{n-1}:=\varepsilon_{n-2}:=\varepsilon^{1/2}$ and the values $s_n:= \varepsilon$ and $s_{n-1}:=s_{n-2}:=\varepsilon^{1/2}$ and
all other $\varepsilon_k:=1$ and all other $s_k:=1$. We examine what happens when $\varepsilon$ is very small. The {estimate} \eref{olderror} would give the bound
\begin{eqnarray}
\nonumber
\min_{0\le k\le n} \mu(V_k,W) \varepsilon_k=\min_{0\le k\le n} s_k^{-1}\varepsilon_k =1,
\end{eqnarray}
as the bound for $\mathop{\rm rad}({\cal K}_0)$ and $E({\cal K})$.
On the other hand, since,
\begin{eqnarray}
\nonumber
s_{n}^{2}\varepsilon_{n-1}^2=\varepsilon^3\ll \varepsilon^2 \quad {\rm and}\quad s_{n-1}^{2}\varepsilon_{n-2}^2= \varepsilon^2,
\end{eqnarray}
the value of $k$ in \eref{right1} is $n-1$. It follows that the error $E_n$ in the multi-space method \eref{bounddiameter} satisfies
\begin{eqnarray}
\nonumber
E_n^2\le \varepsilon_{n-2}^2+ \varepsilon_{n-1}^2+ \varepsilon_n^2\le 3\varepsilon.
\end{eqnarray}
Hence, the error for the multi-space method can be arbitrarily small as compared to the error of the one-space method.
\newline
\newline
{\bf Example 2: }
We next consider the other extreme where the two bases are incoherent in the sense that each entry in the change of basis matrix
satisfies
\begin{eqnarray}
\nonumber
|\lambda_{i,j}|\le C_0n^{-1/2},\quad 1\le i,j\le n.
\end{eqnarray}
We want to show that $E_n$ can be smaller
than
the estimate in \iref{oldestimate} in this case as well.
To illustrate how the estimates go, we assume that $n\ge 2$ and $|\lambda_{i,j}|=1/\sqrt{n}$, for all $1\le i,j\le n$. We will
take
\begin{eqnarray}
\nonumber
s_n\ll s=s_1=s_2=\dots=s_{n-1},
\end{eqnarray}
with the values of $s$ and $s_n$ specified below. We define
\begin{eqnarray}
\nonumber
\varepsilon_0:=1/2\quad {\rm and}\quad \varepsilon_j=\frac {1}{2(n-1)},\quad j=1,\dots,n-1,
\end{eqnarray}
so that $\sum_{j=0}^{n-1}\varepsilon_j=1$.
It follows from the definition of $\theta_k$ given in \eref{betaestimate} that
\begin{eqnarray}
\nonumber
\theta_k=1/\sqrt{n}:=\theta, \quad k=1,\dots,n.
\end{eqnarray}
With these choices, the best one space estimate \eref{olderror}
{is}
\begin{equation}
\label{bestone}
\min\{\varepsilon_0,s^{-1}\varepsilon _{n-1},s_n^{-1}\varepsilon_n\}.
\end{equation}
Now, we take $\varepsilon_n$ very
{small}
and $s_n=\varepsilon_n^2$. We then choose $s$ so that
\begin{equation}
\label{case2}
(s^2+s_n^2) \theta^2 =\varepsilon_n^2.
\end{equation}
This gives $k=n-1$ in \eref{right1} and so
\begin{eqnarray}
\nonumber
E_n^2=\varepsilon_n^2+\theta_{n-1}^2+\theta_n^2 \leq 3n^{-1}.
\end{eqnarray}
On the other hand, \eref{case2} says that
{$s^{-1}=\varepsilon_n^{-1}(n-\varepsilon_n^2)^{-1/2}$}.
Thus, from \eref{bestone}, the best one space estimate
{is}
\begin{eqnarray}
\nonumber
\min\{\varepsilon_0,s^{-1}\varepsilon _{n-1},s_n^{-1}\varepsilon_n\}
=
\min \Big\{\frac{1}{2},
{\frac{1}{2(n-1)\sqrt{n-\varepsilon_n^2}}}
\varepsilon_n^{-1}, \varepsilon_n^{-1} \Big \} =1/2,
\end{eqnarray}
provided {$\varepsilon_n\leq n^{-3/2}$.}
Hence, the multi-space estimate \eref{bounddiameter} is better than the one space estimate by at least the
factor $n^{-1/2}$ in this case.
\subsection{Numerical algorithms}
\label{ss:numerical}
In this section, we discuss some possible numerical algorithms, based on convex optimization,
for the multi-space case. For any given
data $w\in W$,
such that ${\cal K}_w$ is not empty, these algorithms produce, in the limit,
an element $A(w)$ which belongs to ${\cal K}_w$, so that they are
near optimal in the sense of \iref{nearopt1} and \iref{nearopt2}.
We recall that ${\cal K}_w$ is given by
\begin{eqnarray}
\nonumber
{\cal K}_w=\cH_w\cap {\cal K}^0 \cap {\cal K}^1\cap \cdots \cap {\cal K}^n.
\end{eqnarray}
One first observation is that although the set ${\cal K}_w$
{may be}
infinite dimensional, we may reduce the
search for an element in ${\cal K}_w$ to the finite dimensional space
\begin{eqnarray}
\nonumber
{\cal F}:=V_n+ W,
\end{eqnarray}
which has dimension $d=m+n-p$, where $p=\dim(V_n\cap W)$. Indeed, if $u\in {\cal K}_w$, then its projection
$P_{\cal F} u$ onto ${\cal F}$ remains in ${\cal K}_w$, since $u-P_{\cal F} u\in W^\perp\cap V_n^\perp$ implies
\begin{eqnarray}
\nonumber
P_W P_{\cal F} u=P_Wu=w,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber
\dist(P_{\cal F} u, V_j)\leq \dist( u, V_j)\leq \varepsilon_j, \quad j=0,\dots,n.
\end{eqnarray}
Therefore, without loss of generality, we may assume that
\begin{eqnarray}
\nonumber
\cH={\cal F},
\end{eqnarray}
and that the sets $\cH_w$ and {${\cal K}^j$} that define ${\cal K}_w$ are contained in this finite dimensional space.
The problem of finding a point in the intersection of convex sets is sometimes referred to as {\it convex feasibility}
and has been widely studied in various contexts. We refer to \cite{Comb1,Comb2} for surveys on
various possible algorithmic methods. We restrict our
discussion to two of them which have very simple expressions in our particular case. Both are based
on the orthogonal projection operators onto $\cH_w$ and onto the sets ${\cal K}^j$. Let us first observe that these projections
are very simple to compute. For the projection onto $\cH_w$, we use the orthonormal basis $\{\omega_1,\dots,\omega_m\}$ of $W$.
For any $u\in {\cal F}$ we have
\begin{equation}
P_{\cH_w} u= P_{W^\perp}u+w=u-\sum_{i=1}^{m} \<u,\omega_i\>\omega_i+w.
\label{PHw}
\end{equation}
For the projection onto ${\cal K}^j$, we extend the basis $\{\phi_1,\dots,\phi_n\}$ into
an orthonormal basis $\{\phi_1,\dots,\phi_d\}$ of ${\cal F}$. We then have
\begin{eqnarray}
\nonumber
P_{{\cal K}^j} u= \sum_{i=1}^j \<u,\phi_i\> \phi_i+\alpha\(\sum_{i=j+1}^d \<u,\phi_i\> \phi_i\), \quad \alpha:=\min\Big\{1,\varepsilon_j\(\sum_{i=j+1}^{d} |\<u,\phi_i\>|^2\)^{-1/2}\Big\}.
\end{eqnarray}
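For illustration only, these two projections are also easy to implement in coordinates. In the sketch below (ours, with all names our own) elements of ${\cal F}$ are represented by their coefficient vectors in the orthonormal basis $\{\phi_1,\dots,\phi_d\}$, the matrix \texttt{W} has as columns the coefficients of $\omega_1,\dots,\omega_m$ in that basis, and \texttt{w} is the data expressed as a vector of ${\cal F}$.
\begin{verbatim}
import numpy as np

def proj_Hw(u, W, w):
    """Project u onto the affine space H_w = { v : P_W v = w }.
    W : (d, m) matrix whose orthonormal columns are the omega_i,
    w : the data, written as a coefficient vector of F lying in the span of W."""
    return u - W @ (W.T @ u) + w

def proj_Kj(u, j, eps_j):
    """Project u onto K^j = { v : dist(v, V_j) <= eps_j }, where V_j is spanned
    by the first j basis vectors phi_1, ..., phi_j."""
    tail = u[j:]                        # coefficients along phi_{j+1}, ..., phi_d
    t = np.linalg.norm(tail)
    alpha = min(1.0, eps_j / t) if t > 0 else 1.0
    v = u.copy()
    v[j:] = alpha * tail                # shrink the component orthogonal to V_j
    return v
\end{verbatim}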
We
{now}
describe two elementary and well-known algorithms.
\newline
\newline
{\bf Algorithm 1: sequential projections.} This algorithm is a cyclical application of the above operators. Namely, starting
say from $u^0=w$, we define for $k\geq 0$ the iterates
\begin{eqnarray}
\nonumber
u^{k+1}:=P_{{\cal K}^n}P_{{\cal K}^{n-1}} \cdots P_{{\cal K}^1}P_{{\cal K}^0} P_{\cH_w} u^{k}.
\end{eqnarray}
We know from general results on alternating projections onto convex sets \cite{Br} that this sequence converges to
a point $u^*\in {\cal K}_w$ when ${\cal K}_w$ is not empty. We make further use of the
following observation: the nestedness
property $V_0\subset V_1 \subset \dots \subset V_n$ implies
that $u^k$ belongs to ${\cal K}={\cal K}^0\cap \dots \cap {\cal K}^n$ for all $k\ge 1$. Indeed, each $P_{{\cal K}^j}$ leaves the coefficients $\<u,\phi_i\>$, $i\le j$, unchanged and shrinks the remaining ones, so it does not increase any of the distances $\dist(\cdot,V_i)$, $i=0,\dots,n$; hence, once the constraint defining ${\cal K}^i$ holds, it is preserved by all subsequent projections in the cycle.
\newline
\newline
{\bf Algorithm 2: parallel projections.} This algorithm combines the projections onto the sets ${\cal K}^j$ according to
\begin{eqnarray}
\nonumber
u^{k+1}:=P_{\cH_w}\(\sum_{j=0}^n \gamma_jP_{{\cal K}^j}\) u^{k},
\end{eqnarray}
where the weights $0<\gamma_j<1$ are such that $\gamma_0+\cdots +\gamma_n=1$, for example
$\gamma_j:=\frac 1 {n+1}$. It may be
viewed as a projected gradient iteration for the minimization over $\cH_w$ of the differentiable function
\begin{eqnarray}
\nonumber
F(u):=\sum_{j=0}^n \gamma_jF_j(u),\quad F_j{(u)}:=\frac 1 2\dist(u,{\cal K}^j)^2.
\end{eqnarray}
Notice that the minimum of $F$ is attained exactly at each point of ${\cal K}$. Since $\nabla F_j(u)=u-P_{{\cal K}^j}u$, we find that
\begin{eqnarray}
\nonumber
u^{k+1}=P_{\cH_w}(u^k-\nabla F(u^k)).
\end{eqnarray}
Classical results on constrained minimization methods \cite{LP} show that this algorithm converges toward
a minimizer $u^*$ of $F(u)$ over $\cH_w$ which
clearly belongs to ${\cal K}_w$ when ${\cal K}_w$ is not empty.
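Both iterations then take only a few lines. The sketch below is again ours and purely illustrative; it repeats the projection helpers of the previous snippet so that it runs on its own, and it performs a fixed number of sweeps, whereas in practice one would stop using the a posteriori criterion discussed in the next subsection.
\begin{verbatim}
import numpy as np

def proj_Hw(u, W, w):                  # orthogonal projection onto H_w
    return u - W @ (W.T @ u) + w

def proj_Kj(u, j, eps_j):              # orthogonal projection onto K^j
    t = np.linalg.norm(u[j:])
    a = min(1.0, eps_j / t) if t > 0 else 1.0
    v = u.copy()
    v[j:] = a * u[j:]
    return v

def algorithm1(w_vec, W, eps, num_iter=200):
    """Sequential projections:  u <- P_{K^n} ... P_{K^1} P_{K^0} P_{H_w} u."""
    u = w_vec.copy()
    for _ in range(num_iter):
        u = proj_Hw(u, W, w_vec)
        for j, e in enumerate(eps):    # j = 0, 1, ..., n
            u = proj_Kj(u, j, e)
    return u

def algorithm2(w_vec, W, eps, num_iter=200):
    """Parallel projections:  u <- P_{H_w}( sum_j gamma_j P_{K^j} u )."""
    u = w_vec.copy()
    gamma = 1.0 / len(eps)             # gamma_j = 1/(n+1)
    for _ in range(num_iter):
        v = sum(gamma * proj_Kj(u, j, e) for j, e in enumerate(eps))
        u = proj_Hw(v, W, w_vec)
    return u
\end{verbatim}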
\subsection{A posteriori estimate and convergence rates}
Each of the above algorithms generates a sequence $(u^k)_{k\ge 1}$ of elements from $ {\cal F}$ which are guaranteed to converge to a point
in ${\cal K}_w$ provided that this set is nonempty. We would like
to
have a bound for $\dist(u^k,{\cal K}_w)$, since this would allow us to check
the progress of the algorithm and also could be utilized
as
a stopping criterion when we have gained sufficient accuracy.
Here we restrict our analysis to Algorithm 1.
We will use certain geometric properties of the set ${\cal K}$, expressed by the following
lemma.
\begin{lemma}
\label{lem:ecc}
If $u_1,u_2\in{\cal K}$ then the ball $B:=B(u_0,r)$ centered at $u_0:=\frac{1}{2}(u_1+u_2)$ of radius
\begin{equation}
\label{radius}
r:=\frac{1}{8}\min_{j=0,\dots,n} \varepsilon_j^{-1}\|P_{V_j^\perp} (u_1)- P_{V_j^\perp} (u_2)\|^2
\end{equation}
is completely contained in ${\cal K}$.
\end{lemma}
\noindent
{\bf Proof:} For $u_1,u_2\in {\cal K}^j$ the ball $B(u_0,r)$ is contained in ${\cal K}^j$ if and only if the ball in $V_j^\perp$ centered at $P_{V_j^\perp}u_0$ with the radius $r$ is contained in
$P_{V_j^\perp}({\cal K}^j)=\{x\in V_j^\perp \ :\ \|x\|\leq \varepsilon_j\}=:{\cal B}_j$. Let $v^j_s:=P_{V_j^\perp}(u_s)$ for $s=0,1,2$ and let $\delta_j:=\|v_1^j-v_2^j\|$. The parallelogram identity gives
$$
\|v_0^j\|^2=\frac12\| v_1^j\|^2+\frac12\|v_2^j\|^2 -\frac14\|v_1^j-v_2^j\|^2,
$$
so that $\|v_0^j\|^2\leq \varepsilon_j^2-\frac14 \delta_j^2$. Thus for
\begin{eqnarray}
\nonumber
r_j:=\varepsilon_j -\sqrt{\varepsilon_j^2-\frac14 \delta_j^2}=\varepsilon_j\left(1-\sqrt{1-\frac {\delta_j^2}{4\varepsilon_j^2}}\right),
\end{eqnarray}
the ball in $V_j^\perp$ centered at $v_0^j$ with radius $r_j$ is contained in ${\cal B}_j$. Thus, with
\begin{eqnarray}
\nonumber
\rho:=\min_{j=0,1,\dots,n} r_j,
\end{eqnarray}
we have $B(u_0,\rho)\subset {\cal K}$. Since $\delta_j\leq 2 \varepsilon_j$ and $(1-\sqrt{1-x})\geq x/2$ for $0\leq x\leq 1$ we get
$r_j\geq \delta_j^2/(8\varepsilon_j)$ and therefore $\rho\geq r$ from which \iref{radius} follows. \hfill $\Box$
\newline
We have noticed that the iterates $u^k$ of Algorithm 1
all belong to ${\cal K}$ and we would like to estimate their distance
from the convex set ${\cal K}_w$. Let $P_{{\cal K}_w}(x)$ denote the point from ${\cal K}_w$ closest to $x$. This is a well defined map. The following result gives
an estimate for the distance of any $u\in {\cal K}$ from ${\cal K}_w$, in terms of its
distance from the affine space $\cH_w$. This latter quantity is
easily computed using \iref{PHw} which shows that
\begin{eqnarray}
\nonumber
u-P_{\cH_w} u=P_Wu-w=\sum_{i=1}^{m} \<u,\omega_i\>\omega_i-w.
\end{eqnarray}
\noindent
\begin{lemma}
\label{lem:apost}
Let $u\in {\cal K}$ be such that
\begin{eqnarray}
\nonumber
\alpha:=\dist(u,\cH_w)>0.
\end{eqnarray}
Then
\begin{equation}
\label{toprove}
\| P_{\cH_w} u-P_{{\cal K}_w} u\| \leq \rho=\rho(\alpha):= \max_j \mu_j(\alpha +4\sqrt{\alpha \varepsilon_j}),
\end{equation}
where $\mu_j=\mu(V_j, W)$.
Since $u-P_{\cH_w}u$ is orthogonal to $P_{\cH_w}u-P_{{\cal K}_w} u$, we have
\begin{eqnarray}
\nonumber
\dist(u,{\cal K}_w)^2 \leq \alpha^2+\rho(\alpha){^2}.
\end{eqnarray}
\end{lemma}
\noindent
{\bf Proof:} We set $u_2=P_{{\cal K}_w} u$ and $\eta=u-u_2$ which we decompose as
\begin{eqnarray}
\nonumber
\eta=(u-P_{\cH_w} u)+(P_{\cH_w} u-u_2)=:\eta_1+\eta_2.
\end{eqnarray}
We wish to show that $\|\eta_2\|\le \rho$, where $\rho$ is defined in \iref{toprove}. To this end,
observe that $\eta_1\in W$ and $\eta_2\in W^\perp$ so that this is an orthogonal decomposition.
Moreover, using \iref{mu} and noting that $\|\eta_1\|=\alpha$, we have
\begin{equation}
\label{PW5}
\|P_{V_j^\perp} \eta\|\geq \|P_{V_j^\perp} \eta_2\|-\|P_{V_j^\perp}\eta_1\|\geq \beta(V_j,W)\|\eta_2\|-\alpha.
\end{equation}
We infer
from Lemma \ref{lem:ecc} that the ball $B$ with center at $u_0=\frac12(u+u_2)$ and radius
$$
r=\frac18\min_{j=0,1,\dots,n} \varepsilon_j^{-1} \|P_{V_j^\perp} \eta\|^2
$$
is contained in ${\cal K}$. Let us suppose now that $\|\eta_2\|>\rho$ and derive a contradiction.
Then, we obtain from \eref{PW5}
\begin{eqnarray}
\nonumber
\|P_{V_j^\perp} \eta\| > \mu_j^{-1}\rho-\alpha
\geq \mu_j ^{-1}\mu_j(\alpha +4\sqrt{\alpha\varepsilon_j})-\alpha=4\sqrt{\alpha \varepsilon_j},
\end{eqnarray}
and thus
\begin{eqnarray}
\nonumber
r > \frac18\min_{j=0,1,\dots,n} \varepsilon_j^{-1} 16\alpha\varepsilon_j =2\alpha.
\end{eqnarray}
On the other hand,
note that $\|u_0-P_{\cH_w} u_0\|=\frac12 \|u-P_{\cH_w }u\|=\alpha/2<r$. Therefore, $P_{\cH_w}u_0\in B(u_0,r)\subset{\cal K}$, and since it also lies in $\cH_w$, it belongs to ${\cal K}_w$.
Moreover,
\begin{eqnarray}
\nonumber
\|u- P_{\cH_w}u_0\|^2 = \alpha^2 + \frac 14 \|u_2 - P_{\cH_w}u\|^2,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber
\|u- u_2\|^2=\alpha^2 + \|u_2 - P_{\cH_w}u\|^2.
\end{eqnarray}
If $u_2\neq P_{\cH_w}u$, we have $\|u- P_{\cH_w}u_0\| < \|u- u_2\|$ which is a contradiction since $u_2$ is the closest point
to $u$ in ${\cal K}_w$. If $u_2 - P_{\cH_w}u =0$ then $\eta_2=0$ contradicting $\|\eta_2\| >\rho$. This completes the proof.
\hfill $\Box$\\
One
immediate
consequence of the above lemma is an a posteriori error estimate for
the squared distance to ${\cal K}_w$
\begin{eqnarray}
\nonumber
\delta_k:=\dist(u^k,{\cal K}_w)^2,
\end{eqnarray}
in Algorithm 1. Indeed, we have observed that
$u^k\in {\cal K}$ and therefore
\begin{eqnarray}
\nonumber
\delta_k \leq \alpha^2_k+\rho(\alpha_k)^2, \quad \alpha_k:=\dist(u^k,\cH_w).
\end{eqnarray}
This ensures the following accuracy with respect to the unknown $u\in {\cal K}_w$: since the argument in the proof of Lemma \ref{lem:rad}, applied to any two points of ${\cal K}_w$, shows that the diameter of ${\cal K}_w$ is at most $2\mathop{\rm rad}({\cal K}_0)$, we have
\begin{eqnarray}
\nonumber
\|u-u^k\| \leq \sqrt {\alpha^2_k+\rho(\alpha_k)^2}+2 \mathop{\rm rad}({\cal K}_0).
\end{eqnarray}
If we have an a priori bound for $\mathop{\rm rad}({\cal K}_0)$, such as the quantity $E_n$ from Theorem \ref{thm:ms},
one possible stopping criterion is the validity of
\begin{eqnarray}
\nonumber
\sqrt {\alpha^2_k+\rho(\alpha_k)^2}\leq E_n.
\end{eqnarray}
This ensures that we have achieved accuracy $\|u-u^k\|\leq 3E_n$, however note that $E_n$ can sometimes be a very pessimistic
bound for $\mathop{\rm rad}({\cal K}_w)$ so that significantly higher accuracy is reachable by more iterations.
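For completeness, we note that the stopping test is cheap to evaluate: $\alpha_k$ is the norm of $P_Wu^k-w$, and $\rho(\alpha_k)$ involves only the known quantities $\mu_j$ and $\varepsilon_j$. A sketch (ours, with the same conventions as the previous snippets):
\begin{verbatim}
import numpy as np

def a_posteriori_bound(u, W, w_vec, mu, eps):
    """Return sqrt(alpha^2 + rho(alpha)^2), a bound for dist(u, K_w) valid for
    any u in K (e.g. an iterate of Algorithm 1);  mu[j] = mu(V_j, W)."""
    alpha = np.linalg.norm(W @ (W.T @ u) - w_vec)        # dist(u, H_w)
    rho = max(m * (alpha + 4.0 * np.sqrt(alpha * e)) for m, e in zip(mu, eps))
    return np.sqrt(alpha ** 2 + rho ** 2)

# possible stopping rule inside the Algorithm 1 sweep:
#     if a_posteriori_bound(u, W, w_vec, mu, eps) <= E_n:
#         break
\end{verbatim}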
We can also use Lemma \ref{lem:apost} to establish a convergence estimate for
$\delta_k$ in Algorithm 1. For this purpose, we introduce the
intermediate iterates
\begin{eqnarray}
\nonumber
u^{k+\frac 1 2}:= P_{\cH_w} u^{k},
\end{eqnarray}
and the corresponding squared distance
\begin{eqnarray}
\nonumber
\delta_{k+\frac 1 2}{:=}\dist(u^{k+\frac 1 2},{\cal K}_w)^2.
\end{eqnarray}
Since the distance to ${\cal K}_w$ is non-increasing
in each projection step, it follows that
\begin{eqnarray}
\nonumber
\delta_{k+1}\leq \delta_{k+\frac 1 2}= \delta_k-\alpha_k^2.
\end{eqnarray}
On the other hand, it easily follows from Lemma \ref{lem:apost} that
\begin{eqnarray}
\nonumber
\delta_k-\alpha_k^2 \leq \rho(\alpha_k)^2\leq A\alpha_k,
\end{eqnarray}
where $A$ is a constant depending on the $\varepsilon_j$'s, the $\mu_j$'s and $\|u\|$. It is easily seen that this implies the validity of the inequality
\begin{eqnarray}
\nonumber
\alpha_k\geq\sqrt{\delta_k+A^2/4}-A/2\geq \frac{\delta_k}{\sqrt{A^2+4\delta_k}}\geq \frac{\delta_k}{\sqrt{A^2+4\delta_0}}:=c\delta_k,
\end{eqnarray}
and {therefore}
\begin{eqnarray}
\nonumber
\delta_{k+1} \leq \delta_k-c^2 \delta_k^2.
\end{eqnarray}
From this, one finds by induction that
\begin{eqnarray}
\nonumber
\delta_k \leq Ck^{-1}, \quad k\geq 1,
\end{eqnarray}
for a suitably chosen constant
$C:=\max\{c^{-2},\delta_1\}$ taking into account that for any $t\ge1$
\begin{eqnarray}
\nonumber
\frac{C}{t}\(1-\frac{Cc^2}{t}\)\le C\(\frac{t-1}{t^2}\) \le \frac{C}{t+1} \,.
\end{eqnarray}
\hfill $\Box$
\begin{remark}
The above convergence rate ${\cal O}(k^{-1/2})$ for the distance between $u^k$ and ${\cal K}_w$ is quite pessimistic,
however, one can easily exhibit examples in which it
indeed occurs due to the fact that $\cH_w$ intersects ${\cal K}$ at a single point
of tangency. On the other hand, one can also easily find other examples for which
convergence of Algorithm 1 is exponential. In particular, this occurs
whenever ${\cal K}_w$ has an element lying in the interior of ${\cal K}$.
\end{remark}
\section{Introduction}\label{sec:intro}
If a model of biological sequence evolution is to be used for
phylogenetic inference, it is essential that the model parameters of
interest --- certainly the tree parameter and usually the numerical
parameters
--- be identifiable from the joint distribution of states at the
leaves of the tree. Though often unstated, the assumption that model
parameters are identifiable underlies the use of both Maximum
Likelihood and Bayesian inference methods. As increasingly
complicated models, incorporating across-site rate variation,
covarion structure, or other types of mixtures, are implemented in
software packages, there is a real possibility that
non-identifiability could confound data analysis. Unfortunately, our
theoretical understanding of this issue lags well behind current
phylogenetic practice.
One natural approach to proving the identifiability of the tree
topology relies on the definition of a phylogenetic distance for the
model, and the $4$-point condition of Buneman \cite{Bun}. For
instance, Steel \cite{S94} used the log-det distance to establish
the identifiability of the tree topology under the general Markov
model and its submodels. Such a distance-based argument shows
additionally that $2$-marginalizations of the full joint
distribution suffice to recover the tree parameter, since distances
require only two-sequence comparisons. Once the tree has been
identified, the numerical parameters giving rise to a joint
distribution for the general Markov model
can be determined by an argument of Chang
\cite{MR97k:92011}.
However, for more general mixture models and rates-across-sites
models no appropriate definition of a distance is known, so proving the
identifiability of the tree parameter requires a different approach.
(Though distance measures have been developed for GTR models with
rates-across-sites variation \cite{GuLi96,WadSt97}, these require that one know
the rate distribution completely, and identifiability of the rate distribution
has yet to be addressed.
Although identifiability of the popular GTR+I+$\Gamma$ model of
sequence evolution was considered in \cite{Rog01}, there are gaps in
the argument, as was pointed out to us by An\'e \cite{AnePC}.)
In \cite{ARidtree}, the viewpoint of algebraic geometry is used to
show the generic identifiability of the tree parameter for the
covarion model of \cite{MR1604518} and for certain mixture models with
a small number of classes.
Though this result is far more general than previous identifiability results,
it still fails to cover the type of rate-variation models
currently in common use for data analysis, and does not
address identifiability of numerical parameters at all.
Much more study of the identifiability question is needed.
\smallskip
In this paper, we focus on the \emph{general Markov plus invariable
sites}, GM+I, model of sequence evolution, a model that encompasses
the GTR+I model that is of more immediate interest to practitioners.
Note that previous work on GM+I by Baake \cite{MR1664261}
focused on \emph{non}-identifiability. In that paper
parameter choices for the $2$-state GM+I model
on two distinct 4-taxon trees
are constructed that give rise to the same pairwise
joint distributions ($2$-marginals). As both sets of parameters have
50\% invariable sites, this shows that the
identifiability of the tree parameter cannot generally hold on the basis of
2-sequence comparisons, even if the distribution of rate factors is
known. Furthermore, it shows a well-behaved phylogenetic distance cannot
be defined for this model, as existence of such a distance would imply tree
identifiability.
Here we prove that all parameters for the GM+I model are indeed
identifiable, through $4$-sequence comparisons. By identifiable, we
mean \emph{generically identifiable} in a geometric sense: For a
fixed tree, the set of numerical parameters for which the joint
distribution could have arisen from either a) a different tree, or
b) a `significantly different' (in a sense to be made clear later)
choice of numerical parameters on the same tree, is of strictly
lower dimension than that of the full numerical parameter space.
(For a concrete example of generic identifiability,
recall the results of Steel and Chang on the general Markov model:
assumptions that
the Markov edge matrices $M_e$ have determinant $\ne 0,1$ and that the
distribution of states at the root has strictly positive entries ensure
identifiability of all parameters. These are
generic conditions.) Thus for natural probability distributions on
the parameter space, with probability one a choice of parameters is
generic.
\medskip
Although identifiability of the tree parameter for GM+I follows from
more general results in \cite{ARidtree}, that paper did not consider
identifiability of numerical parameters. Our arguments here are
tailored to GM+I and yield stronger results
addressing numerical parameters as well as the tree. Our approach
is again based on the determination of \emph{phylogenetic
invariants} for the model. While the invariants described in
\cite{ARidtree} are invariants for more general models than GM+I,
the ones given in this paper apply only to GM+I and its submodels,
and are of much lower degree. As a byproduct of the development of
these GM+I invariants, we are led to rational formulas for
recovering all the parameters related to the invariable sites from a
joint distribution. Indeed, these formulas are crucial to our
identification of numerical parameters.
These formulas can be viewed as GM+I analogs of the formulas for the
proportion of invariable sites in group-based+I models that were
found by the capture-recapture argument of \cite{SHL00}. In the
group-based setting, those formulas were developed into a heuristic
means of estimating the proportion of invariable sites from data
without performing a full tree inference. This has been implemented
in {\tt SplitsTree4} \cite{splitsTree4}. However, it remains unclear
whether a similar useful heuristic can be found for the formulas
presented in this paper.
\smallskip
Since our algebraic methods at times employ computational commutative
algebra software packages, and these tools are not commonly used
in the phylogenetics literature, we have included some examples of
code in Appendix \ref{app:code}.
\section{The GM+I Model}\label{sec:gmi}
Let $T$ denote an \emph{$n$-taxon tree}, by which we mean a tree
with $n$ leaves labeled by the taxa $a_1,a_2,\dots, a_n$ and all
internal vertices of valence at least 3. We say $T$ is \emph{binary}
if all internal nodes have valence exactly 3.
We begin by describing the parameterization of the $\kappa$-state GM+I
model of sequence evolution along $T$, where $\kappa=4$ corresponds to
usual models of DNA evolution. The \emph{class size parameter}
$\delta$ denotes the probability that any particular site in a
sequence is invariable: conceptually, the flip of a biased coin
weighted by $\delta$ determines if a site is allowed to undergo state
transitions. If a site is invariable, it is assigned state
$i\in[\kappa]=\{1,2,\dots,\kappa\}$ with probability $\pi_I(i)$. Here
$\boldsymbol \pi_I=(\pi_I(1),\dots,\pi_I(\kappa))$ is a vector of
non-negative numbers summing to 1 giving the state distribution for
invariable sites.
All sites that are not invariable mutate according to a common set
of parameters for the GM model, though independently of one another.
For these sites, we associate to each node (including leaves) of $T$
a random variable with state space $[\kappa]$. Choosing any node
$r$ of $T$ to serve as a root, and directing all edges away from
$r$, let $T_r$ denote the resulting directed tree $T$. A \emph{root
distribution} vector $\pr= (\pi_{GM}(1),\dots, \pi_{GM}(\kappa))$,
with non-negative entries summing to $1$, has entries $\pr(j)$
specifying the probability that the root variable is in state $j$.
For each directed edge $e = (v \to w)$ of $T_r$, let $M_e$ be a
$\kappa\times \kappa$ Markov matrix, so that $M_e(i,j)$ specifies
the conditional probability that the variable at $w$ is in state $j$
given that the variable at $v$ is in state $i$. Thus entries of all
$M_e$ are non-negative, with rows summing to 1.
For the GM+I model on an $n$-taxon tree $T$ with edge set $E$, the
stochastic parameter space $S \subset {[0,1]}^N$
is of dimension
$N =1 + (\kappa-1) +(\kappa - 1) + |E|\kappa(\kappa - 1) = 2\kappa - 1 +
|E|\kappa(\kappa - 1).$
The parameterization map giving the joint
distribution of the variables at the leaves of $T$ is
denoted by
\begin{linenomath}
\begin{align*}
\phi_T: S &\longrightarrow {[0,1]}^{\kappa^n},\\
\mathbf{s} &\longmapsto P.
\end{align*}
\end{linenomath}
We view $P$ as an $n$-dimensional $\kappa \times \dots \times
\kappa$ array, with dimensions corresponding to the ordered taxa
$a_1,a_2,\dots,a_n$, and with entries indexed by the states at the
leaves of $T$. The entries of $P$ are polynomial functions in the
parameters $\mathbf{s}$ explicitly given by
\begin{linenomath}
\begin{multline}
P(i_1, \dots, i_n) =\\
\delta\, \epsilon (i_1,i_2,\dots i_n) \pI(i_1) +(1-\delta)
\sum_{(j_v) \in \mathcal{H}} \left( \pr(j_r) \prod_{e} M_e(j_{v_i},
j_{v_f})\right).\label{eq:Pdef}
\end{multline}
\end{linenomath}
Here $\epsilon(i_1,i_2,\dots i_n)$ is 1 if all $i_j$ are equal and 0
otherwise, the product is taken over all edges $e=(v_i\to v_f)\in E$,
and the sum is taken over the set of all possible assignments of
states to nodes of $T$ extending the assignment $(i_1, \dots, i_n)$ to
the leaves: If $V$ is the set of vertices of $T$ then
\begin{linenomath}
$$\mathcal{H} = \left\{(j_v) \in [\kappa]^{|V|} \mid j_v = i_k \mbox{
if } v \mbox{ is a leaf labeled by $a_k$} \right\}.
$$
\end{linenomath}
For notational ease, the entries of $P$, the \emph{pattern
frequencies}, are also denoted by $p_{i_1 \dots i_n} = P(i_1, \dots,
i_n)$.
We note that while a root $r$ was chosen for the tree in order to
explicitly describe the GM portion of the parameterization of our
model, the particular choice of $r$ is not important. Under mild
additional restrictions on model parameters, changing the root
location corresponds to a simple invertible change of variables in
the parameterization. (See \cite{SSH94}, \cite{AR03}, or \cite{ARgm}
for details.) This justifies our slight abuse of language in
referring to the GM or GM+I model on $T$, rather than on $T_r$, and
we omit future references to root location.
Note that equation (\ref{eq:Pdef}) allows us to more succinctly
describe any $P\in \im(\phi_T)$ as
\begin{linenomath}
\begin{equation}P= (1-\delta) P_{GM} + \delta P_I
\label{eq:decomp}\end{equation}
\end{linenomath}
where $P_{GM}$ is an array in the
image of the GM parameterization map on $T$ and
$P_I=\diag(\boldsymbol \pi_I)$ is an $n$-dimensional array whose
off-diagonal entries are zeros and whose diagonal entries are those
of $\pi_I$.
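To fix ideas, the parameterization (\ref{eq:Pdef}) is straightforward to evaluate on a small tree. The sketch below is ours (it is not the appendix code) and computes the $\kappa$-state GM+I joint distribution on the $4$-taxon binary tree with split $ab|cd$, rooted at the internal node adjacent to the leaves $a$ and $b$; all function and variable names are our own.
\begin{verbatim}
import numpy as np

def gm_plus_i_distribution(pi_gm, pi_inv, delta, Ma, Mb, Mint, Mc, Md):
    """Joint distribution P(i_a, i_b, i_c, i_d) of the kappa-state GM+I model on
    the 4-taxon tree ab|cd, rooted at the internal node r1 adjacent to a and b.
    Ma, Mb are the Markov matrices on the edges r1->a, r1->b, Mint on r1->r2,
    and Mc, Md on r2->c, r2->d; pi_gm is the root distribution."""
    k = len(pi_gm)
    P = np.zeros((k, k, k, k))
    for ia in range(k):
        for ib in range(k):
            for ic in range(k):
                for idd in range(k):
                    # sum over the hidden states (j1, j2) at the internal nodes
                    P[ia, ib, ic, idd] = sum(
                        pi_gm[j1] * Ma[j1, ia] * Mb[j1, ib] * Mint[j1, j2]
                        * Mc[j2, ic] * Md[j2, idd]
                        for j1 in range(k) for j2 in range(k))
    P *= (1.0 - delta)
    for i in range(k):
        P[i, i, i, i] += delta * pi_inv[i]      # the diag(pi_I) contribution
    return P
\end{verbatim}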
\section{Model Identifiability}\label{sec:modelId}
We now make precise the various concepts of identifiability of a
phylogenetic model. To adapt standard statistical language to the
phylogenetic setting, for a fixed set $A$ of $n$ taxa and $\kappa\ge
2$, consider a collection $\mathcal M$ of pairs $(T,\phi_T )$, where
$T$ is an $n$-taxon tree with leaf labels $A$, and $\phi_T:S_T\to
[0,1]^{\kappa^n}$ is a parameterization map of the joint
distribution of pattern frequencies for the model on $T$. We say
\emph{the tree parameter is identifiable} for $\mathcal M$ if for
every $P\in \cup_{(T,\phi_T)\in
\mathcal M} \im(\phi_T)$, there is a unique $T$ such that $P\in
\im(\phi_T)$. We say that \emph{numerical parameters are identifiable on a
tree $T$} if the map $\phi_T$ is injective, that is if for
every $P\in\im(\phi_T)$ there is a unique $\mathbf s\in S_T$ with
$\phi_T(\mathbf s)=P$. We say the \emph{model $\mathcal M$ is identifiable}
if the tree parameter is identifiable, and for each tree the numerical
parameters are identifiable.
It is well-known that such a definition of identifiability is too
stringent for phylogenetics. First, unless one restricts parameter
spaces, there is little hope that the tree parameter be identifiable:
One need only think of any standard model on a binary 4-taxon tree in
which the Markov matrix parameter on the internal edge is the identity
matrix. Any joint distribution arising from such a parameter choice
could have as well arisen from any other 4-taxon tree topology.
Even if such `special' parameter choices are excluded so the tree
parameter becomes identifiable, identifiability of numerical
parameters also poses problems, as noted by Chang
\cite{MR97k:92011}. For example, consider the 3-taxon tree with the
GM model. Then multiple parameter choices give rise to the same
joint distribution since the labeling of the states at the internal
node can be permuted in $\kappa!$ ways, as long as the Markov matrix
parameters are adjusted accordingly \cite{AR03}. The occurrence of
this sort of `label-swapping' non-identifiability in statistical
models with hidden (unobserved) variables is well-known, but is not
of great concern. However, even for this model more subtle forms of
non-identifiability can occur, in which infinitely many parameter
choices lead to the same joint distribution. These arise from
singularities in the model, and can be avoided by again restricting
parameter space. Such `generic' conditions for the GM model have
already been mentioned in the introduction.
We therefore refine our notions of identifiability. Because we are
concerned primarily with models where the maps $\phi_T$ are given by
polynomials, we give a formulation appropriate to that setting.
Recall that given any collection $\mathcal F$ of polynomials
in $N$ variables, their common zero set,
\begin{linenomath}
$$
V(\mathcal F)=\{z\in \C^N \mid f(z)=0 \text{ for all } f\in \mathcal F\},
$$
\end{linenomath}
is the \emph{algebraic variety} defined by $\mathcal F$. If the algebraic
variety is a proper subset of $\C^N$, then it is said to be \emph{proper}.
\begin{defn}
Let $\mathcal M$ be a model on a collection of $n$-taxon trees, as
defined above.
\begin{enumerate}
\item We say \emph{the tree parameter is generically identifiable}
for $\mathcal M$ if for each tree $T$ there exists a proper
algebraic variety $X_T$ with the
property that
\begin{linenomath}
$$P\in \bigcup_{(T,\phi_T)\in \mathcal M}
\phi_T(S_T\smallsetminus X_T) \text{ implies } P\in
\phi_T(S_T\smallsetminus X_T) \text{ for a unique
$T$}.$$
\end{linenomath}
\item We say that \emph{numerical parameters are generically
locally identifiable on a tree $T$} if there is a proper
algebraic variety $Y_T$ such that for all
$\mathbf s\in S_T\smallsetminus Y_T$, there is a
neighborhood of $\mathbf s$ on which $\phi_T$ is injective.
\item We say the \emph{model $\mathcal M$ is generically locally
identifiable} if the tree parameter is generically identifiable, and
for each tree the numerical parameters are generically locally
identifiable. \end{enumerate}
\end{defn}
Note that the notion of `generic' here is used to mean
`for all parameters but those lying on a proper
subvariety of the parameter space,' and such a variety
is necessarily of lower dimension than the full parameter
space. Using the standard measure
on the parameter space, viewed as a subset of $\R^N$, this notion thus
also implies `for all
parameters except those in a set of measure 0.'
\smallskip
In the important special case of parameterization maps defined by
polynomial formulas, such as that for the GM+I model, generic local
identifiability of numerical parameters is equivalent to the notion in
algebraic geometry of the map $\phi_T$ being \emph{generically
finite}. In this case, there exists a proper variety $Y_T$ and an
integer $k$, the degree of the map $\phi_T$, such that restricted to
$S_T\smallsetminus Y_T$ the map $\phi_T$ is not only locally injective
but also $k$-to-1: That is, if $\mathbf s\in S_T\smallsetminus Y_T$
and $P=\phi_T(\mathbf s)$, then the fiber $\phi^{-1}_T(P)$ has
cardinality $k$.
Because of the label swapping issue at internal nodes, for the GM
model and GM+I on an $n$-taxon tree $T$ with vertex set $V$, fibers of
generic points will always have cardinality at least $(\kappa!)^{|V|-n}$.
Thus for these models, the best we can hope for is generic local
identifiability of the model (both tree and numerical parameters)
where the generic fiber has exactly this cardinality. That in fact is
what we establish in the next section.
\section{Generic Identifiability for the GM+I model}\label{sec:genericId}
We begin our arguments by determining some phylogenetic invariants for
the GM+I model. The notion of a phylogenetic invariant was introduced
by Cavender and Felsenstein \cite{CF87} and Lake \cite{Lake87}, in the
hope that phylogenetic invariants might be useful for practical tree
inference. Their role here, in proving identifiability, is more
theoretical but illustrates their value in analyzing models.
\smallskip
For a parameterization $\phi_T$ given by polynomial formulas on domain
$S_T\subseteq\R^N$, we may uniquely extend to a polynomial map with
domain $\C^N$, given by the same polynomial formulas, which we again
denote by $ \phi_T: \C^N \longrightarrow {\C}^{\kappa^n}.$
\begin{rem}
Extending parameters to include complex values is solely for
mathematical convenience, as algebraic geometry provides the natural
setting for our viewpoint. The collection of stochastic joint
distributions (arising from the original stochastic parameter space)
is a proper subset of $\im(\phi_T)$.
\end{rem}
The \emph{phylogenetic variety}, $V_T$, is the smallest algebraic
variety in $\C^{\kappa^n}$ containing $ \phi_T(\C^N)$, \emph{i.e.},
the closure of the image of $\phi_T$ under the Zariski topology,
\begin{linenomath}
$$V_T=\overline{\im(\phi_T)}\subseteq\C^{\kappa^n}.$$
\end{linenomath}
\begin{rem}
$V_T$ coincides with the closure of $\im(\phi_T)=\phi_T(\C^N)$ under
the usual topology on $\C^{\kappa^n}.$ However, while $V_T\cap
[0,1]^{\kappa^n}$ contains the closure of $\phi_T(S_T)$ under the
usual topology, these need not be equal.
\end{rem}
Let $\C[P]$ denote the ring of polynomials in the $\kappa^{n}$
indeterminates $\{p_{i_1\dots i_n}\}.$ Then the collection of all
polynomials in $\C[P]$ vanishing on $V_T$ forms a prime ideal $I_T$.
We refer to $I_T$ as a \emph{phylogenetic ideal}, and its elements as
\emph{phylogenetic invariants}. More explicitly, a polynomial $f\in
\C[P]$ is a phylogenetic invariant if, and only if, $f(P_0)=0$ for
every $P_0\in \phi_T(\C^{N})$, or equivalently, if, and only
if, $f(P_0)=0$ for every $P_0\in \phi_T(S_T)$.
\medskip
As we proceed, we consider first the special case of $4$-taxon
trees. We highlight the
$\kappa=2$ case, in part to illustrate the arguments for general
$\kappa$ more clearly, and in part because we can go further in understanding
the 2-state model.
\medskip
Consider the $4$-taxon binary tree $T_{ab|cd}$, with taxa $a,b,c,d$ as
shown in Figure \ref{fig:4taxa}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=.75in]{figv01.eps}
\end{center}
\caption{The 4-taxon tree $T_{ab|cd}$}\label{fig:4taxa}
\end{figure}
Suppose that $P$ is a $2 \times 2 \times 2
\times 2$ pattern frequency array,
whose indices correspond to states $[2]=\{1,2\}$
at the taxa in alphabetical order.
Then the internal edge $e$ of $T$ defines the split $ab \mid
cd$ in the tree, and we define the \emph{edge flattening} $F_e$ of $P$
at $e$, a $2^2 \times 2^2$ matrix, by
\begin{linenomath}
\begin{equation}\label{eq:Flat}
F_e =
\begin{pmatrix}
p_{1111} & p_{1112} & p_{1121} & p_{1122}\\
p_{1211} & p_{1212} & p_{1221} & p_{1222}\\
p_{2111} & p_{2112} & p_{2121} & p_{2122}\\
p_{2211} & p_{2212} & p_{2221} & p_{2222}\\
\end{pmatrix}.
\end{equation}
\end{linenomath}
Notice that the rows of $F_e$ are indexed by the states at $\{ab\}$
and the columns by states at $\{cd\}$. The flattening $F_e$ is
intuitively motivated by considering a `collapsed' model induced by
$e$: taxa $a$ and $b$ are grouped together forming a single variable
$\{ab\}$ with $4$ states, and the grouping $\{cd\}$ forms a second
variable with $4$ states.
This construction can be generalized in a natural way: suppose $T$
is an $n$-taxon tree, and $P$ a $\kappa\times\dots\times\kappa$ array with
indices corresponding to the taxa labeling the leaves of $T$. Then
for any edge $e$ in $T$, we can form from $P$ the matrix $F_{e}$ of size
$\kappa^{n_1} \times \kappa^{n_2}$, where $n_1$ and $n_2$ are the
cardinalities of the two sets of taxa in the split induced by $e$.
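Concretely, forming $F_e$ from $P$ is just a reindexing; the following minimal sketch (ours, assuming $\kappa$ states on every axis) makes this explicit. For a $2\times 2\times 2\times 2$ array it reproduces the matrix in (\ref{eq:Flat}) when called with \texttt{left=(0, 1)}, and the flattening for the split $ac|bd$ when called with \texttt{left=(0, 2)}.
\begin{verbatim}
import numpy as np

def flatten(P, left):
    """Edge flattening of the pattern frequency array P: `left` is the tuple of
    axis indices on one side of the split, e.g. (0, 1) for the split ab|cd when
    the axes of P correspond to the taxa a, b, c, d in that order."""
    n = P.ndim
    right = tuple(i for i in range(n) if i not in left)
    k = P.shape[0]
    return np.transpose(P, left + right).reshape(k ** len(left), k ** len(right))
\end{verbatim}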
From \cite{ARgm} (for a more expository presentation, see also
\cite{ARnme}), we have:
\begin{thm} \label{thm:GM}
For the $2$-state GM model on a binary $n$-taxon
tree $T$, the phylogenetic ideal $I_T$ is generated by all $3\times 3$
minors of all edge flattenings $F_e$ of $P$.
Moreover, for the $\kappa$-state GM model on an $n$-taxon tree $T$,
the phylogenetic ideal
$I_T$ contains all
$(\kappa+1) \times (\kappa+1)$ minors of all edge flattenings
of $P$.
\end{thm}
Using this result, we can deduce some
elements of the phylogenetic ideal for
the GM+I model for any number of taxa $n \ge 4$ and any number of
states $\kappa \ge 2$.
\begin{prop}\label{prop:invariants} (Phylogenetic Invariants for GM+I)
\begin{enumerate}
\item \label{prop:inv:item1}
For the $4$-taxon tree $T_{ab|cd}$ and
the $2$-state GM+I model,
the cubic determinantal polynomials
\begin{linenomath}
$$
f_1=\left |\begin{matrix}
p_{1112} & p_{1121} & p_{1122}\\
p_{1212} & p_{1221} & p_{1222}\\
p_{2112} & p_{2121} & p_{2122}\\
\end{matrix}\right |
\mbox{ and } f_2=\left |\begin{matrix}
p_{1211} & p_{1212} & p_{1221}\\
p_{2111} & p_{2112} & p_{2121}\\
p_{2211} & p_{2212} & p_{2221}
\end{matrix}\right |
$$
\end{linenomath}
are phylogenetic invariants. These are the two $3\times 3$ minors of
the matrix flattening $F_{ab \mid cd}$ of equation (\ref{eq:Flat}) that do not
involve either of the entries $p_{1111}$ or $p_{2222}$.
\item More generally, for $n\ge 4$ and $\kappa\ge 2$, consider the
$\kappa$-state GM+I model on an $n$-taxon tree $T$. Then for each
edge $e$ of $T$, all $(\kappa+1)\times (\kappa+1)$ minors of the
flattening $F_e$ of $P$ that avoid all entries $p_{ii\dots i}$,
$i\in[\kappa]$ are phylogenetic invariants.
\end{enumerate}
\end{prop}
\begin{pf} We prove the first statement in detail.
From equation (\ref{eq:decomp}),
for
any $P=\phi_T(s)$ we have $P=(1-\delta)P_{GM}+\delta
P_I$, where $P_{GM}$ is a 4-dimensional table arising from the GM
model on $T$ and $P_I=\diag(\pi_I)$ is a diagonal table with entries
giving the distribution of states for the invariable sites.
Flattening these tables with respect to the internal edge of the
tree, we obtain
\begin{linenomath}
\begin{align}F_{ab \mid cd} &= (1-\delta) F_{GM} + \delta F_I\notag \\
&=(1 - \delta)
\begin{pmatrix}
\tilde p_{1111} & \tilde p_{1112} & \tilde p_{1121} & \tilde p_{1122}\\
\tilde p_{1211} & \tilde p_{1212} & \tilde p_{1221} & \tilde p_{1222}\\
\tilde p_{2111} & \tilde p_{2112} & \tilde p_{2121} & \tilde p_{2122}\\
\tilde p_{2211} & \tilde p_{2212} & \tilde p_{2221} & \tilde
p_{2222}
\end{pmatrix}
+
\delta \begin{pmatrix}
\pi_I(1) &\ 0\ &\ 0\ & 0 \\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & \pi_I(2) \\
\end{pmatrix}.\label{eq:Psum}
\end{align}
\end{linenomath}
By Theorem \ref{thm:GM}, all $3\times 3$ minors of $F_{GM}$ vanish.
Since the `upper right' and `lower left' minors of $F_{ab|cd}$ are the
same as those of $F_{GM}$, up to a factor of $(1-\delta)^3$, they also
vanish.
Straightforward
modifications to this argument give the general case.\hfill\qed
\end{pf}
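As a purely numerical sanity check of Proposition \ref{prop:invariants} (ours, and not one of the appendix computations), one can evaluate $f_1$ and $f_2$ on a distribution produced by the GM+I parameterization with random stochastic parameters; both determinants come out as zero up to rounding error. The snippet repeats the construction of $P$ and of the flattening so that it runs on its own.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
def rand_dist(k):
    d = rng.random(k)
    return d / d.sum()
def rand_markov(k):
    M = rng.random((k, k))
    return M / M.sum(axis=1, keepdims=True)

k, delta = 2, 0.3
pi_gm, pi_inv = rand_dist(k), rand_dist(k)
Ma, Mb, Mint, Mc, Md = (rand_markov(k) for _ in range(5))

# joint distribution on the tree ab|cd, rooted at the internal node next to a, b
P = np.zeros((k,) * 4)
for ia in range(k):
    for ib in range(k):
        for ic in range(k):
            for idd in range(k):
                P[ia, ib, ic, idd] = (1 - delta) * sum(
                    pi_gm[j1] * Ma[j1, ia] * Mb[j1, ib] * Mint[j1, j2]
                    * Mc[j2, ic] * Md[j2, idd]
                    for j1 in range(k) for j2 in range(k))
for i in range(k):
    P[i, i, i, i] += delta * pi_inv[i]

F = P.reshape(4, 4)                                   # flattening F_{ab|cd}
f1 = np.linalg.det(F[np.ix_([0, 1, 2], [1, 2, 3])])   # minor avoiding p_1111, p_2222
f2 = np.linalg.det(F[np.ix_([1, 2, 3], [0, 1, 2])])
print(f1, f2)                                         # both are zero up to rounding
\end{verbatim}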
For arbitrary $n,\kappa$, the GM+I model should have many other
invariants than those found here.
Among these is, of course, the stochastic invariant
\begin{linenomath}
$$f_s(P)=1-\sum_{\mathbf i\in [\kappa]^n} p_{\mathbf i}.$$
\end{linenomath}
In the simplest interesting case of the GM+I model, however, we
have the following computational result.
\begin{prop}\label{prop:invariantsK2} The phylogenetic ideal
for the $2$-state GM+I model on the $4$-taxon tree $T_{ab|cd}$ of Figure
\ref{fig:4taxa} is generated by $f_s$ and
the minors $f_1$, $f_2$ above;
\begin{linenomath}
$$I_T = \langle f_s, f_1, f_2 \rangle.$$
\end{linenomath}
\end{prop}
\begin{pf}
A computation of the Jacobian of the parameterization
$\phi_T: S \subset \C^{13} \to \C^{2^4}$ shows it has full rank at
some points, and so $V_T$ is of dimension 13. If $I =
\langle f_s,f_1, f_2 \rangle$, then $I \subseteq I_T$. Another computation
shows that $I$ is prime and of
dimension $13$. Thus, necessarily $I =
I_T$. (The code for these computations is given in Appendix
\ref{app:code}.)\hfill\qed
\end{pf}
Let $V_{ab | cd}$, $V_{ac | bc}$, $V_{ad | bc}$ be the varieties for
the $2$-state GM+I models for the three $4$-taxon binary tree topologies, with
corresponding phylogenetic ideals $I_{ab \mid cd}$, $I_{ac \mid bd}$,
$I_{ad \mid bc}$.
Of course Proposition \ref{prop:invariantsK2}
gives generators for each of these ideals --- two $3
\times 3$ minors of the flattenings of $P$ appropriate to those tree
topologies, along with $f_s$. A computation (see Appendix \ref{app:code}) shows
that these three ideals are distinct. Therefore the three varieties
are distinct, and their pairwise intersections are proper
subvarieties. Thus for any parameters $\mathbf s$
not lying in the inverse image
of these subvarieties, $T$ is uniquely determined from $\phi_T(\mathbf s)$.
Thus we obtain
\begin{cor} \label{cor:identTreeN4k2}
For the $2$-state GM+I model on binary 4-taxon trees, the tree parameter is
generically identifiable.
\end{cor}
As $\dim(V_{ab|cd})=13$, and the parameter space for $\phi_T$ is 13
dimensional, we also immediately obtain that the map $\phi_T$ is
generically finite. This yields
\begin{cor} \label{cor:idenNumN4k2}
For the $2$-state GM+I model on a binary 4-taxon tree,
numerical parameters are generically locally
identifiable.
\end{cor}
Note that this approach does not yield the cardinality of the
generic fiber of the parameterization map, which is also of
interest. We will return to this issue in Theorem
\ref{thm:genericIdent}.
\medskip
Further computations show that
$\dim(V_{ab|cd} \cap V_{ac|bd}\cap V_{ad|bc})=11$. As this
intersection contains all points arising from the GM+I
model on the 4-taxon star tree, which is an 11-parameter model, this
is not surprising. In fact, one can verify computationally
that the ideal $I_{ab|cd}+I_{ac|bd}+I_{ad|bc}$ is the defining
prime ideal of the star-tree variety.
We also note that the ideal $I_{ab|cd} + I_{ac|bd}$
decomposes into two primes, both of dimension 11. Thus the variety
defined by this ideal has two components, one of which is the variety
for the star tree.
\medskip
In principle, the ideal $I_T$ of all invariants for the GM+I model
on an arbitrary tree $T$ can be computed from the parameterization
map $\phi_T$ via an elimination of variables using Gr\"obner bases
\cite{MR2001c:92009}. However, if all invariants for the
$\kappa$-state GM model on $T$ are known, they can provide an
alternate approach to finding $I_T$ which, while still proceeding by
elimination, should be less computationally demanding.
To present this most simply, we note that because
our varieties lie in the hyperplane described by the stochastic invariant,
it is natural to consider their projectivizations,
lying in $\mathbb P^{\kappa^n-1}$ rather than $\C^{\kappa^n}$. The
corresponding phylogenetic ideals, which we denote by $J_T$,
are generated by the homogeneous polynomials in $I_T$, and do not contain the
stochastic invariant. Conversely, $I_T$ is generated by the elements of
$J_T$ together with the stochastic invariant.
In addition, we need
not restrict ourselves to the GM model, but rather deal with any
phylogenetic model parameterized by polynomials.
\begin{prop} \label{prop:elim}
Suppose $\widetilde \phi_T:\C^N\to \C^{\kappa^n}$ is a
parameterization map for some phylogenetic model $\mathcal M$ on
$T$, with corresponding homogeneous phylogenetic ideal $\widetilde
J_T$. Let
\begin{linenomath}
$$\phi_T:\C^{N}\times\C^\kappa\to
\C^{\kappa^n}$$
\end{linenomath}
be the parametrization map for the $\mathcal M$+I model
given by
\begin{linenomath}
$$\phi_T(\mathbf s,(\delta,\boldsymbol \pi_I))=(1-\delta)
\widetilde \phi_T(\mathbf s)+\delta \diag(\boldsymbol \pi_I).$$
\end{linenomath}
Let
$P'$ denote the collection of all indeterminate entries of $P$
except those in $P_{eq}=\{p_{ii\dots i}\mid i\in[\kappa]\}$. Then
the homogeneous phylogenetic ideal $J_T$ for the $\mathcal M$+I
model on $T$ is $J_T=\left (\widetilde J_T\cap \C[P'] \right)\C[P].$
Thus $J_T$ can be computed from $\widetilde J_T$ by elimination of
the variables in $P_{eq}$.
\end{prop}
\begin{pf}
Extend the parameterization maps $\widetilde \phi_T, \phi_T$ to
parameterizations of cones by introducing an additional parameter,
\begin{linenomath}
$$ \widetilde \Phi_T(\mathbf s,t)=t\,\widetilde\phi_T(\mathbf s)$$
$$
\Phi_T(\mathbf s,(\delta,\boldsymbol \pi_I ),t)
=t\,\phi_T(\mathbf s,(\delta,\boldsymbol \pi_I))$$
\end{linenomath}
Then
$\im(\Phi_T)=\C^\kappa\times\operatorname{proj}(\im(\widetilde \Phi_T)),$
where $\C^\kappa$ corresponds to coordinates in $P_{eq}$ and
`$\operatorname{proj}$' denotes
the projection map from $P$-coordinates to $P'$-coordinates. As $J_T$
is the ideal of polynomials vanishing on $\im(\Phi_T)$, and
$\tilde J_T\cap \C[P']$ the ideal
vanishing on $\operatorname{proj}(\im(\widetilde \Phi_T))$, the result follows.
\hfill\qed
\end{pf}
Using this, in the appendix we give an alternate computation to show
both part (\ref{prop:inv:item1}) of Proposition
\ref{prop:invariants}, and Proposition \ref{prop:invariantsK2}.
While this computation is quite fast, a more naive attempt to find
GM+I invariants directly from the full parameterization map using
elimination was unsuccessful, demonstrating the utility of the
proposition.
Moreover, we can use this proposition to compute all 2-state GM+I invariants
on the 5-taxon binary tree as well. This leads us to
\begin{conj} On an $n$-taxon binary tree, the ideal of homogeneous
invariants for the 2-state
GM+I model is generated by those $3\times 3$
minors of edge flattenings
that do not involve the variables $p_{11\dots1}$ and $p_{22\dots2}$,
together with the
stochastic invariant.
\end{conj}
\medskip
Although we are unable to determine all GM+I invariants for the
4-taxon tree for general $\kappa$, using only those described in
Proposition \ref{prop:invariants} we can still obtain
identifiability results through a modified argument.
\begin{prop}\label{prop:treeId} For the $\kappa$-state GM+I model on
binary 4-taxon trees, $\kappa\ge 2$, the tree parameter is
generically identifiable.
\end{prop}
\begin{pf} By the argument leading to Corollary \ref{cor:identTreeN4k2},
it is enough to show the varieties $V_{ab \mid cd}$, $V_{ac
\mid bd}$, and $V_{ad|bc}$ are distinct.
Considering, for example, the first two, we can
show that the varieties $V_{ab \mid cd}$ and $V_{ac
\mid bd}$ are distinct, by giving an invariant $f \in I_{ac \mid
bd}$ and a point $P_0\in V_{ab|cd}$
such that $f(P_0)\ne 0$.
Using Proposition \ref{prop:invariants}, we pick an
invariant $f \in I_{ac \mid bd}$ as follows: In the flattening
$F_{ac|bd}$ according to the split $ac|bd$, choose any collection
of $\kappa+1$ $ac$-indices with distinct $a$ and $c$ states, \emph{e.g.},
$\{12,13,\dots,1\kappa,21,23\}$. (Such a choice requires $\kappa(\kappa-1)\ge\kappa+1$, that is $\kappa\ge 3$; for $\kappa=2$ the claim is already Corollary \ref{cor:identTreeN4k2}, so we may assume $\kappa\ge 3$ here.) Using the same set as $bd$-indices,
this determines a $(\kappa+1)\times(\kappa+1)$-minor $f$.
We pick $P_0=\phi_{T_{ab|cd}}(\mathbf s)$ using the parameterization
of equation (\ref{eq:Pdef}) by making a specific choice of parameters
$\mathbf s$. On $T_{ab|cd}$, with the root $r$ located at one of the
internal nodes, choose parameters $\mathbf{s}$ as follows: Let
$\pr$, $\pI$ be arbitrary but with all entries of $\pr$ positive.
Pick any $\delta \in [0,1)$. For the four terminal edges choose
$M_e$ to be the $\kappa \times \kappa$ identity matrix
$I_\kappa$. For the single internal edge $e$ of $T$, choose any
Markov matrix $M_{e}$ with all positive entries. For such
parameters, the entries of the joint distribution $P_0 =
\phi_{T_{ab|cd}} (\mathbf{s})$ are zero except for the pattern
frequencies $p_{iijj}$, where the states at the leaves $a$ and $b$
agree and the states at the leaves $c$ and $d$ agree. Since the
entries of $M_{e}$ and the root distributions are positive, each of
the $p_{iijj} > 0$.
But considering the flattening $F_{ac \mid bd}$ of
$P_0=\phi_{T_{ab|cd}} (\mathbf{s})$ with respect to the `wrong'
topology $T_{ac \mid bd}$, we observe that the $\kappa^2$ non-zero
entries $p_{iijj}$ of $F_{ac \mid bd}$ all lie on the diagonal of
$F_{ac \mid bd}$, in the positions with $ij$ as both $ac$-index and
$bd$-index. Furthermore, by our choice of $f$, a subset of them
forms the diagonal of the submatrix whose determinant is $f$.
Therefore $f(P_0)\ne 0$.\hfill\qed
\end{pf}
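For concreteness, here is a small numerical sketch (in Python with NumPy; the sketch and all specific parameter values in it are illustrative additions, not part of the argument) that builds the point $P_0$ of the proof for $\kappa=3$ and evaluates the chosen minor of the flattening taken with respect to the wrong split $ac|bd$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
k = 3                                        # kappa = 3 states
delta = 0.2                                  # any delta in [0, 1)
pi_r = rng.random(k); pi_r /= pi_r.sum()     # root distribution, entries positive
pi_I = rng.random(k); pi_I /= pi_I.sum()     # invariable-site class distribution
M = rng.random((k, k)); M /= M.sum(axis=1, keepdims=True)   # internal edge matrix

# terminal edges carry identity matrices, so the GM part of P_0 is
# supported on the patterns iijj, with value pi_r(i) M[i, j]
P0 = np.zeros((k,) * 4)                      # indices (a, b, c, d)
for i in range(k):
    for j in range(k):
        P0[i, i, j, j] = (1 - delta) * pi_r[i] * M[i, j]
    P0[i, i, i, i] += delta * pi_I[i]

# flattening according to the *wrong* split ac|bd: rows (a, c), columns (b, d)
F = P0.transpose(0, 2, 1, 3).reshape(k * k, k * k)
idx = [1, 2, 3, 5]                           # ac-indices 12, 13, 21, 23
print(np.linalg.det(F[np.ix_(idx, idx)]))    # strictly positive
\end{verbatim}
The printed value is a product of positive entries, so $f(P_0)\ne 0$ for this choice of parameters.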
\begin{prop}(Recovery of invariable site parameters)\label{prop:idformulas}
\begin{enumerate}
\item For the 4-taxon tree $T_{ab|cd}$ and the 2-state GM+I model, suppose
$P=\phi_T(\mathbf s)$. Then generically the parameters in
$\mathbf s$ related to invariable sites can be recovered from $P$ by
the following formulas:
\begin{linenomath}
$$\delta=\frac {|A_1|+|A_2|}{|B|},\ \ \boldsymbol \pi_I=\frac 1{|A_1|+|A_2|} \left (
|A_1|,|A_2|\right ),$$ where $B=\begin{pmatrix}
p_{1212} & p_{1221} \\
p_{2112} & p_{2121}
\end{pmatrix}$,
$$A_1=\begin{pmatrix}
p_{1111} & p_{1112} & p_{1121}\\
p_{1211} & p_{1212} & p_{1221}\\
p_{2111} & p_{2112} & p_{2121}\\
\end{pmatrix}, \ \
A_2=\begin{pmatrix}
p_{1212} & p_{1221} & p_{1222}\\
p_{2112} & p_{2121} & p_{2122}\\
p_{2212} & p_{2221} & p_{2222}
\end{pmatrix}.
$$
\end{linenomath}
\item More generally, for the $\kappa$-state GM+I model on $T_{ab|cd}$,
the invariable site parameters can be recovered
from a generic point in the image of the parameterization map by
rational formulas of the form
\begin{linenomath}
$$\delta=\frac
{\sum_{i\in[\kappa]}|A_i|}{|B|}, \ \ \boldsymbol \pi_I=\frac
1{\sum_{i\in[\kappa]} |A_i|} \left ( |A_1|,|A_2|,\dots, |A_\kappa|\right
).$$
\end{linenomath}
Here $|B|$ is any $\kappa \times \kappa$ minor of $F_{ab|cd}$
that omits all rows and columns indexed by $ii$, and $|A_i|$ is
the $(\kappa+1)\times(\kappa+1)$ minor obtained by including all
rows and columns chosen for $B$ and in addition the $ii$ row and
$ii$ column.
\end{enumerate}
\end{prop}
\begin{pf} We
give the complete argument in the case $\kappa = 2$ first. For a
joint distribution $P \in \im(\phi_T)$, write $F_{ab \mid cd} =
(1-\delta) F_{GM} + \delta F_I$ as in equation (\ref{eq:Psum}). Since
$A_1$ is the `upper left' $3 \times 3$ submatrix of $F_{ab \mid
cd}$, using linearity properties of the determinant, and that all $3
\times 3$ minors of $F_{GM}$ evaluate to zero, we observe that
\begin{linenomath}
\begin{align*}
\vert A_1 \vert
&=(1 - \delta)^3 \left|
\begin{matrix}
\tilde p_{1111} & \tilde p_{1112} & \tilde p_{1121} \\
\tilde p_{1211} & \tilde p_{1212} & \tilde p_{1221} \\
\tilde p_{2111} & \tilde p_{2112} & \tilde p_{2121} \\
\end{matrix}
\right| + \left|
\begin{matrix}
\delta \pi_I(1) & 0 &\ 0 \\
0 & (1-\delta) \tilde p_{1212} &\ (1-\delta) \tilde p_{1221} \\
0 & (1-\delta) \tilde p_{2112} &\ (1-\delta) \tilde p_{2121} \\
\end{matrix}\right|\\
\\
&= \delta \pi_I(1) \left|
\begin{matrix}
(1-\delta)\tilde p_{1212} &\ (1-\delta)\tilde p_{1221} \\
(1-\delta) \tilde p_{2112} &\ (1-\delta )\tilde p_{2121} \\
\end{matrix}\right|.
\end{align*}
\end{linenomath}
Thus we have $\vert A_1 \vert = \delta \pi_I(1) \vert B \vert$. Now,
if $\vert B \vert \neq 0$, then
\begin{linenomath}
$$
\delta \pi_I(1) = \frac{\vert A_1 \vert}{\vert B \vert}.
$$
\end{linenomath}
As $|B|$ does not vanish on all of $V_T$, we have a rational formula
to compute $\delta \pi_I(1)$ for generic points on $V_T$.
Similarly, since $A_2$ is the `lower right' submatrix of $F_{ab \mid
cd}$, then
\begin{linenomath}
$$\delta \pi_I(2) =
\frac{\vert A_2 \vert}{\vert B \vert}.
$$
\end{linenomath}
Adding these together, we obtain the stated rational expression for
$\delta$.
Assuming additionally the generic condition that $\delta \neq 0$, then we find
\begin{linenomath}
$$\boldsymbol \pi_I =\left ( \frac{\vert A_1
\vert}{\vert A_1\vert + \vert A_2 \vert},
\frac{\vert A_2
\vert}{\vert A_1 \vert + \vert A_2 \vert}\right ).$$
\end{linenomath}
Thus the parameters $\delta, \boldsymbol \pi_I$ are
generically identifiable for GM+I on $T$.
One readily sees the argument above can be modified for arbitrary
$\kappa$.\hfill\qed
\end{pf}
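As a sanity check of these formulas, here is a minimal numerical sketch (in Python with NumPy, added for illustration only; the parameter values are arbitrary and nothing below is used elsewhere) that builds a $\kappa=2$ GM+I distribution on $T_{ab|cd}$, forms the flattening $F_{ab|cd}$, and recovers $\delta$ and $\boldsymbol\pi_I$ from the minors $|A_1|$, $|A_2|$, $|B|$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
det = np.linalg.det

def random_markov(k):
    M = rng.random((k, k))
    return M / M.sum(axis=1, keepdims=True)      # rows sum to one

k, delta = 2, 0.3
pi_r = rng.random(k); pi_r /= pi_r.sum()          # root distribution
pi_I = rng.random(k); pi_I /= pi_I.sum()          # invariable-site distribution
Ma, Mb, Me, Mc, Md = (random_markov(k) for _ in range(5))  # edge matrices

# joint distribution p_{ijkl} on the quartet tree ab|cd, root at the
# internal node adjacent to a and b, internal edge matrix Me
P = (1 - delta) * np.einsum('x,xi,xj,xy,yk,yl->ijkl', pi_r, Ma, Mb, Me, Mc, Md)
for i in range(k):
    P[i, i, i, i] += delta * pi_I[i]

F = P.reshape(k * k, k * k)                       # flattening F_{ab|cd}
A1 = F[np.ix_([0, 1, 2], [0, 1, 2])]              # rows/columns 11, 12, 21
A2 = F[np.ix_([1, 2, 3], [1, 2, 3])]              # rows/columns 12, 21, 22
B  = F[np.ix_([1, 2],    [1, 2])]                 # rows/columns 12, 21

print((det(A1) + det(A2)) / det(B), delta)        # recovered delta vs. truth
print(det(A1) / (det(A1) + det(A2)), pi_I[0])     # recovered pi_I(1) vs. truth
\end{verbatim}
Up to floating-point round-off, the printed values agree with the chosen parameters.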
Note that when $\kappa>2$ the above proposition gives many
alternative rational formulas for the invariable site parameters, as
there are many options for choosing the matrix $B$.
We now obtain our main result.
\begin{thm}\label{thm:genericIdent}
The $\kappa$-state GM+I model on $n$-taxon binary trees, with $n\ge
4$, $\kappa \ge 2$, is generically locally identifiable.
Furthermore, for an $n$-taxon tree with vertex set $V$, the fibers of
generic points of $V_T$ under the parametrization map have
cardinality $(\kappa!)^{|V|-n}$. Thus for generic points, label
swapping at internal nodes is the only source of
non-identifiability.
\end{thm}
\begin{pf} Suppose $T$ is an $n$-taxon tree with $P=\phi_T(\mathbf
s)$. Choose some subset of 4 taxa, say $\{a,b,c,d\}$, and suppose
the induced quartet tree is $T_{ab|cd}$. Then $P_{abcd}$, the
4-marginalization of $P$, is easily seen to be of the form
$P_{abcd}=\phi_{T_{ab|cd}}(\mathbf s_{abcd})$ where $\mathbf
s_{abcd}=g(\mathbf s)$ and $g$ is a surjective polynomial function.
But the tree
$T_{ab|cd}$ is generically identifiable by Proposition
\ref{prop:treeId}, and thus invariable site parameters in $\mathbf s_{abcd}$
are generically identifiable by Proposition \ref{prop:idformulas}.
As these coincide with the invariable site parameters in $\mathbf
s$, and generic conditions on $\mathbf s_{abcd}$ imply generic
conditions on $\mathbf s$, the invariable site parameters are
generically identifiable for the full $n$-taxon model.
As an $n$-taxon binary tree topology is determined
by the collection of all induced quartet tree topologies, one can now see
that $T$ is generically identifiable. Alternately,
using the identified invariable site parameters,
and assuming the additional
generic condition that $\delta\ne 1$, note that
\begin{linenomath}
$$P_{GM} =
\frac{1}{(1-\delta)} \left( P - \delta P_I\right)
$$
\end{linenomath}
is a joint
distribution arising from general Markov parameters. Thus generic
identifiability of the tree can also be obtained from
Steel's result for the GM model \cite{S94} applied to $P_{GM}$.
The generic identifiability of the remaining numerical parameters follows
from Chang's argument \cite{MR97k:92011} applied to $P_{GM}$.
Chang's approach also indicates the cardinality of the generic fiber is
$(\kappa!)^{|V|-n}$ due to the label swapping phenomenon.\hfill\qed
\end{pf}
\section{Estimating Invariable Sites Parameters}\label{sec:estInv}
The concrete result in Proposition \ref{prop:idformulas} gives
explicit rational formulas for recovering parameters relating to
invariable sites from the joint distribution. These can be viewed as
generalizations of the formulas found in \cite{SHL00} for
group-based models. As \cite{SHL00} develops the group-based model
formulas into a heuristic means of estimating the invariable site
parameters from data without performing a full Maximum Likelihood
fit of data to a tree under a $\mathcal M$+I model, one might
suspect the formulas of Proposition \ref{prop:idformulas} could be
used similarly without the need to assume $\mathcal M$ was
group-based, or approximately group-based.
We emphasize that however useful such an
estimate might be, it would not be intended to replace a more
statistical but time-consuming computation, such as obtaining the
Maximum Likelihood estimates for these parameters.
However, it is by no means obvious how to use these formulas well
even for a heuristic estimate. First, for a 4-taxon tree
we have many choices for the
matrix $B$, in fact
\begin{linenomath}
$$\binom{\kappa^2-\kappa}{\kappa}^2$$
\end{linenomath}
of them, so even for $\kappa=4$, there are 245,025 basic sets of the
formulae. Moreover, while these simple formulae
emerged from our method of proof, one could in fact modify them by
adding to any of them a rational function whose numerator is a
phylogenetic invariant for the GM+I model, and whose denominator is
not. Since the invariant vanishes on any joint distribution arising
from the model, the resulting formulae will still recover invariable
site information for generic parameters. Thus there are actually
infinitely many formulas for recovering invariable site parameters.
One can nonetheless consider simple averaging schemes using only the
basic formulas of Proposition \ref{prop:idformulas} and find that on
simulated data they perform quite well at approximately recovering
invariable site parameters from empirical distributions. However,
averaging the large number of formulas given here, and then also
averaging over a large sample of quartets,
as is proposed in \cite{SHL00}, is more
time consuming than one might wish for a fast heuristic. Moreover,
one must be aware that the denominator in these formulas may vanish
on an empirical distribution --- it is certain to be non-zero only
for true distributions for GM+I arising from generic parameters.
Nonetheless, it would be of interest to develop versions of these
formulas with good statistical estimation properties, as the GM+I
model encompasses models such as the GTR+I model which is often
preferred in biological data analysis to group-based+I models. Of
course addressing more general rate-variation models would be even
more desirable, though our results here are not sufficient for that.
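To illustrate the kind of heuristic discussed in this section, here is a minimal sketch (Python with NumPy, added for illustration; it assumes a $\kappa=2$ quartet with randomly chosen parameters and multinomial sampling of site patterns, and it comes with no statistical guarantees) that applies the same rational formula to an empirical flattening:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
det = np.linalg.det

def random_markov(k):
    M = rng.random((k, k))
    return M / M.sum(axis=1, keepdims=True)

k, delta = 2, 0.25
pi_r = rng.random(k); pi_r /= pi_r.sum()
pi_I = rng.random(k); pi_I /= pi_I.sum()
Ma, Mb, Me, Mc, Md = (random_markov(k) for _ in range(5))
P = (1 - delta) * np.einsum('x,xi,xj,xy,yk,yl->ijkl', pi_r, Ma, Mb, Me, Mc, Md)
for i in range(k):
    P[i, i, i, i] += delta * pi_I[i]

# empirical flattening from N sampled site patterns
N = 200_000
F_hat = rng.multinomial(N, P.reshape(-1)).reshape(4, 4) / N

A1 = F_hat[np.ix_([0, 1, 2], [0, 1, 2])]
A2 = F_hat[np.ix_([1, 2, 3], [1, 2, 3])]
B  = F_hat[np.ix_([1, 2],    [1, 2])]
print((det(A1) + det(A2)) / det(B), delta)   # noisy estimate vs. truth
\end{verbatim}
For large $N$ the estimate is typically close to the true $\delta$, but, as noted above, it degrades when the empirical $|B|$ is near zero.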
\section{Introduction and statement of main results}
Tensors appear in numerous mathematical and scientific contexts. The two contexts
most relevant for this paper are quantum information theory and algebraic
complexity theory, especially the study of the complexity of matrix
multiplication.
There are numerous notions of {\it rank} for tensors. One such, {\it analytic rank}, introduced in
\cite{MR2773103} and developed further in \cite{MR3964143}, is defined only over finite
fields.
In \cite{kopparty2020geometric} the authors define a new kind of rank for tensors,
valid over arbitrary fields, that is an asymptotic limit (as one enlarges the field)
of analytic rank; they call it {\it geometric rank} (\lq\lq geometric\rq\rq\ in contrast to \lq\lq analytic\rq\rq ) and establish its basic properties.
In this paper we begin a systematic study of geometric rank and what it reveals about the
geometry of tensors.
Let $T\in \mathbb C^\mathbf{a}{\mathord{ \otimes } } \mathbb C^\mathbf{b}{\mathord{ \otimes } } \mathbb C^\mathbf{c}$ be a tensor and let
$GR(T)$ denote the geometric rank of $T$ (see Proposition/Definition \ref{GRdef} below for the definition). For all tensors, one has $GR(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, and when $GR(T)<\operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, we say
$T$ has {\it degenerate geometric rank}. The case of geometric rank one was previously understood,
see Remark \ref{GRone}.
Informally, a tensor is
{\it concise} if it cannot be written as a tensor in a smaller ambient space (see
Definition \ref{concisedef} below for the
precise definition).
Our main results are:
\begin{itemize}
\item Classification of tensors with geometric rank two.
In particular, in $\mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$ there are exactly two concise tensors
of geometric rank two, and in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$, $m> 3$, there is a unique
concise tensor with geometric rank two (Theorem
\ref{GRtwo}).
\item Concise $1_*$-generic tensors (see Definition \ref{1stardef})
in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ with geometric rank at most three have tensor rank at least $2m-3$
and all other concise tensors of geometric rank at most three have
tensor rank at least $m+\lceil \frac {m-1}2 \rceil -2$ (Theorem \ref{GRthree}).
\end{itemize}
We also compute the geometric ranks of numerous tensors of interest
in \S\ref{Exsect}, and analyze the geometry associated to tensors with
degenerate geometric rank in \S\ref{GRgeom}, where we also point out especially intriguing
properties of tensors in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ of minimal border rank.
\subsection*{Acknowledgements} We thank Vladimir Lysikov for numerous
comments and pointing out a gap in the proof of Theorem \ref{GRthree}
in an earlier draft, Filip Rupniewski for pointing out
an error in an earlier version of Proposition \ref{comprex},
Hang Huang for providing a more elegant proof of Lemma \ref{linelem}
than our original one,
Harm Derksen for Remark \ref{gstablerem},
and Fulvio Gesmundo, Hang Huang, Christian Ikenmeyer and Vladimir Lysikov
for useful conversations. We also thank the anonymous referee for useful suggestions.
\section{Definitions and Notation}
Throughout this paper we give our vector spaces names: $A=\mathbb C^\mathbf{a},B=\mathbb C^\mathbf{b}, C=\mathbb C^\mathbf{c}$
and we often will take $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$. Write $\operatorname{End}(A)$ for
the space of linear maps $A\rightarrow A$ and $GL(A)$ for the invertible linear maps.
The dual space to $A$ is denoted $A^*$, its
associated projective space is $\mathbb P A$, and for $a\in A\backslash 0$,
we let $[a]\in \mathbb P A$ be its projection to projective space.
For a subspace $U\subset A$, $U^\perp\subset A^*$ is its annihilator.
For a subset $X\subset A$, $\langle X\rangle\subset A$ denotes its linear span.
We write $GL(A)\times GL(B)\times GL(C) \cdot T\subset A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$
for the orbit of $T$, and similarly for the images of $T$ under endomorphisms.
For a set $X$, $\overline{X}$ denotes its closure in the Zariski topology (which, for
all examples in this paper, will also be its closure in the Euclidean topology).
Given $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$, we let $T_A: A^*\rightarrow B{\mathord{ \otimes } } C$ denote the corresponding
linear map, and similarly for $T_B,T_C$. We omit the subscripts when there is no ambiguity. As examples, $T(A^*)$ means $T_A(A^*)$, and given $\beta\in B^*$, $T(\beta)$ means $T_B(\beta)$.
Fix bases $\{a_i\}$, $\{b_j\}$, $\{ c_k\}$ of $A,B,C$, let $\{\alpha_i\}$, $\{\beta_j\}$, $\{ \gamma_k\}$ be the corresponding dual bases of $A^*,B^*$ and $C^*$. The linear space $T(A^*)\subset B\otimes C$ is considered as a space of matrices, and is often presented as the image of a general point $\sum_{i =1}^\mathbf{a} x_i\alpha_i\in A^*$, i.e. a $\mathbf{b}\times \mathbf{c}$ matrix of linear forms in variables $\{x_i\}$.
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$. $T$ has {\it rank one} if
there exists nonzero $a\in A$, $b\in B$, $c\in C$ such that $T=a{\mathord{ \otimes } } b{\mathord{ \otimes } } c$.
For $r\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, write $\Mone^{\oplus r}=\sum_{\ell=1}^r a_\ell{\mathord{ \otimes } } b_\ell{\mathord{ \otimes } } c_\ell$.
We review various notions of rank for tensors:
\begin{definition} \
\begin{enumerate}
\item The smallest $r$ such that $T$ is a sum of $r$ rank one
tensors is called the {\it tensor rank} (or {\it rank}) of $T$ and is denoted $\bold R(T)$. This is
the smallest $r$ such that, allowing $T$ to be in a larger space, $T\in \operatorname{End}_r\times \operatorname{End}_r\times\operatorname{End}_r\cdot \Mone^{\oplus r}$.
\item The smallest $r$ such that $T$ is a limit of rank $r$ tensors
is called the {\it border rank} of $T$ and is denoted $\underline{\mathbf{R}}(T)$. This is the smallest
$r$ such that, allowing $T$ to be in a larger space, $T\in \overline{GL_r\times GL_r\times GL_r\cdot \Mone^{\oplus r}}$.
\item $(\bold{ml}_A,\bold{ml}_B,\bold{ml}_C):= ({\mathrm {rank}}\, T_A,{\mathrm {rank}}\, T_B,{\mathrm {rank}}\, T_C)$
are the three {\it multi-linear ranks} of $T$.
\item The largest $r$ such that $\Mone^{\oplus r}\in \overline{GL(A)\times GL(B)\times GL(C)\cdot T}$
is called the {\it border subrank} of $T$ and denoted $\underline{\bold Q}(T)$.
\item The largest $r$ such that $\Mone^{\oplus r}\in \operatorname{End}(A)\times \operatorname{End}(B)\times \operatorname{End}(C)\cdot T$ is called
the {\it subrank} of $T$ and denoted $\bold Q(T)$.
\end{enumerate}
\end{definition}
We have the inequalities
$$\bold Q(T)\leq \underline{\bold Q}(T)\leq \operatorname{min}\{\bold{ml}_A,\bold{ml}_B,\bold{ml}_C\}
\leq \operatorname{max}\{\bold{ml}_A,\bold{ml}_B,\bold{ml}_C\}\leq \underline{\mathbf{R}}(T)\leq \bold R(T),
$$
and all inequalities may be strict.
For example $M_{\langle 2\rangle}$ of Example \ref{mmex} satisfies
$\underline{\bold Q}(M_{\langle 2\rangle})=3$ \cite{kopparty2020geometric} and $\bold Q(M_{\langle 2\rangle})=2$ \cite[Prop. 15]{MR4210715}
and all multilinear ranks are $4$. Letting $\mathbf{b}\leq \mathbf{c}$, $T=a_1{\mathord{ \otimes } } (\sum_{j=1}^\mathbf{b} b_j{\mathord{ \otimes } } c_j)$
has $\bold{ml}_A(T)=1$, $\bold{ml}_B(T)=\mathbf{b}$.
A generic tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ satisfies $ \bold{ml}_A=\bold{ml}_B=\bold{ml}_C=m$
and $\underline{\mathbf{R}}(T)=O(m^2)$. The tensor
$T=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_1 {\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_1+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1$ satisfies
$\underline{\mathbf{R}}(T)=2$ and $\bold R(T)=3$.
We remark that very recently Kopparty and Zuiddam (personal
communication) have shown that a generic
tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ has subrank at most $3m^{\frac 23}$.
In contrast, the corresponding notions for matrices all coincide.
\begin{definition} \label{concisedef}
A tensor $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ is {\it concise} if
$\bold{ml}_A=\mathbf{a}$, $\bold{ml}_B=\mathbf{b}$, $\bold{ml}_C=\mathbf{c}$,
\end{definition}
The rank and border rank of a tensor $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ measure the complexity of
evaluating the corresponding bilinear map $T: A^*\times B^* \rightarrow C$ or trilinear form
$T: A^*\times B^*\times C^*\rightarrow \mathbb C$. A concise tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ of rank $m$ (resp. border rank $m$),
is said to be of {\it minimal rank} (resp. {\it minimal border rank}). It is a longstanding
problem to characterize tensors of minimal border rank, and how much larger
the rank can be than the border rank. The largest rank of any explicitly known
sequence of tensors
is $3m-o(m)$ \cite{MR3025382}. While tests exist to bound the ranks of tensors,
previous to this paper there was no general geometric criteria that would
lower bound tensor rank (now see Theorem \ref{GRthree} below).
The border rank is measured by a classical geometric object: secant
varieties of Segre varieties. The border subrank, to our knowledge,
has no similar classical object. In this paper we discuss how
geometric rank is related to classically studied questions in algebraic
geometry: linear spaces of matrices
with large intersections with the variety of matrices
of rank at most $r$. See Equation \eqref{Xi} for a precise statement.
Another notion of rank for tensors is the
{\it slice rank} \cite{Taoblog}, denoted by $\mathrm{SR}(T)$: it is the smallest $r$ such that there exist
$r_1,r_2,r_3$ such that $r=r_1+r_2+r_3$,
$A'\subset A$ of dimension $r_1$, $B'\subset B$ of dimension $r_2$, and
$C'\subset C$ of dimension $r_3$, such that
$T\in A'{\mathord{ \otimes } } B{\mathord{ \otimes } } C+ A{\mathord{ \otimes } } B'{\mathord{ \otimes } } C+ A{\mathord{ \otimes } } B{\mathord{ \otimes } } C'$.
It was originally introduced in the context of the cap set problem but has
turned out (in its asymptotic version) to be important for quantum information
theory and Strassen's laser method, more precisely, Strassen's theory
of asymptotic spectra, see \cite{MR3826254}.
\begin{remark}\label{gstablerem}
In \cite{derksen2020gstable} a notion of rank for tensors inspired by invariant theory,
called {\it $G$-stable rank} is introduced. Like geometric rank, it is bounded
above by the slice rank and below by the border subrank. Its relation to
geometric rank appears to be subtle: the $G$-stable rank of
the matrix multiplication tensor
$M_{\langle \mathbf{n} \rangle}$ equals $n^2$, which is greater than the geometric rank (see Example \ref{mmex}), but
the $G$-stable rank of $W:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_1+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1$
is $1.5$ ($G$-stable rank need not be integer valued), while $GR(W)=2$.
\end{remark}
Like multi-linear rank,
geometric rank generalizes row rank and column
rank of matrices, but unlike multi-linear rank, it salvages the fundamental theorem
of linear algebra that row rank equals column rank.
Let $Seg(\mathbb P A^*\times \mathbb P B^*)\subset \mathbb P (A^*{\mathord{ \otimes } } B^*)$ denote the
{\it Segre variety} of rank one elements.
Let $\Sigma^{AB}_T = \{ ([\alpha],[\beta])\in \mathbb P A^*\times \mathbb P B^* \mid T(\alpha,\beta,\cdot)=0\}$, so
\begin{align}
\label{sigmaab} Seg(\Sigma^{AB}_T)= \mathbb P (T(C^*)^{\perp})\cap Seg(\mathbb P A^*\times \mathbb P B^*)\end{align}
and let $\Sigma^A_j=\{[\alpha]\in \mathbb P A^*\mid {\mathrm {rank}}(T(\alpha))\leq \operatorname{min}\{\mathbf{b},\mathbf{c}\}-j\}$.
Let $\pi^{AB}_A: \mathbb P A^*\times \mathbb P B^* \rightarrow \mathbb P A^*$ denote the projection.
\begin{proposition/definition}\cite{kopparty2020geometric}\label{GRdef}
The following quantites are all equal and called the {\it geometric rank} of $T$, denoted
$GR(T)$:
\begin{enumerate}
\item $\text{codim} (\Sigma^{AB}_T, \mathbb P A^*\times \mathbb P B^*)$
\item $\text{codim} (\Sigma^{AC}_T, \mathbb P A^*\times \mathbb P C^*)$
\item $\text{codim} (\Sigma^{BC}_T, \mathbb P B^*\times \mathbb P C^*)$
\item $ \mathbf{a}+\operatorname{min}\{\mathbf{b},\mathbf{c}\} -1 - \operatorname{max}_{j}(\operatorname{dim}\Sigma^A_j +j) $
\item $ \mathbf{b}+\operatorname{min}\{\mathbf{a},\mathbf{c}\}-1 - \operatorname{max}_{j}(\operatorname{dim}\Sigma^B_j +j) $
\item $ \mathbf{c}+\operatorname{min}\{\mathbf{a},\mathbf{b}\} -1 - \operatorname{max}_{j}(\operatorname{dim}\Sigma^C_j +j) $.
\end{enumerate}
\end{proposition/definition}
\begin{proof}
The classical row rank equals column rank theorem implies that
when $\Sigma^A_j\neq \Sigma^A_{j+1}$,
the fibers of $\pi^{AB}_A$ are $\pp{j-1}$'s if $\mathbf{b}\geq \mathbf{c}$ and
$\pp{j-1+\mathbf{b}-\mathbf{c}}$'s when $\mathbf{b}<\mathbf{c}$.
The variety $\Sigma^{AB}_T$
is the union of the $(\pi^{AB}_A){}^{-1}(\Sigma^A_j)$, which
have dimension $\operatorname{dim}\Sigma^A_j+j-1$ when $\mathbf{b}\geq \mathbf{c}$ and $\operatorname{dim}\Sigma^A_j+j-1+\mathbf{b}-\mathbf{c}$
when $\mathbf{b}<\mathbf{c}$.
The dimension of a variety is the dimension of a largest dimensional irreducible component.
\end{proof}
\begin{remark} In \cite{kopparty2020geometric}
they work with $\hat\Sigma^{AB}_T:=\{ (\alpha,\beta)\in A^*\times B^* \mid T(\alpha,\beta,\cdot)=0\}$
and define
geometric rank to be
$
GR(T):=\text{codim}(\hat\Sigma^{AB}_T, A^*\times B^*)$. This is equivalent to our
definition. The equivalence is clear, except that $0\times B^*$ and $A^*\times 0$ are always contained in $\hat\Sigma^{AB}_T$, which immediately implies $ {GR}(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b}\}$ and by symmetry
$GR(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, whereas there is no corresponding subset of the projective variety $\Sigma^{AB}_T$. Since \eqref{sigmaab} implies
\begin{align*}
\operatorname{dim} \Sigma^{AB}_T&\geq
\operatorname{dim} \mathbb P (T(C^*)^\perp) + \operatorname{dim} Seg(\mathbb P A^*\times \mathbb P B^*)- \operatorname{dim} \mathbb P (A^*{\mathord{ \otimes } } B^*)\\
&= \mathbf{a}\mathbf{b}-\mathbf{c}-1+\mathbf{a}+\mathbf{b}-2-(\mathbf{a}\mathbf{b}-1) \\
&=\mathbf{a}+\mathbf{b}-\mathbf{c}-2
\end{align*}
we still have $ {GR}(T)\leq \mathbf{c}$ and by symmetry $GR(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$ using our definition. We note that for tensors with more factors, one must be more careful when working
projectively.
\end{remark}
One has $\underline{\bold Q}(T)\leq GR(T)\leq SR(T)$ \cite{kopparty2020geometric}.
In particular, one may use geometric rank to bound the border subrank.
An example of such a bound
was an important application in \cite{kopparty2020geometric}.
\begin{remark}\label{GRone}
The set of tensors with slice rank one is the
set of tensors living in some $\mathbb C^1{\mathord{ \otimes } } B{\mathord{ \otimes } } C$
(after possibly re-ordering and re-naming factors), and the same is true for tensors with geometric rank one. Therefore for any tensor $T$, $GR(T)=1$ if and only if $\mathrm{SR}(T)=1$.
\end{remark}
\begin{definition}\label{1stardef} Let $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$.
A tensor is {\it $1_A$-generic} if $T(A^*)\subset B{\mathord{ \otimes } } C$ contains
an element of full rank $m$, {\it binding} if it is at least two of $1_A$, $1_B$, $1_C$ generic,
{\it $1_*$-generic} if it is $1_A$, $1_B$ or $1_C$-generic,
and it is {\it $1$-generic} if it is
$1_A,1_B$ and $1_C$-generic. A tensor is {\it $1_A$-degenerate} if it is not $1_A$-generic.
Let $1_A-degen$ denote the variety
of tensors that are not $1_A$-generic,
and let $1-degen$ denote the variety of tensors that are $1_A,1_B$ and $1_C$ degenerate.
\end{definition}
$1_A$-genericity is important in the
study of tensors as
Strassen's equations \cite{Strassen505} and more generally
Koszul flattenings \cite{MR3081636} fail
to give good lower bounds for tensors that are $1$-degenerate.
Binding tensors are those that arise as structure tensors of algebras, see \cite{MR3578455}.
Defining equations for $1_A-degen$
are given by the module $S^mA^*{\mathord{ \otimes } } \La m B^*{\mathord{ \otimes } } \La m C^*$, see \cite[Prop. 7.2.2.2]{MR2865915}.
\begin{definition} \label{bndrkdef} A subspace $E\subset B{\mathord{ \otimes } } C$ is of {\it bounded rank $r$}
if for all $X\in E$,
${\mathrm {rank}}(X)\leq r$.
\end{definition}
\section{Statements of main results}
Let ${\mathcal G}{\mathcal R}_{s}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)\subset \mathbb P (A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ denote
the set of tensors of geometric rank at most $s$ which is Zariski closed \cite{kopparty2020geometric},
and write ${\mathcal G}{\mathcal R}_{s,m}:={\mathcal G}{\mathcal R}_{s}(\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m)$.
By Remark \ref{GRone}, ${\mathcal G}{\mathcal R}_{1}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the variety of tensors that
live in some $\mathbb C^1{\mathord{ \otimes } } B{\mathord{ \otimes } } C$, $A{\mathord{ \otimes } } \mathbb C^1{\mathord{ \otimes } } C$,
or $A{\mathord{ \otimes } } B{\mathord{ \otimes } } \mathbb C^1$.
In what follows, a statement of the form \lq\lq there exists a unique tensor...\rq\rq,
or \lq\lq there are exactly two tensors...\rq\rq,
means up to the action of $GL(A)\times GL(B)\times GL(C)\rtimes \FS_3$.
\begin{theorem} \label{GRtwo} For $\mathbf{a},\mathbf{b},\mathbf{c}\geq3$,
the variety $\mathcal{GR}_{2}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the variety of tensors $T$ such that $T(A^*)$, $T(B^*)$,
or $T(C^*)$ has bounded rank 2.
There are exactly two concise tensors in $\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3$ with $GR(T)=2$:
\begin{enumerate}
\item The unique up to scale skew-symmetric tensor $T=\sum_{\sigma\in \FS_3}\tsgn(\sigma) a_{\sigma(1)}{\mathord{ \otimes } } b_{\sigma(2)}{\mathord{ \otimes } } c_{\sigma(3)}\in \La 3\mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$
and
\item $T_{utriv,3}:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1 + a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_2+ a_1{\mathord{ \otimes } } b_3{\mathord{ \otimes } } c_3
+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_3{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_3\in S^2\mathbb C^3{\mathord{ \otimes } } \mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$.
\end{enumerate}
There is a unique concise tensor $T\in \mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
satisfying $GR(T)=2$ when $m>3$,
namely
$$
T_{utriv,m}:=
a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1+ \sum_{\rho=2}^m [a_1{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_\rho + a_\rho{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_\rho].
$$
This tensor satisfies $\underline{\mathbf{R}}(T_{utriv,m})=m$ and $\bold R(T_{utriv,m})=2m-1$.
\end{theorem}
In the $m=3$ case (1) of Theorem \ref{GRtwo} we have $\Sigma^{AB}_T\cong \Sigma^{AC}_T\cong \Sigma^{BC}_T\cong \mathbb P A^*\subset \mathbb P A^*\times \mathbb P A^*$
embedded diagonally and $\Sigma^A_1=\Sigma^B_1=\Sigma^C_1=\mathbb P A^*$.
In the $m=3$ case (2) of Theorem \ref{GRtwo} we have
\begin{align*}
\Sigma^{AB}_T&=\mathbb P\langle \alpha_2,\alpha_3\rangle \times \mathbb P \langle \beta_2,\beta_3\rangle=
\pp 1\times \pp 1\\
\Sigma^{AC}_T&=\{([ s\alpha_2+t\alpha_3 ] , [
u\gamma_1 + v( -t\gamma_2+s\gamma_3)])\in \mathbb{P}A\times\mathbb{P}C \mid [s,t]\in \pp 1, [u,v]\in \pp 1\} \\
\Sigma^{BC}_T&=\{([ s\beta_2+t\beta_3 ], [
u\gamma_1 + v( -t\gamma_2+s\gamma_3)])\in\mathbb{P}B\times\mathbb{P}C \mid [s,t]\in \pp 1, [u,v]\in \pp 1 \}.
\end{align*}
If one looks at the scheme structure,
$\Sigma^A_2$, $\Sigma^B_2$ are lines with multiplicity three and $\Sigma^C_1=\mathbb P C^*$.
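The containment $\mathbb P\langle \alpha_2,\alpha_3\rangle \times \mathbb P \langle \beta_2,\beta_3\rangle\subseteq\Sigma^{AB}_T$ in case (2) is easy to check directly on the tensor; the following minimal sketch (Python with NumPy, an illustrative addition with arbitrary random values) contracts $T_{utriv,3}$ against covectors with vanishing first coordinate:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)
T = np.zeros((3, 3, 3))
T[0, 0, 0] = T[0, 1, 1] = T[0, 2, 2] = 1     # a_1 (b_1 c_1 + b_2 c_2 + b_3 c_3)
T[1, 0, 1] = T[2, 0, 2] = 1                  # a_2 b_1 c_2 + a_3 b_1 c_3
a = np.concatenate([[0.0], rng.random(2)])   # alpha in <alpha_2, alpha_3>
b = np.concatenate([[0.0], rng.random(2)])   # beta  in <beta_2,  beta_3>
print(np.allclose(np.einsum('ijk,i,j->k', T, a, b), 0))          # True
print(np.allclose(np.einsum('ijk,i,j->k', T, rng.random(3),
                            rng.random(3)), 0))                   # False
\end{verbatim}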
\begin{remark}
The tensor $T_{utriv,m}$ has appeared several times in the literature:
it is the structure tensor of the trivial algebra with unit (hence the name), and
it has the largest symmetry group of any binding tensor \cite[Prop. 3.2]{2019arXiv190909518C}.
It is also closely related to Strassen's tensor of \cite{MR882307}: it
is the sum of Strassen's tensor with a rank one tensor.
\end{remark}
Theorem \ref{GRtwo} is proved in \S\ref{GRtwopf}.
\begin{theorem} \label{GRthree}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ be concise and assume $\mathbf{c}\geq\mathbf{b}\geq \mathbf{a}>4$.
If $GR(T)\leq 3$, then
$\bold R(T)\geq \mathbf{b}+ \lceil {\frac {\mathbf{a}-1}2}\rceil -2$.
If moreover $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$ and $T$ is
$1_*$-generic, then $\bold R(T)\geq 2m-3$.
\end{theorem}
In contrast to ${\mathcal G}{\mathcal R}_{1,m}$ and ${\mathcal G}{\mathcal R}_{2,m}$, the variety ${\mathcal G}{\mathcal R}_{3,m}$
is not just the set of tensors $T$ such that $T(A^*),T(B^*)$ or $T(C^*)$ has bounded rank 3. Other examples include
the structure tensor for $2\times 2$ matrix multiplication $M_{\langle 2\rangle}}\def\Mthree{M_{\langle 3\rangle}$ (see Example \ref{mmex}),
the large and small Coppersmith-Winograd tensors (see Examples
\ref{CWqex} and \ref{cwqex}) and others (see \S\ref{moregr3}).
Theorem \ref{GRthree} gives the first algebraic way to lower bound tensor rank. Previously,
the only technique to bound tensor rank beyond border rank was the {\it substitution
method} (see \S\ref{submethrev}), which is not algebraic or systematically implementable.
Theorem \ref{GRthree} is proved in \S\ref{GRthreepf}.
\section{Remarks on the geometry of geometric rank}\label{GRgeom}
\subsection{Varieties arising in the study of geometric rank}
Let $G(m,V)$ denote the Grassmannian of $m$-planes through the origin in the
vector space $V$.
Recall the correspondence (see, e.g., \cite{MR3682743}):
\begin{align}\nonumber
&\{ A\text{-concise tensors}\ T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } }
C\}/\{ GL(A)\times GL(B)\times GL(C)-{\rm equivalence}\}\\
&\label{tscorresp} \leftrightarrow\\
&\nonumber \{ \mathbf{a}-{\rm planes} \ E\in G(\mathbf{a} , B{\mathord{ \otimes } } C)\}/\{ GL(B)\times GL(C)-{\rm equivalence}.\}
\end{align}
It makes sense to study the $\Sigma^A_j$
separately, as they have different geometry.
To this end define $GR_{A,j}(T)= \mathbf{a}+\operatorname{min}\{\mathbf{b},\mathbf{c}\}-1 - \operatorname{dim}\Sigma^A_j -j$.
Let ${\mathcal G}{\mathcal R}_{r, A,j}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)=\{[T]\in \mathbb P (A{\mathord{ \otimes } } B{\mathord{ \otimes } } C ) \mid GR_{A,j}(T)\leq r\}$.
Let $\sigma_r(Seg(\mathbb P B\times \mathbb P C))\subset \mathbb P (B{\mathord{ \otimes } } C)$ denote
the variety of $\mathbf{b}\times \mathbf{c}$ matrices of rank at most $r$.
By the correspondence \eqref{tscorresp}, the study of ${\mathcal G}{\mathcal R}_{r,A,j}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the
study of the variety
\begin{equation}\label{Xi}
\{ E\in G(\mathbf{a} ,B^*{\mathord{ \otimes } } C^*)
\mid \operatorname{dim}(\mathbb P E\cap \sigma_{\operatorname{min}\{\mathbf{b},\mathbf{c}\}-j}(\mathbb P B^*\times \mathbb P C^*))\geq \mathbf{a}+\operatorname{min}\{\mathbf{b},\mathbf{c}\} -j-1-r\}.
\end{equation}
The following is immediate from the definitions, but since it is significant
we record it:
\begin{observation} ${\mathcal G}{\mathcal R}_{\mathbf{a}-1, A,1}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C) = 1_A-degen$. In particular, tensors that are
$1_A$, $1_B$, or $1_C$ degenerate have degenerate geometric rank.
${\mathcal G}{\mathcal R}_{\mathbf{a}-1 , A,\mathbf{a}}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the set of tensors that fail to be $A$ concise. In particular,
non-concise tensors do not have maximal geometric rank.
\end{observation}
It is classical that $\operatorname{dim} \sigma_{m-j}(Seg(\pp{m-1}\times \pp{m-1}))=m^2-j^2-1$.
Thus for a general tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$, $\operatorname{dim}(\Sigma^A_j)=m-j^2$. In particular, it
is empty when $j>\sqrt{m}$.
\begin{observation}\label{Rmin} If $T\in\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ is concise and $GR(T)<m$, then
$\bold R(T)>m$.
\end{observation}
\begin{proof} If $T$ is concise $\bold R(T)\geq m$, and for
equality to hold it can be written as $\sum_{j=1}^m a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j$ for some bases $\{a_j\},\{b_j\}$ and $\{c_j\}$ of $A,B$ and $C$ respectively. But $GR(\sum_{j=1}^m a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j)=m$.
\end{proof}
\begin{question} For concise $1_*$-generic tensors $T\in \mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } }\mathbb C^m$,
is $\bold R(T)\geq 2m-GR(T)$?
\end{question}
\subsection{Tensors of minimal border rank}
If $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C=\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ is concise of minimal border rank $m$,
then there exist complete flags in $A^*,B^*,C^*$, $0\subset A_1^*\subset A_2^*\subset
\cdots \subset A_{m-1}^*\subset A^*$ etc..
such that $T|_{A_j^*{\mathord{ \otimes } } B_j^*{\mathord{ \otimes } } C_j^*}$ has border rank at most $j$, see \cite[Prop. 2.4]{CHLlaser}.
In particular, $\operatorname{dim}( \mathbb P T(A^*)\cap \sigma_j(\mathbb P B\times \mathbb P C))\geq j-1$.
If the inequality is strict for some $j$, say equal to $j-1+q$, we say {\it the $(A,j)$-th flag condition
for minimal border rank is passed with excess $q$}.
\begin{observation} The geometric rank of
a concise tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
is $m$ minus the largest excess of the $(A,j)$ flag conditions for minimal
border rank.
\end{observation}
We emphasize that a tensor with degenerate geometric rank need not have minimal
border rank, and need not pass all the $A$-flag conditions for minimal border rank,
just that one of the conditions is passed with excess.
\section{Examples of tensors with degenerate geometric ranks}\label{Exsect}
\subsection{Matrix multiplication and related tensors}
\begin{example}[Matrix multiplication]\label{mmex} Set $m=n^2$. Let $U,V,W=\mathbb C^n$.
Write $A=U^*{\mathord{ \otimes } } V$, $B=V^*{\mathord{ \otimes } } W$, $C=W^*{\mathord{ \otimes } } U$.
The structure tensor of matrix multiplication is
$T=M_{\langle \mathbf{n} \rangle}=\operatorname{Id}_U{\mathord{ \otimes } }\operatorname{Id}_V{\mathord{ \otimes } } \operatorname{Id}_W$ (re-ordered), where $\operatorname{Id}_U\in U^*{\mathord{ \otimes } } U$ is
the identity.
When $n=2$, $\Sigma^{AB}=Seg(\mathbb P U^*\times \mathcal I_V\times \mathbb P W)$,
where $\mathcal I_V=\{ [v]\times [\nu]\in \mathbb P V\times \mathbb P V^* \mid \nu(v)=0\}$; this product has dimension $3$, so $GR(M_{\langle 2\rangle})=6-3=3$.
Note that
$\Sigma^A_2=
\Sigma^A_1=Seg(\mathbb P U^*\times \mathbb P V)=Seg(\pp 1\times\pp 1)$ (with multiplicity two).
For $[\mu{\mathord{ \otimes } } v]\in \Sigma^A_2$, $\pi_A^{-1}[\mu{\mathord{ \otimes } } v]=
\mathbb P (\mu{\mathord{ \otimes } } v{\mathord{ \otimes } } v^\perp{\mathord{ \otimes } } W)\cong \pp 1$. Since
the tensor is $\BZ_3$-invariant the same holds for $\Sigma^B,\Sigma^C$.
For larger $n$, the dimension of the fibers of $\pi^{AB}_A$ varies with the
rank of $X\in \{\operatorname{det}_{n}=0\}$. The fiber is
$[X]\times \mathbb P ({\rm Rker}(X){\mathord{ \otimes } } W)$, which has dimension $(n-{\mathrm {rank}}(X))n-1$.
Write $r={\mathrm {rank}}(X)$.
Each $r$ gives rise to a $(n-r)n-1+(2nr-r^2-1)=n^2-r^2+nr-2$ dimensional component
of $\Sigma^{AB}$. There are $n-1$ components, the largest dimension
is attained when $r=\lceil \frac n2\rceil$, where the
dimension is $n^2+\lceil \frac n2\rceil\lfloor \frac n2\rfloor -2$
and we recover the result of \cite{kopparty2020geometric} that $GR(M_{\langle \mathbf{n} \rangle})=\lceil \frac 34 n^2\rceil=\lceil \frac 34 m\rceil$, caused by $\Sigma^A_{\lceil \frac n 2\rceil}$.
\end{example}
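The maximization over $r$ above is easily checked by brute force; the following minimal sketch (an illustrative addition, not part of the example) compares the resulting codimension with $\lceil \frac 34 n^2\rceil$ for small $n$:
\begin{verbatim}
from math import ceil

def gr_matmul(n):
    # component of Sigma^{AB} coming from matrices X of rank r has
    # dimension n^2 + n r - r^2 - 2; maximize over 1 <= r <= n-1
    dim_sigma = max(n * n + n * r - r * r - 2 for r in range(1, n))
    return 2 * (n * n - 1) - dim_sigma          # codim in P^{n^2-1} x P^{n^2-1}

for n in range(2, 9):
    assert gr_matmul(n) == ceil(3 * n * n / 4)
print("GR(M_n) = ceil(3 n^2 / 4) checked for n = 2, ..., 8")
\end{verbatim}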
\begin{example}[Structure Tensor of $\mathfrak{sl}_n$] Set $m=n^2-1$.
Let $U=\mathbb C^n$, let $A=B=C={\mathfrak {sl}}_n={\mathfrak {sl}}(U)$. For $a,b\in {\mathfrak {sl}}_n$, $[a,b]$ denotes their commutator.
Let $T_{{\mathfrak {sl}}_n}\in\mathfrak{sl}_n(\mathbb{C})^{\otimes 3}$ be
the structure tensor of ${\mathfrak {sl}}_n$: $T_{{\mathfrak {sl}}_n}=\sum_{i,j=1}^{n^2-1} a_i\otimes b_j\otimes [a_i,b_j]$. Then $\hat{\Sigma}^{AB} =\{(x,y)\in A^*\times B^*|[x,y]=0\}$.
Let $C(2,n):=\{(x,y)\in U^*{\mathord{ \otimes } } U\times U^*{\mathord{ \otimes } } U\,|\,xy=yx\}$. In \cite{MR86781} it was shown that
$C(2,n)$ is irreducible. Its dimension is
$n^2+n$, which
was computed in \cite[Prop. 6]{MR1753173}.
Therefore
$\hat{\Sigma}^{AB}=(\mathfrak{sl}_n(\mathbb{C})\times\mathfrak{sl}_n(\mathbb{C}))\cap C(2,n)$
has dimension $n^2+n-2$, and $GR(T_{{\mathfrak {sl}}_n})=\mathrm{dim}(\mathfrak{sl}_n(\mathbb{C})\times\mathfrak{sl}_n(\mathbb{C}))-\mathrm{dim}\hat{\Sigma}^{AB}=n^2-n
=m+1-\sqrt{m+1}$.
\end{example}
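The dimension $n^2+n-2$ can be probed numerically: for generic traceless $x$, the centralizer of $x$ in ${\mathfrak{gl}}_n$ has dimension $n$, so the fiber of $\hat\Sigma^{AB}$ over $x$ inside ${\mathfrak{sl}}_n$ has dimension $n-1$. A minimal sketch (Python with NumPy, an illustrative addition with an arbitrary random $x$):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(3)
n = 5
x = rng.random((n, n)); x -= (np.trace(x) / n) * np.eye(n)   # generic traceless
ad = np.kron(np.eye(n), x) - np.kron(x.T, np.eye(n))         # y -> [x, y] on vec(y)
centralizer = n * n - np.linalg.matrix_rank(ad)              # = n generically
# generic fiber inside sl_n has dimension centralizer - 1, so:
print((n * n - 1) + (centralizer - 1), n * n + n - 2)        # both equal 28 for n = 5
\end{verbatim}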
\begin{example}[Symmetrized Matrix Multiplication] Set $m=n^2$.
Let $A=B=C=U^*{\mathord{ \otimes } } U$, with $\operatorname{dim} U=n$.
Let $T=SM_{<n>}\in (U^*{\mathord{ \otimes } } U)^{\otimes 3}$ be
the symmetrized matrix multiplication tensor: $SM_{<n>}(X,Y,Z):=\mathrm{tr}(XYZ)+\mathrm{tr}(YXZ)$.
In \cite{MR3829726} it was shown that the exponent of $SM_{\langle n\rangle}$ equals the exponent
of matrix multiplication. On the other hand, $SM_{\langle n\rangle}$ is a cubic polynomial and thus
may be studied with more tools from classical algebraic geometry, which raises the hope of new
paths towards determining the exponent.
Note that $SM_{<n>}(X,Y,\cdot)=0$ if and only if $XY+YX=0$. So $\hat{\Sigma}^{AB}=\{(X,Y)\in U^*{\mathord{ \otimes } } U\times U^*{\mathord{ \otimes } } U\,|\,XY+YX=0\}$.
Fix any matrix $X$, let $M_X$ and $M_{-X}$ be two copies of $\mathbb{C}^n$ with $\mathbb{C}[t]$-module structures: $t\cdot v:=Xv,\forall v\in M_X$ and $t\cdot w:=-Xw,\forall w\in M_{-X}$, where $\mathbb{C}[t]$ is the polynomial ring.
For any linear map $\varphi:M_X\rightarrow M_{-X}$,
\begin{align*}
\varphi\in\mathrm{Hom}_{\mathbb{C}[t]}(M_X,M_{-X}) & \iff\varphi(tv)=t\varphi(v),\forall v\in M_X\\
&\iff\varphi(Xv)=-X\varphi(v), \forall v\in M_X\\
&\iff\varphi X=-X \varphi.
\end{align*}
This gives a vector space isomorphism $(\pi^{AB}_A)^{-1}(X):=\{Y|XY+YX=0\}\cong \mathrm{Hom}_{\mathbb{C}[t]}(M_X,M_{-X})$.
By the structure theorem of finitely generated modules over
principal ideal domains, $M_X$ has a primary decomposition:
$$M_X\cong \quot{\mathbb{C}[t]}{(t-\lambda_1)^{r_1}}\oplus\cdots\oplus\quot{\mathbb{C}[t]}{(t-\lambda_k)^{r_k}}$$
for some $\lambda_i\in\mathbb{C}$ and $\sum r_i=n$. Replacing $t$ with $-t$ we get a decomposition of $M_{-X}$:
$$M_{-X}\cong \quot{\mathbb{C}[t]}{(t+\lambda_1)^{r_1}}\oplus\cdots\oplus\quot{\mathbb{C}[t]}{(t+\lambda_k)^{r_k}}.
$$
We have the decomposition $\mathrm{Hom}_{\mathbb{C}[t]}(M_X,M_{-X})\cong \bigoplus\limits_{i,j}\mathrm{Hom}_{\mathbb{C}[t]}(\quot{\mathbb{C}[t]}{(t-\lambda_i)^{r_i}},\quot{\mathbb{C}[t]}{(t+\lambda_j)^{r_j}})$. For each $i,j$:
\begin{align*}&\mathrm{Hom}_{\mathbb{C}[t]}\left(\quot{\mathbb{C}[t]}{(t-\lambda_i)^{r_i}},\quot{\mathbb{C}[t]}{(t+\lambda_j)^{r_j}}\right)\\
&=\left\{
\begin{array}{ll}
\langle 1\mapsto(t-\lambda_i)^l\,|\,0\leq l\leq r_j-1\rangle &\mathrm{if}\; \lambda_i+\lambda_j=0\;\mathrm{and}\;r_i\geq r_j;\\
\langle 1\mapsto(t-\lambda_i)^l\,|\,r_j-r_i\leq l\leq r_j-1\rangle &\mathrm{if}\; \lambda_i+\lambda_j=0\;\mathrm{and}\;r_i< r_j;\\
0&\mathrm{otherwise.}
\end{array}
\right.
\end{align*}
Let $d_{ij}(X)$
denote its dimension, then $d_{ij}(X)=\left\{
\begin{array}{ll}
\mathrm{min}\{r_i,r_j\}&\mathrm{if}\; \lambda_i+\lambda_j=0;\\
0&\mathrm{otherwise.}
\end{array}
\right.$
Thus $\mathrm{dim}((\pi^{AB}_A)^{-1} (X))=\sum\limits_{i,j} d_{ij}(X)$.
Each direct summand $\quot{\mathbb{C}[t]}{(t-\lambda_i)^{r_i}}$ of $M_X$ corresponds to a Jordan block of the Jordan canonical form of $X$ with size $r_i$ and eigenvalue $\lambda_i$, denoted as $J_{\lambda_i}(r_i)$.
Assume $X$ has eigenvalues $\pm\lambda_1, \cdots,\pm\lambda_k,\lambda_{k+1},\cdots,\lambda_{l}$ such that $\lambda_i\neq\pm\lambda_j$ whenever $i\neq j$. Let $q_{X,1}(\lambda)\geq q_{X,2}(\lambda) \geq \cdots$ be the decreasing sequence of sizes of the Jordan blocks of $X$ corresponding to the eigenvalue $\lambda$. Let $W(X)$ be the set of matrices $X'$ with eigenvalues $\pm\lambda'_1, \cdots,\pm\lambda'_k,\lambda'_{k+1},\cdots,\lambda'_{l}$ such that $\lambda'_i\neq\pm\lambda'_j$ whenever $i\neq j$, and $q_{X,j}(\pm \lambda_i)=q_{X',j}(\pm \lambda'_i)\,\forall i,j$. Then $W(X)$ is quasi-projective and irreducible of dimension $\mathrm{dim}W(X)=\mathrm{dim}\{P^{-1}XP\,|\,\mathrm{det}P\neq 0\}+l$, and $(\pi^{AB}_A)^{-1}(X')$ is
of the same dimension as $(\pi^{AB}_A)^{-1}(X)$ for all $X'\in W(X)$.
By results in \cite{MR1355688}, the codimension of the orbit of $X$
under the adjoint action of $GL(U)$ is $c_{Jor}(X):=\sum_{\lambda}[q_{X,1}(\lambda)+3q_{X,2}(\lambda)+5q_{X,3}(\lambda)+\cdots]$. Then
$$\mathrm{dim}\hat{\Sigma}^{AB}=\max_X(\mathrm{dim}W(X)+\mathrm{dim}(\pi^{AB}_{A}){}^{-1}(X))=\max_X(n^2-c_{Jor}(X)+ \operatorname{dim}(\pi_{A}^{AB}){}^{-1}(X)+l)$$
because
$\hat \Sigma^{AB}=\cup_X(\pi^{AB}_A){}^{-1} (W(X))$ is a finite union.
It is easy to show that $\operatorname{dim}(\pi_{A}^{AB}){}^{-1}(X)-c_{Jor}(X)$ takes maximum $0$ when for every eigenvalue $\lambda_i$ of $X$, $-\lambda_i$ is an eigenvalue of $X$ and $q_{X,j}(\lambda_i)=q_{X,j}(-\lambda_i),\forall i,j$. So the total maximum is achieved when $X$ has the maximum possible number of distinct pairs $\pm\lambda_i$, i.e.,
$$X\cong\left\{
\begin{array}{ll}
\mathrm{diag}(\lambda_1,-\lambda_1,\lambda_2,-\lambda_2,\cdots,\lambda_{\frac{n}{2}},-\lambda_{\frac{n}{2}}) &\mathrm{if}\; n\; \mathrm{is\;even};\\
\mathrm{diag}(\lambda_1,-\lambda_1,\lambda_2,-\lambda_2,\cdots,\lambda_{\frac{n-1}{2}},-\lambda_{\frac{n-1}{2}},0) &\mathrm{if}\; n\; \mathrm{is\;odd}.
\end{array}
\right.
$$
In both cases $\mathrm{dim}\hat{\Sigma}^{AB}=n^2+\lfloor \frac{n}{2}\rfloor$. We conclude that $GR(SM_{\langle n\rangle})=n^2-\lfloor \frac{n}{2}\rfloor=m-\lfloor \frac {\sqrt{m}}2\rfloor$.
\end{example}
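The fiber dimension entering the count above can also be checked numerically: for the matrices $X$ displayed above, $\sum_{i,j}d_{ij}(X)=n$. The following sketch (Python with NumPy, an illustrative addition; it realizes $Y\mapsto XY+YX$ via Kronecker products, with arbitrary random eigenvalues) verifies this for a few values of $n$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(4)

def anticommutant_dim(X):
    n = X.shape[0]
    L = np.kron(np.eye(n), X) + np.kron(X.T, np.eye(n))   # Y -> XY + YX on vec(Y)
    return n * n - np.linalg.matrix_rank(L)

for n in (4, 5, 6):
    lam = 1.0 + rng.random(n // 2)
    d = np.concatenate([np.stack([lam, -lam], axis=1).ravel(), np.zeros(n % 2)])
    print(n, anticommutant_dim(np.diag(d)))   # prints n, matching sum_{i,j} d_ij(X)
\end{verbatim}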
\subsection{Large border rank and small geometric rank}
The following example shows that border rank can be quite large while
geometric rank is small:
\begin{example}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C=\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
have the form
$T=a_1{\mathord{ \otimes } } (b_1{\mathord{ \otimes } } c_1+\cdots + b_m{\mathord{ \otimes } } c_m)+ T'$
where $T'\in A'{\mathord{ \otimes } } B'{\mathord{ \otimes } } C':=\operatorname{span}\{ a_2, \hdots , a_m\}
{\mathord{ \otimes } } \operatorname{span}\{ b_1, \hdots , b_{\lfloor \frac m2\rfloor}\}
{\mathord{ \otimes } } \operatorname{span}\{ c_{\lceil \frac m2\rceil}, \hdots , c_m\}$, where $T'$ is generic.
It was shown in \cite{MR3682743} that $\bold R(T)= \bold R(T')+m$ and
$\underline{\mathbf{R}}(T)\geq \frac{m^2}8$.
We have
$$T(A^*)\subset \begin{pmatrix}
x_1 & & & & & \\
& \ddots & & & & \\
& & x_1 & & & &\\
* & \cdots & * & x_1 & &\\
\vdots & \vdots & \vdots & & \ddots & \\
* &\cdots & * & & & x_1\end{pmatrix}.
$$
Setting $x_1=0$, we see
a component of $\Sigma^A_{\lfloor \frac m2\rfloor}\subset \mathbb P A^*$ is a hyperplane
so $GR(T)\leq \lceil \frac m2\rceil+1$.
\end{example}
\subsection{Tensors arising in Strassen's laser method}
\begin{example}[Big Coppersmith-Winograd tensor] \label{CWqex}
The following tensor has been used to obtain every new upper bound on the
exponent of matrix multiplication since 1988:
$$T_{CW,q}=\sum_{j=1}^q \left[ a_0{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j
+a_j{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_j+a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_0\right]+a_0{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_{q+1}
+a_0{\mathord{ \otimes } } b_{q+1}{\mathord{ \otimes } } c_0+a_{q+1}{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_0.
$$
One has
$\bold R(T_{CW,q})=2q+3=2m-1$ \cite[Prop. 7.1]{MR3682743}
and $\underline{\mathbf{R}}(T_{CW,q})=q+2=m$ \cite{MR91i:68058}.
Note
$$
T_{CW,q}(A^*)=
\begin{pmatrix}
x_{q+1}& x_1 &\cdots & x_q &x_0\\
x_1 & x_0 & & & \\
x_2 & & \ddots & & \\
\vdots & & & & \\
x_q & & & x_0 & \\
x_0 & & & & 0
\end{pmatrix}
\ \ \simeq \ \
\begin{pmatrix}
x_0& x_1 &\cdots & x_q &x_{q+1}\\
& x_0 & & & x_1\\
& & \ddots & & x_2\\
& & & & \vdots\\
& & & x_0 & x_q\\
& & & & x_0
\end{pmatrix}
$$
where $\simeq$ means equal up to changes of bases. So we have
$\Sigma^A_{1}=\Sigma^A_2=\cdots =\Sigma^A_{q}=\{x_0=0\}$ and $\Sigma^A_{q+1}=\{x_0=\cdots=x_q=0\}$.
Therefore $GR(T_{CW,q})=2(q+2)-1-(\mathrm{dim}\Sigma^A_{q}+q)=3$.
\end{example}
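The rank behaviour used above is easy to probe numerically; the following sketch (an illustrative addition with arbitrary random values) builds the displayed matrix at a point of $A^*$ and observes the drop from full rank $q+2$ to rank $2$ on the hyperplane $\{x_0=0\}$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(5)
q = 5

def cw_slice(x):                       # the (q+2) x (q+2) matrix displayed above
    M = np.zeros((q + 2, q + 2))
    M[0, 0] = x[q + 1]
    M[0, 1:q + 1] = x[1:q + 1]
    M[1:q + 1, 0] = x[1:q + 1]
    M[0, q + 1] = M[q + 1, 0] = x[0]
    M[np.arange(1, q + 1), np.arange(1, q + 1)] = x[0]
    return M

x = 0.5 + rng.random(q + 2)
print(np.linalg.matrix_rank(cw_slice(x)))    # q + 2: generic full rank
x[0] = 0.0
print(np.linalg.matrix_rank(cw_slice(x)))    # 2: the hyperplane {x_0 = 0}
\end{verbatim}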
\begin{example}[Small Coppersmith-Winograd tensor]\label{cwqex} The following tensor was
the second tensor used in the laser method and for $2\leq q\leq 10$, it
could potentially prove the exponent is less than $2.3$:
$T_{cw,q}=\sum_{j=1}^q a_0{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j
+a_j{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_j+a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_0$.
It satisfies
$\bold R(T_{cw,q})=2q+1=2m-1$ \cite[Prop. 7.1]{MR3682743}
and $\underline{\mathbf{R}}(T_{cw,q})=q+2=m+1$ \cite{MR91i:68058}. We again have
$GR(T_{cw,q})=3$ as e.g., $\Sigma^{AB}=\{ x_0=y_0=\sum_{j\geq 1} x_j y_j=0\}\cup\{\forall j\geq 1, x_j=y_j=0\}$.
\end{example}
\begin{example} [Strassen's tensor] \label{strassenten} The following is the
first tensor that was used in the laser method:
$T_{str,q}=\sum_{j=1}^q a_0{\mathord{ \otimes } } b_j{\mathord{ \otimes } }
c_j + a_j{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_j\in \mathbb C^{q+1}{\mathord{ \otimes } } \mathbb C^{q+1}{\mathord{ \otimes } } \mathbb C^q$.
It satisfies $\underline{\mathbf{R}}(T_{str,q})=q+1$ and $\bold R(T_{str,q})=2q$ \cite{MR3682743}.
Since
$$
T_{str,q}(A^*)=
\begin{pmatrix} x_1& \cdots & x_q\\
x_0 & & \\
& \ddots & \\
& & x_0\end{pmatrix}
$$
we see $GR(T_{str,q})=2$, caused by $\Sigma^{A}_{q-1}=\mathbb P \langle\alpha_1, \hdots , \alpha_q\rangle$.
\end{example}
\subsection{Additional examples of tensors with geometric
rank $3$}\label{moregr3}
\begin{example} \label{1ggr3} The following tensor
was shown in \cite{MR3682743} to take minimal values for Strassen's functional (called maximal
compressibility in \cite{MR3682743}):
$$
T_{maxsymcompr,m}
=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1
+\sum_{\rho=2}^m a_1{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_\rho
+ a_\rho{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_\rho + a_\rho{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_1.
$$
Note
$$T_{maxsymcompr,m}(A^*)=\begin{pmatrix}
x_1 & x_2& x_3& \cdots & x_m\\
x_2 & x_1&0&\cdots&0 \\
x_3 & 0 & x_1 &&\vdots \\
\vdots & \vdots & & \ddots &0\\
x_m& 0&\cdots &0& x_1
\end{pmatrix}.
$$
Restricting to the hyperplane $\alpha_1=0$, we obtain
a space of bounded rank two, i.e., $\Sigma^A_{m-2}\subset \mathbb P A^*$
is a hyperplane. We conclude, assuming $m\geq 3$, that
$GR(T)= 3$.
\end{example}
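A quick numerical check of the bounded rank two claim (a small illustrative sketch with arbitrary values, not part of the example):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(6)
m = 7

def slice_T(x):                        # the m x m matrix T(alpha) displayed above
    M = np.diag(np.full(m, x[0]))      # x_1 on the diagonal
    M[0, :] = x                        # first row  (x_1, x_2, ..., x_m)
    M[:, 0] = x                        # first column
    return M

x = 0.5 + rng.random(m)
print(np.linalg.matrix_rank(slice_T(x)))   # m: generic full rank
x[0] = 0.0
print(np.linalg.matrix_rank(slice_T(x)))   # 2: bounded rank two on {x_1 = 0}
\end{verbatim}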
\begin{example}\label{badex}
Let $m=2q$ and let
$$
T_{gr3,1deg,2q}:=\sum_{s=1}^{q} a_s{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_s+ \sum_{t=2}^{q}a_{t+q-1}{\mathord{ \otimes } } b_t{\mathord{ \otimes } } c_1
+a_m{\mathord{ \otimes } } (\sum_{u=q+1}^m b_u{\mathord{ \otimes } } c_u),
$$
so
\begin{equation}\label{badA}
T_{gr3,1deg,m}(A^*)=
\begin{pmatrix} x_1 & x_2 & \cdots & x_q & 0 & \cdots & 0\\
x_{q+1}& 0 & \cdots & 0 &0 &\cdots & 0 \\
\vdots & \vdots & &\vdots &\vdots & &\vdots \\
x_{m-1}& 0 & \cdots &0 &0 &\cdots&0 \\
0 &0 &\cdots & 0 &x_m & & \\
\vdots &\vdots & & \vdots & &\ddots & \\
0 &0 &\cdots & 0 & & &x_m \end{pmatrix}.
\end{equation}
Then $GR(T_{gr3,1deg,m})=3$ (set $x_m=0$) and $\bold R(T)=\frac 32m-1$;
the upper bound is clear from the expression and the lower bound is given in
Example \ref{badgrex}.
\end{example}
\begin{example}\label{badex2}
Let $m=2q-1$ and let
$$
T_{gr3,1deg,2q-1}:=\sum_{s=2}^{q} a_s{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_s+ a_{s+q-1}{\mathord{ \otimes } } b_s{\mathord{ \otimes } } c_1
+a_1{\mathord{ \otimes } } (\sum_{u=q+1}^m b_u{\mathord{ \otimes } } c_u),
$$
so
\begin{equation}\label{badA2}
T_{gr3,1deg,2q-1}(A^*)=
\begin{pmatrix} 0 & x_2 & \cdots & x_q & 0 & \cdots & 0\\
x_{q+1}& 0 & \cdots & 0 &0 &\cdots & 0 \\
\vdots & \vdots & &\vdots &\vdots & &\vdots \\
x_{m}& 0 & \cdots &0 &0 &\cdots&0 \\
0 &0 &\cdots & 0 &x_1 & & \\
\vdots &\vdots & & \vdots & &\ddots & \\
0 &0 &\cdots & 0 & & &x_1\end{pmatrix}.
\end{equation}
Then $GR(T_{gr3,1deg,2q-1})=3$ (set $x_1=0$) and $\bold R(T_{gr3,1deg,2q-1})= m+\frac{m-1}2-2$;
the upper bound is clear from the expression and the lower bound is given in
Example \ref{badgrex}.
\end{example}
\subsection{Kronecker powers of tensors with degenerate geometric rank}
For tensors $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ and $T'\in A'{\mathord{ \otimes } } B'{\mathord{ \otimes } } C'$, the {\it Kronecker product} of $T$ and $T'$ is the tensor $T\boxtimes T' := T {\mathord{ \otimes } } T' \in (A{\mathord{ \otimes } } A'){\mathord{ \otimes } } (B{\mathord{ \otimes } } B'){\mathord{ \otimes } } (C{\mathord{ \otimes } } C')$, regarded as a $3$-way tensor. Given $T \in A \otimes B \otimes C$, the {\it Kronecker powers} of $T$ are $T^{\boxtimes N} \in A^{\otimes N} \otimes B^{\otimes N} \otimes C^{\otimes N}$, defined iteratively. Rank and border rank are submultiplicative under the Kronecker product, while
subrank and border subrank are super-multiplicative under the Kronecker product.
Geometric rank is neither sub- nor super-multiplicative under the Kronecker product.
We already saw the lack of sub-multiplicativity with $M_{\langle \mathbf{n} \rangle}$ (recall $M_{\langle \mathbf{n} \rangle}^{\boxtimes 2}=
M_{\langle n^2\rangle}$):
$\lceil \frac 34n^4\rceil=GR(M_{\langle \mathbf{n} \rangle}^{\boxtimes 2})> \lceil \frac 34 n^2\rceil^2=GR(M_{\langle \mathbf{n} \rangle})^2$.
An indirect example of the failure
of super-multiplicativity is given in \cite{kopparty2020geometric} where they point out that some power of
$W:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_1+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1$ is
strictly sub-multiplicative. We make this explicit:
\begin{example}
With basis indices ordered $22,21,12,11$ for $B^{{\mathord{ \otimes } } 2},C^{{\mathord{ \otimes } } 2}$, we have
$$
W^{\boxtimes 2} (A^{{\mathord{ \otimes } } 2*})=
\begin{pmatrix}
x_{11}& x_{12}&x_{21}&x_{22}\\
0 & x_{11} & 0 & x_{21}\\
0 & 0 &x_{11} & x_{12}\\
0 &0&0& x_{11}
\end{pmatrix}
$$
which is $T_{CW,2}$ after permuting basis vectors (see Example \ref{CWqex}) so
$ GR(W^{\boxtimes 2})=3<4=GR(W)^2$.
\end{example}
\section{Proofs of main theorems}
In this section, after reviewing facts about spaces of matrices of bounded rank and
the substitution method for bounding tensor rank, we prove
a result lower-bounding the tensor rank of tensors associated to compression
spaces (Proposition \ref{comprex}), a lemma on linear sections of
$\sigma_3(Seg(\mathbb P B\times \mathbb P C))$ (Lemma \ref{linelem}), and
Theorems \ref{GRtwo} and \ref{GRthree}.
\subsection{Spaces of matrices of bounded rank}\label{detspaces}
The study of spaces of matrices of bounded rank (Definition \ref{bndrkdef}) is a classical subject dating back at least to
\cite{MR136618}.
The results
most relevant here are from
\cite{MR587090,MR695915}, and they were recast in the language
of algebraic geometry in \cite{MR954659}. We review notions
relevant for our discussion.
A large class of spaces of matrices of bounded rank
$E\subset B{\mathord{ \otimes } } C$ are the {\it compression spaces}.
In bases, the space takes the block format
\begin{equation}\label{comprformat}
E=\begin{pmatrix} *& * \\ * & 0\end{pmatrix}
\end{equation}
where if the $0$ is of size $(\mathbf{b}-k)\times (\mathbf{c} -\ell)$,
the space is of bounded rank $k+\ell$.
If $m$ is odd, then any linear subspace of $\La 2 \mathbb C^m$ is
of bounded rank $m-1$.
More generally one can use the multiplication in any graded algebra
to obtain spaces of bounded rank, the case of $\La 2 \mathbb C^m$ being the
exterior algebra.
Spaces of bounded rank at most three are classified in \cite{MR695915}:
For three dimensional rank two spaces there are only the compression spaces and
the skew symmetric matrices $\La 2\mathbb C^3\subset
\mathbb C^3{\mathord{ \otimes } } \mathbb C^3$.
\subsection{Review of the substitution method}\label{submethrev}
\begin{proposition}\cite[Appendix
B]{MR3025382} \label{prop:slice}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$.
Fix a basis $a_1, \hdots , a_{\mathbf{a}}$ of $A$, with dual basis $\alpha^1, \hdots , \alpha^\mathbf{a}$.
Write $T=\sum_{i=1}^\mathbf{a} a_i\otimes M_i$, where $M_i =T(\alpha_i)\in B\otimes C$.
Let $\bold R(T)=r$ and $M_1\neq 0$. Then there exist constants $\lambda_2,\dots,
\lambda_\mathbf{a}$, such that the tensor
$$T' :=\sum_{j=2}^\mathbf{a} a_j\otimes(M_j-\lambda_j M_1)\in \operatorname{span}\{ a_2, \hdots , a_{\mathbf{a}}\} {\mathord{ \otimes } }
B{\mathord{ \otimes } }
C,$$
has rank at most $r-1$. I.e., $\bold R(T)\geq 1+\bold R(T')$.
The analogous assertions hold exchanging the role of $A$ with that of $B$ or $C$.
\end{proposition}
A visual tool for using the substitution method is to write
$T(B^*)$ as a matrix of linear forms. Then the $i$-th row of $T(B^*)$ corresponds to a tensor $a_i{\mathord{ \otimes } } M_i\in \mathbb C^1{\mathord{ \otimes } } B {\mathord{ \otimes } } C$. One adds unknown multiples of the first row of $T(B^*)$ to all other rows, and deletes the first row, then the resulting matrix is $T'(B^*)\in \mathrm{span}\{ a_2, \hdots , a_{\mathbf{a}}\}{\mathord{ \otimes } } C$.
In practice one applies Proposition \ref{prop:slice} iteratively, obtaining a sequence
of tensors in spaces of shrinking dimensions. See \cite[\S 5.3]{MR3729273} for a discussion.
For a positive integer $k\leq\mathbf{b}$, if the last $k$ rows of $T(A^*)$ are linearly independent, then one can apply Proposition \ref{prop:slice} $k$ times on the last $k$ rows. In this way, the first $\mathbf{b} - k$ rows are modified by unknown linear combinations of the last $k$ rows, and the last $k$ rows are deleted. Then one obtains a tensor $T'\in A\otimes \mathrm{span}\{b_1,\cdots,b_{\mathbf{b}-k}\}\otimes C$ such that $\bold R(T')\leq \bold R(T)-k$.
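To illustrate the method on a small case (an elementary computation, not taken from the references), consider the tensor $W=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_1+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1$ from the previous section, so that
$$W(A^*)=\begin{pmatrix} x_2 & x_1\\ x_1 & 0\end{pmatrix}.$$
Exchanging the roles of $a_1$ and $a_2$ in Proposition \ref{prop:slice} (the slice of $a_2$ is $b_1{\mathord{ \otimes } } c_1\neq 0$), for some constant $\lambda$ the tensor $W'=a_1{\mathord{ \otimes } } (b_1{\mathord{ \otimes } } c_2+b_2{\mathord{ \otimes } } c_1-\lambda\, b_1{\mathord{ \otimes } } c_1)$ satisfies $\bold R(W)\geq 1+\bold R(W')$. For every $\lambda$ the matrix of $W'$ is $\begin{pmatrix} -\lambda & 1\\ 1 & 0\end{pmatrix}$, which has rank two, so $\bold R(W')=2$ and $\bold R(W)\geq 3$; since $W$ is a sum of three rank one terms, $\bold R(W)=3$.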
\begin{proposition}\label{comprex}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ be a concise tensor with $T(A^*)$ a bounded rank $\rho$
compression space. Then $\bold R(T)\geq \mathbf{b}+\mathbf{c}-\rho$. \end{proposition}
\begin{proof} Consider \eqref{comprformat}. Add to the first $k$ rows of $T(A^*)$ unknown linear combinations of the last $\mathbf{b}-k$ rows, each of which is
nonzero by conciseness. Then delete the last $\mathbf{b}-k$ rows. Note that the last $\mathbf{c}-\ell$ columns
are untouched, and (assuming the most disadvantageous combinations
are chosen) we obtain a tensor $T'\in A{\mathord{ \otimes } } \mathbb C^k{\mathord{ \otimes } } C$
satisfying $\bold R(T)\geq (\mathbf{b}-k)+\bold R(T')$. Next add to the first $\ell$ columns of $T'(A^*)$ unknown linear combinations of the last $\mathbf{c}-\ell$ columns, then delete the last $\mathbf{c}-\ell$ columns.
The resulting tensor $T''$ could very well be zero, but
we nonetheless have $\bold R(T')\geq (\mathbf{c}-\ell) +\bold R(T'')$ and thus $\bold R(T)
\geq (\mathbf{b}-k)+(\mathbf{c}-\ell)=\mathbf{b}+\mathbf{c}-\rho$.
\end{proof}
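For a small illustration of this bound (an example of our own, not needed in the sequel), take $\mathbf{a}=\mathbf{b}=\mathbf{c}=3$ and let $T$ be the concise tensor with
$$T(A^*)=\begin{pmatrix} x_1 & x_2 & x_3\\ x_2 & 0 & 0\\ x_3 & 0 & 0\end{pmatrix},$$
a compression space with $k=\ell=1$, so $\rho=2$. Proposition \ref{comprex} gives $\bold R(T)\geq 3+3-2=4$, whereas the three flattening ranks of $T$ are all equal to $3$.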
Here are the promised lower bounds for $T_{gr3,1deg,m}$:
\begin{example}\label{badgrex}
Consider \eqref{badA}. Add to the first row unknown linear combinations of the last $m-1$ rows, then delete the last $m-1$ rows. The resulting tensor is still $\langle a_1, \hdots , a_{q}\rangle$-concise,
so we have $\bold R(T_{gr3,1deg,2q})\geq m-1+ \frac m2$, where $m=2q$. The case of $T_{gr3,1deg,2q-1}$ is similar.
\end{example}
\subsection{Lemmas on linear sections of $\sigma_{r}(Seg(\mathbb P B\times \mathbb P C))$}\label{linelempf}
\begin{lemma} \label{turbolinelem} Let $E\subset B{\mathord{ \otimes } } C$
be a linear subspace. If $\mathbb P E\cap
\sigma_{r}(Seg(\mathbb P B\times \mathbb P C))$ is a hypersurface in $\mathbb P E$ of degree $r+1$ (counted with multiplicity) and does not contain any hyperplane of $\mathbb P E$, then
$\mathbb P E \subset \sigma_{r+1}(Seg(\mathbb P B\times \mathbb P C))$.
\end{lemma}
\begin{proof}
Write $E=(y^i_j)$ where $y^i_j=y^i_j(x_1, \hdots , x_{q})$, $1\leq i\leq \mathbf{b}$, $1\leq j \leq \mathbf{c}$ and $q=\operatorname{dim} E$.
By hypothesis, all size $r+1$ minors are up to scale equal to a polynomial $S$ of degree $r+1$. No linear
polynomial divides $S$ since otherwise the intersection would contain a hyperplane.
Since $\mathbb P E\not\subset \sigma_{r}(Seg(\mathbb P B\times \mathbb P C))$, there must be a size $r+1$ minor that is nonzero restricted to $\mathbb P E$. Assume it is the $(1, \hdots , r+1)\times (1, \hdots , r+1)$-minor.
\smallskip
Consider the vector consisting of the first $r+1$ entries of the $(r+2)$-st column. In order that all size $r+1$ minors of the upper left $(r+1)\times(r+2)$ block be multiples of $S$, this vector must be a linear combination
of the vectors corresponding to the first $r+1$ entries of the first $r+1$ columns. By adding linear combinations
of the first $r+1$ columns, we may make these entries zero. Similarly, we may make
all other entries in the first $r+1$ rows zero. By the same argument, we may do
the same for the first $r+1$ columns.
We have
\begin{equation}\label{boxformturbo}
\begin{pmatrix} y^1_1 & \cdots & y^1_{r+1} & 0 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
y^{r+1}_1 & \cdots & y^{r+1}_{r+1} & 0 & \cdots & 0\\
0 & \cdots & 0 & y^{r+2}_{r+2} & \cdots & y^{r+2}_\mathbf{c}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & \cdots & 0 & y^\mathbf{b}_{r+2} & \cdots & y^\mathbf{b}_\mathbf{c}\end{pmatrix}.
\end{equation}
If $\mathbb P E\not\subset \sigma_{r+1}(Seg(\mathbb P B\times \mathbb P C))$, some entry in the
lower $(\mathbf{b}-r-1)\times (\mathbf{c}-r-1)$ block is nonzero. Take one such entry and
the size $r+1$ minor formed by it together with an $r\times r$ submatrix of
the upper left block. We obtain a polynomial that has a linear factor, so it cannot be a multiple
of $S$, giving a contradiction.
\end{proof}
\begin{lemma} \label{linelem} Let $\mathbf{b},\mathbf{c}>4$. Let $E\subset B
{\mathord{ \otimes } } C$
be a linear subspace of dimension $q>4$. Say $\operatorname{dim}(\mathbb P E\cap
\sigma_{2}(Seg(\mathbb P B\times \mathbb P C)))=q-2$ and $\mathbb P E\not\subset \sigma_{3}(Seg(\mathbb P B\times \mathbb P C))$.
Then either all components of $\mathbb P E\cap
\sigma_{2}(Seg(\mathbb P B\times \mathbb P C))$ are linear $\pp {q-2}$'s, or $E\subset\mathbb C^5\otimes\mathbb C^5$.
\end{lemma}
The proof is similar to the argument for Lemma \ref{turbolinelem}, except
that we work in a local ring.
\begin{proof}
Write $E=(y^i_j)$ where $y^i_j=y^i_j(x_1, \hdots , x_{q})$, $1\leq i\leq \mathbf{b}$, $1\leq j \leq \mathbf{c}$.
Assume, to get a contradiction, that there is an irreducible
component of degree greater than one in the intersection, given by an irreducible polynomial $S$ of degree
two or three that divides all size $3$ minors.
By Lemma \ref{turbolinelem},
$\operatorname{deg}(S)=2$.
Since $\mathbb P E\not\subset \sigma_{2}(Seg(\mathbb P B\times \mathbb P C))$, there must be some size $3$ minor that is nonzero restricted to $\mathbb P E$. Assume it is the $(123)\times (123)$
minor.
\smallskip
Let $\Delta^I_J$ denote a (signed) size $3$ minor restricted to $E$, where $I=(i_1i_2 i_3)$,
$J=(j_1j_2 j_3)$, so $\Delta^I_J=L^I_JS$, for some $L^I_J\in E^*$.
Set $I_0=(123)$.
Consider the $(st4)\times I_0$ minors, where $1\leq s<t\leq 3$.
Using the Laplace expansion, we may write them as
\begin{equation}\label{delta}
\begin{pmatrix}
\Delta^{I_0\backslash 1}_{I_0\backslash 1} &\Delta^{I_0\backslash 1}_{I_0\backslash 2} & \Delta^{I_0\backslash 1}_{I_0\backslash 3}
\\
\Delta^{I_0\backslash 2}_{I_0\backslash 1} &\Delta^{I_0\backslash 2}_{I_0\backslash 2} & \Delta^{I_0\backslash 2}_{I_0\backslash 3}
\\
\Delta^{I_0\backslash 3}_{I_0\backslash 1} & \Delta^{I_0\backslash 3}_{I_0\backslash 2} & \Delta^{I_0\backslash 3}_{I_0\backslash 3}
\end{pmatrix}
\begin{pmatrix}
y^{4}_1 \\ y^{4}_2 \\ y^{4}_3
\end{pmatrix}
=
S \begin{pmatrix}
L^{234}_{I_0}\\
L^{134}_{I_0}\\
L^{124}_{I_0}
\end{pmatrix}
\end{equation}
Choosing signs properly, the matrix on the left is just the cofactor matrix of the
$(123)\times(123)$ submatrix, so its inverse is the transpose
of the original submatrix divided
by the determinant (which is nonzero by hypothesis).
Thus we may write
$$
\begin{pmatrix}
y^{4}_1 \\ y^{4}_2 \\ y^{4}_3
\end{pmatrix}
=
\frac{S}{\Delta^{I_0}_{I_0}}\begin{pmatrix}
y^1_1 & y^2_1 & y^3_1\\
y^1_2 & y^2_2 & y^3_2 \\
y^1_3 & y^2_3 & y^3_3\end{pmatrix}\begin{pmatrix}
L^{234}_{I_0}\\
L^{134}_{I_0}\\
L^{124}_{I_0}
\end{pmatrix}.
$$
In particular, each $y^{4}_s$, $1\leq s\leq 3$, is a rational function
of $L^{(I_0\backslash t),4}_{I_0}$ and $y^u_v$, $1\leq u,v\leq 3$.
$$
(y^{4}_1, y^4_2, y^{4}_3)=\sum_{t=1}^3 \frac{ L^{(I_0\backslash t),4}_{I_0} }{L^{I_0}_{I_0} }
(y^{t}_1, y^{t}_2, y^{t}_3).
$$
Note that the coefficients $\frac{ L^{(I_0\backslash t),4}_{I_0} }{L^{I_0}_{I_0} }$
are degree zero rational functions in $L^{(I_0\backslash t),4}_{I_0}$ and $y^u_v$, $1\leq u,v\leq 3$.
The same is true for all $(y^{\ell}_1, y^{\ell}_2, y^{\ell}_3)$ for $\ell\geq 4$.
Similarly, working with the first $3$ rows we get
$(y^{ 1}_\ell, y^{ 2}_\ell, y^{3}_\ell)$ written in terms of
the $(y_{t}^1, y_{t}^2, y_{t}^3)$ with coefficients degree zero rational functions in the $y^s_t$.
Restricting to the Zariski open subset of $E$ where $L^{I_0}_{I_0} \neq 0$,
we may subtract rational multiples of the first three rows and
columns to normalize our space to \eqref{boxformturbo}:
\begin{equation}\label{boxform}
\begin{pmatrix} y^1_1 & y^1_2 & y^1_3 & 0 & \cdots & 0\\
y^2_1 & y^2_2 & y^2_3 & 0 & \cdots & 0\\
y^3_1 & y^3_2 & y^3_3 & 0 & \cdots & 0\\
0 & 0 & 0 & y^4_4 & \cdots & y^4_\mathbf{c}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & y^\mathbf{b}_4 & \cdots & y^\mathbf{b}_\mathbf{c}\end{pmatrix}.
\end{equation}
Since a Zariski open subset of $\mathbb P E$
is not contained in $\sigma_2(Seg(\mathbb P B\times \mathbb P C))$, at least one entry
in the lower right block must be nonzero. Say it is $y^{4}_{4}$.
On the Zariski open set $L^{I_0}_{I_0}\neq 0$,
for all $ 1\leq s<t\leq 3$, $1\leq u<v\leq 3$,
we have $y^4_4\Delta^{st}_{uv}=Q^{st}_{uv}S/L^{I_0}_{I_0}$, where $Q^{st}_{uv}$ is a quadratic polynomial (when $\Delta^{st}_{uv}\neq 0$) or zero (when $\Delta^{st}_{uv}=0$). Then either $y^4_4$ is a nonzero multiple of $S/L^{I_0}_{I_0}$, or all $\Delta^{st}_{uv}$'s are multiples of $S$, because $S$ is irreducible.
If all $\Delta^{st}_{uv}$'s are multiples of $S$,
at least one must be nonzero, say $\Delta^{12}_{12}\neq 0$. Then by a change of bases we set $y^3_1,y^3_2,y^1_3,y^2_3$ to zero. At this point,
for all $1\leq \alpha,\beta\leq 2$, $\Delta^{\alpha 3}_{\beta 3}$ becomes $y^\alpha_\beta y^3_3$. By hypothesis $\Delta^{\alpha 3}_{\beta 3}$ is a multiple of the irreducible quadratic polynomial $S$, so $y^\alpha_\beta y^3_3=0$. Therefore either all $y^\alpha_\beta$'s are zero, which contradicts with $\Delta^{12}_{12}\neq 0$, or $y^3_3=0$, in which case all entries in the third row and the third column are zero, contradicting
our assumption that the first $3\times 3$ minor is nonzero.
If there exists $\Delta^{st}_{uv}\neq0$ that is not a multiple of $S$, change bases such that it is $\Delta^{12}_{12}$. Note that $y^4_4=\Delta^{1234}_{1234}/\Delta^{123}_{123}=(\Delta^{1234}_{1234}/S)/L^{I_0}_{I_0}$ where $(\Delta^{1234}_{1234}/S)$ is a quadratic polynomial. By hypothesis $S$ divides $\Delta^{124}_{124}=\Delta^{12}_{12}y^4_4$. Since $S$ is irreducible and $\Delta^{12}_{12}$ is not a multiple of $S$, $\Delta^{1234}_{1234}/S$ must be a multiple of $S$. Therefore $\Delta^{1234}_{1234}$ is a multiple of $S^2$. This is true for all size $4$ minors, therefore we can apply \ref{turbolinelem}. By the proof of \ref{turbolinelem}, all entries of $E$ can be set to zero except those in the upper left $5\times 5$ block, so $E\subset\mathbb C^5\otimes\mathbb C^5$.
\end{proof}
\begin{remark}
The normalization in the case $\operatorname{deg}(S)=2$ is not possible in general without the restriction
to the open subset where $L^{I_0}_{I_0}\neq 0$. Consider
$
T(A^*)
$
such that the upper $3\times 3$ block
is
$$\begin{pmatrix} x_1& x_2&x_3\\ -x_2 & x_1 & x_4\\
-x_3& -x_4& x_1\end{pmatrix}.
$$
Then the possible entries in the first three
columns of the fourth row are not limited to the span of
the first three rows. The element
$(x_4,-x_3,x_2)$ is also possible.
\end{remark}
\subsection{Proof of Theorem \ref{GRtwo}}\label{GRtwopf}
We recall the statement:
\begin{customthm} {\ref{GRtwo}} For $\mathbf{a},\mathbf{b},\mathbf{c}\geq3$,
the variety $\mathcal{GR}_{2}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the variety of tensors $T$ such that $T(A^*)$, $T(B^*)$,
or $T(C^*)$ has bounded rank 2.
There are exactly two concise tensors in $\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3$ with $GR(T)=2$:
\begin{enumerate}
\item The unique up to scale skew-symmetric tensor $T=\sum_{\sigma\in \FS_3}\tsgn(\sigma) a_{\sigma(1)}{\mathord{ \otimes } } b_{\sigma(2)}{\mathord{ \otimes } } c_{\sigma(3)}\in \La 3\mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$
and
\item $T_{utriv,3}:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1 + a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_2+ a_1{\mathord{ \otimes } } b_3{\mathord{ \otimes } } c_3
+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_3{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_3\in S^2\mathbb C^3{\mathord{ \otimes } } \mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$.
\end{enumerate}
There is a unique concise tensor $T\in \mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
satisfying $GR(T)=2$ when $m>3$,
namely
$$
T_{utriv,m}:=
a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1+ \sum_{\rho=2}^m [a_1{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_\rho + a_\rho{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_\rho].
$$
This tensor satisfies $\underline{\mathbf{R}}(T_{utriv,m})=m$ and $\bold R(T_{utriv,m})=2m-1$.
\end{customthm}
\begin{proof} For simplicity, assume $\mathbf{a}\leq \mathbf{b}\leq \mathbf{c}$.
A tensor $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ has geometric rank 2 if and only if
$\mathbb{P}T(A^*)\not\subset Seg(\mathbb P B\times \mathbb P C)$,
and either $\mathbb{P}T(A^*)\cap Seg(\mathbb P B\times \mathbb P C)$ has dimension $\mathbf{a}-2$ or $\mathbb{P}T(A^*)\subset\sigma_2( Seg(\mathbb P B\times \mathbb P C))$.
For the case $\mathbb{P}T(A^*)\subset\sigma_2( Seg(\mathbb P B\times \mathbb P C))$, $T(A^*)$ is of bounded rank $2$. By the classification of spaces of bounded rank $2$, up to equivalence $T(A^*)$ must be in one of the following forms:
\begin{center}
$\begin{pmatrix}
* & \cdots & * \\
* & \cdots & * \\
0 & \cdots & 0 \\
\vdots & &\vdots\\
0 & \cdots & 0
\end{pmatrix}$,
$\begin{pmatrix}
* & * &\cdots & * \\
* & 0 &\cdots & 0 \\
\vdots &\vdots & &\vdots\\
* & 0 &\cdots & 0
\end{pmatrix}$, or
$\begin{pmatrix}
0&x&y& 0 &\cdots & 0\\
-x&0&z& 0 &\cdots & 0\\
-y&-z&0& 0 &\cdots & 0\\
0 &&&\cdots && 0\\
\vdots &&&\vdots && \vdots\\
0 &&&\cdots && 0
\end{pmatrix}$.
\end{center}
When $T$ is concise, it must be of the second form or the third
with $\mathbf{a}=\mathbf{b}=\mathbf{c}=m=3$. If it is the third, $T$ is the unique up to scale skew-symmetric tensor in $ \mathbb{C}^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$. If it is of the second form and $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$, the entries in the first column must be linearly independent, as well as the entries in the first row. Thus we may choose a basis of $A$ such that $T=a_1\otimes b_1\otimes c_1 + \sum^m_{i>1} y_i\otimes b_i\otimes c_1 + \sum^m_{i>1}z_i\otimes b_1 \otimes c_i$ where $y_i$'s and $z_i$'s are linear combinations of $a_2,\cdots,a_m$. Then by a change of basis in $b_2,\cdots,b_m$ and $c_2,\cdots,c_m$ respectively, we obtain $T_{utriv,m}$.
For the case $\operatorname{dim}(\mathbb{P}T(A^*)\cap Seg(\mathbb P B\times \mathbb P C))=\mathbf{a}-2$, by Lemma \ref{turbolinelem}, if this intersection is an irreducible quadric, we are
reduced to the case
$\mathbb{P}T(A^*)\subset\sigma_2( Seg(\mathbb P B\times \mathbb P C))$.
Thus all $2\times2$ minors of $T(A^*)$ have a common linear factor.
Assume
the common factor is $x_1$. Then $\mathbb{P}T(A^*)\cap Seg(\mathbb P B\times \mathbb P C)\supset\{x_1=0\}$. Hence $\mathbb{P}T(\langle\alpha_2,\cdots,\alpha_\mathbf{a}\rangle)\subset Seg(\mathbb P B\times \mathbb P C)$, i.e. $T(\langle\alpha_2,\cdots,\alpha_\mathbf{a}\rangle)$ is of bounded rank one. By a change of bases in $B,C$, and possibly exchanging $B$ and $C$, all entries but the first row of $T(\langle\alpha_2,\cdots,\alpha_\mathbf{a}\rangle)$ become zero. Then all entries but the first column and the first row of $T(C^*)$ are zero, so $T(C^*)$ is of bounded rank $2$.
When $T$ is concise and $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$, the change of bases as in the case when $T(A^*)$ is of bounded rank $2$
shows that up to a reordering of $A$, $B$ and $C$ we obtain $T_{utriv,m}$.
\end{proof}
\subsection{Proof of Theorem \ref{GRthree}}\label{GRthreepf}
We recall the statement:
\begin{customthm}{\ref{GRthree} } Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ be concise and assume $\mathbf{c}\geq\mathbf{b}\geq \mathbf{a}>4$.
If $GR(T)\leq 3$, then
$\bold R(T)\geq \mathbf{b}+ \lceil {\frac {\mathbf{a}-1}2}\rceil -2$.
If moreover $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$ and $T$ is
$1_*$-generic, then $\bold R(T)\geq 2m-3$.
\end{customthm}
\begin{proof}
In order for $GR(T)=3$, either $\mathbb P T(A^*)\subset \sigma_3(Seg(\mathbb P B\times \mathbb P C))$,
$\mathbb P T(A^*)\cap \sigma_2(Seg(\mathbb P B\times \mathbb P C))$ has dimension $\mathbf{a}-2$,
or $\mathbb P T(A^*)\cap Seg(\mathbb P B\times \mathbb P C)$ has dimension $\mathbf{a}-3$.
Case $\mathbb P T(A^*)\subset \sigma_3(Seg(\mathbb P B\times \mathbb P C))$: Since $\mathbf{a}>3$,
it must be a compression space. We conclude by Proposition \ref{comprex}.
\medskip
Case
$\mathbb P T(A^*)\cap \sigma_2(Seg(\mathbb P B\times \mathbb P C))$ has dimension $\mathbf{a}-2$:
By Lemma \ref{linelem}, there exists $a\in A$
such that $\mathbb P T( a^\perp)\subset \sigma_2(Seg(\mathbb P B\times \mathbb P C))$.
Write $T(A^*)=x_1Z+U$, where $Z$ is a matrix of scalars
and $U=U(x_2, \hdots , x_\mathbf{a})$ is a matrix of linear forms of bounded rank two.
As discussed in \S\ref{detspaces}, there are two possible normal forms for $U$ up to symmetry.
If $U$ is zero outside of the first two rows,
add to the first two rows an unknown combination of the last $\mathbf{b}-2$
rows (each of which is nonzero by conciseness), so that the resulting tensor $T'$ satisfies
$\bold R(T)\geq \mathbf{b}-2+\bold R(T')$. Now the last $\mathbf{b}-2$ rows only contained
multiples of $x_1$ so $T'$ restricted to $a_1^\perp$ is $\langle a_2, \hdots , a_\mathbf{a}\rangle$-concise
and thus of rank at least $\mathbf{a}-1$, so $\bold R(T)\geq \mathbf{a}+\mathbf{b}-3$.
Now say $U$ is zero outside its first row and column.
Subcase: $\mathbf{a}=\mathbf{b}=\mathbf{c}$ and $T$ is $1_*$-generic. Then either $T$ is $1_A$-generic, or the first
row or column of $U$ consists of linearly independent entries.
If $T$ is $1_A$-generic, we may change bases so that $Z$ is of full rank.
Consider $T(B^*)$. Its first row consists
of linearly independent entries. Write $\mathbf{a}=\mathbf{b}=\mathbf{c}=:m$ and apply the substitution method to delete the last $m-1$ rows (each
of which is nonzero by conciseness). Call the resulting tensor
$T'$, so $\bold R(T)\geq \bold R(T')+m-1$.
Let $T''=T'|_{A^*\otimes\mathrm{span}\{\beta_2,\cdots,\beta_m\} \otimes\mathrm{span}\{\gamma_2,\cdots,\gamma_m\} }$. Then $T''(A^*)$ equals to the matrix obtained by removing the first column and the first row from $x_1Z$, so $\bold R(T'')\geq \bold R(x_1Z)-2=m-2$. Thus $\bold R(T)\geq (m-1)+m-2$ and we conclude.
If the first row of $U$ consists of linearly independent entries, then the
same argument, using $T(A^*)$, gives the bound.
Subcase: $T$ is $1$-degenerate or $\mathbf{a},\mathbf{b},\mathbf{c}$ are not all equal.
By $A$-conciseness, either the first row or column
must have at least $\lceil\frac{\mathbf{a}-1 }2\rceil$ linearly independent entries in $\mathrm{span}\{x_2,\cdots,x_{\mathbf{a}}\}$. Say it is the
first row.
Then apply the substitution method: add an unknown combination
of the last $\mathbf{b}-1$ rows to the first row, then delete the last $\mathbf{b}-1$ rows. Note that all entries in the first
row except the $(1,1)$ entry are only
altered by multiples of $x_1$, so there are at least $\lceil\frac{\mathbf{a}-1 }2\rceil -1$ linearly independent entries in the resulting matrix.
We obtain
$\bold R(T)\geq \mathbf{b}-1+ \lceil\frac{\mathbf{a}-1 }2\rceil -1$.
\medskip
Case
$\operatorname{dim}(\mathbb P T(A^*)\cap Seg(\mathbb P B\times \mathbb P C))=\mathbf{a}-3$:
We split this into three sub-cases based on the dimension
of the span of the intersection:
\begin{enumerate}
\item $\operatorname{dim} \langle\mathbb P T(A^*)\cap Seg(\mathbb P B\times \mathbb P C)\rangle =\mathbf{a}-3$
\item $\operatorname{dim} \langle\mathbb P T(A^*)\cap Seg(\mathbb P B\times \mathbb P C)\rangle =\mathbf{a}-2$
\item $\operatorname{dim} \langle\mathbb P T(A^*)\cap Seg(\mathbb P B\times \mathbb P C)\rangle =\mathbf{a}-1$
\end{enumerate}
Sub-case (1): the intersection must be a linear space.
We may choose bases such that
$$
T(A^*)=
\begin{pmatrix}
0 & 0 & x_3& \cdots & x_\mathbf{a}\\
0& 0& 0 & \cdots & 0\\
& & \ddots & & \\
& & & \ddots & \\
& & & & 0\end{pmatrix} + x_1Z_1+ x_2Z_2
$$
where $Z_1,Z_2$ are $\mathbf{b}\times \mathbf{c}$ scalar matrices.
Add to the first row a linear combination of the last $\mathbf{b}-1$ rows
(each of which is nonzero by conciseness) and delete those rows
to obtain a tensor of rank at least $\mathbf{a}-2$, giving $\bold R(T)\geq \mathbf{a}+\mathbf{b}-3$.
Sub-case (2): By Lemma \ref{turbolinelem} the intersection must
contain a $\pp{\mathbf{a}-2}$, and one argues
as in case (1), except there is just $x_1Z_1$ and $x_2$ also appears in the first row.
\medskip
Sub-case (3):
$T(A^*)$ must have a basis of elements of rank one.
The only way $T$ can be concise is for $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$, with the $B$-factors
of these rank one elements forming a basis of $B$, and likewise for the $C$-factors.
Changing bases, we
have $T=\sum_{j=1}^ma_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j$, whose space $\mathbb P T(A^*)$ intersects
the Segre only in finitely many points, so this case cannot occur.
\end{proof}
\bibliographystyle{amsplain}
\noindent The binomial edge ideal of a graph was introduced in \cite{HHHKR}, and \cite{O} at about the same time.
Let $G$ be a finite simple graph with vertex set $[n]$ and edge set $E(G)$.
Also, let $S=K[x_1,\ldots,x_n,y_1,\ldots,y_n]$ be the polynomial ring over a field $K$. Then the \textbf{binomial edge ideal} of $G$ in $S$,
denoted by $J_G$, is generated by binomials $f_{ij}=x_iy_j-x_jy_i$, where $i<j$ and $\{i,j\}\in E(G)$. Also, one could see this ideal as an ideal
generated by a collection of 2-minors of a $(2\times n)$-matrix whose entries are all indeterminates. Many of the algebraic properties of such ideals
were studied in \cite{D}, \cite{EHH}, \cite{EZ}, \cite{HHHKR}, \cite{SK}, \cite{SZ}, \cite{Z} and \cite{ZZ}. In \cite{EHHQ}, the authors introduced the binomial edge ideal of a pair of graphs, as a
generalization of the binomial edge ideal of a graph. Let $G_1$ be a graph on the vertex set $[m]$ and $G_2$ a graph on the vertex set $[n]$, and
let $X= (x_{ij})$ be an $(m\times n)$-matrix of indeterminates. Let
$S=K[X]$ be the polynomial ring in the variables $x_{ij}$, where $i=1,\ldots,m$ and $j=1,\ldots,n$. Let $e=\{i,j\}$ for some $1\leq i < j\leq m$
and $f=\{t, l\}$ for some $1\leq t < l\leq n$. To
the pair $(e,f)$, the following $2$-minor of $X$ is assigned:
\[p_{e,f}=[i,j|t,l]=x_{it}x_{jl}-x_{il}x_{jt}.\] Then, the ideal \[J_{G_1,G_2}=(p_{e,f}:~e\in E(G_1), f\in E(G_2))\] is called
the \textbf{binomial edge ideal of the pair} $(G_1,G_2)$. Some properties of this ideal were studied in \cite{SK1}. Note that if $G_1$ is just an edge, then $J_{G_1,G_2}$ is isomorphic to $J_{G_2}$.
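For example, for the path $P_3$ with edges $\{1,2\}$ and $\{2,3\}$, one has $J_{P_3}=(x_1y_2-x_2y_1,\, x_2y_3-x_3y_2)$ in $K[x_1,x_2,x_3,y_1,y_2,y_3]$.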
In \cite{SK1}, the authors posed a question about the Castelnuovo-Mumford regularity of the binomial edge ideal of a graph, namely whether $\mathrm{reg}(J_G)\leq c(G)+1$, where $c(G)$ is the number of maximal cliques of $G$. They answered this question for closed graphs in \cite{SK}. In \cite{MM}, the authors obtained an upper bound for the regularity of the binomial edge ideal of a graph on $n$ vertices; they showed that $\mathrm{reg}(J_G)\leq n$. They also conjectured that $\mathrm{reg}(J_G)\leq n-1$ whenever $G$ is not $P_n$, the path on $n$ vertices. In this paper, we investigate the regularity of binomial edge ideals, and especially these two problems. This paper is organized as follows. In Section~\ref{Preliminaries}, we recall some definitions, facts and notation which will be used throughout the paper. In Section~\ref{The Castelnuovo-Mumford regularity of binomial edge ideals}, we show that the conjecture of Matsuda and Murai is true for every graph which has a cut edge or a simplicial vertex. Hence, we prove this conjecture for all chordal graphs. We also show that the regularity of the binomial edge ideal of the join of two graphs $G_1$ and $G_2$ (not both complete) is equal to $\mathrm{max}\{\mathrm{reg}(J_{G_1}),\mathrm{reg}(J_{G_2}),3\}$. Applying this fact, we prove the conjecture of Matsuda and Murai in the case of joins of graphs, and hence for complete $t$-partite graphs. Then, we generalize some results of Schenzel and Zafar about complete $t$-partite graphs. Using a discussion similar to the one in \cite{EZ}, we prove that the conjecture due to the present authors is true for a class of chordal graphs including block graphs, which we call "generalized block graphs". Hence, we extend the recent results of Ene and Zarojanu about block graphs.
Throughout the paper, we mean by a graph $G$, a simple graph. Moreover, if $V=\{v_1,\ldots,v_n\}$ is the vertex set of $G$ (which contains $n$ elements), then, for simplicity, we denote it by $[n]$.
\section{ Preliminaries }\label{Preliminaries}
\noindent In this section, we review some notions and facts around graphs and binomial ideals associated to graphs, which we need throughout.
In \cite{HHHKR}, the authors determined all graphs whose binomial edge ideal has a quadratic Gr\"{o}bner basis with respect to the lexicographic order induced by $x_1>\cdots >x_n>y_1>\cdots >y_n$, and called this class of graphs \textbf{closed graphs}. There are also some combinatorial descriptions of these graphs; for example, in \cite{EHH} the authors proved that a graph $G$ is closed if and only if there exists a labeling of $G$ such that all facets of $\Delta(G)$ are intervals $[a,b]\subseteq [n]$. Here, $\Delta(G)$ is the clique complex of $G$, the simplicial complex whose facets are the vertex sets of the maximal cliques of $G$. We say that a vertex of $G$ is a \textbf{free vertex} if it is a free vertex of the simplicial complex $\Delta(G)$, i.e., it is contained in just one facet of $\Delta(G)$. This is equivalent to saying that such a vertex $v$ has the property that $N_G(v)$ induces a complete subgraph of $G$; such a vertex $v$ is also called a \textbf{simplicial vertex}. By $N_G(v)$, we mean the set of all neighbors (i.e., adjacent vertices) of the vertex $v$ in $G$.
Let $G$ and $H$ be two graphs on $[m]$ and $[n]$, respectively. We denote by $G*H$, the \textit{join} (product) of two graphs $G$ and $H$, that is
the graph with vertex set $[m]\cup [n]$, and the edge set $E(G)\cup E(H)\cup \{\{v,w\}~:~v\in [m],~w\in [n]\}$. In particular, the cone of a vertex $v$ on a graph $G$ is defined to be their join, that is $v*G$, and is sometimes denoted by $\mathrm{cone}(v,G)$. Let $V$ be a set. To simplify our notation throughout this paper, we introduce the join of two collection of subsets of $V$, $\mathcal{A}$ and $\mathcal{B}$, denoted by $\mathcal{A}\circ \mathcal{B}$, as $\{A\cup B: A\in \mathcal{A}, B\in \mathcal{B}\}$. If $\mathcal{A}_1,\ldots,\mathcal{A}_t$ are collections of subsets of $V$, then we denote their join, by $\bigcirc_{i=1}^{t}\mathcal{A}_i$.
Let $G$ be a graph and $e=\{v,w\}$ an edge of it. Then vertices $v$ and $w$ are called the endpoints of $e$. If $\{e_1,\ldots,e_t\}$ is a set of edges of $G$, then by $G\setminus \{e_1,\ldots,e_t\}$, we mean the graph on the same vertex set as $G$ in which the edges $e_1,\ldots,e_t$ are omitted. Here, we simply write $G\setminus e$, instead of $G\setminus \{e\}$. An edge $e$ of $G$ whose deletion from the graph, implies a graph with more connected components than $G$, is called a \textbf{cut edge} of $G$. Now, we recall a notation from \cite{MSh}. If $v,w$ are two vertices of a graph $G=(V,E)$ and $e=\{v,w\}$ is not an edge of $G$, then $G_e$ is defined to be the graph on the vertex set $V$, and the edge set $E \cup \{\{x,y\}~:~x,y\in N_G(v)~\mathrm{or}~x,y\in N_G(w)\}$.
A vertex $v$ of $G$ whose deletion from the graph, implies a graph with more connected components than $G$, is called a \textbf{cut point} of $G$. A \textbf{nonseparable} graph is a connected and nontrivial graph with no cut points. A \textbf{block} of a graph is a maximal nonseparable subgraph of it. A \textbf{block graph} is a connected graph whose blocks are complete graphs (see \cite{H} for more information in this topic). A disconnected graph is called a block graph, if all of its connected components are block graphs. One can see that a graph $G$ is a block graph if and only if it is a chordal graph in which every two maximal cliques have at most one vertex in common. This class was considered in~\cite[Theorem~1.1]{EHH}. Here, we introduce a class of chordal graphs including block graphs: Let $G$ be a connected chordal graph such that for every three maximal cliques of $G$ which have a nonempty intersection, the intersection of each pair of them is the same. In other words, $G$ has the property that for every $F_i,F_j,F_k\in \Delta(G)$, if $F_i\cap F_j\cap F_k\neq \emptyset$, then $F_i\cap F_j=F_i\cap F_k=F_j\cap F_k$. We call $G$, a \textbf{generalized block} graph. A disconnected graph is called a generalized block graph, if all of its connected components are generalized block graphs. By the above, it is clear that a block graph is also a generalized block graph. Hence, a tree is also a generalized block graph. The graph depicted in Figure~\ref{Generalized} is a generalized block graph, which is not a block graph.
\begin{center}
\begin{figure}
\hspace{0 cm}
\includegraphics[height=2.1cm,width=3.2cm]{Generalized.eps}
\caption{\footnotesize{}}\hspace{2 cm}
\label{Generalized}
\end{figure}
\end{center}
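A concrete family of examples (which need not coincide with the graph depicted in Figure~\ref{Generalized}) is given by the graphs obtained by gluing $t\geq 2$ triangles along a common edge $\{u,v\}$: such a graph is chordal, its maximal cliques are the $t$ triangles, and any two or three of them intersect exactly in $\{u,v\}$, so it is a generalized block graph; it is not a block graph, since two maximal cliques share more than one vertex.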
Suppose that $G$ is a graph on $[n]$. Let $T$ be a subset of $[n]$, and let $G_1,\ldots,G_{c_G(T)}$ be the connected
components of $G_{[n]\setminus T}$, the induced subgraph of $G$ on $[n]\setminus T$. For each $G_i$, we denote by $\widetilde{G}_i$ the complete graph on the vertex set $V(G_i)$. If
there is no confusion, then we may simply write $c(T)$ instead of $c_G(T)$. Set $$P_T(G)=(\bigcup_{i\in T}\{x_i,y_i\}, J_{\widetilde{G}_1},\ldots,J_{\widetilde{G}_{c(T)}}).$$ Then, $P_T(G)$ is a prime ideal, where $\mathrm{height}\hspace{0.35mm}P_T(G)=n+|T|-c(T)$, by \cite[Lemma~3.1]{HHHKR}. Moreover, $J_G=\bigcap_{T\subset [n]}P_T(G)$, by \cite[Theorem~3.2]{HHHKR}. Hence, $\mathrm{dim}\hspace{0.35mm}S/J_G=\mathrm{max}\{n-|T|+c(T):T\subset [n]\}$, by \cite[Corollary~3.3]{HHHKR}. If each $i\in T$ is a cut point of the graph $G_{([n]\setminus T)\cup \{i\}}$, then we say that $T$ has \textbf{cut point
property} for $G$. Let $\mathcal{C}(G)=\{\emptyset\}\cup \{T\subset [n]:T~\mathrm{has~cut~point~property~for}~G\}$. One has $\mathcal{C}(G)=\{\emptyset\}$ if
and only if $G$ is a complete graph. On the other hand, denoted by $\mathcal{M}(G)$, we mean the set of all minimal prime ideals of $J_G$. Then, one has $T\in \mathcal{C}(G)$ if and only if $P_T(G)\in \mathcal{M}(G)$, by \cite[Corollary~3.9]{HHHKR}.
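To illustrate these notions, let $G=P_3$ be the path with edges $\{1,2\}$ and $\{2,3\}$. Then $\mathcal{C}(P_3)=\{\emptyset,\{2\}\}$, since $2$ is a cut point of $P_3$, while for instance $T=\{1\}$ does not have the cut point property, because removing the vertex $1$ from $P_3$ leaves the connected graph $(P_3)_{\{2,3\}}$. Hence $J_{P_3}=P_{\emptyset}(P_3)\cap P_{\{2\}}(P_3)=J_{K_3}\cap (x_2,y_2)$ and $\mathrm{dim}\hspace{0.35mm}S/J_{P_3}=\mathrm{max}\{3-0+1,3-1+2\}=4$.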
\section{ The Castelnuovo-Mumford regularity of binomial edge ideals }\label{The Castelnuovo-Mumford regularity of binomial edge ideals}
\noindent In this section, we deal with two recent conjectures on the Castelnuovo-Mumford regularity of the binomial edge ideal of a graph, one is due to Matsuda and Murai and the other is due to the authors. Recall that the \textbf{Castelnuovo-Mumford regularity} (or simply, \textbf{regularity}) of a graded $S$-module $M$ is defined as
$$\mathrm{reg}(M)=\mathrm{max}\{j-i~:~\beta_{i,j}(M)\neq 0\}.$$
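For instance, $\mathrm{reg}(J_{K_n})=2$ for $n\geq 2$, since $J_{K_n}$ has a linear resolution (see \cite[Theorem~2.1]{SK}), while $\mathrm{reg}(J_{P_n})=n$ for the path $P_n$, by \cite[Theorem~2.2]{EZ}.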
Denoted by $c(G)$, we mean the number of maximal cliques of the graph $G$. The following are those mentioned conjectures:\\
\noindent \textbf{Conjecture A.} (see \cite{MM}) Let $G$ be a graph on $n$ vertices which is not a path. Then $\mathrm{reg}(J_G)\leq n-1$.\\
\noindent \textbf{Conjecture B.} (see \cite{SK1}) Let $G$ be a graph. Then $\mathrm{reg}(J_G)\leq c(G)+1$.\\
The following bounds for the regularity were given in \cite{MM} and \cite{SK}:
\begin{thm}\label{Kiani-Saeedi}
\cite[Theorem~3.2]{SK} Let $G$ be a closed graph. Then $\mathrm{reg}(J_G)\leq c(G)+1$.
\end{thm}
\begin{thm}\label{Matsuda-Murai}
\cite[Theorem~1.1]{MM} Let $G$ be a graph on $n$ vertices. Then $\mathrm{reg}(J_G)\leq n$.
\end{thm}
Moreover, note that recently, Ene and Zarojanu computed the exact value of the regularity of the binomial edge ideal of a closed graph with respect to the graphical terms in \cite{EZ}, which also yields Theorem~\ref{Kiani-Saeedi}.
Also, by Theorem~\ref{Matsuda-Murai}, it is clear that Conjecture~B is true for trees and for unicyclic graphs whose unique cycles have length greater than three.
Before stating the main theorems, we introduce the notion of the \textit{reduced graph} of a graph.
\begin{defn}
{\em Let $G$ be a graph and $e$ a cut edge of $G$ such that its endpoints are the free vertices of $G\setminus e$. Then, we call $e$, a \textbf{free cut edge} of $G$. Suppose that $\{e_1,\ldots,e_t\}$ is the set of all free cut edges of $G$. Then, we call the graph $G\setminus \{e_1,\ldots,e_t\}$, the \textbf{reduced graph} of $G$, and denote it by $\mathcal{R}(G)$. If $G$ does not have any free cut edges, then we set $\mathcal{R}(G):=G$. }
\end{defn}
The following are two main theorems of this section:
\begin{thm}\label{main}
Let $G\neq P_n$ be a graph on $n$ vertices which is a disconnected graph, or else \\
{\em{(a)}} has a simplicial vertex, or \\
{\em{(b)}} has a cut edge, or \\
{\em{(c)}} is the join of two graphs, or \\
{\em{(d)}} is an $n$-cycle. \\
Then, we have $\mathrm{reg}(J_G)\leq n-1$.
\end{thm}
\begin{thm}\label{main2}
Suppose that $G$ is a graph such that every connected component of $\mathcal{R}(G)$ is a closed graph or a generalized block graph. Then, we have $\mathrm{reg}(J_G)\leq c(G)+1$.
\end{thm}
We proceed this section to prove the above theorems. The following theorem might be more general than Theorem~\ref{main2}.
\begin{thm}\label{construction}
Let $G$ be a connected graph on $n$ vertices and $R_1,\ldots,R_q$ the connected components of $\mathcal{R}(G)$. If Conjecture~B is true for all $R_1,\ldots,R_q$, then it is also true for $G$.
\end{thm}
To prove Theorem~\ref{construction}, we need the following propositions. Here, by $f_e$, we mean the binomial $f_{ij}=x_iy_j-x_jy_i$, where $e=\{i,j\}$ is an edge of $G$.
\begin{prop}\label{colon1}
Let $G$ be a graph and $e$ be a cut edge of $G$. Then we have \\
{\em{(a)}} $\beta_{i,j}(J_G)\leq \beta_{i,j}(J_{G\setminus e})+\beta_{i-1,j-2}(J_{{(G\setminus e)}_e})$, for all $i,j\geq 1$, \\
{\em{(b)}} $\mathrm{pd}(J_G)\leq \mathrm{max}\{\mathrm{pd}(J_{G\setminus e}),\mathrm{pd}(J_{{(G\setminus e)}_e})+1\}$, \\
{\em{(c)}} $\mathrm{reg}(J_G)\leq \mathrm{max}\{\mathrm{reg}(J_{G\setminus e}),\mathrm{reg}(J_{{(G\setminus e)}_e})+1\}$.
\end{prop}
\begin{proof}
It is enough to consider the short exact sequence $$0\longrightarrow S/J_{G\setminus e}:f_e(-2)\stackrel{f_e} \longrightarrow S/J_{G\setminus e}\longrightarrow S/J_G\rightarrow 0.$$ Then, the statements follow by the mapping cone, and the fact $J_{G\setminus e}:f_e=J_{{(G\setminus e)}_e}$, (see \cite[Theorem~3.4]{MSh}).
\end{proof}
Using the above proposition, we partially answer a question that J. Herzog had posed in a discussion with us. He asked if $\beta_{1,j}(J_G)=0$, for all $j>n$, where $G$ is a graph on $n\geq 4$ vertices. Note that in \cite[Theorem~2.2]{SK}, the authors mentioned that $\beta_{1,j}(J_G)=0$, for all $j>2n$.
\begin{cor}\label{Herzog's question}
Let $G$ be a forest on $n\geq 4$ vertices. Then $\beta_{1,j}(J_G)=0$, for all $j>n$.
\end{cor}
\begin{proof}
We use the induction on the number of edges of a forest. If $G$ is a graph with no edges, then $J_G=(0)$, and hence the result is obvious. Now, let $G$ be a forest on $n\geq 4$ vertices and at least one edge. Let $e$ be an arbitrary edge of $G$. So that $e$ is clearly a cut edge of $G$. Thus, by Proposition~\ref{colon1}, for all $j>n$, we have $\beta_{1,j}(J_G)\leq \beta_{1,j}(J_{G\setminus e})+\beta_{0,j-2}(J_{{(G\setminus e)}_e})$. But $\beta_{0,j-2}(J_{{(G\setminus e)}_e})=0$, since $j\geq 5$. Thus, we have $\beta_{1,j}(J_G)\leq \beta_{1,j}(J_{G\setminus e})$, for all $j>n$. By the induction hypothesis, we have $\beta_{1,j}(J_{G\setminus e})=0$, for all $j>n$. Thus, the desired result follows.
\end{proof}
In the statements of the above proposition, equality does not necessarily occur, because the free resolution obtained for $S/J_G$ by the mapping cone is not necessarily minimal. But, in a more special case, as in the next proposition, we get the minimal free resolution and hence the desired equalities.
\begin{prop}\label{colon2}
Let $G$ be a graph and $e$ be a free cut edge of $G$. Then we have \\
{\em{(a)}} $\beta_{i,j}(J_G)=\beta_{i,j}(J_{G\setminus e})+\beta_{i-1,j-2}(J_{G\setminus e})$, for all $i,j\geq 1$, \\
{\em{(b)}} $\mathrm{pd}(J_G)=\mathrm{pd}(J_{G\setminus e})+1$, \\
{\em{(c)}} $\mathrm{reg}(J_G)=\mathrm{reg}(J_{G\setminus e})+1$.
\end{prop}
\begin{proof}
The proof is as similar as Proposition~\ref{colon1}. Note that $J_{{(G\setminus e)}_e}=J_{G\setminus e}$, since $e$ is a free cut edge of $G$. So, one may consider the short exact sequence $$0\longrightarrow S/J_{G\setminus e}(-2)\stackrel{f_e} \longrightarrow S/J_{G\setminus e}\longrightarrow S/J_G\rightarrow 0$$ instead of that was mentioned in the proof of Proposition~\ref{colon1}. Let $\mathcal{E}$ be the minimal graded free resolution of $S/J_{G\setminus e}$. Now, consider the homomorphism of complexes $\Phi:\mathcal{E}(-2)\longrightarrow \mathcal{E}$, as the multiplication by $f_e$. In fact it is a lift of the map $S/J_{G\setminus e}(-2)\stackrel{f_e} \longrightarrow S/J_{G\setminus e}$. Obviously, the mapping cone over $\Phi$ resolves $S/J_G$. In addition, it is minimal, because $\mathcal{E}$ is minimal and all the maps in the complex homomorphism $\Phi$ are of positive degrees.
\end{proof}
\begin{cor}\label{reg-reduced}
Let $G$ be a connected graph and $R_1,\ldots,R_q$ the connected components of $\mathcal{R}(G)$. Then $\mathrm{reg}(J_G)=\sum_{i=1}^q\mathrm{reg}(J_{R_i})$.
\end{cor}
\begin{proof}
Since $\mathcal{R}(G)$ has $q$ connected components, $G$ has exactly $q-1$ free cut edges. So, by using Proposition~\ref{colon2} repeatedly, we have that $\mathrm{reg}(J_G)=\mathrm{reg}(J_{\mathcal{R}(G)})+q-1$. On the other hand, $\mathrm{reg}(J_{\mathcal{R}(G)})=\sum_{i=1}^{q}\mathrm{reg}(J_{R_i})-q+1$, and hence $\mathrm{reg}(J_G)=\sum_{i=1}^{q}\mathrm{reg}(J_{R_i})$.
\end{proof}
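For example, let $G$ be the graph obtained from two disjoint triangles by joining them with an edge $e$. Then $e$ is a free cut edge of $G$, $\mathcal{R}(G)$ is the disjoint union of two triangles, and hence Corollary~\ref{reg-reduced} gives $\mathrm{reg}(J_G)=\mathrm{reg}(J_{K_3})+\mathrm{reg}(J_{K_3})=4$, since binomial edge ideals of complete graphs have regularity $2$ (see \cite[Theorem~2.1]{SK}).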
{\em Proof of Theorem~\ref{construction}.}
By Corollary~\ref{reg-reduced}, we have $\mathrm{reg}(J_G)=\sum_{i=1}^q\mathrm{reg}(J_{R_i})$. By the assumption, Conjecture~B is true for $R_i$, for all $i=1,\ldots,q$, so that $\mathrm{reg}(J_{R_i})\leq c(R_i)+1$. Thus, $\mathrm{reg}(J_G)\leq \sum_{i=1}^qc(R_i)+q$. On the other hand, $c(G)=\sum_{i=1}^qc(R_i)+q-1$, because $\mathcal{R}(G)$ has $q$ connected components and hence $G$ has $q-1$ free cut edges. Thus, $\mathrm{reg}(J_G)\leq c(G)+1$, which implies that the conjecture is also true for $G$. $~~~~~~~~\Box$ \\
Combining Proposition~\ref{colon2} and \cite[Theorem~2.2]{EZ}, we get the following:
\begin{cor}\label{reg-closed-Ene}
Let $G$ be a connected graph and $R_1,\ldots,R_q$ the connected components of $\mathcal{R}(G)$. If $\mathcal{R}(G)$ is closed, then $$\mathrm{reg}(J_G)=\sum_{i=1}^ql_i+\sharp\{\mathrm{free~cut~edges~of~}G\}+1,$$ where $l_i$ is the length of the longest induced path in $R_i$.
\end{cor}
The following corollary compares the linear strand of $J_G$ and $J_{G\setminus e}$.
\begin{cor}\label{linear strand1}
Let $G$ be a graph and $e$ be a cut edge of $G$. Then $\beta_{i,i+2}(J_G)\leq \beta_{i,i+2}(J_{G\setminus e})$, for all $i\geq 1$.
In particular, if $e$ is a free cut edge of $G$, then $\beta_{i,i+2}(J_G)=\beta_{i,i+2}(J_{G\setminus e})$, for all $i\geq 1$.
\end{cor}
Now, we will get ready to prove the above theorem. We divide the rest of this section into three subsections, each helps us to prove the main theorems of this section.
\subsection{Graphs with a simplicial vertex}\label{Generalized block graphs}
Here, we focus on graphs containing a simplicial vertex. This class of graphs includes a famous class of graphs, i.e. chordal graphs. Indeed, we show that Conjecture~A is true in this case. Moreover, we prove Conjecture~B for a class of chordal graphs containing block graphs. The following is one of the main theorems of this subsection:
\begin{thm}\label{chordal}
Let $G\neq P_n$ be a graph on $[n]$ which contains a simplicial vertex. Then $\mathrm{reg}(J_G)\leq n-1$.
\end{thm}
To prove the above theorem, we need some facts which are mentioned below.
\begin{prop}\label{reg-chordal}
Let $G$ be a graph and $e$ be an edge of $G$. Then we have
$$\mathrm{reg}(J_G)\leq \mathrm{max}\{\mathrm{reg}(J_{G\setminus e}), \reg(J_{G\setminus e}:f_e)+1\}.$$
\end{prop}
\begin{proof}
It suffices to consider the short exact sequence $$0\longrightarrow S/J_{G\setminus e}:f_e(-2)\stackrel{f_e} \longrightarrow S/J_{G\setminus e}\longrightarrow S/J_G\rightarrow 0.$$ Then, the statement follows by applying the mapping cone and \cite[Corollary~18.7]{P}.
\end{proof}
\begin{thm}\label{M-colon1}
\cite[Theorem~3.4]{MSh} Let $G$ be a graph and $e$ be a cut edge of $G$. Then we have $J_{G\setminus e}:f_e=J_{{(G\setminus e)}_e}$.
\end{thm}
\begin{thm}\label{M-colon2}
\cite[Theorem~3.7]{MSh} Let $G$ be a graph and $e=\{i,j\}$ be an edge of $G$. Then we have
$$J_{G\setminus e}:f_e=J_{{(G\setminus e)}_e}+I_G,$$
where $I_G=(g_{P,t}~:~P:i,i_1,\ldots,i_s,j~\mathrm{is~a~path~between~}i,j~\mathrm{in}~G~\mathrm{and}~0\leq t\leq s)$, $g_{P,0}=x_{i_1}\cdots x_{i_s}$ and for every $1\leq t\leq s$, $g_{P,t}=y_{i_1}\cdots y_{i_t}x_{i_{t+1}}\cdots x_{i_s}$.
\end{thm}
\begin{lem}\label{colon3}
Let $G$ be a graph on $[n]$, $v$ a simplicial vertex of $G$ with $\mathrm{deg}_{G}(v)\geq 2$, and $e$ an edge incident with $v$. Then we have $\reg(J_{G\setminus e}:f_e)\leq n-2$.
\end{lem}
\begin{proof}
Let $v_1,\ldots,v_t$ be all the neighbors of the simplicial vertex $v$, and $e_1,\ldots,e_t$ be the edges joining $v$ to $v_1,\ldots,v_t$, respectively, where $t\geq 2$. Without loss of generality, assume that $e:=e_t$. Note that for each $i=1,\ldots,t-1$, $v,v_i,v_t$ is a path between $v$ and $v_t$ in $G\setminus e$, so that for all $i=1,\ldots,t-1$, $x_i$ and $y_i$ are in the minimal monomial set of generators of $I_G$. Also, all other paths between $v$ and $v_t$ in $G\setminus e$ contain $v_i$ for some $i=1,\ldots,t-1$. Thus, all the monomials correspond to these paths, are divisible by either $x_i$ or $y_i$ for some $i=1,\ldots,t-1$. Hence, we have
$I_G=(x_i,y_i:1\leq i\leq t-1)$. So that $J_{G\setminus e}:f_e=J_{{(G\setminus e)}_e}+(x_i,y_i:1\leq i\leq t-1)$. The binomial generators of $J_{{(G\setminus e)}_e}$ correspond to the edges containing vertices $v_1,\ldots,v_{t-1}$, are contained in $I_G$. Let $H:={(G\setminus e)}_e$. Then, we have $J_{G\setminus e}:f_e=J_{H_{[n]\setminus \{v,v_1,\ldots,v_{t-1}\}}}+(x_i,y_i:1\leq i\leq t-1)$, since $v$ is a free vertex of $\Delta(G)$. Thus, $\reg(J_{G\setminus e}:f_e)=\reg(J_{H_{[n]\setminus \{v,v_1,\ldots,v_{t-1}\}}})$. But, $\reg(J_{H_{[n]\setminus \{v,v_1,\ldots,v_{t-1}\}}})\leq n-2$, by Theorem~\ref{M-colon1}, since $t\geq 2$. Therefore, $\reg(J_{G\setminus e}:f_e)\leq n-2$, as desired.
\end{proof}
Now, we go to the proof of Theorem~\ref{chordal}: \\
{\em proof of Theorem~\ref{chordal}.}
We use induction on the number of the vertices. Let $G$ be a graph on $[n]$, with a simplicial vertex, which is not a path. We consider two following cases:
(i) Suppose that $G$ has a simplicial vertex which is a leaf, say $v$. Then, assume that $w$ is the only neighbor of $v$, and $e=\{v,w\}$ is the edge joining $v$ and $w$. We have $\mathrm{reg}(J_{G\setminus e})=\mathrm{reg}(J_{{(G\setminus e)}_{[n]\setminus v}})$, since $v$ is an isolated vertex of $G\setminus e$. Thus, by Theorem~\ref{Matsuda-Murai}, $\mathrm{reg}(J_{G\setminus e})\leq n-1$. On the other hand, we have $\reg(J_{G\setminus e}:f_e)=\reg(J_{{(G\setminus e)}_e})$, by Theorem~\ref{M-colon1}. Note that $v$ is also an isolated vertex of ${(G\setminus e)}_e$, so that we can disregard it in computing the regularity. Thus, $\reg(J_{{(G\setminus e)}_e})\leq n-2$, by the induction hypothesis, since ${(G\setminus e)}_e$ has $w$ as a simplicial vertex. Hence, $\reg(J_{G\setminus e}:f_e)+1\leq n-1$. Thus, by Proposition~\ref{reg-chordal}, we get $\reg(J_G)\leq n-1$.
(ii) Suppose that all the simplicial vertices of $G$ have degree greater than one. Let $v$ be a simplicial vertex of $G$ and $v_1,\ldots,v_t$ be all the neighbors of $v$, and $e_1,\ldots,e_t$ be the edges joining $v$ to $v_1,\ldots,v_t$, respectively, where $t\geq 2$. Using Proposition~\ref{reg-chordal} and Lemma~\ref{colon3} repeatedly, we get $\reg(J_G)\leq \mathrm{max}\{\reg(J_{G\setminus \{e_1,\ldots,e_{t-1}\}}),n-1\}$. Note that $G\setminus \{e_1,\ldots,e_{t-1}\}$ is a graph on $n$ vertices in which $v$ is a leaf. Thus, by case~(i), we have $\reg(J_{G\setminus \{e_1,\ldots,e_{t-1}\}})\leq n-1$. Thus, $\reg(J_G)\leq n-1$.
Therefore, by the above cases, we get the desired result. ~~~$~~~~~~~~\Box$ \\
Now, recall that a facet $F$ of a simplicial complex $\Delta$ is
called a \textbf{leaf}, if either $F$ is the only facet, or there exists a facet $G$, called a \textbf{branch} of $F$, such that for each facet $H$ of $\Delta$, with $H\neq F$, one has $H\cap F \subseteq G\cap F$. One can see that each leaf $F$ has at least a free vertex. A simplicial complex $\Delta$ is called a \textbf{quasi-forest} if its facets can be ordered as $F_1,\ldots,F_r$ such that for all $i>1$, $F_i$ is a leaf of $\Delta$ with facets $F_1,\ldots,F_{i-1}$. Such an order of the facets is called a \textbf{leaf order}. A connected quasi-forest is called a \textbf{quasi-tree}.
The following theorem extends \cite[Theorem~2.9~and~Corollary~2.10]{EZ} to a wider class of chordal graphs.
\begin{thm}\label{reg-generalized}
Let $G$ be a generalized block graph on $[n]$. Then $$\mathrm{reg}(J_G)\leq c(G)+1.$$
\end{thm}
\begin{proof}
Our proof is similar to the proof of \cite[Theorem~2.9]{EZ}, which is based on the technique applied in the proof of \cite[Theorem~1.1]{EHH}. So, we omit some details. By Dirac's theorem (see \cite{D}), $\Delta(G)$ is a quasi tree, since $G$ is connected and chordal. Let $c:=c(G)$, and $F_1,\ldots,F_c$ be a leaf order of the facets of $\Delta(G)$. We use induction on $c$, the number of maximal cliques of $G$. If $c=1$, then the result is obvious. Let $c>1$ and $F_{t_1},\ldots,F_{t_q}$ be all the branches of the leaf $F_c$. Since $G$ is a generalized block graph, each pair of the facets $F_c,F_{t_1},\ldots,F_{t_q}$ intersect in exactly the same set of vertices, say $A$, and also, $F_c\cap F_l=\emptyset$, for all $l\neq t_1,\ldots,t_q$, as $F_c$ is a leaf. Hence, we have that $A\cap F_l=\emptyset$, for all $l\neq t_1,\ldots,t_q$. On the other hand, for $T\subset [n]$ with $T\in \mathcal{C}(G)$, we have that $A\nsubseteq T$ if and only if $A\cap T=\emptyset$; because if $|A|>1$ and $v\in A\cap T$, then $v$ is not a cut point of the graph $G_{([n]\setminus T)\cup \{v\}}$, as $A\setminus T\neq \emptyset$, so it is a contradiction. Thus, let $J_G=Q\cap Q'$, where
\begin{equation}
Q=\bigcap_{\substack{
T\in \mathcal{C}(G) \\
A\cap T=\emptyset
}}
P_T(G)~~,~~Q'=\bigcap_{\substack{
T\in \mathcal{C}(G) \\
A\subseteq T
}}
P_T(G).
\nonumber
\end{equation}
Similar to the proof of \cite[Theorem~2.9]{EZ} and \cite[Theorem~1.1]{EHH}, let $G'$ be the graph obtained from $G$, by replacing the cliques $F_c,F_{t_1},\ldots,F_{t_q}$, by a clique on the vertex set $F_c\cup (\bigcup_{j=1}^qF_{t_j})$. One can see that $Q=J_{G'}$ and $Q'=(x_i,y_i:i\in A)+J_{G_{[n]\setminus A}}$. Thus, we have $Q+Q'=(x_i,y_i:i\in A)+J_{{G'}_{[n]\setminus A}}$. By the definition of generalized block graphs, it is not difficult to see that $G'$, $G_{[n]\setminus A}$ and ${G'}_{[n]\setminus A}$ are also generalized block graphs. Then, considering the short exact sequence $$0\rightarrow J_G\rightarrow Q\oplus Q'\rightarrow Q+Q'\rightarrow 0,$$ the result follows, as in \cite[Theorem~2.9]{EZ}.
\end{proof}
\subsection{Join of Graphs}\label{Join of Graphs}
Here, we focus on the join of two graphs and deal with the conjectures for this class of graphs. Consequently, we gain some results on complete $t$-partite graphs, which generalize some previous results.
Note that the join of two complete graphs is also obviously complete, so that its binomial edge ideal has a linear resolution, by \cite[Theorem~2.1]{SK}, and hence its regularity is equal to $2$. The following theorem determines the regularity of the binomial edge ideal of the join of two graphs with respect to the original graphs', when they are not both complete graphs.
\begin{thm}\label{reg-join}
Let $G_1$ and $G_2$ be graphs on $[n_1]$ and $[n_2]$, respectively, not both complete. Then
$$\mathrm{reg}(J_{G_1*G_2})=\mathrm{max}\{\mathrm{reg}(J_{G_1}),\mathrm{reg}(J_{G_2}),3\}.$$
\end{thm}
To prove the above theorem, we need some facts which are mentioned in the sequel. If $H$ is a graph with connected components $H_1,\ldots,H_r$, then we denote it by $\bigsqcup_{i=1}^r H_i$.
\begin{prop}\label{both disconnected 1}
Suppose that $G_1=\bigsqcup_{i=1}^r G_{1i}$ and $G_2=\bigsqcup_{i=1}^s G_{2i}$ are two graphs on disjoint sets of vertices
$[n_1]=\bigcup_{i=1}^r [n_{1i}]$ and $[n_2]=\bigcup_{i=1}^s [n_{2i}]$,
respectively, where $r,s\geq 2$. Then we have
$$\mathcal{C}(G_1*G_2)=\{\emptyset\}\cup \big{(}(\bigcirc_{i=1}^{r}\mathcal{C}(G_{1i}))\circ \{[n_2]\} \big{)}\cup \big{(}(\bigcirc_{i=1}^{s}\mathcal{C}(G_{2i}))\circ \{[n_1]\} \big{)}.$$
\end{prop}
\begin{proof}
Let $G:=G_1*G_2$ and $T\in (\bigcirc_{i=1}^{r}\mathcal{C}(G_{1i}))\circ \{[n_2]\}$. So, $T=[n_2]\cup (\bigcup_{i=1}^r T_{1i})$, where $T_{1i}\in \mathcal{C}(G_{1i})$, for $i=1,\ldots,r$. We show that $T$ has cut point property. Let $j\in T$. If $j\in T_{1i}$, for some $i=1,\ldots,r$, then $G_{([n]\setminus T)\cup \{j\}}={G_{1i}}_{([n_{1i}]\setminus T_{1i})\cup \{j\}}\sqcup (\bigsqcup_{l=1,l\neq i}^r {G_{1l}}_{([n_{1l}]\setminus T_{1l})})$. In this case, $j$ is a cut point of ${G_{1i}}_{([n_{1i}]\setminus T_{1i})\cup \{j\}}$, since $T_{1i}\in \mathcal{C}(G_{1i})$. So that $j$ is also a cut point of $G_{([n]\setminus T)\cup \{j\}}$. If $j\in [n_2]$, then $G_{([n]\setminus T)\cup \{j\}}=j*\bigsqcup_{i=1}^r {G_{1i}}_{([n_{1i}]\setminus T_{1i})}$. So, $j$ is a cut point of $G_{([n]\setminus T)\cup \{j\}}$, since $G_{([n]\setminus T)}$ is disconnected. Thus, in both cases, $T$ has cut point property. If $T\in (\bigcirc_{i=1}^{s}\mathcal{C}(G_{2i}))\circ \{[n_1]\}$, then similarly, we have $T\in \mathcal{C}(G)$.
For the other inclusion, let $\emptyset \neq T\in \mathcal{C}(G)$. If $T$ contains neither $[n_1]$ nor $[n_2]$, then $G_{[n]\setminus T}$ is connected, and hence no element $i$ of $T$ is a cut point of $G_{([n]\setminus T)\cup \{i\}}$. So, we have $[n_1]\subseteq T$ or $[n_2]\subseteq T$. Suppose that $[n_1]\subseteq T$. Then, $T=[n_1]\cup (\bigcup_{i=1}^s T_{2i})$, where $T_{2i}\subseteq [n_2]$, for $i=1,\ldots,s$. Let $1\leq i\leq s$. If $T_{2i}=\emptyset$, then, clearly, $T_{2i}\in \mathcal{C}(G_{2i})$. If $T_{2i}\neq \emptyset$, then
each $j\in T_{2i}$, is a cut point of $G_{([n]\setminus T)\cup \{j\}}$, since $T\in \mathcal{C}(G)$. So that $j$ is a cut point of ${G_{2i}}_{([n_{2i}]\setminus T_{2i})\cup \{j\}}$, because $j\in T_{2i}$ and $G_{2i}$'s are on disjoint sets of vertices. Thus, $T_{2i}\in \mathcal{C}(G_{2i})$. Therefore, $T\in (\bigcirc_{i=1}^{s}\mathcal{C}(G_{2i}))\circ \{[n_1]\}$. If $[n_2]\subseteq T$, then similarly we get $T\in (\bigcirc_{i=1}^{r}\mathcal{C}(G_{1i}))\circ \{[n_2]\}$.
\end{proof}
The following is a corollary of a result in \cite{SK1} about induced subgraphs of a graph:
\begin{prop}\label{induced}
\cite[Proposition~8]{SK1} Let $G$ be a graph and $H$ an induced subgraph of $G$. Then we have \\
{\em{(a)}} $\beta_{i,j}(J_{H})\leq \beta_{i,j}(J_{G})$, for all $i,j$.\\
{\em{(b)}} $\mathrm{reg}(J_{H})\leq \mathrm{reg}(J_{G})$. \\
{\em{(c)}} $\mathrm{pd}(J_{H})\leq \mathrm{pd}(J_{G})$.
\end{prop}
{\em Proof of Theorem~\ref{reg-join}.} Let $G:=G_1*G_2$. Note that since $G$ is not a complete graph, $J_G$ does not have a linear resolution, by \cite[Theorem~2.1]{SK}. So that $\mathrm{reg}(J_G)\geq 3$. On the other hand, by Proposition~\ref{induced}, $\mathrm{reg}(J_G)\geq\mathrm{reg}(J_{G_1})$ and $\mathrm{reg}(J_G)\geq\mathrm{reg}(J_{G_2})$, because $G_1$ and $G_2$ are induced subgraphs of $G$. So, $\mathrm{reg}(J_G)\geq \mathrm{max}\{\mathrm{reg}(J_{G_1}),\mathrm{reg}(J_{G_2}),3\}$. For the other inequality, first, suppose that $G_1$ and $G_2$ are both disconnected graphs. Let $G_1=\bigsqcup_{i=1}^r G_{1i}$ and $G_2=\bigsqcup_{i=1}^s G_{2i}$ be two graphs on disjoint sets of vertices
$[n_1]=\bigcup_{i=1}^r [n_{1i}]$ and $[n_2]=\bigcup_{i=1}^s [n_{2i}]$,
respectively, where $r,s\geq 2$. By Proposition~\ref{both disconnected 1}, $\mathcal{C}(G)=\{\emptyset\}\cup \big{(}(\bigcirc_{i=1}^{r}\mathcal{C}(G_{1i}))\circ \{[n_2]\} \big{)}\cup \big{(}(\bigcirc_{i=1}^{s}\mathcal{C}(G_{2i}))\circ \{[n_1]\} \big{)}$. So, $J_{G}=Q\cap Q'$, where
\begin{equation}
Q=\bigcap_{\substack{
T\in \mathcal{C}(G) \\
[n_1]\subseteq T
}}
P_T(G)~~,~~Q'=\bigcap_{\substack{
T\in \mathcal{C}(G) \\
[n_1]\nsubseteq T
}}
P_T(G).
\nonumber
\end{equation}
Thus, we have
\begin{equation}
Q=(x_i,y_i:i\in [n_1])+\bigcap_{\substack{
T\in \mathcal{C}(G) \\
[n_1]\subseteq T
}}
P_{T\setminus [n_1]}(G_2)
\nonumber
\end{equation}
and
\begin{equation}
Q'=P_{\emptyset}(G)\cap\big{(}\bigcap_{\substack{
\emptyset\neq T\in \mathcal{C}(G) \\
[n_1]\nsubseteq T
}}
P_T(G)\big{)}
=P_{\emptyset}(G)\cap \big{(}(x_i,y_i:i\in [n_2])+\bigcap_{\substack{
T\in \mathcal{C}(G) \\
[n_2]\subseteq T
}}
P_{T\setminus [n_2]}(G_1)\big{)}.
\nonumber
\end{equation}
So, one can see that $Q=(x_i,y_i:i\in [n_1])+J_{G_2}$, $Q'=J_{K_n}\cap \big{(}(x_i,y_i:i\in [n_2])+J_{G_1}\big{)}$ and $Q+Q'=(x_i,y_i:i\in [n_1])+J_{K_{n_2}}$.
Now, consider the short exact sequence $$0\rightarrow J_G\rightarrow Q\oplus Q'\rightarrow Q+Q'\rightarrow 0.$$
By \cite[Corollary~18.7]{P}, we have $\mathrm{reg}(J_G)\leq \mathrm{max}\{\mathrm{reg}(Q),\mathrm{reg}(Q'),\mathrm{reg}(Q+Q')+1\}$. On the other hand, we have $\mathrm{reg}(Q)=\mathrm{reg}(J_{G_2})$, $\mathrm{reg}(Q')\leq \mathrm{max}\{\mathrm{reg}(J_{G_1}),\mathrm{reg}(K_{n_1})+1=3\}$ (by using a suitable short exact sequence as above), and $\mathrm{reg}(Q+Q')=\mathrm{reg}(K_{n_2})+1=3$. Hence, $\mathrm{reg}(J_G)\leq \mathrm{max}\{\mathrm{reg}(J_{G_2}),\mathrm{reg}(J_{G_1}),3\}$. Now, suppose that $G_1$ or $G_2$ is connected. We add an isolated vertex $v$ to $G_1$ and an isolated vertex $w$ to $G_2$. Thus, we obtain two disconnected graphs $G_1'$ and $G_2'$. So, by the above discussion, we have $\mathrm{reg}(J_{G_1'*G_2'})\leq \mathrm{max}\{\mathrm{reg}(J_{G_1'}),\mathrm{reg}(J_{G_2'}),3\}$. But, clearly, we have $\mathrm{reg}(J_{G_1'})=\mathrm{reg}(J_{G_1})$ and $\mathrm{reg}(J_{G_2'})=\mathrm{reg}(J_{G_2})$, so that $\mathrm{reg}(J_{G_1'*G_2'})\leq \mathrm{max}\{\mathrm{reg}(J_{G_1}),\mathrm{reg}(J_{G_2}),3\}$. Thus, the result follows by Proposition~\ref{induced}, since $G_1*G_2$ is an induced subgraph of $G_1'*G_2'$. ~~~$~~~~~~~~\Box$
\begin{rem}\label{Sharpness}
{\em By Theorem~\ref{reg-join}, if $G$ is a (multi-)fan graph (i.e. $K_1*\bigsqcup_{i=1}^t P_{n_i}$ for some $t\geq 1$, which might be a non-closed graph), then $\mathrm{reg}(J_G)=c(G)+1$. This implies that if Conjecture~B is true, then the given bound is sharp. }
\end{rem}
\begin{cor}\label{ConjB-join}
Let $G_1$ and $G_2$ be two graphs on $[n_1]$ and $[n_2]$, respectively. If Conjecture~B is true for $G_1$ and $G_2$, then it is also true for $G_1*G_2$.
\end{cor}
\begin{proof}
It is enough to note that $c(G_1*G_2)=c(G_1)c(G_2)$, and if $G_1$ and $G_2$ are complete graphs, then $G_1*G_2$ is also complete and Conjecture~B is true for it.
\end{proof}
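As an elementary sanity check of the identity $c(G_1*G_2)=c(G_1)c(G_2)$ used in the proof above (assuming, as in the conjecture discussed here, that $c(G)$ stands for the number of maximal cliques of $G$), the following Python sketch counts maximal cliques of joins of small graphs. It assumes the \texttt{networkx} library; the helper \texttt{join} is our own ad hoc construction and not part of that library.
\begin{verbatim}
# Sketch: check c(G1 * G2) = c(G1) * c(G2) on small examples,
# where c(G) is the number of maximal cliques of G.
import networkx as nx

def join(G1, G2):
    # join on disjoint vertex copies: G1 + G2 plus all edges between them
    H1 = nx.relabel_nodes(G1, {v: ("a", v) for v in G1})
    H2 = nx.relabel_nodes(G2, {v: ("b", v) for v in G2})
    G = nx.union(H1, H2)
    G.add_edges_from((u, v) for u in H1 for v in H2)
    return G

def c(G):
    return sum(1 for _ in nx.find_cliques(G))   # number of maximal cliques

examples = [(nx.path_graph(4), nx.cycle_graph(5)),
            (nx.empty_graph(3), nx.path_graph(3)),
            (nx.complete_graph(3), nx.cycle_graph(4))]
for G1, G2 in examples:
    assert c(join(G1, G2)) == c(G1) * c(G2)
print("c(G1*G2) = c(G1)c(G2) holds on the test cases")
\end{verbatim}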
The following corollary proves the conjecture of Matsuda and Murai in the case of the join of graphs:
\begin{cor}\label{ConjA-join}
Let $G$ be a graph on $n$ vertices which is the join of two graphs. If $G$ is neither $P_2$ nor $P_3$, then $\mathrm{reg}(J_{G})\leq n-1$.
\end{cor}
\begin{proof}
It is enough to apply Theorem~\ref{reg-join} and Theorem~\ref{Matsuda-Murai}.
\end{proof}
The following corollary generalizes the result of \cite{SZ} on the regularity of complete bipartite graphs:
\begin{cor}\label{reg-t-partite}
Let $G$ be a complete $t$-partite graph, where $t\geq 2$. If $G$ is not complete, then $\mathrm{reg}(J_{G})=3$. In particular, Conjecture~A is true for complete $t$-partite graphs.
\end{cor}
\begin{proof}
We use induction on $t\geq 2$, the number of parts. If $t=2$, then $G$ is the join of two graphs each consisting of some isolated vertices. So, the regularity of the binomial edge ideal of each of them is $0$. Thus, by Theorem~\ref{reg-join}, we have $\mathrm{reg}(J_{G})=3$. Now, suppose that $t>2$ and the result is true for every complete $(t-1)$-partite graph which is not complete. Let $V_1,\ldots,V_t$ be the parts of the vertex partition of $G$. Hence, we have
$G=G_{V_t}*G_{V\setminus V_t}$, and $G_{V\setminus V_t}$ is a complete $(t-1)$-partite graph. If $G_{V\setminus V_t}$ is a complete graph, then $|V_t|>1$, since, otherwise, $G$ is a complete graph, a contradiction. So, by Theorem~\ref{reg-join}, $\mathrm{reg}(J_{G})=3$. If $G_{V\setminus V_t}$ is not complete, then by the induction hypothesis, we have $\mathrm{reg}(J_{G_{V\setminus V_t}})=3$. Thus, again by Theorem~\ref{reg-join}, the result follows.
\end{proof}
\subsection{Proof of the main theorems}\label{Proof}
We now turn to the proofs of our main theorems. \\
{\em Proof of Theorem~\ref{main}.} If $G$ is a disconnected graph with $r\geq 2$ connected components $H_1,\ldots,H_r$, on $n_1,\ldots,n_r$ vertices, respectively, then we have $\mathrm{reg}(J_G)=\sum_{i=1}^{r}\mathrm{reg}(J_{H_i})-r+1$. By Theorem~\ref{Matsuda-Murai}, we have $\mathrm{reg}(J_{H_i})\leq n_i$. So that $\mathrm{reg}(J_G)\leq \sum_{i=1}^rn_i-r+1=n-r+1\leq n-1$, since $r\geq 2$. So, the result follows in this case. Now, suppose that $G$ has a cut edge $e$. Then, $G\setminus e$ is disconnected and hence $\mathrm{reg}(J_{G\setminus e})\leq n-1$. On the other hand, ${(G\setminus e)}_e$ has two connected components, say $G_1$ and $G_2$, which both have a simplicial vertex. So that Conjecture~A is true for both of them, by Theorem~\ref{chordal}. Note that $G_1$ and $G_2$ are not both paths, as $G$ is not. Thus, as mentioned above, we have $\mathrm{reg}(J_{{(G\setminus e)}_e})=\mathrm{reg}(J_{G_1})+\mathrm{reg}(J_{G_2})-1\leq n-2$. So, by Proposition~\ref{colon2}, we get the result in this case too. Now, combining Theorem~\ref{reg-chordal}, Theorem~\ref{reg-join} and \cite[Corollary~3.8]{ZZ}, we get the result. $~~~~~~~~\Box$
\\
{\em Proof of Theorem~\ref{main2}.} By Theorem~\ref{construction}, Theorem~\ref{Kiani-Saeedi} and Theorem~\ref{reg-generalized}, the result follows.
$~~~~~~~~\Box$ \\
\textbf{Acknowledgments:} The authors would like to thank Professor J. Herzog for some useful comments. Also, the authors would like to thank the Institute for Research in Fundamental Sciences (IPM) for financial support. The research of the first author was in part supported by a grant from IPM (No. 92050220).
\section{Introduction}
The ubiquity of the normal distribution as indicated by the Central Limit Theorem (CLT) is
a somewhat mysterious result. One of the possible explanations for it is an interpretation
of the CLT as a fixed-point theorem; for a plethora of approaches see
\cite{Tro59,Gol76,HW84,Bar86,Swa91,Sin92}. The starting point for this analysis is the
following weak form of the CLT for independent identically distributed random variables.
Let the operator $T$ be defined on probability measures on $\ensuremath{\mathbb R}$ by $T \mu = (\mu \ast \mu)
\circ S_{\frac{1}{\sqrt{2}}}$. Here $\ast$ is the convolution, and $S_r$ (for ``scaling'')
is the dilation operator, $d(\mu \circ S_r)(x) = d\mu(r^{-1}x)$. We call this operator $T$
the \emph{central limit} operator. The theorem follows from the CLT, and is well known.
\begin{Thmn}
The fixed points of the operator $T$ are the scaled normal distributions $\chi \circ S_t$.
If $\mu$ is a probability measure with zero mean and unit variance, then the iterations
$T^n \mu$ weakly converge to $\chi$.
\end{Thmn}
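As a quick numerical illustration of this fixed-point picture, the following Python sketch (a rough Monte Carlo experiment; pairing a sample with a random permutation of itself is used as a cheap stand-in for an independent copy) iterates the map $X\mapsto (X+X')/\sqrt{2}$ on the sample level, starting from the two-point law $\pm 1$, and tracks the empirical moments, which approach the Gaussian values.
\begin{verbatim}
# Sketch: iterate T mu = (mu * mu) o S_{1/sqrt 2} at the sample level:
# X_{k+1} = (X_k + X_k') / sqrt(2) with X_k, X_k' (approximately) independent.
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
x = rng.choice([-1.0, 1.0], size=N)     # centered, unit-variance two-point law

for _ in range(8):
    xp = rng.permutation(x)             # cheap stand-in for an independent copy
    x = (x + xp) / np.sqrt(2.0)

# the empirical 3rd and 4th moments approach the Gaussian values 0 and 3
print(np.mean(x), np.var(x), np.mean(x**3), np.mean(x**4))
\end{verbatim}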
In this approach, the starting point for the study of the CLT is the investigation of the
operator $T$. This operator is clearly non-linear, and as the first approximation we
consider the properties of the linearization of this operator \cite{Sin92}. These
properties are of course well-known (although we have not found adequate references; but
see \cite{Sin92} and also \cite{Sin76}). However, there is now a different version of
probability theory, with its own CLT, which has not been investigated to the same degree.
This is the free probability theory of Voiculescu (for an introduction, see e.g.
\cite{VDN92}), which, in particular, turns out to describe the behavior of certain large
random matrices. In this theory the notion of (commutative) independence is replaced by the
notion of free independence (for the operator-theoretic definition and the motivation
behind it see e.g. \cite{VDN92}). Now the classical convolution can be defined in terms of
independence as follows: $\mu \ast \nu$ is the distribution of the sum of two independent
random variables with distributions $\mu$ and $\nu$ (and it is a theorem that the
distribution of the sum of independent random variables depends only on the distributions
of the summands). Correspondingly, in free probability theory one defines the (additive)
free convolution of measures $\mu \boxplus \nu$ as the distribution of the sum of two
freely independent random variables with distributions $\mu$ and $\nu$
\cite{Voi85,Voi86,BV93,VDN92} (and the above comment applies). Thus the \emph{free central
limit operator} is $T(\mu) = (\mu \boxplus \mu) \circ S_{\frac{1}{\sqrt{2}}}$. One of the
main technical differences between classical and free theories of probability is that the
operator of convolution with a given measure is linear, while the operator of taking a free
convolution with a given measure is highly non-linear. However, the above operator $T$ is
non-linear even in the classical case, and thus one can expect similarities between
linearizations of the classical and free versions of this operator.
We have two somewhat different approaches at our disposal. The original one, initiated and
largely developed by Voiculescu \cite{Voi86,BV93, VDN92} (see also \cite{Maa92}), is to
define a certain operation, called the \emph{$R$-transform}, on the space of analytic
functions, which linearizes the additive free convolution. Thus this operation is an
analogue of the logarithm of the Fourier transform in the classical case. Another
approach, due to Speicher \cite{Spe90} and developed, among others, by Speicher and Nica
\cite{Spe94,Nic95}, is to use a certain analogue of the classical combinatorial
moment-cumulant formula. This approach is somewhat less general, but the parallel with the
classical situation is more explicit.
In what follows we want to indicate the parallels between the classical and the free case.
Therefore, whenever appropriate, we will use the same notation for both cases. The
situations where (important) differences between the two theories arise will also be
indicated.
\noindent {\bf Acknowledgments.} We would like to thank Prof.\ D.-V.~Voiculescu for
suggesting the problem as well as many helpful discussions. We would also like to thank
Prof.\ N.G.~Makarov for some suggestions.
\section{The Combinatorial Approach}
\label{sec:Comb}
\subsection{Notation}
Let $\mu$ be a probability measure. We denote by $T$ both the central limit operator
$T(\mu) = (\mu \ast \mu) \circ S_{\frac{1}{\sqrt{2}}}$ and the free central limit
operator $T(\mu) = (\mu \boxplus \mu) \circ S_{\frac{1}{\sqrt{2}}}$. It will be clear from
the context which one is meant. We denote by $\mathcal{T}$ the manifestations of $T$ on
auxiliary spaces: in Section~\ref{sec:Comb}, the spaces of sequences; in
Section~\ref{sec:Anal}, the spaces of continuous functions. Precise definitions will be
given at appropriate times. Also, we denote by $\chi$ the appropriate normal
distributions: standard Gaussian in the classical context and the standard Wigner
semicircle law \cite{Voi85} in the free context.
\subsection{Background}
For a measure $\mu$, its $n$-th moment is $m^\mu_n = \int x^n d \mu(x)$. In this section we
consider only probability measures whose moments of all orders are finite. In fact,
throughout most of the section we disregard the non-uniqueness and identify the measure
with its collection of moments. Thus let $\ensuremath{\mathcal{M}}$ be the space of all one-sided real-valued
sequences. We will call the elements of $\ensuremath{\mathcal{M}}$ the moment sequences and denote them by $\ensuremath{\mathbf{m}}
:= \{m_i\}_{i=1}^\infty$, even though only some of them are moment sequences of measures.
On $\ensuremath{\mathcal{M}}$ we set up the topology of entrywise convergence; this is the weak$^\ast$-topology
on the space $\ensuremath{\mathcal{M}}$ as the dual of the space of the ``eventually $0$'' sequences, and it
turns $\ensuremath{\mathcal{M}}$ into a topological vector space. Note that if the elements of a sequence in $\ensuremath{\mathcal{M}}$ do
in fact correspond to measures, and if its limit corresponds to a \emph{unique} measure,
then one has weak convergence of the corresponding measures \cite{Dur91}.
For every such moment sequence $\ensuremath{\mathbf{m}}$ there is also the corresponding \emph{free cumulant}
sequence $\ensuremath{\mathbf{c}} := \{c_i\}_{i=1}^\infty$ \cite{Spe94,Nic95} determined by
\begin{equation}
\label{eq:freemc}
\text{(free)} \qquad m_k = \sum_{\substack{\pi \in \mathcal{P}_{nc}(k)\\
\pi = \{B_1, \ldots, B_n\}}} \prod_{j=1}^{n} c_{\abs{B_j}}
\end{equation}
Here $\mathcal{P}_{nc}(k)$ is the set of \emph{noncrossing} partitions of the set
$\{1,\ldots,k\}$ \cite{Kre72,Spe90,Spe94,Nic95}, which can be described as follows: these
are partitions of the vertices of a $k$-gon such that the vertices in each class can be
connected by lines inside the $k$-gon so that the lines for different classes do not
cross. Also, $B_i$-s denote the classes of the partition $\pi$, and $\abs{B_i}$ denotes
the number of elements of $B_i$.
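For small orders, \eqref{eq:freemc} can be evaluated without listing noncrossing partitions explicitly: conditioning on the block that contains the element $1$ yields the standard recursion $m_n=\sum_{s=1}^{n}c_s\sum_{i_1+\cdots+i_s=n-s}m_{i_1}\cdots m_{i_s}$ with $m_0=1$. The following Python sketch (our own illustrative implementation, not taken from the cited references) uses this recursion and checks that the semicircular cumulant sequence $c_i=\delta_{i2}$ reproduces the Catalan moments.
\begin{verbatim}
# Sketch: moments from free cumulants via the recursion obtained by
# conditioning on the block of the noncrossing partition containing 1:
#   m_n = sum_{s=1}^n c_s * sum_{i_1+...+i_s = n-s} m_{i_1}...m_{i_s},  m_0 = 1.
from math import comb

def moments_from_free_cumulants(c, N):
    # c[s] = s-th free cumulant (c[0] is unused); returns [m_0, ..., m_N]
    m = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        def comp_sum(parts, remaining):
            # sum over i_1 + ... + i_parts = remaining of m_{i_1}...m_{i_parts}
            if parts == 1:
                return m[remaining]
            return sum(m[i] * comp_sum(parts - 1, remaining - i)
                       for i in range(remaining + 1))
        m[n] = sum(c[s] * comp_sum(s, n - s) for s in range(1, n + 1))
    return m

N = 10
c = [0.0] * (N + 1)
c[2] = 1.0                             # semicircular cumulants: c_2 = 1, others 0
m = moments_from_free_cumulants(c, N)
catalan = [comb(2 * k, k) // (k + 1) for k in range(N // 2 + 1)]
assert all(abs(m[2 * k] - catalan[k]) < 1e-9 for k in range(N // 2 + 1))
assert all(abs(m[2 * k + 1]) < 1e-9 for k in range(N // 2))
print(m)   # 1, 0, 1, 0, 2, 0, 5, 0, 14, 0, 42 (as floats)
\end{verbatim}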
The classical cumulant sequence can also be described by a similar formula \cite{Shi96,Nic95},
namely
\begin{equation}
\label{eq:classmc}
\text{(classical)} \qquad m_k = \sum_{\substack{\pi \in \mathcal{P}(k)\\
\pi = \{B_1, \ldots, B_n\}}} \prod_{j=1}^{n} (\abs{B_j} - 1)! \; c_{\abs{B_j}}
\end{equation}
where $\mathcal{P}(k)$ is the collection of all partitions of $\{1,\ldots,k\}$.
Let us denote the transformation from the moment sequence to the cumulant sequence
determined by the formula \eqref{eq:freemc} (resp., \eqref{eq:classmc}) by $\ensuremath{\mathcal{R}} :
\{m_i\}_{i=1}^\infty \rightarrow \{c_i\}_{i=1}^\infty$. We call $\ensuremath{\mathcal{R}}^{-1}$ the
\emph{cumulant-moment} transform. Note that $\ensuremath{\mathcal{R}}$ is given implicitly; there are also
explicit formulas~\cite{Nic95}. Note also that the $k$-th moment depends only on the cumulants
of orders less than or equal to $k$, and vice versa. Therefore both $\ensuremath{\mathcal{R}}$ and $\ensuremath{\mathcal{R}}^{-1}$
are continuous bijections $\ensuremath{\mathcal{M}} \rightarrow \ensuremath{\mathcal{M}}$; however, we will think of the domain of
$\ensuremath{\mathcal{R}}$ as moment sequences and of its range as cumulant sequences.
The point of the transformation from the moment to the cumulant sequence is that for
sequences corresponding to probability measures, the appropriate action of the operator
$T$ on the cumulant side is linear. Indeed,
\begin{equation*}
\label{eq:cumT}
c^{\mu \boxplus \nu}_k = c^\mu_k + c^\nu_k \qquad \text{and} \qquad c^{\mu \circ S_r}_k
= r^k c^\mu_k
\end{equation*}
where $c^\eta$ are the cumulants of the measure $\eta$. That is,
\begin{equation}
\label{cumulants}
c_k^{T \mu} = 2^{1 - \frac{k}{2}} c_k^\mu
\end{equation}
Thus define, on the space of cumulant sequences, the operator $\mathcal{T^R}$ by
$(\mathcal{T^R}(\ensuremath{\mathbf{c}}))_k = 2^{1 - k/2} c_k$, and on $\ensuremath{\mathcal{M}}$ the operator $\mathcal{T} =
\ensuremath{\mathcal{R}}^{-1} \circ \mathcal{T^R} \circ \ensuremath{\mathcal{R}}$. Clearly, since $\mathcal{T^R}$ is linear, in order
to linearize the operator $\mathcal{T}$, we are interested in the linearization of the
cumulant-moment transform $\ensuremath{\mathcal{R}}^{-1}$.
In the sequel, by a \emph{linearization} of a map $A$ at a point $x$ we mean its
G\^{a}teaux derivative: $(D_x A) (y) = \lim_{\epsilon \rightarrow 0} \frac{A(x +
\epsilon y) - A(x)}{\epsilon}$ when the limit exists in the appropriate topology. Note also
that from~\eqref{cumulants}, a fixed point of $T$ for which the cumulant sequence is
defined must have all the cumulants other than the second one equal to 0. In the classical
case, this describes the Gaussian distributions; in the free case, this describes the free
normal distributions, which are the dilations of the Wigner semicircle law \cite{Spe90}.
\begin{Prop}
The linearization of the cumulant-moment transform at $($the cumulant series corresponding
to$)$ the normal distribution $\chi$ $($respectively, standard Gaussian in the classical
case and standard Wigner semicircular distribution in the free case \cite{VDN92}$)$ is
given by a $($formal$)$ infinite lower-triangular matrix $A = (a_{ij})_{i,j = 1}^\infty$,
where
\begin{enumerate}
\item
In the classical case, $a_{n + 2k, n}$ is $(n-1)!$ times the number of partitions of
$(n+2k)$ elements into classes exactly one of which contains $n$ elements and the
remaining $k$ classes are pairs.
\item
In the free case, $a_{n + 2k, n}$ is the number of noncrossing partitions of $(n + 2k)$
elements into classes exactly one of which contains $n$ elements and the remaining $k$
classes are pairs.
\end{enumerate}
In both cases $a_{ij}=0$ if $j > i$ or $(i-j)$ is odd. For explicit values, see
Theorem~\ref{thm:Herm}.
\end{Prop}
\begin{proof}
The value of the $n$-th cumulant is a polynomial function of the first $n$ moments only,
and vice versa. Thus in the topology of entrywise convergence, the differentials of both
$\ensuremath{\mathcal{R}}$ and $\ensuremath{\mathcal{R}}^{-1}$ exist.
Given two moment sequences $\ensuremath{\mathbf{m}}^o, \ensuremath{\mathbf{m}}^d$, we define the sequence $\{f(\ensuremath{\mathbf{m}}^o,
\ensuremath{\mathbf{m}}^d)_n\}_{n=1}^\infty$ recursively by
\begin{equation}
\label{CombDeriv}
m_k^d = \sum_{\substack{\pi \in \mathcal{P}_{nc} (k)\\
\pi = \{B_1, \ldots, B_n\}}} \sum_{i =1}^n
\left( \prod_{j \neq i} c_{\abs{B_j}}^o \right) f(\ensuremath{\mathbf{m}}^o, \ensuremath{\mathbf{m}}^d)_{\abs{B_i}}
\end{equation}
where $\{c^o_i\} = \ensuremath{\mathcal{R}}(\ensuremath{\mathbf{m}}^o)$ are the free cumulants. Then
\begin{equation*}
\sum_{\substack{\pi \in \mathcal{P}_{nc} (k)\\
\pi = \{B_1, \ldots, B_n\}}} \prod_{i=1}^n
\left( c_{\abs{B_i}}^o + \epsilon f(\ensuremath{\mathbf{m}}^o, \ensuremath{\mathbf{m}}^d)_{\abs{B_i}} \right)
= m_k^o + \epsilon m_k^d + o(\epsilon)
\end{equation*}
Note that if $\ensuremath{\mathbf{m}}^o = \ensuremath{\mathbf{m}}^\mu, \ensuremath{\mathbf{m}}^d = \ensuremath{\mathbf{m}}^\nu$, then the last expression above is just
$\ensuremath{\mathbf{m}}^{\mu + \epsilon \nu} + o(\epsilon)$. So the sequence $\{f(\ensuremath{\mathbf{m}}^o,
\ensuremath{\mathbf{m}}^d)_n \}_{n=1}^\infty$ is the derivative of the moment-cumulant transform $\ensuremath{\mathcal{R}}$ at $\ensuremath{\mathbf{m}}^o$
in the direction $\ensuremath{\mathbf{m}}^d$. For $\ensuremath{\mathbf{m}}^o = \ensuremath{\mathbf{m}}^\chi$, the free standard normal (semicircular)
distribution, $c^\chi_i = \delta_{i2}$, and so~\eqref{CombDeriv} becomes
\begin{equation}
m_k^d = \sum_{n=1}^k a_{k,n} f(\ensuremath{\mathbf{m}}^d)_n
\end{equation}
where $f(\ensuremath{\mathbf{m}}^d) := f(\ensuremath{\mathbf{m}}^\chi, \ensuremath{\mathbf{m}}^d)$ and $a_{n + 2k,n}$ is the number of noncrossing
partitions of $(n + 2k)$ elements into classes exactly one of which has $n$ elements and
the remaining $k$ classes are pairs. Thus $m = Af$, where $A$ is the lower-triangular
matrix $(A)_{i,j} = a_{i,j}$.
In the classical case, we start with
\begin{equation}
m_k^d = \sum_{\substack{\pi \in \mathcal{P} (k)\\
\pi = \{B_1, \ldots, B_n\}}} \sum_{i=1}^n (\abs{B_i}-1)!\,
\left( \prod_{j \neq i} (\abs{B_j}-1)!\; c_{\abs{B_j}}^o \right) f(\ensuremath{\mathbf{m}}^o, \ensuremath{\mathbf{m}}^d)_{\abs{B_i}}
\end{equation}
and by the same sort of reasoning we see that the derivative of $\ensuremath{\mathcal{R}}^{-1}$ at the standard
Gaussian is given by the lower-triangular matrix $A$ with $a_{n + 2k, n} = (n-1)! \times $
(the number of partitions of $(n + 2k)$ elements into classes exactly one of which contains
$n$ elements and the remaining $k$ classes are pairs).
\end{proof}
As stated above, the operator $\mathcal{T^R}$ is linear. It is easy to see that its
spectrum is discrete. Its eigenvectors are the cumulant sequences $\xi_j =
\{\delta_{ij}\}_{i=1}^\infty$, for $j=1,2,\ldots$, with corresponding eigenvalues
$2^{1- j/2}$. Therefore for the linearization of the operator $\mathcal{T}$, the eigenvalues
are the same, and the eigenvectors are the moment sequences $e_j
= \{a_{i,j}\}_{i=1}^\infty$, where the $a_{i,j}$-s are defined in the above proposition. In fact,
these are true moment sequences, and so give the eigenfunctions for the central limit
operator $T$.
\begin{Thm}
\label{thm:Herm}
On the space of measures with all moments finite, the linearization of the operator $T$ has
eigenvalues $2^{1 - n/2}, n = 1, 2, \ldots$. The corresponding eigenfunctions are
absolutely continuous with respect to the Lebesgue measure, with densities:
\begin{enumerate}
\item
In the classical case, $\frac{d^n}{dx^n} e^{-x^2/2} = e^{-x^2/2} H_n(x)$, multiples of the
Hermite polynomials \cite{Sin92}.
\item
In the free case, $\ensuremath{\mathbf{1}}_{[-2,2]}(t) \frac{1}{\sqrt{4 - t^2}} T_n(t/2)$, multiples of the
Chebyshev polynomials of the first kind.
\end{enumerate}
\end{Thm}
\begin{proof}
In the classical case, $a_{n+2k, n} = (n-1)! \; \times$ the number of partitions of
$(n+2k)$ objects into one class of $n$ elements and $k$ classes of 2 elements. It is easy
to see that $a_{n+2k, n} = \frac{(n+2k)!}{n k! 2^k}$ and $a_{k,n}=0$ for $k <n$ or $(k-n)$
odd. Therefore for fixed $n$ the Fourier transform (defined by $\sum_{j=0}^\infty
\frac{1}{j!} m_j (it)^j = \int e^{itx} d \mu(x)$) of the $n$-th eigenfunction of $T$ is
\begin{equation*}
\sum_{k=0}^\infty a_{n+2k, n}
\frac{1}{(n+2k)!} (it)^{n+2k} = \frac{1}{n} (it)^n \exp(- t^2/2)
\end{equation*}
and the sum converges absolutely. Thus the eigenfunctions are the multiples of Hermite
polynomials $\frac{d^n}{dx^n} e^{-x^2/2} \,dx = e^{-x^2/2} H_n(x) \,dx$ (note that these
are not exactly what one usually means by the Hermite functions).
In the free case, $a_{n+2k, n} = $ the number of noncrossing partitions of \mbox{$(n+2k)$}
objects into one class of $n$ elements and $k$ classes of 2 elements. It has been
calculated by Kreweras \cite{Kre72} to be $a_{n+2k, n} = \binom{n+2k}{k}$ (one uses an
inductive argument based on the following fact: a partition $\pi$ with a class of $n$
elements is noncrossing iff each of the $n$ intervals in the complement of this class is a
union of complete classes of $\pi$, and $\pi$ restricted to each of these intervals is
noncrossing). The Cauchy transform (defined by $\sum_{j=0}^\infty m_j z^{-(j+1)} = \int
\frac{d \mu(x)}{z - x}$) (\cite{Akh65,VDN92}, see also the next section) of the $n$-th
eigenfunction is $\sum_k \binom{n+2k}{k} z^{-(n+2k+1)}$. For $z \in \ensuremath{\mathbb C}^+$, the series
converges absolutely for $\abs{z} >2$. Its integral is
\begin{equation*}
F_n (z) = - \sum_k \frac{1}{n+k} \binom{n + 2k-1}{k} z^{-(n+2k)} = -\sum_k
\frac{1}{n+2k} \binom{n + 2k}{k} z^{-(n+2k)}
\end{equation*}
In particular, for $n=1$ we have $F_1(z) = - \sum_k \frac{1}{k+1} \binom{2k}{k}
z^{-(2k+1)}$. Thus ${F_1(z)}^2 = - F_1(z)z - 1$. Therefore $F_1$ is related to the
generating function for the Catalan numbers \cite{Rio68}, and is in fact $ \frac{-z +
\sqrt{z^2 - 4}}{2}$. Similarly
\begin{Lemman}
For $n \geq 1$ the integral of the Cauchy transform of the $n$-th eigenfunction is
$-\frac{1}{n} \left( \frac{z - \sqrt{z^2 - 4}}{2} \right)^n$, i.e. $ -\sum_k
\frac{1}{n+2k} \binom{n + 2k}{k} z^{-(n+2k)}= - \frac{1}{n}
\left( \frac{z - \sqrt{z^2 - 4}}{2} \right)^n$
\end{Lemman}
\begin{proof}[Proof of the Lemma]
The series converges absolutely for $\abs{z} > 2$. The proof is by induction, using the
identity $\left( \frac{z - \sqrt{z^2 - 4}}{2} \right)^n \left( \frac{z - \sqrt{z^2 -
4}}{2} \right)^m = \left( \frac{z - \sqrt{z^2 - 4}}{2} \right)^{(n + m)}$. By equating
coefficients, we have to prove the combinatorial identity
\begin{equation*}
\sum_{\substack{k+l=t \\
k,l \geq 0}} \frac{n}{n+2k} \binom{n + 2k}{k} \frac{m}{m+2l}
\binom{m + 2l}{l} = \frac{n + m}{n + m + 2t} \binom{n + m + 2t}{t}
\end{equation*}
for $m, n = 1, 2, \ldots$ and $t=0, 1, \ldots$. But this is a particular case of the
generalized Vandermonde (also known as Rothe) identity (\cite[Sec. 4.5]{Rio68}, see also
\cite{GK66}).
\renewcommand{\qed}{}
\end{proof}
The above moments determine a unique distribution \cite{Dur91}, and we can see directly
that the $n$-th eigenfunction is related to the Chebyshev polynomials of the first kind,
namely it is a scalar multiple of $\ensuremath{\mathbf{1}}_{[-2,2]}(t) \frac{1}{\sqrt{4 - t^2}} T_n(t/2)
\,dt$, where $T_n(t) = \cos(n \cos^{-1} t)$.
\end{proof}
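The free half of the theorem can also be checked numerically (a small Python sketch assuming NumPy and SciPy): the substitution $t=2\cos\theta$ removes the endpoint singularity and turns the moments of the density $\ensuremath{\mathbf{1}}_{[-2,2]}(t) \frac{1}{\sqrt{4 - t^2}} T_n(t/2)$ into $\int_0^\pi(2\cos\theta)^{m}\cos(n\theta)\,d\theta$, which should equal $\pi\binom{n+2k}{k}$ for $m=n+2k$ and vanish for the other values of $m$, in agreement with the entries $a_{n+2k,n}$ computed above.
\begin{verbatim}
# Sketch: moments of 1_[-2,2](t) T_n(t/2)/sqrt(4-t^2) via t = 2 cos(theta);
# they should reproduce pi * binom(n+2k, k), i.e. the entries a_{n+2k,n}.
import numpy as np
from math import comb, pi
from scipy.integrate import quad

def moment(m, n):
    val, _ = quad(lambda th: (2.0 * np.cos(th))**m * np.cos(n * th), 0.0, pi)
    return val

for n in range(1, 5):
    for k in range(0, 4):
        assert abs(moment(n + 2*k, n) - pi * comb(n + 2*k, k)) < 1e-5
        assert abs(moment(n + 2*k + 1, n)) < 1e-5       # odd offsets vanish
print("moments of the Chebyshev eigenfunctions match binom(n+2k, k)")
\end{verbatim}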
\begin{Remark}
Besides being eigenfunctions of the operator $DT$ on a topological vector space, the above
functions in fact form orthogonal bases in (smaller) Hilbert spaces. The Hermite functions
$\frac{d^n}{dx^n} e^{-x^2/2} = e^{-x^2/2} H_n(x)$ for $n=0, 1, \ldots$ form an orthogonal
basis in $L^2(e^{x^2/2} \,dx)$, while the Chebyshev functions of the first kind
$\ensuremath{\mathbf{1}}_{[-2,2]}(t) \frac{1}{\sqrt{4 - t^2}} T_n(t/2)$ for $n = 0, 1, \ldots$ form an
orthogonal basis in the space $L^2(\ensuremath{\mathbf{1}}_{[-2,2]}(t) \sqrt{4-t^2} \,dt)$. Thus, in the
classical case the operator $DT$ is a compact self-adjoint operator on $L^2(e^{x^2/2}
\,dx)$, while in the free case $DT$ is a compact self-adjoint operator on
$L^2(\ensuremath{\mathbf{1}}_{[-2,2]}(t) \sqrt{4-t^2} \,dt)$.
\end{Remark}
We have the following interpretation of the CLT.
\begin{Cor}
We say that a measure $\mu$ is in $L^2 (\varphi)$ if $\mu \ll \varphi\,dx$ and $\frac{d
\mu}{dx} \in L^2 (\varphi)$. Note that for all the measures in $L^2(e^{x^2/2} \,dx)$ or
$L^2(\ensuremath{\mathbf{1}}_{[-2,2]}(t) \sqrt{4-t^2} \,dt)$, the moments of all orders are finite.
\begin{enumerate}
\item
On the space of probability distributions in $L^2(e^{x^2/2} \,dx)$, with mean $0$ and
variance $1$, the normal distribution $\chi$ as a fixed point of the central limit operator
$T$ is strictly spectrally stable, that is, the differential of the operator at this point
has the spectrum inside the unit disc.
\item
On the space of probability distributions in $L^2(\ensuremath{\mathbf{1}}_{[-2,2]}(t) \sqrt{4-t^2} \,dt)$,
with mean $0$ and variance $1$, the free normal (semicircular) distribution $\chi$ as a
fixed point of the free central limit operator $T$ is strictly spectrally stable.
\end{enumerate}
\end{Cor}
\begin{proof}
Classical case: \cite{Sin92} The Hermite functions $\frac{d^n}{dx^n} e^{-x^2/2} =
e^{-x^2/2} H_n(x)$ for $n=0, 1, \ldots$ form an orthogonal basis in $L^2(e^{x^2/2} \,dx)$.
The conditions on the moments of the distribution in the hypothesis mean that in the linear
approximation we consider only the perturbations $f$ with 0th, 1st, 2nd moments equal to 0.
This means precisely that $f$ is orthogonal to $e^{-x^2/2}$ and (the densities of) the
first two eigenfunctions of $T$. Since the eigenvalues of $T$ are $2^{1 - n/2}$, all
eigenvalues for $n > 2$ are less than $1$.
Free case: the Chebyshev functions $\ensuremath{\mathbf{1}}_{[-2,2]}(t) \frac{1}{\sqrt{4 - t^2}} T_n(t/2)$
for $n = 0, 1, \ldots$ form an orthogonal basis in the space $L^2(\ensuremath{\mathbf{1}}_{[-2,2]}(t)
\sqrt{4-t^2} \,dt)$. Again the eigenvalues are less than $1$ for all but the first $3$ of
these. But here, the hypothesis corresponds to the orthogonality to the first $3$ Chebyshev
functions only if all the distributions considered are \emph{supported in the same
interval} $[-2, 2]$. Note, however, that due to the results in \cite{BV95}, this
restriction is weaker than it appears.
\end{proof}
Thus we would expect that on this subspace the fixed point is attracting, just as the CLT
states.
\begin{Remark}
The moments $a_{n+2k,n}$ can be calculated in a way similar to the above for the setting
of the $R_q$ transforms \cite{Nic95}, related to $q$-independence. However, there does not
seem to be in that case a nice recurrence formula, and so the corresponding eigenfunctions
are not calculated directly. Moreover, note that the orthogonality conditions in the above
Corollary are different in the classical and the free cases: in the free case the inner
product is given by the free normal (semicircular) distribution, while in the classical
case it is given by the reciprocal of the normal density. Also, we are not aware of any standard
interpolation between the Hermite polynomials and the Chebyshev polynomials of the first
kind. Thus one would not necessarily expect to have a similar construction for the
interpolations between free and classical cases, e.g. related to $q$-independence.
\end{Remark}
\section{The Analytic Approach}
\label{sec:Anal}
In this section we consider the problem of linearizing the operator $T$ by analytic means.
First we briefly go over the classical situation.
\subsection{Classical Picture}
Let $\alpha \in (0, 2]$, $\beta = 2^{- 1/\alpha}$, $T_\alpha \mu = (\mu \ast \mu) \circ
S_\beta$. For $\varphi_\alpha =$ $\alpha$-strictly stable distribution \cite{Shi96,Dur91}
(the skewness coefficient does not appear explicitly in the sequel and so is not included
in the notation), $T_\alpha (\varphi_\alpha) = \varphi_\alpha$. Then the differential of
$T_\alpha$ at $\varphi_\alpha$ is
\begin{equation*}
DT_\alpha \nu = 2 (\nu \ast \varphi_\alpha) \circ S_\beta
\end{equation*}
Taking the Fourier transforms,
\begin{equation}
\label{Ftrans}
\widehat{DT_\alpha \nu} (t)= 2 \hat{\nu}(\beta t) \hat{\varphi}_\alpha (\beta t)
\end{equation}
Also by stability $\hat{\varphi}_\alpha^2(\beta t) = \hat{\varphi}_\alpha(t)$. Therefore
for $\hat{\nu}(t) = h(t) \hat{\varphi}_\alpha(t)$, the right-hand-side expression
in~\eqref{Ftrans} is $2 h(\beta t) \hat{\varphi}_\alpha (t)$. For $h$, on the space of
continuous functions the eigenfunctions are $h(t) = t^a$, $a \in \ensuremath{\mathbb C}$, $\re a > 0$ or
$a=0$, with eigenvalues $2 \cdot \beta^a= 2 \cdot 2^{-a/\alpha}$, corresponding to
$\hat{\nu_a}(t) = t^a \hat{\varphi}_\alpha (t)$. Here we use the principal branch of the
logarithm. Now let $\beta=1/\sqrt{2}$, i.e. $\varphi_\alpha = \varphi_2 = \chi$. In this
case, among all the eigenfunctions we can distinguish the integer values of $a$ as follows:
among the functions $t^a$, the smooth ones are precisely those for $a \in \ensuremath{\mathbb N}$. Thus among
all $\nu_a$, the ones whose densities decay faster than any polynomial are just the
$\nu_n$-s, $n \in \ensuremath{\mathbb N}$ \cite{Shi96,Dur91}. These measures are manifestly in $L^2(e^{x^2/2}
\,dx)$. They are $\nu_n = \frac{d^n}{dx^n} e^{-x^2/2} \,dx$, with eigenvalues $2^{1 - n/2}$, and we obtain the result of the
previous section.
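This diagonalization can also be checked symbolically; the following SymPy sketch (for concrete small $n$, with $\hat\nu_n(t)=(it)^n e^{-t^2/2}$ and $\hat\varphi_2(t)=e^{-t^2/2}$) verifies that the right-hand side of \eqref{Ftrans} at $\beta=1/\sqrt{2}$ returns $2^{1-n/2}\hat\nu_n(t)$.
\begin{verbatim}
# Sketch: on the Fourier side, 2 * nu_n^(t/sqrt 2) * chi^(t/sqrt 2)
# equals 2**(1 - n/2) * nu_n^(t) for nu_n^(t) = (i t)**n * exp(-t**2/2).
import sympy as sp

t = sp.symbols('t', real=True)
chi_hat = sp.exp(-t**2 / 2)                     # Fourier transform of the Gaussian

for n in range(1, 7):
    nu_hat = (sp.I * t)**n * sp.exp(-t**2 / 2)  # Fourier transform of nu_n
    lhs = 2 * nu_hat.subs(t, t / sp.sqrt(2)) * chi_hat.subs(t, t / sp.sqrt(2))
    rhs = sp.Integer(2)**(1 - sp.Rational(n, 2)) * nu_hat
    diff = sp.simplify(sp.expand_power_base(lhs - rhs, force=True))
    assert diff == 0
print("D T nu_n = 2**(1 - n/2) nu_n on the Fourier side, n = 1..6")
\end{verbatim}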
\subsection{Free Picture}
In the free probability picture, the main device is the $R$-transform, introduced by
Voiculescu \cite{Voi86,BV93,VDN92}. Given a measure $\mu$, for $z \in \ensuremath{\mathbb C} \backslash
\text{supp}(\mu)$ one defines the Cauchy transform (sometimes called Stieltjes or Borel
transform) of $\mu$ by $G_\mu(z) = \int \frac{d \mu(t)}{z-t}$. For positive $\mu$ the
Cauchy transform is an analytic map $\ensuremath{\mathbb C}^+ \leftrightarrow \ensuremath{\mathbb C}^-$; it has the property
$\bar{G}_\mu(z) = G_\mu (\bar{z})$. The measure can be reconstructed from its Cauchy
transform by taking the boundary values $- \frac{1}{\pi} \im G_\mu(x+0i)$~\cite[Ch.3,
Addenda and Problems]{Akh65},~\cite[3.1]{Hor90}.
On a nontangential neighborhood of 0 (Stolz angle) in $\ensuremath{\mathbb C}^-$, we can define $K_\mu (w) =
G_\mu^{-1}(w)$, and the $R$-transform $R_\mu(w) = K_\mu(w) - \frac{1}{w}$. $R_\mu$ is an
analytic map $\ensuremath{\mathbb C}^- \rightarrow \ensuremath{\mathbb C}^-$ on a nontangential neighborhood of 0. The main
property of the $R$-transform is that it also linearizes the additive free convolution:
$R_{\mu \boxplus \nu}(w) = R_\mu(w) + R_\nu(w)$. In fact, if all the moments of a measure
$\mu$ are finite, then $R_\mu(w) = \sum_{i=1}^\infty c_i^\mu w^{i-1}$, where $c_i$ are the
free cumulants. Also $R_{\mu \circ S_r}(w) = r R_\mu(r w)$. Thus the action of the operator
$T_\alpha$ on the $R$-transform side is just $R_{T_\alpha \mu}(w) = 2\beta R_\mu(\beta w)$,
and in particular it is linear. Here we again define the operator
\begin{equation*}
T_\alpha (\mu) = (\mu \boxplus \mu) \circ S_\beta
\end{equation*}
for $\alpha \in (0,2]$, and $\varphi_\alpha$ = free $\alpha$-strictly stable distribution
\cite{BV93,Pat95,BPB96}. By the above observations, the linearization of $T_\alpha$ is
again given by the linearization of the $R$-transform.
The quasilinear differential equation governing the behavior of free convolution
semigroups first appeared in \cite[Theorem 4.3]{Voi86}. Here we need a variant of that
theorem. The proof is quite similar to that in \cite{Voi86}.
\begin{Thm}
Let $\nu$ be a freely infinitely divisible probability measure \cite{Voi86,VDN92}. Let
$\psi$ be a function analytic in $\ensuremath{\mathbb C}^+$. For any point $z \in \ensuremath{\mathbb C}^+$, for small enough $t$
the function $G(z,t)$, which is the functional inverse of $K_\nu (z) + t \psi(z)$, is
defined at $z$. Consequently for all $z \in \ensuremath{\mathbb C}^+$,
\begin{equation}
G_\nu'(z)\psi(G_\nu(z)) + \frac{\partial G}{\partial t} (z,0) = 0
\end{equation}
\end{Thm}
\begin{proof}
By \cite[Proposition 5.12]{BV93} for $\nu$ freely infinitely divisible, $G_\nu$ maps
$\ensuremath{\mathbb C}^+$ conformally onto a domain. Let $\Omega$ be a bounded domain in $\ensuremath{\mathbb C}^+$ whose closure
is contained in $\ensuremath{\mathbb C}^+$. Choose $t$ so that for $z \in \Omega$, $t \abs{\psi'(G_\nu(z))} <
1$. Then the function $z + t \psi(G_\nu (z))$ is univalent on $\Omega$ and invertible on
its image. Denote this image by $\Omega'$ and this inverse function by $f_t$. Since, for
$z \in \Omega$,
\begin{equation*}
(K_\nu + t \psi)(G_\nu (z)) = z + t \psi(G_\nu (z))
\end{equation*}
we also have
\begin{equation*}
(K_\nu + t \psi)(G_\nu (f_t(z))) = z
\end{equation*}
for $z \in \Omega'$. Consequently we can define $G(z,t)
= G_\nu (f_t(z))$ for $z \in
\Omega'$.
As one possible construction, define
\begin{equation*}
\Omega_n = \left\{ z \big| -n \leq \re z \leq n, \frac{1}{n} \leq \im z \leq n \right\}
\end{equation*}
Let $t_n$ satisfy the condition above and also $t_n \abs{\psi(G_\nu (z))} < 1/n$ for $z
\in \Omega_n$. Then $G(z,t)$ is defined on
\begin{equation*}
\Omega_n' = \left\{ z \big| -n+ \frac{1}{n} \leq \re z \leq n- \frac{1}{n}, \frac{2}{n}
\leq \im z \leq n- \frac{1}{n} \right\}
\end{equation*}
These domains exhaust $\ensuremath{\mathbb C}^+$. On their common domains, since the inverse function $K_\nu + t \psi$ is analytic in
$t$, $G(z,t)$ is differentiable in $t$; the derivative exists as a limit in the topology
of uniform convergence on compact sets in the upper half plane.
By definition $G(K_\nu + t \psi(z), t) = z$ for $z \in G_\nu(\Omega)$. Differentiating
with respect to $t$ at $t=0$, we get
\begin{equation*}
\frac{\partial G}{\partial z}(K_\nu(z), 0) \psi(z) +
\frac{\partial G}{\partial t}(K_\nu(z), 0) = 0
\end{equation*}
for $z \in G_\nu(\ensuremath{\mathbb C}^+)$. Substituting $G_\nu(z)$ for $z$, we get the required equation,
for $z \in \ensuremath{\mathbb C}^+$.
\end{proof}
\begin{Remark}
In the above theorem no consideration is given to the positivity or even existence of the
distribution corresponding to the Cauchy transform $G(z,t)$. While we do not know of a
satisfactory description of these, there are various conditions.
\subsubsection{Necessary Conditions}
\label{nec}
For $K_\nu + \psi$ to correspond to a positive measure, it is necessary that (1) the
nontangential limit of $z \psi(z)$ as $z \rightarrow 0$ be 0, and (2) there exist a Stolz
angle at 0, $\Gamma \subset \ensuremath{\mathbb C}^+$ s.t. $(K_\nu + \psi)(\Gamma) \subset \ensuremath{\mathbb C}^-$ \cite{BV93}.
\subsubsection{Sufficient Conditions}
\label{suf}
\begin{enumerate}
\item
\label{deform}
The following are the known cases where $K_\nu + \psi$ corresponds to a positive measure:
(1) $\psi$ is an $R$-transform of a positive (hence necessarily freely infinitely
divisible) measure \cite{Voi86}. (2) $\nu$ is the free normal (semicircular) distribution;
$\psi$ is analytic in a neighborhood of the unit disc, sufficiently small, and
$\psi(\bar{z}) = \bar{\psi}(z)$~\cite{BV95}.
\item
For a measure $\mu$ and a distribution $\nu$, the $R$-transform of $((1 - \epsilon)\mu +
\epsilon \nu)$ is
\begin{equation*}
R_{(1 - \epsilon)\mu + \epsilon \nu} = R_\mu(w) - \epsilon K_\mu'(w) \cdot (G_\nu -
G_\mu)(K_\mu(w)) + o(\epsilon)
\end{equation*}
Denote $\psi = K_\mu'(w) \cdot (G_\nu - G_\mu)(K_\mu(w))$. Then if $\nu$ is positive, the
deformation in the direction of $\psi$ is tangent to a curve of positive measures. Note
also that we have the inverse formula,
\begin{equation*}
G_\nu(w) = G_\mu(w) + \psi(G_\mu(w)) \cdot G_\mu'(w)
\end{equation*}
\end{enumerate}
\end{Remark}
\subsection{Discussion}
The theorem can be interpreted as follows. Let $G$ be a univalent function on $\ensuremath{\mathbb C}^+$.
Define $K=G^{-1}$, $\ensuremath{\mathcal{R}}(G)(z) = K(z) - 1/z$, and let $\psi$ be analytic on $G(\ensuremath{\mathbb C}^+)$.
Then the derivative of the map $\ensuremath{\mathcal{R}}^{-1}$ (which is nothing other than the differential
of the operation of functional inversion) at a point $\ensuremath{\mathcal{R}}(G)$ in the direction $\psi$ is
$- G' \psi(G)$. This linear map is invertible; the inverse linear map (which is the
differential of $\ensuremath{\mathcal{R}}$) is $\psi \mapsto -K' \psi(K)$ (where $\psi$ is now analytic in
$\ensuremath{\mathbb C}^+$). Finally, let $\mathcal{T}_\alpha^\ensuremath{\mathcal{R}} (\psi)(z) = 2\beta \psi(\beta z)$ and
denote $G_\alpha := G_{\varphi_\alpha}$. Then the derivative of $\mathcal{T}_\alpha =
\ensuremath{\mathcal{R}}^{-1} \circ \mathcal{T}_\alpha^\ensuremath{\mathcal{R}} \circ \ensuremath{\mathcal{R}}$ at $G_\alpha$ is
\begin{equation*}
\begin{split}
D_{G_\alpha} \mathcal{T}_\alpha (\psi)(z)&= 2G_\alpha'(z) \beta K_\alpha'(\beta
G_\alpha(z)) \psi(K_\alpha(\beta G_\alpha(z))) \\
&= 2 \psi(\omega_\alpha(z)) \omega_\alpha'(z)
\end{split}
\end{equation*}
Here
\begin{equation*}
\begin{split}
\omega_\alpha(z) &= K_{\varphi_\alpha}(\beta G_{\varphi_\alpha}(z)) =
\frac{1}{\beta} \omega_{\varphi_\alpha \circ S_{\beta}, \varphi_\alpha } \\
&= \frac{1}{\beta} \omega_{\varphi_\alpha \circ S_{\beta},
\varphi_\alpha \circ S_{\beta} \boxplus \varphi_\alpha \circ S_{\beta}} =
\frac{1}{\beta} K_{\varphi_\alpha \circ S_{\beta}} \circ G_{\varphi_\alpha \circ S_{\beta}
\boxplus \varphi_\alpha \circ S_{\beta}}
\end{split}
\end{equation*}
is a particular instance of the transition probability function of \cite{Voi93,Bia95}.
The eigenfunctions of $\mathcal{T}_\alpha^\ensuremath{\mathcal{R}}$ on the space of all analytic functions in
the upper half plane are of the form $t e^{i\phi} z^a$, with eigenvalues
$2^{1-(a+1)/\alpha}$. Restricting to various spaces selects particular values of $\phi,
a$. Thus the eigenfunctions for the differential of $\mathcal{T}_\alpha$ (resp.,
$T_\alpha$) are the (boundary values of) the functions $e^{i \phi} G_\alpha' G_\alpha^a$.
\begin{Ex}
For the free normal (semicircular) case $\alpha=2$, $\nu = \chi$ the Cauchy transforms of
the eigenfunctions of the operator $D T$ (which are the eigenfunctions of the operator $D
\mathcal{T}$) are given by
\begin{equation*}
G_\chi' G_\chi^a = e^{i \phi} \frac{1}{\sqrt{z^2 - 4}} \left( \frac{z - \sqrt{z^2 - 4}}{2}
\right)^{x+yi}
\end{equation*}
By taking the boundary values $- \frac{1}{\pi} \im G(t + 0i)$ (see \cite{Akh65,Hor90}),
the eigenfunctions themselves are
\begin{multline*}
\frac{1}{\sqrt{t^2 -4}} {\abs{ \frac{t - \sqrt{t^2 - 4}}{2} }}^x e^{- y \pi}
\sin \left( y \log \abs{ \frac{t - \sqrt{t^2 -4}}{2} } + \phi + x \pi \right)
\ensuremath{\mathbf{1}}_{(- \infty, -2]} (t) \,dt\\
+\frac{1}{\sqrt{t^2 -4}} {\abs{ \frac{t - \sqrt{t^2 - 4}}{2} }}^x e^{- y \pi}
\sin \left( y \log \abs{ \frac{t - \sqrt{t^2 -4}}{2} } + \phi \right)
\ensuremath{\mathbf{1}}_{[2, \infty)} (t) \,dt
\\
+\frac{1}{\sqrt{4 - t^2}} \exp \left( y \cos^{-1}(t/2) \right)
\cos \left( x \cos^{-1}(t/2) - \phi \right) \ensuremath{\mathbf{1}}_{[-2, 2]}(t) \,dt
\end{multline*}
It is easy to see that the Criterion~\ref{nec} requires that (for some $\phi$) we have $x
= \re a \geq 1$ or ($x= \re a \geq -1, y = \im a =0$); the corresponding point spectrum is
the union of the unit disc and the interval $[1,2]$. On the other hand, for $-1 < x < 1,
y=0$ the functions are the $R$-transforms of freely stable distributions, while for $x
\in \ensuremath{\mathbb N},\, y=0$ the functions are entire. Thus by Criterion~\ref{suf}\ref{deform} the
corresponding eigenfunctions are in the tangent space to the space of positive measures.
All the moments of a measure are finite iff its Cauchy transform is analytic at infinity.
If a function $G$ is defined by the above expression on $\ensuremath{\mathbb C}^+$ and satisfies $G(\bar{z}) =
\overline{G(z)}$, it is analytic at infinity iff $a \in \ensuremath{\mathbb N}$ and $\phi=0$. Notice that
among the above measures, these are precisely the compactly supported ones. Explicitly
their Cauchy transforms are $- \frac{1}{\sqrt{z^2 - 4}} \left( \frac{z - \sqrt{z^2 -
4}}{2} \right)^n$. The eigenfunctions themselves are
\begin{equation*}
\frac{1}{\sqrt{4 - x^2}} \cos(n \cos^{-1}(x/2)) \ensuremath{\mathbf{1}}_{[-2,2]}(x) \,dx
\end{equation*}
That is, we recover the Chebyshev functions of the first kind.
\end{Ex}
\begin{Remark}
Here we can see another difference from the classical case. As noted above, by a result of
Bercovici and Voiculescu \cite{BV95} the deformations in the directions $z^n, n \in \ensuremath{\mathbb N}$
actually produce positive measures (for small enough time). This is in contrast with a
classical theorem of Marcinkiewicz, which states that for $P$ a polynomial, $e^P$ is never
a characteristic function (i.e. a Fourier transform of a positive measure) if the degree of
$P$ is greater than 2 (see e.g. \cite[Thm 3.13]{Ram67}).
\end{Remark}
\begin{Ex}
For the 1-stable symmetric distribution, which is the Cauchy distribution $\varphi_1$, the
eigenfunction Cauchy transforms are $\frac{1}{(z-i)^a}$, and the eigenfunctions are $D^a
\frac{1}{x^2 + 1} \,dx$. Notice that these are exactly the same as in the classical case. Therefore not only are
the free 1-stable distributions the same as classical ones \cite{BV93}, but their small
neighborhoods look the same as well.
\end{Ex}
\begin{Remark}
For a general freely stable distribution, using for example the formula $D_G
\ensuremath{\mathcal{R}}^{-1}(\psi) = - \frac{\psi}{K'}(G)$, one can obtain parametric expressions \`{a} la
Biane \cite[Appendix]{BPB96} for the densities of the corresponding boundary values. In
particular, one has such expressions for the densities of the eigenfunctions of the free
stable central limit operators. It is not clear whether they are of use.
\end{Remark}
\subsection{Composition operators}
Finally, we have a brief discussion of the connections with the theory of composition
operators (see e.g. \cite{Val31,CM95}). The action of the operator $\mathcal{T}_\alpha$ on
the primitive (in $\ensuremath{\mathbb C}^+$) of a Cauchy transform is, up to a multiplicative constant 2 and
up to an additive constant, just the composition with the function $\omega_\alpha$.
One of the main theorems about composition operators is that such an operator is
necessarily conjugate to an operator of composition with a linear function \cite[Thm.~
2.53]{CM95}. In our case, in the terminology of \cite[Section 2.4]{CM95} the operator
$\mathcal{T}_\alpha$ has a natural halfplane-dilation model provided by conjugating with
the linearization of the $R$-transform, i.e. $G_\alpha \circ \omega_\alpha = \beta
G_\alpha$. A \emph{fundamental set} \cite[Defn.~2.54]{CM95} for $\psi$ is an open,
connected, simply connected domain $\Delta$ such that $\psi(\Delta) \subset \Delta$ and the
iterates of any compact set end up in it after a finite number of steps. It is not hard to
see (e.g. \cite[Appendix]{BPB96}) that $\ensuremath{\mathbb C}^+$ serves as a fundamental set for both
$G_\alpha$ and $\omega_\alpha$ while $G(\ensuremath{\mathbb C}^+)$ is a fundamental set for $K_\alpha$.
A number of results on the spectra of composition operators on various classical spaces are
known. In particular, on a Hardy space $H^\infty$ the spectral radius of a composition
operator is equal to 1 \cite[3.1]{CM95}. In our case the composition operator is defined on
tangent spaces to a certain cone in $H^\infty$.
\section{Introduction}
In recent times, we have observed a renaissance in
nucleon structure studies through Drell-Yan-type processes
in existing (Fermilab, Relativistic
Heavy Ion Collider, see \cite{RHIC,Gamberg:2013kla}) and future (J-PARC, NICA) experiments.
One of the most interesting subjects of such experimental studies in this direction
is the so-called single spin asymmetry (SSA), which is expressed with
the help of the hadron tensor; see for instance \cite{Qiu:1991pp} or
\cite{Teryaev, Boer}.
Lately, we have reconsidered \cite{AT-GP} this process in the contour gauge.
We have found that there is a contribution from the {\it non-standard} diagram which
produces the imaginary phase required to have the SSA. This additional contribution
leads to an extra factor of $2$ for the asymmetry. This
conclusion was supported by the analysis of the QED gauge invariance of the hadron tensor.
In comparison, the analysis presented in \cite{Zhou:2010ui}, which uses the axial and Feynman gauges,
does not support the latter conclusion. For this reason, we perform here a
detailed analysis of the hadron tensor in the Feynman gauge with particular emphasis
on QED gauge invariance.
We find that the QED gauge invariance can be maintained only by taking into account the non-standard diagram.
Moreover, the results in the Feynman and contour gauges coincide if the
gluon poles in the correlators $\langle\bar\psi\gamma_\perp A^+\psi\rangle$ are absent.
This is in agreement with the relation between gluon poles and the Sivers function which
corresponds to the ``leading twist'' Dirac matrix $\gamma^+$.
We confirm this important property by comparing the light-cone dynamics for different correlators.
As a result, we derive the QED gauge invariant hadron tensor which
completely coincides with the expression obtained within
the light-cone contour gauge for gluons, see \cite{AT-GP}.
\section{Kinematics}
We study the hadron tensor which contributes to the single spin
(left-right) asymmetry
measured in the Drell-Yan process with the transversely polarized nucleon (see Fig.~\ref{Fig-DY}):
\begin{eqnarray}
N^{(\uparrow\downarrow)}(p_1) + N(p_2) &\to& \gamma^*(q) + X(P_X)
\nonumber\\
&\to&\ell(l_1) + \bar\ell(l_2) + X(P_X).
\end{eqnarray}
Here, the virtual photon producing the lepton pair ($l_1+l_2=q$) has a large mass squared
($q^2=Q^2$)
while the transverse momenta are small and integrated out.
The left-right asymmetry means that the transverse momenta
of the leptons are correlated with the direction
$\textbf{S}\times \textbf{e}_z$, where $S_\mu$ denotes the
transverse polarization vector of the nucleon while $\textbf{e}_z$ is the beam direction \cite{Barone}.
Since we perform our calculations within a {\it collinear} factorization,
it is convenient to fix the dominant light-cone directions as
\begin{eqnarray}
\label{kin-DY}
&&p_1\approx \frac{Q}{x_B \sqrt{2}}\, n^*\, , \quad p_2\approx \frac{Q}{y_B \sqrt{2}}\, n,
\\
&&n^{*\,\mu}=(1/\sqrt{2},\,{\bf 0}_T,\,1/\sqrt{2}), \quad n^{\mu}=(1/\sqrt{2},\,{\bf 0}_T,\,-1/\sqrt{2}).
\nonumber
\end{eqnarray}
So, the hadron momenta $p_1$ and $p_2$ have the plus and minus dominant light-cone
components, respectively. Accordingly, the quark and gluon momenta $k_1$ and $\ell$ lie
along the plus direction, while the antiquark momentum $k_2$ lies along the minus direction.
The photon momentum reads (see Fig.~\ref{Fig-DY})
\begin{eqnarray}
q= l_1+l_2=k_1 + k_2\,
\end{eqnarray}
which, after factorization, will take the form:
\begin{eqnarray}
q= x_1 p_1 + y p_2\,+ q_T.
\end{eqnarray}
\section{The DY hadron tensor}
We work within the Feynman gauge for gluons.
The standard hadron tensor
generated by the diagram depicted in Fig.~\ref{Fig-DY} (the left panel) reads
\begin{eqnarray}
\label{HadTen1-2}
&&d{\cal W}_{(\text{Stand.})}^{\mu\nu}=\int d^4 k_1\, d^4 k_2 \, \delta^{(4)}(k_1+k_2-q)\times
\nonumber\\
&&
\int d^4 \ell \,
\Phi^{(A)\,[\gamma_\beta]}_\alpha (k_1,\ell) \, \bar\Phi^{[\gamma^-]} (k_2)\times
\nonumber\\
&&
\text{tr}\big[
\gamma^\mu \gamma^\beta \gamma^\nu \gamma^+ \gamma^\alpha
S(\ell-k_2)
\big]\, ,
\end{eqnarray}
where
\begin{eqnarray}
\label{PhiF}
&&\Phi^{(A)\,[\gamma_\beta]}_\alpha (k_1,\ell)
=
\\
&&{\cal F}_2\Big[
\langle p_1, S^T | \bar\psi(\eta_1)\gamma_\beta gA_{\alpha}(z) \psi(0) | S^T, p_1\rangle \Big] ,
\nonumber\\
&&\bar\Phi^{[\gamma^-]}(k_2)={\cal F}_1 \Big[
\langle p_2 | \bar\psi(\eta_2)\gamma^- \psi(0)| p_2\rangle \Big].
\end{eqnarray}
Throughout this paper, ${\cal F}_1$ and ${\cal F}_2$
denote the Fourier transformation with the measures
\begin{eqnarray}
d^4\eta_2\, e^{ik_2\cdot\eta_2}\,\,\, \text{and} \,\,\,
d^4\eta_1\, d^4 z\, e^{-ik_1\cdot\eta_1-i\ell\cdot z} ,
\end{eqnarray}
respectively, while ${\cal F}_1^{-1}$ and ${\cal F}_2^{-1}$ mark the inverse
Fourier transformation with the measures
\begin{eqnarray}
dy \, e^{i y\lambda}\,\,\, \text{and} \,\,\,
dx_1 dx_2 \, e^{i x_1\lambda_1+ i(x_2 - x_1)\lambda_2}.
\end{eqnarray}
We now implement the {\it factorization procedure} (see for instance \cite{Anikin:2009bf, Efremov:1984ip}) which contains the following steps:
(a) the decomposition of loop integration momenta around the corresponding dominant direction:
$k_i = x_i p + (k_i\cdot p)n + k_T$
within the certain light cone basis formed by the vectors $p$ and $n$ (in our case, $n^*$ and $n$);
(b) the replacement:
$d^4 k_i \Longrightarrow d^4 k_i \,dx_i \delta(x_i-k_i\cdot n)$
that introduces the fractions with the appropriated spectral properties;
(c) the decomposition of the corresponding propagator products around the dominant direction.
In Eqn.~(\ref{HadTen1-2}), we have (here, $x_{ij}=x_i-x_j$)
\begin{eqnarray}
\label{S-decom}
&&S(\ell-k_2) = S(x_{21}p_1-yp_2) +
\\
&&\frac{\partial S(\ell-k_2)}{\partial \ell_\rho} \Bigg|^{k_2=yp_2}_{\ell=x_{21}p_1} \, \ell^T_\rho + \ldots \,;
\nonumber
\end{eqnarray}
(d) the use of the collinear Ward identity (checked numerically in the sketch after step (e)):
\begin{eqnarray}
\frac{\partial S(k)}{\partial k_\rho} = S(k)\gamma_{\rho}S(k),\quad
S(k)=\frac{-\slashed k}{k^2 + i\varepsilon};
\nonumber
\end{eqnarray}
(e) performing the Fierz decomposition for $\psi_\alpha (z) \, \bar\psi_\beta(0)$ in
the corresponding space up to the needed projections.
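The collinear Ward identity of step (d) can be verified directly with explicit Dirac matrices; the following Python sketch (NumPy, standard Dirac representation, central finite differences; the $i\varepsilon$ prescription plays no role for a generic off-shell momentum) checks $\partial S(k)/\partial k^\rho = S(k)\gamma_\rho S(k)$ componentwise, i.e.\ the identity above with the index position fixed by the metric.
\begin{verbatim}
# Sketch: numerical check of  dS(k)/dk^rho = S(k) gamma_rho S(k)
# for S(k) = -kslash / k^2, using explicit 4x4 Dirac matrices.
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
zero = np.zeros((2, 2))
gamma = [np.block([[s0, zero], [zero, -s0]]),    # gamma^0
         np.block([[zero, s1], [-s1, zero]]),    # gamma^1
         np.block([[zero, s2], [-s2, zero]]),    # gamma^2
         np.block([[zero, s3], [-s3, zero]])]    # gamma^3
eta = np.diag([1.0, -1.0, -1.0, -1.0])           # metric, signature (+,-,-,-)

def S(k):
    kslash = sum(eta[mu, mu] * k[mu] * gamma[mu] for mu in range(4))
    return -kslash / (k @ eta @ k)               # -kslash / k^2

k = np.array([1.3, 0.4, -0.7, 0.2])              # generic off-shell momentum
eps = 1e-6
for rho in range(4):
    dk = np.zeros(4)
    dk[rho] = eps
    lhs = (S(k + dk) - S(k - dk)) / (2 * eps)    # dS/dk^rho (central difference)
    rhs = S(k) @ (eta[rho, rho] * gamma[rho]) @ S(k)   # S gamma_rho S
    assert np.allclose(lhs, rhs, atol=1e-6)
print("collinear Ward identity verified numerically")
\end{verbatim}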
After factorization, the standard tensor, see Eqn.~(\ref{HadTen1-2}), is split into two terms: the first term includes
the correlator without the transverse derivative, while the second term contains the correlator with the transverse derivative,
see Eqns.~(\ref{S-decom}) and (\ref{ParFunB1})-(\ref{ParFunBperp}).
The non-standard contribution comes from the diagram depicted in Fig.~\ref{Fig-DY} (the right panel).
The corresponding hadron tensor takes the form \cite{AT-GP}:
\begin{eqnarray}
\label{HadTen2}
&&d{\cal W}_{(\text{Non-stand.})}^{\mu\nu}=
\\
&&
\int d^4 k_1\, d^4 k_2 \, \delta^{(4)}(k_1+k_2-q)
\text{tr}\big[
\gamma^\mu {\cal F}(k_1) \gamma^\nu \bar\Phi(k_2)
\big]
\, ,
\nonumber
\end{eqnarray}
where the function ${\cal F}(k_1)$ reads
\begin{eqnarray}
\label{PhiF2}
&&{\cal F}(k_1)=
S(k_1) \gamma^\alpha \int d^4\eta_1\, e^{-ik_1\cdot\eta_1}\times
\nonumber\\
&&
\langle p_1, S^T | \bar\psi(\eta_1) \, gA_{\alpha}(0) \, \psi(0) |S^T,p_1\rangle \, .
\end{eqnarray}
For convenience, we introduce the unintegrated tensor $\overline{\cal W}_{\mu\nu}$ for the
factorized hadron tensor ${\cal W}_{\mu\nu}$ of the process. It reads
\begin{eqnarray}
\label{Uninteg-FacHadTen}
&&{\cal W}^{\mu\nu}=\int d^2 \vec{\textbf{q}}_T d{\cal W}^{\mu\nu}=\frac{2}{q^2}
\int d^2 \vec{\textbf{q}}_T \,\delta^{(2)}(\vec{\textbf{q}}_T) \times
\nonumber\\
&&
i\, \int dx_1 \, dy \,
\big[\delta(x_1/x_B-1) \delta(y/y_B-1)\big]
\overline{\cal W}^{\mu\nu}.
\end{eqnarray}
After calculation of all relevant traces in the factorized hadron tensor and after some algebra, we arrive at the following
contributions to the unintegrated hadron tensor (which includes all relevant contributions except the mirror ones):
the standard diagram depicted in Fig.~\ref{Fig-DY}, the left panel, gives us
\begin{eqnarray}
\label{DY-St}
&&\overline{\cal W}_{(\text{Stand.})}^{\mu\nu}
+ \overline{\cal W}_{(\text{Stand.},\,\partial_\perp)}^{\mu\nu}=\bar q(y)\,
\Bigg\{
\\
&&
- \frac{p_{1}^{\mu}}{y}\,
\varepsilon^{\nu S^T - p_2}\, \int dx_2 \frac{x_1-x_2}{x_1-x_2+i\epsilon} B^{(1)}(x_1,x_2)
\nonumber\\
&& -
\Big[ \frac{p_{2}^{\nu}}{x_1} \varepsilon^{\mu S^T - p_2} + \frac{p_{2}^{\mu}}{x_1} \varepsilon^{\nu S^T - p_2} \Big]
x_1\int dx_2 \frac{B^{(2)}(x_1,x_2)}{x_1-x_2+i\epsilon}
\nonumber\\
&&+ \frac{p_{1}^{\mu}}{y} \,
\varepsilon^{\nu S^T - p_2}\, \int dx_2 \frac{B^{(\perp)}(x_1,x_2)}{x_1-x_2+i\epsilon}
\Bigg\}\,,
\nonumber
\end{eqnarray}
while the non-standard diagram presented in Fig.~\ref{Fig-DY}, the right panel,
contributes as
\begin{eqnarray}
\label{DY-NonSt}
&&\overline{\cal W}_{(\text{Non-stand.})}^{\mu\nu}=
\bar q(y)
\frac{p_{2}^{\mu}}{x_1}
\varepsilon^{\nu S^T -p_2}\times
\nonumber\\
&&
\int dx_2 \Big\{ B^{(1)}(x_1,x_2) +
B^{(2)}(x_1,x_2)
\Big\}.
\end{eqnarray}
Here we introduce the shorthand notation:
$\varepsilon^{A B C D}= \varepsilon^{\mu_1 \mu_2 \mu_3 \mu_4} A_{\mu_1} B_{\mu_2} C_{\mu_3} D_{\mu_4}$
with $\varepsilon^{0123}=1$.
Moreover, the parametrizing functions are associated with the following correlators:
\begin{eqnarray}
\label{ParFunB1}
&&i\varepsilon^{\alpha + S^T -} (p_1p_2)\, B^{(1)}(x_1,x_2)=
\\
&&{\cal F}_2\Big[\langle p_1, S^T| \bar\psi(\eta_1)\, \gamma^+ \, gA^\alpha_\perp(z)\, \psi(0) | S^T,p_1 \rangle \Big]\,,
\nonumber\
\end{eqnarray}
\begin{eqnarray}
\label{ParFunB2}
&&i\varepsilon^{+ \beta S^T -} (p_1p_2)\, B^{(2)}(x_1,x_2)=
\\
&&{\cal F}_2\Big[\langle p_1, S^T| \bar\psi(\eta_1)\, \gamma^\beta_\perp \, gA^+(z)\, \psi(0) | S^T,p_1 \rangle \Big]\,,
\nonumber\
\end{eqnarray}
\begin{eqnarray}
\label{ParFunBperp}
&& i p_1^+ \varepsilon^{\rho + S^T -} (p_1p_2) B^{(\perp)}(x_1,x_2)=
\\
&&{\cal F}_2\Big[\langle p_1, S^T| \bar\psi(\eta_1)\, \gamma^+\, \big(\partial^\rho_\perp\, gA^+(z)\big)\, \psi(0) | S^T,p_1 \rangle \Big]\,,
\nonumber
\end{eqnarray}
where $\eta_1=\lambda_1\tilde n$, $z=\lambda_2 \tilde n$, and
the light-cone vector $\tilde n$ is a dimensionful analog of $n$ ($\tilde n^-=p_2^-/(p_1p_2)$).
As known from \cite{AT-GP}, the function $B^{(1)}(x_1,x_2)$ for the DY process
can be unambiguously written as
\begin{eqnarray}
\label{B1-fun}
B^{(1)}(x_1,x_2)=\frac{T(x_1,x_2)}{x_1-x_2+i\varepsilon}\,,
\end{eqnarray}
where the real function $T(x_1,x_2)$
parametrizes the corresponding projection of $\langle \bar\psi\, G_{\alpha\beta}\,\psi \rangle$, {\it i.e.}
\begin{eqnarray}
\label{parT}
&&\varepsilon^{\alpha + S^T -}\,(p_1p_2)\, T(x_1,x_2)=
\\
&&{\cal F}_2\Big[ \langle p_1, S^T | \bar\psi(\eta_1)\, \gamma^+ \,
\tilde n_\nu G^{\nu\alpha}_T(z) \,\psi(0)
|S^T, p_1 \rangle\Big]\,.
\nonumber
\end{eqnarray}
Notice that we have derived (see \cite{AT-GP}) this complex prescription in the
{\it r.h.s.} of (\ref{B1-fun}) within the contour gauge. In this letter, we assume that
the same prescription holds in the Feynman gauge too\footnote{Generally speaking, in the Feynman gauge
the arguments used to derive the complex prescription differ from those we used in \cite{AT-GP}.
For example, the prescription can be fixed by the ordering of operator positions along the light-cone direction.}.
With respect to the functions $B^{(2)}(x_1,x_2)$ and $B^{(\perp)}(x_1,x_2)$, we demonstrate below that
these functions do not possess gluon poles and, therefore, cannot be represented in the form of (\ref{B1-fun}).
Summing up all contributions from the standard and non-standard diagrams, we finally obtain
the expression for the unintegrated hadron tensor as
\nopagebreak
\begin{widetext}
\begin{eqnarray}
\label{DY-ht-1}
&&\overline{\cal W}^{\mu\nu}=
\overline{\cal W}_{(\text{Stand.})}^{\mu\nu} + \overline{\cal W}_{(\text{Stand.},\,\partial_\perp)}^{\mu\nu}+
\overline{\cal W}_{(\text{Non-stand.})}^{\mu\nu}=
\bar q(y)\,
\Bigg\{ \Big[ \frac{p_{2}^{\mu}}{x_1} - \frac{p_{1}^{\mu}}{y} \Big] \,
\varepsilon^{\nu S^T -p_2}\, \int dx_2 B^{(1)}(x_1,x_2) +
\nonumber\\
&& \frac{p_{2}^{\mu}}{x_1} \,
\varepsilon^{\nu S^T - p_2}\, \int dx_2 B^{(2)}(x_1,x_2) -
\Big[ \frac{p_{2}^{\nu}}{x_1} \varepsilon^{\mu S^T - p_2} + \frac{p_{2}^{\mu}}{x_1} \varepsilon^{\nu S^T - p_2} \Big]
x_1\int dx_2 \frac{B^{(2)}(x_1,x_2)}{x_1-x_2+i\epsilon} +
\nonumber\\
&& \frac{p_{1}^{\mu}}{y} \,
\varepsilon^{\nu S^T - p_2}\, \int dx_2 \frac{B^{(\perp)}(x_1,x_2)}{x_1-x_2+i\epsilon}
\Bigg\}\,,
\end{eqnarray}
\end{widetext}
Notice that the first term in Eqn.~(\ref{DY-ht-1}) coincides with the hadron tensor calculated within the light-cone gauge
$A^+=0$.
\section{QED gauge invariance of hadron tensor}
Let us now discuss the QED gauge invariance of the hadron tensor.
From Eqn.~(\ref{DY-ht-1}),
we can see that the QED gauge invariant combination is
\begin{eqnarray}
\label{GI-comb}
&&{\cal T}^{\mu\nu}=\Big[ \frac{p_{2}^{\mu}}{x_1} - \frac{p_{1}^{\mu}}{y} \Big] \,
\varepsilon^{\nu S^T -p_2},\,
\nonumber\\
&&\text{with}\quad
q_\mu {\cal T}^{\mu\nu} = q_\nu {\cal T}^{\mu\nu}=0.
\end{eqnarray}
We can see that there is a single term with $p_{2}^{\nu}$ which does not have
a counterpart to construct the gauge-invariant combination
\begin{eqnarray}
\label{GI-com-2}
\frac{p_{2}^{\mu}}{x_1} - \frac{p_{1}^{\mu}}{y}.
\end{eqnarray}
Therefore, the second term in Eqn.~(\ref{DY-St}) should be equal to zero.
This also leads to the vanishing of the second term in Eqn.~(\ref{DY-NonSt}).
Hence, the only way to get the QED gauge invariant combination (see (\ref{GI-comb})) is
to combine the first terms in Eqns.~(\ref{DY-St}) and (\ref{DY-NonSt}). This combination justifies the treatment of the gluon pole in $B^{(1)}(x_1,x_2)$
by means of the complex prescription.
In addition, we conclude that the third term in (\ref{DY-St}) does not contribute to the SSA.
The suggested proof exploits only the gauge and Lorentz invariance.
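As a minimal symbolic cross-check of this transversality (a SymPy sketch; the $\varepsilon$ factor in Eqn.~(\ref{GI-comb}) carries the free index $\nu$ and plays no role in the contraction over $\mu$), one can verify that $q_\mu\big(p_2^{\mu}/x_1-p_1^{\mu}/y\big)=0$ for $q=x_1 p_1+y p_2$ with light-like $p_1$, $p_2$ and $q_T$ neglected.
\begin{verbatim}
# Sketch: q_mu (p2^mu/x1 - p1^mu/y) = 0 for q = x1 p1 + y p2
# with light-like hadron momenta p1, p2 (collinear kinematics, q_T dropped).
import sympy as sp

E1, E2, x1, y = sp.symbols('E1 E2 x1 y', positive=True)
eta = sp.diag(1, -1, -1, -1)                 # metric, signature (+,-,-,-)

p1 = sp.Matrix([E1, 0, 0,  E1])              # along the plus light-cone direction
p2 = sp.Matrix([E2, 0, 0, -E2])              # along the minus light-cone direction
q  = x1*p1 + y*p2

T = p2/x1 - p1/y                             # vector structure of Eq. (GI-comb)
assert sp.simplify((q.T * eta * T)[0]) == 0
print("q_mu (p2^mu/x1 - p1^mu/y) = 0")
\end{verbatim}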
Let us now consider another line of reasoning to justify these properties of the correlators,
starting with the correlator which generates the function $B^{(2)}(x_1,x_2)$:
\begin{eqnarray}
\label{Corr-1}
&&\int (d\lambda_1\, d\lambda_2) e^{-ix_1\lambda_1 - i(x_2-x_1)\lambda_2}\times
\nonumber\\
&&\langle p_1, S^T| \bar\psi(\lambda_1 \tilde n)\, \gamma_\perp^\beta \, A^+(\lambda_2 \tilde n)\, \psi(0) | S^T,p_1 \rangle =
\nonumber\\
&&
i\varepsilon^{+ \beta S^T - }\, (p_1 p_2)\, B^{(2)}(x_1,x_2)\,.
\end{eqnarray}
We now go over to the momentum representation for the correlator on the l.h.s.\ of Eqn.~(\ref{Corr-1}).
Schematically, we have
\begin{eqnarray}
\label{Corr-2}
\Big[ \bar u(k_1) \gamma^\perp_\beta u(k_2)\Big] \times .... \times \frac{1}{\ell^2 + i\varepsilon}\,,
\end{eqnarray}
where the gluon momentum is $\ell = k_2-k_1$ and $k_1=(x_1 p^+_1, k^-_1, \vec{{\bf k}}_{1\,\perp})$, $k_2=(x_2 p^+_1, k^-_2, \vec{{\bf k}}_{2\,\perp})$.
This situation has been illustrated in Fig.~\ref{Fig-DY-2}, see the left panel.
To order $g$, we can also write (see Fig.~\ref{Fig-DY-2}, the right panel)
\begin{eqnarray}
\label{Corr-3}
\Big[ \bar u(k_1) \gamma^\perp_\beta {\cal S}(k_2) u(k_1 )\Big] \times .... \times \frac{1}{\ell^2 + i\varepsilon}\,,
\end{eqnarray}
where ${\cal S}(k_2)=S(k_2)\gamma^+$.
From both these equations, it is clear that to get a non-zero contribution we must have either $\vec{{\bf k}}_{1\,\perp}\neq 0$
or $\vec{{\bf k}}_{2\,\perp}\neq 0$. Indeed,
\begin{eqnarray}
\Big[ \bar u(k_1) \gamma^\perp_\beta {\cal S}(k_2) u(k_1 )\Big] \Rightarrow S_{\beta k_2 + k_1}= k^\perp_{2\,\beta} k^+_1 + k^\perp_{1\,\beta} k^+_2\,.
\end{eqnarray}
Therefore, the gluon propagator in Eqns.~(\ref{Corr-2}) and (\ref{Corr-3}) takes the following form (cf. \cite{Braun}):
\begin{eqnarray}
\label{gluon-prop}
\frac{1}{\ell^2+i\varepsilon}= \frac{1}{2(x_2-x_1)p^+_1 \ell^- - \vec{{\bf l}}^2_{\perp} +i\varepsilon}.
\end{eqnarray}
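To see explicitly that no pinch arises, one may simply set $x_1=x_2$ in (\ref{gluon-prop}) (this trivial evaluation is quoted only for clarity):
\begin{eqnarray*}
\frac{1}{\ell^2+i\varepsilon}\,\Big|_{x_1=x_2} = -\,\frac{1}{\vec{{\bf l}}^2_{\perp} - i\varepsilon}\,,
\end{eqnarray*}
which remains regular as long as the transverse momentum does not vanish.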
One can conclude that, in the case of a substantial transverse component of the momentum,
there is no source for gluon poles at $x_1=x_2$. As a result, the function $B^{(2)}(x_1,x_2)$ has no gluon poles and,
due to T-invariance \cite{Efremov:1984ip} ($B^{(2)}(x_1,x_2)= - B^{(2)}(x_2,x_1)$), obeys $B^{(2)}(x,x) = 0$.
On the other hand, if we have $\gamma^+$ in the correlator (see Eqn. (\ref{ParFunB1})), the transverse components of the gluon momentum
are not essential and can be neglected. This ensures the existence of gluon poles for the function $B^{(1)}(x_1,x_2)$.
This corresponds to the fact that the Sivers function, being related to gluon poles, contains the ``leading twist'' projector $\gamma^+$.
Moreover, we may conclude that the structure $\gamma^+ (\partial^\perp A^+)$ produces neither an imaginary part nor SSA in the Feynman gauge.
\section{Conclusions and discussions}
Working within the Feynman gauge, we have derived the
QED gauge invariant (unintegrated) hadron tensor for the polarized DY process:
\begin{eqnarray}
\label{DY-ht-GI}
&&\overline{\cal W}_{\text{GI}}^{\mu\nu}=
\overline{\cal W}_{(\text{Non-stand.})}^{\mu\nu} +
\overline{\cal W}_{(\text{Stand.})}^{\mu\nu}=
\nonumber\\
&&\bar q(y)
\Big[ \frac{p_{2}^{\mu}}{x_1} - \frac{p_{1}^{\mu}}{y} \Big]
\varepsilon^{\nu S^T -p_2} \hspace{-1.5mm}\int dx_2 B^{(1)}(x_1,x_2).
\end{eqnarray}
After calculating the imaginary part (or, in other words, after adding the mirror contributions)
and then integrating over $x_1$ and $y$
(see Eqn.~(\ref{Uninteg-FacHadTen})), we get
the QED gauge invariant hadron tensor as
\begin{eqnarray}
\label{DY-ht-GI-2}
W_{\text{GI}}^{\mu\nu}= \bar q(y_B)\,
\Big[ \frac{p_{2}^{\mu}}{x_B} - \frac{p_{1}^{\mu}}{y_B} \Big] \,
\varepsilon^{\nu S^T -p_2}\, T(x_B ,x_B)\,.
\end{eqnarray}
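Let us recall that the imaginary part stems from the complex prescription through the standard Sokhotski--Plemelj relation (quoted here merely to make the appearance of $T(x_B,x_B)$ transparent),
\begin{eqnarray*}
\frac{1}{x_1-x_2+i\epsilon}= {\cal P}\,\frac{1}{x_1-x_2} - i\pi\,\delta(x_1-x_2)\,,
\end{eqnarray*}
where the $\delta$-function sets $x_2=x_1$ and thereby singles out the gluon-pole contribution at $x_1=x_2=x_B$.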
This expression fully coincides with the hadron tensor which
has been derived within the light-cone gauge for gluons.
Moreover, the factor of $2$ in the hadron tensor that
we found within the axial-type gauge \cite{AT-GP} is still present within the framework
of the Feynman gauge.
In order to show this factor of $2$, let us introduce the mutually orthogonal basis (see \cite{Barone})
as
\begin{eqnarray}
\label{vecZ}
Z_\mu= \widehat p_{1\,\mu} - \widehat p_{2\,\mu} \equiv x_B \, p_{1\,\mu}- y_B\, p_{2\,\mu} \,
\end{eqnarray}
and
\begin{eqnarray}
\label{vecX-Y}
&&\hspace{-0.5cm}X_\mu= -\frac{2}{s} \biggl[
(Z p_2)\biggl(p_{1\, \mu} - \frac{q_\mu}{2x_B} \biggr) -
(Z p_1)\biggl(p_{2\, \mu} - \frac{q_\mu}{2y_B} \biggr)
\biggr],
\nonumber\\
&&\hspace{-0.5cm}Y_\mu=\frac{2}{s} \, \varepsilon_{\mu p_1 p_2 q}.
\end{eqnarray}
Here $\widehat p_{i\,\mu}$ are the partonic momenta
($q_\mu=\widehat p_{1\,\mu}+\widehat p_{2\,\mu}$).
With the help of (\ref{vecZ}) and (\ref{vecX-Y}), the lepton momenta can be written
(in the lepton c.m.\ system) as
\begin{eqnarray}
\label{lepmom}
&&l_{1\,\mu} = \frac{1}{2} q_\mu + \frac{Q}{2} f_\mu(\theta,\varphi; \hat X, \hat Y, \hat Z)\, ,
\nonumber\\
&&l_{2\,\mu} = \frac{1}{2} q_\mu - \frac{Q}{2} f_\mu(\theta,\varphi; \hat X, \hat Y, \hat Z)\, ,
\end{eqnarray}
where $\hat A = A/\sqrt{-A^2}$ and
\begin{eqnarray}
&&f_\mu(\theta,\varphi; \hat X, \hat Y, \hat Z)=
\\
&&\hat X_\mu\, \cos\varphi \,\sin\theta +
\hat Y_\mu\, \sin\varphi \,\sin\theta + \hat Z_\mu\, \cos\theta \, .
\nonumber
\end{eqnarray}
Within this frame, the contraction of the lepton tensor with the gauge invariant
hadron tensor (\ref{DY-ht-GI-2}) reads
\begin{eqnarray}
\label{Contraction}
{\cal L}_{\mu\nu} \, W_{\text{GI}}^{\mu\nu} =
-2 \cos\theta\, \varepsilon^{l_1 S^T p_1 p_2} \,\bar q(y_B)\, T(x_B,x_B)\, .
\end{eqnarray}
We want to emphasize that the expression in (\ref{Contraction}) differs by a factor of $2$ from the case where
only one diagram (presented in Fig. \ref{Fig-DY}, the left panel) is included in the (gauge non-invariant)
hadron tensor, {\it i.e.}
\begin{eqnarray}
\label{Diff2}
{\cal L}_{\mu\nu} \, W_{(\text{Stand.})}^{\mu\nu} =
\frac{1}{2}\, {\cal L}_{\mu\nu} \, W_{\text{GI}}^{\mu\nu} \, .
\end{eqnarray}
Therefore, from the practical point of view, if we neglect the diagram in
Fig. \ref{Fig-DY} (right panel) or, in other words, if we
use the QED gauge non-invariant hadron tensor, we make an error of a factor of two.
Further, based on the light-cone dynamics, we argue that there are no gluon poles
in the correlators $\langle\bar\psi\gamma_\perp A^+\psi\rangle$.
This means that the function $B^{(2)}(x_1,x_2)$ does not admit a representation similar to (\ref{B1-fun}).
We also show that
the Lorentz and QED gauge invariance of the hadron tensor calculated within the Feynman gauge
requires that the function $B^{(2)}(x_1,x_2)$ cannot have gluon poles.
The fact that the function $B^{(2)}(x_1,x_2)$ cannot be presented in the form of (\ref{B1-fun})
directly leads to the absence of $dT/dx$ in the final expression for the gauge-invariant hadron tensor.
Indeed, from (\ref{DY-St}), one can see that $B^{(2)}(x_1,x_2)$ contributes to the standard hadron tensor as
\begin{eqnarray}
\label{DY-St-dT}
\Big[ p_{2}^{\nu} \varepsilon^{\mu S^T - p_2} + p_{2}^{\mu} \varepsilon^{\nu S^T - p_2} \Big]
\int dx_2 \frac{B^{(2)}(x_1,x_2)}{x_1-x_2+i\epsilon}.
\end{eqnarray}
In order to obtain the $dT/dx$-contribution, we would have to impose the representation (\ref{B1-fun})
on $B^{(2)}(x_1,x_2)$ and then perform the integration over $x_2$ by parts.
However, as shown above, $B^{(2)}(x_1,x_2)$ does not admit the representation (\ref{B1-fun}).
This property seems natural from the point of view of the relation of gluon poles \cite{Boer:2003cm} to the Sivers function,
as the latter is related to the projection $\gamma^+$. As for the function $B^{(\perp)}(x_1,x_2)$, the transverse derivative
of the Sivers function, resulting from taking its moments, may act on both the integrand and the boundary value. Our result suggests that only the
action on the boundary value, related to $B^{(1)}(x_1,x_2)$, should produce SSA.
This is not unnatural, keeping in mind that the integrand
differentiation is present even for simple straight-line contours, which do not produce SSA.
\nopagebreak
\begin{figure*}[ht]
\centerline{\includegraphics[width=0.45\textwidth]{Fig-DY-1.pdf}
\hspace{1.cm}\includegraphics[width=0.45\textwidth]{Fig-DY-2.pdf}}
\caption{The Feynman diagrams which contribute to the polarized Drell-Yan hadron tensor.}
\label{Fig-DY}
\end{figure*}
\begin{figure*}[ht]
\centerline{\includegraphics[width=0.3\textwidth]{DY-fg-F1.pdf}
\hspace{2.cm}\includegraphics[width=0.26\textwidth]{DY-fg-F2.pdf}}
\vspace{1cm}
\caption{The matrix element (correlator) of the nonlocal twist-3 quark-gluon operator in the momentum
representation. Here $\ell=k_2-k_1$,
$k_1=(x_1 p^+_1, k^-_1, \vec{{\bf k}}_{1\,\perp})$ and $k_2=(x_2 p^+_1, k^-_2, \vec{{\bf k}}_{2\,\perp})$.}
\label{Fig-DY-2}
\end{figure*}
\section*{Acknowledgments}
We thank A.V.~Efremov and A.~Prokudin for useful discussions.
The work by I.V.A. was partially supported by the Heisenberg-Landau Program of the
German Research Foundation (DFG).
|
1,108,101,563,640 | arxiv | \section{Introduction} \label{sec:introduction}
Batch normalization \citep{ioffe2015batch}, in conjunction with skip connections \citep{he2016deep, he2016identity}, has allowed the training of significantly deeper networks, so that most state-of-the-art architectures are based on these two paradigms.
The main reason why this combination works well is that it yields well behaved gradients (removing \textit{mean-shift}, avoiding \textit{vanishing} or \textit{exploding} gradients). As a consequence, the training problem can be ``easily'' solved by SGD or other first-order stochastic optimization methods. Furthermore, batch normalization can have a regularizing effect \citep{hoffer2017, luo2019understanding}.
However, while skip connections can be easily implemented and integrated in any network architecture without major drawbacks, batch normalization poses a few practical challenges. As already observed and discussed by \cite{brock2021characterizing, brock2021highperformance} and references therein, batch normalization adds a significant memory overhead, introduces a discrepancy between training and inference time, has a tricky implementation in distributed training, performs poorly with small batch sizes \citep{yan2020towards} and breaks the independence between training examples in a minibatch, which can be extremely harmful for some learning tasks \citep{lee2020residual, lomonaco2020rehearsal}.
For these reasons a new stream of research emerged which aims at removing batch normalization from modern architectures. Several works \citep{zhang2019, soham2020batch, bachlechner2020rezero} aim at removing normalization layers by introducing a learnable scalar at the end of the residual branch, i.e., computing a residual block of the form $x_{l} = x_{l-1} + \alpha f(x_{l-1})$. The scalar $\alpha$ is often initialized to zero so that the gradient is dominated, early on in the training, by the skip path. While these approaches have been shown to allow the training of very deep networks, they still struggle to obtain state-of-the-art test results on challenging benchmarks.
\cite{shao2020normalization} propose a different modification of the standard residual layer, suitably carrying out a weighted sum of the identity and the non-linear branches.
More recently \cite{brock2021characterizing, brock2021highperformance} proposed an approach that combines a modification of the residual block with a careful initialization, a variation of the Scaled Weight Standardization \citep{hang2017centered, qiao2020microbatch} and a novel adaptive gradient clipping technique. Such combination has been shown to obtain competitive results on challenging benchmarks.
In this work we propose a simple modification of the residual block summation operation that, together with a careful initialization, allows the training of deep residual networks without any normalization layer. Such a scheme does not require the use of any standardization layer nor algorithmic modification. Our contributions are as follows:
\begin{itemize}
\item We show that while \textit{NFNets} of \cite{brock2021characterizing, brock2021highperformance} enjoy a perfectly preserved forward variance (as already noted by \cite{brock2021characterizing}), their initialization puts the network in a regime of \textit{exploding gradients}. This is shown by looking at the variance of the derivatives of the loss w.r.t.\ the feature maps at different depths.
\item We propose a simple modification of the residual layer and then develop a suitable initialization scheme building on the work of \cite{he2015delving}.
\item We show that the proposed architecture achieves competitive results on CIFAR-10, CIFAR-100 \citep{krizhevsky2009learning}, and ImageNet, which we consider evidence supporting our theoretical claims.
\end{itemize}
\section{Background} \label{sec:background}
As highlighted in a number of recent studies \citep{hanin2018start, devansh2019initialize, yann2019metainit}, weights initialization is crucial to make deep networks work in absence of batch normalization. In particular, the weights at the beginning of the training process should be set so as to correctly propagate the forward activation and the backward gradients signal in terms of mean and variance.
This kind of analysis was first proposed by \cite{glorot2010understanding} and later extended by \cite{he2015delving}. These seminal studies considered architectures composed of a sequence of convolutions and Rectified Linear Units (ReLU), which mainly differ from modern ResNet architectures in the absence of skip-connections.
The analysis in \cite{he2015delving} investigates the variance of the response of each layer $l$ (\textit{forward variance}):
\begin{gather*}
z_l =\text{ReLU}(x_{l-1}),\qquad
x_{l} = W_l z_l.
\end{gather*}
The authors find that if $\mathbb{E}[x_{l-1}]=0$ and $\text{Var}[x_{l-1}]=1$ the output maintains zero mean and unit variance if we initialize the kernel matrix in such a way that:
\begin{equation} \label{eq:he_for}
\text{Var}[W] = \frac{2}{n_{\text{in}}},
\end{equation}
where $n_{\text{in}}=k^2c$ with $k$ the filter dimension and $c$ the number of input channels (\textit{fan in}).
A similar analysis is carried out considering the gradient of the loss w.r.t.\ each layer response (\textit{backward variance}) $\dd{\cal L}{x_l}$. In this case we can preserve zero mean and constant variance if we have
\begin{equation} \label{eq:he_back}
\text{Var}[W] = \frac{2}{n_{\text{out}}},
\end{equation}
where $n_{\text{out}}=k^2d$ with $k$ the filter dimension and $d$ the number of output channels (\textit{fan out}).
Note that equations (\ref{eq:he_for}) and (\ref{eq:he_back}) only differ for a factor which, in most common network architectures, is in fact equal to 1 in the vast majority of layers. Therefore, the initialization proposed by \cite{he2015delving} should generally lead to the conservation of both \textit{forward} and \textit{backward} signals.
The two derivations are reported, for the sake of completeness, in Appendix A.
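For concreteness, the two rules can be realized with the following minimal NumPy sketch (our own illustration, not taken from the cited works; the function name and tensor shapes are arbitrary): the kernel is drawn with $\text{Var}[W]=2/n$, where $n$ is the fan-in for rule (\ref{eq:he_for}) or the fan-out for rule (\ref{eq:he_back}).
\begin{verbatim}
import numpy as np

def he_init(k, c_in, c_out, mode="fan_in", seed=0):
    """Draw a (c_out, c_in, k, k) conv kernel with Var[W] = 2/n."""
    rng = np.random.default_rng(seed)
    n = k * k * (c_in if mode == "fan_in" else c_out)
    std = np.sqrt(2.0 / n)   # n = fan-in (forward rule) or fan-out (backward rule)
    return rng.normal(0.0, std, size=(c_out, c_in, k, k))

W = he_init(3, 64, 64, mode="fan_out")   # fan-in == fan-out when c_in == c_out
\end{verbatim}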
In a recent work, \cite{brock2021characterizing} argued that the initial weights should not be considered as random variables, but rather as the realization of a random process. Thus, the empirical mean and variance of the weights do not, in general, coincide with the moments of the generating random process. Hence, a normalization of the weight matrix should be performed after sampling to obtain the desired moments. Moreover, they argue that channel-wise responses should be analyzed. This leads to a different initialization strategy:
\begin{equation} \label{eq:brock_for}
\text{Var}[W_i] = \frac{2/(1-\frac{1}{\pi})}{n_{\text{in}}},
\end{equation}
where $W_i$ is a single channel of the filter. Note that if mean and variance are preserved channel-wise, then they are also preserved if the whole layer is taken into account.
The authors do not take into account the \textit{backward variance}.
\cite{brock2021characterizing} show that the latter initialization scheme allows one to experimentally preserve the channel-wise activation variance, whereas He's technique only works at the full-layer level.
In the ResNet setting, initialization alone is not sufficient to make the training properly work without batch normalization, if the commonly employed architecture with Identity Shortcuts (see Figure \ref{fig:original_preactivation}) is considered.
In particular, the skip-branch summation
\begin{equation}
x_l = x_{l-1} + f_l(x_{l-1}),
\end{equation}
at the end of each block does not preserve variance, causing the phenomenon known as \textit{internal covariate shift} \citep{ioffe2015batch}.
In order to overcome this issue, Batch Normalization has been devised. More recently, effort has been put into designing other architectural and algorithmic modifications that do not rely on batch statistics.
Specifically, \cite{zhang2019, soham2020batch, bachlechner2020rezero} modified the skip-identity summation so as to downscale the variance at the beginning of training, biasing, in other words, the network towards the identity function, i.e., computing
\begin{equation*}
x_{l}=x_{l-1}+\alpha f_l(x_{l-1}).
\end{equation*}
This has the downside that $\alpha$ must be tuned and is dependent on the number of layers. Moreover, while these solutions enjoy good convergence on the training set, they appear not to be sufficient to make deep ResNets reach state-of-the-art test accuracies \citep{brock2021characterizing}.
Similarly, \cite{shao2020normalization} suggest computing the output of the residual block as a weighted sum of the identity and the non-linear branch. Formally, the residual layer becomes
$$x_{l} = \alpha_l x_{l-1} + \beta_l f(x_{l-1}),$$ where coefficients $\alpha_l$ and $\beta_l$ can be set so that the \textit{forward variance} is conserved by imposing that $\alpha_l^2+\beta_l^2=1$. Different strategies can be employed to choose their relative value.
More recently, \cite{brock2021characterizing} proposed to additionally perform a runtime layer-wise normalization of the weights, together with the empirical channel-wise intialization scheme.
However, we show in the following that the latter scheme, while enjoying perfectly conserved forward variances, induces the network to work in a regime of \textit{exploding gradients}, i.e., the variance of the gradients of the shallowest layers is exponentially larger than that of the deepest ones.
Reasonably, \cite{brock2021highperformance} found the use of a tailored adaptive gradient clipping to be beneficial because of this reason.
\section{The Proposed Method} \label{sec:proposed_method}
In order to overcome the issue discussed at the end of the previous section, we propose to modify the summation operation of ResNet architectures so that, at the beginning of the training, the mean of either the activations or the gradients is zero and the variance is preserved throughout the network. In our view, our proposal is a natural extension of the work of \cite{he2015delving} for the case of ResNet architectures. Note that, to develop an effective initialization scheme, the residual block summation has to be slightly modified.
Namely, we analyze the following general scheme (see Figure \ref{fig:proposed_preactivation}):
\begin{equation} \label{eq:gen_scheme}
x_l = c \cdot \left(h(x_{l-1}) + f_l(x_{l-1})\right),
\end{equation}
where $c$ is a suitable constant, $h$ is a generic function operating on the skip branch and $f_l(x_{l-1})$ represents the output of the convolutional branch. As we will detail later in this work, this framework generalizes the most commonly employed skip connections.
We assume that we are able, through a proper initialization, to have zero mean and controlled variance (either backward or forward) for each block $f_l$.
In a typical ResNet architecture, $f_l$ is a sequence of two or three convolutions, each one preceded by a ReLU activation - \textit{pre-activation} \citep{he2016identity} - allowing to control both mean and variance through initialization schemes (\ref{eq:he_for}) and (\ref{eq:he_back}).
Note that \textit{post-activated} ResNets do not allow $f_l$ to have zero (either gradient or activation) mean, which corroborates the analysis done by \cite{he2016deep}.
We perform the analysis in this general setting, deriving the condition $h$ and $c$ must satisfy in order to preserve either the forward or backward variance. Then, we propose different ways in which $h$ and $c$ can be defined to satisfy such conditions.
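Before doing so, we give a small PyTorch-style sketch of the generalized block \eqref{eq:gen_scheme} (this is our own illustrative code, not a reference implementation; the choice $h=\mathrm{id}$ and $c=\sqrt{0.5}$ anticipates one of the setups discussed in Section \ref{sec:proposed_initialization}):
\begin{verbatim}
import torch.nn as nn

class GeneralizedResBlock(nn.Module):
    """Computes x_l = c * (h(x_{l-1}) + f(x_{l-1})) with a pre-activated f."""
    def __init__(self, channels, c=0.5 ** 0.5):
        super().__init__()
        self.c = c
        self.h = nn.Identity()   # placeholder for the generic skip function h
        self.f = nn.Sequential(  # pre-activation: ReLU -> conv, twice
            nn.ReLU(), nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.ReLU(), nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        )
        for m in self.f.modules():  # He initialization, fan-out (backward) rule
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out",
                                        nonlinearity="relu")

    def forward(self, x):
        return self.c * (self.h(x) + self.f(x))
\end{verbatim}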
\subsection{The Forward Case}
Let us assume that $\mathbb{E}[x_{0}] = 0$ and $\text{Var}[x_{0}] = 1$, being $x_0$ the input data, and let us reason by induction.
By the inductive step we assume $\mathbb{E}[x_{l-1}] = 0$ and $\text{Var}[x_{l-1}] = 1$; if weights of each block $f$ are initialized following rule (\ref{eq:he_for}), we can easily verify that $$\mathbb{E}[f_l(x_{l-1})] = \mathbb{E}[x_{l-1}] = 0, \quad \text{Var}[f_l(x_{l-1})] = \text{Var}[x_{l-1}]=1.$$
Recalling \cite{shao2020normalization}, we are allowed to assume that $f_l(x_{l-1})$ and $h(x_{l-1})$ have zero correlation, thus getting
\begin{align*}
\mathbb{E}[x_l] & = c\cdot\left(\mathbb{E}[h(x_{l-1})] + \mathbb{E}[f_l(x_{l-1})]\right) \\
& = c\cdot\mathbb{E}[h(x_{l-1})], \\
\text{Var}[x_l] & = c^2\cdot(\text{Var}[h(x_{l-1})] + \text{Var}[f_l(x_{l-1})]) \\
& = c^2\cdot (\text{Var}[h(x_{l-1})]+1).
\end{align*}
Thus, defining $h$ so that $\mathbb{E}[h(x_{l-1})] = 0$ and $\text{Var}[h(x_{l-1})] = \frac{1}{c^2}-1$ the activation signal can be preserved and the induction step established.
\subsection{The Backward Case}
Let us assume that for the gradients at the output layer $L$ we have $\mathbb{E}\left[\frac{\partial \mathcal{L}}{\partial x_{L}}\right]=0$ and $\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{L}}\right]=C$ and that we initialize the weight of each block $f_l$ by rule (\ref{eq:he_back}).
Now, we can assume by induction that the gradients at layer $l$ have zero mean and preserved variance, i.e., $\mathbb{E}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]=0$ and $\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]=C$. Since for the gradients at layer $l-1$ we have
\begin{align*}
\frac{\partial \mathcal{L}}{\partial x_{l-1}} &= \frac{\partial \mathcal{L}}{\partial x_{l}}\frac{\partial x_l}{\partial x_{l-1}}=c\cdot\frac{\partial \mathcal{L}}{\partial x_{l}}\left(\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}+\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right),
\end{align*}
we get
\begin{align*}
\mathbb{E}\left[\frac{\partial \mathcal{L}}{\partial x_{l-1}}\right] &=c\cdot \mathbb{E}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\frac{\partial x_l}{\partial x_{l-1}}\right] \\ &=c\cdot \mathbb{E}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\mathbb{E}\left[\frac{\partial x_l}{\partial x_{l-1}}\right] = 0.
\end{align*}
Moreover, under the reasonable assumption that there is zero correlation between $\frac{\partial \mathcal{L}}{\partial x_{l}}$ and $\frac{\partial x_l}{\partial x_{l-1}}$, we can further write
\footnotesize
\begin{align*}
\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l-1}}\right]&=c^2\left(\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\text{Var}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}+\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right]\right. \\&\quad+ \text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\mathbb{E}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}+\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right]^2 \\&\quad\left.+ \mathbb{E}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]^2\text{Var}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}+\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right] \right)\\
&=c^2\left(\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\left(\text{Var}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}\right]+\text{Var}\left[\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right]\right)\right. \\&\quad+ \text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\left(\mathbb{E}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}\right]+\mathbb{E}\left[\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right]\right)^2 \\&\quad\left.+ \mathbb{E}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]^2\left(\text{Var}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}\right]+\text{Var}\left[\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right]\right) \right).
\end{align*}
\normalsize
Thanks to the initialization rule (\ref{eq:he_back}), it holds $\mathbb{E}\left[\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right] = 0$ and $\text{Var}\left[\frac{ \partial f_l(x_{l-1})}{\partial x_{l-1}}\right] = 1$. Therefore we can conclude
\begin{equation} \label{eq:fine_conti_back}
\begin{aligned}
\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l-1}}\right]=\;&c^2\cdot\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\left(\text{Var}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}\right]+1\right)\\&+ c^2\cdot\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\mathbb{E}\left[\frac{ \partial h(x_{l-1})}{\partial x_{l-1}}\right]^2.
\end{aligned}
\end{equation}
The induction step can therefore be established and the preservation of the gradients signal obtained by suitably defined $h$ and $c$.
We argue that some of the techniques proposed by \cite{brock2021characterizing,brock2021highperformance} to train deep Residual Networks (weight normalization layers, adaptive gradient clipping, etc.) become necessary because initialization (\ref{eq:brock_for}) focuses on the preservation of the forward activation signal while disregarding the backward one.
Indeed, the correction factor $\gamma_g^2=2/(1-\frac{1}{\pi})$ in \eqref{eq:brock_for} breaks the conservation property of the gradients signal, as opposed to \eqref{eq:he_for}. As we back-propagate through the model, the factor $\gamma_g^2$ amplifies the gradients signal at each layer, so that the gradients at the last layers are orders of magnitude larger than those at the first layers (going from output to input layers), i.e., the network is in a regime of \textit{exploding gradient}. In the section devoted to the numerical experiments we will show the forward and backward behaviour of these nets.
\subsection{Gradients signal preserving setups} \label{sec:proposed_initialization}
It is well known that \textit{exploding gradients} make training hard (from an optimization perspective). Indeed, without further algorithmic or architectural tricks we are unable to train very deep networks. It is important to note that in the seminal analyses from \cite{glorot2010understanding} and \cite{he2015delving} the derivation implied that preserving the forward variance also entailed preserving the backward variance (at least to a reasonable extent). Indeed, forward and backward variance can be equally preserved if, as already noted, the number of input and output channels is equal at each layer. On the contrary, in the derivation of \cite{brock2021characterizing, brock2021highperformance} this relationship between forward and backward variance is lost, so that conserving the forward variance implies \textit{exploding gradients}.
For this reason, in the following we mainly focus on the backward signal, which we argue is the more important quantity to control when forward and backward variance are not tightly related. We propose three different schemes for choosing $c$ and $h$ in \eqref{eq:gen_scheme}. In particular:
\begin{enumerate}
\item \textbf{scaled identity shortcut (IdShort):} $h(x) = x$, $c=\sqrt{0.5}$.
This choice, substituting in \eqref{eq:fine_conti_back}, leads to
\begin{align*}
\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l-1}}\right]&=\frac{1}{2}\cdot\left(\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\cdot 1 + \text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\cdot 1\right)\\&=\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right],
\end{align*}
i.e., the variance of gradients is preserved. As for the activations, we get $\mathbb{E}[x_l]=0$ and $$\text{Var}[x_l]=0.5\cdot(1+1)=1,$$ i.e., activations signal preservation, for all layers where input and output have the same size.
Note that the latter scheme is significantly different from approaches, like those from \cite{zhang2019, soham2020batch, bachlechner2020rezero}, that propose to add a (learnable) scalar that multiplies the skip branch. In fact, in the proposed scheme the (constant) scalar multiplies both branches and aims at controlling the total variance, without biasing the network towards the identity like in the other approaches.
This is the simplest variance preserving modification of the original scheme that can be devised, only adding a constant scalar scaling at the residual block.
\item \textbf{scaled identity shortcut with a learnable scalar (LearnScalar):} $h(x) = \alpha x$, $\alpha$ initialized at $1$, $c=\sqrt{0.5}$.
In \eqref{eq:fine_conti_back} we again get at initialization
\begin{align*}
\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l-1}}\right]&=\frac{1}{2}\cdot\left(\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\cdot 1 + \text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\cdot\alpha^2\right)\\&=\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right],
\end{align*}
and, similarly as above, we also obtain forward signal preservation at all layers where input and output have the same size.
\item \textbf{scaled identity shortcut with a $\boldsymbol{(1\times 1)}$-strided convolution (ConvShort):} $h(x) = W_sx$ initialized by (\ref{eq:he_back}), $c=\sqrt{0.5}$. Since we use He initialization on the convolutional shortcut \citep{he2016identity}, we have $\mathbb{E}\left[\frac{\partial h(x_{l-1})}{\partial x_{l-1}}\right] = 0$ and $\text{Var}\left[\frac{\partial h(x_{l-1})}{\partial x_{l-1}}\right] = 1$, hence we obtain in \eqref{eq:fine_conti_back}
\begin{align*}
\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l-1}}\right]&=\frac{1}{2}\cdot\left(\text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\cdot 2 + \text{Var}\left[\frac{\partial \mathcal{L}}{\partial x_{l}}\right]\cdot0\right).
\end{align*}
Again, if we consider the layers with equal size for inputs and outputs, we also get $\mathbb{E}[x_l]=0$ and $\text{Var}[x_l]=0.5\cdot(1+1)=1$.
Note that this setting (without the scale factor) is commonly used in most ResNet architectures when $x_{l-1}$ and $f_l(x_{l-1})$ do not have the same pixel resolution (for instance because $f$ contains some strided convolution) or the same number of channels.
\end{enumerate}
\begin{figure}
\centering
\begin{subfigure}{0.47\columnwidth}
\centering
\includegraphics[height=0.25\textheight]{images/full_preactivation_res_layer.png}
\caption{Standard pre-activated Residual Block}
\label{fig:original_preactivation}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\columnwidth}
\centering
\includegraphics[height=0.25\textheight]{images/gen_preactivation_res_layer.png}
\caption{Generalized Normalizer-Free Residual Block}
\label{fig:proposed_preactivation}
\end{subfigure}
\caption{Architectures of Residual Blocks. For both pictures the grey arrow marks the easiest path to propagate the information.}
\label{fig:original_vs_our_preactivation}
\end{figure}
\section{Experiments} \label{sec:experiments}
We start the investigation by numerically computing forward and backward variances for the different initialization schemes. We employ the recently introduced Signal Propagation Plots \citep{brock2021characterizing} for the forwards variance and a modification that looks at the gradients instead of the activations for the backwards case.
We employ the ResNet-50 and ResNet-101 architectures to extract the plots.
In particular we extract the plots for
\begin{itemize}
\item classical ResNet with He initialization, \textit{fan in} mode (\ref{eq:he_for}) and \textit{fan out} mode (\ref{eq:he_back});
\item same as the preceding, with batch normalization;
\item ResNet with the three proposed residual summation modifications and their proper initialization to preserve the backwards variance\footnote{Note that, as in the standard implementation, in IdShort and LearnScalar we employ ConvShort when $x$ does not have the same pixel resolution or number of channels as $f(x)$.};
\item same as the preceding, but employing the initialization of \cite{brock2021characterizing}.
\end{itemize}
For all the initialization schemes, we perform the empirical standardization to zero mean and desired variance of weights at each layer, after the random sampling.
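As an indication of how these quantities can be collected in practice, the following simplified sketch (our own code; the actual plots are extracted from the full ResNet-50/101 models) retains the gradients of the intermediate block outputs and reads off both variances after a single backward pass:
\begin{verbatim}
import torch

def signal_propagation(blocks, x, loss_fn):
    """Per-block forward (activation) and backward (gradient) variances."""
    outputs = []
    for block in blocks:      # residual blocks applied in sequence
        x = block(x)
        x.retain_grad()       # keep d(loss)/d(block output) after backward()
        outputs.append(x)
    loss_fn(x).backward()
    fwd = [o.detach().var().item() for o in outputs]
    bwd = [o.grad.var().item() for o in outputs]
    return fwd, bwd
\end{verbatim}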
From Figure \ref{fig:resnet_spp} we first note that, as already pointed out by \cite{brock2021characterizing}, classical ResNets with He initialization preserve neither the forward nor the backward signal, while the use of batch normalization manages to fix things up. Interestingly, we note that the observed trends are more conspicuous in deeper networks.
\begin{figure}[ht!]
\centering
\begin{subfigure}[h]{1.\columnwidth}
\centering
\includegraphics[width=0.99\columnwidth]{plots/spp/ResNet50.pdf}
\caption{ResNet-50}
\label{fig:resnet50_spp}
\end{subfigure} \hfill
\begin{subfigure}[h]{1.\columnwidth}
\centering
\includegraphics[width=0.99\columnwidth]{plots/spp/ResNet101.pdf}
\caption{ResNet-101}
\label{fig:resnet101_spp}
\end{subfigure} \hfill
\caption{Signal propagation plots representing the variance of the forward activations (on the left) and the backward gradients variance (on the right) under different initialization schemes: both values refer to residual block outputs. Values on the $x$-axis denote the residual layer depth, while on the $y$-axis the variance of the signal is reported on a logarithmic scale.}
\label{fig:resnet_spp}
\end{figure}
Next, we note that employing the proposed strategies (with proper initialization) we are able to conserve the variance of the gradients. On the contrary, the initialization proposed by \cite{brock2021characterizing} amazingly preserves the forward signal but puts the network in a regime of exploding gradients. Namely, the variance of the gradients exponentially increases going from the deepest to the shallowest residual layers. Additionally, we can also note how the proposed strategies also preserve the activations variance, up to some amount, while when employing the scheme of \cite{brock2021characterizing} the relationship between forward and backward variance is lost.
We continue the analysis by performing a set of experiments on the well-known CIFAR-10 dataset \citep{krizhevsky2009learning} in order to understand whether an effective training can actually be carried out under the different schemes, and to compare them in terms of both train and test accuracy. In particular, we are interested in checking whether the proposed schemes can reach the test performance obtained with batch normalization.
All the experiments described in what follows have been performed using SGD with an initial learning rate of 0.01, a momentum of 0.9 and a batch size of 128 (100 for ImageNet), in combination with a Cosine Annealing scheduler \citep{loshchilov2016sgdr} that decreases the learning rate after every epoch. Moreover, in addition to the standard data augmentation techniques, we have also employed the recently proposed RandAugment method \citep{cubuk2020rand} and, just for ImageNet, the Label Smoothing technique \citep{zhang2021delving}.
In Figure \ref{fig:resnet_accuracies} both train and test accuracies are shown for all the configurations. The results report the mean and the standard deviation of three independent runs.
\begin{figure}[ht!]
\centering
\begin{subfigure}[h]{1.\columnwidth}
\centering
\includegraphics[width=0.99\columnwidth]{plots/accuracies/ResNet50.pdf}
\caption{ResNet-50}
\label{fig:resnet50_accuracies}
\end{subfigure} \hfill
\begin{subfigure}[h]{1.\columnwidth}
\centering
\includegraphics[width=0.99\columnwidth]{plots/accuracies/ResNet101.pdf}
\caption{ResNet-101}
\label{fig:resnet101_accuracies}
\end{subfigure} \hfill
\caption{Test and Train accuracies of ResNet on the CIFAR-10 dataset under different combinations of residual block modifications and initialization: standard ResNet with BatchNorm and IdShort, LearnScalar, ConvShort using both \cite{brock2021characterizing} and our initialization. Each experiment has been run three times: the solid line is the mean value while the surrounding shadowed area represents the standard deviation. Finally, on the $x$-axis we report the epoch at which the accuracy (on the $y$-axis) has been computed.}
\label{fig:resnet_accuracies}
\end{figure}
The first thing to notice is that with the initialization scheme of \cite{brock2021characterizing} we are unable to train the network (the curve is actually absent from the plot) for both ResNet-50 and ResNet-101. This is due to the fact that the network, at the start of the training, is in a regime of exploding gradients, as observed in the SPPs. ResNet-18, instead, can be trained using all the considered initializations (see Appendix B).
On the contrary, we can see how, thanks to the correct preservation of the backward signals, training is possible for all the proposed schemes when a gradient preserving initialization scheme is employed.
We also notice that, while all the schemes achieve satisfactory test accuracies, only the \textit{ConvShort} modification has an expressive power able to close the gap with (and even outperform at the last epochs) the network trained with Batch Normalization. Thus, according to Figures \ref{fig:resnet50_accuracies} and \ref{fig:resnet101_accuracies}, \textit{ConvShort} appears to be an architectural change that, in combination with the proposed initialization strategy, is able to close the gap with a standard pre-activated ResNet with Batch Normalization (it achieves the second-best accuracy for ResNet-18, see Appendix B).
To confirm the effectiveness of the proposed method we also considered more resource-intensive settings, where gradient clipping is expected to be necessary. In particular, we considered the well-known datasets CIFAR-100 \citep{krizhevsky2009learning} and ImageNet \citep{deng2009imagenet}. Based on the results obtained on CIFAR-10, we decided to test the most promising among our architectures, namely the \textit{ConvShort} modification.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\columnwidth]{plots/accuracies/cifar100.png}
\caption{Comparison of Train and Test accuracies of ResNet-50 between a standard ResNet with BatchNorm and ConvShort with our initialization on CIFAR-100. Values on the $x$-axis denote the epoch at which the accuracy on the $y$-axis has been computed.}
\label{fig:rn50_cifar100_accuracies}
\end{figure}
In Figure \ref{fig:rn50_cifar100_accuracies} we report the results obtained using our ConvShort modification and a standard ResNet-50 with Batch Normalization. As can be seen, training is slower for our setup, but the performance gap eventually closes and the test accuracy of our approach even becomes slightly superior at the end of the process.
In Figure \ref{fig:rn50_imagenet_accuracies} we show the results obtained with our ConvShort modification on the well-known ImageNet dataset. In order to evaluate the soundness of our proposal, we compare our results with the accuracy, reported by PyTorch \citep{NEURIPS2019_9015}, reached by a standard ResNet-50 trained on ImageNet. We can observe that the performance obtained with our architecture is in line with the state of the art.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\columnwidth]{plots/accuracies/imagenet.png}
\caption{Results obtained training ResNet-50 with our ConvShort modification on ImageNet. Values on the $x$-axis denote the epoch at which the accuracy on the $y$-axis has been computed. The dashed red line is the accuracy reported by PyTorch \citep{NEURIPS2019_9015} for a standard ResNet-50 trained on ImageNet.}
\label{fig:rn50_imagenet_accuracies}
\end{figure}
The overall trend seems to indicate that DNNs can be trained up to state-of-the-art performance even without BN, even if this might come at the cost of a slightly longer training; moreover, a strong data augmentation might be needed to compensate for the lack of the implicit regularization effects of BN.
To conclude, we report the number of parameters and FLOPs for the considered architectures in Table \ref{tab:computational_burden_models}. It is important to note that, although \textit{ConvShort} and \textit{BatchNorm} have the same computational cost, our proposed method has some desirable characteristics (like the independence between the examples in a mini-batch). Moreover, the other configurations can be employed as more lightweight alternatives.
\begin{table}[htb]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|lccc|}
\hline
\multicolumn{1}{|c}{\textbf{Model}} & \multicolumn{1}{c}{\textbf{Input Resolution}} & \multicolumn{1}{c}{\textbf{Params (M)}} & \multicolumn{1}{c|}{\textbf{\#FLOPs (G)}} \\
\hline\hline
ResNet-50 BatchNorm & $32 \times 32 \times 3$ & 38.02 & 4.2 \\
ResNet-50 IdShort & $32 \times 32 \times 3$ & 23.47 & 2.6 \\
ResNet-50 LearnScalar & $32 \times 32 \times 3$ & 23.47 & 2.6 \\
ResNet-50 ConvShort & $32 \times 32 \times 3$ & 38.02 & 4.2 \\
\hline
ResNet-101 BatchNorm & $32 \times 32 \times 3$ & 74.78 & 8.92 \\
ResNet-101 IdShort & $32 \times 32 \times 3$ & 42.41 & 5.02 \\
ResNet-101 LearnScalar & $32 \times 32 \times 3$ & 42.41 & 5.02 \\
ResNet-101 ConvShort & $32 \times 32 \times 3$ & 74.78 & 8.92 \\
\hline
\end{tabular}
}
\caption{Computational cost and number of parameters of the considered architectures.}
\label{tab:computational_burden_models}
\end{table}
\section{Conclusion}
In this work we proposed a slight architectural modification of ResNet-like architectures that, coupled with a proper weight initialization, allows the training of deep networks without the aid of Batch Normalization. Such an initialization scheme is general and can be applied to a wide range of architectures with different building blocks. Importantly, our strategy does not require any additional regularization nor algorithmic modifications, as compared to other approaches. We show that this setting achieves competitive results on CIFAR-10, CIFAR-100, and ImageNet. The obtained results are in line with the presented theoretical analysis.
\subsection{Acknowledgements}
The authors would like to thank Dr.\ Soham De for kindly explaining to us some crucial aspects of his work. We would also like to thank Prof.\ Fabio Schoen for letting us work on this topic and putting at our disposal the resources of GOL and Prof.\ Andrew D. Bagdanov for his precious help in the refinement of this manuscript.
\bibliographystyle{apalike}
|
1,108,101,563,641 | arxiv | \section{Introduction}
In model predictive control (MPC), an optimization problem is solved at each time step to determine an optimal control action. When MPC is used to control safety-critical systems in real time, the employed optimization solvers need to be reliable and efficient, demands that become particularly challenging when considering embedded systems due to limited computational resources and memory.
For MPC of \textit{linear} systems with \textit{continuous} states and controls, the optimization problems in question are commonly convex quadratic programs (QPs), for which there exist several reliable and efficient solvers that have been developed specifically for real-time MPC \citep[e.g.][]{patrinos2013accelerated,ferreau2014qpoases,frison2020hpipm,arnstrom2022daqp}. In \textit{hybrid} MPC, where some states and/or controls are restricted to take binary values, the resulting optimization problems are instead \textit{mixed-integer} QPs (MIQPs). Since MIQPs are nonconvex they cannot be solved as efficiently and reliably as their continuous/convex counterpart.
A popular framework for solving MIQPs is that of \textit{branch and bound} (B\&B) \citep{land1960automatic}, where several QP relaxations of the nominal MIQP are solved in sequence. What differentiate B\&B solvers is how they determine which QP relaxations to solve, and how these are solved.
A survey of strategies for selecting relaxations to solve is given in \cite{achterberg2005branching}. In the context of MPC, methods have been proposed that solve the relaxation using active-set methods \citep{axehill2006mixed,bemporad2015solving,bemporad2018numerically, hespanhol2019structure}, gradient projections methods \citep{axehill2008dual,naik2017embedded}, operator splitting methods \citep{stellato2018embedded}, and interior-point methods \citep{frick2015embedded,liang2020early}.
Although B\&B often finds an optimal solution sufficiently fast in practice for medium-sized problems, solving MIQPs is NP-hard, i.e., the worst-case complexity is limiting. This theoretical worst-case complexity, hence, often restricts B\&B methods from being used in real-time applications. As an alternative to B\&B methods, heuristic approaches based on, for example, ADMM \citep{takapoui2020simple} and machine learning \citep{bertsimas2022online} have been proposed to solve MIQPs.
While these methods show impressive computation times and often give a sufficient, albeit suboptimal, solution, there are no formal guarantees on the solution quality, which makes the resulting control law unreliable and, hence, unsuitable for MPC of safety-critical systems.
Another emerging research direction that addresses the (theoretical) conservative worst-case complexity of B\&B methods is to develop complexity certification methods tailored for MPC. Such methods determine upper bounds on the worst-case number of computations required to solve any MIQP encountered in a \textit{given} hybrid MPC application \citep{axehill2010improved,shoja2022overall}.
In this paper we present an MIQP solver that is based on B\&B and that uses the dual active-set solver DAQP \citep{arnstrom2022daqp} for solving relaxations, combined with a search strategy, described in Section \ref{ssec:branch}, that is tailored for embedded applications (by prioritizing simplicity).
The proposed method falls directly into the complexity framework proposed in \cite{shoja2022overall}, enabling an overall worst-case certification of the solvers complexity given an MPC application.
The solver exploits the well-known warm-starting capabilities of active-set methods when solving sequences of similar QP relaxations in B\&B. Moreover, considering a \textit{dual} active-set method, in contrast to, for example, the \textit{primal} method used in \cite{hespanhol2019structure}, has two advantages. First, the dual solution to a relaxation is always a feasible starting point for a subsequent relaxation, while a primal solution is not (assuming that a binary constraint is fixed after solving a relaxation, see Section \ref{ssec:branch} for details). Secondly, a dual method allows for early termination when solving relaxations \citep{fletcher1998numerical}, which often saves a lot of computational effort.
Compared with the active-set methods in \cite{bemporad2015solving,bemporad2018numerically}, which also can be interpreted as \textit{dual} active-set methods, the proposed method avoids some overhead stemming from a nonnegative least-squares reformulation used therein; see Sec. III.A in \cite{arnstrom2022daqp} and Remark \ref{rem:comp-nnls} herein for details.
The main contributions of the paper are: (i) An open-source C implementation of an MIQP solver for embedded applications, available under a permissive license and with support for complexity certification;
(ii) Use of a least-distance formulation of the relaxations to reduce computations (for example when computing upper bounds); (iii) A compact representation of the search tree; at most $3 n_b$ integers are needed to represent the tree, where $n_b$ is the number of binary constraints; (iv) A simple, yet effective, way of regularizing the Hessian when binary variables do not nominally enter the objective function (common in hybrid MPC, where such ``auxiliary'' binary variables originate from logical rules and switches).
\section{Preliminaries}
\subsection{Problem formulation}
We consider problems of the form
\begin{subequations}
\label{eq:miqp}
\begin{align}
&\underset{x}{\text{minimize}}&&\frac{1}{2} x^T H x + f^T x \label{eq:miqp-obj}\\
&\text{subject to} &&\underline{b} \leq A x \leq \bar{b} \label{eq:miqp-con}\\
& && A_i x \in \left\{ \underline{b}_i, \bar{b}_i \right\},\quad \forall i \in \mathcal{B}\label{eq:miqp-bin},
\end{align}
\end{subequations}
with decision variable $x\in \mathbb{R}^n$. The objective function \eqref{eq:miqp-obj} is characterized by $H\in \mathbb{S}^{n}_{++}$ and $f\in \mathbb{R}^n$ (how to handle a singular $H$ is addressed in Section \ref{sssec:ex-reg}); the feasible set \eqref{eq:miqp-con} is characterized by $A \in \mathbb{R}^{m\times n}$ and $\overline{b},\underline{b} \in \mathbb{R}^m$. The binary constraints \eqref{eq:miqp-bin}, which make \eqref{eq:miqp} a nonconvex problem, are given by $\mathcal{B} \subseteq \{1,\dots, m\}$ with $|\mathcal{B}| = n_b$. For a vector $v$ (matrix $M$), we denote by $v_i$ ($M_i$) its $i$th element (row).
Specifically, \eqref{eq:miqp} generalizes mixed-integer quadratic programming problems, where a subset of the decision variables $x$ are binary, i.e., when $x_i \in \{0,1\}$ for $i\in \mathcal{B} \subseteq \{1,\dots,n\}$. Such problems are encountered in, for example, hybrid MPC of mixed logical dynamical (MLD) systems \citep[see, e.g.,][]{bemporad1999control}.
Instead of solving \eqref{eq:miqp} directly, we transform it into a least-distance problem (LDP) of the form
\begin{equation}
\begin{aligned}
\label{eq:mildp}
&\underset{u}{\text{minimize}}&&\frac{1}{2} \|u\|_2^2 \\
&\text{subject to} &&\underline{d} \leq M u \leq \bar{d} \\
& && M_i u \in \left\{\underline{d}_i, \bar{d}_i \right\},\quad \forall i \in \mathcal{B},
\end{aligned}
\end{equation}
by doing the coordinate transformation $u = R x + v$, where $R$ is an upper triangular Cholesky factor of $H$ {(i.e., $H = R^T R$)} and $v\triangleq R^{-T} f$. Consequently, $M$, $\underline{d}$ and $\bar{d}$ are given by
\begin{equation}
\label{eq:aux-def}
M \triangleq A R^{-1}, \quad \underline{d}\triangleq \underline{b}+M v, \quad \bar{d}\triangleq \bar{b}+M v.
\end{equation}
This transformation reduces intermediary computations in the proposed solver, presented in Section \ref{sec:main}, in particular when computing the upper bounds described in Section \ref{sssec:early-term}. Moreover, the up-front cost for transforming \eqref{eq:miqp} into \eqref{eq:mildp} is negligible for MIQPs encountered in hybrid MPC applications, which are often small to medium-sized, compared with the required computations for solving the relaxations described in Section \ref{ssec:solve-relax} (and solving relaxations of the transformed problem is often cheaper than solving relaxations of \eqref{eq:miqp}). Finally, for \textit{linear} hybrid MPC problems, the transformation can be done \textit{a priori} since $H$ and $A$ remain constant, resulting in no additional overhead online due to the transformation.
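As an illustration only (our own sketch, independent of the actual C implementation of the solver), the transformation from \eqref{eq:miqp} to \eqref{eq:mildp} via \eqref{eq:aux-def}, and the final back-substitution $x=R^{-1}(u-v)$, can be written in a few lines of Python:
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def to_ldp(H, f, A, b_low, b_upp):
    """Map the MIQP data to the LDP data."""
    R = cholesky(H, lower=False)                  # H = R^T R, R upper triangular
    v = solve_triangular(R.T, f, lower=True)      # v = R^{-T} f
    M = solve_triangular(R.T, A.T, lower=True).T  # M = A R^{-1}
    return M, b_low + M @ v, b_upp + M @ v, R, v

def from_ldp(u, R, v):
    """Recover the MIQP minimizer from the LDP solution u = R x + v."""
    return solve_triangular(R, u - v, lower=False)
\end{verbatim}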
\begin{rem}
\label{rem:comp-nnls}
The relaxations solved in \cite{bemporad2015solving,bemporad2018numerically} can also be interpreted as LDPs, but these methods require the LDP solution to be transformed back into ``normal'' coordinates every time a relaxation is solved, resulting in a significant overhead (see Sec. III.A in \cite{arnstrom2022daqp} for details). Our approach only transforms the final, global, solution back into normal coordinates, i.e., it only performs a single coordinate transformation.
\end{rem}
\subsection{Branch and Bound}
A naive approach for solving \eqref{eq:mildp} would be to enumerate all possible combinations arising from the binary constraints in $\mathcal{B}$ and select the feasible solution among these subproblems with the smallest norm. Such a brute-force approach does, however, require $2^{|\mathcal{B}|}$ LDPs to be solved, which quickly becomes intractable when the number of binary constraints increases.
In branch-and-bound (B\&B) methods \citep{land1960automatic}, all of these combinations are \textit{implicitly} considered by gradually fixing the constraints in $\mathcal{B}$, encoded by the sets $\underline{\mathcal{B}} \subseteq \mathcal{B}$ and $\overline{\mathcal{B}}\subseteq \mathcal{B}$ that contain indices corresponding to binary constraints that have been fixed at its lower and upper bound, respectively. After such fixations, LDP relaxations of the form
\begin{equation}
\begin{aligned}
\label{eq:ldp-relax}
&\underset{u}{\text{minimize}}&&\frac{1}{2} \|u\|_2^2 \\
&\text{subject to} &&\underline{d} \leq M u \leq \bar{d} \\
& && M_i u = \underline{d}_i,\quad \forall i \in \underline{\mathcal{B}} \\
& && M_i u = \bar{d}_i,\quad \forall i \in \overline{\mathcal{B}}
\end{aligned}
\end{equation}
are solved, and their solutions are used to dismiss other binary combinations that cannot (based on Lemma \ref{lem:dom} below) be optimal, which avoids explicitly solving the corresponding relaxations. To systematically consider such binary combinations, relaxations of the form \eqref{eq:ldp-relax} can be ordered in a tree, where a node in this tree is defined in the following way:
\begin{defn}[Node]
A \textit{node} is a pair $(\underline{\mathcal{B}},\overline{\mathcal{B}})$ with the sets $\underline{\mathcal{B}}, \overline{\mathcal{B}}\subseteq \mathcal{B}$ satisfying $\underline{\mathcal{B}} \cap \overline{\mathcal{B}} = \emptyset$. The \textit{level} of a node is given by a mapping $\ell: \mathbb{P}(\mathcal{B}) \times \mathbb{P}(\mathcal{B}) \to \mathbb{Z}_{\geq 0}$ defined by the rule $\ell(\underline{\mathcal{B}},\overline{\mathcal{B}})\triangleq |\underline{\mathcal{B}}|+|\overline{\mathcal{B}}|$, where $\mathbb{P}(\mathcal{B})$ is the power set of $\mathcal{B}$.
\end{defn}
By \textit{processing}, or \textit{exploring}, a node $(\underline{\mathcal{B}}, \overline{\mathcal{B}})$ we mean solving the corresponding LDP relaxation in \eqref{eq:ldp-relax}. The ``gradual fixing'' of binary constraints mentioned above corresponds to moving down the tree, which corresponds to processing \textit{descendants} to nodes that have already been processed.
\begin{defn}[Descendant]
A node $(\underline{\mathcal{B}}_d,\overline{\mathcal{B}}_d)$ is said to be a \textit{descendant} to the node $(\underline{\mathcal{B}},\overline{\mathcal{B}})$ if $\underline{\mathcal{B}} \subseteq \underline{\mathcal{B}}_d$, $\overline{\mathcal{B}} \subseteq \overline{\mathcal{B}}_d$,
and $\ell(\underline{\mathcal{B}_d},\overline{\mathcal{B}}_d) > \ell(\underline{\mathcal{B}},\overline{\mathcal{B}})$.
Moreover $(\underline{\mathcal{B}}_d,\overline{\mathcal{B}}_d)$ is a \textit{child} to $(\underline{\mathcal{B}},\overline{\mathcal{B}})$ (and, conversely, $(\underline{\mathcal{B}}, \overline{\mathcal{B}})$ is a \textit{parent} to $(\underline{\mathcal{B}}_d,\overline{\mathcal{B}}_d)$) if
$\ell(\underline{\mathcal{B}_d},\overline{\mathcal{B}}_d)- \ell(\underline{\mathcal{B}},\overline{\mathcal{B}})=1$.
\end{defn}
Making the tree exploration more concrete, after processing a node $(\underline{\mathcal{B}},\overline{\mathcal{B}})$ in a B\&B method, an index $i\in \mathcal{B}: i\notin \underline{\mathcal{B}}$ and $i\notin \overline{\mathcal{B}}$ is selected and the descendants $(\underline{\mathcal{B}}\cup\{i\},\overline{\mathcal{B}})$ and $(\underline{\mathcal{B}},\overline{\mathcal{B}}\cup\{i\})$ are added to a list, denoted $\mathcal{T}$, which contain \textit{pending} nodes, i.e., nodes that are to be processed. This is the \textit{branching} step in B\&B. Selecting which index to branch over, and in which order pending nodes are processed, are described in more detail in Section~\ref{ssec:branch}.
To avoid explicitly enumerating all possible nodes, the solutions to previous relaxations can sometimes be used to dismiss the branching, commonly known as \textit{cuts} in the tree, while ensuring that an optimal solution is still obtained. These cuts are based on the following, well-known, lemma:
\begin{lem}[Dominance]
\label{lem:dom}
Let $J$ be the optimal objective function value of a relaxation corresponding to a node $(\underline{\mathcal{B}},\overline{\mathcal{B}})$, and let $J_d$ be the optimal objective function value of one of its descendants; then $J \leq J_d$.
\end{lem}
\begin{pf}
Directly follows from the feasible set of a child being a subset of its parent's feasible set (since more equality constraints are enforced in descendants). \qed
\end{pf}
Specifically, Lemma \ref{lem:dom} gives rise to two types of cuts: binary feasibility cuts and dominance cuts.
\subsubsection{Binary feasibility}
If a solution $\tilde{u}$ to a relaxation satisfies the binary constraints, i.e., if $M_i \tilde{u} \in \{\underline{d}_i,\bar{d}_i\}$, $\forall i\in \mathcal{B}$, no further descendants need to be explored since Lemma~\ref{lem:dom} implies that doing so could only lead to worse binary feasible solutions.
\subsubsection{Dominance}
If $\bar{J}$ is the objective function value of the best binary feasible solution found so far, and the solution to a relaxation for a particular node yields $J \geq \bar{J}$, no further descendants need to be explored since, again, Lemma \ref{lem:dom} implies that solving descendant nodes can only lead to worse binary feasible solutions (infeasibility is a special case of this if we use the convention $J=\infty$ for infeasible problems). Dominance cuts can also sometimes be invoked before a relaxation is solved completely if a dual ascent method is used as the inner solver (see Section \ref{sssec:early-term} for details).
\subsubsection{General branch-and-bound method}
Based on the above concepts (solving relaxations, branching, and cuts), a generic formulation of a branch-and-bound method for solving \eqref{eq:mildp} is provided in Algorithm \ref{alg:bnb}, where relaxations are solved at Step \ref{step:solve-relax}, and the dominance/feasibility cuts are invoked at Steps \ref{step:dom-cut} and \ref{step:feas-cut}.
Still, several steps in Algorithm \ref{alg:bnb} remain to be made concrete; namely, how the relaxations at Step \ref{step:solve-relax} are solved, and in what order the tree is explored (determined by the selections at Steps \ref{step:node-selection}, \ref{step:branch-selection}, \ref{step:child-selection}.)
Particular implementations, suitable for embedded applications, of Steps \ref{step:node-selection}, \ref{step:solve-relax}, \ref{step:branch-selection}, and \ref{step:child-selection} are the subject of the next section.
\begin{algorithm}
\caption{Generic B\&B method for solving \eqref{eq:mildp}}
\label{alg:bnb}
\begin{algorithmic}[1]
\Require $M,\underline{d},\bar{d}, \mathcal{B}$
\Ensure $u^*, J^*$
\State $\bar{u}\leftarrow \star$, $\bar{J} \leftarrow \infty$, $\mathcal{T} \leftarrow \{(\emptyset, \emptyset)$\}
\While{$\mathcal{T}\neq \emptyset$}
\State $(\underline{\mathcal{B}},\overline{\mathcal{B}}) \leftarrow$ select node from $\mathcal{T}$ \label{step:node-selection}
\State $u, J \leftarrow$ solve \eqref{eq:ldp-relax}\label{step:solve-relax}
\If{$J \geq \bar{J}$} \textbf{continue} \label{step:dom-cut}
\Comment dominance
\EndIf
\If{$M_i u \in \{\underline{d}_i,\bar{d}_i\}, \forall i \in \mathcal{B}$} \label{step:feas-cut}
\State $\bar{u} \leftarrow u, \bar{J} \leftarrow J$
\Comment binary feasibility
\Else
\State $i\leftarrow$ select $i\in \mathcal{B}: M_i u \notin \{\underline{d}_i,\bar{d}_i\}$ \label{step:branch-selection}
\State add $(\underline{\mathcal{B}}\cup\{i\},\overline{\mathcal{B}})$ and $(\underline{\mathcal{B}},\overline{\mathcal{B}}\cup\{i\})$ to $\mathcal{T}$ \label{step:child-selection}
\EndIf
\EndWhile
\State \Return $u^* \leftarrow \bar{u}$, $J^* \leftarrow \bar{J}$
\end{algorithmic}
\end{algorithm}
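For concreteness, the control flow of Algorithm \ref{alg:bnb} can be sketched in a few lines of Python; the sketch below is ours, uses a placeholder \texttt{solve\_relaxation} routine in place of an actual LDP solver, and processes pending nodes in last-in-first-out order, so it only illustrates the structure of the method.
\begin{verbatim}
# Minimal sketch of the generic B&B loop (illustrative only).
# solve_relaxation stands in for an LDP solver of the node relaxation and
# returns a minimizer u and its objective J (J = inf if infeasible);
# M, d_lo, d_hi are numpy arrays, B is an iterable of binary-constraint indices.
import math

def branch_and_bound(M, d_lo, d_hi, B, solve_relaxation):
    u_best, J_best = None, math.inf
    pending = [(frozenset(), frozenset())]     # nodes (B_lower, B_upper)
    while pending:
        B_lo, B_hi = pending.pop()             # node selection (LIFO here)
        u, J = solve_relaxation(M, d_lo, d_hi, B_lo, B_hi)
        if J >= J_best:                        # dominance cut
            continue
        violated = [i for i in B
                    if not math.isclose(M[i] @ u, d_lo[i])
                    and not math.isclose(M[i] @ u, d_hi[i])]
        if not violated:                       # binary feasibility cut
            u_best, J_best = u, J
        else:
            i = violated[0]                    # branch selection (placeholder)
            pending.append((B_lo | {i}, B_hi))
            pending.append((B_lo, B_hi | {i}))
    return u_best, J_best
\end{verbatim}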
\section{Proposed MIQP solver}
\label{sec:main}
In this section we outline the proposed MIQP solver, which is the main contribution of this paper, by making the steps in the generic B\&B method in Algorithm \ref{alg:bnb} specific. In particular, we describe how the relaxations at Step \ref{step:solve-relax} are solved (Section~\ref{ssec:solve-relax}), and how the selections at Steps \ref{step:node-selection}, \ref{step:branch-selection} and \ref{step:child-selection} are made in Algorithm \ref{alg:bnb} (Section~\ref{ssec:branch}).
All of these specifications coalesce into Algorithm \ref{alg:bnb-full}, presented in Section \ref{ssec:complete-method}.
\subsection{Solving relaxations}
\label{ssec:solve-relax}
To solve the LDPs in \eqref{eq:ldp-relax} we use the dual active-set solver DAQP \citep{arnstrom2022daqp} given in Algorithm~\ref{alg:daqp}, where $\lambda$ denotes the dual iterate. Here, we highlight some aspects that are essential for incorporating it in a branch-and-bound method; for a detailed description we refer the reader to \cite{arnstrom2022daqp}.
In particular, early-termination and double-sided/equality constraints were mentioned in \cite{arnstrom2022daqp}, but were never presented jointly in a single algorithm (which we do here in Algorithm \ref{alg:daqp}.)
\begin{algorithm}
\caption{The dual active-set method from \cite{arnstrom2022daqp}, extended to handle equality constraints, double-sided constraints, early-termination, and infeasibility detection, to be able to efficiently solve LDP relaxations of the form \eqref{eq:ldp-relax}.}
\label{alg:daqp}
\begin{algorithmic}[1]
\Require $M,\underline{d},\overline{d}, \underline{\mathcal{B}}, \overline{\mathcal{B}}, \lambda_0, \bar{J}$
\Ensure $u^*, J^*, \mathcal{U}^*, \mathcal{L}^*$
\algrenewcommand\algorithmicindent{0.825em}%
\State $\mathcal{U}\leftarrow \underline{\mathcal{B}}$, $\mathcal{L} \leftarrow \overline{\mathcal{B}}$, $\lambda \leftarrow \lambda_0$, $\mathcal{E}\leftarrow \underline{\mathcal{B}} \cup \overline{\mathcal{B}}$
\Repeat
\If{$M_{\mathcal{U}\cup\mathcal{L}} M_{\mathcal{U}\cup\mathcal{L}}^T$ is nonsingular}\label{step:nonsingular-start}
\State $\lambda^* \leftarrow$ solve \eqref{eq:subproblem}
\If{$\lambda^*_i \geq 0$ $\forall i \in \mathcal{U}$ and $\lambda^*_i \leq 0$ $\forall i \in \mathcal{L}$} \label{step:csp}
\State $u \leftarrow -M_{\mathcal{U}\cup\mathcal{L}}^T \lambda^*_{\mathcal{U}\cup\mathcal{L}}$
\If{$\frac{1}{2}\|u\|^2_2 \geq \bar{J}$} infeasible, \Return $\star, \infty, \star, \star$\label{step:early-term}
\EndIf
\State $\lambda \leftarrow \lambda^*$
\State $\mu^+ \leftarrow \overline{d} -M u$,\quad $\mu^-\leftarrow M u - \underline{d}$
\If{$\mu^+,\mu^- \geq 0$} optimal, \textbf{goto} \ref{step:return}\label{step:primalfeas}
\Else \quad$j \leftarrow \text{argmin}_{i\notin {\mathcal{U}\cup\mathcal{L}}} \min\{\mu^+_i, \mu^-_i\}$
\If{$\mu^+_j < \mu^-_j$} \:$\mathcal{U} \leftarrow \mathcal{U}\cup\{j\}$
\EndIf
\If{$\mu^+_j > \mu^-_j$} \:$\mathcal{L} \leftarrow \mathcal{L}\cup\{j\}$
\EndIf
\EndIf
\Else\: (blocking constraint)
\State $p \leftarrow \lambda^*-\lambda$\label{step:innerstart}
\State$\mathcal{C}\leftarrow\{i\in\mathcal{U}\setminus \mathcal{E}: \lambda^*_i < 0\} \cup \{i\in\mathcal{L}\setminus \mathcal{E}: \lambda^*_i > 0\}$ \label{step:nonsingblocking}
\State $\lambda, \mathcal{U}, \mathcal{L} \leftarrow$ \textsc{fixComponent}$(\lambda,\mathcal{U},\mathcal{L},\mathcal{C},p)$\label{step:innerstop}
\EndIf
\Else $\:(M_{\mathcal{U}\cup\mathcal{L}} M_{\mathcal{U}\cup\mathcal{L}}^T \text{ singular})$\label{step:singular-start}
\State $p \leftarrow$ solve \eqref{eq:subproblem-singular} \label{step:daqp-sing-dir}
\State $\mathcal{C} \leftarrow \{i\in \mathcal{U}\setminus \mathcal{E}: p_i < 0\}\cup\{i\in \mathcal{L}\setminus \mathcal{E}: p_i > 0\}$ \label{step:singblocking}
\If{$\mathcal{C} = \emptyset$} infeasible, \Return $\star, \infty, \star, \star$
\EndIf
\State $\lambda, \mathcal{U},\mathcal{L} \leftarrow$ \textsc{fixComponent}$(\lambda,\mathcal{U},\mathcal{L},\mathcal{C},p)$\label{step:singular-stop}
\EndIf
\Until termination
\State \Return $u$, $\frac{1}{2}\|u\|^2_2$, $\mathcal{U}$, $\mathcal{L}$ \label{step:return}
\hrule\rule{0pt}{1pt}
\Procedure{fixComponent}{$\lambda, \mathcal{U},\mathcal{L},\mathcal{C},p$}
\State $j \leftarrow \text{argmin}_{i \in \mathcal{C}} -\lambda_i/p_i $ \label{step:ratio}
\State $\lambda \leftarrow \lambda - (\lambda_j/p_j) p$
\If{$p_j<0$} \:$\mathcal{U} \leftarrow \mathcal{U}\setminus\{j\}$
\EndIf
\If{$p_j>0$} \:$\mathcal{L} \leftarrow \mathcal{L}\setminus \{j\}$
\EndIf
\State \Return $\lambda,\mathcal{U},\mathcal{L}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsubsection{Double-sided constraints}
The main focus in \cite{arnstrom2022daqp} was on problems with single-sided constraints of the form ${M u \leq d}$, while the problem in \eqref{eq:mildp} contains double-sided constraints of the form $\underline{d} \leq M u \leq \bar{d}$. Handling double-sided constraints requires not only keeping track of the components of the dual variable that are free (which are collected in a so-called working set $\mathcal{W}$), but also of whether each such component is allowed to be positive or negative, corresponding to an upper or a lower constraint, respectively. Hence, the working set $\mathcal{W}$ is replaced by the working sets $\mathcal{L}$ and $\mathcal{U}$ (where $\mathcal{W}= \mathcal{L}\cup \mathcal{U}$), which contain lower and upper constraints, respectively, that are active (hold with equality). See Section IV.C in \cite{arnstrom2022daqp} for details.
Introducing $\mathcal{L}$ and $\mathcal{U}$ results in the subproblem that is solved in an iteration (Steps \ref{step:csp} and \ref{step:daqp-sing-dir}) being of the form
\begin{equation}
\label{eq:subproblem}
\begin{aligned}
\lambda^* = &\:\:\underset{\lambda}{\argmin}&& \frac{1}{2}\lambda^T M M^T \lambda + \overline{d}_{\mathcal{U}}^T \lambda_{\mathcal{U}}+\underline{d}_{\mathcal{L}}^T \lambda_{\mathcal{L}}\\
&\text{subject to} &&\lambda_i = 0,\: \forall i\notin \mathcal{L}\cup \mathcal{U},
\end{aligned}
\end{equation}
when the matrix $M_{\mathcal{U}\cup \mathcal{L}} M_{\mathcal{U}\cup \mathcal{L}}^T$ is non-singular (an index set as a subscript means extracting the corresponding rows of the vector/matrix); and
\begin{equation}
\label{eq:subproblem-singular}
p=
\underset{p}{\sol}
\left\{
\begin{aligned}
M_{\mathcal{U}\cup \mathcal{L}} M_{\mathcal{U}\cup \mathcal{L}}^T \:p_{\mathcal{U}\cup \mathcal{L}}&= 0,\\
\overline{d}_{\mathcal{U}}^T p_{\mathcal{U}} +\underline{d}^T_{\mathcal{L}}p_{\mathcal{L}} < 0,
\quad p_i=&0\:\: \forall i\notin \mathcal{U}\cup \mathcal{L}
\end{aligned}
\right\},
\end{equation}
when the matrix $M_{\mathcal{U}\cup \mathcal{L}} M_{\mathcal{U}\cup \mathcal{L}}^T$ is singular. Again, more details about the subproblems are given in \cite{arnstrom2022daqp}.
In DAQP, both \eqref{eq:subproblem} and \eqref{eq:subproblem-singular} are solved efficiently by decomposing $M_{\mathcal{U}\cup \mathcal{L}} M_{\mathcal{U}\cup \mathcal{L}}^T$ with an LDL$^T$ factorization. Moreover, this factorization is recursively updated whenever a constraint is added/removed to/from the working sets $\mathcal{U}$ and $\mathcal{L}$. For details regarding these updates, see Section II.B in \cite{arnstrom2022daqp}.
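To make explicit what is computed in each iteration, the following sketch (ours) solves \eqref{eq:subproblem} for a given pair of working sets by forming and factorizing $M_{\mathcal{U}\cup \mathcal{L}} M_{\mathcal{U}\cup \mathcal{L}}^T$ from scratch; DAQP instead updates an LDL$^T$ factorization recursively, so the sketch only clarifies the underlying linear algebra.
\begin{verbatim}
# Solve the equality-constrained dual subproblem for working sets U (upper)
# and L (lower); a from-scratch Cholesky factorization is used here, whereas
# DAQP updates an LDL' factorization recursively (names are ours).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_subproblem(M, d_lo, d_hi, U, L):
    U, L = list(U), list(L)               # assumes U + L is nonempty
    W = U + L                             # free dual components
    c = np.concatenate([d_hi[U], d_lo[L]])
    MW = M[W, :]
    K = MW @ MW.T                         # assumed nonsingular in this branch
    lam_W = -cho_solve(cho_factor(K), c)  # stationarity of the restricted dual
    lam = np.zeros(M.shape[0])
    lam[W] = lam_W
    u = -MW.T @ lam_W                     # primal iterate recovered from the dual
    return lam, u
\end{verbatim}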
\subsubsection{Equality constraints}
To handle equality constraints, which is necessary due to the form of the relaxations in \eqref{eq:ldp-relax}, we make sure that indices corresponding to these, given by the set $\mathcal{E}$, are always contained in the working set (which enforces the constraints to hold with equality.) To this end, we ensure that the set of candidate indices for removal from $\mathcal{U}$ or $\mathcal{L}$, denoted $\mathcal{C}$ (computed at Step \ref{step:nonsingblocking} and \ref{step:singblocking}), never contains any index in $\mathcal{E}$.
\subsubsection{Early-termination and infeasibility detection}
\label{sssec:early-term}
Infeasibility of \eqref{eq:ldp-relax} can be detected whenever $M_{\mathcal{U}\cup \mathcal{L}} M_{\mathcal{U}\cup \mathcal{L}}^T$ is singular and $\mathcal{C}= \emptyset$, since then the dual objective function can be made unbounded (see, e.g., Sec. 5.2.2 in \cite{boyd2004convex} for details.)
Moreover, whenever we have dual feasible iterates (i.e., whenever the condition at Step \ref{step:csp} is satisfied) the corresponding primal iterate $u$ will, by duality theory, yield a lower bound $\frac{1}{2} \|u\|^2_2$ on the optimal objective function value of \eqref{eq:ldp-relax}. As is mentioned in \cite{arnstrom2022daqp} and explored in \cite{fletcher1998numerical}, this lower bound can be used to terminate the solver early if we require the solution to be less than some upper bound $\bar{J}$. Hence, if $\frac{1}{2} \|u\|_2^2\geq \bar{J}$, we consider the problem ``futile'' and early-terminate Algorithm \ref{alg:daqp} at Step \ref{step:early-term}.
\subsubsection{Exact regularization}
\label{sssec:ex-reg}
Nominally, the method in \cite{arnstrom2022daqp} requires the relaxations, and hence the MIQP in \eqref{eq:miqp}, to be strictly convex, i.e., it requires $H \succ 0$. One way of handling singular Hessians is to perform proximal-point iterations \citep{bemporad2018numerically}. This introduces an extra layer of complexity since a sequence of QPs need to be solved for each relaxation. Moreover, the above-mentioned early termination cannot, then, be applied directly since each proximal-point iteration decreases the objective function, resulting in the dual objective function of inner, regularized, QPs not necessarily being a lower bound to the optimal objective function of \eqref{eq:miqp}.
In hybrid MPC applications, singularity of the Hessian $H$ often originates from binary variables $\delta \in \{0,1\}$ not entering the objective function, called auxiliary variables in MLD systems, since these often encode logical rules that are not directly penalized ($\delta$, hence, only enters the constraints). A naive way of dealing with such singularities is to add a regularizing term $ \epsilon \|\delta\|_2^2$ to the objective. However, this can perturb the true solution, and, more critically in practice, often leads to weakly active constraints in the relaxations, which in turn can lead to numerical instability. We propose instead to add a regularizing term $\epsilon \|\delta-\frac{1}{2}\|_2^2$ to the objective, which does not perturb the solution, and does not encourage weakly active constraints.
We generalize this by the following proposition.
\begin{prop}[Exact regularization]
Adding regularizing terms of the form $\|A_i x - \frac{\underline{b}_i + \overline{b}_i}{2}\|_2^2$, $i\in \mathcal{B}$ to the objective function in \eqref{eq:miqp-obj} does not change the solution of \eqref{eq:miqp}.
\end{prop}
\begin{pf}
Denote $q(x)$ the objective function \eqref{eq:miqp-obj} and let
\begin{equation}
\label{eq:reg-problem}
\begin{aligned}
\tilde{x}^* =\: &\underset{x}{\argmin} && q(x) + \|A_i x - \tfrac{\underline{b}_i + \overline{b}_i}{2}\|^2_2\\
&\text{subject to} && \text{\eqref{eq:miqp-con} and \eqref{eq:miqp-bin}},
\end{aligned}
\end{equation}
for $i \in \mathcal{B}$. Any $x$ that satisfies the binary constraints \eqref{eq:miqp-bin} also satisfies $\|A_i x- \frac{\underline{b}_i+\overline{b}_i}{2}\|^2_2 = \|\frac{\underline{b}_i - \overline{b}_i}{2}\|_2^2$ for any $i \in \mathcal{B}$.
Now let $x^*$ be a solution to \eqref{eq:miqp} and assume that $q(x^*) < q(\tilde{x}^*)$. Then, since both $x^*$ and $\tilde{x}^*$ satisfy \eqref{eq:miqp-bin}, we have that
\begin{equation*}
q(x^*) + \|A_i x^* - \tfrac{\underline{b}_i+\overline{b}_i}{2}\|^2_2 < q(\tilde{x}^*) + \|A_i \tilde{x}^* - \tfrac{\underline{b}_i+\overline{b}_i}{2}\|^2_2,
\end{equation*}
which contradicts \eqref{eq:reg-problem}. In conclusion $q(x^*) \geq q(\tilde{x}^*)$, so $\tilde{x}^*$ must be a solution to \eqref{eq:miqp}.
\qed
\end{pf}
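The property is easy to check numerically; the toy script below (with made-up data, and with the binary constraints being plain $\{0,1\}$ bounds, i.e., each $A_i$ a unit row) verifies by enumeration that a regularizer centered at the midpoint of the bounds leaves the minimizer unchanged.
\begin{verbatim}
# Toy check of exact regularization on a purely binary problem
# (all problem data below is made up for illustration).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 1e-2
G = rng.standard_normal((n, n))
H = G.T @ G                         # a (possibly ill-conditioned) Hessian
f = rng.standard_normal(n)

def best(obj):
    cands = [np.array(x, float) for x in itertools.product([0., 1.], repeat=n)]
    return min(cands, key=obj)

q = lambda x: 0.5 * x @ H @ x + f @ x
q_reg = lambda x: q(x) + eps * np.sum((x - 0.5) ** 2)

# the regularizer is constant (= eps*n/4) on binary points, so the argmin agrees
assert np.array_equal(best(q), best(q_reg))
\end{verbatim}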
\subsection{Tree exploration}
\label{ssec:branch}
There are three choices that affect the tree exploration in Algorithm \ref{alg:bnb}:
\textit{node} selection (Step \ref{step:node-selection}), \textit{branch} selection (Step \ref{step:branch-selection}), and \textit{child} selection (Step \ref{step:child-selection}). Specific choices of these, particularly suited for embedded applications, are given below.
\subsubsection{Node selection} The two most popular search strategies in B\&B are \textit{depth-first} and \textit{best-first}.
A depth-first search selects the pending node with the highest level $\left|{\overline{\mathcal{B}}}\right|+\left|\underline{\mathcal{B}}\right|$, which often encourages processing of nodes that yield binary feasible solutions. Best-first selects the pending node with the lowest objective function value $\underline{J}$, which encourages processing of nodes that yield higher quality solutions. While a best-first search often results in fewer nodes being processed in total compared with depth-first, we employ a depth-first search because it leads to a reduced memory footprint, since the number of pending nodes is kept low.
Moreover, since a depth-first search promotes processing children right after their parents have been processed, the inner solver can be \textit{hot-started}, i.e., the working set and the corresponding LDL$^T$ factorization that were formed in the parent can be \textit{directly} reused when solving a child's relaxation, significantly reducing computations. Hot-starting can also be used in best-first, but this requires LDL$^T$ factorizations to be stored for each pending node, leading to a large memory footprint that is, again, not suitable for embedded applications.
Finally, depth-first enables the entire tree to be compactly represented, as is formalized in the following proposition and exemplified below.
\begin{prop}[Compact tree representation]
\label{prop:storage}
If a depth-first search strategy is employed at Step \ref{step:node-selection} in Algorithm \ref{alg:bnb}, a node can be represented with 2 signed integers and in total only $2 n_b + n_b$ signed integers need to be allocated to represent the tree.
\end{prop}
\begin{pf}
Let $\tilde{\mathcal{B}}$ be an array containing $n_b$ signed integers. Consider a node at level $\ell$ that was spawned from its parent by fixing $i\in \mathcal{B}$. When this node is processed, let the $\ell$th element of $\tilde{\mathcal{B}}$ be set to index $\pm i$ ($+$ if the upper bound was fixed, $-$ if the lower was fixed). Then when processing any node $(\underline{\mathcal{B}}, \overline{\mathcal{B}})$ at level $\tilde{\ell}$, the first $\tilde{\ell}-1$ elements in $\tilde{\mathcal{B}}$ contain $\underline{\mathcal{B}}$ (the negative elements) and $\overline{\mathcal{B}}$ (the positive elements), since a depth-first search ensures that only elements in $\tilde{\mathcal{B}}$ at index $\geq \tilde{\ell}$ can have been modified before processing the current node, which means that all signed indices from its parent will be the first $\tilde{\ell}-1$ elements of $\tilde{\mathcal{B}}$. The only additional information needed to fully retrieve $\underline{\mathcal{B}}$ and $\overline{\mathcal{B}}$, then, is the index that was fixed in the node's parent.
Hence, the only required storage for a single node is two integers: its level and the (signed) index to add. Moreover, since the number of pending nodes can maximally be $n_b$ when a depth-first search is used, the memory footprint for all pending nodes is maximally $2 n_b$ signed integers. Finally, the buffer $\tilde{\mathcal{B}}$ contains $n_b$ elements, resulting in maximally $2 n_b + n_b$ signed integers being necessary for representing the tree at any iteration.\qed
\end{pf}
To exemplify the result of Proposition \ref{prop:storage}, consider an example with three binary constraints ($n_b =3$) with indices $i$, $j$, and $k$, which requires $\tilde{\mathcal{B}}$ to contain at most three elements, i.e.,
$\tilde{\mathcal{B}} = [\star, \star, \star]$. A possible B\&B-tree for this scenario is shown in Figure \ref{fig:tree}; we now illustrate how this tree can be represented compactly.
\input{figs/tree.tex}
Consider the case when node $5$ should be processed. Then $\tilde{\mathcal{B}} = [\underline{i}, \overline{j}, \star]$ since node $4$ was processed right before. The only information required at node $5$ is its placement in the tree (level 3) and which index should be fixed (index $\underline{k}$). All other information is implicitly stored in $\tilde{\mathcal{B}}$ since we know that all elements up to level 3 ($\underline{i}$ and $\overline{j}$) will be correct (only nodes at the same level or below the current node could have been processed in between the current node's parent and itself since we use a depth-first search).
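The bookkeeping in Proposition \ref{prop:storage} can be sketched as follows; the container names are ours, and the two integers stored per pending node are its level and the signed index that its parent fixed (positive for an upper bound, negative for a lower bound).
\begin{verbatim}
# Sketch of the compact tree bookkeeping used with depth-first search.
n_b = 3
buffer = [0] * n_b          # the array called B-tilde in the proposition
pending = [(0, 0)]          # stack of (level, signed index); root has level 0

def push_children(level, i, lower_first):
    # stack order decides which child is processed first (child selection)
    first, second = (-i, +i) if lower_first else (+i, -i)
    pending.append((level + 1, second))
    pending.append((level + 1, first))

def pop_node():
    level, signed_i = pending.pop()
    if level > 0:
        buffer[level - 1] = signed_i      # record the index fixed by the split
    fixed_lower = {-s for s in buffer[:level] if s < 0}
    fixed_upper = {+s for s in buffer[:level] if s > 0}
    return level, fixed_lower, fixed_upper
\end{verbatim}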
\subsubsection{Branch selection}
We use a lexicographic selection rule that picks the smallest index in $\mathcal{B}$ that corresponds to a constraint that is not yet binary feasible; that is, the branching index $i$ is selected as $i = \min_j \left\{j\in \mathcal{B}: M_j u \notin \{\underline{d}_j, \bar{d}_j\} \right\}$. This allows the user to order the constraints according to their branching priority.
There are several branching rules that are more advanced, for example \textit{strong branching}, \textit{reliability branching}, or \textit{hybrid branching} \citep{achterberg2005branching}. While the simpler lexicographical rule might lead to more processed nodes compared with the more complex rules mentioned above, we use it for its simplicity and, hence, its small overhead, particularly suitable for embedded applications.
Also, in the context of MPC of hybrid systems, a simple lexicographic selection rule with an ordering according to a time index can be very effective \citep{bemporad1999efficient}.
\subsubsection{Child selection} We explore the child that corresponds to the bound that is closest to being satisfied in the parent. That is, if $\tilde{u}$ is the solution in the parent $(\underline{\mathcal{B}}, \overline{\mathcal{B}})$ and constraint $i$ is selected to be branched upon, the node $(\underline{\mathcal{B}}\cup\{i\}, \overline{\mathcal{B}})$ is processed first if $M_i \tilde{u} \leq \frac{\underline{d}_i+\bar{d}_i}{2}$; otherwise, the node $(\underline{\mathcal{B}}, \overline{\mathcal{B}}\cup\{i\})$ is processed first.
\begin{rem}[Most fractional branching rule]
A similar concept to the one used for child selection can be used in branch selection (known as the ``most fractional rule'' \cite{achterberg2005branching}). However, this requires the product $M_i \tilde{u}$ to be computed \textit{for all} branching candidates (while only a single product needs to be computed when it is just used for child selection). Moreover, as is shown in \cite{achterberg2005branching}, the most fractional rule for branch selection seldom improves the number of processed nodes compared with selecting the branching index randomly.
\end{rem}
\subsection{Complete algorithm}
\label{ssec:complete-method}
We are now ready to present the main contribution of this paper.
Algorithm \ref{alg:bnb-full} concretizes Algorithm \ref{alg:bnb} by adding the details described in the preceding subsections. Namely, by solving the relaxations using Algorithm \ref{alg:daqp} and by using the search strategy outlined in Section \ref{ssec:branch}.
In addition, the algorithm contains a step for transforming an MIQP of the form \eqref{eq:miqp} into an MILDP of the form \eqref{eq:mildp} at Step \ref{step:transform}, and a step for retrieving the solution to \eqref{eq:miqp} from a solution of \eqref{eq:mildp} at Step \ref{step:retreive}.
\begin{algorithm}[H]
\algrenewcommand\algorithmicindent{1.2em}
\caption{B\&B method for solving the MIQP in \eqref{eq:miqp}}
\label{alg:bnb-full}
\begin{algorithmic}[1]
\Require $R,f,A,\underline{b},\bar{b}, \mathcal{B}$
\Ensure $x^*$
\State $M \leftarrow A R^{-1}$, $v\leftarrow R^{-T}f$, $\underline{d} \leftarrow \underline{b}+Mv$, $\overline{d} \leftarrow \overline{b} + M v$ \label{step:transform}
\State $\bar{u}\leftarrow 0$, $\bar{J} \leftarrow \infty$, $\mathcal{T} \leftarrow \{(\emptyset, \emptyset)$\}
\While{$\mathcal{T}\neq \emptyset$}
\State $(\underline{\mathcal{B}},\overline{\mathcal{B}}) \leftarrow$ pop from $\mathcal{T}$
\State $u, J, \mathcal{U},\mathcal{L} \leftarrow$ solve \eqref{eq:ldp-relax} using Algorithm \ref{alg:daqp}
\If{$J \geq \bar{J}$} \textbf{continue}
\EndIf
\If{$\mathcal{B} \subseteq (\mathcal{U}\cup \mathcal{L})$}
\State $\bar{u} \leftarrow u, \bar{J} \leftarrow J$
\Else \Comment branching
\State $i\leftarrow \min \{ i\in \mathcal{B}: i\notin \mathcal{U}\cup \mathcal{L}\}$
\If{$M_i u \leq \frac{\underline{d}_i+\bar{d}_i}{2}$}
\State push $(\underline{\mathcal{B}},\overline{\mathcal{B}}\cup\{i\})$ to $\mathcal{T}$; push $(\underline{\mathcal{B}}\cup\{i\},\overline{\mathcal{B}})$ to $\mathcal{T}$
\Else
\State push $(\underline{\mathcal{B}}\cup\{i\},\overline{\mathcal{B}})$ to $\mathcal{T}$; push $(\underline{\mathcal{B}},\overline{\mathcal{B}}\cup\{i\})$ to $\mathcal{T}$
\EndIf
\EndIf
\EndWhile
\State \Return $x^* \leftarrow - R^{-1}(\bar{u}-v)$\label{step:retreive}
\end{algorithmic}
\end{algorithm}
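The pre- and post-processing at Steps \ref{step:transform} and \ref{step:retreive} amount to a change of variables through a factor $R$ of the Hessian; the sketch below is ours, assumes $H = R^T R \succ 0$, and uses generic dense routines, whereas an embedded implementation would precompute and store the factor.
\begin{verbatim}
# Sketch of the MIQP <-> LDP transformation used in the B&B method
# (assumes H = R'R with R upper triangular and nonsingular).
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def miqp_to_ldp(H, f, A, b_lo, b_hi):
    R = cholesky(H)                              # upper triangular factor
    M = solve_triangular(R, A.T, trans='T').T    # M = A R^{-1}
    v = solve_triangular(R, f, trans='T')        # v = R^{-T} f
    return R, M, v, b_lo + M @ v, b_hi + M @ v   # (R, M, v, d_lo, d_hi)

def ldp_to_miqp(R, v, u):
    return -solve_triangular(R, u - v)           # x = -R^{-1}(u - v)
\end{verbatim}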
\subsection{Complexity certification}
When considering MIQPs originating from hybrid \textit{linear} MPC, $f,\overline{b},$ and $\underline{b}$ are affine functions of a parameter $\theta \in \mathbb{R}^p$. This structure can be exploited by complexity-certification methods to determine \emph{tight} worst-case guarantees on the number of computations Algorithm \ref{alg:bnb-full} requires for solving \emph{any} MIQP generated by $\theta$ \citep{shoja2022overall}.
The complexity-certification in \cite{shoja2022overall} can be directly applied to Algorithm \ref{alg:bnb-full} since it fulfills the requirements specified therein. One such requirement is fulfilled by the inner solver DAQP being certifiable by the framework in \cite{arnstrom2022unifying}.
Detailed parametric complexity-certification results provided by the method in \cite{shoja2022overall} for the proposed solver are out of scope here and will be considered in future work.
\section{Numerical Experiments}
In this section we report results from numerical experiments for a C implementation\footnote{available at \url{https://github.com/darnstrom/daqp}}
of Algorithm \ref{alg:bnb-full} (BnB-DAQP). In Section \ref{ssec:result-rand} we compare BnB-DAQP with the state-of-the-art solver Gurobi on a set of randomly generated MIQPs. Then, in Section \ref{ssec:result-embed}, we highlight the embeddability of BnB-DAQP by implementing it on an MCU with limited memory and computational resources, which is then used for hybrid MPC.
Code for all experiments is available online at \url{https://github.com/darnstrom/ifac2023-bnbdaqp}.
\begin{rem}[Comparing against additional MIQP solvers]
We only compare against Gurobi because most MIQP solvers in the MPC literature \citep[e.g.,][]{bemporad2015solving,bemporad2018numerically,hespanhol2019structure,liang2020early,stellato2018embedded} are not publicly accessible or are not readily embeddable. Hence, the open-source C implementation\footnotemark[1] of our solver is a major contribution of this work. Nevertheless, most of the above-mentioned solvers are compared with Gurobi; so a rough comparison could be made implicitly by comparing their relative performance with the results in Section \ref{ssec:result-rand}.
\end{rem}
\subsection{Random mixed-integer QPs}
\label{ssec:result-rand}
First, we compare BnB-DAQP
with the state-of-the-art commercial solver Gurobi (version 9.5.2) on a set of randomly generated MIQPs. The problems are of the form
\begin{subequations}
\label{eq:miqp-random}
\begin{align}
&\underset{x}{\text{minimize}}&&\frac{1}{2} x^T H x + f^T x \label{eq:miqprandom-obj}\\
&\text{subject to} &&\underline{b} \leq A x \leq \bar{b} \label{eq:miqpradnom-con}\\
& && x_i \in \left\{0, 1 \right\},\quad \forall i \in \{1,\dots, n_b\},
\end{align}
\end{subequations}
where elements of $A,\bar{b}$, and $\underline{b}$ are generated as
$A\sim\mathcal{N}(0,1)$, $\bar{b}\sim \mathcal{U}(0,20)$, and $\underline{b}\sim \mathcal{U}(-20,0)$, with $\mathcal{N}(\mu,\sigma)$ denoting a normal distribution with mean $\mu$ and standard deviation $\sigma$, and $\mathcal{U}(l,u)$ denoting the uniform distribution over the interval $[l,u]$. Moreover, the Hessian $H$ was generated with the MATLAB function \texttt{sprandsym} with density $1$ and reciprocal condition number $10^{-4}$, and the linear term $f$ was partitioned as $f = \left(\begin{smallmatrix}
f_b \\ f_c
\end{smallmatrix}\right)$, with $f_b \sim -|\mathcal{N}(0,10^2)|$ and $f_c \sim \mathcal{N}(0,10^2)$; the negativity of $f_b$ was enforced to counteract a bias of $x^*_i=0$, $i\in \{1,\dots, n_b\}$, since $H \succ 0$.
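For reference, a rough \texttt{numpy} analogue of this problem generation is sketched below; it is not the script used for the experiments, and \texttt{sprandsym} is emulated by drawing a random orthogonal basis and a log-spaced spectrum.
\begin{verbatim}
# Sketch of the random MIQP generation (our numpy analogue of the MATLAB setup).
import numpy as np

def random_miqp(n, m, n_b, rng):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    eigs = np.logspace(-4, 0, n)                  # reciprocal condition number 1e-4
    H = Q @ np.diag(eigs) @ Q.T
    f = np.concatenate([-np.abs(rng.normal(0, 10, n_b)),   # binary part, negative
                        rng.normal(0, 10, n - n_b)])
    A = rng.standard_normal((m, n))
    b_hi = rng.uniform(0, 20, m)
    b_lo = rng.uniform(-20, 0, m)
    return H, f, A, b_lo, b_hi

H, f, A, b_lo, b_hi = random_miqp(50, 100, 10, np.random.default_rng(0))
\end{verbatim}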
\input{figs/random_qp.tex}
The result for solving MIQPs of the form \eqref{eq:miqp-random} for varying problem dimensions is shown in Figure \ref{fig:random-fig}.
Figure \ref{subfig:random-time} shows that BnB-DAQP outperforms Gurobi on the considered problems/dimensions (representative of problems commonly encountered in hybrid MPC applications), both concerning average solution time and worst-case solution time. From Figure \ref{subfig:random-nodes} we see that the tree exploration in Gurobi is more effective, as is to be expected since the tree exploration in BnB-DAQP favours simplicity. Despite this, however, the number of iterations (the number of solved linear equation systems), shown in Figure \ref{subfig:random-iter}, is initially lower for BnB-DAQP.
\subsection{Embedded MPC}
\label{ssec:result-embed}
To illustrate the proposed solver's embeddability, we apply it to a hybrid MPC example, with the controller running on an MCU with limited computing power and memory. Specifically, we use an STM32F411 MCU, running at 84 MHz and with 512 kB Flash memory and 128 kB of SRAM. The MCU has no data caches and an FPU that only supports floating point operations in \textit{single} precision.
\input{figs/invpend.tex}
We consider the hybrid MPC of a linearized inverted pendulum on a cart, surrounded by a wall (giving rise to contact forces), considered in, for example, \cite{marcucci2020warm}, and visualized in Figure \ref{fig:invpend-tikz}. The control goal is to stabilize the pendulum in the upright position ($\phi=0$) at the origin ($x=0$). The control is a force $F$ directly applied to the cart. There is also a contact force $F_{\text{con}}$ present when the tip of the pendulum touches the wall, necessitating binary variables in the model.
The specific control task considered in the experiment is a recovery task, where the pendulum is initialized on a collision course ($x = 0, \dot{x} = -1$) with the wall. Moreover, after two seconds another impulse to the velocity of the cart ($\dot{x} = -0.7$) is applied, setting it on yet another collision course with the wall. A similar scenario was considered in \cite{marcucci2020warm}.
An MPC with a prediction/control horizon of 6, where each time step is $0.1$ seconds, was used to control the system. We impose the control constraint $|F| \leq 1$ and the state constraints $|x| \leq d$, $|\dot{x}|\leq 1$, $|\phi| \leq \frac{\pi}{10}$, $|\dot{\phi}| \leq 1$ at each time step. Additionally, there are several constraints that contain binary variables to model the contact force. The dimensions of the resulting MIQP problems are $n= 24$, $m=96$, and $n_b = 12$.
The same weights and model parameters as in \cite{marcucci2020warm} were used. More details about the MPC problem, for example how the constraints from the contact force are formulated in the MIQP, are also given in \cite{marcucci2020warm}.
The solution times for computing a control in the simulated scenario are shown in Figure \ref{fig:invpend-res}, where it can be seen that the controller is able to operate well below the sampling time of 0.1 seconds (i.e., a control frequency of 10 Hz). Note that the solution time also includes forming the problem from a given state estimate. Figure \ref{fig:invpend-res} also shows that there is a non-zero contact force at some time steps, meaning that the MIQPs are non-trivial. The total memory footprint (including problem data and code for logging) used on the MCU was 25.4 kB.
\input{figs/invpend_result.tex}
\section{Conclusion}
We have proposed a mixed-integer QP solver that is suitable for use in embedded applications such as hybrid model predictive control (MPC). The solver is based on branch-and-bound, and the recently proposed dual active-set solver DAQP \citep{arnstrom2022daqp} is used for solving QP relaxations. We showed that the proposed solver outperforms Gurobi on small to medium-sized MIQPs, commonly encountered in embedded hybrid MPC applications. Finally, the embeddability of the solver was shown by successfully using it on an MCU with limited memory and computing power for the MPC of an inverted pendulum on a cart with contact forces.
A {C implementation} of the proposed solver is available for download at {\url{https://github.com/darnstrom/daqp}}.
\linespread{1.0}\selectfont
The field of spintronics investigates the interplay between the spin (magnetic) and charge degrees of freedom in a solid-state system~\cite{zutic_spintronics_2004, wolf_spintronics_2001}. Initial experimental techniques have focused on the electronic or optical detection of the magnetization, where the latter is controlled or initialized via an external magnetic field. It has subsequently been realized that the magnetization direction can also be manipulated via spin-polarized charge currents utilizing the phenomenon of spin-transfer torques (STT)~\cite{Slonczewski1996,Berger1996,Ralph2008}. The physics underlying STT may be understood with reference to a simple model in which the magnetization results from the localized d-electrons while the mobile s-electrons mediate transport. Due to an exchange coupling between the s and d electrons, the mobile s-electrons experience a torque exerted by the magnetization. Reciprocally, the magnetization experiences an equal and opposite torque. This technique has successfully been employed for magnetization switching and domain wall motion, and forms the basis for a number of devices such as racetrack~\cite{Parkin2008} and STT-magnetoresistive random access memories~\cite{Akerman2005}.
While the mechanism for spin-polarization of current relies on the conductor magnetization in the above mentioned devices, pure spin currents have also been generated and detected in non-magnetic materials, with spin-orbit interaction enabling interconversion between charge and spin currents~\cite{Dyakonov2008,hirsch_spin_1999,Saitoh2006}. Although there are a number of microscopic mechanisms contributing to this interconversion~\cite{Sinova2015}, a simple picture is provided by asymmetric scattering from impurities. An electron experiences, due to spin-orbit interaction, a spin-dependent impurity potential and scattering probability in the transverse direction (see Fig.\,\ref{fig:Schematic}). Thus, a charge flow leads to a spin current in the transverse direction and vice-versa. This phenomenon has been termed spin Hall effect (SHE) and the conversion efficiency is quantified by the so-called spin Hall angle ($\theta$). Since the spin current cannot escape the material, a spin accumulation builds up close to the conductor edges so that the diffusive backflow compensates the SHE current at the edge. This spin accumulation decays exponentially over a distance, called spin diffusion length ($\lambda$), from the interface and is well described within a diffusive transport theory~\cite{takahashi_spin_2008}.
In heterostructures comprising a magnet (F) and a non-magnetic metal (N)\footnote{Non-magnetic here shall denote metals that do not show long-range magnetic order such as ferro- or ferrimagnetism.}, the transport and magnetization electrons may be spatially separated. One mechanism for STT in these systems is via the SHE mediated accumulation of electron spins at the interface, when a charge current is driven in N. In addition to altering or moving the magnetic textures, STT also enables injection of pure spin currents into the magnetic material. This interplay between electronic and magnonic spin currents~\cite{weiler_experimental_2013} is exemplified by phenomena like spin pumping~\cite{tserkovnyak_spin_2002,czeschka_scaling_2011}, electrical spin injection~\cite{johnson_interfacial_1985}, spin Seebeck effect~\cite{uchida_observation_2008,jaworski_observation_2010,xiao_theory_2010}, and spin Hall magnetoresistance (SMR)~\cite{Nakayama2013,Althammer2013,Chen2013,Chen2016}.
Different methods for spin accumulation detection are necessary in different materials. In semiconductors, direct spatially resolved optical detection has been achieved via Kerr rotation measurements~\cite{kato_observation_2004,stern_current-induced_2006} and, recently, Stamm et al. reported (non-spatially resolved) detection of the spin accumulation in metal thin films \cite{Stamm2017}. The latter turns out to be challenging in metals due to their small electromagnetic field penetration depths and the resulting Kerr angles of the order of \SI{10e-9}{rad}. Typical techniques employed in metals therefore rely on examining an effect of the spin accumulation and constitute an indirect measurement. For example, the N thickness dependence of SMR in an F$|$N heterostructure allows inferring the spin Hall angle, but the approach relies on accurate knowledge about the interface and the interplay between the material systems~\cite{Meyer2014,Vlietstra2013}. These interfacial properties are not easily determined and vary in a wide range~\cite{weiler_experimental_2013}.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{Strayfield3D-schematic-new2.pdf}
\caption{Schematic illustration of SHE mediated spin separation and accumulation in a metallic strip. The conductor is assumed long with width $z_0$ and thickness $y_0$. A charge current density $j_{\mathrm{e}}$ flows along the $x$-direction. Due to spin Hall effect (SHE), the conduction electrons are scattered in different directions depending on their spin polarization: Up-spins (red; polarized along $\mathbf{\hat{y}}$), e.\,g., are deflected in the $-z$-direction, while down-spins (blue; polarized along $\mathbf{\hat{y}}$) are deflected in the $+z$-direction. This results in an accumulation of spin-polarized electrons at the surfaces of the strip. The resulting magnetization close to the edges and the magnetic fields induced by these moments are illustrated, respectively, by colored arrows and black lines. The field lines indicate the net magnetic stray field around the strip, i.\,e.~the sum of the stray and Oersted fields.}
\label{fig:Schematic}
\end{figure}
Here we suggest to detect SHE mediated spin accumulation, and thus characterize spin transport parameters, in a metallic strip by measuring the magnetic `stray' field resulting from the non-equilibrium magnetization associated with the spin-polarized electrons. While the net magnetic moment in the system vanishes, a finite magnetization is generated near the boundaries of the metal. We evaluate the ensuing stray field analytically within a simplified model as well as numerically, and find that the field is well within the detection range of the state-of-the-art sensing techniques such as NV centers\cite{Maze:2008cs,Grinolds:2013gi,Maletinsky:2012ge}, magnetic force microscopy\cite{rugar_single_2004,Taylor:2008bp}, scanning SQUID magnetometers\cite{Vasyukov:2013ed,Kirtley:2016kv}, or muon spin resonance~\cite{Leutkens2003}. We further show that the magnetic stray field of spin accumulation may exceed and can be disentangled from the Oersted field arising due to the current flow, that generates the spin accumulation via SHE, using their distinct spatial profiles. The proposed method thus enables a direct access to important spin transport properties - spin diffusion length $\lambda$ and the spin-Hall angle $\theta$ in metals - while circumventing the difficulties associated with F$|$N interfaces.
The paper is organized as follows: In Section \ref{sec:SpinAcc}, we derive the spin accumulation profile in the metallic strip (Fig.\,\ref{fig:Schematic}) and obtain an analytic expression for the magnetic stray field at large (compared to $\lambda$) distances from the surface. Section\,\ref{sec:values} discusses the spatial magnetic field distribution evaluated using the approximate analytic expression as well as numerically. In Section \ref{sec:oersted} we evaluate the Oersted field distribution generated by the charge current in the strip. We discuss the field distribution for a multilayer system in Sec.\,\ref{sec:trilayer} and demonstrate that the Oersted field can be reduced significantly by allowing a counterflow of current in an adjacent layer. We conclude with discussion of experimental issues and a short summary of our results in Section \ref{sec:summary}.
\section{Spin accumulation and magnetic stray field\label{sec:SpinAcc}}
We consider a long metallic strip with width $z_0$ and thickness $y_0$ which supports a charge current density of $j_{\mathrm{e}}$ driven by an electric field $E_0 \mathbf{\hat{x}}$ along its length (Fig. \ref{fig:Schematic}). The general current response in a non-magnetic conductor including SHE reads~\cite{takahashi_spin_2008}:
\begin{widetext}
\begin{equation}
\left( \begin{array}{c}
\mathbf{j}_{e} \\ \mathbf{j}_{sx} \\ \mathbf{j}_{sy} \\ \mathbf{j}_{sz}
\end{array} \right)
= \sigma_N
\left( \begin{array}{cccc}
1 & \theta \mathbf{\hat{x}} \times & \theta \mathbf{\hat{y}} \times & \theta \mathbf{\hat{z}} \times \\
\theta \mathbf{\hat{x}} \times & 1 & 0 & 0 \\
\theta \mathbf{\hat{y}} \times & 0 & 1 & 0 \\
\theta \mathbf{\hat{z}} \times & 0 & 0 & 1
\end{array} \right)
\left( \begin{array}{c}
\mathbf{E} \\ - \pmb{\nabla} \mu_{sx} / {2e} \\ - \pmb{\nabla} \mu_{sy} / {2e} \\ - \pmb{\nabla} \mu_{sz} / {2e}
\end{array} \right),
\end{equation}
\end{widetext}
where $\sigma_N$ is the conductivity, $e (>0)$ is the electronic charge, $\theta$ is the spin Hall angle, $\mathbf{j}_{e}$ is the charge current density, $\mathbf{E}$ is the applied electric field, and $\mathbf{j}_{si}$ is the spin current density polarized in the $i$-direction ($i=x,y,z$). $\mu_{si}$ is the corresponding spin chemical potential, which obeys the diffusion equation: \cite{chen_theory_2013, Aqeel2017, fabian_semiconductor_2007}
\begin{equation}\label{eq:diffusionequation}
\nabla^2 {\mu}_{si} = \frac{{\mu}_{si}}{\lambda^2},
\end{equation}
with the spin diffusion length $\lambda$. The boundary conditions for (\ref{eq:diffusionequation}) are vanishing spin current flow normal to all interfaces, which in the chosen coordinate system read:
\begin{equation}
j_{si}^{y} (y = \pm y_0/2) = 0 \quad\text{and}\quad j_{si}^{z} (z = \pm z_0/2) = 0. \label{eq:boundaryconditions}
\end{equation}
Here the superscript denotes the spatial direction of spin current flow while the subscript represents the spin polarization direction. The diffusion equation (\ref{eq:diffusionequation}) with the boundary conditions [Eq. (\ref{eq:boundaryconditions})] admits the solution~\cite{Mosendz2010}:
\begin{align}\label{eq:musy}
\mu_{sy}(\mathbf{r}) & = - 2 e \theta \lambda E_0 \frac{\sinh{(z/\lambda)}}{\cosh{(z_0/(2\lambda))}}, \\
\mu_{sz}(\mathbf{r}) & = 2 e \theta \lambda E_0 \frac{\sinh{(y/\lambda)}}{\cosh{(y_0/(2\lambda))}}, \label{eq:musz}
\end{align}
where $\mathbf{r}$ is the position vector. As detailed in Appendix \ref{app:spinaccdens}, the spin accumulation density is related to the spin chemical potential by \cite{fabian_semiconductor_2007}
\begin{equation}\label{eq:spinaccdensity}
n_{si} = \frac{\sigma_N}{2 e^2 D} \mu_{si},
\end{equation}
where $D=\lambda^2/\tau$ denotes the electron diffusion constant \cite{johnson_charge_2003,takahashi_spin_2008} and $\tau$ is the spin-flip time. The $n_{si}$ are defined as $n_{si}=n_{\uparrow}-n_{\downarrow}$, where the subscript arrows $\uparrow, \downarrow$ denote the up- and down-polarized spins for the respective quantization axes.
The spin accumulation is thus spatially localized to a region within $\sim \lambda$ from the surfaces. While the exact evaluation of the magnetic field arising from this charge-current induced magnetization requires numerics, analytical expressions can be obtained in the limit of $\lambda \ll r_p$, where $\mathbf{r}_p$ is the position vector of the point at which the magnetic field is measured. We refer to this as the `far-field limit'. Relegating the details to Appendix \ref{app:fluxdensity}, the magnetic field distribution $\mathbf{B}(\mathbf{r}_p)$ in this limit is evaluated as:
\begin{widetext}
\begin{align}\label{eq:Btot}
\mathbf{B}(\mathbf{r}_p) & = \frac{\mu_0\gamma\hbar}{8\pi} \frac{j_e \theta\tau}{e} \left(\frac{1}{\cosh\left(\frac{y_0}{2\lambda}\right)} - \frac{1}{\cosh\left(\frac{z_0}{2\lambda}\right)}\right)\mathbf{F}(y_0,z_0;\mathbf{r}_p), \\
\mathbf{F}(y_0,z_0;\mathbf{r}_p) & = \sum_{\sigma_{1},\sigma_{2} = \pm 1} \frac{2(y_p-\sigma_1 y_0/2)}{(y_p-\sigma_1 y_0/2)^2+(z_p- \sigma_2 z_0/2)^2} \mathbf{\hat{y}} + \frac{2(z_p- \sigma_2 z_0/2)}{(y_p- \sigma_1 y_0/2)^2+(z_p- \sigma_2 z_0/2)^2} \mathbf{\hat{z}}, \label{eq:F}
\end{align}
\end{widetext}
where $\mu_0$ is the permeability of free space and $\gamma$ is the gyromagnetic ratio in the metal. From the expression above, we note that a high aspect ratio leads to larger stray field. Thus it is desirable to have the metal in the shape of a film.
\section{Magnetic stray field: spatial profile \label{sec:values}}
We next compute the spatial distribution of the stray field originating from spin accumulation for the example of a platinum (Pt) conductor. The material parameters employed~\cite{Meyer2014} are spin Hall angle $\theta=0.08$, electric conductivity $\sigma_{\mathrm{N}}=\SI{9.52e6}{A/Vm}$, spin diffusion length $\lambda=\SI{4}{nm}$, spin flip time $\tau=\SI{60}{ps}$ \footnote{This value is calculated from Ref.\,\cite{Sinova2015}, Table I and Ref.\,11 therein.} and $\gamma\hbar=\mu_{\text{B}}=\SI{9.27e-24}{J/T}$. For the geometric dimensions of the Pt strip we choose $y_0=\SI{2}{nm}$ and $z_0=\SI{30}{nm}$ and we assume a current density $j_{\mathrm{e}}=\SI{1e10}{A/m^2}$.
\begin{figure}[tb]
\centering
\includegraphics[width=80mm]{Fig2_neu_s.pdf}
\caption{Schematic illustration of the spin accumulation in the platinum strip, shown together with the calculated spin accumulation as a function of $y$ resp.~$z$.}
\label{fig:SingleLayerSchematicWithSpinAcc}
\end{figure}
With these material parameters, we calculate the spin accumulation at the surfaces $|n_{sy}(z=\pm z_0/2)|=\SI{7.8e25}{m^{-3}}$ and $|n_{sz}(y=\pm y_0/2)|=\SI{1.9e25}{m^{-3}}$. This corresponds to a net spin polarisation of about 0.1 percent present at the interface \footnote{Here, we have compared the calculated $n_{si}$ to the experimentally determined free electron density in platinum thin films, $n=\SI{1.6e28}{m^{-3}}$ (see Ref.~\citenum{fischer_mean_1980}).}. As evident from Eqs.~(\ref{eq:musy}) and (\ref{eq:musz}), the spin polarisation decays exponentially with decay length $\lambda$ into the body of the metal, as shown in Fig.~\ref{fig:SingleLayerSchematicWithSpinAcc}.
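As a numerical cross-check, the surface values quoted above follow directly from Eqs. \eqref{eq:musy}--\eqref{eq:spinaccdensity}; the short script below is ours, and small deviations from the quoted densities merely reflect rounding of the material parameters.
\begin{verbatim}
# Magnitude of the surface spin accumulation for the Pt parameters in the text
# (script and variable names are ours).
import numpy as np

e       = 1.602e-19      # C
theta   = 0.08           # spin Hall angle
sigma_N = 9.52e6         # A/Vm
lam     = 4e-9           # spin diffusion length, m
tau     = 60e-12         # spin-flip time, s
y0, z0  = 2e-9, 30e-9    # strip thickness and width, m
j_e     = 1e10           # charge current density, A/m^2

E0 = j_e / sigma_N       # driving electric field
D  = lam**2 / tau        # electron diffusion constant

# |mu_s| at the side faces (z = +-z0/2) and at the top/bottom faces (y = +-y0/2)
mu_sy = 2 * e * theta * lam * E0 * np.tanh(z0 / (2 * lam))
mu_sz = 2 * e * theta * lam * E0 * np.tanh(y0 / (2 * lam))

# corresponding spin accumulation densities n_s = sigma_N/(2 e^2 D) * mu_s
n_sy = sigma_N / (2 * e**2 * D) * mu_sy
n_sz = sigma_N / (2 * e**2 * D) * mu_sz
print(f"|n_sy| ~ {n_sy:.1e} m^-3, |n_sz| ~ {n_sz:.1e} m^-3")
\end{verbatim}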
\begin{figure}[tb]
\centering
\includegraphics[width=80mm]{Fig3_neu_s.pdf}
\caption{Magnetic stray field profile $\mathbf{B}(\mathbf{r}_{\mathrm{p}})$ of spin accumulation in the conducting strip evaluated (a) and (b) numerically as well as (c) analytically using Eq. \eqref{eq:Btot}. The white arrows indicate the magnetic field direction, the color encodes its magnitude, where white regions indicate fields above $\SI{200}{\micro T}$. The transparent (solid) gray rectangle depicts the cross-section of the metal strip for the numerical (analytical) evaluation. The pink solid line represents the $\SI{20}{\micro T}$ contour line. Panels (b) and (c) show a zoom-in around the top-right edge of the strip to compare the numerical and analytical model.}
\label{fig:StrayfieldColorPlot}
\end{figure}
The corresponding spatial distribution of the magnetic stray field calculated numerically (see Appendix) is plotted in Fig.~\ref{fig:StrayfieldColorPlot}a. Here, the gray transparent box indicates the conductor cross-section. The stray field diverges at the edges of the strip, exceeding $\SI{20}{\micro T}$ within a radius of about $d=\SI{5}{nm}$ (Fig.~\ref{fig:StrayfieldColorPlot}a). The stray field calculated using Eq. (\ref{eq:Btot}) matches the numerical solution very well at large distances (Fig.~\ref{fig:StrayfieldColorPlot}c). Near the conducting strip, however, the approximation \eqref{eq:Btot} leads to significant errors. In the far-field limit, the stray field decays $\sim 1/r_{\mathrm{p}}^3$.
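For reference, the far-field expression \eqref{eq:Btot} is straightforward to evaluate for an arbitrary sensor position; the sketch below is ours, uses the parameters given above, and is, like \eqref{eq:Btot} itself, only meaningful at distances large compared to $\lambda$.
\begin{verbatim}
# Far-field stray field of the spin accumulation (far-field expression above);
# implementation and chosen evaluation point are ours.
import numpy as np

mu0, muB, e = 4e-7 * np.pi, 9.27e-24, 1.602e-19
theta, tau, lam = 0.08, 60e-12, 4e-9
j_e, y0, z0 = 1e10, 2e-9, 30e-9

def B_farfield(yp, zp):
    pref = mu0 * muB / (8 * np.pi) * j_e * theta * tau / e \
           * (1 / np.cosh(y0 / (2 * lam)) - 1 / np.cosh(z0 / (2 * lam)))
    By = Bz = 0.0
    for s1 in (+1, -1):              # sum over the four corners of the cross-section
        for s2 in (+1, -1):
            dy, dz = yp - s1 * y0 / 2, zp - s2 * z0 / 2
            r2 = dy**2 + dz**2
            By += 2 * dy / r2
            Bz += 2 * dz / r2
    return pref * By, pref * Bz      # (B_y, B_z), valid for distances >> lambda

print(B_farfield(y0 / 2 + 10e-9, 0.0))   # e.g. 10 nm above the top face center
\end{verbatim}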
\section{Oersted field: spatial profile \label{sec:oersted}}
Relegating the evaluation details to Appendix \ref{app:oerstedfield}, we discuss the magnetic field distribution of the Oersted field $\mathbf{B}_{\mathrm{oer}}(\mathbf{r}_{\mathrm{p}})$ created by the charge current flow in the conductor. Figure~\ref{fig:OerstedFieldSLSpatial} shows the spatial distribution of the Oersted field around the conductor. It has its maximum of about $\SI{16}{\micro T}$ at the left and right edges of the strip. In the far-field limit, the Oersted field decays proportional to $1/r_{\mathrm{p}}$, as expected. Thus, at large distances the Oersted field dominates the stray field. This is also illustrated in Fig.~\ref{fig:SinglelayerStrayfieldColorPlotAndFarfieldLimits}, where the ratio $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|$ is plotted as a function of the sensor position $\mathbf{r}_{\mathrm{p}}$, including a white solid line indicating $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|=1$. Nevertheless, the spatial dependence of the Oersted field significantly differs from that of the magnetic stray field of spin accumulation. Thus, using a spatially resolved magnetic field sensing technique should allow one to disentangle the SHE induced stray field from the Oersted field.
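A simple way to approximate $\mathbf{B}_{\mathrm{oer}}$ is to discretize the rectangular cross-section into thin current filaments and superpose the fields of infinite straight wires; the discretization below is ours and only sketches such a calculation.
\begin{verbatim}
# Oersted field of a uniform current density in a rectangular strip,
# approximated by superposing infinite straight filaments (parameters are ours).
import numpy as np

mu0 = 4e-7 * np.pi
j_e, y0, z0 = 1e10, 2e-9, 30e-9
Ny, Nz = 20, 300                               # filaments across thickness / width

ys = (np.arange(Ny) + 0.5) / Ny * y0 - y0 / 2
zs = (np.arange(Nz) + 0.5) / Nz * z0 - z0 / 2
I_fil = j_e * (y0 / Ny) * (z0 / Nz)            # current carried by each filament

def B_oersted(yp, zp):
    By = Bz = 0.0
    for yi in ys:
        for zi in zs:
            dy, dz = yp - yi, zp - zi
            r2 = dy**2 + dz**2
            By += -dz / r2                     # wire along +x located at (yi, zi)
            Bz += dy / r2
    return mu0 * I_fil / (2 * np.pi) * np.array([By, Bz])

print(B_oersted(y0 / 2 + 5e-9, 0.0))           # e.g. 5 nm above the top face center
\end{verbatim}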
\begin{figure}[tb]
\centering
\includegraphics[width=80mm]{Fig4_neu_s.pdf}
\caption{Oersted field $\mathbf{B}_{\mathrm{oer}} (\mathbf{r}_{\mathrm{p}})$ as a function of the sensor position $\mathbf{r}_{\mathrm{p}}$. Panel (b) depicts a zoom-in of the upper right edge and panel (c) shows the total magnetic field $|\mathbf{B}_{\mathrm{tot}}|=|\mathbf{B}+\mathbf{B_{\text{oer}}}|$.}
\label{fig:OerstedFieldSLSpatial}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=80mm]{Fig5_neu_s.pdf}
\caption{\textbf{a.} $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|$ as a function of $y_p$ and $z_p$. The white solid line represents the $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|=1$ contour line indicating that the spin accumulation induced stray field exceeds the oersted field significantly. Areas, where $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|$ exceeds 5 are displayed in white. The gray (semi-transparent) rectangle depicts the cross-section of the metal strip. \textbf{b.} Close-up of the edge region of the strip. \textbf{c.} $|\mathbf{B}|$ and $|\mathbf{B_\mathrm{oer}}|$ as a function of $d$ for the sensor position depicted in Fig.~\ref{fig:SingleLayerSchematicWithSpinAcc}. The solid red (black) line corresponds to the full numerical (analytical, i.e. (\ref{eq:Btot})) computation of $|\mathbf{B}|$, while the blue line depicts $|\mathbf{B_{\text{oer}}}|$. We find $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|>1$ for $d\lesssim \SI{6}{nm}$.%
}
\label{fig:SinglelayerStrayfieldColorPlotAndFarfieldLimits}
\end{figure}
\section{Trilayer geometry \label{sec:trilayer}}
In order to suppress the contribution of the Oersted field to the total magnetic field, we suggest a trilayer sample geometry where the strip consists of two conducting layers with a thin insulating layer (thickness $d_{\text{ins}}$) in between. We consider the upper layer (thickness $y_0$) to have a large spin Hall angle $\theta$, while the spin Hall angle of the lower conducting layer (thickness $y'_0$) vanishes. In the following we discuss the situation where current flows through both conducting layers with equal magnitude but opposite signs. In the near field, the trilayer geometry reduces the Oersted field contribution. As we assume the spin Hall angle in the bottom conducting layer to be zero, the stray field of the top layer is not affected by the bottom layer. As a consequence, the ratio $B/B_{\text{oer}}$ can be increased significantly.
\begin{figure}[tb]
\centering
\includegraphics[width=80mm]{Fig6_neu_s.pdf}
\caption{\textbf{a.} Oersted field $\mathbf{B_{\text{oer,TL}}}$ as a function of $y_p$ and $z_p$ for the proposed trilayer sample. The semi-transparent rectangles depict the cross-sections of the two metal strips. \textbf{b.} Total magnetic field $\mathbf{B_{\text{tot}}}= \mathbf{B}+\mathbf{B_{\text{oer,TL}}}$ close to the edge of the upper conductive strip. \textbf{c.} Oersted field of the same region for comparison.}
\label{fig:TrilayerOerstedField}
\end{figure}
For a quantitative analysis, we calculate both the stray field and the Oersted field around the trilayer geometry as a function of the sensor position $\mathbf{r}_{\mathrm{p}}$. We here set $y_0=y_0'=\SI{2}{nm}$, $d_{\text{ins}}=\SI{2}{nm}$ and $z_0=\SI{30}{nm}$ \footnote{$z_0$ can be chosen large compared to $y_0$ without significantly decreasing the stray field!} and leave the material parameters unchanged. Figure~\ref{fig:TrilayerOerstedField} shows the calculated Oersted field for this trilayer geometry. Compared to the above discussed single-layer geometry (see Fig.~\ref{fig:OerstedFieldSLSpatial}), we observe a significant suppression of the Oersted field. The ratio $B/B_{\text{oer,TL}}$, plotted in Fig.~\ref{fig:TrilayerStrayfieldColorPlotAndFarfieldLimits}a, shows maxima around the edges of the top strip where the stray field clearly dominates the Oersted field. In particular, we find that the ratio of stray field and Oersted field, $B/B_{\text{oer}}$, is $5.5$ at $d=\SI{5}{nm}$ and $3.4$ at $d=\SI{10}{nm}$. Thus the contribution of the spin accumulation to the total magnetic field around the conductor is easily detectable and quantifiable in the presented geometry.
Table\,\ref{tab:values} lists the $y$-components of magnetic field $B_y$ and magnetic field gradient $\partial B_y/\partial y$ for sample-sensor distances of $d=\SI{10}{nm}$ and $d=\SI{5}{nm}$ (cf. Fig.~\ref{fig:SingleLayerSchematicWithSpinAcc}).
The $d$-dependence of stray and Oersted fields is depicted in Fig.~\ref{fig:TrilayerStrayfieldColorPlotAndFarfieldLimits}b. We find that the Oersted far field around the proposed trilayer sample decays proportional to $1/r_{\mathrm{p}}^2$, compared to $1/r_{\mathrm{p}}$ for the Oersted field of a single conducting layer. Besides, Fig.~\ref{fig:TrilayerStrayfieldColorPlotAndFarfieldLimits}b shows the $1/r_{\mathrm{p}}^3$-dependence of the stray field. As a consequence, in the trilayer geometry, up to $d=\SI{100}{nm}$ away from the edges, the magnetic stray field exceeds the Oersted field of the two conducting layers.
\begin{figure}[tb]
\centering
\includegraphics[width=80mm]{Fig7_neu_s.pdf}
\caption{\textbf{a.} and \textbf{c.} Field ratio $|\mathbf{B}|/|\mathbf{B_{\text{oer,TL}}}|$ as a function of $y_p$ and $z_p$ for the proposed trilayer sample. The white solid line represents the $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|=1$ contour line. Areas, where $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|>10$ are also shaded in white. The gray (semi-transparent) rectangle depicts the cross-section of the metal strip. \textbf{b.} $|\mathbf{B}|$, $|\mathbf{B_{\text{oer,TL}}}|$ and $|\mathbf{B_{\text{oer}}}|$ as a function of $d$ for the sensor position depicted in Fig.~\ref{fig:SingleLayerSchematicWithSpinAcc}. The solid red line corresponds to the full numerical computation of $|\mathbf{B}|$, while the blue line depicts $|\mathbf{B_{\text{oer}}}|$ of the trilayer configuration. We find $|\mathbf{B}|/|\mathbf{B_{\text{oer}}}|>1$ for $d\lesssim \SI{50}{nm}$.
}
\label{fig:TrilayerStrayfieldColorPlotAndFarfieldLimits}
\end{figure}
\begin{table}[b]
\begin{center}
\caption{$y$-components of magnetic field $B_y$ and magnetic field gradient $\partial B_y/\partial y$ for a sample-sensor distance of $d=\SI{10}{nm}$ and $d=\SI{5}{nm}$ (trilayer sample geometry).}
\begin{tabular}{cccc}
\hline
{} & {} & \textbf{Stray field} & \textbf{Oersted field} \\
\hline\hline
$d=\SI{10}{nm}$ & $B_y$ & $\SI{-3.6}{\micro T}$ & $\SI{-1.0}{\micro T}$ \\
{} & $\partial B_y/\partial y$ & $\SI{460}{T / m}$ & $\SI{100}{T / m}$\\
\hline
$d=\SI{5}{nm}$ & $B_y$ & $\SI{-8.8}{\micro T}$ & $\SI{-1.9}{\micro T}$\\
{} & $\partial B_y/\partial y$ & $\SI{1560}{T / m}$ & $\SI{282}{T / m}$\\
\hline
\end{tabular}%
\end{center}
\label{tab:values}%
\end{table}%
\section{Discussion and Summary \label{sec:summary}}
We consider magnetic force microscopy (MFM) as a potential candidate for the measurement of the stray field profile~\cite{meyer_scanning_2004,schwarz_magnetic_2008} and estimate the sensitivity required. The force acting on a MFM tip is $\mathbf{F}=(\mathbf{m}\cdot\nabla)\mathbf{B}$, where $\mathbf{m}=(0,m,0)$, $m\approx\SI{1e-13}{emu}=\SI{1e-16}{Am^2}$ is typical magnetic moment of a MFM tip~\cite{ferri_atomic_2012}. Using $\partial B_y/\partial y$ from Tab.~\ref{tab:values}, we expect a force in $y$-direction, $|F_y|$, of $\SI{46}{fN}$ (for $d=\SI{10}{nm}$) or $\SI{156}{fN}$ ($d=\SI{5}{nm}$), respectively. The state-of-the-art sensitivity concerning force measurements using MFM is about $\SI{10}{fN}$ at room temperature~\cite{jiles_magnetism_2003}. Besides, Mamin et al.~\cite{mamin_sub-attonewton_2001} have reported the detection of aN forces with MFM operated at $\SI{100}{mK}$. Using MFM in frequency-modulated detection mode, force gradient sensitivities down to $\SI{0.14}{\micro N/m}$ have been reported~\cite{straver_phdthesis_2004}. This is well below the expected stray field force gradients $\partial F_y/\partial y=\SI{7.8}{\micro N/m}$ ($d=\SI{20}{nm}$) and $\partial F_y/\partial y=\SI{45}{\micro N/m}$ ($d=\SI{5}{nm}$).
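The force estimate above amounts to a one-line calculation, $F_y = m\,\partial B_y/\partial y$; the snippet below simply reproduces it from the gradients in Tab.~\ref{tab:values} and the assumed tip moment, and is included only for convenience.
\begin{verbatim}
# MFM force estimate F_y = m * dB_y/dy for the tip moment and the stray-field
# gradients quoted in the table above (trilayer geometry).
m_tip = 1e-16                                   # A m^2 (about 1e-13 emu)
for d_nm, grad in [(10, 460.0), (5, 1560.0)]:   # dB_y/dy in T/m
    F = m_tip * grad
    print(f"d = {d_nm} nm: |F_y| = {F * 1e15:.0f} fN")   # ~46 fN and ~156 fN
\end{verbatim}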
In summary, we have discussed a direct method to detect spin accumulation in a non-magnetic metal strip. The proposed approach is based on the measurement of the magnetic stray field arising from the electron spin accumulation close to the surfaces of the metal strip. To this end, we have derived an analytical expression for the spin accumulation and the corresponding magnetic stray field around a non-magnetic, metallic strip with rectangular cross-section. Based on this, we proposed a sample geometry for a future experiment and calculated the spatial distribution of the magnetic stray field. We showed that the stray field is large enough for detection using state-of-the-art sensing techniques. Besides, we compared the stray field to the Oersted field around the non-magnetic conductor and found that for the proposed trilayer sample geometry, the Oersted field is dominated by the stray field near the edges of the conducting strip. Such a direct detection of spin accumulation should enable a reliable measurement of spin transport properties, such as the spin diffusion length and the spin Hall angle, in metals, thereby circumventing interfacial complexities.
\section*{Acknowledgments}
We acknowledge funding from the DFG via Priority Program SPP 1538 Spin-Caloric Transport (Project GO 944/4) and SPP 1601 (HU 1896/2). AK is funded by the Alexander von Humboldt Foundation.
\section{Introduction}
\label{intro}
In recent years impressive technical advances have prompted
intensive experimental and theoretical investigations of proximity
phenomena, originating from quantum and thermal fluctuations of
the electromagnetic field existing in the vicinity of all bodies.
In general, these phenomena can be grouped into two broad classes,
namely equilibrium phenomena on one side, and non-equilibrium
phenomena on the other. Well-known examples of equilibrium
phenomena are the Casimir effect, and the Casimir-Polder atom-wall
forces \cite{parse,Mohid}. On the other hand, a much studied
non-equilibrium phenomenon is provided by non-contact radiative
heat transfer between two closely spaced bodies \cite{volokitin}.
In recent times much interest has been devoted to the new field of
Casimir and Casimir-Polder forces between macroscopic bodies
and/or atoms out of thermal equilibrium. These phenomena are
intensely investigated now both theoretically
\cite{antezza,buhmann} and experimentally \cite{cornell}. We
should also like to mention the very interesting phenomenon of
non-contact quantum friction \cite{pendry}. The problems of heat
transfer, Casimir forces and quantum friction in a system of two
plane-parallel plates at different temperatures, in relative
uniform motion in a direction parallel to the plates, have also
been investigated recently \cite{pers}.
While the material dependence of the above phenomena has been well
studied in simple planar geometries, there exists presently much
interest in exploring more complicated geometries. Indeed the
highly non-trivial geometry dependence of the near-field opens up
the possibility of tailoring the features of the radiation field
for new applications, that range from micro- and nanomachines
operated by the Casimir force \cite{chan,capasso}, to designing
the thermal emission of photonic crystals \cite{chan2}. For nearly
flat surfaces, the shape dependence can be studied using the
so-called proximity force approximation (PFA), which amounts to
averaging the plane-parallel result over the slowly varying
distance between the opposing surfaces of the bodies. It is well
known however that the PFA has only a limited range of validity,
and it can lead to inaccurate predictions for surfaces with deep
corrugations \cite{chan3}. For this reason, it is widely
recognized today that more accurate methods are needed to describe
arbitrary geometries. For systems in equilibrium, much progress
has been made recently. New powerful numerical techniques to
compute the Casimir force, based on the Green's function approach
and on the path-integral approach have been reported
\cite{gies,rodriguez}. Another approach that is being vigorously
developed is based on the multiple scattering formalism, that was
first introduced long ago \cite{balian} to study the Casimir
energy for a system of perfect conductors of arbitrary shape.
The scattering approach is actually closely related to the Green's
function method, and we address the reader to Ref. \cite{balian}
for further details on the connection between the two methods.
There exist today several variants of the scattering approach, that have been
developed to deal also with real materials (for a comparative
review see \cite{milton}): one of the variants \cite{emig,kenneth}
is better suited for dealing with compact objects not too close to
each other, and it is based on a multipolar expansion of the e.m.
field. Another variant \cite{genet} is instead better adapted to
deal with planar-like structures in close proximity, and it uses a
decomposition of the e.m. field into plane waves. In contrast with
these important advances in the mathematical techniques for
computing equilibrium Casimir forces, the theory is much less
developed for problems out of thermal equilibrium. In fact, the
Casimir force out of thermal equilibrium has been investigated
only in the plane-parallel case \cite{antezza}, while the problem
of computing the near-field radiative transfer between two spheres
was addressed only very recently \cite{chen}, using a dyadic
Green's function approach (see also the recent experiment
\cite{nara}). Investigating in depth the shape-dependence of
proximity effects out of thermal equilibrium is indeed very
interesting, in view of potential applications, because out of
equilibrium there exists a richness of behaviors, associated for
example with resonances in the spectrum of surface excitations,
that are absent at equilibrium \cite{volokitin,antezza}.
In this paper we develop an exact method for computing Casimir
forces and the power of heat transfer between two arbitrary plates
out of thermal equilibrium. The method is based on a
generalization of the scattering approach, that has proven so
successful in equilibrium Casimir problems. We consider the
variant of the scattering approach \cite{genet} that is best
suited for planar-like nanostructured surfaces at close
separations, like those of \cite{chan2,chan3}. In this paper, we
shall only present the derivation of the basic formulae, leaving
concrete numerical applications to a subsequent publication. The
main new result is the demonstration that also out of thermal
equilibrium, the shape and material dependence enter only through
the scattering matrices of the bodies involved, analogously to
what has been found for systems in thermal equilibrium. We remark
that our results provide exact expressions in terms of the
scattering matrices of the intervening bodies. Of course, the
scattering matrix is in principle a complicated object, but there
exist methods, both analytical and numerical, for computing it
accurately. Many new geometries have been considered recently in
Casimir investigations, and the corresponding scattering matrices
have been estimated for this purpose. The formulae derived in this
paper make it possible to consider these geometries out of thermal
equilibrium.
The plan of the paper is as follows: in Sec. 2 we present the
general derivation of the correlators for the e.m. field in the
gap between two arbitrarily shaped plates at different
temperatures. In Sec. 3 the correlators derived in Sec. 2 are used
to obtain an exact expression for the Casimir force and the power
of radiative heat transfer between the plates. Finally, in Sec. 4
we present our conclusions and outline directions for future work.
\section{General principles: Rytov's theory.}
We consider a geometry of the type considered in Refs.
\cite{genet}, i.e. a cavity consisting of two large plates
at temperatures $T_1$ and $T_2$, the whole system being in a {\it
stationary configuration}. The shapes of their opposing surfaces
can be arbitrary, apart from the assumption (usually implicit in
scattering approaches to equilibrium Casimir problems
\cite{genet,kenneth,emig}) that there must exist between them a
vacuum gap of thickness $a>0$ bounded by two parallel planes. This
condition excludes from our consideration interpenetrating
surfaces, like the one studied in the second of Refs.
\cite{rodriguez}. We also assume for simplicity that the plates
are thick, in such a way that no radiation from outside can enter
the gap.
The basic problem that we face is to determine the correlators
for the fluctuating electromagnetic field existing in the empty
gap between the plates. This can be done by suitably generalizing
the methods used in heat transfer studies \cite{volokitin}, which
are also at the basis of the recent out-of-equilibrium Casimir
investigations \cite{antezza}. Both are based on the well known
Rytov's theory \cite{rytov} of electromagnetic fluctuations. On
the basis of this theory, the field in the gap can be interpreted
as the result of multiple scatterings off the plates surfaces, of
the radiation fields originating from quantum and thermal
fluctuating polarizations within the plates. Importantly, the
local character of the polarization fluctuations implies that the
two plates radiate {\it independently} from each other. Therefore,
in a {\it stationary configuration}, the radiation from either
plate is the {\it same} as the one that would be radiated by
that plate, if it were in {\it equilibrium} with the environment
(at its own temperature), the other plate being {\it removed}.
This physical picture permits us to separate the problem of
determining the fluctuating field in the gap into two separate
steps. In the first step, one determines the field radiated by
either plate in isolation, a problem that can be solved by using
the general equilibrium formalism. The two plates are considered
together only in the second step, where the intracavity field is
finally determined, by taking account of the effect of multiple
scatterings on the radiation fields radiated by the plates, as
found in step one.
After these general remarks, we can now start our two-step
computation of the fluctuating e.m. field in the gap. We let
$\{x,y,z\}$ be Cartesian coordinates such that the vacuum gap is
bounded by the planes $z=0$ and $z=a$ respectively, with plate
one (two) lying at the left (right) of the $z=0$ ($z=a$) plane. We
suppose that the lateral sizes $L_x$ and $L_y$ of the plates are
both much larger than the separation $a$: $L_x,L_y \gg a$. Boundary
effects being negligible, it is mathematically convenient to
impose periodic boundary conditions on the fields in the $(x,y)$
directions, on the opposite sides of the plates. When dealing with
the e.m. field, it is sufficient to consider the electric field
${\bf E}(t,{\bf r})$ only, for the magnetic field ${\bf B}(t,{\bf
r})$ can be obtained from ${\bf E}(t,{\bf r})$ by using Maxwell
Equations. The geometry being planar-like, the electric field in
the gap can always be expressed as a sum of (positive-frequency)
plane-wave modes of the form \begin{equation} {\bf E}^{(\pm)}_{\alpha,{\bf
k}_{\perp} }(t,{\bf r})=2\,{\rm Re}\,[b^{(\pm)}_{\alpha,{\bf
k}_{\perp} }{\bf \cal E}^{(\pm)}_{\alpha,{\bf k}_{\perp}
}({\omega;\bf r}) \,e^{-i \omega t}]\label{mode0}\end{equation} where \begin{equation}
{\bf \cal E}^{(\pm)}_{\alpha,{\bf k}_{\perp} }({\omega;\bf
r})={\bf e}^{(\pm)}_{\alpha,{\bf k}_{\perp} }(\omega)\,e^{i {\bf
k}^{(\pm)}\cdot {\bf r}}\;. \label{modes}\end{equation} Here $\omega$ is the
frequency, and ${\bf k}_{\perp}$ is the projection of the
wave-vector onto the $(x,y)$ plane. Periodicity in $(x,y)$
directions implies that the wave-vectors ${\bf k}_{\perp}$ belong
to a discrete set labelled by two integers $(n_x,n_y)$: $k_x=2 \pi
n_x/L_x$, $k_y=2 \pi n_y/L_y$. The index $\alpha=s,p$ denotes
polarization, where $s$ and $p$ correspond, respectively, to
transverse electric and transverse magnetic polarizations. The
superscripts (+) and (-) in Eqs. (\ref{mode0}) and (\ref{modes})
refer to the direction of propagation along the $z$-axis, the
$(+)$ and $(-)$ signs corresponding to propagation in the positive
and negative $z$ directions, respectively. Moreover, ${\bf
k}^{(\pm)}={\bf k}_{\perp} \pm k_z {\hat {\bf z}}$, where
$k_z=\sqrt{\omega^2/c^2-k^2_{\perp}}$ (the square root is defined
such that ${\rm Re}(k_z) \ge 0$, ${\rm Im}(k_z) \ge 0$), ${\bf
e}^{(\pm)}_{s,{\bf k}_{\perp}}(\omega)={\hat {\bf z}} \times {\hat
{\bf k}}_{\perp}$, ${\bf e}^{(\pm)}_{p,{\bf
k}_{\perp}}(\omega)=(c/\omega)\,{\bf k}^{(\pm)} \times {\bf
e}^{(\pm)}_{s,{\bf k}_{\perp}}$. We note that for $\omega/c
> k_{\perp}$, when $k_z$ is real, the modes ${\bf \cal
E}^{(\pm)}_{\alpha,{\bf k}_{\perp} }$ represent {\it propagating}
waves, while for $\omega/c < k_{\perp}$, when $k_z$ is imaginary,
they describe {\it evanescent} modes. It is convenient to
introduce a shortened index notation that will prove useful in
the sequel. We shall use a roman index $i$ to denote collectively
the $i$-th component of a vector and the position ${\bf r}$, while
a greek index $\alpha$ will denote collectively the polarization
$\alpha$ and the wave-vector ${\bf k}_{\perp}$. In this notation
the $i$-th component of ${\bf {\cal E}}^{(\pm)}_{\alpha,{\bf
k}_{\perp} }(\omega;{\bf r})$ shall be denoted as ${\cal
E}^{(\pm)}_{i \alpha}(\omega)$. Similarly, a kernel
$A_{\alpha,{\bf k}_{\perp};\alpha',{\bf k}'_{\perp}}(\omega)$
shall be denoted as $A_{\alpha,\alpha'}(\omega)$. We also set
$\sum_\omega \equiv \int d \omega/(2 \pi)$, $\delta_{\omega,
\omega'}\equiv 2 \pi \delta(\omega-\omega')$, $\sum_{\alpha}
\equiv 1/{\cal A} \sum_{n_x,n_y}\sum_{\alpha}$, and
$\delta_{\alpha,\alpha'}\equiv {\cal A}\, \delta_{n_x,n'_x}
\delta_{n_y,n'_y}\delta_{\alpha,\alpha'}$, where ${\cal A}=L_x
L_y$ is the area of the plates. Finally, for any kernel
$A_{\alpha,{\bf k}_{\perp};\,\alpha',{\bf k}'_{\perp}}$, we define
$${\rm Tr}_{\alpha} A= \sum_{\alpha} A_{\alpha,\alpha}\equiv
\frac{1}{\cal A}\sum_{n_x,n_y}\sum_{\alpha} A_{\alpha,{\bf
k}_{\perp};\,\alpha,{\bf k}_{\perp}}\;.$$ Having set our
notations, we pass now to step one.
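As an elementary numerical illustration of these conventions (and not part of the derivation), the following Python sketch checks that, for a propagating wave, the polarization vectors ${\bf e}^{(\pm)}_{s,{\bf k}_{\perp}}$ and ${\bf e}^{(\pm)}_{p,{\bf k}_{\perp}}$ defined above are unit vectors orthogonal to ${\bf k}^{(\pm)}$; the values of $\omega$ and ${\bf k}_{\perp}$ are arbitrary, and units with $c=1$ are used.
\begin{verbatim}
# Check of the mode conventions: for a propagating wave (|k_perp| < omega/c)
# the polarization vectors e_s, e_p are unit vectors orthogonal to k^(+-).
import numpy as np

c, omega = 1.0, 2.0
kperp = np.array([0.8, 0.5, 0.0])            # |k_perp| < omega/c
kz = np.sqrt(omega**2 / c**2 - kperp @ kperp)
zhat = np.array([0.0, 0.0, 1.0])

for sign in (+1, -1):
    k = kperp + sign * kz * zhat                          # k^(+-)
    e_s = np.cross(zhat, kperp / np.linalg.norm(kperp))   # e_s = z x unit(k_perp)
    e_p = (c / omega) * np.cross(k, e_s)                  # e_p = (c/omega) k x e_s
    print(np.linalg.norm(e_s), np.linalg.norm(e_p),       # both equal to 1
          np.dot(e_s, k), np.dot(e_p, k))                 # both equal to 0
\end{verbatim}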
\subsection{Step one: the field radiated by a single plate in thermal equilibrium}
As explained above, we begin by considering each plate in
isolation to determine its radiation, and we let ${{\cal
E}}^{(A)}_i(\omega)$, $A=1,2$, denote the time Fourier transform of the
field radiated by plate $A$. The {\it total} radiation field
${{\cal E}}^{({\rm eq};A)}_i(\omega )$ existing, respectively, to
the right of plate one and to the left of plate two, when either
plate is in equilibrium at temperature $T_A$ with the environment
(the other plate being absent) can be expressed in the form: \begin{equation}
{{\cal E}}^{({\rm eq};A)}_i(\omega)={{\cal E}}^{(A)}_i(\omega)+{
\cal E}^{({\rm env};A)}_i(\omega)+{\cal E}^{(\rm
sc;A)}_i(\omega)\;,\label{etot}\end{equation} where ${ \cal E}^{({\rm
env};A)}_i(\omega)$ describes the environment radiation, including
vacuum fluctuations and black-body radiation, impinging on plate
$A$ (from the right for plate one, and from the left for plate
two), and ${\cal E}^{({\rm sc};A)}_i(\omega)$ is the
corresponding scattered radiation. These fields have the
expansions: \begin{equation} {{\cal E}}^{(A)}_i(\omega)=\sum_{\alpha}
{\cal E}^{(\pm)}_{i \alpha}(\omega)\,b^{(A)}_{\alpha}(\omega)\,\;\label{Afield}\;,\end{equation}
\begin{equation} {{\cal
E}}^{({\rm env};A)}_i(\omega)=\sum_{\alpha}
{\cal E}^{(\mp)}_{i \alpha}(\omega)\,b^{(\rm env)}_{\alpha}(\omega)\,\;\label{bbfield}\;,\end{equation}
\begin{equation} {{\cal
E}}^{({\rm sc};A)}_i(\omega)=\sum_{\alpha,\alpha'} {\cal
E}^{(\pm)}_{i
\alpha}(\omega)\,S^{(A)}_{\alpha,\alpha'}(\omega)\,b^{(\rm
env)}_{\alpha'}(\omega)\,,\label{scfield}\end{equation} where, here and in
Eqs. (\ref{scapos}) and (\ref{green}) below, the upper (lower)
sign is for plate one (two), and $S_{\alpha \alpha'}^{(A)}$ is the
scattering matrix of plate $A$, for radiation impinging on the
right (left) surface of plate one (two). It is important to note
that, for fixed plates orientations, the matrix $S_{\alpha
\alpha'}^{(A)}$ depends in general on the position ${\bf x}^{(A)}$
of some fixed reference point $Q^{(A)}$ chosen on plate $A$. If
${\tilde S}_{\alpha \alpha'}^{(A)}$ is the scattering matrix of
plate $A$ relative to a coordinate system with origin at
$Q^{(A)}$, then: \begin{equation} {S}^{(A)}_{\alpha \alpha'}=e^{-i {\bf
k}^{(\pm)}\cdot {\bf x}^{(A)}}\,{\tilde S}^{(A)}_{\alpha
\alpha'}\, e^{i {\bf k}^{'(\mp)}\cdot {\bf
x}^{(A)}}\;.\label{scapos}\end{equation} The amplitudes $b^{(\rm
env)}_{\alpha}(\omega)$ for the environment radiation in Eqs.
(\ref{bbfield}) and (\ref{scfield}) are characterized by the
following non-vanishing well known correlators: \begin{equation} \langle
b^{(\rm env)}_{\alpha}(\omega)\, b^{(\rm env)*}_{\alpha'}(\omega')
\rangle= \frac{2 \pi \omega}{c^2} \,F(\omega, T_A)\, {\rm
Re}\left( \frac{1}{k_z}\right)\delta_{\omega \omega'}
\delta_{\alpha \alpha'}\,\label{bbcor}\end{equation} where
$F(\omega,T)=(\hbar \omega/2) \coth(\hbar \omega/(2 k_B T))$, with
$k_B$ the Boltzmann constant.
The desired correlators for the amplitudes
$b^{(A)}_{\alpha}(\omega)$ can now be determined by exploiting the
following relation implied by the fluctuation-dissipation
theorem: \begin{equation} \langle {{\cal E}}^{({\rm eq}; A)}_i(\omega )\,{{\cal
E}}^{({\rm
eq};A)*}_{i'}(\omega')\rangle=\frac{2}{\omega}\,F(\omega,T_A)\,
\delta_{\omega \omega'}{\rm Im} \,
G^{(A)}_{ii'}(\omega)\;,\label{FDT}\end{equation} where
$G^{(A)}_{ii'}(\omega)$ is the dyadic {\it retarded} Green function of plate $A$.
In the vacuum to the right (left) of plate one (two), the Green
function $G^{(A)}_{ii'}(\omega)$ can be expressed in terms of the
scattering matrix $S^{(A)}_{\alpha \alpha'}$ as follows: \begin{equation}
G^{(A)}_{ii'}(\omega)=G_{ii'}^{(0)}(\omega)+ \frac{2 \pi i
\omega^2}{c^2} \sum_{\alpha \alpha'} {\cal
E}_{i\alpha}^{(\pm)}S_{\alpha \alpha'}^{(A)} {\cal
E}_{J(i')\alpha'}^{(\mp)}\frac{1}{k'_z}\;,\label{green}\end{equation} where
$G_{ii'}^{(0)}(\omega)$ is the {\it retarded} Green function in free space:
$$
G_{ii'}^{(0)}(\omega)=\frac{2 \pi i \omega^2}{c^2}
\sum_{\alpha}\frac{1}{k_z}\left(\theta(z-z') \,{\cal
E}_{i\alpha}^{(+)} {\cal E}_{J(i')\alpha}^{(+)}\right.$$\begin{equation}\left.+
\,\theta(z'-z)\, {\cal E}_{i\alpha}^{(-)} {\cal
E}_{J(i')\alpha}^{(-)}\right)\;,\label{freegr}\end{equation} with $\theta(z)$
the Heaviside step function ($\theta(z)=1$ for $z\ge 0$, $\theta(z)=0$
for $z<0$). Here, $J$ denotes the {\it inversion} operator, whose
action on space-indices is defined as $J(i) \equiv J(i,{\bf
r})=(i,-{\bf r})$. It is useful to define the action of $J$ also
on polarizations, wave-vectors and propagation directions as
$J(\alpha)\equiv J(\alpha,{\bf k}_{\perp})=(\alpha,-{\bf
k}_{\perp})$ and $J((\pm))=(\mp)$. The following relations hold
\begin{equation} {\cal E}^{(\pm) *}_{i \alpha}={\cal E}^{(\pm)}_{J(i)
\alpha}(1+s_{\alpha})/2 +{\cal E}^{(\mp)}_{J(i)
\alpha}(1-s_{\alpha})/2 \;,\label{rel1}\end{equation} where $s_{\alpha}
\equiv {\rm sign}(\omega^2/c^2-k^2_{\perp})$ and \begin{equation} {\cal
E}^{(\pm)}_{J(i) J(\alpha)}=(-1)^{P(\alpha)}{\cal
E}^{(\mp)}_{i\alpha}\;,\label{rel2}\end{equation} where $P(\alpha)$ is one (zero)
for $s$ ($p$) polarization. The reciprocity relations
$G^{(A)}_{ii'}(\omega)=G^{(A)}_{i'i}(\omega)$ satisfied by the
Green's function, as a consequence of microscopic reversibility,
imply via Eq. (\ref{green}) the following important Onsager's
relations that must hold for any scattering matrix \begin{equation}
S^{(A)}_{\alpha
\alpha'}=\frac{k'_z}{k_z}(-1)^{P(\alpha)+P(\alpha')}S^{(A)}_{J(\alpha')J(\alpha)}\;.\label{ons}\end{equation}
Upon substituting the expression for ${\cal E}^{({\rm
eq};A)}_i(\omega )$ provided by Eqs.(\ref{etot}-\ref{scfield})
into the l.h.s. of Eq. (\ref{FDT}), and after substituting the
expression of the Green function Eqs. (\ref{green}) into the
r.h.s. of Eq. (\ref{FDT}), by making use of Eqs. (\ref{bbcor}),
(\ref{rel1}), (\ref{rel2}) and (\ref{ons}) one obtains the
following expression for the non-vanishing correlators of the
amplitudes $b^{(A)}_{\alpha}(\omega)$: $$ \langle b^{(A)}
(\omega)\, b^{(B)\dagger} (\omega') \rangle =\delta_{AB}\,\frac{2
\pi \omega}{c^2} F(\omega, T_A)\, \delta_{\omega \omega'}$$ \begin{equation}
\times \left(\Sigma^{\rm (pw)}_{-1} - S^{(A)} \Sigma^{\rm
(pw)}_{-1} S^{(A)\dagger}+S^{(A)} \Sigma^{\rm
(ew)}_{-1}-\Sigma^{\rm (ew)}_{-1} S^{(A)\dagger}
\right)\,,\label{kirgen}\end{equation} where we collected the amplitudes
$b^{(A)}_{\alpha}(\omega)$ into the (column) vector
$b^{(A)}(\omega)$ and we set $\Sigma_{n}^{\rm (pw/ew)}=k_z^n
\Pi^{\rm (pw/ew)} $, where $\Pi^{\rm (pw)}_{\alpha \alpha'}=
\delta_{\alpha \alpha'}\,{(1+s_{\alpha})}/{2}$ and $\Pi^{\rm
(ew)}_{\alpha \alpha'}= \delta_{\alpha
\alpha'}\,{(1-s_{\alpha})}/{2}$ are the projectors onto the
propagating and evanescent sectors, respectively. Eq.
(\ref{kirgen}) generalizes the well known Kirchhoff's law (as can
be found for example in \cite{volokitin}) to non-planar surfaces,
and it shows that the fluctuating field radiated by plate $A$ is
fully determined by its scattering matrix $S^{(A)}$. We remark
that for non-planar surfaces the matrix $S^{(A)}$ is non-diagonal,
and therefore the order of the factors on the r.h.s. of Eq.
(\ref{kirgen}) must be carefully respected. Now we move to step
two.
\subsection{Step two: determination of the intracavity field}
Without loss of generality, the intra-cavity field can be
represented as a superposition of waves of the form: \begin{equation} {\cal
E}_{i \alpha}(\omega)=b_{\alpha}^{(+)}(\omega){\cal
E}_{i\alpha}^{(+)}(\omega)+b_{\alpha}^{(-)}(\omega){\cal
E}_{i\alpha}^{(-)}(\omega)\;.\end{equation} The intuitive physical picture of
the intra-cavity field as resulting from repeated scattering off
the two surfaces of the radiation field {\it emitted} by the
surfaces of the two plates leads to the following equations for
$b^{(\pm)}(\omega)$:
\begin{equation} b^{(+)}=b^{(1)}+{S}^{(1)}\,b^{(-)}\;,\;\;
b^{(-)}=b^{(2)}+{S}^{(2)}\,b^{(+)}\;.\label{intrafield}\end{equation}
Equations (\ref{intrafield}) are easily solved: \begin{equation} b^{(+)}= U^{(12)}\,b^{(1)}+{S}^{(1)} U^{(21)}\,
b^{(2)}\;,\label{bplus}\end{equation} \begin{equation}
b^{(-)}={S}^{(2)} U^{(12)}\, b^{(1)}+ U^{(21)}\, b^{(2)}
\;,\label{bminus}\end{equation} where $U^{(AB)}=(1-{
S}^{(A)}{S}^{(B)})^{-1}$. Together with Eq. (\ref{kirgen}), Eqs.
(\ref{bplus}) and (\ref{bminus}) completely determine the
intra-cavity field. In particular, they determine the matrix
$C^{(KK')}$ for the non-vanishing correlators of the intracavity
field: \begin{equation}\langle b^{(K)}(\omega) b^{(K')\dagger}(\omega')
\rangle= \delta_{\omega, \omega'}C^{(KK')}\;.\end{equation} The explicit
expression of $C^{(KK')}$ in terms of ${S}^{(1)}$ and ${S}^{(2)}$
can be easily obtained from Eqs. (\ref{kirgen}), (\ref{bplus}) and
(\ref{bminus}), and it is not shown for brevity.
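As a purely illustrative consistency check (not needed for the derivation), the following Python sketch verifies that the closed-form amplitudes of Eqs. (\ref{bplus}) and (\ref{bminus}) solve the coupled equations (\ref{intrafield}); the scattering matrices are random matrices of an arbitrary six-mode truncation, chosen small enough that $U^{(AB)}$ exists.
\begin{verbatim}
# Check that b^(+), b^(-) of Eqs. (bplus), (bminus) satisfy Eq. (intrafield).
import numpy as np

rng = np.random.default_rng(0)
n = 6                                        # illustrative number of modes
S1 = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
S2 = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
b1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

I = np.eye(n)
U12 = np.linalg.inv(I - S1 @ S2)
U21 = np.linalg.inv(I - S2 @ S1)
b_plus  = U12 @ b1 + S1 @ U21 @ b2
b_minus = S2 @ U12 @ b1 + U21 @ b2

print(np.allclose(b_plus,  b1 + S1 @ b_minus))   # True
print(np.allclose(b_minus, b2 + S2 @ b_plus))    # True
\end{verbatim}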
\section{Observables}
The above results permit us to evaluate the average of any
observable constructed out of the intracavity field. Typically,
the observables are symmetric bilinears of the electric field, of
the form \begin{equation} {\bar {\cal O}} \equiv \sum_{i j}\int d^2{\bf
r}_{\perp} \int d^2 {\bf r}'_{\perp} E_i(t,{\bf r}) {\cal
O}_{ij}({\bf r},{\bf r}')E_j(t,{\bf r}')\;,\end{equation} where ${\cal
O}_{ij}({\bf r},{\bf r}')={\cal O}_{ji}({\bf r'},{\bf r})$. Upon
defining the matrix $$ {{\cal O}}^{(KK')}_{\alpha,\alpha'}=\sum_{i
j}\int d^2{\bf r}_{\perp}\!\! \int d^2 {\bf r}'_{\perp} {\cal
E}_{i\alpha}^{(K)*}(\omega,{\bf r})$$ \begin{equation} \times {\cal
O}_{ij}({\bf r},{\bf r}')\,{\cal E}_{j\alpha'}^{(K')}(\omega,{\bf
r}')\end{equation} the statistical average of ${\bar {\cal O}}$ can be
written as
\begin{equation}\langle {\bar {\cal O}}\rangle= 2 \sum_{\omega >0}
\sum_{K,K'} {\rm Tr}_{\alpha} [C^{(KK')} {{\cal O}}^{(K'K)}
]\,.\label{aver}\end{equation} Below we shall use this formula to determine
the Casimir force and the power of heat transfer between the two
plates.
\subsection{The Casimir force out of thermal equilibrium}
As our first example, we consider the $(x,y)$ integral of the $zz$
components of the Maxwell stress tensor $T_{ij}$, that provides
the total Casimir force between the plates. After a simple
computation, one finds: \begin{equation} { {\cal
O}}^{(KK')}\,[T_{zz}]=\frac{c^2 k^2_z}{4 \pi \omega^2}\, \left(
\delta_{KK'}\,\Pi^{(\rm pw)}+ \delta_{KJ(K')}\,\Pi^{(\rm
ew)}\right)\;.\label{Tzz}\end{equation} Evaluation of Eq. (\ref{aver}) with
${\cal O}^{(KK')}$ given by Eq. (\ref{Tzz}), leads to the
following representation for the {\it unrenormalized} Casimir
force $F_z^{(0\,\rm neq)}$ out of thermal equilibrium: $$
F_z^{(0\,\rm neq)} \!=\!\sum_{\omega
>0}\frac{1}{\omega}\,[F(\omega,T_1)J(S^{(1)},S^{(2)}) $$ \begin{equation} + \;F(\omega,T_2)J(S^{(2)},S^{(1)})]\;,\label{unren}\end{equation}
where $J(S^{(A)},S^{(B)})$ is the quantity $$ J(S^{(A)},S^{(B)})=
{\rm Tr_{\alpha}}\! \left[
U^{(AB)}\left(\Sigma^{\rm (pw)}_{-1} -S^{(A)}\Sigma^{\rm
(pw)}_{-1}S^{(A)\dagger}\right. \right.
$$
$$
\left.+S^{(A)}\Sigma^{\rm (ew)}_{-1}-\Sigma^{\rm
(ew)}_{-1}S^{(A)\dagger}\right) U^{(AB)\dagger}\,\left(
\Sigma^{\rm (pw)}_{2}\right.$$ \begin{equation} \left.\left. + S^{(B)\dagger}
\Sigma^{\rm (pw)}_{2}S^{(B)}+ \Sigma^{\rm (ew)}_{2}
S^{(B)}+S^{(B)\dagger} \Sigma^{\rm (ew)}_{2} \right)\right]
\;.\label{jqua}\end{equation} After we add and subtract one half of the
quantity
$$B=F(\omega,T_2)J(S^{(1)},S^{(2)})+
F(\omega,T_1)J(S^{(2)},S^{(1)})$$ from the expression inside the
square brackets on the r.h.s. of Eq. (\ref{unren}), it is easily
seen that Eq. (\ref{unren}) can be recast in the form: $$
F_z^{(0\,\rm neq)}(T_1,T_2) =\frac{F_z^{(0\,\rm
eq)}(T_1)+F_z^{(0\,\rm eq)}(T_2)}{2}$$ \begin{equation} +\;\Delta F_z^{(\rm
neq)}(T_1,T_2)\;,\label{unrbis}\end{equation} where \begin{equation} F_z^{(0\,\rm eq)}
\!=\!\sum_{\omega
>0}\frac{1}{\omega}F(\omega,T)[J(S^{(1)},S^{(2)})+J(S^{(2)},S^{(1)})]\;,\label{unreq} \end{equation}
and $$\Delta F_z^{(\rm neq)}(T_1,T_2)= \sum_{\omega
>0}\frac{1}{2 \omega}(F(\omega,T_1)-F(\omega,T_2))$$
\begin{equation} \times
\,[J(S^{(1)},S^{(2)})-J(S^{(2)},S^{(1)})]\;.\label{neqf0}\end{equation} Using
the identity \begin{equation} F(\omega,T)=\hbar
\omega\left[\frac{1}{2}+n(\omega,T)\right]\end{equation} where \begin{equation}
n(\omega,T)=\frac{1}{\exp(\hbar \omega/(k_B T))-1}\;,\end{equation} Eq.
(\ref{neqf0}) can be written as:
$$\Delta F_z^{(\rm neq)}(T_1,T_2)= \frac{\hbar}{2}\sum_{\omega
>0}(n(\omega,T_1)-n(\omega,T_2))$$
\begin{equation} \times
\,[J(S^{(1)},S^{(2)})-J(S^{(2)},S^{(1)})]\;.\label{neqf}\end{equation} On the
other hand, upon substituting Eq. (\ref{jqua}) into the r.h.s. of Eq.
(\ref{unreq}), after a somewhat lengthy algebraic manipulation, it
can be seen that the quantity $F_z^{(0\,\rm eq)}(T)$ can be
further decomposed as \begin{equation} F_z^{(0\,\rm eq)}(T)=A^{(0)}(T)+
F_z^{(\rm eq)}(T)\;.\label{unreqbis}\end{equation} Here, $A^{(0)}(T)$ denotes
the divergent quantity: \begin{equation} A^{(0)}(T)=2\sum_{\omega
>0} \frac{F(\omega,T)}{\omega} {\rm
Tr_{\alpha}}\left[k_z \Pi^{\rm (pw)}\right].\end{equation} As we see, this
quantity depends neither on the material constituting the plates
nor on their distance, and we neglect it altogether \footnote{In
fact, the divergent quantity $A^{(0)}(T)$ includes a {\it
finite} temperature-dependent contribution, which may give rise to
a distance-independent force on the plates. The actual magnitude
of the resulting constant force on either plate depends on the
temperature of the environment outside the cavity, but it is
independent of both the material constituting the plates, as well
as of their shapes. For a detailed discussion of this point, the
reader may consult the third of Refs.\cite{antezza}}. As to the
second contribution $F_z^{(\rm eq)}$ occurring on the r.h.s. of
Eq. (\ref{unreqbis}), it has the expression
$$ F_z^{(\rm eq)}(T)=2 \,{\rm Re} \sum_{\omega \ge 0}
\frac{F(\omega,T)}{\omega} \,{\rm Tr}_{\alpha}\,\left[k_z\left(
U^{(12)}\,S^{(1)} S^{(2)}\right. \right.$$ \begin{equation} \left. \left.+
\,U^{(21)}\,{S^{(2)} S^{(1)}} \right) \right]\;\label{eqcas}.\end{equation}
Recalling that, according to Eq. (\ref{scapos}), the scattering
matrices $S^{(A)}$ depend on the mutual positions of the plates,
it is easy to verify that the above equilibrium force $F_z^{(\rm
eq)}$ has an associated free energy $F(a,T)$ ($F_z^{(\rm
eq)}=\partial F(a,T)/\partial a$) equal to: \begin{equation} F(a,T)=2 \,{\rm
Im} \sum_{\omega \ge 0} \frac{F(\omega,T)}{\omega} \,{\rm
Tr}_{\alpha} \log (1-S^{(1)} S^{(2)})\;.\label{freen}\end{equation} Eqs.
(\ref{eqcas}) and (\ref{freen}) coincide with the equilibrium
expressions, as derived within the scattering approach
\cite{genet}. Putting everything together, after removing from Eq.
(\ref{unrbis}) the divergent contribution proportional
to $A^{(0)}(T_1)+A^{(0)}(T_2)$, we obtain the following new {\it
exact} expression for the renormalized Casimir force between the
plates: \begin{equation} F_z^{(\rm neq)}(T_1,T_2)=\frac{ F_z^{(\rm
eq)}(T_1)+F_z^{(\rm eq)}(T_2)}{2}+\,\Delta F_z^{(\rm
neq)}(T_1,T_2)\;.\label{renneq}\end{equation} Some comments are now in order.
We note first of all that the quantities $ F_z^{(\rm eq)}(T)$ and
$\Delta F_z^{(\rm neq)}(T_1,T_2)$ are both {\it finite}. Indeed,
as we said earlier, our expression for $F_z^{(\rm eq)}(T)$
coincides with the known scattering-approach expression for the
equilibrium Casimir force, which has been shown to be finite in
previous studies \cite{genet}. As to $\Delta F_z^{(\rm
neq)}(T_1,T_2)$, it is apparent from Eq. (\ref{neqf}) that this
quantity is finite, thanks to the Boltzmann factors
$n(\omega,T_i)$. We also note that our result has the same general
structure as the formula derived in Refs.\cite{antezza}, for the
simpler case of two plane-parallel plates. Analogously to that
case, we indeed see from Eq. (\ref{renneq}) that the
non-equilibrium force is the sum of the average of the equilibrium
forces, for the temperatures $T_1$ and $T_2$, plus a contribution
$\Delta F_z^{(\rm neq)}(T_1,T_2)$, that vanishes for $T_1=T_2$
(see Eq. (\ref{neqf})). Moreover, it is interesting to observe
that even for $T_1 \neq T_2$, the quantity $\Delta F_z^{(\rm
neq)}$, being antisymmetric in the scattering matrices of the two
plates, vanishes if the two plates have identical scattering
matrices. Such a case is realized, for example, if the two plates
are made of the same material and if their profiles are specularly
symmetric with respect to the $(x,y)$ plane. When this happens,
the non-equilibrium Casimir force is just the average of the
equilibrium forces, for the two temperatures of the plates. An
analogous statement can be found in the third of Refs.
\cite{antezza}. We can easily verify that in the case of
plane-parallel homogeneous dielectric plates our general formula
Eq. (\ref{renneq}) reproduces the result of Refs. \cite{antezza}.
In the flat case the scattering matrices of the plates are
diagonal, and can be taken to be of the form
\begin{eqnarray}
S^{(1)}_{\alpha
\alpha'} &=& \delta_{\alpha \alpha'} R_{\alpha}^{(1)}\;,\nonumber\\
S^{(2)}_{\alpha \alpha'} &=& \delta_{\alpha \alpha'} R_{\alpha}^{(2)}\,e^{2 i k_z
a}\;,\label{plane}
\end{eqnarray}
where $R_{\alpha}^{(A)}$ denote the familiar Fresnel reflection
coefficients. When these diagonal scattering matrices are plugged
into Eq. (\ref{eqcas}), one obtains: $$ F_z^{(\rm
eq)}(T)=4\,{\cal A}\,{\rm Re} \sum_{\omega \ge 0}
\frac{F(\omega,T)}{\omega} \sum_{\alpha} k_z
\frac{R_{\alpha}^{(1)}\,R_{\alpha}^{(2)}\,e^{2 i k_z
a}}{1-R_{\alpha}^{(1)}\,R_{\alpha}^{(2)}\,e^{2 i k_z
a}}$$
\begin{equation}
=4
{\cal A} \,{\rm Re} \sum_{\omega \ge 0} \frac{F(\omega,T)}{\omega}
\sum_{\alpha} k_z \left[\frac{e^{-2 i k_z
a}}{R_{\alpha}^{(1)}\,R_{\alpha}^{(2)}} -1\right]^{-1}\;.
\label{lifs}\end{equation} In the limit of infinite plates, when
$$\frac{1}{\cal A}\sum_{n_x,n_y}\rightarrow \int \frac{d^2{\bf k}_{\perp}}{(2
\pi)^2}\;,$$ the above formula reproduces the well known Lifshitz
formula \cite{parse} for the Casimir force between two dielectric
plane-parallel slabs. On the other hand, when the scattering
matrices in Eq. (\ref{plane}) are substituted into Eq.
(\ref{neqf}), one finds:
$$\Delta F_z^{(\rm neq)}(T_1,T_2)={\cal A} \times {\hbar}\sum_{\omega
>0}[n(\omega,T_1)-n(\omega,T_2)]$$ $$\times \sum_{\alpha}\left[{\rm Re}(k_z)
\frac{|R_{\alpha}^{(2)}|^2-|R_{\alpha}^{(1)}|^2
}{|1-R_{\alpha}^{(1)}\,R_{\alpha}^{(2)}\,e^{2 i k_z
a}|^2}-2 \,{\rm Im}(k_z)\,e^{-2 a{\rm Im}(k_z)}\right.$$
\begin{equation}\left.
\times \frac{{\rm Im}(R_{\alpha}^{(1)}){\rm Re}(R_{\alpha}^{(2)})-
{\rm Re}(R_{\alpha}^{(1)}){\rm Im}(R_{\alpha}^{(2)})}{|1-R_{\alpha}^{(1)}\,R_{\alpha}^{(2)}\,e^{2 i k_z
a}|^2}\right]\;.\label{delplane}\end{equation}
After we substitute Eqs. (\ref{lifs}) and (\ref{delplane}) into
Eq. (\ref{renneq}), and upon taking the limit of infinite plates,
one finds that the result coincides with the non-equilibrium
Casimir force computed in Refs.\cite{antezza}.
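For the reader's convenience we also indicate, in the Python sketch below, how the per-mode factor $k_z\,[e^{-2 i k_z a}/(R^{(1)}_{\alpha}R^{(2)}_{\alpha})-1]^{-1}$ appearing in Eq. (\ref{lifs}) can be evaluated for two identical, semi-infinite dielectric slabs. The sketch is purely illustrative: it uses the standard Fresnel reflection coefficients of a half-space with an arbitrarily chosen permittivity, and it does not perform the frequency and wave-vector sums required for the full force.
\begin{verbatim}
# Per-mode factor of Eq. (lifs) for two identical dielectric half-spaces.
import numpy as np

c = 299792458.0                          # speed of light in m/s

def fresnel(omega, kperp, eps):
    """Fresnel reflection coefficients (s and p) of a dielectric half-space."""
    kz  = np.sqrt(complex(omega**2 / c**2 - kperp**2))     # vacuum k_z
    kzm = np.sqrt(eps * omega**2 / c**2 - kperp**2 + 0j)   # k_z in the medium
    Rs = (kz - kzm) / (kz + kzm)
    Rp = (eps * kz - kzm) / (eps * kz + kzm)
    return kz, Rs, Rp

a, omega = 100e-9, 1.0e15                # gap width (m) and frequency (rad/s)
eps = 2.0 + 0.1j                         # assumed (hypothetical) permittivity
for kperp in (0.5 * omega / c, 2.0 * omega / c):   # propagating / evanescent
    kz, Rs, Rp = fresnel(omega, kperp, eps)
    for R in (Rs, Rp):
        factor = kz / (np.exp(-2j * kz * a) / (R * R) - 1.0)
        print(kperp, factor)
\end{verbatim}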
\subsection{Power of heat transfer}
We consider now the total power $W$ of heat transfer between the
plates. This requires that we evaluate the statistical average of
the $(x,y)$ integral of the $z$-component $S_z$ of the Poynting
vector in the gap between the plates. A simple computation shows
that: \begin{equation} { {\cal O}}^{(KK')}\,[S_{z}]=\frac{c^2 k_z}{4 \pi
\omega}\,(-1)^K \left( \delta_{KK'}\,\Pi^{(\rm pw)}+
\delta_{KJ(K')}\,\Pi^{(\rm ew)}\right)\;.\label{Sz}\end{equation} When this
expression is plugged into Eq. (\ref{aver}) we obtain: \begin{equation} W
=\sum_{\omega
>0}[F(\omega,T_1)H(S^{(1)},S^{(2)})-F(\omega,T_2)H(S^{(2)},S^{(1)})]\;,\label{hetra}\end{equation}
where $H(S^{(A)},S^{(B)})$ is the quantity $$ H(S^{(A)},S^{(B)})=
{\rm Tr_{\alpha}}\! \left[
U^{(AB)}\left(\Sigma^{\rm (pw)}_{-1} -S^{(A)}\Sigma^{\rm
(pw)}_{-1}S^{(A)\dagger}\right. \right.
$$
$$
\left.+S^{(A)}\Sigma^{\rm (ew)}_{-1}-\Sigma^{\rm
(ew)}_{-1}S^{(A)\dagger}\right) U^{(AB)\dagger}\,\left(
\Sigma^{\rm (pw)}_{1}\right.$$ \begin{equation} \left.\left. - S^{(B)\dagger}
\Sigma^{\rm (pw)}_{1}S^{(B)}- \Sigma^{\rm (ew)}_{1}
S^{(B)}+S^{(B)\dagger} \Sigma^{\rm (ew)}_{1} \right)\right]
\;.\label{hqua}\end{equation} By a lengthy computation, it is possible to
verify that the quantity $H(S^{(1)},S^{(2)})$ is symmetric under
the exchange of $S^{(1)}$ and $S^{(2)}$: \begin{equation}
H(S^{(1)},S^{(2)})=H(S^{(2)},S^{(1)})\;.\end{equation} By virtue of this
identity, the above formula for the power of heat transfer can be
rewritten as: \begin{equation} W =\hbar \sum_{\omega
>0}\omega [n(\omega,T_1)-n(\omega,T_2)]H(S^{(1)},S^{(2)}) \;.\label{hetrabis}\end{equation}
We stress once again that this formula provides an {\it exact}
expression for $W$ in terms of the scattering matrices of the
surfaces. We can consider the simple special case of two planar
slabs. When the scattering matrices for two planar surfaces, given
in Eq. (\ref{plane}), are substituted into Eq. (\ref{hetrabis}),
the expression for the power of heat transfer takes the following
simple form:
$$W={\cal A} \times {\hbar}\sum_{\omega
>0} \omega [n(\omega,T_1)-n(\omega,T_2)]$$ $$\times
\sum_{\alpha}\left[\theta(k_z^2)
\frac{(1-|R_{\alpha}^{(1)}|^2)(1-|R_{\alpha}^{(2)}|^2)
}{|1-R_{\alpha}^{(1)}\,R_{\alpha}^{(2)}\,e^{2 i k_z
a}|^2}\right.$$ \begin{equation}\left. + \,\theta(-k_z^2)\,4 \,e^{-2 a{\rm
Im}(k_z)}\,\frac{{\rm Im}(R_{\alpha}^{(1)}){\rm
Im}(R_{\alpha}^{(2)})
}{|1-R_{\alpha}^{(1)}\,R_{\alpha}^{(2)}\,e^{2 i k_z
a}|^2}\right]\;.\label{heatplane}\end{equation} In the limit of large plates,
the above expression coincides with the known formula for the
power of heat transfer between two infinite plane-parallel
dielectric slabs separated by an empty gap \cite{volokitin}.
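Analogously, the $\omega$- and ${\bf k}_{\perp}$-resolved ``transmission'' factor that multiplies $\hbar \omega\,[n(\omega,T_1)-n(\omega,T_2)]$ in Eq. (\ref{heatplane}) can be evaluated directly. The Python sketch below is purely illustrative: the reflection coefficients are hypothetical numbers (in practice they would be supplied, e.g., by a Fresnel routine such as the one sketched in the previous subsection).
\begin{verbatim}
# Transmission factor of Eq. (heatplane) for one frequency and wave-vector,
# in the propagating and evanescent sectors (R1, R2 are assumed to be given).
import numpy as np

c = 299792458.0

def transmission(omega, kperp, R1, R2, a):
    kz = np.sqrt(complex(omega**2 / c**2 - kperp**2))
    den = abs(1.0 - R1 * R2 * np.exp(2j * kz * a))**2
    if omega / c > kperp:                                   # propagating sector
        return (1 - abs(R1)**2) * (1 - abs(R2)**2) / den
    return 4.0 * np.exp(-2.0 * a * kz.imag) * R1.imag * R2.imag / den

a, omega = 50e-9, 1.0e14
R1 = R2 = 0.4 + 0.3j                      # hypothetical reflection coefficients
for kperp in np.linspace(0.1, 5.0, 5) * omega / c:
    print(kperp, transmission(omega, kperp, R1, R2, a))
\end{verbatim}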
\section{Conclusions}
In conclusion, we have developed a new {\it exact} method for
computing Casimir forces and the power of heat transfer between
two plates of arbitrary compositions and shapes at different
temperatures, in vacuum. The method is based on a generalization
to systems out of thermal equilibrium of the scattering
approach recently used to study the Casimir effect in non-planar
geometries \cite{emig,kenneth,genet}. Similarly to the equilibrium
case, we find that also out of thermal equilibrium the dependence
on shape and material appears only through the scattering matrices
of the intervening bodies. The expressions that have been obtained
are exact, and lend themselves to numerical or perturbative
computations once the scattering matrices for the desired geometry
are evaluated. Our results provide the tool for a systematic
investigation of the shape dependence of thermal proximity effects
in nanostructured surfaces, that could be of interest for future
applications to nanotechnology and to photonic crystals. In a
successive publication \cite{bimonte}, we shall use the formulae
derived in this paper to compute the Casimir force and the power
of heat transfer between two periodic dielectric gratings, like
those considered in the last of Refs.~\cite{genet}. The explicit form
of the scattering matrices for rectangular gratings has been
worked out there, on the basis of a suitable generalization of the
Rayleigh expansion. At any finite order $N$ of the Rayleigh
expansion, the scattering matrices $S_{\alpha,\alpha'}$ are of the
form \begin{equation} S_{\alpha,\alpha'} \equiv
\hat{S}(\tilde{{k}}_x,k_y)\,\delta({\tilde k}_x-{\tilde k}'_x)\,
\delta(k_y-k'_y)\;,\end{equation} where ${\hat S}(\tilde{{k}}_x,k_y)$ is a
square matrix of dimension $2( 2N+1)$, $\tilde{{k}}_x$ belongs to
the first Brillouin zone, and $k_y$ is unrestricted. For
scattering matrices of this form, our explicit formulae for the
Casimir force and the power of heat transfer can be evaluated
numerically quite easily, at least for sufficiently small $N$.\\
\noindent {\it Acknowledgements} The author thanks the ESF
Research Network CASIMIR for financial support.
\section{Introduction}
\label{sec:intro}
In this paper we derive and analyze a numerical method for minimizing a class of energies that arise in economics (optimal location problems), electrical engineering (quantization), and materials science (crystallization and pattern formation). Applications are discussed further in \S\ref{sec:App}. These energies can be formulated either in terms of atomic measures and the Wasserstein distance, equation \eqref{eqn:energy_fundamental}, or in terms of generalized Voronoi diagrams, equation \eqref{eqn:energy}. These formulations are equivalent, but \eqref{eqn:energy_fundamental} is more common in the applied analysis literature (e.g., \cite{BouchitteEtAl}, \cite{ButtazzoSantambrogio}) and \eqref{eqn:energy} is more common in the computational geometry and quantization literature (e.g., \cite{Du1999}, \cite{GershoGray}). Importantly for us, formulation \eqref{eqn:energy} is much more convenient for numerical work. We work with formulation \eqref{eqn:energy} throughout the paper after first deriving it from \eqref{eqn:energy_fundamental} in \S\ref{sec:1.1} and \S\ref{sec:1.2}. We start from \eqref{eqn:energy_fundamental} rather than directly from \eqref{eqn:energy} in order to highlight the connection between the different communities.
\subsection{Wasserstein formulation of the energy}
\label{sec:1.1}
Let $\Omega$ be a bounded subset of $\mathbb{R}^d$, $d \ge 2$, and $\rho: \Omega \to [0,\infty)$ be a given density on $\Omega$. Let $f:[0,\infty)\to \mathbb{R}$. We consider the following class of discrete energies, which are defined on sets of weighted points $\{\ensuremath{\bm{x}}_i,m_i \}_{i=1}^N \in (\Omega \times (0,\infty))^N$, $\ensuremath{\bm{x}}_i \ne \ensuremath{\bm{x}}_j$ if $i \ne j$:
\begin{equation}
\label{eqn:energy_fundamental}
F\left(\{\bm{x}_i,m_i\}\right)=\sum_{i=1}^N f(m_i)+d^2\left(\rho,\sum_{i=1}^N m_i\delta_{x_i}\right).
\end{equation}
The second term is the square of the Wasserstein distance between the density $\rho$ and the atomic measure $\sum_{i=1}^N m_i\delta_{x_i}$. It is defined below in equation \eqref{eq:Wass}.
This energy models, e.g., the problem of optimally locating resources (such as recycling points, polling stations, or distribution centres) in a city or country $\Omega$ with population density $\rho$. The points $\ensuremath{\bm{x}}_i$ are the locations of the resources and the weights $m_i$ represent their size.
The first term of the energy penalizes the cost of building or running the resources. The second term penalizes the total distance between the population and the resources.
In our case the Wasserstein distance $d(\cdot,\cdot)$ can be defined by
\begin{multline}
\label{eq:Wass}
d^2\left(\rho,\sum_{i=1}^Nm_i\delta_{x_i}\right) = \\
\min_{T:\Omega \to \{ \ensuremath{\bm{x}}_i \}_{i=1}^N} \left\{ \sum_{i=1}^N \int_{T^{-1}(\ensuremath{\bm{x}}_i)} |\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_i|^2 \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}} : \int_{T^{-1}(\ensuremath{\bm{x}}_i)} \rho \, d \ensuremath{\bm{x}} = m_i \; \forall \; i \right\}.
\end{multline}
See, e.g., \cite{Villani}.
In two dimensions the minimization problem \eqref{eq:Wass} can be interpreted as the following optimal partitioning problem: The map $T$ partitions, e.g., a city $\Omega$ with population density $\rho$ into $N$ regions, $\{ T^{-1}(\ensuremath{\bm{x}}_i) \}_{i=1}^N$. Region $T^{-1}(\ensuremath{\bm{x}}_i)$ is assigned to the resource (e.g., polling station) located at point $\ensuremath{\bm{x}}_i$ of size $m_i$. The optimal map $T$ does this in such a way as to minimize the total distance squared between the population and the resources, subject to the constraint that each resource can meet the demand of the population assigned to it.
The Wasserstein distance is well-defined provided that the weights $m_i$ are positive and satisfy the mass constraint
\begin{equation}
\label{eq:con}
\sum_i m_i = \int_{\Omega} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}.
\end{equation}
It can be shown that $d(\cdot,\cdot)$ is a metric on measures and that it metrizes weak convergence of measures, meaning that $d(\rho,\rho_n) \to 0$ if and only if $\rho_n$ converges weakly to $\rho$. See, e.g., \cite[Ch.~7]{Villani}.
It is not necessary to be familiar with measure theory or the Wasserstein distance since we will soon reformulate the minimization problem $\min F$ as a more elementary computational geometry problem involving generalized Voronoi diagrams (power diagrams).
The given data for the problem are $\Omega$, $f$, $\rho$. We assume that $f$ is twice differentiable and
\begin{equation}
\label{eq:assump}
\Omega \textrm{ is convex}, \quad f'' \le 0, \quad f(0) \ge 0, \quad \rho \in C^0(\Omega), \quad \rho \ge 0.
\end{equation}
We also exclude linear functions $f(m)=a m$, $a \in \mathbb{R}$, since otherwise the first term of the energy is a constant,
$\sum_i f(m_i) = a \sum_i m_i = a \int_\Omega \rho \, d \ensuremath{\bm{x}}$, and $F$ has no minimizer (see below). However, affine functions $f(m)=am + b$, $b > 0$, are admissible. The necessity and limitations of assumptions \eqref{eq:assump} are discussed in \S \ref{sec:Lim}.
The number $N$ of weighted points is \emph{not} prescribed and is an unknown of the problem: The goal is to minimize $F$ over sets of weighted points $\{\ensuremath{\bm{x}}_i,m_i \}_{i=1}^N$, subject to the constraint \eqref{eq:con}, and over $N$. The optimal value of $N$ is determined by the competition between the two terms of $F$. Amongst finite $N$, the first term is minimized when $N=1$, due to the concavity of $f$.
The infimum of the second term is zero, which is obtained in the limit $N \to \infty$ (this is because the measure $\rho \, d \ensuremath{\bm{x}}$ can be approximated arbitrarily well with dirac masses, e.g., by using a convergent quadrature rule, and because the Wasserstein distance $d(\cdot,\cdot)$ metrizes weak convergence of measures).
Energies of the form of $F$ and generalizations have received a great deal of attention in the applied analysis literature, e.g., \cite{ButtazzoSantambrogio} and \cite{BouchitteEtAl} study the existence and properties of minimizers for broad classes of optimal location energies.
There is far less work, however, on numerical methods for such problems. Exceptions include the case of \eqref{eqn:energy_fundamental} with $f=0$, which has been well-studied numerically. This is discussed in \S \ref{sec:CVTs}.
\subsection{Power diagram formulation of the energy}
\label{sec:1.2}
Minimizing $F$ numerically is challenging due to the presence of the Wasserstein term, which is defined implicitly in terms of the solution to the optimal transportation problem \eqref{eq:Wass}. This is an infinite-dimensional linear programming problem in which every point in $\Omega$ has to be assigned to one of the $N$ weighted points $(\ensuremath{\bm{x}}_i,m_i)$. Therefore even evaluating the energy $F$ is expensive. One option is to discretize $\rho$ so that \eqref{eq:Wass} becomes a finite-dimensional linear programming problem. This is still costly, however, and it turns out that by exploiting a deep connection between optimal transportation theory and computational geometry we can reformulate the minimization problem $\min F$ in such a way that we can avoid solving \eqref{eq:Wass} altogether.
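To illustrate what such a discretization involves (and why we prefer to avoid it), the following Python sketch approximates $\rho$ by point masses on a grid and solves the resulting finite linear program with an off-the-shelf solver. The domain, density, sites and masses below are arbitrary choices made only for illustration.
\begin{verbatim}
# Brute-force discretization of the transport problem defining d^2:
# rho is replaced by point masses on a grid and the finite linear program
# is solved with scipy.  Purely illustrative; the method of this paper
# avoids this step entirely.
import numpy as np
from scipy.optimize import linprog

# grid discretization of Omega = [0,1]^2 with rho = 1 (total mass 1)
n = 15
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
grid = np.column_stack([xs.ravel(), ys.ravel()])
mass_grid = np.full(len(grid), 1.0 / len(grid))

# three weighted points (x_i, m_i) with masses summing to the total mass
sites = np.array([[0.2, 0.3], [0.7, 0.8], [0.8, 0.2]])
m = np.array([0.5, 0.3, 0.2])

M, N = len(grid), len(sites)
cost = ((grid[:, None, :] - sites[None, :, :])**2).sum(-1).ravel()  # |x-x_i|^2

# equality constraints: grid marginals = mass_grid, site marginals = m
A_eq = np.zeros((M + N, M * N))
for j in range(M):
    A_eq[j, j * N:(j + 1) * N] = 1.0
for k in range(N):
    A_eq[M + k, k::N] = 1.0
b_eq = np.concatenate([mass_grid, m])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("approximate squared Wasserstein distance:", res.fun)
\end{verbatim}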
First we need to introduce some terminology from computational geometry.
The \emph{power diagram} associated to a set of weighted points
$\{ \ensuremath{\bm{x}}_i , w_i \}_{i=1}^N$, where $\ensuremath{\bm{x}}_i \in \Omega$, $w_i \in \mathbb{R}$, is the collection of subsets $P_i \subseteq \Omega$ defined by
\begin{equation}
\label{eq:pd}
P_i=\{\bm{x}\in\Omega : \left|\bm{x}-\bm{x}_i\right|^2-w_i\le\left|\bm{x}-\bm{x}_k\right|^2-w_k \; \forall \; k\}.
\end{equation}
The individual sets $P_i$ are called \emph{power cells} (or cells) of the power diagram.
The power diagram is sometimes called the Laguerre diagram, or the radical Voronoi diagram.
If all the weights $w_i$ are equal we obtain the standard Voronoi diagram, see Figure \ref{fig:vorpow}. From equation \eqref{eq:pd} we see that the power cells $P_i$ are obtained by intersecting half planes and are therefore convex polytopes (or the intersection of convex polytopes with $\Omega$ in the case of cells that touch $\partial \Omega$): in dimension $d=3$ the cells are convex polyhedra, in dimension $d=2$ the cells are convex polygons. Note that some of the cells may be empty. The classical reference on generalized Voronoi diagrams is \cite{Okabe2000}.
\begin{figure}
\includegraphics[width=0.5\textwidth]{ex_vor}
\includegraphics[width=0.5\textwidth]{ex_pow}
\caption{\label{fig:vorpow}A comparison of a standard Voronoi diagram (left) with a power diagram (right). The location of the generators in both cases is the same, but the power diagram carries additional structure via the weights associated with each generator. The size of the weights in the power diagram is indicated by the radii of the dashed circles. Notice that in the power diagram it is possible for the generator to lie outside the cell or for the cell associated with a generator to be empty (the Voronoi diagram has 20 cells and the power diagram has 19 cells). The geometrical construction of the power diagram in terms of the generator locations and the circles is simple: for each point $\bm{x}$, construct a tangent line from $\bm{x}$ to the circle centred at $\bm{x}_i$ with radius $r_i$; the length of this tangent line is called the \emph{power} of the point $\bm{x}$ with respect to that generator, and the point $\bm{x}$ belongs to the power cell whose generator has minimum power. The weights of the generators in this case are $w_i=-r_i^2$.}
\end{figure}
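A direct consequence of definition \eqref{eq:pd} is that assigning a point $\bm{x}$ to its power cell only requires comparing the quantities $|\bm{x}-\bm{x}_i|^2-w_i$. The following Python fragment (a brute-force assignment with arbitrary generators, given purely for illustration) makes this explicit and shows how equal weights reduce to the ordinary Voronoi assignment.
\begin{verbatim}
# Assigning a point to its power cell directly from the definition:
# x belongs to the cell of the generator minimizing |x - x_i|^2 - w_i.
import numpy as np

def power_cell_index(x, X, w):
    """Index of the power cell containing x (generators X, weights w)."""
    return int(np.argmin(((X - x)**2).sum(axis=1) - w))

X = np.array([[0.2, 0.3], [0.7, 0.8], [0.8, 0.2]])   # generator positions
w = np.array([0.00, 0.05, -0.02])                    # generator weights

pt = np.array([0.5, 0.5])
print(power_cell_index(pt, X, w))        # power-diagram assignment
print(power_cell_index(pt, X, 0 * w))    # Voronoi assignment (equal weights)
\end{verbatim}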
Given weighted points $\{ \ensuremath{\bm{x}}_i, m_i \}_{i=1}^N \in (\Omega \times (0,\infty))^N$, let $T_*$ be the minimizer in \eqref{eq:Wass}.
The optimal transport regions $\{ T_*^{-1}(\ensuremath{\bm{x}}_i) \}_{i=1}^N$ form a power diagram:
There exists $\{w_i\}_{i=1}^N \in \mathbb{R}^N$
such that the power diagram $\{P_i \}_{i=1}^N$ generated by $\{ \ensuremath{\bm{x}}_i, w_i \}_{i=1}^N$ satisfies $P_i = T_*^{-1}(\ensuremath{\bm{x}}_i)$ for all $i$
(up to sets of $\rho \, d \ensuremath{\bm{x}}$--measure zero). Conversely, if
$\{ P_i \}_{i=1}^N$ is any power diagram with generators $\{ \ensuremath{\bm{x}}_i, w_i \}_{i=1}^N$, then
\begin{equation}
\label{eq:WassPower}
d^2\left(\rho,\sum_{i=1}^N m_i\delta_{x_i}\right) = \sum_{i=1}^N \int_{P_i} |\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_i |^2 \rho \, d \ensuremath{\bm{x}} \quad \textrm{where} \quad
m_i = \int_{P_i} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}.
\end{equation}
These results can be shown using Brenier's Theorem \cite[Thm.~2.12]{Villani} or the Kantorovich Duality Theorem \cite[Thm.~1.3]{Villani}.
See \cite[Thm.~1 \& 2]{Merigot} or \cite[Prop.~4.4]{BournePeletierRoper}. As far as we are aware these results first appeared in \cite{Aurenhammer98}, although not stated in the language of Wasserstein distances.
Equation \eqref{eq:WassPower} gives an explicit formula for the Wasserstein distance, without the need to solve a linear programming problem, provided that the weights $m_i$ can be written as $\int_{P_i} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}$ for some power diagram $\{ P_i \}$ (with generating points $\ensuremath{\bm{x}}_i$). In practice actually finding this power diagram involves solving another linear programming problem
(the generating weights $w_i$ come from the solution to the dual linear programming problem to \eqref{eq:Wass}, see \cite[Prop.~4.4]{BournePeletierRoper}), but in our case this can be avoided since we are interested in minimizing $F$ rather than evaluating it at any given point.
We use this connection between the Wasserstein distance and power diagrams to rewrite the energy $F$ in new variables, changing variables from $\{ \ensuremath{\bm{x}}_i, m_i \}_{i=1}^N \in (\Omega \times (0,\infty))^N$ to $\{ \ensuremath{\bm{x}}_i, w_i \}_{i=1}^N \in (\Omega \times \mathbb{R})^N$.
By the results above, minimizing $F$ is equivalent to minimizing
\begin{equation}
\label{eqn:energy}
\boxed{ E \left( \{\ensuremath{\bm{x}}_i,w_i\} \right) = \sum_{i=1}^N \left\{ f(m_i)+\int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2 \rho(\ensuremath{\bm{x}}) \,d \ensuremath{\bm{x}} \right\} }
\end{equation}
where $\{P_i \}$ is the power diagram generated by $\{\ensuremath{\bm{x}}_i,w_i\}$ and $m_i := \int_{P_i} \rho \, d \ensuremath{\bm{x}}$.
The equivalence of $E$ and $F$ is in the following sense:
Given $\{ \ensuremath{\bm{x}}_i, w_i \}_{i=1}^N \in (\Omega \times \mathbb{R})^N$ and the corresponding power diagram $\{ P_i \}_{i=1}^N$, equation \eqref{eq:WassPower} implies that
\[
E \left( \{\ensuremath{\bm{x}}_i,w_i\} \right) = F \left( \left\{\ensuremath{\bm{x}}_i, m_i \right\} \right)
\quad \textrm{for} \quad
m_i = \int_{P_i} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}.
\]
Conversely, it can be shown (e.g., \cite[Prop.~4.4]{BournePeletierRoper}) that given any $\{ \ensuremath{\bm{x}}_i, m_i \}_{i=1}^N \in (\Omega \times (0,\infty))^N$, there exists
$\{ w_i \}_{i=1}^N \in \mathbb{R}^N$
such that the power diagram $\{ P_i \}_{i=1}^N$ generated by $\{ \ensuremath{\bm{x}}_i, w_i \}_{i=1}^N$ satisfies $\int_{P_i} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}} = m_i$ for all $i$.
Then it follows from \eqref{eq:WassPower} that
$F \left( \{\ensuremath{\bm{x}}_i,m_i\} \right) = E \left( \{\ensuremath{\bm{x}}_i,w_i\} \right)$. The weights $\{ w_i \}_{i=1}^N \in \mathbb{R}^N$ are unique up to the addition
of a constant; it is easy to see from \eqref{eq:pd} that $\{ w_i + c \}_{i=1}^N$ and $\{ w_i \}_{i=1}^N$ generate the same power diagram.
While the energies $E$ and $F$ are equivalent, from a numerical point of view it is far more practical to work with $E$ since, unlike $F$, it can be easily evaluated: computing power diagrams is easy, while solving the linear programming problem \eqref{eq:Wass} is not. In the rest of the paper we focus on finding local minimizers of $E$.
\subsection{Centroidal power diagrams and a generalized Lloyd algorithm}
\label{sec:genLloyd}
From now on we will write $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})=((\ensuremath{\bm{x}}_1, \ldots, \ensuremath{\bm{x}}_N),(w_1, \ldots , w_N)) \in \Omega^N \times \mathbb{R}^N$ to denote the generators of a power diagram. In this section we introduce an algorithm for finding critical points of $E=E(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$.
Let $\mathcal{G}^N \subset \Omega^N \times \mathbb{R}^N$ be the smaller class of generators such that no two generators coincide and there are no empty cells:
\begin{equation}
\label{eqn:G}
\mathcal{G}^N = \{ (\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \in \Omega^N \times \mathbb{R}^N : (\ensuremath{\bm{x}}_i,w_i)\ne(\ensuremath{\bm{x}}_j,w_j) \textrm{ if } i \ne j, \, P_i \ne \emptyset \;
\forall \; i \}.
\end{equation}
Define $\ensuremath{\boldsymbol{\xi}} : \mathcal{G}^N \to \Omega^N$ and $\ensuremath{\boldsymbol{\omega}} : \mathcal{G}^N \to \mathbb{R}^N$ by
\begin{equation}
\nonumber
\label{eqn:lloydmap}
\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) := (\ensuremath{\boldsymbol{\xi}}_1(\ensuremath{\bm{X}},\ensuremath{\bm{w}}),\ldots,\ensuremath{\boldsymbol{\xi}}_N(\ensuremath{\bm{X}},\ensuremath{\bm{w}})), \quad
\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) := (\omega_1(\ensuremath{\bm{X}},\ensuremath{\bm{w}}),\ldots,\omega_N(\ensuremath{\bm{X}},\ensuremath{\bm{w}})),
\end{equation}
where
\begin{equation}
\ensuremath{\boldsymbol{\xi}}_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) := \frac{1}{m_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})} \int_{P_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})} \ensuremath{\bm{x}} \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}, \quad
\omega_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) := -f'(m_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})).
\end{equation}
Here $P_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ is the $i$-th power cell in the power diagram generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ and $m_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ is its mass:
\[
m_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \int_{P_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}.
\]
Note that $\ensuremath{\boldsymbol{\xi}}_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ is the centroid (or centre of mass) of the $i$-th power cell. We will sometimes denote this by $\overline{\ensuremath{\bm{x}}}_i$.
In \S \ref{sec:deriv} we show that critical points of $E$ are fixed points of the Lloyd maps:
\[
\nabla E(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \ensuremath{\bm{0}}
\quad \iff \quad
(\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}),\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})) = (\ensuremath{\bm{X}},\ensuremath{\bm{w}})
\]
(up to the addition of a constant vector to $\ensuremath{\bm{w}}$ -- see Proposition \ref{prop:critE} for a precise statement).
The condition $\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})=\ensuremath{\bm{X}}$ means that the power diagram generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ has the property that
$\ensuremath{\bm{x}}_i$ is the centroid of its power cell $P_i$ for all $i$. We call these special types of power diagrams \emph{centroidal power diagrams}. This is in analogy with centroidal Voronoi tessellations (CVTs), which are special types of Voronoi diagrams with the property that the generators of the Voronoi diagram are the centroids of the Voronoi cells. See \cite{Du1999} for a nice survey of CVTs. Note also that CVTs can be viewed as a special type of centroidal power diagram where all the weights are equal, $w_i=c$ for all $i$, $c \in \mathbb{R}$, since power diagrams with equal weights are just Voronoi diagrams.
The following algorithm is an iterative method for finding fixed points of $(\ensuremath{\boldsymbol{\xi}},\ensuremath{\boldsymbol{\omega}})$, and therefore critical points of $E$:
\begin{algorithm}[H]
\label{algo_lloyd}
\textbf{Initialization:} Choose $N_0 \in \mathbb{N}$ and $(\ensuremath{\bm{X}}^0,\ensuremath{\bm{w}}^0) \in \mathcal{G}^{N_0}$. \newline
\textbf{At each iteration:}
\begin{itemize}
\item[\textbf{(1)}] Update the generators: Given $(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k) \in \mathcal{G}^{N_k}$, compute the corresponding power diagram
and define $(\ensuremath{\bm{X}}^{k+1},\ensuremath{\bm{w}}^{k+1}) \in \Omega^{N_k} \times \mathbb{R}^{N_k}$ by
\[
\ensuremath{\bm{X}}^{k+1} = \ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k), \quad \ensuremath{\bm{w}}^{k+1} = \ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k).
\]
\item[\textbf{(2)}] Remove empty cells: Compute the power diagram $\{P_i^{k+1}\}_{i=1}^{N_k}$ generated by $(\ensuremath{\bm{X}}^{k+1},\ensuremath{\bm{w}}^{k+1})$ and let
\[
J = \left\{ j \in \{1,\ldots,N_k\} : P^{k+1}_j = \emptyset \right\}.
\]
For all $j \in J$, remove $(\ensuremath{\bm{x}}_j^{k+1},w_j^{k+1})$ from the list of generators. Then replace $N_k$ with $N_{k+1}=N_k - |J|$.
\end{itemize}
\caption{\label{algo:genLloyd}The generalized Lloyd algorithm for finding critical points of $E$}
\end{algorithm}
In particular this algorithm computes centroidal power diagrams, and it is a generalization of Lloyd's algorithm \cite{Lloyd}, which is a popular method for computing centroidal Voronoi tessellations. See \cite{Du1999}.
The classical Lloyd algorithm is recovered from our generalized Lloyd algorithm by simply taking the weights to be constant at each iteration, e.g., $\ensuremath{\bm{w}}^k = \ensuremath{\bm{0}}$ for all $k$. Due to this relation, we refer to $\ensuremath{\boldsymbol{\xi}}$ and $\ensuremath{\boldsymbol{\omega}}$ as generalized Lloyd maps.
Step (2) of the algorithm means that, given $N_0 \in \mathbb{N}$ and $(\ensuremath{\bm{X}}^0,\ensuremath{\bm{w}}^0) \in \mathcal{G}^{N_0}$, the algorithm can converge to a fixed point $(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \in \mathcal{G}^{N}$ with $N < N_0$. This means that the algorithm can partly correct for an incorrect initial guess $N_0$ (recall that we are minimizing $E(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ over $(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \in \mathcal{G}^N$ and over $N$). It is still possible, however, that the algorithm converges to a local minimizer of $E$, possibly with a non-optimal value of $N$. Note also that the algorithm can eliminate generators, but it cannot create them. Therefore it is impossible for the algorithm to find a global minimizer of $E$ if the initial value of $N_0$ is less than the optimal value. We discuss
strategies for finding global as opposed to local minimizers in \S\ref{sec:implement} and \S\ref{sec:illus}.
Algorithm \ref{algo:genLloyd} was introduced for the special case of $d=2$, $\rho=1$, $f(m) = \sqrt{m}$ in \cite[Sec.~4]{BournePeletierRoper}.
In the current paper we extend it to the broader class of energies \eqref{eqn:energy}, analyze it (prove that it is energy decreasing and that it converges, Theorems \ref{thm:energyDecrease}, \ref{thm:conv}), and implement it in both two and three dimensions. In addition, the derivation here, unlike in \cite{BournePeletierRoper}, is accessible to those not familiar with measure theory and optimal transport theory
since we work with formulation \eqref{eqn:energy} rather than \eqref{eqn:energy_fundamental}.
\subsection{The case $f=0$ and $N$ fixed: CVTs and Lloyd's algorithm}
\label{sec:CVTs}
Setting $f=0$ in \eqref{eqn:energy_fundamental} and fixing $N$ gives the energy
\[
F_N\left(\{\bm{x}_i,m_i\}\right)=d^2\left(\rho,\sum_{i=1}^N m_i\delta_{x_i}\right).
\]
It is necessary to fix $N$ since otherwise this has no minimizer; the infimum is zero, which is obtained in the limit $N \to \infty$ by approximating
$\rho$ with Dirac masses. It can be shown that minimizing $F_N$ is equivalent to minimizing
\[
E_N(\{\ensuremath{\bm{x}}_i\}) = \sum_{i=1}^N \int_{V_i} |\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_i |^2 \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}
\]
where $\{ V_i \}_{i=1}^N$ is the Voronoi diagram generated by $\{ \ensuremath{\bm{x}}_i \}_{i=1}^N$:
\[
V_i= \{ \ensuremath{\bm{x}} \in \Omega : | \ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i | \le | \ensuremath{\bm{x}}-\ensuremath{\bm{x}}_k | \; \forall \; k \}.
\]
See \cite[Sec.~4.1]{BournePeletierRoper}. Numerical minimization of $E_N$ has been well-studied. A necessary condition for minimality is that
$\{ \ensuremath{\bm{x}}_i \}_{i=1}^N$ generates a centroidal Voronoi tessellation (CVT). CVTs can be easily computed using the classical Lloyd algorithm. See, e.g., \cite{Du1999}. Convergence of the algorithm is studied in \cite{Du2006}, \cite{Du1999} and \cite{SabinGray}, among others, and there is a large literature on CVTs and Lloyd's algorithm. However, we are not aware of any work (other than \cite{BournePeletierRoper}) on numerical minimization of $E$ for $f \ne 0$.
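In the Python sketch given after Algorithm \ref{algo:genLloyd}, this special case corresponds to replacing the weight update \texttt{neww.append(-lam / (2.0 * np.sqrt(m)))} by \texttt{neww.append(0.0)} (i.e., taking $f=0$), which keeps the weights at zero and reduces the iteration to the classical Lloyd algorithm for computing CVTs.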
\subsection{Applications}
\label{sec:App}
Energies of the form \eqref{eqn:energy}, or equivalently \eqref{eqn:energy_fundamental}, arise in many applications.
\subsubsection{Simple model of pattern formation: block copolymers}
\label{Subsubsec: block copolymer}
The authors first came in contact with energies of the form \eqref{eqn:energy_fundamental} in a pattern formation problem in materials science
\cite{BournePeletierRoper}. The following energy is a simplified model of phase separation for two-phase materials called block copolymers, for the case where one phase has a much smaller volume fraction than the other:
\begin{equation}
\label{eqn:block}
E \left( \{\ensuremath{\bm{x}}_i,w_i\} \right) = \sum_{i=1}^N \left\{ \lambda m_i^{\frac{d-1}{d}} + \int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2 \,d \ensuremath{\bm{x}} \right\}
\end{equation}
where $m_i = \int_{P_i} 1 \, d \ensuremath{\bm{x}} = | P_i |$ and $d=2$ or $3$.
The measure $\nu = \sum_i m_i \delta_{\ensuremath{\bm{x}}_i}$ represents the minority phase. In three dimensions, $d=3$, this represents $N$ small spheres of the minority phase centred at $\{ \ensuremath{\bm{x}}_i \}_{i=1}^N$. The weights $m_i$ give the relative sizes of the spheres. These spheres are surrounded by a `sea' of the majority phase. In two dimensions, $d=2$, the measure $\nu$ represents $N$ parallel cylinders of the minority phase and $\Omega$ is a cross-section perpendicular to the axes of the cylinders.
The first term of $E$ penalizes the surface area between the two phases and so prefers phase separation ($N=1$), and the second term prefers phase mixing ($N=\infty$). The parameter $\lambda$ represents the repulsion strength between the two phases. Equation \eqref{eqn:block} is the special case of \eqref{eqn:energy} with $\rho=1$ and $f(m)=\lambda m^{\frac{d-1}{d}}$.
This energy can be viewed as a toy model of the popular Ohta-Kawasaki model of block copolymers (see, e.g., \cite{ChoksiPeletierWilliams}). Like the Ohta-Kawasaki energy,
it is non-convex and non-local (in the sense that evaluating $E$ involves solving an auxiliary infinite-dimensional problem).
Unlike the Ohta-Kawasaki energy, however, it is discrete, which makes it much more amenable to numerics and analysis. In general it can be viewed as a simplified model of non-convex, non-local energy-driven pattern formation, and it has applications in materials science outside block copolymers, e.g., to crystallization. It is also connected to the Ginzburg-Landau model of superconductivity \cite[p.~123--124]{BournePeletierTheil}.
In \cite{BournePeletierRoper} it was demonstrated numerically that for $d=2$ minimizers of $E$ tend to a hexagonal tiling as $\lambda \to 0$
(in the sense that the power diagram generated by $\{ \ensuremath{\bm{x}}_i,w_i \}$ tends to a hexagonal tiling). This was proved in \cite{BournePeletierTheil}, and it agrees with block copolymer experiments, where in some parameter regime the minority phase forms hexagonally packed cylinders. It was conjectured in \cite{BournePeletierRoper} that for the case $d=3$, minimizers of $E$ tend to a body-centred cubic (BCC) lattice as $\lambda \to 0$ (meaning that
$\{ \ensuremath{\bm{x}}_i \}$ tend to a BCC lattice and $w_i \to 0$). We examine this conjecture in \S \ref{Subsec:3D}. In particular, numerical minimization of $E$ in three dimensions suggests that the BCC lattice is at least a local minimizer of $E$ when $\Omega$ is a periodic box.
Again, this agrees with block copolymer experiments, where in some parameter regime the minority phase forms a BCC lattice.
\subsubsection{Quantization}
Energies of the form \eqref{eqn:energy} can be used for data compression using a technique called \emph{vector quantization}.
By taking $f=0$ in \eqref{eqn:energy} and evaluating the resulting energy at $w_i=0$ for all $i$, so that the power diagram $\{ P_i \}_{i=1}^N$ generated
by $\{\ensuremath{\bm{x}}_i,0 \}_{i=1}^N$ is just the Voronoi diagram $\{V_i \}_{i=1}^N$ generated by $\{ \ensuremath{\bm{x}}_i \}_{i=1}^N$, we obtain the energy
\begin{equation}
\label{eqn:D}
D(\{\ensuremath{\bm{x}}_i \}) = \sum_{i=1}^N \int_{V_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2 \rho(\ensuremath{\bm{x}}) \,d \ensuremath{\bm{x}} \equiv \int_\Omega \min_i | \ensuremath{\bm{x}} - \ensuremath{\bm{x}}_i |^2 \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}.
\end{equation}
This is known in the quantization literature as the \emph{distortion}. See \cite[Sec.~33]{Gruber07} for a mathematical introduction to vector quantization and \cite{GershoGray} and \cite{GrayNeuhoff} for comprehensive treatments.
Roughly speaking, the points $\ensuremath{\bm{x}}$ of $\Omega$ represent signals (e.g., parts of an image or speech) and $\ensuremath{\bm{x}}_i$ represent codewords in the codebook $\{ \ensuremath{\bm{x}}_i \}_{i=1}^N$.
The function $\rho$ is a probability density on the set of signals $\Omega$.
If a signal $\ensuremath{\bm{x}}$ belongs to the Voronoi cell $V_i$, then the encoder assigns to it the codeword $\ensuremath{\bm{x}}_i$, which is then stored or transmitted.
$D$ measures the quality of the encoder: it is the average distortion of the signals. The minimum value of $D$ is called the \emph{minimum distortion}.
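As a small illustration, the distortion of a given codebook can be estimated by Monte Carlo sampling from $\rho$. The following Python fragment does this for the (hypothetical) example of a uniform density on the unit square and a random codebook with $N=8$ codewords.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
signals = rng.random((100000, 2))   # samples from rho (uniform on the unit square)
codebook = rng.random((8, 2))       # codewords x_i
d2 = ((signals[:, None, :] - codebook[None, :, :])**2).sum(axis=2)
D = d2.min(axis=1).mean()           # Monte Carlo estimate of the distortion D
\end{verbatim}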
In practice distortion is minimized subject to a constraint on the number of bits in the codebook. The codewords $\ensuremath{\bm{x}}_i$ are mapped to binary vectors before storage or transmission. In \emph{fixed-rate} quantization all these vectors have the same length. In \emph{variable-rate quantization} the length depends on the probability density $\rho$: Let $m_i = \int_{V_i} \rho \, d \ensuremath{\bm{x}}$ be the probability that a signal lies in Voronoi cell $V_i$. If $m_i$ is large, then $\ensuremath{\bm{x}}_i$ should be mapped to a short binary vector since it occurs often. For cells with lower probabilities, longer binary vectors can be used. The \emph{rate} of an encoder has the form
\[
R = \sum_{i=1}^N l_i m_i
\]
where $l_i$ is the length of the binary vector representing $\ensuremath{\bm{x}}_i$. Note that $R$ is the expected value of the length.
Distortion $D$ is decreased by choosing more codewords. On the other hand, this means that the rate $R$, and hence the storage/transmission cost, is increased.
Optimal encoders can be designed by trading off distortion against rate by minimizing energies of the form
\[
\lambda R + D
\]
where $\lambda$ is a parameter determining the tradeoff. See \cite[p.~2342]{GrayNeuhoff}. Our energy \eqref{eqn:energy} generalizes this: take $l_i = l(1/m_i)$ for some concave function $l$, so that $m \mapsto l(1/m)m$ is concave.
In addition, $l$ should be increasing so that the code length decreases as the probability $m$ increases.
We replace the Voronoi cells in \eqref{eqn:D} with power cells, which means that signals in power cell $P_i$ are mapped to codeword $\ensuremath{\bm{x}}_i$. Then the energy $\lambda R + D$ has the form of \eqref{eqn:energy}:
\[
E(\{ \ensuremath{\bm{x}}_i,w_i \}) = \sum_{i=1}^N \left\{ f(m_i)+\int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2 \rho(\ensuremath{\bm{x}}) \,d \ensuremath{\bm{x}} \right\} \quad \textrm{where} \quad f(m) = \lambda l\left( \dfrac 1m \right) m.
\]
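For example, the choice $l(t)=\log_2 t$ (which is increasing and concave, and is standard in entropy-constrained quantization) gives code lengths $l_i = \log_2(1/m_i)$ and
\[
f(m) = \lambda\, l\!\left(\tfrac 1m\right) m = \lambda\, m \log_2 \tfrac 1m ,
\]
which is concave with $f(0)=0$ (interpreting $0\log_2 0$ as $0$), so it satisfies our assumptions on $f$. In this case $\sum_{i=1}^N f(m_i)$ is $\lambda$ times the Shannon entropy of the cell probabilities $\{m_i\}$, and minimizing \eqref{eqn:energy} trades off the distortion against an entropy bound on the rate.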
\subsubsection{Optimal location of resources}
As discussed in \S\ref{sec:1.1} and \S\ref{Subsec: Non-const}, energies of the form \eqref{eqn:energy_fundamental} and \eqref{eqn:energy} can be used to model the optimal location of resources $\{ \ensuremath{\bm{x}}_i \}$ in a city or country $\Omega$ with population density $\rho$. The resources have size $m_i$, serve region $P_i$, and cost $f(m_i)$ to build or run. The assumption that $f$ is concave (introduced for mathematical convenience to prove Theorem \ref{thm:energyDecrease}) is also natural from the modelling point of view since it corresponds to an economy of scale. The energy trades off building/running costs against distance between the population and the resources.
\subsubsection{Other applications and connections}
Energies of the form \eqref{eqn:energy}, usually with $f=0$, also arise in data clustering and pattern recognition ($k$-means clustering) \cite{Hartigan}, \cite{MacQueen}, image compression (this is a special case of vector quantization) \cite[Sec.~2.1]{Du1999}, numerical integration \cite[Sec.~2.2]{Du1999}, \cite[p.~497--499]{Gruber07} and convex geometry (packing and covering problems, approximation of convex bodies by convex polytopes) \cite[Sec.~33]{Gruber07}. Taking $f \ne 0$ in \eqref{eqn:energy} gives the algorithm more freedom, e.g., to automatically select the number of data clusters in addition to their location, based on a cost per cluster.
Voronoi diagrams have recently gained a lot of interest in the materials science community, e.g., to model solid foams \cite{Harrison} and grains in metals \cite{Kok}, although this is usually done in a more heuristic manner than by energy minimization. Global minimizers of $E$ can be difficult to find if they have a large value of $N$, and the generalized Lloyd algorithm tends to converge to local minimizers. These often resemble grains in metals (see Figure \ref{fig:flatness_large}), which suggests that energy minimization might be a good method to produce Representative Volume Elements for the finite element simulation of materials with microstructure.
Several important PDEs, such as the heat equation and the Fokker-Planck equation, can be written as a time-discrete gradient flow of an energy with respect to the Wasserstein distance \cite{JordanKinderlehrerOtto}. For example, for the heat equation, an energy related to \eqref{eqn:energy_fundamental} is minimized at every time step, with the important differences that the first term of the energy is the integral of a convex function (as opposed to the sum of a concave function) and the second term is the Wasserstein distance between two absolutely continuous measures (as opposed to between an absolutely continuous measure and an atomic measure). A spatial discretization would bring the second terms in line and replace the integral in the first term by a sum. It could be argued, however, that we do not need another numerical method to solve the linear heat equation. Energies involving the Wasserstein distance also arise in models of dislocation dynamics \cite{Lucia}.
\subsection{Limitations of the algorithm}
\label{sec:Lim}
First we discuss the assumptions on the data given in equation \eqref{eq:assump}.
The assumption that $\Omega$ is convex ensures that
the centroid of each power cell lies in $\Omega$. Without this assumption the algorithm could produce an infeasible solution with $\ensuremath{\bm{x}}_i \notin \Omega$
for some $i$. For example, if $\Omega$ is the annulus $A(r_1,r_2)$ centred at the origin, $\rho=1$, and $f$ is chosen suitably, then $E$ is minimized when $N=1$ by any $(\ensuremath{\bm{x}}_1,w_1)$ with $|\ensuremath{\bm{x}}_1|=r_1$ (the generator lies on the inner boundary of the annulus); the weight $w_1$ is irrelevant (when there is only one cell the weight is not determined).
The generalized Lloyd algorithm, however, initialised with $N_0=1$, would return $\ensuremath{\bm{x}}_1 = \ensuremath{\bm{0}} \notin \Omega$. This strong limitation on the shape of $\Omega$ means that the algorithm cannot be used to solve optimal location problems in highly nonconvex countries like Scotland. We plan to address this issue in a future paper.
The concavity assumption on $f$, $f'' \le 0$, is necessary to prove Theorem \ref{thm:energyDecrease}, which asserts that step (1) of the algorithm decreases the energy at every iteration. As discussed in \S \ref{sec:App}, it is also a reasonable modelling assumption for many applications.
The assumption that $f(0)\ge 0$ ensures that iteration step (2) is also energy decreasing.
If $f$ is convex then the energy behaves very differently and the generalized Lloyd algorithm may not be suitable. The first term is no longer minimized by taking $N=1$; for fixed $N$ it is minimized when all the power cells have the same mass, since by Jensen's inequality
\[
\sum_{i=1}^N f(m_i) \ge N f \left( \dfrac{M}{N} \right) \quad \textrm{where} \quad M = \int_{\Omega} \rho \, d \ensuremath{\bm{x}}.
\]
If in addition $f \ge 0$ and $N f(M/N) \to 0$ as $N \to \infty$, then $E$ does not have a global minimizer:
its infimum is zero, obtained in the limit $N \to \infty$
by approximating $\rho$ arbitrarily well by Dirac masses. We have not studied the case where $f$ is neither concave nor convex.
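For a concrete instance of this convex regime, take $\rho=1$ (so $M=|\Omega|$) and $f(m)=m^2$: then $N f(M/N) = |\Omega|^2/N \to 0$ as $N \to \infty$, and the transport term can also be made arbitrarily small by taking $N$ large, so $\inf E = 0$ and the infimum is not attained by any finite configuration.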
As discussed in \S \ref{sec:genLloyd}, another limitation of the algorithm is that, while it can annihilate generators (step (2)), it cannot create them. Therefore the initial guess $N_0$ for the optimal number of generators should be an overestimate. This limitation could be addressed by using a simulated annealing method to randomly introduce new generators at certain iterations. This could also be used to prevent the algorithm from getting stuck at a local minimizer.
\subsection{Generalizations}
While we have focussed on energy \eqref{eqn:energy}, our general methodology could be easily applied to broader classes of optimal location energies where the first term is more general, e.g., to
\[
E \left( \{\ensuremath{\bm{x}}_i,w_i\} \right) = g(\{\ensuremath{\bm{x}}_i,m_i\})+ \sum_{i=1}^N \int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2 \rho(\ensuremath{\bm{x}}) \,d \ensuremath{\bm{x}}
\]
where $m_i = \int_{P_i} \rho \, d \ensuremath{\bm{x}}$.
Our algorithm can also be modified to minimize the following energy, which is obtained from \eqref{eqn:energy_fundamental} by replacing the square of the 2-Wasserstein distance with the $p$-th power of the $p$-Wasserstein distance, $p \in [1,\infty)$:
\[
F_p\left(\{\bm{x}_i,m_i\}\right)=\sum_{i=1}^N f(m_i)+d^p_p\left(\rho,\sum_{i=1}^N m_i\delta_{x_i}\right).
\]
See \cite[Chap.~7]{Villani} for the definition of $d_p(\cdot,\cdot)$.
In this case the energy can be rewritten in terms of what we call \emph{$p$-power diagrams}.
These are a generalization of power diagrams where the cells generated by $\{ \ensuremath{\bm{x}}_i, w_i \}$ are defined by
\[
P_i=\{\bm{x}\in\Omega : \left|\bm{x}-\bm{x}_i\right|^p-w_i\le\left|\bm{x}-\bm{x}_k\right|^p-w_k\;\forall \; k\}.
\]
For $p=2$ this is just the power diagram.
For $p=1$ this is known as the Apollonius diagram (or the additively weighted Voronoi diagram, or the Voronoi diagram of disks).
For general $p$ there does not seem to be a standard name, although they fall into the class of
generalized Dirichlet tessellations, or generalized additively weighted Voronoi diagrams.
It can be shown that minimizing $F_p$ is equivalent to minimizing
\begin{equation}
E_p \left( \{\ensuremath{\bm{x}}_i,w_i\} \right) = \sum_{i=1}^N \left\{ f(m_i)+\int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^p \rho(\ensuremath{\bm{x}}) \,d \ensuremath{\bm{x}} \right\}
\end{equation}
where $\{P_i \}$ is the $p$-power diagram generated by $\{\ensuremath{\bm{x}}_i,w_i\}$ and $m_i := \int_{P_i} \rho \, d \ensuremath{\bm{x}}$.
See \cite[Sec.~4.2]{BournePeletierRoper}. Critical points of $E_p$ can be found using a modification of the generalized Lloyd algorithm where
for each $i$ the map $\ensuremath{\boldsymbol{\xi}}_i$ returns the $p$-centroid of the $p$-power cell $P_i$, i.e., $\ensuremath{\boldsymbol{\xi}}_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ satisfies the equation
\begin{equation}
\label{eq:p_cent}
\int_{P_i} (\ensuremath{\boldsymbol{\xi}}_i-\ensuremath{\bm{x}})|\ensuremath{\boldsymbol{\xi}}_i-\ensuremath{\bm{x}}|^{p-2} \, \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}} = \ensuremath{\bm{0}}.
\end{equation}
See \cite[Th.~4.16]{BournePeletierRoper}.
For the case $p=2$ this equation just says that $\ensuremath{\boldsymbol{\xi}}_i$ is the centroid of $P_i$.
Therefore in principle the algorithm can be extended to all $p \in [1,\infty)$. In practice it is much harder to implement.
Except for the cases $p=1,2$, we are not aware of any efficient algorithms for computing $p$-power diagrams. This is due to the fact that
for $p \ne 2$ the boundaries between cells are curved (unless all the weights are equal).
In addition, evaluating the Lloyd map $\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ involves solving the nonlinear equation \eqref{eq:p_cent}. We plan to say more about these aspects in a future paper.
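To illustrate the definitions (but not an efficient algorithm), here is a minimal Python sketch for $1<p<\infty$: cell membership is decided pointwise on a grid, and the $p$-centroid of a cell is obtained by numerically minimizing $\ensuremath{\boldsymbol{\xi}} \mapsto \sum_{\ensuremath{\bm{x}}} |\ensuremath{\boldsymbol{\xi}}-\ensuremath{\bm{x}}|^p$ over the grid points $\ensuremath{\bm{x}}$ of the cell (with $\rho=1$), whose stationarity condition is the discrete analogue of \eqref{eq:p_cent}. The function names are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def p_power_labels(grid, X, w, p):
    # Brute-force p-power diagram: each grid point goes to the cell
    # minimizing |x - x_i|^p - w_i.
    d = np.linalg.norm(grid[:, None, :] - X[None, :, :], axis=2)
    return (d**p - w[None, :]).argmin(axis=1)

def p_centroid(pts, p):
    # Numerical p-centroid of the grid points of one cell: the minimizer
    # of sum_x |xi - x|^p. For p = 2 this is the ordinary centroid.
    obj = lambda xi: (np.linalg.norm(pts - xi, axis=1)**p).sum()
    return minimize(obj, pts.mean(axis=0)).x
\end{verbatim}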
\subsection{Structure of the paper}
The generalized Lloyd algorithm, Algorithm \ref{algo:genLloyd}, is derived in \S\ref{sec:deriv}. In \S\ref{sec:prop} we prove that it is energy decreasing, prove a convergence theorem, and study its structure. Implementation issues, such as how to compute power diagrams, are discussed in \S\ref{sec:implement}. Numerical illustrations in two and three dimensions are given in \S\ref{sec:illus}. In the appendix we give some useful formulas for implementing the algorithm for the special case $\rho=$ constant, in which case it is not necessary to use a quadrature rule.
\section{Derivation of the algorithm}
\label{sec:deriv}
In this section we derive the generalized Lloyd algorithm, Algorithm \ref{algo:genLloyd}, which is a fixed point method for the calculation of stationary points of the energy $E$, defined in equation \eqref{eqn:energy}. Calculating the gradient of $E$ requires care since this involves differentiating the integrals appearing in the definition of $E$ with respect to their domains. We perform this calculation in \S\ref{Subsec: H} and \S\ref{Subsec: Crit points}, after introducing some notation in \S\ref{Subsec: Notation}.
\subsection{Notation for power diagrams}
\label{Subsec: Notation}
Throughout this paper we take $\Omega$ to be a bounded, convex subset of $\mathbb{R}^d$, $d \ge 2$.
We will take $d=2$ or $3$ for purposes of illustration, but the theory developed applies for all $d\ge 2$.
Given weighted points
$(\ensuremath{\bm{X}},\ensuremath{\bm{w}})=((\ensuremath{\bm{x}}_1, \ldots, \ensuremath{\bm{x}}_N),(w_1, \ldots , w_N))
\in \Omega^N \times \mathbb{R}^N$ and the associated power diagram $\{P_i\}_{i=1}^N$ (defined in equation \eqref{eq:pd}), we introduce the following notation:
\begin{gather}
\label{eqn:def1}
\quad d_{ij} = | \ensuremath{\bm{x}}_j - \ensuremath{\bm{x}}_i |, \qquad \ensuremath{\bm{n}}_{ij} = \frac{\ensuremath{\bm{x}}_j - \ensuremath{\bm{x}}_i}{d_{ij}}, \qquad F_{ij} = P_i \cap P_j, \\
\label{eqn:def2}
m_i = \int_{P_i} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}, \qquad m_{ij} = \int_{F_{ij}} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}, \\
\label{eqn:def3}
\overline{\ensuremath{\bm{x}}}_i = \frac{1}{m_i} \int_{P_i} \ensuremath{\bm{x}} \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}, \qquad \overline{\ensuremath{\bm{x}}}_{ij} = \frac{1}{m_{ij}} \int_{F_{ij}} \ensuremath{\bm{x}} \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}, \\
\label{eqn:def4}
J_i = \{ j \ne i : P_i \cap P_j \ne \emptyset \}.
\end{gather}
Here $d_{ij}$ is the distance between points $\ensuremath{\bm{x}}_i$ and $\ensuremath{\bm{x}}_j$; $\ensuremath{\bm{n}}_{ij}$ is the unit vector pointing from $\ensuremath{\bm{x}}_i$ to $\ensuremath{\bm{x}}_j$; the set $F_{ij}$ is the \emph{face} common to both cells $P_i$ and $P_j$; $m_i$ is the mass of cell $P_i$; $m_{ij}$ is the mass of face $F_{ij}$; $\overline{\ensuremath{\bm{x}}}_i$ is the \emph{centre of mass} of the cell $P_i$ and $\overline{\ensuremath{\bm{x}}}_{ij}$ is the centre of mass of face $F_{ij}$. The set of indices of the neighbours of cell $P_i$ is given by the index set $J_i$.
In the case $d=2$ the power cells are convex polygons and rather than referring to the intersections of neighbouring cells as faces, we refer to them as edges.
Recall that we sometimes write $P_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ for the power cells generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$,
instead of simply $P_i$, to emphasize that the power diagram is generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$. Similarly, we will sometimes write $m_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ for the mass of the $i$-th power cell. From equation \eqref{eq:pd} it is easy to see that adding a constant $c \in \mathbb{R}$ to all the weights generates the same power diagram: $P_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}}+\ensuremath{\bm{c}})=P_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ for all $i$, where $\ensuremath{\bm{c}} = (c, \ldots , c ) \in \mathbb{R}^N$.
Let $\mathbb{R}_+ = [0,\infty)$ and let $\ensuremath{\bm{m}} : \Omega^N \times \mathbb{R}^N \to \mathbb{R}_{+}^N$ be the function defined by
\begin{equation}
\label{eqn:m}
\ensuremath{\bm{m}} (\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = (m_1(\ensuremath{\bm{X}},\ensuremath{\bm{w}}), \ldots, m_N(\ensuremath{\bm{X}},\ensuremath{\bm{w}})),
\end{equation}
which gives the mass of all of the cells generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$. Note that some of the cells may be empty (at most $N-1$ of them), in which case the corresponding components of $\ensuremath{\bm{m}}$ take the value zero. Given a density $\rho : \Omega \to [0,\infty)$, let the space of admissible masses be
\begin{equation}
\mathcal{M}^N = \left\{ \ensuremath{\bm{M}} \in \mathbb{R}_+^N : \sum_{i=1}^N M_i = \int_\Omega \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}} \right\}.
\end{equation}
Throughout this paper $\ensuremath{\bm{I}}_m$ denotes the $m$-by-$m$ identity matrix.
\subsection{The helper function $H$}
\label{Subsec: H}
Motivated by \cite{Du2006}, where convergence of the classical Lloyd algorithm is studied, we introduce a helper function
$H$
defined by
\begin{multline}
H\left((\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1),(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2),\ensuremath{\bm{M}} \right) := \\
\sum_{i=1}^N \left\{ M_i w^1_i + f(M_i)
+ \int_{P_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2)} ( |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}^1_i|^2-w^1_i ) \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}
\right\}
\end{multline}
where $(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k) = ((\ensuremath{\bm{x}}_1^k,\ldots,\ensuremath{\bm{x}}_N^k),(w_1^k,\ldots,w_N^k))$ for $k \in \{1, 2 \}$, $\ensuremath{\bm{M}} = (M_1,\ldots,M_N)$, and the domain of $H$ is
$(\Omega^N \times \mathbb{R}^N) \times (\Omega^N \times \mathbb{R}^N) \times \mathcal{M}^N$.
The energy $E$ is recovered by choosing the arguments of $H$ appropriately:
\begin{equation}
\label{eqn:E=H}
E\left( \ensuremath{\bm{X}},\ensuremath{\bm{w}} \right) = H\left((\ensuremath{\bm{X}},\ensuremath{\bm{w}}),(\ensuremath{\bm{X}},\ensuremath{\bm{w}}),\ensuremath{\bm{m}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})\right).
\end{equation}
Note that $H$ is invariant under addition of a constant to all the weights:
\begin{equation}
H\left((\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1+\ensuremath{\bm{c}}_1),(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2+\ensuremath{\bm{c}}_2),\ensuremath{\bm{M}} \right) = H\left((\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1),(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2),\ensuremath{\bm{M}} \right)
\end{equation}
for all $\ensuremath{\bm{c}}_i = c_i (1, \ldots ,1) \in \mathbb{R}^N$, $i \in \{ 1,2 \}$, since $\ensuremath{\bm{M}} \in \mathcal{M}^N$.
\begin{lemma}[Properties of $H$]
\label{lemma:H}
Let $\ensuremath{\boldsymbol{\xi}}$, $\ensuremath{\boldsymbol{\omega}}$ be the Lloyd maps defined in equation \eqref{eqn:lloydmap}. Then
\begin{align}
\nonumber
& (i) \quad \min_{\ensuremath{\bm{X}}^1 \in \Omega^N} H \left((\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1),(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2), \ensuremath{\bm{M}} \right) =
H \left( \left( \ensuremath{\boldsymbol{\xi}} (\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2),\ensuremath{\bm{w}}^1 \right) , (\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2), \ensuremath{\bm{M}} \right), \\
\nonumber
& (ii) \quad H \left( (\ensuremath{\bm{X}},\ensuremath{\bm{w}}^1),(\ensuremath{\bm{X}},\ensuremath{\bm{w}}),\ensuremath{\bm{m}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \right) = E\left( \ensuremath{\bm{X}},\ensuremath{\bm{w}} \right),
\textrm{ i.e., is independent of } \ensuremath{\bm{w}}^1,
\\
\nonumber
& (iii) \; \, \, H \left( (\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1), (\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) ,\ensuremath{\bm{M}} \right) \ge H \left((\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1),(\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1),\ensuremath{\bm{M}} \right),
\\
\nonumber
& \phantom{(iii)}
\textrm{ with equality if and only if } P_i(\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1)=P_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) \textrm{ for all } i,
\\
\nonumber
& (iv) \; \, \max_{\ensuremath{\bm{M}} \in \mathbb{R}_+^N} H \left( \left( \ensuremath{\bm{X}}^1, \ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) \right),(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2), \ensuremath{\bm{M}} \right) =
\\
\nonumber
& \phantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaa} H \left( \left( \ensuremath{\bm{X}}^1, \ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) \right), (\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2), \ensuremath{\bm{m}}(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) \right).
\end{align}
\end{lemma}
\begin{proof}
Property (i): For fixed $\ensuremath{\bm{X}}^2 \in \Omega^N$, $\ensuremath{\bm{w}}^1, \ensuremath{\bm{w}}^2 \in \mathbb{R}^N$ and $\ensuremath{\bm{M}} \in \mathcal{M}^N$, define the function
$h:\Omega^N \to \mathbb{R}$ by $h(\ensuremath{\bm{X}}^1) := H \left((\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1),(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2), \ensuremath{\bm{M}} \right)$.
Then
\[
\frac{\partial h}{\partial \ensuremath{\bm{x}}_i^1} (\ensuremath{\bm{X}}^1) = 2 \int_{P_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2)} (\ensuremath{\bm{x}}^1_i-\ensuremath{\bm{x}}) \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}} = 2 m_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) (\ensuremath{\bm{x}}_i^1-\ensuremath{\boldsymbol{\xi}}_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2))
\]
by the definition \eqref{eqn:lloydmap} of $\ensuremath{\boldsymbol{\xi}}_i$.
Therefore $\ensuremath{\boldsymbol{\xi}} ( \ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2 )$ is a critical point of $h$. Moreover it is a global minimum point since $h$ is convex:
\[
\frac{\partial^2 h}{\partial \ensuremath{\bm{x}}_i^1 \partial \ensuremath{\bm{x}}^1_j} = \left\{
\begin{array}{cl}
2 m_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) \ensuremath{\bm{I}}_d & \textrm{if } i=j, \\
\ensuremath{\bm{0}} & \textrm{if } i \ne j,
\end{array}
\right.
\]
where $\ensuremath{\bm{I}}_d$ and $\ensuremath{\bm{0}}$ are the $d$-by-$d$ identity and zero matrices.
(Note that $h$ is not necessarily strictly convex since $m_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2)$ may be zero for some $i$, which is the case when the power cell
$P_i(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2)$ is empty.)
Property (ii) is immediate from the definitions of $H$ and $E$.
Property (iii): This follows from the fact that for any partition $\{ S_i \}_{i=1}^N$ of $\Omega$ we have
\[
\sum_i \int_{S_i} ( |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}^1_i|^2-w^1_i ) \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}
\ge
\sum_i \int_{P_i(\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1)} ( |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}^1_i|^2-w^1_i ) \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}
\]
with equality if and only if $\{ S_i \}_{i=1}^N$ is the power diagram generated by $(\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1)$ (up to sets of $\rho \, d \ensuremath{\bm{x}}$--measure zero).
This follows since
\[
\sum_i \int_{P_i(\ensuremath{\bm{X}}^1,\ensuremath{\bm{w}}^1)} ( |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}^1_i|^2-w^1_i ) \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}} = \int_\Omega \min_i \{ |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}^1_i|^2-w^1_i \} \rho (\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}.
\]
Property (iv): First we check that $\ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^2 , \ensuremath{\bm{w}}^2 \right)$ is a critical point of the function defined by
$g \left( \ensuremath{\bm{M}} \right) = H \left( \left( \ensuremath{\bm{X}}^1, \ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2) \right),(\ensuremath{\bm{X}}^2,\ensuremath{\bm{w}}^2), \ensuremath{\bm{M}} \right)$:
\begin{equation}
\nonumber
\frac{\partial g}{\partial M_j} \left( \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^2 , \ensuremath{\bm{w}}^2 \right) \right)
=
\omega_j \left( \ensuremath{\bm{X}}^2 , \ensuremath{\bm{w}}^2 \right) + f'\left( m_j \left( \ensuremath{\bm{X}}^2 , \ensuremath{\bm{w}}^2 \right) \right) = 0
\end{equation}
by the definition \eqref{eqn:lloydmap} of $\omega_j$.
Note that the function $g$ is concave since its Hessian is diagonal with non-positive diagonal entries:
\begin{equation}
\nonumber
D^2 g = \textrm{diag} \left( f''(M_1),f''(M_2),\ldots,f''(M_N) \right).
\end{equation}
Therefore the critical point $\ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^2 , \ensuremath{\bm{w}}^2 \right)$ is a global maximum point of $g$, as required.
\end{proof}
\subsection{Critical points of $E$}
\label{Subsec: Crit points}
In this section we show that critical points of $E$ are fixed points of the Lloyd maps $\ensuremath{\boldsymbol{\xi}}$, $\ensuremath{\boldsymbol{\omega}}$.
\begin{lemma}[Partial derivatives of $E$]
\label{lem:DE}
The partial derivatives of $E$ are
\begin{align}
\label{eqn:E_x}
\frac{\partial E}{\partial \ensuremath{\bm{x}}_i}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})
& = 2 m_i (\ensuremath{\bm{x}}_i - \ensuremath{\boldsymbol{\xi}}_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})) + \sum_{j=1}^N \frac{\partial m_j}{\partial \ensuremath{\bm{x}}_i} (w_j - \omega_j (\ensuremath{\bm{X}},\ensuremath{\bm{w}})),
\\
\label{eqn:E_w}
\frac{\partial E}{\partial w_i}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})
& = \sum_{j=1}^N \frac{\partial m_j}{\partial w_i} (w_j - \omega_j (\ensuremath{\bm{X}},\ensuremath{\bm{w}}))
\end{align}
for $i \in \{ 1, \ldots, N \}$. In matrix notation:
\begin{equation}
\label{eq:mform}
\begin{pmatrix}
\nabla_{\ensuremath{\bm{X}}} E \\ \nabla_{\ensuremath{\bm{w}}} E
\end{pmatrix}
=
\begin{pmatrix}
2 \hat{\ensuremath{\bm{M}}} & \nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}} \\
\ensuremath{\bm{0}} & \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}
\end{pmatrix}
\begin{pmatrix}
\ensuremath{\bm{X}} - \ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \\
\ensuremath{\bm{w}} - \ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})
\end{pmatrix}
\end{equation}
where
\begin{equation}
\label{eq:Mhat}
\hat{\ensuremath{\bm{M}}} := \mathrm{diag}(m_1,\ldots,m_N) \otimes \ensuremath{\bm{I}}_{d} =
\mathrm{diag}(m_1 \ensuremath{\bm{I}}_{d},\ldots, m_N \ensuremath{\bm{I}}_{d}).
\end{equation}
\end{lemma}
\begin{proof}
From equation \eqref{eqn:E=H},
\begin{equation}
\label{eqn:E_x 2}
\frac{\partial E}{\partial \ensuremath{\bm{x}}_i}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \frac{\partial H}{\partial \ensuremath{\bm{x}}^1_i} + \frac{\partial H}{\partial \ensuremath{\bm{x}}^2_i}
+ \sum_j \frac{\partial H}{\partial M_j} \frac{\partial m_j}{\partial \ensuremath{\bm{x}}_i}
\end{equation}
where the derivatives of $H$ are evaluated at $((\ensuremath{\bm{X}},\ensuremath{\bm{w}}),(\ensuremath{\bm{X}},\ensuremath{\bm{w}}),\ensuremath{\bm{m}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}))$.
The second term on the right-hand side is zero by Lemma \ref{lemma:H}(iii). Direct computation (as in the proof of Lemma \ref{lemma:H}(i),(iv)) gives
\begin{equation}
\label{eqn:H_x1, H_m}
\frac{\partial H}{\partial \ensuremath{\bm{x}}^1_i} = 2 m_i (\ensuremath{\bm{x}}_i-\ensuremath{\boldsymbol{\xi}}_i), \quad
\frac{\partial H}{\partial M_j} = w_j + f'(m_j(\ensuremath{\bm{X}},\ensuremath{\bm{w}})).
\end{equation}
Combining \eqref{eqn:E_x 2}, \eqref{eqn:H_x1, H_m} and the definition of $\omega_j$ yields \eqref{eqn:E_x}.
Differentiating \eqref{eqn:E=H} with respect to $w_i$ gives
\begin{equation}
\label{eqn:E_w 2}
\frac{\partial E}{\partial w_i}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \frac{\partial H}{\partial w^1_i} + \frac{\partial H}{\partial w^2_i}
+ \sum_j \frac{\partial H}{\partial M_j} \frac{\partial m_j}{\partial w_i}
\end{equation}
where the derivatives of $H$ are evaluated at $((\ensuremath{\bm{X}},\ensuremath{\bm{w}}),(\ensuremath{\bm{X}},\ensuremath{\bm{w}}),\ensuremath{\bm{m}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}))$.
The first two terms on the right-hand side are zero by Lemma \ref{lemma:H}(ii),(iii). Therefore combining \eqref{eqn:E_w 2} and
\eqref{eqn:H_x1, H_m}$_2$ yields \eqref{eqn:E_w}.
\end{proof}
\paragraph{Weighted graph Laplacian matrices}
Given a power diagram $\{P_i (\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \}$ define a graph $G$ that has as vertices $\ensuremath{\bm{X}}$, and edges given by the neighbour relations of the power diagram: $\ensuremath{\bm{x}}_i$ is connected by an edge to $\ensuremath{\bm{x}}_j$ if and only if $i \in J_j$ (and equivalently $j \in J_i$). If we associate a weight $u_{ij}=u_{ji}$ to each edge of this graph, then we can define the weighted graph Laplacian matrix $L= L(G,u)$ by
\begin{equation}
L_{ij} =
\left\{
\begin{array}{cl}
\displaystyle \sum_{k \in J_j} u_{jk} & \textrm{if } i=j, \\
- u_{ij} & \textrm{if } i \in J_j, \\
0 & \textrm{otherwise}.
\end{array}
\right.
\end{equation}
The symmetric matrix $L$ is the difference between the weighted degree matrix and weighted adjacency matrix of $G$. It is well-known that the dimension of the null space of $L$ equals the number of connected components of $G$. See \cite[p.~117, Th.~3.1]{Mohar2004}. In our case $G$ is connected and so, for any edge-weighting $u$, the null space of $L(G,u)$ is one-dimensional and is spanned by $(1,1,\ldots,1)$. In an analogous way, one can define (block) weighted graph Laplacian matrices for vector-valued weights $\bm{u}_{ij}$.
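As a small illustration (with hypothetical helper names), the following Python fragment assembles $L(G,u)$ from an edge list and checks that the constant vector lies in its null space, as guaranteed for a connected graph.
\begin{verbatim}
import numpy as np

def graph_laplacian(N, edges, u):
    # edges: list of pairs (i, j) with i != j; u[(i, j)]: symmetric edge weights.
    L = np.zeros((N, N))
    for (i, j) in edges:
        L[i, j] -= u[(i, j)]
        L[j, i] -= u[(i, j)]
        L[i, i] += u[(i, j)]
        L[j, j] += u[(i, j)]
    return L

# Path graph on 3 vertices; L annihilates the constant vector.
L = graph_laplacian(3, [(0, 1), (1, 2)], {(0, 1): 2.0, (1, 2): 0.5})
assert np.allclose(L @ np.ones(3), 0)
\end{verbatim}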
Computing the derivatives of $m_j$ that appear in equations \eqref{eqn:E_x} and \eqref{eqn:E_w} is delicate since this involves differentiating the integrals $m_j = \int_{P_j(\ensuremath{\bm{X}},\ensuremath{\bm{w}})} \rho \, d \ensuremath{\bm{x}}$ with respect to $\ensuremath{\bm{x}}_i$ and $w_i$. It turns out that these derivatives are weighted graph Laplacian matrices:
\begin{lemma}[Weighted graph Laplacian structure of $\nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}}$ and $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}$]
\label{lem:Dm}
Let $(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \in \mathcal{G}^N$ be the generators of a power diagram with the generic property that adjacent cells have a common face (a common edge in 2D).
The partial derivatives of $\ensuremath{\bm{m}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ are
\begin{align}
\nonumber
\frac{\partial m_j}{\partial \ensuremath{\bm{x}}_i}
& =
\left\{
\begin{array}{cl}
\displaystyle \sum_{k\in J_j}\frac{m_{jk}}{d_{jk}}\left(\overline{\ensuremath{\bm{x}}}_{jk}-\bm{x}_j\right) & \textrm{if } i=j,
\vspace{0.1cm}
\\
\displaystyle -\frac{m_{ij}}{d_{ij}}\left(\overline{\ensuremath{\bm{x}}}_{ij}-\bm{x}_i\right) & \textrm{if }i \in J_j,
\vspace{0.1cm}
\\
\ensuremath{\bm{0}} & \textrm{otherwise},
\end{array}
\right.
\\
\nonumber
\frac{\partial m_j}{\partial w_i}
& =
\left\{
\begin{array}{cl}
\displaystyle \sum_{k\in J_j}\frac{m_{jk}}{2d_{jk}} & \textrm{if } i=j,
\vspace{0.1cm}
\\
\displaystyle -\frac{m_{ij}}{2d_{ij}} & \textrm{if } i \in J_j,
\vspace{0.1cm}
\\
0 & \textrm{otherwise},
\end{array}
\right.
\end{align}
for $i \in \{ 1, \ldots, N \}$.
In particular, the $N$-by-$N$ matrix $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}$, which has components $[\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}]_{ij}=\partial m_j / \partial w_i$, is the weighted graph Laplacian matrix of $G(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ with respect to the weights $\frac{m_{ij}}{2 d_{ij}}$. Therefore the null space of $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}$ is one-dimensional and is spanned by $(1,1,\ldots,1) \in \mathbb{R}^N$. Note that $(1,1,\ldots,1)$ also belongs to the null space of the $(Nd)$-by-$N$ matrix $\nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}}$, which has $d$-by-$1$ blocks $[\nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}}]_{ij}=\partial m_j/ \partial \ensuremath{\bm{x}}_i$.
\end{lemma}
\begin{proof}
Given the power diagram $\{ P_j \}_{j=1}^N$ generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \in \mathcal{G}^N$, let $\{ P_j^t \}_{j=1}^N$ be the power diagram generated by
$(\ensuremath{\bm{X}}^t,\ensuremath{\bm{w}}^t):=(\ensuremath{\bm{X}} + t \tilde{\ensuremath{\bm{X}}},\ensuremath{\bm{w}} + t \tilde{\ensuremath{\bm{w}}})$ for some $ \tilde{\ensuremath{\bm{X}}} \in (\mathbb{R}^d)^N$, $\tilde{\ensuremath{\bm{w}}} \in \mathbb{R}^N$. For $t$ in a small enough neighbourhood of zero, this family of power diagrams has the same number of cells, and each cell has the same number of faces, as the power diagram generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ (this follows from the assumption that adjacent cells have a common face).
Let $\varphi^t : \Omega \to \Omega$ be any flow map with the properties that $\varphi^0$ is the identity map, $\varphi^t(\ensuremath{\bm{X}})=\ensuremath{\bm{X}}^t$, $\varphi^t(P_j)=P_j^t$ for all $j$, and that $\varphi^t$ maps the faces of $P_j$ to the faces of $P_j^t$ for all $j$.
Fix $j$ and consider
\begin{equation}
\label{eqn:mt}
m_j(\ensuremath{\bm{X}}^t,\ensuremath{\bm{w}}^t) = \int_{P_j^t} \rho \, d \ensuremath{\bm{x}} = \int_{\varphi^t(P_j)} \rho \, d \ensuremath{\bm{x}}.
\end{equation}
Define $V(\ensuremath{\bm{x}}) = \frac{d}{dt} \varphi^t(\ensuremath{\bm{x}}) |_{t=0}$.
By the Reynolds Transport Theorem, differentiating \eqref{eqn:mt} with respect to $t$ and evaluating at $t=0$ gives
\begin{equation}
\label{eqn:deriv}
\sum_{i=1}^N \frac{\partial m_j}{\partial \ensuremath{\bm{x}}_i} \cdot \tilde{\ensuremath{\bm{x}}}_i + \frac{\partial m_j}{\partial w_i} \tilde{w}_i
= \int_{\partial P_j} \rho \, V \cdot \ensuremath{\bm{n}} \, dS
= \sum_{k \in J_j} \int_{F_{jk}} \rho \, V \cdot \ensuremath{\bm{n}}_{jk} \, dS.
\end{equation}
Now we compute $V \cdot \ensuremath{\bm{n}}_{jk}$.
Choose a face $F_{jk}=P_j \cap P_k$ and some point $\ensuremath{\bm{x}} \in F_{jk}$. Then $\ensuremath{\bm{x}}^t := \varphi^t(\ensuremath{\bm{x}}) \in F^t_{jk} = P_j^t \cap P_k^t$ and so it satisfies
\[
| \ensuremath{\bm{x}}^t - \ensuremath{\bm{x}}_j^t |^2 - w_j^t = | \ensuremath{\bm{x}}^t - \ensuremath{\bm{x}}_k^t |^2 - w_k^t.
\]
Differentiating with respect to $t$ and setting $t=0$ gives
\begin{equation}
\label{eqn:bndry}
2 (\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_j) \cdot (V(\ensuremath{\bm{x}})-\tilde{\ensuremath{\bm{x}}}_j) - \tilde{w}_j =
2 (\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_k) \cdot (V(\ensuremath{\bm{x}})-\tilde{\ensuremath{\bm{x}}}_k) - \tilde{w}_k.
\end{equation}
Recall that $\ensuremath{\bm{n}}_{jk} = (\ensuremath{\bm{x}}_k - \ensuremath{\bm{x}}_j)/d_{jk}$. Therefore rearranging \eqref{eqn:bndry} and dividing by $d_{jk}$ yields
\begin{equation}
\label{eq:Vdotn}
V(\ensuremath{\bm{x}}) \cdot \ensuremath{\bm{n}}_{jk} = \frac{ (\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_j) \cdot \tilde{\ensuremath{\bm{x}}}_j - (\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_k) \cdot \tilde{\ensuremath{\bm{x}}}_k}{d_{jk}} + \frac{\tilde{w}_j - \tilde{w}_k}{2 d_{jk}}.
\end{equation}
Substituting this into \eqref{eqn:deriv} and using \eqref{eqn:def2}$_2$ and \eqref{eqn:def3}$_2$ gives
\[
\sum_{i=1}^N \frac{\partial m_j}{\partial \ensuremath{\bm{x}}_i} \cdot \tilde{\ensuremath{\bm{x}}}_i + \frac{\partial m_j}{\partial w_i} \tilde{w}_i
= \sum_{k \in J_j}
\frac{ m_{jk}}{d_{jk}} [(\overline{\ensuremath{\bm{x}}}_{jk} - \ensuremath{\bm{x}}_j) \cdot \tilde{\ensuremath{\bm{x}}}_j - (\overline{\ensuremath{\bm{x}}}_{jk} - \ensuremath{\bm{x}}_k) \cdot \tilde{\ensuremath{\bm{x}}}_k ]
+ \frac{m_{jk}}{2 d_{jk}}(\tilde{w}_j - \tilde{w}_k).
\]
The derivatives in Lemma \ref{lem:Dm} can be read off from this equation by making suitable choices of $(\tilde{\ensuremath{\bm{X}}},\tilde{\ensuremath{\bm{w}}})$.
\end{proof}
\begin{remark}
\upshape
The fact that $(1,1,\ldots,1)\in \mathbb{R}^N$ belongs to the null space of the matrix $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}$ corresponds to the fact that the power diagram has fixed total mass and that it is invariant under the addition of a constant to all its weights:
\begin{equation}
\sum_j m_j = \int_\Omega \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}, \qquad m_j (\ensuremath{\bm{X}},\ensuremath{\bm{w}} + (c,c,\ldots,c)) = m_j (\ensuremath{\bm{X}},\ensuremath{\bm{w}}).
\end{equation}
Differentiating the first equation with respect to $w_i$ gives $\sum_j \partial m_j/\partial w_i = 0$ for all $i$, and so $(1,1,\ldots,1)$ belongs to the null space of $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}$.
Differentiating the second equation with respect to
$c$ and then setting $c=0$ gives $\sum_i \partial m_j / \partial w_i = 0$ for all $j$, and so $(1,1,\ldots,1)$ belongs to the null space of $(\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}})^T$ (which equals $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}$ since $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}$ is symmetric).
\end{remark}
The main result of this section is the following:
\begin{proposition}[Critical points of $E$ are fixed points of the Lloyd maps]
\label{prop:critE}
Let $(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \in \mathcal{G}^N$ be a critical point of $E$. Then, up to the addition of a constant to the weights, $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ is a fixed point of the Lloyd maps $\ensuremath{\boldsymbol{\xi}}$ and $\ensuremath{\boldsymbol{\omega}}$:
\begin{equation}
\label{eq:fp}
\ensuremath{\boldsymbol{\xi}} ( \ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \ensuremath{\bm{X}}, \quad \ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \ensuremath{\bm{w}} + \ensuremath{\bm{c}}
\end{equation}
where $\ensuremath{\bm{c}} = c (1,1,\ldots,1) \in \mathbb{R}^N$.
In particular, critical points of $E$ are centroidal power diagrams.
\end{proposition}
\begin{proof}
Equation \eqref{eqn:E_w} yields
\[
\ensuremath{\bm{0}} = \nabla_{\ensuremath{\bm{w}}} E = \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}} (\ensuremath{\bm{w}}-\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})).
\]
By Lemma \ref{lem:Dm}, $\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \ensuremath{\bm{w}} + \ensuremath{\bm{c}}$ for some $\ensuremath{\bm{c}} = c (1,1,\ldots,1) \in \mathbb{R}^N$.
Since $\ensuremath{\bm{c}}$ belongs to the null space of $\nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}}$, then equation \eqref{eqn:E_x} implies that
\begin{equation}
\label{eq:CP}
\ensuremath{\bm{0}} = \frac{\partial E}{\partial \ensuremath{\bm{x}}_i}(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = 2 m_i (\ensuremath{\bm{x}}_i - \ensuremath{\boldsymbol{\xi}}_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})).
\end{equation}
By assumption the power diagram generated by $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ has no empty cells. Therefore $m_i \ne 0$ for any $i$ and equation \eqref{eq:CP} gives $\ensuremath{\bm{X}} -\ensuremath{\boldsymbol{\xi}} ( \ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \ensuremath{\bm{0}}$, as required.
\end{proof}
\begin{remark}[Examples of critical points of $E$]
\upshape
Any centroidal Voronoi tessellation of $\Omega$ with the property that all cells have the same mass is a critical point of $E$. If $\rho=$ constant and $\Omega$ is a domain with nice symmetry, e.g., a square or a disc, then it is easy to write down many (in fact infinitely many) centroidal Voronoi tessellations with this property and hence find infinitely many critical points of $E$ (although not all will be local minima). The highly non-convex nature of the energy landscape makes it difficult to find global minima. See \S\ref{subsec:nonconvex}.
\end{remark}
\section{Properties of the algorithm}
\label{sec:prop}
Our main result is the following:
\begin{theorem}
\label{thm:energyDecrease}
The generalized Lloyd algorithm is energy decreasing:
\begin{equation}
\nonumber
E( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} ) \le E( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} )
\end{equation}
where $\ensuremath{\bm{X}}^{n+1}=\ensuremath{\boldsymbol{\xi}} \left( \ensuremath{\bm{X}}^n, \ensuremath{\bm{w}}^n\right)$, $\ensuremath{\bm{w}}^{n+1}=\ensuremath{\boldsymbol{\omega}} \left( \ensuremath{\bm{X}}^n, \ensuremath{\bm{w}}^n\right)$, $(\ensuremath{\bm{X}}^n,\ensuremath{\bm{w}}^n)\in\mathcal{G}^N$.
The inequality is strict unless $(\ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1}) = (\ensuremath{\bm{X}}^{n+2}, \ensuremath{\bm{w}}^{n+2})$, i.e., unless the algorithm has converged.
\end{theorem}
\begin{proof}
The proof follows easily by stringing together the properties of $H$ from Lemma \ref{lemma:H}:
\begin{align*}
& E \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right)
\\
& = H \left( \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n+1} \right) , \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) , \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) \right)
& \textrm{(by Lemma \ref{lemma:H}(ii))}
\\
& = H \left( \left( \ensuremath{\bm{X}}^{n},\ensuremath{\boldsymbol{\omega}} \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) \right) , \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) , \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) \right)
& \textrm{(by definition of } \ensuremath{\bm{w}}^{n+1} \textrm{)}
\\
& \ge H \left( \left( \ensuremath{\bm{X}}^{n},\ensuremath{\boldsymbol{\omega}} \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) \right) , \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) , \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right) \right)
& \textrm{(by Lemma \ref{lemma:H}(iv))}
\\
& = H \left( \left( \ensuremath{\bm{X}}^{n},\ensuremath{\bm{w}}^{n+1} \right) , \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) , \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right) \right)
& \textrm{(by definition of } \ensuremath{\bm{w}}^{n+1} \textrm{)}
\\
& \ge H \left( \left( \ensuremath{\boldsymbol{\xi}} \left( \ensuremath{\bm{X}}^{n},\ensuremath{\bm{w}}^{n} \right),\ensuremath{\bm{w}}^{n+1} \right) , \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) , \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right) \right)
& \textrm{(by Lemma \ref{lemma:H}(i))}
\\
& = H \left( \left( \ensuremath{\bm{X}}^{n+1},\ensuremath{\bm{w}}^{n+1} \right) , \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right) , \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right) \right)
& \textrm{(by definition of } \ensuremath{\bm{X}}^{n+1} \textrm{)}
\\
& \ge H \left( \left( \ensuremath{\bm{X}}^{n+1},\ensuremath{\bm{w}}^{n+1} \right) , \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right) , \ensuremath{\bm{m}} \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right) \right)
& \textrm{(by Lemma \ref{lemma:H}(iii))}
\\
& = E \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right)
& \textrm{(by equation \eqref{eqn:E=H})}.
\end{align*}
By Lemma \ref{lemma:H}(iii) the last inequality is strict unless $P_i \left( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1} \right)= P_i \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right)$ for all
$i$, up to sets of $\rho \, d \ensuremath{\bm{x}}$--measure zero, in which case $\ensuremath{\bm{x}}_i^{n+2}$ (which is the centroid of $P_i( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1})$) equals $\ensuremath{\bm{x}}_i^{n+1}$ (which is the centroid of $P_i( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n})$)
and
\begin{equation}
\nonumber
w_i^{n+2}=-f'(m_i( \ensuremath{\bm{X}}^{n+1}, \ensuremath{\bm{w}}^{n+1})) = -f'(m_i \left( \ensuremath{\bm{X}}^{n}, \ensuremath{\bm{w}}^{n} \right)) = w_i^{n+1}
\end{equation}
as required.
\end{proof}
\begin{remark}[Elimination of generators is energy decreasing]
\upshape
The generalized Lloyd algorithm removes generators corresponding to empty cells, i.e., if $P^n_i=\emptyset$, then
the generator pair $(\ensuremath{\bm{x}}^n_i,w^n_i)$ is removed in Step (2) of Algorithm \ref{algo:genLloyd}. The assumption that $f(0) \ge 0$ ensures that removing generators is energy decreasing.
\end{remark}
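Theorem \ref{thm:energyDecrease} is easy to check numerically. Using the same brute-force grid discretization as in the Python sketch following Algorithm \ref{algo:genLloyd} (again with $\rho=1$ on the unit square, $f(m)=\lambda\sqrt m$, and an illustrative function name), the discretized energy can be evaluated as follows; computing it before and after each call of \texttt{lloyd\_step} produces a non-increasing sequence, up to the discretization error of the grid.
\begin{verbatim}
import numpy as np

def energy(X, w, grid, lam=0.1):
    # Discretized E for rho = 1 on the unit square and f(m) = lam*sqrt(m).
    d2 = ((grid[:, None, :] - X[None, :, :])**2).sum(axis=2) - w[None, :]
    labels = d2.argmin(axis=1)
    a = 1.0 / len(grid)
    m = np.bincount(labels, minlength=len(X)) * a
    transport = a * ((grid - X[labels])**2).sum()
    return lam * np.sqrt(m[m > 0]).sum() + transport
\end{verbatim}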
Recall from equation \eqref{eqn:G} that $\mathcal{G}^N$ is the set of $N$ generators such that no two generators
coincide and that the corresponding power diagram has no empty cells.
The energy-decreasing property of the algorithm can be used to prove the following convergence result,
which is a generalization of a convergence theorem for the classical Lloyd algorithm \cite[Thm.~2.6]{Du2006}:
\begin{theorem}[Convergence of the generalized Lloyd algorithm]
\label{thm:conv}
Assume that $E$ has only finitely many critical points with the same energy.
Let $(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k)$ be a sequence generated by Algorithm \ref{algo:genLloyd}. Let $K$ be large enough such that, for all $k \ge K$,
$(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k) \in \mathcal{G}^N$ for $N$ fixed, i.e., there is no elimination of generators after iteration $K$.
If the sequence $(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k)_{k > K}$ lies in a compact subset of $\mathcal{G}^N$, then it converges to a critical point of $E$.
\end{theorem}
\begin{proof}
This follows by combining a minor modification of the proof of the Global Convergence Theorem from \cite[p.~206]{LuenbergerYe} with a convergence theorem for the classical Lloyd algorithm \cite[Thm.~2.5]{Du2006}.
Note that the Lloyd maps $\ensuremath{\boldsymbol{\xi}}_i$, $\ensuremath{\boldsymbol{\omega}}_i$ and the energy $E$ are continuous on $\mathcal{G}^N$ by the continuity of the mass and first and second moments of mass of the power cells $P_i$,
and the continuity of $f$ and $f'$.
Let $(\ensuremath{\bm{X}}^{k_j},\ensuremath{\bm{w}}^{k_j})$ be a convergent subsequence converging to $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})\in \mathcal{G}^N$. By the continuity of $E$ on $\mathcal{G}^N$, $E(\ensuremath{\bm{X}}^{k_j},\ensuremath{\bm{w}}^{k_j}) \to E(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$. Let $\varepsilon > 0$ and take $J$ large enough so that $E(\ensuremath{\bm{X}}^{k_J},\ensuremath{\bm{w}}^{k_J}) - E(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) < \varepsilon$.
By Theorem \ref{thm:energyDecrease} the whole sequence $E(\ensuremath{\bm{X}}^{k},\ensuremath{\bm{w}}^{k})$ converges to $E(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ since for all $k > k_J$
\[
0 \le E(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k) - E(\ensuremath{\bm{X}},\ensuremath{\bm{w}})
\le E(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k) - E(\ensuremath{\bm{X}}^{k_J},\ensuremath{\bm{w}}^{k_J}) + E(\ensuremath{\bm{X}}^{k_J},\ensuremath{\bm{w}}^{k_J}) - E(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) < \varepsilon.
\]
Next we check that $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ is a fixed point of the Lloyd maps and hence a critical point of $E$. Consider the sequence
$(\ensuremath{\bm{X}}^{k_j-1},\ensuremath{\bm{w}}^{k_j-1})$. By the compactness of $(\ensuremath{\bm{X}}^{k},\ensuremath{\bm{w}}^{k})$ there is a subsequence $(\ensuremath{\bm{X}}^{k_{j_l}-1},\ensuremath{\bm{w}}^{k_{j_l}-1})$
converging to $(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-) \in \mathcal{G}^N$. The continuity of the Lloyd maps on $\mathcal{G}^N$ implies that
\[
(\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}}^{k_{j_l}-1},\ensuremath{\bm{w}}^{k_{j_l}-1}),\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}^{k_{j_l}-1},\ensuremath{\bm{w}}^{k_{j_l}-1}))=(\ensuremath{\bm{X}}^{k_{j_l}},\ensuremath{\bm{w}}^{k_{j_l}}) \to
(\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-),\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-)).
\]
But $(\ensuremath{\bm{X}}^{k_{j_l}},\ensuremath{\bm{w}}^{k_{j_l}}) \to (\ensuremath{\bm{X}},\ensuremath{\bm{w}})$. Therefore
$(\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-),\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-)) = (\ensuremath{\bm{X}},\ensuremath{\bm{w}})$.
Since $E(\ensuremath{\bm{X}}^{k},\ensuremath{\bm{w}}^{k}) \to E(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$, we obtain that
\[
E(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-) = E(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = E(\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-),\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-))
\]
and thus, by Theorem \ref{thm:energyDecrease}, $(\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-),\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}}_-,\ensuremath{\bm{w}}_-))=(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$
is a fixed point of the Lloyd maps.
We have shown that any accumulation point of $(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k)$ is a fixed point of the Lloyd maps and, by the energy-decreasing property of the algorithm,
all accumulation points have the same energy. Therefore, by the first assumption of the theorem, it follows that
$(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k)$ has only finitely many accumulation points.
Finally, the whole sequence $(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k)$ converges to $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ by the following result, which is proved in \cite[Thm.~2.5]{Du2006}
for the classical Lloyd algorithm but holds for general fixed point methods of the form $z^{k+1} = T(z^k)$: If the sequence $\{z^k\}$ generated by $z^{k+1} = T(z^k)$ has finitely many accumulation points, $T$ is continuous
at them, and they are fixed points of $T$, then $z^k$ converges. This completes the proof.
\end{proof}
\begin{remark}[Assumptions of the convergence theorem]
\upshape
The assumption that $E$ has only finitely many critical points with the same energy is true for generic domains $\Omega$
but not for all, e.g., if $\Omega$ is a ball and $\rho$ is radially symmetric then there could be infinitely many fixed points
with the same energy by rotational symmetry. The assumption that $(\ensuremath{\bm{X}}^k,\ensuremath{\bm{w}}^k)_{k > K}$ lies in a compact subset of $\mathcal{G}^N$ is stronger.
It means that in the limit there is no elimination of generators.
We need this assumption since the Lloyd maps are not defined if there are empty cells, $P_i = \emptyset$ for some $i$.
While numerical experiments suggest that cells do not
disappear in the limit, this is difficult to prove, even for the classical Lloyd algorithm; it was proved in one dimension
by \cite[Prop.~2.9]{Du2006}. For further convergence theorems for the classical Lloyd algorithm see \cite{Du1999} and \cite{SabinGray}.
\end{remark}
\begin{remark}[Interpretation of the Lloyd algorithm as a descent method]
\upshape
In the following proposition we study the structure of the generalized Lloyd algorithm. Recall that an iterative method is a
descent method for an energy $\mathcal{E}$ if it can be written in the form
\begin{equation}
\label{eqn:descent}
\ensuremath{\bm{z}}_{n+1} = \ensuremath{\bm{z}}_n - \alpha_n \ensuremath{\bm{B}}_n \nabla \mathcal{E}(\ensuremath{\bm{z}}_n)
\end{equation}
where $\ensuremath{\bm{B}}_n$ is positive-definite, $\alpha_n$ is the step size, and $-\ensuremath{\bm{B}}_n \nabla \mathcal{E}(\ensuremath{\bm{z}}_n)$ is the step direction; e.g., $\ensuremath{\bm{B}}_n = \ensuremath{\bm{I}}$ gives the steepest descent method, and $\ensuremath{\bm{B}}_n = (D^2\mathcal{E}(\ensuremath{\bm{z}}_n))^{-1}$, $\alpha_n=1$ gives Newton's method.
The following proposition asserts that the generalized Lloyd algorithm can be written in the form \eqref{eqn:descent}, but not that
$\ensuremath{\bm{B}}_n$ is positive-definite, which we are unable to prove:
\end{remark}
\begin{proposition}
\label{prop:DM}
The generalized Lloyd algorithm can be written in the form
\begin{equation}
\label{eq:DM}
\begin{pmatrix}
\ensuremath{\bm{X}}^{n+1} \\ \ensuremath{\bm{w}}^{n+1}
\end{pmatrix}
=
\begin{pmatrix}
\ensuremath{\bm{X}}^{n} \\ \ensuremath{\bm{w}}^{n}
\end{pmatrix}
- \ensuremath{\bm{B}}_n
\begin{pmatrix}
\nabla_{\ensuremath{\bm{X}}} E^n \\ \nabla_{\ensuremath{\bm{w}}} E^n
\end{pmatrix}
+
\begin{pmatrix}
\ensuremath{\bm{0}} \\ \ensuremath{\bm{c}}
\end{pmatrix}
\end{equation}
where $\ensuremath{\bm{B}}_n$ is a square matrix of dimension $N(d+1)$
and $\ensuremath{\bm{c}} = c (1,1,\ldots,1)^T$ for some $c \in \mathbb{R}$.
\end{proposition}
\begin{proof}
Recall that
\[
m_i^n = \int_{P_i(\ensuremath{\bm{X}}^n,\ensuremath{\bm{w}}^n)} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}
\]
and $\hat{\ensuremath{\bm{M}}}_n = \textrm{diag}(m_1^n \ensuremath{\bm{I}}_{d},\ldots, m_N^n \ensuremath{\bm{I}}_{d})$.
Equation \eqref{eq:mform} implies that
\begin{equation}
\label{eq:2invert}
\begin{pmatrix}
\nabla_{\ensuremath{\bm{X}}} E^n \\ \nabla_{\ensuremath{\bm{w}}} E^n
\end{pmatrix}
=
\begin{pmatrix}
2 \hat{\ensuremath{\bm{M}}}_n & \nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}}^n \\
\ensuremath{\bm{0}} & \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n
\end{pmatrix}
\begin{pmatrix}
\ensuremath{\bm{X}}^n - \ensuremath{\bm{X}}^{n+1} \\
\ensuremath{\bm{w}}^n - \ensuremath{\bm{w}}^{n+1}
\end{pmatrix},
\end{equation}
where $\ensuremath{\bm{0}}$ is the $N$-by-$(Nd)$ zero matrix.
By Lemma \ref{lem:Dm}, the matrix on the right-hand side has a one-dimensional nullspace. Therefore rewriting these equations in the form \eqref{eq:DM} requires some care.
Let $\ensuremath{\bm{e}}_1, \ldots, \ensuremath{\bm{e}}_N$ be the standard basis vectors for $\mathbb{R}^N$. We introduce the new basis
\[
\ensuremath{\bm{f}}_1 := \ensuremath{\bm{e}}_1 - \ensuremath{\bm{e}}_2, \quad \ensuremath{\bm{f}}_2 := \ensuremath{\bm{e}}_2 - \ensuremath{\bm{e}}_3, \quad \ldots \quad \ensuremath{\bm{f}}_{N-1} := \ensuremath{\bm{e}}_{N-1} - \ensuremath{\bm{e}}_N, \quad \ensuremath{\bm{f}}_N := \ensuremath{\bm{e}}_1 + \cdots + \ensuremath{\bm{e}}_N.
\]
Note that $\ensuremath{\bm{f}}_N$ spans the null space of $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n$. Let $P$ be the invertible change-of-basis matrix satisfying $P \ensuremath{\bm{f}}_i = \ensuremath{\bm{e}}_i$. In particular
\[
P^{-1} =
\begin{pmatrix}
\phantom{-}1 & & & & & 1 \\
-1 & \phantom{-}1 & & & & 1 \\
& -1 & \phantom{-}1 & & & 1 \\
& & \ddots & \ddots & & \vdots \\
& & & \ddots & \ddots & \vdots \\
& & & & -1 & 1
\end{pmatrix}
\]
with zeros where no entry is given.
Let $\Pi:\mathbb{R}^N \to \mathbb{R}^{N-1}$ be the projection onto $\{\ensuremath{\bm{f}}_N\}^\perp$:
\[
\Pi = ( \ensuremath{\bm{I}}_{N-1} | \ensuremath{\bm{0}} )
\]
where $\ensuremath{\bm{0}}$ is the $(N-1)$-by-$1$ zero vector.
Observe that for all $\ensuremath{\bm{y}} \in \mathbb{R}^N$
\begin{equation}
\label{eq:mess}
\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n \, \ensuremath{\bm{y}} = \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n \, P^{-1} \Pi^T \Pi P \, \ensuremath{\bm{y}}
\end{equation}
since $\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n \, \ensuremath{\bm{f}}_N = \ensuremath{\bm{0}}$, $\Pi P \, \ensuremath{\bm{f}}_N = \ensuremath{\bm{0}}$, and
$\Pi^T \Pi \, \ensuremath{\bm{e}}_i = \ensuremath{\bm{e}}_i$ for all $i \in \{ 1, \ldots, N-1 \}$.
We check that the following $(N-1)$-by-$(N-1)$ matrix is invertible:
\begin{equation}
\label{eq:An}
A_n := \Pi P \, \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n P^{-1} \Pi^T.
\end{equation}
If $A_n \ensuremath{\bm{x}} = \ensuremath{\bm{0}}$, then $P \, \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n P^{-1} \Pi^T \, \ensuremath{\bm{x}} = c \,\ensuremath{\bm{e}}_N$ for some $c \in \mathbb{R}$, and so
$\nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n P^{-1} \Pi^T \, \ensuremath{\bm{x}} = c \ensuremath{\bm{f}}_N$. But
$\ensuremath{\bm{f}}_N^T \nabla_{\ensuremath{\bm{w}}}\ensuremath{\bm{m}}^n = (\nabla_{\ensuremath{\bm{w}}}\ensuremath{\bm{m}}^n \ensuremath{\bm{f}}_N)^T = \ensuremath{\bm{0}}$
and thus $c=0$.
Therefore, by Lemma \ref{lem:Dm}, $P^{-1} \Pi^T \, \ensuremath{\bm{x}} = a \ensuremath{\bm{f}}_N$ for some $a \in \mathbb{R}$. It follows from the definitions of $P$ and $\Pi$ that $a=0$ and $\ensuremath{\bm{x}}=\ensuremath{\bm{0}}$.
Using equations \eqref{eq:mess} and \eqref{eq:An}, we see that the equation
\[
\nabla_{\ensuremath{\bm{w}}} E^n = \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}^n (\ensuremath{\bm{w}}^n - \ensuremath{\bm{w}}^{n+1})
\]
can be inverted to give
\[
A_n^{-1} \Pi P \, \nabla_{\ensuremath{\bm{w}}} E^n = \Pi P (\ensuremath{\bm{w}}^n - \ensuremath{\bm{w}}^{n+1}).
\]
Therefore
\begin{equation}
\label{eq:wnp1}
\ensuremath{\bm{w}}^{n+1} = \ensuremath{\bm{w}}^n - P^{-1} \Pi^T A_n^{-1} \Pi P \, \nabla_{\ensuremath{\bm{w}}} E^n + c \ensuremath{\bm{f}}_N
\end{equation}
for some $c \in \mathbb{R}$.
We conclude from equations \eqref{eq:2invert} and \eqref{eq:wnp1} that
\[
\begin{pmatrix}
\ensuremath{\bm{X}}^{n+1} \\ \ensuremath{\bm{w}}^{n+1}
\end{pmatrix}
=
\begin{pmatrix}
\ensuremath{\bm{X}}^{n} \\ \ensuremath{\bm{w}}^{n}
\end{pmatrix}
- \ensuremath{\bm{B}}_n
\begin{pmatrix}
\nabla_{\ensuremath{\bm{X}}} E^n \\ \nabla_{\ensuremath{\bm{w}}} E^n
\end{pmatrix}
+
\begin{pmatrix}
\ensuremath{\bm{0}} \\ c \ensuremath{\bm{f}}_N
\end{pmatrix}
\]
where $\ensuremath{\bm{B}}_n$ is the matrix
\[
\ensuremath{\bm{B}}_n =
\begin{pmatrix}
\frac12 \hat{\ensuremath{\bm{M}}}_n^{-1} & -\frac 12 \hat{\ensuremath{\bm{M}}}_n^{-1} \nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}}^n P^{-1} \Pi^T A_n^{-1} \Pi P \\
\ensuremath{\bm{0}} & P^{-1} \Pi^T A_n^{-1} \Pi P
\end{pmatrix},
\]
where $\ensuremath{\bm{0}}$ is the $N$-by-$(Nd)$ zero matrix. This completes the proof.
\end{proof}
\begin{remark}[Alternative algorithm]
\upshape
The following proposition gives explicit expressions for the derivatives of the Lloyd maps $\ensuremath{\boldsymbol{\xi}}$ and $\ensuremath{\boldsymbol{\omega}}$. These could
be used to find critical points of $E$ in an alternative way, e.g., by solving the nonlinear equations
\eqref{eq:fp} using Newton's method.
\end{remark}
\begin{proposition}[Derivatives of the Lloyd maps]
\label{prop:DLloyd}
Given a face $F$ of a power diagram, define the matrix $\mathcal{S}(F)$ by
\[
\mathcal{S}(F) = \frac{1}{m(F)} \int_F \ensuremath{\bm{x}} \otimes \ensuremath{\bm{x}} \, \rho(\ensuremath{\bm{x}}) \, dS
\]
where $m(F)=\int_F \rho \, dS$ is the mass of the face.
Let $(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) \in \mathcal{G}^N$ be the generators of a power diagram with the generic property that adjacent cells have a common face (a common edge in 2D).
The derivatives of the Lloyd maps $\ensuremath{\boldsymbol{\xi}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ and $\ensuremath{\boldsymbol{\omega}}(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ are
\[
\left( \frac{\partial \ensuremath{\boldsymbol{\xi}}}{\partial \ensuremath{\bm{X}}} \right)_{ij} = \frac{\partial \ensuremath{\boldsymbol{\xi}}_i}{\partial \ensuremath{\bm{x}}_j} =
\left\{
\begin{array}{cl}
\displaystyle
\frac{1}{m_i} \sum_{k \in J_i} \frac{m_{ik}}{d_{ik}}
(\mathcal{S}(F_{ik}) - \overline{\ensuremath{\bm{x}}}_{ik} \otimes \ensuremath{\bm{x}}_i + \overline{\ensuremath{\bm{x}}}_i \otimes (\ensuremath{\bm{x}}_i - \overline{\ensuremath{\bm{x}}}_{ik}))
& \textrm{if } i=j,
\vspace{0.1cm}
\\
\displaystyle
- \frac{m_{ij}}{m_i d_{ij}} (\mathcal{S}(F_{ij}) - \overline{\ensuremath{\bm{x}}}_{ij} \otimes \ensuremath{\bm{x}}_j + \overline{\ensuremath{\bm{x}}}_i \otimes (\ensuremath{\bm{x}}_j - \overline{\ensuremath{\bm{x}}}_{ij}))
& \textrm{if } j \in J_i,
\vspace{0.1cm}
\\
\ensuremath{\bm{0}} & \textrm{otherwise},
\end{array}
\right.
\]
\[
\left( \frac{\partial \ensuremath{\boldsymbol{\xi}}}{\partial \ensuremath{\bm{w}}} \right)_{ij} = \frac{\partial \ensuremath{\boldsymbol{\xi}}_i}{\partial w_j} =
\left\{
\begin{array}{cl}
\displaystyle
\frac{1}{2 m_i} \sum_{k \in J_i} \frac{m_{ik}}{d_{ik}} ( \overline{\ensuremath{\bm{x}}}_{ik}-\overline{\ensuremath{\bm{x}}}_i )
& \textrm{if } i=j,
\vspace{0.1cm}
\\
\displaystyle
- \frac{m_{ij}}{2 m_i d_{ij}} (\overline{\ensuremath{\bm{x}}}_{ij}-\overline{\ensuremath{\bm{x}}}_i ) & \textrm{if } j \in J_i,
\vspace{0.1cm}
\\
\ensuremath{\bm{0}} & \textrm{otherwise},
\end{array}
\right.
\]
\[
\left( \frac{\partial \ensuremath{\boldsymbol{\omega}}}{\partial \ensuremath{\bm{X}}} \right)_{ij} = \frac{\partial \omega_i}{\partial \ensuremath{\bm{x}}_j} =
\left\{
\begin{array}{cl}
\displaystyle -f''(m_i) \sum_{k\in J_i}\frac{m_{ik}}{d_{ik}}\left(\overline{\ensuremath{\bm{x}}}_{ik}-\bm{x}_i\right) & \textrm{if } i=j,
\vspace{0.1cm}
\\
\displaystyle f''(m_i) \frac{m_{ij}}{d_{ij}}\left(\overline{\ensuremath{\bm{x}}}_{ij}-\bm{x}_i\right) & \textrm{if }j \in J_i,
\vspace{0.1cm}
\\
\ensuremath{\bm{0}} & \textrm{otherwise},
\end{array}
\right.
\]
\[
\left( \frac{\partial \ensuremath{\boldsymbol{\omega}}}{\partial \ensuremath{\bm{w}}} \right)_{ij} = \frac{\partial \omega_i}{\partial w_j} =
\left\{
\begin{array}{cl}
\displaystyle -f''(m_i) \sum_{k\in J_i}\frac{m_{ik}}{2d_{ik}} & \textrm{if } i=j,
\vspace{0.1cm}
\\
\displaystyle f''(m_i)\frac{m_{ij}}{2d_{ij}} & \textrm{if } j \in J_i,
\vspace{0.1cm}
\\
0 & \textrm{otherwise}.
\end{array}
\right.
\]
We order the block matrices $\partial \ensuremath{\boldsymbol{\xi}} / \partial \ensuremath{\bm{X}}$, $\partial \ensuremath{\boldsymbol{\xi}} / \partial \ensuremath{\bm{w}}$, $\partial \ensuremath{\boldsymbol{\omega}} / \partial \ensuremath{\bm{X}}$ and $\partial \ensuremath{\boldsymbol{\omega}} / \partial \ensuremath{\bm{w}}$
so that they have dimensions $(Nd)$-by-$(Nd)$, $(Nd)$-by-$N$, $N$-by-$(Nd)$ and $N$-by-$N$, respectively.
\end{proposition}
\begin{proof}
Since $\omega_i = -f'(m_i)$, the partial derivatives of $\ensuremath{\boldsymbol{\omega}}$ are obtained immediately from Lemma \ref{lem:Dm}.
Obtaining the partial derivatives of $\ensuremath{\boldsymbol{\xi}}_i = \tfrac{1}{m_i} \int_{P_i} \ensuremath{\bm{x}} \rho \, d\ensuremath{\bm{x}}$ requires a bit more work.
Observe that
\begin{equation}
\label{eq:Dxi}
\frac{\partial \ensuremath{\boldsymbol{\xi}}_i}{\partial \ensuremath{\bm{x}}_j}=\frac{1}{m_i}\left( \frac{\partial (m_i \ensuremath{\boldsymbol{\xi}}_i)}{\partial \ensuremath{\bm{x}}_j}- \ensuremath{\boldsymbol{\xi}}_i \otimes \frac{\partial m_i}{\partial \ensuremath{\bm{x}}_j} \right),
\quad
\frac{\partial \ensuremath{\boldsymbol{\xi}}_i}{\partial w_j}=\frac{1}{m_i}\left( \frac{\partial (m_i \ensuremath{\boldsymbol{\xi}}_i)}{\partial w_j}- \frac{\partial m_i}{\partial w_j} \ensuremath{\boldsymbol{\xi}}_i \right).
\end{equation}
Lemma \ref{lem:Dm} gives $\partial m_i / \partial \ensuremath{\bm{x}}_j$, $\partial m_i / \partial w_j$ and so we just need to compute
$\partial(m_i \ensuremath{\boldsymbol{\xi}}_i)/\partial \ensuremath{\bm{x}}_j$, $\partial(m_i \ensuremath{\boldsymbol{\xi}}_i)/\partial w_j$, i.e., compute the partial derivatives of
\[
(m_i \ensuremath{\boldsymbol{\xi}}_i)(\ensuremath{\bm{X}},\ensuremath{\bm{w}}) = \int_{P_i(\ensuremath{\bm{X}},\ensuremath{\bm{w}})} \ensuremath{\bm{x}} \rho(\ensuremath{\bm{x}}) \, d \ensuremath{\bm{x}}.
\]
The computation is similar to the proof of Lemma \ref{lem:Dm} and so we just sketch the details.
Consider the same 1-parameter family of power diagrams used in the proof of Lemma \ref{lem:Dm}: $\{ P_i^t \} = \{ \varphi^t(P_i) \}$. As
for equation \eqref{eqn:deriv},
\[
\left. \frac{d}{dt} \right|_{t=0} (m_i \ensuremath{\boldsymbol{\xi}}_i)(\ensuremath{\bm{X}}^t,\ensuremath{\bm{w}}^t)
= \sum_{j=1}^N \frac{\partial (m_i \ensuremath{\boldsymbol{\xi}}_i)}{\partial \ensuremath{\bm{x}}_j} \tilde{\ensuremath{\bm{x}}}_j + \frac{\partial (m_i \ensuremath{\boldsymbol{\xi}}_i)}{\partial w_j} \tilde{w}_j
= \sum_{k \in J_i} \int_{F_{ik}} \ensuremath{\bm{x}} \rho(\ensuremath{\bm{x}}) V \cdot \ensuremath{\bm{n}}_{ik} \, d S
\]
where $V(\ensuremath{\bm{x}}) = \frac{d}{dt} \varphi^t(\ensuremath{\bm{x}}) |_{t=0}$.
Combining this with equation \eqref{eq:Vdotn} gives
\begin{align}
\nonumber
& \sum_{j=1}^N \frac{\partial (m_i \ensuremath{\boldsymbol{\xi}}_i)}{\partial \ensuremath{\bm{x}}_j} \tilde{\ensuremath{\bm{x}}}_j + \frac{\partial (m_i \ensuremath{\boldsymbol{\xi}}_i)}{\partial w_j} \tilde{w}_j
\\
\nonumber
& =
\sum_{k \in J_i} \int_{F_{ik}} \ensuremath{\bm{x}} \rho(\ensuremath{\bm{x}})
\left[ \frac{ (\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_i) \cdot \tilde{\ensuremath{\bm{x}}}_i - (\ensuremath{\bm{x}} - \ensuremath{\bm{x}}_k) \cdot \tilde{\ensuremath{\bm{x}}}_k}{d_{ik}} + \frac{\tilde{w}_i - \tilde{w}_k}{2 d_{ik}} \right]
\, dS
\\
\label{eq:T}
& = \sum_{k \in J_i} \frac{m_{ik}}{d_{ik}} \left[ (\mathcal{S}(F_{ik})-\overline{\ensuremath{\bm{x}}}_{ik} \otimes \ensuremath{\bm{x}}_i) \tilde{\ensuremath{\bm{x}}}_i
- (\mathcal{S}(F_{ik})-\overline{\ensuremath{\bm{x}}}_{ik} \otimes \ensuremath{\bm{x}}_k) \tilde{\ensuremath{\bm{x}}}_k + \frac{\overline{\ensuremath{\bm{x}}}_{ik}(\tilde{w}_i - \tilde{w}_k)}{2}
\right]
\end{align}
where the matrix $\mathcal{S}(F_{ik})$ was defined in the statement of the proposition.
By combining equations \eqref{eq:Dxi} and \eqref{eq:T} (with suitable choices of $\tilde{\ensuremath{\bm{X}}}$ and $\tilde{\ensuremath{\bm{w}}}$) and Lemma \ref{lem:Dm} we obtain the desired expressions
for $\partial \ensuremath{\boldsymbol{\xi}}_i / \partial \ensuremath{\bm{x}}_j$ and $\partial \ensuremath{\boldsymbol{\xi}}_i / \partial w_j$.
\end{proof}
Potentially these derivatives could also be used to prove convergence of the Lloyd algorithm by showing that the Lloyd map pair $(\ensuremath{\boldsymbol{\xi}},\ensuremath{\boldsymbol{\omega}}):\mathcal{G}^N \to \mathcal{G}^N$ is a contraction. These derivatives are also needed to evaluate the Hessian of $E$, which can be used to check the stability of fixed points:
\begin{proposition}[The Hessian of $E$ evaluated at fixed points]
\label{prop:D2E}
If $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ is a fixed point of the Lloyd maps $\ensuremath{\boldsymbol{\xi}}$ and $\ensuremath{\boldsymbol{\omega}}$, i.e., if it satisfies equation \eqref{eq:fp}, then the Hessian of $E$
evaluated at $(\ensuremath{\bm{X}},\ensuremath{\bm{w}})$ is
\[
\begin{pmatrix}
E_{\ensuremath{\bm{X}} \ensuremath{\bm{X}}} & E_{\ensuremath{\bm{X}} \ensuremath{\bm{w}}} \\ E_{\ensuremath{\bm{w}} \ensuremath{\bm{X}}} & E_{\ensuremath{\bm{w}} \ensuremath{\bm{w}}}
\end{pmatrix}
=
\begin{pmatrix}
2 \hat{\ensuremath{\bm{M}}} & \nabla_{\ensuremath{\bm{X}}} \ensuremath{\bm{m}} \\
\ensuremath{\bm{0}} & \nabla_{\ensuremath{\bm{w}}} \ensuremath{\bm{m}}
\end{pmatrix}
\begin{pmatrix}
\ensuremath{\bm{I}}_{Nd} - \frac{\partial \ensuremath{\boldsymbol{\xi}}}{\partial \ensuremath{\bm{X}}} & - \frac{\partial \ensuremath{\boldsymbol{\xi}}}{\partial \ensuremath{\bm{w}}} \\
- \frac{\partial \ensuremath{\boldsymbol{\omega}}}{\partial \ensuremath{\bm{X}}} & \ensuremath{\bm{I}}_{N} - \frac{\partial \ensuremath{\boldsymbol{\omega}}}{\partial \ensuremath{\bm{w}}}
\end{pmatrix},
\]
where $\ensuremath{\bm{0}}$ is the $N$-by-$(Nd)$ zero matrix, $\hat{\ensuremath{\bm{M}}}$ was defined in equation \eqref{eq:Mhat},
$E_{\ensuremath{\bm{X}} \ensuremath{\bm{X}}}$ is the $(Nd)$-by-$(Nd)$ block matrix with $d$-by-$d$ blocks $\partial^2 E/\partial \ensuremath{\bm{x}}_i \partial \ensuremath{\bm{x}}_j$,
$E_{\ensuremath{\bm{w}} \ensuremath{\bm{w}}}$ is the $N$-by-$N$ matrix with entries $[E_{\ensuremath{\bm{w}} \ensuremath{\bm{w}}}]_{ij}=\partial^2 E/\partial w_i \partial w_j$,
$E_{\ensuremath{\bm{X}} \ensuremath{\bm{w}}}$ is the $(Nd)$-by-$N$ block matrix with $d$-by-$1$ blocks $\partial^2 E/\partial \ensuremath{\bm{x}}_i \partial w_j$, and
$E_{\ensuremath{\bm{w}} \ensuremath{\bm{X}}}$ is the $N$-by-$(Nd)$ block matrix with $1$-by-$d$ blocks $\partial^2 E/\partial w_i \partial \ensuremath{\bm{x}}_j$.
\end{proposition}
\begin{proof}
This follows immediately from equation \eqref{eq:mform}.
\end{proof}
Evaluating the Hessian of $E$ at an arbitrary point, rather than just at a fixed point, requires the computation of the Hessian of $\ensuremath{\bm{m}}$, which is
a lengthy computation that we do not pursue here.
\section{Implementation}
\label{sec:implement}
The generalized Lloyd algorithm relies upon the computation of power diagrams. In this section we briefly review different methods for the calculation of the power diagram given a domain $\Omega$ and generators $\{ \ensuremath{\bm{x}}_i,w_i \}_{i=1}^N$.
\subsection{Half-plane intersection}\label{sec:half-plane}
Recall that $F_{ij}=F_{ji}=P_i\cap P_j$ is the boundary between power cells $P_i$ and $P_j$. Assume that $P_i\cap P_j\neq\emptyset$ and take two distinct points $\bm{x}$ and $\bm{y}$ in $F_{ij}$. By the definition \eqref{eq:pd} of the cells $P_i$ and $P_j$ we have $\left|\bm{x}-\bm{x}_i\right|^2-w_i=\left|\bm{x}-\bm{x}_j\right|^2-w_j$ and $\left|\bm{y}-\bm{x}_i\right|^2-w_i=\left|\bm{y}-\bm{x}_j\right|^2-w_j$. Subtracting leaves
\[
\left(\bm{x}-\bm{y}\right)\cdot\left(\bm{x}_i-\bm{x}_j\right)=0.
\]
This establishes that the boundaries between cells are planar, with the normal to $F_{ij}$ parallel to $\bm{x}_i-\bm{x}_j$.
A point on the plane can be found by writing $\bm{p}=\bm{x}_i+s\left(\bm{x}_j-\bm{x}_i\right)$ and noting that $\bm{p}\in F_{ij}$ implies
\[
\left|\bm{p}-\bm{x}_i\right|^2-w_i=\left|\bm{p}-\bm{x}_j\right|^2-w_j
\]
from which we deduce
\[
s=\frac{1}{2}+\frac{w_i-w_j}{2\left|\bm{x}_i-\bm{x}_j\right|^2},\quad\bm{p}=\frac{1}{2}\left(\bm{x}_i+\bm{x}_j\right)-\frac{\left(w_j-w_i\right)}{2\left|\bm{x}_j-\bm{x}_i\right|^2}\left(\bm{x}_j-\bm{x}_i\right).
\]
If we define the \emph{half-plane}
\begin{equation}
\label{eqn:halfplane}
H_{ij}=H\left(\bm{x}_i,w_i,\bm{x}_j,w_j\right)=\{\bm{x}\;:\;\left|\bm{x}-\bm{x}_i\right|^2-w_i\le\left|\bm{x}-\bm{x}_j\right|^2-w_j\}
\end{equation}
then
\[
P_i=\bigcap_{j=1,j\neq i}^{N} H\left(\bm{x}_i,w_i,\bm{x}_j,w_j\right).
\]
The observation that power cells can be expressed as intersections of half-planes, together with the explicit expressions for a point on the plane and for its normal, forms the basis of the \emph{half-plane} method for the computation of a power diagram \cite{Okabe2000}. The power cell is built iteratively according to Algorithm \ref{algo:halfplane}.
\begin{algorithm}
\caption{The half-plane intersection method, \cite{Okabe2000}.}
\label{algo:halfplane}
\begin{algorithmic}
\REQUIRE{The set $\Omega$ is a convex polyhedron with $n_\Omega$ faces, and there are $N$ generators $\{\bm{x}_i,w_i\}_{i=1}^N$.}
\FOR{Generator $(\bm{x}_i,w_i)$}
\STATE{$\tilde{P}_i=\Omega$}
\FOR{Generators $(\bm{x}_j,w_j)$, $j\neq i$}
\STATE{Calculate $H_{ij}$, given by \eqref{eqn:halfplane}}
\STATE{$\tilde{P}_i\leftarrow\tilde{P_i}\cap H_{ij}$}
\ENDFOR
\STATE{$P_i\leftarrow\tilde{P}_i$}
\RETURN{Power cell $P_i$}
\ENDFOR
\RETURN{The power diagram composed of at most $N$ power cells, $\{P_i\}$}
\end{algorithmic}
\end{algorithm}
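To make Algorithm \ref{algo:halfplane} concrete, the following Python sketch (written for illustration only, in two dimensions) constructs a single power cell by successively clipping a convex polygon $\Omega$ against the half-planes \eqref{eqn:halfplane}.
\begin{verbatim}
import numpy as np

def clip(poly, n, b):
    """Clip a convex polygon (list of 2D vertices in order) to the
    half-plane {x : n.x <= b} (one Sutherland-Hodgman pass)."""
    out = []
    m = len(poly)
    for k in range(m):
        p, q = poly[k], poly[(k + 1) % m]
        dp, dq = np.dot(n, p) - b, np.dot(n, q) - b
        if dp <= 0:
            out.append(p)
        if dp * dq < 0:  # the edge from p to q crosses the boundary line
            out.append(p + (dp / (dp - dq)) * (q - p))
    return out

def power_cell(i, X, w, domain):
    """P_i = domain intersected with all H_ij (domain: convex polygon)."""
    cell = [np.asarray(v, dtype=float) for v in domain]
    for j in range(len(X)):
        if j == i or not cell:
            continue
        # H_ij rewritten as n.x <= b, with n = 2(x_j - x_i)
        n = 2.0 * (X[j] - X[i])
        b = np.dot(X[j], X[j]) - np.dot(X[i], X[i]) + w[i] - w[j]
        cell = clip(cell, n, b)
    return cell

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
X = np.array([[0.25, 0.5], [0.75, 0.5]])
w = np.array([0.0, 0.1])
print(power_cell(0, X, w, square))  # boundary at x = 0.4, shifted towards x_0
\end{verbatim}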
The na\"ive half-plane method sets the cell $\tilde{P}_i=\Omega$ initially, and following repeated intersections with half-planes $H_{ij}$ forms the power cell $P_i$. As discussed in \cite{Okabe2000} for Voronoi diagrams, the construction of each cell requires $N-1$ half-plane intersections and the number of operations in each intersection depends upon the number of faces of the cell $\tilde{P}_i$ (we must check whether the boundary of the new half-plane intersects with any of the faces of $\tilde{P}_i$). At worst, each half-plane intersection increases the number of faces by $1$. If initially the cell has $n_\Omega$ faces, then the total number of checks is at most $n_\Omega+\left(n_\Omega+1\right)+\ldots+(n_\Omega+(N-2))=(N-2)n_\Omega+(N-1)(N-2)/2=O(N^2)$. The intersections must be performed to create each cell so the overall time complexity of this method is \emph{at worst} $O(N^3)$ and is usually $O(N^2)$.
Once the power cells are obtained, the centroid and the mass of cell $P_i$ can be determined by quadrature, or in the special case of constant $\rho$ can be calculated explicitly given the vertices of the cell (see Appendix \ref{append:poly}). These quantities are needed to evaluate the energy and to perform a step of the generalized Lloyd algorithm.
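For constant $\rho$ this explicit computation amounts to the standard shoelace formulas, as in the following Python sketch (cf.\ Appendix \ref{append:poly}).
\begin{verbatim}
import numpy as np

def mass_and_centroid(poly, rho0=1.0):
    """Mass and centroid of a polygon with constant density rho0.
    poly: vertices (x, y) ordered counter-clockwise."""
    v = np.asarray(poly, dtype=float)
    x, y = v[:, 0], v[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = 0.5 * np.sum(cross)
    cx = np.sum((x + xn) * cross) / (6.0 * area)
    cy = np.sum((y + yn) * cross) / (6.0 * area)
    return rho0 * area, np.array([cx, cy])

# unit square: mass 1, centroid (0.5, 0.5)
print(mass_and_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))
\end{verbatim}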
\subsection{Lifting method}\label{sec:lift}
A faster method for the computation of the power diagram is given in \cite{Aurenhammer1987}, in which the generators $\{\bm{x}_i,w_i\}_{i=1}^N$ are lifted into $\mathbb{R}^{d+1}$. Given a generator $\left(\ensuremath{\bm{x}},w\right)$, where $\ensuremath{\bm{x}}$ has components $x_j$, $j=1,\ldots,d$, the lifted generator is the vector in $\mathbb{R}^{d+1}$ with components $(x_1,x_2,\ldots, x_d,z)$ where $z=|\ensuremath{\bm{x}}|^2-w$. In the power diagram computation the \emph{lower convex hull} of the lifted generators is found, giving rise to a regular triangulation of the generators. The $j$-faces of the triangulation (for example in two dimensions the $0$-faces are the generators, the $1$-faces are the edges and the $2$-faces are the triangles) are then transformed into $(d-j)$-faces via a \emph{polar map}. The result is that the triangulation formed by the lower convex hull of the lifted generators is transformed into the power diagram of the generators. The expensive step is the computation of the lower convex hull of a set of points in $\mathbb{R}^{d+1}$. When $d=2$, convex hull algorithms with complexity $O(N\log N)$ can be used.
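For $d=2$ the lifting construction is easily reproduced with standard software; the following Python sketch uses SciPy's convex hull routine (a Qhull wrapper) to obtain the regular triangulation dual to the power diagram, leaving out the final polar map.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def regular_triangulation(X, w):
    """Lift (x_i, w_i) to (x_i, |x_i|^2 - w_i) in R^3, take the convex
    hull and keep the lower facets (outward normal with negative last
    component); these triangles are dual to the power diagram."""
    X = np.asarray(X, dtype=float)
    z = np.sum(X**2, axis=1) - np.asarray(w, dtype=float)
    hull = ConvexHull(np.column_stack([X, z]))
    lower = hull.equations[:, 2] < 0   # facet normals (a, b, c, offset)
    return hull.simplices[lower]

rng = np.random.default_rng(0)
X, w = rng.random((20, 2)), 0.01 * rng.random(20)
print(regular_triangulation(X, w))
\end{verbatim}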
\subsection{Other implementation issues}\label{sec:otherimp}
It is worth noting that Algorithm \ref{algo:genLloyd} converges to local minima and a strategy must be adopted to find global minima. In our simulations
we start with a large number of random initial configurations, apply Algorithm \ref{algo:genLloyd} and periodically sort the results. We then continue using Algorithm \ref{algo:genLloyd} on a subset of configurations that are the lowest energy states. In this way we search for global minima, although we cannot guarantee to find them with this heuristic method.
When using constant $\rho$ the results of Appendix \ref{append:poly} allow fast computation of the integrals required. When using non-constant $\rho$ we employ quadrature: the cells are triangulated and each triangle is mapped to a reference triangle on which an $n$-point quadrature rule (we use $n=31$) is applied.
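The quadrature step can be sketched as follows (in Python, with a simple three-point rule standing in for the 31-point rule used in our computations).
\begin{verbatim}
import numpy as np

# three-point rule on the reference triangle (0,0), (1,0), (0,1);
# weights sum to the reference area 1/2 and the rule is exact for
# polynomials of degree 2
REF_PTS = np.array([[1/6, 1/6], [2/3, 1/6], [1/6, 2/3]])
REF_WTS = np.array([1/6, 1/6, 1/6])

def integrate_on_triangle(f, a, b, c):
    """Approximate the integral of f over the triangle with vertices a, b, c."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    J = np.column_stack([b - a, c - a])        # Jacobian of the affine map
    pts = a + REF_PTS @ J.T                    # mapped quadrature points
    vals = np.array([f(p) for p in pts])
    return abs(np.linalg.det(J)) * np.sum(REF_WTS * vals)

rho = lambda p: 1.0 + p[0]                     # an example non-constant density
print(integrate_on_triangle(rho, (0, 0), (1, 0), (0, 1)))   # exact value: 2/3
\end{verbatim}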
\section{Illustrations and Applications}
\label{sec:illus}
In this section we implement the algorithm in two and three dimensions. We use crystallization and optimal location problems to illustrate the typical flatness and non-convexity of the energy landscape and the rate of convergence of the algorithm. We finish in \S\ref{Subsec:3D} with a more serious application, where we use the algorithm to test a conjecture about the optimality of the BCC lattice for a crystallization problem in three dimensions.
\subsection{Non-convexity and flatness of energy landscape}
\label{subsec:nonconvex}
In this section we look for critical points of the two-dimensional block copolymer energy from \S\ref{Subsubsec: block copolymer}:
\begin{equation}
\label{eqn:bc}
E \left( \{\ensuremath{\bm{x}}_i,w_i\} \right) = \sum_{i=1}^N \left\{ \lambda \sqrt{m_i} + \int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2 \,d \ensuremath{\bm{x}} \right\}
\end{equation}
where $\ensuremath{\bm{x}}_i \in \Omega = [0,1]^2$.
This example first appeared in \cite{BournePeletierRoper}.
It is the special case of \eqref{eqn:energy} with $\rho=1$, $f(m)=\lambda \sqrt{m}$, where $\lambda > 0$ is a parameter representing the strength of the repulsion between the two phases of the block copolymer. The scaling of the energy suggests that the optimal value of $N$ scales like $\lambda^{-\frac{2}{3}}$. Figure \ref{fig:flatness} shows local minimizers of $E$ for $\lambda=0.005$.
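Given the vertices of a power cell and its generator, both contributions to \eqref{eqn:bc} can be evaluated exactly for $\rho=1$; the following Python sketch (for illustration, using a fan triangulation and the standard exact triangle moments, cf.\ Appendix \ref{append:poly}) evaluates the energy of a single cell.
\begin{verbatim}
import numpy as np

def cell_energy(poly, xi, lam):
    """lambda*sqrt(m_i) + int_{P_i} |x - x_i|^2 dx for one polygonal cell,
    with rho = 1. The polygon (vertices in counter-clockwise order) is fanned
    into triangles; for a triangle with vertices a, b, c (relative to x_i),
    int_T |x|^2 dx = (area/6)*(a.a + b.b + c.c + a.b + b.c + a.c)."""
    v = np.asarray(poly, dtype=float) - np.asarray(xi, dtype=float)
    mass, second = 0.0, 0.0
    for k in range(1, len(v) - 1):
        a, b, c = v[0], v[k], v[k + 1]
        area = 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
        mass += area
        second += (area/6.0) * (a @ a + b @ b + c @ c + a @ b + b @ c + a @ c)
    return lam * np.sqrt(mass) + second

# square cell of side 0.5 centred on its generator:
# m_i = 0.25 and int |x - x_i|^2 dx = 1/96
sq = [(-0.25, -0.25), (0.25, -0.25), (0.25, 0.25), (-0.25, 0.25)]
print(cell_energy(sq, (0.0, 0.0), lam=0.005), 0.005*0.5 + 1/96)
\end{verbatim}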
\begin{figure}[h!]
\includegraphics[width=\textwidth]{l0_005_sqrt_000}
\caption{\label{fig:flatness} Flatness of energy landscape: Some local minimizers of the energy \eqref{eqn:bc} for $\lambda=0.005$. The polygons are the power cells $P_i$ and the points are the generators $\ensuremath{\bm{x}}_i$. The weights $w_i$ are not shown. The shading corresponds to the number of sides of the cells.}
\end{figure}
We believe that the top-left configuration is a global minimizer. The configurations shown were generated using $25,000$ random initial conditions to probe the non-convex energy landscape. The energy has infinitely many critical points, e.g., every centroidal Voronoi tessellation of $[0,1]^2$ with cells of equal area (such as the checkerboard configuration) is a critical point. The flatness of the energy landscape can be seen from the energy values in Figure \ref{fig:flatness}.
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{large_tri_1000.pdf}
\includegraphics[width=0.5\textwidth]{large_rand_1000.pdf}
\caption{\label{fig:flatness_large} Two local minimizers of the energy \eqref{eqn:bc} for $\lambda=10^{-5}$ with $N=1037$ in both cases. In the first case the cell generators were initially arranged in a triangular lattice, in the second case they were distributed randomly.}
\end{figure}
As $\lambda$ decreases it becomes harder to find global minimizers. Figure \ref{fig:flatness_large} shows two local minimizers for $\lambda=10^{-5}$. The figure on the left was obtained by using the triangular lattice as an initial condition. It was proved in \cite{BournePeletierTheil} that the triangular lattice is optimal in the limit $\lambda \to 0$. The figure on the right was obtained with a random initial condition. The `grains' of hexagonal tiling resemble grains in metals. This suggests that energies of the form \eqref{eqn:energy} could be used to simulate material microstructure, for example to produce Representative Volume Elements for finite element simulations \cite{Harrison}.
\subsection{Convergence rate}
In this section we study the rate of convergence of the algorithm to critical points of the energy \eqref{eqn:bc} with $\lambda=0.005$.
Figure \ref{fig:convergence}
shows the logarithm of the approximate error of the energy plotted against the number of iterations $n$ for three simulations with random initial conditions. The initial number of generators was $N=6, 10, 25$ and there was no elimination of generators throughout the simulations. The approximate error was computed using the value of the energy at the final iteration. The graph shows that the energy converges linearly, meaning that the error at the $n$--th iteration $\varepsilon_n$ satisfies $\varepsilon_{n+1}/\varepsilon_n \to r$, where $r \in (0,1)$ is the rate of convergence. We observe that the convergence becomes slower as the number of generators increases, with $r \sim 1- \frac{C}{N}$ for some constant $C$.
In \cite{Du2006} it was found that for the classical Lloyd algorithm with $\rho=1$ in one dimension the rate of convergence of the generators (rather than the energy) is approximately $1-1/(4\pi^2N^2)$. This was found from the spectrum of the derivative of the Lloyd map. In principle the rate of convergence of the generalized Lloyd algorithm could be found using the derivatives given in Proposition \ref{prop:DLloyd}.
We believe that region ($\star$) in the figure is the result of the Lloyd iterates passing close to a saddle point of the energy on the way to a local minimum.
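The rate $r$ in Fig.~\ref{fig:convergence} was obtained by fitting straight lines to the data; a minimal version of such an estimate (illustrative only) is the following Python sketch.
\begin{verbatim}
import numpy as np

def convergence_rate(energies):
    """Estimate the linear convergence rate r from a sequence of energy
    values: fit a line to log(error) vs iteration, with the error measured
    against the final (converged) energy value."""
    E = np.asarray(energies, dtype=float)
    err = E[:-1] - E[-1]
    n = np.arange(len(err))[err > 0]
    slope, _ = np.polyfit(n, np.log(err[err > 0]), 1)
    return np.exp(slope)       # err_n ~ C * r**n, so slope = log(r)

# toy check with a synthetic linearly converging sequence, r = 0.9
print(convergence_rate(1.0 + 0.9**np.arange(200)))
\end{verbatim}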
\begin{figure}[h!]
\includegraphics{convergence}
\caption{\label{fig:convergence} Rate of convergence of the generalized Lloyd algorithm to critical points of the energy \eqref{eqn:bc} with $\lambda=0.005$: Approximate error of the energy against the number of iterations on semi-log axes for three simulations with random initial conditions. The initial number of generators was $N=6,10,25$ and there was no elimination of generators throughout the simulations. We see that the algorithm converges linearly. The rate $r$ was computed by fitting straight lines to the data.}
\end{figure}
\subsection{An optimal location problem with non-constant $\rho$}
\label{Subsec: Non-const}
In the block copolymer example in the previous sections we had $\rho=1$. In an optimal location problem $\rho$ need not be uniform and might represent population density. The term $f(m)$ represents the cost of building or running a facility to serve $m$ individuals. The function $f$ is concave, which represents an economy of scale.
A particular case of interest would be to determine where to locate government agencies (stations) which people must visit at some rate (for example, a trip to the passport office). Somewhat artificially we may propose that the cost per person of a trip of length $l$ is $\tilde{c} \,c(l/L)$ where $L$ is a representative distance, $\tilde{c}$ is a constant with units of cost per person, and $c$ is a non-dimensional cost function. Let $\{P_i\}_{i=1}^N$ be a power diagram with generators $\{ \ensuremath{\bm{x}}_i,w_i\}$. We assume that if a person belongs to power cell $P_i$, then they must use the station located at $\ensuremath{\bm{x}}_i$, and that they visit the station $\omega$ times per year. Then the cost per year $C$ to the people travelling to the locations $\{\ensuremath{\bm{x}}_i\}_{i=1}^N$ is
$$C= \tilde{c} \omega \sum_{i=1}^N \int_{P_i} c\left(\frac{|\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|}{L}\right)\rho\left(\ensuremath{\bm{x}}\right)\,d\ensuremath{\bm{x}}.$$
Suppose that the cost per year of running a station that serves $m$ individuals is $s f(m/M)$ where $M$ is a characteristic number of people, $s$ has units of cost per year, and $f$ is a non-dimensional cost function. Using $c(x)=x^2$ we obtain
$$\sum_{i=1}^N \left\{ sf\left(\frac{m_i}{M}\right)+\frac{\tilde{c}\omega}{L^2}\int_{P_i}|\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2\rho\left(\ensuremath{\bm{x}}\right)\,d\ensuremath{\bm{x}} \right\}$$
which represents the combined cost per year of running the stations and the travel costs of the users. This cost must be minimised. By rescaling we obtain the energy
\[
E(\{ \ensuremath{\bm{x}}_i, w_i \}) = \sum_{i=1}^N \left\{ \lambda f(m_i) + \int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2\rho\left(\ensuremath{\bm{x}}\right)\,d\ensuremath{\bm{x}} \right\}.
\]
The parameter $\lambda$ is a measure of the cost of running a station compared to the cost incurred by the individuals using the station; small values of $\lambda$ represent a station that is low in cost to run and large values of $\lambda$ represent a station that is high in cost to run (perhaps because of infrequent visits by its users). As a concrete example we take European population data covering metropolitan France and ask where to site stations for different choices of $\lambda$, using the function $f(m)=- m \log m$. Figure \ref{fig:france_example} shows the results for two different choices of $\lambda$, loosely corresponding to departments/regions and their centres of administration and extents.
\begin{figure}[h!]
\begin{center}
\setlength\fboxsep{0pt}
\setlength\fboxrule{0.5pt}
\fbox{\includegraphics[width=0.75\textwidth]{france_testlog_l0_01_a_9000}}
\fbox{\includegraphics[width=0.75\textwidth]{france_testlog_l20_a_9000}}
\caption{\label{fig:france_example} An example illustrating the algorithm applied to the convex hull of metropolitan France (recall that we require the region $\Omega$ to be convex). The colour represents the population density $\rho$; red is high density and green is low density (population data obtained from Eurostat). Areas in which no data was available have been assigned a population density of zero, for example in the seas and oceans. The figures were produced using Algorithm \ref{algo:genLloyd} with an initial condition of 5000 random generators and 9000 iterations. The top figure has $\lambda=0.01$ and the bottom figure has $\lambda=20$. In the particular instances here, the final local minimum of the energy has 128 cells when $\lambda=0.01$ and 21 cells when $\lambda=20$.}
\end{center}
\end{figure}
\subsection{An example in three dimensions: crystallization}
\label{Subsec:3D}
In this section we implement the generalized Lloyd algorithm in three dimensions for the block copolymer model from \S\ref{Subsubsec: block copolymer}:
\begin{equation}
\label{eqn:bc3D}
E \left( \{\ensuremath{\bm{x}}_i,w_i\} \right) = \sum_{i=1}^N \left\{ \lambda m_i^{\frac{2}{3}} + \int_{P_i} |\ensuremath{\bm{x}}-\ensuremath{\bm{x}}_i|^2 \,d \ensuremath{\bm{x}} \right\}
\end{equation}
where $\ensuremath{\bm{x}}_i \in \Omega \subset \mathbb{R}^3$. It was conjectured in \cite{BournePeletierRoper} that global minimizers of $E$ tend to a body-centred cubic (BCC) lattice as $\lambda \to 0$, meaning that the set $\{ \ensuremath{\bm{x}}_i \}$ tends to a BCC lattice and $w_i \to 0$ for all $i$. This conjecture was motivated by block copolymer experiments and by results for the special case $\lambda=0$, $N \to \infty$: in \cite{Barnes1983} it was proved that the BCC lattice has asymptotically the lowest energy amongst all lattices and \cite{Du2005} provided numerical evidence that it has asymptotically the lowest energy amongst all possible configurations $\{ \ensuremath{\bm{x}}_i \}$. Simulations in \cite[Sec.~4.1]{BournePeletierRoper} in two dimensions demonstrate that global minimizers of $E$ for $\lambda>0$ (centroidal power diagrams) are close to global minimizers of $E$ for $\lambda=0$ (centroidal Voronoi tessellations). In this section we give further numerical support for the conjecture.
It is computationally expensive to study the limit $\lambda \to 0$ in three dimensions since the optimal number of generators grows like $N \sim \lambda^{-1}$.
Instead we restrict our attention to the case where $\Omega$ is a periodic cube. Figure \ref{fig:bcccell} shows a representative Voronoi cell generated by the BCC lattice, which is a truncated octahedron (Kelvin proposed a deformed version of the truncated octahedron as a candidate for three-dimensional foams).
If $N$ is chosen appropriately, then $N$ of these cells fit exactly into the periodic cube and there is no boundary layer. If $\{\ensuremath{\bm{z}}_i\}_{i=1}^N$ are the centres of these cells, then $\{ \ensuremath{\bm{z}}_i, 0 \}_{i=1}^N$ is a critical point of $E$ (because it is a centroidal Voronoi tessellation and all cells have the same mass). We study its stability using the generalized Lloyd algorithm.
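For reference, such a BCC initial condition can be generated as in the following Python sketch (illustrative only; the production runs use the C++ implementation described below).
\begin{verbatim}
import numpy as np

def bcc_generators(n, L=1.0):
    """Generators of a BCC lattice filling the periodic cube [0, L)^3:
    a simple cubic lattice of spacing L/n plus a copy shifted by half a
    lattice vector, i.e. N = 2 n^3 generators; all weights are zero."""
    a = L / n
    grid = np.array([(i, j, k) for i in range(n)
                     for j in range(n) for k in range(n)], dtype=float) * a
    X = np.vstack([grid, grid + 0.5 * a])
    return X, np.zeros(len(X))

X, w = bcc_generators(4)      # N = 128, as in the simulations reported below
print(X.shape, w.shape)
\end{verbatim}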
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{bcc_cell}
\end{center}
\caption{\label{fig:bcccell}A Voronoi cell generated by the BCC lattice.}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth]{BCC_c_0999_plot}
\includegraphics[width=0.4\textwidth]{new_bcc_a_1999_plot}\\
\includegraphics[width=0.4\textwidth]{BCC_c_0000_plot}
\includegraphics[width=0.4\textwidth]{new_bcc_a_0000_plot}
\end{center}
\caption{\label{fig:bcc_perturb}
Numerical evidence that the BCC lattice is a local minimizer of \eqref{eqn:bc3D} for $\lambda=10^{-3}$.
Left column: the initial condition (top-left) is a perturbation of the BCC lattice (a perturbation of both the generator locations and weights). The perturbation is small enough that the algorithm converges to the BCC lattice (bottom-left, configuration after $2000$ iterations). Right column: the initial condition (top-right) is a large enough perturbation of the BCC lattice to cause the algorithm to converge to a different local minimum (bottom-right, configuration after $2000$ iterations). In all figures $N=128$.}
\end{figure}
We implemented the algorithm in C++ using the Voro++ software library to compute power diagrams \cite{Rycroft2009}. We found that the BCC lattice is stable under small perturbations; if the initial condition $(\ensuremath{\bm{X}}^0, \ensuremath{\bm{w}}^0)$ is taken to be a small enough perturbation of the BCC lattice, then the generalized Lloyd algorithm converges back to the BCC lattice. See Figure \ref{fig:bcc_perturb}, left column (the initial configuration is top-left, the final configuration is bottom-left). This suggests that the BCC lattice is at least a local minimizer of the energy. Under larger perturbations the Lloyd algorithm converges to a different critical point with a higher energy. See Figure \ref{fig:bcc_perturb}, right column (the initial configuration is top-right, the final configuration is bottom-right). We also tested the energy of the BCC lattice against the energy of several common lattices and found that the BCC lattice had the lower energy in each case. Due to the non-convexity and flatness of the energy landscape, however, the conjecture requires a more detailed numerical study.
While we are used to thinking of a diffusion process in terms of the MSD (\ref{msd})
calculated as the spatial average of $x^2$ over the probability density function
$P(x,t)$, single particle tracking experiments typically provide few but long
individual time series $x(t)$. These are evaluated in terms of the time averaged
MSD \cite{pt}
\begin{equation}
\label{tamsd}
\overline{\delta^2(\Delta,t)}=\frac{1}{t-\Delta}\int_0^{t-\Delta}\Big(x(t'+\Delta)
-x(t')\Big)^2dt',
\end{equation}
where $t$ is the length of the time series (measurement time) and $\Delta$ the
lag time. In an ergodic system for sufficiently long $t$ ensemble and time averages
provide identical information, formally, $\langle x^2(\Delta)\rangle=\lim_{t\to
\infty}\overline{\delta^2(\Delta,t)}$, and $\overline{\delta^2}$ for different
trajectories are identical. For anomalous diffusion the behaviour of the MSD
(\ref{msd}) and the time averaged MSD (\ref{tamsd}) may be fundamentally different.
Simultaneously $\overline{\delta^2(\Delta,t)}$ for different trajectories may become
intrinsically irreproducible. This phenomenon of irreproducibility and disparity
$\langle x^2(\Delta)\rangle\neq\lim_{t\to\infty}\overline{\delta^2(\Delta,t)}$ is
usually called weak ergodicity breaking \cite{pt} and was discussed in detail for
the subdiffusive CTRW \cite{yonghe,lubelski,pccp,pnas,igor,pt}. In the following,
for simplicity we neglect the explicit dependence of $\overline{\delta^2}$ on the
measurement time $t$.
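In practice Eq.~(\ref{tamsd}) is evaluated from discretely sampled trajectories; a minimal Python estimator (the integral replaced by a mean over all windows of length $\Delta$) reads:
\begin{verbatim}
import numpy as np

def time_averaged_msd(x, n_lag):
    """Discrete estimate of the time averaged MSD for one trajectory.
    x: equally spaced samples x(0), x(dt), x(2*dt), ...
    n_lag: the lag time Delta expressed as an integer number of steps."""
    disp = x[n_lag:] - x[:-n_lag]
    return np.mean(disp**2)
\end{verbatim}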
For SBM the mean of the time averaged MSD over multiple trajectories $i$, $\left<
\overline{\delta^2(\Delta)}\right>=N^{-1}\sum_{i=1}^N\overline{\delta^2_i(\Delta)}$
can be derived from the Langevin equation (\ref{langevin}), yielding
\cite{fulinski1}
\begin{equation}
\label{tamsd_sbm_full}
\left<\overline{\delta^2(\Delta)}\right>=\frac{2K_{\alpha}t^{1+\alpha}}{
(\alpha+1)}\frac{\left[1-\left(\frac{\Delta}{t}\right)^{1+\alpha}-\left(1-
\frac{\Delta}{t}\right)^{1+\alpha}\right]}{t-\Delta}.
\end{equation}
In the limit $\Delta\ll t$, we recover the behaviour \cite{thiel}
\begin{equation}
\label{tamsd_sbm}
\left<\overline{\delta^2(\Delta)}\right>\sim2K_{\alpha}\frac{\Delta}{t^{
1-\alpha}},
\end{equation}
and when the lag time $\Delta$ approaches the measurement time $t$ the limiting
form
\begin{equation}
\left<\overline{\delta^2(\Delta)}\right>\sim2K_{\alpha}t^{\alpha}-
\frac{\alpha K_{\alpha}}{t^{1-\alpha}}(t-\Delta)+\frac{\alpha(\alpha-1)
K_{\alpha}}{3t^{2-\alpha}}(t-\Delta)^2
\end{equation}
describes a cusp around $\Delta=t$. Result (\ref{tamsd_sbm_full}) is important
to deduce the full behaviour of $\overline{\delta^2(\Delta)}$
when $\Delta$ approaches $t$, as shown in Fig.~\ref{fig_msd_cusp} in the Appendix.
\begin{figure}
\includegraphics[width=8cm]{fig1a.eps}\\[0.2cm]
\includegraphics[width=8cm]{fig1b.eps}
\caption{MSD and time averaged MSD for SBM with $\alpha=1/2$ (top) and $\alpha=
3/2$ (bottom). The simulations of the SBM-Langevin equation (\ref{langevin})
show excellent agreement with the analytical results, Eqs.~(\ref{msd}) and
(\ref{tamsd_sbm_full}). We also show results for
the time averaged MSD for 20 individual trajectories. Apart from the region where
the lag time $\Delta$ approaches the length $t$ of the time series and statistics
worsen, there is hardly any amplitude scatter between individual $\overline{\delta
^2}$.}
\label{fig_tamsd}
\end{figure}
In Fig.~\ref{fig_tamsd} we show results from simulations of the SBM process for
both sub- and superdiffusion, observing excellent agreement with the analytical
findings. We also see that the scatter between individual trajectories is very
small. Such scatter is characteristic of anomalous diffusion models and can
be used, e.g., to reliably distinguish FBM from subdiffusive CTRW processes
\cite{pccp,jae_jpa}. For CTRW subdiffusion we find pronounced scatter between
$\overline{\delta^2(\Delta)}$ from individual trajectories even in the limit of
extremely long trajectories $t\to\infty$ \cite{yonghe,pccp,pt}, while for FBM
the scatter vanishes for longer $t$, a characteristic of the ergodic nature of
FBM \cite{goychuk,deng,jae_jpa,pccp}. The amplitude scatter of individual
$\overline{\delta^2}$ can be characterised in terms of the dimensionless variable
$\xi=\overline{\delta^2(\Delta)}/\left<\overline{\delta^2(\Delta)}\right>$. If
$\xi$ has a narrow distribution around $\xi=1$ and the width decreases with
increasing $t$, the process is usually considered ergodic.
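Such simulations are straightforward to set up; the following Python sketch (assuming the standard SBM noise strength $\mathscr{K}(t)=\alpha K_{\alpha}t^{\alpha-1}$, consistent with the MSD (\ref{msd})) generates a set of trajectories and evaluates the scatter variable $\xi$ at a fixed lag time.
\begin{verbatim}
import numpy as np

def sbm_trajectory(alpha, K_alpha, t_max, dt, rng):
    """Euler scheme for the SBM Langevin equation with noise strength
    K(t) = alpha*K_alpha*t**(alpha-1), so that <x^2(t)> = 2*K_alpha*t**alpha.
    The diffusivity is evaluated at the midpoint of each step to avoid t = 0."""
    t = np.arange(int(t_max / dt)) * dt
    K = alpha * K_alpha * (t + 0.5 * dt)**(alpha - 1.0)
    dx = np.sqrt(2.0 * K * dt) * rng.standard_normal(len(t))
    return np.concatenate([[0.0], np.cumsum(dx)])

rng = np.random.default_rng(1)
alpha, K_alpha, t_max, dt, n_lag = 0.5, 1.0, 1.0e4, 1.0, 10
tamsd = []
for _ in range(20):
    x = sbm_trajectory(alpha, K_alpha, t_max, dt, rng)
    d = x[n_lag:] - x[:-n_lag]
    tamsd.append(np.mean(d**2))
tamsd = np.array(tamsd)
xi = tamsd / tamsd.mean()                       # scatter variable
Delta = n_lag * dt
print(tamsd.mean(), 2*K_alpha*Delta/t_max**(1-alpha), xi.std())
\end{verbatim}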
This width is characterised in terms of the ergodicity breaking parameter
$\mathrm{EB}=\langle\xi^2\rangle-\langle \xi\rangle^2$. For SBM it was found
\cite{thiel}
\begin{equation}
\label{sbm_eb}
\mathrm{EB}=\left\{\begin{array}{ll}4I_{\alpha}\left(\Delta/t\right)^{2\alpha},
&0<\alpha<1/2\\[0.2cm]
\frac{1}{3}(\Delta/t)\ln(t/\Delta),&\alpha=1/2\\[0.2cm]
\frac{4\alpha^2}{3(2\alpha-1)}\left(\Delta/t\right),&\alpha>1/2
\end{array}\right.
\end{equation}
with the integral $I_{\alpha}=\int_0^1dy\int_0^{\infty}dx[(x+1)^{\alpha}-
(x+y)^{\alpha}]^2$ \cite{thiel}. The $\mathrm{EB}$ parameter for SBM thus
clearly decays to zero for increasing $t$. For $\alpha=1$ we obtain the
known form $\mathrm{EB}=\frac{4}{3}\Delta/t$ for Brownian motion. The
$\Delta/t$ scaling also characterises FBM for $\alpha<3/2$ \cite{deng}. Our
analysis for SBM shows that the scatter distribution $\phi(\xi)$ is indeed narrow
and of Gaussian shape, albeit broader than the Gaussian form predicted for
FBM in Ref.~\cite{jae_jpa} (not shown). Its width decreases with the ratio $\Delta/t$ and
thus indicates a reproducible behaviour between individual trajectories.
The coexistence of the disparity $\overline{\delta^2(\Delta)}
\neq\langle x^2(\Delta)\rangle$ and an asymptotically vanishing ergodicity breaking
parameter, $\lim_{t\to\infty}\mathrm{EB}=0$, defines a new class of non-ergodic processes,
whose detailed mathematical nature remains to be examined.
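For completeness, the prefactor $I_{\alpha}$ can be evaluated numerically, e.g.\ with the following sketch (the $x$ integral is truncated, which is harmless for $\alpha$ well below $1/2$ because the integrand decays as $x^{2\alpha-2}$).
\begin{verbatim}
import numpy as np
from scipy import integrate

def I_alpha(alpha, x_max=1.0e5):
    """I_alpha = int_0^1 dy int_0^infty dx [(x+1)^alpha - (x+y)^alpha]^2,
    with the x-integral truncated at x_max (tail of order x_max**(2*alpha-1))."""
    inner = lambda y: integrate.quad(
        lambda x: ((x + 1.0)**alpha - (x + y)**alpha)**2,
        0.0, x_max, limit=200)[0]
    return integrate.quad(inner, 0.0, 1.0, limit=200)[0]

alpha, ratio = 0.3, 1.0e-3        # ratio = Delta/t
print(4.0 * I_alpha(alpha) * ratio**(2*alpha))   # leading-order EB estimate
\end{verbatim}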
From the analysis so far for free motion we can see that the SBM process has a
truly split personality. Its PDF is identical to that of free FBM.
In contrast to the ergodic behaviour $\overline{\delta^2(\Delta)}=\langle x^2(
\Delta)\rangle$ of FBM for sufficiently long $t$ \cite{deng,pccp}, however, SBM
exhibits weak ergodicity breaking, as demonstrated in
Eq.~(\ref{tamsd_sbm}). This scaling form $\overline{\delta^2}\simeq\Delta/t^{1-
\alpha}$ exactly matches the result for
CTRW subdiffusion \cite{pt,yonghe,pccp} or diffusion processes with space-dependent
diffusivity \cite{andrey,andrey_pccp}. Unlike the weakly non-ergodic dynamics of
the latter two, for SBM the fluctuations around the mean $\left<\overline{\delta^2
(\Delta)}\right>$ measured by the distribution $\phi(\xi)$ are narrow and decrease
with longer $t$. We now show that the behaviour of confined SBM is also
unconventional.
\section{Confined SBM}
An important physical property of a stochastic process is
its response to external forces or spatial confinement. From an experimental point
of view, this is of relevance to tracer particles moving in the confines of
cellular compartments or when the particle is tracked with the help of optical
tweezers, which exert a Hookean restoring force on the particle \cite{lene}.
We study the paradigmatic case of an harmonic potential $V(x)=\frac{1}{2}
kx^2$, for which the motion is governed by the Fokker-Planck equation
\begin{equation}
\label{fokker}
\frac{\partial}{\partial t}P(x,t)=\frac{\partial}{\partial x}\left(kx+\mathscr{K}
(t)\frac{\partial}{\partial x}\right)P(x,t),
\end{equation}
which follows directly from the Langevin equation (\ref{langevin}) with the
additional Hookean force term $-kx(t)$. Note that in the absence of the external
force ($k=0$) this equation is that of Batchelor \cite{batchelor}. For the MSD
we find an exact expression in terms of the Kummer function $M$ \cite{abramowitz},
\begin{equation}
\langle x^2(t)\rangle=2K_{\alpha}t^{\alpha}e^{-2kt}M(\alpha,1+\alpha,2kt),
\end{equation}
from which we obtain the initial free subdiffusion (\ref{msd}) for $t\ll1/k$ and
the scaling form
\begin{equation}
\label{msd_harm}
\langle x^2(t)\rangle\sim\alpha K_{\alpha}k^{-1}t^{\alpha-1}
\end{equation}
in the long time limit $t\gg1/k$. For subdiffusion ($0<\alpha<1$) the MSD thus
has a power-law decay to zero, while for superdiffusion ($1<\alpha<2$) it
grows indefinitely. This
counterintuitive behaviour of SBM is due to the fact that the time dependence of
the diffusivity $\mathscr{K}(t)$ corresponds to a time dependent temperature
\cite{fulinsky} or a time dependent viscosity. Thus SBM is an inherently non-stationary
process that never reaches a stationary state. Fig.~\ref{harmonic} corroborates this
analytical result with simulations based directly on the Langevin equation
(\ref{langevin}): after the free anomalous diffusion behaviour of the MSD for a
particle starting at the vertex of the potential, we observe a turnover to a
power-law behaviour with negative or positive scaling exponent.
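A minimal Euler scheme for these simulations (again assuming $\mathscr{K}(t)=\alpha K_{\alpha}t^{\alpha-1}$) is sketched below in Python; the long time law (\ref{msd_harm}) serves as a consistency check.
\begin{verbatim}
import numpy as np

def confined_sbm_ensemble(alpha, K_alpha, k, t_max, dt, n_traj, rng):
    """Euler scheme for the SBM Langevin equation with the Hookean force -k*x
    and noise strength K(t) = alpha*K_alpha*t**(alpha-1); all trajectories are
    propagated simultaneously and the final positions are returned."""
    x = np.zeros(n_traj)
    for i in range(int(t_max / dt)):
        K = alpha * K_alpha * ((i + 0.5) * dt)**(alpha - 1.0)
        x += -k * x * dt + np.sqrt(2.0 * K * dt) * rng.standard_normal(n_traj)
    return x

rng = np.random.default_rng(2)
alpha, K_alpha, k, t_max, dt = 0.5, 1.0, 0.1, 1.0e3, 0.05
x = confined_sbm_ensemble(alpha, K_alpha, k, t_max, dt, 1000, rng)
print(np.mean(x**2), alpha * K_alpha * t_max**(alpha - 1.0) / k)
\end{verbatim}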
\begin{figure}
\includegraphics[width=8cm]{fig2a.eps}\\[0.2cm]
\includegraphics[width=8cm]{fig2b.eps}
\caption{MSD $\langle x^2(t)\rangle$ and time averaged MSD $\left<\overline{
\delta^2(\Delta)}\right>$ for $\alpha=1/2$ (top) and $\alpha=3/2$ (bottom) in an
harmonic potential. In each case we consider the force constants $k=0.01$ and
$k=0.1$. Convergence of the corresponding
ensemble and time averages at $t=5\times10^4$ can be shown numerically.
The analytical curve shown is based on the full solution for $\left<\overline{
\delta^2(\Delta)}\right>$ provided in the Appendix.}
\label{harmonic}
\end{figure}
The result for the time averaged MSD is similarly remarkable. As shown in
Fig.~\ref{harmonic} for SBM simulations based on the Langevin equation
(\ref{langevin}) with the Hookean forcing, it exhibits a pronounced apparent
plateau for lag times $\Delta\gg1/k$ for both sub- and superdiffusive SBM.
This behaviour is in excellent agreement with the full analytical solution
(\ref{tamsd_harm_full}) provided in the Appendix in terms of Kummer functions.
Taking the limit $\Delta\ll t$ we obtain
\begin{equation}
\label{tamsd_harm}
\left<\overline{\delta^2(\Delta)}\right>\sim\frac{K_{\alpha}}{k}\left[\frac{t^{
\alpha}-\Delta^{\alpha}}{t-\Delta}+(t-\Delta)^{\alpha-1}\left(1-2e^{-k\Delta}
\right)\right],
\end{equation}
which indeed features the extended plateau and provides a good approximation
for $\Delta\gg1/k$. Note that when $\Delta$ approaches the measurement time $t$
the time averaged MSD $\left<\overline{\delta^2(\Delta)}\right>$ converges to the
value of the MSD $\langle x^2(t)\rangle$ due to the pole in expression
(\ref{tamsd}). This behaviour is analysed in more detail in
Fig.~\ref{fig_msd_cusp} in the Appendix.
\begin{figure}
\hspace*{-0.8cm}\includegraphics[width=9.8cm]{fig3.eps}
\caption{Trajectories of SBM in an harmonic potential, for $\alpha=1/2$ (left),
$\alpha=1$ (centre), and $\alpha=3/2$ (right) for three different confinement
strengths $k$. The fluctuations are stationary only in the Brownian case
$\alpha=1$.}
\label{harmtraj}
\end{figure}
Fig.~\ref{harmtraj} analyses this behaviour in the harmonic potential further by
showing the time series $x(t)$ for subdiffusive SBM, Brownian motion, and
superdiffusive SBM. Indeed we see that for subdiffusive and superdiffusive confined
SBM the fluctuations continue to decrease or increase with time, respectively, while those in
the Brownian limit become stationary. This is the direct effect of the time
dependent temperature or viscosity encoded in the SBM diffusivity $\mathscr{K}(t)$.
For systems that are stationary or thermalised, this behaviour clearly
underlines the unsuitability of SBM for the description of anomalous diffusion.
The behaviour of SBM dynamics under confinement is the central result of our
study. The continued temporal decay or increase of the MSD that we obtained for
SBM is in stark contrast to the behaviour of confined CTRW subdiffusion, for which
the MSD
$\langle x^2\rangle$ saturates to the thermal plateau, while the time averaged
MSD continues to grow in the power-law form $\overline{\delta^2}\simeq(\Delta/t)
^{1-\alpha}$ up until the lag time $\Delta$ approaches $t$ \cite{pnas,pccp}. It
strongly differs from FBM, which relaxes to a plateau for both the MSD and the
time averaged MSD, and for which only a transient disparity between $\langle x^2
\rangle$ and $\overline{\delta^2}$ exists \cite{lene1}. Finally, SBM is also at
variance with heterogeneous
diffusion processes with a space dependent diffusion coefficient, which relax to a plateau
for both $\langle x^2\rangle$ and $\overline{\delta^2}$ \cite{andrey_confined}.
\section{Discussion}
As we showed, SBM is a truly paradoxical stochastic process. Somewhat similar to
a chameleon, each time we compare SBM with other established anomalous diffusion
processes we find certain similarities. Looking at the sum of its features,
however, SBM is a truly independent process with a range of remarkable properties.
For free SBM the probability density function $P(x,t)$ equals that of FBM, despite
the fact that both processes are governed by different stochastic (Langevin)
equations. SBM's time averaged MSD scales in the same way as those of the weakly
non-ergodic CTRW subdiffusion and diffusion processes with space-dependent
diffusivity. Despite this weakly non-ergodic character of the mean time
averaged MSD $\left<\overline{\delta^2}\right>$, the amplitude scatter between
the time averaged MSD $\overline{\delta^2}$ of individual realisations is
small and the distribution has a Gaussian shape, as otherwise observed for the
ergodic FBM or for Brownian motion of finite time $t$. In that sense SBM
represents a new class of non-ergodic processes. The most striking feature of
SBM is, however, its strongly non-stationary behaviour under confinement. Instead
of relaxing to a plateau the MSD acquires a power-law decay or growth
mirroring a continuously decreasing or increasing temperature encoded in
SBM's time dependent diffusion coefficient $\mathscr{K}(t)$.
SBM is thus at variance with the currently available experimental observations in
complex liquids using single particle tracking by video microscopy or by optical
tweezers tracking of single submicron particles. Indeed, the free anomalous diffusion
data garnered so far was classified into FBM-like and CTRW-like behaviour, or
combinations thereof \cite{weber,weigel,tabei}. Note that recent analysis tools have
also corroborated an FBM nature of data from fluorescence correlation spectroscopy
experiments \cite{szymanski}. For optical tweezers tracking of lipid
granules in different complex liquids the time averaged MSD either
continues to grow under confinement, reflecting the non-ergodic features of CTRWs
\cite{lene,pt}, or it relaxes to a plateau value mirroring an ergodic dynamics
\cite{lene1,naturephot}.
How can FBM and SBM have the same distribution (\ref{prop}) in free space? Simply
put, for FBM the viscoelastic properties of the environment effecting the
long-range correlations of FBM lead to a frequency dependent response of the
environment to a disturbance, while the material properties remain unchanged in
time. For free FBM this gives rise to the subdiffusive MSD
(\ref{msd}), in which the time dependent diffusivity effectively encodes the
frequency dependent response of the viscoelastic environment. The distribution
(\ref{prop}) for FBM and its description in terms of a Fokker-Planck equation of
the form (\ref{fokker}) are treacherous, however. This can already be seen when
we use the PDF (\ref{prop}) or the dynamics equation (\ref{fokker}) to
calculate the first passage behaviour. This procedure leads to the wrong result
\cite{epl}, and the full analytical description of FBM in the presence of
boundaries remains elusive, a difficulty imposed by the highly correlated
fractional Gaussian noise driving its Langevin equation. In
contrast, SBM, according to its Langevin description (\ref{langevin}),
is driven by uncorrelated noise but the environment itself changes as a
function of time, effecting an extremely non-stationary process. The equivalence
of the PDF (\ref{prop}) of both processes is thus simply due to the fact that a
Gaussian PDF is completely defined by its second moment, the MSD (\ref{msd}).
With its interesting behaviour that is so different from the other conventional
anomalous diffusion models, SBM may indeed
have relevant applications to weakly coupled or fully adiabatic systems, as
well as to active systems, in which the existence of a temperature is not
meaningful. In particular, in the superdiffusive case SBM or analogous dynamics
with other increasing effective diffusivities may represent an alternative
approach to active Brownian motion \cite{ebeling}.
The difference between SBM and other processes can also be seen in Fig.~\ref{trajs}.
For the subdiffusive case a sample trajectory of SBM is compared to that of FBM and
the noisy CTRW \cite{noisy}, in which the pure subdiffusive CTRW is superimposed
with Ornstein-Uhlenbeck noise to accommodate the thermal noise of the environment
observed in experiments \cite{wong}. We see that SBM with its Gaussian probability
density function and uncorrelated driving noise appears more similar to the noisy
CTRW motion, although
it has a more pronounced tendency to reach larger amplitudes than the CTRW with
its waiting time periods. The fluctuations of SBM are dramatically smaller than
those of the anticorrelated FBM, which also shows the largest amplitudes in
$x(t)$. For superdiffusion, we compare SBM with a noisy version of the L{\'e}vy
walk process \cite{jeocheme} and again with FBM. Note the vastly different size
of the window shown on the ordinate. This time, the persistent FBM shows
pronouncedly larger excursions and a distinct reduction of the noise compared to
SBM. The shape of $x(t)$ of the latter does not appear to be qualitatively
different from the
subdiffusive case. The noisy CTRW is fundamentally different from both SBM and FBM.
Despite their disparate physical nature and their dissimilarity when setting
one against the other in a direct comparison, we note that generally it is
difficult to identify a stochastic process solely from the appearance of the
recorded trajectory.
\begin{figure}
\includegraphics[width=8cm]{fig4a.eps}\\[0.2cm]
\includegraphics[width=8cm]{fig4b.eps}
\caption{Individual trajectories of anomalous stochastic processes: FBM, SBM,
and noisy CTRW (nCTRW), for subdiffusion with $\alpha=1/2$ (top) and
superdiffusion with $\alpha=3/2$ (bottom).}
\label{trajs}
\end{figure}
From this discussion it is obvious that it is one thing to have at our disposal a
handy and easy-to-use description of anomalous diffusion processes. SBM with
its Gaussian and uncorrelated nature appears deceptively simple and is therefore
easy to implement in numerical analyses and descriptions such as
diffusion limited reactions. However, the other side of the coin is the
physical relevance of a model process. Using SBM with its time-dependent diffusion
coefficient violates the physical setting in typical experiments, in which the
system is held at approximately constant temperature, and its predictions are at
odds with actual observations. The unphysical nature of SBM for the kind of processes we
have in mind is most obvious for confined motion. In contrast, FBM and fractional
Langevin equation motion are ergodic processes with the physical background of an
effective particle motion in a viscoelastic multi-body environment. FBM and
fractional Langevin equation motion exhibit a transient disparity between the MSD
and the time averaged MSD under confinement \cite{lene1}. Weakly non-ergodic
CTRW processes emerge due to immobilisation periods imprinted on the
dynamics by the structure of the environment or by binding events to the environment.
Finally, diffusion processes with space dependent diffusivities
arise naturally in non-homogeneous systems such as biological cells or subsurface
aquifers.
To extract physically meaningful information from anomalous diffusion
data one needs to have some physical insight into the observed process and properly
analyse the data using complementary tools \cite{pt,tabei,weigel,garini,lene1,
noisy,vincent,radons,berez} before settling for the appropriate physical model.
We finally note that it will be interesting to compare the predictions of the SBM
model with that of active processes in viscoelastic environments \cite{active,
active1}.
\acknowledgments
We acknowledge funding from the Academy of Finland within the FiDiPro scheme.\\
\begin{appendix}
\section*{Appendix}
The covariance of the position for SBM in an harmonic potential follows from the
Langevin equation (\ref{langevin}). Our result is
\begin{equation}
\label{cova}
\left< x(t_1)x(t_2)\right>=2K_{\alpha}t_1^{\alpha}e^{-k(t_1+t_2)}M(\alpha,1+
\alpha,2kt_1)
\end{equation}
for $t_1<t_2$. In absence of the confinement ($k=0$) the covariance reduces to
the MSD (\ref{msd}) and in the Brownian limit $\alpha=1$ we recover the familiar
covariance
\begin{equation}
\left< x(t_1)x(t_2)\right>=\frac{K_1}{k}\left(e^{-k(t_2-t_1)}-e^{-k(t_1+t_2)}
\right).
\end{equation}
From Eq.~(\ref{cova}) we derive the exact result for the time averaged MSD
(\ref{tamsd}),
\begin{eqnarray}
\nonumber
\left<\overline{\delta^2(\Delta)}\right>&=&\frac{2K_{\alpha}}{1+\alpha}
(t-\Delta)^{\alpha}e^{-2k(t-\Delta)}\\
\nonumber
&&\times M(1+\alpha,2+\alpha,2k[t-\Delta])\\
\nonumber
&&\hspace*{-1.2cm}
+\frac{2K_{\alpha}}{(1+\alpha)(t-\Delta)}\Big[t^{1+\alpha}e^{-2kt}M(1+\alpha,
2+\alpha,2kt)\\
\nonumber
&&\hspace*{0.6cm}-\Delta^{1+\alpha}e^{-2k\Delta}M(1+\alpha,2+\alpha,2k\Delta)\Big]\\
\nonumber
&&\hspace*{-1.2cm}
-\frac{4K_{\alpha}}{1+\alpha}(t-\Delta)^{\alpha}e^{-2kt+k\Delta}\\
&&\times M(1+\alpha,2+\alpha,2k[t-\Delta]).
\label{tamsd_harm_full}
\end{eqnarray}
To derive the limit (\ref{tamsd_harm}) of the time averaged MSD for confined SBM
in the long time limit $t\to\infty$ we use the property
\begin{equation}
M(\alpha,1+\alpha,x)\sim\frac{\Gamma(1+\alpha)}{\Gamma(\alpha)}\times\frac{e^x}{x}
\end{equation}
of the Kummer function \cite{abramowitz}.
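For completeness we note that Eq.~(\ref{tamsd_harm_full}) is straightforward to evaluate numerically, since Kummer's function $M(a,b,x)$ is available in standard libraries (in SciPy as \texttt{hyp1f1}). The sketch below is for illustration only; the function name is ours, and for large arguments the products $e^{-x}M(a,b,x)$ should be replaced by the asymptotic form quoted above (with $\alpha$ replaced by $1+\alpha$) to avoid overflow.
\begin{verbatim}
import numpy as np
from scipy.special import hyp1f1   # Kummer's confluent hypergeometric M(a, b, x)

def tamsd_confined_sbm(Delta, t, alpha, K_alpha, k):
    """Exact time averaged MSD of confined SBM (the expression above);
    requires 0 < Delta < t and moderate 2*k*t (otherwise use the
    asymptotic form of M(a, b, x) for large x)."""
    a, b = 1.0 + alpha, 2.0 + alpha
    s = t - Delta
    out  = 2.0*K_alpha/(1.0 + alpha)*s**alpha*np.exp(-2.0*k*s)*hyp1f1(a, b, 2.0*k*s)
    out += 2.0*K_alpha/((1.0 + alpha)*s)*(
              t**a*np.exp(-2.0*k*t)*hyp1f1(a, b, 2.0*k*t)
            - Delta**a*np.exp(-2.0*k*Delta)*hyp1f1(a, b, 2.0*k*Delta))
    out -= 4.0*K_alpha/(1.0 + alpha)*s**alpha*np.exp(-2.0*k*t + k*Delta)*hyp1f1(a, b, 2.0*k*s)
    return out
\end{verbatim}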
\begin{figure}
\includegraphics[width=8cm]{fig5a.eps}\\[0.2cm]
\includegraphics[width=8cm]{fig5b.eps}
\caption{Convergence of the time averaged MSD $\left<\overline{\delta^2(\Delta)}
\right>$ to the MSD $\langle x^2(\Delta)\rangle$ at $\Delta\to t$ for $\alpha=1/2$
and $\alpha=3/2$. The measurement time $t=10^5$. Top: Free SBM. For the
superdiffusive case the dashed black line of unity slope aids in demonstrating
the non-linear behaviour of $\left<\overline{\delta^2(\Delta)}\right>$. Bottom: SBM
in an harmonic potential with $k=0.1$. Note the different ranges of the abscissa
in the two cases. Due to the extremely small window needed in the subdiffusive case
to illustrate the cusp at $\Delta\to t$, and the relatively large variation of
$\left<\overline{\delta^2(\Delta)}\right>$, the MSD $\langle x^2(\Delta)\rangle$
appears almost constant.}
\label{fig_msd_cusp}
\end{figure}
In Fig.~\ref{fig_msd_cusp} we illustrate the convergence of the time averaged
MSD $\left<\overline{\delta^2(\Delta)}\right>$ to the value of the MSD $\langle
x^2(t)\rangle$ in the limit $\Delta\to t$.
\end{appendix}
\section{INTRODUCTION}
The study of laser-plasma interactions in the ultra-relativistic regime is an active frontier of research with close ties to nonlinear optics and high-energy particle physics \cite{Mourou-2006}. In the strong-field limit, approaching the Schwinger critical field in the particle rest frame, quantum electrodynamical effects govern the interaction and novel processes emerge as the vacuum itself begins to respond nonlinearly \cite{Mourou-2006, Bulanov-2015}. Next-generation laser systems like the Extreme Light Infrastructure (ELI) facilities in Europe \cite{Tanaka-2020} will operate in the intensity range of $10^{23}$--$10^{24}\textrm{ W/cm}^2$ and enable production of QED-dominated plasmas in a controlled laboratory environment. Therefore, processes that are suppressed exponentially for field strengths well-below the Schwinger limit, like Breit-Wheeler pair creation and nonlinear Compton scattering, will become accessible for experimental study due to the high particle energies that can be obtained \cite{Gu-2019}.
\par
In recent years, the particle-in-cell (PIC) approach to numerical plasma modeling \cite{Dawson-1983, Arber-2015} has been extended to account for quantum effects. The implementation of QED-PIC routines is reviewed extensively in, e.g., \cite{Sokolov-2011, Ridgers-2014, Gonoskov-2015} and references contained therein. They have also been successful at providing insights into processes like energetic photon emission \cite{Nakamura-2012, Ridgers-2012}, pair creation \cite{Ridgers-2012, Vranic-2018}, and quantum radiation-reaction \cite{Vranic-2014, Vranic-2016}. In this paper, we introduce a new set of PIC diagnostic methods to study the $\gamma$-ray emission mechanism in particular, paying close attention to nonlinear Compton scattering and magneto-bremsstrahlung radiation. One of the primary results, consistent with other numerical studies \cite{Lezhnin-2018, Arefiev-2020}, is that an unprecedented laser-to-$\gamma$ energy conversion efficiency on the order of 10\% is attainable with field strengths that are just within reach of modern experimental facilities. Our latest results highlight the precise location of bulk $\gamma$-ray production within the plasma target as well as the interaction between the ultra-relativistic particles and the scattered and quasi-static field components. The techniques are applied in the post-processing stage, and so they are usable with any PIC code capable of exporting reduced-domain pseudo-particle and field data. Our results were obtained with the code EPOCH \cite{Arber-2015} and verified with OSIRIS \cite{osiris-paper} for consistency and extendability.
\par
In what follows, we will briefly summarize the theory and numerical implementation of photon-emitting processes. Then we will discuss the model plasma target and simulation parameters, followed by an outline of each diagnostic method, the resulting data, and its significance. We will conclude with general remarks and discuss future directions.
\subsection{Photon Emission: Theory and Numerical Implementation}
The primary mechanisms responsible for high-energy photon emission are nonlinear Compton scattering and magneto-bremsstrahlung radiation. The high photon flux from ultra-intense laser fields, characterized by the normalized amplitude $a_0=e\sqrt{-A_\mu A^\mu}/mc^2$ being significantly greater than unity, leads to nonlinear Compton scattering in which an electron absorbs $n\gg1$ laser photons $\omega_0$ before emitting a photon $\omega_\gamma$ of its own \cite{DiPiazza-2011}. (Standard notation is adopted throughout; $e$ and $m$ are the electron charge magnitude and mass, $c$ is the speed of light, and $A^\mu$ is the field 4-vector potential.) In cases where the emitted photon energy is comparable to the electron rest mass, the electron also experiences a non-negligible recoil which must be taken into account. For dense plasmas, the current produced by the collective motion of ultra-relativistic electrons can drive a quasi-static magnetic field strong enough to induce magneto-bremsstrahlung radiation. The dynamical behavior of electrons in such a scenario is referred to as a forward sliding-swing acceleration, and it arises in situations where the plasma current is significantly greater than the Alfv\'{e}n limit $I_A=\beta\gamma mc^3/e$, where $\beta=|\vec{v}\,|/c$ is the normalized velocity, and $\gamma=1/\sqrt{1-\beta^2}$ is the Lorentz factor. The quasi-static field strength is reduced from the MT-scale oscillating field by an order of magnitude, and the resulting transverse confinement of electrons leads to highly-collimated $\gamma$-ray beams \cite{Arefiev-2020}.
\par
In terms of numerical implementation, analytic expressions for the $S$-matrix elements are used to derive an emission probability rate and the stochastic nature of the process is captured by a Monte Carlo algorithm \cite{Ridgers-2014}. The overall emission probability, accounting for all processes through a generic field term, depends on the parameters:
\begin{equation}
\eta = \frac{e\hbar}{m^3c^4}|F_{\mu\nu}p^\nu|\quad\textrm{and}\quad \chi = \frac{e\hbar^2}{m^3c^4}|F_{\mu\nu}k^\nu|,
\label{eqn:QuantumParams}
\end{equation}
which characterize the strength of nonlinear QED effects for electrons and photons, respectively. Here, $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field tensor, and $p^\nu$ ($\hbar k^\nu$) is the 4-momentum of the electron (photon). Alternatively, $\eta = E_\textsc{rf}/E_\textsc{s}$, where $E_\textsc{rf}\sim\gamma E$ is the field magnitude in the electron rest frame, and $E_\textsc{s}=m^2c^3/\hbar e\simeq1.32\times10^{18}\textrm{ V/m}$ is the Schwinger critical field strength. These parameters serve as a measure of energy in the joint particle-field system, though it must be remembered that the 4-vector magnitudes result in an angular dependence between the field and momentum components. The quantum parameters are maximized if the particle momentum is anti-parallel to the field wavevector $\vec{k}$, and minimized if they are parallel: $\eta_{\textsc{em}} = \gamma(E \pm \beta B)/E_\textsc{s}$, and similarly for $\chi$. Typical values of $\eta$ in our simulations are of order $10^{-1}$ and below.
\par
In our case, we are interested in $\eta$ as a measure of the photon emission probability. By recording pseudo-particle momentum data and interpolating the field to their positions, we can determine which components result in high-$\eta$ electrons as the interaction unfolds. Given $\eta$, the spin-averaged emission rate of photons about a $d\chi$ interval is:
\begin{equation}
\frac{d^2N}{d\chi\,dt} = \sqrt{3}\frac{mc^2}{\hbar}\alpha b\frac{F(\eta,\chi)}{\chi},\quad F(\eta,\chi) = \frac{4\chi^2}{\eta^2}yK_{2/3}(y) + \Big(1-\frac{2\chi}{\eta}\Big)\,y\int_{y}^{\infty}dt\,K_{5/3}(t),
\label{eqn:DiffEmRate}
\end{equation}
where $\alpha=e^2/\hbar c$ is the fine-structure constant, $b=E/E_\textsc{s}$ is the normalized field amplitude, and $F(\eta,\chi)$ is the synchrotron function which depends on $y=4\chi/[3\eta(\eta-2\chi)]$, and on modified Bessel functions of the second kind $K_n(x)$ \cite{Ridgers-2014, DiPiazza-2011, LL-QED}. By Eq.~(\ref{eqn:DiffEmRate}), the most probable emitted-photon quantum parameter increases with $\eta$, though the emission rate varies inversely with $\chi$ so that high-energy photons are relatively rare (see Fig.~\ref{fig:SynchFunc}).
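For reference, the synchrotron function entering Eq.~(\ref{eqn:DiffEmRate}) is easily evaluated with standard special-function libraries. The short Python sketch below (illustrative only; the function name and the use of SciPy quadrature are choices made here, and it is not the routine implemented in the PIC codes) computes $F(\eta,\chi)$ for $0<\chi<\eta/2$; the differential rate then follows upon multiplication by $\sqrt{3}\,(mc^2/\hbar)\,\alpha b/\chi$.
\begin{verbatim}
import numpy as np
from scipy.special import kv        # modified Bessel function of the second kind
from scipy.integrate import quad

def synchrotron_F(eta, chi):
    """Quantum synchrotron function F(eta, chi); valid for 0 < chi < eta/2."""
    y = 4.0*chi/(3.0*eta*(eta - 2.0*chi))
    tail, _ = quad(lambda s: kv(5.0/3.0, s), y, np.inf)   # integral of K_{5/3}
    return (4.0*chi**2/eta**2)*y*kv(2.0/3.0, y) + (1.0 - 2.0*chi/eta)*y*tail

# example: an eta = 0.1 electron emitting a photon with chi = 0.01
print(synchrotron_F(0.1, 0.01))
\end{verbatim}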
\begin{figure}[b]
\centerline{\includegraphics[height=0.185\textheight]{synch_emission.png}}
\caption{(a) Differential photon emission rate. (b) Synchrotron function vs. electron $\eta$ and photon $\chi$ quantum parameters.}
\label{fig:SynchFunc}
\end{figure}
\clearpage
\section{RESULTS}
\subsection{Simulation Setup}
The setup used to study $\gamma$-ray emission is shown in Fig.~\ref{fig:SimulationSetup}. A 30 fs, 20 PW pulse ($I\simeq 2\times10^{23}\textrm{ W/cm}^2$) polarized in the transverse direction is incident on a plasma target whose density increases exponentially to $20n_{cr}$ over a distance of $\ell_\parallel=40\textrm{ }\mu\textrm{m}$. Here, $n_{cr}=m\omega_0^2/4\pi e^2$ is the critical density associated with the laser frequency $\omega_0$. In this case, the vacuum wavelength is $\lambda=1\textrm{ }\mu\textrm{m}$, the Gaussian beam waist is $w=5\lambda$, and $n_{cr}\simeq1.12\times10^{21}\,\textrm{ cm}^{-3}$. Moreover, the over-dense target slab is $\ell_S=10\textrm{ }\mu\textrm{m}$ thick. These values are optimal for a high laser-to-$\gamma$ energy conversion efficiency (30\%) and they are based on the multi-parametric studies of Lezhnin \textit{et al.} \cite{Lezhnin-2018}. While our diagnostics are applicable to 2D simulations, we also performed full-3D runs on a $90\lambda\textrm{ (L) }\times20\lambda\textrm{ (W) }\times20\lambda\textrm{ (H)}$ grid utilizing 12 $\textrm{nodes}/\lambda$ and 6 pseudo-particles per cell. The threshold energy required for photon emission was set to $mc^2/10\simeq51.1\textrm{ keV}$. This is a purely computational limit used to prevent the simulation from overflowing with pseudo-particles and encountering memory issues. We also chose only to record where photons were emitted, neglecting their propagation and any subsequent interactions. Of course, if pair-creation or other QED processes were of interest this would not be valid.
\vspace{2em}
\begin{figure}[h]
\centerline{\includegraphics[width=0.3\textwidth]{simulation_setup.png}}
\caption{Illustration of the plasma profile used to study $\gamma$-ray emission.}
\label{fig:SimulationSetup}
\end{figure}
\vfill
\subsection{Field Analysis}
The first technique we developed to study photon emission involves isolating the reflected and quasi-static components of the combined laser-plasma field. We recorded 1D on-axis lineouts of each spatial component at a rate of 50 attoseconds (20 PHz). By performing a 2D Fourier transform on the resulting coordinate-time data $(t,x_1)\to(\omega,k_1)$ and filtering regions in frequency-space where $\omega/k_1>0$, we isolated the reflected field [Fig.~\ref{fig:FilteredLineoutData}(a)]. (All throughout, subscript 1 denotes the longitudinal pulse-propagation axis, subscript 2 the transverse axis, and subscript 3 the axis orthogonal to 1 and 2.) A similar filtering method about the zero-frequency signal was employed to capture the slowly-varying nature of $B_3$, the orthogonal magnetic field [Fig.~\ref{fig:FilteredLineoutData}(b)].
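In practice the filtering amounts to masking regions of the two-dimensional Fourier transform of the recorded lineout data. The following Python sketch illustrates the procedure (the function name and the use of NumPy's FFT are choices made here, and the sign convention defining the reflected quadrants depends on the FFT conventions, so the inequality may need to be flipped); the quasi-static window matches the one quoted in the caption of Fig.~\ref{fig:FilteredLineoutData}.
\begin{verbatim}
import numpy as np

def split_field(F, dt, dx, omega0, k0):
    """F[i, j]: field at time t_i and position x1_j.  Returns the reflected
    component (omega/k1 > 0 quadrants) and the quasi-static component
    (|omega| <= omega0/4 and |k1| <= k0/4)."""
    nt, nx = F.shape
    omega = 2.0*np.pi*np.fft.fftfreq(nt, d=dt)[:, None]
    k1    = 2.0*np.pi*np.fft.fftfreq(nx, d=dx)[None, :]
    Fhat  = np.fft.fft2(F)
    reflected   = np.real(np.fft.ifft2(np.where(omega*k1 > 0.0, Fhat, 0.0)))
    quasistatic = np.real(np.fft.ifft2(np.where((np.abs(omega) <= omega0/4.0) &
                                                (np.abs(k1) <= k0/4.0), Fhat, 0.0)))
    return reflected, quasistatic
\end{verbatim}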
\par
From these diagrams, we see that there is a small amount of reflection occurring within the pre-plasma which increases as the pulse approaches the dense target slab. This is expected, as the plasma is partially transparent to such an ultra-relativistic pulse. The reflected field amplitude is an order of magnitude smaller than the incident component, and it is strongly attenuated within the slab beyond $t\simeq 300$ fs. Moreover, it has a total lifetime of about 100 fs. As we will discuss below, the attenuation aligns spatially and temporally with $\gamma$-ray production in the plasma, i.e., energy from the reflected field is depleted to generate $\gamma$-rays. In light of this and the theory discussed in the previous section, we believe that optimizing the field reflection of the interior region of the target will increase the overall conversion efficiency and peak $\gamma$ energy. This may motivate future multi-parametric studies. Lastly, the quasi-static magnetic field has a maximum of approximately 0.1 MT which is capable of inducing magneto-bremsstrahlung radiation, though the dominant mechanism here is nonlinear Compton scattering due to the pulse intensity and target geometry under consideration \cite{Lezhnin-2018}. Transverse electron confinement is less pronounced for a slab geometry than, e.g., a plasma waveguide structure \cite{Arefiev-2020}, which reduces the efficiency of the magneto-bremsstrahlung process.
\clearpage
\begin{figure}[t]
\hspace{-2em}\centerline{\includegraphics[height=0.2\textheight]{r10v2-E2R-B3S-avg1um.png}}
\caption{Plots in $t$-$x_1$ space of (a) the reflected transverse electric field $E_2$ and (b) the quasi-static azimuthal magnetic field $B_3$. The spatial profile of the plasma target is overlaid in white. The quasi-static field was obtained by inverse-transforming the region: $-\omega_0/4\leq\omega\leq\omega_0/4$ ($0\pm75\textrm{ THz}$) and $-|\mathbf{k}|/4\leq k_1\leq |\mathbf{k}|/4$ ($0\pm\pi/2\textrm{ }\mu\textrm{m}^{-1}$).}
\label{fig:FilteredLineoutData}
\end{figure}
\begin{figure}[h!]
\hspace{-2em}\centerline{\includegraphics[height=0.2\textheight]{r10v2-overlays.png}}
\caption{Overlay diagrams in $t$-$x_1$ space of various quantities, including the full and reflected transverse electric field, the average electron energy, and the photon energy-density. (a) The pulse bores through the dense slab and accelerates electrons to GeV-scale energies. (b) Interaction between the reflected field and energetic electrons leads to $\gamma$-ray emission via nonlinear Compton scattering. (b) (inset) Energy vs.~time of the forward/reflected field and $\gamma$ rays. Note: The exponentially-decaying pre-plasma is not visible on this scale, nor is any photon emission occurring within the pre-plasma or beyond the slab.}
\label{fig:OverlayDiagram}
\end{figure}
\begin{figure}[h!]
\hspace{-2em}\centerline{\includegraphics[height=0.2\textheight]{r10-eta-logavg.png}}
\caption{Average electron quantum parameter at (a) the instant of maximum $\gamma$-ray emission, $t=250$ fs and (b) $100$ fs later.}
\label{fig:QParamAvg}
\end{figure}
\subsection{Overlay Diagrams}
Extending the lineout diagnostic, particle data including average energy and number density was recorded. Overlaying these quantities in $t$-$x_1$ space provides valuable and unique insights into this process. In Fig.~\ref{fig:OverlayDiagram}(a), lineouts of the full transverse electric field, electron density, and average electron energy are overlaid together. This diagram illustrates how the pulse expels electrons from the target completely, accelerating them to sub-GeV energies. Within the target slab, the transverse field distorts and decreases in strength owing to the strong reflection noted above. In Fig.~\ref{fig:OverlayDiagram}(b), the average photon energy-density is plotted together with the electron density. It must be remembered that our simulations were performed with photon propagation turned off, in order to allow us to observe the precise location of $\gamma$-ray emission. The highest energy photons are emitted within the target slab, though $\gamma$-ray radiation also occurs beyond it as the relativistic electrons continue interacting with the field. Lastly, Fig.~\ref{fig:OverlayDiagram}(b) also shows the reflected transverse field from Fig.~\ref{fig:FilteredLineoutData}(a). Here, we see the spatiotemporal alignment between the two key events: rapid decay of the reflected field and emission of $\gamma$-rays.
\subsection{Quantum Parameter Tracking}
Another diagnostic we developed evaluates the electron quantum parameter [see Eq.~(\ref{eqn:QuantumParams})] for all $\sim$20 million pseudo-particles in the simulation (Fig.~\ref{fig:QParamAvg}). We record the pseudo-particle Lorentz factor and momentum components for $p^\nu$, and bicubic interpolation \cite{NumericalRecipes} is used to obtain the tensor components $F_{\mu\nu}$ at each pseudo-particle position. We also tracked individual electron trajectories over time. It was found that as an electron propagates through the dense target slab at $t\simeq250$ fs, it obtains a high quantum parameter on the order of $10^{-1}$ which aligns temporally with the instant of peak photon emission. (This value may not seem significant, but if one recalls the equivalent definition $\eta=E_\textsc{rf}/E_\textsc{s}$, where $E_\textsc{rf}$ is the field in the particle rest frame, then one sees that we are dealing with non-negligible fractions of the Schwinger field, i.e. the onset of nonlinear QED.) Upcoming work with our particle-tracking diagnostic involves determining the average radiated power for a group of closely-initialized electrons, and comparing it with other groups initialized in different regions. However, the dynamical behavior is highly nonlinear, and so consistency in radiated power between electron pseudo-particles initialized relatively close together may not be guaranteed. Lastly, Fig. \ref{fig:3DPlot} shows a three-dimensional view of the $\gamma$-flare, from which it is apparent that there are longitudinal columns where $\gamma$ production is the most efficient. This is due to a combination of relativistic self-focusing of the pulse and electron expulsion away from the central axis. With full photon dynamics included, the $\gamma$ flash propagates out in a spherically-symmetric manner but the flash-front has the highest energy photons.
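For concreteness, a minimal version of this diagnostic is sketched below in Python. The three-vector expression for $\eta$ is a commonly used rewriting of Eq.~(\ref{eqn:QuantumParams}) rather than a formula quoted in this work, and the helper names, the array layout, and the use of SciPy's bicubic spline are choices made here for illustration; the production diagnostic operates on the exported EPOCH/OSIRIS pseudo-particle data.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RectBivariateSpline

E_S = 1.32e18   # Schwinger field in V/m, as quoted in the text

def interp_to_particles(x1_grid, x2_grid, F_grid, x1_p, x2_p):
    """Bicubic interpolation of one field component onto particle positions."""
    return RectBivariateSpline(x1_grid, x2_grid, F_grid, kx=3, ky=3).ev(x1_p, x2_p)

def eta_electron(gamma, beta, E, cB):
    """Electron quantum parameter eta = E_RF/E_S.  E and cB are (N, 3) arrays in
    V/m (the magnetic field multiplied by c so both carry the same units); beta is
    the (N, 3) array of velocities v/c and gamma the (N,) array of Lorentz factors."""
    perp2 = np.sum((E + np.cross(beta, cB))**2, axis=-1)
    par2  = np.sum(beta*E, axis=-1)**2
    return gamma*np.sqrt(np.maximum(perp2 - par2, 0.0))/E_S
\end{verbatim}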
\vfill
\begin{figure}[h]
\centerline{\includegraphics[width=0.675\textwidth]{r1-1-t200fs-v2.png}}
\caption{(a) Isometric view of the plasma target and $\gamma$-ray energy density at the instant of maximum radiated power. (b) Particle energy spectrum. (c) Total $\gamma$ fluence in the direction of pulse propagation.}
\label{fig:3DPlot}
\end{figure}
\clearpage
\section{CONCLUSION}
The interaction between an ultra-relativistic field and an over-critical plasma target presents the opportunity to study nonlinear QED effects, such as intense $\gamma$-ray emission. Using particle-in-cell simulations, we have developed new techniques to analyze this process and discovered that the scattered field plays a central role in maximizing the photon emission probability. It then becomes important to understand how the energy transfer between the forward-propagating field, whose primary role includes accelerating particles to the required energy levels, and the back-scattered field, which induces photon emission, can be optimized. This may motivate future multi-parametric studies with hopes that any insights gained about the plasma target properties will guide experimental campaigns. Our main technique can also be used to isolate the quasi-static nature of any field component, which is important for photon emission studies based on the magneto-bremsstrahlung process. Unfortunately, it is limited in that information can only be extracted from narrow sections of the domain. In the near future, we may explore ways of extending this diagnostic. Finally, we observed longitudinal columns where $\gamma$-rays are generated without any pre-manufactured electron-confining structures in the target geometry. These energetic photons can be used for other studies in fundamental physics, such as relativistic photo-ionization and Breit-Wheeler pair creation.
\section{ACKNOWLEDGMENTS}
This work was supported by the US DOE Office of Science under Interagency Agreement number 89243018SSC000006 and the Directed Energy Society's Directed Energy Summer Internship program. This research was conducted while A. D. held an NRC Research Associateship award at the Naval Research Laboratory. Simulations were performed on the National Energy Research Scientific Computing (NERSC) center's Cori computing cluster.
\nocite{*}
\bibliographystyle{aac}%
\section{Introduction}\label{sec:introduction}
\begin{definition}\label{def:stratification}
Let $M$ be a variety. By a \textbf{stratification} $\mathcal{Y}_o$ of $M$, we mean a family of locally closed subvarieties indexed by a poset $\mathcal{Y}$ such that: $M=\bigsqcup_{X_o\in \mathcal{Y}_o}X_o$, $\overline{X_o}=\bigsqcup_{X_o^\prime\leq X_o} X_o^\prime.$
\end{definition}
\noindent Frequently we'll work with the closures of the pieces, called \emph{strata}, and we'll indicate this by writing $\mathcal{Y}$ instead of $\mathcal{Y}_o$.
\begin{definition}\label{def:bruhatlas}
(He-Knutson-Lu \cite{HKL}) Let $M$ be a manifold with a stratification $\mathcal{Y}$ whose minimal strata are points. A \textbf{Bruhat atlas} on $(M,\mathcal{Y})$ is the following data:
\begin{enumerate}
\item A Kac-Moody group $H$ with Borel subgroup $B_H$.
\item An open cover for $M$ consisting of open sets $U_f$ around the minimal strata $M=\bigcup_{f\in \mathcal{Y}_{\text{min}}} U_f.$
\item A ranked poset injection $w:\mathcal{Y}^{\text{opp}}\hookrightarrow W_H$ whose image is a union $\bigcup_{f\in \mathcal{Y}_{\text{min}}}[e,w(f)]$ of Bruhat intervals.
\item For $f\in \mathcal{Y}_{\text{min}}$, a stratified isomorphism
\[
c_f:U_f \stackrel{\sim}{\to} X_o^{w(f)} \subset H/B_H, \quad \text{where}\quad X_o^{w(f)}=\overline{Bw(f)B/B}.
\]
\end{enumerate}
\end{definition}
\noindent Examples of manifolds with Bruhat atlases include:
\begin{enumerate}
\item Grassmannians $Gr(k,n)$ with their positroid stratification, whose $H=\widehat{SL(n)}$ (Snider \cite{S}).
\item More generally, partial flag varieties $G/P$ with the stratification by projected Richardson varieties (for this stratification, see Knutson-Lam-Speyer \cite{KLS}) (He-Knutson-Lu \cite{HKL}).
\item Wonderful compactifications of groups (He-Knutson-Lu \cite{HKL}).
\end{enumerate}
\begin{definition}\label{def:eqvtbruhatlas}
Let $(M,\mathcal{Y})$ be a stratified manifold with an action of a torus $T_M$. An \textbf{equivariant Bruhat atlas} is a Bruhat atlas $(H,\{ c_f \}_{f\in\mathcal{Y}_{\text{min}}},w)$ and a map $T_M\hookrightarrow T_H$ such that
\begin{enumerate}
\item each of the chart maps $c_f$ is $T_M$-equivariant, and
\item there is a $T_M$-equivariant degeneration
\begin{align}
M\rightsquigarrow M^\prime &:= \bigcup_{f\in \mathcal{Y}_{\text{min}}} X^{w(f)} \label{eqn:degen}
\end{align}
of $M$ into a union of Schubert varieties, carrying the anticanonical line bundle on $M$ to the $\mathcal{O}(\rho)$ line bundle restricted from $H/B_H$.
\end{enumerate}
\end{definition}
\noindent When $M$ is a toric variety (as it will be in this paper) then (\ref{eqn:degen}) gives us a decomposition of $M$'s moment polytope into the moment polytopes of the $X^{w(f)}$'s, e.g.:
\begin{figure}[h]
\centering
\includegraphics{eqvtbruhatlases.png}\caption{Equivariant Bruhat atlases}\label{fig:eqvtbruhatlases}
\end{figure}
\noindent The first polytope in Figure \ref{fig:eqvtbruhatlases} (the moment polytope of $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$) is subdivided into four smaller squares, which represent $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$ degenerating into a union of four $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$'s. In the second polytope, $\mathbb{C}\mathbb{P}^2$ is degenerating to a union of three Schubert varieties, each isomorphic to the first Hirzebruch surface. The labels of the vertices are coming from the map $w$ of definition \ref{def:bruhatlas}, and the groups $H$ are $(SL_2(\mathbb{C}))^4$ and $\widehat{SL_2(\mathbb{C})}$, respectively.
\subsection{Kazhdan-Lusztig atlases}\label{sec:KLatlas}
For inductive classification purposes, we want to determine what sort of structure a stratum $Z\in\mathcal{Y}$ inherits from the Bruhat atlas on $M$. Each $Z$ has a stratification $\restr{\mathcal{Y}}{Z}$, and an open cover
\[
\bigcup_{f\in \mathcal{Y}_{\text{min}}} U_f\cap Z,\quad\text{with}\quad U_f\cap Z \cong X^{w(f)}_o\cap X_{w(Z)}
\]
compatible with the stratification, since by (\ref{def:bruhatlas}), the isomorphism $U_f\cong X^{w(f)}_o$ is stratified. Therefore $Z$ has an ``atlas'' composed of Kazhdan-Lusztig varieties (defined as $X^{w}_{v,o}=X^{w}_o\cap X_{v}$). This leads us to the following definition:
\begin{definition}\label{def:KLatlas}(He-Knutson-Lu \cite{HKL}) A \textbf{Kazhdan-Lusztig atlas} on a stratified $T$-variety $(V,\mathcal{Y})$ with $V^T$ finite is:
\begin{enumerate}
\item A Kac-Moody group $H$.
\item A ranked poset injection $w_M:\mathcal{Y}^{\text{opp}}\to W_H$ whose image is $\bigcup_{f\in V^T} [w(V),w(f)]$.
\item An open cover $V=\bigcup U_f$ consisting, around each $f\in V^T$ of an affine variety $U_f$ and a choice of a $T$-equivariant stratified isomorphism
\[
U_f \cong X^{w(f)}_o\cap X_{w(V)}.
\]
In particular, $V$ and $U_f$ need not be smooth.
\item A $T_V$-equivariant degeneration $V \rightsquigarrow V^\prime = \bigcup_{f\in V^T} X^{w(f)}\cap X_{w(V)}\label{eqn:kldegen}$.
\end{enumerate}
\end{definition}
\subsection{Toric surfaces with Bruhat atlases}
We are interested in the classification of manifolds with equivariant Bruhat atlases. We consider toric manifolds as a starting point. Putting an equivariant Bruhat atlas on a toric manifold $M$ would mean associating an element $w(f)\in W_H$ to each face of $M$'s moment polytope (provided we figure out what the group $H$ should be). Obviously there are restrictions to this; for instance, each of the vertex labels must have length equal to $n=\text{dim}(M)$.
The simplest nontrivial case of a toric manifold is a toric surface, and in this case, the moment polytope of $M$ is just a convex polygon.
\begin{theorem}
The only toric surfaces admitting equivariant Bruhat atlases are $\mathbb{C}\mathbb{P}^2$ and $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$, as in Figure \ref{fig:eqvtbruhatlases}.
\end{theorem}
\begin{proof}
This will follow from our main theorem (see section \ref{sec:listofatlases}).
\end{proof}
\noindent As we mentioned before, our strategy is to try to classify smooth toric surfaces admitting a Kazhdan-Lusztig atlas, and use this knowledge to answer questions about Bruhat atlases on higher-dimensional manifolds. Our main results are:
\begin{itemize}
\item The classification (in \ref{sec:pieces}) of \textbf{Richardson quadrilaterals}, the moment polytopes of $2$-dimensional Richardson varieties in Kac-Moody flag manifolds with respect to their $\mathcal{O}(\rho)$ line bundles.
\item The classification of all lattice polygons with decompositions into the moment polytopes of Richardsons appearing in simply laced Kac-Moody groups.
\item In the simply laced case: whenever possible, a description of a Kazhdan-Lusztig atlas on each of the smooth toric varieties with lattice polygons as above.
\item In the simply laced case, embeddings of the degenerations (\ref{eqn:kldegen}) in $H/B_H$ for atlases with $H$ of finite type.
\end{itemize}
\section{Pizzas}
\subsection{Motivation and definition}
Since any smooth lattice polygon in $\mathbb{Z}^2\subset\mathbb{R}^2$ has a smooth toric variety associated to it, we only need to look at which varieties degenerate to our desired unions of Schubert varieties in various flag manifolds. Since the degeneration preserves symplectic volume and is $T_M$-equivariant, the moment polytope of $M^\prime$ will be a subdivision of that of $M$. Moreover, the newly formed pieces have to be moment polytopes of Richardson surfaces in $H/B_H$, hence they have to be quadrilaterals (since height $2$ intervals in Bruhat order are diamonds). So the moment polytope $\Phi^\prime(M^\prime)$ will look like a sliced up pizza, e.g.
\begin{figure}[H]
\centering
\includegraphics[width=5cm]{pentagon1.png}
\end{figure}
Motivated by the above figure, we define
\begin{definition}\label{def:latticepizza} A \textbf{lattice pizza} is a lattice polygon with a ``star-shaped'' subdivision into Richardson quadrilaterals, which will be referred to as \textbf{pizza pieces}\footnote{It is regrettable that we have to avoid the obvious name ``pizza slice'' for these, but slicing already has a standard meaning in mathematics.} (listed in section \ref{sec:pieces}).
\end{definition}
\begin{definition}\label{def:pizza} A \textbf{pizza} is an equivalence class of lattice pizzas under the following equivalence relation: Two lattice pizzas are equivalent if there is a stratification-preserving homeomorphism such that, up to a global $GL(2,\mathbb{Z})$-transformation, the angles between the edges match simultaneously.
\end{definition}
We want to see when we can glue a list of pieces into a pizza. If we $SL(2,\mathbb{Z})$-shear a piece to be in a position where the center of the pizza is the piece's bottom right corner, and the edges adjacent to the bottom left corner of the piece are in the position of the standard basis in $\mathbb{R}^2$ (we will refer to this as the \textbf{standard position}), and compare this to how the next piece has to be glued on, we can associate a matrix in $GL(2,\mathbb{R})_+$ to a piece. Consider the following picture of a piece corresponding to an opposite Schubert surface in $A_2$.
\begin{figure}[H]
\centering
\includegraphics[width=3cm]{basischangedemonstration_1.png}
\end{figure}
\noindent The piece has been sheared to this standard position, with the red basis at the SW corner being the standard basis, and the green basis at the NE corner is where we have to glue the next piece. So we associate the matrix
\[
M=
\begin{pmatrix}
0 & 1 \\
-1 & 1
\end{pmatrix}
\]
to this piece. If the next piece we attach is a $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$, then this will change the green basis to the purple one in the following picture:
\begin{figure}[H]
\centering
\includegraphics[width=4cm]{basischangedemonstration2_1.png}
\end{figure}
In order to find how the basis has changed from the red one to the purple one, note that first we changed the red basis to the green one using $M$, and then used the second piece to turn the green one to the purple one. So if we know how the $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$ piece changes the standard basis, say, by a matrix $N$, then we can compute how the red basis turns into the purple one by computing the product
\[
( M N M^{-1} ) M = MN.
\]
So we only need to associate one matrix in $GL_2(\mathbb{R})_+$ to a piece, namely the one where the bottom right corner has been moved to the standard basis position. It is not hard to see that the matrix associated to $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$ is
\[
N=
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}
\]
and we obtain
\[
MN=\begin{pmatrix}
0 & 1 \\
-1 & 1
\end{pmatrix}
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}
=\begin{pmatrix}
-1 & 0\\
-1 & -1
\end{pmatrix}
\]
which indeed corresponds to the purple basis. Now it should be easy to believe the following theorem:
\begin{theorem}\label{thm:pizzacondition}
Let $M_1,M_2,\ldots, M_l$ be the matrices associated to a given list of pizza pieces. If the pieces form a pizza, then $\prod_{i=1}^l M_i =
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}$.
\end{theorem}
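This condition is immediate to check by machine. The following Python sketch (illustrative only, with names of our choosing) verifies it for the two pizzas of Figure \ref{fig:eqvtbruhatlases}: four $A_1\times A_1$ pieces, and three pieces sharing the matrix of the $A_2$ row of the table in section \ref{sec:pieces}.
\begin{verbatim}
import numpy as np

A1xA1 = np.array([[0, 1], [-1, 0]])    # matrix of the A_1 x A_1 piece
A2    = np.array([[0, 1], [-1, -1]])   # matrix of the A_2 piece

def closes_up(pieces):
    """Necessary condition of the theorem: the matrices of the pieces,
    multiplied in the order in which the pieces are glued, give the identity."""
    prod = np.eye(2, dtype=int)
    for M in pieces:
        prod = prod @ M
    return np.array_equal(prod, np.eye(2, dtype=int))

print(closes_up([A1xA1]*4))   # True: the CP^1 x CP^1 pizza
print(closes_up([A2]*3))      # True: three pieces with the A_2 matrix also close up
\end{verbatim}
Note that this is only the necessary matrix condition of the theorem; the winding number considerations below keep track of the number of layers.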
Since our pizza pieces are all lattice polygons, we know that the $GL(2,\mathbb{R})_+$-matrices associated to the pizza pieces will have integer entries. Therefore, in order to satisfy Theorem \ref{thm:pizzacondition}, all the matrices of the pieces will in fact be in $SL(2,\mathbb{Z})$.
The above condition is necessary, but we can make some further observations to reduce this to a finite problem. We would like to embed our pizza in $\mathbb{R}^2$, so we would like to wind around the origin once using the pieces. To contend with the winding number, we lift these matrices from $SL(2,\mathbb{R})$ to its universal cover $\widetilde{SL_2(\mathbb{R})}$. We will represent an element of $\widetilde{SL_2(\mathbb{R})}$ by its matrix $M$, together with a homotopy class of a path $\gamma$ in $\mathbb{R}^2\setminus \overrightarrow{0}$ connecting $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ to $M\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Elements of $\widetilde{SL_2(\mathbb{R})}$ multiply by multiplying the matrices and concatenating the paths appropriately.
We will therefore, associate to a pizza piece a pair $(M,\gamma)$ where $M$ is the matrix defined above and $\gamma$ is the (class of the) straight line path connecting $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ to $M\begin{pmatrix} 1 \\ 0 \end{pmatrix}$, i.e.
\begin{figure}[H]
\centering
\includegraphics{nutrition_path.png}
\end{figure}
\noindent By direct check on the list of pieces (in section \ref{sec:pieces}), we note that none of these straight line paths pass through the origin. Then attaching a pizza piece $(N,\mu)$ clockwise to a sequence of pieces with current basis $M$ and current path $\gamma$ will yield $(MN, \gamma \circ M(\mu))$. Consequently, if a given set of pieces results in a pizza, we will have a closed loop around the origin based at $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$, with a well-defined winding number $1$. Also, as this path is equivalent (by sending all vectors to their negatives) to the path consisting of following the primitive vectors of the spokes (\ref{def:spoke}) of the pizza, the winding number will coincide with the number of layers of our pizza, as exemplified in the following picture:
\begin{figure}[H]
\centering
\includegraphics{nutrition_path1.png}
\end{figure}
\begin{theorem}\label{thm:braidgroup} (Wikipedia) The preimage of $SL_2(\mathbb{Z})$ inside $\widetilde{SL_2(\mathbb{R})}$ is $Br_3$, the braid group on 3 strands.
\end{theorem}
\noindent A pizza piece therefore could be associated an element of $Br_3$, but for practical reasons we would prefer to work with matrices instead of braids.
We will represent braids in terms of the standard braid generators in Figure \ref{fig:braidgens}.
\begin{figure}[H]
\centering
\includegraphics{braidshorizontal1.png}
\caption{Generators of the braid group\label{fig:braidgens}}
\end{figure}
\begin{lemma}\label{lem:sl2br3} The map $Br_3 \rightarrow SL_2(\mathbb{Z})\times \mathbb{Z}$, with second factor $\mathrm{ab}$ given by abelianization, is injective.
\end{lemma}
\begin{proof}
The kernel of the map $Br_3\to SL_2(\mathbb{Z})$ is generated by the ``double full twist'' braid $(AB)^6$, while $\mathrm{ab}$ sends both generators to $1$, so $\mathrm{ab}((AB)^6)=12$.
\end{proof}
It was easy to determine the $SL(2,\mathbb{Z})$-matrix of a piece by just looking at it, but determining $\mathrm{ab}(S)$ is a little more subtle. Since abelianization is a functor, the following Lemma gives us some clues:
\begin{lemma}\label{lem:sl2zmod12}
(Example 2.5. in \cite{Ko}) The abelianization of $SL_2(\mathbb{Z})$ is $\mathbb{Z}/12\mathbb{Z}$. Moreover, for
\[
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}\in SL_2(\mathbb{Z}),
\]
the image in $\mathbb{Z}/12\mathbb{Z}$ can be computed by taking
\[
\chi \begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
= ((1-c^2 )(bd+3(c-1)d+c+3)+c(a+d-3))/12\mathbb{Z}.
\]
\end{lemma}
So from the matrix of a piece $S$, we can determine $\mathrm{ab}(S)\mod 12$. To figure out the exact value, we notice that if one can build a pizza from the given sequence of pieces, then we must have $\sum_S\mathrm{ab}(S)\equiv 0\mod 12$. If one further insists that the pizza should be ``single-layered'', then we must have $\sum_S\mathrm{ab}(S)=12$. So for instance, the existence of the two pizzas in Figure \ref{fig:eqvtbruhatlases} implies that $\mathrm{ab}(\mathbb{P}^1\times \mathbb{P}^1)=3$ and $\mathrm{ab}(X^{s_1s_2})=4$ where $X^{s_1s_2}\subset \widehat{SL_2(\mathbb{C})}/B$ is a Schubert variety. Then we can use the list of pizzas (section \ref{sec:listofpizzas}) to figure out the values of the other pieces.
\begin{definition}\label{nutritive} For a piece $S\in Br_3$, define the \textbf{nutritive value} $\nu(S)$ of $S$ as the rational number $\frac{m}{12}$ where $m=\mathrm{ab}(S)$.
\end{definition}
Now we can make sure that our pizza is bakeable in a conventional oven by requiring that $\sum_S\nu(S)=\frac{12}{12}$. This (almost) reduces this part of the classification to a finite problem.
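This bookkeeping is easy to automate. The Python sketch below (names ours; our actual computations were carried out in Sage \cite{sage}) evaluates the formula of Lemma \ref{lem:sl2zmod12} on the $SL_2(\mathbb{Z})$ matrix of a piece, giving $\mathrm{ab}(S)$, and hence $12\,\nu(S)$, modulo $12$; the exact value is then pinned down as explained above.
\begin{verbatim}
def ab_mod_12(M):
    """ab(S) mod 12 for a piece S with SL(2,Z) matrix M = [[a, b], [c, d]],
    computed from the formula of the lemma above."""
    (a, b), (c, d) = M
    return ((1 - c*c)*(b*d + 3*(c - 1)*d + c + 3) + c*(a + d - 3)) % 12

# checks against the table of pieces in the next subsection
print(ab_mod_12([[0, 1], [-1, 0]]))    # 3   (A_1 x A_1)
print(ab_mod_12([[0, 1], [-1, -1]]))   # 4   (A_2)
print(ab_mod_12([[0, 1], [-1, 1]]))    # 2   (A_2^opp)
print(ab_mod_12([[0, 1], [-1, -4]]))   # 7   (KM(4))
\end{verbatim}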
\subsection{Pizza pieces}\label{sec:pieces}
It follows from Definition \ref{def:KLatlas} that the pieces of the pizza (c.f. Definition \ref{def:pizza}) must be moment polytopes of Richardson surfaces in $H$. We will use the shorthand $X_v^w=X_v\cap X^w$ for Richardson varieties. To obtain a classification, we would like to list all the isomorphism types of moment polytopes of Richardson surfaces in arbitrary Kac-Moody groups. We will need the following strengthening of a special case of Corollary 3.11. of \cite{D}:
\begin{proposition}\label{thm:richardsonquads}
The moment polytope of a Richardson surface in any $H$ is part of the X-ray of the moment polytope (with possibly not the $V(\rho)$-embedding) of a flag manifold of a rank $2$ Kac-Moody group.
\end{proposition}
\begin{proof}
Let $X^w_v$ be a Richardson surface in $H$. We know that $v\lessdot r_\alpha v \lessdot r_\beta r_\alpha v=w$ for some positive roots $\alpha, \beta$. The moment polytope of $X^w_v$ is a quadrilateral with edge labels:
\begin{figure}[H]
\centering
\includegraphics{richardsonquad1.png}
\end{figure}
\noindent We claim that $\gamma, \delta\in \mathrm{Span}_\mathbb{R} (\alpha, \beta)$. Since the polytope is $2$-dimensional, and $\{ v(\alpha), v(\beta) \}$ is linearly independent, we know that $v(\gamma), v(\delta) \in \mathrm{Span}_\mathbb{R}(v(\alpha), v(\beta))$, and $v$ is a linear transformation. Therefore all roots that are labeling the edges of this quadrilateral lie in a $2$-dimensional subspace of $\mathfrak{h}^*$, so if we intersect the root system of $H$ with the $2$-plane $\mathrm{Span}_\mathbb{R} (\alpha, \beta)$, we obtain a rank $2$ root system with corresponding Kac-Moody group $H^\prime=Z_H(\ker \alpha \cap \ker \beta)$. Then, up to the equivalence relation in definition \ref{def:pizza}, $X^w_v$'s polytope will appear in $H^\prime/B_{H^\prime}$.
\end{proof}
It remains to look for moment quadrilaterals in all rank $2$ Kac-Moody groups. The (bottom of the) moment polytope of $\widetilde{A_1}$ is:
\begin{figure}[H]
\centering
\includegraphics{affineA1polytope_thick.png}
\end{figure}
There are only a couple of types of quadrilaterals to check here:
\begin{enumerate}
\item The two Schubert surfaces $X_e^{s_1s_2}$ and $X_e^{s_2s_1}$ are smooth, and they appear in $B_2$.
\item Those of the form $X_{s_1v}^{s_1w}$ or $X_{s_2v}^{s_1w}$ are all singular, as the primitive vectors from the top right vertex are $\begin{pmatrix} -1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} -1 \\ -k \end{pmatrix}$ for $k\geq 2$.
\item Those of the form $X_{s_2v}^{s_2w}$ or $X_{s_1v}^{s_2w}$ are all singular, as their top left vertex will have primitive vectors $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ -k \end{pmatrix}$ for $k\geq 3$.
\end{enumerate}
A similar situation arises in the Kac-Moody groups arising from the generalized Cartan matrix $\begin{pmatrix} 2 & -1 \\ -k & 2 \end{pmatrix}$, and, more generally $\begin{pmatrix} 2 & -j \\ -i & 2 \end{pmatrix}$. The only smooth Richardson surfaces are the Schubert surfaces, and they are of the form (either the red or the yellow vertex must be in the center):
\begin{figure}[H]
\caption{$KM(k)$}
\centering
\includegraphics{k.png}
\end{figure}
To find the nutritive value of $KM(k)$ for $k\geq 4$, note the difference between the pieces $KM(k)$ and $KM(k+1)$:
\begin{figure}[H]
\centering
\includegraphics{KMkplus1.png}
\end{figure}
\begin{lemma}\label{lem:KMk} The correct lift to the braid group of the pizza piece $KM(k)$ is $B^kABA$ in terms of the standard braid generators in Figure \ref{fig:braidgens}.
\end{lemma}
\begin{proof}
We will prove this by showing that the pizza piece $KM(k)$ (as a braid) is equivalent to a sequence of pieces whose lifts we already know; here the suffix $b$ on a piece denotes the backwards piece, i.e.\ the piece reflected across the $y$-axis (see the remark preceding the table below). Let $S(k)=A_2^{\text{opp}}b,B_2^{\text{opp}}b,\ldots ,B_2^{\text{opp}}b$ (with $k-1$ copies of $B_2^{\text{opp}}b$). We claim that the piece $KM(k)$ is equivalent to the sequence of pieces $S(k),A_2^{\text{opp}}b$. We will induct on $k$. Since $KM(1)$ is the Schubert variety in $A_2$'s flag manifold, whose lift is $BABA$, and the lift of $A_2^{\text{opp}}b$ is $BA$, the base case holds. We may slice the piece $KM(k+1)$ as the following picture suggests:
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{slicing.png}
\end{figure}
Note that the shaded region is the only difference between $KM(k+1)$ and the sequence $S(k),B_2^{\text{opp}}b,A_2^{\text{opp}}b$, and it is irrelevant to how this piece, or the sequence of pieces, fits into a pizza. In terms of braids, this means that in $Br_3$, $KM(k+1)$ lifts to the same element as $S(k),B_2^{\text{opp}}b,A_2^{\text{opp}}b=A_2^{\text{opp}}b, (B_2^{\text{opp}}b)^{k} ,A_2^{\text{opp}}b$, which is
\[
BA\cdot(BAB^{-1})^k\cdot BA=B^{k+1}ABA,
\]
the lift claimed for $KM(k+1)$.
Note that this implies that for $k\geq 1$, $\nu(KM(k))=\frac{k+3}{12}$, in particular, the pieces $KM(k)$ for $k\geq 10$ can never be part of a pizza for nutritional reasons.
\end{proof}
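Since the map of Lemma \ref{lem:sl2br3} is injective, the braid identity used in this induction can also be checked mechanically: it suffices to compare the $SL_2(\mathbb{Z})$ images, since the exponent sums of the two words are both $k+4$. The Python sketch below does this for small $k$; the matrices chosen for the generators $A$ and $B$ are an assumption about conventions on our part, but this choice reproduces every matrix in the table below.
\begin{verbatim}
import numpy as np

A    = np.array([[1, 1], [0, 1]])    # assumed image of the braid generator A
B    = np.array([[1, 0], [-1, 1]])   # assumed image of the braid generator B
Binv = np.array([[1, 0], [1, 1]])    # image of B^{-1}

def image(word):
    out = np.eye(2, dtype=int)
    for M in word:
        out = out @ M
    return out

# BA (B A B^{-1})^k BA  versus  B^{k+1} A B A
for k in range(20):
    lhs = image([B, A] + [B, A, Binv]*k + [B, A])
    rhs = image([B]*(k + 1) + [A, B, A])
    assert np.array_equal(lhs, rhs)
\end{verbatim}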
Therefore since we want $M$ smooth, we may start with rank $2$ finite type groups, and look at all the equivalence classes of polytopes of Richardson surfaces there, including the infinite family above, then add $KM(k)$ for $k=4,\ldots, 9$ to the list ($KM(10)$ is more nutritious than a whole pizza). Below we give a table of the Richardson quadrilaterals together with the corresponding matrices in $SL(2,\mathbb{Z})$. Note that if a piece has matrix $A$ then the piece backwards (i.e. reflected across the $y$-axis) has matrix
\[
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
A^{-1}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\]
In the table the center is always the bottom left vertex (in red). In case of non-simply-laced groups, $s_1$ is always the reflection across the short root. We display the smallest (by edge-length) pieces, but will consider pieces up to equivalence by the equivalence relation in Definition \ref{def:pizza}. We write the braid in terms of the generators in Figure \ref{fig:braidgens}.
\begin{tabular}{ | c | c | c | c | c | m{4cm} |}
\hline
name & Richardson surface & $SL(2,\mathbb{Z})$ matrix & braid & $\nu$ & Richardson quadrilateral \\
\hline
$A_1\times A_1$ & $X^{s_1s_2}_e$ & $\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}$ & $ABA$ & $\frac{3}{12}$ & \vspace{0.1cm}\includegraphics{A_1xA_1.png} \\
\hline
$A_2$ & $X^{s_1s_2}_e$ & $\begin{pmatrix}
0 & 1 \\
-1 & -1
\end{pmatrix}$ & $BABA$ & $\frac{4}{12}$ & \vspace{0.1cm}\includegraphics{A_2.png} \\
\hline
$A_2^{\text{opp}}$ & $X_{s_1}^{w_0}$ & $\begin{pmatrix}
0 & 1 \\
-1 & 1
\end{pmatrix}$ & $AB$ & $\frac{2}{12}$ & \vspace{0.1cm}\includegraphics{A_2opp.png} \\
\hline
$B_2$ & $X^{s_1s_2}_e$ & $\begin{pmatrix}
0 & 1 \\
-1 & -2
\end{pmatrix}$ & $BBABA$ & $\frac{5}{12}$ & \vspace{0.1cm}\includegraphics{B_2.png} \\
\hline
$B_2^{\text{opp}}$ & $X^{w_0}_{s_1s_2}$ & $\begin{pmatrix}
0 & 1 \\
-1 & 2
\end{pmatrix}$ & $B^{-1}AB$ & $\frac{1}{12}$ & \vspace{0.1cm}\includegraphics{B_2opp.png} \\
\hline
$B_2^{\text{sing}}$ & $X^{s_2s_1s_2}_{s_2}$ & $\begin{pmatrix}
1 & 1 \\
-2 & -1
\end{pmatrix}$ & $BBA$ & $\frac{3}{12}$ & \vspace{0.1cm}\includegraphics{B_2sing.png} \\
\hline
$G_2$ & $X^{s_1s_2}_e$ & $\begin{pmatrix}
0 & 1 \\
-1 & -3
\end{pmatrix}$ & $BBBABA$ & $\frac{6}{12}$ & \vspace{0.1cm}\includegraphics[width=4cm]{G_2.png} \\
\hline
$G_2^{\text{opp}}$ & $X^{w_0}_{s_1s_2s_1s_2}$ & $\begin{pmatrix}
0 & 1 \\
-1 & 3
\end{pmatrix}$ & $B^{-1}B^{-1}AB$ & $\frac{0}{12}$ & \vspace{0.1cm}\includegraphics[width=4cm]{G_2opp.png} \\
\hline
\end{tabular}
\begin{tabular}{ | c | c | c | c | c | m{5cm} |}
\hline
name & Richardson surface & $SL(2,\mathbb{Z})$ matrix & braid & $\nu$ & Richardson quadrilateral \\
\hline
$KM(k)$ & $X^{s_1s_2}_e$ & $\begin{pmatrix}
0 & 1 \\
-1 & -k
\end{pmatrix}$ & $B^kABA$ & $\frac{k+3}{12}$ & \vspace{0.1cm}\includegraphics[width=5cm]{k.png} \\
\hline
$G_2^{\text{short}}$ & $X^{s_2s_1s_2}_{s_2}$ & $\begin{pmatrix}
1 & 1 \\
-3 & -2
\end{pmatrix}$ & $BBBA$ & $\frac{4}{12}$ & \vspace{0.1cm}\includegraphics[width=5cm]{G_2short.png} \\
\hline
$G_2^{\text{long}}$ & $X^{s_2s_1s_2s_1s_2}_{s_2s_1s_2}$ & $\begin{pmatrix}
2 & 1 \\
-3 & -1
\end{pmatrix}$ & $BBAB^{-1}$ & $\frac{2}{12}$ & \vspace{0.1cm}\includegraphics[width=5cm]{G_2long.png} \\
\hline
\end{tabular}
\section{Simply laced pizzas}
Since the $G_2^{\text{opp}}$ piece has nutritive value $\frac{0}{12}$, it could appear arbitrarily many times in a pizza (we will find limits in section \ref{sec:nonsimplylacedpizzas}). To avoid this inconvenience, we restrict our attention to \textbf{simply-laced pizzas}, i.e. pizzas with pieces from simply laced groups only. Note that since $\widetilde{A_2}$ is simply laced and contains a subgroup $\widetilde{A_1}$, we have to include the $B_2$ piece together with the $A_1\times A_1$, $A_2$, and $A_2^{\text{opp}}$ in the list of pieces we are allowed to use.
\subsection{List of simply laced pizzas}\label{sec:listofpizzas}
Since the invariants of all allowed pieces are strictly positive, we just have to list all possible arrangements of the pieces where the nutritive values add up to $1$, and check if the resulting matrices multiply to the identity. The following list of all the $20$ inequivalent pizzas has been obtained by this brute force computation in Sage \cite{sage}:
\begin{figure}[H]
\centering
\includegraphics{manypizzas1.png}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics{manypizzas2.png}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics{manypizzas3.png}
\end{figure}
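For reference, the search behind the list above is elementary enough to sketch in a few lines. The Python fragment below (our actual classification was carried out in Sage \cite{sage}; the names and the organisation of the search are choices made here) enumerates sequences of the allowed simply laced pieces whose nutritive values sum to $\frac{12}{12}$ and whose matrices multiply to the identity. Backwards pieces, the winding number check through the braid lift, and the identification of equivalent pizzas as in Definition \ref{def:pizza} are omitted, so the raw output over-counts the $20$ pizzas shown.
\begin{verbatim}
import numpy as np

# (name, SL(2,Z) matrix, 12*nu) of the allowed simply laced pieces
PIECES = [("A1xA1", np.array([[0, 1], [-1, 0]]), 3),
          ("A2",    np.array([[0, 1], [-1, -1]]), 4),
          ("A2opp", np.array([[0, 1], [-1, 1]]), 2),
          ("B2",    np.array([[0, 1], [-1, -2]]), 5)]

def search(seq=(), mat=np.eye(2, dtype=int), nu12=0, found=None):
    """Depth-first enumeration of piece sequences with total nutritive value
    12/12 whose matrices, multiplied left to right, give the identity."""
    if found is None:
        found = []
    if nu12 == 12 and np.array_equal(mat, np.eye(2, dtype=int)):
        found.append(seq)
    for name, M, n in PIECES:
        if nu12 + n <= 12:
            search(seq + (name,), mat @ M, nu12 + n, found)
    return found

candidates = search()
print(len(candidates), candidates[:3])
\end{verbatim}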
\section{Putting atlases on pizzas}\label{sec:atlasesonpizzas}
Everything we did so far was to derive some necessary conditions for part $4$ of Definition \ref{def:KLatlas} to be satisfied. To actually put a K-L atlas on a lattice pizza, we need to specify $H$ and a map $w_M$ from the vertices of the pizza to $W_H$. This is not easy, as in general, there are many choices for such an $H$ and $w_M$. For instance, if one pair $(H,w_M)$ exists, then we can take $H\times H^\prime$ and $w_M\times v$ for some constant $v\in W_{H^\prime}$. We will look for atlases that are minimal in some sense.
The map $w_M$ will label the vertices in a lattice pizza by elements of $W_H$, with the edges corresponding to covering relations in Bruhat order. All covering relations $v\lessdot w$ are of the form $vr_\beta=w$ for some positive root $\beta$ (note that this does not privilege right-multiplication, as equivalently $r_{v\cdot \beta}v=w$). We will label the edges of a pizza by the roots in the covering relations. As we remarked before, we can do this by labeling them by left- or right-multiplication.
\begin{lemma}\label{lem:leftmult} If we label the edges of $H/B_H$'s moment polytope by left-multiplication, then any two edges with identical labels are parallel.
\end{lemma}
\begin{proof}
Let $v_i,w_i$ be elements of $W_H$ labeling vertices of the pizza such that $w_i=r_\beta v_i$. Since $\beta$ is a positive root, there is an associated subgroup $SL_2^\beta\cong SL_2(\mathbb{C})$ of $H$. Then the $T$-invariant $\mathbb{C}\mathbb{P}^1$'s with fixed points $v_i,w_i$ are the $SL_2^\beta$-orbits of the $v_i$'s. Therefore their moment polytopes (lattice line segments) are parallel to $\beta$.
\end{proof}
So we could label the edges by left-multiplication, but this is slightly redundant.
\begin{lemma}\label{lem:rightmult} If we label the edges of $H/B_H$'s moment polytope by right-multiplication, then the labeling roots are the equivariant homology classes of the corresponding invariant $\mathbb{C}\mathbb{P}^1$'s in $H_2^T(H/B_H)$.
\end{lemma}
\begin{proof}
Since labeling by right-multiplication is (left-)$W$-invariant, it suffices to check this for $w=e$, where left and right multiplication are the same, and we get our result by lemma \ref{lem:leftmult}.
\end{proof}
\begin{theorem}\label{thm:rholinebundle} For the labeling by right-multiplication, the lattice length of an edge in a lattice pizza equals the height of the corresponding root.
\end{theorem}
\begin{proof}
An edge in a pizza corresponds to an embedded $T$-invariant $\mathbb{P}^1$ in $H/B_H$. In general, a $T$-equivariant line bundle over $\mathbb{P}^1$ is constructed by letting $T$ act on
\[
\mathcal{O}(\lambda,\mu)=\mathcal{O}(m)\qquad\text{where }\mathbb{P}^1=\mathbb{P}(\mathbb{C}_\lambda \oplus \mathbb{C}_\mu).
\]
The moment polytope of such a variety is an interval in $\mathfrak{t}^*$ with endpoints $\lambda$ and $\mu$. Let $\frac{\mu-\lambda}{m}$ be the primitive vector in that interval, then we compute
\begin{align*}
\int_{\mathbb{P}^1}c_1(\mathcal{O}(\lambda,\mu)) &= \int_{\mathbb{P}^1}c_1(\mathcal{O}(0,\mu-\lambda)) &&=\int_{\mathbb{P}^1}c_1\left(\mathcal{O}\left(0,\frac{\mu-\lambda}{m}\right)^{\otimes m}\right) \\
&=m\int_{\mathbb{P}^1}c_1\left(\mathcal{O}\left(0,\frac{\mu-\lambda}{m}\right)\right) &&= m\int_{\mathbb{P}^1}c_1\left(\mathcal{O}\left(0,(1,0,\ldots , 0)\right) \right)= m,
\end{align*}
which is the lattice length of the interval, and we may move the primitive element $\frac{\mu-\lambda}{m}$ to $(1,0,\ldots , 0)$ by applying an element of $SL(n,\mathbb{Z})$.
For an arbitrary $H$-weight $\nu$, we have
\[
\xymatrix{
\mathcal{O}(m)\ar[d] \ar[r] & L(\nu=\sum_i c_i\omega_i) \ar[d] \\
\mathbb{P}^1 \ar@{^{(}->}[r]^i & H/B_H
}
\]
where $i_*([\mathbb{P}^1])=\sum_i d_i [X^{s_i}]$ in $H_2(H/B_H)$. We also know that the divisor line bundle for the opposite Schubert divisor $X_{s_i}$ is $L(\omega_i)$. So we have
\begin{align*}
m&=\int_{\mathbb{P}^1} c_1 (i^*(L(\nu))) \\
&= \int_{\mathbb{P}^1} i^* \left( c_1(L(\nu)) \right) &\text{by naturality of }c_1\\
&= [c_1(L(\nu))] \cup [i_*(\mathbb{P}^1)] &\text{by the push-pull formula}\\
&= \left[\sum_i c_i X_{s_i}\right] \cup \left[\sum_i d_i X^{s_i}\right]=\sum_i c_id_i &\text{by duality of the bases }\{X_{s_i}\},\{X^{s_i}\}
\end{align*}
In particular, for $\nu=\rho$, we get $\sum_i c_id_i=\sum_i d_i=\mathrm{ht}(\mu-\lambda)$, since $\mu-\lambda$ is a root by assumption.
\end{proof}
This is promising, since now if an edge in a lattice pizza is of length $1$, then it must correspond to a simple root, and if we find enough of them, we might be able to find the $H$ we are looking for. However, the situation is more complicated in general, since it may happen that a certain pizza has no Kazhdan-Lusztig atlas, but a different lattice pizza in the same pizza class does. We will give an example of this in section \ref{sec:toppings}.
Also, we know that length $2$ Bruhat intervals are all diamonds, which leads us to the following Lemma:
\begin{lemma}\label{lem:diamond}
Let $\alpha,\beta$ be roots in some simply laced root system such that $\alpha+\beta$ is a root. Let $w\in W$ and $C=\{ wr_\alpha ,wr_\beta, wr_{\alpha+\beta} \}$. If two elements of $C$ cover $w$ in Bruhat order and are covered by $\widetilde{w}$, then the third element of $C$ cannot cover $w$.
\end{lemma}
\begin{proof}
We will prove the statement for $w_1=wr_\alpha$, $w_2=wr_{\alpha+\beta}$; the other cases are symmetric. Assume that $w\lessdot wr_\beta$. We know that height two Bruhat intervals are diamonds, so it suffices to show that $\widetilde{w}$ covers $wr_\beta$, and we will have a contradiction. By the assumptions on $\alpha$ and $\beta$, we may choose $\alpha$ and $\beta$ to both be simple roots, so we have
\[
r_{\alpha+\beta}=r_\alpha r_\beta r_\alpha = r_\beta r_\alpha r_\beta.
\]
Choose a reduced word $\widetilde{w}=s_1\cdots s_l$. Then $wr_\alpha=s_1\cdots \widehat{s_i} \cdots s_l$ and $wr_{\alpha+\beta}=s_1\cdots \widehat{s_j} \cdots s_l$. We may assume without loss of generality that $i<j$, so $w=s_1\cdots \widehat{s_i}\cdots \widehat{s_j}\cdots s_l$. Then
\[
\widetilde{w}=wr_{\alpha+\beta}r_\alpha=w(r_\alpha r_\beta r_\alpha) r_\alpha = wr_\alpha r_\beta.
\]
Now by assumption $w\lessdot wr_\beta$, but then $\widetilde{w}=wr_\beta (r_\beta r_\alpha r_\beta)=wr_\beta r_{\beta+\alpha}$ covers $wr_\beta$ in Bruhat order, since $l(wr_\beta)=l(\widetilde{w})-1$. Hence the length two interval $[w,\widetilde{w}]$ contains the three distinct elements $w_1$, $w_2$, and $wr_\beta$, contradicting the fact that it is a diamond.
\end{proof}
\subsection{Toppings}\label{sec:toppings}
In this section we will describe a way to find all the ``minimal'' flag manifolds $H/B_H$ in which a pizza can have a Kazhdan-Lusztig atlas.
Assume that a pizza has a Kazhdan-Lusztig atlas in $H/B_H$, i.e. $M^\prime\subseteq H/B_H$. If a simple root $\alpha$ does not appear as a summand in any of the edge labels, then we could replace $H$ by a smaller group $H^\prime$ by removing $\alpha$ from $H$'s Dynkin diagram. Since $\alpha$ did not appear on any edge labels, $s_\alpha$ does not appear in any of the vertex labels, and the same vertex labels define a Kazhdan-Lusztig atlas in $H^\prime/B_{H^\prime}$. Therefore, to find a minimal $H$, we should look at all possible ways a simple root can appear in the edge labels of the pizza. We first look at how a simple root can label edges of individual pieces. Recall that (Lemma \ref{lem:rightmult}) the edge labels represent homology classes of the invariant $\mathbb{C}\mathbb{P}^1$s of the pieces. Since all our pieces are toric, we know what the relations between the classes of the edges are from the Jurkiewicz-Danilov theorem (\cite{CLS}, Theorem 12.4.4). We represent ways of a simple root appearing as edge labels of a piece by drawing a curve across the edges where it does so. Figures \ref{fig:smoothtopping} and \ref{fig:singulartopping} show all the possible ways.
\begin{figure}[H]
\centering
\includegraphics{toppings_nonempty.png}\caption{Smooth pieces\label{fig:smoothtopping}}
\end{figure}
\begin{definition}
A \textbf{topping} on a pizza piece is a generator of $H_2^{\text{effective}}$ of the piece; a \textbf{compatible topping configuration} is a set of toppings on the pieces of a pizza that are compatible along the shared edges.
\end{definition}
\begin{figure}[H]
\centering
\includegraphics{toppings_sing_nonempty.png}\caption{Singular pieces\label{fig:singulartopping}}
\end{figure}
A simple root of $H$ then appears as a summand on edge labels for a compatible topping configuration on the pizza, e.g.
\begin{figure}[H]
\centering
\includegraphics{sadfacepizzasampletopping.png}
\end{figure}
So our strategy is the following:
\begin{itemize}
\item List all possible compatible topping configurations on the pizza.
\item Choose a minimal subset of them.
\item Let $H$ be the group with precisely those simple roots.
\end{itemize}
To provide an example, we will perform this on the ``sad face'' pizza just above. The compatible topping configurations are:
\begin{figure}[H]
\centering
\includegraphics{sadfacepizzaallsimpleroots2.png}
\end{figure}
We have labeled the topping configurations by the simple roots that they represent. Note that more than one simple root may have the same topping configuration. Next, we put all labels on the pizza (note that we have to increase almost all edge lengths for this):
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzaalltoppings1.png}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzaallsimpleroots.png}
\end{subfigure}
\end{figure}
\begin{definition}\label{def:spoke}
The \textbf{spokes} of a pizza are the edges connected to the central vertex.
\end{definition}
Now we should choose a minimal subset of these toppings. Our subset should satisfy the following:
\begin{enumerate}
\item Every edge has a topping on it.\label{nocontraction}
\item The roots labeling the edges all are real roots in $H$, since they correspond to reflections in $W_H$.\label{realroots}
\item No two spokes have the same label (so we do not contradict part $2$ of definition \ref{def:KLatlas}).
\item No three spoke labels contradict Lemma \ref{lem:diamond}.
\end{enumerate}
Such a choice is a good candidate for a minimal $H$.
\begin{enumerate}
\item To satisfy condition \ref{nocontraction}, we see that we need $\gamma$ for sure. Upon closer inspection, we may conclude that we need at least one of $\{\alpha, \varepsilon\},\{\beta,\delta\}$ to be a subset of the simple roots.
\item To satisfy condition \ref{realroots}, we see that we cannot have $\alpha, \beta, \delta, \varepsilon$ simultaneously. The reason for this is that $\alpha+\beta, \beta+\varepsilon, \varepsilon+\delta, \delta+\alpha$ are all (real) roots, since they label edges of the pizza. So $\{\alpha, \beta\}, \{\beta, \varepsilon\}, \{\varepsilon, \delta\}, \{\delta, \alpha\}$ all form root systems of type $A_2$, so altogether $\{\alpha, \beta, \delta, \varepsilon\}$ must form a root system of type $\widetilde{A_3}$, in which $\alpha+\beta+\delta+\varepsilon$ is an imaginary root (a short check of this is given right after the list).
\end{enumerate}
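Indeed, with the simple roots ordered $(\alpha,\beta,\varepsilon,\delta)$ around the $4$-cycle of the $\widetilde{A_3}$ Dynkin diagram, every row of the affine Cartan matrix sums to zero,
\[
\begin{pmatrix} 2 & -1 & 0 & -1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 0 & -1 & 2 \end{pmatrix}
\begin{pmatrix} 1\\1\\1\\1 \end{pmatrix}=0,
\]
so $\alpha+\beta+\varepsilon+\delta$ pairs to zero with every simple coroot and is therefore an imaginary (null) root.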
Now we will analyze each of the cases.
\begin{itemize}
\item If $\alpha$ is not a simple root, then we must have at least $\beta,\gamma,\delta$ as simple roots. Using only these three, we see that we violate Lemma \ref{lem:diamond}. So we have to also use $\varepsilon$. So our labels must be
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzanoalpha1.png}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzanoalpha.png}
\end{subfigure}
\end{figure}
\noindent Since $\beta+\varepsilon, \varepsilon+\delta, \delta+\gamma, \gamma+\beta$ are all roots, $\{ \beta, \gamma, \delta, \varepsilon \}$ form a root system of type $\widetilde{A_3}$. Now we should find an element $w_M(\text{center})$ that labels the central vertex of the pizza such that all the edges correspond to covering relations. If $w_M(\text{center})=s_\delta s_\beta s_\varepsilon$, then we have a Kazhdan-Lusztig atlas on this pizza.
\item If $\varepsilon$ is not a simple root, we see that we need all of $\{\alpha,\beta,\gamma,\delta\}$ to be simple roots. So our labels must be
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzanoepsilon1.png}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzanoepsilon.png}
\end{subfigure}
\end{figure}
In $H$'s Dynkin diagram, there should be edges $\{\alpha,\beta\}$, $\{\alpha,\delta\}$, and either an edge $\{\alpha,\gamma\}$, or both $\{\beta,\gamma\}$ and $\{\gamma,\delta\}$. Let us choose the $\{\alpha,\gamma\}$ edge, so $H$ is of type $D_4$. Choosing $w_M(\text{center})=s_\beta s_\delta$ yields a Kazhdan-Lusztig atlas.
\item Note that the pizza has a symmetry which exchanges $\beta$ and $\delta$, so it suffices to look at the case when $\delta$ is not a simple root. Again, we need all the remaining roots, so our labels must be
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzanodelta1.png}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics{sadfacepizzanodelta.png}
\end{subfigure}
\end{figure}
Since $\beta+\varepsilon$, $\alpha+\beta$, $\alpha+\gamma$ must be roots, we may choose the root system to be of type $A_4$ in the following way:
\begin{figure}[H]
\centering
\includegraphics{A4_grp.png}
\end{figure}
Choosing $w_M(\text{center})=s_\beta s_\varepsilon s_\beta$ yields a Kazhdan-Lusztig atlas.
\end{itemize}
So in this case, any compatible topping configuration leads to a Kazhdan-Lusztig atlas. We do not know if this is always true.
\section{Non-simply laced pizzas}\label{sec:nonsimplylacedpizzas}
The restriction to simply laced pizzas was made primarily to avoid having to deal with the $G_2^{\text{opp}}$ piece. Since it has nutritive value $\frac{0}{12}$, one fears it might appear arbitrarily many times in a pizza. In fact it can, as
\begin{proposition}\label{thm:infinitepizzafamily}
For $k\in\mathbb{N}$, the sequence of pieces $[(G_2^{\text{opp}}b)^k, B_2^{\text{opp}}b, A_2^{\text{opp}}b,A_1\times A_1,(G_2^{\text{opp}})^k, B_2^{\text{opp}}, A_2^{\text{opp}},A_1\times A_1]$ is a valid pizza.
\end{proposition}
\begin{proof}
Recall that, as elements of $Br_3$, $A_2^{\text{opp}}=AB, B_2^{\text{opp}}=B^{-1}AB, G_2^{\text{opp}}=B^{-1}B^{-1}AB$ (and their backwards analogs are the same braids read backwards). For $k=0$ this is a pizza by direct checking. The general case follows from the fact that in $Br_3$,
\begin{align*}
G_2^{\text{opp}} B_2^{\text{opp}} A_2^{\text{opp}}&= B^{-1}B^{-1}ABB^{-1}ABAB \\
&=B^{-1}B^{-1}AABAB \\
&=B^{-1}B^{-1}ABABB \\
&=B^{-1}B^{-1}BABBB \\
&=B^{-1}ABBB \\
&=B^{-1}ABABB^{-1}A^{-1}BB \\
&=B_2^{\text{opp}} A_2^{\text{opp}} (G_2^{\text{opp}})^{-1},
\end{align*}
which implies
\[
G_2^{\text{opp}} B_2^{\text{opp}} A_2^{\text{opp}} G_2^{\text{opp}} = B_2^{\text{opp}} A_2^{\text{opp}}.
\]
Then our proposition follows from the fact that
\[
G_2^{\text{opp}} (A_1\times A_1) = (A_1\times A_1) G_2^{\text{opp}}b
\]
\end{proof}
However, not all of these will be labelable. Using results of Dyer (\cite{D2}) we are able to reduce the general case to a finite problem. Since only the $G_2^{\text{opp}}$ and $G_2^{\text{opp}}b$ pieces have nutritive value $\frac{0}{12}$, if we can show that a pizza cannot have a Kazhdan-Lusztig atlas if it has too many of these pieces, we would again be left with a finite problem.
\begin{proposition}\label{thm:atlaspizzasfinite}
If a $G_2^{\text{opp}}$ or $G_2^{\text{opp}}b$ piece is adjacent to a $B_2^{\text{opp}},B_2^{\text{opp}}b, G_2^{\text{opp}}$ or $G_2^{\text{opp}}b$ piece in a pizza, then the pizza can not have an atlas. Note that this implies that no two $\frac{0}{12}$ nutritional pieces appear consecutively. Also, if a pizza has an atlas, then the only piece sequence in which two $B_2^{\text{opp}}$ or $B_2^{\text{opp}}b$ pieces can be adjacent to each other is $B_2^{\text{opp}},B_2^{\text{opp}}b$.
\end{proposition}
\begin{proof}
We will check the $G_2^{\text{opp}}b, G_2^{\text{opp}}$ case; the other cases are very similar to this one. The sequence of slices looks like (the central vertex is highlighted in red)
\begin{figure}[H]
\centering
\includegraphics{G2oppb_G2opp.png}
\end{figure}
Note that we do not know the heights of the roots $\alpha,\beta,\gamma$, but we know from Proposition \ref{thm:richardsonquads} and the discussion afterwards that both $\alpha,\beta$ and $\beta,\gamma$ must form root systems of type $G_2$, with $\beta$ being the short root in both of them.
If $w\in W_H$ is an element covered by $wr_\beta, wr_{\alpha+3\beta}, wr_{3\beta+\gamma}$, then $w$ must move all the following roots to negatives: $\{\alpha, \alpha+\beta, 2\alpha+3\beta, \alpha+2\beta,\gamma, \beta+\gamma, 3\beta+2\gamma, 2\beta+\gamma \}$, since by Theorem 1.4. of \cite{D2}, it suffices to check this in the reflection subgroups $W_{\alpha,\beta}=\langle r_\alpha, r_\beta \rangle, W_{\beta, \gamma}=\langle r_\beta,r_\gamma \rangle$ (both isomorphic to $W_{G_2}$). Now consider the reflection subgroup $W_{\alpha,\beta,\gamma}=\langle r_\alpha, r_\beta ,r_\gamma\rangle$. Clearly any root that is a convex combination of the roots above will be moved to negative roots, in particular, any root of the form $c_1\alpha+c_2\beta+c_3\gamma$ as long as $c_2\leq 2c_1+2c_3$. There are infinitely many roots of this form, so such $w$ would need to have infinite length, which is a contradiction.
\end{proof}
\begin{lemma}\label{thm:reflsubgrplemma}
Assume that $\alpha,\beta$ are simple roots in a root system of type $A_2\; (\text{resp. }B_2, G_2)$ with $\beta$ being the short root (if there are two root lengths). If $w\in W_{\alpha,\beta}$ is such that $w\lessdot ws_\beta, w\lessdot ws_{\alpha+\beta}\; (\text{resp. }ws_{\alpha+2\beta}$, or, in the $G_2$ case, $ws_{\alpha+3\beta})$, then $w\cdot \gamma$ is a negative root for every element $\gamma$ of the following set of positive roots: $\{\alpha\}\; (\text{resp. }\{\alpha,\alpha+\beta\}$, or, in the $G_2$ case, $\{\alpha,\alpha+\beta,2\alpha+3\beta,\alpha+2\beta\})$; equivalently, $ws_\alpha < w\; (\text{resp. }ws_\alpha, wr_{\alpha+\beta} < w$, or, in the $G_2$ case, $ws_\alpha, wr_{\alpha+\beta}, wr_{2\alpha+3\beta}, wr_{\alpha+2\beta}<w)$.
\end{lemma}
\begin{proof}
The only element $w$ that satisfies the covering relations is $s_\alpha\; (\text{resp. }s_\beta s_\alpha,\text{ or, in the $G_2$ case, } s_\beta s_\alpha s_\beta s_\alpha)$ which also moves the above-mentioned roots to negatives.
\end{proof}
Proposition \ref{thm:atlaspizzasfinite} reduces the general case to a finite problem. The following is the best we can say at this moment:
\begin{theorem}\label{thm:atlasbound}
The number of pizzas with Kazhdan-Lusztig atlases is at most $7543$, each having at most $12$ pieces.
\end{theorem}
\begin{proof}
This is a brute-force check by Sage \cite{sage}, using Proposition \ref{thm:atlaspizzasfinite}.
\end{proof}
\section{Some Kazhdan-Lusztig atlases for simply laced pizzas}\label{sec:listofatlases}
\begin{definition}
We define the \textbf{height} of a Kazhdan-Lusztig atlas to be the length of the $W_H$-element at the central vertex of the pizza.
\end{definition}
We will only describe one of the (possibly many) minimal height atlases for each pizza. All of these atlases have been obtained following the algorithm of section \ref{sec:toppings}. The Dynkin diagram of the group $H$ is displayed near the pizza. The search for the $W_H$ element at the center of each pizza was done by Sage \cite{sage}. Regrettably, we were unable to find an atlas (even a non-simply-laced one) for the pizza $[A_2^{opp}, A_2^{opp}b, A_2^{opp}, G_2b]$; we suspect that it does not have an atlas.
\begin{enumerate}
\subsection*{Height $0$ (Bruhat atlases)}
\begin{minipage}{\textwidth}\item The pizza $[A_2, A_2, A_2]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1_A1_A1_1.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1_A1_A1grp_1.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1, A_1\times A_1, A_1\times A_1, A_1\times A_1]$:
\begin{figure}[H]
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1xA1_A1xA1_A1xA1_A1xA1.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1xA1_A1xA1_A1xA1_A1xA1grp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\subsection*{Height $1$}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1, A_1\times A_1, A_2, A_2^{opp}b]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1xA1_A1xA1_A2_A2oppb.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1xA1_A1xA1_A2_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1,A_1\times A_1,A_2b,A_2^{opp}]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1xA1_A1xA1_A2b_A2opp.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1xA1_A1xA1_A2b_A2oppgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1,A_2,A_1\times A_1,A_2^{opp}]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1xA1_A2_A1xA1_A2opp.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1xA1_A2_A1xA1_A2oppgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_2, A_2,A_2^{opp}b, A_2^{opp}b]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A2_A2_A2oppb_A2oppb.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A2_A2_A2oppb_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\subsection*{Height $2$}
\begin{minipage}{\textwidth}\item The pizza $[A_2b,A_2^{opp}, A_2^{opp}b, A_2]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A2b_A2opp_A2oppb_A2.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A2b_A2opp_A2oppb_A2grp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_2,A_2^{opp}, A_2b, A_2^{opp}b]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.69\textwidth}
\includegraphics{A2_A2opp_A2b_A2oppb.png}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics{A2_A2opp_A2b_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_2^{opp}b, A_2, A_2^{opp}b, A_2]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.69\textwidth}
\includegraphics{A2_A2oppb_A2_A2oppb.png}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics{A2_A2oppb_A2_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_2^{opp}, A_2b, A_2, A_2^{opp}]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.69\textwidth}
\includegraphics{A2opp_A2b_A2_A2oppb.png}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics{A2opp_A2b_A2_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\subsection*{Height $3$}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1,A_2^{opp}, A_2^{opp}b,B_2]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1xA1_A2opp_A2oppb_B2.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1xA1_A2opp_A2oppb_B2grp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1,A_2^{opp}, B_2b,A_2^{opp}]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1xA1_A2opp_B2b_A2opp.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1xA1_A2opp_B2b_A2oppgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_2^{opp}b, A_2^{opp},B_2b, A_1\times A_1]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.79\textwidth}
\includegraphics{A1xA1_A2oppb_A2opp_B2b.png}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\includegraphics{A1xA1_A2oppb_A2opp_B2bgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1, A_1\times A_1, A_2^{opp}, A_2^{opp}, A_2^{opp}]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.69\textwidth}
\includegraphics{A1xA1_A1xA1_A2opp_A2opp_A2opp.png}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics{A1xA1_A1xA1_A2opp_A2opp_A2oppgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_1\times A_1, A_2^{opp},A_1\times A_1, A_2^{opp}b, A_2^{opp}b]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.6\textwidth}
\includegraphics{A1xA1_A2opp_A1xA1_A2oppb_A2oppb.png}
\end{subfigure}
\begin{subfigure}{0.39\textwidth}
\includegraphics{A1xA1_A2opp_A1xA1_A2oppb_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[ A_2^{\text{opp}}, A_2^{\text{opp}}, A_2^{\text{opp}}, A_2^{\text{opp}}b, A_2^{\text{opp}}b, A_2^{\text{opp}}b ]$
\begin{figure}[H]
\centering
\begin{subfigure}{0.6\textwidth}
\includegraphics{A2opp_A2opp_A2opp_A2oppb_A2oppb_A2oppb.png}
\end{subfigure}
\begin{subfigure}{0.39\textwidth}
\includegraphics{A2opp_A2opp_A2opp_A2oppb_A2oppb_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\end{minipage}
\subsection*{Height $5$}
\begin{minipage}{\textwidth}\item The pizza $[A_2, A_2^{\text{opp}}b, A_2^{\text{opp}}b, A_2^{\text{opp}}b, A_2^{\text{opp}}b]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.6\textwidth}
\includegraphics{A2_A2oppb_A2opp_A2opp_A2oppbroots.png}
\end{subfigure}
\begin{subfigure}{0.39\textwidth}
\includegraphics{A2_A2oppb_A2opp_A2opp_A2oppbgrp.png}
\end{subfigure}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics{A2_A2oppb_A2opp_A2opp_A2oppbwelements.png}
\end{figure}
\end{minipage}
\begin{minipage}{\textwidth}\item The pizza $[A_2^{opp}, A_2^{opp}, A_2^{opp}, A_2^{opp}b, A_2]$
\begin{figure}[H]
\centering
\begin{subfigure}{0.7\textwidth}
\includegraphics{A2opp_A2opp_A2opp_A2oppb_A2_roots.png}
\end{subfigure}
\begin{subfigure}{0.29\textwidth}
\includegraphics{A2opp_A2opp_A2opp_A2oppb_A2grp.png}
\end{subfigure}
\end{figure}
and the $W$-elements:
\begin{figure}[H]
\centering
\includegraphics{A2opp_A2opp_A2opp_A2oppb_A2_welements.png}
\end{figure}
\end{minipage}
\subsection*{Height $9$}
\begin{minipage}{\textwidth}\item The pizza $[A_2, A_2^{\text{opp}}b, A_2^{\text{opp}}, A_2^{\text{opp}}, A_2^{\text{opp}}]$:
\begin{figure}[H]
\centering
\begin{subfigure}{0.6\textwidth}
\includegraphics{A2_A2oppb_A2opp_A2opp_A2opproots.png}
\end{subfigure}
\begin{subfigure}{0.39\textwidth}
\includegraphics{A2_A2oppb_A2opp_A2opp_A2oppgrp.png}
\end{subfigure}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics{A2_A2oppb_A2opp_A2opp_A2oppwelements.png}
\end{figure}
\end{minipage}
\end{enumerate}
\section{Embedded degenerations}\label{sec:embeddeddegens}
Here we will describe an embedded degeneration (from definition \ref{def:KLatlas}) of a smooth toric surface $M$ into the union of Richardson varieties. We will find a point $x$ in $H/B_H$ such that $\overline{T_M\cdot x}\cong M$. We will do this by determining which Pl\"{u}cker coordinates vanish. Consider the following diagram:
\[
\xymatrix{
M\ar@{^{(}->}[d]\ar[rr]^{\Phi_{T_H}} & & \Phi_{T_H}(M)\ar@{^{(}->}[d]\ar[r]^{\Phi_{T_M}} & \Phi_{T_M}(M)\ar@{^{(}->}[d] \\
H/B_H\ar@{->>}[r] & H/P^{\alpha}\ar^{\Phi_{T_H}}[r] & Q\ar^{\Phi_{T_M}}[r] & \Phi_{T_M}(Q)
}
\]
where $P^{\alpha}$ is the maximal proper parabolic not containing the subgroup corresponding to $-\alpha$, $Q=\Phi_{T_H}(H/P^{\alpha})$ is the moment polytope of $H/P^{\alpha}$, and $\Phi_{T_M}:\mathfrak{r}^*_H\to\mathfrak{r}^*_M$ is the map induced by $T_M\subseteq T_H$. Each of the vertices $\lambda$ of $\Phi_{T_H}(Q)$ corresponds to a Pl\"{u}cker coordinate, and if $\Phi_{T_M}(\lambda)\notin \Phi_{T_M}(M)$, then we know that the $\lambda$ Pl\"{u}cker coordinate should vanish on $M$. To find an embedding $M\hookrightarrow H/B_H$, we just need to find an element $x\in H/B_H$ for which exactly these Pl\"{u}cker coordinates vanish, and take $\overline{T_M\cdot x}\subseteq H/B_H$.
We go through an example. Consider the pizza
\begin{figure}[H]
\centering
\includegraphics{degen1.png}
\end{figure}
\noindent and $H$ with diagram
\begin{figure}[H]
\centering
\includegraphics{A3xA1.png}
\end{figure}
\noindent and labels (written in one line notation by the identification $W=S_4\times S_2$)
\begin{figure}[H]
\centering
\includegraphics{degenatlas.png}
\end{figure}
\noindent We relabel the edges in a way that they correspond to the left-multiplications (action on values as opposed to positions) in $W$:
\begin{figure}[H]
\centering
\includegraphics{degen2.png}
\end{figure}
\noindent Then read off the directions that left-multiplication by simple roots correspond to
\begin{figure}[H]
\centering
\includegraphics{degenvectors.png}
\caption{$T_M\subset T_H$\label{fig:TMinsideT}}
\end{figure}
\noindent Note that this expresses how the subtorus $T_M$ sits in $T_H$. Then for all four simple roots, we contract the edges of the pizza except the ones whose label contains the chosen simple root as a summand, and see which Pl\"{u}cker coordinates lie outside the polytope:
\begin{figure}[H]
\centering
\includegraphics{contractions.png}
\end{figure}
So the vanishing Pl\"{u}cker coordinates are: $(1,-)$ and $(234,-)$. A representative in $H$ for which precisely these Pl\"{u}cker coordinates vanish is the pair of matrices
\[
r=
\left(
\begin{pmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
1 & 1 & 2 & 0 \\
1 & 2 & 3 & -1
\end{pmatrix},
\begin{pmatrix}
1 & 0 \\
1 & 1
\end{pmatrix}
\right),
\]
where $(1,-)$ is the $(1,1)$-entry of the first matrix, and $(234,-)$ is the minor formed by the columns $1,2,3$ and the rows $2,3,4$ of the first matrix.
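\noindent As a quick sanity check, this vanishing pattern can be verified directly from the matrix. The text above only identifies $(1,-)$ with the $(1,1)$-entry and $(234,-)$ with the rows-$\{2,3,4\}$, columns-$\{1,2,3\}$ minor; reading the remaining coordinates $(i,-)$ as the first-column entries and $(ijk,-)$ as the $3\times 3$ minors in the columns $1,2,3$ is our interpretation of the convention. A minimal script, for illustration only:
\begin{verbatim}
# Minimal sketch (illustration only): check which Plucker-type coordinates
# of the 4x4 matrix vanish.  Only the identification of (1,-) with the
# (1,1)-entry and of (234,-) with the rows-{2,3,4}, columns-{1,2,3} minor
# is taken from the text; the remaining coordinates are our own reading.
import itertools
import numpy as np

r1 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [1, 1, 2, 0],
               [1, 2, 3, -1]], dtype=float)

# (i,-): entries of the first column
for i in range(4):
    print(f"({i+1},-) =", r1[i, 0])

# (ijk,-): minors on rows {i,j,k} and columns {1,2,3}
for rows in itertools.combinations(range(4), 3):
    minor = np.linalg.det(r1[list(rows)][:, :3])
    label = "".join(str(i + 1) for i in rows)
    print(f"({label},-) =", round(minor, 10))
# Output: only (1,-) and (234,-) are zero, as claimed.
\end{verbatim}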
A parametrization for $T_M$ is (by figure \ref{fig:TMinsideT})
\[
\left(
\begin{pmatrix}
a & 0 & 0 & 0 \\
0 & ab^{-2} & 0 & 0 \\
0 & 0 & a^{-3}b^4 & 0 \\
0 & 0 & 0 & ab^{-2}
\end{pmatrix},
\begin{pmatrix}
b^{-1} & 0 \\
0 & b
\end{pmatrix}
\right),
\]
with $a,b\in \mathbb{C}^\times$, so we have $M \cong \overline{T_M\cdot rB_H/B_H}$.
|
1,108,101,563,648 | arxiv | \section{Introduction}
Boundary value problems (BVPs) for third order nonlinear differential equations appear in many applied fields, such as flexibility mechanics, chemical engineering, heat conduction and so on. Many works are devoted to the qualitative aspects of these problems (see e.g. \cite{Bai,Cabada2,Gross,Guo1,Rez,Sun,Zhai}). There are also many methods for solving third order BVPs, including analytical methods \cite{Abus,Lv-Gao,Pue} and numerical methods using interpolation polynomials \cite{Al-Sa}, quartic splines \cite{Gao}, \cite{Pandey2}, quintic splines \cite{Khan}, non-polynomial splines \cite{Isla1}, \cite{Isla2}, \cite{Sriv}, and wavelets \cite{Faza}. The majority of the above-mentioned numerical methods are devoted to linear equations or to special nonlinear third order differential equations.\\
In this paper we consider the following BVP
\begin{equation}\label{eq1}
\begin{split}
u^{(3)}(t) &=f(t, u(t), u'(t), u''(t)), \quad 0 < t < 1, \\
u(0)&=c_1, u'(0)=c_2, u'(1)=c_3.
\end{split}
\end{equation}
Some authors have studied the existence and positivity of solutions for this problem, for example by using the method of lower and upper solutions and fixed point theorems on cones: in \cite{Yao-Feng} Yao and Feng established the existence of a solution and of a positive solution for the case $f=f(t,u(t))$, and in \cite{Feng-Liu} Feng and Liu obtained existence results by means of the lower and upper solutions method and a new maximum principle for the case $f=f(t,u(t), u'(t))$. It should be emphasized that the results of these two works are purely existential and do not provide methods for finding solutions. Many researchers are interested in the numerical solution of the problem \eqref{eq1} without attention to its qualitative aspects, or simply refer to the book \cite{Agarwal}. \par
Below we mention some works devoted to solution methods for the problem \eqref{eq1}. Namely,
Al Said et al. \cite{Al-Said} have solved a third order two point BVP using cubic splines. Noor
et al. \cite{Noor} developed a second order method based on quartic splines. Other authors \cite{Cala,Khan} derived finite difference schemes using fourth degree B-splines and quintic polynomial splines for this problem subject to other boundary conditions. El-Danaf \cite{Danaf} constructed a new spline method based on quartic nonpolynomial spline functions, having a polynomial part and a trigonometric part, to develop numerical methods for a linear differential equation with the boundary conditions as in \eqref{eq1}. Recently, in 2016, Pandey \cite{Pandey1} solved the problem for the case $f=f(t,u)$ by the use of quartic polynomial splines; convergence of order at least $O(h^2)$ was proved for the linear case $f=f(t)$. In the next year the same author in \cite{Pandey2} proposed two difference schemes for the general case
$f=f(t, u(t), u'(t), u''(t))$ and also established second order accuracy for the linear case. In the beginning of 2019 Chaurasia et al. \cite{Chau} used an exponential amalgamation of cubic spline functions to form a novel numerical method of second-order accuracy.
It should be emphasized that all the above-mentioned authors only focus on the construction of a discrete analog of the problem \eqref{eq1} and estimate the error of the obtained solution, assuming that the resulting nonlinear system of algebraic equations can be solved by known iterative methods. Thus, they do not take into account the errors arising in these iterative methods.\par
Motivated by these facts, in this paper we propose a completely different method, specifically,
an iterative method on both the continuous and the discrete level for the problem \eqref{eq1}. We give an analysis of the total error of the solution actually obtained. This error includes the error of the iterative method on the continuous level and the error arising in the numerical realization of this iterative method. The obtained total error estimate suggests choosing a suitable grid size for the discretization if one desires to obtain an approximate solution with a given accuracy. In order to justify the total error estimate, we first establish some results on the existence and uniqueness of the solution. These results are obtained by the method developed in \cite{Dang1}-\cite{Dang8}. Some examples demonstrate the validity of the obtained theoretical results and the efficiency of the iterative method.
\section{Existence results}
For simplicity of presentation we consider the problem \eqref{eq1} with homogeneous boundary conditions, i.e., the problem
\begin{equation}\label{eq2}
\begin{split}
u^{(3)}(t) &=f(t, u(t), u'(t), u''(t)), \quad 0 < t < 1, \\
u(0)&=0, u'(0)=0, u'(1)=0.
\end{split}
\end{equation}
To investigate this problem we associate it with an operator equation as follows.\par
For functions $\varphi (t) \in C[0, 1]$ consider the nonlinear operator $A$ defined by
\begin{equation}\label{defA}
(A\varphi )(t)=f(t,u(t), u'(t), u''(t)),
\end{equation}
where $u(t)$ is the solution of the problem
\begin{equation}\label{eq3}
\begin{split}
u'''(t)&=\varphi (t), \quad 0<t<1\\
u(0)&=0, u'(0)=0, u'(1)=0.
\end{split}
\end{equation}
\begin{Proposition}\label{prop:equiv} If the function $\varphi (t)$ is a fixed point of the operator $A$, i.e., $\varphi (t)$ is a solution of the operator equation
$\varphi = A\varphi $ ,
then the function $u(t)$ determined from the BVP \eqref{eq3} solves the problem \eqref{eq2}. Conversely, if $u(t)$ is a solution of the BVP \eqref{eq2} then the function
$\varphi(t)=f(t,u(t), u'(t), u''(t))$
is a fixed point of the operator $A$ defined above by \eqref{defA}, \eqref{eq3}.
\end{Proposition}
Thus, the problem \eqref{eq2} is reduced to the fixed point problem for $A$.\par
Now, we study the properties of $A$. For this purpose, notice that the problem \eqref{eq3} has a unique solution representable in the form
\begin{equation}\label{eq2.8}
u(t)=\int_0^1 G_0(t,s)\varphi(s)ds, \quad 0<t<1,
\end{equation}
where $G_0(t,s)$ is the Green function of the problem \eqref{eq3}
\begin{equation*}
\begin{aligned}
G_0(t,s)=\left\{\begin{array}{ll}
\dfrac{s}{2}(t^2-2t+s), \quad 0\le s \le t \le 1,\\
\, \, \dfrac{t^2}{2}(s-1), \quad 0\le t \le s \le 1.\\
\end{array}\right.
\end{aligned}
\end{equation*}
Differentiating both sides of \eqref{eq2.8} gives
\begin{align}
u'(t) & =\int_0^1 G_1(t,s)\varphi(s)ds, \label{eq2.8a}\\
u''(t) & =\int_0^1 G_2(t,s)\varphi(s)ds, \label{eq2.8b}
\end{align}
where
\begin{equation*}
G_1(t,s)=\left\{\begin{array}{ll}
s(t-1), \quad 0\le s \le t \le 1,\\
t(s-1), \quad 0\le t \le s \le 1,\\
\end{array}\right.
\end{equation*}
\begin{equation}\label{eqG2}
G_2(t,s)=\left\{\begin{array}{ll}
s,& \quad 0\le s \le t \le 1,\\
s-1,& \quad 0\le t \le s \le 1.\\
\end{array}\right.
\end{equation}
It is easily seen that
$G_0(t,s) \leq 0, \; G_1(t,s) \leq 0$
in $Q=[0,1]^2$ and
\begin{equation}\label{valGreen}
\begin{aligned}
&M_0= \max _{0\le t\le 1} \int _0 ^1 |G_0(t,s)| \ ds =\dfrac{1}{12},\quad M_1= \max _{0\le t\le 1} \int _0 ^1 |G_1(t,s)| \ ds =\dfrac{1}{8},\\
&M_2= \max _{0\le t\le 1} \int _0 ^1 |G_2(t,s)| \ ds =\dfrac{1}{2}.\\
\end{aligned}
\end{equation}
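These values are easy to confirm numerically. The following minimal sketch (for illustration only; it is not part of the argument) approximates $\max_{0\le t\le 1}\int_0^1 |G_i(t,s)|\,ds$ on a fine grid:
\begin{verbatim}
# Minimal sketch (illustration only): numerically confirm the constants
# M0 = 1/12, M1 = 1/8, M2 = 1/2 by evaluating max_t int_0^1 |G_i(t,s)| ds
# with the trapezium rule on a fine uniform grid.
import numpy as np

def G0(t, s):
    return np.where(s <= t, s / 2 * (t**2 - 2 * t + s), t**2 / 2 * (s - 1))

def G1(t, s):
    return np.where(s <= t, s * (t - 1), t * (s - 1))

def G2(t, s):
    return np.where(s <= t, s, s - 1)

N = 2000
h = 1.0 / N
s = np.linspace(0.0, 1.0, N + 1)
w = np.full(N + 1, h); w[0] = w[-1] = h / 2        # trapezium weights
for name, G, exact in [("M0", G0, 1 / 12), ("M1", G1, 1 / 8), ("M2", G2, 1 / 2)]:
    vals = [np.sum(w * np.abs(G(t, s))) for t in s]
    print(name, max(vals), "expected", exact)
\end{verbatim}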
Next, for each fixed real number $M>0$ introduce the domain
\begin{equation*}
\mathcal{D}_M=\{ (t,x,y,z)| \ 0\leq t\leq 1, \,\,|x| \leq M_0M, \,\, |y| \leq M_1M, \,\,
|z| \leq M_2M \},
\end{equation*}
and as usual, by $B[O,M]$ we denote the closed ball of radius $M$ centered at $0$ in the space of continuous in $[0, 1]$ functions, namely,
$
B[O,M]=\{ \varphi \in C[0,1]| \ \| \varphi \| \leq M \},
$
where
$\| \varphi \|= \max_{0 \leq t \leq 1} |\varphi (t)|. $
By techniques analogous to those in \cite{Dang1}-\cite{Dang8} we obtain the following results.
\begin{Theorem}[Existence of solutions]\label{theorem1}
Suppose that there exists a number $M>0$ such that the function $f(t,x,y,z)$ is continuous and bounded by $M$ in the domain $\mathcal{D}_M$, i.e.,
\begin{equation*}
|f(t,x,y,z)| \leq M
\end{equation*}
for any $(t,x,y,z) \in \mathcal{D}_M .$\par
Then, the problem \eqref{eq2} has a solution $u(t)$ satisfying
\begin{equation*}
|u(t)| \leq M_0M, \; |u'(t)| \leq M_1M,\; |u''(t)| \leq M_2M \text{ for any } 0 \le t \le 1.
\end{equation*}
\end{Theorem}
\begin{Theorem}[Existence and uniqueness of solution]\label{theorem3}
Assume that there exist numbers
$M,L_0, L_1$, $ L_2 \geq 0$ such that
\begin{equation*}
|f(t,x,y,z)| \leq M,
\end{equation*}
\begin{multline}
|f(t,x_2,y_2,z_2)-f(t,x_1,y_1,z_1)| \leq L_0|x_2-x_1|+ L_1|y_2-y_1|+L_2|z_2-z_1|
\end{multline}
for any $(t,x,y,z), (t,x_i,y_i,z_i) \in \mathcal{D}_M \ (i=1,2)$ and
\begin{equation*}
q:=L_0M_0+ L_1M_1+L_2M_2<1.
\end{equation*}
Then, the problem \eqref{eq2} has a unique solution $u(t)$ such that $|u(t)| \leq M_0M,$ $|u'(t)| \leq M_1M, \,\, |u''(t)| \leq M_2M $ for any $0 \le t \le 1$.
\end{Theorem}
\noindent {\bf Remark.} The problem \eqref{eq1} for $u(t)$ with non-homogeneous boundary conditions can be reduced to a problem with homogeneous boundary conditions for a function $v(t)$ by setting $u(t)=v(t)+P_2(t)$, where $P_2(t)$ is the second degree polynomial satisfying the boundary conditions $P_2(0)=c_1, P'_2(0)=c_2, P'_2(1)=c_3$.
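For instance, one may take
\[
P_2(t)=c_1+c_2\,t+\frac{c_3-c_2}{2}\,t^2,
\]
since then $P_2(0)=c_1$, $P_2'(t)=c_2+(c_3-c_2)t$, so $P_2'(0)=c_2$ and $P_2'(1)=c_3$.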
\section{Iterative method on continuous level}\label{IterMeth}
Consider the following iterative method for solving the problem \eqref{eq2}:
\begin{enumerate}
\item Given
\begin{equation}\label{iter1c}
\varphi_0(t)=f(t,0,0,0).
\end{equation}
\item Knowing $\varphi_k(t)$ $(k=0,1,...)$ compute
\begin{equation}\label{iter2c}
\begin{split}
u_k(t) &= \int_0^1 G_0(t,s)\varphi_k(s)ds ,\\
y_k(t) &= \int_0^1 G_1(t,s)\varphi_k(s)ds ,\\
z_k(t) &= \int_0^1 G_2(t,s)\varphi_k(s)ds ,
\end{split}
\end{equation}
\item Update
\begin{equation}\label{iter3c}
\varphi_{k+1}(t) = f(t,u_k(t),y_k(t),z_k(t)).
\end{equation}
\end{enumerate}
Set
\begin{equation*}
p_k=\dfrac{q^k}{1-q}\| \varphi _1 -\varphi _0\|.
\end{equation*}
\begin{Theorem}[Convergence]\label{theorem5} Under the assumptions of Theorem \ref{theorem3} the above iterative method converges and there hold the estimates
\begin{equation*}
\|u_k-u\| \leq M_0p_k, \quad \|u'_k-u'\| \leq M_1p_k,\quad
\|u''_k-u''\| \leq M_2p_k,
\end{equation*}
where $u$ is the exact solution of the problem \eqref{eq2} and $M_0, M_1, M_2$ are given by \eqref{valGreen}.
\end{Theorem}
This theorem follows straightforwardly from the convergence of the successive approximation method for finding the fixed point of the operator $A$ and the representations \eqref{eq2.8}-\eqref{eq2.8b} and \eqref{iter2c}.
\section{Discrete iterative method 1}
To numerically realize the above iterative method we construct the corresponding discrete iterative methods. For this purpose cover the interval $[0, 1]$ by the uniform grid $\bar{\omega}_h=\{t_i=ih, \; h=1/N, i=0,1,...,N \}$ and denote by $\Phi_k(t), U_k(t), Y_k(t), Z_k(t)$ the grid functions, which are defined on the grid $\bar{\omega}_h$ and approximate the functions $\varphi_k (t), u_k(t), y_k(t), z_k(t)$ on this grid, respectively.\par
First, consider the following discrete iterative method, named {\bf Method 1}:
\begin{enumerate}
\item Given
\begin{equation}\label{iter1d}
\Phi_0(t_i)=f(t_i,0,0,0),\ i=0,...,N.
\end{equation}
\item Knowing $\Phi_k(t_i),\; k=0,1,...; \; i=0,...,N, $ compute approximately the definite integrals \eqref{iter2c} by trapezium formulas
\begin{equation}\label{iter2d}
\begin{split}
U_k(t_i) &= \sum _{j=0}^N h\rho_j G_0(t_i,t_j)\Phi_k(t_j), \\
Y_k(t_i) &= \sum _{j=0}^N h\rho_j G_1(t_i,t_j)\Phi_k(t_j) ,\\
Z_k(t_i) &= \sum _{j=0}^N h\rho_j G_2^*(t_i,t_j)\Phi_k(t_j) ,\; i=0,...,N,
\end{split}
\end{equation}
\noindent where $\rho_j$ are the weights of the trapezium formula
\begin{equation*}
\rho_j =
\begin{cases}
1/2,\; j=0,N\\
1, \; j=1,2,...,N-1
\end{cases}
\end{equation*}
and
\begin{equation}\label{eqg2*}
G_2^*(t,s) =
\begin{cases}
s, \quad & 0\leq s < t\leq 1, \\
s-1/2, \quad & s=t, \\
s-1, & 0\leq t < s\leq 1.
\end{cases}
\end{equation}
\item Update
\begin{equation}\label{iter3d}
\Phi_{k+1}(t_i) = f(t_i,U_k(t_i),Y_k(t_i),Z_k(t_i)).
\end{equation}
\end{enumerate}
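For illustration, the whole of Method 1 can be coded in a few lines. The following Python sketch is our own (the routine name \texttt{solve\_method1} and its interface are not taken from the references): the kernels $G_0$, $G_1$, $G_2^*$ are tabulated once on the grid, and the iteration is stopped when $\|\Phi_{k+1}-\Phi_k\|\le TOL$.
\begin{verbatim}
# Minimal illustrative sketch of Method 1 (names and interface are ours).
# f(t, u, y, z) is a vectorized right-hand side of u''' = f(t, u, u', u'').
import numpy as np

def G0(t, s):
    return np.where(s <= t, s / 2 * (t**2 - 2 * t + s), t**2 / 2 * (s - 1))

def G1(t, s):
    return np.where(s <= t, s * (t - 1), t * (s - 1))

def G2star(t, s):
    return np.where(s < t, s, np.where(s > t, s - 1, s - 0.5))

def solve_method1(f, N=64, tol=1e-10, max_iter=100):
    h = 1.0 / N
    t = np.linspace(0.0, 1.0, N + 1)
    rho = np.ones(N + 1); rho[0] = rho[-1] = 0.5   # trapezium weights
    T, S = np.meshgrid(t, t, indexing="ij")        # T[i,j]=t_i, S[i,j]=t_j
    A0 = h * G0(T, S) * rho                        # quadrature matrices:
    A1 = h * G1(T, S) * rho                        # (A @ Phi)[i] approximates
    A2 = h * G2star(T, S) * rho                    # the integrals in (iter2c)
    Phi = f(t, 0.0 * t, 0.0 * t, 0.0 * t)          # Phi_0
    for k in range(max_iter):
        U, Y, Z = A0 @ Phi, A1 @ Phi, A2 @ Phi     # U_k, Y_k, Z_k
        Phi_new = f(t, U, Y, Z)                    # Phi_{k+1}
        if np.max(np.abs(Phi_new - Phi)) <= tol:
            return t, U, Y, Z, k + 1
        Phi = Phi_new
    return t, U, Y, Z, max_iter
\end{verbatim}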
In order to obtain error estimates for the numerical approximate solution of $u(t)$ and its derivatives on the grid, we need the following auxiliary results.
\begin{Proposition}\label{prop1}
Assume that the function $f(t,x,y,z)$ has all continuous partial derivatives up to second order in the domain $\mathcal{D}_M$. Then for the functions $u_k(t), y_k(t), z_k(t), k=0,1,...$, constructed by the iterative method \eqref{iter1c}-\eqref{iter3c} there hold
$z_k(t) \in C^3 [0, 1], \; y_k(t) \in C^4 [0, 1], \;u_k(t) \in C^5 [0, 1].$
\end{Proposition}
\begin{Proof}
We prove the proposition by induction. For $k=0,$ by the assumption on the function $f$ we have $\varphi_0(t) \in C^2[0, 1]$ since $\varphi_0(t)=f(t,0,0,0)$. Taking into account the expression \eqref{eqG2} of the function $G_2(t,s)$ we have
\begin{equation*}
z_0(t)=\int _0^1 G_2(t,s) \varphi_0(s) ds=
\int_0^t s \varphi_0(s) ds +\int_t^1 (s-1) \varphi_0(s) ds.
\end{equation*}
It is easy to see that $z_0'(t)=\varphi_0(t)$. Therefore, $z_0(t) \in C^3[0, 1]$. It implies $y_0(t) \in C^4 [0, 1], \;u_0(t) \in C^5 [0, 1]$.\par
Now suppose $z_k(t) \in C^3 [0, 1], \; y_k(t) \in C^4 [0, 1], \;u_k(t) \in C^5 [0, 1].$ Then, because
$\varphi_{k+1}(t) = f(t,u_k(t),y_k(t),z_k(t))$
and the function $f$ by the assumption has continuous derivative in all variables up to order 2, it follows that $\varphi_{k+1}(t) \in C^2[0, 1]$. Repeating the same argument as for $\varphi_0(t)$ above we obtain that $z_{k+1}(t) \in C^3 [0, 1], \; y_{k+1}(t) \in C^4 [0, 1], \;u_{k+1}(t) \in C^5 [0, 1].$
Thus, the proposition is proved.
\end{Proof}
\begin{Proposition}\label{prop2}
For any function $\varphi (t) \in C^2[0, 1]$ there hold the estimates
\begin{equation}\label{eq:prop2}
\int_0^1 G_n (t_i,s) \varphi (s) ds = \sum _{j=0}^N h\rho_j G_n(t_i,t_j)\varphi(t_j) +O(h^2),
\quad (n=0,1)
\end{equation}
\begin{equation}\label{eq:prop2a}
\int_0^1 G_2 (t_i,s) \varphi (s) ds = \sum _{j=0}^N h\rho_j G_2^*(t_i,t_j)\varphi(t_j) +O(h^2).
\end{equation}
\end{Proposition}
\begin{Proof}
In the case $n=0, 1$, since the functions $G_n(t_i,s)$ are continuous at $s=t_i$ and are polynomials in $s$ in the intervals $[0, t_i]$ and $[t_i, 1]$ we have
\begin{align*}
& \int_0^1 G_n (t_i,s) \varphi (s) ds =\int_0^{t_i} G_n (t_i,s) \varphi (s) ds + \int_{t_i}^1 G_n (t_i,s) \varphi (s) ds\\
&= h\big( \tfrac{1}{2}G_n (t_i,t_0)\varphi (t_0)+ G_n (t_i,t_1)\varphi (t_1)+...+G_n (t_i,t_{i-1})\varphi (t_{i-1})+ \tfrac{1}{2}G_n (t_i,t_i)\varphi (t_i) \big)\\
& + h\big( \tfrac{1}{2}G_n (t_i,t_i)\varphi (t_i)+ G_n (t_i,t_{i+1})\varphi (t_{i+1})+...+G_n (t_i,t_{N-1})\varphi (t_{N-1})\\
&+ \tfrac{1}{2}G_n (t_i,t_N)\varphi (t_N) \big) +O(h^2)\\
&= \sum _{j=0}^N h\rho_j G_n(t_i,t_j)\varphi(t_j) +O(h^2)\quad (n=0,1).
\end{align*}
Thus, the estimate \eqref{eq:prop2} is established.
The estimate \eqref{eq:prop2a} is obtained using the following result, which is easily proved.
\begin{lemma}\label{le2}
Let $p(t)$ be a function having continuous derivatives up to second order in the interval $[0,1]$ except for the point $t_i, \ 0<t_i<1$, where it has a jump.
Denote $\lim _{t \rightarrow {t_i-0}}p(t)=p_i^- $,
$\lim _{t \rightarrow {t_i+0}}p(t)=p_i^+,$
$ p_i= \tfrac{1}{2}(p_i^- +p_i^+) $.
Then
\begin{equation}
\int_0^1 p(t) dt= \sum _{j=0}^N h\rho_j p_j +O(h^2),
\end{equation}
where $p_j=p(t_j), j \ne i.$
\end{lemma}
\end{Proof}
\begin{Proposition}\label{prop3}
Under the assumption of Proposition \ref{prop1} for any $k=0,1,...$ there hold the estimates
\begin{equation}\label{eq:prop3a}
\|\Phi_k -\varphi_k \|= O(h^2),
\end{equation}
\begin{equation}\label{eq:prop3b}
\begin{split}
\|U_k -u_k \|&=O(h^2), \; \|Y_k -y_k \|=O(h^2), \; \|Z_k -z_k \|=O(h^2).
\end{split}
\end{equation}
\noindent where $\|\cdot\|$ denotes the max-norm of a grid function on the grid $\bar{\omega}_h$.
\end{Proposition}
\begin{Proof}
We prove the proposition by induction. For $k=0$ we have immediately $\|\Phi_0 -\varphi_0 \|= 0$. Next, by the first equation in \eqref{iter2c} and Proposition \ref{prop2} we have
\begin{equation}
u_0(t_i)=\int_0^1 G_0 (t_i,s) \varphi_0 (s) ds = \sum _{j=0}^N h\rho_j G_0(t_i,t_j)\varphi_0(t_j)+O(h^2)
\end{equation}
for any $i=0,...,N$.
On the other hand, in view of the first equation in \eqref{iter2d} we have
\begin{equation}
U_0(t_i)= \sum _{j=0}^N h\rho_j G_0(t_i,t_j)\varphi_0(t_j).
\end{equation}
Therefore, $|U_0(t_i)- u_0(t_i)|= O(h^2)$. Consequently, $\|U_0 -u_0 \|=O(h^2) $.\\
Similarly, we have
\begin{equation}
\|Y_0 -y_0 \|=O(h^2), \; \|Z_0 -z_0 \|=O(h^2).
\end{equation}
Now suppose that \eqref{eq:prop3a} and \eqref{eq:prop3b} are valid for $k \ge 0$. We shall show that these estimates are valid for $k+1$.\\
Indeed, by the Lipschitz condition on the function $f$ and the estimates \eqref{eq:prop3b} it is easy to obtain the estimate
\begin{equation}\label{eq:diffPhi}
\|\Phi_{k+1} -\varphi_{k+1} \|= O(h^2)
\end{equation}
Now from the first equation in \eqref{iter2c} by Proposition \ref{prop2} we have
\begin{equation*}
u_{k+1}(t_i)=\int_0^1 G_0 (t_i,s) \varphi_{k+1} (s) ds = \sum _{j=0}^N h\rho_j G_0(t_i,t_j)\varphi_{k+1}(t_j)+O(h^2)
\end{equation*}
On the other hand by the first formula in \eqref{iter2d} we have
\begin{equation*}
U_{k+1}(t_i) = \sum _{j=0}^N h\rho_j G_0(t_i,t_j)\Phi_{k+1}(t_j).
\end{equation*}
From the above equalities,
having in mind the estimate \eqref{eq:diffPhi} we obtain the estimate
$$\|U_{k+1} -u_{k+1} \|=O(h^2).
$$
Similarly, we obtain
$$\|Y_{k+1} -y_{k+1} \|=O(h^2),\; \|Z_{k+1} -z_{k+1}\|=O(h^2).
$$
Thus, by induction we have proved the proposition.
\end{Proof}
Now combining Proposition \ref{prop3} and Theorem \ref{theorem5} results in the following theorem.
\begin{Theorem}\label{thm4}
For the approximate solution of the problem \eqref{eq2} obtained by the discrete iterative method on the uniform grid with gridsize $h$ there hold the estimates
\begin{equation*}
\begin{split}
\|U_k-u\| &\leq M_0p_k +O(h^2), \; \|Y_k-u'\| \leq M_1p_k +O(h^2), \\
\|Z_k-u''\| &\leq M_2p_k +O(h^2).
\end{split}
\end{equation*}
\end{Theorem}
\noindent {\bf Remark 1.} We perform the discrete iterative process \eqref{iter1d}-\eqref{iter3d} until $ \|\Phi_{k+1}-\Phi_k \| \le TOL$, where $TOL$ is a given tolerance. From Theorem \ref{thm4} it is seen that the accuracy of the discrete approximate solution depends both on the number $q$ defined in Theorem \ref{theorem3}, which determines the number of iterations of the continuous iterative method, and on the gridsize $h$.
The number $q$ reflects the nature of the BVP; therefore, it is necessary to choose an appropriate $h$ consistent with $q$, because the choice of a very small $h$ does not by itself increase the accuracy of the approximate discrete solution. Below, in the examples, we shall see this fact.\\
\noindent {\bf Remark 2.} As mentioned in the Introduction, in 2016 Pandey \cite{Pandey1} discretized the problem \eqref{eq1} by quartic splines and proved the second order convergence only for the linear case (when $f=f(x)$). The next year, in \cite{Pandey2}, he constructed two difference schemes for the problem and also proved the second order convergence for the linear case. The obtained system of difference equations is solved iteratively by the Gauss-Seidel or Newton-Raphson method. The error arising in these iterative methods is not considered together with the error of the discretization.
\section{Discrete iterative method 2}
Consider another discrete iterative method, named {\bf Method 2}. The steps of this method are the same as those of Method 1, with the essential difference in Step 2; moreover, the number $N$ is now even, $N=2n$. Namely,\\
2'. Knowing $\Phi_k(t_i),\; k=0,1,...; \; i=0,...,N, $ compute approximately the definite integrals \eqref{iter2c} by the modified Simpson formulas
\begin{equation}\label{iter2dNew}
\begin{split}
U_k(t_i) &= F(G_0 (t_i,.)\Phi_k(.)),\\
Y_k(t_i) &= F(G_1 (t_i,.)\Phi_k(.)),\\
Z_k(t_i) &= F(G_2^* (t_i,.)\Phi_k(.)),
\end{split}
\end{equation}
where
\begin{equation}
F(G_l (t_i,.)\Phi_k(.)) =
\begin{cases}
\sum _{j=0}^N h\rho_j G_l(t_i,t_j)\Phi_k(t_j) \; \text{ if } i \text { is even }\\
\sum _{j=0}^N h\rho_j G_l(t_i,t_j)\Phi_k(t_j) +\dfrac{h}{6}\Big ( G_l(t_i, t_{i-1})\Phi_k(t_{i-1}) -2G_l(t_{i}, t_i)\Phi_k(t_i)\\
\quad +G_l(t_{i} , t_{i+1})\Phi_k(t_{i+1}) \Big ) \;
\text{ if } i \text { is odd } , \\
l=0,1; \; i=0,1,2,...,N.
\end{cases}
\end{equation}
\noindent $\rho_j$ are the weights of the Simpson formula
\begin{equation*}
\rho_j =
\begin{cases}
1/3,\; j=0,N\\
4/3, \; j=1,3,...,N-1\\
2/3, \; j=2,4,...,N-2,
\end{cases}
\end{equation*}
$F(G_2^* (t_i,.)\Phi_k(.))$ is calculated in the same way as $F(G_l (t_i,.)\Phi_k(.))$ above, where $G_l$ is replaced by the function $G_2^*$ defined by the formula \eqref{eqg2*}.
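In an implementation, the only change with respect to Method 1 is the quadrature rule. The following minimal sketch (our own illustration; the name \texttt{simpson\_matrix} is not from the text) assembles the weight matrix realizing $F(G_l(t_i,\cdot)\Phi_k(\cdot))$: composite Simpson weights in every row, plus the correction $\frac{h}{6}\big(G_l(t_i,t_{i-1})\Phi_k(t_{i-1})-2G_l(t_i,t_i)\Phi_k(t_i)+G_l(t_i,t_{i+1})\Phi_k(t_{i+1})\big)$ for odd $i$.
\begin{verbatim}
# Minimal sketch (illustration only) of the modified Simpson quadrature of
# Method 2: for a kernel matrix Gmat[i, j] = G(t_i, t_j) it returns A with
# (A @ Phi)[i] = F(G(t_i, .) Phi(.)) as described above.
import numpy as np

def simpson_matrix(Gmat, h):
    N = Gmat.shape[0] - 1                      # N = 2n
    rho = np.empty(N + 1)
    rho[0] = rho[N] = 1.0 / 3.0
    rho[1:N:2] = 4.0 / 3.0                     # odd interior nodes
    rho[2:N:2] = 2.0 / 3.0                     # even interior nodes
    A = h * Gmat * rho                         # plain composite Simpson part
    for i in range(1, N, 2):                   # correction for odd i
        A[i, i - 1] += h / 6.0 * Gmat[i, i - 1]
        A[i, i]     -= h / 3.0 * Gmat[i, i]
        A[i, i + 1] += h / 6.0 * Gmat[i, i + 1]
    return A
\end{verbatim}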
\begin{Proposition}\label{prop1New}
Assume that the function $f(t,x,y,z)$ has all continuous partial derivatives up to fourth order in the domain $\mathcal{D}_M$. Then for the functions $u_k(t), y_k(t), z_k(t), \varphi_{k+1}(t)$, $k=0,1,...$, constructed by the iterative method \eqref{iter1c}-\eqref{iter3c} there hold
$z_k(t) \in C^5 [0, 1], \; y_k(t) \in C^6 [0, 1], \;u_k(t) \in C^7 [0, 1], \varphi_{k+1}(t) \in C^4 [0,1].$
\end{Proposition}
\begin{Proposition}\label{prop2New}
For any function $\varphi (t) \in C^4[0, 1]$ there hold the estimates
\begin{equation}\label{eq:Sim1}
\int_0^1 G_l (t_i,s) \varphi (s) ds = F(G_l (t_i,.)\varphi(.)) +O(h^3),
\quad (l=0,1)
\end{equation}
\begin{equation}\label{eq:Sim2}
\int_0^1 G_2 (t_i,s) \varphi (s) ds = F(G_2^* (t_i,.)\varphi(.)) +O(h^3).
\end{equation}
\end{Proposition}
\begin{Proof}
Recall that the interval $[0,1]$ is divided into $N=2n$ equal parts by the points $t_i =ih, h=1/N$. In each of the subintervals $[0, t_i]$ and $[t_i, 1]$ the functions $G_l(t_i,s)$ are continuous as polynomials. Therefore, if $i$ is an even number, $i=2m$, then we represent
\begin{equation*}
\int_0^1 G_l (t_i,s) \varphi (s) ds=\int _0^{t_{2m}} \; + \int_{t_{2m}}^1 .
\end{equation*}
Applying the Simpson formula to the integrals in the right-hand side we obtain
\begin{equation*}
\int_0^1 G_l (t_i,s) \varphi (s) ds= F(G_l (t_i,.)\varphi(.)) +O(h^4)
\end{equation*}
because by assumption $\varphi (t) \in C^4[0, 1]$.\\
Now consider the case when $i$ is an odd number, $i=2m+1$. In this case we represent
\begin{equation}\label{eqS3}
I= \int_0^1 G_l (t_i,s) \varphi (s) ds=\int _0^{t_{2m}} \; + \int _{t_{2m}}^{t_{2m+1}} + \int _{t_{2m+1}}^{t_{2m+2}}+
\int_{t_{2m+2}}^1 .
\end{equation}
For simplicity we denote
$$f_j=G_l (t_i,t_j) \varphi (t_j).
$$
Applying the Simpson formula to the first and the fourth integrals in the right-hand side \eqref{eqS3} and the trapezium formula to the second and the third integrals there, we obtain
\begin{align*}
I &=\dfrac{h}{3}[ f_0+f_{2m}+4(f_1+f_3+...+f_{2m-1})+2(f_2+f_4+...+f_{2m-2})]+O(h^4)\\
& +\dfrac{h}{2}(f_{2m}+f_{2m+1})+O(h^3)
+\dfrac{h}{2}(f_{2m+1}+f_{2m+2})+O(h^3)\\
& +\dfrac{h}{3}[ f_{2m+2}+f_{2n}+4(f_{2m+3}+f_{2m+5}+...+f_{2n-1})+2(f_{2m+4}+f_{2m+6}+...+f_{2n-2})]+O(h^4)\\
&= \dfrac{h}{3}[ f_0+f_{2n}+4(f_1+f_3+...+f_{2n-1})+2(f_2+f_4+...+f_{2n-2})]\\
&+ \dfrac{h}{6}(f_{2m}-2f_{2m+1}+f_{2m+2}) +O(h^3)\\
& = F(G_l (t_i,.)\varphi(.)) +O(h^3)
\end{align*}
Thus, in both cases of $i$, even or odd, we have the estimate \eqref{eq:Sim1}.\\
The estimate \eqref{eq:Sim2} is obtained analogously as \eqref{eq:Sim1} if taking into account that
$$ 2G_2^* (t_i,t_i)= G_2^- (t_i,t_i)+G_2^+ (t_i,t_i),
$$
where
\begin{align*}
G_2^{\pm} (t_i,t_i)=\lim _{s\rightarrow t_i \pm 0}G_2(t_i,s)
\end{align*}
\end{Proof}
\begin{Theorem}\label{thm5}
Under the assumptions of Proposition \ref{prop1New}, for the approximate solution of the problem \eqref{eq2} obtained by the discrete iterative method 2 on the uniform grid with gridsize $h$ there hold the estimates
\begin{equation*}
\begin{split}
\|U_k-u\| &\leq M_0p_k +O(h^3), \; \|Y_k-u'\| \leq M_1p_k +O(h^3), \\
\|Z_k-u''\| &\leq M_2p_k +O(h^3).
\end{split}
\end{equation*}
\end{Theorem}
\section{Examples}
We consider some examples to confirm the validity of the obtained theoretical results and the efficiency of the proposed iterative method. \\
\noindent \textbf{Example 1. } (Problem 2 in \cite{Pandey1}) \\
Consider the problem
\begin{align*}
\begin{split}
u'''(x)&=x^4u(x)-u^2(x)+f(x), \; 0<x<1,\\
u(0)&=0, \; u'(0)=-1,\; u'(1)=\sin (1),
\end{split}
\end{align*}
where $f(x)$ is calculated so that the exact solution of the problem is $$u^*(x)=(x-1)\sin (x).$$
It is easy to verify that with $M=7$ all conditions of Theorem \ref{theorem3} are satisfied, so the problem has a unique solution.
The results of the numerical experiments with two different tolerances are given in Tables \ref{table:1}- \ref{table:3}.
\begin{table}[h!]
\centering
\caption{The convergence in Example 1 for $TOL= 10^{-4}$}
\label{table:1}
\begin{tabular}{cccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Order$ & $Error_{Simp}$ & $Order$\\
\hline
8 & 3 & 9.9153e-04 & & 9.7143e-04 & \\
16 & 3 & 2.4646e-04 & 2.0083 & 1.3101e-04 & 2.8905 \\
32 & 3 & 6.0906e-05 & 2.0167 & 1.6020e-05& 3.0317 \\
64 & 3 & 1.4563e-05 & 2.0643 & 1.2587e-06& 3.6696 \\
128 & 3 & 2.9796e-06 & 2.2891 & 8.8553e-07& 0.5073 \\
256 & 3 & 4.3187e-07 & 2.7865 & 8.8165e-07& 0.0063 \\
512 & 3 & 6.7435e-07 & -0.6429 & 8.8118e-07& -7.7719e-04 \\
1024 & 3 & 8.2295e-07 & -0.2873 & 8.8112e-07& 9.6181e-05 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{The convergence in Example 1 for $TOL= 10^{-6}$}
\label{table:2}
\begin{tabular}{cccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Order$ & $Error_{Simp}$ & $Order$\\
\hline
8 & 4 & 9.99237e-04 & & 9.7223e-04 & \\
16 & 4 & 2.4734e-04 & 2.0044 & 1.3189e-04 & 2.8820 \\
32 & 4 & 6.1802e-05 & 2.0008 & 1.6915e-05& 2.9629 \\
64 & 4 & 1.5462e-05 & 1.9989 & 2.1492e-06& 2.9765 \\
128 & 4 & 3.8797e-06 & 1.9947 & 2.8688e-07& 2.9053 \\
256 & 4 & 9.8437e-07 & 1.9787 & 5.2749e-08& 2.4439 \\
512 & 4 & 2.6054e-07 & 1.9177 & 2.3446e-08& 1.1698 \\
1024 & 4 & 7.9583e-08 & 1.7110 & 1.9786e-08& 0.2448 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{The convergence in Example 1 for $TOL= 10^{-10}$}
\label{table:3}
\begin{tabular}{cccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Order$ & $Error_{Simp}$ & $Order$\\
\hline
8 & 7 & 9.9235e-04 & & 9.7222e-04 & \\
16 & 7 & 2.4732e-04 & 2.0045 & 1.3187e-04 & 2.8822 \\
32 & 7 & 6.1782e-05 & 2.0011 & 1.6896e-05& 2.9643 \\
64 & 7 & 1.5443e-05 & 2.0003 & 2.1301e-06& 2.9877 \\
128 & 7 & 3.8605e-06 & 2.0001 & 2.6774e-07& 2.9923 \\
256 & 7 & 9.6511e-07 & 2.0000 & 3.3544e-08& 2.9965 \\
512 & 7 & 2.4128e-07 & 2.0000 & 4.1977e-09& 2.9984 \\
1024 & 7 & 6.0319e-08 & 2.0000 & 5.2483e-10& 2.9997 \\
\hline
\end{tabular}
\end{table}
In the above tables $N$ is the number of grid points, $K$ is the number of iterations, $Error_{trap}$ and $Error_{Simp}$ are the errors $ \| U_K-u^* \|$ in the cases of using Method 1 and Method 2, respectively, and $Order$ is the order of convergence calculated by the formula
$$ Order=\log _2 \frac{\|U^{N/2}_K-u^*\|}{\|U^{N}_K-u^*\|}.
$$
In the above formula the superscripts $N/2$ and $N$ of $U_K$ mean that $U_K$ is computed on the grid with the corresponding number of grid points. \\
From the tables we observe that for each tolerance the number of iterations is constant and the errors of the approximate solution decrease with a rate (order) close to 2 for Method 1 and close to 3 for Method 2 until they cannot be improved further. This can be explained as follows. The total error of the actual approximate solution consists of two terms: the error of the iterative method on the continuous level and the error of numerical integration at each iteration; when these errors are balanced, a further increase of the number of grid points $N$ (or, equivalently, a decrease of the grid size $h$) cannot in general improve the accuracy of the approximate solution.
Notice that in \cite{Pandey1} the author used the Newton-Raphson iteration method to solve the nonlinear system of equations arising after discretization of the differential problem. The iteration process is continued until the maximum difference between two successive iterations, i.e., $\|U_{k+1}-U_k \|$, is less than $10^{-10}$. The number of iterations for achieving this tolerance is not reported. The accuracy for some different $N$ is (see \cite[Table 2]{Pandey1})
\begin{table}[h!]
\centering
\caption{The results in \cite{Pandey1} for the problem in Example 1}
\label{table:4}
\begin{tabular}{ccccc}
\hline
$N$ & 8 & 16 & 32 & 64 \\
\hline
Error & 0.11921225e-01 & 0.33391170e-02 & 0.87742222e-03 & 0.23732412e-03 \\
\hline
\end{tabular}
\end{table}\\
From the tables of our results and of Pandey it is clear that our method gives much better accuracy.\\
\noindent \textbf{Example 2. } (Problem 2 in \cite{Pandey2}) \\
Consider the problem
\begin{align*}
\begin{split}
u'''(x)&=-xu''(x)-6x^2+3x-6, \; 0<x<1,\\
u(0)&=0, \; u'(0)=0,\; u'(1)=0.
\end{split}
\end{align*}
It is easy to verify that with $M=9$ all conditions of Theorem \ref{theorem3} are satisfied, so the problem has a unique solution. This solution is $u(x)=x^2 (\frac{3}{2}-x)$.
The results of the numerical experiments with different tolerances are given in Tables \ref{table:5}, \ref{table:6} and \ref{table:7}.\\
\begin{table}[h!]
\centering
\caption{The convergence in Example 2 for $TOL= 10^{-4}$}
\label{table:5}
\begin{tabular}{cccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Order$ & $Error_{Simp}$ & $Order$\\
\hline
8 & 6 & 0.0078& & 9.7662e-04 & \\
16 & 6 & 0.0020 & 2.0000 & 1.2215e-04 & 2.9991 \\
32 & 6 & 4.8837e-04 & 1.9998 & 1.5345e-05& 2.9929 \\
64 & 6 & 1.2216e-04 & 1.9992 & 1.9936e-06& 2.9443 \\
128 & 6 & 3.0604e-05 & 1.9969 & 3.2471e-07& 2.6181 \\
256 & 6& 7.7157e-06 & 1.9878 & 1.1612e-07& 1.4835 \\
512 & 6 & 1.9937e-06 & 1.9524 & 9.0051e-08& 0.3868 \\
1024 & 6 & 5.6316e-07 & 1.8238 & 8.6794e-08& 0.0532 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{The convergence in Example 2 for $TOL= 10^{-6}$}
\label{table:6}
\begin{tabular}{cccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Order$ & $Error_{Simp}$ & $Order$\\
\hline
8 & 8 & 0.0078& & 9.7662e-04 & \\
16 & 6 & 0.0020 & 2.0000 & 1.2215e-04 & 2.9991 \\
32 & 6 & 4.8837e-04 & 1.9998 & 1.5345e-05& 2.9929 \\
64 & 6 & 1.2216e-04 & 1.9992 & 1.9936e-06& 2.9443 \\
128 & 6 & 3.0604e-05 & 1.9969 & 3.2471e-07& 2.6181 \\
256 & 6& 7.7157e-06 & 1.9878 & 1.1612e-07& 1.4835 \\
512 & 6 & 1.9937e-06 & 1.9524 & 9.0051e-08& 0.3868 \\
1024 & 6 & 5.6316e-07 & 1.8238 & 8.6794e-08& 0.0532 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{The convergence in Example 2 for $TOL= 10^{-10}$}
\label{table:7}
\begin{tabular}{cccccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Error_{Simp}$ & $N$ & $K$& $Error_{trap}$ & $Error_{Simp}$ \\
\hline
8 & 11 & 0.0078 & 2.0650e-13 & 64 & 11 & 1.2207e-04 & 2.5890e-13\\
16 & 11 & 0.0020 & 2.6790e-13 & 128 & 11 & 3.0518e-05 & 2.5790e-13 \\
32 & 11 & 4.8828e-04 & 2.6279e-13 & 256 & 11 & 7.6294e-06 & 2.5802e-13\\
\hline
\end{tabular}
\end{table}
Notice that in \cite{Pandey2} the author used the Gauss-Seidel iteration method to solve the linear system of equations arising after discretization of the differential problem. The iteration process is continued until the maximum difference between two successive iterations, i.e., $\|U_{k+1}-U_k \|$, is less than $10^{-10}$. The results for some different $N$ are
\begin{table}[h!]
\centering
\caption{The results in \cite{Pandey2} for the problem in Example 2}
\label{table:8}
\begin{tabular}{ccccc}
\hline
$N$ & 128 & 256 & 512 & 1024 \\
\hline
Error & 0.30696392e-4 & 0.61094761(-5) & 0.14379621e-5 & 0.41723251e-6 \\
\hline
Iter & 53 & 5 & 3 & 4\\
\hline
\end{tabular}
\end{table}\\
From the tables of our results and of Pandey it is clear that our method gives better accuracy and requires less computational work.\\
\noindent \textbf{Example 3. } \\
Consider the problem
\begin{align*}
\begin{split}
u'''(x)&=(u(x))^2+u'(x)-e^{2x}, \; 0<x<1,\\
u(0)&=1, \; u'(0)=1,\; u'(1)=e.
\end{split}
\end{align*}
It is easy to verify that with $M=10$ all conditions of Theorem \ref{theorem3} are satisfied, so the problem has a unique solution. This solution is $u(x)=e^x$.
The results of the numerical experiments with different tolerances are given in Tables \ref{table:9} and \ref{table:10}.\\
\begin{table}[h!]
\centering
\caption{The convergence in Example 3 for $TOL= 10^{-4}$}
\label{table:9}
\begin{tabular}{cccccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Error_{Simp}$ & $N$ & $K$& $Error_{trap}$ & $Error_{Simp}$ \\
\hline
16 & 8 & 5.4059e-04 & 5.2038e-05 & 128 & 8 & 1.0341e-05 & 2.6902e-06\\
32 & 8 & 1.3655e-04 & 1.4204e-05 & 256 & 8 & 4.0312e-06 & 2.1184e-06 \\
64 & 8 & 3.5582e-05 & 4.9811e-06 & 512 & 8 & 2.4537e-06 & 1.9755e-06\\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{The convergence in Example 3 for $TOL= 10^{-6}$}
\label{table:10}
\begin{tabular}{cccccccc}
\hline
$N$ & $K$& $Error_{trap}$ & $Error_{Simp}$ & $N$ & $K$& $Error_{trap}$ & $Error_{Simp}$ \\
\hline
16 & 11 & 5.3866e-04 & 5.0053e-05 & 128 & 11 & 8.3853e-06 & 7.3348e-07\\
32 & 11 & 1.3460e-04 & 1.2241e-05 & 256 & 11 & 2.0750e-06 & 1.6199e-07 \\
64 & 11 & 3.3627e-05 & 3.0231e-06 & 512 & 11 & 4.9743e-07 & 1.9180e-08\\
\hline
\end{tabular}
\end{table}
\noindent \textbf{Example 4. }\\
Consider the problem for fully third order differential equation
\begin{align*}
\begin{split}
u'''(x)&=-e^{u(x)}-e^{u'(x)}-\frac{1}{10}(u''(x))^2, \; 0<x<1,\\
u(0)&=0, \; u'(0)=0,\; u'(1)=0.
\end{split}
\end{align*}
It is easy to verify that with $M=3$ all conditions of Theorem \ref{theorem3} are satisfied, so the problem has a unique solution.
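For illustration, this example can be run with the hypothetical \texttt{solve\_method1} routine from the sketch given after the description of Method 1; only the vectorized right-hand side has to be supplied.
\begin{verbatim}
# Usage sketch for Example 4 (illustration only); assumes solve_method1
# from the Method 1 sketch is in scope.
import numpy as np

def f(t, u, y, z):
    return -np.exp(u) - np.exp(y) - 0.1 * z**2

t, U, Y, Z, iters = solve_method1(f, N=64, tol=1e-10)
print("iterations:", iters)   # compare with the values of K in the table below
\end{verbatim}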
\begin{table}[h!]
\centering
\caption{The convergence in Example 4 for $TOL= 10^{-10}$}
\begin{tabular}{ccccc}
\hline
$N$ & 8 & 16 & 32 & 64 \\
\hline
$K$ & 15 & 15 & 15 & 15\\
\hline
\end{tabular}
\end{table}\\
The numerical solution of the problem is depicted in Figure \ref{fig3}.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5cm,width=8cm]{ex4.eps}
\caption{The graph of the approximate solution in Example $4$. }
\label{fig3}
\end{center}
\end{figure}
\section{Conclusion}
In this paper we established the existence and uniqueness of solution for a boundary value problem for fully third order differential equation. Next, for finding this solution we proposed an iterative method at both continuous and discrete levels. The numerical realization of the discrete iterative method is very simple. It is based on popular rules for numerical integration.
One of the important results is that we obtained an estimate for the total error of the approximate solution actually computed. This total error depends on the number of iterations performed and on the discretization parameter. The validity of the theoretical results and the efficiency of the iterative method are illustrated in examples. \par
The method for investigating the existence and uniqueness of solution and the iterative schemes for finding solution in this paper can be applied to other third order nonlinear boundary value problems, and in general, for higher order nonlinear boundary value problems.
|
1,108,101,563,649 | arxiv | \section{Introduction and main result}\label{s:intro}
\subsection{Introduction}
The Piunikhin--Salamon--Schwarz isomorphism between the Hamiltonian Floer homology and the quantum homology of a symplectic manifold $M$ was constructed in \cite{PSS}. Nowadays it has become a standard tool of symplectic topology and is known under the abbreviated name of the PSS isomorphism. The analogs of the PSS isomorphism for Lagrangian Floer homology were constructed at varying levels of generality and rigor in \cite{Katic_Milinkovic_PSS_Lagr_intersections, Cornea_Lalonde_Cluster_homology, Biran_Cornea_Quantum_structures_Lagr_submfds, Albers_Lagr_PSS_comparison_morphisms_HF, Leclercq_spectral_invariants_Lagr_FH, Biran_Cornea_Rigidity_uniruling, HLL11}. Applications of these isomorphisms include spectral invariants, see \cite{Oh_Construction_sp_invts_Ham_paths_closed_symp_mfds, Leclercq_spectral_invariants_Lagr_FH, Leclercq_Zapolsky_Spectral_invts_monotone_Lags} and references therein, as well as problems such as homological Lagrangian monodromy \cite{HLL11}.
Our contribution in the present paper is as follows:
\begin{itemize}
\item We extend the construction of the PSS isomorphism for a monotone Lagrangian submanifold $L \subset M$ with minimal Maslov number at least two, appearing in \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}, to arbitrary coefficients, under a certain assumption on the second Stiefel--Whitney class of $L$.
\item We show that the natural algebraic structures on Lagrangian Floer and quantum homology, such as products and the quantum module action, are intertwined by this isomorphism. This has been known to experts (see \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}), and here we provide details of the relevant construction, again with arbitrary coefficients.
\item We use and develop the approach of canonical orientations, appearing for instance in \cite{Welschinger_Open_strings_Lag_conductors_Floer_fctr, Seidel_The_Book, Abouzaid_Symp_H_Viterbo_thm}, to tackle the issue of signs, rather than using the usual approach of coherent orientations; this allows us to generalize the construction of the Lagrangian Floer and quantum complexes, as well as of the PSS isomorphism, and in particular our Lagrangian is not assumed to be orientable or to carry a relative Pin-structure.
\end{itemize}
The canonical Floer and quantum complexes that we construct in this paper distinguish homotopy classes of cappings. In applications it is often more desirable to have smaller complexes in which cappings are only distinguished by their area or homology class. Relative Pin-structures allow us to do that. They are also used in order to endow the canonical complexes and their homologies with module structures over Novikov rings.
The existing literature does not explicitly cover one particular case appearing when proving that the quantum boundary operator squares to zero, namely the case analogous to bubbling off of a holomorphic disk of Maslov index $2$ in Floer homology. Even though it is fairly straightforward, at least with coefficients in ${\mathbb{Z}}_2$ \cite{Paul_Octav_Private_comm_July_2015}, we include a treatment of this case with arbitrary coefficients in \S \ref{ss:boundary_op_squares_zero_bubbling_QH} for the sake of completeness.
We also provide a description of various auxiliary algebraic structures, which have already appeared in the literature, such as augmentations and duality \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}, and spectral sequences \cite{Oh_HF_spectral_sequences_Maslov_class, Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling, Biran_Lag_non_intersections}. These require careful formulation due to the presence of orientations.
Lastly, we compute the canonical quantum complexes for the standard monotone ${\mathbb{R}} P^n \subset {\mathbb{C}} P^n$, $n \geq 1$, as well as certain monotone Lagrangian tori in ${\mathbb{C}} P^2$ and ${\mathbb{C}} P^1 \times {\mathbb{C}} P^1$.
\subsection{Main result}
Let us fix a closed \footnote{With straightforward modifications, everything said here can be formulated for open symplectic manifolds which are convex at infinity \cite{Eliashberg_Gromov_Convex_symp_mfds}.} connected symplectic manifold $(M,\omega)$ of dimension $2n$ and a closed connected Lagrangian submanifold $L$. There are two natural homomorphisms associated to $L$:
$$\omega {:\ } \pi_2(M,L) \to {\mathbb{R}}\,, \quad \text{the \textbf{symplectic area}, and}$$
$$\mu {:\ } \pi_2(M,L) \to {\mathbb{Z}}\,, \quad\text{the \textbf{Maslov class}.}$$
We say that $L$ is \textbf{monotone} if there is a positive constant $\tau$ such that $\omega = \tau \mu$. We denote by $N_L$ the positive generator of $\mu(\pi_2(M,L)) \subset {\mathbb{Z}}$ if this subgroup is nonzero, otherwise we put $N_L = \infty$. This is called the \textbf{minimal Maslov number} of $L$.
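For example, the standard ${\mathbb{R}} P^n \subset {\mathbb{C}} P^n$, $n \geq 1$, is monotone with $N_L = n+1$, whereas monotone Lagrangian tori, such as the Clifford torus in ${\mathbb{C}} P^2$, have $N_L = 2$; these examples reappear in the computations of \S\ref{s:examples_computations}.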
\medskip
\textbf{Throughout this paper we assume that $L$ is monotone and that its minimal Maslov number is at least $2$.}
\medskip
Since we are dealing with Lagrangian Floer theory with arbitrary coefficients, we need to impose a condition on the second Stiefel--Whitney class of $L$.
\medskip
\textbf{Assumption (O): the class $w_2(TL)$ vanishes on the image of the boundary homomorphism $\pi_3(M,L) \to \pi_2(L)$.}
\medskip
\noindent We impose assumption \textbf{(O)} throughout.
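Note that assumption \textbf{(O)} holds, for instance, whenever $w_2(TL) = 0$, in particular for spin Lagrangians such as tori.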
\begin{rem}
If only coefficients in a ring of characteristic $2$ are required, this assumption is not needed. See also \S\ref{ss:arbitrary_rings_loc_coeffs}.
\end{rem}
\begin{rem}
Assumption \textbf{(O)} is implied by and is strictly weaker than being relatively $\Pin^\pm$, see \S\ref{ss:relative_Pin_structs_coh_ors_disks}.
\end{rem}
Given a commutative ground ring $R$, we construct the Lagrangian Floer complex
$$(CF_*(H:L),\partial_{H,J})$$
of $L$ over $R$, relative to a Hamiltonian perturbation $H$, where $J$ is an $\omega$-compatible almost complex structure on $M$, and the Lagrangian quantum complex
$$(QC_*({\mathcal{D}}:L),\partial_{\mathcal{D}})\,,$$
where ${\mathcal{D}}=(f,\rho,I)$ is a quantum datum, $(f,\rho)$ being a Morse--Smale pair on $L$, and $I$ another compatible almost complex structure. We prove that the homologies $HF_*(L)$ and $QH_*(L)$ of these complexes are independent of the auxiliary data. These carry the structure of unital associative algebras over $R$. Our main result in this paper is
\begin{thm}\label{thm:main_result_existence_PSS_iso}
There is a canonical PSS isomorphism
$$\PSS {:\ } HF_*(L) \to QH_*(L)\,,$$
which is a unital algebra isomorphism.
\end{thm}
\noindent We wish to emphasize that the Lagrangian Floer and quantum complexes, and the PSS isomorphism that we construct, are a generalization of existing constructions, and that the main points here are the use of canonical orientations, the generalization of the construction of the PSS isomorphism to arbitrary coefficients, and a proof of the fact that the natural algebraic structures are intertwined by it.
This theorem is proved as part of the constructions of \S\ref{s:PSS}. In addition the PSS isomorphism respects the so-called quantum module structures on both sides. Namely, $HF_*(L)$ is a superalgebra over the Hamiltonian Floer homology $HF_*(M)$ of $M$, and analogously $QH_*(L)$ is a superalgebra over the quantum homology $QH_*(M)$. The PSS isomorphism we construct intertwines the two structures.
\subsection{Overview of the construction}\label{ss:overview_construction}
Let us briefly review these constructions. We have the path space
$$\Omega_L = \{\gamma {:\ } [0,1] \to M\,|\, \gamma(0),\gamma(1) \in L\,,[\gamma] = 0 \in \pi_1(M,L)\}$$
and its covering space
$$\widetilde\Omega_L = \{[\gamma,\widehat\gamma] \,|\, \gamma \in \Omega_L\,,\widehat\gamma \text{ a capping of }\gamma\}\,,$$
where a capping of $\gamma$ is a map from the closed half-disk to $M$ which restricts to $\gamma$ on the diameter and maps the boundary arc to $L$. Two cappings $\widehat\gamma,\widehat\gamma'$ of the same path $\gamma$ are equivalent if their concatenation $\widehat\gamma \sharp - \widehat\gamma'$ is nullhomotopic relative to $L$, and $[\gamma,\widehat\gamma]$ denotes the equivalence class of the capping $\widehat\gamma$. Given a Hamiltonian $H {:\ } M \times [0,1] \to {\mathbb{R}}$ its action functional is
$${\mathcal{A}}_{H:L} {:\ } \widetilde\Omega_L \to {\mathbb{R}}\,,\quad {\mathcal{A}}_{H:L}([\gamma,\widehat\gamma]) = \int_0^1 H_t(\gamma(t))\,dt - \int \widehat\gamma^*\omega\,.$$
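A standard first-variation computation, in which the boundary terms vanish because $L$ is Lagrangian and the signs depend on the orientation conventions for cappings, gives
$$d{\mathcal{A}}_{H:L}([\gamma,\widehat\gamma])[\xi] = \int_0^1 \omega\big(\xi(t),X_{H_t}(\gamma(t)) - \dot\gamma(t)\big)\,dt$$
for vector fields $\xi$ along $\gamma$ with $\xi(0),\xi(1)$ tangent to $L$, where $X_{H_t}$ denotes the Hamiltonian vector field of $H_t$.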
The critical points $\Crit {\mathcal{A}}_{H:L}$ are those points $[\gamma,\widehat\gamma]$ for which $\gamma$ is a Hamiltonian arc of $H$, that is $\dot \gamma = X_H\circ\gamma$. We call $H$ nondegenerate if for every such critical point the linearized map
$$\phi_{H,*} {:\ } T_{\gamma(0)}M \to T_{\gamma(1)}M$$
maps $T_{\gamma(0)}L$ to a subspace transverse to $T_{\gamma(1)}L$. For such $H$ the underlying module of its Floer complex is defined as
$$CF_*(H:L) = \bigoplus_{\widetilde\gamma \in \Crit {\mathcal{A}}_{H:L}}C(\widetilde\gamma)\,,$$
where $C(\widetilde\gamma)$ is a canonical rank $1$ free $R$-module \footnote{For $R = {\mathbb{Z}}$ it is a ``fake ${\mathbb{Z}}$,'' as J.-Y. Welschinger would call it.} associated to $\widetilde\gamma$. It is generated by the two possible orientations of the determinant line bundle of a certain natural family of Fredholm operators, which are formal linearizations of the Floer PDE defined on cappings $\widehat\gamma$ in class $\widetilde\gamma$. This module is graded by the Conley--Zehnder index $m_{H:L}$.
\begin{rem}
We wish to remark that until quite recently there was no complete treatment of the topic of determinant lines of Fredholm operators, except the paper \cite{Knudsen_Mumford_Projectivity_moduli_space_stable_curves_I_prelims_det_Div}, which however is very abstract, and in which the useful properties were not formulated. To the best of our knowledge the first complete exposition including such properties, only appeared in 2013 in Zinger's paper \cite{Zinger_Det_line_bundle_Fredholm_ops}, on which the present paper relies very heavily. The somewhat unfortunate reality of this topic is that the word ``canonical'' is very much used, often without specifying which one of the possible canonical choices is selected. Zinger's paper remedies the situation. See \S\ref{s:determinants}.
\end{rem}
Given a sufficiently generic compatible almost complex structure $J$ the Floer boundary operator
$$\partial_{H,J} {:\ } CF_*(H:L) \to CF_{*-1}(H:L)$$
is defined via its matrix elements which are homomorphisms
$$\sum_{[u] \in {\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)} C(u) {:\ } C(\widetilde\gamma_-) \to C(\widetilde\gamma_+)\,,$$
where for $\widetilde\gamma_\pm \in \Crit {\mathcal{A}}_{H:L}$ of index difference $1$ we have the moduli space of solutions ${\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$ of the Floer PDE, which is a finite set. Here
$$C(u) {:\ } C(\widetilde\gamma_-) \to C(\widetilde\gamma_+)$$
is an isomorphism defined roughly as follows: linear gluing of Fredholm operators allows us to produce, starting from a formal linearization of the Floer PDE on a capping $\widehat\gamma_-$, a formal linearization on a capping $\widehat\gamma_+$ in the class $\widetilde\gamma_+$. It then follows from the existence of canonical isomorphisms of determinant lines for Fredholm operators that there is a bijection between the orientations of the linearized operator $D_u$ of $u$ and isomorphisms $C(\widetilde\gamma_-) \simeq C(\widetilde\gamma_+)$. The isomorphism $C(u)$ is the one corresponding to the orientation of $D_u$ given by the positive direction of the natural ${\mathbb{R}}$-action on its kernel.
We then prove that $\partial_{H,J}$ squares to zero. This includes a fairly standard argument involving the compactification of the $2$-dimensional solution spaces of the Floer PDE by broken trajectories, as well as bubbling analysis in case $N_L = 2$. We denote by
$$HF_*(H,J:L)$$
the resulting Lagrangian Floer homology. Usual continuation maps yield canonical isomorphisms
$$\Phi_{H,J}^{H',J'} {:\ } HF_*(H,J:L) \to HF_*(H',J': L)\,,$$
which satisfy the cocycle identity and therefore allow us to define the abstract Floer homology $HF_*(L)$. This is endowed with the structure of an associative unital algebra using moduli spaces of solutions of the Floer PDE on the disk with three boundary punctures. The Hamiltonian Floer homology $HF_*(M)$ is made to act on $HF_*(L)$ using the disk with two boundary and one interior puncture.
The quantum complex of a quantum datum ${\mathcal{D}} = (f,\rho,I)$ as above has
$$QC_*({\mathcal{D}}:L) = \bigoplus_{\substack{q \in \Crit f\\ A \in \pi_2(M,L,q)}}C(q,A)$$
as the underlying $R$-module, where $C(q,A)$ is a certain rank $1$ free $R$-module, again generated by the orientations of a certain natural family of Fredholm operators. The boundary operator
$$\partial_{\mathcal{D}} {:\ } QC_*({\mathcal{D}}:L) \to QC_{*-1}({\mathcal{D}}:L)$$
has quite an involved definition via the pearly spaces \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}, and therefore we will not review it here. The quantum homology
$$QH_*({\mathcal{D}}:L)$$
is the homology of the chain complex $(QC_*({\mathcal{D}}:L),\partial_{\mathcal{D}})$. We include a treatment of the particular case $N_L = 2$ when ``bubbling'' may arise, see \S \ref{ss:boundary_op_squares_zero_bubbling_QH}. We construct a product and a superalgebra structure over $QH_*(M)$.
Finally the PSS morphism
$$\PSS_{H,J}^{\mathcal{D}} {:\ } CF_*(H:L) \to QC_*({\mathcal{D}}:L)$$
is defined via mixed Floer-pearly moduli spaces. We show that it induces an isomorphism on homology and that it respects continuation maps, which implies that it induces an isomorphism
$$HF_*(L) \to QH_*({\mathcal{D}}:L)\,.$$
We also construct an opposite isomorphism
$$QH_*({\mathcal{D}}:L) \to HF_*(L)$$
using an analogous strategy. Composing the two isomorphisms for different quantum data we obtain ``continuation maps'' for quantum homology
$$\Phi_{\mathcal{D}}^{{\mathcal{D}}'} {:\ } QH_*({\mathcal{D}}:L) \to QH_*({\mathcal{D}}':L)\,,$$
which are isomorphisms by construction, and which satisfy a cocycle identity. This allows us to define the abstract quantum homology $QH_*(L)$. PSS isomorphisms are shown to respect the algebraic structures on both sides, such as the quantum product and the quantum module action, which means that $QH_*(L)$ inherits a product and a superalgebra structure over $QH_*(M)$, and in particular the product is unital and associative.
\subsection{Relation with previous results and constructions}\label{ss:relation_prev_constructions}
Lagrangian Floer homology was, of course, defined by Floer \cite{Floer_Morse_thry_Lagr_intersections, Floer_unregularized_grad_flow_symp_action, Floer_Witten_cx_inf_dim_Morse_thry} for weakly exact Lagrangians and extended to the monotone case by Oh \cite{Oh_FH_Lagr_intersections_hol_disks_I}, both over ${\mathbb{Z}}_2$. For the pants product on Lagrangian Floer homology see \cite{Oh_sympl_topology_action_fcnl_II} and references therein. The superalgebra structure of $HF_*(L)$ over $HF_*(M)$ is known to experts and is a generalization of the so-called ``closed-open'' map, see also \cite{Albers_Lagr_PSS_comparison_morphisms_HF}. Lagrangian quantum homology was constructed by Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} using the pearly complex which was described by Oh \cite{Oh_Relative_Floer_quantum_cohomology}. In \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} the authors construct a product and a quantum module action on the Lagrangian quantum homology. Lagrangian Floer homology with arbitrary coefficients is defined, in the most general case, in \cite{FOOO_Lagr_intersection_Floer_thry_anomaly_obstr_I, FOOO_Lagr_intersection_Floer_thry_anomaly_obstr_II}. Hu--Lalonde treat the monotone case in \cite{Hu_Lalonde_Relative_Seidel_morphism_Albers_map}. Also see Seidel's book \cite{Seidel_The_Book} for the exact case; the approach in it is closest to what is done in this paper. Quantum homology with orientations is defined in \cite{Biran_Cornea_Lagr_top_enumerative_invts} for $L$ being oriented and Spin. The use of (relative) (S)Pin structures to produce coherent orientations appears in \cite{FOOO_Lagr_intersection_Floer_thry_anomaly_obstr_I, FOOO_Lagr_intersection_Floer_thry_anomaly_obstr_II}, as well as in \cite{Solomon_Intersection_thry_moduli_space_holo_curves_Lag_boundary_conds, Hu_Lalonde_Relative_Seidel_morphism_Albers_map, Biran_Cornea_Lagr_top_enumerative_invts, Wehrheim_Woodward_Orientations_pseudoholo_quilts}.
The Lagrangian PSS isomorphism appears already in \cite{Cornea_Lalonde_Cluster_homology}, in the context of cluster homology. The papers \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} define this isomorphism between Lagrangian quantum and Floer homology, in a way that is used in the present paper. Certain particular cases of the Lagrangian PSS morphism for monotone Lagrangians were handled by Albers \cite{Albers_Lagr_PSS_comparison_morphisms_HF}. The case of the zero section of a cotangent bundle appears in \cite{Katic_Milinkovic_PSS_Lagr_intersections}. For conormal bundles see \cite{Duretic_PSS_isos_spectral_invts_conormal_bundle}. For weakly exact Lagrangians a description of the PSS isomorphism appears in \cite{Leclercq_spectral_invariants_Lagr_FH} over ${\mathbb{Z}}_2$ and over arbitrary rings in \cite{HLL11}. The fact that the Lagrangian PSS morphism respects the natural product structures was proved in \cite{Katic_Milinkovic_Simcevic_cohomology_rings_iso_HF_Morse} for the zero section of a cotangent bundle. In general, the fact that the algebraic structures on Lagrangian Floer and quantum homology are canonically isomorphic has been known to experts, although a proof does not seem to be written anywhere.
Coherent orientations were introduced into symplectic topology by Floer and Hofer \cite{Floer_Hofer_Coherent_orientations}. This approach is mainstream now. The use of canonical orientations was inspired by conversations with J.-Y. Welschinger and by his paper \cite{Welschinger_Open_strings_Lag_conductors_Floer_fctr}, as well as by Seidel's book \cite{Seidel_The_Book}. In particular our spaces $C(\widetilde\gamma)$ and $C(q,A)$ appearing in \S\ref{ss:overview_construction} are similar to the orientation spaces of \cite[Chapter 11]{Seidel_The_Book}. See also Abouzaid's expository paper \cite{Abouzaid_Symp_H_Viterbo_thm}. Coherent orientations for the PSS isomorphism in the cotangent bundle case were constructed by Kati\'c--Milinkovi\'c \cite{Katic_Milinkovic_coherent_oris_mixed_moduli_spaces}. Coherent orientations in the context of cluster complexes can be found in \cite{Charest_Source_spaces_perturbations_cluster_complexes}; the approach of that paper resembles what we do here to an extent.
Lagrangian spectral invariants using Floer theory were defined in \cite{Oh_sympl_topology_action_fcnl_I} for the zero section of a cotangent bundle, in \cite{Leclercq_spectral_invariants_Lagr_FH} for weakly exact Lagrangians, and most recently in \cite{Leclercq_Zapolsky_Spectral_invts_monotone_Lags} for monotone Lagrangians using the technical results of the present paper.
\subsection{Overview of the paper}
Since our approach is to use canonical orientations, and the existing literature on this topic is quite scarce, we decided to include an extensive treatment of Floer and quantum homology, which also serves to establish notation. The methods developed while constructing these serve as a foundation for the construction of the PSS isomorphism and for the proofs of its various properties.
It is best to read the paper linearly. In order to avoid redundancies, each section builds upon the previous ones. Since the subject matter is quite technical, the exposition is terse, therefore short summaries are given at the beginning of each section.
In \S\ref{s:determinants} we review the basic notion of the determinant line of a Fredholm operator and state the properties necessary for the rest of the paper.
In \S\ref{s:HF} we define the canonical Floer complex $(CF_*(H:L),\partial_{H,J})$ associated to a regular Floer datum $(H,J)$. We define the boundary operator and prove that it squares to zero, when bubbling of Maslov $2$ disks is absent. Various algebraic structures are defined, such as the quantum product, the unit, and the quantum module structure, and their properties are proved, such as the associativity of the product and the like. The abstract Floer homology $HF_*(L)$ is defined.
In \S\ref{s:QH} we define the quantum complex $(QC_*({\mathcal{D}}:L),\partial_{\mathcal{D}})$, using the canonical orientation approach. We similarly define the boundary operator, prove that it squares to zero in absence of bubbling, and define the quantum product and quantum module action.
In \S\ref{s:PSS} we construct the canonical PSS maps between the Floer and quantum homology of $L$, and prove that they are in fact isomorphisms. As a result of their properties, PSS isomorphisms are used to define ``continuation maps'' in quantum homology, which we use instead of the more direct approach of Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}. Thus we obtain the abstract quantum homology $QH_*(L)$. We prove that the PSS isomorphisms intertwine the algebraic structures on both sides, such as products and quantum module actions.
In \S\ref{s:boundary_op_squares_zero_bubbling} we prove that the Lagrangian Floer and quantum boundary operators square to zero when there is bubbling present.
In \S\ref{s:quotient_cxs} we describe the construction of quotient complexes, which are the more familiar objects in Floer theory. We start with a summary of relative Pin structures and how to use them to define a system of coherent orientations on formal linearized operators corresponding to disks with boundary on $L$. We obtain a simplification of the usual process of constructing such orientations via framings. We describe the construction of quotient complexes in Hamiltonian Floer homology and the quantum homology of $M$, which require no choices at all, and then proceed with the Lagrangian case, which is much less trivial and for which coherent orientations are needed.
In \S\ref{s:examples_computations} we compute the canonical quantum complexes for ${\mathbb{R}} P^n \subset {\mathbb{C}} P^n$, as well as for three monotone tori, the Clifford torus and the Chekanov torus in ${\mathbb{C}} P^2$, and the exotic torus in ${\mathbb{C}} P^1 \times {\mathbb{C}} P^1$.
\begin{acknow}
I wish to thank David Blanc, Strom Borman, Fran\c cois Charette, Boris Chorny, Tobias Ekholm, Misha Entov, Oli Fabert, Penka Georgieva, Luis Haug, Vladimir Hinich, Marco Mazzucchelli, Dusa McDuff, Cedric Membrez, Will Merry, Fabien Ng\^o, Tony Rieser, Dietmar Salamon, Matthias Schwarz, and Egor Shelukhin for numerous stimulating discussions and general interest, Paul Biran and Octav Cornea for reading a portion of the manuscript and providing valuable comments which allowed me to write the abstract and the introduction more clearly, and Leonid Polterovich for useful suggestions. The approach chosen in this project was largely inspired by a conversation with J.-Y. Welschinger at a pierogi restaurant in \L \'od\'z; he taught me canonical orientations in Morse theory, and answered many questions about generalizations to Floer homology. Jacqui Espina kindly agreed to listen to a number of preliminary results and took notes, which later helped me write this manuscript. Special thanks go to Judy Kupferman who listened to me ramble about this paper over innumerable cups of coffee, and to my collaborator R\'emi Leclercq for reading a portion of the text, and for his support and patience during the long time that it took me to complete this project, which is necessary for our \cite{Leclercq_Zapolsky_Spectral_invts_monotone_Lags}.
\end{acknow}
\section{Determinant lines for Fredholm operators}\label{s:determinants}
In this section we collect the necessary preliminaries concerning Fredholm operators, their determinant lines and properties. We refer the reader to the wonderful paper by Zinger \cite{Zinger_Det_line_bundle_Fredholm_ops}, which constructs determinant lines of Fredholm operators and proves their various properties and classification in complete detail.
Given two real Banach spaces $X,Y$, a Fredholm operator between them is a bounded linear operator
$$D {:\ } X \to Y$$
with closed range, whose kernel and cokernel are both finite-dimensional. We let ${\mathcal{F}}(X,Y)$ be the set of Fredholm operators. It is an open subspace of the space of bounded linear operators between $X$ and $Y$ relative to the norm topology. The index of $D$ is the integer
$$\ind D = \dim \ker D - \dim \coker D\,.$$
The index is a locally constant function on ${\mathcal{F}}(X,Y)$; in particular, it is constant on the connected components of ${\mathcal{F}}(X,Y)$.
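For example, any linear map $D {:\ } V \to W$ between finite-dimensional vector spaces is Fredholm, and the rank--nullity theorem gives $\ind D = \dim V - \dim W$, independently of $D$; this is consistent with the fact that the space of all such maps is connected.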
A \textbf{${\mathbb{Z}}_2$-graded line}, or a graded line for short, is a one-dimensional real vector space together with a grading, which is an element of ${\mathbb{Z}}_2 = \{\overline 0, \overline 1\}$. For a graded line $L$ we let $\deg L \in {\mathbb{Z}}_2$ be its grading. For a finite-dimensional real vector space $V$ we denote by
$$\ddd(V)$$
the graded line whose underlying line is the top exterior power $\bigwedge^{\text{max}}V$ and whose grading is $\dim V \pmod 2$. The \textbf{determinant line} of a Fredholm operator $D \in {\mathcal{F}}(X,Y)$ is the graded line
$$\ddd(D) = \ddd(\ker D) \otimes \big(\ddd(\coker D)\big)^\vee\quad \text{with grading} \quad \deg \ddd(D) = \ind D \pmod 2\,.$$
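For example, if $D$ is invertible, then $\ker D = \coker D = 0$, so $\ddd(D) = \ddd(0) \otimes \big(\ddd(0)\big)^\vee$ is canonically identified with ${\mathbb{R}}$ and sits in degree $\overline 0$; here $\ddd(0) \equiv {\mathbb{R}}$, since the top exterior power of the zero vector space is ${\mathbb{R}}$.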
We let
$$\ddd_{X,Y} = \biguplus_{D \in {\mathcal{F}}(X,Y)}\ddd(D)\,.$$
For now this is just a set with fiberwise linear structure with respect to the obvious projection
$$\ddd_{X,Y} \to {\mathcal{F}}(X,Y)\,.$$
We wish to put a topology on $\ddd_{X,Y}$ so that it becomes a line bundle over ${\mathcal{F}}(X,Y)$. However, not just any topology will do. The reason is that applications involve a plethora of natural operations on Fredholm operators, and we wish our topology to be compatible with these operations.
Abstractly, we have the collection of spaces (yet without topology) $\ddd_{X,Y}$, indexed by pairs of real Banach spaces $(X,Y)$. Therefore \emph{a priori} we can topologize each one of these spaces to make them into real line bundles. As we just pointed out, not every such system of topologies will have the desired properties. Zinger \cite{Zinger_Det_line_bundle_Fredholm_ops} lists a number of very natural properties which should be satisfied by any useful such system of topologies, and proves the following fundamental result.
\begin{thm}\label{thm:exist_systems_topologies_det_lines}
There exist systems of topologies on the collection of spaces $\ddd_{X,Y}$, indexed by pairs of Banach spaces $(X,Y)$, satisfying the desired natural properties. \qed
\end{thm}
\noindent It would take us too far afield to list all the properties formulated by Zinger. Instead we refer the reader to his paper, which also contains a precise formulation of Theorem \ref{thm:exist_systems_topologies_det_lines}.
In this paper we fix once and for all a single compatible system of topologies whose existence is guaranteed by Theorem \ref{thm:exist_systems_topologies_det_lines}, for instance the system explicitly described in \cite[Section 4.2]{Zinger_Det_line_bundle_Fredholm_ops}. We will now list the properties which will be needed in the applications presented in the paper. Note that in this section we only list properties pertaining to abstract Fredholm operators. Later in the paper we will specialize to real Cauchy--Riemann operators (see \S \ref{s:HF}, \ref{s:QH}); these have additional properties specific to them, and we will introduce them at appropriate points in the text.
\begin{rem}
Systems of topologies whose existence is asserted in Theorem \ref{thm:exist_systems_topologies_det_lines} are not unique. In fact, Zinger classifies all such systems in \cite[Theorem 2]{Zinger_Det_line_bundle_Fredholm_ops}. From this classification it is apparent that there are infinitely many such systems. However, this has no bearing on results of computations, since they are based solely on the properties of these topologies, which ultimately boil down to canonical constructions involving solely finite-dimensional vector spaces. Since all the compatible systems of topologies satisfy the same properties, computations will always yield the same result, whatever system of topologies is chosen.
\end{rem}
\subsection{Properties of determinant lines}\label{ss:pties_det_lines}
\paragraph{Naturality with respect to isomorphisms} If $f {:\ } V \to W$ is an isomorphism of finite-dimensional real vector spaces, we let
$$\ddd(f) {:\ } \ddd(V) \to \ddd(W)$$
be the induced isomorphism of determinant lines. If $\phi {:\ } X \to X'$, $\psi {:\ } Y \to Y'$ are Banach space isomorphisms, there is an induced homeomorphism
$${\mathcal{F}}(X,Y) \to {\mathcal{F}}(X',Y')\,,\quad D \mapsto \psi \circ D \circ \phi^{-1}\,.$$
This lifts to an isomorphism of line bundles
$$\ddd_{X,Y} \to \ddd_{X',Y'}\,,$$
as follows. For $D \in {\mathcal{F}}(X,Y)$ let $D' = \psi \circ D \circ \phi^{-1}$. The map between $\ddd(D)$ and $\ddd(D')$ is given by
$$\ddd(\phi|_{\ker D}) \otimes \big(\ddd(\overline \psi)^{-1}\big)^\vee {:\ } \ddd(\ker D) \otimes \big(\ddd(\coker D)\big)^\vee \to \ddd(\ker D') \otimes \big(\ddd(\coker D')\big)^\vee\,,$$
where $\overline \psi {:\ } \coker D \to \coker D'$ is the isomorphism induced by $\psi$.
\begin{rem}[Banach bundles] Given a system of topologies on the spaces $\ddd_{X,Y}$ satisfying the naturality property, we can topologize the determinant line of a Fredholm morphism between two Banach bundles. In more detail, let $p {:\ } {\mathcal{X}} \to B$ and $q {:\ } {\mathcal{Y}} \to B$ be two locally trivial Banach bundles over a space $B$, with fibers $X$ and $Y$, respectively. A \textbf{Fredholm morphism} between these is a fiber-preserving fiberwise linear continuous map ${\mathcal{D}} {:\ } {\mathcal{X}} \to {\mathcal{Y}}$ such that for each $b \in B$, ${\mathcal{D}}_b \in {\mathcal{F}}({\mathcal{X}}_b,{\mathcal{Y}}_b)$. We have the associated space
$$\ddd({\mathcal{D}}) = \biguplus_{b\in B}\ddd({\mathcal{D}}_b)\,.$$
Local trivializations of ${\mathcal{X}}$ and ${\mathcal{Y}}$ over $U \subset B$ conjugate ${\mathcal{D}}|_U$ to a map $D_U {:\ } U \to {\mathcal{F}}(X,Y)$, which leads to the line bundle $\ddd(D_U) := (D_U)^* \ddd_{X,Y}$. We can push the topology on $\ddd(D_U)$ to a topology on the space $\ddd({\mathcal{D}})|_U$. The induced topology on $\ddd({\mathcal{D}})$ is then well-defined thanks to the naturality property, and it makes $\ddd({\mathcal{D}})$ into a line bundle over $B$.
\end{rem}
\paragraph{Exact triples} An \textbf{exact triple of Fredholm operators}, or an exact Fredholm triple for short, is a commutative diagram with exact rows
$$\xymatrix{0 \ar[r] & X' \ar[r] \ar[d]^{D'} & X \ar[r] \ar[d]^{D} & X'' \ar[r] \ar[d]^{D''} & 0 \\
0 \ar[r] & Y' \ar[r] & Y \ar[r] & Y'' \ar[r] & 0}$$
where $X',X,X'',Y',Y,Y''$ are Banach spaces and $D',D,D''$ are Fredholm operators. We will denote an exact triple by ${\mathfrak{t}} = (D',D,D'')$ with the Banach spaces and maps between them being implicit. To each such exact Fredholm triple there corresponds an isomorphism
$$\Psi_{\mathfrak{t}} {:\ } \ddd(D') \otimes \ddd(D'') \to \ddd(D)\,,$$
called the exact triple isomorphism. These isomorphisms lift to line bundle isomorphisms over the spaces of exact triples with fixed Banach spaces.
\begin{rem}
We will often omit the Banach spaces and write an exact triple in an abbreviated form as
$$0 \to D' \to D \to D'' \to 0\,.$$
This should cause no confusion.
\end{rem}
\begin{rem}
Oftentimes in applications exact triples come in the form of a family of exact triples where instead of fixed Banach spaces we have Banach bundles and Fredholm morphisms between them. It is then apparent that the exact triple isomorphisms depend continuously on the base space of the bundle.
\end{rem}
\paragraph{Normalization}\label{par:normalization_pty} Given a short exact sequence of finite-dimensional real vector spaces
$$0 \to V' \xrightarrow{\iota} V \xrightarrow{\pi} V'' \to 0\,,$$
there is a naturally induced isomorphism
\begin{equation}\label{eqn:iso_det_lines_exact_seq_vector_spaces}
\ddd(V') \otimes \ddd(V'') \to \ddd(V)\,,
\end{equation}
defined as follows. Pick ordered bases $v_1',\dots,v_k' \in V'$ and $v_{k+1}'',\dots, v_{k+l}'' \in V''$, where $k = \dim V'$, $l = \dim V''$, define $v_i = \iota(v_i')$ for $i \leq k$, and let $v_i \in V$ be such that $\pi(v_i) = v_i''$ for $i > k$. The isomorphism then sends
$$\textstyle \bigwedge_{i=1}^k v_i' \otimes \bigwedge_{i=1}^lv_{k+i}'' \mapsto \bigwedge_{i=1}^{k+l}v_i\,.$$
It can be checked that this is independent of the chosen bases.
Note that if $0$ denotes the zero vector space, we have canonically $\ddd(0) \equiv {\mathbb{R}}$. If $D$ is a surjective Fredholm operator, then there is a canonical isomorphism
\begin{equation}\label{eqn:identify_det_Fredholm_op_det_its_kernel}
\ddd(\ker D) \to \ddd(D) = \ddd(\ker D) \otimes {\mathbb{R}}^\vee \,,\quad \sigma \mapsto \sigma \otimes 1^\vee\,.
\end{equation}
We will sometimes tacitly identify the determinant line of a surjective operator with the determinant line of its kernel.
If ${\mathfrak{t}}=(D',D,D'')$ is an exact Fredholm triple of \emph{surjective} operators, then there is an induced short exact sequence of kernels:
$$0 \to \ker D' \to \ker D \to \ker D'' \to 0\,.$$
The isomorphism
$$\ddd(\ker D') \otimes \ddd(\ker D'') \to \ddd(\ker D)$$
induced from this short exact sequence coincides with the isomorphism $\Psi_{\mathfrak{t}}$ if we use the identification \eqref{eqn:identify_det_Fredholm_op_det_its_kernel}.
Also, if $V$ is a finite-dimensional space, we have the Fredholm operator $0_V {:\ } V \to 0$ and an obvious isomorphism
$$\ddd(0_V) = \ddd(V)\,.$$
\paragraph{The exact squares property}\label{par:exact_squares_pty} Given two graded lines $L_1,L_2$ we define the \textbf{interchange isomorphism}
$$R {:\ } L_1 \otimes L_2 \to L_2 \otimes L_1\,,\quad v_1 \otimes v_2 \mapsto (-1)^{\deg L_1\cdot \deg L_2} v_2 \otimes v_1\,.$$
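For example, if both $L_1$ and $L_2$ have degree $\overline 1$, then $R(v_1 \otimes v_2) = -v_2 \otimes v_1$; this is the usual Koszul sign rule.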
An \textbf{exact square} of vector spaces is by definition a short exact sequence of short exact sequences of vector spaces, that is, a commutative diagram with exact rows and columns consisting of nine vector spaces and maps between them, plus bounding zero vector spaces. An \textbf{exact square of Fredholm operators} is a commutative diagram consisting of two layers of exact squares of Banach spaces and nine Fredholm operators between those layers. We denote such a square schematically as follows, where the bounding zeroes are implicit and are omitted for the sake of economy of space:
$$\xymatrix{D_{\text{LT}} \ar[r] \ar[d] & D_{\text{CT}} \ar[r] \ar[d] & D_{\text{RT}} \ar[d] \\
D_{\text{LM}} \ar[r] \ar[d] & D_{\text{CM}} \ar[r] \ar[d] & D_{\text{RM}} \ar[d] \\
D_{\text{LB}} \ar[r] & D_{\text{CB}} \ar[r] & D_{\text{RB}}}$$
The various exact triple isomorphisms form the following commutative diagram:
$$\xymatrix{\ddd(D_{\text{LT}}) \otimes \ddd(D_{\text{RT}}) \otimes \ddd(D_{\text{LB}}) \otimes \ddd(D_{\text{RB}}) \ar[r]^-{\Psi_{\text{T}} \otimes \Psi_{\text{B}}} \ar[d]^{(\Psi_{\text{L}} \otimes \Psi_{\text{R}}) \circ (\id \otimes R \otimes \id)} & \ddd(D_{\text{CT}}) \otimes \ddd(D_{\text{CB}}) \ar[d]^{\Psi_{\text{C}}}\\
\ddd(D_{\text{LM}}) \otimes \ddd(D_{\text{RM}}) \ar[r]^-{\Psi_{\text{M}}} & \ddd(D_{\text{CM}}) }$$
where $\Psi_{\text{L}}$, $\Psi_{\text{C}}$, $\Psi_{\text{R}}$, $\Psi_{\text{T}}$, $\Psi_{\text{M}}$, $\Psi_{\text{B}}$ denote the exact triple isomorphisms corresponding to the left, center, right, top, middle, and bottom exact triples appearing in the diagram, respectively. A parametrized version of this commutative diagram exists when the given exact squares involve Banach bundles and Fredholm morphisms.
\paragraph{Direct sum isomorphisms} \label{par:direct_sum_isos} Given Fredholm operators $D_i \in {\mathcal{F}}(X_i,Y_i)$, $i=1,2$, there is the direct sum operator $D_1 \oplus D_2 {:\ } X_1 \oplus X_2 \to Y_1 \oplus Y_2$ and an obvious exact triple
$$0 \to D_1 \to D_1 \oplus D_2 \to D_2 \to 0\,.$$
This exact triple gives rise to an isomorphism
$$\ddd(D_1) \otimes \ddd(D_2) \to \ddd(D_1 \oplus D_2)\,.$$
This isomorphism pervades the present paper and is of great importance. We refer to it as the \textbf{direct sum isomorphism}.
Since direct sum isomorphisms are a particular case of exact triple isomorphisms, the exact squares property of the latter implies two properties of the former, namely supercommutativity and associativity. Using the exact Fredholm square
$$\xymatrix{0 \ar[r] \ar[d] & D_1 \ar[r] \ar[d] & D_1 \ar[d] \\ D_2 \ar[r] \ar[d] & D_1 \oplus D_2 \ar[r] \ar[d] & D_1 \ar[d] \\ D_2 \ar[r] & D_2 \ar[r] & 0}$$
we obtain the following commutative diagram:
$$\xymatrix{\ddd(D_1) \otimes \ddd(D_2) \ar@{=}[r] \ar[d]_{R} & \ddd(D_1) \otimes \ddd(D_2) \ar[d]^{\oplus} \\ \ddd(D_2) \otimes \ddd(D_1) \ar[r]^{\oplus} & \ddd(D_1 \oplus D_2)}$$
where $\oplus$ denotes the direct sum isomorphism. This means that the composition of direct sum isomorphisms
$$\ddd(D_1) \otimes \ddd(D_2) \to \ddd(D_1 \oplus D_2) \to \ddd(D_2) \otimes \ddd(D_1)$$
coincides with the interchange isomorphism $R$. This is the supercommutativity property of the direct sum isomorphisms.
Next, if we have a third operator $D_3 \in {\mathcal{F}}(X_3,Y_3)$, then we have the exact square
$$\xymatrix{D_1 \ar[r] \ar[d] & D_1 \oplus D_2 \ar[r] \ar[d] & D_2 \ar[d] \\
D_1 \ar[r] \ar[d] & D_1 \oplus D_2 \oplus D_3 \ar[r] \ar[d] & D_2 \oplus D_3 \ar[d]\\
0 \ar[r] & D_3 \ar[r] & D_3}$$
which yields the commutative diagram
$$\xymatrix{\ddd(D_1) \otimes \ddd(D_2) \otimes \ddd(D_3) \ar[r]^{\oplus \otimes \id} \ar[d]^{\id \otimes \oplus} & \ddd(D_1 \oplus D_2) \otimes \ddd(D_3) \ar[d]^{\oplus} \\
\ddd(D_1) \otimes \ddd(D_2 \oplus D_3) \ar[r]^{\oplus} & \ddd(D_1 \oplus D_2 \oplus D_3)}$$
which means that the direct sum isomorphisms are associative.
Again, all these properties hold in parametric versions as well.
\paragraph{Final remark on terminology} In this paper we will be interested in \emph{orientations} of a Fredholm operator $D$, which are elements of the two-point set $(\ddd(D) - \{0\})/{\mathbb{R}}_{>0}$, and therefore we will often say that a diagram of real lines and isomorphisms \textbf{commutes} to mean that it commutes up to multiplication by a positive real number. Also we will say that a given isomorphism between real lines is \textbf{canonical}, even if it is only canonically defined up to a positive multiple.
\section{Floer homology}\label{s:HF}
In this section we construct the canonical chain complexes computing Lagrangian and Hamiltonian Floer homology. We define the various algebraic structures on these complexes, such as products, the module action of the Hamiltonian Floer homology on the Lagrangian Floer homology, and prove various relations between these operations. We develop these structures in a TQFT-like framework, which allows us to describe all of them in a transparent and unified manner.
In \S\ref{ss:punctures_RS_gluing} we define punctured Riemann surfaces, their gluing, and related concepts. In \S\ref{ss:CROs_dets_gluing} we define real-linear Cauchy--Riemann operators, their determinant lines, and the gluing of such operators defined on surfaces undergoing gluing. In \S\ref{ss:b_smooth_maps_pregluing} we define the technical notion of b-smooth maps, which are maps on punctured Riemann surfaces that extend to suitable compactifications of the surfaces; their main feature is that solutions of the Floer PDE are b-smooth. In \S\ref{ss:CROs_from_b_smooth_maps} we describe the Cauchy--Riemann operators arising from b-smooth maps as formal linearizations of the Floer operator, and explain how pregluing b-smooth maps relates to gluing of the corresponding formal linearizations. In \S\ref{ss:Floer_PDE} we define the Floer PDE on a punctured Riemann surface, as well as on a family of such surfaces. In \S\ref{ss:moduli_spaces_sols_Floer_PDE} we define solution spaces and moduli spaces of solutions of the Floer PDE, and describe their compactness properties. In \S\ref{ss:orientatiosn} we describe the canonical orientations of the linearized operators corresponding to solutions of the Floer PDE in lowest-dimensional solution spaces, as well as the orientations induced on compactified moduli spaces by the canonical orientations corresponding to their boundary points. In \S\ref{ss:operations} we define the matrix elements of operations in Floer homology using the canonical orientations of \S\ref{ss:orientatiosn}, and prove that the various matrix elements satisfy certain identities; the proofs use the induced orientations. In \S\ref{ss:HF} we define the Floer complexes and homology and various algebraic operations on them, and prove their properties. \S\ref{ss:arbitrary_rings_loc_coeffs} deals with the case of arbitrary rings and twisting by local systems. In \S\ref{ss:duality_HF} we treat duality in Floer homology and define the augmentation map as the dual of the unit.
We refer the reader to the papers \cite{Floer_Morse_thry_Lagr_intersections, Floer_unregularized_grad_flow_symp_action, Floer_Witten_cx_inf_dim_Morse_thry, Oh_FH_Lagr_intersections_hol_disks_I}, Schwarz's thesis \cite{Schwarz_PhD_thesis}, Seidel's book \cite{Seidel_The_Book}, and references therein for the analytical results used here. We do not provide precise references for all the results, mainly because they are more or less standard by now. The material presented here is largely borrowed from the wonderful book \cite{Seidel_The_Book}, especially Part II, which has also been extremely influential on the style of exposition chosen here.
\subsection{Punctured Riemann surfaces and their gluing}\label{ss:punctures_RS_gluing}
Fix a compact connected Riemann surface $\widehat\Sigma$ with (possibly empty) boundary, and a finite subset $\Theta \subset \widehat\Sigma$. The elements of $\Theta$ are called \textbf{punctures}. We let $\Sigma = \widehat\Sigma - \Theta$; the surface $\Sigma$ is called a \textbf{punctured Riemann surface}. The set of punctures $\Theta$ is decomposed into two disjoint subsets, $\Theta = \Theta^+ \cup \Theta^-$ of \textbf{positive} and \textbf{negative} punctures. A puncture $\theta$ is called \textbf{boundary} if $\theta \in \partial\widehat\Sigma$, otherwise it is \textbf{interior}.
Set ${\mathbb{R}}^\pm = \{s \in {\mathbb{R}}\,|\, \pm s \geq 0\}$. Throughout we use the following standard surfaces with boundary and corners:
$$S = {\mathbb{R}} \times [0,1]\,,\quad C = {\mathbb{R}} \times S^1\,,$$
the \textbf{standard strip} and the \textbf{standard cylinder}, and
$$S^\pm = {\mathbb{R}}^\pm \times [0,1]\,,\quad C^\pm = {\mathbb{R}}^\pm \times S^1$$
the \textbf{standard half-strips} and \textbf{half-cylinders}. The strips $S,S^\pm$ are given the conformal structures coming from the obvious embeddings into ${\mathbb{C}}$ while $C,C^\pm$ are given conformal structures by viewing $S^1 = {\mathbb{R}}/{\mathbb{Z}}$. We let $(s,t)$ be the standing notation for the standard conformal coordinates on $S^\pm,C^\pm$.
Let $\theta \in\Theta^\pm$ be a puncture. An \textbf{end associated to} $\theta$ is a proper conformal embedding
$$\epsilon_\theta {:\ } C^\pm \to \Sigma \quad \text{if } \theta \text{ is interior,}$$
$$\epsilon_\theta {:\ } S^\pm \to \Sigma \quad \text{if } \theta \text{ is boundary,}$$
in which case we also require that $\epsilon_\theta^{-1}(\partial \Sigma) = {\mathbb{R}}^\pm \times \{0,1\}$. In both cases we require that $\lim_{s \to \pm\infty}\epsilon_\theta(s,t) = \theta$ in $\widehat\Sigma$.
A \textbf{choice of ends} for $\Sigma$ is a family $\{\epsilon_\theta\}_{\theta\in\Theta}$ of ends associated to all the punctures of $\Sigma$, with pairwise disjoint images.
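For example, the standard strip $S$ is itself a punctured Riemann surface: it is conformally equivalent to the closed disk with two boundary punctures, one of which may be declared positive and the other negative, and suitably translated copies of the inclusions $S^\pm \hookrightarrow S$ provide a choice of ends. Similarly, the standard cylinder $C$ is conformally the sphere with two interior punctures, with ends given by translated copies of the inclusions $C^\pm \hookrightarrow C$.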
\paragraph{Gluing punctured Riemann surfaces} \label{par:gluing_punctured_Riem_surf} Let ${\mathcal{T}}$ be a finite tree with vertex set ${\mathcal{V}}={\mathcal{V}}({\mathcal{T}})$ and edge set ${\mathcal{E}}={\mathcal{E}}({\mathcal{T}}) \subset {\mathcal{V}} \times {\mathcal{V}}$. Assume each vertex $v \in {\mathcal{V}}$ is labeled by a punctured Riemann surface $\Sigma_v$ with puncture set $\Theta_v$, and that each edge $e = (v,v')$ is labeled by a pair of punctures $(\theta,\theta') \in \Theta_v^+ \times \Theta_{v'}^-$ of the same type (both boundary or both interior) and a positive real number $R_e$ called a gluing length. Fix a choice of ends for each $\Sigma_v$. We can define the \textbf{glued surface} $\Sigma_{\mathcal{T}}$ corresponding to these data as follows. Take the disjoint union
$$\biguplus_{v \in {\mathcal{V}}}\Sigma_v\,,$$
and for each edge $e$ labeled by $(\theta,\theta') \in \Theta_v^+ \times \Theta_{v'}^-$, remove from it the subset
$$\epsilon_\theta([R_e,\infty)\times[0,1]) \cup \epsilon_{\theta'}((-\infty,-R_e]\times [0,1])\quad \text{if } \theta,\theta' \text{ are boundary, or}$$
$$\epsilon_\theta([R_e,\infty)\times S^1) \cup \epsilon_{\theta'}((-\infty,-R_e]\times S^1)\quad \text{ if they are interior,}$$
where $\epsilon_\theta,\epsilon_{\theta'}$ are the ends associated to $\theta,\theta'$. On the resulting subset of $\biguplus_v\Sigma_v$ make the identification
$$\epsilon_\theta(s,t)\simeq \epsilon_{\theta'}(-R_e+s,t)$$
for $e=(\theta,\theta')$ and $s\in (0,R_e)$.
The glued surface $\Sigma_{\mathcal{T}}$ inherits a conformal structure from the $\Sigma_v$. It also inherits punctures and a choice of ends from the $\Sigma_v$, namely all the punctures that did not appear in labels of the edges of ${\mathcal{T}}$, and the ends associated to them.
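For example, let ${\mathcal{T}}$ be the tree with two vertices and a single edge, both vertices labeled by a copy of the standard strip $S$ viewed as a disk with one positive and one negative boundary puncture, and the edge labeled by the positive puncture of the first copy, the negative puncture of the second copy, and an arbitrary gluing length. The glued surface is again a disk with one positive and one negative boundary puncture, hence conformally equivalent to $S$; this kind of gluing underlies, for instance, the composition of continuation maps.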
Gluing of punctured Riemann surfaces satisfies an associativity property. To formulate it, let ${\mathcal{F}} = {\mathcal{T}}_1\uplus \dots \uplus {\mathcal{T}}_k$ be a subforest of ${\mathcal{T}}$, that is a graph obtained from ${\mathcal{T}}$ by deleting some of the edges. We can then form the glued surfaces $\Sigma_{{\mathcal{T}}_1},\dots,\Sigma_{{\mathcal{T}}_k}$ according to the procedure just described. The quotient tree $\overline{\mathcal{T}} = {\mathcal{T}}/{\mathcal{F}}$ has vertex set $\overline{\mathcal{V}} = \{{\mathcal{T}}_i\}_i$. An edge of $\overline{\mathcal{T}}$ is an edge of ${\mathcal{T}}$ connecting a pair of the subtrees ${\mathcal{T}}_i$. Label the vertices of $\overline{\mathcal{T}}$ by the surfaces $\Sigma_{{\mathcal{T}}_i}$. Note that all the $\Sigma_{{\mathcal{T}}_i}$ have a choice of ends and puncture sets coming from gluing. Labels of edges of ${\mathcal{T}}$ not appearing in ${\mathcal{F}}$ define in a natural way labels of edges of $\overline{\mathcal{T}}$. Therefore we can form the glued surface $\Sigma_{\overline{\mathcal{T}}}$. The associativity of gluing is expressed by means of an obvious canonical identification
$$\Sigma_{\mathcal{T}} = \Sigma_{\overline{\mathcal{T}}}\,,$$
which preserves the conformal structures, the punctures, and the choice of ends.
\subsection{Cauchy--Riemann operators, their determinant lines and gluing}\label{ss:CROs_dets_gluing}
Let $\Sigma$ be a punctured Riemann surface and endow it with a set of ends. Let $(E,F) \to (\Sigma,\partial\Sigma)$ be a Hermitian bundle pair, that is $E$ is a vector bundle endowed with a symplectic form $\omega$ and a compatible almost complex structure $J$, and $F \subset E|_{\partial\Sigma}$ is a Lagrangian subbundle. Choose a connection $\nabla$ on $E$. A \textbf{real Cauchy--Riemann operator} is an operator of the form
$$\overline\partial_\nabla = \nabla^{0,1} {:\ } C^\infty (\Sigma,\partial\Sigma; E,F) \to C^\infty(\Sigma,\Omega^{0,1}_\Sigma \otimes E)\,,$$
that is the complex-antilinear part of $\nabla$, which is defined as follows:
$$\nabla^{0,1}\xi = \nabla\xi + J\,\nabla_{j\cdot}\xi\,,$$
where $j$ denotes the conformal structure on $\Sigma$. When $\Sigma$ is compact, such an operator can be extended to suitable Sobolev completions, where it becomes a Fredholm operator. In order to have an analogous statement for noncompact $\Sigma$, we need to have sufficient control on the behavior of our data at infinity.
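To illustrate the definition of $\overline\partial_\nabla$: if $(E,F) = (\Sigma \times {\mathbb{C}}^n, \partial\Sigma \times {\mathbb{R}}^n)$ is the trivial Hermitian bundle pair and $\nabla$ is the trivial connection, then in a local conformal coordinate $z = s+it$ the operator becomes, up to the standard identification of $(0,1)$-forms with sections, the usual Cauchy--Riemann operator $\xi \mapsto \partial_s \xi + i\,\partial_t \xi$ acting on ${\mathbb{C}}^n$-valued maps with boundary values in ${\mathbb{R}}^n$.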
A \textbf{limiting datum} at a boundary puncture $\theta$ is a quintuple
$$(E_\theta,F_\theta,\omega_\theta,J_\theta,\nabla_\theta)\,,$$
where $(E_\theta,F_\theta) \to ([0,1],\{0,1\})$ is a Hermitian bundle pair with symplectic form $\omega_\theta$ and complex structure $J_\theta$, and $\nabla_\theta$ is a symplectic connection on $E_\theta$, meaning its parallel transport maps along $[0,1]$ are symplectic. If $\theta$ is interior, a limiting datum is a quadruple $(E_\theta,\omega_\theta,J_\theta,\nabla_\theta)$, where $E_\theta \to S^1$ is a Hermitian bundle, while the rest of the symbols carry the same meaning as in the boundary case.
Assume we have fixed a choice of limiting data at all the punctures of $\Sigma$. Let $\pi {:\ } S^\pm \to [0,1]$ or $\pi {:\ } C^\pm \to S^1$ be the projection onto the $t$ variable and assume that for every puncture $\theta$ we have fixed identifications
\begin{equation}\label{eqn:identification_Hermitian_bundle_with_limiting_bundle_at_puncture}
\epsilon_\theta^*E \simeq \pi^*E_\theta
\end{equation}
with respect to which all the data on $\Sigma$ are asymptotic in suitable topologies to the limiting data at $\theta$, that is $\omega \to \omega_\theta$, $J \to J_\theta$, $F \to F_\theta$, and $\nabla \to \nabla_\theta$. In this case we call $\overline\partial_\nabla$ \textbf{admissible}. Note for future use that these identifications can be deformed so that $\omega,J,F,\nabla$ all become constant on the ends. This will be useful for gluing in the next subsection.
We call the connection $\nabla_\theta$ \textbf{nondegenerate} if, in case $\theta$ is boundary, the parallel transport map along $[0,1]$ maps $F_{\theta,0}$ to a subspace of $E_{\theta,1}$ transverse to $F_{\theta,1}$, and in case $\theta$ is interior, the parallel transport map around $S^1$ does not have $1$ as an eigenvalue. If all the connections $\nabla_\theta$ are nondegenerate, we call the operator $\overline\partial_\nabla$ nondegenerate.
Using the measure induced on $\Sigma$ by the choice of ends, one can define Sobolev completions
$$W^{1,p}(\Sigma,\partial\Sigma; E,F)\,,\;L^p(\Sigma,\Omega^{0,1}_\Sigma\otimes E)$$
of the corresponding spaces of smooth sections for $p > 2$; these are Banach spaces. An admissible nondegenerate Cauchy--Riemann operator extends to a Fredholm operator
$$\overline\partial_\nabla {:\ } W^{1,p}(\Sigma,\partial\Sigma;E,F) \to L^p(\Sigma,\Omega^{0,1}_\Sigma\otimes E)\,.$$
\paragraph{Gluing Cauchy--Riemann operators and their determinant lines}\label{par:gluing_CROs_and_det_lines}
Let now ${\mathcal{T}}$ be a gluing tree as in \S \ref{par:gluing_punctured_Riem_surf}, whose vertex set ${\mathcal{V}}$ is labeled by punctured Riemann surfaces $\{\Sigma_v\}_{v \in {\mathcal{V}}}$ with puncture sets $\Theta_v$ and a choice of ends for each $\Sigma_v$. Assume each edge $e = (v,v')$ of ${\mathcal{T}}$ is labeled by a positive gluing length $R_e$ and by a pair $(\theta,\theta') \in \Theta_v^+ \times \Theta_{v'}^-$, where $\theta,\theta'$ are of the same type (both boundary or interior). As described in \S \ref{par:gluing_punctured_Riem_surf}, we can form the glued surface $\Sigma_{\mathcal{T}}$.
For each $v \in {\mathcal{V}}$ let $(E_v,F_v) \to (\Sigma_v,\partial\Sigma_v)$ be a Hermitian bundle pair with Hermitian structure $(\omega_v,J_v)$ and let $\nabla_v$ be a connection on $E_v$. Then we have the corresponding Cauchy--Riemann operator $D_v:=\overline\partial_{\nabla_v}$. Assume now that all the operators $D_v$ are admissible and nondegenerate, and assume that if a pair of punctures $(\theta,\theta')$ label an edge, then they have identical limiting data. Assume also that the identifications \eqref{eqn:identification_Hermitian_bundle_with_limiting_bundle_at_puncture} are such that the data $\omega_v,J_v,F_v,\nabla_v$ are constant at each puncture undergoing gluing. We can then glue the bundle pairs, the Hermitian structures, and the connections in an obvious manner to form a bundle pair
$$(E_{\mathcal{T}},F_{\mathcal{T}}) \to (\Sigma_{\mathcal{T}},\partial\Sigma_{\mathcal{T}})$$
with Hermitian structure $(\omega_{\mathcal{T}},J_{\mathcal{T}})$ and a connection $\nabla_{\mathcal{T}}$. The corresponding operator $D_{\mathcal{T}}:= \overline\partial_{\nabla_{\mathcal{T}}}$ is admissible and nondegenerate.
We can use cutoff functions to patch sections of $(E_v,F_v)$ together to form sections of $(E_{\mathcal{T}},F_{\mathcal{T}})$. Using the orthogonal projection \footnote{Even though we only defined the Cauchy--Riemann operators for $p > 2$, they can also be defined for $p = 2$, in which case the Sobolev spaces involved are Hilbert spaces. It is a standard fact that the kernel and cokernel of such an operator are independent of $p$. Therefore we can take elements of the kernels, patch them together, get an element in $W^{1,2}$ and use the inner product to project. The resulting section belongs to $W^{1,p}$ for all $p$. The same applies to cokernels.} onto $\ker D_{\mathcal{T}}$ we get a linear map
$$\bigoplus_v\ker D_v \to \ker D_{\mathcal{T}}\,,$$
which for large enough gluing lengths $R_e$ becomes an isomorphism. Similarly, we get a linear map
$$\bigoplus_v \coker D_v \to \coker D_{\mathcal{T}}\,,$$
which is also an isomorphism for large gluing lengths. Therefore we have that
$$\ind D_{\mathcal{T}} = \sum_v \ind D_v$$
and we obtain an isomorphism
$$\ddd \Big( \bigoplus_v D_v\Big) \simeq \ddd(D_{\mathcal{T}})\,,$$
independent of the choices. We refer to it as the \textbf{gluing isomorphism} throughout.
This gluing of Cauchy--Riemann operators and the resulting gluing isomorphism of the determinant lines are associative, in the following sense. If ${\mathcal{F}}=\biguplus_i {\mathcal{T}}_i \subset {\mathcal{T}}$ is a subforest as in \S\ref{par:gluing_punctured_Riem_surf}, we can form the glued operators $D_{{\mathcal{T}}_i}$ over $\Sigma_{{\mathcal{T}}_i}$, which are admissible and nondegenerate. These have matching limiting data at punctures labeling edges of $\overline{\mathcal{T}} = {\mathcal{T}}/{\mathcal{F}}$, and therefore can also be glued. The resulting operator $D_{\overline{\mathcal{T}}}$ can be canonically identified with $D_{{\mathcal{T}}}$. The corresponding diagram of isomorphisms of determinant lines commutes:
$$\xymatrix{\ddd \big( \bigoplus_{v\in{\mathcal{V}}} D_v\big) \ar [r] \ar [d]& \ddd \big(\bigoplus_i D_{{\mathcal{T}}_i} \big) \ar [d]\\
\ddd(D_{\mathcal{T}}) \ar@{=}[r]& \ddd(D_{\overline{\mathcal{T}}})}$$
where all the arrows except $=$ are the gluing isomorphisms.
\paragraph{Gluing isomorphisms and direct sum isomorphisms commute} We will now formulate a crucial property satisfied by gluing isomorphisms, which is the foundation of many computations leading to various properties of algebraic operations in Floer theory. Recall the direct sum isomorphism defined in \S\ref{par:direct_sum_isos}. If we have operators $D_i$, $i=1,2,3$, where $D_1,D_2$ can be glued, that is they are defined on Hermitian bundle pairs with matching limiting data at a gluing puncture, and we let $D_1 \sharp D_2$ be the resulting glued operator, then the following diagram commutes:
$$\xymatrix{\ddd(D_1 \oplus D_2 \oplus D_3) \ar[r]^{\oplus} \ar[d]^{\sharp} & \ddd(D_1 \oplus D_2) \otimes \ddd(D_3) \ar[d]^{\sharp \otimes \id}\\ \ddd(D_1 \sharp D_2 \oplus D_3) \ar[r]^{\oplus}& \ddd(D_1 \sharp D_2) \otimes \ddd(D_3)}$$
\subsection{B-smooth maps and their pregluing}\label{ss:b_smooth_maps_pregluing}
If $H {:\ } [0,1] \times M \to {\mathbb{R}}$ is a time-dependent Hamiltonian, the associated Hamiltonian vector field is defined by
$$\omega(X_H^t,\cdot) = -dH_t\,.$$
We call a smooth curve $\gamma {:\ } [0,1] \to M$ a \textbf{Hamiltonian orbit} of $H$ with $L$ \textbf{as the boundary condition} if $\gamma(0),\gamma(1) \in L$ and
$$\dot\gamma(t) = X^t_H(\gamma(t))\,.$$
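For example, if $H \equiv 0$, then $X_H = 0$ and the Hamiltonian orbits of $H$ with $L$ as the boundary condition are precisely the constant paths at points of $L$.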
Similarly if $H {:\ } S^1 \times M \to {\mathbb{R}}$ is time-periodic, a smooth loop $x {:\ } S^1 \to M$ is a \textbf{periodic Hamiltonian orbit} of $H$ if
$$\dot x(t) = X_H^t(x(t))\,.$$
Let $\overline S{}^+ = [0,\infty] \times [0,1]$, and similarly define $\overline S{}^-$ and $\overline C{}^\pm$. We endow $\overline S{}^+$ with the unique structure of a smooth manifold with corners by declaring the map
$$\overline S{}^+ \to [0,1] \times [0,1]\,,\quad (s,t) \mapsto \Big(\frac{s}{\sqrt{1+s^2}},t\Big)$$
to be a diffeomorphism, and we do the same with $\overline S{}^-,\overline C{}^\pm$. If $\Sigma$ is a punctured Riemann surface endowed with a choice of ends for its punctures, we let $\overline \Sigma$ be the smooth manifold with corners obtained from $\Sigma$ by gluing $\overline S{}^\pm, \overline C{}^\pm$ along the ends $\epsilon_\theta$. Intuitively this amounts to compactifying $\Sigma$ by adding a copy of the interval $[0,1]$ for each boundary puncture and a copy of $S^1$ for each interior puncture.
A smooth map $f {:\ } \Sigma \to M$ is called \textbf{b-smooth} \footnote{This concept is borrowed from \cite{Schwarz_PhD_thesis}, where it appears under a different name.} if it has a smooth extension to the whole of $\overline\Sigma$. This in particular means that $f$ extends continuously to the compactified surface $\overline\Sigma$ and that its derivatives (with respect to the $s$ variable on the ends) decay sufficiently rapidly at the punctures. We let
$$C^\infty_b(\Sigma,M)$$
be the set of b-smooth maps and
$$C^\infty_b(\Sigma,\partial\Sigma;M,L)$$
be the subset mapping $\partial\Sigma\to L$.
For $u\in C^\infty_b(\Sigma,\partial\Sigma;M,L)$ and a puncture $\theta$ of $\Sigma$ we let
$$u_\theta = \lim_{|s| \to \infty}u(\epsilon_\theta(s,\cdot))$$
be the limiting curve at $\theta$, defined either on $[0,1]$ or on $S^1$. Assume that we have chosen a time-dependent Hamiltonian $H^\theta$ for every puncture $\theta$, with the additional condition that $H^\theta$ is time-periodic if $\theta$ is interior, and a Hamiltonian orbit $y_\theta$ of $H^\theta$, which has $L$ as the boundary condition if $\theta$ is boundary or is a loop if $\theta$ is interior. We let
$$C^\infty_b(\Sigma,\partial\Sigma;M,L;\{y_\theta\}_\theta) = \{u \in C^\infty_b(\Sigma,\partial\Sigma;M,L)\,|\,u_\theta = y_\theta \text{ for all }\theta\}\,.$$
For $u \in C^\infty_b(\Sigma,\partial\Sigma;M,L;\{y_\theta\}_\theta)$, a fixed puncture $\theta$, and a Riemannian metric $\rho$ on $M$, which in case $\theta$ is boundary satisfies the additional condition that $L$ be totally geodesic with respect to it, we can express $u$ near $\theta$ using the exponential map of $\rho$. To illustrate, assume that $\theta$ is a positive interior puncture. Then there is a section $U \in C^\infty(C^+,\pi^*y_\theta^*TM)$ ($\pi {:\ } C^+ \to S^1$ the projection) such that
$$u(\epsilon_\theta(s,t)) = \exp_{\rho,y_\theta(t)}(U(s,t))$$
for all sufficiently large $s$.
\paragraph{Pregluing b-smooth maps}\label{par:pregluing_b_smooth_maps}
Let again ${\mathcal{T}}$ be a gluing tree as in \S\ref{par:gluing_punctured_Riem_surf} with vertices labeled by $\Sigma_i$. Assume that for each $i$ and for each puncture of $\Sigma_i$ we have chosen a time-dependent Hamiltonian, which is time-periodic in case the puncture is interior, and a Hamiltonian orbit thereof, which has $L$ as the boundary condition if the puncture is boundary, and is a loop if the puncture is interior. Assume that if two punctures label an edge of ${\mathcal{T}}$, then the corresponding Hamiltonians and Hamiltonian orbits coincide. Let $u_i \in C^\infty_b(\Sigma_i,\partial\Sigma_i; M,L)$ have these Hamiltonian orbits as asymptotics.
Using the expression of $u_i$ via exponential maps as above at each puncture undergoing gluing, we can piece together the corresponding vector fields with the help of cutoff functions to get a b-smooth map $u \in C^\infty_b(\Sigma,\partial\Sigma;M,L)$ defined on the surface $\Sigma$ obtained from the $\Sigma_i$ by gluing according to ${\mathcal{T}}$, where we take the gluing lengths to be large enough. Note that $u$ has asymptotics dictated by the orbits corresponding to the punctures of $\Sigma$. We refer to a b-smooth map obtained in such a fashion from the maps $u_i$ as the result of \textbf{pregluing} the $u_i$.
\subsection{Cauchy--Riemann operators associated to b-smooth maps}\label{ss:CROs_from_b_smooth_maps}
We will now describe how to construct an admissible Cauchy--Riemann operator starting from a b-smooth map asymptotic to Hamiltonian orbits. Let $\Sigma$ be a punctured Riemann surface endowed with a choice of ends. Assume that to each puncture $\theta$ we associate a Hamiltonian $H^\theta$ and an orbit $y_\theta$ of $H^\theta$. Fix
$$u \in C^\infty_b(\Sigma,\partial\Sigma;M,L;\{y_\theta\}_\theta)\,.$$
Let $E_u = u^*TM$, $F_u = (u|_{\partial\Sigma})^*TL$ and $\omega_u = u^*\omega$. Assume that for each puncture $\theta$ we have a family of almost complex structures $J^\theta$ on $M$ compatible with $\omega$, such that $J^\theta$ is parametrized by $[0,1]$ if $\theta$ is boundary, and by $S^1$ if it is interior. We let $J_u$ be any compatible almost complex structure on $E_u$ satisfying
$$J_u(\epsilon_\theta(s,t)) = J^\theta_t(u(\epsilon_\theta(s,t)))$$
for all $\theta$ and $(s,t)$.
For each puncture $\theta$ let $E_\theta = y_\theta^*TM$, $\omega_\theta = y_\theta^*\omega$, $J_\theta(t) = J^\theta_t(y_\theta(t))$, and let $\nabla_\theta$ be the symplectic connection whose parallel transport maps along $t$ are given by the linearized flow of $H^\theta$ along $y_\theta$. Finally, if $\theta$ is boundary, let $F_\theta = (y_\theta|_{\{0,1\}})^*TL$. Since $u$ is b-smooth, it extends to a smooth map $\overline u {:\ } \overline\Sigma \to M$, see \S\ref{ss:b_smooth_maps_pregluing}. Choose a connection $\nabla_u$ on $\overline u^*TM$ which is a smooth extension of the set of connections $\nabla_\theta$ over the limiting curves $u_\theta=y_\theta$.
We therefore have all the necessary data, $(E_u,F_u,\omega_u,J_u,\nabla_u)$, in order to define the associated admissible Cauchy--Riemann operator
$$D_u = \overline\partial_{\nabla_u}\,.$$
Let us say that a Hamiltonian orbit $\gamma {:\ } [0,1] \to M$ of $H$ with endpoints on $L$ is \textbf{nondegenerate} if $\phi_{H*,\gamma(0)}^1(T_{\gamma(0)}L)$ is transverse to $T_{\gamma(1)}L$, and an orbit $x {:\ } S^1 \to M$ is nondegenerate if $\phi_{H*,x(0)}^1$ has no eigenvalue equal to $1$.
If all the orbits $y_\theta$ are nondegenerate, so are the connections $\nabla_\theta$ and therefore $D_u$ is nondegenerate. Thus it extends as a Fredholm operator to the Sobolev completions:
$$D_u {:\ } W^{1,p}(\Sigma,\partial\Sigma; E_u,F_u) \to L^p(\Sigma,\Omega_\Sigma^{0,1}\otimes E_u)\,.$$
We refer to $D_u$ as a \textbf{formal linearization} at $u$. See Remark \ref{rem:relating_linearized_and_formal_linearized_ops} for the relation between this operator and the linearization of Floer's PDE.
\paragraph{Gluing Cauchy--Riemann operators and pregluing b-smooth maps} Now we describe how gluing Cauchy--Riemann operators on surfaces relates to pregluing b-smooth maps. We keep the notation of \S\ref{par:pregluing_b_smooth_maps}: ${\mathcal{T}}$ is a gluing tree with vertices $\Sigma_i$, to each puncture of every $\Sigma_i$ there is associated a Hamiltonian and a nondegenerate orbit thereof, so that for every pair of punctures appearing as a label of an edge the corresponding Hamiltonians and orbits coincide. Finally, let $u_i {:\ } \Sigma_i \to M$ be b-smooth maps asymptotic to the chosen Hamiltonian orbits. We also have a preglued b-smooth map $u$ defined on the glued surface $\Sigma$.
We then have Cauchy--Riemann operators $D_i=D_{u_i}$ constructed as above, the operator $D$ glued from the $D_i$ as described in \S\ref{par:gluing_CROs_and_det_lines}, and the operator $D_u$, where the almost complex structure $J_u$ and the connection $\nabla_u$ coincide with the data $J_{u_i}$ and $\nabla_{u_i}$ outside the parts of $\Sigma_i$ participating in the gluing process. We can make an identification between the bundle $E$, obtained by gluing the bundles $E_{u_i}$, and the bundle $E_u$. The operator $D$ acting on $E$ and the operator $D_u$ acting on $E_u$ can now be deformed into one another relative to this identification, keeping the limiting data intact. This deformation induces an isomorphism
$$\ddd(D) \simeq \ddd(D_u)\,,$$
independent of the choices. Details are left to the reader. We refer to this isomorphism as the \textbf{deformation isomorphism} below.
\subsection{The Floer PDE}\label{ss:Floer_PDE}
A \textbf{Floer datum} associated to a puncture $\theta$ of $\Sigma$ is a pair $(H,J)$ where $H$ is a smooth time-dependent Hamiltonian on $M$, while $J$ is a smooth time-dependent family of $\omega$-compatible almost complex structures. Both $H,J$ are required to be time-periodic in case $\theta$ is interior. We call a Floer datum $(H,J)$ associated to a boundary puncture \textbf{nondegenerate} if all the Hamiltonian orbits of $H$ with boundary on $L$ are nondegenerate. If the datum $(H,J)$ is associated to an interior puncture, it is called nondegenerate if all the periodic Hamiltonian orbits of $H$ are nondegenerate.
Fix a punctured Riemann surface $\Sigma$, a choice of ends $\{\epsilon_\theta\}_{\theta\in\Theta}$ for it, and a Floer datum $(H^\theta,J^\theta)$ associated to each puncture $\theta$. A \textbf{perturbation datum} on $\Sigma$ is a pair $(K,I)$ where $K$ is a smooth $1$-form on $\Sigma$ with values in $C^\infty(M)$, satisfying the requirement that
$$K|_{\partial\Sigma}\quad\text{vanishes along} \quad L\,,$$
while $I$ is a family of compatible almost complex structures on $M$ parametrized by $\Sigma$. A perturbation datum $(K,I)$ is said to be \textbf{compatible} with the Floer data $\{(H^\theta,J^\theta)\}_\theta$ associated to the punctures of $\Sigma$ if
$$\epsilon_\theta^*K = H^\theta_t\,dt\quad\text{ and }\quad I(\epsilon_\theta(s,t)) = J^\theta_t \quad \text{for all }(s,t)\text{ and }\theta\,.$$
We can now define the Floer PDE. Assume that $\Sigma$ is a punctured Riemann surface, and we have fixed a choice of ends for it, as well as Floer data associated to every puncture, and a compatible perturbation datum $(K,I)$. We let $X_K$ be the $1$-form on $\Sigma$ with values in Hamiltonian vector fields on $M$, defined via
$$\omega(X_K(\xi),\cdot) = - dK(\xi)\quad\text{ for }\xi \in T\Sigma\,.$$
The \textbf{Floer PDE} is the equation
$$\overline\partial_{K,I}u:=(du - X_K)^{0,1} = 0$$
for $u \in C^\infty(\Sigma,\partial\Sigma; M,L)$.
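Unwinding the notation (a standard pointwise formula; here $j$ denotes the complex structure of $\Sigma$, a piece of notation not used elsewhere in this section): the $(0,1)$-part is taken with respect to $j$ on the domain and $I$ on the target, so at a point $z \in \Sigma$
$$\big(\overline\partial_{K,I}u\big)_z = \tfrac{1}{2}\Big( (du - X_K)_z + I_z(u(z))\circ (du - X_K)_z\circ j_z\Big)\,.$$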
\subsection{Moduli spaces of solutions of the Floer PDE}\label{ss:moduli_spaces_sols_Floer_PDE}
Here we define solution spaces of the Floer PDE, the corresponding moduli spaces, and describe their compactness properties.
\subsubsection{Cappings and action functionals}\label{sss:cappings_action_fcnls}
Let $D^2 \subset {\mathbb{C}}$ be the closed unit disk and let $\dot D^2 = D^2 -\{1\}$ be the punctured disk where the puncture is considered to be positive. The standard end is defined by
\begin{equation}\label{eqn:std_end_cappings_Lagr_HF}
\epsilon_{\std}{:\ } S^+ \to \dot D^2\,, \quad \epsilon_{\std}(z) = \frac{e^{\pi z} - i}{e^{\pi z} + i}\,.
\end{equation}
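As an elementary check (included only for convenience) that this is indeed an end at the puncture: writing $z = s+it$, the map $\epsilon_{\std}$ factors as $z \mapsto e^{\pi z}$, which sends $S^+$ into the part of the closed upper half-plane where $|w| \geq 1$, followed by the M\"obius transformation $w \mapsto \frac{w-i}{w+i}$, which maps the closed upper half-plane into the closed unit disk, taking the real axis to the unit circle and $\infty$ to $1$. Hence $\epsilon_{\std}$ takes values in $\dot D^2$, maps the boundary lines $t=0,1$ into $S^1 - \{1\}$, and $\epsilon_{\std}(s,t) \to 1$ as $s \to \infty$. An entirely analogous factorization, with $e^{2\pi z}$ in place of $e^{\pi z}$, applies to the cylindrical end on the once-punctured sphere introduced below.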
Let $y {:\ } ([0,1],\{0,1\}) \to (M,L)$ be a smooth curve. By definition, a \textbf{capping} of $y$ is a b-smooth map
$$\widehat y \in C^\infty_b(\dot D^2,\partial \dot D^2 ; M,L;y)$$
with respect to the standard end. A capping has a canonical extension to the compactified disk where we glue in an interval $[0,1]$ at the puncture along the end $\epsilon_{\std}$. Two cappings $\widehat y,\widehat y'$ are \textbf{equivalent} if the concatenation of the canonical extension of $\widehat y$ and of the canonical extension of $-\widehat y'$ defines a disk with boundary on $L$ representing the trivial class in $\pi_2(M,L)$, where
$$-\widehat y'(a+ib) = \widehat y'(-a+ib)\,.$$
We use the notation
$$\widetilde y = [y,\widehat y]$$
to denote the equivalence class of cappings of $y$ containing $\widehat y$.
Similarly let $\dot S^2$ be the sphere punctured once, with the puncture being positive. The standard end for it is defined by
$$\epsilon_{\std} {:\ } C^+ \to \dot S^2\,,\quad \epsilon_{\std}(z) = \frac{e^{2\pi z} - i}{e^{2 \pi z} + i}\,,$$
where we view $S^2 = {\mathbb{C}} P^1 = {\mathbb{C}} \cup \{\infty\}$. Let $y {:\ } S^1 \to M$ be a smooth loop. A capping of $y$ is a b-smooth map
$$\widehat y \in C^\infty_b(\dot S^2,M;y)$$
relative to the standard end. Such a capping has a canonical extension to the compactified sphere where we add a circle at infinity according to the standard end. We call two cappings $\widehat y,\widehat y'$ equivalent if the concatenation of the canonical extension of $\widehat y$ and the canonical extension of $-\widehat y'$ is a contractible sphere. Again, we denote $\widetilde y =[y,\widehat y]$ the equivalence class of cappings containing $\widehat y$.
We now define the action functionals. Let
$$\Omega_L = \{y {:\ } ([0,1],\{0,1\}) \to (M,L)\,|\, [y] = 0 \in \pi_1(M,L)\}$$
and let
$$\widetilde\Omega_L = \{\widetilde y=[y,\widehat y]\,|\, \widehat y\text{ is a capping of }y\}\,.$$
We denote by $p {:\ } \widetilde \Omega_L \to \Omega_L$ the obvious projection. Let $H$ be a time-dependent Hamiltonian. The action functional associated to $H$ is
$${\mathcal{A}}_{H:L} {:\ } \widetilde\Omega_L \to {\mathbb{R}}\,,\quad {\mathcal{A}}_{H:L}([y,\widehat y]) = \int_0^1 H_t(y(t))\,dt-\int \widehat y^*\omega\,.$$
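As a sketch of the first-variation computation behind the statement which follows (standard; we suppress the orientation conventions for the boundary of the capping, which only affect an overall sign): for a variation $\xi$ of $y$ with $\xi(0) \in T_{y(0)}L$, $\xi(1) \in T_{y(1)}L$, extended to a variation of the capping, Stokes' theorem together with the Lagrangian condition $\omega|_{TL} = 0$ gives
$$d{\mathcal{A}}_{H:L}([y,\widehat y])[\xi] = \pm\int_0^1 \omega\big(\xi(t),\,\dot y(t) - X_H^t(y(t))\big)\,dt\,,$$
which vanishes for all such $\xi$ precisely when $\dot y = X_H^t(y)$.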
A point $\widetilde y$ is critical for ${\mathcal{A}}_{H:L}$ if and only if $y = p(\widetilde y)$ is a Hamiltonian orbit of $H$. If all the orbits of $H$ with boundary on $L$ are nondegenerate, there is a Conley-Zehnder index
$$m_{H:L} {:\ } \Crit {\mathcal{A}}_{H:L} \to {\mathbb{Z}}\,.$$
We use a definition of this index that satisfies the following shift property: for two cappings $\widehat y,\widehat y'$ of the same orbit $y$ we have
$$m_{H:L}([y,\widehat y]) - m_{H:L}([y,\widehat y']) = -\mu(\widehat y\sharp -\widehat y')\,.$$
In addition, we normalize it as follows. Assume $f$ is a $C^2$-small Morse function on $L$, and that we have identified a neighborhood of $L$ with a neighborhood of the zero section in $T^*L$; let $H$ be obtained by cutting off the pullback of $f$ to $T^*L$ outside the neighborhood. Then the Hamiltonian orbits of $H$ with boundary on $L$ are precisely the constant curves at the critical points of $f$. We let the Conley-Zehnder index of such an orbit, taken with the constant capping, be equal to the Morse index of the corresponding critical point.
We use the abbreviated notations
$$|\widetilde y| := m_{H:L}(\widetilde y)\,,\quad |\widetilde y|':=n-m_{H:L}(\widetilde y)\,,$$
which should cause no confusion.
Similarly, let
$$\Omega = \{y {:\ } S^1 \to M\,|\,[y]=0\in\pi_1(M)\}$$
and
$$\widetilde \Omega = \{\widetilde y=[y,\widehat y]\,|\, \widehat y\text{ is a capping of }y\}\,.$$
We have the projection $p {:\ } \widetilde\Omega \to \Omega$. Let $H$ be a time-periodic Hamiltonian. The associated action functional is
$${\mathcal{A}}_H {:\ } \widetilde \Omega \to {\mathbb{R}}\,,\quad {\mathcal{A}}_H([y,\widehat y]) = \int_{S^1} H_t(y(t))\,dt - \int \widehat y^*\omega\,.$$
Its critical points are $\widetilde y$ with $y=p(\widetilde y)$ being a periodic Hamiltonian orbit of $H$. If all the periodic orbits of $H$ are nondegenerate, there is a Conley-Zehnder index
$$m_{H} {:\ } \Crit {\mathcal{A}}_{H} \to {\mathbb{Z}}\,.$$
This has the shift property
$$m_H([y,\widehat y]) - m_H([y,\widehat y']) = -2c_1(\widehat y \sharp - \widehat y')\,,$$
and is normalized to coincide with the Morse index of a critical point of $H$ provided $H$ is $C^2$-small, autonomous, and Morse, and the critical point is considered as a constant periodic orbit taken with the constant capping.
We use the abbreviated notations
$$|\widetilde y| := m_{H}(\widetilde y)\,,\quad |\widetilde y|':=2n-m_{H}(\widetilde y)\,.$$
\begin{rem}Below we have to use action functionals of both types, ${\mathcal{A}}_{H:L}$ and ${\mathcal{A}}_{H}$, on almost equal footing. In order to keep the notation less cumbersome, we will oftentimes denote both of them just by ${\mathcal{A}}_{H}$. The context will make it clear on which space ($\widetilde\Omega$ or $\widetilde\Omega_L$) it is defined.
\end{rem}
\subsubsection{Solution spaces and moduli spaces}\label{sss:solution_spaces_moduli_spaces}
Until the end of \S\ref{s:HF}, we assume that $\widehat\Sigma$ is either the sphere or the closed disk, and that there is a unique positive puncture $\theta$ and negative punctures $\{\theta_i\}_{i=1}^k$ (possibly $k=0$). Recall that $\Sigma = \widehat \Sigma - \Theta$. Fix a choice of ends for $\Sigma$, nondegenerate Floer data $(H,J)$ associated to $\theta$ and $\{(H^i,J^i)\}_i$ associated to $\theta_i$, and a compatible perturbation datum $(K,I)$. Choose critical points $\widetilde y,\widetilde y_i$ of the action functionals corresponding to $H$ and $H^i$. We define the solution space
$${\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y) = \{u \in C^\infty_b(\Sigma,\partial\Sigma; M,L;\{y_i\}_i,y)\,|\, u\sharp \widehat y_1\sharp\dots\sharp \widehat y_k \in \widetilde y\,,\overline\partial_{K,I}u = 0\}\,,$$
that is, the set of solutions of Floer's PDE with boundary conditions on $L$, asymptotics given by the orbits $y_i,y$, and homotopy class coming from the chosen equivalence classes of cappings for the orbits. Here $u\sharp \widehat y_1 \sharp\dots \sharp \widehat y_k$ denotes a map obtained by pregluing the b-smooth maps $u,\widehat y_1,\dots,\widehat y_k$ according to the obvious gluing tree. To every $u \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\},\widetilde y)$ there is associated the linearized operator (see \cite[Chapter 9]{Seidel_The_Book})
$$D_u {:\ } W^{1,p}(\Sigma,\partial \Sigma; E_u,F_u) \to L^p(\Sigma, \Omega^{0,1}_\Sigma \otimes E_u)\,.$$
\begin{rem}\label{rem:relating_linearized_and_formal_linearized_ops}
In \cite[Chapter 8]{Seidel_The_Book} it is explained how to choose a connection $\nabla_u$ on $E_u$ for which this linearized operator coincides with the formal linearized operator $\nabla_u^{0,1}$ introduced in \S\ref{ss:CROs_from_b_smooth_maps} for b-smooth maps. In the sequel we shall always reserve the notation $D_u$ to mean the linearized operator in case $u$ is a solution of the Floer PDE.
\end{rem}
The perturbation datum $(K,I)$ is called \textbf{regular} if for every choice of critical points $\widetilde y_i,\widetilde y$ and every $u \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ the operator $D_u$ is onto. In this case ${\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ is naturally a smooth manifold of dimension
$$\dim {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y) = |\widetilde y|' - \sum_{i=1}^k|\widetilde y_i|'\,.$$
The set of regular perturbation data $(K,I)$ compatible with the given Floer data is dense in the set of all compatible perturbation data \cite{Seidel_The_Book}.
We single out the special case of a translation-invariant perturbation datum: assume $\Sigma = S$ or $C$ and assume the perturbation datum has the form
$$K(s,t) = H_t\,dt\,,\quad I(s,t) = J_t$$
for all $(s,t)$. In this case the Floer PDE is the original equation for the negative gradient flow of the action functional, to wit
$$\langle \overline\partial_{K,I}u, \partial_s\rangle = \partial_s u + J(u) \big( \partial_t u - X_H(u) \big) = 0\,.$$
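Indeed (a direct check, under the standing assumption that the conformal coordinate on $S$ or $C$ is $z = s+it$, so that $j\partial_s = \partial_t$): since $K(\partial_s) = 0$ and $K(\partial_t) = H_t$, we get $X_K(\partial_s) = 0$ and $X_K(\partial_t) = X_H^t$, whence
$$(du - X_K)^{0,1}(\partial_s) = \tfrac12\Big(\partial_s u + J_t(u)\big(\partial_t u - X_H^t(u)\big)\Big)\,,$$
so, up to the harmless factor $\tfrac12$, the Floer PDE is equivalent to the displayed gradient-flow equation.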
If $\widetilde y_\pm \in \Crit {\mathcal{A}}_H$, we let
$$\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+) := {\mathcal{M}}_\Sigma(K,I;\widetilde y_-,\widetilde y_+)\,.$$
This space admits a natural ${\mathbb{R}}$-action by translation in the $s$ variable. We let
$${\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$$
denote the quotient if the action is free, otherwise we declare it to be empty. We call the Floer datum $(H,J)$ \textbf{regular} if the corresponding translation-invariant perturbation datum is regular. In this case $\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ is a smooth manifold of dimension
$$|\widetilde y_+|' - |\widetilde y_-|' = |\widetilde y_-| - |\widetilde y_+|\,,$$
while ${\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ has dimension $|\widetilde y_-| - |\widetilde y_+| - 1$. We note that the set of regular data is dense in the set of all nondegenerate Floer data \cite{Floer_Hofer_Salamon_Transversality}.
\begin{rem}
Note that by definition a regular Floer datum is nondegenerate.
\end{rem}
\paragraph{Parametrized solution spaces}\label{par:parametrized_solution_spaces}
Next we treat families of surfaces and the Floer PDE on them. Assume for the moment that we have a smooth compact connected oriented surface with boundary $\widehat\Sigma$ equipped with a finite set of punctures $\Theta$ and let $\Sigma= \widehat\Sigma - \Theta$. Let ${\mathcal{S}} \to {\mathcal{R}}$ be a fiber bundle with fiber $\Sigma$, whose structure group is the group of orientation-preserving diffeomorphisms of $\widehat\Sigma$ which are the identity on $\Theta$. In this case ${\mathcal{S}} \to {\mathcal{R}}$ extends to a fiber bundle $\widehat{\mathcal{S}} \to {\mathcal{R}}$ with fiber $\widehat\Sigma$ and the punctures of $\Sigma$ give rise to canonical smooth sections ${\mathcal{R}} \to \widehat{\mathcal{S}}$. We identify a puncture with the corresponding section. We denote the fiber of ${\mathcal{S}} \to {\mathcal{R}}$ over $r \in {\mathcal{R}}$ by $\Sigma_r$.
Assume there is a smooth family of conformal structures on the fibers of ${\mathcal{S}} \to {\mathcal{R}}$. A choice of ends for ${\mathcal{S}} \to {\mathcal{R}}$ is a family of fiberwise maps
$$\epsilon_\theta {:\ } {\mathcal{R}} \times S^\pm \to {\mathcal{S}} \quad \text{ or }\quad \epsilon_\theta {:\ } {\mathcal{R}} \times C^\pm \to {\mathcal{S}}$$
whose restrictions to the fibers over $r$ constitute a choice of ends for the Riemann surface $\Sigma_r$, such that $\epsilon_\theta$ is asymptotic to $\theta$.
Assume that we have fixed a choice of ends for ${\mathcal{S}}$ and nondegenerate Floer data $\{(H^\theta,J^\theta)\}_\theta$ associated to the punctures of $\Sigma$. A perturbation datum on ${\mathcal{S}}$ is a pair $(K,I)$ where $I$ is a family of compatible almost complex structures parametrized by ${\mathcal{S}}$, while $K$ is a smooth family of $1$-forms on the vertical tangent bundle of ${\mathcal{S}} \to {\mathcal{R}}$ such that the restriction $(K_r,I_r)$ is a perturbation datum on $\Sigma_r$. The notion of compatibility of the perturbation datum with the Floer data extends to families in an obvious manner.
Let now $\widehat\Sigma$ be either the sphere or the closed disk, and assume that it carries a unique positive puncture $\theta$ and negative punctures $\{\theta_i\}_{i=1}^k$. Fix a choice of ends for ${\mathcal{S}}$, Floer data $(H,J),\{(H^i,J^i)\}_i$ associated to punctures of $\Sigma$ and a compatible perturbation datum $(K,I)$. Fix also critical points $\widetilde y,\widetilde y_i$ of the action functionals of $H$ and $H^i$. We define
$${\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_i,\widetilde y) = \{(r,u)\,|\,r\in{\mathcal{R}}\,, u \in {\mathcal{M}}_{\Sigma_r}(K_r,I_r;\{\widetilde y_i\}_i, \widetilde y)\}\,.$$
For $(r,u) \in {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ we have the \textbf{extended linearized operator}
$$D_{r,u} {:\ } T_r{\mathcal{R}} \times W^{1,p}(\Sigma_r,\partial\Sigma_r; E_u,F_u) \to L^p(\Sigma_r,\Omega_{\Sigma_r}^{0,1}\otimes E_u)\,.$$
See \cite{Seidel_The_Book} for the precise definition. Note for future use that the restriction
$$D_{r,u}|_{0 \times W^{1,p}(\Sigma_r,\partial\Sigma_r; E_u,F_u)}$$
coincides with the linearized operator $D_u$ of $u \in {\mathcal{M}}_{\Sigma_r}(K_r,I_r;\{\widetilde y_i\}_i,\widetilde y)$. We call the perturbation datum $(K,I)$ for ${\mathcal{S}}$ \textbf{regular} if for every choice of critical points $\widetilde y_i,\widetilde y$ and every $(r,u) \in {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ the extended operator $D_{r,u}$ is onto. In this case ${\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ is a smooth manifold of dimension
$$\dim {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_i,\widetilde y) = \dim {\mathcal{R}} + |\widetilde y|' - \sum_{i=1}^k|\widetilde y_i|'\,.$$
The set of regular perturbation data compatible with the given Floer data is dense in the set of all compatible perturbation data.
\begin{rem}
We note here that a regular solution of the Floer PDE is necessarily a b-smooth map \cite{Schwarz_PhD_thesis}. Therefore all the constructions regarding b-smooth maps in \S\ref{ss:b_smooth_maps_pregluing}, \S\ref{ss:CROs_from_b_smooth_maps} apply to them and their linearized operators.
\end{rem}
\subsubsection{Compactness and gluing}\label{sss:compactness_gluing}
Here we discuss the relevant compactness and gluing results for the above solution spaces. The general statement is that whenever a moduli space is zero-dimensional, it is compact and therefore consists of a finite number of points, whereas when it is one-dimensional, it can be compactified into a compact $1$-dimensional manifold with boundary, where the boundary consists either of boundary points already present in the moduli space, or else of pairs of elements of $0$-dimensional moduli spaces. Moreover, the converse to compactness, called gluing, states that all suitable pairs are obtained in this way.
We start with the description of the relevant $0$-dimensional moduli spaces. There are three basic types of moduli spaces used in Floer homology:
\begin{enumerate}
\item when the surface is either a strip or a cylinder and the perturbation datum is translation-invariant --- this leads to the definition of boundary operators;
\item there is a single surface --- this is used to define various operations on Floer homology;
\item there is a family of surfaces ${\mathcal{S}} \to {\mathcal{R}}$ with ${\mathcal{R}}$ being $1$-dimensional --- this leads to relations between the operations and the boundary operators, such as chain homotopies and various algebraic identities.
\end{enumerate}
\paragraph{The case of translation-invariant perturbation datum on $S$ or $C$}
We treat the case $\Sigma = S$, the case of the cylinder $C$ being entirely similar. We fix a regular Floer datum $(H,J)$. The set $\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ is a smooth manifold of dimension
$$|\widetilde y_-| - |\widetilde y_+|\,.$$
When this difference is $1$, the quotient manifold ${\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ is $0$-dimensional and compact, therefore a finite set of points. When the difference is $2$, there are two cases:
Case I: $N_L \geq 3$ or $y_- \neq y_+$, and
Case II: $N_L = 2$ and $y_- = y_+$.
\noindent In case I, the manifold ${\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ admits a compactification $\overline{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ whose boundary is
$$\partial\overline {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+) = \bigcup_{\widetilde y \in \Crit{\mathcal{A}}_{H:L}} {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y) \times {\mathcal{M}}(H,J;\widetilde y,\widetilde y_+)\,,$$
that is, the only source of noncompactness is Floer breaking.
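Implicit in this description is the standard fact that, for a regular Floer datum, ${\mathcal{M}}(H,J;\widetilde a,\widetilde b)$ can be nonempty only if $|\widetilde a| - |\widetilde b| \geq 1$ (its dimension being $|\widetilde a| - |\widetilde b| - 1$); since $|\widetilde y_-| - |\widetilde y_+| = 2$ here, only intermediate critical points $\widetilde y$ with $|\widetilde y_-| - |\widetilde y| = |\widetilde y| - |\widetilde y_+| = 1$ contribute to the union, and both factors are then the $0$-dimensional spaces described above.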
Consider now case II. For $q \in L$, $A \in \pi_2(M,L,q)$, and an almost complex structure $J$, define
$$\widetilde {\mathcal{M}}_1(J;q,A) = \{u \in C^\infty(D^2,S^1,1;M,L,q)\,|\, \overline \partial_J u = 0, [u] = A\}\,,$$
where $\overline\partial_J = \overline \partial_{0,J}$ is the Floer operator of the perturbation datum $(0,J)$ on $D^2$, that is with vanishing Hamiltonian term. We let ${\mathcal{M}}_1(J;q,A)$ be the quotient of $\widetilde{\mathcal{M}}_1(J;q,A)$ by the conformal automorphism group of $D^2$ preserving $1 \in S^1$. We call $J$ \textbf{regular} if for every $q,A$, and $u \in \widetilde {\mathcal{M}}_1(J;q,A)$ the linearized operator $D_u$ is onto. Since $L$ is monotone and $N_L \geq 2$, it follows that all disks in $\widetilde{\mathcal{M}}_1(J;q,A)$ are simple \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}, therefore the set of regular $J$ is dense. If $J$ is regular, ${\mathcal{M}}_1(J;q,A)$ is a zero-dimensional manifold for generic $q$. The set of regular Floer data $(H,J)$ for which $J_0,J_1$ are regular is dense. If $(H,J)$ is such, we have in case II:
\begin{align*}
\partial\overline {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+) &= \bigcup_{\widetilde z \in\Crit{\mathcal{A}}_{H:L}} {\mathcal{M}}(H,J;\widetilde y_-, \widetilde z) \times {\mathcal{M}}(H,J;\widetilde z,\widetilde y_+)\\
&\cup {\mathcal{M}}_1(J_0; y(0), [\widehat y_+ \sharp - \widehat y_-]) \cup {\mathcal{M}}_1(J_1; y(1), [\widehat y_+ \sharp - \widehat y_-])\,,
\end{align*}
where $y = y_- = y_+$. This means that in this particular case another possibility for noncompactness opens up, that of bubbling off of Maslov $2$ disks attached to endpoints of the Hamiltonian orbit $y$.
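For orientation, we record the standard dimension count behind the appearance of these disk spaces (it is not needed in what follows): under the regularity assumptions above, $\widetilde{\mathcal{M}}_1(J;q,A)$ has dimension $\mu(A)$ for generic $q$, where $\mu$ denotes the Maslov index, the group of conformal automorphisms of $D^2$ preserving $1 \in S^1$ has dimension $2$, and hence ${\mathcal{M}}_1(J;q,A)$ has dimension $\mu(A) - 2$. In particular the classes $[\widehat y_+ \sharp - \widehat y_-]$ relevant here, which have Maslov index $2$ by the shift property of the Conley-Zehnder index, contribute $0$-dimensional spaces.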
Bubbling off of holomorphic spheres of Chern number $1$ is also a priori possible; however, the set of points through which they pass has high codimension and therefore generically they do not appear in the boundary of the moduli space \cite{Hofer_Salamon_HF_Nov_rings}. Also see \cite{Hu_Lalonde_Relative_Seidel_morphism_Albers_map}.
The treatment in the case $\Sigma = C$ is entirely analogous, with the difference that the Floer datum $(H,J)$ is $1$-periodic in $t$ and for a generic datum there is no bubbling, the noncompactness being only due to Floer breaking.
\paragraph{The case of a single surface}
Assume $\widehat\Sigma$ is the sphere or the closed disk and endow it with punctures, where exactly one puncture $\theta$ is positive, the other punctures $\{\theta_i\}_{i=1}^k$ being negative; let $\Sigma$ be the resulting punctured Riemann surface. Endow $\Sigma$ with a choice of ends, and fix regular Floer data $(H^i,J^i)$ and $(H,J)$ associated to the punctures $\theta_i$ and $\theta$. Fix also a regular perturbation datum $(K,I)$ compatible with the Floer data, and critical points $\widetilde y_i, \widetilde y$ of the action functionals of $H^i$, $H$, and consider
$${\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)\,.$$
This is a smooth manifold of dimension $|\widetilde y|' - \sum_i |\widetilde y_i|'$. If this dimension is $0$, ${\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ is compact and therefore a finite number of points.
When ${\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ is $1$-dimensional, the only possible noncompactness is due to Floer breaking, namely we have
{
\begin{multline*}
\partial \overline {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y) = \bigcup_{j=1}^{k} \bigcup_{\widetilde y_j' \in \Crit{\mathcal{A}}_{H^j}}{\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \times {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y) \\
\cup \bigcup_{\widetilde y' \in \Crit {\mathcal{A}}_{H}}{\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y') \times {\mathcal{M}}(H,J;\widetilde y',\widetilde y)\,.
\end{multline*}
}
\paragraph{The case of a $1$-dimensional family}\label{par:compactness_families}
Here we have a family of punctured surfaces ${\mathcal{S}} \to {\mathcal{R}}$ as in \S\ref{par:parametrized_solution_spaces}. We will only need the cases when ${\mathcal{R}} = [0,1]$ or ${\mathcal{R}}=[0,\infty)$.
Assume first that ${\mathcal{R}} = [0,1]$, that ${\mathcal{S}} = \Sigma \times [0,1]$, where $\Sigma$ is either a punctured sphere or a punctured disk, that the punctures are $\Theta=\{\theta,\theta_1,\dots,\theta_k\}$, where $\theta$ is the unique positive puncture, that we have fixed a choice of ends for ${\mathcal{S}}$ which are constant near the boundary of ${\mathcal{R}}$, and regular Floer data $(H,J)$ and $(H^i,J^i)$ associated to $\theta$ and $\theta_i$. Let $(K,I)$ be a regular compatible perturbation datum, which is constant near the boundary of ${\mathcal{R}}$. Fix critical points $\widetilde y,\widetilde y_i$ of the action functionals of $H,H^i$. The set
$${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$$
is a smooth manifold of dimension
$$\dim {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y) = 1 + |\widetilde y|' - \sum_i|\widetilde y_i|'\,.$$
When this dimension is zero, ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ is compact and therefore a finite number of points.
When it equals $1$, ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ can be compactified by adding Floer breaking, that is we have
\begin{multline*}
\partial\overline{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y) = \{0\}\times {\mathcal{M}}_{\Sigma_0}(K_0,I_0;\{\widetilde y_i\}_i,\widetilde y) \cup \{1\} \times {\mathcal{M}}_{\Sigma_1}(K_1,I_1;\{\widetilde y_i\}_i,\widetilde y) \\
\cup \bigcup_{j=1}^k \bigcup_{\widetilde y_j' \in \Crit{\mathcal{A}}_{H^j}} {\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \times {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y) \\
\cup \bigcup_{\widetilde y' \in \Crit{\mathcal{A}}_{H}} {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_i,\widetilde y') \times {\mathcal{M}}(H,J;\widetilde y',\widetilde y)\,.
\end{multline*}
When ${\mathcal{R}} = [0,\infty)$, we require that the family ${\mathcal{S}}$ and the choice of ends on it have a specific form. Namely, let $\Sigma^1,\Sigma^2$ be two punctured Riemann surfaces with puncture sets $\Theta_i = \{\theta^i,\theta^i_1,\dots,\theta^i_{k_i}\}$, $\theta^i$ being positive and the rest being negative. Fix a choice of ends for the $\Sigma^i$. Let $R_0 > 0$ and for $r \geq R_0$ let $\Sigma_r$ be obtained by gluing $\Sigma^1,\Sigma^2$, where the tree has two vertices corresponding to the surfaces $\Sigma^1,\Sigma^2$ and the unique edge between them is labeled by $(\theta^1,\theta^2_j)$ and the gluing length is $r$. Note that $\Sigma_r$ has $\theta^2,\{\theta^1_i\}_i,\{\theta^2_i\}_{i \neq j}$ as punctures. We require the family ${\mathcal{S}} \to {\mathcal{R}} = [0,\infty)$ to have fiber $\Sigma_r$ for $r \geq R_0$, with the conformal structures and choices of ends coming from the gluing just described.
Similarly, the choice of perturbation datum on ${\mathcal{S}}$ comes from gluing. More precisely, assume we have chosen regular Floer data associated to the punctures of the $\Sigma^i$: $(H^i,J^i)$ for $\theta^i$ and $(H^{i,l},J^{i,l})_l$ for $\theta^i_l$, $i=1,2$, such that $H^1 = H^{2,j}$, $J^1 = J^{2,j}$. Let $(K^i,I^i)$ be regular compatible perturbation data on $\Sigma^i$. There is an obvious perturbation datum $(K_r,I_r)$ on the glued surface $\Sigma_r$, since the perturbation data on the $\Sigma^i$ agree on the overlap, and we require the perturbation datum on ${\mathcal{S}}$ to equal $(K_r,I_r)$ on the fibers $\Sigma_r$ over $r \geq R_0$. Note that the Floer data associated to punctures of $\Sigma_r$ (for all $r$) are $(H^2,J^2)$ for $\theta^2$ and $\{(H^{1,i},J^{1,i})\}_i$, $\{(H^{2,i},J^{2,i})\}_{i\neq j}$ for $\{\theta^1_i\}_i,\{\theta^2_i\}_{i \neq j}$.
Let therefore ${\mathcal{S}} \to {\mathcal{R}} = [0,\infty)$ be such a family. Endow ${\mathcal{S}}$ with a choice of ends as above, with the additional condition that they are locally constant near $0 \in {\mathcal{R}}$. We already have a choice of Floer data for the punctures of ${\mathcal{S}}$, which we now assume to be regular, and we let $(K,I)$ be a regular compatible perturbation datum, locally constant near $0\in{\mathcal{R}}$, and which has the aforementioned form for $r \geq R_0$. The set of such $(K,I)$ is dense. Fix critical points $\widetilde y_2, \{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i \neq j}$ of the action functionals of $H^2,\{H^{1,i}\}_i,\{H^{2,i}\}_{i\neq j}$. Then the set
$${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2)$$
is a smooth manifold of dimension
$$\dim {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2) = 1 + |\widetilde y_2|' - \sum_i|\widetilde y_{1,i}|' - \sum_{i\neq j}|\widetilde y_{2,i}|'\,.$$
When this dimension is zero, ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2)$ is compact and therefore a finite number of points. When it equals $1$, ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2)$ can be compactified by adding Floer breaking and breaking at the noncompact end of ${\mathcal{R}}$, that is we have
\begin{multline*}
\partial\overline {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2) =
\{0\} \times {\mathcal{M}}_{\Sigma_0}(K_0,I_0;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2) \\
\cup \bigcup_{\widetilde y_{2,j} \in \Crit{\mathcal{A}}_{H^1}} {\mathcal{M}}_{\Sigma^1}(K^1,I^1;\{\widetilde y_{1,i}\}_i,\widetilde y_{2,j}) \times {\mathcal{M}}_{\Sigma^2}(K^2,I^2;\{\widetilde y_{2,i}\}_i,\widetilde y_2) \\
\cup \bigcup_{\widetilde y_2' \in \Crit {\mathcal{A}}_{H^2}} {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2') \times {\mathcal{M}}(H^2,J^2;\widetilde y_2',\widetilde y_2)\\
\cup \bigcup_l \bigcup_{\widetilde y_{1,l}' \in \Crit {\mathcal{A}}_{H^{1,l}}} {\mathcal{M}}(H^{1,l},J^{1,l};\widetilde y_{1,l},\widetilde y_{1,l}') \times {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_{1,i}\}_{i\neq l},\widetilde y_{1,l}',\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2)\\
\cup \bigcup_{l\neq j} \bigcup_{\widetilde y_{2,l}' \in \Crit {\mathcal{A}}_{H^{2,l}}} {\mathcal{M}}(H^{2,l},J^{2,l};\widetilde y_{2,l},\widetilde y_{2,l}') \times {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j,l}, \widetilde y_{2,l}', \widetilde y_2)\,.
\end{multline*}
\subsection{Orientations}\label{ss:orientatiosn}
\subsubsection{Canonical ${\mathbb{Z}}$-modules associated to critical points of action functionals}\label{sss:can_Z_modules_crit_pts_action_fcnls}
Recall the definition of a capping for a smooth curve $y {:\ } ([0,1],\{0,1\}) \to (M,L)$, \S\ref{sss:cappings_action_fcnls}. Fix an equivalence class of cappings $\widetilde y$ of $y$ and regard it as a topological space. Since a capping is a b-smooth map $\dot D^2 \to M$, it has a canonical extension to a continuous map defined on the compactification of $\dot D^2$ obtained by gluing a copy of $[0,1]$ along the standard end, \S\ref{ss:b_smooth_maps_pregluing}. This compactification is diffeomorphic to $D^2 \cap \{\RE z \leq 0\}$, and we view the canonical extension of a capping $\widehat y$ as a map $D^2 \cap \{\RE z \leq 0\} \to M$.
Let $C_{\widetilde y}$ be the space of continuous maps $D^2 \cap \{\RE z \leq 0\} \to M$ mapping the semicircle to $L$, the diameter to $y$, and belonging to the homotopy class dictated by $\widetilde y$. It is easy to see that the map taking a capping to the corresponding continuous extension is a homotopy equivalence between $\widetilde y$ and $C_{\widetilde y}$. We have the following obvious lemma.
\begin{lemma}\label{lem:fund_gp_space_of_cappings_isomorphic_to_rel_pi_3}
Fix $\widehat y_0 \in C_{\widetilde y}$ and let $-\widehat y_0 {:\ } D^2 \cap \{\RE z \geq 0\} \to M$ be defined by $-\widehat y_0(s,t) = \widehat y_0(-s,t)$. Then the concatenation map
$$C_{\widetilde y} \to \{w {:\ } (D^2,S^1,-i) \to (M,L,y(0)) \,|\, [w] = 0 \in \pi_2(M,L,y(0))\}\,, \quad \widehat y \mapsto \widehat y \sharp - \widehat y_0$$
is well-defined and is a homotopy equivalence. In particular the fundamental group $\pi_1(C_{\widetilde y}, \widehat y_0)$ is isomorphic to $\pi_3(M,L,y(0))$, and since it is abelian, the isomorphism is independent of $\widehat y_0$. \qed
\end{lemma}
\noindent This means that we have canonically identified the fundamental group of the space of cappings $\widetilde y$ with $\pi_3(M,L,y(0))$.
Let now $H {:\ } [0,1] \times M \to {\mathbb{R}}$ be a nondegenerate Hamiltonian, that is, all of its orbits with boundary on $L$ are nondegenerate, and fix an orbit $y$ of $H$. Pick an equivalence class of cappings $\widetilde y = [y,\widehat y]$, that is, a critical point of ${\mathcal{A}}_{H:L}$. For any $\widehat y$ we have an associated linearized operator $D_{\widehat y}$, see \S\ref{ss:CROs_from_b_smooth_maps}. Its construction depends on various choices, such as an almost complex structure and a connection on the pullback bundle $\widehat y^*TM$. Let $D_{\widetilde y}$ denote the collection of all the operators obtained in this way for all the cappings $\widehat y$ in class $\widetilde y$ and all the auxiliary choices. Since the spaces of almost complex structures and connections are contractible, we see that the parameter space of $D_{\widetilde y}$ is homotopy equivalent to the space $\widetilde y$ of cappings in class $\widetilde y$. Therefore its fundamental group is canonically isomorphic to $\pi_3(M,L,y(0))$ by Lemma \ref{lem:fund_gp_space_of_cappings_isomorphic_to_rel_pi_3}. The determinant lines of the operators in the family $D_{\widetilde y}$ glue into the line bundle $\ddd (D_{\widetilde y})$. We have
\begin{lemma}\label{lem:first_Stiefel_Whitney_dD_wt_y_HF}
Relative to the canonical isomorphism of $\pi_3(M,L,y(0))$ with the fundamental group of the space of parameters over which the line bundle $\ddd(D_{\widetilde y})$ is defined, its first Stiefel--Whitney class equals
$$w_1(\ddd(D_{\widetilde y}))=w_2(TL) \circ \partial {:\ } \pi_3(M,L) \to {\mathbb{Z}}_2\,.$$
\end{lemma}
\begin{prf}
Let $(\widehat y_\tau)_{\tau \in S^1}$ be a loop of cappings and lift it to a loop of operators $D_\tau = D_{\widehat y_\tau}$. Recall that for a fixed capping $\widehat y_0 \in \widetilde y$ we defined the reverse capping $- \widehat y_0$ via $-\widehat y_0(s,t) = \widehat y_0(-s,t)$. To it there corresponds a Cauchy--Riemann operator $D_{-\widehat y_0}$ on $D^2 - \{-1\}$. The latter surface has a negative end, and therefore we can form the glued operator $D_\tau \sharp D_{-\widehat y_0}$ over $D^2$ for some gluing length. Combining the direct sum and the gluing isomorphisms, we obtain an isomorphism
$$\ddd(D_\tau) \otimes \ddd(D_{-\widehat y_0}) \simeq \ddd(D_\tau \sharp D_{-\widehat y_0})\,,$$
which is continuous in $\tau$, and which implies that the loop $(D_\tau)_\tau$ is orientable if and only if the loop $(D_\tau \sharp D_{-\widehat y_0})_\tau$ is. Let us therefore compute the first Stiefel--Whitney class of $\ddd(D_\tau \sharp D_{-\widehat y_0})$ over $S^1$. The $w_1$ of a loop of Cauchy--Riemann operators on a compact surface has been computed, see for instance \cite{Seidel_The_Book, Georgieva_Orientability_problem_open_GW}. Since in our case the boundary condition of the loop of operators $(D_\tau \sharp D_{-\widehat y_0})_\tau$ is stationary on the right half of the disk, we obtain
$$w_1\big((\ddd(D_\tau \sharp D_{-\widehat y_0}))_\tau\big) = \langle w_2(TL), [U] \rangle\,,$$
where $U$ is the image in $L$ of the evaluation map
$$\partial D^2 \times S^1 \to L\,,\quad (\sigma,\tau) \mapsto (\widehat y_\tau \sharp -\widehat y_0)(\sigma)\,.$$
Unraveling the definitions, we see that the number $\langle w_2(TL), [U] \rangle$ equals $w_2(TL)\circ \partial$ evaluated on the loop in the space of contractible disks at $y(0)$ given by $(\widehat y_\tau \sharp -\widehat y_0)_{\tau \in S^1}$. \qed
\end{prf}
Due to assumption \textbf{(O)}, we obtain that the determinant line bundle $\ddd(D_{\widetilde y})$ is orientable. We let
$$C(\widetilde y)$$
be the free ${\mathbb{Z}}$-module of rank $1$ whose two generators are the two possible orientations of this determinant line bundle. Note that since $\widetilde y$ is connected, this definition makes sense.
Similarly, recall the definition of a capping for a smooth loop $y {:\ } S^1 \to M$. Pick a nondegenerate Hamiltonian $H {:\ } S^1 \times M \to {\mathbb{R}}$, that is, one all of whose periodic Hamiltonian orbits are nondegenerate. For a capping $\widehat y$ of a periodic orbit $y$ of $H$ we have defined a linearized operator $D_{\widehat y}$, see \S\ref{ss:CROs_from_b_smooth_maps}. Similarly to the above, we have the family $D_{\widetilde y}$ of all the linearized operators associated to cappings in a given class $\widetilde y$. We have
\begin{lemma}The determinant line $\ddd(D_{\widetilde y})$ is orientable.
\end{lemma}
\begin{prf}
It suffices to prove that the determinant bundle of a loop $D_\tau = D_{\widehat y_\tau}$ of operators over a loop of cappings is orientable. Fix a capping $\widehat y_0 \in \widetilde y$. We can form the glued operator $D_\tau \sharp D_{-\widehat y_0}$ over $S^2$. The direct sum and the gluing isomorphisms combine to the isomorphism
$$\ddd(D_\tau) \otimes \ddd(D_{-\widehat y_0}) \simeq \ddd(D_\tau \sharp D_{-\widehat y_0})\,,$$
which is continuous in $\tau$, and which implies that we only have to prove the orientability of the determinant bundle $(\ddd(D_\tau \sharp D_{-\widehat y_0}))_\tau$. However this latter bundle is the determinant bundle of a loop of real Cauchy--Riemann operators on $S^2$, which is a closed Riemann surface. It is well-known that the set of real Cauchy--Riemann operators on a Hermitian vector bundle over a closed Riemann surface deformation retracts onto the subspace of complex-linear Cauchy--Riemann operators. The determinant lines of the complex-linear operators are canonically oriented since their kernels and cokernels are complex vector spaces. It follows that the determinant bundle of the whole space of real Cauchy--Riemann operators over a closed Riemann surface is canonically oriented. This shows that our bundle over $S^1$ is orientable. \qed
\end{prf}
We therefore let
$$C(\widetilde y)$$
be the free ${\mathbb{Z}}$-module of rank $1$ whose two generators are the two possible orientations of this determinant line bundle. Again, since $\widetilde y$ is connected, this definition makes sense.
\subsubsection{Orientations and isomorphisms}\label{sss:orientations_isomorphisms}
Let $\Sigma$ be a punctured sphere or closed disk with punctures $\theta,\{\theta_i\}_i$ with $\theta$ being the only positive puncture. Assume we have chosen nondegenerate Floer data $(H,J)$, $(H^i,J^i)$ associated to $\theta,\theta_i$. Choose critical points $\widetilde y_i \in \Crit {\mathcal{A}}_{H^i}$ and let $y$ be a Hamiltonian orbit of $H$. Let
$$w \in C^\infty_b(\Sigma,\partial\Sigma; M,L;\{y_i\}_i,y)\,.$$
Fix representative cappings $\widehat y_i \in \widetilde y_i$. We can preglue the maps $\widehat y_i$ and $w$, according to the obvious gluing tree, to form a new map, which is b-smooth and has $y$ as the unique asymptotic orbit. It therefore can be viewed as a capping for $y$, and so we denote it $\widehat y$, and let $\widetilde y$ be its equivalence class. Let $D_{\widehat y_i} \in D_{\widetilde y_i}$ be linearized operators for the cappings. We can glue these operators with a linearized operator $D_w$ corresponding to $w$ so that the result can be deformed into a linearized operator $D_{\widehat y}$ for the capping $\widehat y$. We therefore have an isomorphism
$$\ddd(D_{\widehat y}) \simeq \ddd\Big(D_w \oplus \bigoplus_i D_{\widehat y_i}\Big)$$
which is the composition of the deformation and gluing isomorphisms.
The direct sum isomorphism (\S\ref{par:direct_sum_isos}) yields
$$\ddd\Big(D_w \oplus \bigoplus_i D_{\widehat y_i}\Big) \simeq \ddd(D_w) \otimes \bigotimes_i\ddd(D_{\widehat y_i})\,,$$
and it depends on the ordering of the punctures of $\Sigma$. Composing the two isomorphisms, we obtain the isomorphism
$$\ddd(D_{\widehat y}) \simeq \ddd(D_w) \otimes \bigotimes_i\ddd(D_{\widehat y_i})\,.$$
Passing to the families, we finally get
$$\ddd(D_{\widetilde y}) \simeq \ddd(D_w) \otimes \bigotimes_i\ddd(D_{\widetilde y_i})\,.$$
This means that there is a canonical bijection between orientations of $D_w$ and isomorphisms
$$\ddd(D_{\widetilde y}) \simeq \bigotimes_i\ddd(D_{\widetilde y_i})\,,$$
or equivalently isomorphisms
$$C(\widetilde y) \simeq \bigotimes_i C(\widetilde y_i)\,.$$
We emphasize that this bijection depends on the chosen ordering of the orbits $y_i$. Note also that this bijection depends continuously on $w$.
We now show how such isomorphisms correspond to orientations of solution spaces. There are two cases: the case of a single surface and the case of a family.
\paragraph{A single surface} Assume the Floer data associated to the punctures of $\Sigma$ are regular and choose a compatible regular perturbation datum $(K,I)$ on $\Sigma$, so that for every $u\in{\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ the linearized operator $D_u$ is onto and we have canonically
$$\ker D_u = T_u{\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)\,.$$
As we have just seen, isomorphisms $\bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)$ are in bijection with orientations of $D_u$ for any $u \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$, therefore we obtain: for every connected component of ${\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$, there is a bijection between such isomorphisms and orientations of that connected component.
\paragraph{A family} Let ${\mathcal{S}} \to {\mathcal{R}}$ be a family of punctured Riemann surfaces with a single positive puncture $\theta$ and negative punctures $\{\theta_i\}_i$, and assume we have chosen a set of ends for it, a set of regular Floer data $(H,J)$, $(H^i,J^i)$ associated to $\theta,\theta_i$, and a compatible regular perturbation datum $(K,I)$. Fix $\widetilde y_i \in \Crit {\mathcal{A}}_{H^i}$, $\widetilde y\in\Crit {\mathcal{A}}_H$. For every $(r,u) \in {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ we have canonically
$$\ker D_{r,u} = T_{(r,u)}{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)\,.$$
The exact Fredholm triple
\begin{equation}\label{eqn:exact_triple_extended_linearized_op}
0 \to D_u \to D_{r,u} \to 0_{T_r{\mathcal{R}}} \to 0
\end{equation}
leads to the canonical isomorphism
\begin{equation}\label{eqn:iso_exact_triple_extended_linearized_op}
\ddd(D_{r,u}) \simeq \ddd(D_u) \otimes \ddd(T_r{\mathcal{R}})\,.\end{equation}
As we have just seen, there is a bijection between isomorphisms $\bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)$ and orientations of $D_u$. Therefore there is a bijection between such isomorphisms and orientations of ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ \emph{relative to} ${\mathcal{R}}$. In our applications below ${\mathcal{R}}$ is always an interval in ${\mathbb{R}}$ and so it carries the positive orientation inherited from ${\mathbb{R}}$, which therefore implies that we have a canonical bijection between isomorphisms $\bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)$ and orientations of components of ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$.
\subsubsection{Canonical orientations}\label{sss:canonical_ors}
Having defined the relevant moduli spaces of solutions of the Floer PDE, we now pass to the canonical orientations of the corresponding linearized operators. We distinguish the cases of a translation-invariant perturbation datum, a single surface, and a family.
\paragraph{Translation-invariant perturbation datum} Let $(H,J)$ be a regular Floer datum, which is time-periodic in case we consider periodic orbits, and fix $\widetilde y_\pm \in \Crit {\mathcal{A}}_H$ of index difference $1$. For any $u \in \widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ the linearized operator $D_u$ is onto and has index $1$, therefore its kernel is $1$-dimensional, and it is spanned by the infinitesimal translation $\partial_s$. We call the orientation
$$\partial_u := \partial_s \otimes 1^\vee$$
of $D_u$ \textbf{canonical}.
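Here and in what follows the notation refers to the convention, assumed to be the one fixed when determinant lines were introduced earlier in the paper, that $\ddd(D) = \Lambda^{\max}\ker D \otimes (\Lambda^{\max}\operatorname{coker} D)^\vee$. Since $D_u$ is onto, $\Lambda^{\max}\operatorname{coker} D_u$ is canonically ${\mathbb{R}}$ with generator $1$, so $\partial_s \otimes 1^\vee$ is indeed a generator of $\ddd(D_u)$; similarly, when $D_u$ is an isomorphism, as in the next paragraph, $1 \otimes 1^\vee$ denotes the resulting canonical positive generator.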
\paragraph{A single surface} Let $\Sigma$ be a Riemann surface with $\theta$ being the only positive puncture and $\{\theta_i\}_i$ being the negative punctures, and assume we have chosen a set of ends for it, regular Floer data $(H,J)$, $(H^i,J^i)$ associated to $\theta,\theta_i$ and a compatible regular perturbation datum $(K,I)$; fix critical points $\widetilde y_i \in \Crit {\mathcal{A}}_{H^i},\widetilde y \in \Crit {\mathcal{A}}_H$. Assume the moduli space ${\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ is zero-dimensional and let $u$ be an element therein. The linearized operator $D_u$ is surjective and has index zero. Therefore it is an isomorphism and we let
$${\mathfrak{o}}_u = 1 \otimes 1^\vee \in \ddd(D_u)$$
be the positive orientation. We call this orientation ${\mathfrak{o}}_u$ of $D_u$ \textbf{canonical}.
\paragraph{A family} \label{par:canonical_orientations_families} Assume we have a family of Riemann surfaces ${\mathcal{S}} \to {\mathcal{R}}$ where ${\mathcal{R}} \subset {\mathbb{R}}$ is an interval. Let $\theta,\theta_i$ be the punctures of ${\mathcal{S}}$ with $\theta$ being the only positive puncture, and assume we have chosen a set of ends for ${\mathcal{S}}$, regular Floer data $(H,J),(H^i,J^i)$ associated to $\theta,\theta_i$, a regular compatible perturbation datum $(K,I)$, and critical points $\widetilde y_i \in \Crit {\mathcal{A}}_{H^i},\widetilde y \in \Crit {\mathcal{A}}_H$, such that ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ is zero-dimensional. Assume $(r,u) \in {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$. The operator $D_{r,u}$ is onto and has index zero, therefore it is an isomorphism. Recall the isomorphism \eqref{eqn:iso_exact_triple_extended_linearized_op}
$$\ddd(D_{r,u}) \simeq \ddd(D_u) \otimes \ddd(T_r{\mathcal{R}})\,.$$
Let $\partial_r$ be the positive orientation of ${\mathcal{R}}$. We let ${\mathfrak{o}}_u \in \ddd(D_u)$ be the orientation such that this isomorphism maps
$$1 \otimes 1^\vee \mapsto {\mathfrak{o}}_u \otimes \partial_r\,.$$
We call this orientation ${\mathfrak{o}}_u$ \textbf{canonical}. Note that $D_u$ has index $-1$.
\subsubsection{Induced orientations}\label{sss:induced_ors}
Whenever we have a $1$-dimensional moduli space ${\mathcal{M}}$, it can be compactified to a $1$-dimensional compact manifold $\overline{\mathcal{M}}$ with boundary consisting of elements of $0$-dimensional moduli spaces. Here we show how the canonical orientations of the Fredholm operators corresponding to these $0$-dimensional spaces induce orientations on ${\mathcal{M}}$. These computations will be used in \S\ref{sss:identities} to prove that the various operations in Floer homology satisfy suitable identities.
The notations and the treatment here parallel those of \S\ref{sss:compactness_gluing}, where the types of boundary points arising in compactification are described. Note that all Floer and perturbation data in sight are assumed to be regular and sufficiently generic so that compactness results of \S\ref{sss:compactness_gluing} apply.
\paragraph{The case of translation-invariant perturbation datum}\label{par:induced_orientation_translation_invt_pert_datum} Consider the $1$-dimensional moduli space ${\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ and a boundary point
$$\delta=([u],[v]) \in {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y) \times {\mathcal{M}}(H,J;\widetilde y, \widetilde y_+)\,.$$
Let $\Delta$ be the connected component of $\overline{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ such that $\delta \in \Delta$. Let $w \in \widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ be obtained by gluing $u,v$ for some large gluing length. The canonical orientations of $D_u,D_v$ correspond, by \S\ref{sss:orientations_isomorphisms}, to isomorphisms
$$C(u){:\ } C(\widetilde y_-) \simeq C(\widetilde y) \quad \text{and} \quad C(v) {:\ } C(\widetilde y) \simeq C(\widetilde y_+)\,.$$ The isomorphism
$$C(v) \circ C(u) {:\ } C(\widetilde y_-) \simeq C(\widetilde y_+)$$
corresponds to an orientation of $D_w$, which we will now compute. First, consider the isomorphism
\begin{equation}\label{eqn:iso_dDu_otimes_dDv_dDw_induced_or_translation_invt_pert_datum}
\ddd(D_v) \otimes \ddd(D_u) \simeq \ddd(D_w)
\end{equation}
which is the composition of the direct sum, gluing, and deformation isomorphisms. Let ${\mathfrak{o}}_w \in \ddd(D_w)$ be the image of $\partial_v \otimes \partial_u$ under this isomorphism. We claim that ${\mathfrak{o}}_w$ corresponds to the isomorphism $C(v) \circ C(u)$. Indeed, we have the following commutative diagram:
\begin{equation}\label{dia:computation_induced_ori_bdry_op_squared_HF}
\xymatrix{\ddd(D_{\widetilde y_-}) \ar[r] \ar[dr] & \ddd(D_v) \otimes \ddd(D_u) \otimes \ddd(D_{\widetilde y_-}) \ar[r] \ar[d] & \ddd(D_v) \otimes \ddd(D_{\widetilde y}) \ar[d]\\ & \ddd(D_w) \otimes \ddd(D_{\widetilde y_-}) \ar[r] & \ddd(D_{\widetilde y_+}) }
\end{equation}
where the leftmost horizontal arrow maps ${\mathfrak{o}}_- \mapsto \partial_v \otimes \partial_u \otimes {\mathfrak{o}}_-$ while the slanted arrow maps ${\mathfrak{o}}_- \mapsto {\mathfrak{o}}_w \otimes {\mathfrak{o}}_-$. The triangle then commutes by the definition of ${\mathfrak{o}}_w$. The square consists of combinations of direct sum, gluing, and deformation isomorphisms, and therefore commutes. Fix ${\mathfrak{o}}_- \in \ddd(D_{\widetilde y_-})$ and let ${\mathfrak{o}} = C(u)({\mathfrak{o}}_-)$ and ${\mathfrak{o}}_+ = C(v)({\mathfrak{o}})$. Then by the definition of $C(u), C(v)$ we know that this diagram maps
$$\xymatrix{{\mathfrak{o}}_- \ar@{|->}[r] \ar@{|->}[dr] & \partial_v \otimes \partial_u \otimes {\mathfrak{o}}_- \ar@{|->}[d] \ar@{|->}[r] & \partial_v \otimes {\mathfrak{o}} \ar@{|->}[d] \\ & {\mathfrak{o}}_w \otimes {\mathfrak{o}}_- \ar@{|->} [r] & {\mathfrak{o}}_+}$$
We see that the composition of the slanted arrow and the bottom arrow maps ${\mathfrak{o}}_- \mapsto {\mathfrak{o}}_+$. On the other hand, by the definition of the correspondence between isomorphisms $C(\widetilde y_-) \simeq C(\widetilde y_+)$ and orientations of $\ddd(D_w)$ we have that the isomorphism $C(v) \circ C(u)$, which maps ${\mathfrak{o}}_- \mapsto {\mathfrak{o}}_+$, corresponds to the orientation ${\mathfrak{o}}_w$. It remains to explicitly compute ${\mathfrak{o}}_w$.
Recall that the gluing map is a local diffeomorphism (see for instance \cite{Schwarz_PhD_thesis})
$$\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y) \times \widetilde{\mathcal{M}}(H,J;\widetilde y,\widetilde y_+) \to \widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$$
defined near $(u,v)$ and mapping a neighborhood of this point diffeomorphically to a neighborhood of $w$. The main property of this gluing map is that its differential, which is an isomorphism
$$dg {:\ } T_u\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y) \oplus T_v\widetilde{\mathcal{M}}(H,J;\widetilde y,\widetilde y_+) \simeq T_w \widetilde {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$$
is such that if we let $\ddd(dg) {:\ } \ddd(D_u \oplus D_v) \to \ddd(D_w)$ be the induced map on determinant lines, then the composition with the direct sum isomorphism
$$\ddd(D_v) \otimes \ddd(D_u) \xrightarrow{\oplus}\ddd(D_u \oplus D_v) \xrightarrow{\ddd(dg)} \ddd(D_w)$$
yields the isomorphism \eqref{eqn:iso_dDu_otimes_dDv_dDw_induced_or_translation_invt_pert_datum}, which also enters in the left vertical arrow in the diagram \eqref{dia:computation_induced_ori_bdry_op_squared_HF}. Here we have of course identified
$$\ddd(D_u) = \ddd(\ker D_u) = \ddd(T_u\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y))$$
and similarly for $v,w$. Therefore we know that ${\mathfrak{o}}_w$ is the image of $\partial_v \otimes \partial_u$ by the composition
$$\ddd(D_v) \otimes \ddd(D_u) \xrightarrow{\oplus} \ddd(D_v \oplus D_u) \xrightarrow{\ddd(dg)} \ddd(D_w)\,.$$
The first map maps $\partial_v \otimes \partial_u \mapsto \partial_v \wedge \partial_u$. Next, from the structure of the gluing map $g$ it follows that
$$dg(\partial_v + \partial_u) = \partial_w$$
while \label{footnote:differential_gluing_map} \footnote{This can be seen intuitively as follows. Fix a large gluing length $R$. Then the glued trajectory $w$ satisfies that $w(0)$ is close to $u(0)$ while $w(2R+1)$ is close to $v(0)$. If we now let $u',v'$ be trajectories which satisfy, say, $u'(0) = u(-1)$, $v'(0) = v(1)$, so that the passage from $(u,v)$ to $(u',v')$ is in the direction of the vector $-\partial_u+\partial_v$, and we let $w'$ be the trajectory glued from $u',v'$ for the same gluing length, then $w'(0)$ is close to $u'(0)=u(-1)$, and $w'(2R+1)$ is close to $v'(0) = v(1)$, that is, the points $w'(0),w'(2R+1)$ are further apart than the points $w(0),w(2R+1)$, which means that $w'$ is faster than $w$, because it traverses a longer distance in the same amount of time. Therefore it spends less time near $\widetilde y$, meaning it is further away from $\delta$. Therefore $dg$ maps $-\partial_u + \partial_v$ to $\text{inward}_\delta$.}
$$dg(\partial_v - \partial_u) = \text{inward}_\delta\,,$$
where $\inward_\delta \in T_w\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ is a vector pointing away from $\delta$. Therefore
$${\mathfrak{o}}_w = \ddd(dg)(\partial_v \wedge \partial_u) = \ddd(dg)((\partial_v + \partial_u) \wedge (-\partial_v + \partial_u)) = \partial_w\wedge (-\text{inward}_\delta)\,.$$
This means that the isomorphism $C(v) \circ C(u)$ induces the orientation $-\partial_w\wedge \text{inward}_\delta$ on the connected component of $\widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ containing $w$.
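For the reader's convenience, here is the elementary exterior-algebra computation behind the second equality in the last display; since only orientations are at stake, the positive factor $2$ is immaterial:
$$(\partial_v + \partial_u) \wedge (-\partial_v + \partial_u) = \partial_v \wedge \partial_u - \partial_u \wedge \partial_v = 2\,\partial_v \wedge \partial_u\,.$$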
\paragraph{The case of a single surface}\label{par:induced_orientation_single_surf} Consider a $1$-dimensional moduli space ${\mathcal{M}}_\Sigma (K,I;\{\widetilde y_i\}_i,\widetilde y)$. The boundary points correspond to Floer breaking, either at an incoming or at the outgoing end. Consider breaking at the outgoing end:
$$\delta = (u,[v]) \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y') \times {\mathcal{M}}(H,J;\widetilde y',\widetilde y)\,.$$
Let $\Delta \subset \overline {\mathcal{M}}_\Sigma (K,I;\{\widetilde y_i\}_i,\widetilde y)$ be the connected component with $\delta \in \Delta$. Let $w \in \Delta$ be obtained by gluing $u,v$ for some large gluing length. The canonical orientations of $D_u,D_v$ correspond to isomorphisms
$$C(u) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y')\quad\text{and}\quad C(v) {:\ } C(\widetilde y') \simeq C(\widetilde y)\,,$$
and the composition
$$C(v) \circ C(u) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)$$
corresponds to an orientation of $D_w$, and therefore of $\Delta$. Let us compute this orientation. The differential of the gluing map is an isomorphism
$$dg {:\ } T_u{\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y') \times T_v \widetilde{\mathcal{M}}(H,J;\widetilde y',\widetilde y) \simeq T_w{\mathcal{M}}_\Sigma (K,I;\{\widetilde y_i\}_i,\widetilde y)$$
mapping the spanning vector \footnote{This can be seen using the argument of the footnote on page \pageref{footnote:differential_gluing_map}.} $\partial_v$ to $\text{inward}_\delta$. Thus the induced isomorphism on determinant lines
$$\ddd(D_v) \otimes \ddd(D_u) \xrightarrow{\oplus} \ddd(D_v \oplus D_u) \xrightarrow{\ddd(dg)} \ddd(D_w)$$
maps $\partial_v \otimes {\mathfrak{o}}_u \mapsto \text{inward}_\delta$.
The operator $D_v \oplus D_u \oplus \bigoplus_i D_{\widehat y_i}$ glues into: $D_v \oplus D_{\widehat y'}$, $D_w \oplus \bigoplus_i D_{\widehat y_i}$, and $D_{\widehat y}$. Using a combination of direct sum, linear gluing, and deformation isomorphisms, we obtain the commutative diagram
$$\xymatrix{\ddd(D_v) \otimes \ddd(D_u) \otimes \bigotimes_i\ddd(D_{\widetilde y_i}) \ar[dr] \ar[dd]^{dg \otimes \id}& & \\
& \ddd(D_v) \otimes \ddd(D_{\widetilde y'}) \ar[r] \ar[dl] & \ddd(D_{\widetilde y}) \\
\ddd(D_w) \otimes \bigotimes_i\ddd(D_{\widetilde y_i}) \ar[rru]}$$
Pick ${\mathfrak{o}}_i \in C(\widetilde y_i)$ and denote ${\mathfrak{o}}' = C(u)\big(\bigotimes_i {\mathfrak{o}}_i\big) \in C(\widetilde y')$ and ${\mathfrak{o}} = C(v)({\mathfrak{o}}') \in C(\widetilde y)$. This diagram maps
$$\xymatrix{ \partial_v \otimes {\mathfrak{o}}_u \otimes \bigotimes_i {\mathfrak{o}}_i \ar@{|->}[dr] \ar@{|->}[dd]^{dg \otimes \id}& & \\
& \partial_v \otimes {\mathfrak{o}}' \ar@{|->}[r] \ar@{|->}[dl] & {\mathfrak{o}} \\
\text{inward}_\delta \otimes \bigotimes_i {\mathfrak{o}}_i \ar@{|->}[urr]}$$
This means that the orientation of $D_w$ corresponding to the isomorphism $C(v)\circ C(u)$, which maps $\bigotimes_i{\mathfrak{o}}_i \mapsto {\mathfrak{o}}$, is given by $\text{inward}_\delta$. This is also the induced orientation on $\Delta$.
When the breaking happens at the $j$-th incoming end, we get a boundary point
$$\delta = ([u],v) \in {\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \times {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y)\,.$$
Let again $\Delta$ be the connected component of $\overline {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ with $\delta \in \Delta$. Let $w \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ be obtained by gluing $u,v$ for some large gluing length. The canonical orientations of the operators $D_u,D_v$ correspond to isomorphisms
$$C(u) {:\ } C(\widetilde y_j) \simeq C(\widetilde y_j') \quad \text{and}\quad C(v) {:\ } \bigotimes_{i<j}C(\widetilde y_i) \otimes C(\widetilde y_j') \otimes \bigotimes_{i>j} C(\widetilde y_i) \simeq C(\widetilde y)\,.$$
The isomorphism
$$C(v) \circ (\id \otimes \dots \otimes C(u) \otimes \dots \otimes \id) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)$$
corresponds to an orientation of $D_w$, and therefore of $\Delta$. Let us compute it. The differential of the gluing map is an isomorphism
$$dg {:\ } T_u \widetilde {\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \times T_v {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y) \to T_w {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$$
mapping the spanning vector $\partial_u$ to $-\text{inward}_\delta$.
The operator $D_v \oplus D_u \oplus \bigoplus_i D_{\widehat y_i}$ glues into $D_v \oplus \bigoplus_{i\neq j}D_{\widehat y_i} \oplus D_{\widehat y_j'}$, $D_w \oplus \bigoplus_i D_{\widehat y_i}$, and $D_{\widehat y}$. Noting that direct sum isomorphisms obey the Koszul rule with respect to the grading of the determinant lines, we have the following commutative diagram
$$\xymatrix{ & \ddd(D_v) \otimes \bigotimes_{i < j}\ddd(D_{\widetilde y_i}) \otimes \ddd (D_u) \otimes \ddd(D_{\widetilde y_j}) \otimes \bigotimes_{i>j}\ddd(D_{\widetilde y_i}) \ar [d]^R \ar[dl]\\
\ddd(D_{\widetilde y}) & \ddd(D_v) \otimes \ddd(D_u) \otimes \bigotimes_i \ddd(D_{\widetilde y_i}) \ar[d]^{\ddd(dg) \otimes \id} \ar[l]\\
& \ddd(D_w) \otimes \bigotimes_i \ddd(D_{\widetilde y_i}) \ar[lu]}$$
where $R$ differs from the mere exchange of factors by the Koszul sign $(-1)^{\sum_{i < j}|\widetilde y_i|'}$. Fix generators ${\mathfrak{o}}_i \in C(\widetilde y_i)$ and let ${\mathfrak{o}} = C(v)\circ (\id \otimes \dots \otimes C(u) \otimes \dots \otimes \id)(\bigotimes_i{\mathfrak{o}}_i)\in C(\widetilde y)$. We have
\begin{equation}\label{dia:induced_orientation_single_surf_Floer_breaking_incoming_end}
\xymatrix{ & {\mathfrak{o}}_v \otimes \bigotimes_{i < j} {\mathfrak{o}}_i \otimes \partial_u \otimes {\mathfrak{o}}_j \otimes \bigotimes_{i>j} {\mathfrak{o}}_i \ar@{|->} [d] \ar @{|->}[dl]\\
{\mathfrak{o}} &(-1)^{\sum_{i<j}|\widetilde y_i|'} {\mathfrak{o}}_v \otimes \partial_u \otimes \bigotimes_i {\mathfrak{o}}_i \ar @{|->}[d] \ar@{|->} [l]\\
& -(-1)^{\sum_{i<j}|\widetilde y_i|'} \text{inward}_\delta \otimes \bigotimes_i {\mathfrak{o}}_i \ar @{|->}[lu]}
\end{equation}
This means that the isomorphism $C(v) \circ (\id \otimes \dots \otimes C(u) \otimes \dots \otimes \id)$ which maps $\bigotimes_i{\mathfrak{o}}_i \mapsto {\mathfrak{o}}$, corresponds to the orientation $-(-1)^{\sum_{i<j}|\widetilde y_i|'}\text{inward}_\delta$ of $D_w$. This is therefore the induced orientation on $\Delta$.
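To make the sign concrete, suppose for instance that $\Sigma$ has two incoming ends. For breaking at the first incoming end ($j=1$) the exponent is an empty sum and the induced orientation is just $-\text{inward}_\delta$, while for breaking at the second one ($j=2$) it is
$$-(-1)^{|\widetilde y_1|'}\,\text{inward}_\delta\,.$$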
\paragraph{The case of a family}\label{par:induced_orientations_families} Let ${\mathcal{S}} \to {\mathcal{R}}$ be a family of punctured Riemann surfaces, and fix regular Floer data associated to its punctures and a regular compatible perturbation datum. We only deal with ${\mathcal{R}}$ being a connected interval in ${\mathbb{R}}$ of the form ${\mathcal{R}} = [0,1]$ or ${\mathcal{R}} = [0,\infty)$. We orient ${\mathcal{R}}$ by the orientation $\partial_r = 1 \in \ddd(T_r{\mathcal{R}}) = \ddd({\mathbb{R}})$. We consider a $1$-dimensional moduli space ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ and the boundary of its compactification. There are three types of boundary points: the boundary points of the original moduli space before compactification, internal Floer breaking, and breaking at a noncompact end of ${\mathcal{R}}$, see \S\ref{sss:compactness_gluing}.
We start with boundary points belonging to the moduli space itself: let
$$\delta = (r,u) \in \partial {\mathcal{R}} \times {\mathcal{M}}_{\Sigma_r}(K_r,I_r;\{\widetilde y_i\}_i,\widetilde y)\,.$$
The operator $D_u$ is canonically oriented by $1\otimes 1^\vee$, since it is an isomorphism. We have the isomorphism \eqref{eqn:iso_exact_triple_extended_linearized_op}
$$\ddd(D_{r,u}) \simeq \ddd(D_u) \otimes \ddd(T_r{\mathcal{R}})\,.$$
Since the operators $D_u,D_{r,u}$ are onto, by the normalization property (\S\ref{par:normalization_pty}) this isomorphism in fact comes from the short exact sequence
$$0 \to \ker D_u \to \ker D_{r,u} \xrightarrow{\pr} T_r{\mathcal{R}} \to 0\,,$$
where $\pr {:\ } \ker D_{r,u} \to T_r{\mathcal{R}}$ is the restriction of the projection $T_r{\mathcal{R}} \oplus W^{1,p}(u) \to T_r{\mathcal{R}}$. Since $\ker D_u = 0$, this $\pr$ is an isomorphism; by abuse of notation we denote by $\partial_r$ the preimage of $\partial_r$ under $\pr$. It follows that
$$\partial_r \mapsto (1 \otimes 1^\vee) \otimes \partial_r\,.$$
Therefore the induced orientation on $D_{r,u}$ is given by $\partial_r$. This is therefore also the induced orientation on the connected component of ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ containing $(r,u)$. Note that if $r = 0$, this orientation coincides with $\inward_\delta$, while if $r = 1$, it coincides with $-\inward_\delta$.
Next we consider internal Floer breaking. This can happen at the outgoing end or at an incoming end. Consider first breaking at the outgoing end: let
$$\delta = ((r,u),[v])\in{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y') \times {\mathcal{M}}(H,J;\widetilde y',\widetilde y)\,.$$
Let $\Delta$ be the connected component of $\overline{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ with $\delta \in \Delta$. Let $(s,w) \in \Delta$ be obtained by gluing $(r,u)$ and $v$ for some large gluing length. The canonical orientations of $D_u,D_v$ correspond to isomorphisms
$$C(u) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y')\quad\text{and}\quad C(v){:\ } C(\widetilde y') \simeq C(\widetilde y)\,.$$
The isomorphism
$$C(v) \circ C(u) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)$$
corresponds to an orientation of $D_w$, and to an orientation of ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ via the isomorphism \eqref{eqn:iso_exact_triple_extended_linearized_op}. Let us compute these orientations. The differential of the gluing map is an isomorphism
$$dg {:\ } T_{(r,u)}{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y') \oplus T_v \widetilde {\mathcal{M}}(H,J;\widetilde y',\widetilde y) \to T_{(s,w)}{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$$
mapping the spanning vector $\partial_v$ to $\text{inward}_\delta$. We have the exact square of Fredholm operators
$$\xymatrix{D_v \ar@{=}[r] \ar[d] & D_v \ar[r] \ar[d] & 0 \ar[d] \\ D_v \oplus D_u \ar[r] \ar[d] & D_v \oplus D_{r,u} \ar[r] \ar[d] & 0_{T_r{\mathcal{R}}} \ar@{=}[d] \\ D_u \ar[r] & D_{r,u} \ar[r] & 0_{T_r{\mathcal{R}}}}$$
which induces the following commutative diagram:
$$\xymatrix{\ddd(D_v) \otimes \ddd(D_u) \otimes \ddd(T_r{\mathcal{R}}) \ar[r] \ar[d]& \ddd(D_v) \otimes \ddd(D_{r,u}) \ar[d]\\
\ddd(D_v \oplus D_u) \otimes \ddd(T_r{\mathcal{R}}) \ar[r]& \ddd(D_v\oplus D_{r,u})}$$
where the vertical arrows are direct sum isomorphisms while the horizontal arrows come from exact triples.
There is an isomorphism
\begin{equation}\label{eqn:iso_dDu_otimes_dDv_dDw_internal_Floer_breaking_families_outgoing_end}
\ddd(D_v \oplus D_u) \simeq \ddd(D_w)
\end{equation}
which is the combination of linear gluing and deformation isomorphisms. Since $s$ is close to $r$, we have the isomorphism
$$\ddd(D_v \oplus D_u) \otimes \ddd(T_r{\mathcal{R}}) \simeq \ddd(D_w) \otimes \ddd(T_s{\mathcal{R}})\,.$$
The main property of the gluing map is the commutativity of the following diagram:
$$\xymatrix{\ddd(D_v \oplus D_u) \otimes \ddd(T_r{\mathcal{R}}) \ar[r] \ar[d]& \ddd(D_v\oplus D_{r,u})\ar[d]^{\ddd(dg)}\\
\ddd(D_w) \otimes \ddd(T_s{\mathcal{R}}) \ar[r]& \ddd(D_{s,w})}$$
where the horizontal arrows come from exact triples. Combining the two diagrams, we obtain the diagram on the left:
\begin{minipage}{8cm}
$$\xymatrix{\ddd(D_v) \otimes \ddd(D_u) \otimes \ddd(T_r{\mathcal{R}}) \ar[r] \ar[d]& \ddd(D_v) \otimes \ddd(D_{r,u}) \ar[d]\\
\ddd(D_v \oplus D_u) \otimes \ddd(T_r{\mathcal{R}}) \ar[r] \ar[d]& \ddd(D_v\oplus D_{r,u})\ar[d]^{\ddd(dg)}\\
\ddd(D_w) \otimes \ddd(T_s{\mathcal{R}}) \ar[r]& \ddd(D_{s,w})}$$
\end{minipage}
\begin{minipage}{6cm}
$$\xymatrix{\partial_v \otimes {\mathfrak{o}}_u \otimes \partial_r \ar@{|->}[r] \ar@{|->}[d]& \partial_v \otimes (1\otimes 1^\vee)\ar@{|->}[d]\\
(\partial_v \wedge {\mathfrak{o}}_u) \otimes \partial_r \ar@{|->}[r] \ar@{|->}[d]& \partial_v\ar@{|->}[d]\\
{\mathfrak{o}}_w \otimes \partial_r \ar@{|->}[r]& \text{inward}_\delta}$$
\end{minipage}
\noindent Recall that we have the canonical orientations ${\mathfrak{o}}_u \in \ddd(D_u)$ and $\partial_v \in \ddd(D_v)$. Let us denote by ${\mathfrak{o}}_w$ the image of $\partial_v \otimes {\mathfrak{o}}_u$ by the isomorphism \eqref{eqn:iso_dDu_otimes_dDv_dDw_internal_Floer_breaking_families_outgoing_end}. The diagram on the left maps orientations as shown in the diagram on the right, where the top horizontal arrow comes from the definition of ${\mathfrak{o}}_u$, see \S \ref{par:canonical_orientations_families}, and the vertical arrows all come from the definitions. The goal of this computation is the bottom arrow, which as we can see maps ${\mathfrak{o}}_w \otimes \partial_r \mapsto \text{inward}_\delta$.
Now the operator $D_v \oplus D_u \oplus \bigoplus_i D_{\widehat y_i}$ glues into $D_w \oplus \bigoplus_i D_{\widehat y_i}$, $D_v \oplus D_{\widehat y'}$, and $D_{\widehat y}$. Therefore, using a combination of direct sum, gluing, and deformation isomorphisms, we have the commutative diagram
$$\xymatrix{\ddd(D_v) \otimes \ddd(D_u) \otimes \bigotimes_i\ddd(D_{\widetilde y_i}) \ar[d]^{dg \otimes \id} \ar[r] & \ddd(D_{\widetilde y}) \\
\ddd(D_w) \otimes \bigotimes_i\ddd(D_{\widetilde y_i}) \ar[ru] }$$
Pick generators ${\mathfrak{o}}_i \in C(\widetilde y_i)$ and let ${\mathfrak{o}} = C(v)(C(u)(\bigotimes_i{\mathfrak{o}}_i)) \in C(\widetilde y)$. This diagram maps
$$\xymatrix{\partial_v \otimes {\mathfrak{o}}_u \otimes \bigotimes_i {\mathfrak{o}}_i \ar@{|->}[d]^{dg \otimes \id} \ar@{|->}[r] & {\mathfrak{o}} \\
{\mathfrak{o}}_w \otimes \bigotimes_i {\mathfrak{o}}_i \ar@{|->}[ru]}$$
Thus we see that the isomorphism $C(v) \circ C(u)$, which maps $\bigotimes_i {\mathfrak{o}}_i \mapsto {\mathfrak{o}}$, corresponds to the orientation ${\mathfrak{o}}_w$ of $D_w$, which in turn corresponds to the orientation $\text{inward}_\delta$ induced on $\Delta$.
Assume now that the Floer breaking occurs at the $j$-th incoming end, and we have
$$\delta=([u],(r,v)) \in {\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \times {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y)\,.$$
Let $\Delta$ be the connected component of $\overline{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ with $\delta \in \Delta$. Let $(s,w)$ be obtained by gluing $u,(r,v)$ for some large gluing length. The canonical orientations $\partial_u \in \ddd(D_u)$ and ${\mathfrak{o}}_v \in \ddd(D_v)$ correspond to isomorphisms
$$C(u) {:\ } C(\widetilde y_j) \simeq C(\widetilde y_j')\quad \text{and} \quad C(v) {:\ } \bigotimes_{i<j} C(\widetilde y_i) \otimes C(\widetilde y_j') \otimes \bigotimes_{i > j}C(\widetilde y_i) \simeq C(\widetilde y)\,.$$
The composition
$$C(v)\circ(\id \otimes \dots\otimes C(u) \otimes \dots \otimes \id) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)$$
corresponds to an orientation of $D_w$, and via \eqref{eqn:iso_exact_triple_extended_linearized_op}, to an orientation of ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$. Let us compute these orientations. The differential of the gluing map is an isomorphism
$$dg {:\ } T_u \widetilde {\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \oplus T_{(r,v)}{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y) \to T_{(s,w)}{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$$
mapping the spanning vector $\partial_u$ to $-\text{inward}_\delta$. Similarly to the breaking at the outgoing end described above, we use the exact square of Fredholm operators
$$\xymatrix{D_v \ar[r] \ar[d] & D_v \oplus D_u \ar[r] \ar [d] & D_u \ar@{=}[d] \\ D_{r,v} \ar[r] \ar[d]& D_{r,v} \oplus D_u \ar[r] \ar[d] & D_u \ar[d]\\ 0_{T_r{\mathcal{R}}} \ar@{=}[r] & 0_{T_r{\mathcal{R}}} \ar[r]& 0}$$
and the isomorphism
\begin{equation}\label{eqn:iso_dDu_otimes_dDv_dDw_internal_Floer_breaking_families_incoming_end}
\ddd(D_v \oplus D_u) \simeq \ddd(D_w)
\end{equation}
obtained as a combination of linear gluing and deformation isomorphisms, to obtain the commutative diagram on the left:
\begin{minipage}{8cm}
$$\xymatrix{\ddd(D_v) \otimes \ddd(D_u) \otimes \ddd(T_r{\mathcal{R}}) \ar[r] \ar[d]& \ddd(D_{r,v}) \otimes \ddd(D_u) \ar[d]\\
\ddd(D_v \oplus D_u) \otimes \ddd(T_r{\mathcal{R}}) \ar[r] \ar[d]& \ddd(D_{r,v}\oplus D_u)\ar[d]\\
\ddd(D_w) \otimes \ddd(T_s{\mathcal{R}}) \ar[r]& \ddd(D_{s,w})}$$
\end{minipage}
\begin{minipage}{6cm}
$$\xymatrix{{\mathfrak{o}}_v \otimes \partial_u \otimes \partial_r \ar@{|->}[r] \ar@{|->}[d]& -(1\otimes 1^\vee) \otimes \partial_u \ar@{|->}[d]\\
({\mathfrak{o}}_v \wedge\partial_u) \otimes \partial_r \ar@{|->}[r] \ar@{|->}[d]& -\partial_u\ar@{|->}[d]\\
{\mathfrak{o}}_w \otimes \partial_r \ar@{|->}[r]& \text{inward}_\delta}$$
\end{minipage}
\noindent where the top square is induced by the exact Fredholm square while the bottom square comes from the compatibility of the gluing map with linear gluing. The top arrow is the composition of the interchange of factors, which includes the Koszul sign $(-1)^{\ind D_u \cdot\dim {\mathcal{R}}} = -1$, and the isomorphism coming from the exact triple. Recall that we have canonical orientations ${\mathfrak{o}}_v,\partial_u$ and let ${\mathfrak{o}}_w$ be the image of ${\mathfrak{o}}_v \otimes \partial_u$ by the isomorphism \eqref{eqn:iso_dDu_otimes_dDv_dDw_internal_Floer_breaking_families_incoming_end}. Then the diagram on the left maps as shown on the right.
The rest of the computation is almost identical to the case of Floer breaking at an incoming end for the case of a single surface: the isomorphism $C(v) \circ (\id \otimes \dots \otimes C(u) \otimes \dots \otimes \id)$ corresponds to the orientation of $D_w$ given by $(-1)^{\sum_{i<j}|\widetilde y_i|'}{\mathfrak{o}}_w$ (that is, the Koszul sign is the same as in \eqref{dia:induced_orientation_single_surf_Floer_breaking_incoming_end}). As we have just computed, the orientation ${\mathfrak{o}}_w$ corresponds to the orientation $\text{inward}_\delta$ of $\Delta$, therefore the total induced orientation on $\Delta$ is $(-1)^{\sum_{i<j}|\widetilde y_i|'} \text{inward}_\delta$.
Lastly, when ${\mathcal{R}} = [0,\infty)$, we also have breaking at the noncompact end of ${\mathcal{R}}$. We retain the notation of \S\ref{par:compactness_families} treating compactness and gluing in this case. Consider the $1$-dimensional moduli space ${\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i \neq j},\widetilde y_2)$ and let
$$\delta = (u,v) \in {\mathcal{M}}_{\Sigma^1}(K^1,I^1;\{\widetilde y_{1,i}\}_i, \widetilde y_{2,j}) \times {\mathcal{M}}_{\Sigma^2}(K^2,I^2;\{\widetilde y_{2,i}\}_i,\widetilde y_2)\,.$$
Let $\Delta$ be the connected component of $\overline{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i \neq j},\widetilde y_2)$ such that $\delta \in \Delta$. Let $(r,w) \in \Delta$ be obtained by gluing $u,v$ for some large gluing length. The canonical orientations of $D_u,D_v$ correspond to isomorphisms
$$C(u) {:\ } \bigotimes_i C(\widetilde y_{1,i}) \simeq C(\widetilde y_{2,j})\quad\text{and} \quad C(v) {:\ } \bigotimes_i C(\widetilde y_{2,i}) \simeq C(\widetilde y_2)\,.$$
The composition
$$C(v) \circ (\id \otimes \dots \otimes C(u) \otimes \dots \otimes \id) {:\ } \bigotimes_{i < j} C(\widetilde y_{2,i}) \otimes \bigotimes_i C(\widetilde y_{1,i}) \otimes \bigotimes_{i > j} C(\widetilde y_{2,i}) \simeq C(\widetilde y_2)$$
corresponds to an orientation of $D_w$, and therefore to an orientation of $\Delta$, which we will now compute.
Since all the moduli spaces involved are zero-dimensional, the differential of the gluing map is the trivial isomorphism of zero vector spaces
$$dg {:\ } T_u{\mathcal{M}}_{\Sigma^1}(K^1,I^1;\{\widetilde y_{1,i}\}_i, \widetilde y_{2,j}) \oplus T_v {\mathcal{M}}_{\Sigma^2}(K^2,I^2;\{\widetilde y_{2,i}\}_i,\widetilde y_2) \to T_w {\mathcal{M}}_{\Sigma_r}(K_r,I_r;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i \neq j},\widetilde y_2)\,,$$
therefore the induced isomorphism
$$\ddd(D_v) \otimes \ddd(D_u) \simeq \ddd(D_w)\,,$$
which is the composition of the direct sum, linear gluing, and deformation isomorphisms, maps ${\mathfrak{o}}_v \otimes {\mathfrak{o}}_u = (1 \otimes 1^\vee) \otimes (1 \otimes 1^\vee)$ to ${\mathfrak{o}}_w:= 1\otimes 1^\vee$, that is the positive orientation of $D_w$.
On the other hand, the operator
$$D_v \oplus D_u \oplus \bigoplus_i D_{\widehat y_{1,i}} \oplus \bigoplus_{i \neq j} D_{\widehat y_{2,i}}$$
glues into $D_w \oplus \bigoplus_i D_{\widehat y_{1,i}} \oplus \bigoplus_{i \neq j} D_{\widehat y_{2,i}}$, $D_v \oplus \bigoplus_i D_{\widehat y_{2,i}}$, and $D_{\widehat y_2}$. Using a combination of direct sum, linear gluing, and deformation isomorphisms, we get the following commutative diagram:
$$\xymatrix{\ddd(D_v) \otimes \bigotimes_{i < j} \ddd(D_{\widetilde y_{2,i}}) \otimes \ddd(D_u) \otimes \bigotimes_i \ddd(D_{\widetilde y_{1,i}}) \otimes \bigotimes_{i > j} \ddd(D_{\widetilde y_{2,i}}) \ar [d]^R \ar [dr]\\
\ddd(D_v) \otimes \ddd(D_u) \otimes \bigotimes_{i < j} \ddd(D_{\widetilde y_{2,i}}) \otimes \bigotimes_i \ddd(D_{\widetilde y_{1,i}}) \otimes \bigotimes_{i > j} \ddd(D_{\widetilde y_{2,i}}) \ar[d]^{\ddd(dg) \otimes \id} \ar[r] & \ddd(D_{\widetilde y_2})\\
\ddd(D_w) \otimes \bigotimes_{i < j} \ddd(D_{\widetilde y_{2,i}}) \otimes \bigotimes_i \ddd(D_{\widetilde y_{1,i}}) \otimes \bigotimes_{i > j} \ddd(D_{\widetilde y_{2,i}}) \ar[ur]}$$
where $R$ is the interchange of factors, including the Koszul sign, which is trivial since $\ind D_u = 0$. Pick generators ${\mathfrak{o}}_{1,i} \in C(\widetilde y_{1,i})$ for all $i$, ${\mathfrak{o}}_{2,i} \in C(\widetilde y_{2,i})$ for $i \neq j$, and let
$$\textstyle {\mathfrak{o}}_2 = C(v)\big(\bigotimes_{i < j} {\mathfrak{o}}_{2,i} \otimes C(u)\big(\bigotimes_i {\mathfrak{o}}_{1,i}\big) \otimes \bigotimes_{i > j} {\mathfrak{o}}_{2,i}\big) \in C(\widetilde y_2)\,.$$
The diagram maps
$$\xymatrix{{\mathfrak{o}}_v \otimes \bigotimes_{i < j} {\mathfrak{o}}_{2,i} \otimes {\mathfrak{o}}_u \otimes \bigotimes_i {\mathfrak{o}}_{1,i} \otimes \bigotimes_{i > j} {\mathfrak{o}}_{2,i} \ar@{|->} [d]^R \ar@{|->} [dr]\\
{\mathfrak{o}}_v \otimes {\mathfrak{o}}_u \otimes \bigotimes_{i < j} {\mathfrak{o}}_{2,i} \otimes \bigotimes_i {\mathfrak{o}}_{1,i} \otimes \bigotimes_{i > j} {\mathfrak{o}}_{2,i} \ar @{|->} [d]^{\ddd(dg) \otimes \id} \ar @{|->} [r] & {\mathfrak{o}}_2 \\
{\mathfrak{o}}_w \otimes \bigotimes_{i < j} {\mathfrak{o}}_{2,i} \otimes \bigotimes_i {\mathfrak{o}}_{1,i} \otimes \bigotimes_{i > j} {\mathfrak{o}}_{2,i} \ar @{|->}[ru]}$$
Therefore the isomorphism $C(v) \circ (\id \otimes \dots \otimes C(u) \otimes \dots \otimes \id)$ which maps
$$\textstyle\bigotimes_{i < j} {\mathfrak{o}}_{2,i} \otimes \bigotimes_i {\mathfrak{o}}_{1,i} \otimes \bigotimes_{i > j} {\mathfrak{o}}_{2,i} \mapsto {\mathfrak{o}}_2\,,$$
corresponds to the orientation ${\mathfrak{o}}_w = 1 \otimes 1^\vee$ of $D_w$. Using the argument on page \pageref{par:induced_orientations_families} we see that the induced orientation on $\Delta$ is $\partial_r$, which evidently equals $-\inward_\delta$.
\subsection{Operations}\label{ss:operations}
Here we define the matrix elements of operations on Floer homology, which itself will be defined in the next subsection. The pattern is the same for all operations: given a collection of critical points $\widetilde y_i\in\Crit {\mathcal{A}}_{H^i}$ and $\widetilde y \in \Crit {\mathcal{A}}_H$, the matrix element of an operation is a finite sum of the form
$$\sum_{u \in {\mathcal{M}}(\{\widetilde y_i\}_i,\widetilde y)} C(u) {:\ } \bigotimes_i C(\widetilde y_i) \to C(\widetilde y)$$
where ${\mathcal{M}}(\{\widetilde y_i\}_i,\widetilde y)$ is a $0$-dimensional moduli space of solutions of the Floer PDE and $C(u)$ is the isomorphism corresponding to the canonical orientation of the linearized operator $D_u$. Identities are proved by considering the compactified $1$-dimensional moduli spaces, whose boundary points are in bijection with summands of a desired identity. The main technical points here are the compactness and gluing results of \S\ref{sss:compactness_gluing}, the canonical orientations defined in \S\ref{sss:canonical_ors}, and the computation of induced orientations in \S\ref{sss:induced_ors}.
\subsubsection{Definition of operations}\label{sss:definition_ops}
There are three types of operations: boundary operators, multiplicative operators, and homotopy operators.
\paragraph{Boundary operators}\label{par:matrix_elts_boundary_op}
First we deal with the boundary operator in Lagrangian Floer homology. Let $(H,J)$ be a regular Floer datum for the strip $S = {\mathbb{R}} \times [0,1]$. Fix $\widetilde y_\pm \in \Crit {\mathcal{A}}_H$ with $|\widetilde y_-| - |\widetilde y_+| = 1$. For every $u \in \widetilde {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ the canonical orientation $\partial_u \in \ddd(D_u)$ corresponds to an isomorphism
$$C(u) {:\ } C(\widetilde y_-) \simeq C(\widetilde y_+)\,.$$
Clearly this only depends on $[u] \in {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$. The matrix element of the boundary operator $\partial_{H,J}$ is the homomorphism
$$\sum_{[u] \in {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)} C(u) {:\ } C(\widetilde y_-) \to C(\widetilde y_+)\,.$$
This sum is finite since so is ${\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$.
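For later use, let us record how these matrix elements fit together: when the boundary operator is assembled in \S\ref{ss:HF}, its restriction to the summand $C(\widetilde y_-)$ of the Floer complex is simply the sum of the matrix elements over all critical points of index one less, schematically
$$\partial_{H,J}\big|_{C(\widetilde y_-)} = \sum_{\substack{\widetilde y_+ \in \Crit {\mathcal{A}}_H \\ |\widetilde y_-| - |\widetilde y_+| = 1}}\ \sum_{[u] \in {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)} C(u)\,.$$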
The matrix element of the boundary operator in Hamiltonian Floer homology is defined in an entirely analogous fashion, but the Floer datum is required to be $1$-periodic in time.
\paragraph{Multiplicative operators}\label{par:matrix_elts_operations}
There is a multiplicative operator corresponding to every punctured Riemann surface $\Sigma$ with exactly one positive puncture $\theta$ and negative punctures $\theta_i$, where $\widehat \Sigma$ is the sphere or the closed disk. Fix a choice of ends for $\Sigma$, regular Floer data $(H,J),(H^i,J^i)$ associated to $\theta,\theta_i$, and a regular compatible perturbation datum $(K,I)$. For $\widetilde y_i \in \Crit {\mathcal{A}}_{H^i}$, $\widetilde y \in \Crit {\mathcal{A}}_H$ with $|\widetilde y|' = \sum_i |\widetilde y_i|'$, pick $u \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$. The canonical orientation ${\mathfrak{o}}_u \in \ddd(D_u)$ corresponds to an isomorphism
$$C(u) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)\,.$$
The matrix element of the operation $\Phi_{\Sigma;K,I}$ is the homomorphism
$$\sum_{u \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)} C(u) {:\ } \bigotimes_i C(\widetilde y_i) \to C(\widetilde y)\,.$$
\paragraph{Homotopy operators}\label{par:matrix_elts_htpy_ops}
Let ${\mathcal{R}} = [0,1]$ or $[0,\infty)$ and let ${\mathcal{S}} \to {\mathcal{R}}$ be a family of punctured Riemann surfaces with one positive puncture $\theta$ and negative punctures $\theta_i$. Pick a choice of ends for ${\mathcal{S}}$, regular Floer data $(H,J),(H^i,J^i)$ associated to $\theta,\theta_i$, and a regular compatible perturbation datum $(K,I)$ on ${\mathcal{S}}$. Recall that in case ${\mathcal{R}} = [0,\infty)$, the choice of ends and perturbation data has the special form described in \S\ref{par:compactness_families}. We keep a simpler numbering of Floer data and punctures, since the more specialized numbering described there is not needed for the purposes of the current definition.
Fix $\widetilde y_i \in \Crit {\mathcal{A}}_{H^i}$, $\widetilde y \in \Crit {\mathcal{A}}_H$ such that $1+|\widetilde y|' - \sum_i|\widetilde y_i|' = 0$. For $(r,u) \in {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ the canonical orientation ${\mathfrak{o}}_u \in \ddd(D_u)$ determines an isomorphism
$$C(u) {:\ } \bigotimes_i C(\widetilde y_i) \simeq C(\widetilde y)\,.$$
The matrix element of the homotopy operator $\Psi_{{\mathcal{S}};K,I}$ is the homomorphism
$$\sum_{(r,u) \in {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)} C(u) {:\ } \bigotimes_i C(\widetilde y_i) \to C(\widetilde y)\,.$$
\subsubsection{Identities}\label{sss:identities}
Here we prove that the various operations satisfy identities. There are three types of identities, expressing the fact that boundary operators square to zero, that multiplicative operators are chain maps, and that the latter satisfy algebraic identities; the last type of identity is established using homotopy operators. Since every operation is defined in terms of matrix elements, in order to prove an identity it suffices to prove that the corresponding combination of matrix elements of the operations involved equals zero. This is what we do here.
\paragraph{Boundary operators square to zero}\label{par:matrix_elts_bd_op_squared_vanish} Here we only prove this in the absence of bubbling. The remaining case is treated in \S\ref{ss:boundary_op_squares_zero_bubbling_HF}. It is enough to show that the matrix element of $\partial_{H,J}^2$ relative to $\widetilde y_\pm$ is zero, that is that
$$\sum_{\substack{\widetilde y \in \Crit {\mathcal{A}}_H \\ [u]\in{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y) \\ [v] \in {\mathcal{M}}(H,J;\widetilde y,\widetilde y_+)}}C(v)\circ C(u) = 0\,.$$
Consider the compactified moduli space $\overline {\mathcal{M}}(H,J; \widetilde y_-,\widetilde y_+)$; the summands in the above sum are in bijection with its boundary points. For a boundary point $\delta$ let $C(\delta) {:\ } C(\widetilde y_-) \simeq C(\widetilde y_+)$ be the corresponding summand. It is enough to show that for each connected component $\Delta \subset \overline{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ having $\delta,\delta'$ as its boundary points, we have
$$C(\delta) + C(\delta') = 0\,.$$
Let $\widetilde \Delta \subset \widetilde {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$ be the component covering $\Delta$ under the quotient map $\widetilde{\mathcal{M}} (H,J;\widetilde y_-, \widetilde y_+)\to {\mathcal{M}} (H,J;\widetilde y_-, \widetilde y_+)$. By \S\ref{sss:orientations_isomorphisms}, we know that isomorphisms $C(\widetilde y_-) \simeq C(\widetilde y_+)$ are in bijection with orientations of $\widetilde \Delta$. We computed the orientation induced on $\widetilde \Delta$ from a boundary point in \S\ref{par:induced_orientation_translation_invt_pert_datum}. For $w \in \widetilde\Delta$ it is given by $-\partial_w \wedge \text{inward}_\delta$. Since clearly
$$\text{inward}_\delta = -\text{inward}_{\delta'}\,,$$
we see that $C(\delta) = -C(\delta')$.
\paragraph{Operations $\Phi_\Sigma$ are chain maps}\label{par:mult_ops_are_chain_maps}
We want to prove the vanishing of the matrix element
$$\sum_{\substack{\widetilde y' \in \Crit {\mathcal{A}}_{H'} \\ u \in{\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y') \\ [v] \in {\mathcal{M}}(H,J;\widetilde y',\widetilde y)}} C(v)\circ C(u)
-\sum_j (-1)^{\sum_{i<j}|\widetilde y_i|'} \sum_{\substack{\widetilde y_j' \in \Crit {\mathcal{A}}_{H^j} \\ [u] \in{\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \\ v \in {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y)}} C(v)\circ (\id \otimes \dots \otimes C(u) \otimes \dots \otimes \id)$$
as a homomorphism
$$\bigotimes_jC(\widetilde y_j) \to C(\widetilde y)\,.$$
There is a bijection between the boundary points of the compactified $1$-dimensional moduli space $\overline{\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ and the summands of the above matrix element. For a boundary point $\delta \in \partial \overline {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ we let $C(\delta)$ be the summand corresponding under this bijection to $\delta$, including the sign in front of it. It is enough to show that for each connected component $\Delta \subset \overline {\mathcal{M}}_\Sigma(K,I;\{\widetilde y_i\}_i,\widetilde y)$ with $\partial \Delta = \{\delta,\delta'\}$ we have
$$C(\delta) + C(\delta') = 0\,.$$
Isomorphisms $\bigotimes_jC(\widetilde y_j) \to C(\widetilde y)$ are in bijection with orientations of $\Delta$. In \S\ref{par:induced_orientation_single_surf} we computed the orientations induced on $\Delta$ by the isomorphisms $C(\delta),C(\delta')$. The computations show precisely that these orientations are always opposite, whence the vanishing of the matrix element.
\paragraph{Chain homotopies for a single surface}\label{par:chain_htpies_single_surf} Consider the trivial family \footnote{Note however that the fiberwise conformal structure on ${\mathcal{S}}$ is allowed to vary along $[0,1]$.} ${\mathcal{S}} = {\mathcal{R}} \times \Sigma$, ${\mathcal{R}} = [0,1]$, where the ends and the perturbation datum are constant near $\partial{\mathcal{R}} = \{0,1\}$. We want to prove the vanishing of the matrix element
\begin{multline*}
\sum_{u \in {\mathcal{M}}_{\Sigma_1}(K_1,I_1;\{\widetilde y_i\}_i,\widetilde y)} C(u) - \sum_{u\in{\mathcal{M}}_{\Sigma_0}(K_0,I_0; \{\widetilde y_i\}_i,\widetilde y)} C(u) \\
- \sum_{\substack{\widetilde y' \in \Crit {\mathcal{A}}_H \\ (r,u) \in {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_i\}_i,\widetilde y') \\ [v] \in {\mathcal{M}}(H,J; \widetilde y',\widetilde y)}} C(v) \circ C(u)
- \sum_j(-1)^{\sum_{i<j}|\widetilde y_i|'} \sum_{\substack{\widetilde y_j' \in \Crit {\mathcal{A}}_{H^j} \\ [u] \in {\mathcal{M}}(H^j,J^j;\widetilde y_j,\widetilde y_j') \\ (r,v) \in {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_{i\neq j},\widetilde y_j',\widetilde y)}} C(v) \circ C(u)
\end{multline*}
as a homomorphism
$$\bigotimes_jC(\widetilde y_j) \to C(\widetilde y)\,.$$
Again, the boundary points of the compactified $1$-dimensional moduli space $\overline{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ are in bijection with the summands of the matrix element. If $C(\delta)$ denotes the summand corresponding to the boundary point $\delta$, including the sign in front of it, then it is enough to show that for every connected component $\Delta \subset \overline{\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_i\}_i,\widetilde y)$ with $\partial \Delta = \{\delta,\delta'\}$ we have $C(\delta) + C(\delta') = 0$. Isomorphisms $\bigotimes_jC(\widetilde y_j) \to C(\widetilde y)$ correspond to orientations of $\Delta$. In \S\ref{par:induced_orientations_families} we computed the orientations induced on $\Delta$ by the isomorphisms $C(\delta),C(\delta')$. Examining these orientations we deduce that $C(\delta) + C(\delta') = 0$ and therefore the vanishing of the matrix element.
\paragraph{Chain homotopies for a family of surfaces over $[0,\infty)$}\label{par:chain_htpies_families}
Keep the notations of \S\ref{par:compactness_families}. We wish to prove the vanishing of the matrix element
\begin{multline*}
\sum_{\substack{\widetilde y_{2,j} \in \Crit{\mathcal{A}}_{H^1} \\ u\in {\mathcal{M}}_{\Sigma^1}(K^1,I^1;\{\widetilde y_{1,i}\}_i,\widetilde y_{2,j}) \\ v \in {\mathcal{M}}_{\Sigma^2}(K^2,I^2;\{\widetilde y_{2,i}\}_i,\widetilde y_2)}} C(v) \circ C(u) \\
- \sum_{u \in {\mathcal{M}}_{\Sigma_0}(K_0,I_0;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2)} C(u) -
\sum_{\substack{\widetilde y_2' \in \Crit {\mathcal{A}}_{H^2} \\ (r,u)\in {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2') \\ [v] \in {\mathcal{M}}(H^2,J^2;\widetilde y_2',\widetilde y_2)}} C(v) \circ C(u) \\
- \sum_l \sum_{\substack{\widetilde y_{1,l}' \in \Crit {\mathcal{A}}_{H^{1,l}} \\ [u] \in {\mathcal{M}}(H^{1,l},J^{1,l};\widetilde y_{1,l},\widetilde y_{1,l}') \\ (r,v) \in {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_{1,i}\}_{i\neq l},\widetilde y_{1,l}',\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2)}} (-1)^{\sum_{i < j}|\widetilde y_{2,i}|' + \sum_{i < l} |\widetilde y_{1,i}|'} C(v) \circ C(u) \\
- \sum_{l\neq j} \sum_{\substack{\widetilde y_{2,l}' \in \Crit {\mathcal{A}}_{H^{2,l}} \\ [u] \in {\mathcal{M}}(H^{2,l},J^{2,l};\widetilde y_{2,l},\widetilde y_{2,l}') \\ (r,v) \in {\mathcal{M}}_{{\mathcal{S}}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j,l}, \widetilde y_{2,l}', \widetilde y_2)}} (-1)^{\text{sign}(l)} C(v) \circ C(u)
\end{multline*}
as a homomorphism
$$\bigotimes_{i < j} C(\widetilde y_{2,i}) \otimes \bigotimes_i C(\widetilde y_{1,i}) \otimes \bigotimes_{i > j} C(\widetilde y_{2,i}) \simeq C(\widetilde y_2)\,.$$
Here
$$\text{sign}(l) = \left\{\begin{array}{ll} \sum_{i < l} |\widetilde y_{2,i}|' & \text { if } l < j \\ \sum_{i < l, i \neq j} |\widetilde y_{2,i}|' + \sum_{i=1}^m |\widetilde y_{1,i}|' & \text { if } l > j \end{array} \right .$$
where $m$ is the number of negative ends of $\Sigma^1$.
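In other words, $\text{sign}(l)$ is just the total shifted degree of the inputs preceding the $l$-th slot in the ordering $\bigotimes_{i < j} C(\widetilde y_{2,i}) \otimes \bigotimes_i C(\widetilde y_{1,i}) \otimes \bigotimes_{i > j} C(\widetilde y_{2,i})$. For instance, if $j = 1$ and $m = 2$, then for breaking at the incoming end with $l = 2$ the formula reduces to
$$(-1)^{\text{sign}(2)} = (-1)^{|\widetilde y_{1,1}|' + |\widetilde y_{1,2}|'}\,.$$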
The argument is identical to the above: the summands of the matrix element are in bijection with boundary points of the compactification $\overline {\mathcal{M}}_{\mathcal{S}}(K,I;\{\widetilde y_{1,i}\}_i,\{\widetilde y_{2,i}\}_{i\neq j}, \widetilde y_2)$; let $C(\delta)$ be the summand corresponding to the boundary point $\delta$, including any signs in front of it. It suffices to show that for any connected component $\Delta$ of the compactified space with $\partial \Delta = \{\delta,\delta'\}$ we have $C(\delta) + C(\delta') = 0$. The bijection between isomorphisms $\bigotimes_{i < j} C(\widetilde y_{2,i}) \otimes \bigotimes_i C(\widetilde y_{1,i}) \otimes \bigotimes_{i > j} C(\widetilde y_{2,i}) \simeq C(\widetilde y_2)$ and orientations of $\Delta$, plus the computation of induced orientations done in \S\ref{par:induced_orientations_families} imply $C(\delta) + C(\delta') = 0$ and hence the vanishing of the desired matrix element.
\subsection{Floer homology}\label{ss:HF}
Here we define Lagrangian and Hamiltonian Floer homology, and the various algebraic operations thereupon.
\subsubsection{Lagrangian Floer homology}\label{sss:Lagr_HF}
Choose a regular Floer datum $(H,J)$ for the strip $S$, which is sufficiently generic so that the compactness results of \S\ref{sss:compactness_gluing} hold. Define the ${\mathbb{Z}}$-module
$$CF_*(H:L) = \bigoplus_{\widetilde y \in \Crit {\mathcal{A}}_{H:L}}C(\widetilde y)\,.$$
This is graded by $m_{H:L}$.
\paragraph{The boundary operator} The boundary operator
$$\partial_{H,J} {:\ } CF_j(H:L) \to CF_{j-1}(H:L)$$
is defined via its matrix elements, see \S\ref{par:matrix_elts_boundary_op}. The vanishing of the matrix elements of $\partial_{H,J}^2$ (\S\ref{par:matrix_elts_bd_op_squared_vanish}), together with the results of \S\ref{ss:boundary_op_squares_zero_bubbling_HF}, shows that $\partial_{H,J}^2 = 0$, and therefore we can define the \textbf{Lagrangian Floer homology}
$$HF_*(H,J:L)$$
as the homology of the \textbf{Lagrangian Floer complex} $(CF_*(H:L),\partial_{H,J})$.
\paragraph{Continuation maps} Let $(H^i,J^i)$ be regular Floer data, $i=0,1$, associated to the ends of $S$, $(H^0,J^0)$ to the negative end, and $(H^1,J^1)$ to the positive end, and choose a regular perturbation datum $(K,I)$ on $S$ which is of the form
\begin{equation}\label{eqn:pert_datum_continuation_maps}
K(s,t)=H^s_t\,dt\,,\quad I(s,t) = J^s_t
\end{equation}
where $(H^s,J^s)_{s\in {\mathbb{R}}}$ is a smooth homotopy of Floer data, which is independent of $s$ for $s \notin (0,1)$. The corresponding \textbf{continuation map}
$$\Phi_{S;K,I} {:\ } CF_j(H^0:L) \to CF_j(H^1:L)$$
is determined by its matrix elements, see \S\ref{par:matrix_elts_operations}. According to \S\ref{par:mult_ops_are_chain_maps}, it is a chain map:
$$\Phi_{S;K,I}\circ \partial_{H^0,J^0} = \partial_{H^1,J^1}\circ \Phi_{S;K,I}$$
and therefore it induces a map on homology
$$\Phi_{S;K,I} {:\ } HF_j(H^0,J^0:L) \to HF_j(H^1,J^1:L)\,.$$
If $(K',I')$ is another regular perturbation datum on $S$ as above, that is, one corresponding to a different homotopy of Floer data, then the maps $\Phi_{S;K,I}$ and $\Phi_{S;K',I'}$ are chain homotopic. Indeed, consider the trivial family ${\mathcal{S}} = S \times [0,1]$, where we associate the data $(H^i,J^i)$ to the ends of $S$ as above, and choose a regular compatible perturbation datum $(\overline K,\overline I)$ on ${\mathcal{S}}$ which near $r=0$ equals $(K,I)$ and near $r=1$ equals $(K',I')$. It follows from \S\ref{par:chain_htpies_single_surf} that the operator
$$\Psi_{{\mathcal{S}};\overline K,\overline I} {:\ } CF_j(H^0:L) \to CF_{j+1}(H^1:L)$$
determined by its matrix elements, see \S\ref{par:matrix_elts_htpy_ops}, is a chain homotopy between $\Phi_{S;K,I}$ and $\Phi_{S;K',I'}$. Therefore the induced map on homology is independent of the choice of homotopy of Floer data; we denote it
$$\Phi_{H^0,J^0}^{H^1,J^1}{:\ } HF_*(H^0,J^0:L) \to HF_*(H^1,J^1:L)\,.$$
Consider now the trivial family ${\mathcal{S}} = S \times [0,\infty)$, where for some $R_0 > 0$ the ends and a regular perturbation datum $(K,I)$ on ${\mathcal{S}}$ come from gluing $\Sigma^1=\Sigma^2=S$ as described in \S\ref{par:compactness_families}: the positive end of $\Sigma^1$ is glued to the negative end of $\Sigma^2$, the Floer data associated to the ends of $\Sigma^1$ are $(H^0,J^0),(H^1,J^1)$, those associated to the ends of $\Sigma^2$ are $(H^1,J^1),(H^2,J^2)$, and there are perturbation data $(K^i,I^i)$ on $\Sigma^i$ of the form \eqref{eqn:pert_datum_continuation_maps}. Using the matrix elements defined in \S\ref{par:matrix_elts_htpy_ops} we can define a homotopy operator
$$\Psi_{{\mathcal{S}};K,I} {:\ } CF_j(H^0:L) \to CF_{j+1}(H^2:L)\,,$$
which is a chain homotopy between the chain-level maps inducing
$$\Phi_{H^0,J^0}^{H^2,J^2}\quad \text{and} \quad \Phi_{H^1,J^1}^{H^2,J^2} \circ \Phi_{H^0,J^0}^{H^1,J^1}\,.$$
This follows from the vanishing of the matrix elements, \S\ref{par:chain_htpies_families}. This means that on homology we have
\begin{equation}\label{eqn:cocycle_id_continuation_maps_Lagr_HF}
\Phi_{H^0,J^0}^{H^2,J^2} = \Phi_{H^1,J^1}^{H^2,J^2} \circ \Phi_{H^0,J^0}^{H^1,J^1}\,.
\end{equation}
Let now $(H,J)$ be a regular Floer datum on $S$ and let $(K,I)$ be the corresponding translation-invariant perturbation datum. It can be seen that the continuation map $\Phi_{S;K,I}$ is the identity on chain level. This means that
$$\Phi_{H,J}^{H,J} = \id_{HF_*(H,J:L)}\,.$$
Combining with \eqref{eqn:cocycle_id_continuation_maps_Lagr_HF}, we obtain
$$\Phi_{H^0,J^0}^{H^1,J^1} = (\Phi_{H^1,J^1}^{H^0,J^0})^{-1}\,,$$
and in particular continuation maps are isomorphisms.
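Explicitly, applying \eqref{eqn:cocycle_id_continuation_maps_Lagr_HF} once with $(H^2,J^2) = (H^0,J^0)$ and once with the roles of $(H^0,J^0)$ and $(H^1,J^1)$ interchanged, we get
$$\Phi_{H^1,J^1}^{H^0,J^0} \circ \Phi_{H^0,J^0}^{H^1,J^1} = \Phi_{H^0,J^0}^{H^0,J^0} = \id \quad\text{and}\quad \Phi_{H^0,J^0}^{H^1,J^1} \circ \Phi_{H^1,J^1}^{H^0,J^0} = \Phi_{H^1,J^1}^{H^1,J^1} = \id\,.$$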
We can now define the \textbf{abstract Floer homology} $HF_*(L)$ as the limit of the system of Floer homologies connected by the continuation isomorphisms.
\paragraph{The product}
Let $\Sigma$ be the disk with three boundary punctures $\theta_i$, $i=0,1,2$, arranged in positive cyclic order on $\partial D^2$; here $\theta_2$ is positive and the other punctures are negative. Endow $\Sigma$ with a choice of ends. Let $(H^i,J^i)$ be regular Floer data associated to the $\theta_i$ and let $(K,I)$ be a regular compatible perturbation datum on $\Sigma$. The matrix elements \S\ref{par:matrix_elts_operations} determine an operation
$$\Phi_{\Sigma;K,I} {:\ } CF_j(H^0:L) \otimes CF_k(H^1:L) \to CF_{j+k-n}(H^2:L)\,.$$
The vanishing of the matrix elements of \S\ref{par:mult_ops_are_chain_maps} shows that $\Phi_{\Sigma;K,I}$ is a chain map, therefore we have a bilinear operation on homology
$$\Phi_{\Sigma;K,I} {:\ } HF_j(H^0,J^0:L) \otimes HF_k(H^1,J^1:L) \to HF_{j+k-n}(H^2,J^2:L)\,,$$
which is the \textbf{product}. A change of auxiliary data, that is, of the conformal structure on $\Sigma$, the positions of the punctures, the ends, or the perturbation datum, can be encoded in a family over $[0,1]$, which then produces a homotopy operator between the corresponding chain maps. This means that we have a well-defined operation
$$\star {:\ } HF_j(H^0,J^0:L) \otimes HF_k(H^1,J^1:L) \to HF_{j+k-n}(H^2,J^2:L)\,,$$
which only depends on the Floer data.
Gluing $\Sigma$ to a strip with perturbation data corresponding to a change of the Floer datum leads to a family ${\mathcal{S}} \to [0,\infty)$, which can be used to produce a homotopy operator, which implies that on homology we have
$$\star \circ (\Phi \otimes \Phi) = \Phi \circ \star\,,$$
where $\Phi$ is shorthand notation for a continuation map, that is the product is compatible with the continuation maps. This in particular means that we have a well-defined product on the abstract Floer homology $HF_*(L)$.
\paragraph{Associativity of the product}
Let $\Sigma$ be the disk with four boundary punctures $\theta_i$, $i=0,1,2,3$, arranged in positive cyclic order, where $\theta_3$ is the only positive puncture. Fix a choice of ends for $\Sigma$, and let $(H^i,J^i)$ be regular Floer data associated to the $\theta_i$, and $(K,I)$ a regular compatible perturbation datum on $\Sigma$. The corresponding operation
$$\Phi_{\Sigma;K,I} {:\ } CF_j(H^0:L) \otimes CF_k(H^1:L) \otimes CF_l(H^2:L) \to CF_{j+k+l-2n}(H^3:L)$$
can be proved to be a chain map, using the same arguments as above, which means it descends to homology. It is then shown to be independent of the auxiliary data. Next one shows that it is compatible with the continuation morphisms, therefore it defines a ternary operation on $HF_*(L)$.
Note now that $\Sigma$ can be glued from two surfaces $\Sigma^1=\Sigma^2$, which are disks with three boundary punctures of which only one is positive. This gluing can be done in two different ways: one can feed the positive puncture of $\Sigma^1$ into either one of the negative punctures of $\Sigma^2$. The resulting families ${\mathcal{S}} \to [0,\infty)$ can be used to show that both compositions
$$\star \circ (\id \otimes \star)\quad\text{and}\quad \star \circ (\star \otimes \id)$$
are equal, on the level of homology, to the ternary operation we have just defined. In particular it means that the product $\star$ on $HF_*(L)$ is associative.
\paragraph{The unit}
Take now $\Sigma$ to be the disk with one positive puncture, endow it with an end, and fix a regular nondegenerate Floer datum $(H,J)$ and a compatible perturbation datum $(K,I)$. There is the corresponding nullary operation $\Phi_{\Sigma;K,I}$, which plays the role of the unit in Floer homology. Let us give a more detailed description of it. Fix $\widetilde y \in \Crit {\mathcal{A}}_{H:L}$. The moduli space ${\mathcal{M}}_{\Sigma}(K,I;\widetilde y)$ has dimension $|\widetilde y|' = n - |\widetilde y|$, therefore it is zero-dimensional whenever $|\widetilde y| = n$. Assume this. For any $u \in {\mathcal{M}}_\Sigma(K,I;\widetilde y)$ the operator $D_u$ is an isomorphism, and is therefore canonically oriented by the positive orientation ${\mathfrak{o}}_u = 1 \otimes 1^\vee$. Orientations of $D_u$ are in bijection with isomorphisms $\bigotimes_\varnothing \simeq C(\widetilde y)$. The empty tensor product \footnote{If the reader does not like abstract nonsense, here is another way to do this: the linearized operator $D_u$ by definition belongs to the family $D_{\widetilde y}$, and since it is oriented, there is the corresponding generator of $C(\widetilde y)$; in our notation this is just $C(u)(1)$.} by definition is just the ${\mathbb{Z}}$-module ${\mathbb{Z}}$, therefore the canonical orientation of $D_u$ gives rise to an isomorphism $C(u) {:\ } {\mathbb{Z}} \simeq C(\widetilde y)$. Taking the sum:
$$\sum_{u \in {\mathcal{M}}_\Sigma(K,I;\widetilde y)} C(u) {:\ } {\mathbb{Z}} \to C(\widetilde y)$$
gives the matrix element of the operation $\Phi_{\Sigma;K,I}$. Passing to the whole complex, we see that this amounts to a graded linear map
$${\mathbb{Z}}[n] \to CF_*(H:L)\,,$$
where ${\mathbb{Z}}[n]$ denotes the graded abelian group which has ${\mathbb{Z}}$ in degree $n$ and $0$ everywhere else. This is the unit. Using the same methods as above it is shown to be a chain map, to be independent of the auxiliary data, and to be compatible with continuation morphisms. Therefore we have the canonical nullary operation
$$1 {:\ } {\mathbb{Z}}[n] \to HF_*(L)\,.$$
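Concretely, on the chain level the unit is represented by the image of $1 \in {\mathbb{Z}}$, that is by the degree-$n$ chain
$$\sum_{\substack{\widetilde y \in \Crit {\mathcal{A}}_{H:L} \\ |\widetilde y| = n}}\ \sum_{u \in {\mathcal{M}}_\Sigma(K,I;\widetilde y)} C(u)(1) \in CF_n(H:L)\,;$$
the statement that ${\mathbb{Z}}[n] \to CF_*(H:L)$ is a chain map says precisely that this chain is a cycle.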
Let $\Sigma_\star$ be the disk with three boundary punctures, two negative and one positive puncture. We can glue $\Sigma$ to either one of the negative punctures of $\Sigma_\star$, and the resulting surface is isomorphic to the strip $S$ with one positive and one negative puncture. Using the same arguments as above, we see that this leads to the following identities:
$$\star \circ (1 \otimes \id) = \star \circ (\id \otimes 1) = \id$$
on $HF_*(L)$. This means precisely that the operation $1$ we have just defined is the unit for the product $\star$.
\subsubsection{Hamiltonian Floer homology}\label{sss:Hamiltonian_HF}
The treatment here is entirely parallel to the Lagrangian case, so we just outline the main arguments and results and establish notation.
Fix a regular nondegenerate Floer datum $(H,J)$ for the cylinder $C$, which is sufficiently generic so that the compactness results of \S\ref{sss:compactness_gluing} hold. Define the ${\mathbb{Z}}$-module
$$CF_*(H) = \bigoplus_{\widetilde y\in\Crit{\mathcal{A}}_H}C(\widetilde y)\,.$$
This is graded by $m_H$.
\paragraph{The boundary operator}
The matrix elements of \S\ref{par:matrix_elts_boundary_op} assemble into the boundary operator
$$\partial_{H,J} {:\ } CF_j(H) \to CF_{j-1}(H)\,.$$
That $\partial_{H,J}^2 = 0$ follows from the vanishing of the corresponding matrix elements, see \S\ref{par:matrix_elts_bd_op_squared_vanish}. We let the \textbf{Hamiltonian Floer homology}
$$HF_*(H,J)$$
be the homology of $(CF_*(H),\partial_{H,J})$.
\paragraph{Continuation maps}
These are defined in the exact same manner as in the Lagrangian case, and we leave the details to the reader. The upshot is that for any regular nondegenerate Floer data $(H^i,J^i)$, $i=0,1$, there is a well-defined morphism
$$\Phi_{H^0,J^0}^{H^1,J^1} {:\ } HF_*(H^0,J^0) \to HF_*(H^1,J^1)$$
which satisfies the cocycle identity and such that
$$\Phi_{H,J}^{H,J} = \id_{HF_*(H,J)}\,.$$
It follows that the continuation maps are isomorphisms. Therefore the abstract Floer homology $HF_*(M)$ can be defined as the limit of the system of Floer homologies for different Floer data, connected by the continuation isomorphisms.
\paragraph{The product}
This is defined analogously to the Lagrangian case, but there are some minor differences. The product here is defined using the surface $\Sigma$ which is the sphere $S^2$ with three punctures, two negative and one positive. It therefore gives rise to an operation on homology
$$* {:\ } HF_j(H^0,J^0) \otimes HF_k(H^1,J^1) \to HF_{j+k-2n}(H^2,J^2)$$
for any regular Floer data $(H^i,J^i)$ associated to the punctures of $\Sigma$. One shows that this is independent of the auxiliary data such as the choice of ends (since the space of ends around an interior puncture is connected), and the perturbation datum, and therefore the operation $*$ is well-defined on homology. It can also be shown to respect continuation maps, which means that there is a well-defined product on the abstract Floer homology $HF_*(M)$. Associativity is proved in the same way as in the Lagrangian case, by gluing two copies of $\Sigma$.
The difference from the Lagrangian case is the fact that the product $*$ is supercommutative. This can be seen as follows. Let $(K,I)$ be a regular compatible perturbation datum on $\Sigma$. There is an orientation-preserving diffeomorphism of $\Sigma$ which preserves the positive puncture and exchanges the negative ones. Let $(K',I')$ be the perturbation datum obtained by pushing forward the datum $(K,I)$ by this diffeomorphism. For all critical points $\widetilde y_i \in \Crit {\mathcal{A}}_{H^i}$ we have a canonical identification of moduli spaces
$${\mathcal{M}}_\Sigma(K,I;\widetilde y_0,\widetilde y_1,\widetilde y_2) \simeq {\mathcal{M}}_\Sigma(K',I';\widetilde y_1,\widetilde y_0,\widetilde y_2)\,,$$
where on the left we have the original conformal structure and ends while on the right we use the conformal structure and ends pushed forward by the diffeomorphism. This means that the corresponding matrix elements
$$\sum_{u \in {\mathcal{M}}_\Sigma(K,I;\widetilde y_0,\widetilde y_1,\widetilde y_2)} C(u) {:\ } C(\widetilde y_0) \otimes C(\widetilde y_1) \to C(\widetilde y_2) \quad\text{and}\quad \sum_{u \in {\mathcal{M}}_\Sigma(K',I';\widetilde y_1,\widetilde y_0,\widetilde y_2)} C(u) {:\ } C(\widetilde y_1) \otimes C(\widetilde y_0) \to C(\widetilde y_2)$$
only differ by the Koszul sign $(-1)^{|\widetilde y_0|'|\widetilde y_1|'}$ which arises when we compose the direct sum isomorphisms
$$\ddd(D_{\widetilde y_0}) \otimes \ddd(D_{\widetilde y_1}) \simeq \ddd(D_{\widetilde y_0} \oplus D_{\widetilde y_1}) \simeq \ddd(D_{\widetilde y_1}) \otimes \ddd(D_{\widetilde y_0})\,,$$
see \S\ref{par:direct_sum_isos}. Therefore we can see the supercommutativity already on the chain level.
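In other words, writing $|\cdot|'$ for the shifted degree appearing in the Koszul sign, and combining this with the independence of the product from the auxiliary data, on the abstract Floer homology we obtain
$$\alpha * \beta = (-1)^{|\alpha|'|\beta|'}\,\beta * \alpha \quad \text{for } \alpha,\beta \in HF_*(M)\,.$$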
\paragraph{The unit}
This again is defined analogously to the Lagrangian case, the difference being that the unit is now a graded linear map
$$1: {\mathbb{Z}}[2n] \to HF_*(M)\,.$$
\subsubsection{Quantum module action}\label{sss:quantum_module_action}
This is defined as follows. Let $\Sigma_\bullet$ be the disk with two boundary punctures, one of which is positive, and one interior puncture which is negative. Endow $\Sigma_\bullet$ with a choice of ends, and let $(H^i,J^i)$ be regular Floer data associated to the punctures, the one with $i=1$ to the negative boundary puncture, $i=0$ to the interior puncture, and $i=2$ to the positive boundary puncture. Assume $(H^0,J^0)$ is $1$-periodic in time. Pick a regular compatible perturbation datum $(K,I)$. The matrix elements \S\ref{par:matrix_elts_operations} define an operation
$$\Phi_{\Sigma_\bullet;K,I} {:\ } CF_j(H^0) \otimes CF_k(H^1:L) \to CF_{j+k-2n} (H^2:L)\,.$$
As usual, one shows that this is a chain map, that the resulting map on homology is independent of the auxiliary choices, and that it respects the continuation maps; therefore it defines a bilinear operation on abstract homologies:
$$\bullet {:\ } HF_*(M) \otimes HF_*(L) \to HF_{*-2n}(L)\,.$$
Gluing $\Sigma_\bullet$ to different surfaces, we can prove the following identities:
$$\bullet \circ (1 \otimes \id) = \id\,,$$
that is the unit $1 \in HF_{2n}(M)$ acts as a unit;
$$\bullet \circ (* \otimes \id) = \bullet \circ (\id \otimes \bullet)\,,$$
which means $\bullet$ defines a module action; finally, we have
$$\bullet \circ (\id \otimes \star) = \star \circ (\bullet \otimes \id) = \star \circ (\id \otimes \bullet)\circ (R \otimes \id)\,, $$
where
$$R {:\ } HF_*(M) \otimes HF_*(L) \to HF_*(L) \otimes HF_*(M)$$
is the interchange of factors multiplied with the corresponding Koszul signs. This means that $HF_*(L)$ becomes a superalgebra over $HF_*(M)$ by means of $\bullet$.
One can also substitute the Lagrangian unit into $\bullet$ and get the so-called closed-open map; since $\bullet$ has degree $-2n$ and the Lagrangian unit lies in degree $n$, this is a degree $-n$ operation
$$\bullet \circ (\id \otimes 1) {:\ } HF_*(M) \to HF_{*-n}(L)\,,$$
which can be shown to be an algebra morphism.
\subsection{Arbitrary rings and local coefficients}\label{ss:arbitrary_rings_loc_coeffs}
The above definitions are made over the ground ring ${\mathbb{Z}}$. Given an arbitrary commutative ring $R$, we can form Floer complexes over $R$ by tensoring, $- \otimes_{\mathbb{Z}} R$. Thus we obtain the Floer homology over $R$, and all the above algebraic operations become $R$-linear.
In case $2=0$ in $R$, we can form the Floer complex without making assumption \textbf{(O)}, as follows. For a nondegenerate Floer datum $(H,J)$ we define
$$CF_*(H:L) = \bigoplus_{\widetilde\gamma \in \Crit {\mathcal{A}}_{H:L}}R \cdot \widetilde\gamma\,.$$
The boundary operator is given by counting the moduli spaces ${\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$ modulo $2$. The algebraic structures are defined similarly.
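Explicitly, on generators this reads as follows (a sketch; the sum runs over those $\widetilde\gamma_+$ for which the moduli space being counted is zero-dimensional):
$$\partial_{H,J}\,\widetilde\gamma_- = \sum_{\widetilde\gamma_+ \in \Crit {\mathcal{A}}_{H:L}} \#_2\,{\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)\cdot\widetilde\gamma_+\,,$$
where $\#_2$ denotes the number of elements modulo $2$.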
Recall that a \textbf{local system} on a topological space $X$ is a functor from the fundamental groupoid $\Pi_1(X)$ to the category of groups. In more elementary terms, it is given by assigning a group $G_x$ to every point $x \in X$ and an isomorphism $G_x \simeq G_y$ for every homotopy class of paths from $x$ to $y$, where the isomorphisms behave coherently with respect to concatenation of paths. Similarly we can define a local system of $R$-modules.
Given a ground ring $R$, a \textbf{flat $R$-bundle} over $\widetilde\Omega_L$ is by definition just a local system of $R$-modules over $\widetilde\Omega_L$. Let ${\mathcal{E}}$ be such a bundle and for a path $u$ in $\widetilde\Omega_L$ running from $\widetilde\gamma_-$ to $\widetilde\gamma_+$ let
$${\mathcal{P}}_u {:\ } {\mathcal{E}}_{\widetilde\gamma_-} \to {\mathcal{E}}_{\widetilde\gamma_+}$$
be the corresponding parallel transport isomorphism. We can then form the Floer complex of $(H,J)$ \textbf{twisted by} ${\mathcal{E}}$, as follows. As an $R$-module, we have
$$CF_*(H:L;{\mathcal{E}}) = \bigoplus_{\widetilde\gamma \in \Crit {\mathcal{A}}_{H:L}} C(\widetilde\gamma) \otimes_R {\mathcal{E}}_{\widetilde\gamma}\,.$$
The boundary operator has matrix elements
$$\sum_{[u] \in {\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)} C(u) \otimes_R {\mathcal{P}}_u\,,$$
where we lift $u$ to a path in $\widetilde\Omega_L$. The corresponding Floer homology is denoted
$$HF_*(H,J:L;{\mathcal{E}})\,.$$
The continuation maps are similarly defined and we obtain the abstract Floer homology
$$HF_*(L;{\mathcal{E}})\,.$$
We will employ the following useful piece of notation. For $V$ a real $1$-dimensional vector space we let $|V|$ be its \textbf{normalization}, which is the rank $1$ free ${\mathbb{Z}}$-module generated by its two possible orientations, subject to the relation that their sum vanishes. If ${\mathcal{L}} \to B$ is a real line bundle, we let $|{\mathcal{L}}|$ be the flat ${\mathbb{Z}}$-bundle with fibers $|{\mathcal{L}}_b|$ for $b \in B$. For a ring $R$, the $R$-normalization of ${\mathcal{L}}$ is the flat locally free $R$-bundle of rank $1$ with fibers $|{\mathcal{L}}_b|\otimes_{\mathbb{Z}} R$.
\subsection{Duality}\label{ss:duality_HF}
\subsubsection{Dual complexes and dual Hamiltonians}\label{sss:dual_cxs_dual_Hams}
We only treat the Lagrangian case in detail, the Hamiltonian case being entirely analogous. Fix a regular Floer datum $(H,J)$ on the strip $S$. We define
$$CF^*(H:L) = \bigoplus_{\widetilde y \in \Crit {\mathcal{A}}_{H:L}}C(\widetilde y)^\vee$$
where
$$C(\widetilde y)^\vee = \Hom_{\mathbb{Z}}(C(\widetilde y),{\mathbb{Z}})\,.$$
We grade this by $m_{H:L}$. Note for future use that there is a canonical isomorphism
$$C(\widetilde y)^\vee \equiv C(\widetilde y)\,,\quad c^\vee \mapsto c\,,$$
where $c \in C(\widetilde y)$ is a generator and $\langle c^\vee, c\rangle = 1$. The differential $\partial_{H,J}^\vee$ on $CF^*(H:L)$ is defined to be the dual of $\partial_{H,J}$: its matrix element
$$C(\widetilde y_+)^\vee \to C(\widetilde y_-)^\vee$$
is the dual of the matrix element of $\partial_{H,J}$ as a map $C(\widetilde y_-) \to C(\widetilde y_+)$ for $\widetilde y_\pm \in \Crit {\mathcal{A}}_{H:L}$ of index difference $1$. We define another differential
$$\delta_{H,J} {:\ } CF^k(H,J:L) \to CF^{k+1}(H,J:L)\quad \text{by} \quad \delta_{H,J} = (-1)^{k-1}\partial_{H,J}^\vee\,.$$
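For the reader's convenience, here is the purely formal check that $\delta_{H,J}$ squares to zero; it uses only $\partial_{H,J}^2 = 0$ and the fact that dualization reverses the order of composition:
$$\delta_{H,J}^2\big|_{CF^k(H,J:L)} = (-1)^{k}\partial_{H,J}^\vee \circ (-1)^{k-1}\partial_{H,J}^\vee = -\big(\partial_{H,J} \circ \partial_{H,J}\big)^\vee = 0\,.$$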
We refer to the cochain complex
$$(CF^*(H,J:L),\delta_{H,J})$$
as the \textbf{dual complex} and we let
$$HF^*(H,J:L)$$
be its cohomology, called the \textbf{Floer cohomology} of $(H,J)$.
In an analogous fashion we can define the twisted dual complex
$$CF^*(H:L;{\mathcal{E}})$$
for a flat ${\mathbb{Z}}$-bundle ${\mathcal{E}}$ over $\widetilde\Omega_L$ and the corresponding cohomology
$$HF^*(H,J:L;{\mathcal{E}})\,.$$
Let us define the dual Hamiltonian of $H$ to be
$$\overline H(t,x) = -H(1-t,x)\,.$$
This generates the flow obtained from the flow of $H$ by retracing it backward, that is $\phi^t_{\overline H} = \phi^{1-t}_H\phi_H^{-1}$. There is a bijection between orbits of $H$ and $\overline H$ given by
$$y \mapsto \overline y\,,\quad \overline y(t) = y(1-t)\,.$$
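Both assertions follow directly from the definitions; here is a sketch of the verification, writing $X_H(t,\cdot)$ for the Hamiltonian vector field of $H(t,\cdot)$ (a notational convention used only for this computation) and $\phi_H = \phi^1_H$ for the time-one map. Since $X_{\overline H}(t,\cdot) = -X_H(1-t,\cdot)$, we have
$$\frac{d}{dt}\Big(\phi^{1-t}_H\phi_H^{-1}\Big) = -X_H(1-t,\cdot)\circ\Big(\phi^{1-t}_H\phi_H^{-1}\Big) = X_{\overline H}(t,\cdot)\circ\Big(\phi^{1-t}_H\phi_H^{-1}\Big)\,,\qquad \phi^{1-t}_H\phi_H^{-1}\Big|_{t=0} = \id\,,$$
so indeed $\phi^t_{\overline H} = \phi^{1-t}_H\phi_H^{-1}$; likewise $\dot{\overline y}(t) = -\dot y(1-t) = -X_H(1-t,y(1-t)) = X_{\overline H}(t,\overline y(t))$, so $\overline y$ is a $1$-periodic orbit of $\overline H$ whenever $y$ is one of $H$.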
If $\widehat y$ is a capping of $y$, then
$$\overline{\widehat y} {:\ } \dot D^2 \to M \quad \text{defined by} \quad \overline{\widehat y} (\sigma,\tau) = \widehat y(\sigma,-\tau)$$
is a capping of $\overline y$ (using the same end for both maps). This establishes a bijection
$$\Crit {\mathcal{A}}_{H:L} \simeq \Crit {\mathcal{A}}_{\overline H:L}\,,\quad \widetilde y=[y,\widehat y] \mapsto \overline{\widetilde y}:=[\overline y,\overline{\widehat y}]\,.$$
We have
$$m_{\overline H:L}(\overline{\widetilde y}) = n - m_{H:L}(\widetilde y)\quad \text{and} \quad {\mathcal{A}}_{\overline H:L}(\overline{\widetilde y}) = - {\mathcal{A}}_{H:L}(\widetilde y)\,.$$
The first relation is established in \eqref{eqn:grading_dual_cx_Lagr_HF} below.
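The second relation can also be checked directly; a sketch, under the standard assumption that the action functional is built from a Hamiltonian term and the symplectic area of the capping (with whichever signs were fixed earlier): both terms change sign under $\widetilde y \mapsto \overline{\widetilde y}$, since
$$\int_0^1 \overline H\big(t,\overline y(t)\big)\,dt = -\int_0^1 H\big(1-t,y(1-t)\big)\,dt = -\int_0^1 H\big(s,y(s)\big)\,ds \quad\text{and}\quad \int \overline{\widehat y}{}^{\,*}\omega = -\int \widehat y^{\,*}\omega\,,$$
the latter because $(\sigma,\tau) \mapsto (\sigma,-\tau)$ reverses orientation; hence ${\mathcal{A}}_{\overline H:L}(\overline{\widetilde y}) = - {\mathcal{A}}_{H:L}(\widetilde y)$.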
We have the regular Floer datum $(\overline H,\overline J)$ where $\overline J_t(x) = J_{1-t}(x)$. Therefore the Floer complex
$$(CF_*(\overline H:L),\partial_{\overline H,\overline J})$$
is well-defined. For $\widetilde y_\pm \in \Crit {\mathcal{A}}_{H:L}$ of index difference $1$ we have a canonical diffeomorphism between moduli spaces
$$\widetilde {\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+) \simeq \widetilde {\mathcal{M}}(\overline H,\overline J; \overline{\widetilde y}_+,\overline{\widetilde y}_-)\,,\quad u \mapsto \overline u\,,\text{ where }\overline u(s,t) = u(-s,1-t)\,.$$
For a capping $\widehat y$ of an orbit $y$ of $H$ let us define the map
$$-\widehat y {:\ } \dot D^2_- \to M\,,\quad -\widehat y(\sigma,\tau) = \widehat y(-\sigma,\tau)\,.$$
Here $\dot D^2_- = D^2 - \{-1\}$ has $-1$ as a negative puncture; we endow it with the standard negative end given by precomposing the standard end \eqref{eqn:std_end_cappings_Lagr_HF} with $(s,t) \mapsto (-s,-t)$. The map $\phi {:\ } \dot D^2 \to \dot D^2_-$, $\phi(z) = -z$ is a conformal isomorphism which also intertwines the standard ends. Therefore the linearized operators $D_{\overline{\widehat y}}$ and $D_{-\widehat y}$ are isomorphic in the sense of \S\ref{s:determinants}. It follows that their determinant lines are canonically isomorphic, and these isomorphisms in fact assemble into a canonical isomorphism of determinant bundles
\begin{equation}\label{eqn:iso_det_lines_cappings_duality_Lagr_HF}
\ddd(D_{\overline{\widetilde y}}) \simeq \ddd(D_{-\widetilde y})\,.
\end{equation}
Next, the maps $\widehat y,-\widehat y$ have matching asymptotics and therefore can be preglued. It is not hard to see that the resulting map is homotopic through maps $(D^2,\partial D^2) \to (M,L)$ to the constant map at $y(0)$. Using the direct sum, linear gluing, and deformation isomorphisms, we obtain
$$\ddd(D_{\widehat y}) \otimes \ddd(D_{-\widehat y}) \simeq \ddd(D_{\widehat y} \oplus D_{-\widehat y}) \simeq \ddd(T_{y(0)}L)\,.$$
In particular $n = \ind 0_{T_{y(0)}L} = \ind D_{\widehat y} + \ind D_{-\widehat y}$ and thus
$$\ind D_{\overline {\widetilde y}} = \ind D_{-\widehat y} = n - \ind D_{\widehat y}\,,$$
which implies
\begin{equation}\label{eqn:grading_dual_cx_Lagr_HF}
m_{\overline H:L}(\overline{\widetilde y}) = n - \ind D_{\overline{\widetilde y}} = \ind D_{\widehat y} = n - m_{H:L}(\widetilde y)\,.
\end{equation}
We thus have a canonical isomorphism
$$\ddd(D_{\widehat y}) \otimes \ddd(D_{\overline{\widehat y}}) \simeq \ddd(T_{y(0)}L)\,.$$
Tensoring with $\ddd(T_{y(0)}L)$ and noting that the square of a real line bundle is canonically oriented, we obtain
$$C(\widetilde y) \otimes |{\mathcal{L}}_{\widetilde y}| \otimes C(\overline{\widetilde y}) \simeq {\mathbb{Z}}\,,$$
where ${\mathcal{L}}$ is the flat ${\mathbb{Z}}$-bundle on $\widetilde \Omega_L$ obtained by pulling back the bundle $|\ddd(TL)|$ on $L$ via the evaluation map $\widetilde \Omega_L \to L$, $\widetilde y \mapsto y(0)$.
Therefore we have the isomorphism
$$CF^{n-*}(H:L;{\mathcal{L}}) \simeq CF_*(\overline H:L)$$
of graded ${\mathbb{Z}}$-modules. In fact this can be extended to a chain isomorphism, as follows. We have the following commutative diagram for $\widetilde y_\pm \in\Crit {\mathcal{A}}_{H:L}$ of index difference $1$ and $u \in \widetilde{\mathcal{M}}(H,J;\widetilde y_-,\widetilde y_+)$, obtained by employing the direct sum, linear gluing, and deformation isomorphisms:
$$\xymatrix{\ddd(TL) \ar@{=}[r]& \ddd(D_{\widehat y_-} \oplus D_u \oplus D_{-\widehat y_+}) & \ddd(D_{\widehat y_+}) \otimes \ddd(D_{-\widehat y_+}) \ar [l]\\
&\ddd(D_{\widehat y_-}) \otimes \ddd(D_{-\widehat y_-}) \ar[u] & \ddd(D_{\widehat y_-}) \otimes \ddd(D_u) \otimes \ddd(D_{-\widehat y_+}) \ar[u]^{(C(u) \otimes \id) \circ (R \otimes \id)} \ar[l]}$$
where $R$ is the interchange of factors together with the Koszul sign $(-1)^{\ind D_u\cdot |\widetilde y_-|'} = (-1)^{|\widetilde y_-|'}$ (recall that $\ind D_u = 1$ here). Fix ${\mathfrak{o}}_{y_-} \in C(\widetilde y_-)$ and ${\mathfrak{o}} \in \ddd(TL)$ and let ${\mathfrak{o}}_{y_+} = C(u)({\mathfrak{o}}_{y_-})$, and ${\mathfrak{o}}_{-y_\pm} \in \ddd(D_{-\widetilde y_\pm})$ be such that the isomorphisms
$$\ddd(TL) \simeq \ddd(D_{\widetilde y_\pm}) \otimes \ddd(D_{-\widetilde y_\pm})$$
map ${\mathfrak{o}} \mapsto {\mathfrak{o}}_{y_\pm}\otimes {\mathfrak{o}}_{-y_\pm}$. Then the above diagram maps
$$\xymatrix{{\mathfrak{o}} & {\mathfrak{o}}_{y_+} \otimes {\mathfrak{o}}_{-y_+} \ar@{|->} [l]\\
{\mathfrak{o}}_{y_-} \otimes {\mathfrak{o}}_{-y_-} \ar @{|->} [u] & (-1)^{|\widetilde y_-|'} {\mathfrak{o}}_{y_-} \otimes \partial_u \otimes {\mathfrak{o}}_{-y_+} \ar @{|->} [u] \ar @{|->} [l]}$$
The dual diagram, obtained by replacing all the operators with their ``minus-bar'' version, reads:
$$\xymatrix{\ddd(TL) \ar@{=}[r]& \ddd(D_{-\overline{\widehat y}_-} \oplus D_{\overline u} \oplus D_{\overline{\widehat y}_+}) & \ddd(D_{-\overline{\widehat y}_+}) \otimes \ddd(D_{\overline{\widehat y}_+}) \ar [l]\\
&\ddd(D_{-\overline{\widehat y}_-}) \otimes \ddd(D_{\overline{\widehat y}_-}) \ar[u] & \ddd(D_{-\overline{\widehat y}_-}) \otimes \ddd(D_{\overline u}) \otimes \ddd(D_{\overline{\widehat y}_+}) \ar[u] \ar[l]_-{\id \otimes C(\overline u)}}$$
Let now ${\mathfrak{o}}_{\overline y_\pm} \in \ddd(D_{\overline{\widetilde y}_\pm})$ correspond to ${\mathfrak{o}}_{-y_\pm} \in \ddd(D_{-\widetilde y_\pm})$ under the isomorphism \eqref{eqn:iso_det_lines_cappings_duality_Lagr_HF} and let ${\mathfrak{o}}'_{\overline y_-} = C(\overline u) ({\mathfrak{o}}_{\overline y_+})$. The dual diagram maps
$$\xymatrix{{\mathfrak{o}} & {\mathfrak{o}}_{-\overline y_+} \otimes {\mathfrak{o}}_{\overline y_+} \ar@{|->} [l]\\
{\mathfrak{o}}_{-\overline y_-} \otimes {\mathfrak{o}}_{\overline y_-} \ar @{|->} [u] & (-1)^{|\widetilde y_-|'} {\mathfrak{o}}_{-\overline y_-} \otimes \partial_{\overline u} \otimes {\mathfrak{o}}_{\overline y_+} \ar @{|->} [u] \ar @{|->} [l]}$$
whence it follows that $C(\overline u)$ maps ${\mathfrak{o}}_{\overline y_+}$ to $(-1)^{|\widetilde y_-|'}{\mathfrak{o}}_{\overline y_-}$. Thus the following diagram commutes:
$$\xymatrix{C(\overline{\widetilde y}_+) \ar[d]^{C(\overline u)} \ar[r] & [C(\widetilde y_+) \otimes {\mathcal{L}}_{\widetilde y_+}]^\vee \ar [d] ^{(-1)^{|\widetilde y_-|'} [C(u) \otimes {\mathcal{P}}_u]^\vee}\\
C(\overline{\widetilde y}_-) \ar[r] & [C(\widetilde y_-) \otimes {\mathcal{L}}_{\widetilde y_-}]^\vee}$$
This means that we have obtained a canonical isomorphism of chain complexes
\begin{equation}\label{eqn:duality_iso_chain_cxs_Lagr_HF}
(CF^{n-*}(H:L;{\mathcal{L}}),\delta_{H,J}\otimes {\mathcal{P}}) = (CF_*(\overline H:L),\partial_{\overline H,\overline J})\,,
\end{equation}
where ${\mathcal{P}}$ is the parallel transport operator of ${\mathcal{L}}$. In particular we obtain a canonical isomorphism of homologies
$$HF^{n-*}(H,J:L;{\mathcal{L}}) \simeq HF_*(\overline H, \overline J:L)\,.$$
This is the \textbf{duality isomorphism}.
\subsubsection{Augmentation}\label{sss:augmentation_HF}
As we saw above in \S\ref{sss:unit_Lagr_QH}, the unit in Floer homology can be viewed as a chain map
$${\mathbb{Z}}[n] \to CF_*(\overline H:L)\,.$$
Therefore we have the dual map
$$CF^*(\overline H:L) \to {\mathbb{Z}}[n]\,,$$
which is also a chain map. Since the former cochain complex is canonically isomorphic to $CF_{n-*}(H:L;{\mathcal{L}})$ by the duality isomorphism \eqref{eqn:duality_iso_chain_cxs_Lagr_HF}, we get the chain map
$$CF_*(H:L;{\mathcal{L}}) \to {\mathbb{Z}}\text{ (in degree zero)}\,.$$
This is the \textbf{augmentation map}. Since it is a chain morphism, we get the induced map on Floer homology
$$HF_*(H,J:L;{\mathcal{L}}) \to {\mathbb{Z}}\,.$$
Since the unit commutes with continuation maps, the same is true of the augmentation and therefore we obtain the canonical augmentation map on the abstract Floer homology
$$\epsilon {:\ } HF_*(L;{\mathcal{L}}) \to {\mathbb{Z}}\,.$$
\section{Quantum homology}\label{s:QH}
In this section we define Lagrangian quantum homology as well as the quantum homology of our symplectic manifold $M$, and various algebraic operations on them. Lagrangian quantum homology was constructed by Biran--Cornea, see \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} and references therein. The present section uses the analytical results of these papers. Our contribution here is the precise construction of relevant Cauchy--Riemann operators, their determinant lines and the relation between these, and the ensuing construction of the canonical quantum complex over an arbitrary ground ring, without using relative Pin-structures, including the minimal condition for which such a construction is possible (assumption \textbf{(O)}), and also a proof of the fact that the quantum boundary operator squares to zero in the case $N_L = 2$, see \S\ref{s:boundary_op_squares_zero_bubbling}.
\S\ref{ss:boundary_gluing} is concerned with gluing of Riemann surfaces and Cauchy--Riemann operators on them at a boundary or an interior point; the material presented here also appears in condensed form in \cite{Seidel_The_Book}. In \S\ref{ss:Lagr_QH} we define the quantum complex of a quantum datum, the corresponding boundary operator, and prove it squares to zero. We also define the quantum product and the corresponding unit. \S\ref{ss:arbitrary_rings_loc_coeffs_Lagr_QH}, \S\ref{ss:duality_Lagr_QH} are concerned with arbitrary coefficients and duality in quantum homology. \S\ref{ss:QH_of_M} defines the quantum homology of $M$ as well as its module action on the Lagrangian quantum homology. In \S\ref{ss:spectral_seqs} we describe the natural spectral sequence which starts with the (twisted) Morse complex and converges to the quantum homology.
\subsection{Gluing at a boundary or an interior point}\label{ss:boundary_gluing}
There is one technical aspect of Cauchy--Riemann operators not covered in \S\ref{s:HF}, namely gluing of Riemann surfaces, Cauchy--Riemann operators on them, and pregluing smooth maps, at a boundary or at an interior point rather than at a puncture. This subsection collects the necessary definitions and facts regarding these operations.
\paragraph{Riemann surfaces and smooth maps}
Given two Riemann surfaces $\Sigma_i$, $i=1,2$ and points $z_i \in \Sigma_i$, both either boundary or interior, we can form the glued surface $\Sigma_1 \sharp \Sigma_2$ by choosing collars around the points $z_i$, a gluing length, and identifying punctured neighborhoods of the $z_i$ according to the collars and the gluing length. If $u_i {:\ } (\Sigma_i,\partial \Sigma_i) \to (M,L)$ are smooth maps, the points $z_i \in \Sigma_i$ are both either boundary or interior, and $u_1(z_1) = u_2(z_2)$, we can use the expression of the $u_i$ near $z_i$ via exponential maps, similarly to what we described in \S\ref{ss:b_smooth_maps_pregluing}, to preglue $u_1$ and $u_2$ to a smooth map $u_1 \sharp u_2 {:\ } \Sigma_1 \sharp \Sigma_2 \to M$. Of course, if the $\Sigma_i$ have punctures and the $u_i$ are b-smooth, the resulting surface $\Sigma_1 \sharp \Sigma_2$ will inherit the punctures, and the preglued map $u_1\sharp u_2$ will be b-smooth as well.
\paragraph{Cauchy--Riemann operators}
Gluing Cauchy--Riemann operators at a boundary or interior point consists of two stages. At the first stage we assume given two Riemann surfaces $\Sigma_i$, $i=1,2$, two points $z_i \in \Sigma_i$, both boundary or both interior, and Hermitian bundle pairs $(E_i,F_i) \to (\Sigma_i,\partial \Sigma_i)$ endowed with connections $\nabla_i$. These give rise to the associated Cauchy--Riemann operators
$$D_i := \overline\partial_{\nabla_i} = \nabla_i^{0,1} {:\ } W^{1,p}(\Sigma_i, \partial \Sigma_i; E_i, F_i) \to L^p(\Sigma_i, \Omega_{\Sigma_i}^{0,1} \otimes E_i)\,.$$
These are Fredholm.\footnote{In case the $\Sigma_i$ have punctures, we need to assume in addition that the $D_i$ are admissible and nondegenerate, however we suppress these details for the sake of clarity.} Assume we are given a unitary isomorphism $(E_1)_{z_1} \simeq (E_2)_{z_2}$, which in case the $z_i$ are boundary points also identifies $(F_1)_{z_1}$ with $(F_2)_{z_2}$. Assume first that the $z_i$ are boundary points. Using the identification, we can produce the following exact Fredholm triple:
$$\xymatrix{0 \ar[r] & W^{1,p}_1 \sharp W^{1,p}_2 \ar[r] \ar[d]^{D_1 \sharp D_2} & W^{1,p}_1 \oplus W^{1,p}_2 \ar[rr]^-{\ev_{z_1} - \ev_{z_2}} \ar[d]^{D_1 \oplus D_2} & & (F_1)_{z_1} \ar[r] \ar[d] & 0\\
0 \ar[r] & L^p_1 \oplus L^p_2 \ar@{=}[r] & L^p_1 \oplus L^p_2 \ar[rr] & & 0}$$
where
$$W^{1,p}_i = W^{1,p}(\Sigma_i, \partial \Sigma_i; E_i, F_i)\,,\quad L^p_i = L^p(\Sigma_i, \Omega_{\Sigma_i}^{0,1} \otimes E_i)\,,$$
$$W^{1,p}_1 \sharp W^{1,p}_2 = \{(\xi_1,\xi_2) \in W^{1,p}_1 \oplus W^{1,p}_2\,|\,\xi_1(z_1) = \xi_2(z_2)\}\,,\quad \text{and}\quad D_1 \sharp D_2 = (D_1 \oplus D_2)|_{W^{1,p}_1 \sharp W^{1,p}_2}\,.$$
This exact triple gives rise to the isomorphism
\begin{equation}\label{eqn:boudnary_gluing_iso}
\ddd(D_1 \sharp D_2) \otimes \ddd((F_1)_{z_1}) \to \ddd(D_1 \oplus D_2)\,,
\end{equation}
which we refer to as the \textbf{boundary gluing isomorphism}. Note that it depends on the ordering of the operators. It can be checked that if we exchange the operators $D_1, D_2$, which amounts to replacing the map $\ev_{z_1} - \ev_{z_2}$ with $\ev_{z_2} - \ev_{z_1}$, then the above isomorphism is multiplied by $(-1)^{\dim (F_1)_{z_1}}$; indeed, the new evaluation map is the old one composed with $-\id_{(F_1)_{z_1}}$, whose determinant is $(-1)^{\dim (F_1)_{z_1}}$.
If the $z_i$ are interior, a similar argument yields the isomorphism
$$\ddd(D_1 \sharp D_2) \otimes \ddd((E_1)_{z_1}) \to \ddd(D_1 \oplus D_2)\,,$$
and if we use the canonical complex orientation of $(E_1)_{z_1}$, then we simply get
$$\ddd(D_1 \sharp D_2) \to \ddd(D_1 \oplus D_2)\,.$$
This isomorphism is independent of the ordering of $D_1,D_2$: by the same argument as in the boundary case, exchanging them multiplies it by $(-1)^{\dim_{\mathbb{R}} (E_1)_{z_1}}$, which equals $+1$ since $(E_1)_{z_1}$ is a complex vector space.
At the second stage we produce an operator on the glued surface $\Sigma_1 \sharp \Sigma_2$. First we need to glue the Hermitian bundle pairs by identifying them over the collars; this can be done using the given identification $(E_1)_{z_1} \simeq (E_2)_{z_2}$. Then we can deform the operator $D_1 \sharp D_2$, for example by deforming the connections, so that the resulting operators over the collars coincide relative to the identification of the bundles. This then defines an operator on the glued bundle pair. The resulting deformation of Cauchy--Riemann operators produces an isomorphism between $\ddd(D_1 \sharp D_2)$ and the determinant line of the glued operator. By slightly abusing notation, we denote the glued operator by the same symbol $D_1 \sharp D_2$. We refer to either one of these operators as being \textbf{boundary glued} from $D_1,D_2$.
\paragraph{Smooth maps and their linearizations} Finally, assume we have two smooth maps $u_i {:\ } (\Sigma_i,\partial \Sigma_i) \to (M,L)$ with $u_1(z_1) = u_2(z_2)$, and consider the preglued map $u_1 \sharp u_2$. The linearizations $D_{u_1}$ and $D_{u_2}$ can be glued according to the previous paragraph, since we of course have an identification $(E_{u_1})_{z_1} = (E_{u_2})_{z_2} = T_{u_1(z_1)}M$ and similarly for $(F_i)_{z_i} = T_{u_1(z_1)}L$. The glued operator $D_{u_1} \sharp D_{u_2}$ can then be deformed into the linearization $D_{u_1\sharp u_2}$. In particular we have the canonical deformation isomorphism
$$\ddd(D_{u_1} \sharp D_{u_2}) \simeq \ddd(D_{u_1 \sharp u_2})\,.$$
\subsection{Lagrangian quantum homology}\label{ss:Lagr_QH}
Given a Morse function $f$ on $L$ we let $\Crit f$ be the set of its critical points. For $q \in \Crit f$ we denote its Morse index by $|q|_f$, or usually just by $|q|$. If $\rho$ is a Riemannian metric on $L$, we let ${\mathcal{S}}_f(q),{\mathcal{U}}_f(q)$ be the stable and unstable manifolds of $f$ with respect to $\rho$ at a critical point $q$. Usually we will drop the subscript $_f$ if the function is clear from the context. We call $(f,\rho)$ a Morse-Smale pair if every stable manifold of $f$ intersects every unstable manifold of $f$ transversely. In this case the set
$$\widetilde{\mathcal{M}}(q,q') = {\mathcal{U}}(q) \cap {\mathcal{S}}(q') \subset L$$
is naturally a smooth manifold of dimension $|q| - |q'|$ for any $q,q' \in \Crit f$.
A \textbf{quantum homology datum} for $L$ is a triple ${\mathcal{D}} = (f,\rho,J)$, where $f \in C^\infty(L)$ is a Morse function, $\rho$ a Riemannian metric on $L$, such that $(f,\rho)$ is a Morse-Smale pair, and $J$ is an $\omega$-compatible almost complex structure. We call ${\mathcal{D}}$ regular if $J$ is chosen generically, in the sense that the various pearly moduli spaces defined below are transversely cut out, in the sense of Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}.
\subsubsection{Generators and the complex as a ${\mathbb{Z}}$-module}\label{sss:generators_cx_Lagr_QH}
Fix a quantum homology datum ${\mathcal{D}} = (f,\rho, J)$ for $L$. The ${\mathbb{Z}}$-module underlying the quantum complex of $L$ is defined as
$$QC_*({\mathcal{D}}:L) = \bigoplus_{\substack{q \in \Crit f\\ A \in \pi_2(M,L,q)}}C(q,A)$$
where $C(q,A)$ is a certain rank $1$ free ${\mathbb{Z}}$-module associated to the pair $(q,A)$, which will be defined below. The grading is determined by requiring the elements of $C(q,A)$ to be homogeneous of degree $|q| - \mu(A)$.
In order to define the module $C(q,A)$ for $q \in \Crit f$ and $A \in \pi_2(M,L,q)$, we need a preliminary construction. First let us fix once and for all an arbitrary background connection $\nabla$ on $M$. Set
$$C^\infty_A = \{u \in C^\infty(D^2,S^1,1;M,L,q)\,|\, [u] = A\}\,.$$
For $u \in C^\infty_A$ we have the bundle pair $(E_u,F_u) \to (D^2,S^1)$ given by $E_u = u^*TM$, $F_u = (u|_{S^1})^*TL$. This carries the Hermitian structure $\omega_u = u^*\omega$, $J_u = u^*J$. We let $\nabla_u = u^*\nabla$ be the induced connection on $E_u$. We then have the associated Cauchy--Riemann operator
$$D_u = \nabla_u^{0,1} {:\ } W^{1,p}(D^2,S^1;E_u,F_u) \to L^p(D^2; \Omega_{D^2}^{0,1} \otimes E_u)$$
where we have extended it to the Sobolev completions. This operator is Fredholm with
$$\ind D_u = n + \mu(A)\,.$$
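This is the standard Riemann--Roch index formula for Cauchy--Riemann operators on a bundle pair over a Riemann surface with boundary; schematically, for the disk,
$$\ind D_u = n\,\chi(D^2) + \mu(E_u,F_u) = n + \mu(A)\,,$$
since $\chi(D^2) = 1$ and the boundary Maslov index of the pair $(E_u,F_u)$ equals $\mu(A)$.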
Thus we have the family of Fredholm operators
$$D_A = (D_u)_{u \in C^\infty_A}\,,$$
and the corresponding determinant line bundle \footnote{In contrast to the situation in \S\ref{ss:CROs_from_b_smooth_maps} where we had to include the auxiliary choices of an almost complex structure and a connection into the parameter space, here there is no need to do that.} $\det D_A$ over $C^\infty_A$. If we let $C_A$ be the space of continuous disks in class $A$, the inclusion $C^\infty_A \to C_A$ is a homotopy equivalence. The latter space can be identified with the space of continuous maps $([0,1]^2,\partial[0,1]^2,\{0,1\} \times [0,1] \cup [0,1] \times \{0\}) \to (M,L,q)$ in class $A$. Such continuous maps can be concatenated using the first coordinate of $[0,1]^2$ (this corresponds to the product in $\pi_2(M,L,q)$). For a map $u {:\ } [0,1]^2 \to M$ we let $-u {:\ } [0,1]^2 \to M$ be defined via $-u(s,t) = u(1-s,t)$. We have the following foundational lemma.
\begin{lemma} Fix $u_0 \in C^\infty_A$ and view it as a map of the square to $M$ as just described. The concatenation map $C_A \to C_0$, $u \mapsto u\sharp - u_0$, is a homotopy equivalence, where $C_0$ is the space of contractible disks at $q$. In particular the fundamental group of $C^\infty_A$ is isomorphic to $\pi_1(C_0,q)$, which in turn is canonically isomorphic to $\pi_3(M,L,q)$, since $\pi_3(M,L,q)$ is abelian. Moreover, relative to this identification, the first Stiefel--Whitney class of the line bundle $\det D_A$ satisfies
$$w_1(\det D_A) = w_2(TL) \circ \partial {:\ } \pi_3(M,L,q) \to {\mathbb{Z}}_2$$
where $\partial {:\ } \pi_3(M,L,q) \to \pi_2(L,q)$ is the boundary map. \qed
\end{lemma}
\noindent The proof of the homotopy part of the statement is left to the reader as an exercise. The second part is proved very similarly to the proof of Lemma \ref{lem:first_Stiefel_Whitney_dD_wt_y_HF} concerning the case of Floer homology.
This lemma shows that $\det D_A$ is orientable if and only if $w_2(TL) \circ \partial$ vanishes, that is if assumption \textbf{(O)} holds. We assume this vanishing from this point on.
There is an evaluation map \footnote{This is well-defined because $p > 2$ implies the elements of $W^{1,p}$ are continuous.}
$$\ev_1 {:\ } W^{1,p}(D^2,S^1;E_u,F_u) \to T_qL\,, \quad \xi \mapsto \xi(1)\,,$$
and we let $D_u \sharp T{\mathcal{S}}(q)$ be the restriction of $D_u$ to $\ev_1^{-1}(T_q{\mathcal{S}}(q))$. Sometimes we will employ the full notation $D_u \sharp T_q{\mathcal{S}}_f(q)$. We have
\begin{lemma}\label{lem:family_ops_orientable_def_cx_QH}
The family of operators $D_A \sharp T{\mathcal{S}}(q) := (D_u \sharp T{\mathcal{S}}(q))_{u \in C^\infty_A}$ is orientable.
\end{lemma}
\begin{prf}
We have the natural exact Fredholm triple
$$0 \to D_u \sharp T{\mathcal{S}}(q) \to D_u \oplus 0_{T_q{\mathcal{S}}(q)} \xrightarrow{\ev_1 - \,\text{inclusion}} 0_{T_qL} \to 0\,.$$
Together with the direct sum isomorphism, it yields the isomorphisms
$$\ddd(D_u) \otimes \ddd(T{\mathcal{S}}(q)) \simeq \ddd(D_u \oplus 0_{T_q{\mathcal{S}}(q)}) \simeq \ddd(D_u\sharp T{\mathcal{S}}(q)) \otimes \ddd(T_qL)$$
which are continuous with respect to $u$. We see that we have obtained an isomorphism of line bundles
$$\ddd(D_A) \simeq \ddd(D_A \sharp T{\mathcal{S}}(q))\,,$$
whence the bundle $\ddd(D_A \sharp T{\mathcal{S}}(q))$ is orientable. \qed
\end{prf}
We can now define $C(q,A)$: it is the rank $1$ free ${\mathbb{Z}}$-module generated by the two possible orientations of the line bundle $\ddd(D_A \sharp T{\mathcal{S}}(q))$. Note that this makes sense because $C^\infty_A$ is connected. The definition of the graded ${\mathbb{Z}}$-module $QC_*({\mathcal{D}}:L)$ is thereby completed.
\subsubsection{Boundary operator}\label{sss:boundary_op_Lagr_QH}
To define the boundary operator, we need first to define the spaces of pearls. We follow \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}. Fix $q,q' \in \Crit f$. We will define the space of parametrized pearls $\widetilde {\mathcal{P}}_k(q,q')$ for $k \geq 0$. We define
$$\widetilde {\mathcal{P}}_0(q,q')$$
to be the space of parametrized negative gradient lines of $f$ from $q$ to $q'$. It is naturally identified with $\widetilde{\mathcal{M}}(q,q')$ via $u \mapsto u(0)$. This is a smooth manifold of dimension $|q| - |q'|$. It admits a natural ${\mathbb{R}}$-action and we let
$${\mathcal{P}}_0(q,q') = \widetilde {\mathcal{P}}_0(q,q')/{\mathbb{R}}$$
be the quotient space if this action is free. When it is not free, which is the case if and only if $q = q'$, we let ${\mathcal{P}}_0(q,q') = \varnothing$. Note that $\dim {\mathcal{P}}_0(q,q') = |q| - |q'| - 1$.
Assume now $k > 0$. We have the evaluation map
$$\ev {:\ } (C^\infty(D^2,S^1;M,L))^k \to L^{2k}\,,\quad u = (u_1,\dots,u_k) \mapsto (u_1(-1),u_1(1),\dots,u_k(-1),u_k(1))\,.$$
We denote by $\phi_f^t {:\ } L \to L$ the time-$t$ flow map of the negative gradient $-\nabla_\rho f$. We let
$$Q_{f,\rho} = \{(x,\phi_f^t(x))\,|\, x \notin \Crit f\,, t > 0\} \subset L^2$$
be the flow manifold. We usually abbreviate this to $Q$. Note for further use that this has dimension $n+1$. We will also need the extended flow manifold
$$\overline Q = \{(x,\phi_f^t(x))\,|\, x \notin \Crit f\,, t\geq 0\}\,,$$
which is a manifold with boundary. This $\overline Q$ carries a natural hyperplane distribution, which we denote $\Gamma \subset T\overline Q$; this is just the collection of graphs of the differentials of the flow maps:
$$\Gamma_{(x,\phi_f^t(x))} = \{(X,\phi_{f*}^t(X))\,|\,X \in T_xL\} \subset T_{(x,\phi_f^t(x))}\overline Q\,.$$
For future use note that if $(x,y) \in Q$, then we have a natural basis of $T_{(x,y)}Q/\Gamma_{(x,y)}$, given by the coset of the vector $(-\nabla_\rho f(x),0)$:
\begin{equation}\label{eqn:basis_vector_for_TQ_mod_Gamma}
e_{x,y}:=(-\nabla_\rho f(x),0) + \Gamma_{(x,y)}\in T_{(x,y)}Q/\Gamma_{(x,y)}\,.
\end{equation}
Let $\widetilde{\mathcal{M}}(L;J)$ be the space of parametrized nonconstant $J$-holomorphic disks with boundary on $L$. The \textbf{pearly spaces} are then defined to be
$$\widetilde{\mathcal{P}}_k(q,q') = \ev^{-1}({\mathcal{U}}(q) \times Q^{k-1} \times {\mathcal{S}}(q')) \cap (\widetilde{\mathcal{M}}(L;J))^k \subset (\widetilde {\mathcal{M}}(L;J))^k\,,\quad \text{ and }\quad \widetilde{\mathcal{P}}(q,q') = \bigcup_{k \geq 0} \widetilde{\mathcal{P}}_k(q,q')\,.$$
We identify the group of conformal automorphisms of $(D^2,\partial D^2,\pm 1)$ with ${\mathbb{R}}$, as follows. The formula \eqref{eqn:std_end_cappings_Lagr_HF} defines a biholomorphism between ${\mathbb{R}} \times [0,1]$ and $D^2 - \{\pm 1\}$. The group ${\mathbb{R}}$ acts on ${\mathbb{R}} \times [0,1]$ by translations to the right. The biholomorphism thus induces an isomorphism ${\mathbb{R}} \to \Aut(D^2,S^1,\pm 1)$, the latter being the group of conformal automorphisms of $D^2$ preserving $\pm 1$. We make ${\mathbb{R}}$ act on the set of smooth maps $(D^2,S^1) \to (M,L)$ via
$$\tau \cdot u = u(\cdot + \tau,\cdot)$$
relative to the coordinates $(s,t)$ on $D^2 - \{\pm 1\} \simeq {\mathbb{R}} \times [0,1]$.
This gives rise to an action of ${\mathbb{R}}^k$ on $\widetilde{\mathcal{P}}_k(q,q')$ by reparametrizations on each disk. We let ${\mathcal{P}}_k(q,q')$ and ${\mathcal{P}}(q,q')$ be the corresponding quotient spaces.
For $u \in (C^\infty (D^2,S^1;M,L))^k$ let
$$\mu(u):=\sum_i \mu(u_i)\,.$$
Biran--Cornea proved \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}:
\begin{prop}
There is a subset of ${\mathcal{J}}(M,\omega)$ of the second category such that for every $J$ in the subset, for each $k \geq 0$ and for each pair $q,q' \in \Crit f$ the space ${\mathcal{P}}_k(q,q')$ is a smooth manifold of local dimension at $[u] \in {\mathcal{P}}_k(q,q')$
$$\dim_{[u]}{\mathcal{P}}_k(q,q') = |q| - |q'| + \mu(u) - 1$$
whenever this number does not exceed $1$. \qed
\end{prop}
For the rest of \S\ref{sss:boundary_op_Lagr_QH} we assume that $J$ is chosen so that the proposition holds. We define for future use the elements
\begin{equation}\label{eqn:std_basis_vector_pearly_space_QH}
\partial_{u_i} \in T_u\widetilde{\mathcal{P}}_k(q,q')
\end{equation}
given by the infinitesimal action of ${\mathbb{R}}$ on $u_i$.
The boundary operator
$$\partial_{\mathcal{D}} {:\ } QC_*({\mathcal{D}}:L) \to QC_{*-1}({\mathcal{D}}:L)$$
will be defined in terms of its matrix elements, which are homomorphisms
$$C(q,A) \to C(q',A')$$
for $|q| - \mu(A) = |q'| - \mu(A') + 1$. We now proceed to define these matrix elements.
Fix $q,q' \in \Crit f$ and $A \in \pi_2(M,L,q)$. For any $u \in \ev^{-1}({\mathcal{U}}(q) \times \overline Q{}^{k-1} \times {\mathcal{S}}(q'))$ there is a natural way of constructing a homotopy class $A \sharp u \in \pi_2(M,L,q')$. Indeed, each disk $u_j$ can be viewed as a continuous map $([0,1]^2, \partial[0,1]^2) \to (M,L)$ which maps $\{0\} \times [0,1]$ to $u_j(-1)$ and $\{1\} \times [0,1]$ to $u_j(1)$. Every piece of gradient trajectory connecting either a critical point $q$ or $q'$ with $u_1(-1)$ or with $u_k(1)$, or $u_j(1)$ with $u_{j+1}(-1)$, can be viewed as a continuous map $([0,1]^2, \partial[0,1]^2) \to (M,L)$ which is independent of the second variable. Now take all these maps defined on $[0,1]^2$ and concatenate them using the first coordinate, with each other, and with a representative of $A$, also viewed as such a continuous map, where the concatenation order is dictated by the linear structure of the string $u$. This works in case $k=0$ as well.
For $u \in \widetilde{\mathcal{P}}_k(q,q')$ which satisfies $|q| - |q'| + \mu(u) - 1 = 0$, we will construct an isomorphism
$$C(u) {:\ } C(q,A) \to C(q',A\sharp u)\,.$$
Fix now $A' \in \pi_2(M,L,q')$ such that $|q| - \mu(A) = |q'| - \mu(A') + 1$. Then the corresponding matrix element of the boundary operator is the sum
$$\sum_{\substack{[u] \in {\mathcal{P}}(q,q'): \\ A\sharp u = A'}}C(u) {:\ } C(q,A) \to C(q',A')\,.$$
A compactness argument shows that this sum is finite.
It therefore remains to define the isomorphism $C(u)$. We describe this in detail. The same basic construction will be used again below to define the algebraic structures.
Before we describe the isomorphism $C(u)$ for $u \in \widetilde{\mathcal{P}}_0(q,q')$, we establish a general correspondence between orientations of the connected component of $\widetilde{\mathcal{M}}(q,q')$ containing a gradient trajectory $u$ and isomorphisms $\ddd(D_A\sharp T{\mathcal{S}}(q)) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q'))$, where $q,q' \in \Crit f$ are two arbitrary critical points, and $A'$ is obtained from $A$ by transferring it along $u$.
First note that we have the following natural exact sequence
\begin{equation}\label{eqn:exact_seq_moduli_space_gradient_lines_stable_unstable}
0 \to T_{u(0)}\widetilde{\mathcal{M}}(q,q') \to T_{u(0)} {\mathcal{S}}(q') \to T_{u(0)}L / T_{u(0)} {\mathcal{U}}(q) \to 0\,,
\end{equation}
where the penultimate arrow is the composition of the inclusion followed by the quotient map. The exactness follows from the fact that its kernel is precisely
$$T_{u(0)}\widetilde{\mathcal{M}}(q,q') = T_{u(0)} {\mathcal{S}}(q') \cap T_{u(0)} {\mathcal{U}}(q)\,,$$
because $\widetilde{\mathcal{M}}(q,q') = {\mathcal{S}}(q') \cap {\mathcal{U}}(q)$, while surjectivity of the penultimate arrow follows from the Morse--Smale transversality $T_{u(0)}{\mathcal{S}}(q') + T_{u(0)}{\mathcal{U}}(q) = T_{u(0)}L$. Note that the bundle $TL/T{\mathcal{U}}(q)$ is trivial over ${\mathcal{U}}(q)$ (the latter being contractible), and we have naturally $T_qL / T_q {\mathcal{U}}(q) = T_q{\mathcal{S}}(q)$. Combining this fact with the natural isomorphism of determinant lines \eqref{eqn:iso_det_lines_exact_seq_vector_spaces} we obtain the isomorphism
\begin{equation}\label{eqn:iso_oris_moduli_space_gradient_lines_stable_mfds}
\ddd(T{\mathcal{S}}(q')) \simeq \ddd(T_{u(0)}\widetilde{\mathcal{M}}(q,q')) \otimes \ddd(T{\mathcal{S}}(q))
\end{equation}
This means that there is a natural bijection between orientations of the connected component of $\widetilde{\mathcal{M}}(q,q')$ containing $u$ and isomorphisms $\ddd(T{\mathcal{S}}(q)) \simeq \ddd(T{\mathcal{S}}(q'))$.
Using exact triples as in the proof of Lemma \ref{lem:family_ops_orientable_def_cx_QH}, we obtain canonical isomorphisms
$$\ddd(D_A) \otimes \ddd(T{\mathcal{S}}(q)) \simeq \ddd(D_A \sharp T{\mathcal{S}}(q)) \otimes \ddd(T_qL)\,,$$
$$\ddd(D_{A'}) \otimes \ddd(T{\mathcal{S}}(q')) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q')) \otimes \ddd(T_{q'}L)\,.$$
Using the natural isomorphisms $\ddd(D_A) \simeq \ddd(D_{A'})$ and $\ddd(T_qL) \simeq \ddd(T_{q'}L)$ obtained by transferring orientations along $u$, we see that there is a natural bijection between the set of isomorphisms $\ddd(D_A \sharp T{\mathcal{S}}(q)) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q'))$ and the set of isomorphisms $\ddd(T{\mathcal{S}}(q)) \simeq \ddd(T{\mathcal{S}}(q'))$, which in turn is in bijection with orientations of the connected component of $\widetilde{\mathcal{M}}(q,q')$ containing $u$. This is the general correspondence we wanted to construct.
Now if $u \in \widetilde{\mathcal{P}}_0(q,q')$ and $|q| = |q'| + 1$, $T_{u(0)}\widetilde{\mathcal{M}}(q,q') = T_u\widetilde{\mathcal{P}}_0(q,q')$ is canonically oriented by the translation vector $\dot u(0) = \partial_u$. We have the corresponding isomorphism $\ddd(D_A \sharp T{\mathcal{S}}(q)) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q'))$, or equivalently the isomorphism $C(q,A) \simeq C(q',A')$. This isomorphism is $C(u)$.
We now pass to the definition of $C(u)$ for $u \in \widetilde {\mathcal{P}}_k(q,q')$ with $k \geq 1$. We need to define some additional Banach spaces. For a smooth map $v {:\ } (D^2,S^1) \to (M,L)$ we use the abbreviations
$$W^{1,p}(v) = W^{1,p}(D^2,S^1;v^*TM,(v|_{S^1})^*TL)\quad \text{ and } \quad L^p(v) = L^p(D^2;\Omega^{0,1}_{D^2}\otimes v^*TM)\,.$$
For a string of maps $v = (v_1,\dots,v_k) \in (C^\infty(D^2,S^1;M,L))^k$ we let
$$W^{1,p}(v) = \bigoplus_i W^{1,p}(v_i) \quad \text{ and } \quad L^p(v) = \bigoplus_i L^p(v_i)\,.$$
We define for $v \in (C^\infty(D^2,S^1;M,L))^k$ with $\ev(v) \in {\mathcal{U}}(q) \times \overline Q{}^{k-1} \times {\mathcal{S}}(q')$ the spaces
$$Z_\Gamma^v = \{\xi = (\xi_1,\dots,\xi_k) \in W^{1,p}(v)\,|\, (\xi_i(1),\xi_{i+1}(-1)) \in \Gamma_{(v_i(1),v_{i+1}(-1))}\text{ for }i < k\}\,,$$
$$X_\Gamma^v = \{\xi \in Z_\Gamma^v \,|\, \xi_k(1) \in T_{v_k(1)}{\mathcal{S}}(q')\}\,,$$
$$Y_\Gamma^v = \{\xi \in X_\Gamma^v\,|\, \xi_1(-1) \in T_{v_1(-1)}{\mathcal{U}}(q)\}\,,$$
and if $\ev (v) \in {\mathcal{U}}(q) \times Q^{k-1} \times {\mathcal{S}}(q')$, we define in addition
\begin{multline*}
Y_Q^v = \{\xi \in W^{1,p}(v)\,|\, \xi_1(-1) \in T_{v_1(-1)}{\mathcal{U}}(q)\,,(\xi_i(1),\xi_{i+1}(-1)) \in T_{(v_i(1),v_{i+1}(-1))}Q\text{ for }i < k\,,\\\xi_k(1) \in T_{v_k(1)}{\mathcal{S}}(q')\}\,.
\end{multline*}
For such $v$ we will also employ the notation
$$T_vQ:= \bigoplus_{i < k} T_{(v_i(1),v_{i+1}(-1))}Q$$
and
\begin{equation}\label{eqn:definition_of_TQ_mod_Gamma}
T_vQ/\Gamma:= \bigoplus_{i < k} T_{(v_i(1),v_{i+1}(-1))}Q/\Gamma_{(v_i(1),v_{i+1}(-1))}\,.
\end{equation}
The latter space has a natural basis
\begin{equation}\label{eqn:canonical_basis_of_TQ_mod_Gamma}
e^v_i:=e_{v_i(1),v_{i+1}(-1)} \in T_{(v_i(1),v_{i+1}(-1))}Q/\Gamma_{(v_i(1),v_{i+1}(-1))}
\end{equation}
for $i=1,\dots,k-1$, where we use the notation \eqref{eqn:basis_vector_for_TQ_mod_Gamma}. The natural evaluation map $\ev {:\ } Y^v_Q \to T_vQ/\Gamma$ induces a short exact sequence of Banach spaces:
$$0 \to Y^v_\Gamma \to Y^v_Q \xrightarrow{\ev} T_vQ/\Gamma \to 0\,.$$
The isomorphism $C(u)$ is constructed in two stages. First, we establish a canonical bijection between the orientations of $\ddd(D_u|_{Y^u_\Gamma})$ and orientations of $\ddd(T_u \widetilde{\mathcal{P}}_k(q,q')) \otimes \ddd(T_uQ/\Gamma)$. Then we establish a canonical bijection between the orientations of $\ddd(D_u|_{Y^u_\Gamma})$ and isomorphisms $\ddd(D_A\sharp T{\mathcal{S}}(q)) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q'))$ where $A' = A \sharp u$, or equivalently isomorphisms $C(q,A) \simeq C(q',A')$. Selecting the orientation
$$\textstyle (-1)^{k+1}\bigwedge_i\partial_{u_i} \otimes \bigwedge_ie^u_i \in \ddd(T_u \widetilde{\mathcal{P}}_k(q,q')) \otimes \ddd(T_uQ/\Gamma)\,,$$
we get, using these bijections, the desired isomorphism $C(u) {:\ } C(q,A) \simeq C(q',A')$.
To establish the first bijection consider the exact Fredholm triple
$$\xymatrix{0 \ar[r] & Y_\Gamma^u \ar[r] \ar[d]^{D_u|_{Y^u_\Gamma}} & Y_Q^u \ar[r]^{\ev} \ar[d]^{D_u|_{Y^u_Q}} & T_uQ/\Gamma\ar[r] \ar[d] & 0 \\ 0 \ar[r] & L^p(u) \ar@{=}[r] & L^p(u) \ar[r] & 0}$$
where $D_u$ is the linearization of $u$. Note that $T_u\widetilde{\mathcal{P}}_k(q,q') = \ker D_u|_{Y_Q^u}$, and that $D_u|_{Y_Q^u}$ is surjective. Then the exact triple yields the isomorphism
\begin{equation}\label{eqn:correspondence_ors_pearly_space_TQ_mod_Gamma_and_D_u}
\ddd(T_u\widetilde{\mathcal{P}}_k(q,q')) = \ddd(D_u|_{Y_Q^u}) \simeq \ddd(D_u|_{Y_\Gamma^u}) \otimes \ddd(T_uQ/\Gamma)
\end{equation}
and by tensoring with $\ddd(T_uQ/\Gamma)$ we see that there is indeed a canonical bijection between orientations of $\ddd(D_u|_{Y^u_\Gamma})$ and orientations of $\ddd(T_u\widetilde{\mathcal{P}}_k(q,q')) \otimes \ddd(T_uQ/\Gamma)$.
We now construct the second bijection. We need some preliminary constructions and lemmata. Let $X_i,Y_i$, $i=1,2$, be Banach spaces, $D_i \in {\mathcal{F}}(X_i,Y_i)$, and let $V$ be a finite-dimensional vector space. Assume $\theta_i {:\ } X_i \to V$ are surjective linear continuous maps. Let $W \subset V \oplus V$ be a subspace of dimension $\dim W = \dim V$. Then we can define the space $X_1 \sharp_W X_2 = (\theta_1 \oplus \theta_2)^{-1}(W) \subset X_1 \oplus X_2$, and the operator
$$D_1 \sharp_W D_2 = (D_1 \oplus D_2)|_{X_1 \sharp_W X_2}\,.$$
We have the following exact triple
$$\xymatrix{0 \ar[r] & X_1\sharp_W X_2 \ar[r] \ar[d]^{D_1\sharp_W D_2} & X_1 \oplus X_2 \oplus W \ar[rr]^-{\theta_1 \oplus \theta_2 - \, \text{inclusion}} \ar[d]^{D_1 \oplus D_2 \oplus 0_W} & & V \oplus V \ar[r] \ar[d] & 0\\
0 \ar[r] & Y_1 \oplus Y_2 \ar[r] & Y_1 \oplus Y_2 \ar[rr] & & 0}$$
which together with the direct sum isomorphism yields the isomorphism
$$\ddd(D_1 \oplus D_2) \otimes \ddd(W) \simeq \ddd(D_1 \sharp_W D_2) \otimes \ddd(V \oplus V)\,.$$
This shows that $(\ddd(W))_W$ and $(\ddd(D_1 \sharp_W D_2))_W$ are isomorphic as line bundles over the Grassmannian $G_{\dim V}(V \oplus V)$. In particular a path in this Grassmannian between two subspaces $W,W'$ induces an isomorphism $\ddd(D_1 \sharp_W D_2) \simeq \ddd(D_1 \sharp_{W'} D_2)$.
Assume now that $V$ splits as the direct sum $W_1 \oplus W_2$ and let $W = (W_1\oplus 0) \oplus (0\oplus W_2) \subset V \oplus V$. There is a path of subspaces between $W$ and the diagonal $\Delta_V$, given by the images of the embeddings
$$V = W_1 \oplus W_2 \to V \oplus V\,,\quad (w_1,w_2) \mapsto (w_1 + tw_2, tw_1 + w_2)$$
for $t \in [0,1]$; note that this map is injective for $t < 1$, while for $t=1$ its image is exactly $\Delta_V$, so the images indeed form a path of subspaces of dimension $\dim V$. We have the induced isomorphism $\ddd(W) \simeq \ddd(\Delta_V)$ and therefore the isomorphism
\begin{equation}\label{eqn:iso_abstract_deformation_incidence_condition_W}
\ddd(D_1 \sharp_W D_2) \simeq \ddd(D_1 \sharp _{\Delta_V} D_2)\,,
\end{equation}
which will be frequently used in the sequel. We say that this isomorphism is obtained by \textbf{deforming the incidence condition}.
Now we turn to a special case. Assume $w \in C^\infty_A$ with $A \in \pi_2(M,L,q)$, in particular $w(1) = q$. Assume further that $u' \in (C^\infty(D^2,S^1;M,L))^k$, $k \geq 1$, satisfies $\ev(u') \in \{q\} \times \Delta_L^{k-1} \times \{q'\}$. We have the operators
$$D_w {:\ } W^{1,p}(w) \to L^p(w)\quad \text{and} \quad D_{u'}|_{X_\Gamma^{u'}} {:\ } X_\Gamma^{u'} \to L^p(u')\,,$$
and the surjective continuous homomorphisms
$$\theta_1 {:\ } W^{1,p}(w) \to T_qL\,,\; \xi \mapsto \xi(1)\,;\quad \theta_2 {:\ } X_\Gamma^{u'} \to T_qL\,,\; (\xi_1,\dots,\xi_k) \mapsto \xi_1(-1)\,.$$
Note that
$$D_w|_{\theta_1^{-1}(T_q{\mathcal{S}}(q))} = D_w \sharp T{\mathcal{S}}(q)\,,$$
that
$$\theta_2^{-1}(T_q{\mathcal{U}}(q)) = Y_\Gamma^{u'}\,,$$
and that
$$(D_{u'}|_{X_\Gamma^{u'}})|_{\theta_2^{-1}(T_q{\mathcal{U}}(q))} = D_{u'}|_{Y_\Gamma^{u'}}\,.$$
Therefore if we let $W = (T_q{\mathcal{S}}(q) \oplus 0) \oplus (0 \oplus T_q{\mathcal{U}}(q)) \subset T_qL \oplus T_qL$, then
$$D_w \sharp_W D_{u'}|_{X_\Gamma^{u'}} = D_w\sharp T{\mathcal{S}}(q) \oplus D_{u'}|_{Y_\Gamma^{u'}}\,.$$
On the other hand, the operator $D_w \sharp_{\Delta_{T_qL}} D_{u'}|_{X_\Gamma^{u'}}$ is precisely what we denoted by $D_w \sharp D_{u'}|_{X_\Gamma^{u'}}$ in \S\ref{ss:boundary_gluing}, which is the precursor to boundary gluing of the operators $D_w, D_{u'}|_{X_\Gamma^{u'}}$. Combining these considerations with the above isomorphism \eqref{eqn:iso_abstract_deformation_incidence_condition_W}, and with the direct sum isomorphism, we obtain the isomorphism
\begin{equation}\label{eqn:iso_correspondence_ors_D_u_Y_Gamma_isos}
\ddd(D_{u'}|_{Y_\Gamma^{u'}}) \otimes \ddd(D_w \sharp T{\mathcal{S}}(q)) \simeq \ddd(D_w \sharp D_{u'}|_{X_\Gamma^{u'}})\,.
\end{equation}
We have the following lemma.
\begin{lemma}\label{lem:bijection_isos_C_q_A_ors_D_u_prime}
The isomorphism \eqref{eqn:iso_correspondence_ors_D_u_Y_Gamma_isos} induces a bijection between orientations of $\ddd(D_{u'}|_{Y_\Gamma^{u'}})$ and isomorphisms $C(q,A) \to C(q',A')$ where $A' = A \sharp u'$. It is continuous in $u'$.
\end{lemma}
\begin{prf}
The continuity follows from the corresponding property of direct sum and deformation isomorphisms.
We know that concatenating $w$ with the constituent disks of $u'$ gives us a representative of $A'$. Moreover, the operator $D_w \sharp D_{u'}|_{X_\Gamma^{u'}}$, after performing boundary gluing and deformation as described in \S\ref{ss:boundary_gluing}, yields a representative of the family $D_{A'}\sharp T{\mathcal{S}}(q')$. This can be seen as follows. The operator $D_w \sharp D_{u'}|_{Z^{u'}_\Gamma}$ satisfies the necessary incidence conditions and therefore can be glued to an operator in the family $D_{A'}$, since the surface on which it is defined is precisely the concatenation of the disks $w,u'_1,\dots,u'_k$, that is $w\sharp u'_1 \sharp \dots \sharp u'_k$, which is a disk in class $A'$. The operator $D_w \sharp D_{u'}|_{X_\Gamma^{u'}}$ is the restriction of $D_w \sharp D_{u'}|_{Z_\Gamma^{u'}}$ to the subspace where $\xi_k \in W^{1,p}(u'_k)$ satisfies $\xi_k(1) \in T_{q'}{\mathcal{S}}(q')$. Therefore the glued operator obtained from $D_w \sharp D_{u'}|_{Z_\Gamma^{u'}}$, when restricted to the same subspace, yields an operator in the family $D_{A'}\sharp T{\mathcal{S}}(q')$. This restricted operator can also be obtained by gluing the operator $D_w \sharp D_{u'}|_{X_\Gamma^{u'}}$.
Passing to the families, we see that the isomorphism \eqref{eqn:iso_correspondence_ors_D_u_Y_Gamma_isos} yields an isomorphism of line bundles
$$\ddd(D_{u'}|_{Y_\Gamma^{u'}}) \otimes \ddd(D_A \sharp T{\mathcal{S}}(q)) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q'))\,.$$
This shows that there is a canonical bijection between orientations of $\ddd(D_{u'}|_{Y_\Gamma^{u'}})$ and isomorphisms $\ddd(D_A \sharp T{\mathcal{S}}(q)) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q'))$, or equivalently, isomorphisms $C(q,A) \simeq C(q',A')$, as claimed. \qed
\end{prf}
In order to obtain a bijection between orientations of $D_u|_{Y_\Gamma^u}$ and isomorphisms $C(q,A) \simeq C(q',A')$, we deform $u$ into $u' \in (C^\infty(D^2,S^1;M,L))^k$ with $\ev(u') \in \{q\} \times \Delta_L^{k-1} \times \{q'\}$. The isomorphism induced by the deformation
\begin{equation}\label{eqn:deformation_iso_from_u_to_u_prime}
\ddd(D_u|_{Y^u_\Gamma}) \simeq \ddd(D_{u'}|_{Y^{u'}_\Gamma})
\end{equation}
and the bijection between the orientations of $\ddd(D_{u'}|_{Y^{u'}_\Gamma})$ and isomorphisms $C(q,A) \simeq C(q',A')$ from Lemma \ref{lem:bijection_isos_C_q_A_ors_D_u_prime} yields the desired bijection. Therefore we have to find a way to deform $u$ into $u'$ with the desired properties. This can be done as follows. Consider the disk $u_1$ and deform it slightly so that a neighborhood of $-1$ maps to $u_1(-1)$. Then deform the constant map on this neighborhood to a map covering the piece of gradient trajectory going from $q$ to $u_1(-1)$. Perform a similar deformation of every $u_j$, $j > 1$ so that the resulting deformation maps a neighborhood of $-1$ to the piece of gradient trajectory from $u_{j-1}(1)$ to $u_j(-1)$. Finally, deform $u_k$ additionally so that a neighborhood of $1$ covers the piece of gradient trajectory from $u_k(1)$ to $q'$. All this can be done so that the deformation and the maps involved are smooth.
We have therefore completed the definition of the boundary operator.
\begin{thm}\label{thm:boundary_op_squares_zero_QH}
$\partial_{\mathcal{D}}^2 = 0$.
\end{thm}
\noindent The proof of this theorem is quite involved and will occupy the rest of \S\ref{sss:boundary_op_Lagr_QH}.
\begin{prf}
Fix two critical points $q,q'' \in \Crit f$ and classes $A \in \pi_2(M,L,q), A'' \in \pi_2(M,L,q'')$ such that $|q| - \mu(A) = |q''| - \mu(A'') + 2$. We need to prove the vanishing of the corresponding matrix element of $\partial_{\mathcal{D}}^2$, which equals
\begin{equation}\label{eqn:vanishing_matrix_elts_boundary_op_QH_squared}
\sum_{q' \in \Crit f} \sum_{\substack{A' \in \pi_2(M,L,q'): \\ |q| - \mu(A) = |q'| - \mu(A') + 1}} \sum_{\substack{([u],[v]) \in {\mathcal{P}}(q,q') \times {\mathcal{P}}(q',q''):\\A\sharp u = A', A'\sharp v = A''}} C(v) \circ C(u) {:\ } C(q,A) \to C(q'',A'')\,.
\end{equation}
Let us denote by ${\mathcal{P}}^1(q,q'')$ the $1$-dimensional part of ${\mathcal{P}}(q,q'')$. Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} gave a description of the compactification $\overline{\mathcal{P}}{}^1(q,q'')$ of ${\mathcal{P}}^1(q,q'')$. Compactness of ${\mathcal{P}}^1(q,q'')$ can fail in one of the following ways: one of the gradient trajectories undergoes Morse breaking; two holomorphic disks collide, that is one of the gradient lines connecting the disks shrinks to a point; a holomorphic disk breaks into two; or a Maslov $2$ holomorphic disk bubbles off at a point. The case involving bubbling, which only happens when $N_L = 2$, will be handled in \S\ref{ss:boundary_op_squares_zero_bubbling_QH}; here we only consider the case in which bubbling does not occur.
Let us call $u \in \widetilde{\mathcal{M}}(L,J) ^ m$, $m > 0$ a \textbf{degenerate pearly trajectory} between $q,q''$ if there is $j < m$ such that $\ev(u) \in {\mathcal{U}}(q) \times Q^{j-1} \times \Delta_L \times Q^{m-j-1} \times {\mathcal{S}}(q'')$, that is it is an ordinary pearly trajectory, except that two of the holomorphic disks touch.
A degenerate pearly trajectory appears as a boundary point in the compactification of exactly two components of ${\mathcal{P}}^1(q,q'')$: one in which the two disks are separated by a positive-time gradient trajectory, and the other one where one of the holomorphic disks breaks into the two touching disks in $u$. We redefine $\overline{\mathcal{P}}{}^1(q,q'')$ to be the disjoint union of all the compactified components of ${\mathcal{P}}^1(q,q'')$, where we identify two boundary points if they correspond to the same degenerate pearly trajectory. This endows $\overline{\mathcal{P}}{}^1(q,q'')$ with the structure of a compact $1$-dimensional topological manifold with boundary whose boundary points are pairs of pearly trajectories $([u],[v]) \in {\mathcal{P}}(q,q') \times {\mathcal{P}}(q',q'')$ with $\dim_{[u]}{\mathcal{P}}(q,q') = \dim_{[v]}{\mathcal{P}}(q',q'') = 0$; therefore we see that the summands of \eqref{eqn:vanishing_matrix_elts_boundary_op_QH_squared} are in bijection with the boundary $\partial\overline{\mathcal{P}}{}^1(q,q'')$.
For $\delta = ([u],[v]) \in \partial \overline{\mathcal{P}}{}^1(q,q'')$ let $C(\delta) = C(v) \circ C(u)$ be the summand corresponding to $\delta$. It is enough to show that for every connected component $\Delta \subset \overline{\mathcal{P}}{}^1(q,q'')$ with boundary $\partial\Delta = \{\delta,\delta'\}$ we have $C(\delta) + C(\delta') = 0$. The component $\Delta$ either has holomorphic disks or it doesn't. If it has no holomorphic disks, by the discussion on page \pageref{eqn:exact_seq_moduli_space_gradient_lines_stable_unstable} we know that for $w \in \widetilde{\mathcal{P}}_0(q,q'')$ with $[w] \in \Delta$ there is a bijection between isomorphisms $C(q,A) \simeq C(q'',A'')$ and orientations of $T_w\widetilde{\mathcal{P}}(q,q'')$.
If $\Delta$ does contain holomorphic disks, we have the following situation. Let $\{w^t\}_{t \in [0,1]}$ be a continuous parametrization of $\Delta$ with $w^0 = \delta$, $w^1 = \delta'$. There are finitely many instances
$$t_0 = 0 < t_1 < \dots < t_m = 1$$
such that $w^t \notin {\mathcal{P}}^1(q,q'')$ if and only if $t = t_i$ for some $i$, that is $w^{t_i}$ is a degenerate trajectory. By abuse of notation, for $t \neq t_i$ let $w^t \in \widetilde{\mathcal{P}}(q,q'')$ be a continuous family of representatives of $w^t \in {\mathcal{P}}^1(q,q'')$. Lemma \ref{lem:bijection_isos_C_q_A_ors_D_u_prime} together with the isomorphism \eqref{eqn:correspondence_ors_pearly_space_TQ_mod_Gamma_and_D_u} show that there is a bijection between isomorphisms $C(q,A) \simeq C(q'',A'')$, orientations of $D_{w^t}|_{Y_\Gamma^{w^t}}$, and orientations of $\ddd(T_{w^t}\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_{w^t}Q/\Gamma)$, which is continuous in $t$ within each interval $(t_i,t_{i+1})$. The theorem then follows from the next lemma.
\begin{lemma}\label{lem:induced_or_pearly_spaces_boundary_op_squared}
Assume $\delta = ([u],[v]) \in {\mathcal{P}}_k(q,q') \times {\mathcal{P}}_l(q',q'')$. Then under the above bijection, the isomorphism $C(\delta)$ corresponds to:
\begin{itemize}
\item If $k=l=0$, the orientation $- \partial_w \wedge \text{\rm inward}_\delta \in \ddd(T_w\widetilde{\mathcal{P}}(q,q''))$;
\item Otherwise, the orientation
\begin{equation}\label{eqn:orientation_induced_on_pearly_space_TQ_mod_Gamma_by_C_delta}
\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{\rm inward}_\delta \otimes \bigwedge_i e^{w^t}_i \in \ddd(T_{w^t}\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_{w^t}Q/\Gamma)
\end{equation}
whenever $t$ is not one of the degenerate values $t_j$. Here $\text{\rm inward}_\delta \in T_{w^t}\widetilde{\mathcal{P}}(q,q'')$ denotes a vector which is transverse to the infinitesimal action of the automorphism group, and which points away from the boundary point $\delta$.
\end{itemize}
\end{lemma}
\noindent Indeed, assume the lemma. Let $\delta' = ([u'],[v']) \in {\mathcal{P}}_{k'}(q,q_1) \times {\mathcal{P}}_{l'}(q_1,q'')$. According to the lemma, if $k = l = 0$, then, of course $k' = l' = 0$, and the isomorphism $C(\delta')$ corresponds to the orientation
$$-\partial_w \wedge\text{inward}_{\delta'} \in T_{w}\widetilde {\mathcal{P}}(q,q'')\,.$$
Since $\text{inward}_{\delta} = -\text{inward}_{\delta'}$, we see that in this case $C(\delta) + C(\delta') = 0$.
Otherwise the lemma says that $C(\delta')$ corresponds to
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{\rm inward}_{\delta'} \otimes \bigwedge_i e^{w^t}_i \in \ddd(T_{w^t}\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_{w^t}Q/\Gamma)$$
for $t \in (t_{m-1},t_m)$. On the other hand, $C(\delta)$ corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{\rm inward}_\delta \otimes \bigwedge_i e^{w^t}_i \in \ddd(T_{w^t}\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_{w^t}Q/\Gamma)$$
for the same range of $t$. Noting that $\text{inward}_\delta = -\text{inward}_{\delta'}$, we see that these two orientations are opposite, which proves that $C(\delta) + C(\delta') = 0$ and therefore that $\partial_{\mathcal{D}}^2 = 0$. \qed
\end{prf}
\noindent This proves Theorem \ref{thm:boundary_op_squares_zero_QH}, modulo Lemma \ref{lem:induced_or_pearly_spaces_boundary_op_squared}.
\begin{prf}[of Lemma \ref{lem:induced_or_pearly_spaces_boundary_op_squared}] In case $k = l = 0$, this is a standard computation in Morse theory, see for instance Schwarz's book \cite{Schwarz_Morse_H_book}. In order to put that computation in the context of the present paper, we note that we have the following commutative diagram:
$$\xymatrix{\ddd(T_v\widetilde{\mathcal{P}}(q',q'')) \otimes \ddd(T_u\widetilde{\mathcal{P}}(q,q')) \otimes \ddd(T{\mathcal{S}}(q)) \ar[r] \ar[d] & \ddd(T_v\widetilde{\mathcal{P}}(q',q'')) \otimes \ddd(T{\mathcal{S}}(q'))\ar[d] \\ \ddd(T_w\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T{\mathcal{S}}(q)) \ar[r] & \ddd(T{\mathcal{S}}(q''))} $$
where all the arrows except the left one come from the isomorphism \eqref{eqn:iso_oris_moduli_space_gradient_lines_stable_mfds}, whereas the left arrow comes from the direct sum isomorphism and the differential of the gluing map, which is an isomorphism
$$T_u\widetilde{\mathcal{P}}(q,q') \times T_v\widetilde{\mathcal{P}}(q',q'') \simeq T_w\widetilde{\mathcal{P}}(q,q'')\quad \text{mapping}\quad \partial_v + \partial_u \mapsto \partial_w \text{ and }\partial_u - \partial_v \mapsto -\inward_\delta\,.$$
We can now tensor this diagram with the identity on $\ddd(D_A)$, then replace $\ddd(D_A)$ with $\ddd(D_{A'})$ and $\ddd(D_{A''})$ where needed, and replace the identity isomorphisms with the corresponding deformation isomorphisms. The exact triple isomorphisms as in the proof of Lemma \ref{lem:family_ops_orientable_def_cx_QH} will then yield the following diagram:
$$\resizebox{\textwidth}{!}{\xymatrix{\ddd(T_v\widetilde{\mathcal{P}}(q',q'')) \otimes \ddd(T_u\widetilde{\mathcal{P}}(q,q')) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \otimes \ddd(T_qL) \ar[r] \ar[d] & \ddd(T_v\widetilde{\mathcal{P}}(q',q'')) \otimes \ddd(D_{A'} \sharp T{\mathcal{S}}(q')) \otimes \ddd(T_{q'}L) \ar[d] \\ \ddd(T_w\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(D_A \sharp T{\mathcal{S}}(q)) \otimes \ddd(T_qL) \ar[r] & \ddd(D_{A''} \sharp T{\mathcal{S}}(q'')) \otimes \ddd(T_{q''}L)}} $$
Here we can now erase the terms $\ddd(T_\cdot L)$ and obtain the diagram
$$\xymatrix{\ddd(T_v\widetilde{\mathcal{P}}(q',q'')) \otimes \ddd(T_u\widetilde{\mathcal{P}}(q,q')) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \ar[r] \ar[d] & \ddd(T_v\widetilde{\mathcal{P}}(q',q'')) \otimes \ddd(D_{A'} \sharp T{\mathcal{S}}(q')) \ar[d] \\ \ddd(T_w\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(D_A \sharp T{\mathcal{S}}(q)) \ar[r] & \ddd(D_{A''} \sharp T{\mathcal{S}}(q''))} $$
which tells us that the isomorphism $C(\delta) = C(v) \circ C(u)$, which is obtained by chasing the diagram through the top right corner, corresponds to the orientation of $T_w\widetilde{\mathcal{P}}(q,q'')$ obtained from the differential of the gluing map, that is the orientation $-\partial_w \wedge \inward_\delta$.
Assume now that at least one of the numbers $k,l$ is nonzero. The claim will then be proved by induction on $j$. The base of the induction consists of showing that the isomorphism $C(\delta)$ corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{\rm inward}_\delta \otimes \bigwedge_i e^{w^t}_i \in \ddd(T_{w^t}\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_{w^t}Q/\Gamma)$$
for $t \in (0,t_1)$. It suffices to show this for some $t$ small enough due to continuity. Let therefore $w = w^t$ for some small $t > 0$. There is a canonical bijection between isomorphisms $C(q,A) \simeq C(q'',A'')$, orientations of $\ddd(D_w|_{Y_\Gamma^w})$, and orientations of $\ddd(T_w\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_wQ/\Gamma)$, see Lemma \ref{lem:bijection_isos_C_q_A_ors_D_u_prime} and equation \eqref{eqn:correspondence_ors_pearly_space_TQ_mod_Gamma_and_D_u}. Therefore the isomorphism $C(\delta) = C(v) \circ C(u)$ corresponds to a certain orientation of $\ddd(D_w|_{Y_\Gamma^w})$. Let us find this orientation.
We will separate the proof into two cases. The first case is when both $k,l > 0$. The second case is treated below.
Consider the family of operators
$$D_{w^t}|_{Y_\Gamma^{w^t}} {:\ } Y_\Gamma^{w^t} \to L^p(w^t)\,;$$
these form a Fredholm morphism between the Banach bundles $(Y_\Gamma^{w^t})_{t \in (0,t_1)}$ and $(L^p(w^t))_{t\in(0,t_1)}$. Note that since $w^t$ is obtained by Morse gluing at $q'$ from $u,v$, it follows that the Banach bundle $Y_\Gamma^{w^t}$ over $(0,t_1)$ can be extended to a bundle over $[0,t_1)$ by adding the fiber over $0$ which is precisely $Y_\Gamma^u \oplus Y_\Gamma^v$. This follows from the following lemma in Morse theory, left to the reader as an exercise:
\begin{lemma}\label{lem:convergence_in_Grassmannian_Morse_theory}
Consider the exterior sum $TL \boxplus TL$ as a rank $2n$ vector bundle over $L \times L$ and consider the associated Grassmann bundle of $n$-dimensional subspaces $G_n(TL \boxplus TL)$. Then in this space $\Gamma_{(w^t_k(1),w^t_{k+1}(-1))}$, that is the graph of the differential of the flow map of $-\nabla_\rho f$ connecting $w^t_k(1)$ with $w^t_{k+1}(-1)$, converges to $T_{u_k(1)}{\mathcal{S}}(q') \oplus T_{v_1(-1)}{\mathcal{U}}(q')$ as $t \to 0$. \qed
\end{lemma}
\noindent The bundle $L^p(w^t)$ extends to $t=0$ by $L^p(u) \oplus L^p(v)$. The Fredholm morphism $D_{w^t}|_{Y_\Gamma^{w^t}}$ also extends over $0$, where it coincides with $D_u|_{Y_\Gamma^u} \oplus D_v|_{Y_\Gamma^v}$.
We can deform $w^t$ into $(w^t)' \in (C^\infty(D^2,S^1;M,L))^{k+l}$ with $\ev((w^t)') \in \{q\} \times \Delta_L^{k+l-1} \times \{q''\}$. Similarly, we can deform $u$ and $v$ into $u',v'$ with $\ev(u') \in \{q\} \times \Delta_L^{k-1} \times \{q'\}$ and $\ev(v') \in \{q'\} \times \Delta_L^{l-1} \times \{q''\}$, according to the prescription after equation \eqref{eqn:deformation_iso_from_u_to_u_prime}. Note that these deformations can be done so that $(w^t)'$ tends to $w':=(u_1',\dots,u_k',v_1',\dots,v_l')$ as $t \to 0$. These considerations show that the operator $D_{w^t}|_{Y_\Gamma^{w^t}}$ deforms into the operators $D_u|_{Y_\Gamma^u} \oplus D_v|_{Y_\Gamma^v}$, $D_{(w^t)'}|_{Y_\Gamma^{(w^t)'}}$, and $D_{w'}|_{Y_\Gamma^{w'}}$. Note as well that there is a deformation between $D_u|_{Y_\Gamma^u} \oplus D_v|_{Y_\Gamma^v}$ and $D_{u'}|_{Y_\Gamma^{u'}} \oplus D_{v'}|_{Y_\Gamma^{v'}}$, while the latter operator can be deformed as described above into $D_{w'}|_{Y_\Gamma^{w'}}$ by deforming the incidence condition at $q'$.
This implies that the families $D_A \sharp T{\mathcal{S}}(q) \oplus D_{w^t}|_{Y_\Gamma^{w^t}}$, $D_A \sharp T{\mathcal{S}}(q) \oplus D_u|_{Y_\Gamma^u} \oplus D_v|_{Y_\Gamma^v}$, $D_A \sharp T{\mathcal{S}}(q) \oplus D_{(w^t)'}|_{Y_\Gamma^{(w^t)'}}$, $D_A \sharp T{\mathcal{S}}(q) \oplus D_{w'}|_{Y_\Gamma^{w'}}$, and $D_A \sharp T{\mathcal{S}}(q) \oplus D_{u'}|_{Y_\Gamma^{u'}} \oplus D_{v'}|_{Y_\Gamma^{v'}}$ are all deformations of one another. Moreover, the space of parameters of these deformations, which consists of the interval $[0,t_1)$ multiplied by the parameter used to deform $u$ into $u'$ and so on, is contractible, being a product of intervals. Therefore all these operators have mutually canonically isomorphic determinant lines, the isomorphisms being deformation isomorphisms. Furthermore, by deforming the incidence conditions at $q$ and $q'$, we see that $D_A \sharp T{\mathcal{S}}(q) \oplus D_{u'}|_{Y_\Gamma^{u'}} \oplus D_{v'}|_{Y_\Gamma^{v'}}$ deforms into $D_A \sharp D_{u'}|_{Z_\Gamma^{u'}} \sharp D_{v'}|_{X_\Gamma^{v'}}$, while $D_A \sharp T{\mathcal{S}}(q) \oplus D_{w'}|_{Y_\Gamma^{w'}}$ deforms into $D_A \sharp D_{w'}|_{X_\Gamma^{w'}}$. Both the operators $D_A \sharp D_{u'}|_{Z_\Gamma^{u'}} \sharp D_{v'}|_{X_\Gamma^{v'}}$ and $D_A \sharp D_{w'}|_{X_\Gamma^{w'}}$ can be glued to yield representatives of the family $D_{A''} \sharp T{\mathcal{S}}(q'')$. These considerations, together with the direct sum isomorphisms, yield the commutative diagram
$$\resizebox{\textwidth}{!}{\xymatrix{\ddd(D_v|_{Y_\Gamma^v}) \otimes \ddd(D_u|_{Y_\Gamma^u}) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \ar[r] \ar[d] &\ddd(D_{v'}|_{Y_\Gamma^{v'}}) \otimes \ddd(D_{u'}|_{Y_\Gamma^{u'}}) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \ar[r] \ar[d] & \ddd(D_{A''} \sharp T{\mathcal{S}}(q''))\\
\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \ar[r] &\ddd(D_{w'}|_{Y_\Gamma^{w'}}) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \ar[ru]}}$$
Let ${\mathfrak{o}} \in \ddd(D_A\sharp T{\mathcal{S}}(q))$, ${\mathfrak{o}}'' = C(\delta)({\mathfrak{o}})$. The isomorphisms $C(u),C(v)$ correspond to orientations ${\mathfrak{o}}_u \in \ddd(D_u|_{Y^u_\Gamma})$, ${\mathfrak{o}}_v \in \ddd(D_v|_{Y^v_\Gamma})$. Consider the isomorphism
$$\ddd(D_v|_{Y^v_\Gamma}) \otimes \ddd(D_u|_{Y^u_\Gamma}) \to \ddd(D_w|_{Y^w_\Gamma})$$
obtained by composing the direct sum isomorphism with the deformation isomorphism described above, and let ${\mathfrak{o}}_w \in \ddd(D_w|_{Y^w_\Gamma})$ be the image of ${\mathfrak{o}}_v \otimes {\mathfrak{o}}_u$ under this isomorphism. Letting ${\mathfrak{o}}_{u'} \in \ddd(D_{u'}|_{Y^{u'}_\Gamma})$, ${\mathfrak{o}}_{v'} \in \ddd(D_{v'}|_{Y^{v'}_\Gamma})$, ${\mathfrak{o}}_{w'} \in \ddd(D_{w'}|_{Y^{w'}_\Gamma})$ be the orientations corresponding to ${\mathfrak{o}}_u,{\mathfrak{o}}_v, {\mathfrak{o}}_w$ under the deformation isomorphisms \eqref{eqn:deformation_iso_from_u_to_u_prime}, we see that the diagram maps
$$\xymatrix{{\mathfrak{o}}_v \otimes {\mathfrak{o}}_u \otimes {\mathfrak{o}} \ar@{|->}[r] \ar@{|->}[d] & {\mathfrak{o}}_{v'} \otimes {\mathfrak{o}}_{u'} \otimes {\mathfrak{o}} \ar@{|->}[r] \ar@{|->}[d] & {\mathfrak{o}}''\\
{\mathfrak{o}}_w \otimes {\mathfrak{o}} \ar@{|->}[r] & {\mathfrak{o}}_{w'} \otimes {\mathfrak{o}} \ar@{|->}[ru]}$$
Since the composition of the two bottom arrows maps ${\mathfrak{o}}_w \otimes {\mathfrak{o}} \mapsto {\mathfrak{o}}''$, we see that the isomorphism $C(\delta)$, which maps ${\mathfrak{o}} \mapsto {\mathfrak{o}}''$, corresponds to the orientation ${\mathfrak{o}}_w$.
The next step is showing that this orientation ${\mathfrak{o}}_w$ corresponds to the orientation \eqref{eqn:orientation_induced_on_pearly_space_TQ_mod_Gamma_by_C_delta}.
We have the following commutative diagram
\begin{equation}\label{dia:computation_induced_ori_bdry_op_squared_QH}
\resizebox{\textwidth}{!}{\xymatrix{\ddd(D_v|_{Y_\Gamma^v}) \otimes \ddd(T_vQ/\Gamma) \otimes \ddd(D_u|_{Y_\Gamma^u}) \otimes \ddd(T_uQ/\Gamma) \otimes \ddd(T^k_wQ/\Gamma) \ar[d] \ar[r] & \ddd(D_v|_{Y_Q^v}) \otimes \ddd(D_u|_{Y_Q^u}) \otimes \ddd(T^k_wQ/\Gamma) \ar[d] \\
\ddd(D_v|_{Y_\Gamma^v} \oplus D_u|_{Y_\Gamma^u}) \otimes \ddd(T_vQ/\Gamma \oplus T_uQ/\Gamma) \otimes \ddd(T^k_wQ/\Gamma) \ar[d] \ar[r] & \ddd(D_v|_{Y_Q^v} \oplus D_u|_{Y_Q^u}) \otimes \ddd(T^k_wQ/\Gamma) \ar[d] \\
\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(T_w^{\neg k}Q/\Gamma) \otimes \ddd(T^k_wQ/\Gamma) \ar[d] \ar[r] & \ddd(D_w|_{Y_Q^{w,\neg k}}) \otimes \ddd(T^k_wQ/\Gamma) \ar[d]\\
\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(T_wQ/\Gamma) \ar[r]& \ddd(D_w|_{Y_Q^w})} }
\end{equation}
where we use the following notations:
$$T_w^kQ/\Gamma = T_{(w_k(1),w_{k+1}(-1))}Q/\Gamma_{(w_k(1),w_{k+1}(-1))}\,,$$
$$T_w^{\neg k}Q/\Gamma = \bigoplus_{j\neq k}T_{(w_j(1),w_{j+1}(-1))}Q/\Gamma_{(w_j(1),w_{j+1}(-1))}\,,$$
$$Y_Q^{w,\neg k} = \{\xi \in Y_Q^w\,|\, (\xi_k(1),\xi_{k+1}(-1)) \in \Gamma_{w_k(1),w_{k+1}(-1)}\}\,.$$
The top square is obtained as follows. We have the exact Fredholm square
$$\xymatrix{D_v|_{Y_\Gamma^v} \ar[r] \ar[d] & D_v|_{Y_Q^v} \ar[r] \ar[d] & 0_{T_vQ/\Gamma} \ar[d] \\
D_v|_{Y_\Gamma^v} \oplus D_u|_{Y_\Gamma^u} \ar[r] \ar[d] & D_v|_{Y_Q^v} \oplus D_u|_{Y_Q^u} \ar[r] \ar[d] & 0_{T_vQ/\Gamma} \oplus 0_{T_uQ/\Gamma} \ar[d] \\
D_u|_{Y_\Gamma^u} \ar[r] & D_u|_{Y_Q^u} \ar[r] & 0_{T_uQ/\Gamma}}$$
The top square is then the commutative square corresponding to this exact Fredholm square, see \S\ref{par:exact_squares_pty}, tensored with the identity on $\ddd(T_w^kQ/\Gamma)$.
The bottom square is the commutative square corresponding to the exact Fredholm square
$$\xymatrix{D_w|_{Y_\Gamma^w} \ar@{=}[r] \ar[d] & D_w|_{Y_\Gamma^w} \ar[r] \ar [d] & 0 \ar[d] \\
D_w|_{Y_Q^{w,\neg k}} \ar[r] \ar[d] & D_w|_{Y_Q^w} \ar[r] \ar[d] & 0_{T^k_wQ/\Gamma} \ar@{=}[d]\\
0_{T^{\neg k}_wQ/\Gamma} \ar[r] & 0_{T_wQ/\Gamma} \ar[r] & 0_{T^k_wQ/\Gamma}}$$
It remains to describe the middle square. For every $t>0$ we have the exact Fredholm triple
$$0 \to D_{w^t}|_{Y_\Gamma^{w^t}} \to D_{w^t}|_{Y_Q^{w^t,\neg k}} \to 0_{T_{w^t}^{\neg k}Q/\Gamma} \to 0$$
yielding the isomorphism
\begin{equation}\label{eqn:iso_det_lines_w_t_close_to_boundary_point_pearly_space}
\ddd(D_{w^t}|_{Y_Q^{w^t,\neg k}}) \simeq \ddd(D_{w^t}|_{Y_\Gamma^{w^t}}) \otimes \ddd(T_{w^t}^{\neg k}Q/\Gamma)
\end{equation}
which is continuous in $t$. As $t \to 0$, the space $T_{w^t}^{\neg k}Q/\Gamma$ naturally converges to $T_uQ/\Gamma \oplus T_vQ/\Gamma$. Moreover, a Banach bundle argument similar to the one just before Lemma \ref{lem:convergence_in_Grassmannian_Morse_theory} shows that $D_{w^t}|_{Y_Q^{w^t,\neg k}}$ deforms into $D_u|_{Y_Q^u} \oplus D_v|_{Y_Q^v}$ as $t \to 0$. Therefore we have natural isomorphisms
$$\ddd(D_{w^t}|_{Y_\Gamma^{w^t}}) \simeq \ddd(D_u|_{Y_\Gamma^u} \oplus D_v|_{Y_\Gamma^v}) \quad \text{ and }\quad \ddd(D_{w^t}|_{Y_Q^{w^t,\neg k}}) \simeq \ddd(D_u|_{Y_Q^u} \oplus D_v|_{Y_Q^v})\,.$$
The middle square is obtained by substituting these isomorphisms into \eqref{eqn:iso_det_lines_w_t_close_to_boundary_point_pearly_space}, and tensoring with the identity on $\ddd(T_w^kQ/\Gamma)$.
We will see shortly that the diagram \eqref{dia:computation_induced_ori_bdry_op_squared_QH} maps
\begin{equation}\label{dia:calculation_induced_or_boundary_op_squared_how_diagram_maps}
\xymatrix{{\mathfrak{o}}_v \otimes \bigwedge_i e^v_i \otimes {\mathfrak{o}}_u \otimes \bigwedge_i e^u_i \otimes e^w_k \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k+l} \bigwedge_i \partial_{v_i} \otimes \bigwedge_i \partial_{u_i} \otimes e^w_k \ar@{|->}[d] \\
(-1)^{l-1} ({\mathfrak{o}}_v \wedge {\mathfrak{o}}_u) \otimes (\bigwedge_i e^v_i \wedge \bigwedge_i e^u_i) \otimes e^w_k \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k+l}(\bigwedge_i \partial_{v_i} \wedge \bigwedge_i \partial_{u_i}) \otimes e^w_k \ar@{|->}[d] \\
(-1)^{k(l-1)} {\mathfrak{o}}_w \otimes \bigwedge_{i \neq k} e^w_i \otimes e^w_k \ar@{|->}[r] \ar@{|->}[d] & (-1)^{kl+k+l} \bigwedge_i \partial_{w_i} \otimes e^w_k \ar@{|->}[d] \\
(-1)^{(k+1)(l-1)} {\mathfrak{o}}_w \otimes \bigwedge_i e^w_i \ar@{|->}[r] & (-1)^{kl + k + l} \bigwedge_i \partial_{w_i} \wedge \text{inward}_{\delta}}
\end{equation}
but first let us deduce the desired result for $k,l \neq 0$. We see that the bottom arrow maps
$$\textstyle {\mathfrak{o}}_w \otimes \bigwedge_i e^w_i \mapsto -\bigwedge_i \partial_{w_i} \wedge \text{inward}_{\delta}\,,$$
which means that ${\mathfrak{o}}_w$ corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w_i} \wedge \text{inward}_{\delta} \otimes \bigwedge_i e^w_i \in \ddd(T_w\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_wQ/\Gamma)\,.$$
On the other hand, we saw above that $C(\delta)$ corresponds to ${\mathfrak{o}}_w$. Therefore $C(\delta)$ corresponds to the orientation \eqref{eqn:orientation_induced_on_pearly_space_TQ_mod_Gamma_by_C_delta}, as claimed.
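Explicitly, the assertion about the bottom arrow of \eqref{dia:calculation_induced_or_boundary_op_squared_how_diagram_maps} comes from the parity computation
$$(kl+k+l) - (k+1)(l-1) = 2k+1 \equiv 1 \mod 2\,,$$
which is why ${\mathfrak{o}}_w \otimes \bigwedge_i e^w_i$ is sent to $-\bigwedge_i \partial_{w_i} \wedge \text{inward}_\delta$ rather than to its opposite.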
Let us explain the diagram \eqref{dia:calculation_induced_or_boundary_op_squared_how_diagram_maps}. Here we can compute the top horizontal arrow, as well as all the vertical arrows using definitions and the normalization property, see \S\ref{par:normalization_pty}. Our goal is the bottom horizontal arrow. Since the diagram commutes, we can compute the remaining three horizontal arrows, including the bottom one.
In the top square the left arrow consists of direct sum isomorphisms, together with the interchange isomorphism, which is responsible for the sign $(-1)^{l-1}$, since $\ind D_u|_{Y_\Gamma^u} = 1$, $\dim T_vQ/\Gamma = l-1$. The right arrow can be computed using the normalization property, since $D_u|_{Y_Q^u},D_v|_{Y_Q^v}$ are surjective. The top horizontal arrow of this square is obtained as the tensor product of isomorphisms \eqref{eqn:correspondence_ors_pearly_space_TQ_mod_Gamma_and_D_u} for $u,v$.
In the middle square, the right arrow comes from the fact that under deformation, $\partial_{u_i} \mapsto \partial_{w_i}$ while $\partial_{v_i} \mapsto \partial_{w_{k+i}}$, and also because $\bigwedge_i\partial_{v_i} \wedge \bigwedge_i\partial_{u_i} = (-1)^{kl} \bigwedge_i\partial_{u_i} \wedge \bigwedge_i\partial_{v_i}$. The left arrow is a combination of the deformation isomorphism sending ${\mathfrak{o}}_v \wedge {\mathfrak{o}}_u \mapsto {\mathfrak{o}}_w$ (this is how we defined ${\mathfrak{o}}_w$), and the fact that the deformation isomorphism sends $e_i^u \mapsto e_i^w$, $e_i^v \mapsto e_{k+i}^w$, and that $\bigwedge_i e_i^v \wedge \bigwedge_i e_i^u = (-1)^{(k-1)(l-1)} \bigwedge_i e_i^u \wedge \bigwedge_i e_i^v$.
In the bottom square, the left arrow comes from the normalization property together with the equality $\bigwedge_{i \neq k} e_i^w \wedge e_k^w = (-1)^{l-1}\bigwedge_i e_i^w$. The right arrow comes from the normalization property together with the fact that the map $\ker D_w|_{Y_Q^w} \to T^k_wQ/\Gamma$ maps $\text{inward}_\delta \mapsto e_k^w$, because the vector $e_k^w$ corresponds to shrinking the segment of gradient line between the points $w_k(1),w_{k+1}(-1)$, which evidently corresponds to moving the pearly trajectory away from the boundary point $\delta$.
Now we treat the case when precisely one of the numbers $k,l$ vanishes. Assume first $k = 0$. Using a combination of the techniques for the Morse case and the above treatment, we can obtain the following commutative diagram:
$$\xymatrix{\ddd(D_v|_{Y_\Gamma^v}) \otimes \ddd(T_u\widetilde{\mathcal{P}}(q,q')) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \ar[r] \ar[d] &\ddd(D_{v}|_{Y_\Gamma^{v}}) \otimes \ddd(D_{A'}\sharp T{\mathcal{S}}(q')) \ar[d] \\
\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(D_A\sharp T{\mathcal{S}}(q)) \ar[r] & \ddd(D_{A''} \sharp T{\mathcal{S}}(q''))}$$
in which the top arrow is obtained from the correspondence described on page \pageref{eqn:exact_seq_moduli_space_gradient_lines_stable_unstable}, the right and the bottom arrows come from direct sum, deformation, and gluing isomorphisms, and the left arrow comes from the differential of the gluing map on pearly spaces. Recall that $C(v)$ corresponds to the orientation ${\mathfrak{o}}_v$, $C(u)$ to the orientation $\partial_u$, and let ${\mathfrak{o}}_w \in \ddd(D_w|_{Y_\Gamma^w})$ be the orientation which is the image of ${\mathfrak{o}}_v \otimes \partial_u$ by the isomorphism defining the left arrow. It follows that the isomorphism $C(v) \circ C(u) {:\ } C(q,A) \simeq C(q'',A'')$ corresponds to ${\mathfrak{o}}_w$, and our task now is to compute this orientation. We have the following commutative diagram
\begin{equation}\label{dia:calculation_induced_or_boundary_op_squared_k_or_l_zero}
\xymatrix{\ddd(D_v|_{Y_\Gamma^v}) \otimes \ddd(T_vQ/\Gamma) \otimes \ddd(T_u\widetilde{\mathcal{P}}(q,q')) \ar[r] \ar[d] & \ddd(D_v|_{Y_Q^v}) \otimes \ddd(T_u\widetilde{\mathcal{P}}(q,q')) \ar[d] \\
\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(T_wQ/\Gamma) \ar[r] & \ddd(D_w|_{Y_Q^w})}
\end{equation}
which maps
$$\xymatrix{{\mathfrak{o}}_v \otimes \bigwedge_ie^v_i \otimes \partial_u \ar@{|->}[r] \ar@{|->}[d]& (-1)^{l+1} \bigwedge_i \partial_{v_i} \otimes \partial_u \ar@{|->}[d] \\ (-1)^{l-1}{\mathfrak{o}}_w \otimes \bigwedge_i e^w_i \ar@{|->}[r] & (-1)^l \bigwedge_i \partial_{w_i} \wedge \text{inward}_\delta}$$
We see that ${\mathfrak{o}}_w$ corresponds to the orientation $-\bigwedge_i \partial_{w_i} \wedge \text{inward}_\delta \otimes \bigwedge_ie^w_i$, as claimed. To explain the diagram \eqref{dia:calculation_induced_or_boundary_op_squared_k_or_l_zero}, note that the top arrow comes from the definition of the boundary operator, the left arrow acquires the Koszul sign, while the right arrow comes from the fact that the deformation isomorphism maps $\partial_{v_i} \mapsto \partial_{w_i}$ and $\partial_u \mapsto - \inward_\delta$. The proof for the case $k > 0$, $l = 0$, follows the same scheme.
Now we prove the induction step. This consists of showing the following: assume that the isomorphism $C(\delta)$ corresponds to the orientation
$$\textstyle -\bigwedge_i \partial_{w^t_i} \wedge \text{\rm inward}_\delta \otimes \bigwedge_i e^{w^t}_i \in \ddd(T_{w^t}\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_{w^t}Q/\Gamma)$$
for $t \in (t_j,t_{j+1})$; then for $t\in(t_{j+1},t_{j+2})$ it corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{\rm inward}_\delta \otimes \bigwedge_i e^{w^t}_i \in \ddd(T_{w^t}\widetilde{\mathcal{P}}(q,q'')) \otimes \ddd(T_{w^t}Q/\Gamma)\,.$$
Without loss of generality we may assume that the degeneration of the pearly trajectory as $t \nearrow t_{j+1}$ consists of the shrinking of one of the gradient trajectories to a point. We will use $w^t$ for $t$ close to $t_{j+1}$ but smaller than or equal to it, and denote $w' = w^t$ for $t$ close to $t_{j+1}$ but strictly larger than it. Assume that the disks in $w^t$ which collide as $t \nearrow t_{j+1}$ carry numbers $r,r+1$. In this case the $r$-th disk of $w'$ is obtained by gluing the two colliding disks $w^{t_{j+1}}_r, w^{t_{j+1}}_{r+1}$.
We have the following commutative diagram for $t \leq t_{j+1}$ close to $t_{j+1}$.
\begin{equation}\label{dia:sign_computation_crossing_disk_collision_boundary_op_squares_zero}
\xymatrix{\ddd(D_{w^t}|_{Y_\Gamma^{w^t}}) \otimes \ddd(T_{w^t}Q/\Gamma) \ar[r] & \ddd(D_{w^t}|_{Y_Q^{w^t}})\\
\ddd(D_{w^t}|_{Y_\Gamma^{w^t}}) \otimes \ddd(T_{w^t}^{\neg r}Q/\Gamma) \otimes \ddd(T_{w^t}^rQ/\Gamma) \ar[r] \ar[u] \ar[d] & \ddd(D_{w^t}|_{Y_Q^{{w^t},\neg r}}) \otimes \ddd(T_{w^t}^rQ/\Gamma) \ar[u] \ar[d] \\
\ddd(D_{w'}|_{Y_\Gamma^{w'}}) \otimes \ddd(T_{w'}Q/\Gamma) \otimes \ddd(T_{w^t}^rQ/\Gamma) \ar[r] & \ddd(D_{w'}|_{Y_Q^{w'}}) \otimes \ddd(T_{w^t}^rQ/\Gamma)}
\end{equation}
The top square is the commutative square corresponding to the following exact Fredholm square:
$$\xymatrix{D_{w^t}|_{Y_\Gamma^{w^t}} \ar@{=}[r] \ar[d] & D_{w^t}|_{Y_\Gamma^{w^t}} \ar[r] \ar [d] & 0 \ar[d] \\
D_{w^t}|_{Y_Q^{{w^t},\neg r}} \ar[r] \ar[d] & D_{w^t}|_{Y_Q^{w^t}} \ar[r] \ar[d] & 0_{T^r_{w^t}Q/\Gamma} \ar@{=}[d]\\
0_{T^{\neg r}_{w^t}Q/\Gamma} \ar[r] & 0_{T_{w^t}Q/\Gamma} \ar[r] & 0_{T^r_{w^t}Q/\Gamma}}$$
The bottom square is obtained as follows. Since the pearly trajectory $w'$ is obtained from $w:=w^{t_{j+1}}$ by gluing, see \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}, we have a natural isomorphism
$$\ddd(D_w|_{Y_Q^{w,\neg r}}) \simeq \ddd(D_{w'}|_{Y_Q^{w'}})$$
expressing the fact that the tangent space to the space of degenerate pearls at $w$, which is precisely the kernel of $D_w|_{Y_Q^{w,\neg r}}$, maps isomorphically onto $T_{w'}\widetilde{\mathcal{P}}(q,q'') = \ker D_{w'}|_{Y_Q^{w'}}$ by the differential of the gluing map. The operators $D_w|_{Y_Q^{w,\neg r}}, D_{w'}|_{Y_Q^{w'}}$ also appear in the following exact Fredholm triples:
\begin{equation}\label{eqn:exact_triple_w_crossing_disk_collision_boundary_op_squares_zero}
0 \to D_w|_{Y_\Gamma^w} \to D_w|_{Y_Q^{w,\neg r}} \to 0_{T_w^{\neg r}Q/\Gamma} \to 0\,.
\end{equation}
\begin{equation}\label{eqn:exact_triple_w_prime_crossing_disk_collision_boundary_op_squares_zero}
0 \to D_{w'}|_{Y_\Gamma^{w'}} \to D_{w'}|_{Y_Q^{w'}} \to 0_{T_{w'}Q/\Gamma} \to 0\,.
\end{equation}
We note that the operator $D_w|_{Y_\Gamma^w}$ can be boundary glued at the point where the disks $w_r,w_{r+1}$ touch, and the result can be deformed into $D_{w'}|_{Y_\Gamma^{w'}}$. Also, as $t \searrow t_{j+1}$, the space $T_{(w')^t}Q/\Gamma$ tends to $T_w^{\neg r}Q/\Gamma$. It is then a feature of the gluing map that the following diagram commutes:
$$\xymatrix{\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(T_w^{\neg r}Q/\Gamma) \ar[r] \ar[d] & \ddd(D_w|_{Y_Q^{w,\neg r}}) \ar[d] \\
\ddd(D_{w'}|_{Y_\Gamma^{w'}}) \otimes \ddd(T_{w'}Q/\Gamma) \ar[r] & \ddd(D_{w'}|_{Y_Q^{w'}})}$$
where the horizontal arrows come from the exact triples \eqref{eqn:exact_triple_w_crossing_disk_collision_boundary_op_squares_zero}, \eqref{eqn:exact_triple_w_prime_crossing_disk_collision_boundary_op_squares_zero}, while the vertical arrows come from gluing and deformation. The bottom square of the diagram \eqref{dia:sign_computation_crossing_disk_collision_boundary_op_squares_zero} is obtained by taking the middle horizontal arrow, letting $t \nearrow t_{j+1}$, except in the term $\ddd(T_{w^t}^rQ/\Gamma)$, where $t$ is kept fixed, applying the square just displayed to the resulting isomorphism, and finally tensoring with $\ddd(T_{w^t}^rQ/\Gamma)$.
Now the isomorphism $C(\delta)$ corresponds to an orientation ${\mathfrak{o}}_{w^t} \in \ddd(D_{w^t}|_{Y_\Gamma^{w^t}})$, an orientation ${\mathfrak{o}}_w \in \ddd(D_w|_{Y_\Gamma^w})$, and an orientation ${\mathfrak{o}}_{w'} \in \ddd(D_{w'}|_{Y_\Gamma^{w'}})$. It is not difficult to see that the deformation isomorphism $\ddd(D_{w^t}|_{Y_\Gamma^{w^t}}) \simeq \ddd(D_w|_{Y^w_\Gamma})$ maps ${\mathfrak{o}}_{w^t} \mapsto {\mathfrak{o}}_w$, and the above isomorphism $\ddd(D_w|_{Y_\Gamma^w}) \simeq \ddd(D_{w'}|_{Y_\Gamma^{w'}})$ maps ${\mathfrak{o}}_w\mapsto {\mathfrak{o}}_{w'}$. Assume $w$ has $s$ disks. Then the diagram \eqref{dia:sign_computation_crossing_disk_collision_boundary_op_squares_zero} maps
$$\xymatrix{(-1)^{s-r-1} {\mathfrak{o}}_{w^t} \otimes \bigwedge_i e^{w^t}_i \ar@{|->}[r] & (-1)^{s-r} \bigwedge_i\partial_{w^t_i} \wedge \text{inward}_\delta\\
{\mathfrak{o}}_{w^t} \otimes \bigwedge_{i\neq r} e^{w^t}_i \otimes e^{w^t}_r \ar@{|->}[d] \ar@{|->}[r] \ar@{|->}[u] & (-1)^{s-r} \bigwedge_i\partial_{w^t_i} \otimes e^{w^t}_r \ar@{|->}[u] \ar@{|->}[d]\\
{\mathfrak{o}}_{w'} \otimes \bigwedge_i e^{w'}_i \otimes e^{w^t}_r \ar@{|->}[r] & -\bigwedge_i \partial_{w'_i} \wedge \text{inward}_\delta \otimes e^{w^t}_r}$$
We will derive this shortly, but now let us see how it implies the claim. We see that the assumption that the orientation ${\mathfrak{o}}_{w^t}$ corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{w^t}_i$$
implies that the orientation ${\mathfrak{o}}_{w'}$ corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w'_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{w'}_i\,,$$
which is what the bottom arrow tells us. This means that the sign stays the same, which was precisely what we wanted to prove.
Let us now explain the diagram. Here we can compute the top arrow and all the vertical arrows, while the remaining two horizontal arrows are obtained by commutativity. The goal of the computation is the bottom horizontal arrow. In the top square, the left arrow comes from the normalization property \S\ref{par:normalization_pty}, together with the fact that we have to interchange some factors in the wedge product, which gives the sign. The top arrow comes from the assumption on the orientation ${\mathfrak{o}}_w$. The right arrow comes from the exact triple
$$0 \to D_{w^t}|_{Y_Q^{{w^t},\neg r}} \to D_{w^t}|_{Y_Q^{w^t}} \to 0_{T^r_{w^t}Q/\Gamma} \to 0\,,$$
where we note that $\text{inward}_\delta \in \ker D_{w^t}|_{Y_Q^{w^t}}$ maps to $e^{w^t}_r \in T^r_{w^t}Q/\Gamma$, because both correspond to shrinking the piece of gradient trajectory separating the disks $w^t_r$, $w^t_{r+1}$.
In the bottom square the left arrow comes from the deformation isomorphisms, which map ${\mathfrak{o}}_{w^t} \mapsto {\mathfrak{o}}_{w'}$ and $e_i^{w^t} \mapsto e_i^{w'}$ for $i < r$, $e_i^{w^t} \mapsto e_{i-1}^{w'}$ for $i > r$. The right arrow comes from the deformation isomorphism, together with the following computation. It is straightforward to check that the differential of the gluing map, which is an isomorphism $\ker(D_{w^t}|_{Y_Q^{{w^t},\neg r}}) \simeq \ker(D_{w'}|_{Y_Q^{w'}})$, maps $\partial_{w^t_i} \mapsto \partial_{w'_i}$ for $i < r$, $\partial_{w^t_r} + \partial_{w^t_{r+1}} \mapsto \partial_{w'_r}$, $-\partial_{w^t_r} + \partial_{w^t_{r+1}} \mapsto \text{inward}_\delta$, and $\partial_{w^t_i} \mapsto \partial_{w'_{i-1}}$ for $i > r+1$. Therefore the induced isomorphism $\ddd(D_{w^t}|_{Y_Q^{{w^t},\neg r}}) \simeq \ddd(D_{w'}|_{Y_Q^{w'}})$ maps
$$\textstyle \bigwedge_i \partial_{w^t_i} \mapsto \bigwedge_{i \leq r}\partial_{w'_i} \wedge \text{inward}_\delta \wedge \bigwedge_{i > r+1}\partial_{w'_{i-1}} = (-1)^{s-r-1} \bigwedge_i \partial_{w'_i} \wedge \text{inward}_\delta\,.$$
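To spell out the last sign, note that $w'$ consists of $s-1$ disks, so after reindexing, the expression on the right hand side reads
$$\textstyle \partial_{w'_1} \wedge \dots \wedge \partial_{w'_r} \wedge \text{inward}_\delta \wedge \partial_{w'_{r+1}} \wedge \dots \wedge \partial_{w'_{s-1}} = (-1)^{s-1-r} \bigwedge_i \partial_{w'_i} \wedge \text{inward}_\delta\,,$$
since $\text{inward}_\delta$ has to be commuted past the $s-1-r$ vectors following it.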
The proof of the lemma, and therefore of Theorem \ref{thm:boundary_op_squares_zero_QH}, is now complete. \qed
\end{prf}
This theorem allows us to define the quantum homology
$$QH_*({\mathcal{D}}:L)$$
as the homology of the complex $(QC_*({\mathcal{D}}:L),\partial_{\mathcal{D}})$.
\subsubsection{Product}\label{sss:product_Lagr_QH}
Fix quantum data ${\mathcal{D}}_i = (f_i,\rho,J)$ for $L$ for $i=0,1,2$, where we assume that the pairs $(f_i,\rho)$ are Morse--Smale and $J$ is chosen so that the ${\mathcal{D}}_i$ are regular. The \textbf{quantum product} is a bilinear map
\begin{equation}\label{eqn:product_QH_chain_level}
\star {:\ } QC_k({\mathcal{D}}_0:L) \otimes QC_l({\mathcal{D}}_1:L) \to QC_{k+l-n}({\mathcal{D}}_2:L)\,.
\end{equation}
As with the boundary operator, this is defined by its matrix elements which are homomorphisms
$$C(q_0,A_0) \otimes C(q_1,A_1) \to C(q_2,A_2)$$
for $q_i \in \Crit f_i$, $A_i \in \pi_2(M,L,q_i)$ such that $|q_0| + |q_1| - |q_2| - \mu(A_0) - \mu (A_1) + \mu(A_2) - n = 0$.
In order to define these matrix elements, we need first to describe the spaces of pearly triangles. Therefore fix $q_i \in \Crit f_i$. Recall that $\widetilde{\mathcal{M}}(L,J)$ denotes the space of parametrized nonconstant $J$-holomorphic disks with boundary on $L$. We denote by $\widetilde{\mathcal{M}}^\circ(L,J)$ the space of all parametrized $J$-holomorphic disks with boundary on $L$, including constant ones. Fix $k_i \geq 0$ for $i=0,1,2$. We have the evaluation map
$$\ev {:\ } (C^\infty(D^2,S^1;M,L))^{k_0 + k_1 + k_2 + 1}\to L^{2(k_0+k_1+k_2)+3}$$
given by
\begin{multline*}
\ev(U=(u^0,u^1,u^2,u)) = (u^0_1(-1),u^0_1(1),\dots,u^0_{k_0}(-1),u^0_{k_0}(1),u(1);\\
u^1_1(-1),\dots,u^1_{k_1}(1),u(e^{2\pi i/3}); u(e^{4\pi i/3}),u^2_1(-1),\dots,u^2_{k_2}(1))\,,
\end{multline*}
where $u^i \in (C^\infty(D^2,S^1;M,L))^{k_i}$ and $u \in C^\infty(D^2,S^1;M,L)$. We let
\begin{multline*}
\widetilde{\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2) = \ev^{-1}({\mathcal{U}}_{f_0}(q_0) \times Q_{f_0,\rho}^{k_0} \times {\mathcal{U}}_{f_1}(q_1) \times Q_{f_1,\rho}^{k_1} \times Q_{f_2,\rho}^{k_2} \times {\mathcal{S}}_{f_2}(q_2))\cap\\ (\widetilde{\mathcal{M}}(L,J))^{k_0} \times (\widetilde{\mathcal{M}}(L,J))^{k_1} \times (\widetilde{\mathcal{M}}(L,J))^{k_2} \times \widetilde{\mathcal{M}}^\circ(L,J)\,.
\end{multline*}
This is the space of parametrized \textbf{pearly triangles}. We have a natural action of ${\mathbb{R}}^{k_0+k_1+k_2}$ on this space and we let ${\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2)$ be the quotient. We also define
$$\widetilde{\mathcal{P}}(q_0,q_1;q_2) = \bigcup_{k_0,k_1,k_2 \geq 0} \widetilde{\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2) \quad \text{ and }\quad {\mathcal{P}}(q_0,q_1;q_2) = \bigcup_{k_0,k_1,k_2 \geq 0} {\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2)\,.$$
For $U \in (C^\infty(D^2,S^1;M,L))^{k_0+k_1+k_2 +1}$ we let $\mu(U)$ be the sum of the Maslov numbers of the constituent disks of $U$. We have the following result by Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}:
\begin{prop}
For Morse--Smale pairs $(f_i,\rho)_{i=0,1,2}$ there is a subset of ${\mathcal{J}}(M,\omega)$ of the second category such that for each $J$ in the subset, each triple of critical points $q_i \in \Crit f_i$ and each triple of nonnegative integers $k_i$ the space ${\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2)$ is a smooth manifold of local dimension at $[U]$
$$|q_0| + |q_1| - |q_2| + \mu(U) - n$$
whenever this number is at most $1$. \qed
\end{prop}
We proceed to the definition of the matrix element. Fix $A_i \in \pi_2(M,L,q_i)$ for $i = 0,1$. For $U \in (C^\infty(D^2,S^1;M,L))^{k_0+k_1+k_2+1}$ with
$$\ev(U) \in {\mathcal{U}}_{f_0}(q_0) \times Q_{f_0,\rho}^{k_0} \times {\mathcal{U}}_{f_1}(q_1) \times Q_{f_1,\rho}^{k_1} \times Q_{f_2,\rho}^{k_2} \times {\mathcal{S}}_{f_2}(q_2)$$
there is an obvious way to construct a class $A_0 \sharp A_1 \sharp U \in \pi_2(M,L,q_2)$ by concatenating representatives of $A_0,A_1$ with the constituent disks of $U$ according to the gradient trajectories connecting the evaluation points of the disks. For $U \in \widetilde{\mathcal{P}}(q_0,q_1;q_2)$ satisfying $|q_0| + |q_1| - |q_2| + \mu(U) - n = 0$ we will construct an isomorphism
$$C(U) {:\ } C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_0\sharp A_1\sharp U)\,.$$
For a class $A_2 \in \pi_2(M,L,q_2)$ with $|q_0|+|q_1| - |q_2| - \mu(A_0) - \mu(A_1) + \mu(A_2) - n = 0$, the matrix element is defined to be
\begin{equation}\label{eqn:matrix_elts_product_QH}
\sum_{\substack{[U] \in {\mathcal{P}}(q_0,q_1;q_2):\\A_0 \sharp A_1 \sharp U = A_2}} C(U){:\ } C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)\,.
\end{equation}
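Note that the two degree conditions appearing here are compatible: since the Maslov index is additive under the concatenation used to define $A_0 \sharp A_1 \sharp U$, every term of the sum \eqref{eqn:matrix_elts_product_QH} satisfies $\mu(A_2) = \mu(A_0) + \mu(A_1) + \mu(U)$, so the condition $|q_0| + |q_1| - |q_2| - \mu(A_0) - \mu(A_1) + \mu(A_2) - n = 0$ is equivalent to
$$|q_0| + |q_1| - |q_2| + \mu(U) - n = 0\,,$$
that is, the sum runs precisely over the zero-dimensional part of ${\mathcal{P}}(q_0,q_1;q_2)$, for which the isomorphisms $C(U)$ are defined.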
It remains to define the isomorphism $C(U)$. To this end we define additional Banach spaces. For $V =(v^0,v^1,v^2,v) \in (C^\infty(D^2,S^1;M,L))^{k_0+k_1+k_2+1}$ with
$$\ev(V) \in {\mathcal{U}}_{f_0}(q_0) \times \overline Q{}_{f_0,\rho}^{k_0} \times {\mathcal{U}}_{f_1}(q_1) \times \overline Q{}_{f_1,\rho}^{k_1} \times \overline Q{}_{f_2,\rho}^{k_2} \times {\mathcal{S}}_{f_2}(q_2)$$
we define
\begin{multline*}
X_\Gamma^V = \{\Xi = (\xi^0,\xi^1,\xi^2,\xi) \in W^{1,p}(v^0) \oplus W^{1,p}(v^1) \oplus W^{1,p}(v^2) \oplus W^{1,p}(v)\,|\\
\xi^2_{k_2}(1) \in T_{v^2_{k_2}(1)}{\mathcal{S}}_{f_2}(q_2)\,; (\xi^m_j(1),\xi^m_{j+1}(-1)) \in \Gamma_{(v^m_j(1),v^m_{j+1}(-1))},\\
(\xi^m_{k_m}(1),\xi(e^{2\pi i m/3})) \in \Gamma_{(v^m_{k_m}(1),v(e^{2\pi i m/3}))}\text{ for }m=0,1 \text{ and }j < k_m;\\
(\xi^2_j(1),\xi^2_{j+1}(-1)) \in \Gamma_{(v^2_j(1),v^2_{j+1}(-1))}\text{ for }j=1,\dots,k_2-1; (\xi(e^{4\pi i/3}),\xi^2_1(-1)) \in \Gamma_{(v(e^{4\pi i/3}),v^2_1(-1))}\}
\end{multline*}
$$Y_\Gamma^V = \{\Xi \in X_\Gamma^V\,|\, \xi^m_1(-1) \in T_{v^m_1(-1)}{\mathcal{U}}_{f_m}(q_m)\text{ for }m=0,1\}$$
If $V$ satisfies
$$\ev(V) \in {\mathcal{U}}_{f_0}(q_0) \times Q_{f_0,\rho}^{k_0} \times {\mathcal{U}}_{f_1}(q_1) \times Q_{f_1,\rho}^{k_1} \times Q_{f_2,\rho}^{k_2} \times {\mathcal{S}}_{f_2}(q_2)\,,$$
we define in addition
\begin{multline*}
Y_Q^V = \{\Xi = (\xi^0,\xi^1,\xi^2,\xi) \in W^{1,p}(v^0) \oplus W^{1,p}(v^1) \oplus W^{1,p}(v^2) \oplus W^{1,p}(v)\,|\\
\xi^2_{k_2}(1) \in T_{v^2_{k_2}(1)}{\mathcal{S}}_{f_2}(q_2)\,; (\xi^m_j(1),\xi^m_{j+1}(-1)) \in T_{(v^m_j(1),v^m_{j+1}(-1))}Q_{f_m,\rho},\\
(\xi^m_{k_m}(1),\xi(e^{2\pi i m/3})) \in T_{(v^m_{k_m}(1),v(e^{2\pi i m/3}))}Q_{f_m,\rho}\text{ for }m=0,1 \text{ and }j < k_m;\\
(\xi^2_j(1),\xi^2_{j+1}(-1)) \in T_{(v^2_j(1),v^2_{j+1}(-1))}Q_{f_2,\rho}\text{ for }j=1,\dots,k_2-1; (\xi(e^{4\pi i/3}),\xi^2_1(-1)) \in T_{(v(e^{4\pi i/3}),v^2_1(-1))}Q_{f_2,\rho}\}
\end{multline*}
and
\begin{multline*}
T_VQ = \bigoplus_{j=1}^{k_0-1}T_{(v^0_j(1),v^0_{j+1}(-1))}Q_{f_0,\rho} \oplus T_{(v^0_{k_0}(1),v(1))}Q_{f_0,\rho} \oplus \\
\bigoplus_{j=1}^{k_1-1}T_{(v^1_j(1),v^1_{j+1}(-1))}Q_{f_1,\rho} \oplus T_{(v^1_{k_1}(1),v(e^{2\pi i/3}))}Q_{f_1,\rho} \oplus \\
T_{(v(e^{4\pi i/3}),v^2_1(-1))}Q_{f_2,\rho} \oplus \bigoplus_{j=1}^{k_2-1}T_{(v^2_j(1),v^2_{j+1}(-1))}Q_{f_2,\rho}\,.
\end{multline*}
Also we define the space $T_VQ/\Gamma$ in a manner similar to the definition \eqref{eqn:definition_of_TQ_mod_Gamma}; its dimension equals $k_0+k_1+k_2$. Note that this space has a basis defined similarly to \eqref{eqn:canonical_basis_of_TQ_mod_Gamma}, whose elements we denote by $e_j^{v^i}$ for $i=0,1,2$ and $j=1,\dots,k_i$.
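As a quick consistency check on this dimension, the direct sum defining $T_VQ$ has
$$(k_0-1) + 1 + (k_1-1) + 1 + 1 + (k_2-1) = k_0+k_1+k_2$$
summands, and, as in the case of the boundary operator, each of them contributes one dimension to the quotient by the corresponding $\Gamma$.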
Similarly to the case of the boundary operator above, the isomorphism $C(U)$ is defined in two stages. At the first stage we construct a bijection between orientations of $D_U|_{Y_\Gamma^U}$ and orientations of the line $\ddd(T_U\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_UQ/\Gamma)$. Then we construct a bijection between orientations of $D_U|_{Y_\Gamma^U}$ and isomorphisms $C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)$. Once we have these bijections, the desired isomorphism $C(U)$ is the one corresponding to the following orientation:
$$\textstyle (-1)^{k_2}\bigwedge_i\partial_{u^0_i} \wedge \bigwedge_i \partial_{u^1_i} \wedge \bigwedge_i \partial_{u^2_i} \otimes \bigwedge_i e^{u^0}_i \wedge \bigwedge_i e^{u^1}_i \wedge \bigwedge_i e^{u^2}_i \in \ddd(T_U\widetilde{\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2)) \otimes \ddd(T_UQ/\Gamma)\,.$$
The first bijection is constructed as follows. The exact Fredholm triple
$$0 \to D_U|_{Y_\Gamma^U} \to D_U|_{Y_Q^U} \to 0_{T_UQ/\Gamma} \to 0$$
induces an isomorphism
$$\ddd(T_U\widetilde{\mathcal{P}}(q_0,q_1;q_2)) = \ddd(D_U|_{Y_Q^U}) \simeq \ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma)\,,$$
where the first equality is due to the fact that $T_U \widetilde{\mathcal{P}}(q_0,q_1;q_2) = \ker D_U|_{Y_Q^U}$, which follows from the definition of the pearly triangles. Tensoring with $\ddd(T_UQ/\Gamma)$, we see that indeed there is a bijection between orientations of $\ddd(D_U|_{Y_\Gamma^U})$ and orientations of $\ddd(T_U\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_UQ/\Gamma)$.
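One way to spell out the last step: tensoring both sides of the displayed isomorphism with $\ddd(T_UQ/\Gamma)$ gives
$$\ddd(T_U\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_UQ/\Gamma) \simeq \ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma)^{\otimes 2} \simeq \ddd(D_U|_{Y_\Gamma^U})\,,$$
where the last identification uses the canonical positive orientation of the tensor square of a real line.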
The second bijection is constructed as follows. Consider the family of operators
$$D_{A_0} \sharp T{\mathcal{S}}_{f_0}(q_0) \oplus D_{A_1} \sharp T{\mathcal{S}}_{f_1}(q_1) \oplus D_U|_{Y_\Gamma^U}\,.$$
We first deform $U$ into $U' \in (C^\infty(D^2,S^1;M,L))^{k_0+k_1+k_2+1}$ with
$$\ev(U') \in \{q_0\} \times \Delta_L^{k_0} \times \{q_1\} \times \Delta_L^{k_1} \times \Delta_L^{k_2} \times \{q_2\}$$
just like we did in \S\ref{sss:boundary_op_Lagr_QH} when defining the boundary operator. Our operator therefore deforms into
$$D_{A_0} \sharp T{\mathcal{S}}_{f_0}(q_0) \oplus D_{A_1} \sharp T{\mathcal{S}}_{f_1}(q_1) \oplus D_{U'}|_{Y_\Gamma^{U'}}\,.$$
Next, we can deform the incidence conditions at $q_0,q_1$ in the sense of the isomorphism \eqref{eqn:iso_abstract_deformation_incidence_condition_W}, to arrive at the boundary glued operator $D_{A_0} \sharp D_{A_1} \sharp D_{U'}|_{X_\Gamma^{U'}}$, which after deformation yields a representative of the family $D_{A_2} \sharp T{\mathcal{S}}_{f_2}(q_2)$. These deformations, together with the direct sum isomorphism, yield a string of isomorphisms
\begin{align*}
\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}_{f_0}(q_0)) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}_{f_1}(q_1)) &\simeq \ddd(D_{A_0} \sharp T{\mathcal{S}}_{f_0}(q_0) \oplus D_{A_1} \sharp T{\mathcal{S}}_{f_1}(q_1) \oplus D_U|_{Y_\Gamma^U})\\
&\simeq \ddd(D_{A_0} \sharp T{\mathcal{S}}_{f_0}(q_0) \oplus D_{A_1} \sharp T{\mathcal{S}}_{f_1}(q_1) \oplus D_{U'}|_{Y_\Gamma^{U'}})\\
&\simeq \ddd(D_{A_0} \sharp D_{A_1} \sharp D_{U'}|_{X_\Gamma^{U'}})\\
&\simeq \ddd(D_{A_2} \sharp T{\mathcal{S}}_{f_2}(q_2))
\end{align*}
whose composition indeed shows that there is a bijection between isomorphisms $C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)$ and orientations of $D_U|_{Y_\Gamma^U}$.
We have therefore completed the definition of the matrix elements of the product \eqref{eqn:matrix_elts_product_QH} and hence we have defined the product as a bilinear operation \eqref{eqn:product_QH_chain_level}.
We now prove
\begin{thm}\label{thm:product_QH_is_chain_map}
The operation $\star$ is a chain map. More precisely, we have
$$\partial_{{\mathcal{D}}_2} \circ \star = \star \circ (\partial_{{\mathcal{D}}_0} \otimes \id + (-1)^{n - k} \id \otimes \partial_{{\mathcal{D}}_1}) {:\ } QC_k({\mathcal{D}}_0:L) \otimes QC_l({\mathcal{D}}_1:L) \to QC_{k+l-n-1}({\mathcal{D}}_2:L)\,.$$
\end{thm}
\begin{prf}
It suffices to prove the vanishing of the matrix element
\begin{multline}\label{eqn:vanishing_matrix_elts_to_prove_product_is_chain_map}
\sum_{q_2' \in \Crit f_2} \sum_{\substack{A_2' \in \pi_2(M,L,q_2'):\\ |q_2'| - \mu(A_2') = |q_2| - \mu(A_2) + 1}} \sum_{\substack{([U],[w]) \in {\mathcal{P}}(q_0,q_1;q_2') \times {\mathcal{P}}(q_2',q_2): \\ A_0 \sharp A_1 \sharp U = A_2', A_2' \sharp w = A_2}} C(w) \circ C(U) \\
- \sum_{q_0' \in \Crit f_0} \sum_{\substack{A_0' \in \pi_2(M,L,q_0'): \\ |q_0| - \mu(A_0) = |q_0'| - \mu(A_0') + 1}} \sum_{\substack{([w],[U]) \in {\mathcal{P}}(q_0,q_0') \times {\mathcal{P}}(q_0',q_1;q_2): \\ A_0 \sharp w = A_0',A_0' \sharp A_1 \sharp U = A_2}} C(U) \circ (C(w) \otimes \id)\\
- (-1)^{n-k} \sum_{q_1' \in \Crit f_1} \sum_{\substack{A_1' \in \pi_2(M,L,q_1'): \\ |q_1| - \mu(A_1) = |q_1'| - \mu(A_1') + 1}} \sum_{\substack{([w],[U]) \in {\mathcal{P}}(q_1,q_1') \times {\mathcal{P}}(q_0,q_1';q_2): \\ A_1 \sharp w = A_1', A_0 \sharp A_1' \sharp U = A_2}} C(U) \circ (\id \otimes C(w))
\end{multline}
as a homomorphism
$$C(q_0,A_0) \otimes C(q_1,A_1) \to C(q_2,A_2)\,.$$
Let us denote by ${\mathcal{P}}^1(q_0,q_1;q_2)$ the $1$-dimensional part of the space of pearly triangles. Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} described the structure of the compactification $\overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$. The space ${\mathcal{P}}^1(q_0,q_1;q_2)$ fails to be compact in one of three ways: either one of the gradient trajectories undergoes Morse breaking, or one of the gradient trajectories shrinks to a point, or a holomorphic disk breaks into two. Note that the breaking can happen at the core and in the resulting degenerate triangle one of the disks (the one carrying three marked points) may be constant. We redefine $\overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$ to be the union of the compactified connected components of ${\mathcal{P}}^1(q_0,q_1;q_2)$ where two boundary points of different components are identified if they represent the same degenerate pearly triangle, where by a degenerate pearly triangle we mean a pearly triangle where precisely one of the gradient trajectories connecting two holomorphic disks has length zero. Thus $\overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$ has the structure of a compact $1$-dimensional topological manifold with boundary. Its boundary points represent Morse breaking and are in an obvious bijection with the summands of the matrix element \eqref{eqn:vanishing_matrix_elts_to_prove_product_is_chain_map}. For $\delta \in \partial \overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$ let $C(\delta)$ denote the summand of \eqref{eqn:vanishing_matrix_elts_to_prove_product_is_chain_map} corresponding to $\delta$ (together with the sign). It therefore suffices to prove the following: for any connected component $\Delta \subset \overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$ with $\partial \Delta = \{\delta,\delta'\}$ we have $C(\delta) + C(\delta') = 0$.
We now proceed to the computation of the various summands. Fix a point $\delta \in \overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$ and let $[X] \in {\mathcal{P}}^1(q_0,q_1;q_2)$ lie close to $\delta$. As we saw, the isomorphism $C(\delta) {:\ } C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)$ determines an orientation of $D_X|_{Y_\Gamma^X}$, which in turn corresponds to an orientation of $\ddd(T_X\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_XQ/\Gamma)$. For $V = (v^0,v^1,v^2;v) \in \widetilde{\mathcal{P}}(q_0,q_1;q_2)$ we abbreviate
$$\textstyle \bigwedge_i \partial_{v^0_i} \wedge \bigwedge_i \partial_{v^1_i} \wedge \bigwedge_i \partial_{v^2_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{v^0}_i \wedge \bigwedge_i e^{v^1}_i \wedge \bigwedge_i e^{v^2}_i \in \ddd(T_V \widetilde {\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_VQ/\Gamma)$$
to
$$\textstyle \bigwedge_i\partial_{V_i} \wedge \text{inward}_\delta\otimes \bigwedge_ie^V_i\,,$$
where $\text{inward}_\delta \in T_V \widetilde {\mathcal{P}}(q_0,q_1;q_2)$ is a tangent vector directed away from the boundary point $\delta$. This should cause no confusion. We also refer to this particular orientation as the \textbf{standard orientation} of the line $\ddd(T_V \widetilde {\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_VQ/\Gamma)$.
We have the following lemma.
\begin{lemma}\label{lem:computation_induced_oris_product_is_chain_map}
We have the following cases:
\begin{itemize}
\item If $\delta = ([U],[w]) \in {\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2') \times {\mathcal{P}}_r(q_2',q_2)$, the isomorphism $C(\delta) = C(w) \circ C(U)$ corresponds to $(-1)^{k_0+k_1}$ times the standard orientation of $\ddd(T_X\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_XQ/\Gamma)$.
\item If $\delta = ([w],[U]) \in {\mathcal{P}}_r(q_0,q_0') \times {\mathcal{P}}_{k_0,k_1,k_2}(q_0',q_1;q_2)$, the isomorphism $C(\delta) = -C(U) \circ (C(w) \otimes \id)$ corresponds to $(-1)^{k_0+k_1+r}$ times the standard orientation of $\ddd(T_X\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_XQ/\Gamma)$.
\item If $\delta = ([w],[U]) \in {\mathcal{P}}_r(q_1,q_1') \times {\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1';q_2)$, then the isomorphism $C(\delta) = -(-1)^{n-k} C(U) \circ (\id \otimes C(w))$ corresponds to $(-1)^{k_0+k_1+r}$ times the standard orientation of $\ddd(T_X\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_XQ/\Gamma)$.
\end{itemize}
\end{lemma}
\noindent Lemma \ref{lem:computation_induced_oris_product_is_chain_map} is proved below. In order to complete the proof of the vanishing of the matrix element \eqref{eqn:vanishing_matrix_elts_to_prove_product_is_chain_map}, we also need to keep track of the change of orientations when crossing disk collision/breaking points in $\overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$. This is described in the following lemma:
\begin{lemma}\label{lem:sign_change_product_is_chain_map}
Let an isomorphism $C {:\ } C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)$ be given and let $[V],[W] \in {\mathcal{P}}^1(q_0,q_1;q_2)$ be two points lying on two different sides of a degenerate pearly triangle in the space $\overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$, and close to it. Suppose that the isomorphism $C$ corresponds to the orientation
$$\textstyle\bigwedge_i \partial_{V_i} \wedge \eta_V \otimes \bigwedge_i e^V_i \in \ddd(T_V\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_VQ/\Gamma)$$
where $\eta_V \in T_V\widetilde{\mathcal{P}}(q_0,q_1;q_2)$ is an arbitrary vector transverse to the infinitesimal action of the automorphism group at $V$. Then $C$ corresponds to the orientation
$$\textstyle\epsilon \bigwedge_i \partial_{W_i} \wedge \eta_W \otimes \bigwedge_i e^W_i \in \ddd(T_W\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_WQ/\Gamma)\,,$$
where $\eta_W \in T_W\widetilde{\mathcal{P}}(q_0,q_1;q_2)$ points in the same direction \footnote{Note that this is well-defined!} as $\eta_V$ and $\epsilon \in \{\pm 1\}$ is a sign which equals $-1$ if the passage from $V$ to $W$ happens through disk breaking/collision in one of the legs $0,1$ of the triangle, and it equals $1$ if the breaking/collision happens in leg $2$ of the triangle.
\end{lemma}
\noindent This lemma is proved below. Let us see how Lemmas \ref{lem:computation_induced_oris_product_is_chain_map}, \ref{lem:sign_change_product_is_chain_map} allow us to complete the proof of Theorem \ref{thm:product_QH_is_chain_map}. There are various combinatorial types of possible components $\Delta \subset \overline{\mathcal{P}}{}^1(q_0,q_1;q_2)$. If $\partial \Delta = \{\delta,\delta'\}$, the proof that $C(\delta) + C(\delta') = 0$ follows an identical argument for all of these types, therefore we will only give a full proof for one of them: suppose $\delta = ([U],[w]) \in {\mathcal{P}}_{k_0,k_1,k_2}(q_0,q_1;q_2') \times {\mathcal{P}}_r(q_2',q_2)$ and $\delta' = ([w'],[U']) \in {\mathcal{P}}_{r'}(q_1,q_1') \times {\mathcal{P}}_{k_0',k_1',k_2'}(q_0,q_1';q_2)$. Let $X^t \in \widetilde{\mathcal{P}}(q_0,q_1;q_2)$ be a continuous family of pearly triangles such that $[X^t]$ gives a continuous parametrization of $\Delta \cap {\mathcal{P}}^1(q_0,q_1;q_2)$, that is $X^t$ is defined for all but a finite number of values of $t \in [0,1]$. According to Lemma \ref{lem:computation_induced_oris_product_is_chain_map}, the isomorphism $C(\delta)$ corresponds to
$$\textstyle (-1)^{k_0+k_1} \bigwedge_i\partial_{X^t_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{X^t}_i \in \ddd(T_{X^t}\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_{X^t}Q/\Gamma)$$
for small positive $t$. As $t$ grows from $0$ to $1$, $X^t$ undergoes a number of jumps which correspond to instances of disk collision/breaking. Let us denote by $n_i$ the number of disk collision/breaking instances which take place in leg $i$ of the triangle. Then clearly we have
$$n_0 \equiv k_0 + k_0' \mod 2\,,\quad n_1 \equiv k_1+r' + k_1' \mod 2\,,\quad n_2 \equiv k_2 + r + k_2' \mod 2\,.$$
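Summing the first two of these congruences gives
$$k_0 + k_1 + n_0 + n_1 \equiv 2k_0 + 2k_1 + k_0' + k_1' + r' \equiv k_0' + k_1' + r' \mod 2\,,$$
which is the parity identity behind the equality in the next display.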
Therefore for $t$ close to $1$ the isomorphism $C(\delta)$ corresponds to the orientation
$$\textstyle (-1)^{k_0+k_1+n_0+n_1} \bigwedge_i\partial_{X^t_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{X^t}_i = (-1)^{k_0' + k_1' + r'} \bigwedge_i\partial_{X^t_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{X^t}_i\,,$$
as follows from Lemma \ref{lem:sign_change_product_is_chain_map}. On the other hand, Lemma \ref{lem:computation_induced_oris_product_is_chain_map} implies that the isomorphism $C(\delta')$ corresponds to the orientation
$$\textstyle (-1)^{k_0'+k_1'+r'} \bigwedge_i\partial_{X^t_i} \wedge \text{inward}_{\delta'} \otimes \bigwedge_i e^{X^t}_i \in \ddd(T_{X^t}\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_{X^t}Q/\Gamma)$$
for $t$ close to $1$. Since clearly $\text{inward}_\delta = -\text{inward}_{\delta'}$, we see that the orientations are opposite and therefore $C(\delta) + C(\delta') = 0$. Similar arguments prove the vanishing of this sum for other combinatorial types of the components $\Delta$. This finishes the proof of the theorem. \qed
\end{prf}
This means that, modulo Lemmas \ref{lem:computation_induced_oris_product_is_chain_map}, \ref{lem:sign_change_product_is_chain_map}, we have defined a bilinear operation on homology:
$$\star {:\ } QH_k({\mathcal{D}}_0:L) \otimes QH_l({\mathcal{D}}_1:L) \to QH_{k+l-n}({\mathcal{D}}_2:L)\,.$$
\noindent We now prove Lemma \ref{lem:computation_induced_oris_product_is_chain_map}.
\begin{prf}[of Lemma \ref{lem:computation_induced_oris_product_is_chain_map}] We only prove the lemma assuming $r > 0$. The remaining case can be handled using arguments similar to those of the proof of Lemma \ref{lem:induced_or_pearly_spaces_boundary_op_squared}.
Consider the first case. Using arguments similar to those appearing in the proof of Lemma \ref{lem:induced_or_pearly_spaces_boundary_op_squared}, we obtain the following commutative diagram:
\begin{equation}\label{dia:induced_or_product_chain_map_leg_2}
\resizebox{\textwidth}{!}{\xymatrix{\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(T_wQ/\Gamma) \otimes \ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma) \otimes \ddd(T_{x^2}^{k_2+1}Q/\Gamma) \ar[r] \ar[d] & \ddd(D_w|_{Y_Q^w}) \otimes \ddd(D_U|_{Y_Q^U}) \otimes \ddd(T_{x^2}^{k_2+1}Q/\Gamma) \ar[d] \\
\ddd(D_w|_{Y_\Gamma^w} \oplus D_U|_{Y_\Gamma^U}) \otimes \ddd(T_wQ/\Gamma \oplus T_UQ/\Gamma) \otimes \ddd(T_{x^2}^{k_2+1}Q/\Gamma) \ar[r] \ar[d] & \ddd(D_w|_{Y_Q^w} \oplus D_U|_{Y_Q^U}) \otimes \ddd(T_{x^2}^{k_2+1}Q/\Gamma) \ar[d] \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(T_X^{x^2:\neg k_2+1}Q/\Gamma) \otimes \ddd(T_{x^2}^{k_2+1}Q/\Gamma) \ar[r] \ar[d] & \ddd(D_X|_{Y_Q^{X,x^2:\neg k_2+1}}) \otimes \ddd(T_{x^2}^{k_2+1}Q/\Gamma) \ar[d] \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(T_XQ/\Gamma) \ar[r] & \ddd(D_X|_{Y_Q^X})}}
\end{equation}
where
$$T_{x^2}^{k_2+1}Q/\Gamma = T_{(x^2_{k_2}(1),x^2_{k_2+1}(-1))}Q/\Gamma_{(x^2_{k_2}(1),x^2_{k_2+1}(-1))}\,,$$
$$T_X^{x^2:\neg k_2+1}Q/\Gamma = \ker \big(T_XQ/\Gamma \to T_{x^2}^{k_2+1}Q/\Gamma\big)\,,$$
$$Y_Q^{X,x^2:\neg k_2+1} = \{\Xi = (\xi^0,\xi^1,\xi^2,\xi) \in Y_Q^X\,|\, (\xi^2_{k_2}(1),\xi^2_{k_2+1}(-1)) \in \Gamma_{(x^2_{k_2}(1),x^2_{k_2+1}(-1))}\}\,.$$
The top and bottom squares of the diagram are obtained from suitable exact Fredholm squares, while the middle square is obtained from gluing. Now, by definition, the isomorphism $C(w)$ corresponds to the orientation ${\mathfrak{o}}_w \in \ddd(D_w|_{Y_\Gamma^w})$ which in turn corresponds to the orientation $(-1)^{r+1}\bigwedge_i \partial_{w_i} \otimes \bigwedge_i e^w_i \in \ddd(D_w|_{Y_Q^w}) \otimes \ddd(T_wQ/\Gamma)$; the isomorphism $C(U)$ corresponds to the orientation ${\mathfrak{o}}_U \in \ddd(D_U|_{Y_\Gamma^U})$ which in turn corresponds to the orientation $(-1)^{k_2}\bigwedge_i \partial_{U_i} \otimes \bigwedge_i e^U_i \in \ddd(D_U|_{Y_Q^U}) \otimes \ddd(T_UQ/\Gamma)$. The direct sum isomorphism, composed with the gluing isomorphism, gives the isomorphism
\begin{equation}\label{eqn:iso_det_lines_composition_product_boundary_op_leg_2}
\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(D_U|_{Y_\Gamma^U}) \simeq \ddd(D_X|_{Y_\Gamma^X})\,;
\end{equation}
let us denote by ${\mathfrak{o}}_X$ the image of ${\mathfrak{o}}_w \otimes {\mathfrak{o}}_U$ under this isomorphism. Lemma \ref{lem:bijection_isos_C_q_A_ors_D_u_prime} together with the deformation isomorphism says that the isomorphism $C(w) \circ C(U)$ corresponds to an orientation of $\ddd(D_X|_{Y_\Gamma^X})$; it is not hard to see that this orientation is precisely ${\mathfrak{o}}_X$ --- indeed, it follows from associativity of gluing and arguments involving deformation of incidence conditions that the following diagram commutes:
$$\xymatrix{\ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}(q_1)) \ar[r] \ar[d] & \ddd(D_{A_2}\sharp T{\mathcal{S}}(q_2)) \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}(q_1)) \ar[ur] }$$
where the arrows pointing to the right come from deforming the trajectories so that their disks touch, and then applying Lemma \ref{lem:bijection_isos_C_q_A_ors_D_u_prime}, therefore they correspond to the isomorphism $C(w) \circ C(U)$, while the vertical arrow is the isomorphism \eqref{eqn:iso_det_lines_composition_product_boundary_op_leg_2} tensored with identity.
Now diagram \eqref{dia:induced_or_product_chain_map_leg_2} maps
$$\resizebox{\textwidth}{!}{\xymatrix{{\mathfrak{o}}_w \otimes \bigwedge_i e^w_i \otimes {\mathfrak{o}}_U \otimes \bigwedge_i e^U_i \otimes e^{x^2}_{k_2+1} \ar@{|->}[r] \ar@{|->}[d] & (-1)^{r+1}\bigwedge_i\partial_{w_i} \otimes (-1)^{k_2} \bigwedge_i \partial_{U_i} \otimes e^{x^2}_{k_2+1} \ar@{|->}[d]\\
({\mathfrak{o}}_w \wedge {\mathfrak{o}}_U) \otimes (\bigwedge_i e^w_i \wedge \bigwedge_i e^U_i) \otimes e^{x^2}_{k_2+1} \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2+r+1} (\bigwedge_i\partial_{w_i} \wedge \bigwedge_i \partial_{U_i}) \otimes e^{x^2}_{k_2+1} \ar@{|->}[d] \\
(-1)^{(r-1)(k_0+k_1+k_2)}{\mathfrak{o}}_X \otimes (\bigwedge_i e^{x^0}_i \wedge \bigwedge_i e^{x^1}_i \wedge \bigwedge_{i \neq k_2+1} e^{x^2}_i) \otimes e^{x^2}_{k_2+1} \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2+r + 1 + r(k_0+k_1+k_2)} \bigwedge_i\partial_{X_i} \otimes e^{x^2}_{k_2+1} \ar@{|->}[d] \\
(-1)^{(r-1)(k_0+k_1+k_2 + 1)}{\mathfrak{o}}_X \otimes \bigwedge_i e^X_i \ar@{|->}[r] & (-1)^{k_2 + r + 1 + r(k_0+k_1+k_2)} \bigwedge_i\partial_{X_i} \wedge \text{inward}_\delta}}$$
We will explain in a moment why it is so. Assuming this, we see from the bottom arrow, which is the goal of this computation, that the orientation ${\mathfrak{o}}_X$, and therefore the isomorphism $C(\delta) = C(w) \circ C(U)$, corresponds to the orientation
$$\textstyle (-1)^{k_0+k_1}\bigwedge_i\partial_{X_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^X_i\,,$$
as claimed.
Let us briefly explain the diagram. In the top square the top arrow comes from the definition of the orientations ${\mathfrak{o}}_w,{\mathfrak{o}}_U$, the left arrow is just the direct sum isomorphism composed with the interchange isomorphism, which does not produce a sign since $\ind D_U|_{Y_\Gamma^U} = 0$. The right arrow comes from the normalization property \S\ref{par:normalization_pty}. In the middle square we use the fact that $\partial_{u^i_j} \mapsto \partial_{x^i_j}$ for all relevant $i,j$, while $\partial_{w_i}\mapsto \partial_{x^2_{k_2+i}}$, and the additional sign comes from the interchange of factors in the wedge product. In the left arrow we similarly have $e_j^{u^i} \mapsto e_j^{x^i}$ and $e_i^w \mapsto e_{k_2+i+1}^{x^2}$, and the additional sign comes from interchange of factors in the wedge product. In the bottom square, in the right arrow we use the fact that $e_{k_2+1}^{x^2}$ lifts to $\text{inward}_\delta \in \ker D_X|_{Y_Q^X}$, while in the left arrow the additional sign again comes from interchange of factors in the wedge product.
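For the record, the parity computation behind the sign $(-1)^{k_0+k_1}$ read off from the bottom row is, writing $K = k_0+k_1+k_2$,
$$\big(k_2 + r + 1 + rK\big) - (r-1)(K+1) = k_2 + K + 2 \equiv k_0 + k_1 \mod 2\,.$$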
Consider now the second case. We have the following commutative diagram, obtained by methods similar to those above.
\begin{equation}\label{dia:induced_or_product_chain_map_leg_0}
\resizebox{\textwidth}{!}{\xymatrix{\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma) \otimes \ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(T_wQ/\Gamma) \otimes \ddd(T_{x^0}^rQ/\Gamma) \ar[r] \ar[d] & \ddd(D_U|_{Y_Q^U}) \otimes \ddd(D_w|_{Y_Q^w}) \otimes \ddd(T_{x^0}^rQ/\Gamma) \ar[d] \\
\ddd(D_U|_{Y_\Gamma^U} \oplus D_w|_{Y_\Gamma^w}) \otimes \ddd(T_UQ/\Gamma \oplus T_wQ/\Gamma ) \otimes \ddd(T_{x^0}^rQ/\Gamma) \ar[r] \ar[d] & \ddd(D_U|_{Y_Q^U} \oplus D_w|_{Y_Q^w}) \otimes \ddd(T_{x^0}^rQ/\Gamma) \ar[d] \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(T_X^{x^0:\neg r}Q/\Gamma) \otimes \ddd(T_{x^0}^rQ/\Gamma) \ar[r] \ar[d] & \ddd(D_X|_{Y_Q^{X,x^0:\neg r}}) \otimes \ddd(T_{x^0}^rQ/\Gamma) \ar[d] \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(T_XQ/\Gamma) \ar[r] & \ddd(D_X|_{Y_Q^X})}}
\end{equation}
By definition, the isomorphism $C(w)$ corresponds to orientations
$$\textstyle(-1)^{r+1}\bigwedge_i \partial_{w_i} \otimes \bigwedge_i e^w_i \in \ddd(D_w|_{Y_Q^w}) \otimes \ddd(T_wQ/\Gamma)$$
and ${\mathfrak{o}}_w \in \ddd(D_w|_{Y_\Gamma^w})$, while $C(U)$ corresponds to orientations
$$\textstyle(-1)^{k_2}\bigwedge_i \partial_{U_i} \otimes \bigwedge_i e^U_i \in \ddd(D_U|_{Y_Q^U}) \otimes \ddd(T_UQ/\Gamma)$$
and ${\mathfrak{o}}_U \in \ddd(D_U|_{Y_\Gamma^U})$. We have the isomorphism
$$\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(D_w|_{Y_\Gamma^w}) \simeq \ddd(D_X|_{Y_\Gamma^X})$$
obtained by composing the direct sum isomorphism with deformation and gluing; let ${\mathfrak{o}}_X$ be the image of ${\mathfrak{o}}_U \otimes {\mathfrak{o}}_w$ under this isomorphism. This isomorphism enters in the vertical arrow of the following commutative diagram:
$$\xymatrix{\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}(q_1)) \ar[r] \ar[d] & \ddd(D_{A_2}\sharp T{\mathcal{S}}(q_2)) \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}(q_1)) \ar[ur] }$$
where the horizontal arrows correspond to the isomorphism $C(U) \circ (C(w) \otimes \id)$. It follows that ${\mathfrak{o}}_X$ is precisely the orientation corresponding to this isomorphism by an obvious modification of Lemma \ref{lem:bijection_isos_C_q_A_ors_D_u_prime} applied to a deformed triangle where the disks touch. Therefore the diagram \eqref{dia:induced_or_product_chain_map_leg_0} maps
$$\resizebox{\textwidth}{!}{\xymatrix{{\mathfrak{o}}_U \otimes \bigwedge_i e^U_i \otimes {\mathfrak{o}}_w \otimes \bigwedge_i e^w_i \otimes e^{x^0}_r \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2} \bigwedge_i\partial_{U_i} \otimes (-1)^{r+1}\bigwedge_i \partial_{w_i} \otimes e^{x^0}_r \ar@{|->}[d] \\
(-1)^{k_0+k_1+k_2}({\mathfrak{o}}_U \wedge {\mathfrak{o}}_w) \otimes (\bigwedge_i e^U_i \wedge \bigwedge_i e^w_i) \otimes e^{x^0}_r \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2+r+1} (\bigwedge_i\partial_{U_i} \wedge \bigwedge_i \partial_{w_i}) \otimes e^{x^0}_r \ar@{|->}[d] \\
(-1)^{r(k_0+k_1+k_2)} {\mathfrak{o}}_X \otimes (\bigwedge_{i \neq r} e^{x^0}_i \wedge \bigwedge_i e^{x^1}_i \wedge \bigwedge_i e^{x^2}_i) \otimes e^{x^0}_r \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2+r + 1 + r(k_0+k_1+k_2)} \bigwedge_i\partial_{X_i} \otimes e^{x^0}_r \ar@{|->}[d] \\
(-1)^{(r+1)(k_0+k_1+k_2)} {\mathfrak{o}}_X \otimes \bigwedge_i e^X_i \ar@{|->}[r] & (-1)^{k_2 + r + 1 + r(k_0+k_1+k_2)} \bigwedge_i\partial_{X_i} \wedge \text{inward}_{\delta}}}$$
whence it follows that ${\mathfrak{o}}_X$ corresponds to the isomorphism $C(U) \circ (C(w) \otimes \id)$ and to the orientation
$$\textstyle (-1)^{k_0 + k_1 + r + 1}\bigwedge_i\partial_{X_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^X_i\,,$$
as claimed. Note that $C(\delta) = -C(U) \circ (C(w) \otimes \id)$ by definition.
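Again, the exponent in the claim is the mod $2$ difference of the coefficients appearing in the bottom row of the diagram:
$$\big(k_2+r+1+r(k_0+k_1+k_2)\big) + (r+1)(k_0+k_1+k_2) \equiv k_2+r+1+(k_0+k_1+k_2) \equiv k_0+k_1+r+1 \pmod 2\,.$$
The identical arithmetic, with $k_0+k_1+k_2$ replaced by $k_1+k_2$ and the summand $k_0$ carried along, produces the same exponent $k_0+k_1+r+1$ in the third case treated next.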
We now turn to the third case. We similarly have the following commutative diagram:
\begin{equation}\label{dia:induced_or_product_chain_map_leg_1}
\resizebox{\textwidth}{!}{\xymatrix{\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma) \otimes \ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(T_wQ/\Gamma) \otimes \ddd(T_{x^1}^rQ/\Gamma) \ar[d] \ar[r] & \ddd(D_U|_{Y_Q^U}) \otimes \ddd(D_w|_{Y_Q^w}) \otimes \ddd(T_{x^1}^rQ/\Gamma) \ar[d] \\
\ddd(D_U|_{Y_\Gamma^U} \oplus D_w|_{Y_\Gamma^w}) \otimes \ddd(T_UQ/\Gamma \oplus T_wQ/\Gamma ) \otimes \ddd(T_{x^1}^rQ/\Gamma) \ar[d] \ar[r] & \ddd(D_U|_{Y_Q^U} \oplus D_w|_{Y_Q^w}) \otimes \ddd(T_{x^1}^rQ/\Gamma) \ar[d]\\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(T_X^{x^1:\neg r}Q/\Gamma) \otimes \ddd(T_{x^1}^rQ/\Gamma) \ar[d] \ar[r] & \ddd(D_X|_{Y_Q^{X,x^1:\neg r}}) \otimes \ddd(T_{x^1}^rQ/\Gamma) \ar[d] \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(T_XQ/\Gamma) \ar[r] & \ddd(D_X|_{Y_Q^X})}}
\end{equation}
As before, the isomorphism $C(w)$ corresponds to orientations
$$\textstyle(-1)^{r+1}\bigwedge_i \partial_{w_i} \otimes \bigwedge_i e^w_i \in \ddd(D_w|_{Y_Q^w}) \otimes \ddd(T_wQ/\Gamma)$$
and ${\mathfrak{o}}_w \in \ddd(D_w|_{Y_\Gamma^w})$, while $C(U)$ corresponds to orientations
$$\textstyle(-1)^{k_2}\bigwedge_i \partial_{U_i} \otimes \bigwedge_i e^U_i \in \ddd(D_U|_{Y_Q^U}) \otimes \ddd(T_UQ/\Gamma)$$
and ${\mathfrak{o}}_U \in \ddd(D_U|_{Y_\Gamma^U})$. We have the isomorphism
$$\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(D_w|_{Y_\Gamma^w}) \simeq \ddd(D_X|_{Y_\Gamma^X})$$
obtained by composing the direct sum isomorphism with deformation and gluing; let ${\mathfrak{o}}_X$ be the image of ${\mathfrak{o}}_U \otimes {\mathfrak{o}}_w$ under this isomorphism. This isomorphism enters in the bottom vertical arrow of the following commutative diagram:
$$\xymatrix{\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}(q_1)) \ar[rd] \ar[d]^{R} & \\
\ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(D_w|_{Y_\Gamma^w}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}(q_1)) \ar[r] \ar[d] & \ddd(D_{A_2}\sharp T{\mathcal{S}}(q_2)) \\
\ddd(D_X|_{Y_\Gamma^X}) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_{A_1} \sharp T{\mathcal{S}}(q_1)) \ar[ur] & }$$
where the top and bottom right arrows correspond to the isomorphism $C(U) \circ (\id \otimes C(w))$, and the top vertical arrow $R$ is the interchange of factors times the Koszul sign
$$(-1)^{\ind D_{A_0} \sharp T{\mathcal{S}}(q_0)\cdot \ind D_w|_{Y_\Gamma^w}} = (-1)^{n-k}\,.$$
It follows that the isomorphism $C(U) \circ (\id \otimes C(w))$ corresponds to the orientation $(-1)^{n-k}{\mathfrak{o}}_X$. The diagram \eqref{dia:induced_or_product_chain_map_leg_1} maps
$$\xymatrix{{\mathfrak{o}}_U \otimes \bigwedge_i e^U_i \otimes {\mathfrak{o}}_w \otimes \bigwedge_i e^w_i \otimes e^{x^1}_r \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2} \bigwedge_i\partial_{U_i} \otimes (-1)^{r+1}\bigwedge_i \partial_{w_i} \otimes e^{x^1}_r \ar@{|->}[d] \\
(-1)^{k_0+k_1+k_2}({\mathfrak{o}}_U \wedge {\mathfrak{o}}_w) \otimes (\bigwedge_i e^U_i \wedge \bigwedge_i e^w_i) \otimes e^{x^1}_r \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2 + r + 1} (\bigwedge_i\partial_{U_i} \wedge \bigwedge_i \partial_{w_i}) \otimes e^{x^1}_r \ar@{|->}[d] \\
(-1)^{r(k_1+k_2) + k_0} {\mathfrak{o}}_X \otimes (\bigwedge_i e^{x^0}_i \wedge \bigwedge_{i \neq r} e^{x^1}_i \wedge \bigwedge_i e^{x^2}_i) \otimes e^{x^1}_r \ar@{|->}[r] \ar@{|->}[d] & (-1)^{k_2 + r + 1 + r(k_1+k_2)} \bigwedge_i\partial_{X_i} \otimes e^{x^1}_r \ar@{|->}[d] \\
(-1)^{(r+1)(k_1+k_2) + k_0} {\mathfrak{o}}_X \otimes \bigwedge_i e^X_i \ar@{|->}[r] & (-1)^{k_2 + r + 1 + r(k_1+k_2)} \bigwedge_i\partial_{X_i} \wedge \text{inward}_{\delta}}$$
whence it follows that ${\mathfrak{o}}_X$, which corresponds to the isomorphism $(-1)^{n-k}C(U) \circ (\id \otimes C(w))$, corresponds to the orientation
$$\textstyle (-1)^{k_0 + k_1 + r + 1}\bigwedge_i\partial_{X_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^X_i\,.$$
It follows that the isomorphism $C(U) \circ (\id \otimes C(w))$ corresponds to the orientation
$$\textstyle (-1)^{n-k+ k_0 + k_1 + r + 1}\bigwedge_i\partial_{X_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^X_i\,,$$
as claimed. Note that by definition $C(\delta) = -(-1)^{n-k}C(U) \circ (\id \otimes C(w))$. This finishes the proof of Lemma \ref{lem:computation_induced_oris_product_is_chain_map}. \qed
\end{prf}
It remains to prove Lemma \ref{lem:sign_change_product_is_chain_map}.
\begin{prf}
We can assume without loss of generality that the two disks which touch in the degenerate triangle are separated by a positive length gradient trajectory in $V$, and therefore in $W$ the two disks are glued together into a single one. Assume first that at least one of the disks lies in leg $0$ of the triangle, while the other disk may either belong to the same leg or be the core. Therefore in $V$ the two disks are separated by a piece of gradient trajectory, and let us assume its number is $j$. Then we have the following commutative diagram:
\begin{equation}\label{dia:sign_change_product_is_chain_map_leg_0}
\xymatrix{\ddd(D_V|_{Y_\Gamma^V}) \otimes \ddd(T_VQ/\Gamma) \ar[r] & \ddd(D_V|_{Y_Q^V})\\
\ddd(D_V|_{Y_\Gamma^V}) \otimes \ddd(T_V^{v^0:\neg j}Q/\Gamma) \otimes \ddd(T_{v^0}^jQ/\Gamma) \ar[r] \ar[u] \ar[d] & \ddd(D_V|_{Y_Q^{V,v^0:\neg j}}) \otimes \ddd(T_{v^0}^jQ/\Gamma) \ar[u] \ar[d] \\
\ddd(D_W|_{Y_\Gamma^W}) \otimes \ddd(T_WQ/\Gamma) \otimes \ddd(T_{v^0}^jQ/\Gamma) \ar[r] & \ddd(D_W|_{Y_Q^W}) \otimes \ddd(T_{v^0}^jQ/\Gamma)}
\end{equation}
where the top square corresponds to the exact Fredholm square
$$\xymatrix{D_V|_{Y_\Gamma^V} \ar@{=}[r] \ar[d] & D_V|_{Y_\Gamma^V} \ar[r] \ar[d] & \ar[d] 0\\
D_V|_{Y_Q^{V,v^0:\neg j}} \ar[r] \ar[d] & D_V|_{Y_Q^V} \ar[r] \ar[d] & T_{v^0}^jQ/\Gamma \ar@{=}[d] \\
T_V^{v^0:\neg j}Q/\Gamma \ar[r] & T_VQ/\Gamma \ar[r] & T_{v^0}^jQ/\Gamma}$$
To obtain the bottom square, we note that, similarly to the proof of Lemma \ref{lem:induced_or_pearly_spaces_boundary_op_squared}, we have the exact triples
$$0 \to D_V|_{Y_\Gamma^V} \to D_V|_{Y_Q^{V,v^0:\neg j}} \to T_V^{v^0:\neg j}Q/\Gamma \to 0\,.$$
$$0 \to D_W|_{Y_\Gamma^W} \to D_W|_{Y_Q^W} \to T_WQ/\Gamma \to 0\,.$$
We have canonical isomorphisms
$$\ddd(D_V|_{Y_\Gamma^V}) \simeq \ddd(D_W|_{Y_\Gamma^W})\,,\; \ddd(D_V|_{Y_Q^{V,v^0:\neg j}}) \simeq \ddd(D_W|_{Y_Q^W})\,,\;\text{and}\;\ddd(T_V^{v^0:\neg j}Q/\Gamma) \simeq \ddd(T_WQ/\Gamma)\,,$$
obtained as follows. The pearly triangle $V$ can be deformed into the degenerate triangle, in which two disks are then glued to obtain $W$. During this process the operator $D_V|_{Y_\Gamma^V}$ undergoes deformation and linear gluing, with $D_W|_{Y_\Gamma^W}$ as the result; similarly, the space $T_V^{v^0:\neg j}Q/\Gamma$ deforms into $T_WQ/\Gamma$. If we let $V'$ be the degenerate pearly triangle, we have the corresponding linearized operator $D_{V'}|_{Y_Q^{V'}}$, which is surjective, and moreover the deformation of $V$ into $V'$ yields an isomorphism
$$\ker(D_V|_{Y_Q^{V,v^0:\neg j}}) \simeq \ker(D_{V'}|_{Y_Q^{V'}})\,.$$
In addition, the differential of the gluing map yields an isomorphism
$$\ker(D_{V'}|_{Y_Q^{V'}}) \simeq \ker(D_W|_{Y_Q^W})\,.$$
In total we obtain an isomorphism
$$\ddd(D_V|_{Y_Q^{V,v^0:\neg j}}) \simeq \ddd(D_W|_{Y_Q^W})\,.$$
It is then a feature of the gluing map that the following diagram commutes:
$$\xymatrix{\ddd(D_V|_{Y_\Gamma^V}) \otimes \ddd(T_V^{v^0:\neg j}Q/\Gamma) \ar[r] \ar[d] & \ddd(D_V|_{Y_Q^{V,v^0:\neg j}}) \ar[d] \\
\ddd(D_W|_{Y_\Gamma^W}) \otimes \ddd(T_WQ/\Gamma) \ar[r] & \ddd(D_W|_{Y_Q^W})}$$
with the horizontal arrows coming from the above exact triples and the vertical arrows being the above canonical isomorphisms.
Assume now that the isomorphism $C$ corresponds to orientations
$$\textstyle {\mathfrak{o}}_V \in \ddd(D_V|_{Y_\Gamma^V})\,,\; \bigwedge_i \partial_{V_i} \wedge \eta_V \otimes \bigwedge_ie^V_i \in \ddd(T_V\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_VQ/\Gamma)\,,$$
and to orientations
$$\textstyle {\mathfrak{o}}_W \in \ddd(D_W|_{Y_\Gamma^W})\,,\; \epsilon \bigwedge_i \partial_{W_i} \wedge \eta_W \otimes \bigwedge_ie^W_i \in \ddd(T_W\widetilde{\mathcal{P}}(q_0,q_1;q_2)) \otimes \ddd(T_WQ/\Gamma)\,,$$
and let us compute the sign $\epsilon$. For the convenience of the computation, and without loss of generality, we assume that $\eta_V$ is directed from $V$ toward the degenerate triangle $V'$. Note that the isomorphism $\ddd(D_V|_{Y_\Gamma^V}) \simeq \ddd(D_W|_{Y_\Gamma^W})$ maps ${\mathfrak{o}}_V \mapsto {\mathfrak{o}}_W$. Then the diagram \eqref{dia:sign_change_product_is_chain_map_leg_0} maps:
$$\xymatrix{(-1)^{k_0+k_1+k_2-j}{\mathfrak{o}}_V \otimes \bigwedge_i e^V_i \ar@{|->}[r] & (-1)^{k_0+k_1+k_2 - j} \bigwedge_i\partial_{V_i}\wedge \eta_V\\
{\mathfrak{o}}_V \otimes \bigwedge_{i \neq j} e^{v^0}_i \wedge \bigwedge_i e^{v^1}_i \wedge \bigwedge_i e^{v^2}_i \otimes e^{v^0}_j \ar@{|->}[r] \ar@{|->}[u] \ar@{|->}[d] & (-1)^{k_0+k_1+k_2 - j} \bigwedge_i\partial_{V_i} \otimes e^{v^0}_j \ar@{|->}[u] \ar@{|->}[d]\\
{\mathfrak{o}}_W \otimes \bigwedge_i e^W_i \otimes e^{v^0}_j \ar@{|->}[r] & -\bigwedge_i\partial_{W_i} \wedge \eta_W \otimes e^{v^0}_j}$$
The bottom arrow tells us that the orientation ${\mathfrak{o}}_W$ corresponding to the isomorphism $C$ also corresponds to the orientation $-\bigwedge_i\partial_{W_i} \wedge \eta_W \otimes \bigwedge_ie^W_i$, therefore $\epsilon = -1$ as claimed.
The only computation that needs explanation is the right bottom arrow. The differential of the gluing map sends: $\partial_{v^{1,2}_i} \mapsto \partial_{w^{1,2}_i}$ for all $i$, $\partial_{v^0_i} \mapsto \partial_{w^0_i}$ for $i < j$. If $j < k_0$, it sends $\partial_{v^0_j} + \partial_{v^0_{j+1}} \mapsto \partial_{w^0_j}$ and $-\partial_{v^0_j} + \partial_{v^0_{j+1}} \mapsto \eta_W$, and $\partial_{v^0_i} \mapsto \partial_{w^0_{i-1}}$ for $i > j+1$, therefore
$$\textstyle \bigwedge_i\partial_{V_i} \mapsto -(-1)^{k_0+k_1+k_2-j}\bigwedge_i\partial_{W_i} \wedge \eta_W\,.$$
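To illustrate the computation in the smallest instance, take $k_0 = 2$, $k_1 = k_2 = 0$ and $j = 1$, so that the two colliding disks are the two disks of leg $0$; then $\partial_{v^0_1}+\partial_{v^0_2}\mapsto\partial_{w^0_1}$ and $-\partial_{v^0_1}+\partial_{v^0_2}\mapsto\eta_W$, whence
$$\textstyle \partial_{v^0_1}\wedge\partial_{v^0_2} = \tfrac12\big(\partial_{v^0_1}+\partial_{v^0_2}\big)\wedge\big(-\partial_{v^0_1}+\partial_{v^0_2}\big) \mapsto \tfrac12\,\partial_{w^0_1}\wedge\eta_W\,,$$
a positive multiple of $\partial_{w^0_1}\wedge\eta_W$, in agreement with the displayed formula, whose sign is $-(-1)^{k_0+k_1+k_2-j} = +1$ in this case.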
In case $j=k_0$, the gluing map sends $\partial_{v^0_{k_0}} \mapsto -\eta_W$, and therefore we have
$$\textstyle \bigwedge_i\partial_{V_i} \mapsto - (-1)^{k_0+k_1+k_2-j}\bigwedge_i\partial_{W_i} \wedge \eta_W\,.$$
The computation of $\epsilon$ in the case when the breaking/collision happens in leg $1$ of the triangle is entirely analogous. Let us therefore compute $\epsilon$ in case the breaking happens in leg $2$. Using identical arguments, we obtain the following commutative diagram, where we assume that the piece of gradient trajectory of $V$ which shrinks to $0$ at the collision point bears number $j$:
$$\xymatrix{\ddd(D_V|_{Y_\Gamma^V}) \otimes \ddd(T_VQ/\Gamma) \ar[r] & \ddd(D_V|_{Y_Q^V})\\
\ddd(D_V|_{Y_\Gamma^V}) \otimes \ddd(T_V^{v^2:\neg j}Q/\Gamma) \otimes \ddd(T_{v^2}^jQ/\Gamma) \ar[r] \ar[u] \ar[d] & \ddd(D_V|_{Y_Q^{V,v^2:\neg j}}) \otimes \ddd(T_{v^2}^jQ/\Gamma) \ar[u] \ar[d] \\
\ddd(D_W|_{Y_\Gamma^W}) \otimes \ddd(T_WQ/\Gamma) \otimes \ddd(T_{v^2}^jQ/\Gamma) \ar[r] & \ddd(D_W|_{Y_Q^W}) \otimes \ddd(T_{v^2}^jQ/\Gamma)}$$
It maps
$$\xymatrix{(-1)^{k_2-j}{\mathfrak{o}}_V \otimes \bigwedge_i e^V_i \ar@{|->}[r] & (-1)^{k_2 - j} \bigwedge_i\partial_{V_i}\wedge \eta_V\\
{\mathfrak{o}}_V \otimes \bigwedge_i e^{v^0}_i \wedge \bigwedge_i e^{v^1}_i \wedge \bigwedge_{i \neq j} e^{v^2}_i \otimes e^{v^2}_j \ar@{|->}[r] \ar@{|->}[d] \ar@{|->}[u] & (-1)^{k_2 - j} \bigwedge_i\partial_{V_i} \otimes e^{v^2}_j \ar@{|->}[d] \ar@{|->}[u] \\
{\mathfrak{o}}_W \otimes \bigwedge_i e^W_i \otimes e^{v^2}_j \ar@{|->}[r] & \bigwedge_i\partial_{W_i} \wedge \eta_W \otimes e^{v^2}_j}$$
We see from the bottom arrow that the orientation ${\mathfrak{o}}_W$ corresponding to the isomorphism $C$ also corresponds to the orientation $\bigwedge_i\partial_{W_i} \wedge \eta_W \otimes \bigwedge_ie^W_i$, therefore $\epsilon = 1$ as claimed.
Again, the bottom right arrow is obtained as follows. The differential of the gluing map sends: $\partial_{v^{0,1}_i} \mapsto \partial_{w^{0,1}_i}$ for all $i$, $\partial_{v^2_i} \mapsto \partial_{w^2_{i-1}}$ for $i > j$. If $j > 1$, it sends $\partial_{v^2_{j-1}} + \partial_{v^2_{j}} \mapsto \partial_{w^2_{j-1}}$ and $-\partial_{v^2_{j-1}} + \partial_{v^2_{j}} \mapsto \eta_W$, and $\partial_{v^2_i} \mapsto \partial_{w^2_{i}}$ for $i < j-1$, therefore
$$\textstyle \bigwedge_i\partial_{V_i} \mapsto (-1)^{k_2 - j}\bigwedge_i\partial_{W_i} \wedge \eta_W\,.$$
In case $j=1$, the gluing map sends $\partial_{v^2_1} \mapsto \eta_W$, and therefore we have
$$\textstyle \bigwedge_i\partial_{V_i} \mapsto (-1)^{k_2-j}\bigwedge_i\partial_{W_i} \wedge \eta_W\,.$$
The proof of the lemma is now complete.
\qed
\end{prf}
We have thus completed the definition of the product on quantum homology.
\subsubsection{Unit}\label{sss:unit_Lagr_QH}
Assume we have a regular quantum datum ${\mathcal{D}} = (f,\rho,J)$. We will now construct an element in $QH_n({\mathcal{D}}:L)$ which serves as a unit for the quantum product. This is defined as follows. Let $q$ be a maximum of $f$. Then the stable manifold ${\mathcal{S}}(q)$ is just the singleton $\{q\}$. Let $0 \in \pi_2(M,L,q)$ be the zero class and consider the operator $D_0 \sharp T{\mathcal{S}}(q)$. We can canonically orient this operator, as follows. Let $w$ be the constant disk at $q$; it clearly represents the class $0$. The corresponding Cauchy--Riemann operator $D_w$ is just the standard Dolbeault operator on the trivial bundle pair $(T_qM,T_qL) \to (D^2,S^1)$ with the Hermitian structure $(\omega_q,J_q)$. This operator is surjective and its kernel consists of constant sections with values in $T_qL$. Now the operator $D_w \sharp T{\mathcal{S}}(q)$ is its restriction to the subspace of sections of this bundle pair vanishing at $1 \in D^2$. Clearly this restricted operator has trivial kernel, and since it has index $0$, it must be an isomorphism. Therefore we have the canonical positive orientation of this operator by $1 \otimes 1^\vee$, which induces an orientation on the family $D_0 \sharp T{\mathcal{S}}(q)$, and therefore an element of $C(q,0)$, which we denote $1_q$. The \textbf{unit} is now the element
$$1_{\mathcal{D}} = \sum_{\substack{q \in \Crit f:\\|q| = n}}1_q \in QC_n({\mathcal{D}}:L)\,.$$
We claim that this element is a cycle. Indeed, the Morse boundary operator vanishes on it, and index considerations show that $\partial_{\mathcal{D}}(1_{\mathcal{D}})$ contains no terms involving nonconstant holomorphic disks.
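To spell out the index consideration: assuming, as is standard in this setting, that $L$ is monotone with minimal Maslov number $N_L \geq 2$, a rigid pearly trajectory emanating from a maximum $q$ and carrying nonconstant disks of total Maslov index $\mu(u) \geq N_L$ would end at a critical point $q'$ with
$$|q'| = |q| + \mu(u) - 1 \geq n + N_L - 1 > n\,,$$
which is impossible.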
To see that the class $1_{\mathcal{D}} \in QH_n({\mathcal{D}}:L)$ is the unit with respect to the quantum product, we choose $f_1 = f_2 = f'$ (this is possible \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}), that is, the same function for the second input and the output, and show that the map
$$1_{\mathcal{D}} \star - {:\ } QC_k (f',\rho,J:L) \to QC_k(f',\rho,J:L)$$
is the identity, so that in fact we have the identity already on chain level. First we claim that any pearly triangle belonging to the zero-dimensional part of ${\mathcal{P}}(q,q';q'')$ where $q \in \Crit f$, $q',q'' \in \Crit f'$ must be constant, meaning that the only holomorphic disk is the core, that it is the constant disk, and that the lengths of all the gradient trajectories are zero. Indeed, let $U$ be such a triangle. First we will show that the zeroth leg contains no disks. If there were a disk, it would be nonconstant, and since lying in the unstable manifold of $q$ is an open condition, after quotienting out the action of the automorphism group of the triangle, we would obtain a positive-dimensional space, which is a contradiction. The same argument shows that the core must be constant, which means that we can obtain a pearly trajectory from $q'$ to $q''$, and that it has index zero. This is only possible if there are no nonconstant disks and the resulting gradient trajectory from $q'$ to $q''$ is constant, which is what we claimed.
Next, for a generically chosen $f'$, all of its critical points lie in the union of the unstable manifolds of the maxima of $f$, which means that all the constant pearly triangles indeed appear and that we obtain the identity map on $QC_k(f',\rho,J:L)$ as a result of multiplying by $1_{\mathcal{D}}$, proving that it indeed acts as the unit.
\subsection{Arbitrary rings and twisted coefficients}\label{ss:arbitrary_rings_loc_coeffs_Lagr_QH}
Analogously to the case of Floer homology, we can use an arbitrary ground ring $R$ if $L$ satisfies assumption \textbf{(O)}, or otherwise use a ring in which $2=0$. Also, given a flat $R$-bundle ${\mathcal{E}}$ over $\widetilde\Omega_L$ we can define quantum homology of $L$ twisted by ${\mathcal{E}}$, which we denote $QH_*({\mathcal{D}}:L;{\mathcal{E}})$. We only need to note that the pairs $(q,A)$ with $q \in L$ and $A \in \pi_2(M,L,q)$ naturally give rise to points of the space $\widetilde\Omega_L$. Details are left to the reader.
\subsection{Duality}\label{ss:duality_Lagr_QH}
The treatment of duality in quantum homology is very similar to the case of Floer homology, and we follow it closely. The goal here is to establish a canonical chain isomorphism
$$QC_*(\overline{\mathcal{D}}:L) \equiv QC^{n-*}({\mathcal{D}}:L;{\mathcal{L}})\,,$$
where $\overline{\mathcal{D}}$ is the dual quantum datum and ${\mathcal{L}} \to \widetilde\Omega_L$ is the flat ${\mathbb{Z}}$-bundle obtained as the normalization of the pullback of $\ddd(TL)$ to $\widetilde\Omega_L$ via the evaluation map $\widetilde\gamma \mapsto \gamma(0)$.
First we define quantum cohomology. Fix a regular quantum homology datum ${\mathcal{D}} = (f,\rho,J)$ and define
$$C(q,A)^\vee = \Hom_{\mathbb{Z}}(C(q,A),{\mathbb{Z}})\,.$$
Put
$$QC^*({\mathcal{D}}:L) = \bigoplus_{\substack{q \in \Crit f \\ A \in \pi_2(M,L,q)}}C(q,A)^\vee\,.$$
This is graded by assigning the elements of $C(q,A)^\vee$ the degree $|q| - \mu(A)$. The matrix coefficients of the dual differential $\partial_{\mathcal{D}}^\vee$ are dual to the matrix elements of $\partial_{\mathcal{D}}$ as maps $C(q',A')^\vee \to C(q,A)^\vee$. We define another differential
$$\delta_{\mathcal{D}} {:\ } QC^k({\mathcal{D}}:L) \to QC^{k+1}({\mathcal{D}}:L) \quad\text{via} \quad \delta_{\mathcal{D}} = (-1)^{k-1}\partial_{\mathcal{D}}^\vee\,.$$
The cochain complex
$$(QC^*({\mathcal{D}}:L),\delta_{\mathcal{D}})$$
is the \textbf{quantum cochain complex} and its cohomology is the \textbf{quantum cohomology}
$$QH^*({\mathcal{D}}:L)\,.$$
The dual quantum datum is defined by $\overline{\mathcal{D}}=(-f,\rho,J)$. The functions $f, -f$ have the same critical points. For $w \in C^\infty(D^2,S^1,1;M,L,q)$ we let $\overline w$ be defined as $\overline w(\sigma,\tau) = w(\sigma,-\tau)$. Then $\overline w$ represents the class $[w]^{-1} \in \pi_2(M,L,q)$. We also define $-w \in C^\infty(D^2,S^1,-1;M,L,q)$ via $-w(\sigma,\tau) = w(-\sigma,\tau)$. Clearly $-w = \overline w \circ \phi$ where $\phi {:\ } D^2 \to D^2$ is defined by $\phi(z) = -z$. This map is a conformal isomorphism therefore it induces an isomorphism of determinant lines $\ddd(D_{\overline w}) = \ddd(D_{-w})$. We also have the isomorphisms
\begin{equation}\label{eqn:iso_det_lines_duality_Lagr_QH}
\ddd(D_w \sharp T{\mathcal{S}}_f(q)) \otimes \ddd(D_{-w}\sharp T{\mathcal{U}}_f(q)) \simeq \ddd(D_w \sharp T{\mathcal{S}}_f(q) \oplus D_{-w} \sharp T{\mathcal{U}}_f(q)) \simeq \ddd(D_w \sharp D_{-w}) \simeq \ddd(T_qL)\,,
\end{equation}
where the second isomorphism comes from deforming the incidence condition at $q$ from
$$(T{\mathcal{S}}_f(q)\oplus 0) \oplus (0 \oplus T{\mathcal{U}}_f(q)) \subset T_qL \oplus T_qL \quad \text{to} \quad \Delta_{T_qL}\,,$$
and the third isomorphism comes from deforming the operator $D_w \sharp D_{-w}$ into the operator $D_0$ and using the canonical isomorphism $\ddd(D_0) = \ddd(T_qL)$.
Since we have ${\mathcal{U}}_f(q) = {\mathcal{S}}_{-f}(q)$, this yields, combined with the isomorphism \eqref{eqn:iso_det_lines_duality_Lagr_QH}, the following:
$$\ddd(D_w \sharp T{\mathcal{S}}_f(q)) \otimes \ddd(D_{\overline w} \sharp T{\mathcal{S}}_{-f}(q)) \simeq \ddd(T_qL)\,.$$
This means that we have a canonical isomorphism
$$C_f(q,A) \otimes C_{-f}(q,A^{-1}) \otimes {\mathcal{L}}_q = {\mathbb{Z}}\,.$$
This implies that we have canonically
$$QC_*(\overline{\mathcal{D}}:L) = QC^{n-*}({\mathcal{D}}:L;{\mathcal{L}})$$
as modules, where we observe that the elements of $C_{-f}(q,A^{-1})$ have degree
$$|q|_{-f} - \mu(A^{-1}) = n-|q|_f + \mu(A)\,,$$
which is $n$ minus the degree of $C_f(q,A)$. We will now obtain an identification of the differentials. Fix pairs $(q_\pm,A_\pm)$ so that $|q_-| - \mu(A_-) = |q_+| - \mu(A_+) + 1$, and let $u \in \widetilde{\mathcal{P}}(q_-,q_+)$ be such that $A_- \sharp u = A_+$. Represent the classes $A_\pm$ by maps $w_\pm$. We have the following commutative diagram, obtained by employing the direct sum, gluing, and deformation isomorphisms:
$$\xymatrix{\ddd(D_{w_-}\sharp T{\mathcal{S}}_f(q_-)) \otimes \ddd(D_u|_{Y_\Gamma^u}) \otimes \ddd(D_{-w_+} \sharp T{\mathcal{S}}_{-f}(q_+)) \ar[r] \ar[d]^{(C(u) \otimes \id)\circ (R\otimes \id)}& \ddd(D_{w_-}\sharp T{\mathcal{S}}_f(q_-)) \otimes \ddd(D_{-w_-} \sharp T{\mathcal{S}}_{-f}(q_-)) \ar[d] \\ \ddd(D_{w_+}\sharp T{\mathcal{S}}_f(q_+)) \otimes \ddd(D_{-w_+} \sharp T{\mathcal{S}}_{-f}(q_+) ) \ar[r] & \ddd(TL)}$$
where $R$ is the interchange of factors including the Koszul sign
$$(-1)^{\ind D_u|_{Y_\Gamma^u} \cdot \ind D_{w_-}\sharp T{\mathcal{S}}_f(q_-)} = (-1)^{n-|q_-|_{f}+\mu(A_-)}\,.$$
Fix ${\mathfrak{o}} \in \ddd(TL)$ where we trivialize $\ddd(TL)$ along the lower boundary of $u$, viewed as a degenerate strip with boundary on $L$. Fix ${\mathfrak{o}}_{q_-} \in C_f(q_-,A_-)$, let ${\mathfrak{o}}_{q_+} = C(u)({\mathfrak{o}}_{q_-}) \in C_f(q_+,A_+)$ and let ${\mathfrak{o}}_{-q_\pm} \in \ddd(D_{-w_\pm} \sharp T{\mathcal{S}}_{-f}(q_\pm))$ be such that ${\mathfrak{o}}_{q_\pm} \otimes {\mathfrak{o}}_{-q_\pm} \mapsto {\mathfrak{o}}$ via the isomorphism \eqref{eqn:iso_det_lines_duality_Lagr_QH}. The diagram then maps
$$\xymatrix{(-1)^{n - |q_-| + \mu(A_-)}{\mathfrak{o}}_{q_-} \otimes {\mathfrak{o}}_u \otimes {\mathfrak{o}}_{-q_+} \ar@{|->}[r] \ar@{|->}[d] & {\mathfrak{o}}_{q_-} \otimes {\mathfrak{o}}_{-q_-} \ar@{|->}[d] \\ {\mathfrak{o}}_{q_+} \otimes {\mathfrak{o}}_{-q_+} \ar@{|->}[r] & {\mathfrak{o}}}$$
Dualizing in a manner similar to the treatment of duality in Floer homology, see \S\ref{ss:duality_HF}, we obtain the diagram
$$\xymatrix{\ddd(D_{-\overline w_-}\sharp T{\mathcal{S}}_f(q_-)) \otimes \ddd(D_{\overline u}|_{Y_\Gamma^{\overline u}}) \otimes \ddd(D_{\overline w_+} \sharp T{\mathcal{S}}_{-f}(q_+)) \ar^-{\id \otimes C(\overline u)}[r] \ar[d]& \ddd(D_{- \overline w_-}\sharp T{\mathcal{S}}_f(q_-)) \otimes \ddd(D_{\overline w_-} \sharp T{\mathcal{S}}_{-f}(q_-)) \ar[d] \\ \ddd(D_{-\overline w_+}\sharp T{\mathcal{S}}_f(q_+)) \otimes \ddd(D_{\overline w_+} \sharp T{\mathcal{S}}_{-f}(q_+)) \ar[r] & \ddd(TL)}$$
Let $\overline {\mathfrak{o}}_{q_\pm} \in \ddd(D_{\overline w_\pm} \sharp T{\mathcal{S}}_{-f}(q_\pm))$, $\overline {\mathfrak{o}}_{-q_\pm} \in \ddd(D_{-\overline w_\pm} \sharp T{\mathcal{S}}_f(q_\pm))$ be obtained from ${\mathfrak{o}}_{-q_\pm}$, ${\mathfrak{o}}_{q_\pm}$, respectively, by dualization. The latter diagram then maps
$$\xymatrix{(-1)^{n - |q_-| + \mu(A_-)} \overline {\mathfrak{o}}_{-q_-} \otimes {\mathfrak{o}}_{\overline u} \otimes \overline {\mathfrak{o}}_{q_+} \ar@{|->}[r] \ar@{|->}[d] & \overline{\mathfrak{o}}_{-q_-} \otimes \overline{\mathfrak{o}}_{q_-} \ar@{|->}[d]\\
\overline{\mathfrak{o}}_{-q_+} \otimes \overline{\mathfrak{o}}_{q_+} \ar@{|->}[r] & {\mathfrak{o}} }$$
Thus we see that $C(\overline u) (\overline {\mathfrak{o}}_{q_+}) = (-1)^{n - |q_-| + \mu(A_-)} \overline {\mathfrak{o}}_{q_-}$. This means that the following diagram commutes:
$$\xymatrix{C_{-f}(q_+,A_+^{-1}) \ar[r] \ar[d]^{C(\overline u)} & C_f(q_+,A_+)^\vee \otimes {\mathcal{L}}_{q_+} \ar[d]^{(-1)^{n - |q_-| + \mu(A_-)}C(u)^\vee \otimes {\mathcal{P}}}\\ C_{-f}(q_-,A_-^{-1}) \ar[r] & C_f(q_-,A_-)^\vee \otimes {\mathcal{L}}_{q_-}}$$
where ${\mathcal{P}}$ is the parallel transport isomorphism on the line bundle ${\mathcal{L}}$, see \S\ref{ss:duality_HF}. This means that we have established a canonical isomorphism of chain complexes:
$$(QC_*(\overline{\mathcal{D}}:L),\partial_{\overline {\mathcal{D}}}) = (QC^{n-*}({\mathcal{D}}:L;{\mathcal{L}}),\delta_{{\mathcal{D}}} \otimes {\mathcal{P}})\,.$$
It therefore induces the \textbf{duality isomorphism} on homology:
\begin{equation}\label{eqn:duality_Lagr_QH}
QH_*(\overline{\mathcal{D}}:L) = QH^{n-*}({\mathcal{D}}:L;{\mathcal{L}})\,.
\end{equation}
\subsubsection{Augmentation}
Similarly to the case of Floer homology, we can view the unit as a graded map
$$1 {:\ } {\mathbb{Z}}[n] \to QH_*(\overline{\mathcal{D}}:L)\,.$$
The duality isomorphism \eqref{eqn:duality_Lagr_QH} means that we obtain a graded map
$${\mathbb{Z}}[n] \to QH^{n-*}({\mathcal{D}}:L;{\mathcal{L}})\,,$$
and by dualizing we obtain
$$QH_*({\mathcal{D}}:L;{\mathcal{L}}) \to {\mathbb{Z}}\,,$$
which is the \textbf{augmentation map}.
\subsection{Quantum homology of $M$}\label{ss:QH_of_M}
We call a triple ${\mathcal{D}} = (f,\rho,J)$ a quantum datum for $M$ if $(f,\rho)$ is a Morse--Smale pair on $M$ and $J$ is an $\omega$-compatible almost complex structure on $M$. We call it regular if the various moduli spaces below are transversely cut out. This is the case for a generic $J$.
\subsubsection{Generators, the complex as a module, and the boundary operator}\label{sss:generators_cx_bd_op_QH_of_M}
Fix a regular quantum datum ${\mathcal{D}} = (f,\rho,J)$ for $M$. For a critical point $q \in \Crit f$ and a homotopy class $A \in \pi_2(M,q)$ we can construct the family of operators $D_A$, just like in the Lagrangian case. Members of $D_A$ are formal linearized operators $D_u$ of smooth maps $u {:\ } (S^2,1) \to (M,q)$ in class $A$ with respect to some auxiliary connection $\nabla$ on $M$. We have the following foundational lemma.
\begin{lemma}\label{lem:D_A_has_canonical_orientation}
The family $D_A$ possesses a canonical orientation.
\end{lemma}
\begin{prf}
This is a family of real Cauchy--Riemann operators on a closed Riemann surface. The set of real Cauchy--Riemann operators retracts onto the subset of complex linear operators, which have canonically oriented determinant lines. It follows that the determinant line bundle of the set of real Cauchy--Riemann operators, and therefore the determinant line of $D_A$, is canonically oriented. \qed
\end{prf}
We also have the family $D_A \sharp T{\mathcal{S}}(q)$, defined analogously to the Lagrangian case. Using an exact Fredholm triple analogous to the ones appearing in the proof of Lemma \ref{lem:family_ops_orientable_def_cx_QH}, we see that this family is orientable as well, and thus we can define
$$C(q,A)$$
to be the rank $1$ free abelian group whose two generators are its two possible orientations. We let
$$QC_*({\mathcal{D}}) = \bigoplus_{\substack{q \in \Crit f \\ A \in \pi_2(M,q)}} C(q,A)\,.$$
This is graded by assigning the elements of $C(q,A)$ the degree $|q| - 2c_1(A)$. The boundary operator $\partial_{\mathcal{D}} {:\ } QC_*({\mathcal{D}}) \to QC_{*-1}({\mathcal{D}})$ is just the ordinary Morse boundary operator, enhanced by the homotopy classes. More precisely, it is defined as follows. We first define the Morse boundary operator. When $A = 0 \in \pi_2(M,q)$, we have canonically $\ddd(D_A) = \ddd(T_qM)$, and therefore canonically $\ddd(D_A \sharp T{\mathcal{S}}(q)) = \ddd(T{\mathcal{S}}(q))$. For a Morse trajectory $u \in \widetilde{\mathcal{M}}(q,q')$ of index $1$ we have a natural exact sequence, see \eqref{eqn:exact_seq_moduli_space_gradient_lines_stable_unstable}:
$$0 \to {\mathbb{R}} \partial_u \to T{\mathcal{S}}(q') \to T{\mathcal{S}}(q) \to 0$$
whence
$$\ddd(T{\mathcal{S}}(q')) \simeq \ddd({\mathbb{R}} \partial_u) \otimes \ddd(T{\mathcal{S}}(q))\,.$$
Substituting the positive orientation of ${\mathbb{R}}$ we get the isomorphism
$$C(u) {:\ } C(q,0) \simeq C(q',0)\,.$$
For general $A$, this isomorphism induces an isomorphism
$$C(u) {:\ } C(q,A) \simeq C(q',A')$$
where $A' \in \pi_2(M,q')$ is obtained by transferring the class $A$ to $q'$ along $u$. Indeed, from the relations
$$\ddd(D_A) \otimes \ddd(T{\mathcal{S}}(q)) \simeq \ddd(D_A \sharp T{\mathcal{S}}(q)) \otimes \ddd(T_qM)$$
$$\ddd(D_{A'}) \otimes \ddd(T{\mathcal{S}}(q')) \simeq \ddd(D_{A'} \sharp T{\mathcal{S}}(q')) \otimes \ddd(T_{q'}M)$$
we see that to induce such an isomorphism, it suffices to produce isomorphisms $\ddd(D_A) \simeq \ddd(D_{A'})$ and $\ddd(T_qM) \simeq \ddd(T_{q'}M)$. The former is obtained by deformation induced by moving the base point along $u$, while the latter comes from the fact that $M$ is oriented. The boundary operator is then given by its matrix elements. The matrix element between $(q,A)$ and $(q',A')$ where $|q| = |q'| + 1$ and $c_1(A') = c_1(A)$ is given by the sum
$$\sum_{[u] \in {\mathcal{M}}(q,q'): A\sharp u = A'}C(u) {:\ } C(q,A) \to C(q',A')\,.$$
\begin{thm}\label{thm:boundary_op_squares_zero_QH_of_M}
The boundary operator satisfies $\partial_{\mathcal{D}}^2 = 0$.
\end{thm}
\noindent We can therefore define the quantum homology of $M$, $QH_*({\mathcal{D}})$, as the homology of $(QC_*({\mathcal{D}}),\partial_{\mathcal{D}})$.
\begin{prf}[of Theorem \ref{thm:boundary_op_squares_zero_QH_of_M}]This immediately follows from the parallel proof in Morse theory, coupled with the observation that transferring a class $A \in \pi_2(M,q)$ along paths ending at $q''$, which are homotopic with fixed endpoints, yields the same class $A'' \in \pi_2(M,q'')$. In the Morse-theoretic proof one uses the compactified $1$-dimensional moduli space of gradient trajectories, and transferring a class along the trajectories comprising the two boundary points of a connected component of it yields the same class at the other critical point, therefore the proof goes through, even though the quantum boundary operator distinguishes homotopy classes of spheres. \qed
\end{prf}
\subsubsection{Product}\label{sss:product_QH_of_M}
To define the product we need to define moduli spaces of spiked spheres. Fix quantum data ${\mathcal{D}}_i = (f_i,\rho,J)$, $i=0,1,2$. Let $\widetilde{\mathcal{M}}^\circ(J)$ be the space of parametrized $J$-holomorphic spheres in $M$, including constant ones. We have the evaluation map
$$\ev {:\ } C^\infty(S^2,M) \to M^3\,, \quad u \mapsto (u(0),u(1),u(\infty))\,,$$
where we view $S^2 = {\mathbb{C}} P^1$. The space of spiked spheres is
$${\mathcal{P}}(q_0,q_1;q_2) = \ev^{-1}({\mathcal{U}}(q_0) \times {\mathcal{U}}(q_1) \times {\mathcal{S}}(q_2)) \cap \widetilde{\mathcal{M}}^\circ(J)\,.$$
For generic $J$ it is a smooth manifold of local dimension at $u$
$$|q_0| + |q_1| - |q_2| +2c_1(u) - 2n$$
provided this number is $\leq 1$. For $u$ with $\ev(u) \in {\mathcal{U}}(q_0) \times {\mathcal{U}}(q_1) \times {\mathcal{S}}(q_2)$ define the space
$$Y^u = \{\xi \in W^{1,p}(u) \,|\, \xi(0) \in T_{u(0)}{\mathcal{U}}(q_0),\xi(1) \in T_{u(1)}{\mathcal{U}}(q_1),\xi(\infty) \in T_{u(\infty)}{\mathcal{S}}(q_2)\}\,.$$
Assume $\dim_u {\mathcal{P}}(q_0,q_1;q_2) = 0$. Then the linearized operator $D_u|_{Y^u}$ has index zero and is surjective, therefore it possesses the canonical positive orientation ${\mathfrak{o}}_u = 1\otimes 1^\vee$. Deform $u$ into $u' \in C^\infty(S^2,M)$ satisfying $\ev(u') = (q_0,q_1,q_2)$. Then we have a canonical isomorphism
$$\ddd(D_u|_{Y^u}) \simeq \ddd(D_{u'}|_{Y^{u'}})\,.$$
Using direct sum, deformation, and linear gluing isomorphisms, combined with arguments involving deformation of incidence conditions at $q_0,q_1$ (see the proof of Lemma \ref{lem:computation_induced_oris_product_is_chain_map} of the Lagrangian case), we get an isomorphism
$$\ddd(D_{u'}|_{Y^{u'}}) \otimes \ddd(D_{A_0}\sharp T{\mathcal{S}}(q_0)) \otimes \ddd(D_{A_1}\sharp T{\mathcal{S}}(q_1))\simeq \ddd(D_{A_2}\sharp T{\mathcal{S}}(q_2))\,,$$
where $A_2 = A_0 \sharp A_1 \sharp u$, and composing this with the deformation isomorphism $\ddd(D_u|_{Y^u}) \simeq \ddd(D_{u'}|_{Y^{u'}})$, we get a bijection between orientations of $D_u|_{Y^u}$ and isomorphisms $C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)$. Thus the standard orientation ${\mathfrak{o}}_u \in \ddd(D_u|_{Y^u})$ gives rise to the isomorphism
$$C(u) {:\ } C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)\,.$$
The matrix element of the product is then
$$\sum_{\substack{u \in {\mathcal{P}}(q_0,q_1;q_2): \\ A_0\sharp A_1\sharp u = A_2}} C(u) {:\ } C(q_0,A_0) \otimes C(q_1,A_1) \simeq C(q_2,A_2)\,.$$
We have thus defined a bilinear operation
$$* {:\ } QC_k({\mathcal{D}}_0) \otimes QC_l({\mathcal{D}}_1) \to QC_{k+l-2n}({\mathcal{D}}_2)\,.$$
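The degree shift $k+l-2n$ is consistent with the dimension formula above: since $c_1$ is additive under $\sharp$, for $A_2 = A_0\sharp A_1\sharp u$ the equality of degrees
$$\big(|q_0|-2c_1(A_0)\big)+\big(|q_1|-2c_1(A_1)\big)-2n = |q_2|-2c_1(A_2)$$
is equivalent to $|q_0|+|q_1|-|q_2|+2c_1(u)-2n = 0$, that is, to $u$ lying in a zero-dimensional component of ${\mathcal{P}}(q_0,q_1;q_2)$.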
\begin{thm}
The operation $*$ is a chain map. More precisely
$$\partial_{{\mathcal{D}}_2} \circ * = * \circ (\partial_{{\mathcal{D}}_0} \otimes \id + (-1)^{2n-k} \id \otimes \partial_{{\mathcal{D}}_1}) {:\ } QC_k({\mathcal{D}}_0) \otimes QC_l({\mathcal{D}}_1) \to QC_{k+l-2n-1}({\mathcal{D}}_2)\,.$$
\end{thm}
\begin{prf}
This is proved in complete analogy with the Lagrangian case, except that there is no holomorphic curve breaking or collision to take into account. The $1$-dimensional moduli space ${\mathcal{P}}(q_0,q_1;q_2)$ can be compactified by adding Morse breaking. One then computes the induced orientations on ${\mathcal{P}}$ coming from the isomorphisms corresponding to the constituent trajectories of the boundary points, and concludes as before. \qed
\end{prf}
\subsubsection{Unit}\label{sss:unit_QH_of_M}
This is defined analogously to the Lagrangian case. Let ${\mathcal{D}} = (f,\rho,J)$ be a regular quantum datum. Like in the Lagrangian case, we have the elements $1_q \in C(q,0)$ for every maximum $q \in \Crit f$ where $0 \in \pi_2(M,q)$ is the zero class. Their sum
$$1_{\mathcal{D}} = \sum_{\substack{q \in \Crit f: \\ |q| = 2n}}1_q \in QC_{2n}({\mathcal{D}})$$
is the unit. Again, like in the Lagrangian case, one checks that $1_{\mathcal{D}}$ actually is a unit on chain level, and therefore in homology.
\subsubsection{Quantum module action}\label{sss:quantum_module_action_QH}
Fix a compatible almost complex structure $J$ and quantum data ${\mathcal{D}}_i = (f_i,\rho,J)$, $i=0,1$ for $L$, and a quantum datum ${\mathcal{D}} = (f,\rho',J)$ for $M$, such that all of them are regular. The \textbf{quantum module action} is a bilinear operation
$$\bullet {:\ } QC_k({\mathcal{D}}) \otimes QC_l({\mathcal{D}}_0:L) \to QC_{k+l-2n}({\mathcal{D}}_1:L)\,.$$
This is defined via its matrix elements which are homomorphisms
$$C(q,A) \otimes C(q_0,A_0) \to C(q_1,A_1)$$
for $q \in \Crit f,q_i \in \Crit f_i$, $A \in \pi_2(M,q)$, $A_i \in \pi_2(M,L,q_i)$, such that $|q| + |q_0| - |q_1| - 2c_1(A) - \mu(A_0) + \mu(A_1) - 2n = 0$. In order to define these, we need to describe additional pearly moduli spaces. Fix $q_i \in \Crit f_i$ and $q \in \Crit f$. Recall the spaces $\widetilde{\mathcal{M}}^\circ(L,J)$, $\widetilde{\mathcal{M}}(L,J)$ of $J$-holomorphic disks with boundary on $L$, respectively nonconstant $J$-holomorphic disks. For $k_0,k_1 \geq 0$ we have the evaluation map
$$\ev {:\ } (C^\infty(D^2,S^1;M,L))^{k_0+k_1+1} \to L^{2(k_0+k_1)+3}$$
defined by
\begin{multline*}
\ev(U = (u^0,u^1,u)) = (u(0);u^0_1(-1),u^0_1(1),\dots,u^0_{k_0}(-1),u^0_{k_0}(1),u(-1);\\ u(1),u^1_1(-1),u^1_1(1),\dots,u^1_{k_1}(-1),u^1_{k_1}(1))\,,
\end{multline*}
where $u^i \in (C^\infty(D^2,S^1;M,L))^{k_i}$. We let
\begin{multline*}
\widetilde{\mathcal{P}}_{k_0,k_1}(q,q_0;q_1) = \ev^{-1}({\mathcal{U}}_f(q) \times {\mathcal{U}}_{f_0}(q_0) \times Q_{f_0,\rho}^{k_0} \times Q_{f_1,\rho}^{k_1} \times {\mathcal{S}}_{f_1}(q_1)) \cap \\ (\widetilde{\mathcal{M}}(L,J))^{k_0} \times (\widetilde{\mathcal{M}}(L,J))^{k_1} \times \widetilde{\mathcal{M}}^\circ(L,J)\,.
\end{multline*}
In keeping with earlier terminology, we call this space the space of spiked pearls. There is a natural ${\mathbb{R}}^{k_0+k_1}$-action on this space and we let ${\mathcal{P}}_{k_0,k_1}(q,q_0;q_1)$ be the quotient. We also define
$$\widetilde{\mathcal{P}}(q,q_0;q_1) = \bigcup_{k_0,k_1 \geq 0}\widetilde{\mathcal{P}}_{k_0,k_1}(q,q_0;q_1) \quad \text{ and }\quad {\mathcal{P}}(q,q_0;q_1) = \bigcup_{k_0,k_1 \geq 0} {\mathcal{P}}_{k_0,k_1}(q,q_0;q_1)\,.$$
We have \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}:
\begin{prop}
Fix Morse-Smale pairs $(f_i,\rho)_{i=0,1}$ and $(f,\rho')$. Then there is a subset of ${\mathcal{J}}(M,\omega)$ of the second category so that for each $J$ in this subset, for all $q_i \in \Crit f_i, q \in \Crit f$ the space ${\mathcal{P}}(q,q_0;q_1)$ is a smooth manifold of local dimension at $[U]$ equal to
$$|q| + |q_0| - |q_1| + \mu(U) - 2n$$
provided this number is at most zero. \qed
\end{prop}
We proceed with the definition of the matrix element. Fix $A \in \pi_2(M,q)$ and $A_0 \in \pi_2(M,L,q_0)$. There is an obvious way to produce a class $A\sharp A_0 \sharp U \in \pi_2(M,L,q_1)$ for $U \in (C^\infty(D^2,S^1;M,L))^{k_0+k_1+1}$ with $\ev(U) \in {\mathcal{U}}_f(q) \times {\mathcal{U}}_{f_0}(q_0) \times Q_{f_0,\rho}^{k_0} \times Q_{f_1,\rho}^{k_1} \times {\mathcal{S}}_{f_1}(q_1)$ by concatenating representatives of $A,A_0$ with the constituent disks of $U$ along the pieces of gradient trajectories connecting the evaluation points of the disks. For $U \in \widetilde{\mathcal{P}}(q,q_0;q_1)$ satisfying $|q| + |q_0| - |q_1| + \mu(U) - 2n = 0$ we will construct an isomorphism
$$C(U) {:\ } C(q,A) \otimes C(q_0,A_0) \simeq C(q_1,A\sharp A_0 \sharp U)\,.$$
For $A_1 \in \pi_2(M,L,q_1)$ with $|q| + |q_0| - |q_1| - 2c_1(A) - \mu(A_0) + \mu(A_1) - 2n = 0$, the matrix element of $\bullet$ is then the sum
$$\sum_{\substack{[U] \in {\mathcal{P}}(q,q_0;q_1): \\ A\sharp A_0 \sharp U = A_1}} C(U) {:\ } C(q,A) \otimes C(q_0,A_0) \simeq C(q_1,A_1)\,.$$
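As a consistency check on the gradings, note that since the Maslov class is additive under $\sharp$ and restricts to $2c_1$ on spheres, we have
$$\mu(A\sharp A_0\sharp U) = 2c_1(A)+\mu(A_0)+\mu(U)\,,$$
so the condition $|q| + |q_0| - |q_1| - 2c_1(A) - \mu(A_0) + \mu(A_1) - 2n = 0$ above is equivalent to $|q|+|q_0|-|q_1|+\mu(U)-2n = 0$, that is, to $[U]$ lying in a zero-dimensional component of ${\mathcal{P}}(q,q_0;q_1)$; in particular $\bullet$ has the indicated degree.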
We therefore have to define the isomorphism $C(U)$. We need some additional spaces. For $U$ with $\ev(U) \in {\mathcal{U}}_f(q) \times {\mathcal{U}}_{f_0}(q_0) \times \overline Q{}_{f_0,\rho}^{k_0} \times \overline Q{}_{f_1,\rho}^{k_1} \times {\mathcal{S}}_{f_1}(q_1)$ let us define
\begin{multline*}
Y_\Gamma^U = \{\Xi =(\xi^0,\xi^1,\xi) \in W^{1,p}(u^0) \oplus W^{1,p}(u^1) \oplus W^{1,p}(u)\,|\,\\
\xi(0) \in T_{u(0)}{\mathcal{U}}_{f}(q); (\xi^0_j(1),\xi^0_{j+1}(-1)) \in \Gamma_{(u^0_j(1),u^0_{j+1}(-1))}\text{ for }j < k_0;\\
(\xi^1_j(1),\xi^1_{j+1}(-1)) \in \Gamma_{(u^1_j(1),u^1_{j+1}(-1))}\text{ for }j < k_1;\\
\xi^0_1(-1) \in T_{u^0_1(-1)}{\mathcal{U}}_{f_0}(q_0), (\xi^0_{k_0}(1),\xi(-1)) \in \Gamma_{(u^0_{k_0}(1),u(-1))};\\
\xi^1_{k_1}(1) \in T_{u^1_{k_1}(1)}{\mathcal{S}}_{f_1}(q_1),(\xi(1),\xi^1_1(-1)) \in \Gamma_{(u(1),u^1_1(-1))}\}
\end{multline*}
and for $U$ with $\ev(U) \in {\mathcal{U}}_f(q) \times {\mathcal{U}}_{f_0}(q_0) \times Q_{f_0,\rho}^{k_0} \times Q_{f_1,\rho}^{k_1} \times {\mathcal{S}}_{f_1}(q_1)$ let us define
\begin{multline*}
Y_Q^U = \{\Xi =(\xi^0,\xi^1,\xi) \in W^{1,p}(u^0) \oplus W^{1,p}(u^1) \oplus W^{1,p}(u)\,|\,\\
\xi(0) \in T_{u(0)}{\mathcal{U}}_{f}(q); (\xi^0_j(1),\xi^0_{j+1}(-1)) \in T_{(u^0_j(1),u^0_{j+1}(-1))}Q_{f_0,\rho} \text{ for }j < k_0;\\
(\xi^1_j(1),\xi^1_{j+1}(-1)) \in T_{(u^1_j(1),u^1_{j+1}(-1))}Q_{f_1,\rho} \text{ for }j < k_1;\\
\xi^0_1(-1) \in T_{u^0_1(-1)}{\mathcal{U}}_{f_0}(q_0), (\xi^0_{k_0}(1),\xi(-1)) \in T_{(u^0_{k_0}(1),u(-1))}Q_{f_0,\rho};\\
\xi^1_{k_1}(1) \in T_{u^1_{k_1}(1)}{\mathcal{S}}_{f_1}(q_1),(\xi(1),\xi^1_1(-1)) \in T_{(u(1),u^1_1(-1))}Q_{f_1,\rho}\}
\end{multline*}
The isomorphism $C(U)$ is defined in an entirely similar fashion to the isomorphism entering the definition of the Lagrangian quantum product. Namely, we construct two canonical bijections: the first one is between orientations of $D_U|_{Y_\Gamma^U}$ and orientations of $\ddd(T_U\widetilde{\mathcal{P}}(q,q_0;q_1)) \otimes \ddd(T_UQ/\Gamma)$, while the second one is between orientations of $D_U|_{Y_\Gamma^U}$ and isomorphisms $C(q,A) \otimes C(q_0,A_0) \simeq C(q_1,A_1)$.
The first bijection comes from the exact triple
$$0 \to D_U|_{Y_\Gamma^U} \to D_U|_{Y_Q^U} \to 0_{T_UQ/\Gamma} \to 0$$
which yields the isomorphism
$$\ddd(D_U|_{Y_Q^U}) \simeq \ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma)\,,$$
whence the desired bijection.
The construction of the second bijection follows the same lines as for the Lagrangian product. Namely, we can deform $U$ into $U' \in (C^\infty(D^2,S^1;M,L))^{k_0+k_1+1}$ with $\ev(U') \in \{q\} \times \{q_0\} \times \Delta_L^{k_0+k_1} \times \{q_1\}$, which induces an isomorphism $\ddd(D_U|_{Y_\Gamma^U}) \simeq \ddd(D_{U'}|_{Y_\Gamma^{U'}})$. Then we can deform the operator family $D_A \sharp T{\mathcal{S}}_f(q) \oplus D_{A_0}\sharp T{\mathcal{S}}_{f_0}(q_0) \oplus D_{U'}|_{Y_\Gamma^{U'}}$ by changing the incidence conditions at $q,q_0$ into an operator \footnote{The space $X_\Gamma^{U}$ is defined just like $Y_\Gamma^U$ but with no condition on $\xi(0)$ and $\xi^0_1(-1)$.} $D_A \sharp D_{A_0} \sharp D_{U'}|_{X_\Gamma^{U'}}$, which upon gluing and deformation yields a representative of the family $D_{A_1} \sharp T{\mathcal{S}}_{f_1}(q_1)$. Together with the direct sum isomorphisms, we obtain the isomorphism
$$\ddd(D_{U}|_{Y_\Gamma^{U}}) \otimes \ddd(D_A \sharp T{\mathcal{S}}_f(q)) \otimes \ddd(D_{A_0} \sharp T{\mathcal{S}}_{f_0}(q_0)) \simeq \ddd(D_{A_1} \sharp T{\mathcal{S}}_{f_1}(q_1))\,,$$
which shows that indeed there is a bijection between orientations of $D_{U}|_{Y_\Gamma^{U}}$ and isomorphisms $C(q,A) \otimes C(q_0,A_0) \simeq C(q_1,A_1)$.
The isomorphism $C(U)$ now corresponds to the orientation
$$\textstyle (-1)^{k_1}\bigwedge_i\partial_{U_i} \otimes \bigwedge_i e^U_i \in \ddd(T_U\widetilde{\mathcal{P}}(q,q_0;q_1)) \otimes \ddd(T_UQ/\Gamma)$$
where we abbreviated $\bigwedge_i\partial_{U_i} = \bigwedge_i\partial_{u^0_i}\wedge \bigwedge_i\partial_{u^1_i}$ and $\bigwedge_i e^U_i = \bigwedge_i e^{u^0}_i \wedge \bigwedge_i e^{u^1}_i$.
We have
\begin{thm}
The operation $\bullet$ is a chain map. More precisely:
$$\partial_{{\mathcal{D}}_1} \circ \bullet = \bullet \circ (\partial_{{\mathcal{D}}} \otimes \id + (-1)^{2n-k} \id \otimes \partial_{{\mathcal{D}}_0}) {:\ } QC_k({\mathcal{D}}) \otimes QC_l({\mathcal{D}}_0:L) \to QC_{k+l-2n-1}({\mathcal{D}}_1:L)$$
\end{thm}
\begin{prf}
Unlike the proof in the case of the Lagrangian quantum product, due to transversality issues in case $N_L = 2$ (see \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} and references therein), one needs to use more general objects in order to prove the asserted relation, namely one needs to use moduli spaces of spiked pearls where the center is allowed to carry a Hamiltonian perturbation. The proof in \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} then proceeds as follows: first one defines a perturbed variant of the operation $\bullet$ using the perturbed moduli spaces; it is then straightforward to show that this perturbed operation is indeed a chain map. Next, one proves that for all sufficiently small perturbations, there is a canonical bijection between the $0$-dimensional components of perturbed and unperturbed moduli spaces of spiked pearls, and therefore that the two operations are identical.
Of course, in the original proof of Biran--Cornea orientations are not taken into account, as all the counts are modulo 2. In \cite{Biran_Cornea_Lagr_top_enumerative_invts} the authors bring orientations into play, but we will not follow that argument, since our methods do not presuppose additional choices such as an orientation of $L$ and (relative S)Pin structures, and since the proof can be carried through without such choices.
The method used in the present paper to show, for example, that the Lagrangian quantum product defines a chain map, can be used with obvious minimal changes to accommodate the situation where the core carries a Hamiltonian perturbation. The same exact strategy can be used to show that if we define $\bullet$ using moduli spaces of spiked pearls in which the center carries a Hamiltonian perturbation, then it is a chain map, and this is based on transversality and gluing results of Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}. What remains to be shown is that the perturbed operation and the unperturbed one coincide in case the perturbation is small enough. Let ${\mathcal{P}}^0(q,q_0;q_1)$ and ${\mathcal{P}}^0(q,q_0;q_1;H)$ denote the $0$-dimensional components of the moduli spaces of spiked pearls and perturbed spiked pearls, respectively. Biran--Cornea prove that for all $H$ sufficiently small there is a canonical bijection ${\mathcal{P}}^0(q,q_0;q_1;H) \simeq {\mathcal{P}}^0(q,q_0;q_1)$. We have to show that if $U_H \in {\mathcal{P}}^0(q,q_0;q_1;H)$ and $U \in {\mathcal{P}}^0(q,q_0;q_1)$ correspond under this bijection, then the isomorphisms $C(U_H)$, $C(U)$ are equal. This is, however, obvious, since the maps $U_H,U$ are close, therefore can be deformed into each other, which means there is a canonical isomorphism $\ddd(D_U|_{Y_\Gamma^U}) = \ddd(D_{U_H}|_{Y_\Gamma^{U_H}})$. We have a similar isomorphism $\ddd(T_U\widetilde{\mathcal{P}}(q,q_0;q_1)) \otimes \ddd(T_UQ/\Gamma) \simeq \ddd(T_{U_H}\widetilde{\mathcal{P}}(q,q_0;q_1;H)) \otimes \ddd(T_{U_H}Q/\Gamma)$, which means that the isomorphisms $C(U_H),C(U)$, which correspond to specific orientations of these spaces, coincide. \qed
\end{prf}
Thus we have a well-defined bilinear operation on homology
$$\bullet {:\ } QH_k({\mathcal{D}}) \otimes QH_l({\mathcal{D}}_0:L) \to QH_{k+l-2n}({\mathcal{D}}_1:L)\,.$$
This is called the quantum module action.
\subsection{Spectral sequences}\label{ss:spectral_seqs}
The quantum complexes admit natural filtrations by the Maslov or Chern numbers, and these give rise to spectral sequences.
We only consider the Lagrangian case. The Lagrangian quantum complex corresponding to a quantum datum ${\mathcal{D}} = (f,\rho,J)$ is
$$QC_*({\mathcal{D}}:L) = \bigoplus_{ \substack{x \in \Crit f \\ A \in \pi_2(M,L,x)}} C(x,A)\,.$$
Let us define the increasing filtration
$$F_pQC_*({\mathcal{D}}:L) = \bigoplus_{x \in \Crit f} \bigoplus_{\substack{A \in \pi_2(M,L,x) \\ \mu(A) \geq - pN_L}} C(x,A)\,.$$
It follows from the definition of $\partial_{\mathcal{D}}$ that it preserves this filtration. Therefore we have the associated spectral sequence whose zeroth page is
$$E^0_{p,q} = F_p QC_{p+q}({\mathcal{D}}:L) / F_{p-1} QC_{p+q}({\mathcal{D}}:L)$$
and so
$$E^0_{p,*} = F_p QC_*({\mathcal{D}}:L) / F_{p-1} QC_*({\mathcal{D}}:L) \simeq \bigoplus_{x \in \Crit f} \bigoplus_{\substack{A \in \pi_2(M,L,x) \\ \mu(A) = - pN_L}} C(x,A)\,.$$
The boundary operator $\partial^0$ on $E^0_{p,*}$ comes from the Morse boundary operator of $f$, twisted by the local system $\xi_p$, where
$$\xi_p(x) = \{A \in \pi_2(M,L,x)\,|\,\mu(A) = -pN_L\}\,,$$
that is $(E^0_{p,*},\partial^0) \simeq (CM_*(f;\xi_p), \partial^f_{\xi_p})$, where $CM_*$ means the Morse complex, and $\partial^f_{\xi_p}$ is the Morse boundary operator of $f$ twisted by $\xi_p$. Therefore the first page is the twisted homology:
$$E^1_{p,q} \simeq H_{p+q - pN_L}(L;\xi_p)\,.$$
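The degree here is forced by the grading: an element of $E^0_{p,q}$ comes from some $C(x,A)$ with $\mu(A) = -pN_L$ and total degree $|x|-\mu(A) = p+q$ (cf.\ the degree count in the duality discussion above), so that
$$|x| = p+q-pN_L\,,$$
which is precisely the degree appearing on the right-hand side.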
Thus we have
\begin{thm}
The filtration by the Maslov index of disks, $F_*QC_*({\mathcal{D}}:L)$ induces a spectral sequence whose first page is isomorphic to the singular homology of $L$ twisted by the local systems $\xi_p$, and which converges to $QH_*({\mathcal{D}}:L)$. Moreover, using different quantum data, this spectral sequence can be seen to be multiplicative in an obvious sense. \qed
\end{thm}
\section{PSS isomorphisms}\label{s:PSS}
This section is dedicated to the definition and properties of the PSS isomorphisms. The idea of the construction, at least over ${\mathbb{Z}}_2$, is contained in \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling} and the references therein; however, to the best of our knowledge, this is the first time the construction is carried out in detail over an arbitrary ground ring.
In \S\ref{ss:definition_PSS} we define the PSS morphisms on chain level via their matrix elements, prove that they are chain maps and that they are independent of the auxiliary data such as a perturbation datum. \S\ref{ss:PSS_maps_properties} covers the main properties of the PSS maps, namely the fact that they are indeed isomorphisms and that they respect the natural algebraic structures on Floer and quantum homology.
\subsection{Definition}\label{ss:definition_PSS}
We start with the Lagrangian Floer and quantum homologies. Let $(H,J)$ be a regular Floer datum for $L$ and ${\mathcal{D}} = (f,\rho,I_0)$ a regular quantum datum for $L$. The \textbf{PSS map}
$$\PSS_{H,J}^{\mathcal{D}} {:\ } CF_*(H:L) \to QC_*({\mathcal{D}}:L)$$
will be defined through its matrix elements, to define which we need new moduli spaces which combine solutions of Floer's PDE and pearly trajectories. Denote $D^2_- = D^2 - \{-1\}$ and consider $-1$ as a negative puncture. Endow it with the standard negative end, and associate the Floer datum $(H,J)$ to it. Choose a regular perturbation datum $(K,I)$ on $D^2_-$ which is compatible with $(H,J)$ and which satisfies $K = 0$, $I = I_0$ near $1$. Let
$${\mathcal{M}}_-(\widetilde\gamma) = \{u \in C^\infty_b(D^2_-,\partial D^2_-; M,L; \gamma)\,|\,\overline\partial_{K,I}u = 0\,,[\widehat\gamma\sharp u] = 0\in \pi_2(M,L)\}\,.$$
Since the perturbation datum is regular, this is a smooth manifold of dimension $|\widetilde\gamma|$. We have the evaluation map
$$\ev {:\ } {\mathcal{M}}_-(\widetilde\gamma) \times (\widetilde{\mathcal{M}}(L,I_0))^k \to L^{2k+1}$$
given by
$$\ev(u = (u_0,\dots,u_k)) = (u_0(1),u_1(-1),u_1(1),\dots,u_k(-1),u_k(1))\,.$$
Fix $q \in \Crit f$ and define
$$\widetilde{\mathcal{P}}_k(\widetilde\gamma,q) = \ev^{-1}(Q^k \times {\mathcal{S}}(q))\,.$$
There is a natural ${\mathbb{R}}^k$-action on this space and we let ${\mathcal{P}}_k(\widetilde\gamma,q)$ be the quotient. We also denote
$$\widetilde{\mathcal{P}}(\widetilde\gamma,q) = \bigcup_{k\geq 0} \widetilde{\mathcal{P}}_k(\widetilde\gamma,q)\quad \text{ and }\quad {\mathcal{P}}(\widetilde\gamma,q) = \bigcup_{k \geq 0} {\mathcal{P}}_k(\widetilde\gamma,q)\,.$$
As Biran--Cornea indicate \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}, it can be shown that for fixed $(H,J)$ and ${\mathcal{D}}$ there is a subset of ${\mathcal{J}}(M,\omega)$ of second category such that for each $I_0$ in it ${\mathcal{P}}(\widetilde\gamma,q)$ is a smooth manifold of local dimension at $[u]$ equal to $|\widetilde\gamma| - |q| + \mu(u)$ whenever this number does not exceed $1$.
We now proceed with the definition of the matrix elements. Given
$$u \in C^\infty_b(D^2_-,\partial D^2_-;M,L,\gamma) \times (C^\infty(D^2,S^1;M,L))^k$$
with $\ev(u) \in Q^k \times {\mathcal{S}}(q)$ there is an obvious way of producing a class $\widetilde\gamma \sharp u \in \pi_2(M,L,q)$ by concatenating $\widehat\gamma$ with $u_0$ and the disks in $u$ along pieces of gradient trajectories connecting the evaluation points. For $u \in \widetilde{\mathcal{P}}(\widetilde\gamma,q)$ with $|\widetilde\gamma| - |q| + \mu(u) = 0$ and any class of cappings $\widetilde\gamma'$ for $\gamma$ we will construct an isomorphism
$$C(u) {:\ } C(\widetilde\gamma') \to C(q,\widetilde\gamma' \sharp u)\,.$$
For $A \in \pi_2(M,L,q)$ the matrix element of the PSS map is then
$$\sum_{\substack{[u] \in {\mathcal{P}}(\widetilde\gamma,q):\\ \widetilde\gamma' \sharp u = A}} C(u) {:\ } C(\widetilde\gamma') \to C(q,A)\,.$$
It remains to define the isomorphism $C(u)$. This is entirely analogous to all the cases described above: there are natural bijections between orientations of \footnote{The spaces $Y_\Gamma^u$, $Y_Q^u$, $X_\Gamma^u$, $T_uQ/\Gamma$ are defined in an obvious manner.} $D_u|_{Y_\Gamma^u}$, isomorphisms $C(\widetilde\gamma') \simeq C(q,A)$ for $A = \widetilde\gamma' \sharp u$, and orientations of $\ddd(T_u\widetilde{\mathcal{P}}(\widetilde\gamma,q)) \otimes \ddd(T_uQ/\Gamma)$. The isomorphism $C(u)$ is then the one corresponding to the orientation
$$\textstyle(-1)^k\bigwedge_i \partial_{u_i} \otimes \bigwedge_i e^u_i \in \ddd(T_u\widetilde{\mathcal{P}}(\widetilde\gamma,q)) \otimes \ddd(T_uQ/\Gamma)\,,$$
where $k$ is the number of holomorphic disks in $u$. The bijections are constructed as follows: the first one is obtained using deformation and gluing arguments akin to those used in the definition of quantum homology and various algebraic structures on it. The second one comes from the exact Fredholm triple
$$0 \to D_u|_{Y_\Gamma^u} \to D_u|_{Y_Q^u} \to 0_{T_uQ/\Gamma} \to 0 \,.$$
We have
\begin{thm}\label{thm:PSS_is_chain_map_Floer_to_quantum}
The operator $\PSS_{H,J}^{\mathcal{D}}$ thus defined is a chain map:
$$\partial_{\mathcal{D}} \circ \PSS_{H,J}^{\mathcal{D}} = \PSS_{H,J}^{\mathcal{D}} \circ \partial_{H,J}\,.$$
\end{thm}
\begin{prf}
The argument is almost identical to the above proofs, so we just briefly outline the main steps. Let ${\mathcal{P}}^1(\widetilde\gamma,q)$ denote the union of $1$-dimensional components of ${\mathcal{P}}(\widetilde\gamma,q)$. This space can be compactified by adding points of the following types: Floer breaking at the negative end of $u_0$; Morse breaking of one of the gradient trajectories; splitting of a holomorphic disk $u_i$, $i > 0$, into two; splitting of $u_0$ into an element of ${\mathcal{M}}_-(\widetilde \gamma)$ and a holomorphic disk; and collision of $u_i,u_{i+1}$ for some $i \geq 0$. Just as in the case of pearly spaces, we can define $\overline{\mathcal{P}}{}^1(\widetilde\gamma,q)$ to be the union of compactified components of ${\mathcal{P}}^1(\widetilde\gamma,q)$ where points corresponding to collision/breaking belonging to two different components are identified if they represent the same degenerate trajectory. Thus $\overline{\mathcal{P}}{}^1(\widetilde\gamma,q)$ is endowed with the structure of a $1$-dimensional compact topological manifold with boundary. The boundary points correspond to Floer and Morse breaking and are in an obvious bijection with the summands of the matrix element of
$$\partial_{\mathcal{D}} \circ \PSS_{H,J}^{\mathcal{D}} - \PSS_{H,J}^{\mathcal{D}} \circ \partial_{H,J}$$
going from $C(\widetilde\gamma')$ to $C(q,A)$. One needs to show that for every connected component of $\overline{\mathcal{P}}{}^1(\widetilde\gamma,q)$ the summands corresponding to its two boundary points cancel out.
This is achieved as follows. Let $[w] \in {\mathcal{P}}^1(\widetilde\gamma,q)$. Then isomorphisms $C(\widetilde\gamma') \to C(q,\widetilde\gamma'\sharp w)$ are in bijection with orientations of $D_w|_{Y_\Gamma^w}$ and of $\ddd(T_w\widetilde{\mathcal{P}}(\widetilde\gamma,q)) \otimes \ddd(T_wQ/\Gamma)$. The following can be shown:
\begin{itemize}
\item Suppose $\delta = ([u],[v]) \in {\mathcal{M}}(H,J;\widetilde\gamma,\widetilde\gamma') \times {\mathcal{P}}_k(\widetilde\gamma',q)$ is a boundary point of $\overline{\mathcal{P}}{}^1(\widetilde\gamma,q)$ and $[w]$ lies close to $\delta$. Then the orientation corresponding to $C(v) \circ C(u)$ is $- \bigwedge_i \partial_{w_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^w_i \in \ddd(T_w\widetilde{\mathcal{P}}(\widetilde\gamma,q)) \otimes \ddd(T_wQ/\Gamma)$.
\item Suppose $\delta = ([u],[v]) \in {\mathcal{P}}_k(\widetilde\gamma,q') \times {\mathcal{P}}_l(q',q)$ and $[w]$ is close to $\delta$. Then the isomorphism $C(v) \circ C(u)$ corresponds to the orientation $\bigwedge_i \partial_{w_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^w_i \in \ddd(T_w\widetilde{\mathcal{P}}(\widetilde\gamma,q)) \otimes \ddd(T_wQ/\Gamma)$.
\item If $C {:\ } C(\widetilde\gamma') \simeq C(q,A)$ is a fixed isomorphism and $u,v \in \widetilde{\mathcal{P}}^1(\widetilde \gamma,q)$ are two points lying on different sides of a degenerate trajectory in $\overline{\mathcal{P}}{}^1(\widetilde\gamma,q)$ close to it, and $C$ corresponds to the orientation $\bigwedge_i \partial_{u_i}\wedge \eta_u \otimes \bigwedge_i e^u_i$, where $\eta_u \in T_u\widetilde{\mathcal{P}}(\widetilde\gamma,q)$ is a vector transverse to the infinitesimal action of the automorphism group at $u$, then $C$ corresponds to the orientation $\bigwedge_i \partial_{v_i}\wedge \eta_v \otimes \bigwedge_i e^v_i$, where $\eta_v \in T_v\widetilde{\mathcal{P}}(\widetilde\gamma,q)$ points in the same direction as $\eta_u$. In other words, the sign in front of the standard orientation is unchanged when crossing a breaking/collision point.
\end{itemize}
Let us prove, for example, that if there is a connected component $\Delta$ with boundary points $\delta = ([u],[v]) \in {\mathcal{M}}(H,J;\widetilde\gamma,\widetilde\gamma') \times {\mathcal{P}}_k(\widetilde\gamma',q)$ and $\delta' = ([u'],[v']) \in {\mathcal{P}}_{k'}(\widetilde\gamma,q') \times {\mathcal{P}}_{l'}(q',q)$, then $C(v) \circ C(u) = C(v') \circ C(u')$. Let $\{[w^t]\}_{t\in (0,1)}$ be a parametrization of $\Delta$ at nondegenerate points, such that
$$[w^t] \xrightarrow[t \to 0]{} \delta\,.$$
The isomorphism $C(v) \circ C(u)$ corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{w^t}_i$$
for $t$ close to $0$. By the last item above, $C(v) \circ C(u)$ also corresponds to the orientation
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{w^t}_i$$
for $t$ close to $1$. On the other hand the isomorphism $C(v') \circ C(u')$ corresponds to the orientation
$$\textstyle \bigwedge_i \partial_{w^t_i} \wedge \text{inward}_{\delta'} \otimes \bigwedge_i e^{w^t}_i$$
for $t$ close to $1$. Since $\text{inward}_\delta = - \text{inward}_{\delta'}$, we see that the two orientations are equal, whence the equality of the isomorphisms $C(v) \circ C(u) = C(v') \circ C(u')$.
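Explicitly,
$$\textstyle - \bigwedge_i \partial_{w^t_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^{w^t}_i = - \bigwedge_i \partial_{w^t_i} \wedge (-\text{inward}_{\delta'}) \otimes \bigwedge_i e^{w^t}_i = \bigwedge_i \partial_{w^t_i} \wedge \text{inward}_{\delta'} \otimes \bigwedge_i e^{w^t}_i\,.$$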
The computation of the above orientations is done in analogy with the proofs appearing in \S\ref{s:QH}: one uses commutative diagrams of determinant lines coming from exact Fredholm squares and from gluing, and uses the structure of the respective gluing maps to conclude. \qed
\end{prf}
The opposite PSS map
$$\PSS^{H,J}_{\mathcal{D}} {:\ } QC_*({\mathcal{D}}:L) \to CF_*(H:L)$$
is constructed analogously. We will describe its matrix elements. To define them, we need opposite moduli spaces. Recall that $\dot D^2 = D^2 - \{1\}$ and consider $1$ as a positive puncture. Endow it with the standard end, and associate the Floer datum $(H,J)$ to it. Choose a regular perturbation datum $(K,I)$ on $\dot D^2$ which is compatible with $(H,J)$ and which satisfies $K = 0$, $I = I_0$ near $-1$. Let
$${\mathcal{M}}_+(\widetilde\gamma) = \{u \in C^\infty_b(\dot D^2,\partial \dot D^2; M,L; \gamma)\,|\,\overline\partial_{K,I}u = 0\,,\widehat\gamma \text{ is equivalent to }u \text{ as a capping of }\gamma\}\,.$$
Since the perturbation datum is regular, this is a smooth manifold of dimension $|\widetilde\gamma|' = n - |\widetilde\gamma|$. We have the evaluation map
$$\ev {:\ } (\widetilde{\mathcal{M}}(L,I_0))^k \times {\mathcal{M}}_+(\widetilde\gamma) \to L^{2k+1}$$
given by
$$\ev(u = (u_1,\dots,u_k;u_0)) = (u_1(-1),u_1(1),\dots,u_k(-1),u_k(1);u_0(-1))\,.$$
Fix $q \in \Crit f$ and define
$$\widetilde{\mathcal{P}}_k(q,\widetilde\gamma) = \ev^{-1}({\mathcal{U}}(q) \times Q^k)\,.$$
There is a natural ${\mathbb{R}}^k$-action on this space and we let ${\mathcal{P}}_k(q,\widetilde\gamma)$ be the quotient. We also denote
$$\widetilde{\mathcal{P}}(q,\widetilde\gamma) = \bigcup_{k\geq 0} \widetilde{\mathcal{P}}_k(q,\widetilde\gamma)\quad \text{ and }\quad {\mathcal{P}}(q,\widetilde\gamma) = \bigcup_{k \geq 0} {\mathcal{P}}_k(q,\widetilde\gamma)\,.$$
Again, it can be shown that for fixed $(H,J)$ and ${\mathcal{D}}$ there is a subset of ${\mathcal{J}}(M,\omega)$ of the second category such that for each $I_0$ in it ${\mathcal{P}}(q,\widetilde\gamma)$ is a smooth manifold of local dimension at $[u]$ equal to $|q| - |\widetilde\gamma| + \mu(u)$ whenever this number does not exceed $1$.
We now proceed with the definition of the matrix elements. Given
$$u \in (C^\infty(D^2,S^1;M,L))^k \times C^\infty_b(\dot D^2,\partial \dot D^2;M,L;\gamma)$$
with $\ev(u) \in {\mathcal{U}}(q) \times Q^k$ there is an obvious way of producing a class of cappings $A \sharp u$ of $\gamma$ for $A \in \pi_2(M,L,q)$ by concatenating a representative of $A$ with the disks of $u$ and the capping $u_0$ of $\gamma$ along pieces of gradient trajectories connecting the evaluation points. For $u \in \widetilde{\mathcal{P}}(q,\widetilde\gamma)$ with $|q| - |\widetilde\gamma| + \mu(u) = 0$ we will construct an isomorphism
$$C(u) {:\ } C(q,A) \to C(A\sharp u)\,.$$
The matrix element is then
$$\sum_{\substack{[u] \in {\mathcal{P}}(q,\widetilde\gamma): \\ A\sharp u = \widetilde\gamma'}} C(u) {:\ } C(q,A) \to C(\widetilde\gamma')\,.$$
It remains to define the isomorphism $C(u)$. This is identical to the above construction: there are natural bijections between orientations of $D_u|_{Y_\Gamma^u}$, isomorphisms $C(q,A) \simeq C(\widetilde\gamma')$ for $\widetilde\gamma' = A\sharp u$, and orientations of $\ddd(T_u\widetilde{\mathcal{P}}(q,\widetilde\gamma)) \otimes \ddd(T_uQ/\Gamma)$. The isomorphism $C(u)$ is then the one corresponding to the orientation
$$\textstyle\bigwedge_i \partial_{u_i} \otimes \bigwedge_i e^u_i \in \ddd(T_u\widetilde{\mathcal{P}}(q,\widetilde\gamma)) \otimes \ddd(T_uQ/\Gamma)\,.$$
The bijections are constructed as follows: the first one is obtained using deformation and gluing arguments akin to those used in the definition of quantum homology and various algebraic structures on it. The second one comes from the exact Fredholm triple $0 \to D_u|_{Y_\Gamma^u} \to D_u|_{Y_Q^u} \to 0_{T_uQ/\Gamma} \to 0$.
We have
\begin{thm}
The operator $\PSS^{H,J}_{\mathcal{D}}$ is a chain map:
$$\partial_{H,J} \circ \PSS^{H,J}_{\mathcal{D}} = \PSS^{H,J}_{\mathcal{D}} \circ \partial_{\mathcal{D}}\,.$$
\end{thm}
\begin{prf}
This is proved in the exact same way as Theorem \ref{thm:PSS_is_chain_map_Floer_to_quantum}, the only difference being in the orientations induced by the boundary points and the fact that there is a sign change when crossing a breaking/collision point. To summarize the argument, $\overline{\mathcal{P}}{}^1(q,\widetilde\gamma)$ has the structure of a $1$-dimensional compact topological manifold with boundary consisting of points corresponding to Floer or Morse breaking. If $[w] \in {\mathcal{P}}^1(q,\widetilde\gamma)$, then isomorphisms $C(q,A) \simeq C(A\sharp w)$ are in bijection with orientations of $D_w|_{Y_\Gamma^w}$ and of $\ddd(T_w\widetilde{\mathcal{P}}(q,\widetilde\gamma)) \otimes \ddd(T_wQ/\Gamma)$. The following can be shown:
\begin{itemize}
\item Suppose $\delta = ([u],[v]) \in {\mathcal{P}}_k(q,\widetilde\gamma') \times {\mathcal{M}}(H,J;\widetilde\gamma',\widetilde\gamma)$ is a boundary point of $\overline{\mathcal{P}}{}^1(q,\widetilde\gamma)$ and $[w]$ lies close to $\delta$. Then the orientation corresponding to $C(v) \circ C(u)$ is $(-1)^k\bigwedge_i \partial_{w_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^w_i \in \ddd(T_w\widetilde{\mathcal{P}}(q,\widetilde\gamma)) \otimes \ddd(T_wQ/\Gamma)$.
\item Suppose $\delta = ([u],[v]) \in {\mathcal{P}}_k(q,q') \times {\mathcal{P}}_l(q',\widetilde\gamma)$ and $[w]$ is close to $\delta$. Then the isomorphism $C(v) \circ C(u)$ corresponds to the orientation $(-1)^{k+l+1}\bigwedge_i \partial_{w_i} \wedge \text{inward}_\delta \otimes \bigwedge_i e^w_i \in \ddd(T_w\widetilde{\mathcal{P}}(q,\widetilde\gamma)) \otimes \ddd(T_wQ/\Gamma)$.
\item If $C {:\ } C(q,A) \simeq C(\widetilde\gamma')$ is a fixed isomorphism and $u,v \in \widetilde{\mathcal{P}}^1(q,\widetilde\gamma)$ are two points lying on different sides of a degenerate trajectory in $\overline{\mathcal{P}}{}^1(q,\widetilde\gamma)$ close to it, and $C$ corresponds to the orientation $\bigwedge_i \partial_{u_i}\wedge \eta_u \otimes \bigwedge_i e^u_i$, where $\eta_u \in T_u\widetilde{\mathcal{P}}(q,\widetilde\gamma)$ is a vector transverse to the infinitesimal action of the automorphism group at $u$, then $C$ corresponds to the orientation $-\bigwedge_i \partial_{v_i}\wedge \eta_v \otimes \bigwedge_i e^v_i$, where $\eta_v \in T_v\widetilde{\mathcal{P}}(q,\widetilde\gamma)$ points in the same direction as $\eta_u$. In other words, the sign in front of the standard orientation flips when crossing a breaking/collision point.
\end{itemize}
From this we can deduce the vanishing of the matrix elements of
$$\partial_{H,J} \circ \PSS^{H,J}_{\mathcal{D}} - \PSS^{H,J}_{\mathcal{D}} \circ \partial_{\mathcal{D}}\,. \quad\qed$$
\end{prf}
The definition of PSS maps between Hamiltonian Floer homology and the quantum homology of $M$ proceeds in a similar, though much simpler, manner. We describe it here briefly, mainly to establish notation for later use.
Fix a regular nondegenerate periodic Floer datum $(H,J)$ and a regular quantum datum ${\mathcal{D}} = (f,\rho,I_0)$ for $M$. We start with the definition of
$$\PSS_{H,J}^{\mathcal{D}} {:\ } CF_*(H) \to QC_*({\mathcal{D}})\,.$$
Let $S^2_- = S^2 - \{-1\}$, where we view $S^2 = {\mathbb{C}} P^1$, with $-1$ being a negative puncture. Endow it with the standard negative end and associate $(H,J)$ to it. Pick a regular perturbation datum $(K,I)$ on $S^2_-$ which is compatible with $(H,J)$ and which satisfies $K = 0$, $I = I_0$ near $1$. For $\widetilde x \in \Crit {\mathcal{A}}_H$ define
$${\mathcal{M}}_-(\widetilde x) = \{u \in C^\infty_b(S^2_-,M;x)\,|\,\overline\partial_{K,I}u=0\,,[\widehat x \sharp u] = 0 \in \pi_2(M)\}\,.$$
This is a smooth manifold of dimension $|\widetilde x|$. We have the evaluation map
$$\ev {:\ } {\mathcal{M}}_-(\widetilde x) \to M\,,\quad \ev(u) = u(1)\,.$$
For $q \in \Crit f$ let
$${\mathcal{P}}(\widetilde x, q) = \ev^{-1}({\mathcal{S}}(q))\,.$$
This is a smooth manifold of dimension $|\widetilde x| - |q|$.
We now define the matrix elements of the PSS map. Given $u \in C^\infty_b(S^2_-,M;x)$ with $\ev(u) \in {\mathcal{S}}(q)$ we can produce a class $\widetilde x \sharp u \in \pi_2(M,q)$ by concatenating a representative of $\widetilde x$ with $u$ and transferring it to $q$ using the piece of gradient trajectory connecting $u(1)$ with $q$. For $u \in {\mathcal{P}}(\widetilde x,q)$ and any class of cappings $\widetilde x'$ of $x$ we will define an isomorphism
$$C(u) {:\ } C(\widetilde x') \to C(q,\widetilde x' \sharp u)$$
The matrix element of the PSS map is then
$$\sum_{\substack{u \in {\mathcal{P}}(\widetilde x,q): \\ \widetilde x' \sharp u = A}} C(u) {:\ } C(\widetilde x') \to C(q,A)\,.$$
To define the isomorphism $C(u)$ we note that, as usual, there is a bijection between orientations \footnote{The space $Y^u$ is defined in an obvious manner.} of $D_u|_{Y^u}$ and isomorphisms $C(\widetilde x') \simeq C(q,\widetilde x' \sharp u)$. On the other hand, $D_u|_{Y^u}$ is an index zero surjective operator, therefore an isomorphism. The isomorphism $C(u)$ is the one corresponding to the positive canonical orientation $1 \otimes 1^\vee \in \ddd(D_u|_{Y^u})$. The bijection is constructed using the gluing and deformation isomorphisms.
This defines the PSS map, and we have
\begin{thm}
The PSS map is a chain map:
$$\partial_{\mathcal{D}} \circ \PSS_{H,J}^{\mathcal{D}} = \PSS_{H,J}^{\mathcal{D}} \circ \partial_{H,J}\,.$$
\end{thm}
\begin{prf}
The $1$-dimensional part ${\mathcal{P}}^1(\widetilde x,q)$ can be compactified by adding Floer and Morse breaking. It suffices to note that a boundary point corresponding to Floer breaking induces the outward orientation on ${\mathcal{P}}^1(\widetilde x, q)$ while if it corresponds to Morse breaking, the induced orientation is inward. Since there is no breaking/collision, the fact that for an element $w \in {\mathcal{P}}^1(\widetilde x,q)$ there is a bijection between isomorphisms $C(\widetilde x') \simeq C(q,\widetilde x'\sharp w)$ and orientations of $\ddd(D_w|_{Y^w}) = \ddd(T_w{\mathcal{P}}(\widetilde x,q))$ suffices to establish the vanishing of the matrix elements of the difference $\partial_{\mathcal{D}} \circ \PSS_{H,J}^{\mathcal{D}} - \PSS_{H,J}^{\mathcal{D}} \circ \partial_{H,J}$. \qed
\end{prf}
Similarly, we have the opposite PSS map
$$\PSS^{H,J}_{\mathcal{D}} {:\ } QC_*({\mathcal{D}}) \to CF_*(H)\,.$$
Let $S^2_+ = S^2 - \{1\}$ with the puncture $\{1\}$ being positive. Endow it with the standard end and associate $(H,J)$ to it. Pick a regular perturbation datum $(K,I)$ on $S^2_+$ which is compatible with $(H,J)$ and which satisfies $K = 0$, $I = I_0$ near $-1$. For $\widetilde x \in \Crit {\mathcal{A}}_H$ define
$${\mathcal{M}}_+(\widetilde x) = \{u \in C^\infty_b(S^2_+,M;x)\,|\,\overline\partial_{K,I}u=0\,,u \in \widetilde x \text{ as a capping}\}\,.$$
This is a smooth manifold of dimension $|\widetilde x|' = 2n - |\widetilde x|$. We have the evaluation map
$$\ev {:\ } {\mathcal{M}}_+(\widetilde x) \to M\,,\quad \ev(u) = u(-1)\,.$$
For $q \in \Crit f$ let
$${\mathcal{P}}(q,\widetilde x) = \ev^{-1}({\mathcal{U}}(q))\,.$$
This is a smooth manifold of dimension $|q| - |\widetilde x|$.
We now define the matrix elements of the PSS map. Given $u \in C^\infty_b(S^2_+,M;x)$ with $\ev(u) \in {\mathcal{U}}(q)$, and $A \in \pi_2(M,q)$ we can produce a class of cappings $A \sharp u$ of $x$ by concatenating a representative of $A$ with $u$ along the piece of gradient trajectory connecting $q$ with $u(-1)$. For $u \in {\mathcal{P}}(q,\widetilde x)$ and $A \in \pi_2(M,q)$ we will define an isomorphism
$$C(u) {:\ } C(q,A) \simeq C(A \sharp u)\,.$$
The matrix element of the PSS map is then
$$\sum_{\substack{u \in {\mathcal{P}}(q, \widetilde x): \\ A \sharp u = \widetilde x'}}C(u) {:\ } C(q,A) \to C(\widetilde x')\,.$$
To define the isomorphism $C(u)$ note that there is a bijection between orientations of $D_u|_{Y^u}$ and isomorphisms $C(q,A) \simeq C(A \sharp u)$. On the other hand, $D_u|_{Y^u}$ is an index zero surjective operator, therefore an isomorphism. The isomorphism $C(u)$ is the one corresponding to the positive canonical orientation $1 \otimes 1^\vee \in \ddd(D_u|_{Y^u})$. The bijection is constructed using the gluing and deformation isomorphisms.
This defines the PSS map, and we can prove
\begin{thm}
The PSS map is a chain map:
$$\partial_{H,J} \circ \PSS^{H,J}_{\mathcal{D}} = \PSS^{H,J}_{\mathcal{D}} \circ \partial_{\mathcal{D}}\,.\quad \qed$$
\end{thm}
\subsubsection{Independence of auxiliary data}\label{sss:indep_aux_data_PSS}
The idea is to use moduli spaces of mixed pearls parametrized over $[0,1]$ and to use a homotopy of auxiliary data, such as conformal structures and perturbation data over the interval. We then define a chain homotopy between the PSS maps corresponding to the perturbation data over $0,1$ using the $0$-dimensional moduli spaces, and then the $1$-dimensional moduli spaces are used in order to show that this is indeed a chain homotopy. This combines techniques from \S\ref{s:QH} and the present section.
We will give details only for the Lagrangian case, the other one being entirely similar, albeit much simpler. More precisely, let $(H,J)$ be a regular Floer datum, ${\mathcal{D}} = (f,\rho, I_0)$ a regular quantum homology datum for $L$, and let $(K,I) = \{(K^r,I^r)\}_{r \in [0,1]}$ be a homotopy of perturbation data on $D^2_-$ compatible with $(H,J)$, which is stationary for $r$ close to $0,1$. Assume that both $(K^0,I^0)$, $(K^1,I^1)$ are regular and the homotopy is regular as well. Any two perturbation data can be connected by such a homotopy. For $\widetilde\gamma \in \Crit {\mathcal{A}}_{H:L}$ define
$${\mathcal{M}}_-(K,I;\widetilde\gamma) = \{(r,u)\,|\, r\in[0,1]\,,u\in C^\infty_b(D^2_-,\partial D^2_-;M,L;\gamma)\,,\overline\partial_{K^r,I^r}u = 0\,,[\widehat\gamma\sharp u] = 0 \in \pi_2(M,L)\}\,.$$
This is a smooth manifold of dimension $|\widetilde\gamma| + 1$. We have the evaluation map
$$\ev {:\ } {\mathcal{M}}_-(K,I;\widetilde\gamma) \times (\widetilde{\mathcal{M}}(L,I_0))^k \to L^{2k+1}$$
defined via
$$\ev(u = (u_0,u_1,\dots, u_k)) = (u_0(1),u_1(-1),\dots,u_k(1))$$
and we let
$$\widetilde{\mathcal{P}}_k(K,I;\widetilde\gamma,q) = \ev^{-1}(Q^k \times {\mathcal{S}}(q))\,.$$
We have a natural ${\mathbb{R}}^k$-action on this space and we let ${\mathcal{P}}_k(K,I;\widetilde\gamma,q)$ be the quotient. We let $\widetilde{\mathcal{P}}(K,I;\widetilde\gamma,q)$ and ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ be the respective unions of these spaces over $k \geq 0$. For generic $I_0$ the space ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ is a smooth manifold of local dimension at $u$ equal to $|\widetilde\gamma| - |q| + \mu(u) + 1$ if this number does not exceed $1$.
Using the $0$-dimensional part of this space one can define a homomorphism
$$\Psi\equiv \Psi_{K,I} {:\ } CF_*(H:L) \to QC_{*+1}({\mathcal{D}}:L)\,.$$
This is defined via its matrix elements. To define these it suffices, using the above methods, to orient the operator $D_u|_{Y_\Gamma^u}$ for $(r,u) \in \widetilde{\mathcal{P}}(K,I;\widetilde\gamma,q)$ such that $|\widetilde\gamma| - |q| + \mu(u) + 1 = 0$. This orientation then induces an isomorphism $C(\widetilde\gamma') \simeq C(q,\widetilde\gamma'\sharp u)$, and summing up over the elements of the $0$-dimensional part of ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ produces the desired matrix elements. In order to orient $D_u|_{Y_\Gamma^u}$ we note that there is an exact Fredholm triple
$$0 \to D_u|_{Y_\Gamma^u} \to D_{r,u}|_{Y_Q^u} \to 0_{T_r[0,1] \oplus T_uQ/\Gamma} \to 0$$
which induces a bijection between orientations of $D_u|_{Y_\Gamma^u}$ and orientations of $\ddd(T_{(r,u)}\widetilde{\mathcal{P}}(K,I;\widetilde\gamma,q)) \otimes \ddd(T_r[0,1] \oplus T_uQ/\Gamma)$. Now pick the orientation $\bigwedge_i\partial_{u_i} \otimes \bigwedge_i e^u_i \wedge \partial_r$.
Using the $1$-dimensional part of ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ and chasing suitable commutative diagrams as above, we can prove
\begin{thm}
The map $\Psi_{K,I}$ defines a chain homotopy between the PSS maps constructed using the perturbation data $(K^0,I^0)$ and $(K^1,I^1)$. \qed
\end{thm}
Analogously one proves that the opposite PSS maps are also independent of the perturbation datum used. This allows us to see that there are well-defined morphisms
$$\PSS_{H,J}^{\mathcal{D}} {:\ } HF_*(H,J:L) \to QH_*({\mathcal{D}}:L)\,,\quad \text{ and }\quad \PSS_{\mathcal{D}}^{H,J} {:\ } QH_*({\mathcal{D}}:L) \to HF_*(H,J:L)\,.$$
\subsection{Properties}\label{ss:PSS_maps_properties}
\subsubsection{PSS maps and the continuation morphisms}\label{sss:PSS_maps_cont_isomorphisms}
Let $(H^i,J^i)$, $i=0,1$ be regular Floer data associated to the puncture of $D^2_-$, and let $(H^s,J^s)_{s\in{\mathbb{R}}}$ be a regular homotopy between these which is stationary outside $s \in (0,1)$. Let ${\mathcal{D}} = (f,\rho,I_0)$ be a regular quantum homology datum. We will prove here that the PSS maps respect Floer continuation morphisms, more precisely that
$$\PSS_{H^1,J^1}^{\mathcal{D}} \circ \Phi_{H^0,J^0}^{H^1,J^1} = \PSS_{H^0,J^0}^{\mathcal{D}} {:\ } HF_*(H^0,J^0:L) \to QH_*({\mathcal{D}}:L)\,.$$
In order to prove this we will construct a suitable chain homotopy. First we need some additional moduli spaces. Let $(K,I) = \{(K^r,I^r)\}_{r\geq 0}$ be a regular perturbation datum on the trivial family $[0,\infty) \times D^2_-$, which is stationary for $r$ near $0$, which is compatible with $(H^0,J^0)$, which satisfies $K=0,I=I_0$ near $1 \in D^2_-$, and in addition for $r$ large enough we have the following expression in the coordinates $(s,t) \in (-\infty,0] \times [0,1]$ on the end of $D^2_-$:
$$K^r(s,t) = H^{s+r+1}_t\,dt\,, \quad I^r(s,t) = J^{s+r+1}_t\,.$$
For $\widetilde\gamma \in \Crit{\mathcal{A}}_{H^0:L}$ let
$${\mathcal{M}}_-(K,I;\widetilde\gamma) = \{(r,u)\,|\, r\in[0,\infty)\,,u\in C^\infty_b(D^2_-,\partial D^2_-;M,L;\gamma)\,,\overline\partial_{K^r,I^r}u = 0\,,[\widehat\gamma\sharp u] = 0 \in \pi_2(M,L)\}\,.$$
This is a smooth manifold of dimension $|\widetilde\gamma| + 1$. We have the evaluation map
$$\ev {:\ } {\mathcal{M}}_-(K,I;\widetilde\gamma) \times (\widetilde{\mathcal{M}}(L,I_0))^k \to L^{2k+1}$$
defined via
$$\ev(u = (u_0,u_1,\dots, u_k)) = (u_0(1),u_1(-1),\dots,u_k(1))$$
and for $q \in \Crit f$ we let
$$\widetilde{\mathcal{P}}_k(K,I;\widetilde\gamma,q) = \ev^{-1}(Q^k \times {\mathcal{S}}(q))\,.$$
We have a natural ${\mathbb{R}}^k$-action on this space and we let ${\mathcal{P}}_k(K,I;\widetilde\gamma,q)$ be the quotient. We let $\widetilde{\mathcal{P}}(K,I;\widetilde\gamma,q)$ and ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ be the respective unions of these spaces over $k \geq 0$. For generic $I_0$ the space ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ is a smooth manifold of local dimension at $u$ equal to $|\widetilde\gamma| - |q| + \mu(u) + 1$ if this number does not exceed $1$.
Using the $0$-dimensional part of this space one can define a homomorphism
$$\Psi\equiv \Psi_{K,I} {:\ } CF_*(H^0:L) \to QC_{*+1}({\mathcal{D}}:L)\,.$$
This is defined via its matrix elements. To define these it suffices, using the above methods, to orient the operator $D_u|_{Y_\Gamma^u}$ for $(r,u) \in \widetilde{\mathcal{P}}(K,I;\widetilde\gamma,q)$ such that $|\widetilde\gamma| - |q| + \mu(u) + 1 = 0$. This orientation then induces an isomorphism $C(\widetilde\gamma') \simeq C(q,\widetilde\gamma'\sharp u)$, and summing up over the elements of the $0$-dimensional part of ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ produces the desired matrix elements. In order to orient $D_u|_{Y_\Gamma^u}$ we note that there is an exact Fredholm triple
$$0 \to D_u|_{Y_\Gamma^u} \to D_{r,u}|_{Y_Q^u} \to 0_{T_r[0,\infty) \oplus T_uQ/\Gamma} \to 0$$
which induces a bijection between orientations of $D_u|_{Y_\Gamma^u}$ and orientations of $\ddd(T_{(r,u)}\widetilde{\mathcal{P}}(K,I;\widetilde\gamma,q)) \otimes \ddd(T_r[0,\infty) \oplus T_uQ/\Gamma)$. Now pick the orientation $\bigwedge_i\partial_{u_i} \otimes \bigwedge_i e^u_i \wedge \partial_r$.
The $1$-dimensional part of ${\mathcal{P}}(K,I;\widetilde\gamma,q)$ compactifies by adding Morse breaking, Floer breaking at $\widetilde\gamma$, which corresponds to the continuation morphism, as well as disk collision and breaking. It is then possible to show, using the same methods as above, that the following is true:
\begin{thm}
The map $\Psi_{K,I}$ defines a chain homotopy between $\PSS_{H^0,J^0}^{\mathcal{D}}$ and $\PSS_{H^1,J^1}^{\mathcal{D}} \circ \Phi_{H^0,J^0}^{H^1,J^1}$. \qed
\end{thm}
The proof that the opposite PSS morphism, as well as the PSS morphisms for quantum homology of $M$, respect continuation morphisms, proceeds in the same manner. Therefore the various PSS morphisms assemble into canonical morphisms
$$\PSS^{\mathcal{D}} {:\ } HF_*(L) \to QH_*({\mathcal{D}}:L)\,,\quad \PSS_{\mathcal{D}} {:\ } QH_*({\mathcal{D}}:L) \to HF_*(L)\,,$$
and
$$\PSS^{\mathcal{D}} {:\ } HF_*(M) \to QH_*({\mathcal{D}})\,,\quad \PSS_{\mathcal{D}} {:\ } QH_*({\mathcal{D}}) \to HF_*(M)\,.$$
\subsubsection{PSS maps are isomorphisms}\label{sss:PSS_are_isomorphisms}
We show here that the compositions
$$\PSS_{H,J}^{\mathcal{D}} \circ \PSS_{\mathcal{D}}^{H,J} \quad \text{ and } \quad \PSS_{\mathcal{D}}^{H,J} \circ \PSS_{H,J}^{\mathcal{D}}$$
are chain homotopic to identity.
Let us start with the second composition. We will in fact produce a chain homotopy between $\PSS_{\mathcal{D}}^{H,J} \circ \PSS_{H,J}^{\mathcal{D}}$ and a continuation morphism from $(H,J)$ to itself. For this we need some new moduli spaces. For $k \geq 1$ and $\widetilde\gamma_\pm \in \Crit {\mathcal{A}}_{H:L}$ we have the evaluation map
$$\ev {:\ } {\mathcal{M}}_-(\widetilde\gamma_-) \times (\widetilde{\mathcal{M}}(L,I_0))^{k-1} \times {\mathcal{M}}_+(\widetilde\gamma_+) \to L^{2k}\,,$$
$$\ev(u = (u_0,u_1,\dots,u_{k-1},u_k)) = (u_0(1),u_1(-1),u_1(1),\dots,u_{k-1}(-1),u_{k-1}(1),u_k(-1))\,,$$
and we let
$$\widetilde{\mathcal{P}}_k(\widetilde\gamma_-,\widetilde\gamma_+) = \ev^{-1}(Q^k)\,.$$
There is a natural ${\mathbb{R}}^{k-1}$-action on this space and we let ${\mathcal{P}}_k(\widetilde\gamma_-,\widetilde\gamma_+)$ be the quotient.
We define $\widetilde{\mathcal{P}}_0(\widetilde\gamma_-,\widetilde\gamma_+) \equiv {\mathcal{P}}_0(\widetilde\gamma_-,\widetilde\gamma_+) = {\mathcal{M}}_{\mathcal{S}}(K,I;\widetilde\gamma_-,\widetilde\gamma_+)$ where ${\mathcal{S}} = [r_0,\infty) \times S$ is a trivial family for some $r_0 > 0$, and where $(K,I)$ is a regular perturbation datum which equals $(K^r,I^r) = (0,I_0)$ for $s \in [-r,r]$, equals $(H_t\,dt, J_t)$ for $s$ outside $(-r-1,r+1)$, and interpolates between the two for the rest of values of $s$. We let
$$\widetilde{\mathcal{P}}(\widetilde\gamma_-,\widetilde\gamma_+) = \bigcup_{k \geq 0}\widetilde{\mathcal{P}}_k(\widetilde\gamma_-,\widetilde\gamma_+)\,,\quad {\mathcal{P}}(\widetilde\gamma_-,\widetilde\gamma_+) = \bigcup_{k \geq 0}{\mathcal{P}}_k(\widetilde\gamma_-,\widetilde\gamma_+)\,.$$
The set ${\mathcal{P}}(\widetilde\gamma_-,\widetilde\gamma_+)$ is a smooth manifold of local dimension at $u$ equal to $|\widetilde\gamma_-| - |\widetilde\gamma_+| + \mu(u) + 1$ if this number does not exceed $1$. It is now clear how to proceed with the definition of the chain homotopy. For $u \in \widetilde{\mathcal{P}}_k(\widetilde\gamma_-,\widetilde\gamma_+)$ with $|\widetilde\gamma_-| - |\widetilde\gamma_+| + \mu(u) + 1 = 0$, we need to orient $D_u|_{Y_\Gamma^u}$ in order to obtain an isomorphism $C(u) {:\ } C(\widetilde\gamma_-) \simeq C(\widetilde\gamma_-\sharp u)$. For $k = 0$ this proceeds as described in \S\ref{s:HF}. For $k \geq 1$ we have the exact triple
$$0 \to D_u|_{Y_\Gamma^u} \to D_u|_{Y_Q^u} \to T_uQ/\Gamma \to 0\,.$$
The operator $D_u|_{Y_Q^u}$ is onto with $(k-1)$-dimensional kernel which is nothing but $T_u\widetilde{\mathcal{P}}_k(\widetilde\gamma_-,\widetilde\gamma_+)$. We orient $D_u|_{Y_\Gamma^u}$ by the orientation corresponding to
$$\textstyle\bigwedge_i\partial_{u_i} \otimes \bigwedge_ie^u_i \in \ddd(T_u\widetilde{\mathcal{P}}(\widetilde\gamma_-,\widetilde\gamma_+)) \otimes \ddd(T_uQ/\Gamma)\,.$$
Similarly to the above, one proves
\begin{thm}
The map $\Psi$ thus defined is a chain homotopy between $\PSS_{\mathcal{D}}^{H,J}\circ \PSS_{H,J}^{\mathcal{D}}$ and the continuation map $\Phi_{H,J}^{H,J}$ defined with the help of the perturbation datum $(K^{r_0},I^{r_0})$. \qed
\end{thm}
We now will show that the composition $\PSS^{\mathcal{D}}_{H,J} \circ \PSS_{\mathcal{D}}^{H,J}$ is homotopic to the identity. For this we need new moduli spaces. Recall that $D^2 - \{\pm1\}$ is biholomorphic to ${\mathbb{R}} \times [0,1]$. Denote by $(s,t)$ the coordinates on $D^2 - \{\pm1\}$ induced by this biholomorphism. Consider a regular perturbation datum $(K,I)$ on the trivial family ${\mathcal{S}} = [0,\infty) \times D^2$ which satisfies: for $r$ close to $0$, $K = 0, I = I_0$, for $r$ large $(K^r(s,t), I^r(s,t))$ equals $(H_t\,dt, J_t)$ for $s \in [-r,r]$, equals $(0,I_0)$ for $s \notin (-r-1,r+1)$, and interpolates between the two for the rest of values of $s$. Consider the space
$${\mathcal{M}}_{\mathcal{S}}(K,I) = \{(r,u) \in [0,\infty) \times C^\infty(D^2,S^1;M,L)\,|\, \overline\partial_{K^r,I^r}u = 0\}\,.$$
This is a smooth manifold of local dimension at $u$ equal to $n + \mu(u) + 1$. We have the evaluation map
$$\ev {:\ } (\widetilde{\mathcal{M}}(L,I_0))^{k_0} \times {\mathcal{M}}_{\mathcal{S}}(K,I) \times (\widetilde{\mathcal{M}}(L,I_0))^{k_1} \to L^{2(k_0+k_1+1)}$$
defined by
$$\ev(U = (u^0_1,\dots,u^0_{k_0};u;u^1_1,\dots,u^1_{k_1})) = (u^0_1(-1),\dots,u^0_{k_0}(1);u(-1),u(1);u^1_1(-1),\dots,u^1_{k_1}(1))\,.$$
For $q,q'\in\Crit f$ define
$$\widetilde{\mathcal{P}}_{k_0,k_1}(K,I;q,q') = \ev^{-1}({\mathcal{U}}(q) \times Q^{k_0+k_1} \times {\mathcal{S}}(q'))\,.$$
There is a natural ${\mathbb{R}}^{k_0+k_1}$-action on this space and we let ${\mathcal{P}}_{k_0,k_1}(K,I;q,q')$ be the quotient. We also define
$$\widetilde{\mathcal{P}}(K,I;q,q') = \bigcup_{k_0,k_1\geq 0}\widetilde{\mathcal{P}}_{k_0,k_1}(K,I;q,q')\quad \text{ and } \quad {\mathcal{P}}(K,I;q,q') = \bigcup_{k_0,k_1\geq 0}{\mathcal{P}}_{k_0,k_1}(K,I;q,q')\,.$$
The space ${\mathcal{P}}(K,I;q,q')$ is a smooth manifold of local dimension at $U$ equal to $|q| - |q'| + \mu(U) + 1$ whenever this number does not exceed $1$.
In order to define the desired chain homotopy, we need only orient the operator $D_U|_{Y_\Gamma^U}$ for $U \in \widetilde{\mathcal{P}}_{k_0,k_1}(K,I;q,q')$ such that $|q| - |q'| + \mu(U) + 1 = 0$. We have the exact Fredholm triple
$$0 \to D_U|_{Y_\Gamma^U} \to D_{r,U}|_{T_r[0,\infty) \times Y_Q^U} \to T_r[0,\infty) \times T_UQ/\Gamma \to 0$$
whence the isomorphism
$$\ddd(D_{r,U}|_{T_r[0,\infty) \times Y_Q^U}) \simeq \ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma)\,.$$
We orient the operator $D_U|_{Y_\Gamma^U}$ by the orientation corresponding to the orientation
$$\textstyle (-1)^{k_0} \bigwedge_i\partial_{U_i} \otimes \bigwedge_ie^U_i \wedge \partial_r\,.$$
Using the $1$-dimensional moduli spaces, one proves
\begin{thm}
The map $\Psi$ thus defined is a chain homotopy between $\PSS^{\mathcal{D}}_{H,J} \circ \PSS^{H,J}_{\mathcal{D}}$ and the identity. \qed
\end{thm}
\begin{prf}
We need only show that for $r = 0$ one gets the identity. This follows from dimension considerations and the fact that the central disk is $I_0$-holomorphic in this case. Unless it is constant, it admits a $1$-parameter family of reparametrizations which contradicts the fact that the corresponding pearly space is $0$-dimensional. Therefore the disk is constant and $q=q'$. \qed
\end{prf}
Therefore we have shown that the compositions
$$\PSS_{H,J}^{\mathcal{D}} \circ \PSS_{\mathcal{D}}^{H,J} \quad \text{ and } \quad \PSS_{\mathcal{D}}^{H,J} \circ \PSS_{H,J}^{\mathcal{D}}$$
equal identity maps on homology, and therefore they are inverse isomorphisms.
The proof of analogous facts for the Hamiltonian Floer homology and the quantum homology of $M$ proceeds in a similar, though much simpler, manner.
\subsubsection{Continuation maps for quantum homology}\label{sss:continuation_maps_QH}
It is possible to define continuation morphisms on quantum homology directly using the approach of Biran--Cornea \cite{Biran_Cornea_Quantum_structures_Lagr_submfds, Biran_Cornea_Rigidity_uniruling}. But since we are ultimately interested in spectral invariants \cite{Leclercq_Zapolsky_Spectral_invts_monotone_Lags}, it is less important to have an independent good definition of quantum homology and therefore we choose another path.
Let ${\mathcal{D}},{\mathcal{D}}'$ be two data for Lagrangian quantum homology. We define the \textbf{continuation morphism}
$$\Phi_{\mathcal{D}}^{{\mathcal{D}}'} {:\ } QH_*({\mathcal{D}}:L) \to QH_*({\mathcal{D}}':L) \quad \text{by} \quad \Phi_{\mathcal{D}}^{{\mathcal{D}}'} = \PSS_{H,J}^{{\mathcal{D}}'} \circ \PSS_{{\mathcal{D}}}^{H,J}$$
for a regular Floer datum $(H,J)$ on $L$. The previous subsection implies that this definition is independent of $(H,J)$ since the PSS maps respect Floer continuation maps. Since PSS maps are isomorphisms, so are the quantum homology continuation maps. In particular we can now define the abstract quantum homology $QH_*(L)$ as the limit of the system of homologies $QH_*({\mathcal{D}}:L)$ connected by the isomorphisms $\Phi_{\mathcal{D}}^{{\mathcal{D}}'}$.
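Note that these continuation morphisms indeed compose correctly: for three regular data ${\mathcal{D}},{\mathcal{D}}',{\mathcal{D}}''$ and a fixed regular Floer datum $(H,J)$ we have, on homology,
$$\Phi_{{\mathcal{D}}'}^{{\mathcal{D}}''} \circ \Phi_{\mathcal{D}}^{{\mathcal{D}}'} = \PSS_{H,J}^{{\mathcal{D}}''} \circ \big(\PSS_{{\mathcal{D}}'}^{H,J} \circ \PSS_{H,J}^{{\mathcal{D}}'}\big) \circ \PSS_{\mathcal{D}}^{H,J} = \PSS_{H,J}^{{\mathcal{D}}''} \circ \PSS_{\mathcal{D}}^{H,J} = \Phi_{\mathcal{D}}^{{\mathcal{D}}''}\,,$$
since the composition $\PSS_{{\mathcal{D}}'}^{H,J} \circ \PSS_{H,J}^{{\mathcal{D}}'}$ equals the identity on $HF_*(H,J:L)$ by \S\ref{sss:PSS_are_isomorphisms}, so that the homologies $QH_*({\mathcal{D}}:L)$ do form a directed system.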
Also we can define the abstract PSS isomorphisms
$$\PSS {:\ } HF_*(L) \to QH_*(L) \quad \text{ and }\quad \PSS {:\ } QH_*(L) \to HF_*(L)\,.$$
\subsubsection{PSS maps respect the algebraic structures}\label{sss:PSS_maps_respect_alg_structs}
We start with the product in Lagrangian Floer and quantum homologies. Here we show that for any regular Floer data $(H^i,J^i)$ and regular quantum data ${\mathcal{D}}_i = (f_i,\rho,I_0)$ for $L$, $i=0,1,2$ we have
$$\PSS_{H^2,J^2}^{{\mathcal{D}}_2} \circ \star \circ (\PSS_{{\mathcal{D}}_0}^{H^0,J^0} \otimes \PSS_{{\mathcal{D}}_1}^{H^1,J^1}) = \star {:\ } QH_*({\mathcal{D}}_0:L) \otimes QH_*({\mathcal{D}}_1:L) \to QH_*({\mathcal{D}}_2:L)\,,$$
where $\star$ denotes the product on both the Floer and quantum homology. This implies that the PSS maps respect the multiplicative structure on both sides.
We will exhibit a chain homotopy between $\PSS_{H^2,J^2}^{{\mathcal{D}}_2} \circ \star \circ (\PSS_{{\mathcal{D}}_0}^{H^0,J^0} \otimes \PSS_{{\mathcal{D}}_1}^{H^1,J^1})$ and $\star$. In order to do so we define new moduli spaces. We choose a regular perturbation datum $(K,I)$ on the trivial family ${\mathcal{S}} = [0,\infty) \times D^2$, as follows. Fix a choice of ends on the punctured surface
$$D^2-\{e^{2\pi i j/3}\,|\,j=0,1,2\}\,,$$
such that the puncture $e^{4\pi i/3}$ is positive and the other two are negative. We require the perturbation datum to satisfy the following conditions: for $r$ close to $0$, $K^r = 0, I^r = I_0$, for $r$ large, $(K^r,I^r) = (H^j_t\,dt,J^j_t)$ for $\epsilon_js \in [0,r]$, $(K^r,I^r) = (0,I_0)$ for $\epsilon_js \in [r+1,\infty)$, and interpolating between the two for $\epsilon_js \in (r,r+1)$ on the $j$-th end; here $\epsilon_j$ is the sign of the $j$-th end, that is $\epsilon_j = -1$ for $j=0,1$ and $\epsilon_2 = 1$. We let
$${\mathcal{M}}_{\mathcal{S}}(K,I) = \{(r,u) \in [0,\infty) \times C^\infty(D^2,S^1;M,L)\,|\,\overline\partial_{K^r,I^r}u = 0\}\,.$$
This is a smooth manifold of local dimension at $u$ equal to $n + \mu(u) + 1$.
We have the evaluation map
$$\ev {:\ } (\widetilde{\mathcal{M}}(L,I_0))^{k_0+k_1+k_2} \times {\mathcal{M}}_{\mathcal{S}}(K,I) \to L^{2(k_0+k_1+k_2)+3}$$
defined by
\begin{multline*}
\ev(U=(u^0,u^1,u^2;u)) = (u^0_1(-1),\dots,u^0_{k_0}(1),u(1); u^1_1(-1),\dots,u^1_{k_1}(1),u(e^{2\pi i/3});\\ u(e^{4\pi i/3}),u^2_1(-1),\dots,u^2_{k_2}(1))\,.
\end{multline*}
For $q_i \in \Crit f_i$ we let
$$\widetilde{\mathcal{P}}_{k_0,k_1,k_2}(K,I;q_0,q_1;q_2) = \ev^{-1}({\mathcal{U}}_{f_0}(q_0) \times Q^{k_0}_{f_0,\rho} \times {\mathcal{U}}_{f_1}(q_1) \times Q^{k_1}_{f_1,\rho} \times Q^{k_2}_{f_2,\rho} \times {\mathcal{S}}_{f_2}(q_2))\,.$$
We let ${\mathcal{P}}_{k_0,k_1,k_2}(K,I;q_0,q_1;q_2)$ be the quotient of this space by the natural ${\mathbb{R}}^{k_0+k_1+k_2}$-action, and put
$$\widetilde {\mathcal{P}}(K,I;q_0,q_1;q_2) = \bigcup_{k_0,k_1,k_2\geq 0}\widetilde {\mathcal{P}}_{k_0,k_1,k_2}(K,I;q_0,q_1;q_2)\;,$$
$${\mathcal{P}}(K,I;q_0,q_1;q_2) = \bigcup_{k_0,k_1,k_2\geq 0} {\mathcal{P}}_{k_0,k_1,k_2}(K,I;q_0,q_1;q_2)\,.$$
The space ${\mathcal{P}}(K,I;q_0,q_1,q_2)$ is a smooth manifold of local dimension at $U$ equal to $|q_0| + |q_1| - |q_2| + \mu(U) + 1 - n$ whenever this number does not exceed $1$. In order to define the chain homotopy, we need to orient the operator $D_U|_{Y_\Gamma^U}$ if $U \in \widetilde{\mathcal{P}}_{k_0,k_1,k_2}(K,I;q_0,q_1;q_2)$ is such that $|q_0| + |q_1| - |q_2| + \mu(U) + 1 - n= 0$. The exact Fredholm triple $0 \to D_U|_{Y_\Gamma^U} \to D_{r,U}|_{T_r[0,\infty)\times Y_Q^U} \to T_r[0,\infty) \times T_UQ/\Gamma \to 0$ gives rise to the isomorphism
$$\ddd(D_{r,U}|_{T_r[0,\infty) \times Y_Q^U}) \simeq \ddd(D_U|_{Y_\Gamma^U}) \otimes \ddd(T_UQ/\Gamma)\,.$$
We orient $D_U|_{Y_\Gamma^U}$ by the orientation corresponding to the orientation
$$\textstyle(-1)^{k_0+k_1}\bigwedge_i\partial_{U_i} \otimes \bigwedge_i e^U_i \wedge \partial_r\,.$$
Using the compactified $1$-dimensional moduli space, it is possible to show
\begin{thm}
The map $\Psi$ thus defined is a chain homotopy between $\PSS_{H^2,J^2}^{{\mathcal{D}}_2} \circ \star \circ (\PSS_{{\mathcal{D}}_0}^{H^0,J^0} \otimes \linebreak\PSS_{{\mathcal{D}}_1}^{H^1,J^1})$ and $\star$. \qed
\end{thm}
\begin{prf}
Since for $r = 0$ the perturbation datum coincides with $(0,I_0)$, we are counting holomorphic disks and therefore indeed we obtain the Lagrangian quantum product at the corresponding end. \qed
\end{prf}
The proof that the PSS maps intertwine the two other algebraic structures, namely the product on $QH_*(M)$ and $HF_*(M)$ and the quantum module structure, is entirely analogous. The case of the quantum homology of $M$ is much simpler since there are no signs involved and there is no breaking/collision. In the case of the quantum module structure one needs to introduce the sign $(-1)^{k_0}$ when defining the chain homotopy, where $k_0$ is the number of disks in leg $0$ of the spiked pearly trajectory.
The upshot is that the PSS maps are isomorphisms between the quantum and Floer homologies, and they intertwine the algebraic structures on both sides, which include the product structures on the quantum and Floer homologies and the quantum module action. Note that the PSS maps automatically respect the unit elements, and that the inverse PSS maps respect the algebraic structures as well.
\section{Boundary operators in the presence of bubbling}\label{s:boundary_op_squares_zero_bubbling}
Here we show that the boundary operators on Lagrangian Floer and quantum homologies square to zero when there is bubbling present. By index considerations this only happens when $N_L = 2$.
\subsection{$\partial_{H,J}^2 = 0$}\label{ss:boundary_op_squares_zero_bubbling_HF}
Let $(H,J)$ be a regular Floer datum for $L$, where $N_L = 2$. We wish to show that $\partial_{H,J}^2 = 0$. Since the boundary operator is determined by its matrix elements, it suffices to show that all the matrix elements of its square vanish, that is whenever $\widetilde\gamma_\pm \in \Crit {\mathcal{A}}_{H:L}$ are such that $|\widetilde\gamma_-| = |\widetilde\gamma_+| + 2$, then we have
$$\sum_{\substack{\widetilde\delta \in \Crit {\mathcal{A}}_{H:L} \\ |\widetilde\delta| = |\widetilde\gamma_-| - 1}} \sum_{\substack{([u],[v]) \in {\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\delta) \times \\ {\mathcal{M}}(H,J;\widetilde\delta,\widetilde\gamma_+)}} C(v) \circ C(u) = 0\,.$$
The Gromov compactification of ${\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$ is obtained by adding two types of points, the usual Floer breaking, and bubbling, see \S\ref{sss:compactness_gluing}. A Maslov $2$ disk can bubble off only in the case $\gamma_- = \gamma_+ =: \gamma$, and it is attached to one of the endpoints $q_i := \gamma(i)$, $i = 0,1$. In fact, such a bubble appears as follows: a sequence of Floer strips in $\widetilde {\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$ degenerates into the $s$-independent strip $(s,t) \mapsto \gamma(t)$ and a Maslov two disk attached to it at one of its boundary components. The noncompact connected components of $\overline{\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$ are thus subdivided into three types according to the number of ends corresponding to Floer breaking. The arguments used in proving that the boundary operator squares to zero when no bubbling is present allow us to see that it is enough to show the vanishing of the sum
$$\sum C(v) \circ C(u)$$
where it is taken over all pairs $([u],[v])$ which appear as boundary points in those components of the compactified space $\overline{\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$ whose other boundary point is a bubble.
We let $\widetilde{\mathcal{M}}(J,q)$ be the space of $J$-holomorphic disks with boundary on $L$ passing through $q \in L$, and denote by ${\mathcal{M}}(J,q)$ the quotient by the conformal automorphism group $\Aut(D^2,\partial D^2,1)$. Let $\Delta \subset \overline{\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$ be a connected component whose boundary points are a broken Floer trajectory $([x],[y])$ and a holomorphic disk $[u] \in {\mathcal{M}}(J_i,q_i)$ attached at one of the endpoints of $\gamma$. Recall that there is a bijection between isomorphisms $C(\widetilde\gamma_-) \simeq C(\widetilde\gamma_+)$ and orientations of $T_z\widetilde{\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$, given by gluing, where $[z] \in \Delta$, and that moreover the isomorphism $C(y) \circ C(x)$ corresponds to the orientation $-\partial_z \wedge \text{inward}$. Similarly, using gluing, one can establish a bijection between isomorphisms $C(\widetilde\gamma_-) \simeq C(\widetilde\gamma_+)$ and orientations of $T_u\widetilde{\mathcal{M}}(J_i,q_i)$. Namely, one has the following exact triple:
$$\xymatrix{0 \ar[r] & W^{1,p}(u)\sharp 0 \ar[r] \ar[d]^{D_u\sharp 0} & W^{1,p}(u) \sharp W^{1,p}(\widetilde\gamma_-) \ar[r] \ar[d]^{D_u \sharp D_{\widetilde\gamma_-}} & W^{1,p}(\widetilde\gamma_-) \ar[r] \ar[d]^{D_{\widetilde\gamma_-}} & 0 \\ 0 \ar[r] & L^p(u) \ar[r] & L^p(u) \oplus L^p(\widetilde\gamma_-) \ar[r] & L^p(\widetilde\gamma_-) \ar[r] & 0}$$
where $W^{1,p}(u)\sharp 0 = \{\xi \in W^{1,p}(u)\,|\,\xi(1) = 0\}$ and $D_u\sharp 0 = D_u|_{W^{1,p}(u)\sharp 0}$. This triple induces an isomorphism
$$\ddd(D_u\sharp 0) \otimes \ddd(D_{\widetilde\gamma_-}) \simeq \ddd(D_u\sharp D_{\widetilde\gamma_-})\,.$$
The latter operator can be identified with $D_{\widetilde\gamma_+}$, therefore using the fact that $\ker D_u \sharp 0 = T_u\widetilde{\mathcal{M}}(J_i,q_i)$, we have obtained the desired bijection. By regularity, the operator $D_u$ is onto, and by genericity, so is $D_u \sharp 0$, and so its kernel is $2$-dimensional. The infinitesimal action of the automorphism group induces a map ${\mathbb{C}} \to \ker D_{u}\sharp 0$, which is an isomorphism. Thus we see that the isomorphism $C(y) \circ C(x)$ corresponds to an orientation of ${\mathbb{C}}$. A computation shows that the isomorphism ${\mathbb{C}} \to \ker D_u\sharp 0$ induces the standard orientation on ${\mathbb{C}}$ if the bubble is attached at $\gamma(0)$, and the negative of the standard orientation on ${\mathbb{C}}$ if the bubble is attached at $\gamma(1)$. We see now that the sum
$$\sum C(y) \circ C(x)$$
over the boundary points of connected components of $\overline{\mathcal{M}}$ whose other endpoints are bubbles equals the sum
$$\sum C(u)$$
running over those bubbles appearing as boundary points of these connected components, where $C(u)$ is the isomorphism $C(\widetilde\gamma_-) \simeq C(\widetilde\gamma_+)$ induced via gluing from the orientation on $u$ which is the standard orientation on ${\mathbb{C}}$ if $u$ is attached at $\gamma(0)$ and the negative of the standard orientation otherwise.
Fix a representative $\widehat \gamma_- \in \widetilde\gamma_-$ and let $\{a(t)\}_{t\in[0,1]}$ be a parametrization of $\widehat\gamma_-|_{\partial\dot D^2}$, up to the endpoints of $\gamma$, and define the following space:
$$\widetilde{\mathcal{M}}(a) = \{(t,u)\,|\, t\in[0,1],\, u \in \widetilde{\mathcal{M}}(J_t,a(t))\}\,.$$
This is a smooth $3$-dimensional manifold. We also denote by ${\mathcal{M}}(a)$ its quotient by the automorphism group $\Aut(D^2,1)$; ${\mathcal{M}}(a)$ is a smooth $1$-dimensional compact manifold with boundary. We claim that whenever $\Delta$ is a connected component of ${\mathcal{M}}(a)$ with boundary points $(t,[u]),(t',[u'])$, then $C(u) + C(u')$ vanishes. A little combinatorial argument shows that this implies the vanishing of the above sum which only involves disks bubbling off at the ends of components whose other ends are broken Floer trajectories. We have therefore reduced the problem to the following claim: for any connected component $\Delta \subset {\mathcal{M}}(a)$ the sum $C(u) + C(u')$ vanishes, where $u,u'$ represent the boundary points of $\Delta$.
So let indeed $\Delta$ be such a component and let $(t,[u]),(t',[u'])$ be its boundary points ($t,t' = 0,1$). Let $\{(t(\tau),[u(\tau)])\}_{\tau \in [0,1]}$ be a parametrization of $\Delta$ with $(t(0),[u(0)]) = (t,[u])$ and $(t(1),[u(1)]) = (t',[u'])$. Using a gluing argument as above, we get the isomorphism
$$\ddd(D_{u(\tau)}\sharp 0) \otimes \ddd(D_{\widetilde\gamma_-}) \simeq \ddd(D_{\widetilde\gamma_+})$$
which is continuous in $\tau$. This means that there is a bijection between isomorphisms $C(\widetilde\gamma_-) \simeq C(\widetilde\gamma_+)$ and orientations of the line bundle $\{\ddd(D_{u(\tau)}\sharp 0)\}_\tau$ over $[0,1]$. Using an obvious exact triple, we obtain the isomorphism
$$\ddd(D_{u(\tau)}\sharp 0) \otimes \ddd({\mathbb{R}}) \simeq \ddd(T_{(t(\tau),u(\tau))}\widetilde{\mathcal{M}}(a))$$
which is continuous in $\tau$. Therefore we have a bijection between the above isomorphisms and orientations of the component of $\widetilde{\mathcal{M}}(a)$ above $\Delta$. We have the following isomorphism of short exact sequences for every $\tau$ for which $D_{u(\tau)} \sharp 0$ is surjective:
$$\xymatrix{0 \ar[r] & {\mathbb{C}} \ar[r] \ar[d] & T\widetilde{\mathcal{M}}(a) \ar[r] \ar@{=}[d] & T\Delta \ar[r] \ar[d]^{\pi_*} & 0 \\ 0 \ar[r] & \ker D_{u(\tau)} \sharp 0 \ar[r] & T\widetilde{\mathcal{M}}(a) \ar[r] & {\mathbb{R}} \ar[r] & 0}$$
where the left vertical arrow is the infinitesimal action of the automorphism group while the right vertical arrow is the differential of the map $\pi{:\ } (t,[u]) \mapsto t$. Orient ${\mathbb{C}}$, ${\mathbb{R}}$ with their standard orientations, and pick an orientation of $\Delta$. These induce an orientation on $T\widetilde{\mathcal{M}}(a)$ and an orientation on $\ker D_{u(\tau)} \sharp 0$ via the short exact sequences, and it is easy to see that the left vertical arrow and the right vertical arrow are either both orientation-preserving or both orientation-reversing.
Assume now that $\Delta$ connects two disks attached at the same point. Then $\pi_{*,(t,[u])}$ is orientation-preserving if and only if $\pi_{*,(t',[u'])}$ is orientation-reversing. This means that the isomorphism ${\mathbb{C}} \simeq \ker D_{u}\sharp 0$, which is the left vertical arrow, is orientation-preserving if and only if the isomorphism ${\mathbb{C}} \simeq \ker D_{u'}\sharp 0$ is orientation-reversing. This implies the following. Assume $u,u'$ are attached at $\gamma(0)$ and we have oriented both $\ker D_{u'}\sharp 0$ and $\ker D_{u} \sharp 0$ using the standard orientation on ${\mathbb{C}}$ and the above isomorphisms induced by the infinitesimal action. Then, since both left vertical arrows in the above diagram are orientation-preserving, we see that the orientations on $T\widetilde{\mathcal{M}}(a)$ coming from the chosen orientations on $\ker D_{u'}\sharp 0$ and $\ker D_{u}\sharp 0$ and the standard orientation on ${\mathbb{R}}$ are opposite. This means that the isomorphisms $C(u),C(u')$ are opposite. A similar argument shows that when $\Delta$ connects two disks attached at $\gamma(1)$, or two disks attached at different points, we still have $C(u)+C(u') = 0$.
This finishes the proof that $\partial_{H,J}^2 = 0$ in the presence of bubbling.
\subsection{$\partial_{\mathcal{D}}^2 = 0$}\label{ss:boundary_op_squares_zero_bubbling_QH}
The above compactification structure of the $1$-dimensional unparametrized pearly spaces can fail in case $q=q''$ and $N_L = 2$, and a bubble at $q$ can form. Here we prove that even if this happens, the quantum boundary operator still squares to zero.
The point is that the $1$-dimensional part ${\mathcal{P}}^1(q,q)$ can no longer be compactified by adding Morse breaking alone (there is no holomorphic disk breaking or collision since we are in the minimal Maslov case), and has to be supplemented by adding disks of Maslov $2$ at $q$. Let us denote by $\overline{\mathcal{P}}{}^1(q,q)$ the compact $1$-dimensional topological manifold with boundary obtained by adding the bubbles at $q$ and identifying two boundary components if they correspond to the same bubble. Pick a connected component $\Delta \subset \overline{\mathcal{P}}{}^1(q,q)$ having two boundary points, both of which are pairs of the form $([u],[v]) \in {\mathcal{P}}_0(q,q') \times {\mathcal{P}}_1(q',q)$ or $([u],[v]) \in {\mathcal{P}}_1(q,q') \times {\mathcal{P}}_0(q',q)$, and such that there is a unique interior point represented by a bubble at $q$. Let $\partial \Delta = \{\delta,\delta'\}$ where $\delta = ([u],[v])$ and $\delta' = ([u'],[v'])$ with $u,v$ as described. We have to show that
$$C(v)\circ C(u) + C(v')\circ C(u') = 0$$
as a homomorphism $C(q,A) \to C(q,AB)$, where $B$ is the class of the bubble.
This is done as follows. Consider
$$\ev {:\ } C^\infty(D^2,S^1;M,L) \times S^1 \times S^1 \to L^2\,,\quad (w,\theta_1,\theta_2) \mapsto (w(\theta_1),w(\theta_2))\,.$$
For $(w,\theta_1,\theta_2) \in \ev^{-1}({\mathcal{U}}(q) \times {\mathcal{S}}(q))$ we have the spaces
$$Y^{(w,\theta_1,\theta_2)} = \{(\xi,\Theta_1,\Theta_2) \in W^{1,p}(w) \times T_{\theta_1} S^1 \times T_{\theta_2} S^1\,|\, \xi(\theta_1) + \Theta_1 \in T_{w(\theta_1)}{\mathcal{U}}(q)\,,\xi(\theta_2) + \Theta_2 \in T_{w(\theta_2)}{\mathcal{S}}(q)\}$$
$$Y^{(w,\theta_1,\theta_2)}_0 = Y^{(w,\theta_1,\theta_2)} \cap (W^{1,p}(w) \times 0 \times 0)\,.$$
For $(w,\theta_1,\theta_2) \in \ev^{-1}({\mathcal{U}}(q) \times {\mathcal{S}}(q))$ we have the natural Fredholm triple
$$\xymatrix{0 \ar[r] & Y^{(w,\theta_1,\theta_2)}_0 \ar[r] \ar[d]^{D_w|_{Y^{(w,\theta_1,\theta_2)}_0}} & Y^{(w,\theta_1,\theta_2)} \ar[r] \ar[d]^{D_w|_{Y^{(w,\theta_1,\theta_2)}}} & T_{(\theta_1,\theta_2)}(S^1 \times S^1) \ar[d] \ar[r] & 0 \\
0 \ar[r] & L^p(w) \ar@{=}[r] & L^p(w) \ar[r]& 0}$$
where the operators
$$D_w|_{Y^{(w,\theta_1,\theta_2)}_0} \quad \text{and} \quad D_w|_{Y^{(w,\theta_1,\theta_2)}}$$
are defined by pulling back $D_w {:\ } W^{1,p}(w) \to L^p(w)$ via the projections
$$Y^{(w,\theta_1,\theta_2)}_0 \to W^{1,p}(w) \quad \text{and} \quad Y^{(w,\theta_1,\theta_2)} \to W^{1,p}(w)\,.$$
This triple induces the canonical isomorphism
\begin{equation}\label{eqn:iso_detl_lines_proof_d_quantum_squares_zero}
\ddd(D_w|_{Y^{(w,\theta_1,\theta_2)}}) \simeq \ddd(D_w|_{Y^{(w,\theta_1,\theta_2)}_0}) \otimes \ddd(TS^1 \times TS^1)\,,
\end{equation}
which is continuous in $(w,\theta_1,\theta_2)$. Using a generalization of the construction in \S\ref{sss:boundary_op_Lagr_QH}, we obtain a canonical bijection between isomorphisms $C(q,A) \simeq C(q,AB)$ and orientations of $D_w|_{Y^{(w,\theta_1,\theta_2)}_0}$, which is continuous in $(w,\theta_1,\theta_2)$. Let us orient $S^1 \times S^1$ by the standard positive orientation. Then the isomorphism \eqref{eqn:iso_detl_lines_proof_d_quantum_squares_zero} yields a bijection between isomorphisms $C(q,A) \simeq C(q,AB)$ and orientations of $D_w|_{Y^{(w,\theta_1,\theta_2)}}$.
Let $\widetilde{\mathcal{M}}(q,q) = \ev^{-1}({\mathcal{U}}(q) \times {\mathcal{S}}(q)) \cap \widetilde{\mathcal{M}}_2(J)$, where $\widetilde{\mathcal{M}}_2(J)$ denotes the space of Maslov $2$ holomorphic disks with boundary on $L$. This is a smooth $4$-dimensional manifold. The group $\Aut(D^2)$ acts freely on it and we let ${\mathcal{M}}(q,q)$ be the $1$-dimensional quotient. For $(w,\theta_1,\theta_2) \in \widetilde{\mathcal{M}}(q,q)$ with $w$ in class $B$ we see that the operator $D_w|_{Y^{(w,\theta_1,\theta_2)}_0}$ is onto and has index $2$, the operator $D_w|_{Y^{(w,\theta_1,\theta_2)}}$ is onto, has index $4$ and its kernel equals $T_{(w,\theta_1,\theta_2)}\widetilde{\mathcal{M}}(q,q)$. By the above, there is a bijection between isomorphisms $C(q,A) \simeq C(q,AB)$, orientations of $D_w|_{Y^{(w,\theta_1,\theta_2)}_0}$, and orientations of $T_{(w,\theta_1,\theta_2)}\widetilde{\mathcal{M}}(q,q)$, that is orientations of the connected component of $\widetilde{\mathcal{M}}(q,q)$ containing the bubble $z$ at $q$ in the form of the element $(z,1,1)$.
We therefore need to compute the orientations of $\widetilde{\mathcal{M}}(q,q)$ corresponding to the isomorphisms $C(v)\circ C(u)$ and $C(v') \circ C(u)$. Let $w \in \widetilde{\mathcal{P}}(q,q)$ and $w' \in \widetilde{\mathcal{P}}(q,q)$ be obtained by gluing $u,v$ and $u',v'$, respectively. The disk $w$ defines an element of $\widetilde{\mathcal{M}}(q,q)$ via $(w,-1,1)$ and we have an identification
$$\ker (D_w|_{Y_\Gamma^w}) = \ker(D_w|_{Y^{(w,-1,1)}_0})\,.$$
We know by \S\ref{sss:boundary_op_Lagr_QH} that the isomorphism $C(v) \circ C(u)$ corresponds to the orientation $-\partial_w \wedge \inward_\delta$ of $\ker (D_w|_{Y_\Gamma^w})$. Analogously we see that the isomorphism $C(v') \circ C(u')$ corresponds to the orientation $-\partial_{w'} \wedge \inward_{\delta'}$. These vectors, together with the standard orientation of $T_{(-1,1)}(S^1 \times S^1)$ by $\partial_{\theta_1} \wedge \partial_{\theta_2}$, induce the following orientations on $\widetilde{\mathcal{M}}(q,q)$:
$$-\partial_w \wedge \inward_\delta \wedge \partial_{\theta_1} \wedge \partial_{\theta_2} \quad \text{and} \quad -\partial_{w'} \wedge \inward_{\delta'} \wedge \partial_{\theta_1} \wedge \partial_{\theta_2}\,,$$
and we have to show that these two orientations are opposite. We cannot compare them directly because the vector $\partial_w$ does not extend continuously past the bubble.
We proceed as follows. We have the following exact sequence:
$$0 \to \Lie \Aut(D^2) \to T_{(w,-1,1)}\widetilde{\mathcal{M}}(q,q) \to T_{[w,-1,1]}{\mathcal{M}}(q,q) \to 0\,,$$
and similarly for $w'$. Orient $\Lie \Aut(D^2)$ by $\epsilon_1 \wedge \epsilon_2 \wedge \epsilon_3$ (see \S\ref{ss:general_technique_tori} for the definition of the vectors $\epsilon_i \in \Lie \Aut(D^2)$), and $T_{[w,-1,1]}{\mathcal{M}}(q,q)$ by $\inward_\delta$. We see that the vector $\epsilon_2$ equals $\partial_w$ (by the definition of the latter, see \S\ref{sss:boundary_op_Lagr_QH}) while the projection $T\widetilde{\mathcal{M}}(q,q) \to TS^1 \times TS^1$ maps $\epsilon_1$ to $\partial_{\theta_1} + \partial_{\theta_2}$ and $\epsilon_3$ to $\partial_{\theta_2}$. Therefore we obtain the induced orientation
$$\epsilon_1 \wedge \epsilon_2 \wedge \epsilon_3 \wedge \inward_\delta\,,$$
which equals the orientation
$$\partial_{\theta_1} \wedge \partial_w \wedge \partial_{\theta_2} \wedge \inward_\delta = - \partial_w \wedge \inward_\delta \wedge \partial_{\theta_1} \wedge \partial_{\theta_2}\,.$$
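Indeed, passing from $\partial_{\theta_1} \wedge \partial_w \wedge \partial_{\theta_2} \wedge \inward_\delta$ to $\partial_w \wedge \inward_\delta \wedge \partial_{\theta_1} \wedge \partial_{\theta_2}$ requires an odd number of transpositions (three), whence the sign.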
Since this orientation equals $\epsilon_1 \wedge \epsilon_2 \wedge \epsilon_3 \wedge \inward_\delta$, it extends continuously past the bubble, and equals
$$-\epsilon_1 \wedge \epsilon_2 \wedge \epsilon_3 \wedge \inward_{\delta'} = \partial_{w'} \wedge \inward_{\delta'} \wedge \partial_{\theta_1} \wedge \partial_{\theta_2}$$
at $(w',-1,1)$, as a similar computation shows. This implies that the orientations of $\widetilde{\mathcal{M}}(q,q)$ induced by the isomorphisms $C(v) \circ C(u)$ and $C(v') \circ C(u')$ are opposite, which means that these isomorphisms themselves are opposite and the claim is proved.
\section{Quotient complexes}\label{s:quotient_cxs}
The above Floer and quantum complexes distinguish homotopy classes of cappings, and as such, at times they are too large. In applications it is often more convenient to work with smaller quotient complexes. In this section we describe how to construct such quotients in case $L$ is relatively $\Pin^\pm$. We also describe the familiar algebraic structures such as the module structure over a Novikov ring, which in this context turns out to be a nontrivial issue, due to the fact that the canonical complexes take into account the action of the fundamental group on the homotopy groups.
In \S\ref{ss:relative_Pin_structs_coh_ors_disks} we define relative Pin structures via \v Cech cochains and show how such a structure allows one to construct a coherent system of orientations on operators of the form $D_u \sharp 0$ where $u {:\ } (D^2,S^1) \to (M,L)$ is a smooth map with even Maslov index. \S\ref{ss:quotient_cxs_Ham_HF_QH_of_M} and \S\ref{ss:quotient_cxs_Lagr_HF_QH} deal with the construction of quotient complexes in the closed and open (Lagrangian) case, respectively.
\subsection{Relative Pin structures and coherent orientations for disks}\label{ss:relative_Pin_structs_coh_ors_disks}
We say that $L$ is \textbf{relatively} $\Pin^\pm$ if $w^\pm \equiv w^\pm(L) \in \im(H^2(M;{\mathbb{Z}}_2) \to H^2(L;{\mathbb{Z}}_2))$, where $w^+(L) = w_2(TL)$ and $w^-(L) = w_2(TL) + w_1^2(TL)$. Not being relatively $\Pin^\pm$ is an obstruction to the existence of a relative $\Pin^\pm$-structure on $L$.
\begin{rem}
Assume $L$ is relatively $\Pin^\pm$ and let $w \in H^2(M;{\mathbb{Z}}_2)$ be such that $w|_L = w^\pm$. Since $H^1(S^2;{\mathbb{Z}}_2) = 0$, the class $w_1^2(TL)$ evaluates to zero on spherical classes, and therefore $w_2(TL) \circ \partial = w^\pm \circ \partial$ as maps $\pi_3(M,L) \to {\mathbb{Z}}_2$, where $\partial {:\ } \pi_3(M,L) \to \pi_2(L)$ denotes the boundary morphism. The map $w^\pm \circ \partial$ factors as
$$\pi_3(M,L) \to \pi_2(L) \to \pi_2(M) \xrightarrow{w} {\mathbb{Z}}_2\,,$$
and the composition of the first two arrows vanishes by the exactness of the homotopy sequence of the pair $(M,L)$, implying that $w_2(TL) \circ \partial = 0$. This means that being relatively $\Pin^\pm$ implies assumption \textbf{(O)}. The latter is however a strictly weaker assumption, as the example of ${\mathbb{R}} P^5 \subset {\mathbb{C}} P^5$ shows: the map
$$H^2({\mathbb{C}} P^5 ;{\mathbb{Z}}_2) \to H^2({\mathbb{R}} P^5;{\mathbb{Z}}_2)$$
vanishes, while the class $w_2({\mathbb{R}} P^5) = w_2({\mathbb{R}} P^5) + w_1^2({\mathbb{R}} P^5)$ is nonzero, which means that ${\mathbb{R}} P^5$ is not relatively $\Pin^\pm$; however, $\pi_2({\mathbb{R}} P^5) = 0$, so assumption \textbf{(O)} is satisfied in this case.\footnote{We thank J.-Y. Welschinger for this example.}
\end{rem}
Let us now describe the notion of a relative $\Pin^\pm$ structure on $L$ and how a choice of such a structure yields a system of coherent orientations for Cauchy--Riemann operators coming from disks.
We start with some generalities. This material is essentially contained in \cite{Wehrheim_Woodward_Orientations_pseudoholo_quilts}. Consider Lie groups $G,H$ and let $\phi {:\ } G \to H$ be a surjective Lie group homomorphism with finite abelian kernel $A = \ker \phi$. It is well-known that principal $H$-bundles on a smooth manifold $X$ are classified by the nonabelian cohomology $H^1(X;H)$. Let us recall the definition of $H^1(X;H)$. For an open cover ${\mathcal{U}} = (U_i)_{i \in I}$ of $X$ we have the \v Cech cochain groups
$$C^k({\mathcal{U}};H) = \prod_{(i_0,\dots,i_k)\in I^{k+1}}C^\infty(U_{i_0\dots i_k},H)\,,$$
where $k \geq 0$ and $U_{i_0\dots i_k} = U_{i_0}\cap \dots \cap U_{i_k}$. Define $\delta^k {:\ } C^k({\mathcal{U}};H) \to C^{k+1}({\mathcal{U}};H)$ by
$$(\delta c)_{i_0\dots i_{k+1}} = \prod_{j=0}^{k+1}\big(c_{i_0\dots\widehat{i_j}\dots i_{k+1}}|_{U_{i_0\dots i_{k+1}}}\big)^{(-1)^j}\,.$$
The set $Z^1({\mathcal{U}};H) = \ker \delta^1$ is the set of $1$-cocycles, that is $f = (f_{ij}) \in Z^1({\mathcal{U}};H)$ if and only if for all $i,j,k \in I$ we have $f_{ik} = f_{ij}f_{jk}$. The group $C^0({\mathcal{U}};H)$ acts on $Z^1({\mathcal{U}};H)$ on the left as follows: for $c = (c_i) \in C^0({\mathcal{U}};H)$ and $f = (f_{ij})$ we have $(c\cdot f)_{ij}=c_i f_{ij} c_j^{-1}$. We let
$$H^1({\mathcal{U}};H) = Z^1({\mathcal{U}};H)/C^0({\mathcal{U}};H)\,.$$
Given another cover ${\mathcal{V}} = (V_j)_{j \in J}$ a refinement map $\tau {:\ } {\mathcal{V}} \to {\mathcal{U}}$ by definition is a map $\tau {:\ } J \to I$ such that $V_j \subset U_{\tau(j)}$ for every $j \in J$. A refinement map induces homomorphisms $C^k({\mathcal{U}};H) \to C^k({\mathcal{V}};H)$ commuting with $\delta$ and therefore a well-defined map $H^1({\mathcal{U}};H) \to H^1({\mathcal{V}};H)$, which can be shown to be injective. Taking the direct limit over the set of covers directed by the relation of refinement, we obtain the first nonabelian cohomology $H^1(X;H)$. It can be shown that if ${\mathcal{U}}$ is a good cover, then the canonical map $H^1({\mathcal{U}};H) \to H^1(X;H)$ is a bijection.
There is a map from the set of isomorphism classes of principal $H$-bundles over $X$ to $H^1(X;H)$, defined by taking a sufficiently fine cover ${\mathcal{U}}$, trivializing the bundle and taking the transition maps, which satisfy the cocycle relation, meaning they define an element in $Z^1({\mathcal{U}};H)$. The corresponding class in $H^1(X;H)$ is well-defined, that is it only depends on the isomorphism class of the bundle. This map is a bijection. The inverse is obtained by gluing according to a cocycle.
Assume now that $Q$ is a principal $H$-bundle over $X$ and let ${\mathcal{U}}$ be a good cover. Let $h = (h_{ij}) \in Z^1({\mathcal{U}};H)$ be the transition cocycle corresponding to a trivialization of $Q$. Since ${\mathcal{U}}$ is a good cover, every map $h_{ij} {:\ } U_{ij} \to H$ lifts to a map $g_{ij} {:\ } U_{ij} \to G$. The homomorphism $\phi$ induces in an obvious way homomorphisms $C^k({\mathcal{U}};G) \to C^k({\mathcal{U}};H)$ commuting with the differentials. Since clearly $\phi(g) = h$, we see that $\phi\delta g = \delta \phi g = \delta h = 0$, meaning $\delta g \in C^2({\mathcal{U}};A)$. Since clearly $\delta(\delta g)= 0$, we see that $\delta g$ is in fact a cocycle in $Z^2({\mathcal{U}};A)$. The corresponding class $[\delta g] \in H^2(X;A)$ (this is the ordinary second cohomology with coefficients in $A$) is well-defined, that is it only depends on the isomorphism class of $Q$, and is the characteristic class of $Q$. It vanishes if and only if $g$ can be corrected to a cocycle in $Z^1({\mathcal{U}};G)$, meaning in this case the bundle $Q$ is in fact covered by a $G$-bundle $P$ via a $\phi$-equivariant map. A cocycle $g \in Z^1({\mathcal{U}};G)$ with $\phi(g) = h$ is called a $G$-\textbf{trivialization} of $h$. The set of such trivializations quotiented out by the equivalence relation induced by multiplication by $C^0({\mathcal{U}};A)$ is the set of \textbf{$G$-structures} on the bundle $Q$ relative to the cover ${\mathcal{U}}$. Since it is a good cover, we get the same notion if we allow arbitrary covers and take limits over refinement maps. It can be seen that the set of $G$-structures on $Q$ is a torsor over the group $H^1(X;A)$.
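As an illustration of how this construction specializes to the case relevant here: taking $H = O(n)$, $G = \Pin^\pm(n)$ and $A = {\mathbb{Z}}_2$, and letting $Q$ be the orthonormal frame bundle of a vector bundle $V \to X$, the characteristic class $[\delta g] \in H^2(X;{\mathbb{Z}}_2)$ is the standard obstruction to the existence of a $\Pin^\pm$-structure on $V$, namely $w_2(V)$ in the $\Pin^+$ case and $w_2(V) + w_1^2(V)$ in the $\Pin^-$ case, in agreement with the classes $w^\pm$ introduced at the beginning of this section.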
Now we consider relative $G$-structures. Let $f {:\ } X \to Y$ be a smooth map and let ${\mathcal{U}}$, ${\mathcal{V}}$ be good covers of $X$, $Y$, respectively, where ${\mathcal{U}}$ is a refinement of $f^{-1}{\mathcal{V}}$. Let $Q \to X$ be an $H$-bundle and assume $h = (h_{ij}) \in Z^1({\mathcal{U}};H)$ is a transition cocycle for $Q$. A \textbf{$G$-trivialization on $Q$ relative to $f$} is a pair $(g,b) \in C^1({\mathcal{U}};G) \times Z^2({\mathcal{V}};A)$ such that $\phi(g) = h$ and $f^*b = \delta g$, that is, $g$ is a lift of $h$ to $G$ and $b$ is a cocycle on $Y$ whose pullback to $X$ is a coboundary, namely the coboundary $\delta g$ of $g$. Two relative trivializations $(g,b)$ and $(g',b')$ are called \textbf{equivalent} if there is $(a,a') \in C^0({\mathcal{U}};A) \times C^1({\mathcal{V}};A)$ with
$$(\delta a\cdot f^*a'\cdot g,\delta a' \cdot b) = (g',b')\,.$$
An equivalence class of relative $G$-trivializations is called a \textbf{relative $G$-structure on $Q$} (relative to $f$). It is a torsor over the group $H^1(f;A)$, defined as follows. Let $Z^1(f;A)$ be the group consisting of pairs $(a,a') \in C^1({\mathcal{U}};A) \times C^2({\mathcal{V}};A)$ such that $\delta a' = 0$ and $f^*a' = \delta a$. Now quotient it out by the subgroup of relative coboundaries whose elements are pairs $(\delta c\cdot f^*c',\delta c')$ for $(c,c') \in C^0({\mathcal{U}};A) \times C^1({\mathcal{V}};A)$.
A \textbf{relative $\Pin^\pm$-structure on $L$} is a relative $\Pin^\pm$-structure on $TL$ relative to the embedding $L \hookrightarrow M$. To spell it out, let ${\mathcal{V}}$ be a good cover of $M$ and ${\mathcal{U}}$ a good cover of $L$ which is a refinement of ${\mathcal{V}}|_L$. Trivialize $TL$ to get transition functions $h = (h_{ij}) \in Z^1({\mathcal{U}};O(n))$ (here we are using an auxiliary Riemannian metric on $L$). A trivialization of $h$ relative to the embedding $L \to M$ is a lift of $h$ to $g = (g_{ij}) \in C^1({\mathcal{U}};\Pin^\pm(n))$ and a cocycle $b \in Z^2({\mathcal{V}};{\mathbb{Z}}_2)$ on $M$ such that $\delta g = b|_L$. A relative $\Pin^\pm$-structure is an equivalence class of such trivializations. The set of relative $\Pin^\pm$-structures is a torsor over the group $H^1(\iota;{\mathbb{Z}}_2) = H^2(M,L;{\mathbb{Z}}_2)$ where $\iota {:\ } L \to M$.
Note that if we have a commutative diagram of smooth manifolds and maps
$$\xymatrix{X' \ar[r]^{f'} \ar[d] & Y'\ar[d] \\ X \ar[r]^f & Y}$$
then a relative $G$-structure on an $H$-bundle $Q$ on $X$ relative to $f$ canonically induces a relative $G$-structure on the pullback $H$-bundle $Q' \to X'$ by the map $X' \to X$ relative to the map $f'$.
We note the following obvious fact.
\begin{lemma}
Given a vector bundle $V \to S^1$, the canonical map sending $\Pin^\pm$-structures on $V$ to relative $\Pin^\pm$-structures on $V$ relative to the embedding $S^1 \to D^2$ is a bijection. This bijection is equivariant with respect to the natural actions of $H^1(S^1;{\mathbb{Z}}_2)$ and $H^2(D^2,S^1;{\mathbb{Z}}_2)$, connected by the boundary morphism $H^1(S^1;{\mathbb{Z}}_2) \to H^2(D^2,S^1;{\mathbb{Z}}_2)$, which is an isomorphism. \qed
\end{lemma}
\begin{coroll}
Assume a relative $\Pin^\pm$-structure on $L$ is given. Then for any disk $u {:\ } (D^2,S^1) \to (M,L)$ the bundle $F_u = (u|_{S^1})^*TL$ acquires a canonical $\Pin^\pm$-structure.
\end{coroll}
\begin{prf}
The map $u$ induces a relative $\Pin^\pm$ structure on $F_u$, which by the lemma corresponds to a unique $\Pin^\pm$-structure. \qed
\end{prf}
We will now prove the following result.
\begin{prop}\label{prop:canonical_oris_disks_Pin_struct}
A relative $\Pin^\pm$-structure on $L$ determines a system of orientations of the operators $D_u \sharp 0$ over the space of smooth disks $u$ with even Maslov number, which is coherent with respect to boundary gluing.
\end{prop}
\begin{prf}
Let $(E^0,F^0) \to (D^2,S^1)$ be a Hermitian bundle pair and assume $F^0$ has even Maslov number. Let $E^1 \to {\mathbb{C}} P^1$ be a Hermitian bundle with Chern number $-\mu(F^0)/2$. Then we can glue $E^0$ and $E^1$ at $0 \in D^2$ (see \S\ref{ss:boundary_gluing}) to obtain a Hermitian bundle pair $(E,F)$ with zero Maslov number, since $\mu(F) = \mu(F^0) + 2c_1(E^1) = 0$. We have
$$\ddd(D_{E,F}) \simeq \ddd(D_{E^0,F^0}) \otimes \ddd(D_{E^1})\,,$$
where we omitted the factor $\ddd(E^0_0)$ since it's canonically oriented. The operator $D_{E^1}$, being a Cauchy--Riemann operator on a closed Riemann surface, is canonically oriented, therefore we get a canonical isomorphism
$$\ddd(D_{E^0,F^0}) = \ddd(D_{E,F})\,.$$
Similarly we get a canonical isomorphism
$$\ddd(D_{E^0,F^0}\sharp 0) = \ddd(D_{E,F} \sharp 0)\,.$$
It remains to establish a bijection between $\Pin^\pm$-structures on $F$ and orientations of the latter operator. Fix a unitary trivialization of $E$ and denote by $F$ the resulting Lagrangian loop in ${\mathbb{C}}^n$. The set of $\Pin^\pm$-structures on $F$ has two points, and varying $F$, we obtain a double cover of the space of contractible loops in the Lagrangian Grassmannian of ${\mathbb{C}}^n$. On the other hand, the set of orientations of the operator $D_{{\mathbb{C}}^n,F}\sharp 0$ also has two points and thus defines another double cover over the same space. The two covers have the same $w_1$, therefore they are isomorphic. It remains to choose an isomorphism at a point in this space. We do this for the constant loop. The constant loop $F \to S^1$ produces the operator $D_{{\mathbb{C}}^n,F} \sharp 0$, which can easily be seen to be an isomorphism. Since $F$ is the constant loop, of the two $\Pin^\pm$-structures over it one is the trivial structure (corresponding to transition functions at the identity in $\Pin^\pm(n)$). We associate the trivial structure to the canonical positive orientation, and the other structure to the negative orientation. Now note that this is independent of the chosen unitary trivialization of $E$.
The case of smooth maps is now obtained by noting that $(E_u,F_u)$ is a Hermitian bundle pair with even Maslov number, and applying the above construction.
The coherence follows from the fact that boundary gluing two trivial bundle pairs yields a trivial bundle pair; the boundary gluing isomorphism maps the orientations of the corresponding isomorphism operators via multiplication in ${\mathbb{Z}}_2 = \{\pm 1\}$, and it does the same with the $\Pin^\pm$-structures. \qed
\end{prf}
\subsection{Hamiltonian Floer homology and quantum homology of $M$}\label{ss:quotient_cxs_Ham_HF_QH_of_M}
We first describe the quotient complexes for the periodic orbit Floer homology $HF_*(M)$ and the corresponding quantum homology $QH_*(M)$ since in this case there is a canonical way of doing it.
Let us consider the Floer complex corresponding to a regular time-periodic Floer datum $(H,J)$:
$$CF_*(H) = \bigoplus_{\widetilde x \in \Crit {\mathcal{A}}_H} C(\widetilde x)\,.$$
In order to be able to define a quotient complex of $CF_*(H)$, we need to understand the necessary identifications that go into forming such a quotient. Of course, we wish only to identify the spaces $C(\widetilde x)$, $C(\widetilde x')$ where $\widetilde x = [x,\widehat x]$, $\widetilde x' = [x,\widehat x']$ share the same orbit and the only difference is in the capping. Let $q = x(0)$ and let $A \in \pi_2(M,q)$ be such that $[\widehat x' \sharp - \widehat x] = A$. We symbolize this by writing $\widetilde x' = A\cdot \widetilde x$. Let us see what it means to identify $C(\widetilde x)$ and $C(\widetilde x')$.
Gluing $D_A$ and $D_{\widetilde x}$ at a point close to $q$ produces an exact triple
$$0 \to D_A \sharp D_{\widetilde x} \to D_A \oplus D_{\widetilde x} \to 0_{T_qM} \to 0\,,$$
where the penultimate arrow is the difference of the evaluation maps to $T_qM$. This, together with the direct sum isomorphism, yields the isomorphism
$$\ddd(D_A) \otimes \ddd(D_{\widetilde x}) \simeq \ddd(D_A \sharp D_{\widetilde x}) \otimes \ddd(T_qM)\,.$$
Recall (Lemma \ref{lem:D_A_has_canonical_orientation}) that the operators $D_A$ have a canonical orientation. This, together with the canonical orientation of $T_qM$, yields the isomorphism
$$\ddd(D_{\widetilde x}) \simeq \ddd(D_A \sharp D_{\widetilde x}) \simeq \ddd(D_{\widetilde x'})\,,$$
where the second isomorphism comes from deformation. This means that for any $\widetilde x,\widetilde x' \in p^{-1}(x)$ we have an isomorphism
\begin{equation}\label{eqn:can_iso_C_x_C_x_prime_Ham_HF}
C(\widetilde x) = C(\widetilde x')
\end{equation}
which is independent of any choices.
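Let us also record the index bookkeeping behind this identification; here we use the standard Riemann--Roch formula $\operatorname{ind} D_A = 2n + 2c_1(A)$ for a real-linear Cauchy--Riemann operator on a rank $n$ Hermitian bundle over the sphere. The exact triple above gives
$$\operatorname{ind}(D_A \sharp D_{\widetilde x}) = \operatorname{ind}(D_A) + \operatorname{ind}(D_{\widetilde x}) - \dim_{\mathbb{R}} T_qM = \operatorname{ind}(D_{\widetilde x}) + 2c_1(A)\,,$$
which is the familiar change of the grading under recapping by $A$, up to the sign fixed by one's conventions.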
Next, assume that $x_\pm$ are two periodic orbits of $H$ and $\widetilde x_\pm,\widetilde x_\pm'\in p^{-1}(x_\pm)$. Assume that $|\widetilde x_-| = |\widetilde x_+| + 1$, $|\widetilde x_-'| = |\widetilde x_+'| + 1$ and that $u {:\ } {\mathbb{R}} \times S^1 \to M$ is a Floer cylinder so that $u \in \widetilde {\mathcal{M}}(H,J;\widetilde x_-,\widetilde x_+)$ and also $u \in \widetilde {\mathcal{M}}(H,J; \widetilde x_-', \widetilde x_+')$. We have the diagram
\begin{equation}\label{dia:can_isos_commute_C_u_Ham_HF}
\xymatrix{C(\widetilde x_-) \ar@{=}[d] \ar[r]^{C(u)} & C(\widetilde x_+) \ar@{=}[d] \\ C(\widetilde x_-') \ar[r]^{C(u)} & C(\widetilde x_+')}
\end{equation}
where the vertical equality signs denote the above canonical isomorphisms. We claim that this diagram commutes. This follows from the associativity of the direct sum and gluing isomorphisms. The nontrivial point here is that the gluing of the difference classes $A_\pm = [\widehat x_\pm' \sharp - \widehat x_\pm]$ happens at different points of $M$, namely $x_\pm(0)$, and that $A_+$ is the result of transfer of $A_-$ to $x_+(0)$ along the curve $u(\cdot, 0)$. The isomorphism $\ddd(D_{A_-}) \simeq \ddd(D_{A_+})$ induced by this transfer coincides with the obvious isomorphism coming from the canonical orientations of $\ddd(D_{A_\pm})$, and it is what makes the diagram commute.
Now we can describe the construction of a quotient complex. Recall that the second homotopy groups of $M$ assemble into a local system of abelian groups (see \S\ref{ss:arbitrary_rings_loc_coeffs} for the definition of a local system) over $M$ with the group $\pi_2(M,q)$ being attached to $q \in M$, and with homotopy classes of paths between $q,q'$ inducing group isomorphisms $\pi_2(M,q) \simeq \pi_2(M,q')$ (see \cite{Hatcher_AG}). We denote this local system by $\pi_2(M)$. The datum that goes into the definition of a quotient complex in this case is a local subsystem $G$ of $\pi_2(M)$, that is a subgroup $G_q < \pi_2(M,q)$ for each $q \in M$ such that the isomorphisms coming from paths in $M$ preserve these subgroups. Fix such a subsystem $G$. Note that $\pi_2(M)$, being a local system, is itself a groupoid, and as such it acts on $\widetilde \Omega$ over $\Omega$, in the following sense: for every $q \in M$ let $\widetilde \Omega_q = \{\widetilde x = [x,\widehat x] \in \widetilde \Omega\,|\, x(0) = q\}$; this is a covering space of $\Omega_q = \{x \in \Omega\,|\, x(0) = q\}$, and $\pi_2(M,q)$ acts on $\widetilde\Omega_q$ by attaching spheres, and all of these transformations happen over $\Omega_q$. Since $G$ is a local subsystem, it too acts on $\widetilde\Omega/\Omega$. The quotient of this action is a covering space of $\Omega$:
$$\widetilde\Omega/G$$
where two cappings of the same orbit through $q$ are identified if their difference lies in $G_q$. We have the quotient map $\widetilde\Omega \to \widetilde\Omega/G$ and we denote by $[\widetilde x]_G$ the image of $\widetilde x$ under this map. Note that the action of $G$ on $\widetilde \Omega$ restricts in an obvious manner to an action on $\Crit {\mathcal{A}}_H$, and we let $\Crit {\mathcal{A}}_H/G$ be the quotient. For $[\widetilde x]_G \in \Crit {\mathcal{A}}_H/G$ we let
$$C([\widetilde x]_G)$$
be the limit of the direct system of modules $(C(\widetilde x))_{\widetilde x \in [\widetilde x]_G}$ connected by the isomorphisms \eqref{eqn:can_iso_C_x_C_x_prime_Ham_HF}. The \textbf{quotient complex} then is
$$CF_*^G(H) = \bigoplus_{[\widetilde x]_G \in \Crit {\mathcal{A}}_H/G} C([\widetilde x]_G)$$
as a module. The commutativity of the diagram \eqref{dia:can_isos_commute_C_u_Ham_HF} ensures that the boundary operator $\partial_{H,J}$ on $CF_*(H)$ descends to a boundary operator $\partial_{H,J}^G$ so that the quotient map
$$(CF_*(H),\partial_{H,J}) \to (CF_*^G(H),\partial_{H,J}^G)$$
is a chain map.
The local system $\pi_2(M)$ has a natural subsystem $\pi_2^0(M)$ consisting of spheres of zero area, and therefore of zero Chern number, by monotonicity. Since we are ultimately interested in spectral invariants, we only use subsystems $G$ contained in $\pi_2^0(M)$. If $G$ is such a subsystem, the action functional ${\mathcal{A}}_H {:\ } \widetilde \Omega \to {\mathbb{R}}$ descends to a functional
$${\mathcal{A}}_H {:\ } \widetilde\Omega/G \to {\mathbb{R}}\,.$$
Moreover, the ${\mathbb{Z}}$-grading of $CF_*(H)$ descends to a ${\mathbb{Z}}$-grading on $CF_*^G(H)$ in this case as well.
There are two typical local subsystems of $\pi_2^0(M)$. One is $\pi_2^0(M)$ itself. Note that this subsystem is the kernel of the local system morphism $c_1{:\ } \pi_2(M) \to {\mathbb{Z}}$. Therefore in case $c_1$ does not vanish on $\pi_2(M)$, the quotient space $\widetilde\Omega/\pi_2^0(M)$ inherits an action of the trivial system ${\mathbb{Z}}$. This is the familiar Novikov action. It also induces an action of ${\mathbb{Z}}$ on the corresponding Floer complex $CF^{\pi_2^0(M)}_*(H)$. Letting $t$ be the positive generator of ${\mathbb{Z}}$, we see that the complex $CF^{\pi_2^0(M)}_*(H)$ then becomes a module over the group ring ${\mathbb{Z}}[t,t^{-1}]$, which is the familiar Novikov module structure.
The other subsystem of $\pi_2^0(M)$ is the kernel of the Hurewicz morphism $G:=\ker(\pi_2(M) \to H_2(M;{\mathbb{Z}}))$. In this case the quotient space $\widetilde\Omega/G$ inherits an action of $H_2^S(M) = \im(\pi_2(M) \to H_2(M;{\mathbb{Z}}))$, and therefore the quotient Floer complex $CF_*^G(H)$ becomes a module over the group ring ${\mathbb{Z}}[H_2^S(M)]$, also familiar in Floer homology.
It is similarly checked that the continuation morphisms respect the identifications by $G$, which means that we have well-defined abstract Floer homology
$$HF_*^G(M)\,.$$
Next we consider the effect of the quotient construction on products. Let $(H_i,J_i)$, $i=0,1,2$ be regular time-periodic Floer data, $(K,I)$ a regular compatible perturbation datum on the thrice-punctured sphere, and consider the resulting product map on chain level:
$$*=*_{K,I} {:\ } CF_*(H_0) \otimes CF_*(H_1) \to CF_*(H_2)\,.$$
We claim that this product descends to a well-defined product on the quotient complexes, to wit:
$$* {:\ } CF_*^G(H_0) \otimes CF_*^G(H_1) \to CF_*^G(H_2)$$
is well-defined and intertwines the quotient maps $CF_* \to CF_*^G$. To show this, it is enough to show the following. Let $\widetilde x_i, \widetilde x_i' \in \Crit {\mathcal{A}}_{H_i}$ be such that $|\widetilde x_0| + |\widetilde x_1| - |\widetilde x_2| = 2n$, $|\widetilde x_0'| + |\widetilde x_1'| - |\widetilde x_2'| = 2n$, and let $u \in {\mathcal{M}}(K,I;\{\widetilde x_i\}_i)$ and $u \in {\mathcal{M}}(K,I;\{\widetilde x_i'\}_i)$. Then the diagram
$$\xymatrix{C(\widetilde x_0) \otimes C(\widetilde x_1) \ar@{=}[d] \ar[r]^-{C(u)} & C(\widetilde x_2) \ar@{=}[d] \\ C(\widetilde x_0') \otimes C(\widetilde x_1') \ar[r]^-{C(u)} & C(\widetilde x_2')}$$
commutes. The commutativity of this diagram is proved similarly to that of the diagram \eqref{dia:can_isos_commute_C_u_Ham_HF} for the boundary operator. Its commutativity is ensured by the fact that the operators $D_A$, $A \in \pi_2(M)$, are all canonically oriented, and that the manifold $M$ is oriented. Note as well that it is in defining the product operation on the quotient complexes that we really use the group structure on $G$: indeed, assuming $A_i = [\widehat x_i' \sharp - \widehat x_i]$, we see that for $\widetilde x_i$ to be $G$-equivalent to $\widetilde x_i'$, for all $i$, we need the product of $A_0$ and $A_1$ to lie in $G$, after transferring both of them to $x_2(0)$ along paths dictated by $u$, and we need this product to equal $A_2$.
The reader will have no trouble checking that if $G = \pi_2^0(M)$ and $c_1|_{\pi_2(M)} \neq 0$, then the resulting homology $HF_*^G(M)$ becomes an algebra over the Novikov ring ${\mathbb{Z}}[t,t^{-1}]$, and if $G = \ker (\pi_2(M) \to H_2(M;{\mathbb{Z}}))$, then $HF_*^G(M)$ has the structure of an algebra over the ring ${\mathbb{Z}}[H_2^S(M)]$.
Almost identical arguments apply to quantum homology. Recall that the quantum complex for a datum ${\mathcal{D}} = (f,\rho,J)$ is
$$QC_*({\mathcal{D}}) = \bigoplus_{q \in \Crit f}\bigoplus_{A \in \pi_2(M,q)}C(q,A)\,.$$
We have canonical identifications
$$C(q,A) = C(q,A')$$
for any $A,A' \in \pi_2(M,q)$. In fact, we have a canonical isomorphism
$$\ddd(D_A) \otimes \ddd(T{\mathcal{S}}(q)) \simeq \ddd(D_A \sharp T{\mathcal{S}}(q)) \otimes \ddd(T_qM)\,,$$
which shows that if we use the canonical orientations of $D_A$ and of $T_qM$, we have in fact an identification
$$C(q,0) = C(q,A)\,.$$
Thus a quotient complex corresponding to a local subsystem $G < \pi_2(M)$ can be formed, and we denote it $QC_*^G({\mathcal{D}})$. In case $G < \pi_2^0(M)$, the quotient complex inherits a ${\mathbb{Z}}$-grading. The quotient map $QC_*({\mathcal{D}}) \to QC_*^G({\mathcal{D}})$ is a chain map.
The chain-level product operation descends to quotient complexes as well, as do the units.
We note also that PSS maps $CF_*(H) \leftrightarrows QC_*({\mathcal{D}})$ descend to well-defined maps on the quotient complexes, and they induce isomorphisms on homology. Therefore we have a well-defined abstract quantum homology $QH_*^G(M)$, which is canonically isomorphic to $HF_*^G(M)$ by the abstract PSS map. This is an isomorphism of supercommutative unital rings. In special cases of subsystems of $\pi_2^0(M)$ we get algebras over the corresponding Novikov rings.
\subsection{Lagrangian Floer and quantum homology}\label{ss:quotient_cxs_Lagr_HF_QH}
The Lagrangian case is significantly more involved due to the lack of canonical orientations of the operators $\ddd(D_A)$, $A\in \pi_2(M,L)$, and of the Lagrangian $L$ itself.
We first cover some preliminaries about the local systems involved. There is a natural local system of groups on $L$ given by $q \mapsto \pi_2(M,L,q)$, with natural isomorphisms $\pi_2(M,L,q) \simeq \pi_2(M,L,q')$ corresponding to homotopy classes of paths from $q$ to $q'$ \cite{Hatcher_AG}. We denote this local system by $\pi_2(M,L)$. In particular, looking at loops at $q$, we obtain an action of $\pi_1(L,q)$ on $\pi_2(M,L,q)$ by automorphisms. This action has the property that the boundary operator $\partial {:\ } \pi_2(M,L,q) \to \pi_1(L,q)$ intertwines it with the action of $\pi_1(L,q)$ on itself by conjugation. Moreover, if $A,B \in \pi_2(M,L,q)$ then $\partial(A)\cdot B = ABA^{-1}$. This local system acts, as a groupoid, on the space $\widetilde\Omega_L$, as follows. Let $q \in L$ and let $\Omega_{L,q} = \{\gamma \in \Omega_L \,|\, \gamma(0) = q\}$ and $\widetilde\Omega_{L,q} = p^{-1}(\Omega_{L,q})$ where $p {:\ } \widetilde\Omega_L \to \Omega_L$ is the projection. Then $p {:\ } \widetilde\Omega_{L,q} \to \Omega_{L,q}$ is a trivial covering space on which $\pi_2(M,L,q)$ acts simply transitively by attaching disks.
Let now $G < \pi_2(M,L)$ be a subsystem, that is $G_q$ is a subgroup of $\pi_2(M,L,q)$ and these subgroups are preserved by the isomorphisms induced by paths on $L$. In particular $G_q$ must be preserved by the action of elements of the form $\partial(A)$ for $A \in \pi_2(M,L,q)$, which by the above means that $G_q$ is normal in $\pi_2(M,L,q)$. This $G$ then acts on $\widetilde\Omega_L$ and we let
$$\widetilde \Omega_L/G$$
be the quotient. This is a covering space of $\Omega_L$ and it has an inherited action of the quotient system $\pi_2(M,L)/G$. If $H$ is a Hamiltonian on $M$, the action functional ${\mathcal{A}}_{H:L}$ will descend to $\widetilde\Omega_L/G$ provided $G$ consists of disks of zero area, that is, provided it is a subsystem of the local system $\pi_2^0(M,L)$, defined as the kernel of the Maslov index morphism $\mu {:\ } \pi_2(M,L) \to {\mathbb{Z}}$, or equivalently of the area morphism $\omega$, by monotonicity. Henceforth we will only consider subsystems of $\pi_2^0(M,L)$.
Fix therefore a subsystem $G < \pi_2^0(M,L)$ and consider the quotient covering space $\widetilde\Omega_L/G \to \Omega_L$. Fix a regular Floer datum $(H,J)$. The set $\Crit {\mathcal{A}}_{H:L}$ inherits an action of $\pi_2(M,L)$ in an obvious way, and therefore we can consider the quotient $\Crit {\mathcal{A}}_{H:L}/G$, which naturally maps onto the set of Hamiltonian orbits of $H$ with boundary on $L$. We let $[\widetilde\gamma]_G$ be the image in $\Crit {\mathcal{A}}_{H:L}/G$ of a point $\widetilde\gamma \in \Crit{\mathcal{A}}_{H:L}$. Recall the Floer complex
$$CF_*(H:L) = \bigoplus_{\widetilde\gamma \in \Crit {\mathcal{A}}_{H:L}}C(\widetilde\gamma)\,.$$
We wish to construct a quotient complex
$$CF_*^G(H:L) = \bigoplus_{[\widetilde\gamma]_G \in \Crit {\mathcal{A}}_{H:L}/G}C([\widetilde\gamma]_G)$$
similarly to the periodic orbit case. In order to do so we need to construct identifications $C(\widetilde\gamma) = C(\widetilde\gamma')$ if $[\widetilde\gamma]_G = [\widetilde\gamma']_G$. Let us see what goes into such an identification.
Let $A = [\widehat\gamma' \sharp - \widehat\gamma] \in \pi_2(M,L,q)$ where $q = \gamma(0)$. Recall that $D_A \sharp 0$ denotes the restriction of $D_A$ to the subspace $\{\xi\in W^{1,p}(A)\,|\, \xi(1) = 0\}$. We have the operator $D_A\sharp D_{\widetilde\gamma}$ obtained by boundary gluing. Recall that this operator is just the restriction of $D_A \oplus D_{\widetilde\gamma}$ to the subspace of $W^{1,p}(A) \oplus W^{1,p}(\widetilde\gamma)$ consisting of pairs of sections agreeing at the point $q$. Thus the incidence condition, in the sense of \eqref{eqn:iso_abstract_deformation_incidence_condition_W}, is given by the diagonal $\Delta_{T_qL} \subset T_qL \oplus T_qL$. On the other hand, the operator $D_A \sharp 0\oplus D_{\widetilde\gamma}$ is the restriction of $D_A \oplus D_{\widetilde\gamma}$ to the subspace where the incidence condition is $0 \oplus T_qL \subset T_qL \oplus T_qL$. The isomorphism \eqref{eqn:iso_abstract_deformation_incidence_condition_W} yields
$$\ddd(D_A \sharp 0 \oplus D_{\widetilde\gamma}) \simeq \ddd(D_A \sharp D_{\widetilde\gamma})\,,$$
which combined with the direct sum and deformation isomorphisms, gives us finally
$$\ddd(D_A \sharp 0) \otimes \ddd(D_{\widetilde\gamma}) \simeq \ddd(D_A \sharp D_{\widetilde\gamma}) \simeq \ddd(D_{\widetilde\gamma'})\,.$$
This means that isomorphisms $C(\widetilde\gamma) \simeq C(\widetilde\gamma')$ are in a natural bijection with orientations of the operator $D_A \sharp 0$. In order to form a quotient complex as above, we therefore need to choose an orientation of this operator. Recall Proposition \ref{prop:canonical_oris_disks_Pin_struct} which states that a choice of a relative $\Pin^\pm$-structure on $L$ determines a system of orientations of the families $D_A\sharp 0$ for $A$ varying in $\pi_2^0(M,L,q)$ for all $q$. Let us therefore assume that $L$ is relatively $\Pin^+$ or relatively $\Pin^-$ and fix a relative $\Pin^\pm$-structure on it, and endow all the lines $\ddd(D_A \sharp 0)$ with the corresponding orientations.
\begin{rem}
Even though one can formulate more precise conditions for the existence of the system of orientations required for the construction of quotient complexes, in applications it is enough to limit oneself to the relatively Pin case, which is what we do here.
\end{rem}
Therefore we have isomorphisms $C(\widetilde\gamma) \simeq C(A\cdot\widetilde\gamma)$ for all $\widetilde\gamma \in \Crit {\mathcal{A}}_{H:L}$ and so we can define
$$C([\widetilde\gamma]_G)$$
as the limit of the direct system of modules $(C(\widetilde\delta))_{\widetilde\delta \in [\widetilde\gamma]_G}$ connected by the above isomorphisms. The quotient module then is
$$CF_*^G(H:L) = \bigoplus_{[\widetilde\gamma]_G \in \Crit {\mathcal{A}}_{H:L}/G}C([\widetilde\gamma]_G)\,.$$
For the boundary operator to descend it is enough to require that the diagram
$$\xymatrix{C(\widetilde\gamma_-) \ar[r]^{C(u)} \ar[d] & C(\widetilde\gamma_+) \ar[d] \\ C(\widetilde\gamma_-') \ar[r]^{C(u)} & C(\widetilde\gamma_+')}$$
commute for all $\widetilde\gamma_\pm,\widetilde\gamma_\pm' \in \Crit {\mathcal{A}}_{H:L}$ with $|\widetilde\gamma_-| = |\widetilde\gamma_+| + 1$, $|\widetilde\gamma_-'| = |\widetilde\gamma_+'| + 1$, and $u \in \widetilde{\mathcal{M}}(H,J;\widetilde\gamma_-,\widetilde\gamma_+)$, $u \in \widetilde{\mathcal{M}}(H,J;\widetilde\gamma_-',\widetilde\gamma_+')$. The coherence of the chosen system of orientations with respect to boundary gluing ensures the commutativity of the diagram.
Therefore the boundary operator $\partial_{H,J}$ descends to a well-defined boundary operator $\partial_{H,J}^G$ on $CF_*^G(H:L)$ so that the quotient map
$$(CF_*(H:L),\partial_{H,J}) \to (CF_*^G(H:L),\partial_{H,J}^G)$$
is a chain map. In any case we have a well-defined Floer homology $HF_*^G(H,J:L)$.
Next, the coherence of the system of orientations implies that we have well-defined continuation maps. Thus we have a well-defined abstract Floer homology $HF_*^G(L)$.
Lastly, we wish to endow the quotient complexes with a product structure coming from the product operation $\star$. Therefore fix regular Floer data $(H^i,J^i)$ associated to the punctures of the thrice-punctured disk used to define $\star$, and a regular compatible perturbation datum $(K,I)$. Again, due to coherence, diagrams of the following kind:
$$\xymatrix{C(\widetilde\gamma_0) \otimes C(\widetilde\gamma_1) \ar[d] \ar[r]^-{C(u)} & C(\widetilde\gamma_2) \ar[d] \\ C(\widetilde\gamma_0') \otimes C(\widetilde\gamma_1') \ar[r]^-{C(u)}& C(\widetilde\gamma_2')}$$
commute, where $\widetilde\gamma_i,\widetilde\gamma_i' \in \Crit {\mathcal{A}}_{H^i:L}$ are such that $A_i := [\widehat\gamma_i'\sharp - \widehat\gamma_i] \in G_{\gamma_i(0)}$, $|\widetilde\gamma_0| + |\widetilde\gamma_1| - |\widetilde\gamma_2| = n$, $|\widetilde\gamma_0'| + |\widetilde\gamma_1'| - |\widetilde\gamma_2'| = n$, and $u \in {\mathcal{M}}(K,I;\{\widetilde\gamma_i\}_i)$, $u \in {\mathcal{M}}(K,I;\{\widetilde\gamma_i'\}_i)$.
Transferring $A_0$ and $A_1$ to $\gamma_2(0)$ along the boundary of the disk $\widehat\gamma_0\sharp \widehat\gamma_1 \sharp u$ we see that $A_2$ must be equal to the product of these, which explains why we required $G_q$ to be a subgroup of $\pi_2(M,L,q)$ for every $q$.
We therefore obtain a well-defined product structure
$$\star {:\ } CF_*^G(H^0:L) \otimes CF_*^G(H^1:L) \to CF_*^G(H^2:L)\,.$$
It can be checked that this structure is intertwined by the quotient maps, and that the result is a well-defined product structure on the abstract Floer homology $HF_*^G(L)$, which becomes an associative unital ring.
Lastly we wish to comment on the usual structure of Floer homology as a module over Novikov rings. Firstly, let us point out that since $\pi_2(M,L)$ is a local system, that is a groupoid rather than a group, it does not make sense to speak of its group ring. One can define $H_2^D:= \im (\pi_2(M,L) \to H_2(M,L;{\mathbb{Z}}))$ and look at the corresponding group ring ${\mathbb{Z}}[H_2^D]$, which is the usual one appearing in Floer theory. We note that if one picks the local system $G = \ker (\pi_2(M,L) \to H_2(M,L;{\mathbb{Z}}))$, then the quotient space $\widetilde\Omega_L/G \to \Omega_L$ is a normal covering space with deck group canonically isomorphic to $H_2^D$, so in particular $H_2^D$ acts on $\Crit {\mathcal{A}}_{H:L}/G$. But this is not enough to make $H_2^D$ act on the complex as this involves orienting operators $D_A\sharp 0$ for $A$ with nonzero Maslov, as explained above. This is possible if and only if $\mu(A)$ is even. This means that if $\mu(A)$ is odd, which of course only happens if $L$ is nonorientable, we simply cannot orient these operators in a coherent manner. Therefore the Floer complex $CF_*^G(H:L)$ is not, in general, a module over the Novikov ring ${\mathbb{Z}}[H_2^D]$. We illustrate this point in the example of ${\mathbb{R}} P^n \subset {\mathbb{C}} P^n$ for $n$ even, see \S\ref{ss:RPn}. Note however that since we do have a coherent system of orientations for classes $A$ with even Maslov, we see that the Floer complex $CF_*^G(H:L)$ is a module over the subring ${\mathbb{Z}}[H_2^{D,\text{even}}]$ where $H_2^{D,\text{even}}$ consists of classes with even Maslov number. We emphasize that this module structure depends on the chosen relative Pin structure.
We also consider the particular case of a class $A \in \pi_2(M,L,q)$ lying in the image of the morphism $\pi_2(M,q) \to \pi_2(M,L,q)$. In this case the operator $D_{A,q}$ is canonically oriented, because it can be considered as the result of gluing the operator $D_A$ where we view $A$ as a sphere attached at $q$, and the operator $D_0 \sharp 0$ where $0$ is the trivial class in $\pi_2(M,L)$. Since $\ddd(D_0 \sharp 0) = \ddd(0) \equiv {\mathbb{R}}$, we see that this operator is canonically oriented. This orientation also agrees with the one induced by any relative Pin structure, as can be seen from the multiplicative property. The conclusion is that convenient local systems $G < \pi_2^0(M,L)$ will contain the image of $\pi_2^0(M) \to \pi_2^0(M,L)$, because one has these canonical orientations. For the Novikov module structure, this has the following implication. Since $D_A \sharp 0$ is always canonically oriented for spherical classes $A\in \pi_2(M,L)$, the quotient Floer complex $CF_*^G(H:L)$ with $G = \ker (\pi_2(M,L) \to H_2(M,L;{\mathbb{Z}}))$ inherits the structure of a module over the group ring ${\mathbb{Z}}[H_2^S]$.
All of the above can be carried over to quantum homology verbatim. We consequently have quotient complexes $QC_*^G({\mathcal{D}}:L)$ with homology $QH_*^G({\mathcal{D}}:L)$. The PSS morphisms descend to the quotient complexes and induce isomorphisms $HF_*^G \simeq QH_*^G$, which allows us to define the abstract quantum homology $QH_*^G(L)$, which inherits the structure of a unital associative ring. In case $G$ is the kernel of the Hurewicz morphism, $QH_*^G(L)$ carries the structure of an algebra over the Novikov ring ${\mathbb{Z}}[H_2^S]$, and over the ring ${\mathbb{Z}}[H_2^{D,\text{even}}]$, and the two are compatible.
\subsection{Quantum module structure}\label{ss:quotient_cxs_quantum_module_struct}
Finally we wish to combine the two previous constructions and define a quantum module structure of Lagrangian Floer homology over the quantum homology of $M$. The necessary ingredients here are a local subsystem $G < \pi_2^0(M,L)$ containing $\im(\pi_2^0(M) \to \pi_2^0(M,L))$, and a relative Pin structure for $L$. The reader will have no trouble checking that the quantum module operation
$$\bullet {:\ } CF_*(H) \otimes CF_*(H_0:L) \to CF_*(H_1:L)$$
descends to a well-defined operation
$$\bullet {:\ } CF_*^{G'}(H) \otimes CF_*^G(H_0:L) \to CF_*^G(H_1:L)$$
where $G' < \pi_2^0(M)$ is any local system such that the morphism $\pi_2(M,q) \to \pi_2(M,L,q)$ maps $G'_q$ into $G_q$ for $q \in L$.
\section{Examples and computations}\label{s:examples_computations}
Here we compute the canonical quantum complex for a number of Lagrangians.
\subsection{${\mathbb{R}} P^n$ in ${\mathbb{C}} P^n$, $n \geq 2$}\label{ss:RPn}
We consider $M = {\mathbb{C}} P^n$ with the Fubini-Study form $\omega$ and $L = {\mathbb{R}} P^n$, which is monotone with minimal Maslov number $N_L = n+1$. Here we do not need to find the holomorphic disks in order to compute the homology, as we will see.
Choose a Morse function $f$ on $L$ with a unique critical point $q_i$ of index $i$, where $i = 0,\dots, n$. Assume that $J$ is an almost complex structure on $M$ such that ${\mathcal{D}} = (f,\rho,J)$ is a regular quantum datum, where $\rho$ is a Riemannian metric for which there are precisely two gradient trajectories of $f$ from $q_{i+1}$ to $q_i$ for every $i < n$. The complex as a module is
$$QC_*({\mathcal{D}}:L) = \bigoplus_{i = 0}^n \bigoplus_{A \in \pi_2(M,L,q_i)}C(q_i,A)\,.$$
A computation shows that $\pi_2(M,L) \simeq {\mathbb{Z}}$, therefore the complex has rank $1$ in every degree. We see that $C(q_i,0)$ is generated by the orientations of $T{\mathcal{S}}(q_i)$. As we know, the module $C(q_n,0)$ has a canonical generator, which we denote by $q_n$ by abuse of notation, corresponding to the positive orientation of $\ddd(T{\mathcal{S}}(q_n)) \equiv \ddd({\mathbb{R}})$. We also let $q_i$, $i < n$, denote a generator of $C(q_i,0)$, by abuse of notation. By degree reasons, the boundary operator $\partial {:\ } C(q_{i+1},0) \to C(q_i,0)$ for $i< n$ coincides with the Morse boundary operator. We have therefore $\partial q_n = 0$. The Morse homology of ${\mathbb{R}} P^n$ in degree $n-1$ is ${\mathbb{Z}}_2$ if $n$ is even and $0$ if $n$ is odd; this forces the boundary operator $\partial {:\ } C(q_{n-1},0) = {\mathbb{Z}}\langle q_{n-1}\rangle \to C(q_{n-2},0) = {\mathbb{Z}}\langle q_{n-2}\rangle$ to send $q_{n-1} \mapsto \pm 2 q_{n-2}$, where the sign depends on the choice of generators. Since $0 = \partial^2q_{n-1} = \pm 2\partial q_{n-2}$, we see that $\partial q_{n-2} = 0$, that is the complex in degrees $n,n-1,n-2$ has the form
$$\dots \to {\mathbb{Z}} \langle q_{n} \rangle \xrightarrow{0} {\mathbb{Z}}\langle q_{n-1}\rangle \xrightarrow{\pm 2} {\mathbb{Z}} \langle q_{n-2}\rangle \xrightarrow{0} \dots$$
Its homology therefore is $QH_{n-1}({\mathcal{D}}:L) \simeq 0$, $QH_{n-2}({\mathcal{D}}:L) \simeq {\mathbb{Z}}_2$. Since the quantum homology of ${\mathbb{C}} P^n$ is isomorphic to ${\mathbb{Z}}$ in every even degree, and all the homogeneous generators are invertible with respect to the quantum product, using the quantum module structure of $QH_*({\mathcal{D}}:L)$ over $QH_*(M)$, we see that the former is $2$-periodic with respect to degree, therefore we obtain finally
$$QH_*(L) \simeq \left \{ \begin{array}{ll}{\mathbb{Z}}_2\,, & n-*\text{ even} \\ 0\,, & n-*\text{ odd} \end{array}\right.\,.$$
From this we can compute the quantum homology of ${\mathbb{R}} P^n$ over an arbitrary ring $R$. Namely, the complex in degrees $n,n-1,n-2$ is
$$\dots \to R \langle q_n \rangle \xrightarrow{0} R \langle q_{n-1} \rangle \xrightarrow{2} R \langle q_{n-2} \rangle \xrightarrow{0} \dots $$
from which we see that $QH_{n-1}(L) \simeq \ker 2$ and $QH_{n-2} \simeq R /(2)$, where we view $2 {:\ } R \to R$ as the map corresponding to multiplication by $2$ and $(2) \subset R$ is the corresponding principal ideal. The rest of $QH_*(L)$ is obtained from the $2$-periodicity with respect to degree.
For instance if $2$ is invertible in $R$, we see that $QH_*(L) = 0$. In the other extreme where $R$ has characteristic $2$, we see that $QH_*(L) \simeq R$ in every degree.
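An intermediate illustration: for $R = {\mathbb{Z}}_4$ we obtain $\ker 2 = \{0,2\} \simeq {\mathbb{Z}}_2$ and $R/(2) \simeq {\mathbb{Z}}_2$, so that over this ring $QH_*(L) \simeq {\mathbb{Z}}_2$ in every degree.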
When $n$ is even, ${\mathbb{R}} P^n$ is nonorientable. We see that in this case the quantum homology over ${\mathbb{Z}}$ is $2$-periodic in degree. It follows that the usual Novikov ring ${\mathbb{Z}}[t,t^{-1}]$ where $t$ is the Novikov parameter of degree $|t| = -N_L = -n-1$ does not act on this module, because the degree of $t$ is odd. This illustrates the point raised at the end of \S\ref{ss:quotient_cxs_Lagr_HF_QH} that in general the Floer complexes and the Floer homology of nonorientable Lagrangians are not modules over the full Novikov ring. We see, however, that the subring generated by the even powers of $t$ does in fact act on the complex and on the homology.
\subsection{A general technique for tori}\label{ss:general_technique_tori}
In order to efficiently compute the quantum complex, it makes sense to choose a basis for the underlying module in a suitable way. Here we present such a way in case $L$ is a Lagrangian torus.
We identify $L$ with the standard Euclidean torus ${\mathbb{T}}^n$ by means of a diffeomorphism. We note that ${\mathbb{T}}^n$ carries a canonical trivial $\Pin^\pm$-structure, which is determined by requiring the transition maps to equal the identity element of $\Pin^\pm(n)$. Proposition \ref{prop:canonical_oris_disks_Pin_struct} yields a coherent system of orientations for the operator families $D_A\sharp 0$ for $A \in \pi_2(M,L,q)$ and any $q$. Recall that the module $C(q,A)$ is generated by the orientations of $D_A\sharp T{\mathcal{S}}(q)$. The canonical isomorphism
$$\ddd(D_A \sharp T{\mathcal{S}}(q)) \simeq \ddd(D_A \sharp 0) \otimes \ddd(T{\mathcal{S}}(q))$$
together with the orientation of $D_A \sharp 0$ gives us a canonical identification $C(q,A) = C(q,0)$. In order to remember $A$, we write this as
\begin{equation}\label{eqn:C_q_A_isomorphic_e_to_A_otimes_C_q_0}
C(q,A) = e^A \otimes C(q,0)\,.
\end{equation}
If we now fix an orientation for every ${\mathcal{S}}(q)$, it gives us a generator of $C(q,0)$, which we denote by $q$ by abuse of notation. Therefore the complex becomes
$$QC_*({\mathcal{D}}:L) = \bigoplus_{q \in \Crit f} {\mathbb{Z}}[\pi_2(M,L,q)]\otimes q\,,$$
where we write elements of the group ring ${\mathbb{Z}}[\pi_2(M,L,q)]$ as sums $\sum_i c_i e^{A_i}$.
Assume now that the local system $\pi_2(M,L)$ is trivial, which means that the natural action of $\pi_1(L,q)$ on $\pi_2(M,L,q)$ is trivial for every $q$. The complex therefore can be written as
$$QC_*({\mathcal{D}}:L) = {\mathbb{Z}}[\pi_2(M,L)]\otimes \bigoplus_{q \in \Crit f} {\mathbb{Z}}\cdot q\,.$$
In this case, using the fact that $N_L$ is even, it is not hard to show that $\partial_Q$ is linear over the group ring ${\mathbb{Z}}[\pi_2(M,L)]$, in the sense that for any $q,q' \in \Crit f$ and any $A, B \in \pi_2(M,L)$ and any $u \in \widetilde{\mathcal{P}}(q,q')$ with $u$ representing the class $B \in \pi_2(M,L,q)$, and where $|q| - |q'| + \mu(u) - 1 = 0$, we have the commutative diagram
$$\xymatrix{C(q,0) \ar[r]^{C(u)} \ar[d]^{e^A} & C(q',B) \ar[d]^{e^A} \\ C(q,A) \ar[r]^{C(u)} & C(q',AB)}$$
This means the following in terms of computing the quantum complex: it suffices to compute the isomorphisms $C(u) {:\ } C(q,0) \to C(q',B)$, and then the isomorphisms $C(u) {:\ } C(q,A) \to C(q',AB)$ are given by tensoring with $e^A$, that is using the isomorphism \eqref{eqn:C_q_A_isomorphic_e_to_A_otimes_C_q_0} above.
Another useful piece of information is as follows. Assume $u$ is a $J$-holomorphic disk of Maslov index $2$ and assume that the evaluation maps $\ev_\theta {:\ } \widetilde{\mathcal{M}}(J) \to L$, $\ev_\theta(v) = v(\theta)$, for $\theta \in S^1$ have surjective differentials at $u$, that is $\ev_{\theta*,u} {:\ } \ker D_u \to T_{u(\theta)}L$ is onto for every $\theta$. The operator $D_u \sharp 0$ is surjective and has index $2$, therefore its kernel is $2$-dimensional. The infinitesimal action of the conformal automorphism group of $D^2$ preserving $1$ is an isomorphism ${\mathbb{C}} = \Lie \Aut(D^2,1) \to \ker D_u \sharp 0$, therefore using the canonical orientation of ${\mathbb{C}}$ we obtain an orientation of $D_u \sharp 0$. On the other hand $D_u \sharp 0$ is oriented by the canonical $\Pin^\pm$-structure on $L$.
\begin{lemma}
The canonical orientation on $D_u \sharp 0$ coming from the $\Pin$-structure coincides with the orientation of $\ker (D_u \sharp 0)$ coming from its identification with ${\mathbb{C}}$.
\end{lemma}
\begin{prf}
We note that the bundle pair $(E_u = u^*TM, F_u = (u|_{S^1})^*TL)$ splits into the direct sum of bundle pairs $(\im u_*,\im(u|_{S^1})_*)$ and a complement $(E',F')$. The latter bundle pair can be identified with $({\mathbb{C}}^{n-1},{\mathbb{R}}^{n-1})$ and the corresponding operator on it is surjective with kernel isomorphic to $F'_1$. It follows that the restricted operator $D_{E',F'}\sharp 0$ is an isomorphism and thus it's oriented by the canonical positive orientation.
The remaining bundle pair $(\im u_*,\im(u|_{S^1})_*)$ has Maslov index $2$. It can be seen that the corresponding operator can be represented as the gluing of the Dolbeault operator on ${\mathcal{O}}(1) \to {\mathbb{C}} P^1$ and the Dolbeault operator on the standard pair $({\mathbb{C}},{\mathbb{R}})$. The orientation of $D_u\sharp 0$ corresponding to the canonical trivial $\Pin$-structure comes from the complex orientation of the kernel of $\overline\partial$ on ${\mathcal{O}}(1) \to {\mathbb{C}} P^1$ restricted to the subspace of sections vanishing at $0 \in {\mathbb{C}} P^1$. It can be seen that this complex orientation coincides with the orientation induced by identifying $\ker (D_u \sharp 0) = {\mathbb{C}}$. The lemma is proved. \qed
\end{prf}
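Let us note the dimension count behind the last step of the proof: the space of holomorphic sections of ${\mathcal{O}}(1) \to {\mathbb{C}} P^1$ is complex two-dimensional, so the subspace of sections vanishing at $0 \in {\mathbb{C}} P^1$ is a complex line; this is consistent with the fact that $D_u \sharp 0$ is surjective of index $2$, so that $\ker (D_u \sharp 0)$ is identified with ${\mathbb{C}}$.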
This result allows us to compute the isomorphism $C(u)$ coming from such a disk. Let therefore $q,q' \in \Crit f$ with $|q| = |q'| - 1$ and let $u \in \widetilde{\mathcal{P}}(q,q')$ be a Maslov $2$ disk such that $(u(-1),u(1)) \in {\mathcal{U}}(q) \times {\mathcal{S}}(q')$ and assume that the evaluation maps from the space of $J$-disks to $L$ at points of $S^1$ have surjective differentials at $u$. The space $\ker D_u|_{X_\Gamma} = \{\xi \in \ker D_u\,|\, \xi(1) \in T_{u(1)}{\mathcal{S}}(q')\}$ enters into the following two exact sequences:
$$0 \to \ker D_u|_{Y_\Gamma} \to \ker D_u|_{X_\Gamma} \to T_{u(-1)}L/T_{u(-1)}{\mathcal{U}}(q) \to 0\,.$$
$$0 \to \ker D_u \sharp 0 \to \ker D_u|_{X_\Gamma} \to T_{u(1)}{\mathcal{S}}(q') \to 0\,.$$
The first sequence yields
$$\ddd(D_u|_{X_\Gamma}) \simeq \ddd(D_u|_{Y_\Gamma}) \otimes \ddd(TL/T{\mathcal{U}}(q)) = \ddd(D_u|_{Y_\Gamma}) \otimes \ddd(T{\mathcal{S}}(q))$$
while the second one gives
$$\ddd(D_u|_{X_\Gamma}) \simeq \ddd(D_u \sharp 0) \otimes \ddd(T{\mathcal{S}}(q'))$$
From the definitions in \S\ref{ss:Lagr_QH} it follows that the isomorphism $C(u) {:\ } C(q,0) \to C(q',A)$, where $A = [u]$, is obtained as follows: a generator of $C(q,0)$, that is an orientation of $T{\mathcal{S}}(q)$, together with the canonical orientation of $D_u|_{Y_\Gamma}$ by the infinitesimal action of ${\mathbb{R}}$ determine an orientation of $D_u|_{X_\Gamma}$ by the first sequence. The second sequence gives an orientation of $\ddd(D_u\sharp 0) \otimes \ddd(T{\mathcal{S}}(q'))$, or equivalently an element of $C(q',A)$. Now using the canonical orientation of $D_u \sharp 0$ we obtain an orientation of $T{\mathcal{S}}(q')$.
We now define a basis of the Lie algebra of the conformal automorphism group $\Aut(D^2)$: $\epsilon_1$ is the infinitesimal vector of the elliptic counterclockwise rotation about $0 \in D^2$; $\epsilon_2$ is the infinitesimal vector of the hyperbolic translation from $-1$ to $1$; $\epsilon_3$ is the infinitesimal vector of the parabolic rotation around $1$ which evaluates to $-i$ at $0 \in D^2$, so that it induces a counterclockwise translation on the boundary away from $1$. Note that $\epsilon_2,\epsilon_3$ form a \emph{negative} basis of $\Lie \Aut (D^2,1) = {\mathbb{C}}$. When we have a holomorphic disk $u$ with boundary on $L$, by abuse of notation we will denote the vector fields along $u$ corresponding to the infinitesimal actions of these vectors by the same letters.
We will apply this in case $L$ is a two-dimensional torus. There are two possibilities. The first one is when $q$ has index $1$ and $q'$ has index $2$, meaning it's a maximum. Let us choose an orientation of ${\mathcal{S}}(q)$, which consists of a tangent vector field $v$ along ${\mathcal{S}}(q)$. Assume $\epsilon_3(-1) = \epsilon v$. Then the orientation of $\ker D_u|_{X_\Gamma}$ induced by the first sequence is $\epsilon_2 \wedge \epsilon\epsilon_3$. We see that in this case $D_u|_{X_\Gamma} = D_u \sharp 0$ and therefore we just obtain $-\epsilon$ times the standard orientation. We see that in this case the isomorphism $C(u) {:\ } C(q,0) \to C(q',A)$, where $A = [u]$, sends $v$ to $-\epsilon e^A\otimes q'$, where $q'$ denotes the canonical positive orientation of $T{\mathcal{S}}(q') = 0$.
The other possibility is that $q$ is a minimum and $q'$ has index $1$. In this case we choose an orientation of $TL = T{\mathcal{S}}(q)$ by a pair of vectors $v_1,v_2$, for instance we can take the vectors corresponding to the chosen coordinates on $L$. We also choose an orientation of ${\mathcal{S}}(q')$ by a nonvanishing tangent vector field $v$ along ${\mathcal{S}}(q')$. In $\ker D_u|_{X_\Gamma}$ let $\eta$ be a vector such that $\epsilon_3(-1),\eta(-1)$ is a positive basis at $q$. Therefore the first sequence gives $\ker D_u|_{X_\Gamma}$ the orientation
$$\epsilon_2 \wedge \epsilon_3 \wedge \eta\,.$$
Now $\eta$ evaluates to $\epsilon v$ at $+1$. It follows from the second sequence that the induced orientation on $T{\mathcal{S}}(q')$ is given by $-\epsilon v$. Thus $C(u) {:\ } C(q,0) \to C(q',A)$ maps $v_1\wedge v_2$ to $-\epsilon e^A \otimes v$.
We obtain from these considerations the following result.
\begin{lemma}\label{lem:computing_Lagr_QH_tori}
Let $L$ be a Lagrangian two-torus with $N_L = 2$ such that the local system $\pi_2(M,L)$ is trivial, and such that the evaluation maps from the space of Maslov $2$ disks at the points of $S^1$ all have surjective differentials. Let $f$ be a Morse function on $L$ with vanishing Morse boundary operator. Assume the critical points of $f$ are the maximum $q_2$, saddles $x,y$, and the minimum $q_0$. Orient the stable manifolds of $x,y$ somehow and orient ${\mathcal{S}}(q_0)$ using the orientation coming from the identification $L \simeq {\mathbb{T}}^2$.
\begin{itemize}
\item Every unparametrized holomorphic disk $[u] \in {\mathcal{M}}_1(J;q_2,A)$ yields contributions to the matrix elements of $\partial_{\mathcal{D}}$, $C(x,0) \to C(q_2,A)$ and $C(y,0) \to C(q_2,A)$, as follows. The contribution to the matrix element $C(x,0) \to C(q_2,A) = e^A \otimes C(q_2,0)$ is $-1$ times the intersection number of ${\mathcal{U}}(x)$ and the oriented parametrized curve $u|_{S^1}$, where ${\mathcal{U}}(x)$ is cooriented by the chosen orientation of ${\mathcal{S}}(x)$ and $u|_{S^1}$ is oriented by the counterclockwise orientation of $S^1$. The contribution to $C(y,0) \to C(q_2,A)$ is calculated similarly.
\item every unparametrized holomorphic disk $[u] \in {\mathcal{M}}_1(J;q_0,A)$ yields the contribution to the matrix element $C(q_0,0) \to C(x,A) = e^A \otimes C(x,0)$ given by $-i$ times the chosen orientation of ${\mathcal{S}}(x)$, where $i$ is the intersection number of the oriented curve ${\mathcal{S}}(x)$ and the parametrized curve $u|_{S^1}$, cooriented by a vector $\eta \in \ker D_u|_{X_\Gamma}$ subject to the condition that $\epsilon_3(-1),\eta(-1)$ form an oriented basis at $q_0$. The same is true verbatim for the contribution to the matrix element $C(q_0,0) \to C(y,A) = e^A \otimes C(y,0)$. \qed
\end{itemize}
\end{lemma}
In case $L$ is a circle, the condition on the differentials of the evaluation maps is automatic, therefore we obtain
\begin{lemma}
Assume $L$ is a circle with $N_L = 2$, such that $\pi_2(M,L)$ is a trivial local system. Choose an orientation of $L$. Let $f$ be a Morse function on $L$ with maximum $q_1$ and minimum $q_0$. The contribution of $[u] \in {\mathcal{M}}_1(J;q_1,A)$ to the matrix element $C(q_0,0) \to C(q_1,A)$ is given by $-1$ times the intersection number of the oriented parametrized curve $u|_{S^1}$ and the point $q_0$ cooriented by the orientation of ${\mathcal{S}}(q_0)$. \qed
\end{lemma}
\subsection{${\mathbb{R}} P^1$ in ${\mathbb{C}} P^1$}
Let ${\mathcal{D}} = (f,\rho,J_0)$ be a quantum datum for $L = {\mathbb{R}} P^1$ where $f$ is a Morse function with a unique maximum $q_1$ and a unique minimum $q_0$, and $J_0$ is the standard complex structure on ${\mathbb{C}} P^1$. The local system $\pi_2(M,L)$ is trivial and the relative Hurewicz morphism $\pi_2(M,L) \to H_2(M,L;{\mathbb{Z}})$ is an isomorphism. The latter group is generated by two classes $A,B$, defined as follows. Fix an orientation on $L$. The class $A$ is the class of an embedded disk representing a contraction of an orientation-preserving diffeomorphism $S^1 \to L$, while $B$ is represented by an embedded disk realizing the contraction of an orientation-reversing diffeomorphism $S^1 \to L$. For any $q \in L$ the spaces ${\mathcal{M}}_1(J_0;q, A)$ and ${\mathcal{M}}_1(J_0;q, B)$ each contain one point. Choose a basis of $T{\mathcal{S}}(q_0)$ giving ${\mathcal{S}}(q_0)$ the chosen orientation of $L$. Then Lemma \ref{lem:computing_Lagr_QH_tori} gives us the following formula for the boundary operator:
$$\partial_{\mathcal{D}}(q_1) = 0\quad \text{and} \quad \partial_{\mathcal{D}}(q_0) = (-e^A + e^B)q_1\,.$$
The homology of this canonical complex can be computed. To this end we identify the complex in every degree with a direct sum of a countable number of copies of ${\mathbb{Z}}$, meaning we can identify each $QC_i({\mathcal{D}}:L)$ with the space of functions ${\mathbb{Z}} \to {\mathbb{Z}}$ having finite support. The boundary operator acting, for instance, from $QC_0$ to $QC_{-1}$, can be identified with the following operator on functions. Let $\delta_j$ denote the function taking the value $1$ on $j$ and $0$ otherwise. Then $\partial(\delta_j) = \delta_{j+1} - \delta_j$. We see that $QH$ vanishes in even degrees. To compute $QH_{-1}$, for example, define the sum function $f \mapsto \sum_j f(j)$. We see that it is onto ${\mathbb{Z}}$ and that the kernel consists of functions having sum $0$, which is precisely the set of boundaries, as follows from the description of $\partial$ we just had. Thus the homology is isomorphic to ${\mathbb{Z}}$ in every odd degree. Thus
$$QH_*(L) \simeq \left\{ \begin{array}{ll} {\mathbb{Z}}\,, & *\text{ is odd} \\ 0\,, & *\text{ is even}\end{array}\right.\,.$$
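To spell this out: since $\partial_{\mathcal{D}}(q_1) = 0$, the differential vanishes on the odd-degree part of the complex, so every element there is a cycle; the cycle corresponding to $\delta_0$ has sum $1$ and hence represents a generator of the homology, whereas $\delta_1 - \delta_0 = \partial(\delta_0)$, the image of the generator $\delta_0$ of the adjacent even-degree part, is a boundary of sum $0$. In even degrees a finitely supported function $f$ with $\partial f = 0$ satisfies $f(k-1) = f(k)$ for all $k$ and therefore vanishes, which gives the vanishing of the homology there.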
There are two Spin structures on $L$. These allow us to form a quotient of $QC_*({\mathcal{D}}:L)$ by identifying all the summands lying in the same degree. The quotient complex then is isomorphic to ${\mathbb{Z}}$ in each degree. The induced boundary operator depends on the Spin structure: for one Spin structure it vanishes, meaning that $QH_*(L) \simeq {\mathbb{Z}}$ in every degree, while for the other Spin structure it is multiplication by $2$ when going from an even to an odd degree, which means that $QH_*(L) \simeq {\mathbb{Z}}_2$ in odd degrees and $QH_*(L) = 0$ in even degrees.
\subsection{Some Lagrangian tori}
We compute the canonical quantum complex of three Lagrangian tori: the Clifford and the Che\-ka\-nov torus in ${\mathbb{C}} P^2$ and the exotic torus in $S^2 \times S^2$, based on our general technique of \S \ref{ss:general_technique_tori}. The only thing we need to know is the parametrizations of the boundary circles of $J_0$-holomorphic disks of Maslov $2$. It turns out that the conditions of Lemma \ref{lem:computing_Lagr_QH_tori} are satisfied for all the three tori and for $J_0$ being the standard complex structure.
Let us therefore list the disks of Maslov $2$ with boundary on the tori. Note that the local systems $\pi_2(M,L)$ are trivial and the relative Hurewicz morphism $\pi_2(M,L) \to H_2(M,L;{\mathbb{Z}})$ is an isomorphism in all three cases.
\subsubsection{The Clifford torus in ${\mathbb{C}} P^2$}\label{sss:Cliff_torus}
The Clifford torus is the Lagrangian
$$L = \{[z_0:z_1:z_2] \,|\, |z_0| = |z_1| = |z_2|\} \subset {\mathbb{C}} P^2\,,$$
and we can identify
$$S^1 \times S^1 \simeq L \quad \text{via} \quad (e^{i\theta_0},e^{i\theta_1}) \mapsto [e^{i\theta_0}:e^{i\theta_1}:1]\,.$$
We endow $L$ with the trivial $\Pin^+$-structure \footnote{We can also choose the trivial $\Pin^-$-structure; for the present calculation it is immaterial.} corresponding to this identification. We have the $J_0$-holomorphic disks
$$u_0(z) = [z:1:1]\,,\quad u_1(z) = [1:z:1]\,, \quad u_2(z) = [1:1:z]\quad \text{for }z \in D^2\,,$$
and we let $A,B,C \in \pi_2(M,L)$ be the corresponding classes; these freely generate $\pi_2(M,L)$ (as an abelian group). We choose a Morse function $f$ with critical points $q_2,x,y,q_0$ such that ${\mathcal{U}}(y)$ and ${\mathcal{S}}(x)$ are both slight deformations of vertical curves $\theta_0 = \const$ while ${\mathcal{S}}(y)$ and ${\mathcal{U}}(x)$ are both slight deformations of horizontal curves $\theta_1 = \const$. The Morse boundary operator of $f$ vanishes. We orient ${\mathcal{S}}(y)$ by $\partial_0 \equiv \partial_{\theta_0}$, ${\mathcal{S}}(x)$ by $\partial_1 \equiv \partial_{\theta_1}$, and ${\mathcal{S}}(q_0)$ by $\partial_0 \wedge \partial_1$.
The zero-dimensional part of the space ${\mathcal{P}}(x,q_2)$ has two disks, one in each class $B,C$. By Lemma \ref{lem:computing_Lagr_QH_tori} we have to compute the intersection numbers of the parametrized boundaries of these disks with ${\mathcal{U}}(x)$, cooriented by $\partial_1$. We can see that for the disk in $B$ this intersection number equals $1$ while for the disk in $C$ it equals $-1$, therefore by Lemma \ref{lem:computing_Lagr_QH_tori} we have
$$\partial x = (-e^B + e^C)q_2\,.$$
The zero-dimensional part of the space ${\mathcal{P}}(y,q_2)$ has one disk in each of the classes $A,C$. The intersection number of the parametrized boundary of a disk in class $A$ with ${\mathcal{U}}(y)$ cooriented by $\partial_0$ equals $1$ while the corresponding intersection number for a disk in class $C$ is $-1$, therefore by Lemma \ref{lem:computing_Lagr_QH_tori} we have
$$\partial y = (-e^A + e^C)q_2\,.$$
The zero-dimensional part of the space ${\mathcal{P}}(q_0,x)$ has a disk in each of the classes $A,C$. Let us look at a disk $u \in \widetilde {\mathcal{P}}(q_0,x)$ in class $A$. According to Lemma \ref{lem:computing_Lagr_QH_tori} we have to coorient its boundary by a vector $\eta$ such that $\epsilon_3(-1) = \partial_0$ and $\eta(-1)$ form a positive basis. Therefore $\eta$ should be the vector $\partial_1$. It evaluates to the vector $\partial_1$ at $1$, and therefore the intersection number of $u|_{S^1}$, cooriented by $\eta$, with ${\mathcal{S}}(x)$, cooriented by $\partial_1$, is $1$. An analogous computation shows that the intersection number corresponding to a disk in class $C$ is $-1$.
The zero-dimensional part of ${\mathcal{P}}(q_0,y)$ has a disk in each of the classes $B,C$, and the respective intersection numbers equal $-1, 1$, therefore by Lemma \ref{lem:computing_Lagr_QH_tori} we have
$$\partial q_0 = (-e^A + e^C)x + (e^B - e^C)y\,.$$
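As a consistency check, these formulas satisfy $\partial^2 = 0$; indeed
$$\partial^2 q_0 = (-e^A + e^C)(-e^B + e^C)\,q_2 + (e^B - e^C)(-e^A + e^C)\,q_2 = 0\,,$$
since the second summand is the negative of the first. Analogous cancellations occur in the computations for the Chekanov and exotic tori below.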
\subsubsection{The Chekanov torus in ${\mathbb{C}} P^2$}
We can view ${\mathbb{C}} P^2$ as the symplectic cut of the closed unit disk cotangent bundle $D^*{\mathbb{R}} P^2$ by the geodesic flow on the boundary relative to the round metric on ${\mathbb{R}} P^2$, with the symplectic form scaled by an appropriate factor. We let $C$ be the image of the unit cotangent bundle of ${\mathbb{R}} P^2$ by the quotient map; it is a smooth conic. If we fix a point $x \in {\mathbb{R}} P^2$, take a cotangent circle of radius $r$ at it, and let it flow with the geodesic flow, the resulting set $L \subset {\mathbb{C}} P^2$ is a monotone Lagrangian torus for a unique value of $r$. This is the Chekanov torus. We let $\alpha \in \pi_2(M,L)$ be the class of the cotangent disk at $x$, $\beta \in \pi_2(M,L)$ be the class of a disk contracting the geodesic circle whose intersection number with $C$ is $1$. Finally we let $h = [{\mathbb{C}} P^1] \in \pi_2(M,L)$ be the class of the line. The group $\pi_2(M,L)$ is freely generated by these three classes. It is known \cite{Chekanov_Schlenk_Notes_mon_twist_tori, Auroux_Mirror_symmetry_T_duality_compl_antican_divisor} that there are four classes containing Maslov $2$ disks: $\beta$ and $h - 2\beta + k\alpha$ for $k = -1,0,1$.
We put coordinates $(\theta_0,\theta_1)$ on $L$, where the boundary of $\alpha$ is given by $\theta_1 = \const$, while the boundary of $\beta$ is given by $\theta_0 = \const$. We choose a Morse function as in \S\ref{sss:Cliff_torus}. We orient the stable manifolds ${\mathcal{S}}(x)$, ${\mathcal{S}}(y)$, ${\mathcal{S}}(q_0)$ by $\partial_1$, $\partial_0$, $\partial_0\wedge \partial_1$, respectively.
The zero-dimensional part of ${\mathcal{P}}(x,q_2)$ contains seven disks, one in the class $\beta$, and two in each of the classes $h - 2\beta + k\alpha$. Performing a calculation of the corresponding intersection numbers as in \S\ref{sss:Cliff_torus}, we get
$$\partial x = (-e^\beta + 2(e^{h-2\beta-\alpha} +e^{h-2\beta} +e^{h-2\beta+\alpha}))q_2\,.$$
The zero-dimensional part of ${\mathcal{P}}(y,q_2)$ has a disk in each of the classes $h - 2\beta \pm \alpha$, and the corresponding intersection numbers give us
$$\partial y = (e^{h-2\beta-\alpha} - e^{h-2\beta+\alpha})q_2\,.$$
The zero-dimensional part of ${\mathcal{P}}(q_0,x)$ has a disk in each of the classes $h - 2\beta \pm \alpha$, while the zero-dimensional part of ${\mathcal{P}}(q_0,y)$ has one disk in class $\beta$ and two disks in each of the classes $h - 2\beta +k\alpha$. We have:
$$\partial q_0 = (e^{h-2\beta-\alpha} - e^{h-2\beta+\alpha})x + (e^\beta - 2(e^{h-2\beta-\alpha} +e^{h-2\beta} +e^{h-2\beta+\alpha}))y\,.$$
\subsubsection{The exotic torus in ${\mathbb{C}} P^1 \times {\mathbb{C}} P^1$}
We identify ${\mathbb{C}} P^1 = S^2$, and view $S^2$ as the set of unit vectors in ${\mathbb{R}}^3$. The exotic torus is
$$L = \{(x,y) \in S^2 \times S^2\,|\, x\cdot y = -\tfrac 1 2\,, x_3 + y_3 = 0\}\,,$$
and we put on it the coordinates $\theta_0, \theta_1$, where $\theta_0$ corresponds to the rotation around the $3$-axis in the counterclockwise direction while $\theta_1$ is the counterclockwise rotation of the pair $(x,y)$ around their sum $x + y$. We let the contraction of the curve $\{x_3 = -y_3 = \sqrt 3/2\}$ through the respective poles generate the class $\alpha$. The curve $\theta_0 = \const$ contracts via a disk whose class we denote by $\beta$. There are also the classes $A = [S^2 \times \pt]$ and $B = [\pt \times S^2]$ in $\pi_2(M,L)$. These four classes freely generate this abelian group.
It is known \cite{Chekanov_Schlenk_Notes_mon_twist_tori, Auroux_Mirror_symmetry_T_duality_compl_antican_divisor} that there are five classes in $\pi_2(M,L)$ containing Maslov $2$ disks, namely $\beta$, $A-\beta$, $B - \beta$, $A - \beta - \alpha$, and $B - \beta + \alpha$.
We choose a Morse function as above and orient its stable manifolds in the same manner. The zero-dimensional parts of the spaces ${\mathcal{P}}(x,q_2)$, ${\mathcal{P}}(q_0,y)$ each contain one disk in each one of the above five classes, while the spaces ${\mathcal{P}}(y,q_2)$, ${\mathcal{P}}(q_0,x)$ contain one disk in each of the classes $A - \beta - \alpha$, $B - \beta + \alpha$. We then have
$$\partial x = (-e^{\beta} + e^{A - \beta} + e^{B - \beta} + e^{A - \beta - \alpha} + e^{B - \beta + \alpha})q_2 \,, \quad \partial y = (e^{A - \beta - \alpha} - e^{B - \beta + \alpha})q_2\,,$$
$$\partial q_0 = (e^{A - \beta - \alpha} - e^{B - \beta + \alpha})x + (e^{\beta} - (e^{A - \beta} + e^{B - \beta} + e^{A - \beta - \alpha} + e^{B - \beta + \alpha}))y\,.$$
\section{Background} The starting point for the development of algebraic invariants in topological data analysis is the classification of finite persistence modules over a field $k$: that any such module decomposes into a direct sum of indecomposable interval modules; moreover, the decomposition is unique up to reordering. The barcodes associated to the original module correspond to these interval submodules, which are indexed by the set of connected subgraphs of the finite directed graph associated to the finite, totally ordered indexing set of the module. Modules that decompose in such a fashion are conventionally referred to as {\it tame}.
\vskip.2in
A central problem in the subject has been to determine what, if anything, holds true for more complex types of poset modules (ps-modules) - those indexed on finite partially ordered sets; most prominent among these being $n$-dimensional persistence modules \cite{cs, cz}. Gabriel's theorem \cite{pg, gr} implies that the only types for which the module is {\it always} tame are those whose underlying graph corresponds to a simply laced Dynkin diagram of type $A_n$, $D_n, n\ge 4$ or one of the exceptional graphs $E_6, E_7$ or $E_8$; a result indicating there is no simple way to generalize the 1-dimensional case.
\vskip.2in
However it is natural to ask whether the existence of some (naturally occurring) additional structure for such modules might lead to an appropriate generalization that is nevertheless consistent with Gabriel's theorem. This turns out to be the case. Before stating our main results, we will need to briefly discuss the framework in which we will be working. We consider finite ps-modules (referred to as $\cal C$-modules in this paper) equipped with i) no additional structure, ii) a {\it weak inner product} ($\cal WIPC$-module), iii) an {\it inner product} ($\cal IPC$-module). The ``structure theorem'' - in reality a sequence of theorems and lemmas - is based on the fundamental notion of a {\it multi-flag} of a vector space $V$, referring to a collection of subspaces of $V$ closed under intersections, and the equally important notion of {\it general position} for such an array. Using terminology made precise below, our results may be summarized as follows:
\begin{itemize}
\item Any $\cal C$-module admits a (non-unique) weak inner product structure (can be realized as a $\cal WIPC$-module). However, the obstruction to further refining this to an $\cal IPC$-structure is in general non-trivial, and we give an explicit example of a $\cal C$-module which cannot admit an inner product structure.
\item Associated to any finite $\cal WIPC$-module $M$ is a functor ${\cal F}(M):{\cal C}\to (multi\mhyphen flags/k)$ which associates to each $x\in obj({\cal C})$ a multi-flag ${\cal F}(M)(x)$ of the vector space $M(x)$, referred to as the {\it local structure} of $M$ at $x$.
\item This local structure is naturally the direct limit of a directed system of recursively defined multi-flags $\{{\cal F}_n(M), \iota_n\}$, and is called {\it stable} when this directed system stabilizes at a finite stage.
\item In the case $M$ is an $\cal IPC$-module with stable local structure
\begin{itemize}
\item it determines a {\it tame covering} of $M$ - a surjection of $\cal C$-modules $p_M:T(M)\surj M$ with $T(M)$ weakly tame, and with $p_M$ inducing an isomorphism of associated graded local structures. The projection $p_M$ is an isomorphism iff $M$ itself is weakly tame, which happens exactly when the multi-flag ${\cal F}(M)(x)$ is in general position for each object $x$. In this way $T(M)$ is the closest weakly tame approximation to $M$.
\item If, in addition, the category $\cal C$ is {\it holonomy free} (h-free), then each block of $T(M)$ may be non-canonically written as a finite direct sum of GBCs (generalized bar codes); in this case $T(M)$ is tame and $M$ is tame iff it is weakly tame.
\end{itemize}
\item In the case $M$ is equipped only with a $\cal WIPC$-structure, the tame cover may not exist, but one can still define the {\it generalized bar code vector} of $M$ which, in the case $M$ is an $\cal IPC$-module, measures the dimensions of the blocks of $M$. This vector does not depend on the choice of $\cal WIPC$-structure, and therefore is defined for all $\cal C$-modules $M$ with stable local structure.
\item All finite $n$-dimensional zig-zag modules have strongly stable local structure for all $n\ge 1$ (this includes all finite $n$-dimensional persistence modules, and strongly stable implies stable).
\item All finite $n$-dimensional persistence modules, in addition, admit a (non-unique) inner product structure.
\end{itemize}
A distinct advantage to the above approach is that the decomposition into blocks, although dependent on the choice of inner product, is {\it basis-free}; moreover the local structure is derived solely from the underlying structure of $M$ via the iterated computation of successively refined functors ${\cal F}_n(M)$ determined by images, kernels and intersections. For modules with stable local structure, the total dimension of the kernel of $p_M$ - referred to as the {\it excess} of $M$ - is an isomorphism invariant that provides a complete numerical obstruction to an $\cal IPC$-module $M$ being weakly tame. Moreover, the block diagram of $M$, codified by its tame cover $T(M)$, always exists for $\cal IPC$-modules with stable local structure, even when $M$ itself is not weakly tame. It would seem that the computation of the local structure of $M$ in this case should be amenable to algorithmic implementation. And although there are obstructions (such as holonomy) to stability for $\cal WIPC$-modules indexed on arbitrary finite ps-categories, these obstructions vanish for finite zig-zag modules in all dimensions (as indicated by the last bullet point). Additionally, although arbitrary $\cal C$-modules may not admit an inner product, all finite $n$-dimensional persistence modules do (for all dimensions $n\ge 1$), which is our main case of interest.
\vskip.2in
A brief organizational description: in section 2 we make precise the notion of multi-flags, general position, and the local structure of a $\cal WIPC$-module. The {\it excess} of the local structure - a whole number which measures the failure of general position - is defined. In section 3 we show that when $M$ is an $\cal IPC$-module, the associated graded local structure ${\cal F}(M)_*$ defines the blocks of $M$, which in turn can be used to create the tame cover via direct sum. Moreover, this tame cover is isomorphic to $M$ iff the excess is zero. We define the holonomy of the indexing category; for holonomy-free (h-free) categories, we show that this block sum may be further decomposed into a direct sum of generalized bar codes, yielding the desired generalization of the classical case mentioned above. As an illustration of the efficacy of this approach, we use it at the conclusion of section 3.2 to give a 2-sentence proof of the structure theorem for finite 1-dimensional persistence modules. In section 3.3 we show that the dimension vector associated to the tame cover can still be defined in the absence of an inner product structure, yielding an isomorphism invariant for arbitrary $\cal C$ modules. Section 3.4 investigates the obstruction to equipping a $\cal C$-module with an inner product, the main results being that i) it is in general non-trivial, and ii) the obstruction vanishes for all finite $n$-dimensional persistence modules. In section 3.5 we consider the related obstruction to being h-free; using the introduced notion of an elementary homotopy we show all finite $n$-dimensional persistence modules are strongly h-free (implying h-free). We also show how the existence of holonomy can prevent stability of the local structure. Finally section 3.6 considers the stability question; although (from the previous section) the local structure can fail to be stable in general, it is always so for i) finite $n$-dimensional zig-zag modules (which includes persistence modules as a special case) over an arbitrary field, and ii) any $\cal C$-module over a finite field.
\vskip.2in
In section 4 we introduce the notion of geometrically based $\cal C$-modules; those which arise via application of homology to a $\cal C$-diagram of simplicial sets or complexes with finite skeleta. We show that the example of section 3.4 can be geometrically realized, implying that geometrically based $\cal C$-modules need not admit an inner product. However, by a cofibrant replacement argument we show that any geometrically based $\cal C$-module admits a presentation by $\cal IPC$-modules, a result which is presently unknown for general $\cal C$-modules.
\vskip.2in
I would like to thank Dan Burghelea and Fedor Manin for their helpful comments on earlier drafts of this work, and Bill Dwyer for his contribution to the proof of the cofibrancy replacement result presented in section 4.1.
\vskip.5in
\section{$\cal C$-modules}
\subsection{Preliminaries} Throughout we work over a fixed field $k$. Let $(vect/k)$ denote the category of finite dimensional vector spaces over $k$, and linear homomorphisms between such. Given a category $\cal C$, a {\it $\cal C$-module over $k$} is a covariant functor $M:{\cal C}\to (vect/k)$. The category $({\cal C}\mhyphen mod)$ of $\cal C$-modules then has these functors as objects, with morphisms represented in the obvious way by natural transformations. All functorial constructions on vector spaces extend to the objects of $({\cal C}\mhyphen mod)$ by objectwise application. In particular, one has the appropriate notions of
\begin{itemize}
\item monomorphisms, epimorphisms, short and long-exact sequences;
\item kernel and cokernel;
\item direct sums, Hom-spaces, tensor products;
\item linear combinations of morphisms.
\end{itemize}
With these constructs $({\cal C}\mhyphen mod)$ is an abelian category, without restriction on $\cal C$. By a {\it ps-category} we will mean the categorical representation of a poset $(S,\le)$, where the objects identify with the elements of $S$, while $Hom(x,y)$ contains a unique morphism iff $x\le y$ in $S$. A ps-category is {\it finite} iff it has a finite set of objects, and is {\it connected} if its nerve $N({\cal C})$ is connected. A ps-module is then a functor $F:{\cal C}\to (vect/k)$ from a ps-category $\cal C$ to $(vect/k)$. A morphism $\phi_{xy}:x\to y$ in $\cal C$ is {\it atomic} if it does not admit a non-trivial factorization (in terms of the partial ordering, this is equivalent to saying that if $x\le z\le y$ then either $z=x$ or $z=y$). Any morphism in $\cal C$ can be expressed (non-uniquely) as a composition of atomic morphisms. The {\it minimal graph} of $\cal C$ is then defined as the (oriented) subgraph of the 1-skeleton of $N({\cal C})$ with the same vertices, but whose edges are represented by atomic morphisms (not compositions of such). The minimal graph of $\cal C$ is denoted by $\Gamma({\cal C})$ and will be referred to simply as the graph of $\cal C$. We observe that $\cal C$ is connected iff $\Gamma({\cal C})$ is connected.
\vskip.2in
In all that follows we will assume $\cal C$ to be a {\it connected, finite ps-category}, so that all $\cal C$-modules are finite ps-modules. If $M$ is a $\cal C$-module and $\phi_{xy}\in Hom_{\cal C}(x,y)$, we will usually denote the linear map $M(\phi_{xy}): M(x)\to M(y)$ simply as $\phi_{xy}$ unless more precise notation is needed. A very special type of ps-category occurs when the partial ordering on the finite set is a total ordering. In this case the resulting categorical representation $\cal C$ is isomorphic to $\underline{n}$, which denotes the category corresponding to $\{1 < 2 < 3 <\dots < n\}$. A finite persistence module is, by definition, an $\underline{n}$-module for some natural number $n$. So the $\cal C$-modules we consider in this paper occur as natural generalizations of finite persistence modules.
\vskip.3in
\subsection{Inner product structures} It will be useful to consider two refinements of the category $(vect/k)$.
\begin{itemize}
\item $(WIP/k)$, the category whose objects are inner product (IP)-spaces $V = (V,<\ ,\ >_V)$ and whose morphisms are linear transformations (no compatibility required with respect to the inner product structures on the domain and range);
\item $(IP/k)$, the wide subcategory of $(WIP/k)$ whose morphisms $L:(V,<\ ,\ >_V)\to (W,<\ ,\ >_W)$ satisfy the property that $\wt{L}: ker(L)^\perp\to W$ is an isometric embedding, where $ker(L)^\perp\subset V$ denotes the orthogonal complement of $ker(L)\subset V$ in $V$ with respect to the inner product $<\ ,\ >_V$, and $\wt{L}$ is the restriction of $L$ to $ker(L)^\perp$.
\end{itemize}
There are obvious transformations
\[
(IP/k)\xrightarrow{\iota_{ip}} (WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
where the first map is the inclusion which is the identity on objects, while the second map forgets the inner product on objects and is the identity on transformations between two fixed objects.
\vskip.2in
Given a $\cal C$-module $M:{\cal C}\to (vect/k)$ a {\it weak inner product} on $M$ is a factorization
\[
M: {\cal C}\to (WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
while an {\it inner product} on $M$ is a further factorization through $(IP/k)$:
\[
M: {\cal C}\to (IP/k)\xrightarrow{\iota_{ip}}(WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
A $\cal WIPC$-module will refer to a $\cal C$-module $M$ equipped with a weak inner product, while an $\cal IPC$-module is a $\cal C$-module that is equipped with an actual inner product, in the above sense. As any vector space admits a (non-unique) inner product, we see that
\begin{proposition} Any $\cal C$-module $M$ admits a non-canonical representation as a $\cal WIPC$-module.
\end{proposition}
The question as to whether a $\cal C$-module $M$ can be represented as an $\cal IPC$-module, however, is much more delicate, and discussed in some detail below.
\vskip.2in
Given a $\cal C$-module $M$ and a morphism $\phi_{xy}\in Hom_{\cal C}(x,y)$, we set $KM_{xy} := \ker(\phi_{xy} : M(x)\to M(y)).$ We note that a $\cal C$-module $M$ is an $\cal IPC$-module, iff
\begin{itemize}
\item for all $x\in obj({\cal C})$, $M(x)$ comes equipped with an inner product $< , >_x$;
\item for all $\phi_{xy}\in Hom_{\cal C}(x,y)$, the map $\wt{\phi}_{xy} : KM_{xy}^\perp\to M(y)$ is an isometry, where $\wt{\phi}_{xy}$ denotes the restriction of $\phi_{xy}$ to $KM_{xy}^\perp = $ the orthogonal complement of $KM_{xy}\subset M(x)$ with respect to the inner product $< , >_x$. In other words,
\[
<\phi({\bf v}), \phi({\bf w})>_y = <{\bf v}, {\bf w}>_x,\qquad \forall\, {\bf v}, {\bf w}\in KM_{xy}^\perp
\]
\end{itemize}
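To illustrate the second condition, let ${\cal C} = \underline{2}$ and let $M$ be given by $M(1) = M(2) = \mathbb R$ with $\phi_{12}(v) = 2v$. Here $KM_{12} = \{0\}$, so an $\cal IPC$-structure requires $\phi_{12}$ itself to be an isometry; this is achieved by taking, for example, $<v,w>_1 = vw$ and $<v,w>_2 = \frac{1}{4}vw$, for then $<\phi_{12}(v),\phi_{12}(w)>_2 = <v,w>_1$. In particular, producing an $\cal IPC$-structure will typically require rescaling the inner products at the various objects relative to one another.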
\begin{definition} Let $V = (V, < , >)$ be an inner product (IP) space. If $W_1\subseteq W_2\subset V$, we write $(W_1\subset W_2)^\perp$ for the relative orthogonal complement of $W_1$ viewed as a subspace of $W_2$ equipped with the induced inner product, so that $W_2\cong W_1\oplus (W_1\subset W_2)^\perp$.
\end{definition}
Note that $(W_1\subset W_2)^\perp = W_1^\perp\cap W_2$ when $W_1\subseteq W_2$ and $W_2$ is equipped with the induced inner product.
\vskip.3in
\subsection{Multi-flags and general position} Recall that a {\it flag} in a vector space $V$ consists of a finite sequence of proper inclusions beginning at $\{0\}$ and ending at $V$:
\[
\underline{W} := \{W_i\}_{0\le i\le m} = \left\{\{0\} = W_0\subset W_1\subset W_2\subset\dots\subset W_m = V\right\}
\]
If $\underline{m}$ denotes the totally ordered set $0 < 1 < 2 <\dots < m$ viewed as a category, $Sub(V)$ the category of subspaces of $V$ and inclusions of such, with $PSub(V)\subset Sub(V)$ the wide subcategory whose morphisms are proper inclusions, then there is an evident bijection
\[
\{\text{flags in } V\}\Leftrightarrow \underset{m\ge 1}{\coprod} Funct(\underline{m}, PSub(V))
\]
We will wish to relax this structure in two different ways. First, one may consider a sequence as above where not all of the inclusions are proper; we will refer to such an object as a {\it semi-flag}. Thus a semi-flag is represented by (and corresponds to) a functor $F:\underline{m}\to Sub(V)$ for some $m$. More generally, we define a {\it multi-flag} in $V$ to be a collection ${\cal F} = \{W_\alpha\subset V\}$ of subspaces of $V$ containing $\{0\}, V$, partially ordered by inclusion, and closed under intersection. It need not be finite.
\vskip.2in
Assume now that $V$ is equipped with an inner product. Given an element $W\subseteq V$ of a multi-flag $\cal F$ associated to $V$, let $S(W) := \{U\in {\cal F}\ |\ U\subsetneq W\}$ be the elements of $\cal F$ that are proper subsets of $W$, and set
\begin{equation}\label{eqn:one}
W_{\cal F} := \left(\left(\displaystyle\sum_{U\in S(W)} U\right) \subset W\right)^\perp
\end{equation}
\begin{definition}\label{def:genpos} For an IP-space $V$ and multi-flag $\cal F$ in $V$, the associated graded of $\cal F$ is the set of subspaces ${\cal F}_* := \{W_{\cal F}\ |\ W\in{\cal F}\}$. We say that $\cal F$ is in \underbar{general position} iff $V$ can be written as a direct sum of the elements of ${\cal F}_*$: $V\cong \displaystyle\bigoplus_{W\in{\cal F}} W_{\cal F}$.
\end{definition}
Note that, as $V\in{\cal F}$, it will always be the case that $V$ can be expressed as a sum of the subspaces in ${\cal F}_*$. The issue is whether that sum is a direct sum, and whether that happens is completely determined by the sum of the dimensions.
\begin{proposition} For any multi-flag $\cal F$ of an IP-space $V$, $\displaystyle\sum_{W\in{\cal F}} dim(W_{\cal F}) \ge dim(V)$. Moreover the two are equal iff $\cal F$ is in general position.
\end{proposition}
\begin{proof} The first claim follows from the fact that $\displaystyle\sum_{W\in{\cal F}} W = V$. Hence the sum of the dimensions on the left must be at least $dim(V)$, and equals $dim(V)$ precisely when the sum is a direct sum.
\end{proof}
\begin{definition} The excess of a multi-flag $\cal F$ of an IP-space $V$ is $e({\cal F}) := \left[\displaystyle\sum_{W\in{\cal F}} dim(W_{\cal F})\right] - dim(V)$.
\end{definition}
\begin{corollary} For any multi-flag $\cal F$, $e({\cal F})\ge 0$ and $e({\cal F}) = 0$ iff $\cal F$ is in general position.
\end{corollary}
Any semi-flag $\cal F$ of $V$ is in general position; this is a direct consequence of the total ordering. Also the multi-flag $\cal G$ formed by a pair of subspaces $W_1, W_2\subset V$ and their common intersection (together with $\{0\}$ and $V$) is always in general position. More generally, we have
\begin{lemma}\label{lemma:2} If ${\cal G}_i$, $i = 1,2$ are two semi-flags in the inner product space $V$ and $\cal F$ is the smallest multi-flag containing ${\cal G}_1$ and ${\cal G}_2$ (in other words, it is the multi-flag generated by these two semi-flags), then $\cal F$ is in general position.
\end{lemma}
\vskip.1in
Let ${\cal G}_i = \{W_{i,j}\}_{0\le j\le m_i}, i = 1,2$. Set $W^{j,k} := W_{1,j}\cap W_{2,k}$. Note that for each $i$, $\{W^{i,k}\}_{0\le k\le m_2}$ is a semi-flag in $W_{1,i}$, with the inclusion maps $W_{1,i}\hookrightarrow W_{1,i+1}$ inducing an inclusion of semi-flags $\{W^{i,k}\}_{0\le k\le m_2}\hookrightarrow \{W^{i+1,k}\}_{0\le k\le m_2}$. By induction on length in the first coordinate we may assume that the multi-flag of $W := W_{1,m_1-1}$ generated by $\wt{\cal G}_1 := \{W_{1,j}\}_{0\le j\le m_1-1}$ and $\wt{\cal G}_2 := \{W\cap W_{2,k}\}_{0\le k\le m_2}$ is in general position. To extend general position to the multi-flag on all of $V$, the induction step allows reduction to considering the case where the first semi-flag has only one middle term:
\begin{claim} Given $W\subseteq V$, viewed as a semi-flag ${\cal G}'$ of $V$ of length 3, and the semi-flag ${\cal G}_2 = \{W_{2,j}\}_{0\le j\le m_2}$ as above, the multi-flag of $V$ generated by ${\cal G}'$ and ${\cal G}_2$ is in general position.
\end{claim}
\begin{proof} The multi-flag $\cal F$ in question is constructed by intersecting $W$ with the elements of ${\cal G}_2$, producing the semi-flag ${\cal G}_2^W := W\cap {\cal G}_2 = \{W\cap W_{2,j}\}_{0\le j\le m_2}$ of $W$, which in turn includes into the semi-flag ${\cal G}_2$ of $V$. Constructed this way, the direct-sum splittings of $W$ induced by the semi-flag $W\cap {\cal G}_2$ and of $V$ induced by the semi-flag ${\cal G}_2$ are compatible: if we write $W_{2,j}$ as $(W\cap W_{2,j})\oplus (W\cap W_{2,j}\subset W_{2,j})^\perp$ for each $j$, then the orthogonal complement of $W_{2,k}$ in $W_{2,k+1}$ is the direct sum of the orthogonal complement of $(W\cap W_{2,k})$ in $(W\cap W_{2,k+1})$ and the orthogonal complement of $(W\cap W_{2,k}\subset W_{2,k})^\perp$ in $(W\cap W_{2,k+1}\subset W_{2,k+1})^\perp$. This yields a direct-sum decomposition of $V$ in terms of the associated graded terms of $\cal F$, completing the proof both of the claim and of the lemma.
\end{proof}
On the other hand, one can construct simple examples of multi-flags which are not - in fact cannot be - in general position, as the following illustrates.
\begin{example} Let $\mathbb R\cong W_i\subset\mathbb R^2$, $1\le i\le 3$, be three pairwise distinct 1-dimensional subspaces of $\mathbb R^2$ intersecting in the origin, and let $\cal F$ be the multi-flag generated by this data. Then $\cal F$ is not in general position.
\end{example}
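Indeed, here ${\cal F} = \{\{0\}, W_1, W_2, W_3, \mathbb R^2\}$; since $S(W_i) = \{\{0\}\}$ for each $i$ while $S(\mathbb R^2) = \{\{0\}, W_1, W_2, W_3\}$, the associated graded consists of $(W_i)_{\cal F} = W_i$ together with $(\mathbb R^2)_{\cal F} = (W_1 + W_2 + W_3\subset\mathbb R^2)^\perp = \{0\}$. The dimensions of these subspaces sum to $3 > 2 = dim(\mathbb R^2)$, so $e({\cal F}) = 1$ and $\cal F$ fails to be in general position, regardless of the inner product chosen.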
\vskip.2in
Given an arbitrary collection of subspaces $T = \{W_\alpha\}$ of an IP-space $V$, the multi-flag generated by $T$ is the smallest multi-flag containing each element of $T$. It can be constructed as the closure of $T$ under the operations i) inclusion of $\{0\}, V$ and ii) taking finite intersections.
\vskip.2in
[Note: Example 1 also illustrates the important distinction between a configuration of subspaces being of {\it finite type} (having finitely many isomorphism classes of configurations), and the stronger property of {\it tameness} (the multi-flag generated by the subspaces is in general position).]
\vskip.2in
A multi-flag $\cal F$ of $V$ is a poset in a natural way; if $V_1,V_2\in {\cal F}$, then $V_1\le V_2$ as elements in $\cal F$ iff $V_1\subseteq V_2$ as subspaces of $V$. If $\cal F$ is a multi-flag of $V$, $\cal G$ a multi-flag of $W$, a {\it morphism} of multi-flags $(L,f):{\cal F}\to {\cal G}$ consists of
\begin{itemize}
\item a linear map from $L:V\to W$ and
\item a map of posets $f:{\cal F}\to {\cal G}$ such that
\item for each $U\in {\cal F}$, $L(U)\subseteq f(U)$.
\end{itemize}
Then $\{multi\mhyphen flags\}$ will denote the category of multi-flags and morphisms of such.
\vskip.2in
If $L:V\to W$ is a linear map of vector spaces and $\cal F$ is a multi-flag of $V$, the multi-flag generated by $\{L(U)\ |\ U\in {\cal F}\}\cup \{W\}$ is a multi-flag of $W$ which we denote by $L({\cal F})$ (or $\cal F$ pushed forward by $L$). In the other direction, if $\cal G$ is a multi-flag of $W$, we write $L^{-1}[{\cal G}]$ for the multi-flag $\{L^{-1}[U]\ |\ U\in {\cal G}\}\cup \{\{0\}\}$ of $V$ (i.e., $\cal G$ pulled back by $L$; as intersections are preserved under taking inverse images, this will be a multi-flag once we include - if needed - $\{0\}$). Obviously $L$ defines morphisms of multi-flags ${\cal F}\xrightarrow{(L,\iota)} L({\cal F})$, $L^{-1}[{\cal G}]\xrightarrow{(L,\iota')} {\cal G}$.
\vskip.3in
\subsection{The local structure of $\cal C$-modules}
Assume first that $M$ is a $\cal WIPC$-module. A {\it multi-flag of $M$} or {\it $M$-multi-flag} is a functor $F:{\cal C}\to \{multi\mhyphen flags\}$ which assigns
\begin{itemize}
\item to each $x\in obj({\cal C})$ a multi-flag $F(x)$ of $M(x)$;
\item to each $\phi_{xy}:M(x)\to M(y)$ a morphism of multi-flags $F(x)\to F(y)$
\end{itemize}
To any $\cal WIPC$-module $M$ we may associate the multi-flag $F_0$ which assigns to each $x\in obj({\cal C})$ the multi-flag $\{\{0\}, M(x)\}$ of $M(x)$. This is referred to as the {\it trivial} multi-flag of $M$.
\vskip.2in
A $\cal WIPC$-module $M$ determines a multi-flag on $M$. Precisely, the {\it local structure} ${\cal F}(M)$ of $M$ is defined recursively at each $x\in obj({\cal C})$ as follows: let $S_1(x)$ denote the set of morphisms of $\cal C$ originating at $x$, and $S_2(x)$ the set of morphisms terminating at $x$, $x\in obj({\cal C})$ (note that both sets contain $Id_x:x\to x$). Then
\vskip.05in
\begin{enumerate}
\item[\underbar{LS1}] ${\cal F}_0(M)(x) =$ the multi-flag of $M(x)$ generated by
\[
\{\ker(\phi_{xy}:M(x)\to M(y))\}_{\phi_{xy}\in S_1(x)}\cup \{im(\phi_{zx} : M(z)\to M(x))\}_{\phi_{zx}\in S_2(x)};
\]
\item[\underbar{LS2}] For $n\ge 0$, ${\cal F}_{n+1}(M)(x) =$ the multi-flag of $M(x)$ generated by
\begin{itemize}
\item[{LS2.1}] $\phi_{xy}^{-1}[W]\subset M(x)$, where $W\in{\cal F}_n(M)(y)$ and $\phi_{xy}\in S_1(x)$;
\item [{LS2.2}] $\phi_{zx}[W]\subset M(x)$, where $W\in{\cal F}_n(M)(z)$ and $\phi_{zx}\in S_2(x)$;
\end{itemize}
\item [\underbar{LS3}]${\cal F}(M)(x) = \varinjlim {\cal F}_n(M)(x)$.
\end{enumerate}
More generally, starting with a multi-flag $F$ on $M$, the local structure of $M$ relative to $F$ is arrived at in exactly the same fashion, but starting in LS1 with the multi-flag generated (at each object $x$) by ${\cal F}_0(M)(x)$ and $F(x)$. The resulting direct limit is denoted ${\cal F}^F(M)$. Thus the local structure of $M$ (without superscript) is the local structure of $M$ relative to the trivial multi-flag on $M$. In almost all cases we will only be concerned with the local structure relative to the trivial multi-flag on $M$.
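Although we do not pursue implementation questions here, the recursion LS1--LS3 is directly computable once the vertex spaces and morphisms of $M$ are specified by matrices. The sketch below is purely illustrative and not part of the formal development: it works over $k = \mathbb Q$, represents a subspace as the column span of a rational matrix (using the sympy package), and all function names are ad hoc. It assembles the generating subspaces of LS1, closes each ${\cal F}_n(M)(x)$ under intersections, and iterates the operations LS2.1 and LS2.2 until nothing new appears or a prescribed number of rounds is exhausted.
\begin{verbatim}
# Illustrative sketch of LS1-LS3 over k = Q (all names ad hoc).
# Vertex spaces are Q^{n_x}; morphisms are sympy rational matrices;
# a subspace is stored as a matrix whose columns span it.
import sympy as sp

def canon(W):                      # canonical key for a column span
    if W.cols == 0: return ()
    R, piv = W.T.rref()
    return tuple(tuple(R.row(i)) for i in range(len(piv)))

def unkey(key, n):                 # spanning matrix from a key
    return sp.Matrix(list(key)).T if key else sp.zeros(n, 0)

def kernel(A):
    ns = A.nullspace()
    return sp.Matrix.hstack(*ns) if ns else sp.zeros(A.cols, 0)

def image(A, W):                   # A(W)
    return A * W if W.cols else sp.zeros(A.rows, 0)

def preimage(A, W):                # A^{-1}[W]: first block of ker[A | -W]
    if W.cols == 0: return kernel(A)
    ns = sp.Matrix.hstack(A, -W).nullspace()
    if not ns: return sp.zeros(A.cols, 0)
    return sp.Matrix.hstack(*[v[:A.cols, :] for v in ns])

def intersect(U, V):               # col(U) intersected with col(V)
    if U.cols == 0 or V.cols == 0: return sp.zeros(U.rows, 0)
    ns = sp.Matrix.hstack(U, -V).nullspace()
    if not ns: return sp.zeros(U.rows, 0)
    return sp.Matrix.hstack(*[U * v[:U.cols, :] for v in ns])

def local_structure(dims, maps, rounds=10):
    # dims: {x: n_x}; maps: {(x, y): matrix of phi_xy}; returns {x: keys}
    F = {x: {canon(sp.zeros(n, 0)), canon(sp.eye(n))}
         for x, n in dims.items()}
    for (x, y), A in maps.items():         # LS1
        F[x].add(canon(kernel(A)))
        F[y].add(canon(image(A, sp.eye(dims[x]))))
    def close(x):                          # close F[x] under intersection
        done = False
        while not done:
            done, ks = True, list(F[x])
            for i in range(len(ks)):
                for j in range(i + 1, len(ks)):
                    k = canon(intersect(unkey(ks[i], dims[x]),
                                        unkey(ks[j], dims[x])))
                    if k not in F[x]: F[x].add(k); done = False
    for _ in range(rounds):                # LS2 iterated; LS3 = stabilization
        for x in dims: close(x)
        changed = False
        for (x, y), A in maps.items():
            for k in list(F[y]):           # LS2.1: pull back along phi_xy
                kk = canon(preimage(A, unkey(k, dims[y])))
                if kk not in F[x]: F[x].add(kk); changed = True
            for k in list(F[x]):           # LS2.2: push forward along phi_xy
                kk = canon(image(A, unkey(k, dims[x])))
                if kk not in F[y]: F[y].add(kk); changed = True
        if not changed: break
    return F
\end{verbatim}
When the local structure of $M$ is stable, the families returned in this way are (canonical-form representatives of) the multi-flags ${\cal F}(M)(x)$, to which the dimension counts of the preceding subsection may then be applied.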
\begin{proposition}\label{prop:invimage} For all $k\ge 1$, $W\in {\cal F}_k(M)(x)$, and $\phi_{zx}:M(z)\to M(x)$, there is a unique maximal element $W'\in {\cal F}_{k+1}(M)(z)$ with $\phi_{zx}(W') = W$.
\end{proposition}
\begin{proof} This is an immediate consequence of property (LS2.1).
\end{proof}
\begin{definition} The local structure of a $\cal WIPC$-module $M$ is the functor ${\cal F}(M)$, which associates to each vertex $x\in obj({\cal C})$ the multi-flag ${\cal F}(M)(x)$.
\end{definition}
A key question arises as to whether the direct limit used in defining ${\cal F}(M)(x)$ stabilizes at a finite stage. For infinite fields $k$ it turns out that this property is related to the existence of {\it holonomy}, as we will see below. For now, we include it as a definition.
\begin{definition} The local structure on $M$ is \underbar{locally stable} at $x\in obj({\cal C})$ iff there exists $N = N_x$ such that ${\cal F}_n(M)(x)\inj {\cal F}_{n+1}(M)(x)$ is the identity map whenever $n\ge N$. It is \underbar{stable} if it is locally stable at each object. It is \underbar{strongly stable} if for all \underbar{finite} multi-flags $F$ on $M$ there exists $N = N(F)$ such that ${\cal F}^F(M)(x) = {\cal F}^F_N(M)(x)$ for all $x\in obj({\cal C})$.
\end{definition}
In almost all applications of this definition we will only be concerned with stability, not the related notion of strong stability. The one exception occurs in the statement and proof of Theorem \ref{thm:6} below.
\vskip.2in
For each $0\le k\le \infty$ and at each object $x$ we may consider the associated graded ${\cal F}_k(M)_*(x)$ of ${\cal F}_k(M)(x)$. Stabilization via direct limit in the construction of ${\cal F}(M)$ yields a multi-flag structure that is preserved under the morphisms of the $\cal C$-module $M$. The following result identifies the effect of a morphism on the associated graded limit ${\cal F}(M)_*$, under the more restrictive hypothesis that $M$ is equipped with an inner product structure (which guarantees that the relative orthogonal complements coming from the associated graded are compatible under the morphisms of $M$).
\begin{theorem}\label{thm:1} Let $M$ be an $\cal IPC$-module with stable local structure. Then for all $k\ge 0$, $x,y,z\in obj({\cal C})$, $W\in {\cal F}(M)(x)$, $\phi_{zx}:M(z)\to M(x)$, and $\phi_{xy}:M(x)\to M(y)$
\begin{enumerate}
\item The morphisms of $M$ and their inverses induce well-defined maps of associated graded sets
\begin{gather*}
\phi_{xy}:{\cal F}(M)_*(x)\to {\cal F}(M)_*(y)\\
\phi_{zx}^{-1}: {\cal F}(M)_*(x)\to {\cal F}(M)_*(z)
\end{gather*}
\item $\phi_{xy}(W)\in {\cal F}(M)(y)$, and either $\phi_{xy}(W_{\cal F}) = \{0\}$, or $\phi_{xy}:W_{\cal F}\xrightarrow{\cong}\phi_{xy}(W_{\cal F}) = \left(\phi_{xy}(W)\right)_{\cal F}$, the element in the associated graded ${\cal F}(M)_*(y)$ induced by $\phi_{xy}(W)$;
\item either $im(\phi_{zx})\cap W_{\cal F} = \{0\}$, or there is a canonically defined element $U_{\cal F} = \left(\phi_{zx}^{-1}[W_{\cal F}]\right)_{\cal F} = \left(\phi_{zx}^{-1}[W]\right)_{\cal F}\in {\cal F}(M)_*(z)$ with $\phi_{zx}:U_{\cal F}\xrightarrow{\cong} W_{\cal F}$.
\end{enumerate}
\end{theorem}
\begin{proof} Stabilization with respect to the operations (LS.1) and (LS.2), as given in (LS.3), implies that for any object $x$, morphisms $\phi_{xy},\phi_{zx}$, and $W\in {\cal F}(M)(x)$, we have $\phi_{xy}(W)\in {\cal F}(M)(y)$ and $\phi_{zx}^{-1}[W]\in {\cal F}(M)(z)$, verifying the first statement. Let $K = \ker(\phi_{xy})\cap W$. Then either $K=W$ or $K$ is a proper subset of $W$. In the former case $\phi_{xy}(W_{\cal F}) = \{0\}$, while in the latter we see (again, by stabilization) that $K\in S(W)$ and so $K\cap W_{\cal F} = \{0\}$, implying that $W_{\cal F}$ maps isomorphically to its image under $\phi_{xy}$. Moreover, in this last case $\phi_{xy}$ will map $S(W)$ surjectively to $S(\phi_{xy}(W))$, implying the equality $\phi_{xy}(W_{\cal F}) = \left(\phi_{xy}(W)\right)_{\cal F}$.
\vskip.2in
Now given $\phi_{zx}:M(z)\to M(x)$, let $U = \phi_{zx}^{-1}[W]\in {\cal F}(M)(z)$. As before, the two possibilities are that $\phi_{zx}(U) = W$ or that $T := \phi_{zx}(U)\subsetneq W$. In the first case, $\phi_{zx}$ induces a surjective map of sets $S(U)\surj S(W)$, and so will map $U_{\cal F}$ surjectively to $W_{\cal F}$. By statement 2. of the theorem, this surjection must be an isomorphism. In the second case we see that the intersection $im(\phi_{zx})\cap W$ is an element of $S(W)$ (as ${\cal F}(M)(x)$ is closed under intersections), and so $W_{\cal F}\cap im(\phi_{zx}) = \{0\}$ by the definition of $W_{\cal F}$.
\end{proof}
\vskip.1in
Using the local structure of $M$, we define the {\it excess} of a $\cal WIPC$-module $M$ as
\[
e(M) = \sum_{x\in obj({\cal C})} e({\cal F}(M)(x))
\]
We say ${\cal F}(M)$ is {\it in general position} at the vertex $x$ iff ${\cal F}(M)(x)$ is in general position as defined above; in other words if $e({\cal F}(M)(x)) = 0$. Thus ${\cal F}(M)$ is {\it in general position} (without restriction) iff $e(M) = 0$. The previous theorem implies
\begin{corollary}\label{cor:2} ${\cal F}(M)$ is {\it in general position} at the vertex $x$ if and only if $e({\cal F}(M)(x)) = 0$. It is in general position (without restriction) iff $e(M) = 0$.
\end{corollary}
Note that as $M(x)$ is finite-dimensional for each $x\in obj({\cal C})$, ${\cal F}(M)(x)$ must be locally stable at $x$ if it is in general position (in fact, general position is a much stronger requirement).
\vskip.2in
Now assume given a $\cal C$-module $M$ without any additional structure. A multi-flag on $M$ is then defined by equipping $M$ with an arbitrary $\cal WIPC$-structure and proceeding as above. Differing choices of weak inner product on $M$ affect the choice of relative orthogonal complements appearing in the associated graded at each object via equation (\ref{eqn:one}). However the constructions in LS1, LS2, and LS3 are independent of the choice of inner product, as are the definitions of excess and stability at an object and also for the module as a whole. So the results stated above for $\cal WIPC$-modules may be extended to $\cal C$-modules. The only result requiring an actual $\cal IPC$-structure is Theorem \ref{thm:1}.
\vskip.5in
\section{Statement and proof of the main results} In discussing our structural results, we first restrict to the case $M$ is an $\cal IPC$-module, and then investigate what properties still hold for more general $\cal WIPC$-modules.
\subsection{Blocks, generalized barcodes, and tame $\cal C$-modules} To understand how blocks and generalized barcodes arise, we first need to identify the type of subcategory on which they are supported. For a connected poset category $\cal C$, its oriented (minimal) graph $\Gamma = \Gamma({\cal C})$ was defined above. A subgraph $\Gamma'\subset\Gamma$ will be called {\it admissible} if
\begin{itemize}
\item it is connected;
\item it is pathwise full: if $v_1e_1v_2e_2\dots v_{k-1}e_{k-1}v_k$ is an oriented path in $\Gamma'$ connecting $v_1$ and $v_k$, and $(v_1=w_1)e'_1w_2e'_2\dots w_{l-1}e'_{l-1}(w_l = v_k)$ is any other path in $\Gamma$ connecting $v_1$ and $v_k$ then the path $v_1=w_1e'_1w_2e'_2\dots w_{l-1}e'_{l-1}w_l$ is also in $\Gamma'$.
\end{itemize}
Any admissible subgraph $\Gamma'$ of $\Gamma$ determines a unique subcategory ${\cal C}'\subset {\cal C}$ for which $\Gamma({\cal C}') = \Gamma'$, and we will call a subcategory ${\cal C}'\subset {\cal C}$ admissible if $\Gamma({\cal C}')$ is an admissible subgraph of $\Gamma({\cal C})$. If $M'\subset M$ is a sub-$\cal C$-module of the $\cal C$-module $M$, its {\it support} will refer to the full subcategory ${\cal C}(M')\subset {\cal C}$ generated by $\{x\in obj({\cal C})\ |\ M'(x)\ne \{0\} \}$. It is easily seen that being a submodule of $M$ (rather than just a collection of subspaces indexed on the objects of $\cal C$) implies that the support of $M'$, if connected, is an admissible subcategory of $\cal C$ in the above sense. A {\it block} will refer to a sub-$\cal C$-module $M'$ of $M$ for which $\phi_{xy}:M'(x)\xrightarrow{\cong} M'(y)$ whenever $x,y\in obj({\cal C}(M'))$ (any morphism between non-zero vertex spaces of $M'$ is an isomorphism). Finally, $M'$ is a {\it generalized barcode} (GBC) for $M$ if it is a block where $dim(M'(x)) = 1$ for all $x\in obj({\cal C}(M'))$.
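For ${\cal C} = \underline{n}$ the admissible subcategories are precisely the subintervals $\{i < i+1 <\dots < j\}$, and a GBC supported on such a subinterval is just an interval module of classical $1$-dimensional persistence; the generalized bar codes considered here extend that notion to arbitrary finite ps-categories.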
\vskip.2in
It is evident that if $M'\subset M$ is a GBC, it is an indecomposable $\cal C$-submodule of $M$. If $\Gamma$ represents an oriented graph, we write $\ov{\Gamma}$ for the underlying unoriented graph. Unlike the particular case of persistence (or more generally zig-zag) modules, blocks occurring as $\cal C$-modules for an arbitrary ps-category may not decompose into a direct sum of GBCs. The following two simple oriented graphs illustrate the obstruction.
\vskip.5in
({\rm D1})\vskip-.4in
\centerline{
\xymatrix{
\bullet\ar[rr]\ar[dd] && \bullet\ar[dd] &&&& \bullet && \bullet\ar[dd]\ar[ll]\\
& \Gamma_1 & &&&& & \Gamma_2 &\\
\bullet\ar[rr] && \bullet &&&& \bullet\ar[uu]\ar[rr] && \bullet
}}
\vskip.2in
For a block represented by the graph $\Gamma_1$ on the left, the fact that $\cal C$ is a poset category implies that, even though the underlying unoriented graph is a closed loop, going once around the loop yields a composition of isomorphisms which is the identity. As a consequence, it is easily seen that a block whose support is an admissible category ${\cal C}'$ with graph $\Gamma({\cal C}') = \Gamma_1$ can be written as a direct sum of GBCs indexed on ${\cal C}'$ (see the lemma below). However, if the graph of the supporting subcategory is $\Gamma_2$ as shown on the right, then the partial ordering imposes no restrictions on the composition of isomorphisms and their inverses, starting and ending at the same vertex. For such a block with base field $\mathbb R$ or $\mathbb C$, the moduli space of isomorphism types of blocks of a given vertex dimension $n$ is non-discrete for all $n>1$ and can be identified with the space of $n\times n$ Jordan normal forms. The essential difference between these two graphs lies in the fact that the category on the left exhibits one initial and one terminal object, while the category on the right exhibits two of each. Said another way, the zig-zag length of the simple closed loop on the left is two, while on the right it is four. We remark that the obstruction here is not simply a function of the underlying unoriented graph, as $\ov{\Gamma}_1 = \ov{\Gamma}_2$ in the above example. A closed loop in $\Gamma({\cal C})$ is an {\it h-loop} if it is able to support a sequence of isomorphisms whose composition going once around, starting and ending at the same vertex, is other than the identity map (``h'' for holonomy). Thus $\Gamma_2$ above exhibits an h-loop. Note that the existence of an h-loop implies the existence of a simple h-loop.
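To make the latter phenomenon concrete, consider a block supported on a category with graph $\Gamma_2$ which assigns $k^2$ to each vertex, the identity map to three of the four edges, and the map $(a,b)\mapsto (a+b,b)$ to the fourth. A decomposition of this block into GBCs would produce two complementary lines in the vertex space at any fixed vertex, each preserved by the composite isomorphism obtained by traversing the loop once; as that composite is conjugate to a non-trivial Jordan block, hence not diagonalizable, no such decomposition exists.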
\vskip.2in
We want explicit criteria identifying precisely when this can happen. One might think that the zig-zag length of a simple closed loop is enough, but this turns out to not be the case. The following illustrates what can happen.
\vskip.5in
({\rm D2})\vskip-.4in
\centerline{
\xymatrix{
& \bullet\ar[rr]\ar[dd] && \bullet\ar[rr]\ar@{-->}[dd] && \bullet\ar[dd]\\
\\
\Gamma({\cal C}'): & \bullet\ar[dd]\ar@{-->}[rr] && \bullet\ar[rr]\ar[dd] && \bullet\\
\\
&\bullet\ar[rr] && \bullet &&
}}
\vskip.2in
Suppose $\cal C$ indexes $3\times 3$ two-dimensional persistence modules (so that $\Gamma({\cal C})$ looks like an oriented two-dimensional $3\times 3$ lattice, with arrows pointing down and also to the right). Suppose ${\cal C}'\subset {\cal C}$ is an admissible subcategory of $\cal C$ with $\Gamma({\cal C}')$ containing the above simple closed curve indicated by the solid arrows. The zig-zag length of the curve is four, suggesting that it might support holonomy and so be a potential h-loop. However, the admissibility condition forces ${\cal C}'$ to also contain the morphisms represented by the dotted arrows, resulting in three copies of the graph $\Gamma_1$ above. Including these morphisms one sees that holonomy in this case is not possible.
\vskip.2in
Given an admissible subcategory ${\cal C}'$ of $\cal C$, we will call ${\cal C}'$ {\it h-free} if $\Gamma({\cal C}')$ does not contain any simple closed h-loops (and therefore no closed h-loops).
\begin{lemma}\label{lemma:3} Any block $M'$ of $M$ whose support ${\cal C}'$ is h-free can be written (non-uniquely) as a finite direct sum of GBCs all having the same support as $M'$.
\end{lemma}
\begin{proof} Fix $x\in obj(supp(M'))$ and a basis $\{\bf{v}_1,\dots,\bf{v}_n\}$ for $M'(x)$. Let $y\in obj(supp(M'))$, and choose a path $xe_1x_1e_2x_2\dots x_{k-1}e_ky$ from $x_0 = x$ to $x_k = y$ in $\ov{\Gamma}(M')$. Each edge $e_j$ is represented by an invertible linear map $\lambda_j = (\phi_{x_{j-1}x_j})^{\pm 1}$, with
\[
\lambda := \lambda_k\circ\lambda_{k-1}\circ\dots\circ\lambda_1:M'(x)\xrightarrow{\cong} M'(y)
\]
As ${\cal C}' = supp(M')$ is h-free, the isomorphism between $M'(x)$ and $M'(y)$ resulting from the above construction is independent of the choice of path in $\ov{\Gamma}(M')$ from $x$ to $y$, and is uniquely determined by the ${\cal C}'$-module $M'$. Hence the basis $\{\bf{v}_1,\dots,\bf{v}_n\}$ for $M'(x)$ determines one for $M'(y)$ given as $\{\lambda(\bf{v}_1),\dots,\lambda(\bf{v}_n)\}$ which is independent of the choice of path connecting these two vertices. In this way the basis at $M'(x)$ may be compatibly extended to all other vertices of ${\cal C}'$, due to the connectivity hypothesis. The result is a system of {\it compatible bases} for the ${\cal C}'$-module $M'$, from which the splitting of $M'$ into a direct sum of GBCs each supported by ${\cal C}'$ follows.
\end{proof}
A $\cal C$-module $M$ is said to be {\it weakly tame} iff it can be expressed as a direct sum of blocks. It is {\it strongly tame} or simply {\it tame} if, in addition, each of those blocks may be further decomposed as a direct sum of GBCs.
\vskip.3in
\subsection{The main results} We first establish the relation between non-zero elements of the associated graded at an object of $\cal C$ and their corresponding categorical support. We assume throughout this section that $M$ is an $\cal IPC$-module with stable local structure.
\vskip.2in
Suppose $W\in {\cal F}(M)(x)$ with $0\ne W_{\cal F}\in {\cal F}(M)_*(x)$. Then $W_{\cal F}$ uniquely determines a subcategory ${\cal C}(W_{\cal F})\subset {\cal C}$ satisfying the following three properties:
\begin{enumerate}
\item $x\in obj({\cal C}(W_{\cal F}))$;
\item For each path $xe_1x_1e_2\dots x_{k-1}e_ky$ in $\Gamma({\cal C}(W_{\cal F}))$ beginning at $x$, with each edge $e_j$ represented by $\lambda_j = (\phi_{x_{j-1}x_j})^{\pm 1}$ ($\phi_{x_{j-1}x_j}$ a morphism in $\cal C$), $W_{\cal F}$ maps isomorphically under the composition $\lambda = \lambda_k\circ\lambda_{k-1}\circ\dots\circ\lambda_1$ to $0\ne \lambda(W_{\cal F})\in {\cal F}(M)_*(y)$;
\item ${\cal C}(W_{\cal F})$ is the largest subcategory of $\cal C$ satisfying properties 1. and 2.
\end{enumerate}
We refer to ${\cal C}(W_{\cal F})$ as the {\it block category} associated to $W_{\cal F}$. It is easy to see that $\varnothing\ne {\cal C}(W_{\cal F})$, and moreover that ${\cal C}(W_{\cal F})$ is admissible as defined above. Now let ${\cal S}({\cal C})$ denote the set of admissible subcategories of $\cal C$. If $x\in obj({\cal C})$ we write ${\cal S}_x{\cal C}$ for the subset of ${\cal S}({\cal C})$ consisting of those admissible ${\cal C}'\subset {\cal C}$ with $x\in obj({\cal C}')$.
\begin{lemma}\label{lemma:4} For each $x\in obj({\cal C})$ and $\cal IPC$-module $M$, the assignment
\begin{gather*}
{\cal A}_x: {\cal F}(M)_*(x)\backslash\{0\}\longrightarrow {\cal S}_x{\cal C}\\
0\ne W_{\cal F}\mapsto {\cal C}(W_{\cal F})
\end{gather*}
defines an injection from ${\cal F}(M)_*(x)\backslash\{0\}$ to the set of admissible subcategories of $\cal C$ which occur as the block category of a non-zero element of ${\cal F}(M)_*(x)$.
\end{lemma}
\begin{proof} The fact that ${\cal C}(W_{\cal F})$ is uniquely determined by $W_{\cal F}$ ensures the map is well-defined. To see that the map is 1-1, we observe that corresponding to each ${\cal C}'\in {\cal S}_x{\cal C}$ is a unique maximal $W\in {\cal F}(M)(x)$ with image-kernel-intersection data determined by the manner in which each vertex $y\in obj({\cal C}')$ connects back to $x$. More precisely, the subspace $W$ is the largest element of ${\cal F}(M)(x)$ satisfying the property that for every
\begin{itemize}
\item $y\in obj({\cal C}')$;
\item zig-zag sequence $p$ of morphisms in ${\cal C}'$ connecting $x$ and $y$;
\item morphism $\phi_{yz}$ in $\cal C$ from $y$ to $z\in obj({\cal C})\backslash obj({\cal C}')$;
\end{itemize}
the pull-back and push-forward of $ker(\phi_{yz})$ along the path back from $M(y)$ to $M(x)$ yields a subspace of $M(x)$ containing $W$. This clearly determines $W$ uniquely; note that the conditions may result in $W = \{0\}$. Restricting to the image of ${\cal A}_x$ we arrive at the desired result.
\end{proof}
Write ${\cal AS}({\cal C})$ for the subset of ${\cal S}({\cal C})$ consisting of those admissible subcategories ${\cal C}'$ for which there exists $x\in obj({\cal C})$ with ${\cal C}'\in im({\cal A}_x)\subset {\cal S}_x{\cal C}$. This lemma, in conjunction with Theorem \ref{thm:1}, implies
\begin{theorem}\label{thm:2} Let $M$ be an $\cal IPC$-module. Each ${\cal C}'\in {\cal AS}({\cal C})$ uniquely determines a block $\cal C$-submodule $M({\cal C}')$ of $M$, where for $x\in obj({\cal C}')$, $M({\cal C}')(x) = $ the unique non-zero element $W_{\cal F}$ of ${\cal F}(M)_*(x)$ for which ${\cal C}(W_{\cal F}) = {\cal C}'$, and $M({\cal C}')(x) = 0$ for $x\notin obj({\cal C}')$.
\end{theorem}
\begin{proof} Fix ${\cal C}'\in {\cal AS}({\cal C})$ and $x\in obj({\cal C}')$. By Theorem \ref{thm:1} and Lemma \ref{lemma:4}, for any $\phi_{xy'}\in Hom({\cal C}')$, $W_{\cal F} := {\cal A}_x^{-1}({\cal C}')\in {\cal F}(M)_*(x)$ maps isomorphically under $\phi_{xy'}$ to $\phi_{xy'}(W_{\cal F})\in {\cal F}(M)_*(y')$.
\vskip.1in
Now any other vertex $y\in\Gamma({\cal C}')$ is connected to $x$ by a zig-zag path of oriented edges. Let $\lambda_{xy}$ represent such a path, corresponding to a composition sequence of morphisms and their inverses. As ${\cal C}'$ is not required to be h-free, the resulting isomorphism between $W_{\cal F}$ and $\lambda_{xy}(W_{\cal F})$ is potentially dependent on the choice of path $\lambda_{xy}$ in $\Gamma({\cal C}')$. However the space itself is not. Moreover the same lemma and theorem also imply that for any $\phi_{xz}\in Hom({\cal C})$ with $z\in obj({\cal C})\backslash obj({\cal C}')$, $W_{\cal F}$ maps to $0$ under $\phi_{xz}$. This is all that is needed to identify an actual submodule of $M$ by the assignments
\begin{itemize}
\item $M({\cal C}')(y) = \lambda_{xy}(W_{\cal F})$ for $\lambda_{xy}$ a zig-zag path between $x$ and $y$ in $\Gamma({\cal C}')$;
\item $M({\cal C}')(z) = 0$ for $z\in obj({\cal C})\backslash obj({\cal C}')$
\end{itemize}
As defined, $M({\cal C}')$ is a block, completing the proof.
\end{proof}
We define the {\it (weakly) tame cover} of $M$ as
\begin{equation}
T(M) = \underset{{\cal C}'\in {\cal AS}({\cal C})}{\bigoplus} M({\cal C}')
\end{equation}
with the projection $p_M:T(M)\surj M$ given on each summand $M({\cal C}')$ by the inclusion provided by the previous theorem. We are now in a position to state the main result.
\begin{theorem}\label{thm:3} An $\cal IPC$-module $M$ is weakly tame iff its excess $e(M) = 0$. In this case the decomposition into a direct sum of blocks is basis-free, depending only on the underlying $\cal IPC$-module $M$, and is unique up to reordering. If in addition $\cal C$ is h-free then $M$ is tame, as each block decomposes as a direct sum of GBCs, uniquely up to reordering after fixing a choice of basis at a single vertex.
\end{theorem}
\begin{proof} The excess at a given vertex $x$ is zero iff the projection map at that vertex is an isomorphism, as the excess is equal to the dimension of the kernel of $p_M$ at $x$. Moreover, if $\cal C$ is h-free then each block further decomposes in the manner described by Lemma \ref{lemma:3}; the precise way in which this decomposition occurs will depend on a choice of basis at a vertex in the support of that block, but once that has been chosen, the basis at each other vertex is uniquely determined. All that remains is to decide the order in which to write the direct sum.
\end{proof}
Note that the excess of $M$ need not be finite. If $\cal C$ is not h-free and $M$ exhibits holonomy at a vertex $x$, then the tame cover of $M$ might be infinite dimensional at $x$, which will make the overall excess infinite. Nevertheless, $T(M)$ in all cases should be viewed as the ``closest'' weakly tame approximation to $M$, which equals $M$ if and only if $M$ itself is weakly tame. Another way to view this proximity is to observe that $T(M)$ and the projection to $M$ are constructed in such a way that $p_M$ induces a global isomorphism of associated graded objects
\[
p_M: {\cal F}(T(M))_*\xrightarrow{\cong} {\cal F}(M)_*
\]
so that $T(M)$ is uniquely characterized up to isomorphism as the weakly tame $\cal C$-module which maps to $M$ by a map which induces an isomorphism of the associated graded local structure.
\vskip.2in
To conclude this subsection we illustrate the efficiency of this approach by giving a geodesic proof of the classical structure theorem for finite 1-dimensional persistence modules. Let us first observe that such a module $M$ may be equipped with an inner product structure; the proof follows easily by induction on the length of $M$. So for the following theorem we may assume such an IP-structure has been given.
\begin{theorem}\label{thm:4} If ${\cal C} \cong \underline{n}$ is the categorical representation of a finite totally ordered set, then any $\cal C$-module $M$ is tame.
\end{theorem}
\begin{proof} By Lemma \ref{lemma:2}, the multi-flag ${\cal F}(M)(x)$ is in general position for each object $x$, implying the excess $e(M) = 0$, so $M$ is weakly tame by the previous theorem. But there are no non-trivial closed zig-zag loops in $\Gamma({\cal C})$, so $\cal C$ is h-free and $M$ is tame.
\end{proof}
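As an illustration of the machinery, let ${\cal C} = \underline{3}$ and let $M$ be given by $M(1) = k\xrightarrow{\iota} M(2) = k^2\xrightarrow{p} M(3) = k$, with $\iota(a) = (a,0)$ and $p(a,b) = b$. At the middle object the local structure is the multi-flag generated by $im(\iota) = \ker(p) = k\oplus 0$, so ${\cal F}(M)(2) = \{\{0\},\ k\oplus 0,\ k^2\}$, with associated graded pieces $k\oplus 0$ and $(k\oplus 0)^\perp = 0\oplus k$ for the standard inner product. The block category of the first piece is supported on $\{1,2\}$ (it is the image of $M(1)$ and maps to $0$ in $M(3)$), while that of the second is supported on $\{2,3\}$; the resulting decomposition recovers the two classical bar codes of $M$, supported on these two subintervals.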
\vskip.3in
\subsection{The GBC vector for $\cal C$-modules} In the absence of an $\cal IP$-structure on the $\cal C$-module $M$, assuming only that $M$ is a $\cal WIPC$-module, we may not necessarily be able to construct a weakly tame cover of $M$, but we can still extract useful numerical information. By the results of Proposition \ref{prop:invimage} and the proof of Theorem \ref{thm:1}, we see that the results of that theorem still hold for the associated graded ${\cal F}(M)_*$, assuming only a $\cal WIPC$-module structure. Moreover, a slightly weaker version of the results of Theorem \ref{thm:2} still applies for this weaker $\cal WIPC$-structure. Summarizing,
\begin{theorem}\label{thm:wipc} Let $M$ be a $\cal WIPC$-module with stable local structure. Then for all $k\ge 0$, $x,y,z\in obj({\cal C})$, $\phi_{zx}:M(z)\to M(x)$, and $\phi_{xy}:M(x)\to M(y)$, the morphisms of $M$ and their inverses induce well-defined maps of associated graded sets
\begin{gather*}
\phi_{xy}:{\cal F}(M)_*(x)\to {\cal F}(M)_*(y)\\
\phi_{zx}^{-1}: {\cal F}(M)_*(x)\to {\cal F}(M)_*(z)
\end{gather*}
Moreover, if $W\in {\cal F}(M)_*(x)$, viewed as a subquotient space of $M(x)$, then either $dim(\phi_{xy}(W)) = dim(W)$ or $dim(\phi_{xy}(W)) = 0$. Similarly, either $dim(\phi_{zx}^{-1}(W)) = dim(W)$ or $dim(\phi_{zx}^{-1}(W)) = 0$. In this way we may, as before, define the support ${\cal C}(W)$ of $W$, which will be an admissible subcategory of $\cal C$. Each ${\cal C}'\in {\cal AS}({\cal C})$ uniquely determines a block $\cal C$-module $M({\cal C}')$, where $M({\cal C}')(x) = $ the unique non-zero element $W$ of ${\cal F}(M)_*(x)$ for which ${\cal C}(W) = {\cal C}'$.
\end{theorem}
The lack of IP-structure means that, unlike the statement of Theorem \ref{thm:2}, we cannot identify the $\cal C$-module $M({\cal C}')$ as an actual submodule of $M$, or even construct a map of $\cal C$-modules $M({\cal C}')\to M$, as $M({\cal C}')$ is derived purely from the associated graded local structure ${\cal F}_*(M)$.
\vskip.2in
Nevertheless, Theorem \ref{thm:wipc} implies the dimension of each of these blocks - given as the dimension at any element in the support - is well-defined, as $dim(M({\cal C}')(x)) = dim(M({\cal C}')(y))$ for any pair $x,y\in obj({\cal C}')$ by the theorem above. The {\it generalized bar code dimension} of $M$ is the vector ${\cal S}({\cal C})\to \mathbb W$ given by
\[
GBCD(M)({\cal C}') =
\begin{cases}
dim(M({\cal C}')) := dim\big(M({\cal C}')(x)\big), x\in obj({\cal C}')\qquad\text{if }{\cal C}'\in {\cal AS}({\cal C})\\
0 \hskip2.68in\text{if }{\cal C}'\notin {\cal AS}({\cal C})
\end{cases}
\]
Finally if $M$ is simply a $\cal C$-module, let $M'$ denote $M$ with a fixed weak inner product structure. Setting
\[
GBCD(M) := GBCD(M')
\]
yields a well-defined function $GBCD:\{{\cal C}$-$ modules\}\to \mathbb W$, as one easily sees that $GBCD(M')$ is independent of the choice of lift of $M$ to a $\cal WIPC$-module; moreover this is an isomorphism invariant of $M$.
\vskip.3in
\subsection{Obstructions to admitting an inner product} The obstruction to imposing an IP-structure on a $\cal C$-module is, in general, non-trivial.
\begin{theorem}\label{thm:obstr} Let ${\cal C} = {\cal C}_2$ be the poset category for which $\Gamma({\cal C}_2) = \Gamma_2$, as given in diagram (D1). Then there exist ${\cal C}_2$-modules $M$ which do not admit an inner product structure.
\end{theorem}
\begin{proof} Label the initial objects of $\cal C$ as $x_1, x_2$, and terminal objects as $y_1, y_2$, with morphisms $\phi_{i,j}:x_i\to y_j$, $1\le i,j\le 2$. For each $(i,j)$ fix an identification $M(x_i) = M(y_j) = \mathbb R$. In terms of this identification, let
\[
M(\phi_{i,j})({\bf v}) =
\begin{cases}
2{\bf v}\quad\text{if } (i,j) = (1,1)\\
{\bf v}\quad\ \text{otherwise}
\end{cases}
\]
The self-map $M(\phi_{1,2})^{-1}\circ M(\phi_{2,2})\circ M(\phi_{2,1})^{-1}\circ M(\phi_{1,1}): M(x_1)\to M(x_1)$ is given as scalar multiplication by $2$. There is no norm on $\mathbb R$ for which this map is norm-preserving; hence there cannot be any collection of inner products $\langle -,-\rangle$ on the spaces $M(x_i)$, $M(y_j)$ giving $M$ the structure of an $\cal IPC$-module.
\end{proof}
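The arithmetic behind this obstruction is easy to check directly. The following minimal sketch (in Python, purely illustrative; the module data is exactly that of the proof above) composes the four $1\times 1$ maps around the zig-zag loop and records why no choice of norms can make them all isometric.
\begin{verbatim}
# Holonomy of the C_2-module constructed in the proof above:
# phi[(i,j)] is the scalar representing M(phi_{i,j}): M(x_i) -> M(y_j).
phi = {(1, 1): 2.0, (1, 2): 1.0, (2, 1): 1.0, (2, 2): 1.0}

# composite M(phi_{1,2})^{-1} o M(phi_{2,2}) o M(phi_{2,1})^{-1} o M(phi_{1,1})
holonomy = (1/phi[(1, 2)]) * phi[(2, 2)] * (1/phi[(2, 1)]) * phi[(1, 1)]
print(holonomy)   # 2.0

# If w[x_i], w[y_j] were the squared norms of a fixed unit vector in inner
# products making every phi_{i,j} an isometry, then
# w[y_j] * phi[(i,j)]**2 = w[x_i] for all i, j, and these four relations
# force holonomy**2 = 1.  Since the holonomy equals 2, no such inner
# products can exist.
\end{verbatim}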
More generally, we see that
\begin{theorem} If $\cal C$ admits holonomy, then there exist $\cal C$-modules which do not admit the structure of an inner product. Moreover, the obstruction to admitting an inner product is an isomorphism invariant.
\end{theorem}
However, for an important class of $\cal C$-modules the obstruction vanishes. An $n$-dimensional persistence module is defined as a $\cal C$-module where $\cal C$ is an $n$-dimensional {\it persistence category}, i.e., one isomorphic to $\underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$, where $\underline{m_p}$ is the categorical representation of the totally ordered set $\{1 < 2 < \dots < m_p\}$.
\begin{theorem} Any (finite) $n$-dimensional persistence module admits an IP-structure.
\end{theorem}
\begin{proof} It was already observed above that the statement is true for ordinary 1-dim.~persistence modules. So we may proceed by induction, assuming $n > 1$ and that the statement holds in dimensions less than $n$. Before proceeding we record the following useful lemmas. Let ${\cal C}[1]$ denote the categorical representation of the poset $\{0 < 1\}$, and let ${\cal C}[m] = \prod_{i=1}^m{\cal C}[1]$. This is a poset category with objects $m$-tuples $(\varepsilon_1,\dots,\varepsilon_m)$ and a unique morphism $(\varepsilon_1,\dots,\varepsilon_m)\to (\varepsilon'_1,\dots,\varepsilon'_m)$ iff $\varepsilon_j\le \varepsilon'_j, 1\le j\le m$. The oriented graph $\Gamma({\cal C}[m])$ may be viewed as the oriented 1-skeleton of a simplicial $m$-cube. Write $t$ for the terminal object $(1,1,\dots,1)$ in ${\cal C}[m]$, and let ${\cal C}[m,0]$ denote the full subcategory of ${\cal C}[m]$ on objects $obj({\cal C}[m])\backslash \{t\}$.
\begin{lemma} Let $M$ be a ${\cal C}[m]$-module, and let $M(0) = M|_{{\cal C}[m,0]}$ be the restriction of $M$ to the subcategory ${\cal C}[m,0]$. Then any inner product structure on $M(0)$ may be extended to one on $M$.
\end{lemma}
\begin{proof} Let $M'$ be the ${\cal C}[m]$-module defined by
\begin{align*}
M'|_{{\cal C}[m,0]} &= M|_{{\cal C}[m,0]}\\
M'(t) &= \underset{{\cal C}[m,0]}{colim}\ M'
\end{align*}
with the map $M'(\phi_{xt})$ given by the unique map to the colimit when $x\in obj({\cal C}[m,0])$. The inner product on $M(0)$ extends to a unique inner product on $M'$. We may then choose an inner product on $M(t)$ so that the unique morphism $M'(t)\to M(t)$ (determined by $M$) lies in $(IP/k)$. Fixing this inner product on $M(t)$ gives $M$ an IP-structure compatible with the given one on $M(0)$.
\end{proof}
For evident reasons we will refer to this as a {\it pushout extension} of the inner product. More generally, iterating the same line of argument yields
\begin{lemma}\label{lemma:5} Let $M$ be a ${\cal C}[m]$-module and $\wt{M} = M|_{{\cal C}'}$ where ${\cal C}'$ is an admissible subcategory of ${\cal C}[m]$ containing the initial object. Then any IP-structure on $\wt{M}$ admits a compatible extension to $M$.
\end{lemma}
Continuing with the proof of the theorem, let ${\cal C} = \underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$ with $\underline{m_p}$ the categorical representation of $\{1 < 2 < \dots < m_p\}$ as above. Let ${\cal C}_q = \underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_{n-1}}\times \{1 < 2 < \dots < q\}$, viewed as a full subcategory of $\cal C$. Given a $\cal C$-module $M$, let $M_i$ be the ${\cal C}_i$-module constructed as the restriction of $M$ to ${\cal C}_i$. By induction on dimension, we may assume $M_1$ has been equipped with an IP-structure. By induction on the last index, assume that this IP-structure has been compatibly extended to $M_i$. Now $\Gamma({\cal C}_{i+1})$ can be viewed as being constructed from $\Gamma({\cal C}_i)$ via a sequence of $m = m_1m_2\dots m_{n-1}$ concatenations, where each step concatenates the previous graph with the graph $\Gamma({\cal C}[n])$ along an admissible subgraph of $\Gamma({\cal C}[n])$ containing the initial vertex. Denote this inclusive sequence of subgraphs by $\{\Gamma_\alpha\}_{1\le \alpha\le m}$; for each $\alpha$ let ${\cal C}_{\alpha}$ be the subcategory of ${\cal C}_{i+1}$ with $\Gamma({\cal C}_\alpha) = \Gamma_\alpha$. Finally, let $N_\alpha$ denote the restriction of $M$ to ${\cal C}_\alpha$, so that $N_1 = M_i$ and $N_m = M_{i+1}$. Then $N_1$ comes equipped with an IP-structure, and by Lemma \ref{lemma:5} an IP-structure on $N_j$ admits a pushout extension to one on $N_{j+1}$ for each $1\le j\le (m-1)$. Induction in this coordinate then implies the IP-structure on $M_i$ can be compatibly extended (via iterated pushouts) to one on $M_{i+1}$, completing the induction step. As $M_{m_n} = M$, this completes the proof of the theorem.
\end{proof}
\vskip.3in
\subsection{h-free modules} When is an indexing category $\cal C$ h-free? To better understand this phenomenon, we note that the graph $\Gamma_1$ in diagram (D1) - and the way it appears again in diagram (D2) - suggests it may be viewed from the perspective of homotopy theory: define an {\it elementary homotopy} of a closed zig-zag loop $\gamma$ in $\Gamma({\cal C})$ to be one which performs the following replacements in either direction
\vskip.5in
({\rm D3})\vskip-.4in
\centerline{
\xymatrix{
A\ar[rr]\ar[dd] && B && && && B\ar[dd]\\
&& & \ar@{<=>}[rr]& && &&\\
C && && && C\ar[rr] && D
}}
\vskip.2in
In other words, if $C\leftarrow A\rightarrow B$ is a segment of $\gamma$, we may replace $\gamma$ by $\gamma'$ in which the segment $C\leftarrow A\rightarrow B$ is replaced by $C\rightarrow D\leftarrow B$ with the rest of $\gamma$ remaining intact; a similar description applies in the other direction. We do not require the arrows in the above diagram to be represented by atomic morphisms, simply oriented paths between vertices.
\begin{lemma} If a zig-zag loop $\gamma$ in $\Gamma({\cal C})$ is equivalent, by a sequence of elementary homotopies, to a collection of simple closed loops of type $\Gamma_1$ as appearing in (D1), then $\gamma$ is h-free. If this is true for all zig-zag loops in $\Gamma({\cal C})$ then $\cal C$ itself is h-free.
\end{lemma}
\begin{proof} Because $\Gamma_1$ has no holonomy, replacing the connecting segment between B and C by moving in either direction in diagram (D3) does not change the holonomy of the closed path. Thus, if by a sequence of such replacements one reduces to a connected collection of closed loops of type $\Gamma_1$, the new loop - hence also the original loop - cannot have any holonomy.
\end{proof}
Call $\cal C$ {\it strongly h-free} if every zig-zag loop in $\Gamma({\cal C})$ satisfies the hypothesis of the above lemma. Given $n$ ps-categories ${\cal C}_1, {\cal C}_2,\dots,{\cal C}_n$, the graph of the $n$-fold cartesian product is given as
\[
\begin{split}
\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n) &= N_1({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)\\
&= diag(N({\cal C}_1)\times N({\cal C}_2)\times\dots\times N({\cal C}_n))_1\\
&= diag(\Gamma({\cal C}_1)\times\Gamma({\cal C}_2)\times\dots\times\Gamma({\cal C}_n))
\end{split}
\]
the oriented 1-skeleton of the diagonal of the product of the oriented graphs of each category. Of particular interest are $n$-dimensional persistence categories, as defined above.
\begin{theorem}\label{thm:5} Finite $n$-dimensional persistence categories are strongly h-free.
\end{theorem}
\begin{proof} The statement is trivially true for $n=1$ (there are no simple closed loops), so assume $n\ge 2$. Let ${\cal C}_i = \underline{m_i}$, $1\le i\le n$.
\begin{claim} The statement is true for $n=2$.
\end{claim}
\begin{proof} Given a closed zig-zag loop $\gamma$ in $\Gamma({\cal C}_1\times {\cal C}_2)$, we may assume ${\bf a} = (a_1,a_2)$ are the coordinates of an initial vertex of the loop. We orient $\Gamma({\cal C}_1\times {\cal C}_2)$ so that it moves to the right in the first coordinate and downwards in the second coordinate, viewed as a lattice in $\mathbb R^2$. As it is two-dimensional, we may assume that $\gamma$ moves away from $\bf a$ by a horizontal path to the right of length at least one, and a vertical downwards path of length also at least one. That means we may apply an elementary homotopy to the part of $\gamma$ containing $\bf a$ as indicated in diagram (D3) above, identifying $\bf a$ with the vertex ``A" in the diagram, and replacing $C\leftarrow A\rightarrow B$ with $C\rightarrow D\leftarrow B$. If $D$ is already a vertex in $\gamma$, the result is a single simple zig-zag loop of type $\Gamma_1$, joined at $D$ with a closed zig-zag loop of total length less than that of $\gamma$. By induction on total length, both of these loops are h-free, hence so is the original $\gamma$. In the second case, $D$ was not in the original loop $\gamma$. In this case the total length doesn't change, but the total area enclosed by the curve (viewed as a closed curve in $\mathbb R^2$) decreases. By induction on total bounded area, the curve is h-free in this case as well, completing the proof of the claim.
\end{proof}
Continuing with the proof of the theorem, we assume $n > 2$, and that we are given a zig-zag loop $\gamma$ in $\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)$. From the above description we may apply a sequence of elementary homotopies in the first two coordinates to yield a zig-zag loop $\gamma'$ in $\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)$ which is equivalent to $\gamma$ by elementary homotopies, but in which the first two coordinates are constant. The theorem follows by induction on $n$.
\end{proof}
We conclude this subsection with an illustration of how holonomy can prevent stability of the local structure over an infinite field. Consider the indexing category $\cal C$ whose graph $\Gamma({\cal C})$ is
\vskip.5in
({\rm D4})\vskip-.4in
\centerline{
\xymatrix{
\bullet && y\ar[dd]\ar[ll] && x\ar[ll]\\
& \Gamma_2 & &&\\
\bullet\ar[uu]\ar[rr] && \bullet &&
}}
\vskip.2in
where the part of the graph labeled $\Gamma_2$ is as in (D1). Suppose the base field is $k = \mathbb R$ and $M$ is the $\cal C$-module which assigns the vector space $\mathbb R^2$ to each vertex in $\Gamma_2$, and assigns $\mathbb R$ to $x$. Each arrow in the $\Gamma_2$-part of the graph is sent to an isomorphism, chosen so that going once around the simple closed zig-zag loop is represented by an element of $SO(2)\cong S^1$ of infinite order (i.e., an irrational rotation). Let $M(x)$ map to $M(y)$ by an injection. In such an arrangement, the local structure of $M$ at the vertex $y$, or at the other three vertices of $\cal C$ lying in the graph $\Gamma_2$, never stabilizes.
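A quick numerical sketch makes this failure of stabilization concrete. In the fragment below (illustrative only; we take the loop holonomy to be rotation by one radian, an irrational multiple of $\pi$, and take the image of $M(x)$ in $M(y)$ to be the $x$-axis) the lines obtained by transporting that image repeatedly around the loop are pairwise distinct, so the multiflag at $y$ keeps refining.
\begin{verbatim}
import numpy as np

theta = 1.0                                   # rotation angle; theta/pi irrational
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])                      # spans the image of M(x) inside M(y)
lines = set()
for k in range(200):
    # a line through the origin is determined by its angle modulo pi
    lines.add(round(np.arctan2(v[1], v[0]) % np.pi, 10))
    v = R @ v                                 # transport once around the loop

print(len(lines))                             # 200: all transported lines differ
\end{verbatim}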
\vskip.3in
\subsection{Modules with stable local structure} Stability of the local structure can be verified directly in certain important cases. We have given the definition of an $n$-dimensional persistence category above. This construction admits a natural zig-zag generalization. Write \underbar{zm} for any poset of the form $\{1\ R_1\ 2\ R_2\ 3\dots (m-1)\ R_{m-1}\ m\}$ where $R_i = $ ``$\le$" or ``$\ge$" for each $i$. A zig-zag module of length $m$, as defined in \cite{cd}, is a functor $M:$ \underbar{zm}$\to (vect/k)$ for some choice of zig-zag structure on the underlying set of integers $\{1,2,\dots,m\}$. More generally, an $n$-dimensional zig-zag category $\cal C$ is one isomorphic to ${\rm \underline{zm_1}}\times{\rm \underline{zm_2}}\times\dots{\rm \underline{zm_n}}$ for some choice of $\rm \underline{zm_i}$, $1\le i\le n$, and a finite $n$-dimensional zig-zag module is defined to be a functor
\[
M : {\rm \underline{zm_1}}\times{\rm \underline{zm_2}}\times\dots{\rm \underline{zm_n}}\to (vect/k)
\]
for some sequence of positive integers $m_1,m_2,\dots,m_n$ and choice of zig-zag structure on each corresponding underlying set. As with $n$-dimensional persistence modules, $n$-dimensional zig-zag modules may be viewed as a zig-zag diagram of $(n-1)$-dimensional zig-zag modules in essentially $n$ different ways. The proof of the next theorem illustrates the usefulness of strong stability.
\begin{theorem}\label{thm:6} Finite $n$-dimensional zig-zag modules have strongly stable local structure for all $n\ge 0$.
\end{theorem}
\begin{proof} We will first consider the case of $n$-dimensional persistence modules. We say an $n$-dimensional persistence category $\cal C$ has multi-dimension $(m_1,m_2,\dots,m_n)$ if $\cal C$ is isomorphic to $\underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$; note that this $n$-tuple is a well-defined invariant of the isomorphism class of $\cal C$, up to reordering. We may therefore assume the dimensions $m_i$ have been arranged in non-increasing order. We assume the vertices of $\Gamma({\cal C})$ have been labeled with multi-indices $(i_1,i_2,\dots,i_n), 1\le i_j\le m_j$, so that an oriented path in $\Gamma({\cal C})$ from $(i_1,i_2,\dots,i_n)$ to $(j_1,j_2,\dots,j_n)$ (corresponding to a morphism in $\cal C$) exists iff $i_k\le j_k, 1\le k\le n$. We will reference the objects of $\cal C$ by their multi-indices. The proof is by induction on dimension; the base case $n=0$ is trivially true as there is nothing to prove.
\vskip.2in
Assume then that $n\ge 1$. For $1\le i\le j\le m_n$, let ${\cal C}[i,j]$ denote the full subcategory of $\cal C$ on objects $(k_1,k_2,\dots,k_n)$ with $i\le k_n\le j$, and let $M[i,j]$ denote the restriction of $M$ to ${\cal C}[i,j]$. Let ${\cal F}_1$ resp.~${\cal F}_2$ denote the local structures on $M[1,m_n-1]$ and $M[m_n]$ respectively; by induction on the cardinality of $m_n$ we may assume these local structures are stable with stabilization indices $N_1,N_2$. Let $\phi_i:M[i]\to M[i+1]$ be the structure map from level $i$ to level $(i+1)$ in the $n$th coordinate. Then define $\phi_\bullet : M[1,m_n-1]\to M[m_n]$ to be the morphism of $n$-dimensional persistence modules which on $M[i]$ is given by the composition
\[
M[i]\xrightarrow{\phi_i} M[i+1]\xrightarrow{\phi_{i+1}}\dots M[m_n-1]\xrightarrow{\phi_{m_n-1}} M[m_n]
\]
Define a multi-flag on $M[1,m_n-1]$ by ${\cal F}_1^* := \phi_\bullet^{-1}[{\cal F}_2]$ and on $M[m_n]$ by ${\cal F}_2^* := \phi_\bullet ({\cal F}_1)$. By induction on length and dimension we may assume that $M[1,m_n-1]$ and $M[m_n]$ have local structures which stabilize strongly (we note that $M[m_n]$ is effectively an $(n-1)$-dimensional persistence module). As these multi-flags are finite, we have that
\begin{itemize}
\item the restricted local structures ${\cal F}_i$ are stable (noted above);
\item the local structure of $M[1,m_n-1]$ is stable relative to ${\cal F}_1^*$;
\item the local structure of $M[m_n]$ is stable relative to ${\cal F}_2^*$.
\end{itemize}
We may then choose $N$ so that in each of the three itemized cases, stabilization has been achieved by the $N^{th}$ stage. Let $\cal G$ be the multi-flag on $M$ which on $M[1,m_n-1]$ is the local structure relative to ${\cal F}_1^*$ and on $M[m_n]$ is the local structure relative to ${\cal F}_2^*$. Then $\cal G$ is the local structure on $M$, and has been achieved after at most $2N$ stages starting with the trivial semi-flag on $M$. This implies $M$ has stable local structure. To verify the induction step for the statement that $M$ has strongly stable local structure, let $F$ be a finite multi-flag on $M$. Let $F_1$ be its restriction to $M[1,m_n-1]$, and $F_2$ its restriction to $M[m_n]$. Then let ${\cal F}_i^{**}$ denote the multi-flag generated by ${\cal F}_i^*$ and $F_i$. Proceeding with the same argument as before yields a multi-flag ${\cal G}^*$ achieved at some finite stage which represents the local structure of $M$ relative to $F$, completing the induction step for persistence modules.
\vskip.2in
In the more general case that one starts with a finite, $n$-dimensional zig-zag module $M$, the argument is essentially identical but with one adjustment. Representing $M$ as
\[
M[1]\leftrightarrow M[2]\leftrightarrow \dots M[m_n-1]\leftrightarrow M[m_n]
\]
where ``$\leftrightarrow$" indicates either ``$\leftarrow$" or ``$\rightarrow$", the multi-flags ${\cal F}_i^*$ are defined on $M[1,m_n-1]$ and $M[m_n]$ respectively by starting with the stabilized local structure on the other submodule, and then extending by either pulling back or pushing forward as needed to the other. The rest of the induction step is the same, as is the basis step when $n=0$ and there are no morphisms.
\end{proof}
\vskip.2in
The above discussion applies to arbitrary fields; in this case, as we have seen, it is possible that the local structure fails to be stable. However, if the base field $k$ is finite, then the finiteness of $\cal C$ together with the finite dimensionality of a $\cal C$-module $M$ at each vertex implies that any $\cal C$-module $M$ over $k$ is a finite set. In this case, the infinite refinement of ${\cal F}(M)$ that must occur in order to prevent stabilization at some finite stage is no longer possible. Hence
\begin{theorem} Assume the base field $k$ is finite. Then for all (finite) poset categories $\cal C$ and $\cal C$-modules $M$, $M$ has stable local structure.
\end{theorem}
\vskip.5in
\section{Geometrically based $\cal C$-modules} A $\cal C$-module $M$ is said to be {\it geometrically based} if $M = H_n(F)$ for some positive integer $n$, where $F:{\cal C}\to {\cal D}$ is a functor from $\cal C$ to a category $\cal D$, equalling either
\begin{itemize}
\item {\bf f-s-sets} - the category of simplicial sets with finite skeleta and morphisms of simplicial sets, or
\item {\bf f-s-com} - the category of finite simplicial complexes and morphisms of simplicial complexes.
\end{itemize}
Almost all $\cal C$-modules that arise in applications are of this type. A central question, then, is whether or not such modules admit an inner product structure of the type needed for the above structure theorems to hold. We show that the obstruction to imposing an IP-structure on geometrically based modules is in general non-trivial, by means of an explicit example given below. On the other hand, all geometrically based $\cal C$-modules admit a presentation by $\cal IPC$-modules. In what follows we will restrict ourselves to the category {\bf f-s-sets}, as it is slightly easier to work in (although all results carry over to {\bf f-s-com}).
\subsection{Cofibrant replacement} Any $\cal C$-diagram in {\bf f-s-sets} can be cofibrantly replaced, up to weak homotopical transformation. Precisely,
\begin{theorem} If $F:{\cal C}\to$ {\bf f-s-sets}, then there is a $\cal C$-diagram $\wt{F}:{\cal C}\to$ {\bf f-s-sets} and a natural transformation $\eta:\wt{F}\xrightarrow{\simeq} F$ which is a weak equivalence at each object, where $\wt{F}(\phi_{xy})$ is a closed cofibration (inclusion of simplicial sets) for all morphisms $\phi_{xy}$\footnote{The proof following is a minor elaboration of an argument communicated to us by Bill Dwyer \cite{bd}.}.
\end{theorem}
\begin{proof} The simplicial mapping cylinder construction $Cyl(_-)$ applied to any morphism in {\bf f-s-sets} verifies the statement of the theorem in the simplest case $\cal C$ consists of two objects and one non-identity morphism. Suppose $\cal C$ has $n$ objects; we fix a total ordering on $obj({\cal C})$ that refines the partial ordering: $\{x_1 \prec x_2 \prec \dots \prec x_n\}$ where if $\phi_{x_i x_j}$ is a morphism in $\cal C$ then $i\le j$ (but not necessarily conversely). Let ${\cal C}(m)$ denote the full subcategory of $\cal C$ on objects $x_1,\dots,x_m$, with $F_m = F|_{{\cal C}(m)}$. By induction, we may assume the statement of the theorem for $F_m:{\cal C}(m)\to$ {\bf f-s-sets}, with cofibrant lift denoted by $\wt{F}_m$; with $\eta_m:\wt{F}_m\xrightarrow{\simeq} F_m$.
\vskip.2in
Now let ${\cal D}(m)$ denote the slice category ${\cal C}/x_{m+1}$; as ``$\prec$" is a refinement of the poset ordering ``$<$", the image of the forgetful functor $P_m:{\cal D}(m)\to {\cal C}; (y\to x_{m+1})\mapsto y$ lies in ${\cal C}(m)$. And as $\cal C$ is a poset category, the collection of morphisms $\{\phi_{y x_{m+1}}\}$ uniquely determine a map
\[
f_m : \underset{{\cal D}(m)}{colim}\ \wt{F}_m\circ P_m\xrightarrow{\eta_m} \underset{{\cal D}(m)}{colim}\ F_m\circ P_m \to F(x_{m+1})
\]
Define $\wt{F}_{m+1}:{\cal C}(m+1)\to$ {\bf f-s-sets} by
\begin{itemize}
\item $\wt{F}_{m+1}|_{{\cal C}(m)} = \wt{F}_m$;
\item $\wt{F}_{m+1}(x_{m+1}) = Cyl(f_m)$;
\item If $\phi_{x x_{m+1}}$ is a morphism from $x\in obj({\cal C}(m))$ to $x_{m+1}$, then
\[
\wt{F}_{m+1}(\phi_{x x_{m+1}}):\wt{F}_{m}(x) = \wt{F}_{m+1}(x)\to \wt{F}_{m+1}(x_{m+1})
\]
is given as the composition
\[
\wt{F}_{m}(x) = \wt{F}_m\circ P_m(x\xrightarrow{\phi_{x x_{m+1}}} x_{m+1})\hookrightarrow
\underset{{\cal D}(m)}{colim}\ \wt{F}_m\circ P_m\hookrightarrow Cyl(f_m) = \wt{F}_{m+1}(x_{m+1})
\]
\end{itemize}
where the first inclusion into the colimit over ${\cal D}(m)$ is induced by the inclusion of the object \newline
$(x\xrightarrow{\phi_{x x_{m+1}}} x_{m+1})\hookrightarrow obj({\cal D}(m))$. As all morphisms in ${\cal D}(m)$ map to simplicial inclusions under $\wt{F}_m\circ P_m$ the resulting map of $\wt{F}_m(x)$ into the colimit will also be a simplicial inclusion. Finally, the natural transformation $\eta_m:\wt{F}_m\to F_m$ is extended to $\eta_{m+1}$ on $\wt{F}_{m+1}$ by defining $\eta_{m+1}(x_{m+1}): \wt{F}_{m+1}(x_{m+1})\to F_{m+1}(x_{m+1})$ as the natural collapsing map $Cyl(f_m)\surj F(x_{m+1})$, which has the effect of making the diagram
\centerline{
\xymatrix{
\wt{F}_{m+1}(x)\ar[rr]^{\wt{F}_{m+1}(\phi_{xy})}\ar[dd]^{\eta_{m+1}(x)} && \wt{F}_{m+1}(y)\ar[dd]^{\eta_{m+1}(y)}\\
\\
F_{m+1}(x)\ar[rr]^{F_{m+1}(\phi_{xy})} && F_{m+1}(y)
}}
\vskip.2in
commute for morphisms $\phi_{xy}\in Hom({\cal C}(m+1))$. This completes the induction step, and the proof.
\end{proof}
\begin{corollary}\label{cor:pres} Any geometrically based $\cal C$-module $M$ admits a presentation by $\cal C$-modules $N_1\inj N_2\surj M$ where $N_i$ is an $\cal IPC$-module and $N_1\inj N_2$ is an isometric inclusion of $\cal IPC$-modules.
\end{corollary}
\begin{proof} By the previous result and the homotopy invariance of homology, we may assume $M = H_n(F)$ where $F :{\cal C}\to$ {\bf i-f-s-sets}, the subcategory of {\bf f-s-sets} on the same set of objects, but where all morphisms are simplicial set injections. In this case, for each object $x$, $C_n(F(x)) = C_n(F(x);k)$ admits a canonical inner product determined by the natural basis of $n$-simplices $F(x)_n$, and each morphism $\phi_{xy}$ induces an injection of basis sets $F(x)_n\inj F(y)_n$, resulting in an isometric inclusion $C_n(F(x))\inj C_n(F(y))$. In this way the functor $C_n(F) := C_n(F;k):{\cal C}\to (vect/k)$ inherits a natural $\cal IPC$-module structure. If $Q$ is an $\cal IPC$-module where all of the morphisms are isometric injections, then any $\cal C$-submodule $Q'\subset Q$, equipped with the same inner product, is an $\cal IPC$-submodule of $Q$. Now $C_n(F)$ contains the $\cal C$-submodules $Z_n(F)$ ($n$-cycles) and $B_n(F)$ ($n$-boundaries); equipped with the induced inner product the inclusion $B_n(F)\hookrightarrow Z_n(F)$ is an isometric inclusion of $\cal IPC$-modules, for which $M$ is the cokernel $\cal C$-module.
\end{proof}
[Note: The results for this subsection have been stated for {\bf f-s-sets}; similar results can be shown for {\bf f-s-com} after fixing a systematic way of representing the mapping cylinder of a map of simplicial complexes as a simplicial complex; this typically involves barycentrically subdividing.]
\vskip.3in
\subsection{Geometrically realizing an IP-obstruction} As we saw in Theorem \ref{thm:obstr}, the ${\cal C}_2$-module
\vskip.5in
({\rm D5})\vskip-.4in
\centerline{
\xymatrix{
\mathbb R && \mathbb R\ar[dd]^{1}\ar[ll]_{2}\\
\\
\mathbb R\ar[uu]^{1}\ar[rr]_{1} && \mathbb R
}}
\vskip.2in
does not admit an IP-structure. We note the same diagram can be formed with $S^1$ in place of $\mathbb R$:
\vskip.5in
({\rm D6})\vskip-.4in
\centerline{
\xymatrix{
S^1 && S^1\ar[dd]^{1}\ar[ll]_{2}\\
\\
S^1\ar[uu]^{1}\ar[rr]_{1} && S^1
}}
\vskip.2in
Here ``$2:S^1\to S^1$" represents the usual self-map of $S^1$ of degree $2$. This diagram can be realized up to homotopy by a diagram of simplicial complexes and simplicial maps as follows: let $T_1 = \partial(\Delta^2)$ denote the standard triangulation of $S^1$, and let $T_2$ be the barycentric subdivision of $T_1$. We may form the ${\cal C}_2$-diagram in {\bf f-s-com}
\vskip.5in
({\rm D7})\vskip-.4in
\centerline{
\xymatrix{
T_1 && T_2\ar[dd]^{f_1}\ar[ll]_{f_2}\\
\\
T_1\ar[uu]^{1}\ar[rr]_{1} && T_1
}}
\vskip.2in
The map $f_2$ is the triangulation of the top map in (D6), while $f_1$ is the simplicial map which collapses every other edge to a point. The geometric realization of (D7) agrees up to homotopy with (D6). Of course this diagram of simplicial complexes can also be viewed as a diagram in {\bf f-s-sets}. Applying $H_1(_-;\mathbb Q)$ to diagram (D7) we have
\begin{theorem} There exist geometrically based $\cal C$-modules with domain category {\bf f-s-com} (and hence also {\bf f-s-sets}) which do not admit an IP-structure.
\end{theorem}
In this way we see that the presentation result of Corollary \ref{cor:pres} is, in general, the best possible in terms of representing a geometrically based $\cal C$-module in terms of modules equipped with an $\cal IPC$-structure.
\vskip.5in
\section*{Open questions}
\subsubsection*{If $\cal C$ is h-free, does every $\cal C$-module admit an inner product structure?} More generally, what are necessary and sufficient conditions on the indexing category $\cal C$ that guarantee the existence of an inner product (IP) structure on any $\cal C$-module $M$?
\subsubsection*{If $\cal C$ is h-free, does every $\cal C$-module $M$ have stable local structure?} In other words (as with IP-structures), is holonomy the only obstruction to ${\cal F}(M)$ stabilizing at some finite stage? One is tempted to conjecture that the answer is ``yes"; however, the only evidence so far is the lack of examples of non-stable local structures occurring in the absence of holonomy, and the difficulty of even imagining how this could happen.
\vskip.3in
\subsubsection*{Does h-free imply strongly h-free?} Again, this question is based primarily on the absence, so far, of any counterexample illustrating the difference between these two properties.
\vskip.3in
\subsubsection*{If $M$ has stable local structure, does it have strongly stable local structure?} Obviously, strongly stable implies stable. The issue is whether these conditions are, for some reason, equivalent. If not, then a more refined version of the question would be: under what conditions (on either the indexing category $\cal C$ or the $\cal C$-module $M$) are they equivalent?
\vskip.5in
The AdS/CFT correspondence\cite{Maldacena1998} indicates that a weak
coupling gravity theory in (d+1)-dimensional AdS spacetime can be
described by a strong coupling conformal field theory on the d-dimensional
boundary. This means that the AdS/CFT correspondence could help us
understand more deeply about the strong coupled gauge theories\cite{Witten1998}\cite{Gubser1998}\cite{Aharony2000}
and offer us a new means of studying the strongly interacting condensed
matter systems in which the perturbation methods are no longer valid\cite{Policastro2002}\cite{Hartnoll2007b}\cite{Buchbinder2009}.
Recently, this holographic method is also expected to give us some
insights into the nature of the pairing mechanism in the high temperature
superconductors which is beyond the scope of current theories of superconductivity.
The earliest models for holographic superconductors in the AdS black
hole spacetime are proposed in \cite{Gubser2008}\cite{Hartnoll2008}\cite{Basu2009}\cite{Herzog2009a},
where the black hole admits scalar hair at the temperature $T$ smaller
than a critical temperature $T_{c}$, but does not possess scalar
hair at higher temperatures. According to the AdS/CFT correspondence,
the appearance of a hairy AdS black hole at low temperature implies
the formation of the scalar condensation in the boundary CFT, which
makes the expectation value of charged operators undergo the U(1)
symmetry breaking and results in the occurrence of the phase transition.
Due to its potential applications to the condensed matter physics,
the properties of the holographic superconductors in the various theories
of gravity have been investigated extensively\cite{Gubser2008a}\cite{Horowitz2008}\cite{Hartnoll2009}\cite{Herzog2009}\cite{Albash2008}
\cite{Nakano2008}\cite{Horowitz2009}\cite{Gregory2009}\cite{Franco2010}\cite{Chen2010}\cite{Jing2012}\cite{Pan2011}\cite{Jing2011a}
in recent years.
AdS3 spacetime as an interesting gravity model has been studied in
many works since it was proposed in \cite{Banados1992}. Some holographic
properties and superconductor phase transitions in AdS3 spacetime
background have been discussed in \cite{Maity2010}\cite{Ren2010}\cite{Liu2011}\cite{Li2012}.
However, up to now, studies of holographic superconductors in AdS3 spacetime are noticeably fewer than in other spacetimes, and they have been limited to the second order phase transition within the framework of Maxwell electrodynamics.
In this paper, we will discuss holographic superconductors further in the framework of Maxwell electrodynamics and in the framework of Born-Infeld electrodynamics, and give the properties of holographic superconductors in the framework of the St\"uckelberg mechanism in AdS3 spacetime.
In the framework of Maxwell electrodynamics, references \cite{Ren2010} and \cite{Li2012} have respectively given the numerical solution and the analytical solution of the one-dimensional holographic superconductor model when the mass of the scalar field equals the Breitenlohner-Freedman bound\footnote{Reference \cite{Li2012} has also given another case in which the mass of the scalar field is zero.}, i.e. $m^2=-1$; in this case the scalar field equation has two equal characteristic roots $\Delta_{\pm}=1$ and the asymptotic behaviors are $\psi\sim z\ln z$ and $\psi\sim z$.
Here we will further discuss the general situation in which the equation has two different characteristic roots $\Delta_{\pm}$, so that the scalar field asymptotically behaves as $\psi\sim z^{\Delta_{\pm}}$.
It is well known that the properties of holographic superconductors,
such as the critical temperature and the critical exponent near the
phase transition point, depend on the behavior of the electromagnetic
field coupled to the scalar field in the system.
Different from Maxwell electrodynamics, Born-Infeld electrodynamics\cite{Born1934}
is one of the important nonlinear electromagnetic theories, which
was introduced by Born and Infeld in 1934 to deal with the infinite
self energies for charged point particles arising in Maxwell theory.
As a candidate among the many improved models of Maxwell theory, it is the only electric-magnetic duality invariant theory\cite{Gibbons1995}
which has been researched extensively\cite{Dirac1931}\cite{Hooft1974}\cite{Seiberg1994}.
In particular, the effects of Born-Infeld electrodynamics on the holographic superconductors has been studied numerically in \cite{Jing2010}\cite{Jing2011}\cite{Wang2012}\cite{G.Siopsis2012}\cite{H.B.Zeng2011}\cite{H.F.Li2011}\cite{D.Momeni2012}\cite{J.A.Hutasoit2012}\cite{Gangopadhyay2012a}\cite{Pan2012}\cite{Gangopadhyay2012}\cite{Gangopadhyay2012b}.
In this paper, we are going to investigate how Born-Infeld
electrodynamics affects the holographic superconductor phase transition in the AdS3 spacetime background.
We will also discuss a generalization of the basic holographic
superconductor model in which the spontaneous breaking of a global
U(1) symmetry occurs via the St\"uckelberg mechanism\cite{Franco2010}\cite{Franco2010a}\cite{Aprile2010} in AdS3 spacetime background.
The generalized St\"uckelberg mechanism of symmetry breaking can describe
a wider class of phase transitions including the first order phase
transition and the second order phase transition.
We are going to investigate how the St\"uckelberg form affect the critical temperature, the order of phase transitions and the critical exponent near the phase transition point in AdS3 spacetime background.
This paper is organized as follows.
First we discuss the holographic superconductor model with two different characteristic roots in the framework of Maxwell electrodynamics in AdS3 spacetime in Sect.2.
Then an analytical study of the scalar condensation and the phase transitions of the holographic superconductor in the framework of Born-Infeld electrodynamics in AdS3 spacetime is given.
In Sect.4, we study a general class of holographic superconductor models in the framework of the St\"uckelberg form in AdS3 spacetime.
The last section is the conclusion which contains our main results.
\section{AdS3 superconductors in Maxwell electrodynamics}
In the framework of Maxwell electrodynamics, references \cite{Ren2010} and \cite{Li2012} have respectively given the numerical solution and the analytical solution of the one-dimensional holographic superconductor model in the case where the equation of motion has two equal characteristic roots. In this section, we will analytically discuss AdS3 holographic superconductors in the more general case where the equation of motion has two different characteristic roots, still in the framework of Maxwell electrodynamics.
The line element of the AdS3 black hole can be written as
\begin{equation}
ds^{2}=-\frac{1}{z^{2}}f(z)dt^{2}+\frac{dz^{2}}{f(z)}+dx^{2},\label{linez}
\end{equation}
where $f(z)=-M+r_{+}^{2}/z^{2}l^{2}$. Its Hawking temperature is
\begin{equation}
T=\frac{r_{+}}{2\pi}.\label{temperature}
\end{equation}
The action for a Maxwell electromagnetic field coupling with a charged
scalar field in AdS3 spacetime reads
\[
\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-|\nabla_{u}\psi-iA_{\mu}\psi|^{2}-V(|\psi|).
\]
The equations of the motions are described by
\begin{eqnarray}
\phi''+\frac{1}{z}\phi'-\frac{2\psi^{2}}{z^{2}(1-z^{2})}\phi=0,\label{equationszphi}\\
z\psi''-\frac{1+z^{2}}{1-z^{2}}\psi'+\left[\frac{z\phi^{2}}{r_{+}^{2}(1-z^{2})^{2}}-\frac{m^{2}}{z(1-z^{2})}\right]\psi=0.\label{equationszpsi}
\end{eqnarray}
Unlike Ref.\cite{Li2012}, here we discuss the situation in which the equation of motion has two different characteristic roots $\Delta_\pm$; at spatial infinity, the matter fields have the form
\begin{eqnarray}
\psi=\psi^{\pm}z^{\Delta_{\pm}},\label{boundarypsi}\\
\phi=\mu\ln z+\rho,\label{boundaryphi}
\end{eqnarray}
where $\Delta_{\pm}=1\pm\sqrt{1+m^{2}}$. According to the AdS/CFT
correspondence, the dual relation of scalar operators $\langle\mathcal{O}_{\Delta_{\pm}}\rangle$
and scalar field $\psi^{\pm}$ is
\begin{equation}
\langle\mathcal{O}_{\Delta_{\pm}}\rangle\sim r_{+}^{\Delta_{\pm}}\psi^{\pm}.\label{operators}
\end{equation}
At the critical temperature $T_{c}$, $\psi=0$, so Eq.(\ref{equationszphi})
reduces to
\begin{equation}
\phi''+\frac{1}{z}\phi'=0.\label{tcphiequ}
\end{equation}
Let us set $\phi(z)=\lambda r_{+c}\ln z$, with $\lambda=\mu/r_{+c}$.
Near the boundary, we introduce a new function $F(z)$ which satisfies
$\psi(z)=z^{\Delta}F(z)\langle\mathcal{O}_{\Delta}\rangle/r_{+}^{\Delta}$,
where $F(0)=1$. At $T\rightarrow T_{c}$, the field equation of $\psi$ becomes
\begin{equation}
-F''+\frac{1}{z}\left(\frac{1+z^{2}}{1-z^{2}}-2\Delta\right)F'+\frac{\Delta^{2}}{1-z^{2}}F=\frac{\lambda^{2}\ln^{2}z}{(1-z^{2})^{2}}F\label{lambdeequlast}
\end{equation}
to be solved subject to the boundary condition $F'(0)=0$.
According to the Sturm-Liouville eigenvalue problem, the minimum eigenvalue
$\lambda$ is
\begin{equation}
\lambda^{2}=\frac{\int_{0}^{1}dz~z^{-1+2\Delta}[(1-z^{2})F'(z)^{2}+\Delta^{2}F(z)^{2}]}{\int_{0}^{1}dz~z^{-1+2\Delta}\frac{\ln^{2}z}{1-z^{2}}F(z)^{2}}.\label{lambdaissimp}
\end{equation}
We use $F(z)$ as the following trial function
\begin{equation}
F=F_{\alpha}(z)\equiv1-\alpha z^{2}.\label{trialF}
\end{equation}
If $\Delta=1/2$, we have
\begin{equation}
\lambda_{\Delta=1/2}^{2}=\frac{\frac{1}{4}-\frac{\alpha}{6}+\frac{7\alpha^{2}}{12}}{4\alpha-\frac{56\alpha^{2}}{27}+\frac{7}{4}(\alpha-1)^{2}\zeta(3)}.\label{delta12}
\end{equation}
When $\alpha\approx0.12$, it has the minimum $\lambda^{2}\approx0.115$.
If $\Delta=3/2$, we have
\begin{equation}
\lambda_{\Delta=3/2}^{2}=\frac{\frac{3}{4}-\frac{9\alpha}{10}+\frac{11\alpha^{2}}{20}}{-2-\frac{2\alpha(-7000+3527\alpha)}{3375}+\frac{7}{4}(\alpha-1)^{2}\zeta(3)}.\label{delta32}
\end{equation}
When $\alpha\approx0.60$, it has the minimum $\lambda^{2}\approx5.586$.
And the critical temperature can be deduced\cite{Li2012}
\begin{equation}
T_{c}=\frac{r_{+c}}{2\pi}=\frac{1}{2\pi}\frac{\mu}{\lambda}.\label{tc1/2}
\end{equation}
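As a quick cross-check of these variational estimates, the minimization over the trial parameter $\alpha$ can be carried out numerically. The following sketch (Python, with scipy assumed available; it only restates Eqs.(\ref{delta12}) and (\ref{delta32})) reproduces the minima quoted above and the corresponding $T_{c}/\mu$ from Eq.(\ref{tc1/2}).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

zeta3 = 1.2020569031595943                    # Riemann zeta(3)

def lam2_half(a):                             # Eq. (delta12), Delta = 1/2
    num = 1/4 - a/6 + 7*a**2/12
    den = 4*a - 56*a**2/27 + 7/4*(a - 1)**2*zeta3
    return num/den

def lam2_three_half(a):                       # Eq. (delta32), Delta = 3/2
    num = 3/4 - 9*a/10 + 11*a**2/20
    den = -2 - 2*a*(-7000 + 3527*a)/3375 + 7/4*(a - 1)**2*zeta3
    return num/den

for name, f in (("Delta=1/2", lam2_half), ("Delta=3/2", lam2_three_half)):
    res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
    lam = np.sqrt(res.fun)
    print(name, "alpha =", round(res.x, 3), "lambda^2 =", round(res.fun, 3),
          "T_c/mu =", round(1/(2*np.pi*lam), 4))
# expected: alpha ~ 0.12, lambda^2 ~ 0.115  and  alpha ~ 0.60, lambda^2 ~ 5.586
\end{verbatim}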
Away from (but close to) the critical temperature, the field equation
(\ref{equationszphi}) of $\phi$ is
\begin{equation}
\phi''+\frac{1}{z}\phi'-\frac{2\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\frac{z^{2(\Delta-1)}F^{2}(z)}{1-z^{2}}\phi=0.\label{closephi}
\end{equation}
Because the parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$
is small, we can expand $\phi(z)$ in the parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$
\begin{equation}
\frac{\phi}{r_{+}}=\lambda\ln z+\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\chi(z)+\cdots.\label{expando}
\end{equation}
Substituting the above formulation into Eq.(\ref{closephi}), the
equation of $\phi$ is translated into the equation of $\chi$
\begin{equation}
\chi''+\frac{1}{z}\chi'=\lambda\frac{z^{2(\Delta-1)}F^{2}(z)}{1-z^{2}}\ln z,\label{chiequ}
\end{equation}
where the boundary condition becomes $\chi(1)=0$.
According to \cite{Siopsis2010}\cite{Li2012}, its solution can be
written as
\begin{equation}
z\chi'(0)=\lambda\mathcal{C},\qquad\mathcal{C}=\int_{0}^{1}dz\frac{z^{2\Delta-1}F^{2}(z)}{1-z^{2}}\ln z,\label{ched}
\end{equation}
then we have the expression
\begin{equation}
\frac{\mu}{r_{+}}=\lambda\left(1+\frac{\mathcal{C}\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}+\cdots\right)\label{muexpand}
\end{equation}
and the operator $\langle\mathcal{O}_{\Delta}\rangle$ near the critical
temperature
\begin{equation}
\langle\mathcal{O}_{\Delta}\rangle\approx\gamma T_{c}^{\Delta}\left(1-\frac{T}{T_{c}}\right)^{\frac{1}{2}},\qquad\gamma=\frac{(2\pi)^{\Delta}}{\sqrt{\mathcal{C}}}.\label{Oexpand}
\end{equation}
\section{AdS3 superconductors in Born-Infeld electrodynamics}
In this section we will give the holographic superconductor on
the frame of Born-Infeld electromagnetic field in AdS3 spacetime.
\subsection{superconducting phase in Born-Infeld electrodynamics}
We rewrite the three-dimensional AdS metric Eq.(\ref{linez}) in the form\cite{Banados1992}
\begin{equation}
ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}dx^{2}.\label{btzform}
\end{equation}
The Hawking temperature of black hole is
\begin{equation}
T_{H}=\frac{1}{2\pi}r_{+},\label{Hawking temperature}
\end{equation}
where $r_{+}$ is the event horizon of the black hole.
We consider a Born-Infeld field and a charged complex scalar field
coupled via the action
\begin{equation}
S=\int d^{3}x\sqrt{-g}\left(\mathcal{L}_{BI}-\left|\nabla_{\mu}\psi-iqA_{\mu}\psi\right|^{2}-m^{2}\left|\psi\right|^{2}\right)\label{L}
\end{equation}
with
\begin{equation}
\mathcal{L}_{BI}=\frac{1}{b}\left(1-\sqrt{1+\frac{bF}{2}}\right),\label{L-bi}
\end{equation}
where $F\equiv F_{\mu\nu}F^{\mu\nu}$.
Taking the ansatz that $\psi=\psi(r)$ and $A_{\mu}$ has only the
time component $A_{t}=\phi$, we can get the equations of motion for
the scalar field $\psi$ and gauge field $\phi$ in the form
\begin{eqnarray}
\phi''+\frac{1}{r}\phi'(1-b\phi'^{2})-\frac{2\psi^{2}}{f}\phi(1-b\phi'^{2})^{\frac{3}{2}}=0,\label{phi-equ-r}\\
\psi''+\left(\frac{f'}{f}+\frac{1}{r}\right)\psi'+\frac{\phi^{2}}{f^{2}}\psi-\frac{m^{2}}{f}\psi=0.\label{psi-equ-r}
\end{eqnarray}
Near the boundary $r\rightarrow\infty$, the matter fields have the asymptotic behaviors
\begin{eqnarray}
& & \psi=\frac{\psi^{\pm}}{r^{\triangle_{\pm}}},\label{psi-bound}\\
& & \phi=-\mu\ln r+\rho,\label{phi-bound}
\end{eqnarray}
where $\mu$ and $\rho$ are interpreted as the chemical potential and the charge density in the dual field theory respectively. From the dual field theory, the dual relation between the scalar operator $\mathcal{O}$ and the field $\psi_{\pm}$ can be written as
\begin{equation}
\langle\mathcal{O}_{\pm}\rangle\sim\psi_{\pm},\label{opsi-r}
\end{equation}
In Figure 1 we present the condensation of the scalar operators $\langle{\cal O}_{+}\rangle$
as a function of temperature with various correction terms $b$. We
find that the condensation gap increases with the Born-Infeld scale
parameter $b$, which means that the scalar hair is harder to form
than in the usual Maxwell case. This behavior is reminiscent of that
seen for the 4-dimensional holographic superconductors with Born-Infeld
electrodynamics\cite{Jing2010}, where the higher Born-Infeld corrections make condensation
harder.
\begin{figure}[htbp]
\centering \includegraphics[scale=1.5]{figure2new.eps}
\caption{(Color online) The condensate $\langle\mathcal{O}_{i}\rangle$ as
a function of temperature $T$ with different $b$. The four lines
correspond to increasing $b$, i.e., $0$ (red), $0.01$ (orange),
$0.02$ (blue), $0.03$ (black) from bottom to top, for $\triangle=1/2$.}
\label{figure1}
\end{figure}
\subsection{analytical understanding of critical temperature}
Here we will apply the Sturm-Liouville method\cite{Siopsis2010} to
analytically investigate the properties of holographic superconductor
phase transition with Born-Infeld electromagnetic field.
Introducing a new variable $z=r_{+}/r$, we can rewrite Eq.(\ref{phi-equ-r})
and Eq.(\ref{psi-equ-r}) into
\begin{eqnarray}
\phi'(z)+\frac{bz^{4}\phi'(z)^{3}}{r_{+}^{2}}+\frac{2\phi(z)\psi(z)^{2}\left[1-\frac{bz^{4}\phi'(z)^{2}}{r_{+}^{2}}\right]^{\frac{3}{2}}}{-z+z^{3}}+z\phi''(z)=0,\label{phi-equ-z}\\
\left[\frac{m^{2}}{z\left(-1+z^{2}\right)}+\frac{z\phi(z)^{2}}{r_{+}^{2}\left(-1+z^{2}\right)^{2}}\right]\psi(z)+\frac{\left(1+z^{2}\right)\psi'(z)}{-1+z^{2}}+z\psi''(z)=0.\label{psi-equ-z}
\end{eqnarray}
The asymptotic boundary conditions for the scalar potential $\phi(z)$ and the scalar field $\psi(z)$ turn
out to be
\begin{eqnarray}
& & \psi=\psi^{\pm}z^{\triangle_{\pm}},\label{psi-bound-z}\\
& & \phi=\mu\ln z+\rho,\label{phi-bound-z}
\end{eqnarray}
and relation Eq.(\ref{opsi-r}) becomes
\begin{equation}
\langle\mathcal{O}_{\Delta_{\pm}}\rangle\sim r_{+}^{\Delta_{\pm}}\psi^{\pm}.
\end{equation}
With the above set up in place, we are now in a position to investigate
the relation between the critical temperature and the charge density.
At the critical temperature $T_{c}$, $\psi=0$, so the Eq.(\ref{phi-equ-z})
reduces to
\begin{equation}
\phi''(z)+\frac{bz^{3}\phi'(z)^{3}}{r_{+}^{2}}+\frac{\phi'(z)}{z}=0.\label{phi-equ-tc}
\end{equation}
Letting $\phi'(z)=1/\sqrt{\zeta}$, we have
\begin{equation}
-\frac{1}{2}\zeta'(z)+\frac{bz^{3}}{r_{+}^{2}}+\frac{\zeta}{z}=0.\label{zeta-equ}
\end{equation}
It is easy to obtain the solution of the above equation
\begin{equation}
\zeta(z)=\frac{bz^{4}}{r_{+}^{2}}+z^{2}c_{1},\label{zeta-solve}
\end{equation}
then we can obtain the solution of $\phi(z)$ with coefficients
$c_{1}$ and $c_{2}$
\begin{equation}
\phi(z)=\frac{\ln z-\ln(c_{1}+\sqrt{c_{1}}\sqrt{c_{1}+bz^{2}})}{\sqrt{c_{1}}}+c_{2}.
\end{equation}
According to the boundary condition Eq.(\ref{phi-bound-z}) and the
horizon condition $\phi(z=1)=0$, we can solve $c_{1}$ and $c_{2}$
\begin{equation}
c_{1}=\frac{1}{\mu^{2}},~~~~c_{2}=\mu\ln\left(\frac{1}{\mu^{2}}+\frac{1}{\mu}\sqrt{b+\frac{1}{\mu^{2}}}\right).\label{c1c2}
\end{equation}
With the aid of the relation $\lambda=\mu/r_{+c}$, at last we have
the result
\begin{equation}
\phi(z)=r_{+c}\lambda\left[\ln z-\ln(1+\sqrt{1+bz^{2}\lambda^{2}})+\ln(1+\sqrt{1+b\lambda^{2}})\right].
\end{equation}
For small $b$, it can be expanded into the simpler form
\begin{equation}
\phi(z)=r_{+c}\lambda\left(-\frac{1}{4}bz^{2}\lambda^{2}+\ln z+\frac{b\lambda^{2}}{4}\right).
\end{equation}
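As a consistency check (a sketch only, with illustrative numerical values for $b$, $\lambda$ and $r_{+c}$), one can verify symbolically that the closed-form $\phi(z)$ above indeed solves Eq.(\ref{phi-equ-tc}):
\begin{verbatim}
import sympy as sp

z, b, lam, r = sp.symbols('z b lam r', positive=True)
phi = r*lam*(sp.log(z) - sp.log(1 + sp.sqrt(1 + b*z**2*lam**2))
             + sp.log(1 + sp.sqrt(1 + b*lam**2)))

# residual of Eq. (phi-equ-tc): phi'' + b z^3 phi'^3 / r_{+c}^2 + phi'/z
residual = sp.diff(phi, z, 2) + b*z**3*sp.diff(phi, z)**3/r**2 + sp.diff(phi, z)/z

f = sp.lambdify((z, b, lam, r), residual)
print([f(zz, 0.3, 0.34, 1.0) for zz in (0.2, 0.5, 0.9)])   # zero up to rounding
\end{verbatim}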
Using the above expansion, we find as $T\rightarrow T_{c}$ the field
equation Eq.(\ref{psi-equ-z}) of $\psi$ approaches the limit
\begin{equation}
-\psi''+\frac{1+z^{2}}{z(1-z^{2})}\psi'+\frac{m^{2}}{z^{2}(1-z^{2})}\psi=\frac{\lambda^{2}\left(\ln z-\frac{1}{4}bz^{2}\lambda^{2}+\frac{b\lambda^{2}}{4}\right)^{2}}{(1-z^{2})^{2}}\psi.\label{psi-tc-equ}
\end{equation}
Near the boundary, we introduce a new function $F(z)$ which satisfies
\begin{equation}
\psi(z)=\frac{\langle\mathcal{O}_{\Delta}\rangle}{r_{+}^{\Delta}}z^{\Delta}F(z),
\end{equation}
where $F(0)=1$. So from Eq.(\ref{psi-tc-equ}), the equation of motion
for $F(z)$ is
\begin{equation}
-F''+\frac{1}{z}\left(\frac{1+z^{2}}{1-z^{2}}-2\Delta\right)F'+\frac{\Delta^{2}}{1-z^{2}}F=\frac{\lambda^{2}\left[\ln^{2}z-\frac{1}{2}b\lambda^{2}\ln z(z^{2}-1)\right]}{(1-z^{2})^{2}}F,
\end{equation}
to be solved subject to the boundary condition $F'(0)=0$.
According to the Sturm-Liouville eigenvalue problem\cite{Siopsis2010},
we obtain the expression which will be used to estimate the minimum
eigenvalue of $\lambda$
\begin{equation}
\lambda^{2}=\frac{\int_{0}^{1}dz~z^{-1+2\Delta}[(1-z^{2})F'(z)^{2}+\Delta^{2}F(z)^{2}]}{\int_{0}^{1}dz~z^{-1+2\Delta}\frac{\ln^{2}z-\frac{1}{2}b\lambda^{2}\ln z(z^{2}-1)}{1-z^{2}}F(z)^{2}}.
\end{equation}
To estimate it, we use $F(z)$ as the following trial function
\[
F=F_{\alpha}(z)\equiv1-\alpha z^{2},
\]
which satisfies the conditions $F(0)=1$ and $F'(0)=0$.
When $\Delta=1/2$, for $b=0$, we have
\[
\lambda_{\alpha}^{2}=\frac{27-18\alpha+63\alpha^{2}}{189\zeta(3)+[432-378\zeta(3)]\alpha+[189\zeta(3)-224]\alpha^{2}},
\]
and when $\alpha\approx0.123209$, it reaches its minimum $\lambda^{2}\approx\lambda_{0.123209}^{2}\approx0.114659$,
which agrees with the exact value $\lambda^{2}=0.113939$. The critical
temperature is
\begin{equation}
T_{c}=\frac{r_{+c}}{2\pi}=\frac{1}{2\pi}\frac{\mu}{\lambda}.\label{tc-lamda0}
\end{equation}
From Eq.(\ref{tc-lamda0}) we can obtain $T_{c}\approx0.47002\mu$,
which agrees with the exact value $T_{c}=0.471503\mu$.
When $b=0.1$, we obtain
\[
\lambda_{\alpha}^{2}=\frac{\frac{1}{4}-\frac{\alpha}{6}+\frac{7\alpha^{2}}{12}}{2.09787-0.205925\alpha+0.0292962\alpha^{2}},
\]
and when $\alpha\approx0.123276$, it reaches its minimum $\lambda^{2}\approx\lambda_{0.123276}^{2}\approx0.114967$,
which agrees with the exact value $\lambda^{2}=0.114239$. From
Eq.(\ref{tc-lamda0}) we can obtain $T_{c}\approx0.469389\mu$, which
agrees with the exact value $T_{c}=0.470884\mu$.
When $b=0.2$, we obtain
\[
\lambda_{\alpha}^{2}=\frac{\frac{1}{4}-\frac{\alpha}{6}+\frac{7\alpha^{2}}{12}}{2.09213-0.204651\alpha+0.0290669\alpha^{2}},
\]
and when $\alpha\approx0.123344$, it reaches its minimum $\lambda^{2}\approx\lambda_{0.123344}^{2}\approx0.115278$,
which agrees with the exact value $\lambda^{2}=0.114539$. From
Eq.(\ref{tc-lamda0}) we can obtain $T_{c}\approx0.468757\mu$, which
agrees with the exact value $T_{c}=0.470267\mu$.
When $b=0.3$, we obtain
\[
\lambda_{\alpha}^{2}=\frac{\frac{1}{4}-\frac{\alpha}{6}+\frac{7\alpha^{2}}{12}}{2.0864-0.203377\alpha+0.0288376\alpha^{2}},
\]
and when $\alpha\approx0.123412$, it reaches its minimum $\lambda^{2}\approx\lambda_{0.123412}^{2}\approx0.11559$,
which agrees with the exact value $\lambda^{2}=0.114838$. From
Eq.(\ref{tc-lamda0}) we can obtain $T_{c}\approx0.468124\mu$, which
agrees with the exact value $T_{c}=0.469652\mu$.
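The four minimizations above can also be checked by a direct scan over the trial parameter. The short sketch below (Python; it simply re-evaluates the quoted $\lambda_{\alpha}^{2}$ expressions, with the $b=0$ case rescaled by a common factor of $108$) reproduces the minima and the corresponding $T_{c}/\mu$:
\begin{verbatim}
import numpy as np

zeta3 = 1.2020569031595943
alpha = np.linspace(1e-4, 0.999, 200001)
num = 27 - 18*alpha + 63*alpha**2             # common numerator, rescaled by 108

dens = {                                      # denominators, same rescaling
    0.0: 189*zeta3 + (432 - 378*zeta3)*alpha + (189*zeta3 - 224)*alpha**2,
    0.1: 108*(2.09787 - 0.205925*alpha + 0.0292962*alpha**2),
    0.2: 108*(2.09213 - 0.204651*alpha + 0.0290669*alpha**2),
    0.3: 108*(2.0864  - 0.203377*alpha + 0.0288376*alpha**2),
}

for b, den in dens.items():
    lam2 = num/den
    i = np.argmin(lam2)
    print("b = %.1f : alpha = %.6f, lambda^2 = %.6f, T_c/mu = %.6f"
          % (b, alpha[i], lam2[i], 1/(2*np.pi*np.sqrt(lam2[i]))))
# expected minima: 0.114659, 0.114967, 0.115278, 0.115590 (cf. the text above)
\end{verbatim}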
\subsection{critical exponent and condensation values}
Away from (but close to) the critical temperature, the field equation
Eq.(\ref{phi-equ-z}) of $\phi$ is
\begin{equation}
\phi''(z)+\frac{\phi'(z)}{z}+\frac{bz^{3}\phi'(z)^{3}}{r_{+}^{2}}-\frac{2\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\frac{z^{2(\Delta-1)}F^{2}(z)}{1-z^{2}}\phi(z)\left[1-\frac{3bz^{4}}{2r_{+}^{2}}\phi'(z)^{2}\right]=0.\label{phi-close-equ}
\end{equation}
Because the parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$
is small, we can expand $\phi(z)$ in the parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$
\begin{equation}
\frac{\phi}{r_{+}}=\lambda\left(\ln z-\frac{1}{4}bz^{2}\lambda^{2}+\frac{b\lambda^{2}}{4}\right)+\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\chi(z)+\cdots.\label{phi-o}
\end{equation}
Substituting the above formulation into Eq.(\ref{phi-close-equ}),
we translate the equation of $\phi$ into the equation of $\chi$
\[
\chi''+\frac{1}{z}\chi'+3b\lambda^{2}z\chi'=2\lambda\frac{z^{2(\Delta-1)}F^{2}(z)}{1-z^{2}}\left[\ln z-\frac{1}{4}b\lambda^{2}(z^{2}+6z^{2}\ln z-1)\right],
\]
where the boundary condition becomes $\chi(1)=0$.
Multiplying the two sides of this equation by $ze^{3bz^{2}\lambda^{2}/2}$,
we have
\begin{equation}
\frac{d}{dz}\left(ze^{3bz^{2}\lambda^{2}/2}\chi'\right)=ze^{3bz^{2}\lambda^{2}/2}\frac{2\lambda z^{2(\Delta-1)}F^{2}(z)}{1-z^{2}}\left[\ln z-\frac{1}{4}b\lambda^{2}(z^{2}+6z^{2}\ln z-1)\right].\label{dx-equ}
\end{equation}
The variable $\chi(z)$ in Eq.(\ref{phi-o}) can be expanded at $z=0$
\begin{equation}
\frac{\mu\ln z+\rho}{r_{+}}=\lambda\left(\ln z-\frac{1}{4}bz^{2}\lambda^{2}+\frac{b\lambda^{2}}{4}\right)+\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}[\chi(0)+z\chi'(0)\cdots].\label{co-expand}
\end{equation}
Integrating both sides of Eq.(\ref{dx-equ}) from $z=0$ to $z=1$,
we have
\begin{equation}
z\chi'(0)=-\lambda\mathcal{C},\label{eq:cvalue}
\end{equation}
where
\begin{eqnarray}
\mathcal{C} & = & \int_{0}^{1}dz~e^{3bz^{2}\lambda^{2}/2}\frac{2z^{2\Delta-1}F^{2}(z)}{1-z^{2}}\left[\ln z-\frac{1}{4}b\lambda^{2}(z^{2}+6z^{2}\ln z-1)\right]\nonumber \\
& = & \int_{0}^{1}dz~\frac{2z^{2\Delta-1}F^{2}(z)}{1-z^{2}}\left(\ln z-\frac{1}{4}bz^{2}\lambda^{2}+\frac{b\lambda^{2}}{4}\right),\label{cvalue}
\end{eqnarray}
in the last line we have expanded $\mathcal{C}$ to first order in $b\lambda^{2}$.
With the aid of the above relation, differentiating both sides
of Eq.(\ref{co-expand}) and comparing the coefficients
of $z$ on both sides, we have
\begin{equation}
\frac{\mu}{r_{+}}=\lambda\left(1-\frac{\mathcal{C}\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}+\cdots\right).\label{Oexpress}
\end{equation}
Combining Eq.(\ref{tc-lamda0}) and Eq.(\ref{Oexpress}), we can obtain
the expression of the operator $\langle\mathcal{O}_{\Delta}\rangle$ near
the critical temperature
\begin{equation}
\langle\mathcal{O}_{\Delta}\rangle\approx\gamma T_{c}^{\Delta}\left(1-\frac{T}{T_{c}}\right)^{\frac{1}{2}},\qquad\gamma=\frac{(2\pi)^{\Delta}}{\sqrt{\mathcal{C}}}.\label{tc-exponent}
\end{equation}
Combining Eq.(\ref{eq:cvalue}), Eq.(\ref{cvalue}) and Eq.(\ref{tc-exponent}),
we can obtain the value of $\gamma$ for $\Delta=1/2$. The results
for various $b$ are summarized in Table 1:
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$b$ & $\alpha$ & $\gamma_{SL}$ & $\gamma_{numb}$ \\%\tabularnewline
\hline
0 & 0.123209 & 1.633 & 1.581 \\%\tabularnewline
0.01 & 0.123276 & 1.635 & 1.575 \\%\tabularnewline
0.02 & 0.123344 & 1.637 & 1.569 \\%\tabularnewline
0.03 & 0.123412 & 1.639 & 1.564 \\%\tabularnewline
\hline
\end{tabular}
\caption{A comparison of the analytical and numerical results for the parameter $\gamma$ between the critical temperature and the condensation operator for different $b$.}
\label{table}
\end{table}
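For the $b=0$ row the integral in Eq.(\ref{cvalue}) can be evaluated directly; the short sketch below (Python, with scipy assumed; the absolute value of $\mathcal{C}$ is used since $\ln z<0$ on $(0,1)$ and only its magnitude enters $\gamma$) reproduces $\gamma_{SL}\approx 1.633$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Delta, alpha = 0.5, 0.123209                  # b = 0 row of Table 1
F = lambda z: 1 - alpha*z**2

integrand = lambda z: 2*z**(2*Delta - 1)*F(z)**2*np.log(z)/(1 - z**2)
C, _ = quad(integrand, 0, 1)                  # Eq. (cvalue) with b = 0

gamma = (2*np.pi)**Delta/np.sqrt(abs(C))
print("C = %.4f, gamma = %.3f" % (C, gamma))  # C ~ -2.356, gamma ~ 1.633
\end{verbatim}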
\section{AdS3 superconductors in the St\"uckelberg form}
In this section we will discuss the holographic superconductor in
the framework of the St\"uckelberg form in AdS3 spacetime.
The generalized action containing a U(1) gauge field and the scalar
field coupled via a generalized St\"uckelberg Lagrangian reads\cite{Franco2010}\cite{Franco2010a}
\begin{equation}
\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-(\partial\psi)^{2}-|\mathcal{K}(\psi)|(\partial p-A)^{2}-m^{2}|\psi|^{2},\label{actiongene}
\end{equation}
where $\mathcal{K}(\psi)$ is a general function of $\psi$
\begin{equation}
\mathcal{K}(\psi)=|\psi|^{2}+c_{\gamma}|\psi|^{\gamma}+c_{4}|\psi|^{4}.\label{kgene}
\end{equation}
Taking the ansatz of the field as $\psi=\psi(r)$ and $A=\phi(r)dt$,
we can get the equations of motion for scalar field $\psi$ and gauge
field $\phi$ in the form
\begin{eqnarray}
\phi''+\frac{1}{z}\phi'-\frac{\mathcal{K}}{z^{2}(1-z^{2})}\phi=0,\label{motiveequgene1}\\
z\psi''-\frac{1+z^{2}}{1-z^{2}}\psi'+\frac{z\phi^{2}}{2r_{+}^{2}(1-z^{2})^{2}}\mathcal{K}'-\frac{m^{2}}{z(1-z^{2})}\psi=0.\label{motiveequgene2}
\end{eqnarray}
At the critical temperature $T_{c}$, $\psi=0$, so Eq.(\ref{motiveequgene1}) reduces to
\begin{equation}
\phi''+\frac{1}{z}\phi'=0.\label{tcphiequgene}
\end{equation}
The asymptotic boundary conditions for the scalar potential $\phi$
turn out to be
\begin{equation}
\phi(z)=\lambda r_{+c}\ln z,\qquad\lambda=\frac{\mu}{r_{+c}}.\label{philambdagene}
\end{equation}
\subsection{$c_{\gamma}=0$}
With $c_{\gamma}=0$,
\footnote{In this case the phase transition of the holographic superconductor
is of second order, so we can apply the analytical
method directly.}
at $T\rightarrow T_{c}$, the equation of $\psi$ becomes
\begin{equation}
-\psi''+\frac{1+z^{2}}{z(1-z^{2})}\psi'+\frac{m^{2}}{z^{2}(1-z^{2})}\psi=\frac{\lambda^{2}\ln^{2}z}{(1-z^{2})^{2}}(\psi+2c_{4}\psi^{3}).\label{phiequlambdagene}
\end{equation}
Near the boundary, we introduce a new function $F(z)$ which satisfies
\begin{equation}
\psi(z)=\frac{\langle\mathcal{O}_{\Delta}\rangle}{r_{+}^{\Delta}}z^{\Delta}F(z),\label{psifgene}
\end{equation}
where $F(0)=1$. Now Eq.(\ref{phiequlambdagene}) becomes
\begin{equation}
-F''+\frac{1}{z}\left(\frac{1+z^{2}}{1-z^{2}}-2\Delta\right)F'+\frac{\Delta^{2}}{1-z^{2}}F=\frac{\lambda^{2}\ln^{2}z}{(1-z^{2})^{2}}(F+2c_{4}F^{3})\label{lambdeequlastgene}
\end{equation}
to be solved subject to the boundary condition $F'(0)=0$. Since our
computation is near the critical point, $F$ is small, so the term
$2c_{4}F^{3}$ can be ignored. It is the same as Eq.(\ref{lambdeequlast}),
so its discussion follows the process from Eq.(\ref{lambdaissimp})
to Eq.(\ref{tc1/2}).
Away from (but close to) the critical temperature, the equation for the scalar
potential $\phi$ is
\begin{equation}
\phi''+\frac{1}{z}\phi'-\left[\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\frac{z^{2\Delta}F^{2}(z)}{z^{2}(1-z^{2})}+\frac{c_{4}\langle\mathcal{O}_{\Delta}\rangle^{4}}{r_{+}^{4\Delta}}\frac{z^{4\Delta}F^{4}(z)}{z^{2}(1-z^{2})}\right]\phi=0.\label{closephigene}
\end{equation}
Because the parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$
is small, we can expand $\phi$ in the small parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$
\begin{equation}
\frac{\phi}{r_{+}}=\lambda\ln z+\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\chi(z)+\cdots.\label{expandogene}
\end{equation}
By substituting the above expansion into Eq.(\ref{closephigene}),
the equation for $\phi$ is converted into an equation for $\chi$:
\begin{equation}
\chi''+\frac{1}{z}\chi'=\frac{\lambda\ln z}{z^{2}(1-z^{2})}\left[z^{2\Delta}F^{2}(z)+\frac{c_{4}\mathcal{O}^{2}}{r_{+}^{2\Delta}}z^{4\Delta}F^{4}(z)\right],\label{chiequgene}
\end{equation}
where the boundary condition becomes $\chi(1)=0$.
Following Ref.~\cite{Siopsis2010}, the asymptotic behavior of $\phi$
at the boundary can be written as
\begin{equation}
\frac{\mu}{r_{+}}=\lambda\left(1+\frac{\mathcal{C}_{1}\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}+\frac{\mathcal{C}_{2}\langle\mathcal{O}_{\Delta}\rangle^{4}}{r_{+}^{4\Delta}}+\cdots\right).\label{muexpandgene}
\end{equation}
According to Eq.(\ref{chiequgene}), we have
\[
z\chi'_{1}(0)=\lambda\left(\mathcal{C}_{1}+\mathcal{C}_{2}\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\right),
\]
where
\begin{equation}
\mathcal{C}_{1}=\int_{0}^{1}dz\ln z\frac{z^{2\Delta-1}F^{2}(z)}{1-z^{2}},\qquad\mathcal{C}_{2}=\int_{0}^{1}dz\ln z\frac{c_{4}z^{4\Delta-1}F^{4}(z)}{1-z^{2}},\label{chedgene}
\end{equation}
and we can obtain the condensation operator $\langle\mathcal{O}_{\Delta}\rangle$
near the critical temperature
\begin{equation}
\langle\mathcal{O}_{\Delta}\rangle\propto T_{c}^{\Delta}\left(1-\frac{T}{T_{c}}\right)^{\frac{1}{2}}.\label{Oexpandgene}
\end{equation}
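To make the last step explicit, keep only the leading correction in Eq.(\ref{muexpandgene}):
\begin{equation}
\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}=\frac{1}{\mathcal{C}_{1}}\left(\frac{\mu}{\lambda r_{+}}-1\right)+\cdots.
\end{equation}
Assuming the standard BTZ relation $T\propto r_{+}$ and using $\lambda=\mu/r_{+c}$, the bracket equals $r_{+c}/r_{+}-1=T_{c}/T-1$; since $r_{+}\approx r_{+c}$ and $T_{c}/T-1\approx 1-T/T_{c}$ near the critical point, this reproduces Eq.(\ref{Oexpandgene}).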
\subsection{$c_{\gamma}\protect\neq0$}
With $c_{\gamma}\neq0$, the case $\gamma>4$ is the same as the situation of $c_{\gamma}=0$, so we need only discuss the range $2<\gamma<4$. For simplicity, here we will only discuss the explicit model $\gamma=3$, and we will rewrite $c_{\gamma}$ as $c_{3}$.\footnote{In this situation the phase transition of the model may be of second or first order, but the analytical method applies only to a second-order transition, so our discussion is limited to the case in which the phase transition is of second order.}
As $T\rightarrow T_{c}$, the equation for $\psi$ becomes
\begin{equation}
-\psi''+\frac{1+z^{2}}{z(1-z^{2})}\psi'+\frac{m^{2}}{z^{2}(1-z^{2})}\psi=\frac{\lambda^{2}\ln^{2}z}{(1-z^{2})^{2}}\left(\psi+\frac{c_{3}}{2}\psi^{2}+2c_{4}\psi^{3}\right).\label{phiequlambdagene2}
\end{equation}
Near the boundary, we also introduce a new function $F(z)$ which
satisfies
\begin{equation}
\psi(z)=\frac{\langle\mathcal{O}_{\Delta}\rangle}{r_{+}^{\Delta}}z^{\Delta}F(z),\label{psifgene2}
\end{equation}
where $F(0)=1$. Now Eq.(\ref{phiequlambdagene2}) becomes
\begin{equation}
-F''+\frac{1}{z}\left(\frac{1+z^{2}}{1-z^{2}}-2\Delta\right)F'+\frac{\Delta^{2}}{1-z^{2}}F=\frac{\lambda^{2}\ln^{2}z}{(1-z^{2})^{2}}\left(F+\frac{c_{3}}{2}F^{2}+2c_{4}F^{3}\right)\label{lambdeequlastgene2}
\end{equation}
to be solved subject to the boundary condition $F'(0)=0$. Since our
computation is near the critical point, $F$ is small, so the term
$c_{3}F^{2}/2+2c_{4}F^{3}$ can be ignored. The resulting equation is the same as Eq.(\ref{lambdeequlast}),
and its discussion follows the process from Eq.(\ref{lambdaissimp})
to Eq.(\ref{tc1/2}).
Away from (but close to) the critical temperature, the scalar potential
$\phi$ equation is
\begin{equation}
\phi''+\frac{1}{z}\phi'-\left[\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\frac{z^{2\Delta}F^{2}(z)}{z^{2}(1-z^{2})}+\frac{c_{3}\langle\mathcal{O}_{\Delta}\rangle^{3}}{r_{+}^{3\Delta}}\frac{z^{3\Delta}F^{3}(z)}{z^{2}(1-z^{2})}+\frac{c_{4}\langle\mathcal{O}_{\Delta}\rangle^{4}}{r_{+}^{4\Delta}}\frac{z^{4\Delta}F^{4}(z)}{z^{2}(1-z^{2})}\right]\phi=0.\label{closephigene2}
\end{equation}
Because the parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$
is small, we can expand $\phi$ in powers of the small parameter $\langle\mathcal{O}_{\Delta}\rangle^{2}/r_{+}^{2\Delta}$:
\begin{equation}
\frac{\phi}{r_{+}}=\lambda\ln z+\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\chi(z)+\cdots.\label{expandogene2}
\end{equation}
Substituting the above expansion into Eq.(\ref{closephigene2}),
the equation for $\phi$ is converted into an equation for $\chi$:
\begin{equation}
\chi''+\frac{1}{z}\chi'=\frac{\lambda\ln z}{z^{2}(1-z^{2})}\left[z^{2\Delta}F^{2}(z)+\frac{c_{3}\mathcal{O}}{r_{+}^{\Delta}}z^{3\Delta}F^{3}(z)+\frac{c_{4}\mathcal{O}^{2}}{r_{+}^{2\Delta}}z^{4\Delta}F^{4}(z)\right],\label{chiequgene2}
\end{equation}
where the boundary condition becomes $\chi(1)=0$.
Following Ref.~\cite{Siopsis2010}, the asymptotic behavior of $\phi$
at the boundary can be written as
\begin{equation}
\frac{\mu}{r_{+}}=\lambda\left(1+\frac{\mathcal{C}_{2}\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}+\frac{\mathcal{C}_{3}\langle\mathcal{O}_{\Delta}\rangle^{3}}{r_{+}^{3\Delta}}+\frac{\mathcal{C}_{4}\langle\mathcal{O}_{\Delta}\rangle^{4}}{r_{+}^{4\Delta}}+\cdots\right),\label{muexpandgene2}
\end{equation}
According to Eq.(\ref{chiequgene2}), we have
\begin{equation}
z\chi'_{1}(0)=\lambda\left(\mathcal{C}_{2}+\mathcal{C}_{3}\frac{\langle\mathcal{O}_{\Delta}\rangle}{r_{+}^{\Delta}}+\mathcal{C}_{4}\frac{\langle\mathcal{O}_{\Delta}\rangle^{2}}{r_{+}^{2\Delta}}\right),\label{chedgene2}
\end{equation}
where
\begin{eqnarray}
\mathcal{C}_{2} & = & \int_{0}^{1}dz\ln z\frac{z^{2\Delta-1}F^{2}(z)}{1-z^{2}},\nonumber \\
\mathcal{C}_{3} & = & \int_{0}^{1}dz\ln z\frac{c_{3}z^{3\Delta-1}F^{3}(z)}{1-z^{2}},\nonumber \\
\mathcal{C}_{4} & = & \int_{0}^{1}dz\ln z\frac{c_{4}z^{4\Delta-1}F^{4}(z)}{1-z^{2}},\label{chedgene2c}
\end{eqnarray}
and we can obtain the condensation operator $\langle\mathcal{O}_{\Delta}\rangle$
near the critical temperature
\begin{equation}
\langle\mathcal{O}_{\Delta}\rangle\propto T_{c}^{\Delta}\left(1-\frac{T}{T_{c}}\right).\label{Oexpandgene2}
\end{equation}
\section{Conclusion}
We have studied holographic superconductors with Maxwell electrodynamics, Born--Infeld electrodynamics and the St\"uckelberg form in a planar AdS3 black hole spacetime. The analytical computations are based on the Sturm--Liouville eigenvalue problem.
Within the framework of Maxwell electrodynamics, we discussed AdS3 holographic superconductors analytically in the case where the equation of motion has two different characteristic roots.
Then we studied the scalar condensation and the phase transitions of holographic superconductor models in the framework of Born--Infeld electrodynamics. In the probe limit, we obtained the relation between the critical temperature and the charge density; the critical temperature is clearly affected by the value of the Born--Infeld coupling parameter $b$.
Finally, we calculated the superconductor phase transition in the St\"uckelberg form, finding that the critical temperature and the critical exponent are affected by the values of the St\"uckelberg parameters $c_{\gamma}$, $c_{4}$ and $\gamma$.
We would like to thank Prof.\ Jiliang Jing for his guidance on a draft of this paper.
The purpose of this paper is to establish the regularity of the
weak solutions for a certain nonlinear biharmonic equation in ${
\mathbb{R}^N}$. We consider solutions $u\colon {\mathbb{R}^N}\rightarrow {\mathbb{R}}$ of
the problem
$$
\left\{
\begin{array}{lr}
\Delta^2u+a(x)u=g(x,u),\\
u\in H^2 ({\mathbb{R}^N}),
\end{array}
\right.
\eqno{(1.1)}
$$
where the condition $u\in H^2(\mathbb{R}^N)$ plays the role of a boundary
value condition, and also expresses explicitly
that the differential equation is to be satisfied in the weak sense.
We assume
that
\begin{enumerate}
\raggedright
\item[$H_1)$] $g(x,u)\colon {\mathbb{R}^N}\times {\mathbb{R}^1}\rightarrow {\mathbb{R}^1}
$ is measurable in $x$ and
continuous in $u$, and
$\sup\limits_{\substack{x\in {\mathbb{R}^N}\\0\le u\le M_{\mathstrut}}}
\left|g(x,u)\right|<\infty$
for every $M>0$;
\item[$H_2)$] there exist two constants $\sigma>\delta>0$ and two functions
$b_1(x),b_2(x)\in L^{\infty}(\mathbb{R}^N)$
such that $\left|g(x,u)\right|\le b_1(x)|u|^{\delta + 1}+b_2(x)|u|^{\sigma+1}$;
\item[$H_3)$] $\lim\limits_{\left|x\right|\rightarrow\infty}a(x)=k^2>0$ with $k>0$ and
$(k^2-a(x))\in L^2(\mathbb{R}^N)\cap L^{\infty}(\mathbb{R}^{N})$.
\end{enumerate}
Then we have the following theorems:
\begin{theorem}
\label{Theorem1.1} Assume that $H_1)$ to $H_3)$ hold with
$\sigma+1 <\frac{N+4}{N-4}\, $ if $N\ge 5$. Let $u$ be a weak
solution of \textup{(1.1)}. Then $u\in H^4(\mathbb{R}^N)\cap W^{2,
s}{(\mathbb{R}^N)}$ for $2\le s \le +\infty$. In particular $u\in
C^2(\mathbb{R}^N)$ and
$\lim\limits_{\left|x\right|\rightarrow\infty}u(x)=0$,
$\lim\limits_{\left|x\right|\rightarrow\infty}\Delta u(x)=0$.
\end{theorem}
Dealing with regularity of solutions
is much more complicated for biharmonic equations
than for problems that can be treated by well-developed
standard methods, such as second-order elliptic problems.
First of all, there is no maximum principle for the biharmonic
problem, so we cannot obtain estimates of the solutions by the
methods used to deal with second-order elliptic problems.
Secondly, we know little about the properties of the
eigenfunctions of the biharmonic operator in $\mathbb{R}^N$. To overcome
these difficulties, we first introduce the fundamental solutions
for the linear biharmonic operator $\Delta ^2 +k^2 $ for $k >0$.
By applying some properties of Hankel functions, which are the
solutions of Bessel's equation, we obtain the asymptotic
representation of the fundamental solution of $\Delta^2 +k^2 $ at
$\infty$ and $0$. Then we prove that, for $p>1$,
$$
\Delta ^2 -\lambda \colon \quad W^{2, p}(\mathbb{R}^N) \longrightarrow
L^p(\mathbb{R}^N)
$$
is an isomorphism if $\lambda <0$. Some estimates of the solutions
of (1.1) can be obtained from the properties of the fundamental
solutions of $\Delta ^2 -\lambda$. We also establish some $L^p$
theory for the biharmonic problem (1.1) so that a bootstrap
argument can be used to deduce the regularity of the solutions of
the biharmonic problem (1.1). Please refer to Grunau \cite{g},
Jannelli \cite{j}, Noussair, Swanson and Yang \cite{nsy}, Peletier
and Van der Vorst \cite{pv}, Pucci and Serrin \cite {ps}
for the early results
on the existence and other properties of solutions associated with biharmonic operators.
The organization of this paper is as follows:
In Section \ref{S2Fun}, we introduce the fundamental
solutions of $\Delta ^2 -\lambda$ for $\lambda <0$ and establish some properties of
these fundamental solutions.
In Section \ref{S3H4r}, we show that a weak
solution of the linear problem
$$
\left\{
\begin{array}{lr}
\Delta^2u-\lambda u=f(x),\\
u\in H^2 (\mathbb{R}^N),
\end{array}
\right.
\eqno{(1.2)}
$$
belongs to $H^4(\mathbb{R}^N)$ whenever $f\in L^2(\mathbb{R}^N)$.
In Section \ref{S4W2p}, we obtain a sharper relationship between the
regularity of the weak solutions of the linear biharmonic problem (1.2) and the
properties of the inhomogeneous term $f$ in (1.2).
In Section \ref{S5Reg}, we establish the regularity of
the weak solutions for the nonlinear problem (1.1).
\section{\label{S2Fun}Fundamental solutions }
In this section, we give some properties of the fundamental
solutions for the biharmonic operators $\Delta^2+k^2$. The proof
of these properties can be found in \cite{dengli}.
\begin{lemma}
\label{Lemma2.2}\ Let $G^{(N)}_{k}(|x|)$
be the fundamental solutions of biharmonic operator $\triangle^2 +k^2$ for $k>0$ and
$g^{(N)}_{\delta}(|x|)$
be the fundamental solutions of Laplace operator $-\triangle
+\delta$. Then we have
\begin{enumerate}
\item[{\textup{i)}}]
$$
G^{(N)}_k(x)\in C^{\infty}({\mathbb{R}^N})\setminus
\{0\}
$$
and
$$
\Delta^2 G^{(N)}_k(x)+k^2G^{(N)}_k(x)=0 {\text{\qquad for }} x\not=0\,.
\eqno{(2.20)}
$$
\item[{\textup{ii)}}] As $\left|x\right|\rightarrow\infty$,
$$
e^{(\sqrt{k}/\sqrt{2})\left|x\right|}G^{(N)}_k(x)\rightarrow 0
{\text{\quad and\quad }} e^{(\sqrt{k}/\sqrt{2})\left|x\right|}
\left|\nabla G^{(N)}_k(x)\right|
\rightarrow 0\,.
\eqno{(2.21)}
$$
\item[{\textup{iii)}}] As $\left|x\right|\rightarrow 0$,
\begin{align*}
&
\begin{aligned}
G^{(N)}_k(r)
&=\frac{2^{\nu-2}\Gamma(\nu-1)}{2(2\pi)^{N/2}}r^{2-2\nu}\\
&\qquad+
O(r^{4-2\nu})
\vphantom{\frac{2^{\nu-2}\Gamma(\nu-1)}{2(2\pi)^{N/2}}}
\end{aligned}
& &
\text{\quad if }\nu=\frac{N-2}{2}>1\text{ and }\nu\notin {\mathcal{N}};
\\[3\jot]
&
\begin{aligned}
G^{(N)}_k(r)
&=\frac{2^{\nu-2}\Gamma(\nu-1)}{2(2\pi)^{N/2}}r^{2-2\nu}\\
&\qquad+
O(r^{4-2\nu}+{\ln r})
\vphantom{\frac{2^{\nu-2}\Gamma(\nu-1)}{2(2\pi)^{N/2}}}
\end{aligned}
& &
\text{\quad if }\nu=\frac{N-2}{2}\ge 2\text{ and }\nu\in {\mathcal{N}};
\\[3\jot]
&
\begin{aligned}
G^{(N)}_k (r)
&\approx O(\ln r)
\end{aligned}
& &
\text{\quad if } N=4 \ (\textstyle\nu=\frac{N-2}{2}=1 );
\\[3\jot]
&
\begin{aligned}
G^{(N)}_k (r)
&=O(1)
\end{aligned}
& &
\text{\quad if } N=2,3 \ (\textstyle\nu=0,\frac{1}{2}).
\end{align*}
\item[{\textup{iv)}}] $|G^{(N)}_k(r)|\le Cg^{(N)}_{\delta}(r)$
for some positive constants $C$ and
$0<\delta<\frac{\sqrt{k}}{\sqrt{2}}$.
\end{enumerate}
\end{lemma}
It
follows from properties {\textup{ii)}} and {\textup{iii)}} of Lemma \ref{Lemma2.2}
that:
\begin{corollary}
\label{Corollary2.3}
$$
\begin{aligned}
&G^{(N)}_k(x)\in L^p({\mathbb{R}^N}) & & \text{ for } \ 1\le p\le +\infty , & & \text{ if } \ N=2,3,\\
&G^{(N)}_k(x)\in L^p({\mathbb{R}^N}) & & \text{ for } \ 1\le p<+\infty , & & \text{ if } \ N=4,\\
&G^{(N)}_k(x)\in L^p({\mathbb{R}^N}) & & \text{ for } \ 1\le p<\tfrac{N}{N-4}, & & \text{ if } \ N\ge 5,\\
&\left|\nabla G^{(N)}_k(x)\right|\in L^p & & \text{ for } \ 1\le p<\tfrac{N}{N-3}, & & \text{ if } \ N>3,\\
&\left|\nabla G^{(N)}_k(x)\right|\in L^p & & \text{ for } \ 1\le p<+\infty, & & \text{ if } \ N=3,\\
&\left|\nabla G^{(N)}_k(x)\right|\in L^p & & \text{ for } \ 1\le p\le +\infty, & & \text{ if } \ N=2,\\
&\left|\Delta G^{(N)}_k(x)\right|\in L^p & & \text{ for } \ 1\le p<\tfrac{N}{N-2}, & & \text{ if } \ N\ge 3,\\
&\left|\Delta G^{(N)}_k(x)\right|\in L^p & & \text{ for } \ 1\le p<+\infty, & & \text{ if } \ N=2.
\end {aligned}
\eqno {(2.24)}
$$
\end{corollary}
Using this information about $G^{(N)}_k(x)$, we can express
solutions of the inhomogeneous biharmonic equation as convolutions
of fundamental solutions with the inhomogeneous term. The
following Theorem can also be found in \cite {dengli}.
\begin{theorem}
\label{Theorem2.4}\
\begin{enumerate}
\item[{\textup{i)}}] Let $f\in L^2({\mathbb{R}^N})\cap L^{\infty}({\mathbb{R}^N})$
and
$$
u=\int_{\mathbb{R}^N}f(z)G^{(N)}_k(x-z)\,dz\,.
$$
Then
$$
\Delta^2u+k^2u=f(x)\,.
$$
\item[{\textup{ii)}}] Let $u$ be a distribution such that
$$
\Delta^2 u+k^2u=f
$$
and $f\in L^2({\mathbb{R}^N})\cap L^{\infty}({\mathbb{R}^N})\,$. Then
$$
u=\int_{\mathbb{R}^N}f(z)G^{(N)}_k(x-z)\,dz\,.
\eqno{(2.25)}
$$
\item[{\textup{iii)}}] There are no nontrivial distributions such that
$$
\left\{
\begin{array}{cl}
\Delta^2u+k^2u=0\,,\\
u\in W^{2,2}({\mathbb{R}^{N}})\,.
\end{array}
\right. \eqno{(2.26)}
$$
\end{enumerate}
\end{theorem}
\section{\label{S3H4r}$H^4$-regularity}
The main purpose of this section is to show that a weak
solution of the linear problem
$$
\left\{
\begin{array}{lr}
\Delta^2u-\lambda u=f(x),\\
u\in H^2 (\mathbb{R}^N),
\end{array}
\right.
\eqno{(3.1)}
$$
belongs to $H^4(\mathbb{R}^N)$ whenever $f\in L^2(\mathbb{R}^N)$. To this end, we recall a
well-known result which can be found in \cite {rs1}.
\begin{lemma}
\label{Lemma3.1} Let $h\in L^p(\mathbb{R}^N)$ for some $p\in [1,+\infty]$,
and consider the equation
$$-\Delta u+u=h\eqno{(3.2)}$$
in the sense of distributions.
\begin{enumerate}
\item[{\textup{a)}}] There is a unique tempered distribution $u=\Gamma(h)$
satisfying \textup{(3.2)}.
\item[{\textup{b)}}] If $h\in L^p(\mathbb{R}^N)$ for some $p\in (1,+\infty)$, then
$\Gamma (h)\in W^{2,p}(\mathbb{R}^N)$ and there exists a constant $C(N,p)$ such that
$$
\left\|\Gamma (h)\right\|_{W^{2,p}}\le C(N,p)\left\|h\right\|_{L^p}
$$
for all $h\in L^p(\mathbb{R}^N)$.
\item[{\textup{c)}}] For $p\in (1,+\infty)$,
$-\Delta+1\colon W^{2,p}({\mathbb{R}^N})\rightarrow
L^p(\mathbb{R}^N)$ is an isomorphism.
\end{enumerate}
\end{lemma}
By applying this lemma, we can obtain the $W^{4,p}(\mathbb{R}^N)$ regularity
for the linear biharmonic problem.
\begin{lemma}
\label{Lemma3.2} Let $v\in W^{2,p}(\mathbb{R}^N)$, $ w\in L^p(\mathbb{R}^N)$
for some $p\in (1,+\infty)$ be such that
$$\int_{\mathbb{R}^N}\Delta v\Delta z\,dx = \int_{\mathbb{R}^N}wz\,dx
\text{\qquad for all} \ z\in C^{\infty}_0(\mathbb{R}^N)\,.\eqno{(3.3)}$$
Then $v\in W^{4,p}(\mathbb{R}^N)$ and $\Delta^2 v=w$.
\end{lemma}
\begin{proof}
{}From (3.3), it follows that $u=\Delta v$ is a distribution solution of
$$\Delta u=w \ \ {\text{ and }} \ \
u,w\in L^p(\mathbb{R}^N)\,.\eqno{(3.4)}$$
Thus
$$(-\Delta+1)u=u-w\in L^p(\mathbb{R}^N)\,.$$
By applying Lemma \ref{Lemma3.1} we find
that
$-\Delta+1 \colon W^{2,p}({\mathbb{R}^N})\rightarrow L^p(\mathbb{R}^N)$ is an isomorphism. So there
exists $\varphi\in W^{2,p}(\mathbb{R}^N)$ such that
$$(-\Delta+1)\varphi=u-w\,,$$
that is
$$-\int_{\mathbb{R}^N}\varphi\Delta z\,dx+\int_{\mathbb{R}^N}\varphi z\,dx=\int_{\mathbb{R}^N}uzdx-\int_{\mathbb{R}^N}wz\,dx$$
for all $z\in C^{\infty}_0{(\mathbb{R}^N)}$. From (3.3) we have
$$\int_{\mathbb{R}^N}(\varphi-u)\Delta z\,dx=\int_{\mathbb{R}^N}(\varphi-u)z\,dx
\text{\qquad for all} \ z\in C^{\infty}_0(\mathbb{R}^N)$$ and hence
$$\int_{\mathbb{R}^N}(\varphi-u)(-\Delta z+z)\,dx=0
\text{\qquad for all} \ z\in C^{\infty}_0(\mathbb{R}^N)\,.\eqno{(3.5)}$$
Consider the equation
$$-\Delta z+z=|\varphi-u|^{p-2}(\varphi-u)\,.\eqno{(3.6)}$$
It follows from $\varphi-u\in L^p$ that
$|\varphi-u|^{p-2}(\varphi-u)\in L^{p'}(\mathbb{R}^N)$ with
$\frac{1}{p}+\frac{1}{p'}=1$. By Lemma \ref{Lemma3.1},
the problem (3.6)
possesses a unique solution $z\in W^{2,p}(\mathbb{R}^N)$. Since
$C^{\infty}_0(\mathbb{R}^N)$ is dense in $W^{2,p'} (\mathbb{R}^N)$, we can find a
sequence $ \{{z_n} \} \subset C^{\infty}_0 (\mathbb{R}^N)$ such that
$$z_n\rightarrow z {\text{\quad in }} W^{2,p'}(\mathbb{R}^N) {\text{\quad as }}
n \rightarrow\infty.$$
{}From (3.5) and (3.6), we have
\begin{align*}
0=\int_{\mathbb{R}^N}(\varphi-u)(-\Delta z_n+z_n)\,dx
&\rightarrow\int_{\mathbb{R}^N}(\varphi-u)(-\Delta z+z)\,dx\\
&=\int_{\mathbb{R}^N}|\varphi-u|^p \,dx\,.
\end{align*}
This implies that $\varphi-u\equiv 0$ and hence $u\in W^{2,p}(\mathbb{R}^N)$. It follows
that $v\in W^{4,p}(\mathbb{R}^N)$.
\end{proof}
To obtain the $H^4$-regularity of solutions of (3.1), we rewrite
the
problem (3.1) in the form
$$
\left\{
\begin{array}{lr}
\Delta^2u=f+\lambda u,\\
u\in H^2(\mathbb{R}^N).
\end{array}
\right.
$$
The $H^4$-regularity of solutions of (3.1) follows from Lemma \ref{Lemma3.2}.
In fact, we can
get
a
more general result:
\begin{lemma}
\label{Lemma3.3} Let $f\in L^p(\mathbb{R}^N)$ for some $p\in (1,+\infty)$,
and let
$u$ be the solution of
the
linear biharmonic problem
$$
\left\{
\begin{array}{lr}
(\Delta^2-\lambda)u=f,\\
u\in W^{2,p}(\mathbb{R}^N).
\end{array}
\right.
\eqno{(3.7)}
$$
Then $u\in W^{4,p}(\mathbb{R}^N)$.
\end{lemma}
In the following lemma, we show that
the
problem (3.7) possesses a unique solution
$u\in W^{2,p} (\mathbb{R}^N)$ for given
$p\in [2,+\infty)$ if $f\in L^p(\mathbb{R}^N)$ and $\lambda<0$.
\begin{lemma}
\label{Lemma3.4} For $p\in [2,+\infty)$, $f\in L^p(\mathbb{R}^N)$,
the
problem \textup{(3.7)} possesses a unique solution if $\lambda<0$.
\end{lemma}
\begin{proof}
{}From Lemma \ref{Lemma3.3}, the solution of (3.7) must belong to
$W^{4,p}(\mathbb{R}^N)$.
Suppose that $u\in W^{4,p}({\mathbb{R}^N})$
is a solution of the homogeneous problem
$$
\left\{
\begin{array}{lr}
\Delta^2u-\lambda u=0,\\
u\in W^{4,p}({\mathbb{R}^N}).
\end{array}
\right. \eqno{(3.8)}
$$
Rewrite (3.8) in the form
$$
\left\{
\begin{array}{lr}
-\Delta(-\Delta u)=\lambda u,\\
u\in W^{4,p}({\mathbb{R}^N}),\;-\Delta u\in W^{2,p}({\mathbb{R}^N}).
\end{array}
\right.
$$
By using a bootstrap argument, it follows that
$$u\in C^4 ({\mathbb{R}^N})\cap L^p({\mathbb{R}^N}),
\qquad \Delta u\in C^2({\mathbb{R}^N})\cap L^p({\mathbb{R}^N}),$$
and $\lim\limits_{\left|x\right|\rightarrow\infty}u(x)=0$, $
\lim\limits_{\left|x\right|\rightarrow\infty}\Delta u(x)=0$. Define
$u_1=(\Delta-\sqrt{\lambda})u$, $u_2=(\Delta+\sqrt{\lambda})u$.
Then
$$(\Delta+\sqrt{\lambda})u_1=0, \qquad (\Delta-\sqrt{\lambda})u_2=0,\eqno{(3.9)}$$
and $$u=\frac{1}{2\sqrt{\lambda}}(u_2-u_1),$$
$$\lim_{\left|x\right|\rightarrow\infty}u_1(x)=0 , \qquad
\lim_{\left|x\right|\rightarrow\infty}u_2(x)=0.
$$
For
$\lambda<0$, the solution of (3.9) can be expressed
in terms of
Hankel
functions. By the asymptotic behavior of Hankel functions (see
(2.8)), we can deduce that
$$
e^{\Im (\lambda)^{1/4}\left|x\right|}u_i(x)\rightarrow 0,
\quad i=1,2, \ \
\text{\quad as } \left|x\right|\rightarrow \infty.$$ Thus we have
$$e^{\Im (\lambda)^{1/4} \left|x\right|}u(x)\rightarrow 0
\text{\qquad as } \left|x\right|\rightarrow \infty;\eqno{(3.10)}$$
it follows from (3.10) that $u\in L^r({\mathbb{R}^N})$ for all $r\in
[2,+\infty)$.
In particular, $u\in L^2({\mathbb{R}^N})$ and hence $u\in H^{4}({\mathbb{R}^N})$.
Theorem \ref{Theorem2.4} gives us that $u\equiv 0$.
This completes the proof of our lemma.
\end{proof}
\section{\label{S4W2p}$W^{2,p}(\mathbb{R}^N)$-regularity}
In this section, we obtain a sharper relationship between the
regularity of the weak solutions of
the
linear biharmonic problem (3.1) and the
properties of the inhomogeneous term $f$ in (3.1).
Recalling
the properties of the fundamental solution $G^{(N)}_k$ for $k>0$
(see Corollary \ref{Corollary2.3}), Young's inequality for
convolutions \cite {T} shows
that the convolution $f*G^{(N)}_k$
defines an element of $L^s( \mathbb{R}^N)$ subject to the following restrictions:
$$
\left\{
\begin{array}{ll}
p\le s\le +\infty & {\text{ if }} \ p>\frac{N}{4}\,,\\
p\le s < +\infty & {\text{ if }} \ p=\frac{N}{4}\,,\\
p\le s\le\frac{Np}{N-4p} & {\text{ if }} \ 1\le p<\frac{N}{4}\,.
\end{array}
\right.
\eqno{(4.1)}
$$
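Explicitly, $\left|f*G^{(N)}_k\right|_{L^s}\le\left|f\right|_{L^p}\left|G^{(N)}_k\right|_{L^r}$ whenever $1+\frac1s=\frac1p+\frac1r$. If, for instance, $N\ge 5$ and $1\le p<\frac N4$, Corollary \ref{Corollary2.3} gives $G^{(N)}_k\in L^r$ for every $1\le r<\frac{N}{N-4}$, so that
$$
\frac1s=\frac1p+\frac1r-1>\frac1p-\frac4N\,, \qquad\text{that is,}\quad s<\frac{Np}{N-4p}\,;
$$
the endpoint $s=\frac{Np}{N-4p}$ in (4.1) requires a slightly sharper convolution estimate (of Hardy--Littlewood--Sobolev type), exploiting the bound $G^{(N)}_k(x)=O\left(|x|^{4-N}\right)$ near the origin from Lemma \ref{Lemma2.2}. The remaining cases in (4.1), as well as the restrictions (4.3) and (4.4) below, are obtained in the same way.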
Setting $T_kf=f*G^{(N)}_k$, we see that
$$
T_k\colon L^p(\mathbb{R}^N)\rightarrow L^s(\mathbb{R}^N) \text{ is a bounded linear operator.}
\eqno {(4.2)}
$$
Referring again to Corollary \ref{Corollary2.3},
we can deduce that for $i=1,2,\dots, N$,
the convolution $f*\partial _i G^{(N)}_k$ defines
an element of $L^s$ whenever $f\in L^p$ subject to the restrictions
$$
\left\{
\begin{array}{ll}
p\le s \le +\infty & {\text{ if }} \ p>\frac{N}{3}\,,\\
p\le s<+\infty & {\text{ if }} \ p=\frac{N}{3}\,,\\
p\le s<\frac{Np}{N-3p} & {\text{ if }} \ 1\le p<\frac{N}{3}\,;
\end{array}
\right.
\eqno{(4.3)}
$$
and the convolution $f*\Delta G^{(N)}_k$ defines an element of $L^s$ whenever $f\in L^p$
subject to the restrictions
$$
\left\{
\begin{array}{ll}
p\le s \le +\infty & {\text{ if }} \ p>\frac{N}{2}\,,\\
p\le s<+\infty & {\text{ if }} \ p=\frac{N}{2}\,,\\
p\le s<\frac{Np}{N-2p} & {\text{ if }} \ 1\le p<\frac{N}{2}\,.
\end{array}
\right.
\eqno{(4.4)}
$$
Setting $S^i_k f=f*\partial_i G^{(N)}_K$, $i=1,2,\dots,N$, and
$S^{\Delta}_k f=f*\Delta G^{(N)}_k$,
we see that
\begin{align*}
S^i_k \colon L^p\rightarrow L^s
& {\text{ is a bounded linear operator}}\tag*{(4.5)}\\
\intertext{under the restrictions (4.3) and}
S^{\Delta}_k \colon L^p\rightarrow L^s
& {\text{ is a bounded linear operator}}\tag*{(4.6)}
\end{align*}
under the restrictions (4.4).
\begin{theorem}
\label{Theorem4.1} Given $k>0$ and $f\in C^2_0(\mathbb{R}^N)$, set
$$T_kf(x)=f*G^{(N)}_k(x) {\text{\qquad for }} x\in{\mathbb{R}^N}\,.\eqno{(4.7)}$$
Then $T_k f\in C^4({\mathbb{R}^N})$, $\lim\limits_{\left|x\right|\rightarrow\infty}T_k f(x)=0$, and
$u=T_k f$ satisfies the biharmonic equation
$$\Delta^2 u=\lambda u+f {\text{\qquad on }} {\mathbb{R}^N},\eqno{(4.8)}$$
where $\lambda=-k^2$. Furthermore, for all $x\in {\mathbb{R}^{N}}$, the
following formulae are valid for $i,j,l,m=1,2,\dots,N$:
\begin{align*}
T_kf(x) &=\int\! f(x-z)G^{(N)}_k(z)\,dz
=\int\! G^{(N)}_k(x-z)f(z)\,dz,
\\
\partial_i T_k f(x) &= \int\! \partial_i f(x-z)G^{(N)}_k(z)\,dz
=\int\! \partial_i G^{(N)}_k(x-z)f(z)\,dz,
\\
\partial _j\partial_i T_k f(x)
&=\int\!\partial_i f(x-z)\partial_j G^{(N)}_k(z)\,dz,
\\
\partial _m\partial _j\partial_i T_k f(x)
&=\int\!\partial_m\partial_i f(x-z)\partial_j G^{(N)}_k(z)\,dz
=\int\!\partial_i f(x-z)\partial_m\partial_j G^{(N)}_k (z)\,dz\rlap{,}\!
\\
\partial_l\partial_m\partial_j\partial_i T_k f(x)
&=\int\!\partial_m\partial_j f(x-z)\partial_l\partial_i G^{(N)}_k(z)\,dz.
\end{align*}
\end{theorem}
\begin{proof}
Noting that
$$T_kf(x)=T_1f_k(kx)
\text{\qquad where } f_k(y)=k^{-4}f\left(\frac{y}{k}\right),$$
we see that, by a change of scale, it is enough to treat the case $k=1$. In the
following, we take $k=1$ and simplify the notation by setting $T_1=T$,
$G^{(N)}_1=G^{(N)}$. Since $f$, $\partial_i f$ and $\partial_{ij}f\in C^0(\mathbb{R}^N)$
and $\partial_i G^{(N)},\partial_{ij}G^{(N)}\in L^1(\mathbb{R}^N)$, it
follows that the convolutions
\begin{gather*}
Tf=f*G^{(N)}, \qquad \partial_i f* G^{(N)}, \qquad f*\partial_i G^{(N)},\\
\partial_i f*\partial_j G^{(N)}, \qquad \partial_{ij}f*\partial_{m}G^{(N)}, \qquad
\partial_i f*\partial_{jm} G^{(N)},
\text{\quad and\quad }\partial_{ij}f*\partial_{ml}G^{(N)}
\end{gather*}
are defined and are continuous on ${\mathbb{R}^N}$. They all tend to zero as
$\left|x\right|\rightarrow\infty$.
Hence to prove the theorem it is sufficient to establish the following statements.
\begin{enumerate}
\item[1)] $\partial_i Tf$ exists and $\partial_i Tf=\partial_if*G^{(N)}$.
\item[2)] $\partial_if*G^{(N)}=f* \partial_i G^{(N)}$.
\item[3)] $\partial_{ij}Tf$ exists and $\partial_j\partial_i Tf
=\partial_jf*\partial_iG^{(N)}$.
\item[4)] $\partial_{mji}Tf$ exists and $\partial_{mij}Tf
=\partial_m\partial_jf*\partial_iG^{(N)}
=\partial_jf*\partial_m\partial_iG^{(N)}$.
\item[5)] $\partial_{lmji}Tf$ exists and $\partial_{lmji}Tf
=\partial_m\partial_jf*\partial_l\partial_iG^{(N)}$.
\item[6)] $\Delta^2Tf+Tf=f$ on ${\mathbb{R}^N}$.
\end{enumerate}
(1) Let $e_i$ be an element of the usual basis for ${\mathbb{R}^N}$ and $h$ a
non-zero real number. Then
$$\frac{Tf(x+he_i)-Tf(x)}{h}=\int_{\mathbb{R}^N}\frac{f(x+he_i-z)
-f(x-z)}{h}G^{(N)}(z)\,dz$$
and
$$\lim_{h\rightarrow 0}\frac{f(x+he_i -z)-f(x-z)}{h}=\partial_if(x-z).$$
Also
\begin{align*}
\left|\frac{f(x+he_i-z)-f(x-z)}{h}\right|
&\le\left|\frac{1}{h}\int^1_0\frac{d}{dt}f(x+the_i-z)\,dt\right|\\
&=\left|\int^1_0\partial_i f(x+the_i-z)\,dt\right|\\
&\le \max_{z\in{\mathbb{R}^N}}\left|\partial_i f(z)\right|=
\left|\partial_i f\right|_{\infty}.
\end{align*}
Hence, by the dominated convergence theorem,
$$\lim_{h\rightarrow 0}\frac{Tf(x+he_i)-Tf(x)}{h}
=\int_{\mathbb{R}^N}\partial_i f(x-z)G^{(N)}(z)\,dz.$$
(2) For $i=1,2,\dots,N$,
\begin{align*}
\partial_i f*G^{(N)}(x)
&=\lim_{\epsilon\rightarrow 0}\int_{|z|\ge\epsilon}\partial_if(x-z)G^{(N)}(z)\,dz\\
&=-\lim_{\epsilon\rightarrow 0}\int_{|z|\ge\epsilon}\frac{\partial}{\partial z_i}f(x-z)G^{(N)}(z)\,dz\\
&=\lim_{\epsilon\rightarrow 0}\left\{\int_{|z|=\epsilon}\frac{z_i}{|z|}f(x-z)G^{(N)}(z)\,dz
+\int_{|z|\ge\epsilon}f(x-z)\partial_i G^{(N)}(z)\,dz\right\}.
\end{align*}
Now from Lemma \ref{Lemma2.2},
\begin{align*}
\left|\int_{|z|=\epsilon}\frac{z_i}{|z|}f(x-z)G^{(N)}(z)\,dz\right|
&\le \left|f\right|_{\infty}\int_{|z|=\epsilon} \left|G^{(N)}(z)\right|\,dz\\
&=\left|f\right|_{\infty}\left|G^{(N)}(\epsilon)\right|\int_{|z|=\epsilon} \! dz
=\left|f\right|_{\infty}\left|G^{(N)}(\epsilon)\right|\cdot w_N\epsilon^{N-1}\\
&=\left\{
\begin{aligned}
&
\frac{2^{\nu-2}\Gamma(\nu-1)}{2(2\pi)^{N/2}}\epsilon^{4-N}
w_N\epsilon^{N-1}\left|f\right|_{\infty}
& & {\text{ if }} N\ge 5,\\
& \vphantom{\frac{2^{\nu-2}\Gamma(\nu-1)}{2(2\pi)^{N/2}}}
O\left(\left|\ln \epsilon\right|\right)\epsilon^{N-1} w_N \left|f\right|_{\infty}
& & {\text{ if }} N=4,\\
& \vphantom{\frac{2^{\nu-2}\Gamma(\nu-1)}{2(2\pi)^{N/2}}}
O(1)\epsilon^{N-1} w_N \left|f\right|_{\infty}
& & {\text{ if }} N=2,3,
\end{aligned}
\right.
\end{align*}
where $\nu=\frac{N-2}{2}$. Hence
$$\lim_{\epsilon\rightarrow 0}\int_{|z|=\epsilon}\frac{z_i}{|z|}f(x-z)G^{(N)}(z)\,dz=0$$
and
$$\partial_if*G^{(N)}(x)=\lim_{\epsilon\rightarrow 0}
\int_{|z|\ge\epsilon}f(x-z)\partial_iG^{(N)}(z)\,dz
=f*\partial_iG^{(N)}.$$
(3) Repeat the proof of (1) with $G^{(N)}$ replaced by $\partial_i G^{(N)}$.
(4) Repeat the proof of (1) and (2) with $G^{(N)}$ and $f$ replaced by $\partial_iG^{(N)}$ and $\partial_j f$.
(5) Repeat the proof of (1) with $G^{(N)}$ and $f$ replaced by $\partial_{il}G^{(N)}$ and $\partial_j f$.
(6)
\begin{align*}
\Delta^2 Tf(x)&=\int\Delta f(x-z)\Delta G^{(N)}(z)\,dz\\
&=\lim_{\epsilon\rightarrow 0}\int_{|z|\ge \epsilon}
\Delta _z f(x-z)\Delta G^{(N)}(z)\,dz\\
&=\lim_{\epsilon\rightarrow 0}\left\{\int_{|z|=\epsilon}
f(x-z)\frac{\partial\Delta G^{(N)}}{\partial r}-\Delta G^{(N)}
\frac{\partial f(x-z)}{\partial r}\,dz
\vphantom{+\int_{|z|\ge\epsilon}f(x-z)\Delta ^2G^{(N)}(z)\,dz}\right.\\
&\qquad+
\left.\vphantom{\int_{|z|=\epsilon}
f(x-z)\frac{\partial\Delta G^{(N)}}{\partial r}-\Delta G^{(N)}
\frac{\partial f(x-z)}{\partial r}\,dz}
\int_{|z|\ge\epsilon}f(x-z)\Delta ^2G^{(N)}(z)\,dz\right\}\\
&=\lim_{\epsilon\rightarrow 0}\int_{|z|=\epsilon}
\left(f(x-z)\frac{\partial\Delta G^{(N)}}{\partial r}
-\Delta G^{(N)}\frac{\partial f(x-z)}{\partial r}\right)\,dz-Tf(x).
\end{align*}
Since
$$
\Delta ^2G^{(N)}(z)=-G^{(N)}(z)
\text{ for all } z\not=0,
$$
we obtain that
\begin{align*}
\lim_{\epsilon\rightarrow 0}\int_{|z|=\epsilon}
\Delta G^{(N)}\frac{\partial f(x-z)}{\partial r} \,dz
&= 0,\tag*{(4.9)}\\
\lim_{\epsilon\rightarrow 0}\int_{|z|=\epsilon}f(x-z)
\frac{\partial}{\partial r}
\left(\Delta G^{(N)}(z)\right)\,dz
&= f(x).\tag*{(4.10)}
\end{align*}
In fact, from (2.13),
$$(G^{(N)}(r))'=-2\pi rG^{(N+2)} (r).$$
Thus
\begin{align*}
\left(G^{(N)}(r)\right)''
&= 2^2\pi^2r^2G^{(N+4)}(r)-2\pi G^{(N+2)}(r),\\[3\jot]
\Delta G^{(N)}(x)
&= \left(G^{(N)}(r)\right)''+\frac{N-1}{r}\left(G^{(N)}(r)\right)'\\
&= 4\pi^2r^2G^{(N+4)}(r)-2\pi N G^{(N+2)} (r),\\[3\jot]
\left(\Delta G^{(N)} (r)\right)'_r
&= -16\pi^3r^3G^{(N+6)}(r)+
(8+4N)\pi^2rG^{(N+4)}(r).
\end{align*}
By the asymptotic behavior of
$G^{(N)}_k(r)$ (see Lemma \ref{Lemma2.2}) we deduce that,
as $r=\left|x\right|\rightarrow 0$,
\begin{align*}
\Delta G^{(N)}(x)
&\approx 4{\pi}^2 r^2 G^{(N+4)}(r)
\approx\frac{2^{(N/2)-1}\Gamma\left(\frac{N}{2}\right)}{2(2\pi)^{N/2}}r^{2-N},
\tag*{(4.11)}\\[3\jot]
\left(\Delta G^{(N)}(r)\right)'_r
&\approx -16 {\pi}^3 r^3 G^{(N+6)}(r)
\approx\frac{2^{N/2}\Gamma\left(\frac{N}{2}+1\right)}{2(2\pi)^{N/2}}r^{1-N}.
\tag*{(4.12)}
\end{align*}
Thus
\begin{align*}
\left|\int_{|z|=\epsilon}\Delta G^{(N)}(z)\frac{\partial f(x-z)}{\partial r}\,dz
\vphantom{\frac{2^{(N/2)-1}\Gamma\left(\frac{N}{2}\right)}{2(2\pi)^{N/2}}}
\right|
&\approx \left|\int_{|z|=\epsilon}
\frac{2^{(N/2)-1}\Gamma\left(\frac{N}{2}\right)}{2(2\pi)^{N/2}}
|z|^{2-N}
\frac{\partial f(x-z)}{\partial r}\,dz\right|\\
& \le \left|\nabla f\right|_{L^{\infty}}
\int_{|z|=\epsilon}\frac{2^{(N/2)-1}\Gamma\left(\frac{N}{2}\right)}{2(2\pi)^{N/2}}
|z|^{2-N}\,dz\\
&= \left|\nabla f\right|_{L^{\infty}}\frac{2^{(N/2)-1}\Gamma\left(\frac{N}{2}\right)}{2(2\pi)^{N/2}}
\epsilon^{2-N}\epsilon^{N-1}w_N\rightarrow 0\\
& \qquad\text{\qquad as } \epsilon\rightarrow 0.
\end{align*}
This gives (4.9).
Now we are going to prove (4.10). From (4.12) we have
$$\epsilon^{N-1}
\left.\left(\Delta G^{(N)}(r)\right)'_r\right|_{r=\epsilon}\rightarrow
\frac{2^{N/2}\Gamma\left(\frac{N}{2}+1\right)}{(2\pi)^{N/2}}
\text{\qquad as } \epsilon\rightarrow 0.$$
Thus
\begin{align*}
\int_{|z|=\epsilon} &f(x-z)\left(\Delta G^{(N)}(z)\right)'_r\,dz
= \int_{|z|=\epsilon}(f(x-z)-f(x))\left(\Delta G^{(N)}(z)\right)'_r \,dz\\
& \qquad+
f(x)\left.\left(\Delta G^{(N)}(r)\right)'_r\right|_{r=\epsilon}\cdot \epsilon^{N-1} w_N\\
&\approx \int_{|z|=\epsilon}(f(x-z)-f(x))\cdot
\frac{2^{N/2}\Gamma\left(\frac{N}{2}+1\right)}{(2\pi)^{N/2}}r^{1-N}\,dz\\
& \qquad+
f(x)\frac{2^{N/2}\Gamma\left(\frac{N}{2}+1\right)}{(2\pi)^{N/2}}w_N\\
&= \frac{2^{N/2}\Gamma\left(\frac{N}{2}+1\right)}{(2\pi)^{N/2}}\epsilon^{1-N}
\int_{|z|=\epsilon}(f(x-z)-f(x))\,dz\\
& \qquad+
f(x)\cdot\frac{\frac{N}{2}\Gamma\left(\frac{N}{2}\right)}{{\pi}^{N/2}}
\cdot\frac{2\pi^{N/2}}{N\Gamma\left(\frac{N}{2}\right)}\\
&= \frac{2^{N/2}\Gamma\left(\frac{N}{2}+1\right)}{(2\pi)^{N/2}}\epsilon^{1-N}
\int _{|z|=\epsilon}(f(x-z)-f(x))\,dz+f(x).
\end{align*}
The limit
(4.10) follows from the facts that
$$\left|f(x-z)-f(x)\right|\le\left|\nabla f\right|_{L^{\infty}}|z|$$
for all $z\in {\mathbb{R}^N}$
and hence
$$\left|\epsilon^{1-N}\int_{|z|=\epsilon}(f(x-z)-f(x))\,dz\right|
\le \epsilon^{1-N} \left|\nabla f\right|_{L^{\infty}}\epsilon
\cdot\epsilon^{N-1}w_N\rightarrow 0.$$ This completes the proof of
Theorem \ref{Theorem4.1}.
\end{proof}
\begin{theorem}
\label{Theorem4.2} Let $\lambda=-k^2$ where $k>0$ and let $f\in L^p$
where $p\in\left(\frac{2N}{N+4},2\right]$. Then $T_kf\in H^2(\mathbb{R}^N)$ and
$\partial_i T_k f=S^i_k f$ and $\Delta T_k f=S^{\Delta}_k f$.
Furthermore, $T_kf$ is
a weak solution of \textup{(3.1)}, where $S^j_k$ and
$S^{\Delta}_k$ are given by \textup{(4.5)} and \textup{(4.6)}.
\end{theorem}
\begin{proof}
Let $\{f_n\}\subset C^2_0$ be a sequence such that
$|f_n-f|_p\rightarrow 0$ as $n\rightarrow\infty$. Since
$p>\frac{2N}{N+4}$, we have $\frac{2N}{N-4}<\frac{Np}{N-2p}$ when $N>4$ and
$p<\frac{N}{4}$. From (4.2) it follows that
$$
T_kf_n\text{ and }T_kf\in L^s
$$
and that
$$
\left|T_kf_n-T_kf\right|_{L^s}\rightarrow 0\text{ as }
n\rightarrow\infty,
$$
provided that $p\le s\le \frac{2N}{N-4}$ for $N\ge 5$ and
$p\le s <+\infty$ for $N=2,3,4$. Similarly, from (4.3), (4.4), it follows that
\begin{gather*}
S^i_k f_n\text{ and }S^i_k f\in L^s,
\\
\left|S^i_kf_n-S^i_kf\right|_{L^s}\rightarrow 0\text{ as
}n\rightarrow\infty;
\\
S^{\Delta}_kf_n\text{ and }S^{\Delta}_k f\in L^s,
\\
\left|S^{\Delta}_kf_n-S^{\Delta}_kf\right|_{L^s}\rightarrow 0\text{ as }n\rightarrow\infty,
\end{gather*}
provided that $p\le s\le 2$.
By Theorem \ref{Theorem4.1}, we know that $T_k f\in C^4$ and that
$\partial_i T_kf_n=S^i_kf_n$ for $i=1,2,\dots,N$. Putting $s=2$ in
the preceding statements we deduce that $T_k f\in H^2$ with
$\partial_i T_k f=S^i_kf$ for $i=1,2,\dots,N$,
and $\Delta T_kf=S^{\Delta}_kf$. Furthermore, setting $w=T_kf$ for any
$v\in C^{\infty}_0(\mathbb{R}^N)$, we have
\begin{align*}
\int\Delta w\Delta v-(\lambda w+f)v\,dx
&= \int w\Delta^2 v-\left(\lambda w+f\right)v \,dx\\
&= \lim_{n\rightarrow\infty}\int T_k f_n\Delta v
-\left(\lambda T_k f_n+f_n\right)v \,dx\\
&= \lim_{n\rightarrow\infty}\int \Delta (T_k f_n)
\Delta v-\left(\lambda T_k f_n +f_n\right)v\,dx\\
&= \lim_{n\rightarrow\infty}\int\left(\Delta^2 (T_k f_n)
+k^2 T_k f_n-f_n \right)v \,dx=0
\end{align*}
by Theorem \ref{Theorem4.1}. This proves that $w$ is a weak solution of (3.1).
\end{proof}
Having established this relationship between weak solutions and
convolutions with fundamental solutions,
we now have a better understanding of the
regularity of the weak solutions.
\begin{theorem}
\label{Theorem4.3} Let $f\in L^p\cap L^q$ where $p\in\left(\frac{2N}{N+4},
2\right]$ and $q\ge p$.
Let $u$ be a solution of \textup{(3.1)} for $f$ and some $\lambda\in {\mathbb{R}}$. Then
\begin{enumerate}
\item[{\textup{i)}}] $u\in W^{2,s} (\mathbb{R}^N)$ \rlap{where}
\[
\begin{array}{ll}
p\le s\le \infty & {\text{ if }} q>\frac{N}{2}\,,\\
p\le s<\infty & {\text{ if }} q=\frac{N}{2}\,,\\
p\le s<\frac{Nq}{N-2q} & {\text{ if }} q<\frac{N}{2}\,;
\end{array}
\]
\item[{\textup{ii)}}] if $q>\frac{N}{4}$, then $u\in L^{\infty}\cap C$ and
\[
\lim_{\left|x\right|\rightarrow\infty}u(x)=0;
\]
\item[{\textup{iii)}}] $u\in L^s$ where
\[
\begin{array}{ll}
p\le s<\infty & {\text{ if }} q=\frac{N}{4}\,,\\
p\le s<\frac{Nq}{N-4q} & {\text{ if }} q<\frac{N}{4}\,.
\end{array}
\]
\end{enumerate}
\end{theorem}
\begin{proof}
{\textup{i)}} \ For all $v\in C^{\infty}_0(\mathbb{R}^N)$,
$$
\int\Delta u \Delta v \,dx
=\int\left(\lambda u+f\right)v\,dx
=\int\left(-u+g\right)v\,dx\,,
$$
where $g=(\lambda+1)u+f$. Now since $u\in H^2(\mathbb{R}^N)$, $(\lambda+1)u\in L^r$ for $2\le r<+\infty$ if $N=2,3,4$ and
$2\le r<\frac{2N}{N-4}$ for $N\ge 5$. Thus
$u$ is a weak solution of
$$\Delta ^2 u=-u+g,$$
and so $u=T_1(\lambda+1)u+T_1 f$ (from Theorem \ref{Theorem4.2}). Using Lemma \ref{Lemma3.3} and
a bootstrap argument to deal with the term $T_1(\lambda+1)u$,
the result now follows from (4.1) to (4.6).
{\textup{ii)}} \ By {\textup{i)}}, $u\in W^{2,s}$ for some $s>\frac{N}{2}$ provided
that $q>\frac{N}{4}$. For $s>\frac N2$, $W^{2,s}\hookrightarrow C\cap L^{\infty}$ and
$\lim\limits_{\left|x\right|\rightarrow\infty}u(x)=0$ for all $u\in W^{2,s}$.
{\textup{iii)}} \ This follows from {\textup{i)}} and the Sobolev inclusions, or
directly from (4.1).
\end{proof}
\section{\label{S5Reg}Regularity for nonlinear equations}
In this section, we establish the regularity of weak solutions of (1.1).
\begin{proof}[ Proof of Theorem \textup{\ref{Theorem1.1}}]
Let $u(x)$ be
a
solution of (1.1). Set
$f=g(x, u(x))-a(x) u(x)$. Then $u(x)$ must be
a
solution of
$$
\left\{
\begin{array}{lr}
\Delta^2 u=f,\\
u\in H^2 (\mathbb{R}^N).
\end{array}
\right.
\eqno{(5.1)}
$$
{}From the assumptions $H_2)$ and $H_3)$ and $1\le \sigma+1<\frac{N+4}{N-4}$,
it follows that for all $u\in H^2(\mathbb{R}^N)$,
$$
f\in L^p \text{\qquad where }
\left\{
\begin{array}{ll}
2\le(\sigma+1)p<\infty &{\text{ if }} N=2,3,4,\\
2\le(\sigma+1)p\le \frac{2N}{N-4} &{\text{ if }} N\ge 5.
\end{array}
\right.
$$
Now
$$
\left[\frac{2}{\sigma+1},\frac{2N}{N-4}\cdot\frac{1}{\sigma+1}\right]
\cap \left(\frac{2N}{N+4},2\right]\not=\emptyset,
$$
since the restrictions on $\sigma$ ensure that
$\frac{2}{\sigma+1}\le 2$ and $\frac{2N}{N-4}\cdot\frac{1}{\sigma+1}>\frac{2N}{N+4}$.
Thus $u$ is a weak solution of (3.1) for $f$ and $\lambda=0$ where $f\in L^p$ for some
$p\in\left(\frac{2N}{N+4},2\right]$. From Theorem \ref{Theorem4.3},
$u\in W^{2,s}(\mathbb{R}^N)$ for
some $s>2$, and so
$$
f\in L^p \text{\qquad where }
\left\{
\begin{array}{ll}
2\le (\sigma+1)p\le\infty &{\text{ if }}s>\frac{N}{2}\,,\\
2\le (\sigma+1)p<\infty &{\text{ if }} s=\frac{N}{2}\,,\\
2\le(\sigma+1)p\le\frac{Ns}{N-2s} &{\text{ if }} s<\frac{N}{2}\,.
\end{array}
\right.
$$
Noting that $\frac{Ns}{N-2s}>\frac{2N}{N-4}$ for $2<s<\frac{N}{2}$, we see by
Lemma \ref{Lemma3.2} and
a bootstrap argument
that
$u\in W^{2,s}$ for all $2\le s\le \infty$.
This implies that $u\in L^{\infty}$ and so $f\in L^2$. Again by Lemma \ref{Lemma3.2} we now
also have $u\in H^4$.
\end{proof}
\renewcommand{\footnotesize}{\normalsize}
All objects at non-zero temperature emit thermal radiation due to the thermally excited motion of particles and quasi-particles. The emitted photons connect objects radiatively at large distances, such as the earth and the sun or outer space, enabling applications in solar energy harvesting including photovoltaics~\cite{shockley_detailed_1961,markvart_practical_2003,wurfel_physics_2016,giteau_hot-carrier_2022} or radiative cooling~\cite{li_nighttime_2021,goldstein_sub-ambient_2017,raman_passive_2014,mandal_paints_2020,zhai_scalable-manufactured_2017,zhou_polydimethylsiloxane-coated_2019}.
The increasing ability to understand and control thermal radiation has resulted in the development of a variety of applications where radiative heat transfer (RHT) prevails over conduction and convection, for example thermal protection systems for spacecrafts~\cite{laub_thermal_2004} or high-temperature heat exchangers~\cite{zhang_nanomicroscale_2007,howell_thermal_2020}. Thermal radiation has already impacted many areas of science and engineering, and keeps setting higher expectations for novel disruptive technologies in the years to come~\cite{cuevas_radiative_2018,biehs_near-field_2021}.
Besides being one of the first successes of quantum physics, Planck's law ~\cite{planck_uber_1900} has been a cornerstone for modern radiative heat transfer theories. It states that a blackbody, an idealized object that perfectly absorbs radiation at all frequencies, radiates an electromagnetic spectrum which depends only on its temperature, and peaks at the thermal wavelength $\lambda_{th}$ ($\sim 10\,\,\text{$\mu$m}$ at room temperature), dictated by Wien's displacement law~\cite{planck_zur_1901}. When two objects are separated by a distance $d$ much greater than $\lambda_{th}$, the Stefan-Boltzmann law, a direct consequence of Planck's law, predicts the upper bound on the net radiative heat flux to be $\sigma (T_1^4-T_2^4)$, where $\sigma$ is the Stefan-Boltzmann constant, and $T_1\geq T_2$ are the temperatures of the two objects, respectively.
With the recent development of nanophotonics, we have access to dimensions and distances much smaller than the thermal wavelength. This offers the possibility to exploit the coupling between two objects at different temperatures in the near-field\cite{basu_review_2009,song_near-field_2015,biehs_near-field_2021}, where Planck’s and Stefan-Boltzmann's laws cease to hold~\cite{polder_theory_1971,volokitin_near-field_2007}. In fact, near-field radiative heat transfer (NFRHT) can exceed the far-field predictions by several orders of magnitude, owing to evanescent modes such as surface plasmon polaritons (SPPs) or surface phonon polaritons (SPhPs)~\cite{narayanaswamy_surface_2003,laroche_near-field_2006,ilic_overcoming_2012,tervo_near-field_2018,song_near-field_2015,datas_thermionic-enhanced_2019,ben-abdallah_fundamental_2010,ben-abdallah_harvesting_2019,caldwell_low-loss_2015} (Fig.~\ref{fig:schematics_NF_FF}(a,b)).
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{The radiative heat transfer between two bulk planar layers at different temperatures, separated by a vacuum gap of size $d$ much greater than the thermal wavelength $\lambda_{th}$ (far-field), is mediated by propagating waves $\bf(a)$. When the two layers are brought at a distance $d<\lambda_{th}$ (near-field), the radiative heat transfer becomes dominated by evanescent modes $\bf(b)$, largely surpassing the black-body limit. In panel $\bf(c)$, the radiative thermal conductance (or heat transfer coefficient) per unit area for SiC bulk plate-plate at room temperature ($T=300\,K$) is shown. For $d\ll\lambda_{th}$, with $\lambda_{th}\approx 10\,\,\text{$\mu$m}$ marked by a vertical line, $h$ undergoes a $\propto d^{-2}$ enhancement (blue line). For $d\gg\lambda_{th}$, $h$ saturates at its far-field value (green line), smaller than the black-body limit $\approx 6\,\text{W/m}^2 \text{K}$ (grey line). Non-local effects are not taken into account.
}
\label{fig:schematics_NF_FF}
\end{figure}%
Unlike far-field RHT, the upper bound of NFRHT is inversely dependent on the separation distance $d$ between the bodies. For instance, in the case of two bulk planar layers, the upper bound on NFRHT is $\displaystyle \frac{\sigma'}{d^2} (T_1^2-T_2^2)$~\cite{volokitin_resonant_2004,pendry_radiative_1999,ben-abdallah_fundamental_2010}, where $\sigma'$ depends only on universal constants.
The gap-size dependencies of the NFRHT for some canonical configurations are summarized in Table \ref{tab:gap_dependence}.
We exemplify the transition between far-field and near-field radiative heat transfer for two SiC bulk planar layers in Fig.~\ref{fig:schematics_NF_FF}. We plot the radiative thermal conductance per unit area (or heat transfer coefficient) as a function of the gap-size.
For gap sizes below 100 nm, heat transfer undergoes a boost proportional to $1/d^2$ with respect to its far-field value (due to the SPhP resonance at the vacuum-slab interface~\cite{mulet_enhanced_2002}).
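As a reference, the far-field ceiling set by the blackbody limit at $T=300\,$K is $h_{\mathrm{bb}}=4\sigma T^{3}\approx 6.1\,\mathrm{W\,m^{-2}\,K^{-1}}$, consistent with the value marked in Fig.~\ref{fig:schematics_NF_FF}(c).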
This enhancement of RHT, predicted in the near field, is extremely relevant for many applications including contactless cooling~\cite{guha_near-field_2012,kerschbaumer_contactless_2021,epstein_observation_1995}, thermal lithography~\cite{pendry_radiative_1999,howell_thermal_2020,garcia_advanced_2014,hu_tip-based_2017,malshe_tip-based_2010}, thermally assisted magnetic recording~\cite{hamann_thermally_2004,ruigrok_disk_2000,kief_materials_2018}, thermal logic circuitry~\cite{otey_thermal_2010,fiorino_thermal_2018,ben-abdallah_near-field_2014,ordonez-miranda_radiative_2019,papadakis_gate-tunable_2019,papadakis_deep-subwavelength_2021}, scanning thermal microscopy~\cite{de_wilde_thermal_2006,kittel_near-field_2008} or thermophotovoltaics~\cite{laroche_near-field_2006,mittapally_near-field_2021}.
In this perspective, we review the basic principles of NFRHT, summarizing the main theoretical and experimental efforts. We highlight recent developments related to active tuning of thermal radiation, thermophotovoltaics and non-reciprocal heat transfer, as well as in the theoretical modelling of NFRHT in planar structures. Then, we introduce what we consider to be key challenges for the development of NFRHT-related technologies: improving the quality of narrow-bandgap semiconductors for TPV, achieving nanometric vacuum gaps at scale, and approaches to access spectrally and angularly-resolved information. Finally, we present the current directions for NFRHT tuning and enhancement, with particular emphasis on emerging 2D materials for approaching thermodynamic limits.
\begin{table}[t]\label{tab:gap_dependence}
\ra{2}
\caption{Dependence of the NFRHT from the gap-size $d$ for simple geometries studied theoretically and experimentally in the literature.}
\begin{ruledtabular}
\begin{tabular}{|c|c|c|c|}
\stackanchor{PLATE-}{PLATE}&\stackanchor{SPHERE-}{PLATE}&\stackanchor{TIP-}{PLATE}&\stackanchor{SPHERE-}{SPHERE}
\vspace{1mm}
\\
\hline
{\begin{minipage}{.1\textwidth}
\vspace{0.2mm}
\includegraphics[width=0.8\linewidth]{pl_pl3.pdf}
\end{minipage}}&{\begin{minipage}{.1\textwidth}
\includegraphics[width=0.8\linewidth]{sph_pl3.pdf}
\end{minipage}}&{\begin{minipage}{.1\textwidth}
\includegraphics[width=0.8\linewidth]{tip_pl3.pdf}
\end{minipage}}&{\begin{minipage}{.1\textwidth}
\includegraphics[width=0.8\linewidth]{sph_sph3.pdf}
\end{minipage}}\\
\hline
$\displaystyle \frac{1}{d^2}$~\cite{joulain_surface_2005,fiorino_giant_2018}& $\displaystyle \frac{1}{d}\left[a\!\gg\!d\right]$\cite{golyk_small_2013,edalatpour_near-field_2016,rousseau_radiative_2009,shen_surface_2009} &$\displaystyle \log{d}$\cite{mccauley_modeling_2012}(left)&$\displaystyle \frac{1}{d}\left[a\!\gg\!d\right]$\cite{narayanaswamy_thermal_2008,chapuis_radiative_2008}\\
&$\displaystyle \frac{1}{d^3}\left[a< d\right]$\cite{mulet_nanoscale_2001}&$\displaystyle \frac{1}{d^{\alpha}},\ \alpha\in[0.3,2]$\cite{edalatpour_near-field_2016,kloppstech_giant_2017,cui_study_2017} (right)&$\displaystyle\frac{1}{d^{6}}\left[a\!\ll\!d\right]$\cite{narayanaswamy_thermal_2008,chapuis_radiative_2008}
\end{tabular}
\end{ruledtabular}
\end{table}
\section{General overview}
\subsection{Theoretical and computational methods}
NFRHT is comprehensively described within the framework of fluctuational electrodynamics (FE) proposed by Rytov in the 1950s~\cite{rytov_theory_1959,rytov_principles_1989} and refined by Polder and Van Hove in the 1970s~\cite{polder_theory_1971}.
This theory assumes that the thermal radiation originates from thermally excited fluctuating currents characterized by the fluctuation-dissipation theorem~\cite{callen_irreversibility_1951,eckhardt_macroscopic_1984}, which links the currents' correlation to the dielectric properties of the intervening media. These random electrical currents are then considered as the driving term in the macroscopic Maxwell's equations, whose solution leads to the determination of the net heat transfer.
Although the problem of NFRHT in two- and many-body systems is, in general, fully characterized~\cite{ben-abdallah_many-body_2011, biehs_near-field_2021}, analytical solutions have been derived only in a few highly symmetrical configurations, involving canonical structures such as spheres~\cite{narayanaswamy_thermal_2008,mulet_nanoscale_2001}, planes~\cite{joulain_surface_2005} and cones~\cite{mccauley_modeling_2012}.
Solving the RHT problem in complex geometries is very challenging, and requires advanced numerical approaches. Fortunately, the FE formalism can leverage the well-established computational techniques inherited from classical electromagnetic scattering.
These techniques rely on spectral and finite element methods, according to the choice of delocalized or localized functions, respectively, as basis constituents for the scattering operators~\cite{biehs_near-field_2021,song_near-field_2015,forestiere_full-retarded_2018,ching_quasinormal-mode_1998,bimonte_nonequilibrium_2017,reid_fluctuation-induced_2013}. Methods include the scattering matrix approach~\cite{bimonte_scattering_2009,messina_scattering-matrix_2011,kruger_trace_2012} as well as surface~\cite{rodriguez_fluctuating-surface-current_2013,otey_fluctuational_2014,forestiere_electromagnetic_2019} and volume~\cite{polimeridis_fluctuating_2015,jin_general_2017,forestiere_volume_2018} current formulations, including the thermal discrete-dipole approximation~\cite{edalatpour_thermal_2014,edalatpour_near-field_2016} and the finite difference time domain approach~\cite{luo_thermal_2004,rodriguez_frequency-selective_2011}.
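To make the plate--plate case of Fig.~\ref{fig:schematics_NF_FF}(c) concrete, the following minimal sketch evaluates the heat transfer coefficient within fluctuational electrodynamics (the Polder--van Hove expression), retaining only the $p$-polarized evanescent contribution that dominates at nanometric gaps. It is an illustrative sketch rather than the implementation used in any of the cited works: the SiC Lorentz-oscillator parameters, the frequency window and the wavevector cutoff $k_{\max}=50/d$ are representative choices made only for this example.
\begin{verbatim}
# Illustrative sketch: near-field heat transfer coefficient h(d) between
# two identical SiC half-spaces; p-polarized evanescent contribution of
# the Polder-van Hove (fluctuational electrodynamics) expression.
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
c    = 2.99792458e8      # m/s

# Representative Lorentz-oscillator parameters for SiC (rad/s)
eps_inf, w_TO, w_LO, gam = 6.7, 1.494e14, 1.825e14, 8.97e11

def eps(w):
    # Dielectric function of SiC around the Reststrahlen band
    return eps_inf*(w_LO**2 - w**2 - 1j*gam*w)/(w_TO**2 - w**2 - 1j*gam*w)

def dTheta_dT(w, T):
    # d/dT of the mean photon energy hbar*w/(exp(hbar*w/(kB*T)) - 1)
    x = hbar*w/(kB*T)
    return kB*x**2*np.exp(x)/(np.exp(x) - 1.0)**2

def xi_p(w, k, d):
    # p-polarized evanescent transmission factor for identical plates
    kz0 = np.sqrt(complex(w**2/c**2 - k**2))   # imaginary for k > w/c
    kz1 = np.sqrt(eps(w)*w**2/c**2 - k**2)
    rp  = (eps(w)*kz0 - kz1)/(eps(w)*kz0 + kz1)
    ev  = np.exp(-2.0*abs(kz0.imag)*d)
    return 4.0*rp.imag**2*ev/abs(1.0 - rp**2*ev)**2

def h_near_field(d, T=300.0):
    # Heat transfer coefficient in W m^-2 K^-1 (evanescent p-waves only)
    def spectral(w):
        kint, _ = quad(lambda k: k*xi_p(w, k, d),
                       1.000001*w/c, 50.0/d, limit=200)
        return dTheta_dT(w, T)*kint/(4.0*np.pi**2)
    h, _ = quad(spectral, 1.0e13, 4.0e14, limit=400)
    return h

if __name__ == "__main__":
    for d in (10e-9, 100e-9, 1e-6):
        print(f"d = {d*1e9:8.1f} nm   h = {h_near_field(d):.3e} W/m^2/K")
\end{verbatim}
Restricting the sum over modes to $p$-polarized evanescent waves reproduces the $1/d^{2}$ scaling of the blue curve in Fig.~\ref{fig:schematics_NF_FF}(c); the propagating and $s$-polarized channels would have to be added to recover the far-field saturation at large $d$.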
\subsection{Measuring RHT at the nanoscale} \label{sec:NFRHT_experiment}
NFRHT measurements are extremely difficult to perform, as they require fine control of the position and distance between objects at the nanoscale~\cite{song_near-field_2015}.
The prototypical configuration for NFRHT, on which most theoretical studies are based, is that of two parallel plates separated by a vacuum gap (Fig.~\ref{fig:schematics_NF_FF}). Although it is arguably the simplest geometry to treat theoretically, the plate-to-plate configuration is also one of the most difficult to experimentally realize, due to the requirement of a uniform nanometric vacuum gap between smooth parallel surfaces~\cite{song_near-field_2015}.
The vacuum gap can be achieved either without any interposed supporting structure (the \emph{true} vacuum gap configuration, for example using MEMS) or by separating the plates using nanospheres or micropillars (Fig.~\ref{fig:plate_to_plate}(b-d), further detailed in Sec.\ref{sec:vacuum_gap}).
To measure heat transfer, the temperature of the receiver plate is maintained constant with a thermoelectric cooler while a thermoelectric heat pump is used to heat up the emitter. The receiver's and emitter's temperatures are monitored using thermistors. The heat transferred between the two plates for a given temperature difference can be estimated from the power supplied to the heater~\cite{fiorino_giant_2018,desutter_near-field_2019}. Alternatively, the temperatures of the emitter and receiver are maintained constant using thermoelectric heat pumps, and the heat transfer is estimated from the power supplied by the heat pumps~\cite{bernardi_radiative_2016}.
To measure \emph{radiative} heat transfer, an additional challenge is to remove heat convection and conduction. The former requires the measurements to be performed in vacuum. The latter is achieved in the true vacuum gap configuration. In all other cases, the contribution of convection and conduction must be computed and subtracted from the total measured heat transfer~\cite{desutter_near-field_2019}.
One of the first NFRHT experiments was performed in the late 1960s by Domoto \textit{et al.}~\cite{domoto_experimental_1970} with two parallel copper disks. They observed the gap-size dependence of RHT and reached powers beyond the blackbody limit.
The measurements were performed at cryogenic temperature, leading to significant NFRHT even at micron-sized gaps.
The following decades have witnessed a huge advance in nanofabrication techniques, unlocking NFRHT detection at room temperature. Key NFRHT measurements reported in the literature between macroscopic plates (area $\gg \lambda_{th}^2$) are gathered in Fig~\ref{fig:plate_to_plate}(a), in terms of the minimum gap size achieved and the corresponding radiative heat flux relative to the blackbody limit.
For any single gap-dependent experiment, NFRHT follows the expected $1/d^2$ trend.
Remarkably, even though the materials and configurations can be very different, all measurements performed at room temperature (i.e., all data points except Domoto \textit{et al.} \cite{domoto_experimental_1970} and Kralik \textit{et al.} \cite{kralik_strong_2012}) fall close to the same $1/d^2$ line.
One result currently stands out: Fiorino \textit{et al.}~\cite{fiorino_giant_2018} achieved a true vacuum gap down to 25 nm between parallel silica surfaces, leading to almost a $1000\times$ enhancement over the blackbody limit.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{
{\bf (a)} Radiative heat flux normalized to the blackbody limit as a function of the minimum gap-size $d$ for several planar configurations experimentally studied in the reported references. The vacuum gap has been achieved: {\bf (b)} without any interposed supporting structure (blue markers)~\cite{domoto_experimental_1970,ottens_near-field_2011,kralik_strong_2012,feng_mems_2013,fiorino_giant_2018,bernardi_radiative_2016,lim_near-field_2015}, for example using a MEMS~\cite{fiorino_giant_2018}; {\bf (c)} with dispersed nanospheres~\cite{hu_near-field_2008,lang_dynamic_2017,sabbaghi_super-planckian_2020} (orange markers); {\bf (d)} with micropillars~\cite{watjen_near-field_2016,ito_parallel-plate_2015,ito_dynamic_2017,yang_observing_2018,desutter_near-field_2019,tang_near-field_2020} (green markers). A qualitative $1/d^2$-trend for experiments at room temperature is also shown with a dashed line.
}
\label{fig:plate_to_plate}
\end{figure}
Some of the alignment and positioning constraints imposed by the plate-to-plate configuration can be relaxed by replacing one of the plates with a tip~\cite{williams_scanning_1986,kittel_near-field_2005} or a sphere~\cite{hu_near-field_2008,rousseau_radiative_2009,shen_surface_2009,song_enhancement_2015}.
Scanning thermal microscopy (SThM), which employs a tip-to-plane configuration, is one of the most notable NFRHT measurement techniques~\cite{williams_scanning_1986,wischnath_near-field_2008,worbes_enhanced_2013,kim_radiative_2015,kloppstech_giant_2017}. Akin to the scanning tunneling microscope (STM) and the atomic force microscope (AFM), SThM allows probing the surface of a sample through the near-field heat exchange between a heated tip and the sample's surface.
This configuration provides access to the integrated heat-flux in the extreme near-field regime, for gap-sizes below $10\,\text{nm}$~\cite{worbes_enhanced_2013,kloppstech_giant_2017,kim_radiative_2015,kittel_near-field_2005}. For instance, Kittel \textit{et al.}~\cite{kittel_near-field_2005} reported SThM measurements of NFRHT from surfaces of Au or GaN for tip-to-surface distances down to $\sim 1\,\text{nm}$, interestingly retrieving experimental data that differ from the standard FE prediction.
\section{Recent contributions}
\subsection{Analytical framework for polariton-mediated NFRHT }
\begin{figure}[htpb]
\centering
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{{\bf(a)} Quality factor $Q$~\cite{wang_general_2006,pascale_bandwidth_2021} and material residue parameter $B$~\cite{pascale_role_2022} for plasmonic and polar materials. The Drude parameters are taken from~\cite{ashcroft_solid_1976} for Au, Ag, Cu, Al and from~\cite{caldwell_low-loss_2015} for the rest of the considered materials. Superscripts ${}^o$ and ${}^e$ stand for the ordinary and extraordinary principal axes, respectively. The solid line shows the optimal loss condition, $Q_\text{opt}=4.5\,B$, which maximizes NFRHT for a pair of closely spaced bulk planar layers (inset).
{\bf(b)} Radiative thermal conductance, $h$, calculated using the result in~\cite{pascale_role_2022} (solid lines) and numerically, via fluctuational electrodynamics~\cite{polder_theory_1971} (dotted lines), for plasmonic (Ag, AZO) and polar (hBN$^o$, 4H-SiC) materials. The black dashed line shows the fundamental bound to $h$~\cite{ben-abdallah_fundamental_2010}. Adapted from~\cite{pascale_role_2022}.}
\label{fig:QB_hT}
\end{figure}
Plasmonic materials and polar dielectrics are great candidates for near-field thermal emitters, since they support evanescent surface modes (SPPs and SPhPs, respectively)~\cite{maier_plasmonics_2007,kittel_near-field_2005,Basov2016}.
We recently provided a simple analytical framework for a quantitative classification of these materials for NFRHT in the plate-to-plate configuration~\cite{pascale_role_2022}.
This description makes it possible to disentangle the optical loss from the other dispersion characteristics and from temperature, enabling a deep physical understanding of their role in NFRHT.
In fact, considering Drude and Lorentz oscillator models for plasmonic and polar materials' dispersion, respectively, we showed that the thermal conductance per unit area, $h$, at a temperature $T$, can be factorized as $\displaystyle h = h_\text{max} \,\Psi\left(Q\right)\Pi\left(T\right)$.
The optical loss is concisely described by the quality factor, $Q=\Omega/\gamma$~\cite{wang_general_2006,pascale_bandwidth_2021}, of the polaritonic material resonance, occurring at the frequency $\Omega$, for dispersions characterized by a damping rate $\gamma$.
The functions $\Psi$ and $\Pi$ are bounded above by unity. Hence $h_\text{max}$, which is proportional to $\Omega/d^2$, represents the maximum achievable thermal conductance.
$\Psi$ describes how NFRHT changes with optical loss, and is maximized at the condition $Q=Q_\text{opt}=4.5\,B$, where $B$, termed \textit{material residue}~\cite{pascale_role_2022}, is a loss-independent parameter:
for plasmonic media, $B$ only depends on the high-frequency permittivity $\varepsilon_\infty$, while for polar dielectrics it is proportional to $\Omega/\Delta\omega_{{R}}$, where $\Delta \omega_{{R}}$ is the spectral width of the Reststrahlen band~\cite{kortum_phenomenological_1969}.
In Fig. \ref{fig:QB_hT}(a), we plot $Q_\text{opt}$ against the material residue $B$ along with the quality factor $Q$ for several polaritonic media considered in the literature~\cite{cardona_fundamentals_2005,schubert_infrared_2000,caldwell_low-loss_2015,kim_optimization_2013,kim_plasmonic_2013}.
The temperature dependence of NFRHT is described via the function $\displaystyle\Pi$, which approaches its maximum as $T\to \infty$. In contrast to Wien's displacement law in the far-field, whereby $h$ scales as $T^{3}$, in the near-field, $\Pi$ decays approximately following $\sim \left(\frac{\Omega}{T}\right)^2$. On the other hand, $h_\text{max}$ scales with the resonance frequency $\Omega$. Therefore, plasmonic materials, which support polaritons at higher frequencies than polar dielectrics~\cite{caldwell_low-loss_2015}, can in principle reach higher NFRHT rates. However, for this to occur, they must operate at extremely high temperatures to avoid a dramatic damping in $h$. We show this in Fig. \ref{fig:QB_hT}(b) for a set of plasmonic and polar materials.
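As a rough illustration of how this classification can be used, the following minimal Python sketch (ours, not part of the original analysis; the functional forms of $\Psi$ and $\Pi$ are given in~\cite{pascale_role_2022} and are not reproduced here, and the numbers in the example are made up) simply compares the quality factor $Q=\Omega/\gamma$ of a polaritonic resonance with the optimal-loss condition $Q_\text{opt}=4.5\,B$ discussed above:
\begin{verbatim}
# Minimal sketch: compare a material's quality factor Q = Omega/gamma with the
# optimal-loss condition Q_opt = 4.5 * B for a pair of bulk planar layers.
# The material residue B must be supplied by the user (for plasmonic media it
# depends only on the high-frequency permittivity, for polar dielectrics it is
# proportional to Omega / Delta_omega_R).
def loss_optimality(omega, gamma, B):
    Q = omega / gamma      # quality factor of the polaritonic resonance
    Q_opt = 4.5 * B        # loss level that maximizes NFRHT
    return Q, Q_opt, Q / Q_opt

# purely illustrative numbers:
print(loss_optimality(omega=1.8e14, gamma=1.0e12, B=30.0))
\end{verbatim}
A ratio $Q/Q_\text{opt}$ well below (above) unity indicates a resonance that is overdamped (underdamped) with respect to the NFRHT optimum; this is precisely the comparison visualized in Fig. \ref{fig:QB_hT}(a).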
\subsection{Active tuning of NFRHT}
The ability to actively tune heat transfer is of broad interest for thermal control and related applications~\cite{chen_experimental_2008,wuttig_phase-change_2017,picardi_dynamic_2022}. In the near-field, a particularly relevant approach is gate tuning~\cite{inoue_realization_2014,chen_widely_2020,duan_active_2022}. By applying an electric bias to an emitter with an infrared SPP (e.g. graphene), it is possible to strongly modify the carrier density. This leads to a change in the frequency and loss of the SPP, enabling a large tuning of NFRHT~\cite{thomas_electronic_2019}.
We have recently demonstrated that the net heat transfer can be significantly enhanced by inserting a material which supports an SPhP below a gate-tunable graphene layer~\cite{papadakis_gate-tunable_2019}.
Another way to modify the dielectric properties of a material and thereby the resulting heat transfer is by applying stress. Particularly, the SPhP resonance frequency in atomically thin 2D materials like hBN is strongly strain-dependent~\cite{Lyu2019}, and we harnessed this effect to design a near-field thermal transistor which can be switched on and off depending on the strain applied to the gate~\cite{papadakis_deep-subwavelength_2021}.
\subsection{Thermophotovoltaics}
One of the most important applications of NFRHT is related to energy conversion, and specifically thermophotovoltaics (TPV), which allows the conversion of incident thermal radiation into electric power~\cite{datas_chapter_2021}. It can be applied to harvest sunlight, where solar-TPV promises efficiencies significantly higher than the Shockley-Queisser limit of conventional photovoltaics~\cite{shockley_detailed_1961, harder_theoretical_2003}, but also opens opportunities for waste-heat recovery and thermal storage~\cite{burger_present_2020, datas_advances_2022}. In the far-field, TPV can theoretically reach efficiencies up to the Carnot limit, but with power densities limited by blackbody radiation. By contrast, near-field TPV (NFTPV) can achieve much higher power densities for the same efficiency, making it particularly appealing for low-temperature applications~\cite{laroche_near-field_2006,tedah_thermoelectrics_2019}.
We have shown that NFTPV offers advantages relative to far-field TPV when non-radiative recombinations are taken into account~\cite{papadakis_thermodynamics_2021}. Thanks to both thinner cells and larger radiative currents, the carrier density achieved in NFTPV systems is much larger than in far-field devices. Although this leads to slightly stronger non-radiative recombinations, the radiative efficiency (the fraction of recombinations that are radiative) significantly increases overall (Fig.~\ref{fig:Papadakis}(a)). As a result, NFTPV systems can achieve much higher power densities for the same conversion efficiency as far-field TPV systems while requiring much lower emitter temperatures (Fig.~\ref{fig:Papadakis}(b)).
In another study~\cite{papadakis_broadening_2020}, we reported that broadening emission (e.g. by combining several resonant emitters) can enhance both the efficiency and the power density when accounting for non-radiative recombinations, for similar reasons.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig4.pdf}
\caption{(a) Radiative and non-radiative recombinations in the near-field and far-field regimes. (b) Efficiency versus power output of near-field and far-field devices. Adapted from Ref.~\cite{papadakis_thermodynamics_2021}.}
\label{fig:Papadakis}
\end{figure}
\subsection{Non-reciprocity}\label{sec:nonreciprocal}
Common thermal emitters are made of materials satisfying Lorentz reciprocity, characterized by symmetric dielectric permittivity and magnetic permeability tensors. These reciprocal emitters absorb and emit equally at any given frequency, direction and polarization, as dictated by Kirchhoff's law for thermal radiation~\cite{kirchhoff_ueber_1860}. Kirchhoff's law can be violated by considering non-reciprocal thermal emitters, for instance made of a gyrotropic medium~\cite{miller_universal_2017}. Relaxing the constraints imposed by Kirchhoff's law is of central importance in thermal radiation harvesting~\cite{fan_thermal_2017}, in that non-reciprocity is a necessary condition for approaching the ultimate efficiency limit, known as the Landsberg limit~\cite{landsberg_thermodynamic_1980}. Furthermore, non-reciprocity in systems operating in the far-field is receiving increasing attention for potential applications such as optical isolators~\cite{Asadchy_sub-wavelength_2020} or photovoltaics~\cite{park_reaching_2022}.
In the near-field, we have shown that, although reciprocity can be broken at particular frequencies, the heat flow between two bodies at the same temperature, integrated over all wave vectors, must be the same due to thermodynamic constraints~\cite{fan_nonreciprocal_2020}.
However, non-reciprocity becomes relevant when at least three bodies are involved and a persistent heat current can be sustained~\cite{zhu_persistent_2016,zhu_theory_2018,biehs_near-field_2021}, which has been applied to demonstrate a near-field thermal Hall effect~\cite{ben-abdallah_photon_2016}.
\section{Challenges and opportunities}
Most of the NFRHT literature focuses on numerical simulations.
Indeed, current NFRHT experiments are difficult to perform and provide only limited information regarding the underlying physics.
In the following sections, we highlight critical ways to leverage the latest theoretical and technological developments to bring NFRHT closer to real-world applications.
\subsection{NFTPV: narrow-bandgap semiconductors}
In the far-field, a recent 40\% efficiency record~\cite{lapotin_thermophotovoltaic_2022} has confirmed that TPV can compete with other heat-to-electricity conversion technologies~\cite{datas_thermophotovoltaic_2017}.
In the near-field, however, only a handful of studies have reported photovoltaic devices so far~\cite{inoue_one-chip_2019,fiorino_nanogap_2018,bhatt_integrated_2020,lucchesi_near-field_2021,mittapally_near-field_2021}.
One of the major constraints for NFTPV is that the emitter temperature is limited to moderate values (below $\mathrm{700 ^\circ C}$) due to parasitic heat conduction and Joule heating.
As a result, the semiconductor absorber must have a very narrow bandgap (below 0.5 eV) to absorb the low-energy radiation, implying larger non-radiative recombinations and a lower efficiency~\cite{yang_narrow_2022}.
Indeed, although Lucchesi \textit{et al.} achieved 14\% conversion efficiency with an InSb cell (0.23 eV), the cell temperature had to be maintained at 77 K to reduce non-radiative recombinations~\cite{lucchesi_near-field_2021}.
Nevertheless, narrow-bandgap semiconductors have so far received far less attention than materials like Si or GaAs, so their material quality can still be significantly improved. Together with the fact that cells tend to operate closer to the radiative limit in the near-field~\cite{papadakis_thermodynamics_2021}, this suggests that room-temperature devices with reasonable efficiencies and power densities should be achievable in the near future.
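As a rough back-of-the-envelope check of the spectral constraint discussed in this subsection (this estimate is ours and purely illustrative), one can compare the blackbody peak emission of a $\mathrm{700^\circ C}$ emitter with the bandgaps in question:
\begin{verbatim}
# Illustrative estimate: peak photon energy of a blackbody at 700 C, using
# Wien's displacement law (lambda_max * T = 2898 um K) and E = hc / lambda.
T = 700.0 + 273.15                 # emitter temperature in kelvin
lam_max_um = 2898.0 / T            # peak wavelength, roughly 3 micrometers
E_peak_eV = 1.2398 / lam_max_um    # photon energy at the peak, roughly 0.4 eV
print(lam_max_um, E_peak_eV)
\end{verbatim}
A peak photon energy of roughly $0.4\,\mathrm{eV}$ makes it clear why bandgaps below $0.5\,\mathrm{eV}$ are needed to absorb a substantial fraction of the emitted spectrum.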
\subsection{Vacuum gap}\label{sec:vacuum_gap}
For all NFRHT applications, the ability to implement a nanometric vacuum gap is crucial. Indeed, conductive heat transfer should be as small as possible to maximize the temperature difference between the emitter and the receiver. This is particularly challenging in plate-to-plate configurations which require extremely flat and parallel surfaces. So far, the vacuum gap has been achieved in different ways (see Fig.~\ref{fig:plate_to_plate}).
Micro-mechanical platforms, which create a true vacuum gap, have been reported in many works~\cite{ganjeh_platform_2012,st-gelais_demonstration_2014,lim_near-field_2015,st-gelais_near-field_2016,bernardi_radiative_2016,zhu_near-field_2019,mittapally_near-field_2021} (Fig.~\ref{fig:plate_to_plate}(b)). This approach is ideal for experiments as it allows probing near-field heat transfer without heat conduction. Furthermore, the width of the vacuum gap can be varied, sometimes down to less than 100 nm~\cite{fiorino_giant_2018}.
However, such systems are very difficult to implement and are restricted to small areas with limited scalability.
One notable exception is the fully integrated architecture using a suspended-emitter bridge developed by Bhatt \textit{et al.}~\cite{bhatt_integrated_2020}.
Polystyrene or silica microspheres have also been employed as spacers to create vacuum gaps of a few hundred nanometers~\cite{hu_near-field_2008,lang_dynamic_2017,sabbaghi_super-planckian_2020} (Fig.~\ref{fig:plate_to_plate}(c)). The contact area of each sphere is very small, hence limiting parasitic heat conduction. It is however difficult to control their dispersion, and they may introduce parasitic radiative transmission channels.
Finally, micropillars~\cite{dimatteo_microngap_2004,ito_parallel-plate_2015,watjen_near-field_2016,ito_dynamic_2017,yang_observing_2018,desutter_near-field_2019,tang_near-field_2020} or trenches~\cite{inoue_integrated_2021} have been considered to achieve a controlled vacuum gap in a simple and scalable way (Fig.~\ref{fig:plate_to_plate}(d)).
The main drawback of this approach is that it leads to heat conduction channels between the emitter and the receiver. The pillars therefore need to be sparse (which can lead to significant bowing) and have the lowest possible thermal conductivity.
DeSutter \textit{et al.} have employed an ingenious configuration to reduce conduction: by etching pits into the emitter, they could make the pillars much longer, significantly reducing their contribution to heat transfer from 45\% to 1.9\%~\cite{desutter_near-field_2019}.
\subsection{Emerging materials}
In recent years, there has been considerable interest in a class of anisotropic media with hyperbolic modal dispersions for NFRHT~\cite{biehs_hyperbolic_2012,guo_applications_2012,poddubny_hyperbolic_2013,he_anisotropy_2022,liu_near-field_2022}. These hyperbolic materials are characterized by a dielectric permittivity that is negative along certain directions (metallic behavior) and positive along others (dielectric behavior). This in turn enables the excitation of modes that are evanescent in the vacuum gap and propagating inside the bulk region, over a frequency band much broader than in conventional polaritonic materials~\cite{biehs_hyperbolic_2012}. In fact, the modes' wavenumber is only limited by the material's intrinsic loss, or fundamentally by the size of the first Brillouin zone~\cite{ashcroft_solid_1976,volokitin_resonant_2004,biehs_hyperbolic_2012}.
Due to their unique optical properties, these materials have been studied for a plethora of applications, such as thermal switches, near-field TPV, or super-resolution imaging~\cite{guo_applications_2012,shekhar_hyperbolic_2014,estevam_da_silva_far-infrared_2012,vongsoasup_performance_2017,liu_near-field_2022}.
Hyperbolic dispersions were first explored using artificial hyperbolic metamaterials~\cite{poddubny_hyperbolic_2013}, in the form of layered metal-dielectric~\cite{shekhar_hyperbolic_2014,guclu_hyperbolic_2012,iorsh_hyperbolic_2013}, arrays of metal-dielectric nanopyramids~\cite{yang_experimental_2012} or nanorods~\cite{podolskiy_plasmon_2002,silveirinha_nonlocal_2006}.
Such configurations are inherently very challenging to realize experimentally. Over the last decade, there has been great effort in investigating natural hyperbolic materials, which make it possible to bypass the challenges related to nanopatterning processes~\cite{narimanov_naturally_2015}. Among these, hyperbolic phonon polaritons have been reported in naturally uniaxial two-dimensional (2D) materials, such as hBN~\cite{caldwell_sub-diffractional_2014}, and in naturally biaxial materials, such as MoO$_3$~\cite{ma_-plane_2018}.
A promising new direction for NFRHT is indeed the use of 2D materials, including hBN, graphene, black phosphorus (BP), or transition metal dichalcogenides (TMDs), due to their unique optical properties~\cite{xia_two-dimensional_2014} enabling NFRHT enhancement and tunability. Specifically, researchers are currently investigating the possibilities offered by \textit{twist-optics}, which involves stacking layers with their principal crystalline axes at different angles. This has proven to provide an exquisite degree of control over the in-plane anisotropic polaritons, and in turn over the directionality of propagating polaritons~\cite{he_anisotropy_2022,duan_twisted_2020}. For example, in Ref.~\cite{he_active_2020} the authors show how twisting the graphene gratings coating two isotropic layers can actively tune their NFRHT.
Another class of anisotropic media, Weyl semimetals, is gaining momentum due to their ability to break Lorentz reciprocity without applying an external magnetic field~\cite{yan_topological_2017}. These materials can exhibit a large gyrotropic optical response in the mid-infrared, featuring nonreciprocal SPPs and violating Kirchhoff's law for thermal radiation~\cite{zhao_axion-field-enabled_2020}, with the implications discussed in Sec. \ref{sec:nonreciprocal}. Recently, several Weyl semimetal-based configurations have been explored, offering new exotic opportunities~\cite{xu_near-field_2020,tang_twist-induced_2021}.
So far, these new materials have mostly been explored theoretically. Indeed, flakes produced with conventional exfoliation techniques are usually too small to perform NFRHT measurements, which require dimensions much larger than the thermal wavelength.
Nonetheless, promising results towards fabricating large-scale single-crystal samples have already been demonstrated \cite{molina-mendoza_centimeter-scale_2016}.
\subsection{Measuring spectrally and angularly-resolved heat transfer}
As mentioned in Sec. \ref{sec:NFRHT_experiment}, most works that probe NFRHT do so by measuring the total heat flux between an emitter and a receiver, thereby integrating spectral and angular information. Given the strong narrowband contributions from polaritons, the ability to spectrally resolve NFRHT is very valuable. This requires, however, coupling evanescent waves from the emitter into the far field.
So far, this has been achieved mainly using a sharp tip as both thermal emitter and far-field scatterer, using so-called \emph{thermal radiation scanning tunneling microscopy} (TRSTM)~\cite{de_wilde_thermal_2006,jones_thermal_2012,jones_thermal_2013,joulain_strong_2014}. However, in this tip-to-plane configuration, it is difficult to assess the exact impact of the tip on heat transfer.
In order to observe plate-to-plate heat transfer, one work has taken an alternative approach~\cite{zare_measurement_2019}, coupling the evanescent waves from an emitter to the far-field through a high-index element with a $45^{\circ}$ bevel, in a configuration analogous to Otto coupling to surface modes~\cite{maier_plasmonics_2007}. A similar approach was considered to enhance thermal emission using a high-index transparent hemisphere~\cite{yu_enhancing_2013}.
Still, these measurements are not resolved as a function of the emission angle.
To completely characterize NFRHT, it is essential to develop a setup that can measure angular-resolved near-field emission spectra.
\section{Conclusions}
Near-field radiative heat transfer is a very active field of research, with relevance for both fundamental and applied physics.
Although it was originally investigated to overcome limitations in radiative power transfer, it is now also used to explore unique regimes not accessible in the far-field.
In this Perspective, we summarized the main theoretical and experimental achievements of the past decades.
We highlighted recent contributions in the analytical modelling of NFRHT and its applications, including active tuning, thermophotovoltaics and non-reciprocal heat transfer.
We identified key challenges in harnessing NFRHT for energy conversion and achieving nanoscale vacuum gaps over large areas, as well as promising strategies to overcome them.
We pointed out that special classes of anisotropic materials, e.g., hyperbolic media and Weyl semimetals, offer a great platform for stronger, actively tunable heat transfer.
We emphasized the need to complement conventional heat transfer measurements with angularly and spectrally resolved characterization techniques. In fact, this would enable a direct comparison with theory and a deeper understanding of the underlying physics.
We believe that NFRHT experiments will soon deliver on the promises of theoretical predictions, leading to disruptive heat transfer technologies.
\section*{Acknowledgments}
The authors thank Dr. Mitradeep Sarkar for valuable suggestions.
The authors declare no competing financial interest. The project that gave rise to these results received the support of a fellowship from ``la Caixa'' Foundation (ID 100010434) and from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847648. The fellowship code is LCF/BQ/PI21/11830019.
\input{references.bbl}
\end{document}
|
1,108,101,563,654 | arxiv | \section{Introduction}
\subsection{History and Context}
This article deals with a higher order version of the Alt-Caffarelli problem, which is a free boundary problem posed in \cite{Alt}. The classical first-order formulation can be understood as a variational Dirichlet problem with `adhesion' term. More exactly, the energy the authors consider is given by
\begin{equation*}
\mathcal{E}_{AC}(u) := \int_\Omega |\nabla u |^2 \;\mathrm{d}x + |\{ x \in \Omega : u(x) > 0 \}|
\end{equation*}
where $u \in W^{1,2}(\Omega)$ is such that $u - u_0 \in W_0^{1,2}(\Omega)$ for some given sufficiently regular positive function $u_0$. Here, $|\cdot|$ denotes the Lebesgue measure and $\Omega \subset \mathbb{R}^n$ is some sufficiently regular domain. The two summands of $\mathcal{E}_{AC}$ impose competing conditions on minimizers: The Dirichlet term becomes small for functions that do not `vary too much' and the measure term (that we call \emph{adhesion term}) becomes small if the function is nonpositive in a large subregion of $\Omega$. Minimizers have to find a balance between these two terms.
The measure penalization can be understood as an adhesion to the zero level: Indeed, the lattice operations on $W^{1,2}$ imply that each minimizer of $\mathcal{E}_{AC}$ is nonnegative. Given this, a minimizer $u$ divides $\Omega$ into two regions, namely $\{ u = 0 \}$, the so-called nodal set, and $\{ u > 0 \}$. The interface between the two regions is then a free boundary. Because of this structure, the Alt-Caffarelli problem is also called \emph{`adhesive free boundary problem'}.
More recently, the biharmonic Alt-Caffarelli problem, which is also our object of study, has raised a lot of interest, cf. \cite{Valdinoci} and \cite{ValdinociSing}. Here the energy reads
\begin{equation*}
\mathcal{E}_{BAC}(u) := \int_\Omega |\Delta u |^2 \;\mathrm{d}x + |\{ x \in \Omega : u(x) > 0 \}| \quad u \in W^{2,2}(\Omega) : u - u_0 \in W_0^{1,2}(\Omega),
\end{equation*}
defined for $u \in W^{2,2}(\Omega)$ that satisfies again $u- u_0 \in W_0^{1,2}(\Omega)$ for $u_0,\Omega$ as above. From now on we shall also assume that $\Omega \subset \mathbb{R}^2$ since two-dimensionality is essential for our argument.
The minimization with no derivatives prescribed at the boundary is a weak formulation of Navier boundary conditions, cf. \cite[Chapter 2]{Sweers}. If a minimizer $u$ is sufficiently regular, one can obtain classical Navier boundary conditions, i.e.\ `$\Delta u = 0$ on $\partial \Omega$'.
Just as in the first-order case, $\mathcal{E}_{BAC}$ consists of two competing summands: The first one measures roughly how much a function bends. The second one measures the positivity set. Minimizers of $\mathcal{E}_{BAC}$ again have to find a balance between `not bending too much' and being nonpositive in a large subregion of $\Omega$.
As the authors of \cite{Valdinoci} point out, the structure of the problem is now fundamentally different. Due to the lack of a maximum principle, a minimizer $u$ divides $\Omega$ into three regions $\{ u= 0 \}$, $\{ u > 0 \}$ and $\{u < 0 \}$. And indeed, as \cite[Proposition B.1]{Valdinoci} highlights, the third region will actually be present. Having three regions means that one can get two interfaces, one between $\{ u > 0 \}$ and $\{ u = 0 \}$ and one between $\{ u= 0 \}$ and $\{ u < 0 \}$, at least in the case that $\{ u = 0 \}$ is a `fat' set with nonempty interior.
A promising technique to examine the boundary is to look at the gradient of a minimizer $u$ on $\{ u= 0 \}$. Recall that in the classical Alt-Caffarelli problem, where one has nonnegativity of $u$, one can infer that $\nabla u = 0$ at all interface points, at least provided that $u$ is appropriately smooth. The regularity of $u$ was discussed in \cite{Alt} and turned out to be sufficient for this conclusion.
The goal of this article is to show that $\{ u= 0 \}$ is a $C^2$-smooth manifold and $\nabla u \neq 0 $ on $\{u = 0\}$. Note that this behavior is exactly opposite to the first order problem, which is surprising. This also settles the aforementioned question of what the interfaces look like: There is only one interface of interest, namely the one between $\{u > 0 \}$ and $\{ u< 0 \}$, which is given by $\{u = 0 \}$. Moreover, the nodal set is nowhere `fat', i.e. its Hausdorff dimension is at most one.
Our result can therefore be understood as an improvement of \cite[Theorem 1.10]{Valdinoci} and the following discussion in the special case of two dimensions.
Two-dimensionality is needed for our argument since it relies on the fact that every minimizer is semiconvex, cf. Lemma \ref{lem:semicon}, which we can prove with methods that do not immediately generalize to higher dimension.
The fact that the gradient does not vanish on the free boundary makes the problem fundamentally different from the obstacle problem for the biharmonic operator, which has been studied in a celebrated article by Caffarelli and Friedman in 1979, see \cite{Friedman}. The article was trendsetting for the study of fourth order free boundary problems and paved the way for striking recent results in this field, cf. \cite{Aleksanyan}, \cite{Okabe1}, \cite{Okabe2}.
Higher order adhesive free boundary problems have many applications in the context of mathematical physics, for example for the study of elastic bodies adhering to solid substrates, see \cite{Miura1} and \cite{Miura2}. Moreover, the square integral of the Laplacian can be thought of as a linearization of the well-known Willmore energy, see the introduction of \cite{Valdinoci} for more details.
\subsection{Model and Main Results}
For the entire article the given framework is the following.
\begin{definition}(Admissible Set and Energy)\label{def:adm}
Let $\Omega \subset \mathbb{R}^2$ be an open and bounded domain with $C^2$-boundary. Further, let $u_0 \in C^\infty(\overline{\Omega})$ be such that $(u_0)_{\mid \partial \Omega} \geq \delta > 0 $ for some $\delta > 0 $. Define
\begin{equation*}
\mathcal{A}(u_0) := \{ u \in W^{2,2}(\Omega) : u- u_0 \in W_0^{1,2}(\Omega) \}
\end{equation*}
and $\mathcal{E} : \mathcal{A}(u_0) \rightarrow \mathbb{R}$ by
\begin{equation*}
\mathcal{E}(u) := \int_\Omega (\Delta u)^2 \;\mathrm{d}x + |\{u > 0 \}| .
\end{equation*}
We say that $u \in \mathcal{A}(u_0)$ is a minimizer if
\begin{equation*}
\mathcal{E}(u) = \inf_{w \in \mathcal{A}(u_0)} \mathcal{E}(w).
\end{equation*}
\end{definition}
\begin{remark}\label{lem:eximin}
Existence of a minimizer $u \in \mathcal{A}(u_0)$ is shown in \cite[Lemma 2.1]{Valdinoci} with standard techniques in the calculus of variations.
\end{remark}
\begin{remark}
As $\Omega$ is sufficiently regular to have a trace operator (see \cite[Theorem 6.3.3]{Willem}) and $u \in W^{2,2}(\Omega) \subset C^{0,\beta}(\overline{\Omega})$ for each $\beta \in (0,1)$, we get that $u_{\mid \partial \Omega} = u_0$ pointwise.
\end{remark}
As we mentioned, the main goal of the article is to show
\begin{theorem}[Regularity and Nodal Set] \label{thm:1.1}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then $u \in C^2(\Omega)\cap W^{3,2-\beta}_{loc}(\Omega)$ for each $\beta > 0 $ and there exists a finite number $N \in \mathbb{N}$ such that
\begin{equation}\label{eq:geb}
\{ u < 0 \} = \bigcup_{i = 1}^N G_i,
\end{equation}
where $G_i$ are disjoint domains with $C^2$-smooth boundary.
Moreover, $\nabla u \neq 0 $ on $\partial \{ u< 0 \} = \{ u = 0 \}$ and $\{u = 0 \}$ has finite $1$-Hausdorff measure. Additionally, $u$ solves
\begin{equation}\label{eq:bihame}
2\int_\Omega \Delta u \Delta \phi \;\mathrm{d}x = - \int_{\{ u = 0 \}} \phi \frac{1}{|\nabla u|}\, \mathrm{d}\mathcal{H}^1 \quad \forall \phi \in W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega).
\end{equation}
\end{theorem}
Let us remark that for smooth $\Omega$, one can remove "loc" in the $W^{3,2-\beta}$ regularity statement, see Section \ref{sec:fwts} for details where also Navier boundary conditions are discussed.
Let us formally motivate the term $\frac{1}{|\nabla u|} d\mathcal{H}^1$ in \eqref{eq:bihame}. It can be seen as a `derivative' of the $|\{ u > 0 \}|$-term of the energy in the following way: By \cite[Prop.3, Sect.3.3.4]{EvGar} one has that for each $f \in C^\infty(\mathbb{R}^n)$ with nonvanishing gradient,
\begin{equation}\label{eq:formmeas}
\frac{d}{dt} | \{ f > t \} | = -\int_{ \{ f = t \} } \frac{1}{|\nabla f |} \; \mathrm{d}\mathcal{H}^{n-1} \quad \textrm{for almost every $t \in \mathbb{R}$}.
\end{equation}
Theorem \ref{thm:1.1} will finally be proved in Section 6.
Sections 3, 4 and 5 prepare the proof of the main theorem by showing some helpful properties of minimizers. Among those are semiconvexity, superharmonicity of the Laplacian and the blow-up behavior close to the nodal set. In Section 7 we show some estimates for the negativity region which underline the importance of \eqref{eq:bihame} for applications and future research.
We will also show, in Section 8, that minimizers are in general not unique.
\begin{theorem}[Non-Uniqueness of Minimizers]\label{thm:nonun}
There exist $\Omega$ and $u_0$ as in Definition \ref{def:adm} such that $\mathcal{E}$ has more than one minimizer in $\mathcal{A}(u_0)$.
\end{theorem}
The construction in the proof of this theorem exhibits exactly one domain and one admissible boundary value for which minimizers are not unique. We do not think that it is impossible to obtain positive uniqueness results within certain ranges of boundary values. Such an analysis is, however, beyond the scope of this article.
The non-uniqueness relies on the following phenomenon: We choose $\Omega = B_1(0)$ and $u_0\equiv \iota$ to be a constant function. If the constant is small, we observe minimizers that become negative already close to the boundary. We expect such a minimizer to look roughly like a funnel, which grows steeply close to the boundary and has a rounded-off tip in the negative region. If however the constant is large, the minimizer is a constant function (which is then always positive). Therefore there has to be a limit case in which one can find minimizers with both shapes.
To do so, we compute radial minimizers explicitly. The fact that there exist radial minimizers follows from Talenti's symmetrization principle, see \cite{Talenti} and Section \ref{sec:talenti} for details. The explicit computation also relies on the Navier boundary conditions, which will be discussed in Section 9.
\section{Preliminaries}
\subsection{Notation}
In the following we will fix some notation which we will use throughout the article. For a set $A \subset \mathbb{R}^n$ we denote its complement by $A^c := \mathbb{R}^n \setminus A$ and the interior of the complement by $A^C := \mathrm{int}(\Omega \setminus A)$. For a Lebesgue measurable set $E \subset \mathbb{R}^n$ we define the \emph{upper density} of $E$ at $x \in \mathbb{R}^n$ to be \begin{equation*}
\overline{\theta}(E,x) := \limsup_{r \rightarrow 0 + } \frac{|E \cap B_r(x)|}{|B_r(x)|} .
\end{equation*}
We say that a point $x$ lies in the \emph{measure theoretic boundary} of $E$ if both $\overline{\theta}(E,x)$ and $\overline{\theta}(E^c,x)$ are strictly positive. The measure theoretic boundary of $E$ is denoted by $\partial^*E$. If $\alpha$ is a measure on a measurable space $(X, \mathcal{F})$ and $A \in \mathcal{F}$ then we define the \emph{restriction measure} $\alpha\llcorner_A : \mathcal{F} \rightarrow \mathbb{R}_+ \cup \{ \infty\} $ via $\alpha\llcorner_A(B) := \alpha(A \cap B)$. If $(X, \mathcal{F}) = (\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n))$ is the Euclidean space endowed with the Borel-$\sigma$-Algebra and $U \subset \mathbb{R}^n$ is a Borel set, then we denote by $M (U)$ the set of Radon measures on $U$, see \cite[Section 1.1]{EvGar}. Moreover $\mathcal{H}^s$ denotes the $s-$dimensional Hausdorff measure on $\mathbb{R}^2$.
\begin{definition}[The Hilbert Space $W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega)$] \label{def:hilbers}
In this article, the Hilbert space $W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega)$ is always endowed with the scalar product
\begin{equation*}
(u,v) := \int_\Omega \Delta u \Delta v \;\mathrm{d}x.
\end{equation*}
\end{definition}
\begin{definition}[Lebesgue Points]
Let $1 \leq p < \infty$ and $f \in L^p_{loc}(\Omega)$. We say that $x_0 \in \Omega$ is a $p$-Lebesgue point of $f$ if
\begin{equation} \label{eq:avliim}
f^*(x_0) := \lim_{r\rightarrow 0} \fint_{B_r(x_0)} f(y) \;\mathrm{d}y
\end{equation}
exists and
\begin{equation*}
\lim_{r\rightarrow 0 } \fint_{B_r(x_0)} \left\vert f^*(x_0) - f(x)\right\vert^p \;\mathrm{d}x = 0 .
\end{equation*}
\end{definition}
\begin{definition}[Semiconvexity]
Let $\Omega \subset \mathbb{R}^n$ be open, $f : \Omega \rightarrow \mathbb{R}$ be a function and $A \in \mathbb{R}$. We call $f$ $A$-semiconvex if for each $x_0 \in \mathbb{R}^n$ the map $x \mapsto f(x) + A|x-x_0|^2$ is convex.
\end{definition}
\begin{definition}(Superharmonic Functions)
Let $A \subset \mathbb{R}^n$ be open.
A function $u : A \rightarrow \mathbb{R} \cup \{ - \infty, \infty\}$ is called \emph{superharmonic} if $u$ is lower semicontinuous in $A$ and for each $x \in A$ and $r > 0 $ such that $\overline{B_r(x)} \subset A$ one has
\begin{equation*}
u(x) \geq \frac{1}{\mathcal{H}^1(\partial B_r(x))} \int_{\partial B_r(x)} u(y) d\mathcal{H}^1(y) =: \fint_{\partial B_r(x)} u(y) dS_r(y).
\end{equation*}
A function $u$ is called \emph{subharmonic} if $-u$ is superharmonic.
\end{definition}
\subsection{Energy Bounds}
\begin{lemma}[Energy Bound for Minimizers]
Let $u_0$ be as in Definition \ref{def:adm}. Then
\begin{equation}\label{eq:infbound}
\inf_{w \in \mathcal{A}(u_0)} \mathcal{E}(w) \leq |\Omega|.
\end{equation}
\end{lemma}
\begin{proof}
Let $w\in W^{1,2}(\Omega)$ be the unique weak solution of
\begin{equation*}
\begin{cases}
\Delta w = 0 & \mathrm{in} \; \Omega, \\
w = u_0 & \mathrm{on} \; \partial \Omega.
\end{cases}
\end{equation*}
By elliptic regularity, $w-u_0 \in W^{2,2}(\Omega)\cap W_0^{1,2}(\Omega)$ and hence $w \in \mathcal{A}(u_0)$. By the maximum principle, $\inf_\Omega w \geq \inf_{\partial\Omega} u_0 \geq \delta >0 $. Hence $|\{w > 0 \} | = |\Omega|$. All in all
\begin{equation*}
\mathcal{E}(w) = \int_\Omega (\Delta w)^2 \;\mathrm{d}x + |\{ w > 0 \}| = |\Omega|. \qedhere
\end{equation*}
\end{proof}
\begin{example}\label{ex:iotapos}
In general, the bound in \eqref{eq:infbound} is not sharp. We give an example of $\Omega$ and $u_0$ as in Definition \ref{def:adm} such that
\begin{equation}\label{eq:inbel}
\inf_{w \in A(u_0)} \mathcal{E}(w) < |\Omega|.
\end{equation}
Suppose that $\Omega = B_1(0)$ and $u_0 \equiv C$ for some $C <\frac{1}{8\sqrt{2}}$. Further define $w(x) := 2C |x|^2 - C$ for $x \in B_1(0)$. One easily checks that $w \in \mathcal{A}(u_0)$. Now $\{w > 0 \} = B_1(0) \setminus \overline{ B_{\frac{1}{\sqrt{2}}}(0)}$ and $\Delta w \equiv 8C $. Hence
\begin{equation*}
\mathcal{E}(w) = \int_{B_1(0)} 64C^2 dx + |B_1(0) \setminus \overline{ B_{\frac{1}{\sqrt{2}}}(0)}| = 64C^2 \pi + \frac{1}{2} \pi = \left( 64C^2 + \frac{1}{2} \right) \pi ,
\end{equation*}
that is smaller that $\pi = |\Omega|$ by the choice of $C$.
\end{example}
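The computation above can also be checked numerically. The following short Python snippet (a sanity check of the arithmetic only, not part of the argument) evaluates $\mathcal{E}(w)$ in closed form and compares it with $|\Omega| = \pi$:
\begin{verbatim}
import numpy as np

# Sanity check for the example: w(x) = 2C|x|^2 - C on the unit disc satisfies
# Delta w = 8C and {w > 0} = B_1 \ closure(B_{1/sqrt(2)}), so that
# E(w) = 64 C^2 pi + pi/2, which is below |Omega| = pi iff C < 1/(8 sqrt(2)).
C = 0.08                             # any value below 1/(8*np.sqrt(2)) ~ 0.0884
laplace_term = 64 * C**2 * np.pi     # integral of (Delta w)^2 over B_1(0)
positivity_term = np.pi * (1 - 0.5)  # area of the annulus {w > 0}
energy = laplace_term + positivity_term
print(energy, np.pi, energy < np.pi) # prints True: the infimum is below |Omega|
\end{verbatim}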
\begin{remark}\label{rem:iotafin}
We claim that for large constant boundary values, \eqref{eq:infbound} is sharp. Indeed, let $\Omega$ be as in Definition \ref{def:adm} and fix a constant function $u_0 \equiv const$ such that $u_0 > C_\Omega \mathrm{diam}(\Omega)^\frac{1}{2} |\Omega|^\frac{1}{2}$, where $C_\Omega$ denotes the operator norm of the embedding operator $W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega) \hookrightarrow C^{0, \nicefrac{1}{2}}(\overline{\Omega})$.
If $u \in \mathcal{A}(u_0)$ is a minimizer then for each $x \in \Omega$ and $z \in \partial \Omega$ one has
\begin{align*}
u(x)& \geq u_0 - |u(x)- u_0| = u_0 - |(u - u_0)(x)- (u-u_0)(z)| \\ & > C_\Omega \mathrm{diam}(\Omega)^\frac{1}{2} |\Omega|^\frac{1}{2} - ||u-u_0||_{C^{0,\nicefrac{1}{2}}} |x-z|^\frac{1}{2}
\\ & \geq C_\Omega \mathrm{diam}(\Omega)^\frac{1}{2} |\Omega|^\frac{1}{2}- C_\Omega ||u-u_0||_{W^{2,2} \cap W_0^{1,2}} \mathrm{diam}(\Omega)^\frac{1}{2} \geq 0,
\end{align*}
since $||u-u_0||_{W^{2,2}\cap W_0^{1,2}} = || \Delta u- \Delta u_0||_{L^2} = ||\Delta u ||_{L^2} \leq \sqrt{\mathcal{E}(u)} \leq \sqrt{|\Omega|}$. Therefore, all minimizers are positive, which means in particular that \eqref{eq:infbound} is sharp and the unique minimizer is given by the weak solution of
\begin{equation*}
\begin{cases}
\Delta u = 0 & \mathrm{in} \; \Omega
\\
u = u_0 & \mathrm{on} \; \partial \Omega,
\end{cases}
\end{equation*}
which is $u \equiv u_0$.
\end{remark}
\begin{remark}\label{rem:nontriv}
If $\inf_{w \in A(u_0)} \mathcal{E}(w) < |\Omega|$ and $u \in \mathcal{A}(u_0)$ is a minimizer then $\{ u= 0 \}$ cannot be empty. Indeed, if it were empty then $\{ u > 0 \} = \Omega$ by the embedding $W^{2,2}(\Omega) \subset C(\overline{\Omega})$. A contradiction.
\end{remark}
\subsection{Variational Inequality}
In the rest of this section we derive that each minimizer $u$ is biharmonic on $\{ u > 0\} \cup \{ u < 0 \}$ and $\Delta u $ is weakly superharmonic on the whole of $\Omega$. The techniques used are standard perturbation arguments. We also draw some first conclusions about regularity of $u$.
\begin{lemma}[Biharmonicity away from Free Boundary] \label{lem:biham}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Further, let $\phi \in C_0^\infty(\{u > 0 \})$ or $\phi \in C_0^\infty(\{ u < 0 \})$ or $\phi \in W_0^{1,2}(\Omega) \cap W^{2,2}(\Omega)$ with compact support in $\{ u >0 \}$. Then
\begin{equation}\label{eq:biiham}
\int_\Omega \Delta u \Delta \phi \;\mathrm{d}x = 0 .
\end{equation}
In particular, $u \in C^\infty(\{ u >0 \} \cup \{ u < 0 \})$ and $\Delta^2 u = 0 $ in $\{ u > 0 \} \cup \{ u <0 \}$.
\end{lemma}
\begin{proof}
We show \eqref{eq:biiham} only for $\phi \in W_0^{1,2}(\Omega) \cap W^{2,2}(\Omega)$ with compact support in $\{ u >0 \}$. The other cases are similar. Since $u \in W^{2,2}(\Omega) \subset C(\overline{\Omega})$ and $\mathrm{supp}(\phi)$ is compact in $\{u > 0 \}$ there exists $\theta > 0 $ such that $u \geq \theta$ on $\mathrm{supp}(\phi)$. In particular, for $t $ sufficiently small we get $\{u > 0 \} = \{ u + t\phi > 0 \} $. For such fixed $t$ one has
\begin{equation*}
\frac{\mathcal{E}(u+t\phi) - \mathcal{E}(u) }{t} =2 \int_\Omega \Delta u \Delta \phi \;\mathrm{d}x + t \int_\Omega
(\Delta \phi)^2 \;\mathrm{d}x .
\end{equation*}
From the right hand side we infer that $t \mapsto \mathcal{E}(u+t\phi)$ is differentiable at $t = 0$. Using this and the fact that $u$ is a minimizer we obtain
\begin{equation*}
0 = \frac{d}{dt}_{\mid_{t= 0}} \mathcal{E}(u+t\phi) = 2 \int_\Omega \Delta u \Delta \phi \;\mathrm{d}x .
\end{equation*}
By Weyl's lemma $\Delta u $ is harmonic in $\{u> 0 \}$ and hence $C^\infty(\{u> 0 \})$. The claim follows.
\end{proof}
\begin{cor}[A Neighborhood of the Boundary] \label{cor:guterrand}
Let $u \in \mathcal{A}(u_0)$ be a minimizer and $\delta$ be as in Definition \ref{def:adm}. Then there exists $\epsilon_0 > 0 $ such that
$\Omega_{\epsilon_0} := \{ x \in \Omega : \mathrm{dist}(x, \partial \Omega) < \epsilon_0 \}$ has $C^2$-boundary, $u \in C^\infty(\Omega_{\epsilon_0})$ and $\Delta^2 u = 0$, as well as $u \geq \frac{\delta}{2} $ in $ \Omega_{\epsilon_0} $.
\end{cor}
\begin{proof}
Let $\delta$ be as in Definition \ref{def:adm}.
Due to the uniform continuity of $u$, there exists $\epsilon^* >0 $ such that $u(x) > \frac{\delta}{2}$ whenever $\mathrm{dist}(x, \partial \Omega) < \epsilon^*$.
Because of \cite[Lemma 14.16]{Gilbarg} there is $\epsilon' > 0$ such that $\epsilon \leq \epsilon'$ implies that $\Omega_{\epsilon} := \{ x \in \Omega : \mathrm{dist}(x, \partial \Omega) < \epsilon \} $ has $C^2$-boundary.
The claim follows taking $\epsilon_0 := \min\{ \epsilon^* , \epsilon'\} $ and using Lemma \ref{lem:biham}.
\end{proof}
\begin{remark}
Since it is needed very often, we will use the notation $\Omega_{\epsilon_0}$ from now on without giving further reference to Corollary \ref{cor:guterrand}.
\end{remark}
\begin{lemma}[Euler-Lagrange-Type Properties] \label{lem:EL}
Let $u \in \mathcal{A}(u_0) $ be a minimizer. Then for each $\phi \in W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega)$ such that $\phi \geq 0 $ one has
\begin{equation}\label{eq:subbiham}
\int_\Omega \Delta u \Delta \phi \;\mathrm{d}x \leq 0
\end{equation}
and
\begin{equation}\label{eq:meas}
\limsup_{\epsilon \rightarrow 0+ } \frac{| \{ 0< u < \epsilon \phi \} | }{\epsilon}\leq 2\mathcal{E}(u) ^\frac{1}{2} ||\Delta \phi ||_{L^2}.
\end{equation}
\end{lemma}
\begin{proof}
Set $\psi:= - \phi$. Then one has
\begin{align}\label{eq:EL}
0 & \geq \limsup_{\epsilon \rightarrow 0 + } \frac{\mathcal{E}(u) - \mathcal{E}(u+\epsilon\psi) }{\epsilon}
\\ & =\limsup_{\epsilon \rightarrow 0 } - 2\int_\Omega \Delta u \Delta \psi \;\mathrm{d}x - \epsilon \int_\Omega (\Delta \psi)^2 \;\mathrm{d}x + \frac{ |\{0 < u < -\epsilon \psi \}|}{\epsilon}
\nonumber \\ & = - 2\int_\Omega \Delta u \Delta \psi \;\mathrm{d}x + \limsup_{\epsilon \rightarrow 0 + } \frac{ |\{0 < u < -\epsilon \psi \}|}{\epsilon} \nonumber.
\end{align}
Since $\psi \leq 0$, we can first estimate the measure term from below by zero to obtain
\begin{equation*}
0 \geq - \int_\Omega \Delta u \Delta \psi \;\mathrm{d}x = \int_\Omega \Delta u \Delta \phi \;\mathrm{d}x \quad \forall \phi \in W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega) : \phi \geq 0,
\end{equation*}
that is \eqref{eq:subbiham}. Going back to \eqref{eq:EL} and using the Cauchy-Schwarz inequality we find
\begin{equation*}
\limsup_{\epsilon \rightarrow 0 + } \frac{|\{0 < u < \epsilon(-\psi) \}| }{\epsilon } \leq 2 \int \Delta u \Delta \psi \;\mathrm{d}x \leq 2 ||\Delta u||_{L^2} ||\Delta \psi||_{L^2} \leq 2\sqrt{\mathcal{E}(u)} || \Delta \psi ||_{L^2} ,
\end{equation*}
from which \eqref{eq:meas} follows again replacing $\phi := - \psi$.
\end{proof}
\begin{cor}[Subharmonicity] \label{lem:corsubham}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then $(\Delta u)^* \geq 0 $ at every $1$-Lebesgue point of $\Delta u$. In particular, $u$ is subharmonic.
\end{cor}
\begin{proof}
Fix $x \in \Omega$ and let $r \in (0, \mathrm{dist}(x, \partial \Omega))$ be arbitrary. Denote by $\phi_r$ the weak $W_0^{1,2}$-solution of
\begin{equation*}
\begin{cases}
\Delta \phi_r = \frac{1}{|B_r(x)| } \chi_{B_r(x)} & \mathrm{in} \; \Omega \\
\phi_r = 0 & \mathrm{on} \; \partial \Omega
\end{cases} .
\end{equation*}
By elliptic regularity, $\phi_r \in W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega)$. By the maximum principle, $\phi_r \leq 0 $ a.e.. By \eqref{eq:subbiham}
\begin{equation*}
0 \leq \int_\Omega \Delta u \Delta \phi_r \;\mathrm{d}y = \fint_{B_r(x)} \Delta u \;\mathrm{d}y.
\end{equation*}
If $x$ is a Lebesgue point of $\Delta u $, we can let $r \rightarrow 0 $ and find that $(\Delta u)^*(x) \geq 0$. Since $u \in W^{2,2}(\Omega)$, almost every point is a Lebesgue point and hence $\Delta u \geq 0 $ a.e.. Since $u$ is furthermore continuous, we get that $u$ is subharmonic, see \cite[Theorem 4.3]{Serrin}.
\end{proof}
\begin{cor}[Positive Part Near Free Boundary]\label{cor:posdir}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then
\begin{equation*}
\limsup_{\epsilon \rightarrow 0 } \frac{|\{0 < u < \epsilon\}|}{\epsilon} < \infty.
\end{equation*}
\end{cor}
\begin{proof}
Choose $\phi^* \in C_0^\infty(\Omega)$ such that $0 \leq \phi^* \leq 1 $ and $\phi^* \equiv 1 $ on $\Omega_{\epsilon_0}^C$, where $\Omega_{\epsilon_0}$ is defined as in Corollary \ref{cor:guterrand}. Then $u(x) \geq \frac{\delta}{2}$ for all $x \in \Omega_{\epsilon_0}$, where $\delta$ is given by Definition \ref{def:adm}. Therefore
\begin{equation*}
\lim_{\epsilon \rightarrow 0 } \frac{|\{ 0 < u < \epsilon \} \cap \Omega_{\epsilon_0}| }{\epsilon} = 0.
\end{equation*}
By \eqref{eq:meas}
\begin{align*}
\limsup_{\epsilon \rightarrow 0 } \frac{|\{0 < u < \epsilon\}|}{\epsilon} & = \limsup_{\epsilon \rightarrow 0 } \frac{|\{0 < u < \epsilon\} \cap \Omega_{\epsilon_0}^C|}{\epsilon} \\ & \leq \limsup_{\epsilon \rightarrow 0 } \frac{|\{0 < u < \epsilon\phi^*\}|}{\epsilon} \leq 2 \sqrt{\mathcal{E}(u) }||\Delta \phi^*||_{L^2} < \infty. \qedhere
\end{align*}
\end{proof}
\begin{cor}[The Biharmonic Measure] Let $u \in \mathcal{A}(u_0)$ be a minimizer.
Then there exists a finite Radon measure $\mu \in M(\Omega)$ such that $\mathrm{supp}(\mu) \subset \{ u = 0 \}$ and
\begin{equation}\label{eq:bihammeas}
2\int_\Omega \Delta u \Delta \phi \;\mathrm{d}x =- \int_\Omega \phi \; \mathrm{d}\mu \quad \forall \phi \in W_0^{2,2}(\Omega) .
\end{equation}
\end{cor}
\begin{proof}
Define $L: C_0^\infty(\Omega) \rightarrow \mathbb{R}$ by
\begin{equation*}
L(\phi) := - 2\int_\Omega \Delta u \Delta \phi \;\mathrm{d}x .
\end{equation*}
The map $L$ is linear and satisfies $L(f) \geq 0 $ for each $f \geq 0 $ by \eqref{eq:subbiham}. By the Riesz-Markov-Kakutani Theorem (see \cite[Corollary 1, Section 1.8]{EvGar}) we infer that there exists a (not necessarily finite) Radon measure $\mu \in M(\Omega)$ such that
\begin{equation*}
L(\phi) = \int_\Omega \phi \; \mathrm{d}\mu .
\end{equation*}
Furthermore, by Lemma \ref{lem:biham} we have that $L(\phi) = 0$ for each $\phi \in C_0^\infty(\{ u > 0 \}) \cup C_0^\infty( \{ u < 0 \} ) $. Since $\mu $ is Radon, this implies that $\mu( \{ u > 0 \} ) = \mu( \{ u < 0 \} ) = 0 $. Since $\{u > 0 \} $ and $\{ u < 0 \} $ are open by continuity of $u$, we have $\mathrm{\mathrm{supp}}(\mu) \subset \{ u = 0 \}$. However, since $u_{\mid_{\partial \Omega}} \geq \delta > 0 $ by Definition \ref{def:adm}, $\{u = 0 \} $ is compactly contained in $\Omega$. Hence $\mu(\Omega) = \mu(\{ u= 0 \} ) < \infty$ since $\mu$ is finite on compact subsets of $\Omega$. It remains to show that \eqref{eq:bihammeas} holds for $\phi \in W_0^{2,2}(\Omega)$, but this holds because of density and the fact that $W_0^{2,2}(\Omega) \subset C(\overline{\Omega})$.
\end{proof}
\begin{remark}
Note that for $\phi \in W_0^{2,2}(\Omega)$, \eqref{eq:bihammeas} holds only for the continuous representative of $\phi$. The precise representative is important since $\mu$ may not be absolutely continuous with respect to the Lebesgue measure.
\end{remark}
From now on, whenever we address a minimizer $u \in \mathcal{A}(u_0)$, $\mu_u$ or in case of nonambiguity $\mu$ denotes the measure that satisfies \eqref{eq:bihammeas}.
\begin{lemma}[Local BMO-regularity]\label{lem:bihmmeasmore}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then $\Delta u \in BMO_{loc}(\Omega) \subset L^q_{loc}(\Omega), q \in [1, \infty)$ and \eqref{eq:bihammeas} holds true also for $\phi \in W_0^{2,p}(\Omega_{\epsilon_0}^C)$ for each $p \in (1,2)$.
\end{lemma}
\begin{proof}
For the assertion that $\Delta u \in BMO_{loc}(\Omega)\subset L^q_{loc}(\Omega), q \in [1, \infty)$ we refer to \cite[Theorem 1.1]{Valdinoci}. Now fix $\phi \in W_0^{2,p}(\Omega_{\epsilon_0}^C)$. Since $\Omega_{\epsilon_0}^C$ has $C^2$-boundary by Corollary \ref{cor:guterrand} we obtain by Sobolev embedding that $\phi \in C(\overline{\Omega_{\epsilon_0}^C})$ and that there exists a sequence $(\phi_n)_{n = 1}^\infty \subset C_0^\infty(\Omega_{\epsilon_0}^C)$ that is convergent to $\phi$ in $W^{2,p}(\Omega_{\epsilon_0}^C)$ and in $C(\overline{\Omega_{\epsilon_0}^C})$. From this and the fact that \eqref{eq:bihammeas} holds for all $\phi_n$ one can infer that it also holds for $\phi$.
\end{proof}
\begin{remark}\label{rem:ceins}
In particular the previous Lemma implies that each minimizer lies in $C^1(\Omega)$.
\end{remark}
\section{Regularity and Semiconvexity}
In this section we will study regularity and some properties of the minimizer, in particular the set of non-$1-$Lebesgue points of $D^2u$. We will expose a singular behavior of the Laplacian at all those points. Moreover we prove that minimizers are semiconvex, which can also be seen as a regularity property, having Aleksandrov's theorem in mind.
For our arguments, we need some remarkable facts about the fundamental solution in two dimensions that were already discovered and applied to the biharmonic obstacle problem by Caffarelli and Friedman in \cite[e.g. Equation (6.3)]{Friedman}.
\begin{lemma}[{Fundamental Solution of the Biharmonic Operator, cf. \cite[Section 7.3]{Mitrea}}] \label{lem:green}
Define $F : \mathbb{R}^2 \times \mathbb{R}^2 \setminus \{(x,x): x \in \mathbb{R}^2 \} \rightarrow \mathbb{R}$ via
\begin{equation*}
F(x,y) := \frac{1}{8\pi} |x-y|^2 \log |x-y| .
\end{equation*}
Then $F$ satisfies $\Delta^2 F(x,\cdot) = \delta_x$ on $\mathbb{R}^2$, where $\delta_x$ denotes the Dirac measure of $\{x\}$. For each $\beta \in (0,1] $ one has that $F(\cdot,y) \in W^{3,2-\beta}_{loc} (\mathbb{R}^2)$ for each $y \in \mathbb{R}^2$. Moreover, for all $(x,y) \in \mathbb{R}^2 \times \mathbb{R}^2$ such that $x \neq y$ one has
\begin{align}\label{eq:ablgreen}
\nabla_x F(x,y) & = - \nabla_y F(x,y) = \frac{1}{8\pi}(2 \log|x-y| + 1) (x-y), \\
\partial_{x_ix_i}^2 F(x,y)& = \frac{1}{8\pi} \left( 1+ 2 \frac{(x_i-y_i)^2}{|x-y|^2} +2 \log |x-y| \right) \quad i = 1,2 , \label{eq:seconder} \\
\partial^2_{x_1x_2} F(x,y)& = \frac{1}{4\pi} \frac{(x_1-y_1) (x_2-y_2) }{|x-y|^2}.
\end{align}
In particular,
\begin{equation}\label{eq:laplaci}
\Delta_x F(x,y) = \frac{1}{2\pi} \left( \log |x-y| + 1 \right),
\end{equation}
and $|\partial_{x_1x_1}^2F(x,\cdot) - \partial_{x_2x_2}^2F(x,\cdot)|, |\partial^2_{x_1x_2}F(x, \cdot)|\leq \frac{3}{8\pi}$ on $\mathbb{R}^2 \setminus\{x\}$ for each $x \in \mathbb{R}^2$. Moreover, there is $C > 0 $ such that
\begin{equation}\label{eq:thidbound}
|D_x^3F(x,y)| \leq \frac{C}{|x-y|} \quad \forall y \in \mathbb{R}^2 \setminus \{ x \} .
\end{equation}
\end{lemma}
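The identities above are elementary but somewhat error-prone to derive by hand; they can, for instance, be verified symbolically. The following short sketch (ours, purely for verification and not part of the argument) checks \eqref{eq:laplaci} with the computer algebra package \texttt{sympy}:
\begin{verbatim}
import sympy as sp

# Symbolic check of Delta_x F(x,y) = (1/(2 pi)) (log|x-y| + 1), where
# F(x,y) = (1/(8 pi)) |x-y|^2 log|x-y|, written in terms of r2 = |x-y|^2.
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
r2 = (x1 - y1)**2 + (x2 - y2)**2
F = r2 * sp.log(r2) / (16 * sp.pi)      # equals |x-y|^2 log|x-y| / (8 pi)
lap = sp.diff(F, x1, 2) + sp.diff(F, x2, 2)
expected = (sp.log(r2) / 2 + 1) / (2 * sp.pi)
print(sp.simplify(lap - expected))      # prints 0
\end{verbatim}
The derivatives \eqref{eq:ablgreen} and \eqref{eq:seconder} can be checked in the same way.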
\begin{lemma}\label{lem:greensubham}
Let $x_0,y \in \mathbb{R}^2$ and
\begin{equation*}
H(r) := - \frac{1}{8\pi}\fint_{B_r(x_0)} \log|x-y| dx.
\end{equation*}
Then $H$ is decreasing on $(0,\infty)$ and its pointwise limit as $r \rightarrow 0$ is given by $- \frac{1}{8\pi}\log|x_0 - y|$, with the convention that $- \log 0 := \infty$.
\end{lemma}
\begin{proof}
The claim follows directly from \cite[Proposition 4.4.11(6)]{Berenstein} and \cite[Proposition 4.4.15]{Berenstein}.
\end{proof}
The following result is very similar to crucial observations in \cite{Friedman}.
\begin{lemma}[Biharmonic Measure Representation, Proof in Appendix \ref{sec:tecprf}] \label{lem:bihammeasrep}
Let $u \in \mathcal{A}(u_0)$ be a minimizer and $\mu$ be as in \eqref{eq:bihammeas}. Further let $\Omega_{\epsilon_0} $ be as in Corollary \ref{cor:guterrand}. Then there exists $h \in C^\infty(\overline{\Omega_{\epsilon_0}^C}) $ such that
\begin{equation*}
u(x) = - \frac{1}{2}\int_\Omega F(x,y) \; \mathrm{d}\mu(y) + h(x) \quad \forall x \in \Omega_{\epsilon_0}^C,
\end{equation*}
where $F$ is the same as in Lemma \ref{lem:green}.
\end{lemma}
The explicit representation of the minimizer will help to prove a first regularity result. The method used here is explained in the following lemma, whose proof follows in a straightforward way from the definition of a weak derivative and Fubini's theorem.
\begin{lemma}[Kernel Operators with Measures]\label{lem:kernelmeas}
Let $\Omega\subset \mathbb{R}^n$ be open and bounded and $1 \leq p < \infty$. Let $\alpha$ be a finite Borel measure on $\Omega$ and let $\lambda$ denote the $n$-dimensional Lebesgue measure on $\Omega$. Let $H :\Omega \times \Omega \rightarrow \overline{\mathbb{R}}$ be a Borel measurable function on $\Omega \times \Omega$ such that
\begin{enumerate}
\item $(x,y) \mapsto H(x,y) \in L^p(\lambda \times \alpha )$
\item For each $y \in \Omega$, $x \mapsto H(x,y)$ is weakly differentiable with $\Omega\times \Omega$-Borel measurable weak derivative $\nabla_x H(x,y)$.
\item $(x,y) \mapsto \nabla_x H(x,y) \in L^p(\lambda \times \alpha) $.
\end{enumerate}
Then $A(x) := \int_\Omega H(x,y) \; \mathrm{d}\alpha(y) $ lies in $W^{1,p}(\Omega)$ and its weak derivative satisfies
\begin{equation}\label{eq:abliden}
\nabla A(x) = \int_\Omega \nabla_x H(x,y) \; \mathrm{d}\alpha(y) .
\end{equation}
\end{lemma}
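To illustrate \eqref{eq:abliden} in the simplest possible situation (this numerical aside is ours and not part of the argument), take $\alpha$ to be a finite sum of point masses; then $A$ is a finite sum and \eqref{eq:abliden} reduces to termwise differentiation. The following sketch compares the resulting gradient with a finite-difference approximation, using the kernel $F$ from Lemma \ref{lem:green}:
\begin{verbatim}
import numpy as np

# For alpha = sum_i w_i * delta_{y_i}, A(x) = sum_i w_i F(x, y_i) and the weak
# gradient is sum_i w_i grad_x F(x, y_i).  We check this against central
# finite differences, with F(x,y) = |x-y|^2 log|x-y| / (8 pi) and the gradient
# formula grad_x F = (2 log|x-y| + 1)(x-y) / (8 pi) stated above.
def F(x, y):
    r = np.linalg.norm(x - y)
    return r**2 * np.log(r) / (8 * np.pi)

def gradF(x, y):
    r = np.linalg.norm(x - y)
    return (2 * np.log(r) + 1) * (x - y) / (8 * np.pi)

ys = np.array([[0.3, 0.1], [-0.2, 0.4]])   # atoms y_i
ws = np.array([1.0, 2.0])                  # weights w_i
A = lambda x: sum(w * F(x, y) for w, y in zip(ws, ys))
x0, h = np.array([0.8, -0.5]), 1e-6
fd = np.array([(A(x0 + h*e) - A(x0 - h*e)) / (2*h) for e in np.eye(2)])
exact = sum(w * gradF(x0, y) for w, y in zip(ws, ys))
print(fd, exact)                           # the two gradients agree up to O(h^2)
\end{verbatim}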
Using induction and the previous lemma, one easily obtains the following higher order version.
\begin{cor}[Higher Order Derivatives]\label{cor:regreg}
Let $\Omega \subset \mathbb{R}^n$ be open and $1 \leq p < \infty$. Let $H : \Omega \times \Omega \rightarrow \mathbb{R}$ be Borel measurable on $\Omega \times \Omega$ such that for each $y \in \Omega$ the map $x \mapsto H(x,y)$ lies in $W^{k,p}(\Omega)$, such that $H, D_x H, D_x^2 H, \dots, D_x^k H \in L^p(\lambda \times \alpha)$, and such that all these derivatives are Borel measurable in $\Omega \times \Omega$. Then $A(x) := \int_\Omega H(x,y) \; \mathrm{d}\alpha(y)$ lies in $W^{k,p}(\Omega)$.
Moreover one has
\begin{equation}\label{eq:346}
D^jA(x) = \int_\Omega D_x^jH(x,y) \; \mathrm{d}\alpha(y) \quad j = 1,\dots,k \quad \mathrm{a.e.} \; x \in \Omega.
\end{equation}
\end{cor}
\begin{cor}[Sobolev Regularity of Minimizers] \label{cor:minregu}
Let $u\in \mathcal{A}(u_0)$ be a minimizer. Then $u \in W^{3,2-\beta}(\Omega_{\epsilon_0}^C)$ for each $\beta \in (0,1]$ and the set of non-$1$-Lebesgue points of $D^2u$ in $\Omega_{\epsilon_0}^C$ has Hausdorff dimension $0$. Moreover, at every $1$-Lebesgue point of $D^2u$ which is not an atom of $\mu$ one has
\begin{equation}\label{eq:347}
(D^2u)^*(x) =- \frac{1}{2} \int_\Omega D^2F(x,y)\; \mathrm{d}\mu(y) + D^2h(x),
\end{equation}
where $F$, $\mu$ and $h$ are given in Lemma \ref{lem:bihammeasrep}.
\end{cor}
\begin{proof}
For the $W^{3,2-\beta}$-regularity we use the representation in Lemma \ref{lem:bihammeasrep} and Corollary \ref{cor:regreg}. The requirements of Corollary \ref{cor:regreg} are satisfied if we can show that $F,D_x F,D_x^2F$ and $D_x^3F$ lie in $L^{2-\beta}(\lambda \times \mu)$ (since the remaining requirements follow immediately from Lemma \ref{lem:green}). We show this only for $D_x^3F$; the other computations are very similar. Using \eqref{eq:thidbound}, Tonelli's Theorem and radial integration we find
\begin{align*}
\int_\Omega |D_x^3F&(x,y) |^{2-\beta} \; \mathrm{d}( \lambda \times \mu)(x,y) = \int_\Omega \int_\Omega |D_x^3F(x,y)|^{2-\beta} \;\mathrm{d}x \; \mathrm{d}\mu(y) \\ & \leq \int_\Omega \int_\Omega \frac{C^{2-\beta}}{|x-y|^{2-\beta}} \;\mathrm{d}x \; \mathrm{d}\mu(y)
\leq C^{2-\beta} \int_\Omega \int_{B_{\mathrm{diam}(\Omega)}(y)} \frac{1}{|x-y|^{2-\beta}} \;\mathrm{d}x \; \mathrm{d}\mu(y)
\\ & \leq C^{2-\beta} \int_\Omega \int_0^{\mathrm{diam}(\Omega)} 2\pi \frac{r}{r^{2-\beta}} \;\mathrm{d}r \; \mathrm{d}\mu(y)
\leq C^{2-\beta} \int_\Omega \frac{2\pi}{\beta} \mathrm{diam}(\Omega)^\beta \; \mathrm{d}\mu(y) \\ & =\frac{2\pi}{\beta} C^{2-\beta} \mathrm{diam}(\Omega)^\beta \mu(\Omega) < \infty.
\end{align*}
The $W^{3,2-\beta}$-regularity claim is shown. We conclude that $D^2u \in W^{1,2-\beta}(\Omega_{\epsilon_0}^C)$ for each $\beta > 0$. Since $\Omega_{\epsilon_0}^C$ has Lipschitz boundary, $D^2u$ extends to a function in $W^{1,2-\beta}(\mathbb{R}^n)$ (cf. \cite[Thm.1, Sect.5.4]{Evans}). From \cite[Thm.1(i),(ii), Sect.4.8]{EvGar} it follows that there is a Borel set $E_\beta \subset \Omega$ of $\beta$-capacity zero, such that the non-$1$-Lebesgue points are contained in $E_\beta$. Now \cite[Thm.4, Sect.4.7]{EvGar} implies that $\mathcal{H}^{2\beta}(E_\beta)= 0$ and hence the set of non-$1$-Lebesgue points is an $\mathcal{H}^{2\beta}$ null set. Equation \eqref{eq:347} does not follow directly, since \eqref{eq:346} only gives one representative of $D^2u$. Let $x_0$ be a $1$-Lebesgue point of $D^2u$. Then, according to Lemma \ref{lem:green}
\begin{align*}
2 (\partial^2_{x_1x_1} u)^*(x_0) & = 2\lim_{r \rightarrow 0 } \fint_{B_r(x_0)} (\partial^2_{x_1x_1} u)(y) \;\mathrm{d}y \\ & = \lim_{r\rightarrow 0 } \fint_{B_r(x_0)} \int_\Omega \frac{-1}{8\pi} \left( 1 + 2 \frac{(x_1 - y_1)^2}{|x-y|^2} + 2 \log|x-y| \right) \; \mathrm{d}\mu(y) \;\mathrm{d}x \\ & \quad \qquad + 2 \fint_{B_r(x_0)} \partial^2_{x_1x_1} h(x) \;\mathrm{d}x .
\end{align*}
Since $h$ is smooth, the last summand tends to $2\partial_{x_1x_1}^2h(x_0)$. We have already shown above that $\partial_{x_1x_1}^2F = \frac{1}{8\pi} \left( 1 + 2 \frac{(x_1 - y_1)^2}{|x-y|^2} + 2 \log|x-y| \right)$ lies in $L^{2-\beta}(\lambda \times \mu)$. Therefore we can interchange the order of the two integrations by Fubini's Theorem. Hence
\begin{align*}
2(\partial^2_{x_1x_1} u)^*(x_0) & = 2 \partial_{x_1x_1}^2 h(x_0) \\ &+ \quad \lim_{r\rightarrow0 }\int_\Omega \fint_{B_r(x_0)} \frac{-1}{8\pi} \left( 1 + 2 \frac{(x_1 - y_1)^2}{|x-y|^2} + 2 \log|x-y| \right) \;\mathrm{d}x \; \mathrm{d}\mu(y) .
\end{align*}
Now observe that
\begin{equation*}
r \mapsto \fint_{B_r(x_0)} \frac{-1}{8\pi} \log|x-y| \;\mathrm{d}x
\end{equation*}
is decreasing in $r$ because of Lemma \ref{lem:greensubham} and hence the monotone convergence theorem yields
\begin{equation} \label{eq:convi}
\lim_{r\rightarrow 0+} \int_\Omega \fint_{B_r(x_0)} \frac{-1}{8\pi} \log|x-y| \;\mathrm{d}x \; \mathrm{d}\mu(y) = \int_\Omega \frac{-1}{8\pi}\log|x_0-y| \; \mathrm{d}\mu(y) .
\end{equation}
(Actually, the monotone convergence theorem is not exactly applicable since the integrand is not necessarily positive. This can however be fixed since $\mu$ is finite and for each $r$ the integrand is bounded from below by $-\frac{1}{8\pi}\log \mathrm{diam}(\Omega)$. Adding and subtracting this quantity one obtains the claimed convergence).
Therefore
\begin{align}\label{eq:lebptprf}
2(\partial^2_{x_1x_1} u)^*(x_0)& =\lim_{r\rightarrow0 }\int_\Omega \fint_{B_r(x_0)} \frac{-1}{8\pi} \left( 1 + 2 \frac{(x_1 - y_1)^2}{|x-y|^2} \right) \;\mathrm{d}x \; \mathrm{d}\mu(y)\nonumber\\ & \qquad + 2\partial_{x_1x_1}^2 h(x_0) - \int_\Omega \frac{1}{4\pi} \log|x_0-y| \; \mathrm{d}\mu(y).
\end{align}
Observe that for $y \neq x_0$ one has
\begin{equation*}
\lim_{r \rightarrow 0+} \fint_{B_r(x_0)} \frac{-1}{8\pi} \left( 1 + 2 \frac{(x_1 - y_1)^2}{|x-y|^2} \right) \;\mathrm{d}x = \frac{-1}{8\pi} \left( 1 + 2 \frac{((x_0)_1 - y_1)^2}{|x_0-y|^2} \right).
\end{equation*}
Since $\mu(\{x_0\})= 0 $, the integrand converges $\mu$-almost everywhere to the right hand side. This and the fact that the expression is uniformly bounded in $r$ by $\frac{3}{8\pi}$ imply, together with the dominated convergence theorem, that
\begin{equation*}
\lim_{r\rightarrow0+}\int_\Omega \fint_{B_r(x_0)} \frac{-1}{8\pi} \left( 1 + 2 \frac{(x_1 - y_1)^2}{|x-y|^2} \right) \;\mathrm{d}x \; \mathrm{d}\mu(y) = \int_\Omega \frac{-1}{8\pi} \left( 1 + 2 \frac{((x_0)_1 - y_1)^2}{|x_0-y|^2} \right) \; \mathrm{d}\mu(y).
\end{equation*}
Plugging this into \eqref{eq:lebptprf} we find
\begin{equation*}
2(\partial^2_{x_1x_1} u)^*(x_0) = \int_\Omega \frac{-1}{8\pi} \left( 1 + 2 \frac{((x_0)_1 - y_1)^2}{|x_0-y|^2} + 2 \log|x_0-y| \right)\; \mathrm{d}\mu(y) + 2 \partial^2_{x_1x_1} h(x_0) .
\end{equation*}
The same techniques apply for $(\partial_{x_1x_2}^2u)^*$ and $(\partial_{x_2x_2}^2u)^*$. This proves \eqref{eq:347}.
\end{proof}
\begin{cor}\label{cor:lebgem}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then $\partial^2_{x_1x_2}u$ and $\partial^2_{x_1x_1} u - \partial^2_{x_2x_2}u $ lie in $L^\infty(\Omega_{\epsilon_0}^C)$. Moreover, each $x_0 \in \Omega$ that is not an atom of $\mu$ is a Lebesgue point of $\partial^2_{x_1x_2}u$ and $\partial^2_{x_1x_1} u - \partial^2_{x_2x_2}u $.
\end{cor}
\begin{proof}
For the fact that $\partial^2_{x_1x_1} u - \partial^2_{x_2x_2} u \in L^\infty(\Omega_{\epsilon_0}^C)$ observe with the notation of \eqref{eq:347} that almost everywhere one has
\begin{align*}
|\partial^2_{x_1x_1} u - \partial^2_{x_2x_2} u | & = \left\vert- \frac{1}{2} \int_\Omega (\partial^2_{x_1x_1}F - \partial^2_{x_2x_2} F) \; \mathrm{d}\mu(y) + \partial^2_{x_1x_1} h - \partial^2_{x_2x_2} h \right\vert \\
& \leq \frac{3}{16 \pi} \mu(\Omega) + 2|| D^2h||_\infty < \infty,
\end{align*}
where we used Lemma \ref{lem:green} in the last step. Similarly one shows that $\partial^2_{x_1x_2} u \in L^\infty(\Omega_{\epsilon_0}^C)$.
Now we show that each non-atom $x$ of $\mu$ is a $1-$Lebesgue point of $\partial^2_{x_1x_2 }u $. By \eqref{eq:347} it is sufficient to show that each non-atom of $\mu$ is a $1-$Lebesgue point of $\int_\Omega \partial^2_{x_1x_2}F(\cdot,y) d\mu(y) $ as each point in $\Omega_{\epsilon_0}^C$ is a Lebesgue point of $D^2h$. We have already discussed in Corollary \ref{cor:minregu} that $\partial^2_{x_1x_2}F$ is $(\lambda \times \mu)$-measurable. Moreover it is product integrable as it is uniformly bounded.
By Fubini's theorem
\begin{align*}
& \frac{1}{|B_r(x)|} \int_{B_r(x) } \left( \int_\Omega \partial_{x_1x_2}^2 F(z,y) \; \mathrm{d}\mu(y) \right) \; \mathrm{d}z \\ & \quad \quad \quad \quad \quad \quad \quad \quad = \int_\Omega \left( \frac{1}{|B_r(x)|} \int_{B_r(x)} \partial^2_{x_1x_2} F(z,y)\; \mathrm{d}z \right) \; \mathrm{d}\mu(y) .
\end{align*}
For each $y \in \Omega \setminus \{x\} $ the expression in parentheses converges to $\partial^2_{x_1x_2} F(x,y)$ as $r \rightarrow 0 $; since $x$ is not an atom of $\mu$, this convergence holds $\mu$-almost everywhere. Moreover Lemma \ref{lem:green} yields that the expression is uniformly bounded by $\frac{3}{8\pi}$
and hence the dominated convergence theorem yields
\begin{equation}\label{eq:3.26}
\lim_{r\rightarrow 0 } \frac{1}{|B_r(x)|} \int_{B_r(x) } \left( \int_\Omega \partial_{x_1x_2}^2 F(z,y) \; \mathrm{d}\mu(y) \right) \; \mathrm{d}z = \int_\Omega \partial^2_{x_1x_2} F(x,y) \; \mathrm{d}\mu(y).
\end{equation}
To show the Lebesgue point property it remains to show that
\begin{equation*}
\lim_{r \rightarrow 0} \frac{1}{|B_r(x)|} \int_{B_r(x)} \left\vert \int_\Omega \partial^2_{x_1x_2} F(z,y) \; \mathrm{d}\mu(y) - \int_\Omega \partial^2_{x_1x_2} F(x,y) \; \mathrm{d}\mu(y) \right\vert \; \mathrm{d}z = 0 .
\end{equation*}
This is immediate once one observes with the triangle inequality and Fubini's theorem that
\begin{align*}
& \frac{1}{|B_r(x)|} \int_{B_r(x)} \left\vert \int_\Omega \partial^2_{x_1x_2} F(z,y) \; \mathrm{d}\mu(y) - \int_\Omega \partial^2_{x_1x_2} F(x,y) \; \mathrm{d}\mu(y) \right\vert \; \mathrm{d}z
\\ & \quad \quad \quad \quad \quad \quad \leq \int_\Omega \frac{1}{|B_r(x)|}\int_{B_r(x)} |\partial^2_{x_1x_2} F(z,y) - \partial^2_{x_1x_2} F(x,y)| \; \mathrm{d}z \; \mathrm{d}\mu(y) .
\end{align*}
The term on the right hand side can be shown to tend to zero as $r \rightarrow 0 $ with the dominated convergence theorem using arguments similar to the discussion before \eqref{eq:3.26}. For $\partial^2_{x_1x_1} u - \partial^2_{x_2x_2}u $ the analogous statement can be shown similarly.
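For the reader's convenience we sketch this last step. For each fixed $y \in \Omega \setminus \{ x \}$ the function $\partial^2_{x_1x_2}F(\cdot,y)$ is continuous near $x$, so that
\begin{equation*}
\lim_{r \rightarrow 0} \fint_{B_r(x)} |\partial^2_{x_1x_2} F(z,y) - \partial^2_{x_1x_2} F(x,y)| \; \mathrm{d}z = 0 \quad \text{for } \mu\text{-almost every } y \in \Omega,
\end{equation*}
and the averages are bounded by $\frac{3}{4\pi}$ uniformly in $r$ and $y$ by Lemma \ref{lem:green}; hence the dominated convergence theorem applies to the outer $\mu$-integral.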
\end{proof}
\begin{cor}\label{cor:suubhaam}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then for each $x \in \Omega$ the quantity $(\Delta u)^*(x) := \lim_{r \rightarrow 0 } \fint_{B_r(x)} \Delta u(y) \;\mathrm{d}y$ exists in $[0, \infty]$. Moreover, the map $x \mapsto (\Delta u)^*(x)$ is superharmonic.
\end{cor}
\begin{proof}
Recall that $\Delta u $ is weakly superharmonic by \eqref{eq:subbiham}. From \cite[Theorem 4.1]{Serrin} it follows immediately that $(\Delta u)^*(x)$ exists in $\mathbb{R} \cup \{ \infty \}$ for all $x \in \Omega$. By Corollary \ref{lem:corsubham} it has to lie in $[0, \infty]$, which shows the first part of the claim. From \eqref{eq:347} and Lemma \ref{lem:green} we infer that
\begin{equation*}
\Delta u (x) = - \frac{1}{4\pi} \int_\Omega (\log|x-y| + 1 ) \; \mathrm{d}\mu(y) + \Delta h (x) \quad a.e. .
\end{equation*}
Similarly to the discussion around \eqref{eq:convi} we can derive, using the special properties of the logarithm, that
\begin{equation}\label{eq:3.32}
(\Delta u)^*(x) = - \frac{1}{4\pi} \int_\Omega (\log|x-y| + 1 ) \; \mathrm{d}\mu(y) + \Delta h (x) \quad \forall x \in \Omega_{\epsilon_0}^C.
\end{equation}
Note that $(\Delta u)^*$ is the so-called \emph{canonical representative} of a weakly superharmonic function in the sense of \cite[p.360]{Serrin}. To show that $(\Delta u)^*$ is superharmonic it suffices according to \cite[Theorem 4.3]{Serrin} to show that $(\Delta u)^*$ is lower semicontinuous. For this let $(x_n)_{n = 1}^\infty \subset \Omega_{\epsilon_0}^C$ be such that $x_n \rightarrow x \in \Omega_{\epsilon_0}^C$. Note that $ - \log|x_n - \cdot |$ is bounded from below independently of $n$ by $-\log\mathrm{diam}(\Omega)$. Thus Fatou's lemma yields
\begin{equation}\label{eq:faatou}
\liminf_{n \rightarrow \infty} \int_\Omega - \log |x_n - y | \; \mathrm{d}\mu(y) \geq \int_\Omega \liminf_{n\rightarrow \infty} \left( - \log |x_n - y| \right) \mathrm{d}\mu (y) = -\int_\Omega \log |x-y| \; \mathrm{d}\mu (y) .
\end{equation}
Since, by \eqref{eq:3.32}, $(\Delta u)^*(x_n) = \frac{1}{4\pi}\int_\Omega (- \log |x_n - y|) \; \mathrm{d}\mu(y) - \frac{\mu(\Omega)}{4\pi} + \Delta h(x_n)$ consists only of continuous terms and a positive multiple of the left hand side in \eqref{eq:faatou}, one has $\liminf_{n \rightarrow \infty } (\Delta u)^*(x_n) \geq (\Delta u)^*(x)$, that is, $(\Delta u)^*$ is lower semicontinuous. As we already explained, this implies superharmonicity of $(\Delta u)^*$.
\end{proof}
\begin{remark}
Note that the notation $(\Delta u)^*$ creates a slight ambiguity with \eqref{eq:avliim}, namely whenever the limit in the definition is infinite. It will always be clear from the context what convention is used, especially in view of the following consistency result.
\end{remark}
\begin{prop}\label{prop:genfac}
Let $f : \Omega \rightarrow \mathbb{R}$ be a nonnegative superharmonic function. Then each point where $f< \infty$ is a $1-$Lebesgue point of $f$.
\end{prop}
\begin{proof}
By \cite[Theorem 3.1.3]{Armitage} one has $f(x) = \liminf_{y \rightarrow x} f(y)$ for each $x \in \Omega$. In particular
\begin{equation}\label{eq:liminf}
f(x) = \lim_{r \rightarrow 0 } \inf_{B_r(x)} f .
\end{equation}
Now suppose that $f(x)< \infty$. Then by the triangle inequality
\begin{align*}
\fint_{B_r(x)} |f(z) - f(x) | \; \mathrm{d}z & \leq \fint_{B_r(x)} | f(z) - \inf_{B_r(x)} f| \; \mathrm{d}z + | \inf_{B_r(x)} f - f(x) |
\\ & \leq \fint_{B_r(x)} f(z) \; \mathrm{d}z - \inf_{B_r(x)} f + | \inf_{B_r(x)} f - f(x) |.
\end{align*}
As $f$ is superharmonic, we have $\fint_{B_r(x)} f(z) \;\mathrm{d}z \rightarrow f(x)$ as $r \rightarrow 0+$.
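One way to see this convergence is to combine the super-mean value inequality for superharmonic functions with \eqref{eq:liminf}:
\begin{equation*}
\inf_{B_r(x)} f \; \leq \; \fint_{B_r(x)} f(z) \; \mathrm{d}z \; \leq \; f(x) \qquad \text{for all sufficiently small } r > 0,
\end{equation*}
and the left hand side tends to $f(x)$ as $r \rightarrow 0$ by \eqref{eq:liminf}, so the averages are squeezed towards $f(x)$.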
Using this, $f(x)< \infty$ and \eqref{eq:liminf} we obtain that
\begin{equation*}
\lim_{r \rightarrow 0 } \fint_{B_r(x)} |f(z) - f(x) | \; \mathrm{d}z = f(x) - f(x) = 0 . \qedhere
\end{equation*}
\end{proof}
Putting the previous results together we obtain the following
\begin{cor}
Each non-$1-$Lebesgue point $x$ of $D^2u $ is an atom of $\mu$ or satisfies $(\Delta u)^*(x) = \infty$.
\end{cor}
\begin{proof}
Suppose that $x$ is neither an atom of $\mu$ nor $(\Delta u)^*(x) = \infty$. By Corollary \ref{cor:suubhaam} and Proposition \ref{prop:genfac} we get that $x$ is a $1-$Lebesgue point of $\Delta u$. By Corollary \ref{cor:lebgem} we also know that $x$ is a $1-$Lebesgue point of $\partial^2_{x_1x_1} u - \partial^2_{x_2x_2} u $ and $\partial^2_{x_1x_2} u $. Since all second derivatives of $u$ are linear combinations of the mentioned quantities, $x$ is a $1-$Lebesgue point of $D^2u$. The claim follows by contraposition.
\end{proof}
We can refine the statement with the following observations.
\begin{lemma}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. If $x_0 \in \Omega$ is an atom of $\mu$ then $(\Delta u)^*(x_0) = \infty$.
\end{lemma}
\begin{proof}
Suppose that $x_0 $ is an atom of $\mu$ and set $\widetilde{\mu} := \mu - \mu(\{x_0\}) \delta_{x_0}$ which is also a finite measure.
Using \eqref{eq:3.32} we find with the notation from there that for each $x \in \Omega_{\epsilon_0}^C$
\begin{align*}
(\Delta u)^*(x) & = - \frac{1}{4\pi} \int_\Omega (\log |x-y| + 1 )\; \mathrm{d}\mu(y) + \Delta h(x) \\ & = - \frac{1}{4\pi} (\log |x-x_0| + 1) \mu(\{x_0\}) - \frac{1}{4\pi}\int_\Omega (\log |x-y| + 1 ) \; \mathrm{d}\widetilde{\mu}(y) + \Delta h (x)
\\ & \geq - ||\Delta h ||_\infty - \frac{1}{4\pi}( \log \mathrm{diam}(\Omega) + 1) \widetilde{\mu}(\Omega) - \frac{\mu(\{ x_0\} )}{4\pi} ( 1 + \log |x-x_0| ) .
\end{align*}
Plugging in $x= x_0 $ we obtain finally that $(\Delta u)^*(x_0) = \infty$ as claimed.
\end{proof}
\begin{remark}\label{rem:atoom}
The previous observations show that each non-$1-$Lebesgue point of $D^2u$ satisfies $(\Delta u)^* = \infty$ and each atom of $\mu$ is a non-$1-$Lebesgue point of $D^2u$.
\end{remark}
\begin{lemma}[Semiconvexity]\label{lem:semicon}
Let $u \in \mathcal{A}(u_0)$ be a minimizer and set
\begin{equation*}
A := \frac{\sqrt{5}}{2} \left(2 ||D^2h||_\infty + \frac{3}{16\pi} \mu(\Omega) \right).
\end{equation*}
Then at each $x \in \Omega_{\epsilon_0}^C$ which is a $1-$Lebesgue point of $D^2u$ the matrix $(D^2 u)^* + AI $ is positive semidefinite, where $I = \mathrm{diag}(1,1)$ denotes the identity matrix. In particular, for each $x_0 \in \mathbb{R}^2$ one has that $x \mapsto u(x) + \frac{1}{2} A |x-x_0|^2$ is convex on $\Omega_{\epsilon_0}^C$.
\end{lemma}
\begin{proof}
Let $x$ be a Lebesgue point of $D^2u$. By Remark \ref{rem:atoom}, $x$ is not an atom of $\mu$.
Note that if $M = \begin{pmatrix} m_{11} & m_{12} \\ m_{12} & m_{22} \end{pmatrix} \in \mathbb{R}^{2\times 2} $ is a symmetric matrix then the eigenvalues of $M$ are given by
\begin{equation}\label{eq:eigw}
\lambda_{1,2} = \frac{m_{11} + m_{22}}{2} \pm \sqrt{\frac{1}{4} (m_{11} - m_{22} )^2 + m_{12}^2 }.
\end{equation}
If $M = (D^2u)^*(x) + AI$ then Corollary \ref{lem:corsubham} implies that
\begin{equation}\label{eq:eigenval}
\frac{m_{11} + m_{22}}{2} = \frac{(\Delta u)^*(x)}{2} + A \geq A .
\end{equation}
Using \eqref{eq:347}, the fact that $x$ is not an atom of $\mu$, and Lemma \ref{lem:green} we obtain
\begin{align*}
|m_{11}- m_{22}|& = \left\vert\frac{1}{2} \int_\Omega (\partial^2_{x_1x_1} F(x,y) - \partial^2_{x_2x_2}F(x,y) ) \; \mathrm{d}\mu(y) + \partial^2_{x_1x_1}h - \partial_{x_2x_2}^2h \right\vert \\ & \leq\frac{1}{2} \int_{\Omega \setminus \{x\} } |\partial^2_{x_1x_1} F(x,y) - \partial^2_{x_2x_2}F(x,y) | \; \mathrm{d}\mu(y) + 2 ||D^2h||_\infty\\ & \leq \frac{3}{16\pi}\mu(\Omega) + 2||D^2h||_\infty
.
\end{align*}
Analogously one can show that
\begin{equation*}
|m_{12}| \leq \frac{3}{16\pi} \mu(\Omega) + 2 ||D^2h ||_\infty.
\end{equation*}
Hence
\begin{equation*}
\sqrt{\frac{1}{4} (m_{11} - m_{22} )^2 + m_{12}^2 } \leq \sqrt{ \left( 1 + \frac{1}{4}\right) \left(\frac{3}{16\pi} \mu(\Omega) + 2 ||D^2h ||_\infty\right)^2 } \leq A.
\end{equation*}
Plugging this and \eqref{eq:eigenval} into \eqref{eq:eigw} we find
\begin{equation*}
\lambda_{1,2} \geq A - A = 0.
\end{equation*}
Thus we obtain that $M$ is indeed positive semidefinite. For $\epsilon >0 $ let $\rho_\epsilon$ be the standard mollifier. Set $f_\epsilon(x) := \left( u(\cdot) + \frac{1}{2}A |\cdot - x_0|^2\right) * \rho_\epsilon$. Observe that for $\epsilon< \epsilon_0$, $f_\epsilon \in C^2(\Omega_{\epsilon_0}^C)$ and $D^2 f_\epsilon = (D^2 u + AI) * \rho_\epsilon$ on $\Omega_{\epsilon_0}^C$. This matrix is positive semidefinite since for each $z \in \mathbb{R}^2$
\begin{equation*}
z^TD^2f_\epsilon(x) z = (z^T(D^2u + AI) z * \rho_\epsilon)(x) \geq 0 ,
\end{equation*}
as $\rho_\epsilon $ is nonnegative and $D^2 u + AI$ is positive semidefinite almost everywhere. Hence $f_\epsilon$ is convex. However $f_\epsilon$ also converges to $u + \frac{1}{2} A |\cdot - x_0|^2$ uniformly on $\Omega_{\epsilon_0}^C$ as the latter function is continuous. It is easy to verify with the definition of convexity that uniform limits of convex functions are convex again.
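For instance, whenever the segment $[x,y]$ is contained in $\Omega_{\epsilon_0}^C$ and $\lambda \in [0,1]$, one can pass to the limit $\epsilon \rightarrow 0$ in
\begin{equation*}
f_\epsilon( \lambda x + (1-\lambda) y ) \leq \lambda f_\epsilon(x) + (1-\lambda) f_\epsilon(y),
\end{equation*}
which preserves the defining inequality of convexity for the limit function $u + \frac{1}{2} A |\cdot - x_0|^2$.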
\end{proof}
\section{Emptiness of the Singular Nodal Set}
In this section we study the gradient $\nabla u $ at points where $u$ vanishes. Whenever we refer to the gradient, we always mean its continuous representative, cf. Remark \ref{rem:ceins}.
We show that the set $\{ u = \nabla u = 0 \}$, which we refer to as the \emph{singular nodal set}, is empty. It is vital for the argument to look at the behavior of the Hessian at points that lie in the singular nodal set. We have to distinguish between $1-$Lebesgue points of the Hessian and non-$1-$Lebesgue points of the Hessian. The $1-$Lebesgue points can be discussed using blow-up arguments. For non-$1-$Lebesgue points, one will profit from the characterization in Remark \ref{rem:atoom}.
The blow-up arguments in this section are based on the following version of Aleksandrov's theorem, which allows for a second order Taylor-type expansion.
\begin{lemma}[A version of Aleksandrov's theorem in $\mathbb{R}^n$]\label{lem:aleks}\label{rem:quasialeks}
Let $\Omega \subset \mathbb{R}^n$ be bounded and $f \in W^{2,2}(\Omega)\cap C^1(\Omega)$ be $A$-semiconvex for some $A \in \mathbb{R}$. If $x_0\in \Omega$ is a $1-$Lebesgue point of $D^2f$, then
\begin{equation}\label{eq:aleks}
f(x) - f(x_0) - \nabla f(x_0) (x-x_0) - \frac{1}{2}(x-x_0)^T (D^2f)^*(x_0) (x-x_0) = o(|x-x_0|^2 ).
\end{equation}
\end{lemma}
\begin{proof}
By considering $\widetilde{f} := f + \frac{1}{2}A|\cdot-x_0|^2$ we can assume without loss of generality that $f$ is convex. Note that for convex functions \cite[Thm.2,Sect.6.3]{EvGar} yields that $D^2f = (\mu_{i,j})_{i,j=1,...,n}$ for signed Radon measures $\mu_{i,j}$ in the sense of distributions. Hence one can also decompose the measures in their absolutely continuous and singular parts, i.e. $D^2f = [D^2f]_{ac} + [D^2f]_s$. In our case $[D^2f]_s = 0$ because of the additional regularity assumption that $f \in W^{2,2}(\Omega)$. Moreover $[D^2f]_{ac} = D^2 f \cdot \lambda$, where $\lambda$ denotes the $n$-dimensional Lebesgue measure.
In \cite[Thm.1,Sect.6.4]{EvGar} a proof of the classical Aleksandrov theorem is given; examining Part $1$ of that proof, one sees that \eqref{eq:aleks} holds for each convex $f$ and each point $x_0$ such that
\begin{enumerate}
\item $\nabla f(x_0)$ exists and $x_0$ is a $1-$Lebesgue point of $\nabla f$.
\item $x_0$ is a $1-$Lebesgue point of the Radon-Nikodym density of $[D^2f]_{ac}$
\item $x_0$ satisfies $\lim_{r\rightarrow 0 } \frac{1}{r^n} [D^2f]_s(B_r(x_0)) = 0 $
\end{enumerate}
Since $f$ was assumed to be $C^1$, each point $x_0$ trivially satisfies $(1)$. As we mentioned above $[D^2f]_s = 0$, so each point $x_0$ automatically satisfies $(3)$. Hence the proof works for each point $x_0$ satisfying $(2)$, i.e. each $1-$Lebesgue point of $D^2f$. \qedhere
\end{proof}
\begin{remark}\label{rem:tayli}
For $f$ as in the statement of Lemma \ref{lem:aleks}, Equation \eqref{eq:aleks} can be seen as a Taylor expansion around each $1$-Lebesgue point of $D^2f$. In particular note that each $1-$Lebesgue point $x_0$ of $D^2f$ such that $\nabla f (x_0) = 0 $ and $(D^2f)^*(x_0)$ is positive definite is a strict local minimum of $f$.
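Indeed, if $\lambda_{\min} > 0$ denotes the smallest eigenvalue of $(D^2f)^*(x_0)$ and $\nabla f(x_0) = 0$, then \eqref{eq:aleks} gives
\begin{equation*}
f(x) \geq f(x_0) + \frac{\lambda_{\min}}{2} |x-x_0|^2 + o(|x-x_0|^2) > f(x_0)
\end{equation*}
for all $x \neq x_0$ sufficiently close to $x_0$.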
\end{remark}
\begin{lemma}[Hessian on Singular Nodal Set - I]
Let $u \in \mathcal{A}(u_0)$ be a minimizer and $x_0 \in \Omega_{\epsilon_0}^C$ be a Lebesgue point of $D^2u$ such that $u(x_0) = \nabla u(x_0) = 0$. Then either $(D^2u)^*(x_0)= 0 $ or $(D^2 u)^*(x_0)$ is positive definite.
\end{lemma}
\begin{proof}
First define
\begin{equation*}
C := \limsup_{\epsilon \rightarrow 0+ } \frac{|\{ 0 < u < \epsilon \}| }{\epsilon},
\end{equation*}
which is finite because of Corollary \ref{cor:posdir}. By Lemma \ref{lem:semicon} and Lemma \ref{rem:quasialeks} we get the following blow-up profile at $x_0$:
\begin{equation}\label{eq:425}
\frac{u(x_0 + \sqrt{\epsilon} w)}{\epsilon} \rightarrow \frac{1}{2} w^T (D^2u)^*(x_0) w ,
\end{equation}
locally uniformly as $\epsilon \rightarrow 0$.
Now fix $ \tau > 0$ and observe that
\begin{equation*}
C \geq \limsup_{\epsilon \rightarrow 0+ } \frac{|\{ 0 < u < \epsilon \} \cap B_{\tau \sqrt{\epsilon}}(x_0) | }{ \epsilon}.
\end{equation*}
Using scaling properties of the Lebesgue measure and \eqref{eq:425} we get
\begin{align*}
C & \geq \limsup_{\epsilon \rightarrow 0+ } \frac{|\{ 0 < u < \epsilon \} \cap B_{\tau \sqrt{\epsilon}}(x_0) | }{ \epsilon}
\\ & = \limsup_{\epsilon \rightarrow 0 } \frac{1}{\epsilon} \left\vert \left\lbrace \sqrt{\epsilon}w : w \in B_{\tau}(0) \; \textrm{s.t.} \; 0 < \frac{u(x_0 + \sqrt{\epsilon} w)}{\epsilon} < 1 \right\rbrace \right\vert
\\ & = \limsup_{\epsilon \rightarrow 0 } \left\vert \left\lbrace w \in B_{\tau}(0) : 0 < \frac{u(x_0 + \sqrt{\epsilon} w)}{\epsilon} < 1 \right\rbrace \right\vert
\\ & \geq \left\vert \left\lbrace w \in B_{\tau}(0) : \frac{1}{4} < \frac{1}{2} w^T (D^2u)^*(x_0) w < \frac{1}{2} \right\rbrace \right\vert ,
\end{align*}
where the last step can be carried out because the convergence in \eqref{eq:425} is uniform in $B_\tau(0)$.
Now let $\lambda_1 , \lambda_2 $ be the eigenvalues of $\frac{1}{2}(D^2u)^*(x_0)$. Since $(D^2u)^*$ is symmetric, we can use an orthogonal transformation to obtain
\begin{equation*}
C \geq \left\vert \left\lbrace w \in B_{\tau}(0) : \frac{1}{4} < \lambda_1 w_1^2 + \lambda_2 w_2^2 < \frac{1}{2} \right\rbrace \right\vert.
\end{equation*}
Since $\tau > 0 $ was arbitrary we can let $\tau \rightarrow \infty$ to find
\begin{equation}\label{eq:measres}
C \geq \left\vert \left\lbrace w \in \mathbb{R}^2 : \frac{1}{4} < \lambda_1 w_1^2 + \lambda_2 w_2^2 < \frac{1}{2} \right\rbrace \right\vert.
\end{equation}
Recall moreover from Corollary \ref{lem:corsubham} that
\begin{equation}\label{eq:laplgro}
0 \leq (\Delta u)^* =\mathrm{tr}((D^2u)^*) = 2( \lambda_1 + \lambda_2) .
\end{equation}
Now we distinguish cases to show that $\lambda_1 = \lambda_2 = 0$ or $\lambda_1,\lambda_2 > 0$. Assume that none of the two cases apply. One out of the two eigenvalues has to be positive because of \eqref{eq:laplgro} and the other one has to be zero or negative. Without loss of generality $\lambda_1 > 0 $.
If $\lambda_2$ is negative one can observe that if $w_1 > 0$
\begin{equation*}
\frac{1}{4} < \lambda_1 w_1^2 + \lambda_2 w_2^2 < \frac{1}{2} \quad \Leftrightarrow \frac{1}{\sqrt{\lambda_1}}\sqrt{\frac{1}{4} + |\lambda_2| w_2^2} < w_1 < \frac{1}{\sqrt{\lambda_1}}\sqrt{\frac{1}{2} + |\lambda_2| w_2^2}.
\end{equation*}
Therefore, using that for positive $a,b$ one has $a-b = \frac{a^2-b^2}{a+b}$, \eqref{eq:measres} yields
\begin{align*}
C & \geq \int_0^\infty \frac{1}{\sqrt{\lambda_1}} \left( \sqrt{\frac{1}{2} + |\lambda_2| w_2^2} - \sqrt{\frac{1}{4} + |\lambda_2| w_2^2} \right) \; \mathrm{d}w_2 \\ & = \frac{1}{\sqrt{\lambda_1}} \int_0^\infty \frac{1}{4} \frac{1}{ \sqrt{\frac{1}{2} + |\lambda_2| w_2^2} + \sqrt{\frac{1}{4} + |\lambda_2| w_2^2} } \; \mathrm{d}w_2 \\ & \geq \frac{1}{\sqrt{\lambda_1}} \int_0^\infty \frac{1}{8 } \frac{1}{\sqrt{\frac{1}{2} + |\lambda_2| w_2^2 }}\;\mathrm{d}w_2 = \infty,
\end{align*}
a contradiction. If $\lambda_1 > 0 $ and $\lambda_2 = 0 $ then the set on the right hand side of \eqref{eq:measres} is $\left\lbrace w \in \mathbb{R}^2 : \frac{1}{2\sqrt{\lambda_1}} < |w_1| < \frac{1}{\sqrt{2\lambda_1}} \right\rbrace$, the union of two vertical strips, which has infinite Lebesgue measure; again a contradiction. Hence $\lambda_1 = \lambda_2 = 0 $ or $\lambda_1,\lambda_2 >0$. Since $(D^2u)^*(x_0)$ is symmetric and therefore diagonalizable we obtain that $(D^2u)^*(x_0) = 0 $ or $(D^2u)^*(x_0)$ is positive definite.
\end{proof}
Next we exclude that $(D^2u)^*(x_0)$ is positive definite using a variational argument.
\begin{lemma}[Hessian on Singular Nodal Set - II] \label{lem:Hesssing}
Let $u \in \mathcal{A}(u_0)$ be a minimizer and $x_0 \in \Omega_{\epsilon_0}^C$ be a Lebesgue point of $D^2u$ such that $u(x_0) = \nabla u(x_0) = 0$. Then $(D^2u)^*(x_0)= 0 $.
\end{lemma}
\begin{proof}
By the previous lemma, it remains to show that $(D^2u)^*(x_0)$ is not positive definite. To do so, we suppose the opposite, i.e. that $(D^2u)^*(x_0)$ is positive definite. By Lemma \ref{rem:quasialeks} and Remark \ref{rem:tayli}, $x_0$ is a strict local minimum of $u$ and $u$ grows at most quadratically away from $x_0$, i.e. there exist $r_0> 0 $ and $\beta > 0 $ such that $0 < u(x) < \beta |x-x_0|^2 $ for each $x \in B_{r_0}(x_0) \setminus \{x_0 \}$. Let $r \in (0, r_0)$ be arbitrary. Now choose $\psi \in C_0^\infty(B_r(x_0))$ such that $ -1 \leq \psi \leq 0$ and $\psi \equiv - 1$ in $B_\frac{r}{2}(x_0)$. As for each $\epsilon > 0 $ the function $u + \epsilon \psi $ is admissible, one has
\begin{align*}
\mathcal{E}(u) & \leq \mathcal{E}(u + \epsilon \psi) \leq \int_\Omega ( \Delta u )^2 \; \mathrm{d}x + 2 \epsilon \int_\Omega \Delta u \Delta \psi \; \mathrm{d}x \\ & \quad + \epsilon^2 \int_\Omega (\Delta \psi)^2 \; \mathrm{d}x + |\{ u > 0 \} | - |\{ x \in B_\frac{r}{2}(x_0) : 0 < u(x) < \epsilon \} | \\
& = \mathcal{E}(u) - \epsilon \int_\Omega \psi \; \mathrm{d}\mu + \epsilon^2 \int_\Omega (\Delta \psi)^2 \;\mathrm{d}x - |\{ x \in B_\frac{r}{2}(x_0) : u(x) < \epsilon \} |,
\end{align*}
where we used \eqref{eq:bihammeas} and the strict local minimum property of $x_0$ in the last step.
Note that
\begin{equation*}
| \{ x \in B_\frac{r}{2}(x_0) : u(x) < \epsilon \} | \geq | \{ x \in B_\frac{r}{2}(x_0) : \beta |x-x_0|^2 < \epsilon \} | = \min \left\lbrace \pi \frac{\epsilon}{\beta}, \pi \frac{r^2}{4} \right\rbrace.
\end{equation*}
We can compute for each $\epsilon < \frac{\beta r^2}{4}$
\begin{align*}
\mathcal{E}(u) & \leq \mathcal{E}(u) - \epsilon \int_\Omega \psi d\mu + \epsilon^2 \int_\Omega (\Delta \psi)^2 - \epsilon \frac{\pi}{\beta}
\\ & \leq \mathcal{E}(u) + \epsilon \left(\mu(B_r(x_0)) - \frac{\pi}{\beta} \right) + \epsilon^2 \int_\Omega (\Delta \psi)^2.
\end{align*}
Rearranging and dividing by $\epsilon$ we obtain
\begin{equation*}
- \mu(B_r(x_0)) + \frac{\pi}{\beta} \leq \epsilon \int_\Omega (\Delta \psi)^2 \;\mathrm{d}x .
\end{equation*}
Letting first $\epsilon \rightarrow 0 $ and then $r \rightarrow 0 $ we find
\begin{equation*}
\frac{\pi}{\beta} \leq \mu(\{x_0\}) = 0,
\end{equation*}
where we used in the last step that $x_0$ is not an atom of $\mu$ by Remark \ref{rem:atoom}. Since $\beta > 0$, this is the desired contradiction.
\end{proof}
\begin{lemma}[Hessian on Singular Nodal Set - III] \label{lem:zerohaus}
Let $u\in \mathcal{A}(u_0)$ be a minimizer. Then $\{ u = \nabla u = 0 \}$ does not contain any $1-$Lebesgue points of $D^2u$. In particular $\{u = \nabla u = 0 \}$ is of zero Hausdorff dimension and each $x_0 \in \{ u = \nabla u = 0 \} $ satisfies $(\Delta u)^*(x_0) = \infty$.
\end{lemma}
\begin{proof}
Assume that $\{ u = \nabla u = 0 \}$ contains a Lebesgue point $x_0$ of $D^2 u$. Then, according to the previous Lemma, $(D^2u)^*(x_0) = 0$. This implies in particular that $(\Delta u)^*(x_0) = 0 $. Now note that by Corollary \ref{cor:suubhaam} $(\Delta u)^*$ is a nonnegative superharmonic function. Nonnegativity of $(\Delta u)^*$ implies that $x_0$ is a point where $(\Delta u)^*$ attains its global minimum in $\Omega$, namely zero. By the strong maximum principle it follows that $(\Delta u)^* \equiv 0 $, which would however imply that $u$ is harmonic and hence positive since its boundary data $(u_0)_{\mid_{\partial \Omega}}$ are strictly positive. Thus $\{ u = \nabla u = 0 \}= \emptyset$, contradicting the existence of $x_0$. The first sentence of the statement follows. The second sentence of the statement follows immediately from Corollary \ref{cor:minregu} and Remark \ref{rem:atoom}.
\end{proof}
\begin{lemma}[Singular Nodal Points are Isolated]
Suppose that $x_0 \in \{ u = \nabla u = 0 \}$. Then there exists $r > 0 $ such that $u$ is convex and nonnegative on $B_r(x_0)$. Moreover, $B_r(x_0) \cap \{ u = 0 \} = \{ x_0 \} $.
\end{lemma}
\begin{proof} First we show convexity. As an intermediate step we show that there exists $r >0 $ such that for each $1-$Lebesgue point $x$ of $D^2u $ in $B_r(x_0)$ the matrix $(D^2u)^*(x)$ is positive definite. Note that by Corollary \ref{cor:lebgem} there exists $M > 0 $ such that for each $1-$Lebesgue point $x$ of $D^2 u$ in $\Omega_{\epsilon_0}^C$ one has
\begin{equation}\label{eq:difeigenval}
| ( \partial^2_{x_1x_1}u )^* - ( \partial^2_{x_2x_2} u )^* | \leq M
\end{equation}
and
\begin{equation*}
|( \partial^2_{x_1x_2} u )^*| \leq M
\end{equation*}
As $(\Delta u )^*$ is superharmonic by Corollary \ref{cor:suubhaam}, \cite[Theorem 3.1.3]{Armitage} yields that $(\Delta u)^*(x_0) = \liminf_{x \rightarrow x_0} (\Delta u)^*(x)$, which equals infinity by the previous lemma. Hence one can find $\overline{r} > 0 $ such that $(\Delta u)^* > 5 M $ on $B_{\overline{r}}(x_0)$. If $x \in B_{\overline{r}}(x_0)$ is now a $1-$Lebesgue point of $D^2 u$ this implies that $(\partial^2_{x_1x_1}u )^*(x) + ( \partial^2_{x_2x_2} u )^*(x) \geq 5M$. Together with \eqref{eq:difeigenval} we obtain that $(\partial^2_{x_ix_i} u)^*(x) \geq 2M $ for all $i = 1,2$. Now we can show using the principal minor criterion that $(D^2u)^*(x)$ is positive definite. Indeed $(\partial^2_{x_1x_1} u)^*(x) \geq 2M > 0$ and $\det (D^2u)^*(x) = (\partial^2_{x_1x_1} u)^*(x)(\partial^2_{x_2x_2} u)^*(x) -(\partial^2_{x_1x_2} u)^*(x)^2 \geq 4M^2 - M^2 > 0 $. All in all, $(D^2u)^*$ is positive definite at each $1-$Lebesgue point of $D^2u$ in $B_{\overline{r}}(x_0)$, and hence $D^2u$ is positive definite almost everywhere on $B_{\overline{r}}(x_0)$. We will show next that this implies convexity of $u$ on a smaller ball. For $\epsilon \in (0, \frac{\overline{r}}{2})$ let $\phi_\epsilon$ be the standard mollifier with support in $B_\epsilon(0)$. Note that $D^2(u * \phi_\epsilon) = D^2u * \phi_\epsilon $ on $B_\frac{\overline{r}}{2}(x_0)$. As an easy computation shows, $(D^2u * \phi_\epsilon)(x)$ is positive definite for each $x \in B_\frac{\overline{r}}{2}(x_0)$. Therefore $u* \phi_\epsilon $ is convex on $B_\frac{\overline{r}}{2}(x_0)$. Finally, $u$ is convex on $B_\frac{\overline{r}}{2}(x_0)$ as uniform limit of convex functions. Choosing $r := \frac{\overline{r}}{2}$ implies the desired convexity. Convexity also implies that for each $x,y \in B_r(x_0)$ one has
\begin{equation*}
u(x) - u(y) \geq \nabla u(y) \cdot (x-y) .
\end{equation*}
Plugging in $y = x_0$, we obtain $u(x) \geq 0 $ which shows the desired nonnegativity on $B_r(x_0)$. It remains to show that $B_r(x_0) \cap \{ u = 0 \} = \{ x_0 \}$. Assume that there is a point $x_1 \in B_r(x_0)$ such that $u(x_1)=0$. By convexity and nonnegativity we obtain for each $\lambda \in (0,1)$ that
\begin{equation*}
0 \leq u(\lambda x_1 + (1-\lambda) x_0 ) \leq \lambda u(x_1) + (1- \lambda) u(x_0) = 0 .
\end{equation*}
Hence $u_{\mid_{\overline{x_0x_1}}} \equiv 0 $, where $\overline{x_0x_1}$ denotes the line segment connecting $x_0$ and $x_1$. Now this line segment lies completely in $B_r(x_0)$ and because of the nonnegativity, each point in $\overline{x_0x_1}$ is a local minimum of $u$. This yields that $\nabla u $ vanishes on this line segment and hence $\overline{x_0x_1} \subset \{ u = \nabla u = 0 \}$. This contradicts Lemma \ref{lem:zerohaus}, as $\{u = \nabla u = 0 \} $ must have zero Hausdorff dimension. The claim follows.
\end{proof}
\begin{cor}[Emptiness of the Singular Nodal Set] \label{cor:nosing}
Let $u \in \mathcal{A}(u_0) $ be a minimizer. Then $\{ u = \nabla u = 0 \} = \emptyset $.
\end{cor}
\begin{proof}
Suppose that there exists some $x_0 \in \{ u = \nabla u = 0 \}$. Recall that then $(\Delta u)^*(x_0) = \infty$ by Lemma \ref{lem:zerohaus}. Also, by the previous Lemma, there exists $r > 0$ such that $\{u = 0 \} \cap B_r(x_0)= \{ x_0 \}$ and $u(x) > 0 $ for each $x \in B_r(x_0)\setminus \{ x_0 \}$. By possibly choosing a smaller radius $r$ we can achieve that $B_r(x_0) \subset \Omega_{\epsilon_0}^C$. Now define $g_1 := u_{\mid_{ \partial B_\frac{r}{2}(x_0)}}$ and $g_2 := \nabla u_{\mid_{ \partial B_\frac{r}{2}(x_0)}}$. Note that $g_1,g_2 \in C^\infty( \partial B_\frac{r}{2}(x_0))$ by Lemma \ref{lem:biham}. By \cite[Theorem 2.19]{Sweers} one obtains that there exists a unique solution $h \in C^\infty (\overline{B_\frac{r}{2}(x_0)})$ such that
\begin{equation*}
\begin{cases}
\Delta^2 h = 0 & \textrm{in } B_\frac{r}{2}(x_0), \\
h = g_1 & \textrm{on } \partial B_\frac{r}{2}(x_0), \\
\nabla h = g_2 & \textrm{on } \partial B_\frac{r}{2}(x_0).
\end{cases}
\end{equation*}
Moreover, as a standard variational argument shows, $h$ is uniquely determined by
\begin{align*}
\int_{B_\frac{r}{2}(x_0) } (\Delta h)^2 \; \mathrm{d}x = \inf \Bigg\lbrace \int_{B_\frac{r}{2}(x_0) } (\Delta w)^2 \; \mathrm{d}x : w \in & W^{2,2}(B_\frac{r}{2}(x_0) ) \\ & \; \textrm{s.t.} \; w_{\mid_{\partial B_\frac{r}{2}(x_0) }} \equiv g_1, \nabla w_{\mid_{\partial B_\frac{r}{2}(x_0) }} \equiv g_2 \Bigg\rbrace,
\end{align*}
where '$\equiv$' here means equality in the trace sense. In particular one has
\begin{equation}\label{eq:4.25}
\int_{B_\frac{r}{2}(x_0) } (\Delta h)^2 \; \mathrm{d}x \leq \int_{B_\frac{r}{2}(x_0) } (\Delta u)^2 \; \mathrm{d}x
\end{equation}
and equality holds if and only if $h \equiv u$, by strict convexity of the energy. Now define
\begin{equation*}
\widetilde{u}(x) := \begin{cases} u(x) & x \not \in B_\frac{r}{2}(x_0), \\ h(x) & x \in B_\frac{r}{2}(x_0). \end{cases}
\end{equation*}
Since $\widetilde{u}$ has the right regularity and the same boundary data as $u$ one obtains that $\widetilde{u} \in \mathcal{A}(u_0)$. Therefore one can compute with \eqref{eq:4.25}
\begin{align*}
\mathcal{E}(u) & \leq \mathcal{E}(\widetilde{u}) = \int_{\Omega \setminus B_\frac{r}{2}(x_0) } (\Delta u )^2 \; \mathrm{d}x + \int_{B_\frac{r}{2}(x_0) } (\Delta h)^2 \; \mathrm{d}x \\ & \quad \quad \quad \quad \quad \quad + |\{ u >0 \} \cap \Omega \setminus B_\frac{r}{2}(x_0) | + | \{ h > 0 \} \cap B_\frac{r}{2}(x_0) |
\\ & \leq \int_{\Omega \setminus B_\frac{r}{2}(x_0) } (\Delta u )^2 \; \mathrm{d}x + \int_{B_\frac{r}{2}(x_0) } (\Delta u)^2 \; \mathrm{d}x + |\{ u >0 \} \cap \Omega \setminus B_\frac{r}{2}(x_0) | + |B_\frac{r}{2}(x_0) |.
\end{align*}
Now note that $|B_\frac{r}{2}(x_0) | = |B_\frac{r}{2}(x_0) \setminus \{ x_0 \} | = |\{u >0 \} \cap B_\frac{r}{2}(x_0) | $, as we explained in the beginning of the proof. Therefore we obtain
\begin{equation*}
\mathcal{E}(u) \leq \mathcal{E}(\widetilde{u}) \leq \int_\Omega (\Delta u)^2 \; \mathrm{d}x + |\{ u >0 \}| = \mathcal{E}(u).
\end{equation*}
This means in particular that all estimates used on the way have to hold with equality. Since we used estimate \eqref{eq:4.25}, equality holds in \eqref{eq:4.25} and from this we can infer (see discussion below \eqref{eq:4.25}) that $h = u$. In particular $u \in C^\infty (\overline{B_\frac{r}{2}(x_0)})$. This however is a contradiction to $(\Delta u)^*(x_0) = \infty$ and the claim follows.
\end{proof}
\section{Nodal Set and Biharmonic Measure}
In this section we are finally able to understand the regularity of the free boundary $\{ u = 0 \}$ and, as a byproduct, the measure $\mu$ of \eqref{eq:bihammeas}. The fact that $\nabla u $ does not vanish on $\{u = 0 \}$ and that $u \in C^1(\Omega)$ already makes $\{u = 0\}$ a $C^1$-manifold. By deriving \eqref{eq:bihame} for $u$, we can give a rigorous version of the formal statement \eqref{eq:formmeas}. Afterwards we use this equation to obtain $C^2$-regularity for $u$ and, as a result, the same additional regularity for $\{ u = 0 \}$.
\begin{lemma}[The Measure-Theoretic Boundary]\label{lem:measthebou}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then
\begin{equation}\label{eq:438}
\partial^* \{ u > 0 \} = \{ u = 0 \} .
\end{equation}
\end{lemma}
\begin{proof}
For the '$\supset$' inclusion in \eqref{eq:438} note that $x_0 \in \{ u = 0 \}$ implies by Corollary \ref{cor:nosing} that $\nabla u(x_0) \neq 0 $. Moreover one has
\begin{align*}
\frac{|\{u > 0\} \cap B_r(x_0) | }{|B_r(x_0)|} & = \frac{1}{|B_1(0)| r^2}\left\vert \left\lbrace rx : x \in B_1(0) \; \textrm{s.t} \; u(x_0 + rx) > 0 \right\rbrace \right\vert
\\ & = \frac{1}{|B_1(0)|}\left\vert \left\lbrace x \in B_1(0) : u(x_0 + rx) > 0 \right\rbrace \right\vert
\\ & = \frac{1}{|B_1(0)|}\left\vert \left\lbrace x \in B_1(0) : \frac{u(x_0 + rx)}{r} > 0 \right\rbrace \right\vert.
\end{align*}
Since $\frac{u(x_0 + rx)}{r}$ converges to $\nabla u(x_0) \cdot x $ uniformly in $x \in B_1(0)$ as $r \rightarrow 0$, we get by Fatou's lemma
\begin{align*}
\overline{\theta}(\{u > 0 \} ,x_0) & = \limsup_{r\rightarrow 0 } \frac{|\{u > 0\} \cap B_r(x_0) | }{|B_r(x_0)|} \\ & \geq \frac{1}{|B_1(0)|}\left\vert \left\lbrace x \in B_1(0) : \nabla u(x_0) \cdot x > 0 \right\rbrace \right\vert = \frac{1}{2},
\end{align*}
as $\{ x : \nabla u(x_0) \cdot x > 0 \}$ defines a half plane through the origin.
Similarly one shows $\overline{\theta}(\{ u \leq 0 \},x_0) \geq \frac{1}{2} > 0 $ and hence the inclusion is shown.
For the remaining inclusion take $x_0 \in \partial^* \{ u > 0 \} $. If $u(x_0) > 0$ then there exists $r_0 > 0 $ such that $ u > 0$ on $B_{r_0} (x_0)$ and this implies by definition of $\overline{\theta}$ that $\overline{\theta} (\{u \leq 0 \} ,x_0 ) = 0$. Similarly one shows that $u(x_0)< 0 $ implies that $\overline{\theta} ( \{ u > 0 \} , x_0 ) = 0 $. Hence $u(x_0) = 0 $ and the claim follows.
\end{proof}
We will now characterize the measure found in \eqref{eq:bihammeas} using an inner variation technique that has led to rich insights in \cite{Valdinoci}.
\begin{lemma}[Noether Equation] \label{lem:noether}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then
\begin{equation}\label{eq:finper}
\int_{\Omega_{\epsilon_0}^C} \chi_{\{u> 0 \}} \mathrm{div}(\phi)\;\mathrm{d}x = - \int_{\Omega_{\epsilon_0}^C} \nabla u \cdot \phi \; \mathrm{d}\mu \quad \forall \phi \in C_0^\infty(\Omega_{\epsilon_0}^C, \mathbb{R}^2),
\end{equation}
where $\mu$ is the biharmonic measure from \eqref{eq:bihammeas}.
\end{lemma}
\begin{proof}
To lighten the notation, we omit the `$\cdot$' indicating the dot product throughout this proof.
From \cite[Lemma 4.3]{Valdinoci} follows that for each $\phi \in C_0^\infty(\Omega; \mathbb{R}^2)$ one has
\begin{equation}\label{eq:515}
2 \int_\Omega \Delta u \sum_{m = 1}^{2} \left( 2 \nabla (\partial_m u) \cdot \nabla \phi^m + \partial_m u\Delta \phi^m \right) \;\mathrm{d}x - \int_\Omega ( ( \Delta u)^2 + \chi_{\{ u > 0 \} } ) \mathrm{div}(\phi) \;\mathrm{d}x = 0 .
\end{equation}
Fix $\phi \in C_0^\infty(\Omega_{\epsilon_0}^C; \mathbb{R}^2)$. Then there is $\beta \in (0,1)$ such that $\nabla u\cdot \phi \in W_0^{2,2-\beta}(\Omega_{\epsilon_0}^C)$ by Corollary \ref{cor:minregu}. Observe that $\nabla u \cdot \phi$ is a valid test function for \eqref{eq:bihammeas} (cf. Lemma \ref{lem:bihmmeasmore}). Starting from \eqref{eq:515} and Corollary \ref{cor:minregu}, we can use \eqref{eq:bihammeas} to find
\begin{align*}
\int_{\Omega_{\epsilon_0}^C} \chi_{ \{u > 0 \}}& \mathrm{div}(\phi) \;\mathrm{d}x =2 \int_{\Omega_{\epsilon_0}^C} \Delta u \sum_{m = 1}^{2} \left( 2 \nabla (\partial_m u) \nabla \phi^m + \partial_m u\Delta \phi^m \right) \;\mathrm{d}x \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad - \int_{\Omega_{\epsilon_0}^C} ( \Delta u)^2 \mathrm{div}(\phi) \;\mathrm{d}x
\\ & = 2 \int_{\Omega_{\epsilon_0}^C} \Delta u \sum_{m = 1}^{2} \left(
\Delta (\partial_m u \phi^m) - \phi^m \Delta (\partial_m u) \right) \;\mathrm{d}x - \int_{\Omega_{\epsilon_0}^C} (\Delta u )^2 \mathrm{div}(\phi) \;\mathrm{d}x
\\ & =2 \int_{\Omega_{\epsilon_0}^C} \Delta u \Delta ( \nabla u \phi) \;\mathrm{d}x - \int_{\Omega_{\epsilon_0}^C} \left( \phi \nabla (\Delta u)^2 + (\Delta u)^2 \mathrm{div}(\phi) \right) \;\mathrm{d}x
\\ & = - \int_{\Omega_{\epsilon_0}^C} \nabla u \phi \; \mathrm{d}\mu - \int_{\Omega_{\epsilon_0}^C} \mathrm{div} ( (\Delta u)^2 \phi ) \;\mathrm{d}x .
\end{align*}
The second integral vanishes by the Gauss divergence theorem and the claim follows.
\end{proof}
\begin{cor}\label{cor:finhau}
Let $u \in \mathcal{A}(u_0) $ be a minimizer. Then $\{ u > 0 \} $ has finite perimeter in $\Omega$ and $\mathcal{H}^1(\{ u = 0 \} ) < \infty.$
\end{cor}
\begin{proof}
We first show that $\{u >0 \}$ has finite perimeter in $\Omega_{\epsilon_0}^C$. Observe that by \eqref{eq:finper} one has for each $\phi \in C_0^\infty ( \Omega_{\epsilon_0}^C ; \mathbb{R}^2) $ such that $\sup_{\Omega_{\epsilon_0}^C} |\phi| \leq 1$,
\begin{align*}
\int_{\Omega_{\epsilon_0}^C } \chi_{\{u > 0\} } \mathrm{div} ( \phi) \;\mathrm{d}x = -\int_{\Omega_{\epsilon_0}^C} \nabla u \cdot \phi \; \mathrm{d}\mu \leq \mu(\Omega)\sup_{x \in \Omega_{\epsilon_0}^C }|\nabla u (x)| .
\end{align*}
The quantity on the right hand side is finite since by Corollary \ref{cor:minregu} there is $\beta \in (0,1)$ such that $\nabla u \in W^{2,2-\beta}(\Omega_{\epsilon_0}^C) \subset C(\overline{\Omega_{\epsilon_0}^C})$. By \cite[Thm.1(i), Sect.5.9]{EvGar} we conclude that $\mathcal{H}^1(\partial^* \{ u > 0 \} \cap \Omega_{\epsilon_0}^C) < \infty$.
By Lemma \ref{lem:measthebou} we have $\partial^*\{ u > 0 \} = \{u = 0 \} \subset \Omega_{\epsilon_0}^C$. Therefore, $\mathcal{H}^1( \Omega \cap \partial^*\{ u >0 \} ) < \infty$ and by \cite[Thm.1, Sect.5.11]{EvGar} we obtain that $\{u > 0 \} $ has finite perimeter in $\Omega$. By Lemma \ref{lem:measthebou} we conclude
$\infty > \mathcal{H}^1( \partial^* \{ u > 0 \} ) = \mathcal{H}^1 ( \{ u = 0 \} )$. \qedhere
\end{proof}
\begin{lemma}[Biharmonic Measure and Hausdorff Measure] \label{lem:bihammeasreg}
Let $ A \subset \Omega$ be a Borel set and $u \in \mathcal{A}(u_0)$ be a minimizer. Then
\begin{equation}\label{eq:542}
\mu ( A ) = \int_A \frac{1}{|\nabla u |} \; \mathrm{d}\mathcal{H}^1 \llcorner_{ \{ u = 0 \} }.
\end{equation}
\end{lemma}
\begin{proof}
We first prove the formula
\begin{equation}\label{eq:543}
\int_{\Omega_{\epsilon_0}^C} \phi |\nabla u |^2 d\mu = \int_{\{ u = 0\} } \phi |\nabla u| \; \mathrm{d}\mathcal{H}^1 \quad \forall \phi \in C_0(\Omega_{\epsilon_0}^C).
\end{equation}
By density, it suffices to prove the claim for $\phi \in C_0^\infty(\Omega_{\epsilon_0}^C). $
For $\epsilon >0 $, let $\rho_\epsilon$ be the standard mollifier and define $f_\epsilon := (\phi \nabla u )* \rho_\epsilon$. Now note that $f_\epsilon$ lies in $C_0^\infty(\Omega_{\epsilon_0}^C)$ for appropriately small $\epsilon > 0$ and $f_\epsilon$ converges uniformly to $ \phi \nabla u$. By \eqref{eq:finper} and the fact that $\{ u >0 \}$ has finite perimeter in $\Omega_{\epsilon_0}^C$ by Corollary \ref{cor:finhau} we obtain with \cite[Thm.1,Sect.5.9]{EvGar} that
\begin{align}
\int_{\Omega_{\epsilon_0}^C} \phi |\nabla u|^2 \; \mathrm{d}\mu & = \lim_{\epsilon \rightarrow 0 } \int_{\Omega_{\epsilon_0}^C} \nabla u \cdot f_\epsilon \; \mathrm{d}\mu \nonumber \\
& \label{eq:545} = - \lim_{\epsilon \rightarrow 0 } \int_{\Omega_{\epsilon_0}^C}
\chi_{\{ u >0 \}} \mathrm{div} (f_\epsilon) \;\mathrm{d}x
=- \lim_{\epsilon \rightarrow 0 } \int_{\partial^* \{ u > 0 \}} f_\epsilon \cdot \nu_{ \{ u > 0 \} }\; \mathrm{d}\mathcal{H}^1
\\ & = - \lim_{\epsilon \rightarrow 0 } \int_{ \{ u = 0 \}} f_\epsilon \cdot \nu_{ \{ u > 0 \} }\; \mathrm{d}\mathcal{H}^1, \nonumber
\end{align}
where $\nu_{\{ u > 0\}}$ denotes the measure theoretic unit outer normal to $\{ u >0 \}$, cf. \cite[Thm.1,Sect.5.9]{EvGar}. Since by Remark \ref{rem:manifold}, $\{ u = 0 \}$ is locally a $C^1$-regular level set one obtains immediately that $\nu_{\{ u > 0\}}(x) = \frac{-\nabla u(x)}{|\nabla u (x) |}$. Together with the fact that $\mathcal{H}^1(\{u = 0 \})< \infty$ by Corollary \ref{cor:finhau} we obtain \eqref{eq:543}. Since $\nabla u$ is a continuous function that does not vanish on $\{u = 0 \} \subset \Omega_{\epsilon_0}^C$ there also exists some $\epsilon >0 $ such that $\overline{ B_\epsilon(\{ u = 0 \} )} \subset \Omega_{\epsilon_0}^C$ and $\nabla u $ does not vanish on $B_\epsilon( \{ u = 0 \} )$. Fix $\eta \in C_0^\infty( B_\epsilon ( \{ u = 0 \}))$ arbitrarily such that $\eta \equiv 1$ on $\{u = 0 \}$. Now suppose that $\psi \in C_0(\Omega)$. Note that $\eta \equiv 1$ on $\mathrm{supp}(\mu)$ and $\frac{\psi}{|\nabla u|^2} \eta \in C_0(\Omega_{\epsilon_0}^C)$. Therefore one has by \eqref{eq:543}
\begin{align*}
\int \psi \; \mathrm{d}\mu = \int_{\Omega_{\epsilon_0}^C} \frac{\psi \eta}{|\nabla u|^2} |\nabla u |^2 \; \mathrm{d}\mu = \int_{\{u = 0\}} \frac{\psi \eta}{|\nabla u|^2} |\nabla u| \; \mathrm{d}\mathcal{H}^1 = \int_{ \{u = 0 \} } \frac{\psi}{|\nabla u|} \; \mathrm{d}\mathcal{H}^1.
\end{align*}
From this identity the claim follows by standard arguments in measure theory: both sides of \eqref{eq:542} define finite Radon measures on $\Omega$ that agree when tested against every $\psi \in C_0(\Omega)$, and hence they coincide on all Borel sets by the Riesz representation theorem.
\end{proof}
Having now characterized the measure $\mu$ explicitly, one can obtain classical regularity with the representation \eqref{eq:347}. The details will be discussed in Appendix \ref{sec:regunod}.
\begin{lemma} [$C^2$-Regularity, Proof in Appendix \ref{sec:regunod}] \label{lem:regunod}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then $ u \in C^2( \Omega)$.
\end{lemma}
\section{Proof of Theorem \ref{thm:1.1}}
\begin{proof}[Proof of Theorem \ref{thm:1.1}]
We first recall parts of the statement that have already been proved on the way: The $C^2$-regularity of $u$ and the property that $\nabla u \neq 0 $ whenever $u =0 $ follow from Remark \ref{rem:manifold} and Lemma \ref{lem:regunod}. The $W^{3,2-\beta}_{loc} $-regularity follows from Lemma \ref{lem:biham} and Corollary \ref{cor:minregu}.
By Corollary \ref{lem:corsubham} we can infer that $\Delta u \geq 0 $. We show now that $\{ u = 0 \}$ is a closed connected $C^2$-hypersurface. First $\{ u = 0 \}$ is a $C^2$-manifold as the zero level set of a $C^2$-function with nonvanishing gradient on $\{u = 0\}$. Note that $\{ u = 0 \}$ is orientable as $\nu = \frac{\nabla u}{|\nabla u |}$ defines a continuous normal vector field. Furthermore, each connected component of $\{ u = 0 \}$ is a connected, orientable $C^2$-manifold. Note that $\{ u= 0 \}$ has only finitely many connected components $(S_i)_{i = 1}^N$ since it is compact and each connected component is relatively open in $\{ u = 0 \}$.
We also claim that each connected component of $\{u = 0\}$ is compact. Indeed, connected components of topological spaces are closed in that space, cf. \cite[Exercise 1.6.1]{Lawson}, and closed subsets of compact sets are compact. All in all, each connected component of $\{u = 0 \}$ is a compact, orientable, connected $C^2$-manifold. By the Jordan-Brouwer separation theorem (see \cite{Lima}), we infer that for each $i \in \{ 1, ..., N \}$ the set $\mathbb{R}^2 \setminus S_i$ has two disjoint connected components, say $G_i$ and $\mathbb{R}^2 \setminus ( G_i \cup S_i) $, the boundary of both of which is $S_i$. We claim that one of these two components is a subset of $\Omega$. For if not, one can find an $x_1 \in G_i \setminus \Omega$ as well as $x_2 \in (\mathbb{R}^2 \setminus ( G_i \cup S_i)) \setminus \Omega$. One can then connect $x_1$ and $x_2$ with a continuous path lying in $\mathbb{R}^2 \setminus \Omega \subset \mathbb{R}^2 \setminus S_i $. This is a contradiction since $G_i$ and $\mathbb{R}^2 \setminus (G_i \cup S_i)$ are two different path components of $\mathbb{R}^2 \setminus S_i$. Without loss of generality $G_i$ is contained in $\Omega$.
Note that $G_i$ has positive distance to $\partial \Omega$: since $\overline{G_i} = G_i \cup S_i \subset \Omega$ is compact, $\inf_{\overline{G_i}} \mathrm{dist}(\cdot, \partial \Omega)$ is attained and hence positive.
Since $u$ is subharmonic in $\Omega$ by Corollary \ref{lem:corsubham} we get that either $u \equiv 0$ in $G_i$ or $ u< 0 $ in $G_i$ by the strong maximum principle for subharmonic functions. The first possibility is excluded since $\nabla u $ does not vanish on $\{ u = 0\}$ as we already showed.
We show now that $G_i\cap G_j = \emptyset$ for all $i \neq j$. Since $u< 0 $ in $G_i$ for all $i$, we get $S_j \cap G_i = \emptyset$ for all $j \neq i$. Therefore
\begin{equation*}
\partial (G_i \cap G_j) \subset (S_j \cap \overline{G_i}) \cup (S_i \cap \overline{G_j}) = (S_j \cap G_i) \cup (S_i \cap G_j) \cup (S_i \cap S_j) = \emptyset
\end{equation*}
for all $i \neq j$. This means that the open set $G_i \cap G_j$ has empty boundary, i.e. $\mathbb{R}^2$ is the disjoint union of $G_i \cap G_j$ and the interior of $(G_i \cap G_j)^c$. Since $\mathbb{R}^2$ is connected and $G_i \cap G_j$ is bounded, we obtain that $G_i \cap G_j = \emptyset$ for $i \neq j$. We show next that $\{ u< 0 \} = \bigcup_{i = 1}^N G_i$. Suppose that there is a point $\widetilde{x} \in \Omega \setminus \bigcup_{i = 1}^N G_i$ such that $u( \widetilde{x} ) < 0 $. Let $\widetilde{r}:= \sup\{ r > 0 : B_r( \widetilde{x} ) \subset \{ u < 0 \} \}$. Observe that $\widetilde{r}> 0 $ because of continuity of $u$. Note that $\overline{B_{\widetilde{r}}(\widetilde{x})} \subset \Omega$ because $ u > 0 $ on $\partial \Omega$.
Hence, $\overline{B_{\widetilde{r}}( \widetilde{x} )} $ touches some $S_j$ tangentially. Note also that $u < 0 $ in $B_{\widetilde{r}} ( \widetilde{x} )$ and $B_{\widetilde{r}} ( \widetilde{x} ) \cap G_j = \emptyset$ since $\widetilde{x} \in \mathbb{R}^2 \setminus( G_j \cup S_j)$ and $B_{\widetilde{r}} ( \widetilde{x} ) $ can only intersect one connected component of $\mathbb{R}^2 \setminus S_j$. Let $p \in S_j$ be a point where $\overline{B_{\widetilde{r}}( \widetilde{x} )} $ touches $S_j$. Now observe that $t \mapsto u(p + t \nabla u(p))$ is continuously differentiable in a neighborhood of $t = 0$ as $u \in C^1(\Omega)$. Therefore
\begin{equation*}
\frac{d}{dt}_{\mid_{t = 0 }} u(p +t \nabla u(p) ) = | \nabla u (p)|^2 > 0
\end{equation*}
and hence there is $t_0 > 0 $ such that $\frac{d}{dt} u(p +t \nabla u(p) ) \geq \frac{1}{2} | \nabla u(p)|^2$ for each $t \in (-t_0,t_0)$. In particular the fundamental theorem of calculus yields that
\begin{equation} \label{eq:positi}
u(p + t \nabla u(p) ) > 0 \quad \forall t \in (0,t_0).
\end{equation}
Since $B_{\widetilde{r}}(\widetilde{x})$ touches $S_j$ tangentially at $p$ the exterior unit normal of $B_{\widetilde{r}}(\widetilde{x})$ at $p$ is given by $\nu = \pm \frac{\nabla u}{|\nabla u|}$. In case that $\nu = + \frac{\nabla u}{|\nabla u|}$ the exterior unit normal coincides with the exterior unit normal of $G_j$. Since $G_j$ and $B_{\widetilde{r}}(\widetilde{x})$ both satisfy the interior ball condition (see \cite[Remark 4.3.8]{Han}), we can now force a small ball into $G_j \cap B_{\widetilde{r}}(\widetilde{x})$, which is a contradiction to the fact that $G_j \cap B_{\widetilde{r}}(\widetilde{x}) = \emptyset$. Therefore $\nu = - \frac{\nabla u}{|\nabla u|}$ and hence there is $t_1 > 0 $ such that $ p + t \nabla u (p)$ lies in $B_{\widetilde{r}} ( \widetilde{x})$ for each $t \in (0,t_1)$. Choosing $t := \frac{1}{2} \min\{ t_0, t_1 \}$ we obtain $p +t \nabla u(p) \in B_{\widetilde{r}}(\widetilde{x})$ and $ u( p + t \nabla u(p) ) > 0 $ according to \eqref{eq:positi}, which contradicts the choice of $B_{\widetilde{r}}(\widetilde{x})$. We have shown \eqref{eq:geb}.
Given this, we get the following chain of set inclusions:
\begin{equation*}
\{u = 0 \} = \bigcup_{i = 1}^N S_i = \bigcup_{i = 1}^N \partial G_i \subset \partial\{ u < 0 \} \subset \{ u= 0 \},
\end{equation*}
where we used the continuity of $u$ in the last step. We obtain that $\partial \{ u< 0 \} = \{ u = 0 \}$, which was also part of the statement. The property that $\{ u = 0\}$ has finite $1$-Hausdorff measure follows from Corollary \ref{cor:finhau}. The only statement that remains to show is \eqref{eq:bihame}. We first show \eqref{eq:bihame} for $\phi \in C_0^\infty(\Omega)$. By \eqref{eq:bihammeas} one has
\begin{equation*}
2\int_\Omega \Delta u \Delta \phi = -\int_\Omega \phi \; \mathrm{d}\mu \quad \forall \phi \in C_0^\infty(\Omega)
\end{equation*}
for a measure $\mu$ with $\mathrm{supp}(\mu)= \{ u= 0 \}$ which was examined more closely in Lemma \ref{lem:bihammeasreg}. From this lemma we can conclude that
\begin{align*}
\mu(A) & = \int_A \frac{1}{|\nabla u | } \; \mathrm{d}\mathcal{H}^1 \llcorner_{ \{u = 0 , \nabla u \neq 0 \} } = \int_{\{u = 0 \}} \chi_A \frac{1}{|\nabla u | } \; \mathrm{d}\mathcal{H}^1.
\end{align*}
Using this representation of $\mu$ we obtain \eqref{eq:bihame} for $\phi \in C_0^\infty(\Omega)$ and by density also for $ \phi \in W_0^{2,2}(\Omega)$. Now suppose that $\phi \in W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega)$. Choose $\eta \in C_0^\infty(\Omega_{\epsilon_0}^C)$ such that $0 \leq \eta \leq 1$ and $\eta \equiv 1$ in a neighborhood of $\{u \leq 0 \}$ that is compactly contained in $\Omega_{\epsilon_0}^C$ and rewrite $\phi = \phi \eta + \phi ( 1- \eta)$. Observe that $\phi (1- \eta)$ lies in $W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega)$ and is compactly supported in $\{u > 0 \}$. By Lemma \ref{lem:biham} we infer that
\begin{equation}\label{eq:wegmitrand}
2\int_\Omega \Delta u \Delta (\phi(1- \eta) ) \;\mathrm{d}x = 0 .
\end{equation}
Note that $\phi \eta \in W_0^{2,2}(\Omega)$ as $\eta$ is compactly supported in $\Omega$. Using \eqref{eq:wegmitrand} and that we have already shown \eqref{eq:bihame} for $W_0^{2,2}$-test functions we find
\begin{equation*}
2 \int_\Omega \Delta u \Delta \phi \;\mathrm{d}x = 2\int_\Omega \Delta u \Delta (\eta \phi) \;\mathrm{d}x= - \int_{\{ u= 0 \} } \phi \eta \frac{1 }{|\nabla u |} \; \mathrm{d}\mathcal{H}^1 .
\end{equation*}
Since $\eta \equiv 1$ on a neighborhood of $\{ u= 0 \}$ we obtain the claim.
\end{proof}
\begin{cor}
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then $\partial \{ u > 0 \} = \{ u = 0 \} \cup \partial \Omega$. In particular $ \{u > 0 \}$ has $C^2$-boundary.
\end{cor}
\begin{proof}
Recalling \eqref{eq:438} we find that
\begin{equation*}
\partial \{u > 0\} = (\partial \{ u > 0 \} \cap \Omega) \cup \partial \Omega \supset (\partial^* \{ u > 0 \} \cap \Omega) \cup \partial \Omega \supset \{ u =0\} \cup \partial \Omega .
\end{equation*}
The other inclusion $\partial \{ u >0 \} \subset \partial \Omega \cup \{ u = 0 \}$ is immediate by continuity of $u$. The rest of the claim follows from Theorem \ref{thm:1.1}.
\end{proof}
\section{Measure of the Negativity Region}
We have already discovered in \eqref{eq:infbound}, Example \ref{ex:iotapos}, and Remark \ref{rem:nontriv} that for `small' boundary values $u_0$ the energy of minimizers falls below $|\Omega|$ and the nodal set is nontrivial. On the contrary, for `large' boundary values $u_0$, one gets minimizers with trivial nodal set, see Remark \ref{rem:iotafin}. In this section, we want to derive some estimates that ensure one of the two cases.
\begin{lemma}[Universal Bound for Biharmonic Measure]
Let $u \in \mathcal{A}(u_0)$ be a minimizer. Then
\begin{equation}\label{eq:89}
\int_{\{ u = 0 \} } \frac{1}{|\nabla u | } \; \mathrm{d}\mathcal{H}^1 \leq \frac{2|\{u < 0 \}|}{\inf_{\partial \Omega } u_0 } \leq \frac{2|\Omega|}{\inf_{\partial \Omega} u_0 } .
\end{equation}
\end{lemma}
\begin{proof}
Let $h$ be the weak solution of
\begin{equation*}
\begin{cases}
\Delta h = 0 & \mathrm{in} \; \Omega, \\
h = u_0 & \mathrm{on} \; \partial \Omega .
\end{cases}
\end{equation*}
Note that by elliptic regularity and the trace theorem, see \cite[Theorem 8.12]{Gilbarg}, $h$ lies actually in $W^{2,2}(\Omega)$ and $h - u_0 \in W_0^{1,2}(\Omega)$. Observe that $u - h = (u- u_0) + (u_0 - h) \in W^{2,2}(\Omega) \cap W_0^{1,2}(\Omega)$. We obtain with \eqref{eq:bihame}
\begin{equation}\label{eq:hasuii}
\int_\Omega \Delta u \Delta (u-h) \;\mathrm{d}x= - \frac{1}{2}\int_{\{ u= 0 \}} \frac{u - h}{|\nabla u | } \; \mathrm{d}\mathcal{H}^1 = \frac{1}{2}\int_{\{ u= 0 \}} \frac{ h}{|\nabla u | } \; \mathrm{d}\mathcal{H}^1 .
\end{equation}
For the left hand side we can estimate using harmonicity of $h$ and \eqref{eq:infbound}
\begin{equation*}
\int_\Omega \Delta u \Delta (u-h) \;\mathrm{d}x = \int_\Omega (\Delta u)^2 \;\mathrm{d}x = \mathcal{E}(u) - |\{ u >0 \}| \leq |\Omega| - |\{ u >0 \}| = |\{u < 0 \}| ,
\end{equation*}
where we used that $|\{u = 0 \}| = 0 $ by Theorem \ref{thm:1.1}.
Therefore \eqref{eq:hasuii} implies that
\begin{equation*}
|\{u < 0 \}| \geq \frac{1}{2} \int_{\{ u= 0 \}} \frac{h}{|\nabla u | } \; \mathrm{d}\mathcal{H}^1.
\end{equation*}
By the maximum principle and the construction of $h$ we obtain that $h \geq \inf_{\partial \Omega} u_0$ and hence
\begin{equation*}
|\{ u< 0 \}| \geq \frac{1}{2} \inf_{\partial\Omega} ( u_0 ) \int_{\{ u= 0 \}} \frac{1}{|\nabla u | } \; \mathrm{d}\mathcal{H}^1. \qedhere
\end{equation*}
\end{proof}
\begin{cor}[One-phase Solutions for Large Boundary Values] \label{cor:onephase}
Let $\Omega$, $u_0$ be as in Definition \ref{def:adm} and $u \in \mathcal{A}(u_0)$ be a minimizer. Then, either $ \{ u = 0\}= \emptyset $ or
\begin{equation*}
| \{ u < 0 \} | \geq 2\pi \inf_{\partial \Omega } u_0
\end{equation*}
\end{cor}
\begin{proof}
If $\{ u = 0 \} = \emptyset$ there is nothing to show, so assume $\{ u = 0 \} \neq \emptyset$; then $|\{ u < 0 \}| > 0$ by Theorem \ref{thm:1.1}. Using that $| \{ u = 0 \}| =0$ by Theorem \ref{thm:1.1} and \eqref{eq:infbound}, as well as the Cauchy-Schwarz inequality and the Gauss divergence theorem, we get
\begin{align*}
|\{ u < 0 \} | & = |\Omega | - |\{ u > 0 \} | \geq \mathcal{E}(u) - |\{ u > 0 \} | = \int_\Omega (\Delta u )^2 \;\mathrm{d}x
\\ & \geq \int_{\{ u<0 \} } (\Delta u )^2 \;\mathrm{d}x \geq \frac{1}{| \{ u < 0 \} |} \left( \int_{ \{ u < 0 \} } (\Delta u ) \;\mathrm{d}x \right)^2
\\ & = \frac{1}{|\{ u < 0 \}|} \left( \int_{\partial \{ u <0 \}} \nabla u \cdot \nu \; \mathrm{d}\mathcal{H}^1 \right)^2 .
\end{align*}
Note that the exterior unit normal of $\{ u< 0 \}$ is given by $\nu = \frac{\nabla u }{|\nabla u |}$ and therefore we obtain with Theorem \ref{thm:1.1}
\begin{equation}\label{eq:711}
|\{ u < 0 \} |^2 \geq \left( \int_{\partial \{ u < 0 \} } |\nabla u| \; \mathrm{d}\mathcal{H}^1 \right)^2 = \left( \int_{\{ u = 0 \} } |\nabla u| \; \mathrm{d}\mathcal{H}^1 \right)^2 .
\end{equation}
Now observe that by the Cauchy-Schwarz inequality and \eqref{eq:89} we get
\begin{align*}
\mathcal{H}^1( \partial \{ u < 0 \} ) & \leq \left( \int_{ \{u= 0 \} } \frac{1}{| \nabla u |} \; \mathrm{d} \mathcal{H}^1 \right)^\frac{1}{2} \left( \int_{ \{u= 0 \} } {| \nabla u |} \; \mathrm{d} \mathcal{H}^1 \right)^\frac{1}{2} \\ & \leq \left(2 \frac{|\{ u < 0 \}| }{\inf_{\partial \Omega} u_0 } \right)^\frac{1}{2} \left( \int_{ \{u= 0 \} } {| \nabla u |} \; \mathrm{d} \mathcal{H}^1 \right)^\frac{1}{2}.
\end{align*}
Rearranging and plugging into \eqref{eq:711} we find
\begin{equation*}
|\{ u < 0 \} |^2 \geq \frac{\mathcal{H}^1(\partial \{ u <0 \} )^4}{4|\{ u <0 \}|^2} \left( \inf_{\partial \Omega} u_0 \right)^2 .
\end{equation*}
Using the isoperimetric inequality, see \cite[Theorem 14.1]{Maggi} we get that
\begin{equation}\label{eq:negativityset}
|\{ u < 0 \} |^2 \geq \frac{1}{4} \frac{\mathcal{H}^1(\partial B_1(0) )^4}{|B_1(0)|^2} \left(\inf_{\partial \Omega} u_0 \right)^2= 4\pi^2 \left( \inf_{\partial \Omega } u_0 \right)^2. \qedhere
\end{equation}
\end{proof}
\begin{remark}
This proves in particular that $\inf_{\partial \Omega} u_0 > \frac{|\Omega|}{2\pi}$ implies $\{u = 0 \}= \emptyset$: otherwise Corollary \ref{cor:onephase} would give $|\Omega| \geq |\{ u < 0 \}| \geq 2\pi \inf_{\partial \Omega} u_0 > |\Omega|$, a contradiction. This is considerably better than the bound in Remark \ref{rem:iotafin}, at least for domains $\Omega$ with large Lebesgue measure.
\end{remark}
\section{A Non-Uniqueness Result}\label{sec:nonuni}
\begin{definition}[The Candidate for Non-Uniqueness] \label{def:cand}
Let $\Omega = B_1(0)$. For $C > 0 $ let $\mathcal{A}(C)$ denote the admissible set associated to the boundary function $u_0 \equiv C$, see Definition \ref{def:adm}. We define
\begin{equation*}
\iota := \sup \{ C > 0 : \inf_{u \in \mathcal{A}(C)} \mathcal{E}(u) < |B_1(0)| \}
\end{equation*}
\end{definition}
\begin{remark}
Note that $\frac{1}{8\sqrt{2}} \leq \iota < \infty$ by Example \ref{ex:iotapos} and Remark \ref{rem:iotafin}.
\end{remark}
\begin{lemma}[Energy in the Limit Case] \label{lem:limica}
Let $\iota$ be as in Definition \ref{def:cand}. Then
\begin{equation}\label{eq:intaen}
\inf_{ u \in \mathcal{A}(\iota) } \mathcal{E}(u) = |B_1(0)|.
\end{equation}
\end{lemma}
\begin{proof}
One inequality is immediate by \eqref{eq:infbound}. Now suppose that
\begin{equation}\label{eq:73}
\inf_{ u \in \mathcal{A}(\iota) } \mathcal{E}(u) < |B_1(0)|.
\end{equation}
Then by Theorem \ref{thm:1.1} and Remark \ref{lem:eximin} there exists a minimizer $u_\iota$ such that $ \{u_{\iota} =0 \} $ has finite $1-$Hausdorff measure and $\mathcal{E}(u_\iota) < |B_1(0)|$. Note that for each $\epsilon > 0$, $u_\iota + \epsilon \chi_{B_1(0)}$ is admissible for $u_0 \equiv \iota + \epsilon$. By the choice of $\iota$ we get
\begin{equation*}
|B_1(0)| \leq \mathcal{E}(u_{\iota} + \epsilon ) = \int_{B_1(0)} (\Delta u_{\iota})^2 \;\mathrm{d}x + |\{ u_\iota + \epsilon > 0 \}| \quad \forall \epsilon > 0.
\end{equation*}
Hence
\begin{equation*}
\int_{B_1(0)} (\Delta u_{\iota})^2 \;\mathrm{d}x +| \{ u_\iota > - \epsilon \}| \geq |B_1(0)| .
\end{equation*}
Letting $\epsilon$ tend to $0$ monotonically from above, we obtain with \cite[Theorem 1 in Section 1.1]{EvGar} that
\begin{equation}\label{eq:contiii}
\int_{B_1(0)} (\Delta u_{\iota})^2 \;\mathrm{d}x +| \{ u_\iota \geq 0 \}| \geq |B_1(0)|.
\end{equation}
As we already pointed out, $\{ u_\iota = 0 \}$ is a set of finite $1$-Hausdorff measure and hence a Lebesgue null set, see \cite[Section 2.1, Lemma 2]{EvGar}. Therefore \eqref{eq:contiii} can be reformulated to
\begin{equation*}
\int_{B_1(0)} (\Delta u_{\iota})^2 \;\mathrm{d}x +| \{ u_\iota > 0 \}| \geq |B_1(0)|,
\end{equation*}
but the left hand side coincides with $\mathcal{E}(u_\iota)$, which is a contradiction to \eqref{eq:73}.
\end{proof}
\begin{remark} \label{rem:wehamini}
Equation \eqref{eq:intaen} already yields one immediate minimizer, namely $u \equiv \iota$. We have to show that there exists yet another minimizer.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:nonun}] Let $\iota$ be as in Definition \ref{def:cand}, $\Omega= B_1(0)$ and $u_0 \equiv \iota$.
Let $(\iota_n)_{n \in \mathbb{N}}$ be a sequence such that $\iota_n \leq \iota_{n+1}< \iota$ for each $n$, $\inf_{w \in \mathcal{A}(\iota_n)} \mathcal{E}(w) < |\Omega|$ and $\iota_n \rightarrow \iota$ as $n \rightarrow \infty$. Such a sequence exists by the choice of $\iota$, see Definition \ref{def:cand}. For each $n \in \mathbb{N}$ let $u_n \in \mathcal{A}(\iota_n )$ be a minimizer with boundary values $\iota_n$. By Remark \ref{rem:nontriv} we obtain that
\begin{equation}\label{eq:neaga}
\inf_{x \in \Omega} u_n(x) \leq 0 .
\end{equation}
We claim that $||u_n||_{W^{2,2}}$ is bounded. Indeed, by \cite[Theorem 2.31]{Sweers} we get for some $C> 0$ independent of $n$
\begin{align*}
||u_n||_{W^{2,2}} &\leq ||\iota_n||_{W^{2,2}}+ C || u_n - \iota_n ||_{W^{2,2}}
\leq ||\iota_n||_{L^2} + C || \Delta (u_n - \iota_n )||_{L^2}
\\ & \leq \iota|\Omega|^\frac{1}{2} + C || \Delta u_n||_{L^2} \leq (\iota + C) \sqrt{|\Omega|},
\end{align*}
where we used \eqref{eq:infbound} in the last step.
Therefore $(u_n)_{n = 1}^\infty$ has a weakly convergent subsequence in $W^{2,2}(\Omega)$, which we call $u_n$ again without relabeling. Let $u \in W^{2,2}(\Omega)$ be its weak limit. Since $u_n - \iota_n \in W_0^{1,2}(\Omega)$ and $W_0^{1,2}(\Omega)$ is weakly closed, we find that $u \in \mathcal{A}( \iota)$. Since $W^{2,2}(\Omega)$ embeds compactly into $C(\overline{\Omega})$, $u_n $ converges also uniformly to $u$. Using \eqref{eq:neaga} we obtain that
\begin{equation*}
\inf_{x \in \Omega} u(x) \leq 0 .
\end{equation*}
In particular, $u$ differs from the constant function identically equal to $\iota$, which was already identified in Remark \ref{rem:wehamini} as a minimizer in $\mathcal{A}(\iota)$. We now show that $u$ is another minimizer in $\mathcal{A}(\iota)$. By Lemma \ref{lem:limica}, the weak lower semicontinuity of the $L^2$ norm with respect to weak $L^2$-convergence and Fatou's Lemma we get
\begin{align}
|\Omega| & \leq \mathcal{E}(u) = \int_\Omega (\Delta u )^2 \;\mathrm{d}x + |\{u > 0 \}|\leq
\liminf_{n \rightarrow \infty} \int_\Omega (\Delta u_n)^2 \;\mathrm{d}x + \int_\Omega \chi_{\{ u > 0 \} } \;\mathrm{d}x \nonumber
\\ & \leq \liminf_{n \rightarrow \infty} \int_\Omega (\Delta u_n)^2 \;\mathrm{d}x + \int_\Omega \liminf_{n \rightarrow \infty } \chi_{\{ u_n > 0 \} } \;\mathrm{d}x \nonumber
\\ & \leq \liminf_{n \rightarrow \infty} \int_\Omega (\Delta u_n)^2 \;\mathrm{d}x + \liminf_{n \rightarrow \infty } \int_\Omega \chi_{\{ u_n > 0 \} } \;\mathrm{d}x \nonumber
\\ & \leq \liminf_{n \rightarrow \infty} \left( \int_\Omega (\Delta u_n)^2 \;\mathrm{d}x + |\{u_n > 0 \}| \right) = \liminf_{n \rightarrow \infty } \mathcal{E}(u_n) \leq |\Omega| \label{eq:linenerg}.
\end{align}
Here we used that, by uniform convergence, $u(x) > 0$ implies $u_n(x) > 0$ for all sufficiently large $n$, i.e., $\chi_{\{u > 0\}} \leq \liminf_{n \rightarrow \infty} \chi_{\{u_n > 0\}}$ pointwise. Therefore $\mathcal{E}(u) = |\Omega| = \inf_{ w \in \mathcal{A}(\iota) }\mathcal{E}(w)$ by \eqref{eq:intaen}, which proves the claim.
\end{proof}
\section{On Navier Boundary Conditions}\label{sec:fwts}
As we have only shown interior regularity of minimizers in Theorem \ref{thm:1.1} we cannot conclude anything about the behavior of the Laplacian at the boundary. However, the weak formulation of Navier boundary conditions in Definition \ref{def:adm} is equivalent to the strong formulation only provided that $u$ is regular enough to have a trace on $\partial \Omega$, see the discussion in \cite[Section 2.7]{Sweers} for details. Provided that the domain $\Omega$ indeed has a smooth boundary (which we assume from now on), we can examine the measure-valued Poisson equation \eqref{eq:bihame} more closely, using the following result on the equivalence between different notions of solution to a measure-valued Poisson problem with Dirichlet boundary conditions. For a comprehensive study of measure-valued Poisson equations we refer to \cite{Ponce}.
\begin{lemma}[Measure-valued Poisson equation, cf. {\cite[Proposition 6.3]{Ponce}} and {\cite[Proposition 5.1]{Ponce}}]\label{lem:maeval}
Suppose that $\Omega \subset \mathbb{R}^n$ is a bounded domain with smooth boundary and suppose that $\mu$ is a finite Radon measure on $\Omega$. Further, let $w : \Omega \rightarrow \overline{\mathbb{R}}$ be Lebesgue measurable. Then the following are equivalent
\begin{enumerate}
\item (Weak solutions with vanishing trace)
\begin{equation*}
w \in W_0^{1,1}(\Omega) \quad \; \textrm{and} \quad \; \int_\Omega \nabla w \nabla \phi \; \mathrm{d}x = \int_\Omega \phi \; \mathrm{d}\mu \quad \quad \forall \phi \in C_0^\infty(\Omega) .
\end{equation*}
\item (Test functions that can feel the boundary)
\begin{equation*}
w \in L^1(\Omega) \quad \; \textrm{and} \quad \; - \int_\Omega w \Delta \phi \; \mathrm{d}x = \int_\Omega \phi \; \mathrm{d}\mu \quad \forall \phi \in C^\infty(\overline{\Omega}) : \phi_{\mid_{\partial\Omega}} \equiv 0 .
\end{equation*}
\end{enumerate}
If one of the two statements holds true, then $w \in W_0^{1,q}(\Omega)$ for each $q \in \left[1, \frac{n}{n-1}\right)$.
\end{lemma}
This gives immediately the following
\begin{cor} [Navier Boundary Conditions in the Trace Sense]
Suppose that $\Omega\subset \mathbb{R}^2$ has smooth boundary and let $u \in \mathcal{A}(u_0)$ be a minimizer. Then for each $\beta \in (0,1) $ one has that $u \in C^2(\Omega)\cap W^{3,2-\beta}(\Omega)$ and $\Delta u \in W_0^{1,2-\beta}(\Omega)$ .
\end{cor}
\begin{proof} Let $\beta \in ( 0 ,1)$.
In view of \eqref{eq:bihame} one has that $w := \Delta u $ satisfies point $(2)$ of Lemma \ref{lem:maeval} with $\mu = \frac{1}{2|\nabla u|} \mathcal{H}^1 \llcorner_{ \{u = 0 \} } $, which is a finite Radon measure because of Theorem \ref{thm:1.1}. We infer from Lemma \ref{lem:maeval} that $\Delta u \in W_0^{1,2-\beta} (\Omega)$. Since $2- \beta > 1$ we have maximal regularity for $\Delta u$ and can infer that $u \in W^{3, 2-\beta}(\Omega)$, cf. \cite[Theorem 9.19]{Gilbarg}.
\end{proof}
\begin{remark}
Note in particular that the previous Corollary improves the regularity asserted in Theorem \ref{thm:1.1} for smooth domains $\Omega$.
\end{remark}
We have shown that $\Delta u$ vanishes for a minimizer in the sense of traces.
If $\Omega = B_1(0)$ there is another possible - and equally useful - conception of vanishing at the boundary, namely that $\Delta u$ has vanishing radial limits on $\partial B_1(0)$, i.e. $\lim_{r \rightarrow 1-} \Delta u(r,\theta) = 0$ for a.e. $\theta \in (0,2\pi)$.
These two conceptions of vanishing have a nontrivial relation. A result that relates the concepts uses the fine topology, cf. \cite[Theorem 2.147]{ZiemerMaly}. We believe that consistency results can be shown with the cited theorem but the details would go beyond the scope of this article. Instead we give a self-contained proof that the Laplacian of a minimizer $u$ has vanishing radial limits in Appendix \ref{sec:vanrad}.
\section{Radial symmetry and Explicit Solutions}\label{sec:talenti}
In this section we show that for $\Omega = B_1(0)$ and $u_0 \equiv C$, there exists a radial minimizer. We will then be able to compute radial minimizers explicitly and determine the nonuniqueness level $\iota$ from Definition \ref{def:cand}.
\begin{definition}[Symmetric Nonincreasing Rearrangement]
Let $u : B_1(0) \rightarrow \overline{\mathbb{R}}$ be measurable. The function $u^* : B_1(0) \rightarrow \overline{\mathbb{R}}$ is the unique radial and radially nonincreasing function such that
\begin{equation*}
| \{ u > t\} | = | \{ u^* > t \} | \quad \forall t \in \mathbb{R}.
\end{equation*}
\end{definition}
\begin{remark}\label{rem:102}
The fact that such a function exists follows from the construction in \cite[Chapter 3.3]{Lieb}. Moreover, one can show that for each $p \in [1,\infty]$, $u \in L^p(B_1(0))$ implies that $u^* \in L^p(B_1(0))$ and $||u||_{L^p} = ||u^*||_{L^p}$, see \cite[Chapter 3.3]{Lieb}.
\end{remark}
We recall the famous Talenti rearrangement inequality, which we will use.
\begin{theorem}[Talenti's Inequality, cf. {\cite[Theorem 1]{Talenti}}]
Let $f \in L^2(B_1(0))$ and $u \in W_0^{1,2}(B_1(0))$ be the weak solution of
\begin{equation*}
\begin{cases}
- \Delta u = f & \textrm{in } B_1(0) \\ u = 0 & \textrm{on } \partial B_1(0)
\end{cases}.
\end{equation*}
Further, let $w\in W_0^{1,2}(B_1(0))$ be the weak solution of
\begin{equation*}
\begin{cases}
- \Delta w = f^* & \textrm{in } B_1(0) \\ w = 0 & \textrm{on } \partial B_1(0)
\end{cases}.
\end{equation*}
Then $w \geq u^*$ pointwise almost everywhere.
\end{theorem}
We obtain the radiality of the solution as an immediate consequence.
\begin{cor}[Radiality]\label{cor:radial}
Suppose that $\Omega= B_1(0)$ and $u_0 \equiv C$. Then there exists a minimizer $ v \in \mathcal{A}(C)$ that is radial.
\end{cor}
\begin{proof}
First, fix a minimizer $u \in \mathcal{A}(C)$. Then by Remark \ref{rem:102}
\begin{align}
\mathcal{E}(u) & = \int_\Omega (\Delta u )^2 \; \mathrm{d}x + |\{ u > 0 \}| \nonumber \\
& = \int_\Omega (\Delta (u- C))^2 \; \mathrm{d}x + | \{ C - u < C\} | \nonumber
\\ & = \int_\Omega [( \Delta (u-C) )^* ]^2 \; \mathrm{d}x + |\{ (C-u)^* < C \}| \label{eq:tally} .
\end{align}
Now define $w\in W^{2,2}(B_1(0)) \cap W_0^{1,2}(B_1(0))$ to be the weak solution of
\begin{equation*}
\begin{cases}
- \Delta w = (\Delta (u-C))^* & \textrm{in } B_1(0) \\ w = 0 & \textrm{on } \partial B_1(0)
\end{cases}.
\end{equation*}
Note that $w$ is radial since the right hand side is radial. Observe now that $C- u$ is the unique weak solution of
\begin{equation*}
\begin{cases}
- \Delta v = \Delta (u-C) & \textrm{in } B_1(0) \\ v = 0 & \textrm{on } \partial B_1(0)
\end{cases}.
\end{equation*}
By Talenti's inequality (see previous theorem) $w \geq (C-u)^*$. In particular $|\{w < C\}| \leq |\{ (C-u)^* < C \} |$. Therefore we can estimate \eqref{eq:tally} in the following way
\begin{align*}
\mathcal{E}(u) & \geq \int_\Omega (\Delta w)^2 \; \mathrm{d}x + | \{ w < C \} | \\ & = \int_\Omega (\Delta (C-w) )^2 \; \mathrm{d}x + |\{ C- w > 0 \}| = \mathcal{E}(C-w).
\end{align*}
Now define $v := C-w$. Then $v \in \mathcal{A}(C)$ since $v- C = -w \in W_0^{1,2}(\Omega)$. By the estimate above $v$ is also a minimizer, and it is radial since $w$ is.
\end{proof}
Now we characterize the radial solutions explicitly using the following two propositions
\begin{prop}[Radial Solutions on Annuli]
Let $A_{R_1,R_2} := \{ x \in \mathbb{R}^2 : R_1 < |x| < R_2 \}$ be an annulus with inner radius $R_1 \geq 0 $ and outer radius $R_2 > R_1$. If $w \in W^{2,2}(A_{R_1,R_2})$ is weakly biharmonic and radial then there exist constants $A,B,C, D \in \mathbb{R} $ such that
\begin{equation}\label{eq:bihamsol}
w(x) = A|x|^2 + B + C \log|x| + D \frac{|x|^2}{2} \log|x|
\end{equation}
\end{prop}
\begin{proof}
The claim reduces to a straightforward ODE argument when expressing $\Delta^2$ in polar coordinates.
\end{proof}
\begin{prop}[Radial Zero Level Set]\label{prop:radsub}
Let $u \in \mathcal{A}(u_0)$ be a radial minimizer with $\{ u = 0 \} \neq \emptyset$. Then there exists $R_0 > 0 $ such that
\begin{equation}
\{ u = 0 \} = \partial B_{R_0}(0)
\end{equation}
and $\{ u > 0 \} = B_1(0) \setminus \overline{B_{R_0}(0)}$.
\end{prop}
\begin{proof}
According to Theorem \ref{thm:1.1} one has $\{u = 0 \} = \bigcup_{i = 1}^N S_i $ for closed disjoint $C^2$-manifolds $S_i$ all of which form a connected component of $\{ u = 0 \}$. Since $u$ is radial one has $S_i = \partial B_{r_i}(0)$ for some radii $r_i > 0 $. Without loss of generality $r_1 < ... < r_N$. It remains to show that $N = 1$. If $N >1$ then $u \equiv 0 $ on $\partial B_{r_N}(0)$. By subharmonicity one has $u <0 $ on $B_{r_N}(0)$. However now $r_1< r_N$ and therefore one obtains a contradiction to $u = 0 $ on $\partial B_{r_1}(0)$.
\end{proof}
\begin{lemma} [Explicit radial solutions] \label{lem:explrad}
Suppose $\Omega = B_1(0)$. Let $u_0$ be a positive constant. Define
\begin{equation}\label{eq:infii}
h(u_0) := \min \left\lbrace \pi , \inf_{R_0 \in (0,1)} \left( \frac{4\pi u_0^2}{\frac{1-R_0^2}{2}+ R_0^2 \log(R_0) } + \pi (1-R_0^2) \right) \right\rbrace .
\end{equation}
In case that $h(u_0) < \pi$, the infimum in \eqref{eq:infii} is attained and for each $R \in (0,1)$ that realizes the infimum in \eqref{eq:infii} the function
\begin{equation*}
u(x) =u_0\begin{cases} \frac{ \log R |x|^2 - R^2 \log R}{R^2-1- 2R^2 \log R} & 0 \leq |x| \leq R, \\ \frac{- |x|^2 + R^2 -2 R^2 \log R + R^2 \log |x| + {|x|^2} \log |x|}{{R^2-1}-2 R^2 \log R} & R < |x| < 1. \end{cases}
\end{equation*}
is a minimizer with energy $\mathcal{E}(u) = h(u_0)$. In case that $h(u_0) = \pi$, a minimizer is given by the constant function $u \equiv u_0$.
\end{lemma}
\begin{proof}Recall there exists a radial minimizer $u$ by Corollary \ref{cor:radial}.
By Theorem \ref{thm:1.1}, Corollary \ref{lem:corsubham} and Proposition \ref{prop:radsub} we deduce that $\{ u = 0 \} $ is either empty or there exists $R_0 \in (0,1)$ such that $\{ u = 0 \} = \partial B_{R_0}(0)$. If $\{u = 0\}$ is empty then the minimizer is a constant. In the other case, Lemma \ref{lem:biham} implies that $u$ is weakly biharmonic on the annuli $\{ 0 < |x| < R_0 \}$ and $\{R_0 < |x|<1\}$. Hence there exist real numbers $C_1,D_1, E_1,F_1,C_2,D_2,E_2,F_2$ such that
\begin{equation*}
u(x) = \begin{cases} C_1 |x|^2 + D_1 + E_1 \log|x| + F_1 \frac{|x|^2}{2}\log|x| & 0 < |x| < R_0 \\ C_2 |x|^2 + D_2 + E_2 \log |x| + F_2 \frac{|x|^2}{2} \log |x| & R_0 < |x| < 1 \end{cases} .
\end{equation*}
Since $u$ has to be continuous at zero we deduce that $E_1 = 0 $. Since second derivatives of $u$ have to be continuous at zero it is an easy computation to show that $F_1 = 0$. By the Navier boundary conditions (cf. Appendix \ref{sec:vanrad}) we get that $4C_2 + 2F_2 = 0 $ and thus
\begin{equation}\label{eq:hilfreich}
\Delta u(x) = \begin{cases} 4C_1 & 0 < |x| < R_0 \\2 F_2 \log|x| & R_0 < |x| < 1 \end{cases}.
\end{equation}
As $\Delta u$ has to be continuous we obtain that $4C_1 = 2 F_2 \log R_0 $, i.e. $C_1 = \frac{1}{2}F_2 \log R_0$.
The fact that $ u = 0 $ on $\partial B_{R_0} (0 )$ implies that
$0 = C_1 R_0^2 + D_1 $ and hence $D_1 = - C_1 R_0^2 = - \frac{F_2}{2}R_0^2 \log R_0$. From all these computations we obtain
\begin{equation*}
u(x) = \begin{cases} \frac{1}{2} F_2 \log R_0 |x|^2 - \frac{1}{2}F_2 R_0^2 \log R_0 & 0 < |x| \leq R_0 \\- \frac{1}{2} F_2 |x|^2 + D_2 + E_2 \log |x| + F_2 \frac{|x|^2}{2} \log |x| & R_0 < |x| < 1 \end{cases} .
\end{equation*}
If we take the radial derivative $\partial_r u$ in both cases and set them equal we obtain
\begin{equation*}
F_2 R_0 \log R_0 = - F_2 R_0 + E_2 \frac{1}{R_0} + F_2 R_0 \log R_0 + \frac{1}{2} F_2 R_0 ,
\end{equation*}
which results in $E_2 = \frac{1}{2}F_2 R_0^2$ and thus
\begin{equation*}
u(x) = \begin{cases} \frac{1}{2} F_2 \log R_0 |x|^2 - \frac{1}{2}F_2 R_0^2 \log R_0 & 0 < |x| \leq R_0 \\- \frac{1}{2} F_2 |x|^2 + D_2 + \frac{1}{2} F_2 R_0^2 \log |x| + F_2 \frac{|x|^2}{2} \log |x| & R_0 < |x| < 1 \end{cases} .
\end{equation*}
Note another time that $0 = \lim_{ |x| \rightarrow R_0 + } u$ and therefore
\begin{equation*}
0 = D_2 + F_2 \left( - \frac{1}{2}R_0^2 + R_0^2 \log R_0 \right) .
\end{equation*}
Hence $D_2 = \frac{1}{2}F_2 R_0^2 - F_2 R_0^2 \log R_0 $ and this yields that
\begin{equation*}
u(x) = F_2 \begin{cases} \frac{1}{2} \log R_0 |x|^2 - \frac{1}{2}R_0^2 \log R_0 & 0 \leq |x| \leq R_0 \\- \frac{1}{2} |x|^2 + \frac{1}{2} R_0^2 - R_0^2 \log R_0 + \frac{1}{2} R_0^2 \log |x| + \frac{|x|^2}{2} \log |x| & R_0 < |x| < 1 \end{cases}
\end{equation*}
Using that $u \equiv u_0$ on $\partial B_1(0)$ we find
\begin{equation}\label{eq:eff2}
u_0 = F_2 \left( \frac{R_0^2-1}{2}- R_0^2 \log R_0 \right),
\end{equation}
which determines $F_2$ in terms of $u_0$ and $R_0$. Hence we know that there must exist some $R_0 \in (0,1)$ such that
\begin{equation}\label{eq:notkrituu}
u(x) =u_0\begin{cases} \frac{ \log R_0 |x|^2 - R_0^2 \log R_0 }{R_0^2-1- 2R_0^2 \log R_0} & 0 \leq |x| \leq R_0, \\ \frac{- |x|^2 + R_0^2 -2 R_0^2 \log R_0 + R_0^2 \log |x| + {|x|^2} \log |x|}{{R_0^2-1}-2 R_0^2 \log R_0} & R_0 < |x| < 1. \end{cases}
\end{equation}
Now define for $R_0 \in (0,1)$ the function $w_{R_0} \in \mathcal{A}(u_0)$ to be the right hand side of \eqref{eq:notkrituu}.
We have shown that either $\{u = 0 \}$ is empty or the minimizer is given by $w_{R_0^*}$ for some $R_0^* \in (0,1)$.
Going back to \eqref{eq:hilfreich} and using that according to Proposition \ref{prop:radsub} $|\{ u > 0 \}| = \pi (1-R_0^2)$ we obtain that
\begin{align*}
\mathcal{E}(w_{R_0})& = 16C_1^2 \pi R_0^2 + 4 F_2^2 \int_{B_1 \setminus B_{R_0}} \log^2|x| \; \mathrm{d}x + \pi (1- R_0^2) \\& = 4F_2^2 \pi \left( R_0^2 \log^2 R_0 + 2 \int_{R_0}^1 r \log^2 r \; \mathrm{d}r \right) + \pi (1- R_0^2),
\end{align*}
where we use the derived parameter identity for $C_1$ and radial integration in the last step. Using that
\begin{equation*}
\int_{R_0}^1 r \log^2 r \; \mathrm{d}r = -\frac{R_0^2}{2} \log^2 R_0 + \frac{R_0^2}{2} \log R_0 + \frac{1- R_0^2}{4}
\end{equation*}
we obtain using \eqref{eq:eff2}
\begin{align}
\mathcal{E}(w_{R_0} ) & = 4 F_2^2 \pi \left( R_0^2 \log R_0 + \frac{1-R_0^2}{2} \right) + \pi (1- R_0^2) \nonumber \\ & \label{eq:ero} = \frac{4\pi u_0^2}{\frac{1-R_0^2}{2} + R_0^2 \log R_0 } + \pi (1- R_0^2) .
\end{align}
We have shown that for each $R_0 \in (0,1)$ we can find an admissible function $w_{R_0} \in \mathcal{A}(u_0)$ such that $\mathcal{E}(w_{R_0})$ is given by the right hand side of \eqref{eq:ero}. Moreover we know that a minimizer $u$ is among such $w_{R_0}$ in case that $\{ u = 0 \} \neq \emptyset$. In case that $\{u = 0 \} = \emptyset $ however, we know from Remark \ref{rem:nontriv} that $\mathcal{E}(u) = \pi$. We obtain that
\begin{equation}\label{eq:enerrad}
\mathcal{E}(u) = \min \left\lbrace \pi, \inf_{R_0 \in (0,1) } \frac{4\pi u_0^2}{\frac{1-R_0^2}{2} + R_0^2 \log R_0 } + \pi (1- R_0^2) \right\rbrace .
\end{equation}
and in case that the infimum is smaller than $\pi$, it is attained by some $R_0^* \in (0,1) $ such that a minimizer is given by $w_{R_0^*}$. The attainment follows since the function of $R_0$ appearing in \eqref{eq:enerrad} tends to $+\infty$ as $R_0 \rightarrow 1^-$ and to $8\pi u_0^2 + \pi > \pi$ as $R_0 \rightarrow 0^+$, so that an infimum strictly below $\pi$ must be attained in the interior.
\end{proof}
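For the reader's convenience we note that the expression \eqref{eq:ero} is easily evaluated numerically. The following short Python sketch (not part of the proof; it assumes that NumPy and SciPy are available) minimizes the right hand side of \eqref{eq:ero} over $R_0 \in (0,1)$ for a given boundary value $u_0$; it reproduces, for instance, the values listed in Table \ref{tabelle} below.
\begin{verbatim}
# Numerical sketch: minimize the radial energy (eq:ero) over R0 in (0,1).
# Assumes NumPy and SciPy; for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar

def energy(R0, u0):
    # right hand side of (eq:ero) for the radial candidate w_{R0}
    g = (1.0 - R0**2) / 2.0 + R0**2 * np.log(R0)
    return 4.0 * np.pi * u0**2 / g + np.pi * (1.0 - R0**2)

def radial_minimum(u0):
    # returns (optimal R0, min{pi, inf of the energy})
    res = minimize_scalar(energy, args=(u0,), bounds=(1e-9, 1 - 1e-9),
                          method='bounded')
    return res.x, min(np.pi, res.fun)

for u0 in [0.01, 0.1, 0.112]:
    print(u0, *radial_minimum(u0))
    # e.g. u0 = 0.1 gives R0 ~ 0.5824 and energy ~ 2.9306
\end{verbatim}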
\begin{remark}
Let $h(u_0)$ be the quantity defined in the previous lemma. If $h(u_0)< \pi$ then one has to find
\begin{equation*}
\inf_{R_0 \in (0,1)} \left( \frac{4\pi u_0^2}{\frac{1-R_0^2}{2}+ R_0^2 \log(R_0) } + \pi (1-R_0^2) \right).
\end{equation*}
To do so, we set the first derivative of the expression with respect to $R_0$ equal to zero, which reads
\begin{equation*}
0 = \frac{-8\pi u_0^2 R_0 \log R_0 }{\left( \frac{1-R_0^2}{2}+ R_0^2 \log R_0 \right)^2} - 2\pi R_0.
\end{equation*}
Solving for $u_0$ and plugging this into \eqref{eq:infii} we find that
\begin{align}
\inf_{ w \in \mathcal{A}(u_0) }\mathcal{E}(w) & = - \frac{\pi}{\log R_0 } \left( \frac{1-R_0^2}{2}+ R_0^2 \log R_0 \right) + \pi (1- R_0^2) \nonumber
\\ & = \pi \left( 1- 2R_0^2 + \frac{R_0^2-1}{2 \log R_0} \right) \label{eq:rdill}
\end{align}
\end{remark}
\begin{lemma}[The nonuniqueness level]\label{lem:iooo}
Let $\Omega = B_1(0)$. Then the quantity $\iota$ in Definition \ref{def:cand} is given by
\begin{equation}\label{eq:iooo}
\iota = \frac{R_*}{2} \sqrt{\frac{1 -R_*^2}{2} + R_*^2 \log R_* } \simeq 0.112814
\end{equation}
where $R_* \simeq 0.533543$ is the unique solution of
\begin{equation}\label{eq:glfrr}
\frac{R^2 - 1}{2\log R} - 2R^2 = 0, \quad R \in (0,1).
\end{equation}
\end{lemma}
\begin{proof}
First we show that \eqref{eq:glfrr} has a unique solution $R_* \in (0,1)$. To this end we rewrite the equation by multiplying it by $2\log R$:
\begin{equation*}
R^2 - 1 - 4 R^2 \log R = 0 .
\end{equation*}
Using that $2 \log R = \log R^2 $ and substituting $R^2 = e^u$ for some $u \in (- \infty, 0) $ we find
\begin{equation*}
e^u - 1 - 2ue^u = 0 \quad \Leftrightarrow \quad ( 1-2u) = e^{-u}.
\end{equation*}
By \cite[Eq. (2.23)]{Lambert} this equation has the solution
\begin{equation}\label{eq:spezfu}
u \in \frac{1}{2}+ W \left(- \frac{1}{2\sqrt{e}} \right)
\end{equation}
where $W$ denotes the Lambert $W$-function, i.e. the multi-valued inverse of $f(x) = x e^x$. Note that for each $a \in (- e^{-1},0)$, $W(a)$ is exactly two-valued with one value smaller than $-1$ and one value larger than $-1$. This can be seen using that $f$ is negative on $(- \infty, 0)$ and has a global minimum at $-1$ with value $-e^{-1}$. Moreover $f$ is decreasing on $(-\infty, -1) $ and increasing on $(-1, 0)$. All of these assertions can be proved with standard techniques.
Now note that $f(-\frac{1}{2}) = - \frac{1}{2\sqrt{e}}$ and therefore $-\frac{1}{2}$ is one of the two values of $W \left(- \frac{1}{2\sqrt{e}} \right)$, i.e. the first possible solution is $u= 0 $. This however does not lie in the interval $(-\infty, 0 )$ and hence resubstitution does not generate a value $R \in (0,1)$. The only remaining possibility is the other value of $W \left(- \frac{1}{2\sqrt{e}} \right)$ that falls strictly below $-1$ and hence the corresponding solution for $u$ lies in $(- \infty, 0 )$, cf. \eqref{eq:spezfu}. Therefore this unique solution $u_* \in (-\infty, 0 )$ generates a unique solution $R_* = e^{\frac{1}{2} u_*} \in (0,1)$. Now we show \eqref{eq:iooo}.
By Lemma \ref{lem:limica} one can find a minimizer with energy $\pi= |B_1(0)|$. Recall from the proof of Theorem \ref{thm:nonun} that a minimizer $u \in \mathcal{A}(\iota)$ can be constructed by taking a weak $W^{2,2}$-limit of minimizers $u_n \in \mathcal{A}(\iota_n)$ for some sequence of constants $(\iota_n)_{n \in \mathbb{N}}$ that converges from below to $\iota$. Without loss of generality we can assume that there exists $\delta > 0 $ such that $\iota_n \geq \delta > 0 $ for each $n \in \mathbb{N}$. By definition of $\iota$ one can achieve that $\mathcal{E}(u_n) < \pi$ for all $n \in \mathbb{N}$. Repeating the computation in \eqref{eq:linenerg} one also obtains
\begin{equation*}
\pi = \lim_{n \rightarrow \infty } \mathcal{E}(u_n).
\end{equation*}
Now note also that $(u_n)_{n \in \mathbb{N}}$ can be chosen to be a sequence of radial minimizers. In particular we can choose $u_n$ to be of the form
\begin{equation*}
u_n(x) = \iota_n \begin{cases} \frac{ \log R_n |x|^2 - R_n^2 \log R_n }{R_n^2-1- 2R_n^2 \log R_n} & 0 \leq |x| \leq R_n, \\ \frac{- |x|^2 + R_n^2 -2 R_n^2 \log R_n + R_n^2 \log |x| + {|x|^2} \log |x|}{{R_n^2-1}-2 R_n^2 \log R_n} & R_n < |x| < 1 \end{cases}
\end{equation*}
for some $R_n \in (0,1)$. By \eqref{eq:rdill} we infer that $R_n$ satisfies
\begin{equation*}
\pi \underset{n \rightarrow \infty}{\longleftarrow} \mathcal{E}(u_n) = \pi \left( 1 - 2R_n^2 + \frac{R_n^2 - 1}{2\log(R_n)} \right)
\end{equation*}
and hence
\begin{equation*}
\frac{R_n^2 - 1}{2\log(R_n)} - 2R_n^2 \rightarrow 0 \quad ( n \rightarrow \infty) .
\end{equation*}
By \eqref{eq:negativityset}, we obtain that $(\pi R_n^2)^2 = |\{ u_n < 0 \}|^2 \geq 4\pi^2 \iota_n^2 \geq 4\pi^2 \delta^2$ and hence $R_n^2 \geq 2\iota_n \geq 2\delta$. Therefore $R_n \geq \sqrt{2\delta}$ is bounded from below by a strictly positive constant.
Define $a : [0,1] \rightarrow \mathbb{R}$ to be the continuous extension of $z \mapsto \frac{z^2-1}{2\log z} - 2 z^2$.
By compactness of $[0,1]$, $(R_n)_{n \in \mathbb{N}}$ has a convergent subsequence (again denoted by $(R_n)_{n \in \mathbb{N}}$) to some limit $R \in [0,1]$ that satisfies $a(R) = 0 $. Since $a(1) = -1 \neq 0 $ this equation is only solved by zero and by $R_*$ determined above. However $R \neq 0 $ since $(R_n)_{n \in \mathbb{N}}$ is bounded away from zero. This implies that $R = R_*$ and in particular that $(R_n)_{n \in \mathbb{N}}$ converges to $R_*$. By Lemma \ref{lem:explrad} we infer - since $\mathcal{E}(u_n) < \pi$ - that
\begin{equation*}
\mathcal{E}(u_n) = \frac{4\pi \iota_n^2}{\frac{1-R_n^2}{2}+ R_n^2 \log R_n} + \pi ( 1- R_n^2).
\end{equation*}
Using that $\iota_n \rightarrow \iota$, $R_n \rightarrow R_*$ and $\mathcal{E}(u_n) \rightarrow \pi$ as $n \rightarrow \infty$ we obtain in the limit that
\begin{equation*}
\pi = \frac{4\pi \iota^2}{\frac{1-R_*^2}{2}+ R_*^2 \log R_*} + \pi ( 1- R_*^2) .
\end{equation*}
Solving for $\iota$ we obtain the claim.
\end{proof}
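As a consistency check, the numerical values in \eqref{eq:iooo} can be recomputed directly from the Lambert $W$ representation \eqref{eq:spezfu}. The following Python sketch (assuming SciPy's implementation of the Lambert $W$ function, where the branch $k=-1$ selects the value below $-1$) is only meant as an illustration.
\begin{verbatim}
# Numerical sketch: recompute R_* and iota from the Lambert W representation.
import numpy as np
from scipy.special import lambertw

# branch k = -1 gives the value of W(-1/(2 sqrt(e))) that lies below -1
u_star = 0.5 + lambertw(-1.0 / (2.0 * np.sqrt(np.e)), k=-1).real
R_star = np.exp(0.5 * u_star)                      # ~ 0.533543
g = (1.0 - R_star**2) / 2.0 + R_star**2 * np.log(R_star)
iota = 0.5 * R_star * np.sqrt(g)                   # ~ 0.112814

# residual of (eq:glfrr), should be ~ 0
print(R_star, iota, (R_star**2 - 1) / (2 * np.log(R_star)) - 2 * R_star**2)
\end{verbatim}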
Next we list some selected numerical values for radial minimizers in Table \ref{tabelle} and give some plots in the figure below. For this let $R(u_0)$ be the set of all points $R_0 \in (0,1)$ where the infimum in \eqref{eq:infii} is attained (which coincides with the radius of the nodal sphere of a minimizer).
\begin{table}[h]
\begin{tabular}{|c|c|c|}
\hline
$u_0$ &$R(u_0)$ & $\inf_{\mathcal{A}(u_0)} \mathcal{E}$ \\
\hline
$0.01$ & $0.924036$ & $0.682707$ \\
\hline
$0.02$ & $0.876984$ & $1.07223$ \\
\hline
$0.04$ & $0.797621$ & $1.67144$ \\
\hline
$0.08$ & $0.654679$ & $2.56739$ \\
\hline
$0.1$ & $0.582373$ & $2.93062$ \\
\hline
$0.11$ & $0.544514$ & $3.09661$ \\
\hline
$0.112$ & $0.536733$ & $3.12866$ \\
\hline
\end{tabular}
\vspace{0.1cm}
\caption{Energy and nodal radius for selected boundary data}\label{tabelle}
\end{table}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{1a}
\caption{$u_0 = 0.01$: 3D-Plot}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{1b}
\caption{$u_0 = 0.01$: Profile curve}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2a}
\caption{$u_0 = 0.07$: 3D-Plot}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2b}
\caption{$u_0= 0.07$: Profile curve}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{3a}
\caption{$u_0= 0.112$: 3D-Plot}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{3b}
\caption{$u_0 = 0.112$: Profile curve}
\end{subfigure}
\caption{Selected Minimizers and their radial profile curves}\label{fig:OrbitlikeFreeElastica}
\end{figure}
\section{Optimality Discussion and Closing Remarks}\label{sec:opti}
In this section we present some open problems which we think would be interesting to consider in the context of the biharmonic Alt-Caffarelli problem. As we outlined in the introduction, the biharmonic Alt-Caffarelli problem is fundamentally different from some more established higher order variational problems with free boundary and therefore we believe that new techniques have to be developed.
\begin{remark}[Interior regularity]
It is an interesting question whether one can expect more interior regularity of minimizers than $C^2(\Omega)$. Recall that by Theorem \ref{thm:1.1}, regularity of minimizers and regularity of the free boundary are connected by the fact that minimizers have nonvanishing gradient on their nodal set. There is however one obstruction to higher regularity: the explicit minimizers found in Lemma \ref{lem:explrad} do not lie in $C^3(\Omega)$. What then remains open is $C^{2,\gamma}$-regularity for some $\gamma > 0$. The solutions in Lemma \ref{lem:explrad} actually have a Lipschitz second derivative, so it is likely that better regularity statements can be derived.
\end{remark}
Another interesting and not immediately related question is the regularity up to the free boundary.
\begin{remark}[Regularity up to the free boundary]
We have found in Lemma \ref{lem:biham} that $u \in C^\infty( \{ u> 0 \})$, as $\Delta u $ is harmonic in $\{ u > 0 \}$. Moreover, Theorem \ref{thm:1.1} implies that $\Delta u $ is continuous on $\overline{\{ u < 0 \} }$, which makes it a classical solution to a Dirichlet problem.
Higher regularity of $\Delta u_{\mid_{ \{ u< 0 \} }}$ up to the free boundary turns out to be an interesting problem. The free boundary is regular enough for elliptic regularity theory, cf. \cite[Theorem 9.15]{Gilbarg}. However, it is unclear whether $\Delta u_{\mid_{ \{ u = 0 \}}}$ is the trace of a $W^{2,p}$ function for any $p \in (1,2)$, which is also a requirement in \cite[Theorem 9.15]{Gilbarg}.
This is actually delicate, see \cite{Hadamard} and \cite[Page 3]{Daners} for relevant counterexamples. Further regularity up to the free boundary would improve the regularity of the free boundary itself, which we do not consider impossible. A discussion of this issue would therefore be useful and could give way to future research.
\end{remark}
\begin{remark}[Dirichlet boundary conditions]
The argumentation in the present article relied heavily on a weak version of the `maximum principle for systems', see \cite[Section 2.7]{Sweers} for the exact connection between elliptic systems and higher order PDE's with Navier boundary conditions. In the case of Dirichlet boundary conditions, where these principles are not available, statements like Corollary \ref{lem:corsubham} are not expected to hold true in the way they do in our case. We nevertheless believe that a discussion of the Dirichlet problem is also doable. It would be an interesting question whether the results are similar at all. The question has also been asked for other higher order free boundary problems, see \cite{Friedman} for the biharmonic obstacle problem, where rich similarities can be found.
\end{remark}
\begin{remark}[Connectedness of the Free Boundary]
It would also be interesting to understand some global properties of the minimizer. For example it is worth asking whether conditions on $\Omega$ and $u_0$ can be found under which the free boundary is connected, i.e. $N = 1$ in Theorem \ref{thm:1.1}. One would expect that $N=1$ is not always the case, for example for dumbbell-shaped domains. Such global properties of the solution are difficult to understand - again due to the lack of a maximum principle for fourth order equations.
\end{remark}
\section{Introduction}\label{sec:int}
The possibility of achieving control over a quantum system is the fundamental prerequisite for developing a new form of technology based on quantum effects \cite{QTECH,QTECH1,QTECH2}. In particular this is an essential requirement for quantum computation, quantum communication, and more generally for all other data processing procedures that involve quantum systems as information carriers \cite{NC00}.
In many experimental settings, quantum control is implemented via an electromagnetic field interacting with the system of interest, as happens for \textit{cold atoms} in optical lattices \cite{nca}, for \textit{trapped ions} \cite{ion}, for \textit{electrons} in quantum dots \cite{dots}, and actually in virtually all experiments in low energy physics. In this context the electromagnetic field can be often treated as a classical field (in the limit of many \textit{quanta}), allowing a semiclassical description of the control over the quantum system \cite{Ale07,DP10,DH08}.
Furthermore in many cases of physical interest the whole process can be effectively formalized by assuming that via proper manipulation of the field parameters the experimenter produces
a series of pulses implementing some specially engineered control Hamiltonians from a discrete set $\{H_1,\ldots, H_m\}$. Such pulses are assumed to be applied in any order, for any durations, by switching them on and off very sharply,
the resulting transformation being a unitary evolution of the form
$e^{-iH_{j_N}t_N} \cdots e^{-iH_{j_1}t_1}$
with $j_1,\ldots,j_N \in \{1,\ldots,m\}$ and $t_1, \ldots, t_N$ being the selected temporal durations (hereafter $\hbar$ is set to unity for simplicity) \cite{NOTA1}. By the \textit{Lie-algebraic rank condition} \cite{Ale07} the unitary operators that can be realized
via such procedure
are those in the connected Lie group associated to the real Lie algebra $\mathfrak{Lie}(H_1 , \ldots , H_m )$ generated by the Hamiltonians $\{H_1,\ldots, H_m\}$ [where $\mathfrak{Lie}(H_1 , \ldots , H_m )$ is formed by the real linear combinations of
$H_j$ and their iterated commutators $i[H_{j_1}, H_{j_2}]$, $i\bm{[}H_{j_1}, i[H_{j_2}, H_{j_3}]\bm{]}$, etc.], i.e., $e^{-i \Xi}$ with $\Xi \in \mathfrak{Lie}(H_1 , \ldots , H_m )$.
In this framework one then says that full (unitary) controllability is achieved if the dimension of $\mathfrak{Lie}(H_1 , \ldots , H_m )$ is large enough to permit the implementation of all possible unitary transformations on the system, i.e., if $\mathfrak{Lie}(H_1 , \ldots , H_m )$ coincides with the
complete algebra $\mathfrak{u}(d)$ formed by self-adjoint $d\times d$ complex matrices \cite{NOTA2}, $d$ being the dimension of the controlled system.
The above scheme is the paradigmatic example of what is typically identified as \textit{open-loop} or \textit{non-adaptive} control, where all the operations are completely determined prior to the control experiment \cite{Ale07,DP10}. In other words the system is driven in the absence of an external feedback loop, i.e., without using any information gathered (via measurement)
\textit{during} the evolution.
It turns out that in quantum mechanics an alternative mechanism of non-adaptive control is available: it is enforced via quantum Zeno dynamics \cite{FP02,ZenoMP}. In this scenario, while measurements are present, the associated outcomes are not used to guide the forthcoming operations: only their effects on the system evolution are exploited (a fact which has no analog in the classical domain). The underlying physical principle is the following. When a quantum system undergoes a sharp (von Neumann) measurement, it is projected into one of the associated eigenspaces of the observable, say the space $\mathscr{H}_P$ characterized by an orthogonal projection $P$. It is then let to undergo a unitary evolution $e^{-i H \Delta t}$ for a short time $\Delta t$ and is measured again via the same von Neumann measurement. The probability to find it in a different measurement eigenspace $\mathscr{H}_{P'}$ orthogonal to the original one $\mathscr{H}_{P}$ is proportional to $(\Delta t)^2$. Instead, with high probability, the system remains in $\mathscr{H}_{P}$, while experiencing an effective unitary rotation of the form $e^{-i h \Delta t}$ induced by the projected Hamiltonian $h := PHP$ \cite{FP02,ZenoMP,ZENO}.
Accordingly, in the limit of infinitely frequent measurements performed within a fixed time interval $t$, the system remains in the subspace $\mathscr{H}_P$, evolving through an effective \textit{Zeno dynamics} described by the operator
\begin{equation} \label{zenosequence}
\lim_{N\to\infty}
(P e^{-i H {t}/{N}} P)^N = P e^{ -i PHP t} = P e^{-i h t}.
\end{equation}
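The limit \eqref{zenosequence} is also easy to visualize numerically. The following short Python sketch (a toy example with arbitrarily chosen operators, assuming NumPy and SciPy; it is not needed for the argument) compares the pulsed evolution with the projected generator for two qubits, with the projection acting on the first one; the discrepancy decreases as $N$ grows.
\begin{verbatim}
# Numerical sketch of the Zeno limit (zenosequence) for a toy two-qubit model.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(X, X) + np.kron(Z, I2)          # arbitrary toy Hamiltonian
P = np.kron(np.diag([1.0, 0.0]), I2)        # projection onto |0> of qubit 1

t, N = 1.0, 2000
zeno = np.linalg.matrix_power(P @ expm(-1j * H * t / N) @ P, N)
target = P @ expm(-1j * (P @ H @ P) * t)    # P e^{-i PHP t}
print(np.linalg.norm(zeno - target))        # small, decreases like 1/N
\end{verbatim}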
In Ref.\ \cite{Bur14}, it was shown that, by adopting the quantum Zeno dynamics, the control that the experimenter can enforce on a quantum system can be greatly enhanced. For example, consider the case where the set of engineered Hamiltonians contains only two commuting elements $H_1$ and $H_2$. The associated Lie algebra they generate is just two-dimensional and hence is not sufficient to induce full controllability, even for the smallest quantum system, a qubit --- indeed $\dim[\mathfrak{u}(d=2)] =4$.
Under these conditions, it turns out that for a proper choice of the projection $P$ it may happen that the projected counterparts $h_1=PH_1P$
and $h_2=PH_2P$ of the control Hamiltonians do not commute. Accordingly
the Lie algebra generated by $\{h_1,h_2\}$ can be much larger than the one
associated with $\{H_1, H_2\}$,
and consequently the control exerted much finer.
In particular the enhancement can be exponential in the system size. For instance in Ref.\ \cite{Bur14} an explicit example is given where two commuting Hamiltonians $H_1$ and $H_2$ act on a chain of $n$ qubits, and once a proper Zeno projection $P$ is applied on the first qubit of the chain the resulting
Zeno Hamiltonians $h_1$ and $h_2$ generate the full algebra of traceless Hermitian operators acting on the remaining $n-1$ qubits (which is a Lie algebra of dimension $4^{n-1}$), thus allowing to perform any unitary operations on them.
Moreover, it can be shown that this is indeed a quite general phenomenon. In fact a simple argument \cite{Bur14} shows that if a system is controllable for a specific choice of the parameters, then it is controllable for almost all choices of the parameters (with respect, e.g., to the Lebesgue measure). In the present case it means that, for almost all choices of a rank-$2^{n-1}$ projection $P$ and of two commuting Hamiltonians $\{{H}_1,{H}_2\}$, the system is fully controllable in the projected subspace $\mathscr{H}_{{P}}$ with the Hamiltonians
${h}_1 = {P} {H}_1 {P}$ and ${h}_2 = {P} {H}_2 {P}$.
The aforementioned results of \cite{Bur14} show that as few as two commuting Hamiltonians, when projected on a smaller subspace of dimension $d$ through the Zeno mechanism, may generate the whole Lie algebra $\mathfrak{u}(d)$. The aim of the present article is to investigate the opposite question: given a set of Hamiltonians $\{h_1, \ldots, h_m\}$, which are non-commuting in general, is it possible to extend them to a set of commuting Hamiltonians $\{ H_1, \ldots, H_m\}$ from which the $h_j$ can be recovered via a proper projection (i.e., $h_j = P H_jP$)?
We call this operation \textit{Hamiltonian purification}, taking inspiration from similar problems which have been investigated in quantum information. For instance, we recall that by the \textit{state purification} \cite{NC00} a quantum mixed state $\rho$ on a system $S \cong \mathscr{H}_d$ is extended to a pure state $|\psi_\rho\rangle$ on a system $S+A \cong \mathscr{H}_d \otimes \mathscr{H}_d$, from which $\rho$ can be recovered through a partial trace over the ancilla system $A \cong \mathscr{H}_d$. Another similar result can be obtained for the \textit{channel purification (Stinespring dilation theorem)} or for the \textit{purification of positive operator-valued measure (POVM) (Naimark extension theorem)}, according to which all the completely positive trace-preserving linear maps and all the generalized measurement procedures, respectively, can be described as unitary transformations
on an extended system followed by partial trace~\cite{NC00,REV}.
In what follows we start by presenting a formal characterization of the Hamiltonian purification problem and of the associated notions of {\it spanning-set purification} and {\it generator purification} of an algebra (see Sec.\ \ref{sec1}). Then we prove some theorems regarding the minimal dimension $d_E^{\text{(min)}}$ of the extended Hilbert space needed to purify a given set of operators $\{h_1, \ldots, h_m\}$.
Specifically, in Sec.\ \ref{sec2} we analyze the case in which one is interested in purifying two linearly independent Hamiltonians. In this context we provide the exact value for $d_E^{\text{(min)}}$ when the input Hilbert space has dimension $d=2$ or $d=3$ and give lower and upper bounds for the remaining configurations.
In Sec.\ \ref{sec.4} instead we present a generic construction which allows one to put a bound on $d_E^{\text{(min)}}$ when the set of the operators $\{h_1, \ldots, h_m\}$ contains an arbitrary number $m$ of linearly independent
elements. In Sec.\ \ref{sec5} we discuss the case in which the total number of linearly independent elements of $\{h_1, \ldots, h_m\}$
is maximum, i.e., equal to $d^2$ with $d$ being the dimension of the input Hilbert space. Under this condition we compute the exact value of $d_E^{\text{(min)}}$, showing that it is equal to $d^2$. As we shall see this corresponds to provide a spanning-set purification of the whole algebra $\mathfrak{u}(d)$ in terms of the largest commutative subalgebra of $\mathfrak{u}(d^2)$. Finally in Sec.\ \ref{sec:Daniel} we prove that it is always possible to obtain a generator purification of the algebra ${\mathfrak{u}(d)}$ with an extended space of dimension $d_E=d+1$, i.e., in terms of the largest commutative subalgebra of ${\mathfrak{u}(d+1)}$.
Conclusions and perspectives are given in Sec.\ \ref{sec.con}, and the proof of a Theorem is presented in the Appendix.
\section{Definitions and Basic Properties} \label{sec1}
In this section we start by presenting a rigorous formalization of the problem and discuss some basic properties.
\begin{mydef}[\textbf{Hamiltonian purification}]
Let $\mathcal{S} := \{ h_1, \ldots, h_m\}$ be a collection
of $m$ self-adjoint operators (Hamiltonians) acting on a Hilbert space $\mathscr{H}_d$ of dimension $d$. Given then a collection $\mathcal{S}_\mathrm{ext} := \{H_1, \ldots, H_m\}$ of self-adjoint operators acting on an extended Hilbert space $\mathscr{H}_{d_E}$ which includes $\mathscr{H}_d$ as a proper subspace (i.e., $d_E = \dim\mathscr{H}_{d_E} \geq d$), we say that $\mathcal{S}_\text{ext}$ provides a \textit{purification} for $\mathcal{S}$ if all elements of $\mathcal{S}_\text{ext}$ \textit{commute} with each other, i.e.,
\begin{equation}
[H_j,H_{j'}] =0, \quad \text{for all $j,j'$,} \label{commm}
\end{equation}
and are related to those of $\mathcal{S}$ as
\begin{equation}
h_j = P H_j P, \quad \text{for all $j$,} \label{eq1}
\end{equation}
where $P$ is the orthogonal projection onto $\mathscr{H}_{d}$
\cite{NOTA6}.
\end{mydef}
The requirement (\ref{commm}) that the operators of $\mathcal{S}_\text{ext}$
are pairwise commuting implies that such a set spans an Abelian (i.e., commutative) subalgebra of $\mathfrak{u}(d_E)$, and that
$H_j$ can be simultaneously diagonalized with a single unitary operator $U$ \cite{HJ12}, i.e.,
\begin{equation}\label{diago111}
H_1 = U D_1 U^\dag, \ldots ,H_m = UD_m U^\dag,
\end{equation}
with $D_1, \ldots, D_m$ being real diagonal matrices.
By construction, it is clear that each one of the elements of $\mathcal{S}_\text{ext}$ in general depends upon \textit{all} the operators of
the set $\mathcal{S}$ which one wishes to purify, and not just upon the one it extends.
Furthermore, if $h_j$ satisfy some special relations, identifying $\mathcal{S}_\text{ext}$ may be simpler than in the general case.
For instance, if all the elements of $\mathcal{S}$ admit a set of common eigenvectors, they already commute in the subspace spanned by those eigenvectors. Then, we are left with the simpler problem of making the operators commute only on the complementary subspace.
To keep the analysis as general as possible we will not consider these special cases in the following. We will however make
use of the linearity of Eq.\ (\ref{eq1}) to simplify the analysis.
\begin{lem}\label{lemma1}
Let $\mathcal{S}=\{ h_1, \ldots , h_m\}$ be a collection of self-adjoint operators acting on the Hilbert space $\mathscr{H}_d$ and suppose that a purifying set $\mathcal{S}_\text{ext} = \{H_1, \ldots, H_m\}$ can be constructed on $\mathscr{H}_{d_E}$. Then:
\begin{enumerate}
\item Given $\mathcal{S}'=\{ h'_1, \ldots , h'_{m'}\}$ a collection of self-adjoint operators obtained by taking linear combinations of the elements of $\mathcal{S}$, i.e.,
\begin{equation}
h'_i = \sum_{j=1}^m \alpha_{i,j} h_j
\label{eqn:LinearComb}
\end{equation}
with $\alpha_{i,j}$ being elements of a real rectangular $m'\times m$ matrix, then a
purifying set for $\mathcal{S}'$ on $\mathscr{H}_{d_E}$ is provided by $\mathcal{S}'_\text{ext} = \{H'_1, \ldots, H'_{m'}\}$ with elements
\begin{equation}
H'_i = \sum_{j=1}^m \alpha_{i,j} H_j ;
\label{eqn:LinearCombExt}
\end{equation}
\item Any subset of linearly independent elements of $\mathcal{S}$ corresponds to a subset of linearly independent elements in $\mathcal{S}_\text{ext}$ (the opposite statement being not true in general, i.e., linear independence among the elements of $\mathcal{S}_\text{ext}$ does not imply
linear independence among the elements of $\mathcal{S}$);
\item For $\lambda_1, \ldots ,\lambda_m \in \mathds{R}$, calling $I_d$ the identity on $\mathscr{H}_d$ and $I_{d_E}$ the identity on $\mathscr{H}_{d_E}$, a purifying set for
\begin{equation}
\{ h_1 + \lambda_1 I_d, \ldots , h_m + \lambda_m I_d\}
\end{equation}
is given by
\begin{equation}
\{H_1 + \lambda_1 I_{d_E}, \ldots , H_m + \lambda_m I_{d_E}\} ;
\end{equation}
\item For any unitary $U \in \mathcal{U}(d)$, setting $\widetilde{U} = U \oplus I_{d_E-d} \in \mathcal{U}(d_E)$,
a purifying set for
\begin{equation}
\{U h_1 U^\dagger, \ldots , U h_m U^\dagger\}
\end{equation}
is given by
\begin{equation}
\{\widetilde{U} H_1 \widetilde{U}^\dagger , \ldots , \widetilde{U} H_m \widetilde{U}^\dagger\} .
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
These facts are all trivially verified.
\end{proof}
Property 1 of Lemma \ref{lemma1} implies that a purifying set $\mathcal{S}_\text{ext}=\{H_1, \ldots, H_m\}$ can be extended by linearity to a purification of any linear combinations of the elements of $\mathcal{S}=\{ h_1, \ldots, h_m\}$.
Accordingly we can say that the purification of $\mathcal{S}$ by $\mathcal{S}_\text{ext}$ naturally induces a purification of the algebra spanned by the former by the algebra of the latter (more on this in Sec.\ \ref{SEC:al}). It is also clear that the fundamental parameter of the Hamiltonian purification problem is not the number of elements of $\mathcal{S}$ but instead the maximum number of linearly independent elements which can be found in $\mathcal{S}$. Therefore, without
loss of generality, in the following we will assume $m$ to coincide with such a number, i.e., that all the elements of $\mathcal{S}$ are linearly
independent. Then, by Property 2 of Lemma \ref{lemma1}
also the elements of $\mathcal{S}_\text{ext}$ share the same property. By the same token, also the normalization of the operators $h_j$ can be fixed \textit{a priori}. Property 3 can be used instead to assume that all the elements of $\mathcal{S}$ be traceless (an option which we shall invoke from time to time to simplify the analysis). Finally Property 4 can be exploited to arbitrarily fix a basis on $\mathscr{H}_d$, e.g., the one which diagonalizes the first element of $\mathcal{S}$.
As we shall see in the following sections the mere possibility of finding a purification for a generic set $\mathcal{S}$ can be easily proved. A less trivial issue is to determine the \textit{minimal} dimension $d_E^{\text{(min)}}$ of the Hilbert space $\mathscr{H}_{d_E}$ which guarantees the existence of a purifying set for a generic collection $\mathcal{S}$ on $\mathscr{H}_d$. Clearly the value of $d_E^{\text{(min)}}$ will depend on the dimension $d$ of the Hilbert space $\mathscr{H}_d$ and on the number of (linearly independent) elements $m$ of the set, i.e., $d_E^{\text{(min)}} = d_E^{\text{(min)}}(d,m)$.
By construction it is clear that this quantity cannot be smaller than
$d$ and than $m$, i.e.,
\begin{equation}
d_E^{\text{(min)}} \geq \max\{d, m\}.
\end{equation}
This is a simple relation which, on one side, follows from the observation
that $ \mathscr{H}_{d_E}$ being an extension of $\mathscr{H}_d$ must have
dimension $d_E$ at least as large as $d$. On the other side the inequality
$d_E^{\text{(min)}} \geq m$ can be verified by exploiting the fact that the diagonal
$d_E\times d_E$ matrices $D_j$ entering Eq.\ (\ref{diago111}) must be linearly
independent in order to fulfill Property 2 of Lemma \ref{lemma1}.
Actually for all non-trivial cases the inequality is strict, resulting in
\begin{equation}
d_E^{\text{(min)}} \geq \max\{d+1, m+1\} . \label{trivia}
\end{equation}
In fact when the initial Hamiltonians $\{h_1, \ldots ,h_m\}$ do not already
commute, we need to expand the dimension of the space at least by one,
obtaining $d_E^{\text{(min)}} \geq d+1$. Moreover the inequality $d_E \geq m+1$ always
holds, unless the identity $I_d$ lies in the span of
$\{h_1, \ldots ,h_m\}$. Suppose in fact that we can purify a set of $m$
linearly independent Hamiltonians in dimension $m$; then the linear
span of the $m$ (linearly independent) diagonal matrices $D_j$ in
Eq.\ (\ref{diago111}) includes also the identity matrix
$I_{d_E}$. Because for any unitary $U$ we have
$U I_{d_E} U^\dag = I_{d_E}$,
the projection of $I_{d_E}$ on $\mathscr{H}_d$ gives
the identity on that subspace, and in conclusion
we have that ${I_d}\in \mathop{\text{span}}\nolimits(h_1, \ldots ,h_m)$.
Since this is not true in the general case, we obtain
$d_E^{\text{(min)}} \geq m+1$.
\subsection{Algebra purification} \label{SEC:al}
As anticipated in the previous section the linearity property of the Hamiltonian purification scheme allows us to introduce the notion of purification of an algebra. Specifically
there are at least two different possibilities:
\begin{mydef}[\textbf{Purification(s) of an algebra}] \label{def2}
Let $\mathfrak{a}$ be a {Lie} algebra of {self-adjoint} operators on $\mathscr{H}_d$. Given a commutative Lie algebra $\mathfrak{A}$ of {self-adjoint} operators on $\mathscr{H}_{d_E}$ we say that it provides
\begin{enumerate}
\item a spanning-set purification (or simply an algebra purification) of $\mathfrak{a}$ when we can provide a Hamiltonian purification of a spanning set (e.g., a basis) of the latter in $\mathfrak{A}$;
\item a generator purification of $\mathfrak{a}$ when we can provide a Hamiltonian purification of a generating set of the latter in $\mathfrak{A}$.
\end{enumerate}
\end{mydef}
The spanning-set purification typically requires the purification of more
Hamiltonians than the generator purification. For instance in Sec.\ \ref{sec5}
we shall see that the {(optimal)} spanning-set purification of $\mathfrak{u}(d)$ requires
$\mathfrak{A}$ to be the largest commutative subalgebra of $\mathfrak{u}(d^2)$, while in Sec.\ \ref{sec:Daniel} we shall see the generator purification requires $\mathfrak{A}$ to be the largest commutative subalgebra of $\mathfrak{u}(d+1)$.
At the level of quantum control via the Zeno effect, the advantage posed by the spanning-set purification is associated with the fact that,
in contrast to the scheme based on generator purification,
no complicated concatenation of Zeno pulses would be necessary to realize a desired control over a system on $\mathscr{H}_{d}$:
any unitary operator $e^{-i ht}$ on the latter can in fact be simply obtained as in Eq.\ (\ref{zenosequence}) by choosing $H$ to be the linear combination of
commuting Hamiltonians which purifies $h$ on $\mathscr{H}_{d^2}$.
On the contrary, in the case of generator purification, first we have to decompose
$e^{-i ht}$ into a sequence of pulses of the form $e^{-i h_{j_N}t_N}\cdots e^{-i h_{j_1}t_1}$ with $h_j$ being taken from the generator sets of
operators for which we do have a purification. Then each of the pulses $e^{-i h_{j_k}t_k}$ entering the previous decomposition is realized as in Eq.\ (\ref{zenosequence})
with a proper choice of the purifying Hamiltonians. See Fig.\ \ref{fig:1}
for a pictorial representation.
\begin{figure}[t]
\centering
\includegraphics[scale=.9]{fig1}
\caption{Pictorial representation of the control achieved via spanning-set purification (red line) and generator purification (blue lines) of an algebra. In the former case an arbitrary unitary transformation $e^{-i ht}$ on $\mathscr{H}_{d}$ is obtained via a single Zeno sequence (\ref{zenosequence}) with $H$ being the purification of $h$. For generator purification instead one has to use a collection of Zeno sequences, one for each of the generator pulses $e^{-i h_{j_k}t_k}$ which are needed to implement $e^{-iht}$. The black tick lines represent the iterated projections on the system. }
\label{fig:1}
\end{figure}
\section{Purification of $\bm{m=2}$ Operators}\label{sec2}
In this section we discuss the case of the purification of two linearly independent Hamiltonians (i.e., $m=2$), providing bounds
and exact solutions. In particular we first present a simple construction which shows how to purify into an
extended Hilbert space $\mathscr{H}_{d_E}$ of dimension $d_E=2d$, implying hence $d_E^{\text{(min)}}(d,m=2) \leq 2 d$ (Proposition \ref{prop1}).
Such a result is interesting because it is elegant and simple to prove. However it is certainly not optimal. Indeed we will show that the following inequality always holds
\begin{equation}\label{ris1}
\left\lceil \frac{3d}{2}\right\rceil \leq d_E^{\text{(min)}}(d,m=2) \leq 2 d-1.
\end{equation}
See Proposition \ref{lower_bound} (lower bound) and Proposition \ref{purif2ops_2d-1} (upper bound).
For $d=2$ (qubit) and $d=3$ (qutrit) this allows us to compute exactly $d_E^{\text{(min)}}(d,m=2) $, obtaining
respectively
\begin{align}
&d_E^{\text{(min)}}(d=2,m=2) =3, \quad \text{for a qubit} ,\label{qubit} \\
&d_E^{\text{(min)}}(d=3,m=2) =5, \quad \text{for a qutrit} . \label{qutrit}
\end{align}
For larger values of the system dimension $d$ there is a gap between the lower and upper bounds of the inequality (\ref{ris1}).
Numerical evidence obtained for $d=4,5,6$ however suggests that the former should always be attainable. See Fig.\ \ref{fig:Graphics_5}.
\begin{figure}[t]
\centering
\includegraphics[scale=.35]{Graphics_6c}
\caption{Graphical representation of the bounds (\ref{ris1}) for $d_E^{\text{(min)}}(d,m)$ for $m=2$ as a function of $d$. The blue points give the dimensions for which an explicit construction is known (Propositions \ref{purif_2ops_qubit} and \ref{purif2ops_2d-1}).
The gray line gives the lower bound of Proposition \ref{lower_bound}, and the black points give the values of $d_E^{\text{(min)}}(d,2)$ estimated by numerical inspection.}
\label{fig:Graphics_5}
\end{figure}
\begin{prop}[\textbf{Purification of $\bm{m=2}$ operators with $\bm{d_E = 2 d}$}]\label{prop1}
Let $\mathcal{S}=\{ h_1, h_2\}$ be a collection of two self-adjoint operators acting on the
Hilbert space $\mathscr{H}_d$. Then a purifying set can be constructed on $\mathscr{H}_{d_E} = \mathscr{H}_d\otimes \mathscr{H}_2$, with $\mathscr{H}_2$ being two-dimensional (qubit space) (i.e., $d_E = 2d$). In particular we can take
\begin{align}
H_1 &= h_1 \otimes I_2 + h_2 \otimes X , \nonumber \\
H_2 &= h_2 \otimes I_2 + h_1 \otimes X , \nonumber \\
P&= I_d \otimes (I_2 + Z)/2,
\end{align}
where
$X$ and $Z$ are the Pauli operators on $\mathscr{H}_2$~\cite{noteprop1}.
\end{prop}
\begin{proof}
The proof easily follows from the properties of Pauli operators.
But, to get a better intuition on what is going on, it is useful to adopt the following block-matrix representation for $H_j$ and $P$, i.e.,
\begin{equation}
H_1 =\left( \begin{array}{cc}
h_1 & h_2 \\
h_2 & h_1 \end{array}
\right), \quad
H_2 =\left( \begin{array}{cc}
h_2& h_1 \\
h_1 & h_2 \end{array}
\right), \quad
P =\left( \begin{array}{cc}
I_d & 0 \\
0 & 0 \end{array}
\right),
\end{equation}
from which the commutativity is evident~\cite{NOTAREMARK}.
\end{proof}
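The construction of Proposition \ref{prop1} can also be checked numerically. In the following sketch (assuming NumPy; the two Hamiltonians $h_1$ and $h_2$ are drawn at random purely for illustration) the commutation relation \eqref{commm} and the projection property \eqref{eq1} are verified directly.
\begin{verbatim}
# Numerical sketch of Proposition 1: H1 = h1 (x) I + h2 (x) X, etc.
import numpy as np

rng = np.random.default_rng(0)
d = 4
def random_hermitian(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

h1, h2 = random_hermitian(d), random_hermitian(d)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H1 = np.kron(h1, I2) + np.kron(h2, X)
H2 = np.kron(h2, I2) + np.kron(h1, X)
P = np.kron(np.eye(d), np.diag([1.0, 0.0]))   # I_d (x) (I2 + Z)/2

print(np.linalg.norm(H1 @ H2 - H2 @ H1))                 # [H1, H2] = 0
print(np.linalg.norm((P @ H1 @ P)[::2, ::2] - h1))       # P H1 P = h1
print(np.linalg.norm((P @ H2 @ P)[::2, ::2] - h2))       # P H2 P = h2
\end{verbatim}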
As we shall see in the next section, Proposition \ref{prop1}
admits a generalization for arbitrary values of $m$. Specifically, independently of the dimension of $\mathscr{H}_d$ (e.g., also for infinite-dimensional systems), we can construct a purification of $m$ not necessarily commuting Hamiltonians, by simply adding an $m$-level system to the original Hilbert space. In the case $\mathscr{H}_d$ is finite-dimensional this implies that a purification for $m$ Hamiltonians can always be achieved with an extended Hilbert space which has at most $m$ times the dimension of the original one, i.e., $d_E = m d$. This of course is not the best option. Indeed already for $d=2$ (qubit) and $m = 2$, it is possible to show (see Proposition \ref{purif_2ops_qubit} below) that the purification of two arbitrary Hamiltonians $h_1$ and $h_2$ is attained with a qutrit, i.e., $d_E= d+1= 3$, and this is clearly the optimal solution.
\begin{prop}[\textbf{Optimal purification of $\bm{m=2}$ operators of a qubit}]
\label{purif_2ops_qubit}
Let $\mathcal{S}=\{ h_1, h_2\}$ be a collection composed of two self-adjoint operators acting on the Hilbert space $\mathscr{H}_2$ of a qubit. Then a purifying set can be constructed on the Hilbert space $\mathscr{H}_{d_E}$ of dimension $d_E=3$ (qutrit space).
\end{prop}
\begin{proof}
We prove the claim by providing an explicit purification. To do so we first notice that, up to irrelevant additive and renormalization factors, the operators $h_1$ and $h_2$ can be expressed as
\begin{equation}
h_1 = Z ,\qquad
h_2 = Z + \alpha (X \cos\theta +Y \sin\theta ) ,
\end{equation}
with $\alpha$ and $\theta$ being real parameters.
Indicating then with $\{ |0\rangle,|1\rangle\}$ the eigenvectors of $Z$, we define
$\mathscr{H}_{d_E}$ as the space spanned by the vectors $\{ |0\rangle, |1\rangle, |2\rangle\}$
with $|2\rangle$ being an extra state which is assumed to be orthogonal to both $|0\rangle$ and $|1\rangle$. We hence introduce the operators on $\mathscr{H}_{d_E}$ which in the basis
$\{ |0\rangle, |1\rangle, |2\rangle\}$ have the following matrix form,
\begin{align}
\widetilde{H}_1
&=\left(
\begin{array}{c|c}
Z&\begin{array}{c}0\\\sqrt{2}\end{array}\\
\hline
\begin{array}{cc}0&\sqrt{2}\end{array}&0
\end{array}
\right),
\nonumber\\
\widetilde{H}_2
&=\left(
\begin{array}{c|c}
M&\begin{array}{c}\sqrt{2}\,e^{-i\theta}\\0\end{array}\\
\hline
\begin{array}{cc}\sqrt{2}\,e^{i\theta}&0\end{array}&0
\end{array}
\right),
\label{eqn:PurificationPauli}
\end{align}
with $M := X \cos\theta + Y \sin\theta$.
One can easily verify that they commute, $[ \widetilde{H}_1, \widetilde{H}_2] =0$, and when projected on the subspace $\{ |0\rangle, |1\rangle\}$ they yield the matrices $Z$ and $M$, respectively. Defining hence $H_1$ and $H_2$ as the operators
\begin{equation}
H_1 = \widetilde{H}_1,\qquad
H_2 = \widetilde{H}_1 + \alpha \widetilde{H}_2 ,
\end{equation}
one notices that this is indeed a purifying set of $\mathcal{S}$.
\end{proof}
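The three-dimensional matrices above can likewise be checked directly. A minimal numerical sketch (assuming NumPy, with arbitrary illustrative values for $\alpha$ and $\theta$) is:
\begin{verbatim}
import numpy as np

alpha, theta = 0.7, 1.3            # arbitrary real parameters
s2 = np.sqrt(2)

# matrices of Eq. (PurificationPauli) in the basis {|0>, |1>, |2>}
Ht1 = np.array([[1, 0, 0],
                [0, -1, s2],
                [0, s2, 0]], dtype=complex)
Ht2 = np.array([[0, np.exp(-1j*theta), s2*np.exp(-1j*theta)],
                [np.exp(1j*theta), 0, 0],
                [s2*np.exp(1j*theta), 0, 0]])

H1, H2 = Ht1, Ht1 + alpha * Ht2
P = np.diag([1.0, 1.0, 0.0])

Z = np.diag([1.0, -1.0])
M = np.array([[0, np.exp(-1j*theta)], [np.exp(1j*theta), 0]])

assert np.allclose(H1 @ H2, H2 @ H1)                      # the purified pair commutes
assert np.allclose((P @ H1 @ P)[:2, :2], Z)               # recovers h1 = Z
assert np.allclose((P @ H2 @ P)[:2, :2], Z + alpha * M)   # recovers h2 = Z + alpha M
\end{verbatim}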
For arbitrary values of $d$ an improvement with respect to Proposition \ref{prop1} is obtained as follows:
\begin{prop}[\textbf{Purification of $\bm{m=2}$ operators with $\bm{d_E = 2 d-1}$}]\label{purif2ops_2d-1}
Let $\mathcal{S}=\{ h_1, h_2\}$ be a collection composed of two self-adjoint operators acting on the Hilbert space $\mathscr{H}_d$. Then, a purifying set can be constructed on $\mathscr{H}_{d_E} = \mathscr{H}_{2d-1}$, implying hence $d_E^{\text{(min)}}(d, m=2) \leq 2 d-1$.
\end{prop}
\begin{proof}
According to Eq.\ (\ref{diago111}) to construct a purifying set we have to find a unitary matrix $U \in \mathcal{U}(2d-1)$ such that
\begin{align}
h_1 = PUD_1 U^\dagger {P}, \nonumber\\
h_2 = PUD_2 U^\dagger {P} ,
\label{eqh1h2}
\end{align}
with $D_1,D_2 \in \mathop{\mathcal{D}iag}\nolimits(2d-1)$ being real diagonal matrices of dimension $2d-1$.
In $\mathscr{H}_{d_E}= \mathscr{H}_{d} \oplus \mathscr{H}_{d-1}$ we can write
\begin{equation}
P = \left(\begin{array}{c|c}
I_d & 0 \\
\hline
0 & 0_{d-1}
\end{array}\right), \qquad
PU = \left(\begin{array}{c|c}
L & R \\
\hline
0 & 0_{d-1}
\end{array}\right),
\end{equation}
where $L$ is a $d \times d$ matrix, $R$ is a $d \times (d-1)$ matrix, and the rows of $PU$ are orthogonal to each other, $L L^\dagger + R R^\dagger = I_d$, since $PU U^\dag P=P$.
We then write
\begin{equation}
D_1 = \left(\begin{array}{c|c}
D_1^L & 0 \\
\hline
0 & D_1^R
\end{array}\right)
,\quad
D_2 = \left(\begin{array}{c|c}
D_2^L & 0 \\
\hline
0 & D_2^R
\end{array}\right),
\end{equation}
where $D_1^L, D_2^L$ are diagonal $d \times d$ matrices and $D_1^R, D_2^R$ are diagonal $(d-1) \times (d-1)$ matrices. Then we notice that the equations in \eqref{eqh1h2} are equivalent to
\begin{gather}
h_1 = L D_1^L L^\dagger + R D_1^R R^\dagger, \nonumber\\
h_2 = L D_2^L L^\dagger + R D_2^R R^\dagger ,\nonumber\\
L L^\dagger + R R^\dagger = I_d.
\label{eqComplete}
\end{gather}
To find the purification, we need to solve these equations.
\textit{First equation:}
we choose without loss of generality $h_1$ to be positive definite: this can be obtained by adding $\alpha I_d$ with $\alpha > - \min\sigma(h_1)$ [where $\sigma(X)$ denotes the spectrum of $X$].
Then, $\sqrt{h_1}$ is the Hermitian positive-definite matrix such that $(\sqrt{h_1})^2 = h_1$. We also choose
\begin{equation}
L = \frac{1}{\lambda} \sqrt{h_1}\,V,
\end{equation}
where $V$ is an arbitrary unitary matrix, $VV^\dagger=I_d$.
Notice that for any unitary $V$ we have $\lambda^2 L L^\dagger = \sqrt{h_1}\,VV^\dagger \sqrt{h_1} = h_1$. Accordingly to solve the first of the equations in \eqref{eqComplete} we can simply take $D_1^L = \lambda^2 I_d$ and $D_1^R = 0$.
\textit{Third equation:} recast the third equation in the form
\begin{equation}
R R^\dagger = I_d - L L^\dagger = I_d -\frac{1}{\lambda^2}h_1 .
\end{equation}
This equation can be solved for $R$ if and only if the right-hand side is a positive semi-definite matrix with non-null kernel. This can be accomplished by choosing $\lambda^2 := \max\sigma (h_1)$, so that the smallest eigenvalue of $I_d - \lambda^{-2}h_1$ is equal to zero (this is easily seen in the basis in which $h_1$ is diagonal) \cite{NOTA5}. Explicitly, we can write
\begin{align}
I_d -\frac{1}{\lambda^2} h_1
& = W
\left(\begin{array}{c|c}
D' & 0\\
\hline
0 & 0
\end{array}\right)
W^\dagger \nonumber \\
& =
\left(\begin{array}{c|c}
W' & 0
\end{array}\right)
\left(\begin{array}{c|c}
D' & 0 \\
\hline
0 & 0
\end{array}\right)
\left(\begin{array}{c}
W'^\dagger \\
\hline
0
\end{array}\right),
\end{align}
where $W$ and $D'$ are obtained with the spectral theorem and $W'$ is a $d \times (d-1)$ matrix obtained from $W$ deleting its last column. So a solution to the third equation is given by $R = W'\sqrt{D'}$.
\textit{Second equation:} we exploit the fact that $V$ is so far an arbitrary unitary matrix. We take $D_2^R = 0$, and then we are left with
\begin{equation}
h_2 = \frac{1}{\lambda^2} \sqrt{h_1}\,VD_2^L V^\dagger\sqrt{h_1},
\end{equation}
or equivalently
\begin{equation}
\lambda^2 h_1^{-1/2} h_2 h_1^{-1/2}= VD_2^L V^\dagger ,
\end{equation}
which can be solved for $V$ and $D_2^L$ using the spectral theorem.
In conclusion, the explicit purification of $h_1$ and $h_2$, with $h_1$ positive definite, is found by extending
\begin{equation}
PU = \left(\begin{array}{c|c}
\lambda^{-1}\sqrt{h_1}\,V&W'\sqrt{D'}\\
\hline
0 & 0_{d-1}
\end{array}\right)
\end{equation}
to a unitary matrix $U$ and then expressing $H_1$ and $H_2$ as
\begin{equation}
H_1 =
U
\left(\begin{array}{c|c}
\lambda^2 I_d & 0 \\
\hline
0 & 0_{d-1}
\end{array}\right)
U^\dagger ,\quad
H_2 =
U
\left(\begin{array}{c|c}
D_2^L & 0 \\
\hline
0 & 0_{d-1}
\end{array}\right)
U^\dagger.
\end{equation}
\end{proof}
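The proof above is constructive and can be turned into a short procedure. The following NumPy sketch (an illustration only; the function name \texttt{purify\_pair} and the randomly drawn test matrices are ours, and $h_1$ is taken positive definite from the start rather than shifted) implements the three steps and checks the result.
\begin{verbatim}
import numpy as np

def purify_pair(h1, h2):
    # h1 is assumed positive definite (otherwise shift it by a multiple of I first)
    d = h1.shape[0]
    lam2 = np.linalg.eigvalsh(h1).max()               # lambda^2 = max of sigma(h1)
    w, Wd = np.linalg.eigh(h1)
    sqrt_h1 = Wd @ np.diag(np.sqrt(w)) @ Wd.conj().T
    isqrt_h1 = Wd @ np.diag(1 / np.sqrt(w)) @ Wd.conj().T

    # second equation: lambda^2 h1^{-1/2} h2 h1^{-1/2} = V D2L V^dagger
    d2l, V = np.linalg.eigh(lam2 * isqrt_h1 @ h2 @ isqrt_h1)
    L = sqrt_h1 @ V / np.sqrt(lam2)

    # third equation: R R^dagger = I - h1/lambda^2 (PSD with a zero eigenvalue)
    evals, W = np.linalg.eigh(np.eye(d) - h1 / lam2)  # ascending, evals[0] ~ 0
    R = W[:, 1:] @ np.diag(np.sqrt(np.clip(evals[1:], 0, None)))

    PU = np.hstack([L, R])                            # d x (2d-1), orthonormal rows
    _, _, Vh = np.linalg.svd(PU)
    U = np.vstack([PU, Vh[d:]])                       # completion to a unitary

    D1 = np.diag(np.r_[lam2 * np.ones(d), np.zeros(d - 1)])
    D2 = np.diag(np.r_[d2l, np.zeros(d - 1)])
    return U @ D1 @ U.conj().T, U @ D2 @ U.conj().T

rng = np.random.default_rng(1)
d = 3
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
h1 = a @ a.conj().T + np.eye(d)                       # positive definite
b = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
h2 = (b + b.conj().T) / 2

H1, H2 = purify_pair(h1, h2)
assert np.allclose(H1 @ H2, H2 @ H1)                  # commuting purification on 2d-1 dims
assert np.allclose(H1[:d, :d], h1) and np.allclose(H2[:d, :d], h2)
\end{verbatim}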
\begin{prop}[\textbf{Lower bound on the purification of $\bm{m=2}$ operators}]
\label{lower_bound}
The minimum dimension $d_E^{\text{(min)}}$ of the extended space on which it is possible to purify an arbitrary set of two Hamiltonians $\{ h_1, h_2\}$ acting on $\mathscr{H}_d$ is greater or equal to $3d/2$, i.e., $d_E^{\text{(min)}}(d,m=2) \geq \lceil 3d/2\rceil$.
\end{prop}
\begin{proof}
We want to find $H_1$ and $H_2$,
\begin{equation}
H_1 =
\left(\begin{array}{c|c}
h_1&B_1 \\
\hline
B_1^\dagger&C_1
\end{array}\right), \qquad
H_2 =
\left(\begin{array}{c|c}
h_2& B_2 \\
\hline
B_2^\dagger& C_2
\end{array}\right),
\end{equation}
such that $[H_1,H_2]=0$. Writing the commutators in block form, we obtain
the following three equations
\begin{gather}
[h_1,h_2] = - B_1 B_2^\dagger + B_2 B_1^\dagger, \nonumber\\
h_1B_2 - h_2B_1 = - B_1C_2 + B_2C_1 ,\nonumber\\
B_1^\dagger B_2 - B_2^\dagger B_1 = -[C_1, C_2] .
\end{gather}
Actually in order to prove the thesis, we need to consider just the first of these equations. In general $[h_1, h_2]$ can be of maximal rank, i.e., of rank $d$ \cite{NOTE2}. On the other hand $B_1$ and $B_2$ have ranks at most equal to $d_E-d$ (the number of their columns), and so $- B_1 B_2^\dagger + B_2 B_1^\dagger$ has rank at most equal to $2d_E-2d$. Therefore we have to impose $d = \mathop{\text{rank}}\nolimits(- B_1 B_2^\dagger + B_2 B_1^\dagger) \leq 2d_E - 2d$, which implies $d_E \geq {3}d/2$.
\end{proof}
For $d=2$ the lower bound of Proposition \ref{lower_bound} is trivial, as it only predicts that the minimal value $d_E^{\text{(min)}}$ should be at least 3, which is the smallest dimension we can hope for to construct a space $\mathscr{H}_{d_E}$ that admits a proper bi-dimensional subspace.
In Proposition \ref{purif_2ops_qubit} we have explicitly provided a purification for the case $m=2$ and $d=2$, which uses exactly $d_E=3$, proving hence that the inequality of Proposition \ref{lower_bound} is tight at least in this case. The same result holds for $d=3$, as it is clear by comparing Proposition \ref{lower_bound} with Proposition \ref{purif2ops_2d-1}, yielding Eq.\ (\ref{qutrit}).
\section{An Upper Bound for $\bm{d_E^{\text{(min)}}(d,m)}$ for arbitrary $\bm{m}$ and $\bm{d}$} \label{sec.4}
Here we provide an explicit construction which generalizes Proposition \ref{prop1} to the case in which $\mathcal{S}$ is composed of $m\geq 2$ linearly independent elements and allows us to prove the following upper bound
\begin{eqnarray}
d_E^{\text{(min)}}(d,m) \leq m d . \label{ineqimpo}
\end{eqnarray}
While it is not tight [e.g., see Propositions \ref{purif_2ops_qubit} and \ref{purif2ops_2d-1} as well as Eq.\ (\ref{RISUL1}) below] this bound most likely gives the proper scaling in terms of the parameter $d$ at least for small values of $m$.
\begin{theo}[\textbf{Purification of $\bm{m}$ operators with $\bm{d_E = md}$}]
\label{theo_1}\label{teo1} Let $\mathcal{S}=\{ h_1, \ldots ,h_m\}$ be a collection of self-adjoint operators acting on the Hilbert space $\mathscr{H}_d$. Then, a purifying set can be constructed on $\mathscr{H}_{d_E} = \mathscr{H}_d\otimes \mathscr{H}_m$,
implying hence Eq.\ (\ref{ineqimpo}).
\end{theo}
\begin{proof}
We work in a fixed orthonormal basis, in which $\{|e_1\rangle, \ldots , |e_d\rangle\}$ span $\mathscr{H}_d$, $\{|f_1\rangle, \ldots , |f_m\rangle\}$ span $\mathscr{H}_m$, and thus $\{|e_\ell\rangle \otimes |f_i\rangle\}_{\ell \in \{1,\ldots, d\}, i \in \{1,\ldots, m\}}$ span the extended space $\mathscr{H}_{d_E} = \mathscr{H}_d\otimes \mathscr{H}_m$. We then use the spectral theorem to write $h_i = U_i D_iU_i^{\dagger}$, $\forall i$, with $D_i$ and $U_i$ being operators which, in the orthonormal basis $\{|e_1\rangle, \ldots , |e_d\rangle\}$, are described by
diagonal and unitary matrices, respectively.
A purifying set can then be assigned by introducing
the following operator in $\mathscr{H}_{d_E}$
\begin{equation}
W := \frac{1}{\sqrt{m}} \sum_{i=1}^m U_i \otimes f_{1i},
\end{equation}
where $f_{ij} := |f_i\rangle\langle f_j|$, $f_i := f_{ii}=|f_i\rangle\langle f_i|$. One gets
\begin{equation}
W W^\dag = \frac{1}{m} \sum_{i,j=1}^m U_i U_j^\dag \otimes f_{1i} f_{j1} = I_d \otimes f_{1} =: P.
\end{equation}
Therefore, $W$ is a partial isometry in $\mathscr{H}_{d_E}$ and $P$ is the orthogonal projection onto its range $\mathscr{H}_d\otimes \mathbb{C} |f_1\rangle\cong \mathscr{H}_d$. Now consider its polar decomposition $W=P U$ for some (non-unique) unitary $U$ on $\mathscr{H}_{d_E}$. [In terms of representative matrices in the canonical basis the projection $P$ selects the first $d$ rows of an arbitrary $md\times md$ matrix. Therefore,
since the first $d$ rows of $W$ are orthonormal they can be extended to build up a unitary matrix $U \in \mathcal{U}(md)$, such that $W=PU$].
By explicit computation one can then observe that the following identity holds:
\begin{equation}
h_i \otimes f_1 = PU(m D_i \otimes f_i)U^{\dagger} P.
\end{equation}
Accordingly the purifying set can be identified with the operators
$H_i =U (m D_i \otimes f_i)U^\dag$.
\end{proof}
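A direct numerical illustration of this construction is straightforward. In the sketch below (an illustration only; the name \texttt{purify\_set} is ours, and the tensor factors are ordered as $\mathscr{H}_d\otimes\mathscr{H}_m$ so that the rows in the range of $P$ are those with ancilla index $f_1$) the partial isometry $W$ is completed to a unitary and the projected operators are compared with the inputs.
\begin{verbatim}
import numpy as np

def purify_set(hs):
    m, d = len(hs), hs[0].shape[0]
    E = np.eye(m)
    eig = [np.linalg.eigh(h) for h in hs]          # h_i = U_i D_i U_i^dagger

    # W = (1/sqrt(m)) sum_i U_i (x) |f_1><f_i|
    W = sum(np.kron(eig[i][1], np.outer(E[0], E[i])) for i in range(m)) / np.sqrt(m)

    # complete the d orthonormal rows of W (those in the range of P) to a unitary U with PU = W
    sel = [l * m for l in range(d)]                # rows with ancilla index f_1
    top = W[sel, :]
    _, _, Vh = np.linalg.svd(top)
    U = np.zeros((d * m, d * m), dtype=complex)
    U[sel, :] = top
    U[[k for k in range(d * m) if k not in sel], :] = Vh[d:]

    Hs = [U @ np.kron(m * np.diag(eig[i][0]), np.outer(E[i], E[i])) @ U.conj().T
          for i in range(m)]
    P = np.kron(np.eye(d), np.outer(E[0], E[0]))
    return Hs, P

rng = np.random.default_rng(2)
d, m = 2, 3
hs = []
for _ in range(m):
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    hs.append((a + a.conj().T) / 2)

Hs, P = purify_set(hs)
f1 = np.outer(np.eye(m)[0], np.eye(m)[0])
for i in range(m):
    assert np.allclose(P @ Hs[i] @ P, np.kron(hs[i], f1))    # h_i (x) f_1 is recovered
    for j in range(m):
        assert np.allclose(Hs[i] @ Hs[j], Hs[j] @ Hs[i])     # all H_i commute
\end{verbatim}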
\section{Optimal purification of the whole algebra $\bm{(m=d^2)}$}\label{sec5}
In this section we focus on the case where the set $\mathcal{S}$ one wishes to purify is large enough to span the whole algebra $\mathfrak{u}(d)$ of $\mathscr{H}_d$, i.e.,
according to Definition \ref{def2}, we study the spanning-set purification problem of $\mathfrak{u}(d)$.
This corresponds to having $m=d^2$ linearly independent elements in $\mathcal{S}$ (the maximum allowed by the dimension of the Hilbert space of the problem). It turns out that for this special case $d_E^{\text{(min)}}$ can be computed exactly showing that it saturates the bound of Eq.\ (\ref{trivia}), i.e.,
\begin{equation}
d_E^{\text{(min)}}(d, m=d^2) = d^2 .\label{RISUL1}
\end{equation}
On the one hand this incidentally confirms that the bound of Theorem \ref{theo_1} is not tight. On the other hand it shows that a spanning-set purification for $\mathfrak{u}(d)$
requires the largest commutative subalgebra of $\mathfrak{u}(d^2)$ as minimal purifying algebra.
We start by proving this result for the case of $n$ qubits (i.e., $d=2^n$), as this special case admits a simple analysis
(see Proposition \ref{Sigma_l} and Corollary \ref{tensor_purif}).
The case of arbitrary $d$ is instead discussed in Theorem \ref{theo_2} by presenting a construction which allows one to purify an arbitrary set of $m=d^2$ linearly independent Hamiltonians in an extended Hilbert space of dimension $d^2$. Finally in Theorem \ref{Almost_all} we prove that the explicit solution proposed in Theorem \ref{theo_2} is far from being unique.
\begin{prop}[\textbf{Optimal purification of $\bm{\mathfrak{u}(2)}$}]
\label{Sigma_l}
A spanning-set purification for the algebra of $\mathfrak{u}(2)$
can be constructed on an extended Hilbert space of dimension $d_E=4$, i.e., $\mathscr{H}_{d_E} = \mathscr{H}_4$.
This is the optimal solution.
\end{prop}
\begin{proof} By Property 3 of Lemma \ref{lemma1} we can restrict the problem to the case of the traceless operators of $\mathscr{H}_2$, i.e., we can focus on the $\mathfrak{su}(2)$ subalgebra. A set of linearly independent elements for such a space is provided by the Pauli matrices $\{X,Y,Z\}$.
A purifying set $\{\Sigma_x,\Sigma_y,\Sigma_z\}$ of $\{X,Y,Z\}$ on $\mathscr{H}_4$ can then be exhibited explicitly, considering the following $4\times 4$ matrices,
\begin{align}
& \Sigma_x =
\left(
\begin{array}{cc|cc}
0 & 1 & 1+i & 0 \\
1 & 0 & 1+i & 0 \\ \hline
1-i & 1-i & 1 & 0 \\
0 & 0 & 0 & -1
\end{array}
\right), \nonumber\\
& \Sigma_y =
\left(
\begin{array}{cc|cc}
0 & -i & i & \frac{2+4i}{3}\\
i & 0 & 1 & \frac{1-i}{3} \\ \hline
-i & 1 & 0 & -1 \\
\frac{2-4i}{3}& \frac{1+i}{3}& -1& 0
\end{array}
\right),\nonumber\\
& \Sigma_z =
\left(
\begin{array}{cc|cc}
1 & 0 &- \frac{4+4i}{9}& \frac{7+8i}{9} \\
0 & -1 & \frac{5+5i}{9} & -\frac{16-i}{9}\\ \hline
-\frac{4-4i}{9}& \frac{5-5i}{9} & 0 & -i \\
\frac{7-8i}{9} & -\frac{16+i}{9}& i & 0
\end{array}
\right), \label{Sigma_l_Mat}
\end{align}
and taking $P= I_2 \otimes(I_2 +Z)/2$.
It can be seen by direct calculation that they indeed commute. The optimality of the solution follows from the inequality (\ref{trivia}).
\end{proof}
\begin{cor}[\textbf{Optimal purification of $\bm{\mathfrak{u}(2^n)}$}]
\label{tensor_purif}
Consider $\mathfrak{u}(2^n)$, the Lie algebra of self-adjoint operators acting on $n$ qubits (i.e., $\mathscr{H}_d =\mathscr{H}_2^{\otimes n}$). Then, a spanning-set purification for this algebra can be constructed with operators acting on $\mathscr{H}_{d_E} = \mathscr{H}_4^{\otimes n}$. This is the optimal solution.
\end{cor}
\begin{proof}
This result follows by observing that any element of $\mathfrak{u}(2^n)$ can be expressed as a linear combination of tensor products of $n$ (generalized) Pauli operators $S_\ell$, with the definitions $S_0 = I_2$, $S_1=X$, $S_2=Y$, $S_3=Z$:
\begin{equation}
h_j = \sum_{\substack{\ell_1,\ldots, \ell_n \\ \in \{ 0,1,2,3\} }} \beta^{(j)}_{\ell_1, \ldots, \ell_n}S_{\ell_1} \otimes \cdots \otimes S_{\ell_n}\quad
(j=1,\ldots,2^{2n}).
\end{equation}
Consider then the set formed by the operators
\begin{equation}
H_j = \sum_{\substack{\ell_1, \ldots, \ell_n \\ \in \{ 0,1,2,3\} }} \beta^{(j)}_{\ell_1, \ldots, \ell_n} \Sigma_{\ell_1} \otimes \cdots \otimes \Sigma_{\ell_n},
\end{equation}
with $\Sigma_0 := I_4$ and $\Sigma_1 := \Sigma_x$, $\Sigma_2 := \Sigma_y$, $\Sigma_3 := \Sigma_z$ defined in Eq.\ \eqref{Sigma_l_Mat}.
The operators $H_j$ act on the Hilbert space $\mathscr{H}_{d_E} = \mathscr{H}_4^{\otimes n}= \mathscr{H}_2^{\otimes 2 n}$ and commute with each other (this is because they are tensor products of commuting elements). Finally, by projecting them with $P= [I_2 \otimes (I_2+Z)/2]^{\otimes n}$ they yield $h_j$. The solution is optimal due to Eq.\ (\ref{trivia}).
\end{proof}
The above can be used to bound the minimal value of $d_E$ for the case of an arbitrary finite-dimensional system $\mathscr{H}_d$ by simply embedding it into a collection of qubit systems. Specifically, consider
$\mathcal{S}
=\{ h_1, \ldots, h_m\}$, a collection of $m$ (not necessarily commuting) self-adjoint operators acting on the Hilbert space $\mathscr{H}_d$ of finite dimension $d$. Then, setting $n_0 = \lceil \log_2 d \rceil$, a purifying set for $\mathcal{S}$ can be constructed on
$\mathscr{H}_{d_E} = \mathscr{H}_4^{\otimes n_0}$. This implies that $d_E$ can be chosen to be equal to $4^{n_0}= (2^{n_0})^2\simeq d^2$. As a matter of fact,
this result can be strengthened by showing that indeed $d_E = d^2$ independently of the dimension $d$.
\begin{theo}[\textbf{Optimal purification of $\bm{\mathfrak{u}(d)}$}]
\label{theo_2}
A spanning-set purification for $\mathfrak{u}(d)$ can be constructed on $\mathscr{H}_{d_E} = \mathscr{H}_{d^2}$.
This is the optimal solution.
\end{theo}
\begin{proof}
The proof is given in the Appendix, where a purifying set is explicitly constructed.
\end{proof}
The construction presented in the proof of Theorem \ref{theo_2} in the Appendix provides a matrix $U$ that allows one to perform the purification of all the Hermitian matrices in $\mathfrak{u}(d)$. However, as we now show, almost any unitary matrix will do the job equally well: there is almost complete freedom in the choice of a matrix $U$ that accomplishes the task, which can even be drawn at random in the parameter space.
\begin{theo}
\label{Almost_all}
Almost all unitary matrices $U \in \mathcal{U}(d^2)$ [with respect to (every absolutely continuous measure with respect to) Haar measure] are such that the map $f_{PU} $ defined in the proof of Theorem \ref{theo_2} is surjective. This implies that almost all unitary matrices $U \in \mathcal{U}(d^2)$ provide a purification for all sets of Hermitian operators.
\end{theo}
\begin{proof}
The linear application $f_{PU}$ defined in Eq.\ (\ref{defFU}) maps $\mathop{\mathcal{D}iag}\nolimits(d^2)$ into $\mathfrak{u}(d)$, which are both $d^2$-dimensional real vector spaces, and so it is surjective if and only if its determinant is different from zero.
Calling $x_{\ell,k}$ the entries of the matrix $U$, we see that $f_{PU} $ depends quadratically on the complex variables $x_{\ell,k}$, and its determinant $\det f_{PU}$ is a polynomial in these variables.
Preliminarily, if we take $U$ to be an arbitrary complex matrix, i.e., not necessarily unitary, the Theorem can be straightforwardly proved. In fact the set of $U$'s which make $f_{PU}$ non-surjective is the zero set of the polynomial $p(u_1,u_2, \ldots) := \det f_{PU}$, where $u_1,u_2, \ldots$ are real parameters
which encode the matrix $U$. Such a polynomial is clearly non-vanishing, as we have found in Theorem \ref{theo_2} an instance of $U$ for which $f_{PU}$ is surjective. The zero set of a non-null analytic function is a closed set (as it is the preimage of a closed set), nowhere dense (otherwise the analytic function would be zero on all its connected domain of convergence), and has zero Lebesgue measure. We prove this by induction. The proposition is true for non-null analytic functions of one real variable, as the zero set is discrete. In general, suppose that $g(x_1,x_2,\ldots,x_K)$ is a non-null analytic function of real variables in $\mathds{R}^K$. Then fixing $x_1$, the function $g_{x_1}(x_2,\ldots, x_K):=g(x_1,x_2,\ldots,x_K)$ is an analytic function of $K-1$ variables.
Calling $S$ and $S(x_1)$ the zero sets of $g(x_1,x_2,\ldots,x_K)$ and $g_{x_1}(x_2, \ldots, x_K)$, respectively, by induction hypothesis $S(x_1)$ must have $(K-1)$-dimensional Lebesgue measure zero, for all except countably many values $x_1 \in \mathds{R}$. Then we integrate the characteristic function
\begin{align}
&\int \mathbf{1}_{S}(x_1, x_2,\ldots,x_K)\, dx_1 dx_2\cdots dx_K \nonumber \\
&\qquad
= \int \left( \int \mathbf{1}_{S(x_1)} (x_2,\ldots,x_K)\, dx_2 \cdots dx_K \right) dx_1 \nonumber \\
&\qquad = \int 0\, dx_1\quad\text{(almost everywhere)} \nonumber \displaybreak[0]\\
&\qquad = 0\vphantom{\int}
\end{align}
to achieve the stated result.
The same argument applies also when we restrict $U$ to be unitary. In fact, any unitary matrix can be obtained as an exponential of a Hermitian matrix. So the same reasoning as above applies to the analytic function $g(h_1,\ldots,h_K) = \det f(e^{iH})$ where $h_1,\ldots, h_K$ are real parameters which encode the Hermitian matrix $H$ [formally, the proof proceeds by considering a set of local charts that cover the manifold $\mathcal{U}(d^2)$]. Moreover, it can be shown that the Haar measure on $\mathcal{U}(d^2)$ is obtained from the Lebesgue measure on $\mathfrak{u}(d^2)$ via multiplication by a Jacobian of an analytic function, which is always regular, and the property of having zero measure is preserved under this operation.
\end{proof}
\section{Generator purification of $\bm{\mathfrak{u}(d)}$ into $\bm{\mathscr{H}_{d+1}}$}
\label{sec:Daniel}
The Propositions in Sec.\ \ref{sec2} concern the purification of two Hamiltonians ($m=2$).
In particular, it was proved in Proposition \ref{purif_2ops_qubit} that two non-commuting Hamiltonians acting on the Hilbert space $\mathscr{H}_2$ of a qubit can be purified into two commuting Hamiltonians in an extended Hilbert space $\mathscr{H}_3$, namely, by extending the Hilbert space \textit{by only one dimension}.
This is in general not the case for larger systems: adding one dimension is typically not enough to purify a pair of Hamiltonians for a system of dimension $d\ge3$, as proved in Proposition \ref{lower_bound}.
See also Eq.\ (\ref{ris1}).
On the other hand, Proposition \ref{purif_2ops_qubit} on the optimal purification for $m=2$ and $d=2$ helps us to prove
that one can always find a purification of a generating set of $\mathfrak{u}(d)$ which only involves a $d_E=d+1$ dimensional space. Expressed in the language introduced in Definition \ref{def2} this implies that the largest commutative subalgebra of $\mathfrak{u}(d+1)$ provides a generator purification of $\mathfrak{u}(d)$.
More precisely:
\begin{theo}
\label{Daniel}
A pair of randomly chosen commuting Hamiltonians $H_1$ and $H_2$ on $\mathscr{H}_{d+1}$ almost surely provide a pair of Hamiltonians $h_1$ and $h_2$ which generate the full Lie algebra on $\mathscr{H}_d$, i.e., $\mathfrak{Lie}(h_1,h_2)=\mathfrak{u}(d)$.
In other words, almost all pairs of commuting Hamiltonians in $\mathscr{H}_{d+1}$ are capable of quantum computation in $\mathscr{H}_d$.
\end{theo}
\begin{proof}
To prove this statement, we have only to find an example of such a set $\{H_1,H_2,P\}$ on $\mathscr{H}_{d+1}$ that yields $\{h_1,h_2\}$ generating the full Lie algebra on $\mathscr{H}_d$ (see Ref.\ \cite{Bur14}).
There is a particularly simple pair of generators $\{h_1,h_2\}$ of $\mathfrak{u}(d)$, namely,
\begin{equation}
h_1=\left(
\begin{array}{ccccc}
1&&&&\\
&0&&&\\
&&\ddots&&\\
&&&\ddots&\\
&&&&0
\end{array}
\right),\quad
h_2=\left(
\begin{array}{ccccc}
0&1&&&\\
1&0&1&&\\
&1&0&\ddots&\\
&&\ddots&\ddots&1\\
&&&1&0
\end{array}
\right).
\end{equation}
A proof that these generate $\mathfrak{u}(d)$ is given in Ref.\ \cite{ref:QSI}.
We can purify them in $\mathscr{H}_{d+1}$, by exploiting the formulas presented in Proposition \ref{purif_2ops_qubit} for the purification of a couple of Hamiltonians of a qubit.
Indeed, two $2\times2$ matrices
\begin{equation}
\left(\begin{array}{cc}1&0\\0&0\end{array}\right),\qquad
\left(\begin{array}{cc}0&1\\1&0\end{array}\right)
\end{equation}
are essentially Pauli matrices $Z$ and $X$, and can be purified to
\begin{equation}
\left(\begin{array}{c|cc}1/2&-1/\sqrt{2}&0\\\hline-1/\sqrt{2}&1&0\\0&0&0\end{array}\right),\qquad
\left(\begin{array}{c|cc}0&0&\sqrt{2}\\\hline0&0&1\\\sqrt{2}&1&0\end{array}\right),
\end{equation}
where we have used Properties 1 and 3 of Lemma \ref{lemma1} (multiplication by a constant and shift by the identity matrix) to convert the first matrix into $-(1/2)Z$ and applied the purification formulas in Eq.\ (\ref{eqn:PurificationPauli}), extending the matrices to the top-left by one dimension, instead of to the right-bottom.
This suggests the purification of the above $h_1$ and $h_2$ to
\begin{align}
H_1&=\left(
\begin{array}{c|ccccc}
1/2&-1/\sqrt{2}&0\\\hline
-1/\sqrt{2}&1&0&&&\\
0&0&0&&&\\
&&&\ddots&&\\
&&&&\ddots&\\
&&&&&0
\end{array}
\right),\\
H_2&=\left(
\begin{array}{c|ccccc}
0&0&\sqrt{2}\\\hline
0&0&1&&&\\
\sqrt{2}&1&0&1&&\\
&&1&0&\ddots&\\
&&&\ddots&\ddots&1\\
&&&&1&0
\end{array}
\right).
\end{align}
These matrices indeed commute, $[H_1,H_2]=0$, and reproduce $h_1$ and $h_2$ once projected with the projection
\begin{equation}
P=\left(
\begin{array}{c|ccc}
0&0&\cdots &0\\\hline
0 & \\
\vdots &&I_d& \\
0&
\end{array}
\right).
\end{equation}
The existence of one such example ensures that all the sets $\{H_1,H_2,P\}$ on $\mathscr{H}_{d+1}$, except for a set of measure zero, do the same job, yielding $\{h_1,h_2\}$ generating the full $\mathfrak{u}(d)$ \cite{Bur14}.
\end{proof}
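The commutation and projection properties of this explicit pair are easy to check by hand, or numerically as in the following sketch (an illustration only; the helper name \texttt{generator\_pair} is ours, and the added dimension is indexed first).
\begin{verbatim}
import numpy as np

def generator_pair(d):
    # the explicit commuting pair on H_{d+1} from the proof (index 0 is the added dimension)
    s2 = np.sqrt(2)
    H1 = np.zeros((d + 1, d + 1))
    H1[0, 0], H1[1, 1] = 0.5, 1.0
    H1[0, 1] = H1[1, 0] = -1 / s2
    H2 = np.zeros((d + 1, d + 1))
    H2[0, 2] = H2[2, 0] = s2
    for i in range(1, d):
        H2[i, i + 1] = H2[i + 1, i] = 1.0
    return H1, H2

d = 5
H1, H2 = generator_pair(d)
h1 = np.diag([1.0] + [0.0] * (d - 1))
h2 = np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1)

assert np.allclose(H1 @ H2, 0) and np.allclose(H2 @ H1, 0)  # in fact H1 H2 = H2 H1 = 0
assert np.allclose(H1[1:, 1:], h1) and np.allclose(H2[1:, 1:], h2)  # P recovers h1 and h2
\end{verbatim}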
In Ref.\ \cite{Bur14}, it is shown that almost all pairs of commuting Hamiltonians $\{H_1,H_2\}$ of $n$ qubits are turned into $\{h_1,h_2\}$ capable of quantum computation on $n-1$ qubits, by projecting only a single qubit (i.e., $d_E=2^n$ and $d=2^{n-1}=d_E/2$).
The above Theorem \ref{Daniel} shows that the reduction by only one dimension can already make a big difference.
\section{Conclusions}
\label{sec.con}
In this work we have introduced the notion of Hamiltonian purification and the associated notion of algebra purification. As discussed in the Introduction these mathematical properties arise in the context of quantum control induced via a quantum Zeno effect \cite{Bur14}. We focus specifically on the problem of identifying the minimal dimension $d_E^{\text{(min)}}(d,m)$ which is needed in order to purify a generic set of $m$ linearly independent Hamiltonians, providing bounds and exact analytical results in many cases of interest. In particular the value of $d_E^{\text{(min)}}(d,m=d^2)$ has been exactly computed: this corresponds to the case where one wishes to induce a spanning-set purification of the whole algebra of operators acting on the input Hilbert space.
For smaller values of $m$, apart from some special cases discussed in Sec.\ \ref{sec2}, the quantity $d_E^{\text{(min)}}(d,m)$ is still unknown, e.g., see Fig.\ \ref{fig:Graphics_4}, which refers to the case $d=3$. Finally
for generator purification of $\mathfrak{u}(d)$ we showed that a $(d+1)$-dimensional Hilbert space can be sufficient. This allowed us to strengthen the argument in Ref.\ \cite{Bur14}: a rank-$d$ projection suffices to turn commuting Hamiltonians on the $(d+1)$-dimensional Hilbert space into a universal set in the $d$-dimensional Hilbert space.
\begin{figure}[t]
\centering
\includegraphics[scale=.23]{Graphics_4c}
\caption{Plots of the admissible regions for $d_E^{\text{(min)}}(d,m)$
for the qutrit case ($d=3$) as functions of $m$. The blue points give the dimensions for which an explicit construction is known.
The gray lines give the known lower bounds on $d_E^{\text{(min)}}(3,m)$.
The black points give the values of $d_E^{\text{(min)}}(3,m)$ estimated by numerical inspection. }
\label{fig:Graphics_4}
\end{figure}
\acknowledgments
This work was partially supported by the Italian National Group of Mathematical Physics (GNFM-INdAM), by PRIN 2010LLKJBX on ``Collective quantum phenomena: from strongly correlated systems to quantum simulators,'' and by Grants-in-Aid for
Scientific Research (C) (No.\ 22540292 and No.\ 26400406) from JSPS, Japan.
\label{sec:introduction}
Back-translation \citep{sennrich2016improving} makes it possible to naturally exploit monolingual corpora in Neural Machine Translation (NMT) by using a reverse model to generate a synthetic parallel corpus. Despite its simplicity, this technique has become a key component in state-of-the-art NMT systems. For instance, the majority of WMT19 submissions, including the best performing systems, made extensive use of it \citep{barrault2019findings}.
While the synthetic parallel corpus generated through back-translation is typically combined with real parallel corpora, iterative or online variants of this technique also play a central role in unsupervised machine translation \citep{artetxe2018usmt,artetxe2018unmt,artetxe2019effective,lample2018unsupervised,lample2018phrase,marie2018unsupervised,conneau2019crosslingual,song2019mass,liu2020multilingual}. In iterative back-translation, both NMT models are jointly trained using synthetic parallel data generated on-the-fly with the reverse model, alternating between both translation directions iteratively. While this enables fully unsupervised training without parallel corpora, some initialization mechanism is still required so the models can start producing sound translations and provide a meaningful training signal to each other. For that purpose, state-of-the-art approaches rely on either a separately trained unsupervised Statistical Machine Translation (SMT) system, which is used for warmup during the initial back-translation iterations \citep{marie2018unsupervised,artetxe2019effective}, or large-scale pre-training through masked denoising, which is used to initialize the weights of the underlying encoder-decoder \citep{conneau2019crosslingual,song2019mass,liu2020multilingual}.
In this paper, we aim to understand the role that the initialization mechanism plays in iterative back-translation. For that purpose, we mimic the experimental settings of \citet{artetxe2019effective}, and measure the effect of using different initial systems for warmup: the unsupervised SMT system proposed by \citet{artetxe2019effective} themselves, supervised NMT and SMT systems trained on both small and large parallel corpora, and a commercial Rule-Based Machine Translation (RBMT) system. Despite the fundamentally different nature of these systems, our analysis reveals that iterative back-translation has a strong tendency to converge to a similar solution. Given the relatively small impact of the initial system, we conclude that future research on unsupervised machine translation should focus more on improving the iterative back-translation mechanism itself.
\section{Iterative back-translation}
\label{sec:backtranslation}
We next describe the iterative back-translation implementation used in our experiments, which was proposed by \citet{artetxe2019effective}. Note, however, that the underlying principles of iterative back-translation are very general, so our conclusions should be valid beyond this particular implementation.
The method in question trains two NMT systems in opposite directions following an iterative process where, at every iteration, each model is updated by performing a single pass over a set of $N$ synthetic parallel sentences generated through back-translation. After iteration $a$, the synthetic parallel corpus is entirely generated by the reverse NMT model. However, so as to ensure that the NMT models produce sound translations and provide a meaningful training signal to each other, the first $a$ warmup iterations progressively transition from a separate \textbf{initial system} to the reverse NMT model itself. More concretely, iteration $t$ uses $N_{init} = N \cdot \max (0, 1 - t / a)$ back-translated sentences from the reverse initial system, and the remaining $N - N_{init}$ sentences are generated by the reverse NMT model. In the latter case, half of the translations use random sampling \citep{edunov2018understanding}, which produces more varied translations, whereas the other half are generated through greedy decoding, which produces more fluent and predictable translations. Following \citet{artetxe2019effective}, we set $N=1,000,000$ and $a=30$, and perform a total of 60 such iterations. Both NMT models use the big transformer implementation from Fairseq\footnote{\url{https://github.com/pytorch/fairseq}}, training with a total batch size of 20,000 tokens with the exact same hyperparameters as \citet{ott2018scaling}. At test time, we use beam search decoding with a beam size of 5.
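To make the warmup schedule explicit, the following sketch (an illustrative pseudo-driver only, not the authors' implementation; \texttt{synthetic\_corpus} and \texttt{train\_one\_pass} are hypothetical placeholders) computes the per-iteration composition of the synthetic corpus.
\begin{verbatim}
N, a, total_iters = 1_000_000, 30, 60

for t in range(1, total_iters + 1):
    n_init = int(N * max(0.0, 1.0 - t / a))  # back-translated by the reverse initial system
    n_nmt = N - n_init                       # back-translated by the reverse NMT model
    n_sampling = n_nmt // 2                  # half generated with random sampling ...
    n_greedy = n_nmt - n_sampling            # ... and half with greedy decoding
    # corpus = synthetic_corpus(n_init, n_sampling, n_greedy)  # hypothetical helpers
    # train_one_pass(forward_model, corpus)
\end{verbatim}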
\section{Experimental settings}
\label{sec:settings}
So as to better understand the role of initialization in iterative back-translation, we train different English-German models using the following \textbf{initial systems} for warmup:
\begin{itemize}
\item \textbf{RBMT}: We use the commercial Lucy LT translator \citep{alonso2003comprendium}, a traditional transfer-based RBMT system combining human crafted computational grammars and monolingual and bilingual lexicons.
\item Supervised \textbf{NMT}: We use the Fairseq implementation of the big transformer model using the same hyperparameters as \citet{ott2018scaling}. We train two separate models: one using the concatenation of all parallel corpora from WMT 2014, and another one using a random subset of 100,000 sentences. In both cases, we use early stopping according to the cross-entropy in newstest2013.
\item Supervised \textbf{SMT}: We use the Moses \citep{koehn2007moses} implementation of phrase-based SMT \citep{koehn2003statistical} with default hyperparameters, using FastAlign \citep{dyer2013simple} for word alignment. We train two separate models using the same parallel corpus splits as for NMT. In both cases, we use a 5-gram language model trained with KenLM \citep{heafield2013scalable} on News Crawl 2007-2013, and apply MERT tuning \citep{och2003MERT} over newstest2013.
\item \textbf{Unsupervised}: We use the unsupervised SMT system proposed by \citet{artetxe2019effective}, which induces an initial phrase-table using cross-lingual word embedding mappings, combines it with an n-gram language model, and further improves the resulting model through unsupervised tuning and joint refinement.
\end{itemize}
For each initial system, we train a separate NMT model through iterative back-translation as described in Section \ref{sec:backtranslation}. For that purpose, we use the News Crawl 2007-2013 monolingual corpus as distributed in the WMT 2014 shared task.\footnote{Note that the final systems do not see any parallel data during training, even if some initial systems are trained on parallel data. Thanks to this, we can measure the impact of the initial system in a controlled environment, which is the goal of the paper. In practical settings, however, better results could likely be obtained by combining real and synthetic parallel corpora.} Preprocessing is done using standard Moses tools, and involves punctuation normalization, tokenization with aggressive hyphen splitting, and truecasing.
We \textbf{evaluate} in newstest2014 using tokenized BLEU, and compare the performance of the different final systems after iterative back-translation and the initial systems used in their warmup.\footnote{Note that all systems use the exact same tokenization, so the reported BLEU scores are comparable among them.} However, this only provides a measure of the \textbf{quality} of the different systems, but not the similarity of the translations they produce. So as to quantify how similar the translations of two systems are, we compute their corresponding BLEU scores taking one of them as the reference. This way, we report the average \textbf{similarity} of each final system with the rest of final systems, and analogously for the initial ones. Finally, we also compute the similarity between each initial system and its corresponding final system, which measures how much the final solution found by iterative back-translation differs from the initial one.
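The pairwise similarity scores can be computed with any BLEU implementation. The sketch below (an illustration only, using the \texttt{sacrebleu} package rather than the tokenized Moses BLEU behind the reported numbers, so the exact values may differ slightly) shows the computation for a set of system outputs over the same test set.
\begin{verbatim}
import itertools
import sacrebleu

def pairwise_similarity(outputs):
    # outputs: dict mapping system name -> list of translations of the same test set
    sims = {}
    for a, b in itertools.permutations(outputs, 2):
        # BLEU of system a's translations, taking system b's translations as the reference
        sims[(a, b)] = sacrebleu.corpus_bleu(outputs[a], [outputs[b]]).score
    return sims
\end{verbatim}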
\section{Results}
\label{sec:results}
Table \ref{tab:test} reports the test scores of different initial systems along with their corresponding final systems after iterative back-translation. As can be seen, the standard deviation across final systems is substantially lower than across initial systems (1.7 vs 5.6 in German-to-English and 1.4 vs 4.9 in English-to-German), which shows that iterative back-translation tends to converge to solutions of a similar quality. This way, while the initial system does have a certain influence on final performance, differences greatly diminish after applying iterative back-translation. For instance, the full NMT system is 13.4 points better than the RBMT system in German-to-English, but this difference goes down to 2.3 points after iterative back-translation.
\begin{table}[t]
\begin{center}
\begin{small}
\addtolength{\tabcolsep}{-1.8pt}
\begin{tabular}{lccccc}
\toprule
& \multicolumn{2}{c}{DE-EN} && \multicolumn{2}{c}{EN-DE} \\
\cmidrule{2-3} \cmidrule{5-6}
& init & final && init & final \\
\midrule
RBMT & 19.1 & 27.3 && 15.6 & 22.8 \\
NMT (full) & 32.5 & 29.6 && 27.6 & 24.9 \\
NMT (100k) & 15.2 & 25.0 && 12.5 & 20.8 \\
SMT (full) & 25.5 & 28.3 && 20.5 & 23.3 \\
SMT (100k) & 19.6 & 25.0 && 16.3 & 21.0 \\
Unsupervised & 20.1 & 26.1 && 15.8 & 21.9 \\
\midrule
Average & 22.0 & 26.9 && 18.1 & 22.4 \\
Standard dev. & 5.6 & 1.7 && 4.9 & 1.4 \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\caption{Test results using different initial systems for warmup (BLEU), before (init column) and after iterative back-translation (final column).}
\label{tab:test}
\end{table}
\begin{table}[t]
\begin{center}
\begin{small}
\addtolength{\tabcolsep}{-1.8pt}
\begin{tabular}{lccccccc}
\toprule
& \multicolumn{3}{c}{DE-EN} & & \multicolumn{3}{c}{EN-DE} \\
\cmidrule{2-4} \cmidrule{6-8}
& init & init & final && init & init & final \\
& init & final & final && init & final & final \\
\midrule
RBMT & 23.8 & 27.6 & 48.0 && 20.0 & 25.0 & 41.7 \\
NMT (full) & 28.5 & 42.2 & 50.4 && 25.0 & 36.3 & 43.8 \\
NMT (100k) & 21.8 & 21.9 & 47.2 && 18.1 & 17.9 & 41.1 \\
SMT (full) & 35.4 & 33.2 & 53.4 && 30.5 & 25.2 & 46.7 \\
SMT (100k) & 31.5 & 27.4 & 48.4 && 28.0 & 22.4 & 42.4 \\
Unsupervised & 28.4 & 31.7 & 48.3 && 24.3 & 24.2 & 41.8 \\
\midrule
Average & 28.2 & 30.7 & 49.3 && 24.3 & 25.1 & 42.9 \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\caption{Average similarity across initial systems and final systems, as well as each initial system and its corresponding final system (BLEU).}
\label{tab:similarity}
\end{table}
Interestingly, better initial systems do not always lead to better final systems. For instance, the initial RBMT system is weaker than both the unsupervised system and the small SMT system, yet it leads to a better final system after iterative back-translation. Similarly, the small SMT model is substantially better than the small NMT model in German-to-English (19.6 vs 15.2), yet they both lead to the exact same BLEU score of 25.0 after iterative back-translation. We hypothesize that certain properties of the initial system are more relevant than others and, in particular, our results suggest that the adequacy and lexical coverage of the initial system have a larger impact than its fluency.
At the same time, it is remarkable that iterative back-translation has a generally positive impact, bringing an average improvement of 4.9 BLEU points for German-to-English and 4.3 BLEU points for English-to-German. Nevertheless, the full NMT system is a notable exception, as the final system learned through iterative back-translation is weaker than the initial system used for warmup. This reinforces the idea that iterative back-translation converges to a solution of a similar quality regardless of that of the initial system, to the extent that it can even deteriorate performance when the initial system is very strong.
So as to get a more complete picture of this behavior, Table \ref{tab:similarity} reports the average similarity between each final system and the rest of the final systems, and analogously for the initial ones. As it can be seen, final systems trained through iterative back-translation tend to produce substantially more similar translations than the initial systems used in their warmup (49.3 vs 28.2 for German-to-English and 42.9 vs 24.3 for English-to-German). This suggests that iterative back-translation does not only converge to solutions of similar quality, but also to solutions that have a similar behavior. Interestingly, this also applies to systems that follow a fundamentally different paradigm as it is the case of RBMT. In relation to that, note that the similarity of each final system and its corresponding initial system is rather low, which reinforces the idea that the solution found by iterative back-translation is not heavily dependent on the initial system.
\section{Related work}
\label{sec:related}
Originally proposed by \citet{sennrich2016improving}, back-translation has been widely adopted by the machine translation community \citep{barrault2019findings}, yet its behavior is still not fully understood. Several authors have studied the optimal balance between real and synthetic parallel data, concluding that using too much synthetic data can be harmful \citep{poncelas2018investigating,fadaee2018backtranslation,edunov2018understanding}. In addition to that, \citet{fadaee2018backtranslation} observe that back-translation is most helpful for tokens with a high prediction loss, and use this insight to design a better selection method for monolingual data. At the same time, \citet{edunov2018understanding} show that random sampling provides a stronger training signal than beam search or greedy decoding. Closer to our work, the impact of the system used for back-translation has also been explored by some authors \citep{sennrich2016improving,burlot2018using}, although the iterative back-translation variant, which allows to jointly train both systems so they can help each other, was not considered, and synthetic data was always combined with real parallel data.
While all the previous authors use a fixed system to generate synthetic parallel corpora, \citet{hoang2018iterative} propose performing a second iteration of back-translation. Iterative back-translation was also explored by \citet{marie2018unsupervised} and \citet{artetxe2019effective} in the context of unsupervised machine translation, relying on an unsupervised SMT system \citep{lample2018phrase,artetxe2018usmt} for warmup. Early work in unsupervised NMT also incorporated the idea of on-the-fly back-translation, which was combined with denoising autoencoding and a shared encoder initialized through unsupervised cross-lingual embeddings \citep{artetxe2018unmt,lample2018unsupervised}. More recently, several authors have performed large-scale unsupervised pre-training through masked denoising to initialize the full model, which is then trained through iterative back-translation \citep{conneau2019crosslingual,song2019mass,liu2020multilingual}. Finally, iterative back-translation is also connected to the reconstruction loss in dual learning \citep{he2016dual}, which incorporates an additional language modeling loss and also requires a warm start.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we empirically analyze the role that initialization plays in iterative back-translation. For that purpose, we try a diverse set of initial systems for warmup, and analyze the behavior of the resulting systems in relation to them. Our results show that differences in the initial systems heavily diminish after applying iterative back-translation. At the same time, we observe that iterative back-translation has a hard ceiling, to the point that it can even deteriorate performance when the initial system is very strong. As such, we conclude that the margin for improvement left for the initialization is rather narrow, encouraging future research to focus more on improving the iterative back-translation mechanism itself.
In the future, we would like to better characterize the specific factors of the initial systems that are most relevant. At the same time, we would like to design a simpler unsupervised system for warmup that is sufficient for iterative back-translation to converge to a good solution. Finally, we would like to incorporate pre-training methods like masked denoising into our analysis.
\section*{Acknowledgments}
This research was partially funded by a Facebook Fellowship, the Basque Government excellence research group (IT1343-19), the Spanish MINECO (UnsupMT TIN2017‐91692‐EXP MCIU/AEI/FEDER, UE), Project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018), the NVIDIA GPU grant program, Lucy Software / United Language Group (ULG), and the Catalan Agency for Management of University and Research Grants (AGAUR) through an Industrial Ph.D. Grant.
The theory of Lie algebras is among the most developed f\/ields in algebra due to his broad applicability in dif\/ferential
geometry, theoretical physics, quantum f\/ield theory, classical or quantum mechanics and others.
Besides the purely algebraic interest in this problem, the classif\/ication of Lie algebras of a~given dimension is a~central
theme of study in modern group analysis of dif\/ferential equations~-- for further explanations and an historical background
see~\cite{popovici}.
The Levi--Malcev theorem reduces the classif\/ication of all f\/inite-dimensional Lie algebras over a~f\/ield of characteristic zero
to the following three subsequent problems: (1)~the classif\/ication of all semi-simple Lie algebras (solved by Cartan); (2)~the
classif\/ication of all solvable Lie algebras (which is known up to dimension~$6$~\cite{gra2}) and (3)~the classif\/ication of all
Lie algebras that are direct sums of semi-simple Lie algebras and solvable Lie algebras.
Surprisingly, among these three problems, the last one is the least studied and the most dif\/f\/icult.
Only in 1990 Majid~\cite[Theo\-rem~4.1]{majid} and independently Lu and Weinstein~\cite[Theo\-rem~3.9]{LW} introduced the
concept of a~\emph{matched pair} between two Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$.
To any matched pair of Lie algebras we can associate a~new Lie algebra $\mathfrak{g} \bowtie \mathfrak{h}$ called the
\emph{bicrossed product} (also called \emph{double Lie algebra} in~\cite[Definition~3.3]{LW}, \emph{double cross sum}
in~\cite[Proposition~8.3.2]{majid2} or \emph{knit product} in~\cite{Mic}).
In light of this new concept, problem~(3) can be equivalently restated as follows: for a~given (semi-simple) Lie algebra
$\mathfrak{g}$ and a~given (solvable) Lie algebra $\mathfrak{h}$, describe the set of all possible matched pairs $(\mathfrak{g},
\mathfrak{h}, \triangleleft, \triangleright)$ and classify up to an isomorphism all associated bicrossed products $\mathfrak{g}
\bowtie \mathfrak{h}$.
Leaving aside the semi-simple/solvable case this is just the \emph{factorization problem} for Lie algebras~-- we refer
to~\cite{abm1} for more details and additional references on the factorization problem at the level of groups, Hopf algebras,
etc.
The present paper continues our recent work~\cite{am-2013a, am-2013b} related to the above question (3), in its general form,
namely the factorization problem and its converse, called the \emph{classifying complement problem}, which consist of the
following question: let $\mathfrak{g} \subset \mathfrak{L}$ be a~given Lie subalgebra of $\mathfrak{L}$.
If a~complement of $\mathfrak{g}$ in $\mathfrak{L}$ exists (that is a~Lie subalgebra $\mathfrak{h}$ such that $\mathfrak{L} =
\mathfrak{g} + \mathfrak{h}$ and $\mathfrak{g} \cap \mathfrak{h} = \{0\}$), describe explicitly, classify all complements and
compute the cardinal of the isomorphism classes of all complements (which will be called the \emph{factorization index}
$[\mathfrak{L}: \mathfrak{g}]^f$ of $\mathfrak{g}$ in~$\mathfrak{L}$).
Our starting point is~\cite[Proposition~4.4]{am-2013b} which describes all Lie algebras~$\mathfrak{L}$ that contain a~given Lie
algebra~$\mathfrak{h}$ as a~subalgebra of codimension~$1$ over an arbitrary f\/ield~$k$: the set of all such Lie algebras~$L$ is
parameterized by the space~$\TwDer (\mathfrak{h})$ of twisted derivations of~$\mathfrak{h}$.
The pioneering work on this subject was performed by K.H.~Hofmann:~\cite[Theorem~I]{Hopmann} describes the structure
of~$n$-dimensional real Lie algebras containing a~given subalgebra of dimension~$n-1$.
Equivalently, this proves that the set of all matched pairs of Lie algebras $(k_0, \mathfrak{h}, \triangleleft, \triangleright)$
(by $k_0$ we will denote the Abelian Lie algebra of dimension~$1$) and the space $\TwDer (\mathfrak{h})$ of all twisted
derivations of $\mathfrak{h}$ are in one-to-one correspondence; moreover, any Lie algebra~$\mathfrak{L}$ containing
$\mathfrak{h}$ as a~subalgebra of codimension~$1$ is isomorphic to a~bicrossed product $k_0 \bowtie \mathfrak{h} =
\mathfrak{h}_{(\lambda, \Delta)}$, for some $(\lambda, \Delta) \in \TwDer (\mathfrak{h})$.
The classif\/ication up to an isomorphism of all bicrossed products $\mathfrak{h}_{(\lambda, \Delta)}$ is given in the case when~$\mathfrak{h}$ is perfect.
As an application of our approach, the group $\Aut_{\Lie} (\mathfrak{h}_{(\lambda, \Delta)}) $ of all automorphisms of
such Lie algebras is fully described in Corollary~\ref{izoaut}: it appears as a~subgroup of a~certain semidirect product
$\mathfrak{h} \ltimes(k^* \times \Aut_{\Lie} (\mathfrak{h}))$ of groups.
At this point we mention that the classif\/ication of automorphisms groups of all indecomposable real Lie algebras of dimension up
to f\/ive was obtained recently in~\cite{fisher} where the importance of this subject in mathematical physics is highlighted.
For the special case of sympathetic Lie algebras $\mathfrak{h}$, Corollary~\ref{morfismeperfectea} proves that, up to an
isomorphism, there exists only one Lie algebra that contains $\mathfrak{h}$ as a~Lie subalgebra of codimension one, namely the
direct product $k_0 \times \mathfrak{h}$ and $\Aut_{\Lie} (k_0 \times \mathfrak{h}) \cong k^* \times \Aut_{\Lie}
(\mathfrak{h})$.
Now, $k_0$ is a~subalgebra of $k_0 \bowtie \mathfrak{h} = \mathfrak{h}_{(\lambda, \Delta)}$ having $\mathfrak{h}$ as
a~complement: for a~$5$-dimensional perfect Lie algebra all complements of $k_0$ in $\mathfrak{h}_{(\lambda, \Delta)}$ are
described in Example~\ref{prefectnecomplet} as matched pair deformations of $\mathfrak{h}$.
Section~\ref{mpdefo} treats the same problem for a~given $(2n+1)$-dimensional non-perfect Lie algebra $\mathfrak{h}:=\mathfrak{l}(2n+1,k)$.
Theorem~\ref{teorema11} describes explicitly all Lie algebras containing $\mathfrak{l} (2n+1, k)$ as a~subalgebra of codimension~$1$.
They are parameterized by a~set ${\mathcal T}(n)$ of matrices $(A, B, C, D, \lambda_0, \delta) \in {\rmM}_n (k)^4 \times k
\times k^{2n+1}$: there are four such families of Lie algebras if the characteristic of~$k$ is $\neq 2$ and two families in
characteristic~$2$.
All complements of $k_0$ in two such bicrossed products $k_0 \bowtie \mathfrak{l} (2n+1, k)$ are described by computing all
matched pair deformations of the Lie algebra $\mathfrak{l} (2n+1, k)$ in Propositions~\ref{mpdef} and~\ref{mpdef2}.
In particular, in Example~\ref{n1} we construct an example where the factorization index of $k_0$ in the $4$-dimensional Lie
algebra $\mathfrak{m} (4, k)$ is inf\/inite: that is $k_0$ has an inf\/inite family of non-isomorphic complements in $\mathfrak{m}(4,k)$.
To conclude, there are three reasons for which we considered the Lie algebra $\mathfrak{l} (2n+1, k)$ in Section~\ref{mpdefo}:
on the one hand it provided us with an example of a~f\/inite-dimensional Lie algebra extension $\mathfrak{g} \subset \mathfrak{L}$
such that $\mathfrak{g}$ has inf\/initely many non-isomorphic complements as a~Lie subalgebra in $\mathfrak{L}$.
On the other hand, the Lie algebra $\mathfrak{l} (2n+1, k)$ serves for constructing two counterexamples in Remark~\ref{2noi}
which show that some properties of Lie algebras are not preserved by the matched pair deformation.
Finally, having~\cite[Corollary 3.2]{am-2012} as a~source of inspiration we believe that any $(2n+1)$-dimensional Lie algebra is
isomorphic to an~$r$-deformation of $\mathfrak{l} (2n+1, k)$ associated to a~given matched pair: a~more general open question is
stated at the end of the paper.
\newpage
\section{Preliminaries}
All vector spaces, Lie algebras, linear or bilinear maps are over an arbitrary f\/ield~$k$.
The Abelian Lie algebra of dimension~$n$ will be denoted by $k^n_0$.
For two given Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$ we denote by $\Aut_{\Lie} (\mathfrak{g})$ the group of
automorphisms of $\mathfrak{g}$ and by $\Hom_{\Lie} (\mathfrak{g}, \mathfrak{h})$ the space of all Lie algebra maps
between $\mathfrak{g}$ and $\mathfrak{h}$.
A~Lie algebra $\mathfrak{L}$ \emph{factorizes} through $\mathfrak{g}$ and $\mathfrak{h}$ if $\mathfrak{g}$ and $\mathfrak{h}$
are Lie subalgebras of $\mathfrak{L}$ such that $\mathfrak{L} = \mathfrak{g} + \mathfrak{h}$ and $\mathfrak{g} \cap \mathfrak{h}
= \{0\}$.
In this case $\mathfrak{h}$ is called a~\emph{complement} of $\mathfrak{g}$ in $\mathfrak{L}$; if $\mathfrak{g}$ is an ideal of
$\mathfrak{L}$, then a~complement $\mathfrak{h}$, if it exists, is unique, being isomorphic to the quotient Lie algebra
$\mathfrak{L}/ \mathfrak{g}$.
In general, if $\mathfrak{g}$ is only a~subalgebra of $\mathfrak{L}$, then we are very far from having unique complements; for
a~given extension $\mathfrak{g} \subset \mathfrak{L}$ of Lie algebras, the number of types of isomorphisms of all complements of
$\mathfrak{g}$ in $\mathfrak{L}$ is called the \emph{factorization index} of $\mathfrak{g}$ in $\mathfrak{L}$ and is denoted~by
$[\mathfrak{L}: \mathfrak{g}]^f$~-- a~theoretical formula for computing $[\mathfrak{L}: \mathfrak{g}]^f$ is given
in~\cite[Theorem 4.5]{am-2013a}.
For basic concepts and unexplained notions on Lie algebras we refer to~\cite{EW, H}.
A \emph{matched pair} of Lie algebras~\cite{LW, majid2} is a~system $(\mathfrak{g}, \mathfrak{h}, \triangleleft,
\triangleright)$ consisting of two Lie algebras~$\mathfrak{g}$ and~$\mathfrak{h}$ and two bilinear maps $\triangleright:
\mathfrak{h} \times \mathfrak{g} \to \mathfrak{g}$, $\triangleleft: \mathfrak{h} \times \mathfrak{g} \to \mathfrak{h}$ such that
$(\mathfrak{g}, \triangleright)$ is a~left $\mathfrak{h}$-module, $(\mathfrak{h}, \triangleleft)$ is a~right
$\mathfrak{g}$-module and the following compatibilities hold for all~$g, h \in \mathfrak{g}$ and~$x, y \in \mathfrak{h}$
\begin{gather*}
x \triangleright [g, h] = [x \triangleright g, h] + [g, x \triangleright h] + (x \triangleleft g) \triangleright h - (x
\triangleleft h) \triangleright g,
\\
\left[x, y \right] \triangleleft g = \left[x, y \triangleleft g \right] + \left[x \triangleleft g, y \right] + x \triangleleft
(y \triangleright g) - y \triangleleft (x \triangleright g).
\end{gather*}
Let $(\mathfrak{g}, \mathfrak{h}, \triangleleft, \triangleright)$ be a~matched pair of Lie algebras.
Then $\mathfrak{g} \bowtie \mathfrak{h}: = \mathfrak{g} \times \mathfrak{h}$, as a~vector space, is a~Lie algebra with the
bracket def\/ined~by
\begin{gather*}
\{(g, x), (h, y) \}:= \big([g, h] + x\triangleright h - y \triangleright g, [x, y] + x \triangleleft h - y \triangleleft g\big)
\end{gather*}
for all~$g, h \in \mathfrak{g}$ and~$x, y \in \mathfrak{h}$, called the \emph{bicrossed product} associated to the matched
pair $(\mathfrak{g}, \mathfrak{h}, \triangleleft, \triangleright)$.
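For instance, if the right action is trivial, $\triangleleft:= 0$, the two compatibility conditions above reduce to the requirement that every $x \in \mathfrak{h}$ acts on $\mathfrak{g}$ as a~derivation, the bracket becomes $\{(g, x), (h, y) \} = \big([g, h] + x\triangleright h - y \triangleright g, [x, y]\big)$ and the bicrossed product is just the usual semidirect product $\mathfrak{g} \rtimes \mathfrak{h}$, with $\mathfrak{g}$ an ideal.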
Any bicrossed product $\mathfrak{g} \bowtie \mathfrak{h}$ factorizes through $ \mathfrak{g} = \mathfrak{g} \times \{0\}$ and $
\mathfrak{h} = \{0\} \times \mathfrak{h}$; the converse also holds~\cite[Proposition 8.3.2]{majid2}: if a~Lie algebra
$\mathfrak{L}$ factorizes through $\mathfrak{g}$ and $\mathfrak{h}$, then there exist an isomorphism of Lie algebras
$\mathfrak{L} \cong \mathfrak{g} \bowtie \mathfrak{h}$, where $\mathfrak{g} \bowtie \mathfrak{h}$ is the bicrossed product
associated to the matched pair $(\mathfrak{g}, \mathfrak{h}, \triangleleft, \triangleright)$ whose actions are constructed from
the unique decomposition
\begin{gather}
\label{mpcan}
\left[x, g \right] = x \triangleright g + x \triangleleft g \in \mathfrak{g} + \mathfrak{h}
\end{gather}
for all $x \in \mathfrak{h}$ and $g \in \mathfrak{g}$.
The matched pair $(\mathfrak{g}, \mathfrak{h}, \triangleleft, \triangleright)$ def\/ined by~\eqref{mpcan} is called the
\emph{canonical matched pair} associated to the factorization $\mathfrak{L} = \mathfrak{g} + \mathfrak{h}$.
\begin{Remark}
Over the complex numbers ${\mathbb C}$, an equivalent description for the factorization of a~Lie algebra $\mathfrak{L}$ through
two Lie subalgebras is given in~\cite[Definition~2.1]{ABDO} and in~\cite[Proposition~2.2]{AS},
in terms of \emph{complex product structures} of $\mathfrak{L}$, i.e.~linear maps $f: \mathfrak{L} \to \mathfrak{L}$
such that $f \neq \pm \Id$ and $f^2 = \Id$, satisfying
the integrability conditions
\begin{gather*}
f ([x, y]) = [f(x), y] + [x, f(y)] - f \big([f(x), f(y)] \big)
\end{gather*}
for all $x, y \in \mathfrak{L}$.
The linear map $f: \mathfrak{g} \bowtie \mathfrak{h} \to \mathfrak{g} \bowtie \mathfrak{h}$, $f (g, h):= (g, - h)$ is a~complex
product structure on any bicrossed product $\mathfrak{g} \bowtie \mathfrak{h}$.
Conversely, if~$f$ is a~complex product structure on $\mathfrak{L}$, then $\mathfrak{L}$ factorizes through two Lie subalgebras
$\mathfrak{L} = \mathfrak{L}_{+} + \mathfrak{L}_{-}$, where $\mathfrak{L}_{\pm}$ denotes the eigenspace corresponding to the
eigenvalue $\pm 1$ of~$f$, that is $\mathfrak{L} \cong \mathfrak{L}_{+} \bowtie \mathfrak{L}_{-}$.
\end{Remark}
Let $(\mathfrak{g}, \mathfrak{h}, \triangleleft, \triangleright)$ be a~matched pair of Lie algebras.
A~linear map $r: \mathfrak{h} \to \mathfrak{g}$ is called a~\emph{deformation map}~\cite[Definition 4.1]{am-2013a} of the
matched pair $(\mathfrak{g}, \mathfrak{h}, \triangleright, \triangleleft)$ if the following compatibility holds for any $x,y\in \mathfrak{h}$
\begin{gather}
\label{factLie}
r\big([x, y]\big) - \big[r(x), r(y)\big] = r \big(y \triangleleft r(x) - x \triangleleft r(y) \big) + x \triangleright
r(y) - y \triangleright r(x).
\end{gather}
We denote by ${\mathcal D}{\mathcal M} (\mathfrak{h}, \mathfrak{g} | (\triangleright, \triangleleft))$ the set of all
deformation maps of the matched pair $(\mathfrak{g}, \mathfrak{h}, \triangleright, \triangleleft)$.
If $r \in {\mathcal D}{\mathcal M} (\mathfrak{h}, \mathfrak{g} | (\triangleright, \triangleleft))$ then $\mathfrak{h}_{r}:=
\mathfrak{h}$, as a~vector space, with the new bracket def\/ined for any~$x, y \in \mathfrak{h}$ by
\begin{gather}
\label{rLiedef}
[x, y]_{r}:= [x, y] + x \triangleleft r(y) - y \triangleleft r(x)
\end{gather}
is a~Lie algebra called the \emph{$r$-deformation} of $\mathfrak{h}$.
A~Lie algebra $\overline{\mathfrak{h}}$ is a~complement of $\mathfrak{g} \cong \mathfrak{g} \times \{0\}$ in the bicrossed
product $\mathfrak{g} \bowtie \mathfrak{h}$ if and only if $\overline{\mathfrak{h}} \cong \mathfrak{h}_{r}$, for some
deformation map $r \in {\mathcal D}{\mathcal M} (\mathfrak{h}, \mathfrak{g} | (\triangleright, \triangleleft))$ \cite[Theo\-rem~4.3]{am-2013a}.
\section{The case of perfect Lie algebras}
\label{bpbicross}
Computing all matched pairs between two given Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$ and classifying all associated
bicrossed products $\mathfrak{g} \bowtie \mathfrak{h}$ is a~challenging problem.
In the case when $\mathfrak{g}:= k = k_0$, the Abelian Lie algebra of dimension~$1$, the matched pairs are parameterized by
the set $\TwDer (\mathfrak{h})$ of all twisted derivations of the Lie algebra $\mathfrak{h}$
as def\/ined in~\cite[Definition~4.2]{am-2013b}: a~\emph{twisted derivation} of $\mathfrak{h}$ is a~pair $(\lambda, \Delta)$
consisting of two linear maps $\lambda: \mathfrak{h} \to k$ and $\Delta: \mathfrak{h} \to \mathfrak{h}$ such that for any~$g, h\in \mathfrak{h}$
\begin{gather}
\label{lambderivari}
\lambda ([g, h]) = 0,
\qquad
\Delta ([g, h]) = [\Delta (g), h] + [g, \Delta (h)] + \lambda(h) \Delta (g) - \lambda(g) \Delta (h).
\end{gather}
$\TwDer (\mathfrak{h})$ contains the usual space of derivations $\Der (\mathfrak{h})$ via the canonical embedding
$\Der (\mathfrak{h}) \hookrightarrow \TwDer (\mathfrak{h})$, $D \mapsto (0, D) $, which is an isomorphism if
$\mathfrak{h}$ is a~perfect Lie algebra (i.e.~$\mathfrak{h} = [\mathfrak{h}, \mathfrak{h}]$).
As a~special case of~\cite[Proposition~4.4 and Remark~4.5]{am-2013b} we have:
\begin{Proposition}
\label{mpdim1}
Let $\mathfrak{h}$ be a~Lie algebra.
Then there exists a~bijection between the set of all matched pairs $(k_0, \mathfrak{h}, \triangleleft, \triangleright)$ and the
space $\TwDer (\mathfrak{h})$ of all twisted derivations of $\mathfrak{h}$ given such that the matched pair $(k_0,
\mathfrak{h}, \triangleleft, \triangleright)$ corresponding to $(\lambda, \Delta) \in \TwDer (\mathfrak{h})$ is defined by
\begin{gather}
\label{extenddim10}
h \triangleright a= a\lambda (h),
\qquad
h \triangleleft a= a\Delta (h)
\end{gather}
for all $h \in \mathfrak{h}$ and $a \in k = k_0$.
The bicrossed product $k_0 \bowtie \mathfrak{h}$ associated to the matched pair~\eqref{extenddim10} is denoted~by
$\mathfrak{h}_{(\lambda, \Delta)}$ and has the bracket given for any~$a, b \in k$ and~$x, y \in \mathfrak{h}$ by
\begin{gather}
\{(a, x), (b, y) \}:= \big(b \lambda(x) - a\lambda(y), [x, y] + b \Delta(x) - a\Delta(y)\big).
\label{exdim300aa}
\end{gather}
A~Lie algebra $\mathfrak{L}$ contains $\mathfrak{h}$ as a~subalgebra of codimension~$1$ if and only if $\mathfrak{L}$ is
isomorphic to~$\mathfrak{h}_{(\lambda, \Delta)}$, for some $(\lambda, \Delta) \in \TwDer (\mathfrak{h})$.
\end{Proposition}
Suppose $\{e_i \,|\, i\in I \}$
is a~basis for the Lie algebra $\mathfrak{h}$.
Then, $\mathfrak{h}_{(\lambda, \Delta)}$ has $\{F, e_i \,|\, i\in I \}$ as a~basis and the bracket given for any $i, j\in I$~by
\begin{gather*}
[e_i, F] = \lambda (e_i) F + \Delta (e_i),
\qquad
[e_i, e_j] = [e_i, e_j]_{\mathfrak{h}},
\end{gather*}
where $[-, -]_{\mathfrak{h}}$ is the bracket on $\mathfrak{h}$.
Above we identify $e_i = (0, e_i)$ and denote $F = (1, 0)$ in the bicrossed product $k_0 \bowtie \mathfrak{h}$.
Classifying the Lie algebras $\mathfrak{h}_{(\lambda, \Delta)}$ is a~dif\/f\/icult task.
In what follows we deal with this problem for a~perfect Lie algebra $\mathfrak{h}$: in this case $\TwDer (\mathfrak{h}) =
\{0\} \times \Der (\mathfrak{h})$ and we denote by $\mathfrak{h}_{(\Delta)} = \mathfrak{h}_{(0, \Delta)}$, for any $\Delta
\in \Der (\mathfrak{h})$.
\begin{Theorem}
\label{morfismeperfecte}
Let $\mathfrak{h}$ be a~perfect Lie algebra and~$\Delta, \Delta' \in \Der (\mathfrak{h})$.
Then there exists a~bijection between the set of all morphisms of Lie algebras $\varphi: \mathfrak{h}_{(\Delta)} \to
\mathfrak{h}_{(\Delta')}$ and the set of all triples $(\alpha, h, v) \in k \times \mathfrak{h} \times \Hom_{\Lie}
(\mathfrak{h}, \mathfrak{h})$ satisfying the following compatibility condition for all $x \in \mathfrak{h}$
\begin{gather}
\label{compmorfisme}
v\big(\Delta (x) \big) - \alpha \Delta ' \big(v(x) \big) = [v(x), h].
\end{gather}
The bijection is given such that the Lie algebra map $\varphi = \varphi_{(\alpha, h, v)}$ corresponding to $(\alpha, h, v)$ is
given by the formula
\begin{gather*}
\varphi: \mathfrak{h}_{(\Delta)} \to \mathfrak{h}_{(\Delta')},
\qquad
\varphi (a, x) = (a\alpha, ah + v(x))
\end{gather*}
for all $(a, x) \in \mathfrak{h}_{(\Delta)} = k_0 \bowtie \mathfrak{h}$.
Furthermore, $\varphi = \varphi_{(\alpha, h, v)}$ is an isomorphism of Lie algebras if and only if $\alpha \neq 0$ and $v \in
\Aut_{\Lie} (\mathfrak{h})$.
\end{Theorem}
\begin{proof}
Any linear map $\varphi: k \times \mathfrak{h} \to k \times \mathfrak{h}$ is uniquely determined by a~quadruple $(\alpha, h,
\beta, v)$, where $\alpha \in k$, $h \in \mathfrak{h}$ and $\beta: \mathfrak{h} \to k$, $v: \mathfrak{h} \to \mathfrak{h}$
are~$k$-linear maps such that
\begin{gather*}
\varphi (a,x) = \varphi_{(\alpha, h, \beta, v)} (a, x) = (a\alpha + \beta(x), ah + v(x)).
\end{gather*}
We will prove that~$\varphi$ def\/ined above is a~Lie algebra map if and only if~$\beta$ is the trivial map,~$v$ is a~Lie algebra
map and~\eqref{compmorfisme} holds.
It is enough to test the compatibility
\begin{gather}
\label{Liemap}
\varphi \big([(a, x), (b, y)] \big) = [\varphi(a, x), \varphi(b, y)]
\end{gather}
for all generators of $\mathfrak{h}_{(\Delta)} = k \times \mathfrak{h}$, i.e.~elements of the form $(1, 0)$ and $(0, x)$, for
all $x \in \mathfrak{h}$.
Moreover, since $\mathfrak{h}$ is perfect (i.e.~$\lambda = 0$) the bracket on $\mathfrak{h}_{(\Delta)}$ given
by~\eqref{exdim300aa} takes the form: $\{(a, x), (b, y)\}=(0, [x, y] + b \Delta(x) - a~\Delta(y))$.
Using this formula we obtain that~\eqref{Liemap} holds for $(0, x)$ and $(0, y)$ if and only if
\begin{gather*}
\beta \big([x, y]\big) = 0,
\qquad
v \big([x, y]\big) = [v(x), v(y)] + \beta (y) \Delta (v(x)) - \beta (x) \Delta (v (y)).
\end{gather*}
As $\mathfrak{h}$ is perfect these two conditions are equivalent to the fact that $\beta = 0$ and~$v$ is a~Lie algebra map.
Finally, as $\beta =0 $, we can easily show that~\eqref{Liemap} holds in $(1, 0)$ and $(0, x)$ if and only
if~\eqref{compmorfisme} holds.
Thus, we have obtained that~$\varphi$ is a~Lie algebra map if and only if~$v$ is a~Lie algebra map, $\beta = 0$
and~\eqref{compmorfisme} holds.
In what follows we denote by $\varphi_{(\alpha, h, v)}$ the Lie algebra map corresponding to a~quadruple $(\alpha, h, \beta, v)$
with $\beta = 0$.
Suppose f\/irst that $\varphi:= \varphi_{(\alpha, h, v)}$ is a~Lie algebra isomorphism.
Then, there exists a~Lie algebra map $\overline{\varphi}: = \varphi_{(\gamma, g, w)}: \mathfrak{h}_{(\Delta')} \to
\mathfrak{h}_{(\Delta)}$ such that $\varphi \circ \overline{\varphi} (a, x) = \overline{\varphi} \circ \varphi(a, x) = (a, x)$
for all $a \in k$, $x \in \mathfrak{h}$.
Thus, for all $a \in k$ and $x \in \mathfrak{h}$, we have
\begin{gather}
\label{LLL}
a\alpha \gamma = a,
\qquad
a\gamma h + v(a g) + v\big(w(x)\big) = x = a\alpha g + w(a h) + w\big(v(x)\big).
\end{gather}
By the f\/irst part of~\eqref{LLL} for $a = 1$ we obtain $\alpha \gamma = 1$ and thus $\alpha \neq 0$ while the second part
of~\eqref{LLL} for $a = 0$ implies~$v$ bijective.
To end with, assume that $\alpha \neq 0$ and $v \in \Aut_{\Lie} (\mathfrak{h})$.
Then, it is straightforward to see that $\varphi = \varphi_{(\alpha, h, v)}$ is an isomorphism with the inverse given~by
$\varphi^{-1}: = \varphi_{(\alpha^{-1}, -\alpha^{-1} v^{-1}(h), v^{-1})}$.
\end{proof}
Let $k^*$ be the units group of~$k$ and $(\mathfrak{h}, +)$ the underlying Abelian group of the Lie algebra $\mathfrak{h}$.
Then the map given for any $\alpha \in k^*$, $v\in \Aut_{\Lie} (\mathfrak{h})$ and $h\in \mathfrak{h}$ by
\begin{gather*}
\varphi: \ k^* \times \Aut_{\Lie} (\mathfrak{h}) \to \Aut_{\Gr} (\mathfrak{h}, +),
\qquad
\varphi (\alpha, v) (h):= \alpha^{-1} v(h)
\end{gather*}
is a~morphism of groups.
Thus, we can construct the semidirect product of groups $\mathfrak{h} \ltimes_{\varphi}(k^* \times \Aut_{\Lie}(\mathfrak{h}))$ associated to~$\varphi$.
The next result shows that $\Aut_{\Lie} (\mathfrak{h}_{(\Delta)})$ is isomorphic to a~certain subgroup of the semidirect
product of groups $\mathfrak{h} \ltimes_{\varphi}(k^* \times \Aut_{\Lie} (\mathfrak{h}))$.
\begin{Corollary}
\label{izoaut}
Let $\mathfrak{h}$ be a~perfect Lie algebra and~$\Delta, \Delta' \in \Der (\mathfrak{h})$.
Then the Lie algebras $\mathfrak{h}_{(\Delta)}$ and $\mathfrak{h}_{(\Delta')}$ are isomorphic if and only if there exists
a~triple $(\alpha, h, v) \in k^* \times \mathfrak{h} \times \Aut_{\Lie} (\mathfrak{h})$ such that $v \circ \Delta -
\alpha \Delta ' \circ v = [v(-), h]$.
Furthermore, there exists an isomorphism of groups
\begin{gather*}
\Aut_{\Lie} (\mathfrak{h}_{(\Delta)}) \cong {\mathcal G} (\mathfrak{h}, \Delta):= \{(\alpha, h, v) \in k^* \times
\mathfrak{h} \times \Aut_{\Lie} (\mathfrak{h}) \,|\, v \circ \Delta - \alpha \Delta \circ v = [v(-), h] \},
\end{gather*}
where ${\mathcal G} (\mathfrak{h}, \Delta)$ is a~group with respect to the following multiplication
\begin{gather}
\label{graut}
(\alpha, h, v) \cdot (\beta, g, w):= (\alpha \beta, \beta h + v(g), v \circ w)
\end{gather}
for all $(\alpha, h, v), (\beta, g, w) \in {\mathcal G} (\mathfrak{h}, \Delta)$.
Moreover, the canonical map
\begin{gather*}
{\mathcal G} (\mathfrak{h}, \Delta) \longrightarrow \mathfrak{h} \ltimes_{\varphi} \big(k^* \times \Aut_{\Lie}
(\mathfrak{h}) \big),
\qquad
(\alpha, h, v) \mapsto \big (\alpha^{-1} h, (\alpha, v) \big)
\end{gather*}
is an injective morphism of groups.
\end{Corollary}
\begin{proof}
The f\/irst part follows trivially from Theorem~\ref{morfismeperfecte}.
Consider now~$\gamma, \psi \in \Aut_{\Lie} (\mathfrak{h}_{(\Delta)})$.
Using again Theorem~\ref{morfismeperfecte}, we can f\/ind $(\alpha, h, v),
(\beta, g, w) \in k^* \times \mathfrak{h} \times \Aut_{\Lie} (\mathfrak{h})$
such that $\gamma = \varphi_{(\alpha, h, v)}$ and $\psi = \varphi_{(\beta, g, w)}$.
Then, for all $a \in k$, $x \in \mathfrak{h}$ we have
\begin{gather*}
\varphi_{(\alpha, h, v)} \circ \varphi_{(\beta, g, w)} (a, x) = \varphi_{(\alpha, h, v)} \big(a \beta, ag + w(x)\big)
= \big(\alpha \beta a, a\beta h + av(g) + v \circ w(x)\big)
\\
\hphantom{\varphi_{(\alpha, h, v)} \circ \varphi_{(\beta, g, w)} (a, x)}{}
= \varphi_{(\alpha \beta, \beta h + v(g), v \circ w)}(a, x).
\end{gather*}
Thus, $\Aut_{\Lie} (\mathfrak{h}_{(\Delta)}) $ is isomorphic to ${\mathcal G} (\mathfrak{h}, \Delta)$ with the
multiplication given by~\eqref{graut}.
The last assertion follows by a~routine computation.
\end{proof}
\begin{Remark}
\label{cazinner}
Let $\Delta = [x_0, -]$ be an inner derivation of a~perfect Lie algebra $\mathfrak{h}$.
Then the group $\Aut_{\Lie} (\mathfrak{h}_{([x_0, -])})$ admits a~simpler description as follows
\begin{gather*}
{\mathcal G} (\mathfrak{h}, [x_0, -]) = \{(\alpha, h, v) \in k^* \times \mathfrak{h} \times \Aut_{\Lie} (\mathfrak{h}) \,|\,
v(x_0) - \alpha x_0 + h \in {\rm Z} (\mathfrak{h})\},
\end{gather*}
where ${\rm Z} (\mathfrak{h})$ is the center of $\mathfrak{h}$.
Assume in addition that $\mathfrak{h}$ has trivial center, i.e.~${\rm Z} (\mathfrak{h}) = \{0\}$; it follows that there exists
an isomorphism of groups
\begin{gather*}
\Aut_{\Lie} (\mathfrak{h}_{([x_0, -])}) \cong k^* \times \Aut_{\Lie} (\mathfrak{h}),
\end{gather*}
since in this case any element~$h$ from a~triple $(\alpha, h, v) \in {\mathcal G} (\mathfrak{h}, [x_0, -])$ must be equal to $
\alpha x_0 - v (x_0)$.
Moreover, in this context, the multiplication given by~\eqref{graut} is precisely that of a~direct product of groups.
\end{Remark}
A Lie algebra $\mathfrak{h}$ is called \emph{complete} (see~\cite{JMZ, SZ} for examples and structural results on this class of
Lie algebras) if $\mathfrak{h}$ has trivial center and any derivation is inner.
A~complete and perfect Lie algebra is called \emph{sympathetic}~\cite{ben}: semisimple Lie algebras over a~f\/ield of
characteristic zero are sympathetic and there exists a~sympathetic non-semisimple Lie algebra in dimension $25$.
For sympathetic Lie algebras, Theorem~\ref{morfismeperfecte} takes the following form which considerably
improves~\cite[Corollary~4.10]{am-2013b}, where the classif\/ication is made only up to an isomorphism of Lie algebras which acts
as identity on~$\mathfrak{h}$.
\begin{Corollary}
\label{morfismeperfectea}
Let $\mathfrak{h}$ be a~sympathetic Lie algebra.
Then up to an isomorphism of Lie algebras there exists only one Lie algebra that contains $\mathfrak{h}$ as a~Lie subalgebra of
codimension one, namely the direct product $k_0 \times \mathfrak{h}$ of Lie algebras.
Furthermore, there exists an isomorphism of groups $\Aut_{\Lie} (k_0 \times \mathfrak{h}) \cong k^* \times \Aut_{\Lie} (\mathfrak{h})$.
\end{Corollary}
\begin{proof}
Since $\mathfrak{h}$ is perfect any Lie algebra that contains $\mathfrak{h}$ as a~Lie subalgebra of codimension $1$ is
isomorphic to $\mathfrak{h}_{(D)}$, for some $D \in \Der (\mathfrak{h})$.
As $\mathfrak{h}$ is also complete, any derivation is inner.
For an arbitrary derivation $D = [d, -]$ we can prove that $\mathfrak{h}_{(D)} \cong \mathfrak{h}_{(0)}$, where $0 = [0, -]$ is
the trivial derivation and moreover $\mathfrak{h}_{(0)}$ is just the direct product of Lie algebras $k_0 \times \mathfrak{h}$.
Indeed, by taking $(\alpha, h, v):= (1, -d, \Id_{\mathfrak{h}})$ one can see that relation~\eqref{compmorfisme} holds for
$D = [d, -]$ and $D' = [0, -]$, that is $\mathfrak{h}_{(D)} \cong \mathfrak{h}_{(0)}$.
The f\/inal part follows from Remark~\ref{cazinner}.
\end{proof}
\begin{Remark}
\label{defhdelta}
Let $\mathfrak{h}$ be a~perfect Lie algebra with a~basis $\{e_i \,|\, i\in I \}$, $\Delta \in \Der (\mathfrak{h})$ a~given
derivation and consider the extension $k_0 \subseteq \mathfrak{h}_{(\Delta)} = k_0 \bowtie \mathfrak{h}$.
In order to determine all complements of $k_0$ in $\mathfrak{h}_{(\Delta)}$ we have to describe the set of all deformation maps
$r: \mathfrak{h} \to k_0$ of the matched pair~\eqref{extenddim10}.
A~deformation map is completely determined by a~family of scalars $(a_i)_{i\in I}$ satisfying the following compatibility
condition for any~$i, j\in I$
\begin{gather*}
r \big([e_i, e_j]_{\mathfrak{h}} \big) = r \big(a_i \Delta (e_j) - a_j \Delta (e_i) \big)
\end{gather*}
via the relation $r (e_i) = a_i$.
For such an $r = (a_i)_{i\in I}$, the~$r$-deformation of $\mathfrak{h}$ is the Lie algebra $\mathfrak{h}_r$ having $\{e_i \,|\, i\in I\}$
as a~basis and the bracket def\/ined for any~$i, j\in I$ by
\begin{gather*}
[e_i, e_j]_r = [e_i, e_j]_{\mathfrak{h}} + a_j \Delta (e_i) - a_i \Delta (e_j).
\end{gather*}
Any complement of $k_0$ in $\mathfrak{h}_{(\Delta)}$ is isomorphic to such an $\mathfrak{h}_r$.
An explicit example in dimension $5$ is given below.
\end{Remark}
\begin{Example}
\label{prefectnecomplet}
Let~$k$ be a~f\/ield of characteristic $\neq 2$ and $\mathfrak{h}$ the perfect $5$-dimensional Lie algebra with a~basis $\{e_{1},
e_{2}, e_{3}, e_{4}, e_{5}\}$ and bracket given by
\begin{alignat*}{4}
& [e_{1}, e_{2}] = e_{3},
\qquad &&
[e_{1}, e_{3}] = -2e_{1},
\qquad &&
[e_{1}, e_{5}] = [e_{3}, e_{4}] = e_{4},&
\\
& [e_{2}, e_{3} ] = 2e_{2},
\qquad &&
[e_{2}, e_{4} ] = e_{5},
\qquad &&
[e_{3}, e_{5} ] = - e_{5}.&
\end{alignat*}
By a~straightforward computation it can be proved that the space of derivations $\Der(\mathfrak{h})$ coincides with the
space of all matrices from $\mathcal{M}_{5}(k)$ of the form
\begin{gather*}
A= \left(
\begin{matrix} a_{1} & 0 & -2a_4 & 0 & 0
\\
0 & -a_{1} & -2a_{2} & 0 & 0
\\
a_{2} & a_{4} & 0 & 0 & 0
\\
a_{3} & 0 & a_{5} & a_{6} & a_{4}
\\
0 & a_{5} & -a_{3} & -a_{2} & (a_{6}-a_{1})
\end{matrix}
\right)
\end{gather*}
for all $a_1, \dots, a_6 \in k$.
Thus $\mathfrak{h}$ is not complete since $\Der(\mathfrak{h})$ has dimension~$6$.
One can show easily that the derivation $\Delta:= e_{11} - e_{41} - e_{22} + e_{53} - e_{44} - 2 e_{55}$ is not inner, where
$e_{i j} \in \mathcal{M}_{5}(k)$ is the matrix having $1$ in the $(i,j)^{\rm th}$ position and zeros elsewhere.
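As a numerical sanity check of this description of $\Der(\mathfrak{h})$ (purely illustrative, over the reals, and outside the formal argument), the following Python/NumPy sketch encodes the bracket of $\mathfrak{h}$ through its structure constants, builds the matrix above for a random choice of $a_1, \dots, a_6$ (the columns being the images of the basis vectors) and verif\/ies the derivation identity $\Delta([x, y]) = [\Delta(x), y] + [x, \Delta(y)]$ on all pairs of basis vectors.
\begin{verbatim}
import numpy as np

n = 5
C = np.zeros((n, n, n))      # structure constants: [e_i, e_j] = sum_k C[k, i, j] e_k
def setbr(i, j, vals):       # vals = {k: coefficient}, indices as in the text (1-based)
    for k, c in vals.items():
        C[k-1, i-1, j-1] = c
        C[k-1, j-1, i-1] = -c
setbr(1, 2, {3: 1}); setbr(1, 3, {1: -2}); setbr(1, 5, {4: 1}); setbr(3, 4, {4: 1})
setbr(2, 3, {2: 2}); setbr(2, 4, {5: 1});  setbr(3, 5, {5: -1})

def br(x, y):
    return np.einsum('kij,i,j->k', C, x, y)

a1, a2, a3, a4, a5, a6 = np.random.randn(6)
Delta = np.array([[a1, 0,   -2*a4, 0,   0      ],
                  [0,  -a1, -2*a2, 0,   0      ],
                  [a2, a4,  0,     0,   0      ],
                  [a3, 0,   a5,    a6,  a4     ],
                  [0,  a5,  -a3,   -a2, a6 - a1]])

I5 = np.eye(n)
err = max(np.abs(Delta @ br(I5[i], I5[j])
                 - br(Delta @ I5[i], I5[j]) - br(I5[i], Delta @ I5[j])).max()
          for i in range(n) for j in range(n))
print("derivation defect:", err)     # of the order of machine precision
\end{verbatim}
In particular, the specif\/ic derivation $\Delta$ above corresponds to the choice $(a_1, \dots, a_6) = (1, 0, -1, 0, 0, -1)$.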
For the derivation~$\Delta$ we consider the extension $k_0 \subseteq k_0 \bowtie \mathfrak{h} = \mathfrak{h}_{(\Delta)}$ and we
will describe all the complements of $k_0$ in $\mathfrak{h}_{(\Delta)}$.
By a~routine computation it can be seen that $r: \mathfrak{h} \to k_0$ is a~deformation map of the matched
pair~\eqref{extenddim10} if and only if $r:= 0$ (the trivial map) or~$r$ is given~by
\begin{gather*}
r(e_1):= a,
\qquad
r(e_2):= - a^{-1},
\qquad
r(e_3) = 2,
\qquad
r(e_4) = r(e_5) = 0
\end{gather*}
for some $a \in k^*$.
Thus a~Lie algebra $\mathfrak{C}$ is a~complement of $k_0$ in $\mathfrak{h}_{(\Delta)}$ if and only if $\mathfrak{C} \cong
\mathfrak{h}$ or $\mathfrak{C} \cong \mathfrak{h}_a$, where $\mathfrak{h}_a$ is the $5$-dimensional Lie algebra with basis
$\{e_{1}, e_{2}, e_{3}, e_{4}, e_{5}\}$ and bracket given~by
\begin{gather*}
[e_1, e_2 ]_a:= - a^{-1} e_1 + ae_2 + e_3 + a^{-1} e_4,
\qquad
[e_1, e_3 ]_a:= -2 e_4 - ae_5,
\qquad
[e_1, e_4 ]_a:= ae_4,
\\
[e_1, e_5 ]_a:= e_4 + 2 ae_5,
\qquad
[e_2, e_3 ]_a:= a^{-1} e_5,
\qquad
[e_2, e_4 ]_a:= e_5 - a^{-1} e_4,
\\
[e_2, e_5 ]_a:= -2 a^{-1} e_5,
\qquad
[e_3, e_4 ]_a:= 3 e_4,
\qquad
[e_3, e_5 ]_a:= 3 e_5
\end{gather*}
for any $a \in k^*$.
Remark that none of the matched pair deformations $\mathfrak{h}_a$ of the Lie algebra $\mathfrak{h}$ is perfect since the
dimension of the derived algebra $[\mathfrak{h}_a, \mathfrak{h}_a]$ is equal to~$3$.
\end{Example}
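
For a concrete sanity check of this example (again purely illustrative and over the reals), the sketch below encodes the bracket of $\mathfrak{h}_a$ for one value of $a$, verif\/ies the Jacobi identity numerically and computes the dimension of the derived algebra $[\mathfrak{h}_a, \mathfrak{h}_a]$, which indeed comes out equal to~$3$.
\begin{verbatim}
import numpy as np

a = 1.7                             # any nonzero scalar
n = 5
C = np.zeros((n, n, n))             # [e_i, e_j]_a = sum_k C[k, i, j] e_k
def setbr(i, j, vals):
    for k, c in vals.items():
        C[k-1, i-1, j-1] = c
        C[k-1, j-1, i-1] = -c
setbr(1, 2, {1: -1/a, 2: a, 3: 1, 4: 1/a})
setbr(1, 3, {4: -2, 5: -a}); setbr(1, 4, {4: a}); setbr(1, 5, {4: 1, 5: 2*a})
setbr(2, 3, {5: 1/a}); setbr(2, 4, {4: -1/a, 5: 1}); setbr(2, 5, {5: -2/a})
setbr(3, 4, {4: 3}); setbr(3, 5, {5: 3})

def br(x, y):
    return np.einsum('kij,i,j->k', C, x, y)

I5 = np.eye(n)
jac = max(np.abs(br(br(I5[i], I5[j]), I5[k]) + br(br(I5[j], I5[k]), I5[i])
                 + br(br(I5[k], I5[i]), I5[j])).max()
          for i in range(n) for j in range(n) for k in range(n))
derived = np.array([br(I5[i], I5[j]) for i in range(n) for j in range(n)])
print("Jacobi defect:", jac)
print("dim of the derived algebra:", np.linalg.matrix_rank(derived))   # 3
\end{verbatim}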
\section{The non-perfect case}
\label{mpdefo}
In Section~\ref{bpbicross} we have described and classif\/ied all bicrossed products $k_0 \bowtie \mathfrak{h}$ for a~perfect Lie
algebra $\mathfrak{h}$; furthermore, Remark~\ref{defhdelta} and Example~\ref{prefectnecomplet} describe all complements of $k_0$
in a~given bicrossed product $k_0 \bowtie \mathfrak{h}$.
In this section we approach the same questions for a~given non-perfect Lie algebra $\mathfrak{h}:= \mathfrak{l} (2n+1, k)$,
where $\mathfrak{l} (2n+1, k)$ is the $(2n+1)$-dimensional Lie algebra with basis $\{E_i, F_i, G \,|\, i = 1, \dots, n\}$ and
bracket given for any $i = 1, \dots, n$~by
\begin{gather*}
[E_i, G]:= E_i,
\qquad
[G, F_i]:= F_i.
\end{gather*}
First, we shall describe all bicrossed products $k_0 \bowtie \mathfrak{l} (2n+1, k)$: they will explicitly describe all Lie
algebras which contain $\mathfrak{l} (2n+1, k)$ as a~subalgebra of codimension~$1$.
Then, as the second step, we shall f\/ind all~$r$-deformations of the Lie algebra $\mathfrak{l} (2n+1, k)$, for two given
extensions $k_0 \subseteq k_0 \bowtie \mathfrak{l} (2n+1, k)$.
Based on Proposition~\ref{mpdim1} we have to compute f\/irst the space $\TwDer(\mathfrak{l} (2n+1, k))$ of all
twisted derivations.
\begin{Proposition}
\label{tw2n1}
There exists a~bijection between $\TwDer(\mathfrak{l} (2n+1, k))$ and the set of all matrices $(A, B, C, D,
\lambda_0, \delta) \in {\rmM}_n (k)^4 \times k \times k^{2n+1}$ satisfying the following conditions
\begin{gather}
\lambda_0 A=- \delta_{2n+1} I_n,
\qquad
(2 + \lambda_0) B = 0,
\qquad
(2 - \lambda_0) C = 0,
\qquad
\lambda_0 D = \delta_{2n+1} I_n,
\label{primaa}
\end{gather}
where $\delta = (\delta_1, \dots, \delta_{2n+1}) \in k^{2n+1}$.
The bijection is given such that the twisted derivation $(\lambda, \Delta) \in \TwDer (\mathfrak{l} (2n+1, k))$
associated to $(A, B, C, D, \lambda_0, \delta)$ is given by
\begin{gather}
\lambda (E_i) = \lambda (F_i):= 0,
\qquad
\lambda (G):= \lambda_0,
\label{primab}
\\
\Delta:=
\begin{pmatrix}
A & B & \delta_1
\\
C & D & \vdots
\\
0 & 0 & \delta_{2n+1}
\end{pmatrix}.
\label{primac}
\end{gather}
${\mathcal T} (n)$ denotes the set of all $(A, B, C, D, \lambda_0, \delta) \in {\rmM}_n (k)^4 \times k \times k^{2n+1}$
satisfying~\eqref{primaa}.
\end{Proposition}
\begin{proof}
The f\/irst compatibility condition~\eqref{lambderivari} shows that a~linear map $\lambda: \mathfrak{l} (2n+1, k) \to k$ of
a~twisted derivation $(\lambda, D)$ must have the form given by~\eqref{primab}, for some $\lambda_0 \in k$.
We shall f\/ix such a~map for a~given $\lambda_0 \in k$.
We write down the linear map $\Delta: \mathfrak{l} (2n+1, k) \to \mathfrak{l} (2n+1, k)$ as a~matrix associated to the basis
$\{E_1, \dots, E_n, F_1, \dots, F_n, G \}$ of $\mathfrak{l} (2n+1, k)$, as follows
\begin{gather*}
\Delta =
\begin{pmatrix}
A & B & d_{1, 2n+1}
\\
C & D & \vdots
\\
d_{2n+1, 1} & \cdots & d_{2n+1, 2n+1}
\end{pmatrix}
\end{gather*}
for some matrices~$A,B,C, D \in {\rmM}_n (k)$ and some scalars $d_{i, j} \in k$, for all~$i, j = 1, \dots, 2n+1$.
We denote $A = (a_{ij})$, $B = (b_{ij})$, $C = (c_{ij})$, $D = (d_{ij})$.
It remains to check the compatibility condition~\eqref{lambderivari} for~$\Delta$, i.e.\
\begin{gather*}
\Delta ([g, h]) = [\Delta(g), h] + [g, \Delta (h)] + \lambda(h) \Delta (g) - \lambda(g) \Delta (h)
\end{gather*}
for all $g \neq h \in \{E_1, \dots, E_n, F_1, \dots, F_n, G\}$.
As this is a~routine computation we will only indicate the main steps of the proof.
We can easily see that the compatibility condition~\eqref{lambderivari} holds
for $(g, h) = (E_i, E_j)$ if and only if $d_{2n+1,i} = 0$, for all $i = 1, \dots, n$.
In the same way~\eqref{lambderivari} holds for $(g, h) = (F_i, F_j)$ if and only if $d_{2n+1, n + i} = 0$, for all $i = 1, \dots, n$.
This shows that $\Delta $ has the form~\eqref{primac}, that is the f\/irst $2n$ entries from the last row of the matrix~$\Delta$
are all zeros and we will denote the last column of~$\Delta$ by $(d_{1, 2n+1}, \dots, d_{2n+1, 2n+1}) = \delta = (\delta_1, \dots,
\delta_{2n+1})$.
It follows from here that~\eqref{lambderivari} holds trivially for the pair $(g, h) = (E_i, F_j)$.
An easy computation shows that~\eqref{lambderivari} holds for $(g, h) = (E_i, G)$ if and only if the following equation holds
\begin{gather*}
(1 - \lambda_0) \left(\sum\limits_{j=1}^n a_{j, i} E_j + \sum\limits_{j=1}^n c_{j, i} F_j \right) = \sum\limits_{j=1}^n a_{j, i}
E_j - \sum\limits_{j=1}^n c_{j, i} F_j + \delta_{2n+1} E_i,
\end{gather*}
which is equivalent to $- \lambda_0 A= \delta_{2n+1} I_n$ and $(2 - \lambda_0) C = 0$, i.e.~the f\/irst and the third equations
from~\eqref{primaa}.
A~similar computation shows that~\eqref{lambderivari} holds for $(g, h) = (G, F_i)$ if and only if $(2 + \lambda_0) B = 0$ and
$\lambda_0 D = \delta_{2n+1} I_n$ and the proof is f\/inished.
\end{proof}
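
Since the verif\/ication above is only sketched, a direct numerical check may be reassuring. The following sketch (purely illustrative, over the reals, with the basis ordered as $E_1, \dots, E_n$, $F_1, \dots, F_n$, $G$) takes $n = 2$ and $\lambda_0 = 2$, builds $\Delta$ from~\eqref{primac} with a random block $C$ and a random $\delta$, the blocks $A$ and $D$ being determined by~\eqref{primaa}, and conf\/irms the twisted derivation identity~\eqref{lambderivari} on all pairs of basis vectors.
\begin{verbatim}
import numpy as np

n = 2
dim = 2*n + 1                      # basis: E_1, E_2, F_1, F_2, G
G = 2*n
C0 = np.zeros((dim, dim, dim))     # structure constants of l(2n+1, k)
e = np.eye(dim)
for i in range(n):
    C0[:, i, G] = e[i];   C0[:, G, i] = -e[i]          # [E_i, G] = E_i
    C0[:, G, n+i] = e[n+i]; C0[:, n+i, G] = -e[n+i]    # [G, F_i] = F_i

def br(x, y):
    return np.einsum('kij,i,j->k', C0, x, y)

lam0 = 2.0
delta = np.random.randn(dim)       # (delta_1, ..., delta_{2n+1})
A = -delta[-1]/lam0 * np.eye(n)
D = delta[-1]/lam0 * np.eye(n)
B = np.zeros((n, n))
Cblk = np.random.randn(n, n)       # the block C is free when lambda_0 = 2
Delta = np.zeros((dim, dim))
Delta[:n, :n], Delta[:n, n:2*n] = A, B
Delta[n:2*n, :n], Delta[n:2*n, n:2*n] = Cblk, D
Delta[:, -1] = delta               # last column of Delta is delta
lam = np.zeros(dim); lam[G] = lam0 # lambda(E_i) = lambda(F_i) = 0, lambda(G) = lambda_0

err = max(np.abs(Delta @ br(e[i], e[j])
                 - br(Delta @ e[i], e[j]) - br(e[i], Delta @ e[j])
                 - lam[j]*(Delta @ e[i]) + lam[i]*(Delta @ e[j])).max()
          for i in range(dim) for j in range(dim))
print("twisted derivation defect:", err)    # of the order of machine precision
\end{verbatim}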
Let $\mathfrak{l} (2n+1, k)_{(A, B, C, D, \lambda_0, \delta)}$ be the bicrossed product $k_0 \bowtie \mathfrak{l} (2n+1, k)$
associated to the matched pair given by
the twisted derivation $\big(A = (a_{ji}), B = (b_{ji}), C = (c_{ji}), D = (d_{ji}),\lambda_0$, $\delta = (\delta_j) \big) \in {\mathcal T} (n)$.
From now on we will use the following convention: if one of the elements of the $6$-tuple ($A$,~$B$,~$C$,~$D$,
$\lambda_0$,~$\delta$) is equal to $0$ then we will omit it when writing down the Lie algebra $\mathfrak{l} (2n+1, k)_{(A,B,C,D,\lambda_0,\delta)}$.
A~basis of $\mathfrak{l} (2n+1, k)_{(A, B, C, D, \lambda_0, \delta)}$ will be denoted by $\{E_i, F_i, G, H \,|\, i = 1, \dots, n\}$:
these Lie algebras can be explicitly described by f\/irst computing the set ${\mathcal T} (n)$ and then using
Proposition~\ref{mpdim1}.
In view of the equations~\eqref{primaa} which def\/ine ${\mathcal T}(n)$, a~case-by-case discussion involving the characteristic of the f\/ield~$k$ and the scalar~$\lambda_0$ is mandatory.
For two sets~$X$ and~$Y$ we shall denote by $X \sqcup Y$ the disjoint union of~$X$ and~$Y$.
As a~conclusion of the above results we obtain:
\begin{Theorem}
\label{teorema11}
$(1)$ If~$k$ is a~field such that $\charop (k) \neq 2$ then
\begin{gather*}
{\mathcal T} (n) \cong \big((k \setminus \{0, \pm 2\}) \times k^{2n+1} \big) \sqcup \big ({\rmM}_n(k)^2 \times k^{2n} \big)
\sqcup \big({\rmM}_n(k) \times k^{2n+1} \big) \sqcup \big({\rmM}_n(k) \times k^{2n+1} \big)
\end{gather*}
and the four families of Lie algebras containing $\mathfrak{l} (2n+1, k)$ as a~subalgebra of codimension $1$ are the following:
$\bullet$ the Lie algebra $\mathfrak{l}^1 (2n+1, k)_{(\lambda_0, \delta)}$ with the bracket given for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i ] = F_i,
\qquad
[E_i, H ] = - \lambda_0^{-1} \delta_{2n+1} E_i,
\\
[F_i, H ] = \lambda_0^{-1} \delta_{2n+1} F_i,
\qquad
[G, H ] = \lambda_0 H + \sum\limits_{j=1}^n \delta_j E_j + \sum\limits_{j=1}^n \delta_{n +j} F_j + \delta_{2n+1} G
\end{gather*}
for all $(\lambda_0, \delta) \in (k \setminus \{0, \pm 2\}) \times k^{2n+1}$.
$\bullet$ the Lie algebra $\mathfrak{l}^2 (2n+1, k)_{(A, D, \delta)}$ with the bracket given for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i ] = F_i,
\qquad
[E_i, H ] = \sum\limits_{j=1}^n a_{ji} E_j,
\\
[F_i, H ] = \sum\limits_{j=1}^n d_{ji} F_j,
\qquad
[G, H ] = \sum\limits_{j=1}^n \delta_j E_j + \sum\limits_{j=1}^n \delta_{n +j} F_j
\end{gather*}
for all $(A = (a_{ij}), D = (d_{ij}), \delta) \in {\rmM}_n(k) \times {\rmM}_n(k) \times k^{2n}$.
$\bullet$ the Lie algebra $\mathfrak{l}^3 (2n+1, k)_{(C, \delta)}$ with the bracket given for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i ] = F_i,
\qquad
[E_i, H ] = - 2^{-1} \delta_{2n+1} E_i + \sum\limits_{j=1}^n c_{ji} F_j,
\\
[F_i, H ] = 2^{-1} \delta_{2n+1} F_i,
\qquad
[G, H ] = 2 H + \sum\limits_{j=1}^n \delta_j E_j + \sum\limits_{j=1}^n \delta_{n +j} F_j + \delta_{2n+1} G
\end{gather*}
for all $(C = (c_{ij}), \delta) \in {\rmM}_n(k) \times k^{2n+1}$.
$\bullet$ the Lie algebra $\mathfrak{l}^4 (2n+1, k)_{(B, \delta)} $ with the bracket given for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i ] = F_i,
\qquad
[F_i, H ] = \sum\limits_{j=1}^n b_{ji} E_j - 2^{-1} \delta_{2n+1} F_i,
\\
[E_i, H ] = 2^{-1} \delta_{2n+1} E_i,
\qquad
[G, H ] = - 2 H + \sum\limits_{j=1}^n \delta_j E_j + \sum\limits_{j=1}^n \delta_{n +j} F_j + \delta_{2n+1} G
\end{gather*}
for all $(B = (b_{ij}), \delta) \in {\rmM}_n(k) \times k^{2n+1}$.
$(2)$ If $\charop (k) = 2$ then
\begin{gather*}
{\mathcal T} (n) \cong \big({\rmM}_n(k)^4 \times k^{2n} \big) \sqcup \big (k^* \times k^{2n+1} \big)
\end{gather*}
and the two families of Lie algebras containing $\mathfrak{l} (2n+1, k)$ as a~subalgebra of codimension $1$ are the following:
$\bullet$ the Lie algebra $\mathfrak{l}_1 (2n+1, k)_{(A, B, C, D, \delta)}$ with the bracket given for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i ] = F_i,
\qquad
[E_i, H ] = \sum\limits_{j = 1}^n \big(a_{ji} E_j + c_{ji} F_j \big),
\\
[F_i, H ] = \sum\limits_{j = 1}^n \big(b_{ji} E_j + d_{ji} F_j \big),
\qquad
[G, H ] = \sum\limits_{j=1}^n \delta_j E_j + \sum\limits_{j=1}^n \delta_{n +j} F_j
\end{gather*}
for all $(A, B, C, D, \delta) \in {\rmM}_n(k)^4 \times k^{2n}$.
$\bullet$ the Lie algebra $\mathfrak{l}_2 (2n+1, k)_{(\lambda_0, \delta)}$ with the bracket given for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i ] = F_i,
\qquad
[E_i, H ] = - \lambda_0^{-1} \delta_{2n+1} E_i,
\\
[F_i, H ] = \lambda_0^{-1} \delta_{2n+1} F_i,
\qquad
[G, H ] = \lambda_0 H + \sum\limits_{j=1}^n \delta_j E_j + \sum\limits_{j=1}^n \delta_{n +j} F_j + \delta_{2n+1} G
\end{gather*}
for all $(\lambda_0, \delta) \in k^* \times k^{2n+1}$.
\end{Theorem}
\begin{proof}
The proof relies on the use of Propositions~\ref{mpdim1} and~\ref{tw2n1} as well as the equations~\eqref{primaa}
def\/ining ${\mathcal T} (n)$.
Besides the discussion on the characteristic of~$k$ it is also necessary to consider whether $\lambda_0$ belongs to the set
$\{0, 2, -2\}$.
In the case that $\charop (k) \neq 2$, the f\/irst Lie algebra listed is the bicrossed product which corresponds to the case
when $\lambda_0 \notin \{0, 2, -2\}$.
In this case, we can easily see that $\big(A, B, C, D, \lambda_0, \delta = (\delta_j) \big) \in {\mathcal T} (n)$ if and only
if $B = C = 0$, $A = - \lambda_0^{-1} \delta_{2n+1} I_n$ and $D = \lambda_0^{-1} \delta_{2n+1} I_n$.
The Lie algebra $\mathfrak{l}^1 (2n+1, k)_{(\lambda_0, \delta)}$ is exactly the bicrossed product $k_0 \bowtie \mathfrak{l}
(2n+1, k)$ corresponding to this twisted derivation.
The Lie algebra $\mathfrak{l}^2 (2n+1, k)_{(A, D, \delta)}$ is the bicrossed product $k_0 \bowtie \mathfrak{l} (2n+1, k)$
corresponding to the case $\lambda_0 = 0$ while the last two Lie algebras are the bicrossed products $k_0 \bowtie \mathfrak{l}
(2n+1, k)$ associated to the case when $\lambda_0 = 2$ and respectively $\lambda_0 = -2$.
If the characteristic of~$k$ is equal to $2$ we distinguish the following two possibilities: the Lie algebra $\mathfrak{l}_1
(2n+1, k)_{(A, B, C, D, \delta)}$ is the bicrossed product $k_0 \bowtie \mathfrak{l} (2n+1, k)$ associated to $\lambda_0 = 0$
while the Lie algebra $\mathfrak{l}_2 (2n+1, k)_{(\lambda_0, \delta)}$ is the same bicrossed product but associated to
$\lambda_0 \neq 0$.
\end{proof}
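
Each of the brackets listed in Theorem~\ref{teorema11} can also be checked directly against the Jacobi identity. As an illustration (not part of the proof, over the reals), the sketch below does this for the f\/irst family $\mathfrak{l}^1 (2n+1, k)_{(\lambda_0, \delta)}$ with $n = 2$, a f\/ixed $\lambda_0 \notin \{0, \pm 2\}$ and a random $\delta$.
\begin{verbatim}
import numpy as np

n = 2
lam0 = 1.7                                   # any scalar not in {0, 2, -2}
delta = np.random.randn(2*n + 1)             # (delta_1, ..., delta_{2n+1})
dim = 2*n + 2                                # basis: E_1,..,E_n, F_1,..,F_n, G, H
G, H = 2*n, 2*n + 1
C = np.zeros((dim, dim, dim))
e = np.eye(dim)
def setbr(i, j, vec):
    C[:, i, j] = vec
    C[:, j, i] = -vec
for i in range(n):
    setbr(i, G, e[i])                                   # [E_i, G] = E_i
    setbr(G, n + i, e[n + i])                           # [G, F_i] = F_i
    setbr(i, H, -delta[2*n]/lam0 * e[i])                # [E_i, H]
    setbr(n + i, H, delta[2*n]/lam0 * e[n + i])         # [F_i, H]
gh = lam0*e[H] + sum(delta[j]*e[j] for j in range(2*n)) + delta[2*n]*e[G]
setbr(G, H, gh)                                         # [G, H]

def br(x, y):
    return np.einsum('kij,i,j->k', C, x, y)

jac = max(np.abs(br(br(e[i], e[j]), e[k]) + br(br(e[j], e[k]), e[i])
                 + br(br(e[k], e[i]), e[j])).max()
          for i in range(dim) for j in range(dim) for k in range(dim))
print("Jacobi defect:", jac)     # of the order of machine precision
\end{verbatim}
The other families can be treated in exactly the same way by editing the bracket table.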
Let~$k$ be a~f\/ield of characteristic $\neq 2$ and $\mathfrak{l}^1 (2n+1, k)_{(\lambda_0, \delta)}$ the Lie algebra of
Theorem~\ref{teorema11}.
In order to keep the computations ef\/f\/icient we will consider $\lambda_0:= 1$ and $\delta:= (0, \dots, 0, 1)$ and we denote by $L
(2n+2, k):= \mathfrak{l}^1 (2n+1, k)_{(1, (0, \dots, 0, 1))}$, the $(2n+2)$-dimensional Lie algebra having a~basis $\{E_i, F_i,
G, H \,|\, i = 1, \dots, n \}$ and the bracket def\/ined for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i] = F_i,
\qquad
[E_i, H ] = - E_i,
\qquad
[F_i, H ] = F_i,
\qquad
[G, H ] = H + G.
\end{gather*}
We consider the Lie algebra extension $kH \subset L (2n+2, k)$, where $kH \cong k_0$ is the Abelian Lie algebra of dimension
$1$.
Of course, $L (2n+2, k)$ factorizes through $kH$ and $\mathfrak{l} (2n+1, k)$,
i.e.~$L (2n+2, k) = kH \bowtie \mathfrak{l} (2n+1,k)$~-- the actions $\triangleleft: \mathfrak{l} (2n+1, k) \times kH \to \mathfrak{l} (2n+1, k)$
and $\triangleright: \mathfrak{l}(2n+1, k) \times kH \to kH$ of the canonical matched pair are given by
\begin{gather}
E_i \triangleleft H:= - E_i,
\qquad
F_i \triangleleft H:= F_i,
\qquad
G \triangleleft H:= G,
\qquad
G\triangleright H:= H
\label{mpcanon}
\end{gather}
and all undef\/ined actions are zero.
Next we compute the set ${\mathcal D}{\mathcal M}(\mathfrak{l} (2n+1, k), kH | (\triangleright, \triangleleft))$ of
all deformation maps of the matched pair $(kH, \mathfrak{l} (2n+1, k), \triangleright, \triangleleft)$ given by~\eqref{mpcanon}.
\begin{Lemma}
\label{defmaps}
Let~$k$ be a~field of characteristic $\neq 2$.
Then there exists a~bijection
\begin{gather*}
{\mathcal D}{\mathcal M} \big(\mathfrak{l} (2n+1, k), kH | (\triangleright, \triangleleft) \big) \cong \big(k^n \setminus
\{0\} \big) \sqcup \big(k^n \times k \big).
\end{gather*}
The bijection is given such that the deformation map $r = r_a: \mathfrak{l} (2n+1, k) \to kH$ associated to $a = (a_i) \in k^n
\setminus \{0\}$ is given~by
\begin{gather}
\label{def1a}
r (E_i):= a_i H,
\qquad
r(F_i):= 0,
\qquad
r (G):= H,
\end{gather}
while the deformation map $r = r_{(b, c)}: \mathfrak{l} (2n+1, k) \to kH$ associated to $(b = (b_i), c) \in k^n \times k$ is
given as follows
\begin{gather}
\label{def2a}
r (E_i):= 0,
\qquad
r(F_i):= b_i H,
\qquad
r (G):= c H
\end{gather}
for all $i = 1, \dots, n$.
\end{Lemma}
\begin{proof}
Any linear map $r: \mathfrak{l} (2n+1, k) \to kH$ is uniquely determined by a~triple $(a = (a_i), b = (b_i), c) \in k^n \times
k^n \times k$ via: $r(E_i):= a_i H$, $r(F_i):= b_i H$ and $r(G):= c H$, for all $i = 1, \dots, n$.
We need to check under what conditions such a~map $r = r_{(a, b, c)}$ is a~deformation map.
Since $kH$ is Abelian, equation~\eqref{factLie} comes down to
\begin{gather}
\label{defabc}
r([x, y]) = r \big(y \triangleleft r(x) - x \triangleleft r(y) \big) + x \triangleright r(y) - y \triangleright r(x),
\end{gather}
which needs to be checked for all~$x, y \in \{E_i, F_i, G \,|\, i = 1, \dots, n \}$.
Notice that~\eqref{defabc} is symmetrical i.e.~if~\eqref{defabc} is fulf\/illed for $(x, y)$ then~\eqref{defabc} is also fulf\/illed
for $(y, x)$.
By a~routinely computation it can be seen that $r = r_{(a, b, c)}$ is a~deformation map if and only if
\begin{gather}
\label{defabc11}
a_i b_j = 0,
\qquad
(1 - c) a_i = 0
\end{gather}
for all~$i, j = 1, \dots, n$.
Indeed,~\eqref{defabc} holds for $(x, y) = (E_i, F_j)$ if and only if $a_i b_j = 0$ and it holds for $(x, y) = (E_i, G)$ if and
only if $a_i = a_i c$.
The other cases left to study are either automatically fulf\/illed or equivalent to one of the two conditions above.
The f\/irst condition of~\eqref{defabc11} divides the description of deformation maps into two cases: the f\/irst one corresponds to
$a = (a_i) \neq 0$ and we automatically have $b = 0$ and $c = 1$.
The second case corresponds to $a:= 0$ which implies that~\eqref{defabc11} holds for any $(b, c) \in k^n \times k$.
\end{proof}
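
The count in Lemma~\ref{defmaps} can also be conf\/irmed by brute force over a small f\/inite f\/ield. The sketch below (purely illustrative) takes $n = 1$ and $k = \mathbb{F}_5$, identif\/ies $kH$ with $k$, runs over all linear maps $r = r_{(a, b, c)}$ and tests the def\/ining equation~\eqref{defabc} on the basis pairs; it f\/inds $4 + 25 = 29$ deformation maps, in agreement with the bijection onto $\big(k \setminus \{0\} \big) \sqcup \big(k \times k \big)$.
\begin{verbatim}
from itertools import product

p = 5                                    # k = F_5, n = 1; basis of l(3, k): E, F, G
def br(x, y):                            # [E, G] = E, [G, F] = F
    xE, xF, xG = x; yE, yF, yG = y
    return ((xE*yG - xG*yE) % p, (-(xF*yG - xG*yF)) % p, 0)
def act(x, t):                           # x <| (tH):  E <| H = -E, F <| H = F, G <| H = G
    return ((-t*x[0]) % p, (t*x[1]) % p, (t*x[2]) % p)
def tri(x, t):                           # x |> (tH):  only G |> H = H is nonzero
    return (x[2]*t) % p

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
count = 0
for a, b, c in product(range(p), repeat=3):
    def r(x):                            # r(E) = aH, r(F) = bH, r(G) = cH
        return (a*x[0] + b*x[1] + c*x[2]) % p
    def rhs(x, y):
        diff = tuple((u - v) % p for u, v in zip(act(y, r(x)), act(x, r(y))))
        return (r(diff) + tri(x, r(y)) - tri(y, r(x))) % p
    if all(r(br(x, y)) == rhs(x, y) for x in basis for y in basis):
        count += 1
print(count)                             # 29 = 4 + 25
\end{verbatim}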
The next result describes all deformations of $\mathfrak{l} (2n+1, k)$ associated to the canonical matched pair $(kH,
\mathfrak{l} (2n+1, k), \triangleright, \triangleleft)$ given by~\eqref{mpcanon}.
\begin{Proposition}
\label{mpdef}
Let~$k$ be a~field of characteristic $\neq 2$ and the extension of Lie algebras $kH \subset L (2n+2, k)$.
Then a~Lie algebra $\mathfrak{C}$ is a~complement of $kH$ in $L (2n+2, k)$ if and only if $\mathfrak{C}$ is isomorphic to one of
the Lie algebras from the three families defined below:
$\bullet$ the Lie algebra $\mathfrak{l}_{(a)} (2n+1, k)$ having the bracket def\/ined for any $i, j = 1, \dots, n$~by{\samepage
\begin{gather}
\label{1famdef}
[E_i, E_j]_a:= a_i E_j - a_j E_i,
\qquad
[E_i, F_j]_a:= -a_i F_j,
\qquad
[E_i, G]_a:= - a_i G
\end{gather}
for all $a = (a_i) \in k^n \setminus \{0\}$.}
$\bullet$ the Lie algebra $\mathfrak{l}'_{(b)} (2n+1, k)$ having the bracket def\/ined for any $i, j = 1, \dots, n$ by
\begin{gather*}
[E_i, F_j]_{b}:= - b_j E_i,
\qquad
[E_i, G]_{b}:= - E_i,
\\
[F_i, F_j]_{b}:= b_j F_i - b_i F_j,
\qquad
[F_i, G]_{b}:= F_i - b_i G
\end{gather*}
for all $b = (b_i) \in k^n$.
$\bullet$ the Lie algebra $\mathfrak{l}''_{(b)} (2n+1, k)$ having the bracket def\/ined for any $i, j = 1, \dots, n$~by
\begin{gather*}
[E_i, F_j]_{b}:= - b_j E_i,
\qquad
[F_i, F_j]_{b}:= b_j F_i - b_i F_j,
\qquad
[F_i, G]_{b}:= - b_i G
\end{gather*}
for all $b = (b_i)\in k^n$.
Thus the factorization index $[L (2n+2, k): kH]^f$ is equal to the number of isomorphism classes of Lie algebras in the set
\begin{gather*}
\{\mathfrak{l}_{(a)} (2n+1, k), \mathfrak{l}'_{(b)} (2n+1, k), \mathfrak{l}''_{(b)} (2n+1, k) \,|\, a\in k^n \setminus \{0\}, b \in k^n \}.
\end{gather*}
\end{Proposition}
\begin{proof}
$\mathfrak{l} (2n+1, k)$ is a~complement of $kH$ in $L (2n+2, k)$ and we can write $L (2n+2, k) = kH \bowtie \mathfrak{l} (2n+1,
k)$, where the bicrossed product is associated to the matched pair given in~\eqref{mpcanon}.
Hence, by~\cite[Theorem~4.3]{am-2013a} any other complement $\mathfrak{C}$ of $kH$ in $L (2n+2, k)$ is isomorphic to
an~$r$-deformation of $\mathfrak{l} (2n+1, k)$, for some deformation map $r: \mathfrak{l} (2n+1, k) \to kH$ of the matched
pair~\eqref{mpcanon}.
These are described in Lemma~\ref{defmaps}.
The Lie algebra $\mathfrak{l}_{(a)} (2n+1, k)$ is precisely the $r_a$-deformation of $\mathfrak{l} (2n+1, k)$, where $r_a$ is
given by~\eqref{def1a}.
On the other hand the $r_{(b, c)}$-deformation of $\mathfrak{l} (2n+1, k)$, where $r_{(b, c)}$ is given by~\eqref{def2a} for
some $(b = (b_i), c) \in k^n \times k$, is the Lie algebra denoted by $\mathfrak{l}_{(b, c)} (2n+1, k)$ having the bracket given
for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, F_j]_{(b, c)}:= - b_j E_i,
\qquad
[E_i, G]_{(b, c)}:= (1-c) E_i,
\\
[F_i, F_j]_{(b, c)}:= b_j F_i - b_i F_j,
\qquad
[F_i, G]_{(b, c)}:= (c-1) F_i - b_i G
\end{gather*}
for all $(b = (b_i), c) \in k^n \times k$.
Now, for $c \neq 1$ we can see that $\mathfrak{l}_{(b, c)} (2n+1, k) \cong \mathfrak{l}'_{(b)} (2n+1, k)$ (by sending~$G$ to
$(c-1)^{-1} G$) while $\mathfrak{l}_{(b, 1)} (2n+1, k) = \mathfrak{l}''_{(b)} (2n+1, k)$ and we are done.
\end{proof}
\begin{Remark}\label{cind}
An attempt to compute $[L (2n+2, k): kH]^f$ for an arbitrary integer~$n$ is hopeless.
However, one can easily see that $\mathfrak{l}'_{(0)} (2n+1, k) = \mathfrak{l} (2n+1, k)$ and $\mathfrak{l}''_{(0)} (2n+1, k) =
k^{2n+1}_0$, the Abelian Lie algebra of dimension $2n +1$.
Thus, $[L (2n+2, k): kH]^f \geq 2$.
The case $n = 1$ is presented below.
\end{Remark}
\begin{Example}
\label{n1}
Let~$k$ be a~f\/ield of characteristic $\neq 2$ and consider $\{E, F, G\}$ the basis of $\mathfrak{l} (3, k)$ with the bracket
given by $[E, G] = E$ and $[G, F] = F$.
Then, the factorization index $[L (4, k): kH]^f = 3$.
More precisely, the isomorphism classes of all complements of $kH$ in $L (4, k)$ are represented by the following three Lie
algebras: $\mathfrak{l} (3, k)$, $k^3_0$ and the Lie algebra $L_{-1}$ having $\{E, F, G\}$ as a~basis and the bracket given~by
\begin{gather*}
[F, E] = F,
\qquad
[E, G] = - G.
\end{gather*}
Since $\charop (k) \neq 2$ the Lie algebras $\mathfrak{l} (3, k)$ and $L_{-1}$ are not isomorphic~\cite[Exercise 3.2]{EW}.
For $a\in k^*$ the Lie algebra $\mathfrak{l}_{(a)} (3, k)$ has the bracket given by $[E, F] = -a F$ and $[E, G] = -a G$.
Thus, $\mathfrak{l}_{(a)} (3, k) \cong \mathfrak{l}_{(1)} (3, k)$, and the latter is isomorphic to the Lie algebra $L_{-1}$.
On the other hand we have: $\mathfrak{l}''_{(0)} (3, k) = k^3_0$ and for $b \neq 0$ we can easily see that $\mathfrak{l}''_{(b)}
(3, k) \cong \mathfrak{l}''_{(1)} (3, k) \cong \mathfrak{l} (3, k) $.
Finally, $\mathfrak{l}'_{(0)} (3, k) = \mathfrak{l} (3, k) $ and for $b \neq 0$ we have that $\mathfrak{l}'_{(b)} (3, k) \cong
\mathfrak{l}'_{(1)} (3, k)$~-- the latter is the Lie algebra having $\{f_1 := E, f_2 := F, f_3 := -G \}$ as a~basis and the bracket given by
$[f_1, f_2] = - f_1$, $[f_1, f_3] = f_1$ and $[f_3, f_2] = f_2 + f_3$.
This Lie algebra is also isomorphic to $\mathfrak{l} (3, k)$, via the isomorphism which sends $f_1$ to~$E$, $f_3$ to~$G$ and
$f_2$ to $F - G$.
\end{Example}
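
The last isomorphism can be verif\/ied mechanically: the sketch below (purely illustrative, over the reals) encodes both brackets by structure constants and checks that the stated linear map intertwines them and is invertible.
\begin{verbatim}
import numpy as np

def setbr(C, i, j, vec):
    C[:, i, j] = np.array(vec, dtype=float)
    C[:, j, i] = -np.array(vec, dtype=float)

# source: the bracket of l'_(1)(3, k) in the basis f1, f2, f3
Cs = np.zeros((3, 3, 3))
setbr(Cs, 0, 1, [-1, 0, 0])     # [f1, f2] = -f1
setbr(Cs, 0, 2, [ 1, 0, 0])     # [f1, f3] =  f1
setbr(Cs, 2, 1, [ 0, 1, 1])     # [f3, f2] =  f2 + f3

# target: the bracket of l(3, k) in the basis E, F, G
Ct = np.zeros((3, 3, 3))
setbr(Ct, 0, 2, [1, 0, 0])      # [E, G] = E
setbr(Ct, 2, 1, [0, 1, 0])      # [G, F] = F

def br(C, x, y):
    return np.einsum('kij,i,j->k', C, x, y)

# psi: f1 -> E, f2 -> F - G, f3 -> G   (columns are the images)
psi = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., -1., 1.]])

e = np.eye(3)
defect = max(np.abs(psi @ br(Cs, e[i], e[j]) - br(Ct, psi @ e[i], psi @ e[j])).max()
             for i in range(3) for j in range(3))
print("morphism defect:", defect, "  det(psi) =", np.linalg.det(psi))   # 0.0 and 1.0
\end{verbatim}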
Let~$k$ be a~f\/ield of characteristic $\neq 2$ and $\mathfrak{l}^2 (2n+1, k)_{(A, D, \delta)}$ the Lie algebra of
Theorem~\ref{teorema11}.
In order to simplify computations we will assume $A = D:= I_n$ and $\delta:= (1, 0, \dots, 0, 1)$.
Let $\mathfrak{m} (2n+2, k):= \mathfrak{l}^2 (2n+1, k)_{(I_n, I_n, (1, 0, \dots, 0, 1))}$ be the $(2n+2)$-dimensional Lie
algebra having $\{E_i, F_i, G, H \,|\, i = 1, \dots, n \}$ as a~basis and the bracket def\/ined for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G ] = E_i,
\qquad
[G, F_i ] = F_i,
\qquad
[E_i, H ] = E_i,
\qquad
[F_i, H ] = F_i,
\qquad
[G, H ] = E_1 + F_n.
\end{gather*}
We consider the Lie algebra extension $kH \subset \mathfrak{m} (2n+2, k)$, where $kH \cong k_0$ is the Abelian Lie algebra of
dimension~$1$.
Of course, $\mathfrak{m} (2n+2, k)$ factorizes through $kH$ and $\mathfrak{l} (2n+1, k)$, i.e.~$\mathfrak{m}(2n+2,k)=kH\bowtie\mathfrak{l} (2n+1, k)$.
Moreover, the canonical matched pair $\triangleleft: \mathfrak{l} (2n+1, k) \times kH \to \mathfrak{l} (2n+1, k)$ and
$\triangleright: \mathfrak{l} (2n+1, k) \times kH \to kH$ associated to this factorization is given as follows:
\begin{gather}
E_i \triangleleft H:= E_i,
\qquad
F_i \triangleleft H:= F_i,
\qquad
G \triangleleft H:= E_{1} + F_{n}
\label{mpcanon2}
\end{gather}
and all undef\/ined actions are zero.
In particular, we should notice that the left action $\triangleright: \mathfrak{l} (2n+1, k) \times kH \to kH$ is trivial.
Next, we describe the set ${\mathcal D}{\mathcal M}(\mathfrak{l} (2n+1, k), kH | (\triangleright, \triangleleft))$
of all deformation maps of the matched pair $(kH, \mathfrak{l} (2n+1, k), \triangleright, \triangleleft)$ given
by~\eqref{mpcanon2}.
\begin{Lemma}
\label{defmaps2}
Let~$k$ be a~field of characteristic $\neq 2$.
Then there exists a~bijection
\begin{gather*}
{\mathcal D}{\mathcal M} \big(\mathfrak{l} (2n+1, k), kH \big| (\triangleright, \triangleleft) \big) \cong \big(k^n \setminus
\{0\} \big) \sqcup \big(k^n \setminus \{0\} \big) \sqcup k.
\end{gather*}
The bijection is given such that the deformation map $r = r_a: \mathfrak{l} (2n+1, k) \to kH$ associated to $a = (a_i) \in k^n
\setminus \{0\}$ is given~by
\begin{gather}
\label{def1a1}
r (E_i):= a_i H,
\qquad
r(F_i):= 0,
\qquad
r (G):= (a_{1} - 1)H
\end{gather}
the deformation map $r = r_b: \mathfrak{l} (2n+1, k) \to kH$ associated to another $b = (b_i) \in k^n \setminus \{0\}$ is given
by
\begin{gather}
\label{def1a2}
r (E_i):= 0,
\qquad
r(F_i):= b_{i} H,
\qquad
r (G):= (b_{n} + 1)H,
\end{gather}
while the deformation map $r = r_{c}: \mathfrak{l} (2n+1, k) \to kH$ associated to $c \in k$ is given~by
\begin{gather}
\label{def2a3}
r (E_i):= 0,
\qquad
r(F_i):= 0,
\qquad
r (G):= c H
\end{gather}
for all $i = 1, \dots, n$.
\end{Lemma}
\begin{proof}
Any linear map $r: \mathfrak{l} (2n+1, k) \to kH$ is uniquely determined by a~triple $(a = (a_i)$, $b = (b_i), c) \in k^n \times
k^n \times k$ via: $r(E_i):= a_i H$, $r(F_i):= b_i H$ and $r(G):= c H $, for all $i = 1, \dots, n$.
We only need to check when such a~map $r = r_{(a, b, c)}$ is a~deformation map.
Since $kH$ is the Abelian Lie algebra and the left action $\triangleright: \mathfrak{l} (2n+1, k) \times kH \to kH$ is trivial,
equation~\eqref{factLie} comes down~to
\begin{gather}
\label{defabc2}
r([x, y]) = r \big(y \triangleleft r(x) - x \triangleleft r(y) \big).
\end{gather}
Since~\eqref{defabc2} is symmetrical it is enough to check it only for pairs of the form $(E_{i}, E_{j})$, $(F_{i}, F_{j})$,
$(E_{i}, F_{j})$, $(E_{i}, G)$, and $(F_{i}, G)$, for all~$i, j = 1, \dots, n$.
It is straightforward to see that~\eqref{defabc2} is trivially fulf\/illed for the pairs $(E_{i}, E_{j})$, $(F_{i}, F_{j})$ and
$(E_{i}, F_{j})$.
Moreover,~\eqref{defabc2} evaluated for $(E_{i}, G)$ and respectively $(F_{i}, G)$ yields $a_{i}(a_{1} + b_{n} - c - 1) = 0$ and
$b_{i}(a_{1} + b_{n} - c + 1) = 0$ for all $i = 1, \dots, n$.
Therefore, keeping in mind that we work over a~f\/ield of characteristic $\neq 2$, the triples $(a = (a_i), b = (b_i), c) \in k^n
\times k^n \times k$ for which $r_{(a, b, c)}$ becomes a~deformation map are given as follows: $(a = (a_i) \in k^{n}\setminus
\{0\}, b = 0, c = a_{1} - 1)$, $(a = 0, b = (b_i) \in k^{n}\setminus \{0\}, c = b_{n} + 1)$ and $(a = 0, b = 0, c \in k)$.
The corresponding deformation maps are exactly those listed above.
\end{proof}
The next result describes all deformations of $\mathfrak{l} (2n+1, k)$ associated to the canonical matched pair $(kH,
\mathfrak{l} (2n+1, k), \triangleright, \triangleleft)$ given by~\eqref{mpcanon2}.
\begin{Proposition}
\label{mpdef2}
Let~$k$ be a~field of characteristic $\neq 2$ and the extension of Lie algebras $kH \subset \mathfrak{m} (2n+2, k)$.
Then a~Lie algebra $\mathfrak{C}$ is a~complement of $kH$ in $\mathfrak{m} (2n+2, k)$ if and only if $\mathfrak{C}$ is
isomorphic to one of the Lie algebras from the three families defined below:
$\bullet$ the Lie algebra $\overline{\mathfrak{l}}_{(a)} (2n+1, k)$ having the bracket def\/ined for any $i, j = 1, \dots, n$ by
\begin{gather*}
[E_{i}, E_{j}]_{a}:= a_{j} E_{i} - a_{i} E_{j},
\qquad
[E_i, F_j]_{a}:= - a_i F_j,
\\
[E_i, G]_{a}:= a_{1} E_{i} - a_{i} (E_{1} + F_{n}),
\qquad
[G, F_i]_{a}:= (2-a_{1}) F_i
\end{gather*}
for all $a = (a_i) \in k^n \setminus \{0\}$.
$\bullet$ the Lie algebra $\overline{\mathfrak{l'}}_{(b)} (2n+1, k)$ having the bracket def\/ined for any $i, j = 1, \dots, n$ by
\begin{gather*}
[F_{i}, F_{j}]_{b}:= b_{j} F_{i} - b_{i} F_{j},
\qquad
[E_i, F_j]_{b}:= b_j E_i,
\\
[E_i, G]_{b}:= (2 + b_n) E_i,
\qquad
[G, F_i]_{b}:= b_{i}(E_{1} + F_{n}) - b_{n} F_{i}
\end{gather*}
for all $b = (b_i)\in k^n \setminus \{0\}$.
$\bullet$ the Lie algebra $\overline{\mathfrak{l''}}_{(c)} (2n+1, k)$ having the bracket def\/ined for any $i = 1, \dots, n$ by
\begin{gather*}
[E_i, G]_c:= (1+c) E_i,
\qquad
[G, F_i]_c:= (1-c) F_i
\end{gather*}
for all $c \in k$.
Thus the factorization index $[\mathfrak{m} (2n+2, k): kH]^f$ is equal to the number of isomorphism classes of Lie algebras in the set
\begin{gather*}
\big\{\overline{\mathfrak{l}}_{(a)} (2n+1, k), \,\overline{\mathfrak{l}'}_{(b)} (2n+1, k), \, \overline{\mathfrak{l}''}_{(c)} (2n+1, k) \,|\,
a, b \in k^n \setminus \{0\},\, c \in k \big\}.
\end{gather*}
\end{Proposition}
\begin{proof}
As in the proof of Proposition~\ref{mpdef} we make use of~\cite[Theorem~4.3]{am-2013a}.
More precisely, this implies that all complements $\mathfrak{C}$ of $kH$ in $\mathfrak{m} (2n+2, k)$ are isomorphic to
an~$r$-deformation of $\mathfrak{l} (2n+1, k)$, for some deformation map $r: \mathfrak{l} (2n+1, k) \to kH$ of the matched
pair~\eqref{mpcanon2}.
These are described in Lemma~\ref{defmaps2}.
By a~straightforward computation it can be seen that $\overline{\mathfrak{l}}_{(a)} (2n+1, k)$ is exactly the complement
corresponding to the deformation map given by~\eqref{def1a1}, $\overline{\mathfrak{l'}}_{(b)} (2n+1, k)$ corresponds to the
deformation map given by~\eqref{def1a2} while $\overline{\mathfrak{l''}}_{(c)} (2n+1, k)$ is implemented by the deformation map
given by~\eqref{def2a3}.
\end{proof}
\begin{Example}
Let~$k$ be a~f\/ield of characteristic $\neq 2$.
Then, the factorization index $[\mathfrak{m} (4, k): kH]^f$ depends essentially on the f\/ield~$k$.
We will prove that all complements of $kH$ in $\mathfrak{m} (4, k)$ are isomorphic to a~Lie algebra of the form:
\begin{gather*}
L_{\alpha}: \ [x, z] = x,
\qquad
[y, z] = \alpha y,
\qquad
\text{with}
\quad
\alpha \in k.
\end{gather*}
Hence, $[\mathfrak{m} (4, k): kH]^f = \infty $, if $|k| = \infty $ and $[\mathfrak{m} (4, k): kH]^f = (1 + p^n)/2$, if $|k| =
p^n$, where $p\geq 3$ is a~prime number.
Indeed, for $n=1$, the Lie algebras described in Proposition~\ref{mpdef2} become
\begin{gather*}
\overline{\mathfrak{l}}_{(a)} (3, k): \quad [E, F]_{a}:= - aF, \qquad [E, G]_{a}:= - aF, \qquad [G, F]_{a}:= (2-a) F,
\\
\overline{\mathfrak{l'}}_{(b)} (3, k): \quad [E, F]_{b}:= b E, \qquad [E, G]_{b}:= (2 + b) E, \qquad [G, F]_{b}:= b E,
\\
\overline{\mathfrak{l''}}_{(c)} (3, k): \quad [E, G]_c:= (1+c) E, \qquad [G, F]_c:= (1-c) F,
\end{gather*}
$a, b \in k^{*}$, $c \in k$.
To start with, we should notice that the f\/irst two Lie algebras $\overline{\mathfrak{l}}_{(a)}$ and
$\overline{\mathfrak{l'}}_{(b)}$ are isomorphic for all~$a, b \in k^{*}$.
The isomorphism $\gamma: \overline{\mathfrak{l}}_{(a)} \to \overline{\mathfrak{l'}}_{(b)}$ is given as follows
\begin{gather*}
\begin{split}
& \gamma(E) := 2^{-1}(b-a) E + 2^{-1} (b-a+2) F + 2^{-1}(a-b) G,
\qquad
\gamma(F):= E,
\\
& \gamma(G) := 2^{-1} (b-a+4) E + 2^{-1}(b-a+4) F + 2^{-1}(a-b-2) G.
\end{split}
\end{gather*}
Moreover, the map $\varphi: \overline{\mathfrak{l}}_{(a)} \to L_{0}$ given by
\begin{gather*}
\varphi(E):= y + az,
\qquad
\varphi(F):= x,
\qquad
\varphi(G):= x + y + (a-2) z
\end{gather*}
is an isomorphism of Lie algebras for all $a \in k^{*}$.
Therefore, the f\/irst two Lie algebras are both isomorphic to $L_{0}$ for all~$a, b \in k^{*}$.
We are left to study the family $\overline{\mathfrak{l''}}_{(c)}$.
If $c = -1$ then $\overline{\mathfrak{l''}}_{(-1)}$ is again isomorphic to $L_{0}$.
Suppose now that $c \neq -1$.
Then the map $\psi: \overline{\mathfrak{l''}}_{(c)} \to L_{(c-1)(c+1)^{-1}}$ given by
\begin{gather*}
\psi(E):= x,
\qquad
\psi(F):= y,
\qquad
\psi(G):= (c+1) z
\end{gather*}
is an isomorphism of Lie algebras.
Finally, we point out here that if $\alpha \notin \{\beta, \beta^{-1} \}$ then $L_{\alpha}$ is not isomorphic to $L_{\beta}$
(see, for instance~\cite[Exercise 3.2]{EW}) and the conclusion follows.
\end{Example}
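
As before, the isomorphism $\varphi: \overline{\mathfrak{l}}_{(a)} \to L_{0}$ can be verif\/ied mechanically; the sketch below (purely illustrative, over the reals) does so for one value of $a$.
\begin{verbatim}
import numpy as np

a = 2.3                           # any nonzero scalar

def setbr(C, i, j, vec):
    C[:, i, j] = np.array(vec, dtype=float)
    C[:, j, i] = -np.array(vec, dtype=float)

# source: bar-l_(a)(3, k) in the basis E, F, G
Cs = np.zeros((3, 3, 3))
setbr(Cs, 0, 1, [0, -a, 0])       # [E, F] = -a F
setbr(Cs, 0, 2, [0, -a, 0])       # [E, G] = -a F
setbr(Cs, 2, 1, [0, 2 - a, 0])    # [G, F] = (2 - a) F

# target: L_0 in the basis x, y, z:  [x, z] = x, [y, z] = 0
Ct = np.zeros((3, 3, 3))
setbr(Ct, 0, 2, [1, 0, 0])        # [x, z] = x

def br(C, u, v):
    return np.einsum('kij,i,j->k', C, u, v)

# phi(E) = y + a z, phi(F) = x, phi(G) = x + y + (a - 2) z   (columns are the images)
phi = np.array([[0., 1., 1.],
                [1., 0., 1.],
                [a,  0., a - 2]])

e = np.eye(3)
defect = max(np.abs(phi @ br(Cs, e[i], e[j]) - br(Ct, phi @ e[i], phi @ e[j])).max()
             for i in range(3) for j in range(3))
print("morphism defect:", defect, "  det(phi) =", np.linalg.det(phi))   # 0.0 and 2.0
\end{verbatim}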
\begin{Remark}\label{2noi}
We end this section with two more applications.
The deformation of a~given Lie algebra $\mathfrak{h}$ associated to a~matched pair $(\mathfrak{g}, \mathfrak{h}, \triangleright,
\triangleleft)$ of Lie algebras and to a~deformation map~$r$ as def\/ined by~\eqref{rLiedef} is a~very general method of
constructing new Lie algebras out of a~given Lie algebra.
It is therefore natural to ask if the properties of a~Lie algebra are preserved by this new type of deformation.
We will see that in general the answer is negative.
First of all we remark that the Lie algebra $\mathfrak{h}:= \mathfrak{l} (2n+1, k)$ is metabelian,
that is $[[\mathfrak{h},\mathfrak{h}], [\mathfrak{h},\mathfrak{h}]] = 0$.
Now, if we look at the matched pair deformation $\mathfrak{h}_r = \mathfrak{l}_{(a)} (2n+1, k)$ of $\mathfrak{h}$ given
by~\eqref{1famdef} of Proposition~\ref{mpdef}, for $a = (a_i) \in k^n \setminus \{0\}$ we can easily see that
$\mathfrak{l}_{(a)} (2n+1, k)$ is not a~metabelian Lie algebra, but a~$3$-step solvable Lie algebra.
Thus the property of being metabelian is not preserved by the~$r$-deformation of a~Lie algebra.
Next we consider an example of a~somewhat dif\/ferent nature.
First recall~\cite{medina} that a~Lie algebra $\mathfrak{h}$ is called \emph{self-dual} (or \emph{metric}) if there exists
a~non-degenerate invariant bilinear form $B: \mathfrak{h} \times \mathfrak{h} \to k$, i.e.~$B([a, b], c) = B(a, [b, c])$, for
all~$a,b,c\in \mathfrak{h}$.
Self-dual Lie algebras generalize f\/inite-dimensional complex semisimple Lie algebras (the second Cartan's criterion shows that
any f\/inite-dimensional complex semisimple Lie algebra is self-dual since its Killing form is non-degenerate and invariant).
Besides the mathematical interest in studying self-dual Lie algebras, they are also important and have been intensively studied
in physics~\cite{fig, pelc}.
Now, $\mathfrak{h}:= \mathfrak{l} (2n+1, k)$ is not a~self-dual Lie algebra since if $B: \mathfrak{l} (2n+1, k) \times
\mathfrak{l} (2n+1, k) \to k$ is an arbitrary invariant bilinear form then we can easily prove that $B(E_i, -) = 0$ and thus any
invariant form is degenerate.
On the other hand, the~$r$-deformation of $\mathfrak{l} (2n+1, k)$ denoted by $\mathfrak{l}_{(0)}'' (2n+1, k)$ in
Remark~\ref{cind} is self-dual since it is just the $(2n+1)$-dimensional Abelian Lie algebra.
\end{Remark}
\section{Two open questions}
The paper is devoted to the factorization problem and its converse, the classifying complements problem, at the level of Lie
algebras.
Both problems are very dif\/f\/icult ones; even the case considered in this paper, namely $\mathfrak{g} = k_0$, illustrates the
complexity of the two problems.
We end the paper with the following two open questions:
\textbf{Question 1.} \emph{Let $n \geq 2$.
Does there exist a~Lie algebra $\mathfrak{h}$ and a~matched pair of Lie algebras $(\mathfrak{gl} (n, k), \mathfrak{h},
\triangleleft, \triangleright)$ such that $\mathfrak{gl} (n, k) \bowtie \mathfrak{h} \cong \mathfrak{gl} (n+1, k)$?}
A more restricted version of this question is the following: does the canonical inclusion $\mathfrak{gl} (n, k) \hookrightarrow
\mathfrak{gl} (n +1, k)$ have a~complement that is a~Lie subalgebra of $\mathfrak{gl} (n +1, k)$? Although it seems unlikely for
such a~complement to exist we could not f\/ind any proof or reference to this problem in the literature.
Secondly, having~\cite[Corollary 3.2]{am-2012} as a~source of inspiration we ask:
\textbf{Question 2.} \emph{Let $n \geq 2$.
Does there exist a~matched pair of Lie algebras $(\mathfrak{g}, \mathfrak{h}, \triangleleft, \triangleright)$ such that
any~$n$-dimensional Lie algebra $\mathfrak{L}$ is isomorphic to an~$r$-deformation of $\mathfrak{h}$ associated to this matched
pair?}
At the level of groups, question 2 has a~positive answer by considering the canonical matched pair associated to the
factorization of $S_{n+1}$ through $S_n$ and the cyclic group $C_{n+1}$.
\subsection*{Acknowledgements}
We would like to thank the referees for their comments and suggestions that substantially improved the f\/irst version of this
paper.
A.L.~Agore is research fellow `Aspirant' of FWO-Vlaanderen.
This work was supported by a~grant of the Romanian National Authority for Scientif\/ic Research, CNCS-UEFISCDI, grant
no.~88/05.10.2011.
\pdfbookmark[1]{References}{ref}
It has recently been demonstrated that the selective reflection (SR) of laser radiation from the interface between a dielectric window and an atomic vapour confined in a nano-cell (NC) with a thickness of a few hundred nanometres is a convenient tool for atomic spectroscopy \cite{sargsyan_jetpl_2016,sargsyan_josab_2017,sargsyan_ol_2017}. The real-time derivative of the SR signal (dSR) is used: the frequency positions of the recorded peaks coincide with those of the atomic transitions, the spectral resolution reaches 30 -- 50~MHz, and the signal response is linear with respect to the transition probabilities. The large amplitude and the sub-Doppler width of the detected signal, together with the simplicity of the dSR-method, make it appropriate for applications in metrology and magnetometry. In particular, the dSR-method provides a convenient frequency marker for atomic transitions \cite{sargsyan_jetpl_2016}. In \cite{sargsyan_ol_2017}, we have implemented the dSR-method for atomic layers with thicknesses of a few tens of nanometres to probe atom-surface interactions, and we have observed a 240~MHz red-shift for a cell thickness $L\sim40$~nm. With the dSR-method, a completely frequency-resolved hyperfine Paschen-Back (HPB) splitting of ten atomic transitions (four for $^{87}$Rb and six for $^{85}$Rb) was recorded in a strong magnetic field ($B > 2$~kG) in \cite{sargsyan_jetpl_2016}; similar results for Cs have been reported in \cite{sargsyan_josab_2017}.
One of the reasons why K atomic vapours are used less frequently than Rb or Cs ones is the following: at a temperature of $\sim100~^\circ$C the Doppler broadening is $\sim0.9$~GHz, which exceeds the hyperfine splittings of the ground and excited levels ($\sim462$~MHz and $\sim10-20$~MHz, respectively), so that the transitions $F_g=1, 2\rightarrow F_e= 0,1,2,3$ of $^{39}$K are completely masked by the Doppler profile. For this reason only a small number of papers concern the laser spectroscopy of potassium: the accurate identification of atomic transitions of K was reported in \cite{das_jpb_2008,hanley_jpb_2015}; saturated absorption spectra of the D$_1$ line of potassium atoms have been studied in detail both theoretically and experimentally in \cite{bloch_lp_1996}. Potassium vapours were used for the investigation of nonlinear magneto-optical Faraday rotation in an antirelaxation paraffin-coated cell \cite{guzman_pra_2006}, polarisation spectroscopy and magnetically-induced dichroism for magnetic fields in the range of 1 -- 50~G \cite{pahwa_oe_2012}, the formation of a dark resonance with a sub-natural linewidth \cite{lampis_oe_2016}, electromagnetically induced transparency \cite{sargsyan_os_2017} and four-wave mixing processes \cite{zlatkovic_lpl_2016}. A theory describing the transmission of Faraday filters based on sodium and potassium vapours is presented in \cite{harrell_josab_2009}.
In an external magnetic field there is an additional splitting of the energy levels, which gives rise to a number of atomic transitions spaced by frequency intervals of $\sim100$~MHz in the HPB regime; that is why a Doppler-free method must be implemented for an efficient study of the atomic transitions of K vapours. In this paper we demonstrate that the dSR-method is a very convenient tool to investigate the behaviour of an individual transition of the $^{39}$K D$_2$ line (the energy levels are shown in Fig.~\ref{fig1}a). The particularity of $^{39}$K is the small characteristic value of the magnetic field, $B_0 =A_{hf}/\mu_B =165$~G, at which the HPB regime starts, as compared to other alkalis such as $^{133}$Cs ($B_0 =1700$~G) and the $^{85}$Rb ($B_0 =700$~G), $^{87}$Rb ($B_0 =2400$~G) isotopes \cite{olsen_pra_2011,sargsyan_ol_2012,weller_ol_2012,zentile_cpc_2015}; here $A_{hf}$ is the magnetic dipole constant of the $4^2S_{1/2}$ ground level and $\mu_B$ is the Bohr magneton. Hence, a significant change in the atomic transition probabilities of the $^{39}$K D$_2$ line already occurs at relatively small magnetic fields, at least an order of magnitude smaller than for the other alkalis. To the best of our knowledge, the recording of the HPB regime of the K D$_2$ line with high spectral resolution is demonstrated here for the first time.
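The two numbers quoted above (the $\sim 0.9$~GHz Doppler width and $B_0 = 165$~G) follow from textbook formulas; a short Python estimate is given below for convenience. The $^{39}$K input data (ground-state hyperfine constant $A_{hf}/h \approx 230.86$~MHz, transition wavelength $766.7$~nm, atomic mass $\approx 39$~u) are assumed values taken from standard tables rather than results of this work.
\begin{verbatim}
import math

# assumed input data for 39K
A_hf = 230.86e6               # ground-state magnetic dipole constant, Hz
muB_h = 1.399625e6            # Bohr magneton / h, Hz per gauss
wavelength = 766.7e-9         # D2 transition wavelength, m
mass = 38.964 * 1.66054e-27   # atomic mass of 39K, kg
T = 100 + 273.15              # cell temperature, K

kB = 1.380649e-23             # Boltzmann constant, J/K
c = 2.99792458e8              # speed of light, m/s

B0 = A_hf / muB_h                                            # characteristic field, G
nu0 = c / wavelength                                         # optical frequency, Hz
fwhm = nu0 / c * math.sqrt(8 * math.log(2) * kB * T / mass)  # Doppler FWHM, Hz

print("B0 ~ %.0f G" % B0)                       # ~ 165 G
print("Doppler FWHM ~ %.2f GHz" % (fwhm/1e9))   # ~ 0.9 GHz
\end{verbatim}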
\section{Experimental arrangement}
Figure \ref{fig1}b shows the layout of the experimental setup. A frequency-tunable cw narrowband ($\gamma_L \sim 2\pi \cdot 1$~MHz) extended cavity diode laser (ECDL) with $\lambda =766.7$~nm wavelength, protected by a Faraday isolator (FI), emits linearly polarised radiation directed at normal incidence onto a K nano-cell mounted inside an oven. A quarter-wave plate, placed between the FI and the NC, allows switching between $\sigma^+$ (left-hand) and $\sigma^-$ (right-hand) circularly-polarised radiation. The NC is filled with natural potassium, which consists of $^{39}$K (93.25\%) and $^{41}$K (6.70\%) atoms; the details of its design can be found in \cite{keaveney_prl_2012,sargsyan_epl_2015}. The necessary vapour density $N\sim 5\times 10^{12}$~cm$^{-3}$ was attained by heating the cell's thin sapphire reservoir (R), containing metallic potassium, to $T_R \sim 150~^\circ$C, while keeping the window temperature some $20~^\circ$C higher.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{Fig1.eps}
\caption{(a) Energy-level diagram for $4^2S_{1/2}$ and $4^2P_{3/2}$ states of $^{39}$K. (b) Layout of the experimental setup: ECDL -- extended cavity diode laser, FI -- Faraday isolator, BS -- beam splitter, NC -- K nano-cell inside an oven (not shown), (R) -- thin sapphire reservoir, PM -- permanent magnet, PD -- photodetector, $\lambda /4$ -- quarter-wave plate, SR -- selective reflection channel, SA -- saturated absorption reference channel. Upper right inset: photograph of the NC; the oval marks the region $L=250 - 350$~nm. Upper left inset: geometry of the 3 reflected beams; the selective reflection beam (SR) propagates in the direction of $R_2$.}
\label{fig1}
\end{figure}
A longitudinal magnetic field $\mathbf{B}\parallel\mathbf{k}$ of up to 1~kG, where $\mathbf{k}$ is the wavevector of the laser radiation, was applied using a permanent neodymium-iron-boron alloy magnet placed near the output window of the NC. The field strength was varied by axial displacement of the magnet system and was monitored by a calibrated magnetometer. In spite of the strong spatial gradient of the field produced by the permanent magnet, the field inside the interaction region is uniform thanks to the very small thickness of the NC. The right inset of Fig.~\ref{fig1}b shows a photograph of the K NC, where one can see interference fringes formed by light reflected from the inner surfaces of the windows, owing to the variable thickness $L$ of the vapour column across the aperture.
The SR measurements (the geometry of the reflected laser beams is presented in the left inset) were performed for $L \sim 350$~nm. Although decreasing $L$ improves the spatial resolution (which is very important when using high-gradient permanent magnets), it simultaneously broadens the SR spectral linewidth; therefore $L = 350$~nm appears to be the optimal thickness. The broadening is a result of atom-wall collisions: reducing the thickness $L$ between the windows shortens the flight time of atoms toward the surface, determined as $t = L/v_z$ ($v_z$ being the projection of the thermal velocity perpendicular to the window plane), thus making atom-wall collisions more frequent. To form a frequency reference, a part of the laser radiation was branched to an auxiliary saturated absorption (SA) setup formed in a 1.4~cm-long K cell.
\section{Theoretical considerations}
In this section, we give the outline of our theoretical model; additional details on the calculation of the dSR spectrum are given in \cite{sargsyan_josab_2017,klinger_epjd_2017}. The problem of calculating the spectrum of an alkali vapour confined in a NC when a longitudinal $B$-field is applied can be split into two parts: the calculation of the transition probabilities and frequencies \cite{tremblay_pra_1990}, and the calculation of the line profile inherent to the NC properties \cite{zambon_oc_1997,dutier_josab_2003}.
\subsection{Transition probabilities and frequencies under magnetic field}
The starting point for the spectroscopic analysis of alkali vapours under a longitudinal magnetic field is to write down the Hamiltonian of the system $H_m$ as the sum of the hyperfine structure Hamiltonian $H_0$ and the magnetic interaction term, such that
\begin{equation}
H_m = H_0 +\frac{\mu_B}{\hbar}B_z(g_LL_z + g_SS_z+g_II_z),
\label{eq:hamil_B}
\end{equation}
where $L_z$, $S_z$, and $I_z$ are respectively the projections of the orbital, electron spin and nuclear spin momenta along $z$, chosen as the quantization axis; $g_{L,S,I}$ are the associated Land\'e factors (for the sign convention, see \cite{steck_2011}). Details of the construction of the Hamiltonian in the basis $|F,m_F\rangle$ can be found in \cite{tremblay_pra_1990}. The transition probabilities $W_{eg}$ are proportional to the square of the dipole moment $\mu_{eg}$ between the states $|e\rangle$ and $|g\rangle$
\begin{equation}
W_{eg}\propto \left(\sum_{F_e'F_g'}c_{F_eF_e'}a(\Psi(F_e,m_e);\Psi(F_g,m_g);q)c_{F_gF_g'}\right)^2,
\end{equation}
with the coefficients $c_{FF'}$ given by the eigenvectors of the diagonalized $H_m$ matrix, and
\begin{eqnarray}
\fl \eqalign{a(\Psi(F_e,m_e);\Psi(F_g,m_g);q)=&(-1)^{1+I+J_e+F_e+F_g-m_{Fe}}\sqrt{2J_e+1}\sqrt{2F_e+1}\sqrt{2F_g+1}\\
&\times \left( \begin{array}{r@{\quad}cr}
F_e & 1 & F_g \\
-m_{F_e} & q & m_{F_g}
\end{array}\right)
\left\{ \begin{array}{r@{\quad}cr}
F_e & 1 & F_g \\
J_g & I & J_e
\end{array}\right\},}
\end{eqnarray}
where the parentheses and the curly brackets denote the 3-$j$ and 6-$j$ coefficients, respectively; $q=0,\pm1$ is associated with the polarisation of the excitation, such that $q=0$ for a $\pi$-polarised laser field and $q=\pm1$ for a $\sigma^\pm$-polarised laser field.
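As an illustration of how the angular factor above can be evaluated, the short sketch below (not part of the analysis of this paper) uses the Wigner 3-$j$ and 6-$j$ routines provided by the SymPy library; the quantum numbers correspond to the $^{39}$K D$_2$ line, and the specific transition chosen in the last line is only an example.
\begin{verbatim}
# Minimal sketch (illustration only): angular factor a(F_e,m_e; F_g,m_g; q)
# entering W_eg, evaluated with SymPy's Wigner 3-j and 6-j routines.
# Quantum numbers below (Jg = 1/2, Je = 3/2, I = 3/2) are those of the
# 39K D2 line; the chosen transition is just an example.
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_3j, wigner_6j

I_nuc = Rational(3, 2)   # nuclear spin of 39K
Jg    = Rational(1, 2)   # 4^2 S_1/2 ground state
Je    = Rational(3, 2)   # 4^2 P_3/2 excited state

def a_factor(Fe, me, Fg, mg, q):
    """Angular part of the <Fe,me|d_q|Fg,mg> dipole matrix element."""
    phase  = (-1) ** (1 + I_nuc + Je + Fe + Fg - me)
    red    = sqrt((2 * Je + 1) * (2 * Fe + 1) * (2 * Fg + 1))
    threej = wigner_3j(Fe, 1, Fg, -me, q, mg)
    sixj   = wigner_6j(Fe, 1, Fg, Jg, I_nuc, Je)
    return simplify(phase * red * threej * sixj)

# Example: sigma+ (q = +1) component of |Fg=2, mg=2> -> |Fe=3, me=3>
print(a_factor(3, 3, 2, 2, 1))
\end{verbatim}
Multiplying such angular factors by the mixing coefficients $c_{FF'}$ obtained from the diagonalization of $H_m$ yields the field-dependent probabilities $W_{eg}$ discussed in the following sections.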
\subsection{Line profile}
The line profile of atoms confined in thin cells whose window gap is of the order of the laser wavelength has been studied in depth in \cite{zambon_oc_1997,dutier_josab_2003}. The resonant contribution to the reflected field reads (in intensity)
\begin{equation}
S_r\cong 2 \frac{t_{cw}}{|Q|^2}\Re{\Big\lbrace r_w\big[1-\exp(-2ikL)\big]\times\big[I_b - r_wI_f\exp(2ikL)\big]\Big\rbrace}E_{in},
\end{equation}
where $t_{cw}$ and $r_w$ are respectively the transmission and reflection coefficients, and $Q=1-r_w^2\exp(2ikL)$ is the quality factor associated with the NC of thickness $L$; $I_f$ and $I_b$ are integrals of the forward and backward polarisations
\begin{equation}
I_f=\frac{ik}{2\epsilon_0}\int_0^L P_0(z)dz, \qquad I_b=\frac{ik}{2\epsilon_0}\int_0^L P_0(z)\exp(2ikz)dz.
\end{equation}
The polarisation $P_0(z)$ is induced by the interaction of the laser with an ensemble of 2-level systems; it is given by averaging the coherences of the reduced density matrix $\rho$ over the atomic velocity distribution in the cell (assumed to be Maxwellian)
\begin{equation}
P_{0}(z) = \sum_i N\mu_{i}\int\limits^{+\infty}_{-\infty} W(v)\rho^i_{eg}(z,v,\Delta_i) dv,
\label{eq:polarization_only_positive_velocities_Ensemble_2-level_system}
\end{equation}
with $\Delta_i=\omega-\omega_i$. Each 2-level system $|i\rangle$, with transition frequency $\nu_i=\omega_i/2\pi$ and transition intensity $|\mu_i|^2$, contributes to the recorded signal if it is close to resonance with the laser angular frequency $\omega$. Expressions for the atomic coherences $\rho^i_{eg}$ are found by solving the Liouville equation of motion for the density matrix, that is
\begin{equation}
\frac{d}{dt}\rho=-\frac{i}{\hbar}\big[H,\rho\big]-\frac{1}{2}\big\lbrace \Gamma,\rho\big\rbrace,
\end{equation}
where $H$ is the Hamiltonian describing the interaction of the vapour with the laser radiation, and the matrix $\Gamma$ accounts for homogeneous relaxation processes; $\lbrace a, b \rbrace= ab + ba$ is the anticommutator.
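As a purely numerical illustration of how $I_f$, $I_b$ and $S_r$ are assembled, the sketch below evaluates the integrals for a placeholder profile $P_0(z)$; the chosen profile and the window parameters $r_w$, $t_{cw}$ are assumptions made only for the example and do not correspond to the actual solution of the model.
\begin{verbatim}
# Minimal sketch (illustration only): evaluate I_f, I_b and the resonant
# reflected-signal contribution S_r from a placeholder polarisation
# profile P0(z). The profile and the window parameters are NOT the
# physical solution; they only show how the formulas are assembled.
import numpy as np

lam  = 766.7e-9            # laser wavelength [m]
k    = 2 * np.pi / lam     # wavevector
L    = 350e-9              # nano-cell thickness [m]
r_w  = 0.4                 # illustrative window reflection coefficient
t_cw = 0.8                 # illustrative transmission coefficient
eps0 = 8.854e-12
E_in = 1.0

z  = np.linspace(0.0, L, 2001)
P0 = (1.0 + 0.2j) * np.exp(-z / (0.3 * L))      # placeholder profile

I_f = 1j * k / (2 * eps0) * np.trapz(P0, z)
I_b = 1j * k / (2 * eps0) * np.trapz(P0 * np.exp(2j * k * z), z)

Q   = 1 - r_w**2 * np.exp(2j * k * L)
S_r = 2 * t_cw / abs(Q)**2 * np.real(
          r_w * (1 - np.exp(-2j * k * L))
          * (I_b - r_w * I_f * np.exp(2j * k * L))) * E_in
print(S_r)
\end{verbatim}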
\section{Results and discussion}
\subsection{Circular polarisation analysis}
In Fig.~\ref{fig2}a, the red curves show dSR experimental spectra in the case of $\sigma^-$ circularly-polarised laser radiation for five different values of the applied longitudinal magnetic field, from bottom to top: 470, 500, 690, 720 and 780~G. The complementary study with a $\sigma^+$ circularly-polarised excitation for $B$-field values of 530, 590, 680 and 800~G is shown in Fig.~\ref{fig2}b. The spectra are recorded for a reservoir temperature $T_R\sim150~^\circ$C, a laser power $P_L\sim 0.1$~mW, and an atomic transition linewidth of $\sim80$~MHz full width at half maximum (FWHM).
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{Fig2.eps}
\caption{$^{39}$K D$_2$ line recorded (red solid lines) and calculated (blue solid lines) spectra for (a) $\sigma^-$-polarised light at $B = 470,~500,~690,~720,~ 780$~G; and (b) $\sigma^+$-polarised laser radiation at $B = 530,~590,~680,~800$~G. Experimental parameters: transition linewidth $\sim 100$~MHz, NC thickness $L=350$~nm, laser power $P_L=0.1$~mW, reservoir temperature $T_R = 150~^\circ$C. The lower curve is the recorded SA spectrum that serves as a frequency reference. The curves have been shifted vertically for clarity. In each case, all eight atomic transitions are well spectrally resolved. Although the proportion of $^{41}$K in the cell is small (6.70\%), a portion of its spectrum can be seen; it is indicated on the graphs by arrows.}
\label{fig2}
\end{figure}
It is worth noting that the dSR amplitudes are proportional to the relative probabilities presented in Fig.~\ref{fig3}. As seen from Fig.~\ref{fig2}a, there are two groups formed by transitions 1'--4' and \textcircled{5}'--8', and all these eight transitions, whose diagram is shown in the inset of Fig.~\ref{fig3}a, are clearly visible. The same remark holds for transitions 1--\textcircled{8} of Fig.~\ref{fig2}b (transition diagram shown in the inset of Fig.~\ref{fig3}b). Note that, within a given group, the amplitudes of the transitions are equal to each other and the frequency intervals between them are nearly equidistant. These peculiarities, as well as the fixed number of atomic transitions, which remains the same with increasing magnetic field, are evidence of the establishment of the Paschen-Back regime. Transitions labelled \textcircled{5}' and \textcircled{8} are the so-called ``guiding'' transitions (GT) \cite{sargsyan_epl_2015,sargsyan_jetpl_2015}: their probability as well as their frequency-shift slope remain the same ($s^\pm=\pm1.4$~MHz/G) over the whole range of applied $B$-fields.
The lower (black) curves show SA spectra for $B=0$. As shown in \cite{zielinska_ol_2012}, the existence of crossover lines makes the SA technique useless for spectroscopic analysis at $B>100$~G. The blue curves show the calculated dSR spectra of the $^{39}$K and $^{41}$K isotopes with a linewidth of 80 -- 100~MHz for $\sigma^-$- (Fig.~\ref{fig2}a) and $\sigma^+$- (Fig.~\ref{fig2}b) polarised laser radiation. As can be seen, there is a very good agreement between the experiment and the theory. Although there is only 6.70\% of the $^{41}$K isotope in natural K, a much better agreement with the experiment is obtained when the $^{41}$K levels are also included in the theoretical considerations; in particular, the peaks indicated by the arrows in Fig.~\ref{fig2}b are caused by the influence of the $^{41}$K isotope. A very good agreement between the experiment and the theory can be seen for both polarisations and all applied magnetic fields.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{Fig3.eps}
\caption{(a) Evolution of the probabilities of the 1'--4' and \textcircled{5}'--8' transitions versus $B$-field for a $\sigma^-$-polarised excitation. (b) Evolution of the probabilities of the 1--4 and 5--\textcircled{8} transitions versus $B$-field for a $\sigma^+$-polarised excitation. The insets show the corresponding atomic transition diagrams of the $^{39}$K D$_2$ line in the HPB regime, expressed in the basis $|m_J, m_I\rangle$ (uncoupled basis). Selection rules for the transitions are $\Delta m_J =\pm1,~ \Delta m_I = 0$ for $\sigma^\pm$-polarised light. For simplicity, only the transitions that remain in the spectrum at strong magnetic field are presented.}
\label{fig3}
\end{figure}
It is important to note that at a relatively small magnetic field $B\sim 400$~G the two groups are already well formed and separated, which is, as mentioned earlier, a consequence of the small value of $B_0(^{39}$K$)=165$~G. In order to detect similarly well-formed groups for $^{87}$Rb atoms, one must apply a much stronger magnetic field of $B\sim 6$~kG, since $B_0(^{87}$Rb$)/ B_0(^{39}$K$) \sim 15$. It is also interesting to note that the total number of atomic transitions for both circularly-polarised laser excitations is 44 when $B\sim B_0(^{39}$K$) \sim 150$~G, while for $B\gg B_0(^{39}$K) only 16 transitions remain: this is a manifestation of the HPB regime. \\
Figure \ref{fig3} shows the transition probabilities versus magnetic field for (a) the 1'--4' and \textcircled{5}'--8' transitions ($\sigma^-$ excitation) and (b) the 1--4 and 5--\textcircled{8} transitions ($\sigma^+$ excitation); for the labelling, see the transition diagrams in the insets. As we see, the transition probabilities inside the groups 1--4 and 1'--4' for $B\gg 165$~G tend asymptotically to the same value (0.056 and 0.0625, respectively); the same remark can be made for the groups 5--\textcircled{8} and \textcircled{5}'--8', whose probabilities tend to those of the GT \textcircled{8} (0.167) and \textcircled{5}' (0.185), respectively. The latter remarks are further peculiarities of the HPB regime.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Fig4.eps}
\caption{Calculated magnetic field dependence of the frequency shifts of $^{39}$K D$_2$ line, for the transitions 1--4 and 5--\textcircled{8} ($\sigma^+$ excitation) and for the transitions 1'--4' and \textcircled{5}'--8' ($\sigma^-$ excitation). The guiding transitions \textcircled{8} and \textcircled{5}' are indicated.}
\label{fig4}
\end{figure}
The calculated magnetic field dependence of the transition frequency shifts under circularly-polarised excitation is presented in Fig.~\ref{fig4}. Note that the frequency slope of transitions 5--\textcircled{8} tends asymptotically to that of the GT \textcircled{8} ($s^+=+1.4$~MHz/G), while the frequency slope of transitions \textcircled{5}'--8' tends to that of the GT \textcircled{5}' ($s^-=-1.4$~MHz/G).
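These slopes can be checked with a back-of-the-envelope calculation in the uncoupled basis: since $\Delta m_I=0$, the field-dependent part of a transition shift is $\mu_B(g_{J_e} m_{J_e} - g_{J_g} m_{J_g})B$. The short sketch below (illustration only) uses the standard fine-structure Land\'e factors $g_J(4^2S_{1/2})=2$ and $g_J(4^2P_{3/2})=4/3$, neglecting small corrections.
\begin{verbatim}
# Minimal sketch (illustration only): asymptotic HPB slopes of the guiding
# transitions from the Lande factors. Since Delta m_I = 0, the nuclear
# Zeeman term cancels and the B-dependent shift is
# mu_B * (gJe * mJe - gJg * mJg) * B.
mu_B = 1.399624            # Bohr magneton in MHz/G
gJ_g = 2.0                 # 4^2 S_1/2 (standard value, corrections neglected)
gJ_e = 4.0 / 3.0           # 4^2 P_3/2

def slope(mJ_e, mJ_g):
    return mu_B * (gJ_e * mJ_e - gJ_g * mJ_g)

print("s+ = %+.2f MHz/G" % slope(+1.5, +0.5))   # sigma+ GT: ~ +1.4
print("s- = %+.2f MHz/G" % slope(-1.5, -0.5))   # sigma- GT: ~ -1.4
\end{verbatim}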
\subsection{Linear polarisation analysis}
To achieve a linear ($\pi$) polarisation excitation of the K vapour, the experimental setup presented in Fig.~\ref{fig1}b is slightly modified: the $\lambda /4$ plate is removed and two permanent magnets are used to apply the $B$-field along the laser electric field $\mathbf{E}$, i.e. in the $\mathbf{B} \parallel \mathbf{E}$, $\mathbf{B} \perp \mathbf{k}$ configuration.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Fig5.eps}
\caption{K D$_2$ line spectra for $\pi$-polarised radiation, recorded and calculated for $B=300,~380,~480,~680$~G. The red and blue traces show respectively the experimental and theoretical dSR spectra of $^{39}$K and $^{41}$K atoms, with a linewidth $\sim120$~MHz, NC thickness $L=350$~nm, $P_L=0.1$~mW, and $T_R \sim 150~^\circ$C. The lower curve is the SA spectrum that serves as a frequency reference. The transitions $\fbox{2}$ and $\fbox{6}$ are IFFA transitions (see the text). The dashed lines show the frequency positions of the atomic transitions and are drawn for convenience.}
\label{fig5}
\end{figure}
In Fig.~\ref{fig5}, the red curves represent dSR experimental spectra obtained for $B=300,~380,~480,~680$~G, with $T_R\sim150~^\circ$C and a $\pi$-polarised laser radiation of power $P_L=0.1$~mW. The blue lines show the theoretical calculations with the corresponding experimental parameters. As we see, there are eight well-resolved transitions, labelled 1--8, whose amplitudes tend asymptotically to the same value (see Fig.~\ref{fig6}a). Note that, in this case, the transition linewidth is somewhat larger ($\sim120~$MHz), which is caused by inhomogeneities of the transverse magnetic field across the 1~mm laser beam diameter. The magnetic field dependence of their transition frequencies is presented in Fig.~\ref{fig6}b.
The transitions $\fbox{2}$ and $\fbox{6}$ are the $|F_g=1, m_F=0\rangle\rightarrow |F_e=1, m_F=0\rangle$ and $|F_g=2, m_F=0\rangle\rightarrow |F_e=2, m_F=0\rangle$ transitions. For zero magnetic field the dipole matrix elements of these $\pi$ transitions are zero \cite{steck_2011}; in other words they are ``forbidden'': for these transitions neither resonant absorption nor resonant fluorescence is detectable. As seen in Fig.~\ref{fig6}a, the transition probabilities of $\fbox{2}$ and $\fbox{6}$ start from zero and undergo a significant enhancement with increasing magnetic field. For this reason, we call them ``initially forbidden further allowed'' (IFFA) transitions. Note that there is another type of ``forbidden'' transitions, the so-called magnetically induced (MI) transitions, since they appear only in the presence of a $B$-field and obey the selection rule $\Delta F = F_e-F_g= \pm2$; in particular, the groups of transitions $F_g=2\rightarrow F_e=4$ (Rb D$_2$ line) and $F_g=3\rightarrow F_e=5$ (Cs D$_2$ line) have been studied in \cite{klinger_epjd_2017} and \cite{sargsyan_lpl_2014}, respectively. For strong magnetic fields $B\gg B_0$ the probabilities of MI transitions tend to zero, contrary to those of the IFFA transitions, which tend asymptotically to their maximum with increasing $B$-field (see Fig.~\ref{fig6}a). We have confirmed this statement using a magnetic field of 500~G and the MI transitions $F_g=1\rightarrow F_e=3$ of $^{39}$K with $\sigma^+$ excitation: while the IFFA transitions already have large amplitudes (see Fig.~\ref{fig5}), the MI transitions are not detectable. Note that to confirm this statement with $^{87}$Rb atoms one must apply a magnetic field of 5--6~kG.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{Fig6.eps}
\caption{Calculated probabilities (a) and frequency shifts (b) of $^{39}$K D$_2$ line $F_g =1, 2 \rightarrow F_e=0,1,2,3$ transitions for $\pi$-polarised laser radiation. Transition diagram (in the uncoupled basis $|m_I,m_J\rangle$) for the HPB regime is shown in the inset. Selection rules for the transitions are $\Delta m_J =0,~ \Delta m_I = 0$. The transitions $\fbox{2}$ and $\fbox{6}$ are IFFA transitions (see the text).}
\label{fig6}
\end{figure}
\subsection{Discussion}
The behaviour of the atomic transitions of K in the HPB regime differs for the D$_1$ and D$_2$ lines: (\textit{i}) as mentioned earlier, for the D$_2$ line there are two groups of eight transitions, formed by either $\sigma^+$- or $\sigma^-$-polarised light, and each of these groups contains one GT. Meanwhile, in the case of the D$_1$ line there are two groups of only four transitions, one for each circular polarisation, and the GT are absent. (\textit{ii}) In the case of $\pi$-polarised laser radiation, the spectrum of the K D$_2$ line is composed of eight atomic transitions, including two IFFA transitions, whereas that of the D$_1$ line contains two GT and two IFFA transitions.
The investigation of the modification of the transition frequencies and probabilities for the K D$_1$ line using absorption spectroscopy in a NC was reported in \cite{sargsyan_epl_2015}. The small number of atomic transitions (four for each circular polarisation) and the narrowing of the spectra due to the Dicke effect at $L = \lambda/2 = 385$~nm \cite{sargsyan_jpb_2016} were the reasons why the resolution of the recorded spectra was sufficient. However, due to the larger number of atomic transitions (eight) formed by circularly-polarised laser radiation, the absorption spectra of the K D$_2$ line in a NC are strongly broadened. For this reason, we illustrate in Fig.~\ref{fig7} the advantage of the dSR technique over the conventional absorption one. In this figure, the upper blue trace shows the absorption spectrum of the K D$_2$ line obtained from a NC with $L=385$~nm for $\sigma^+$ laser excitation and a magnetic field $B\sim600$~G. Although the eight absorption peaks are slightly resolved, they have large pedestals which overlap with one another, causing strong distortions of the amplitudes. In order to recover the correct amplitudes, one needs to perform a non-trivial fitting because of the particular absorption profile inherent to the NC. Meanwhile, the middle curve shows the corresponding dSR spectrum, where the eight atomic transitions are completely resolved. Thus, for the K D$_2$ line the selective reflection technique is strongly preferable.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Fig7.eps}
\caption{$^{39}$K D$_2$ line for $\sigma^+$-polarised radiation and a magnetic field $B=600$~G. The upper (blue) trace shows the absorption spectrum obtained with a NC of thickness $L=385$~nm; the middle (red) trace shows the corresponding dSR spectrum, where eight atomic transitions are completely resolved. The lower curve is the SA spectrum that serves as a frequency reference.}
\label{fig7}
\end{figure}
As mentioned earlier, a remarkable property of K atoms is that the HPB regime occurs at a much smaller magnetic field than for the widely used Rb and Cs atoms, so that magneto-optical studies exploiting the HPB regime can be realized with a much smaller magnetic field. In particular, in \cite{zlatkovic_lpl_2016}, a complete HPB regime was achieved for the $^{133}$Cs D$_2$ line at a magnetic field $B=27~$kG. Using an additional laser and optical pumping of the ground-state sublevels, a high polarisation of the nuclear momentum was achieved. Similar results could be obtained with K atomic vapour for a 10 times smaller magnetic field $B \sim 2.5$~kG, because $B_0(^{133}$Cs$)/B_0 (^{39}$K$) \sim 10$.
Let us note that the energy level structure of the isotope $^{41}$K is very similar to that of $^{39}$K, while the hyperfine splittings of the ground and excited levels are smaller \cite{bendali_jpb_1981,tonoyan_nasra_2016}. In particular, the hyperfine splitting of the ground $4S_{1/2}$ level is 254 MHz, which is 1.8 times smaller than that of $^{39}$K. This means that the characteristic field is $B_0(^{41}$K$)\sim90$~G, and the HPB regime is reached at smaller magnetic fields. In addition, the structure of the $^{41}$K spectrum in a strong longitudinal magnetic field is the same as that of $^{39}$K: two groups of eight transitions are recorded for circularly-polarised excitation, each of these groups containing one GT. In the case of $\pi$-polarised radiation, one can count eight transitions, of which two are IFFA transitions. The transitions of $^{41}$K follow the same behaviour as those of $^{39}$K (see Figs.~\ref{fig3} and \ref{fig6}a), while equivalent probabilities are reached at a smaller magnetic field strength.
\section{Conclusion}
We have demonstrated that, despite the large Doppler width of the atomic transitions of potassium, SR of laser radiation from a K atomic vapour confined in a NC with a 350~nm gap thickness allows one to realize nearly Doppler-free spectroscopy. The narrow linewidth and the linearity of the signal response with respect to the transition probabilities allow us to detect separately, in an external longitudinal magnetic field, two groups of eight transitions each, formed by either $\sigma^+$- or $\sigma^-$-polarised light.
We have also shown that the dSR-method provides a much better spectral resolution ($\sim80$~MHz) than the method based on the absorption spectrum in a NC with thickness $L=\lambda/2$, since the narrow linewidth of the transitions allows one to avoid the overlap of closely spaced atomic transitions; here, the frequency separation between transitions is $\sim100$~MHz.
The theoretical model describes the experiments very well. The experimental results, along with the calculated magnetic field dependence of the frequency shifts and probabilities of the 1--4 (1'--4') and 5--\textcircled{8} (\textcircled{5}'--8') transitions under $\sigma^+$ ($\sigma^-$) laser radiation, as well as those of the transitions 1--8 observed in the case of $\pi$-polarised laser excitation and the detection of two IFFA transitions, give a complete picture of the behaviour of the potassium D$_2$ line atomic transitions in a magnetic field.
The implementation of this recently developed setup, based on narrowband laser diodes, strong permanent magnets and the dSR-method using a NC, allows one to study the behaviour of any individual atomic transition of $^{39}$K atoms as well as of the $^{23}$Na, $^{85}$Rb, $^{87}$Rb and $^{133}$Cs D$_1$ and D$_2$ lines. In particular, the NC-based dSR-method could be a very convenient tool for the study of $^{23}$Na atomic vapour, since its atomic transitions are masked by a huge Doppler width of $\sim1.5$~GHz in conventional spectroscopic experiments.
It should be noted that a recently developed fabrication process for glass NCs \cite{whittaker_jpc_2015}, much simpler than the one using sapphire materials, will make the NC-based dSR-technique widely available to researchers.
\ack
The authors are grateful to A. Papoyan and G. Hakhumyan for valuable discussions.
\section*{References}
\chapter*{Acknowledgements}
I would like to thank my supervisor, Prof. Meir Feder, for his devoted mentoring throughout my research. Without Meir's enthusiasm, curiosity and endless support, this thesis could not have been what it is now. His depth as an information theorist, his broad-mindedness as a scientist and his openheartedness inspired me all along the way. I am grateful for having been given the opportunity to walk this path with him.
I would also like to thank Or Ordentlich and Yuval Lomnitz for their interest and insights, which helped me tackle some of the most challenging problems in my research.
\cleardoublepage
\begin{abstract}
In this study we consider rateless coding over discrete memoryless channels (DMC) with feedback. Unlike traditional fixed-rate codes, in rateless codes each codeword is infinitely long, and the decoding time depends on the confidence level of the decoder. Using rateless codes along with sequential decoding, and allowing a fixed probability of error at the decoder, we obtain results for several communication scenarios. The results shown here are non-asymptotic, in the sense that the size of the message set is finite.
First we consider the transmission of equiprobable messages using rateless codes over a DMC, where the decoder knows the channel law. We obtain an achievable rate for a fixed error probability and a finite message set. We show that as the message set size grows, the achievable rate approaches the optimum rate for this setting. We then consider the \emph{universal} case, in which the channel law is unknown to the decoder. We introduce a novel decoder that uses a mixture probability assignment instead of the unknown channel law, and obtain an achievable rate for this case.
Finally, we extend the scope to more advanced settings. We use different flavors of the rateless coding scheme for joint source-channel coding, coding with side information, and a combination of the two with universal coding, which yields a communication scheme that does not require any information on the source, the channel, or the amount of side information at the receiver.
\end{abstract}
\cleardoublepage
\pagestyle{plain}
\pagenumbering{roman}
\tableofcontents
\singlespace
\listoffigures
\cleardoublepage
\baselineskip0.7cm
\pagestyle{plain}
\pagenumbering{arabic}
\chapter{Introduction}
\section{Background} \label{sec:Background}
In traditional channel coding schemes the code rate, which is the ratio between the lengths of the encoder's input and output blocks, is an integral part of the code definition. If one of $M$ messages is to be encoded at rate $R$, then the corresponding codeword has length $n=(\log M)/R$. Provided that the rate is chosen properly, the error probability decreases as $M$ grows. The capacity of the channel $C$ is defined as the largest value of $R$ for which the error probability can vanish.
An alternative approach to fixed-rate channel coding is \emph{rateless codes}. In this approach, we abandon the basic assumption of a fixed coding rate, and allow the codeword length, and hence also the rate, to depend on the channel conditions. When the encoder wants to send a certain message, it starts transmitting symbols from an infinite-length codeword. The decoder receives the symbols that passed through the channel and, when it is confident enough about the message, it makes a decision. Perhaps the simplest example of a rateless code is the following (see e.g. \cite[Ch.3]{Nadav} or \cite[Ch.7]{Cover}). Suppose that we have a binary erasure channel (BEC) with erasure probability $\delta$. Suppose also that noiseless feedback exists, i.e. the encoder at time instant $n$ has access to the outputs of the channel at times $1,\ldots,n-1$. We use simple repetition coding, in which each binary symbol is retransmitted until the decoder receives an unerased symbol. Since the erasure probability is $\delta$, the expected number of transmissions until an unerased symbol is received is $1/(1-\delta)$. This transmission time implies a rate of $1-\delta$, which is exactly the capacity of the binary erasure channel. This simple setting exemplifies some important concepts of rateless codes. First, the transmission time is not fixed, but rather is a random variable (geometrically distributed in the above case); second, when the length of the transmission is set dynamically, the error probability may be controllable. In this case the transmission is only terminated once the decoder \emph{knows} what message has been transmitted, so the error probability of this coding scheme is zero; third, the code design is rate-independent. In fact, this code can be used for any binary erasure channel; fourth, the continuity of the transmission requires feedback to the encoder. Indeed, as we shall see in this thesis, when rateless codes are used for point-to-point communication, some form of feedback, which can be limited to decision feedback, must exist to enable continuity. However, rateless codes are also invaluable for other settings such as multicast or broadcast communications, in which the existence of feedback is not explicitly required. Shulman \cite{Nadav} introduced the concept of \emph{Static Broadcasting}, in which the transmitter sends a message to multiple users, and each user remains connected until it has retrieved enough symbols to make a confident decision. This scheme does not require feedback; each user remains online only as long as it needs, and the rate is determined according to the time the user spent online.
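As a sanity check of the numbers quoted above, the following minimal simulation sketch (illustrative only; the erasure probability is an arbitrary choice) estimates the average number of channel uses per bit for the repetition scheme and compares it with $1/(1-\delta)$.
\begin{verbatim}
# Minimal sketch (illustration only): repetition-based rateless coding over
# a binary erasure channel. Each bit is retransmitted until an unerased
# symbol arrives; the empirical average number of channel uses per bit
# should approach 1/(1 - delta), i.e. an effective rate of (1 - delta).
import random

def uses_per_bit(delta, n_bits=100000):
    total = 0
    for _ in range(n_bits):
        uses = 1
        while random.random() < delta:   # symbol erased -> retransmit
            uses += 1
        total += uses
    return total / n_bits

delta = 0.3
avg = uses_per_bit(delta)
print("empirical uses/bit:", avg, "  theory:", 1.0 / (1.0 - delta))
print("empirical rate:", 1.0 / avg, "  BEC capacity:", 1.0 - delta)
\end{verbatim}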
In this thesis we assume a discrete memoryless channel (DMC) with feedback, and devise rateless coding schemes which allow a small (but fixed) error probability $\epsilon$. We investigate the dependence between the rate, the error probability and the size of the message set. The entire analysis is done for a finite message set, and we show that when the size of the message set is taken to infinity, our results agree with classic results from coding theory. We also investigate the rate of convergence to these results. We start by building a simple rateless coding scheme for a known channel. The motivation for this method is due to Wald's analysis (see \cite[Ch.3]{Wald}), where he demonstrated that the Sequential Probability Ratio Test (SPRT) performs like the most powerful test in terms of error probabilities, while using about half the samples on average.
Building on the rateless coding scheme devised for the case of a known channel, we obtain a \emph{universal} channel coding scheme that does not require channel knowledge at the receiver. Unlike previous results on universal decoding, the results here are non-asymptotic and are valid for an arbitrary message set size. We then extend the coding scheme to joint source-channel coding, and show that the optimal rate is achievable even when the encoder is uninformed of the source statistics. Next, we use a rateless coding scheme for source coding with side information at the receiver and show that the Slepian-Wolf rate for this scenario is achievable even when the encoder is unaware of the amount of side information. Finally, we show how to combine the above-mentioned techniques with universal source coding, to obtain a scheme that can operate when the statistics of both the source and the channel are unknown, potentially using side information that is unknown to the encoder.
Our work follows previous results obtained by Shulman \cite{Nadav} for the universal case, where the decoder is ignorant of the channel law. In particular, a sequential version of the maximal mutual information (MMI) decoder \cite{CsiszarKorner} is used for universal channel decoding and joint source-channel coding, including the case of side information at the decoder. However, the results in \cite{Nadav} are asymptotic in the size of the message set, while the analysis here is made for a fixed size of the message set. For the case of a known channel, the decoder used here can be viewed as the counterpart of the sequential MMI decoder that uses the channel law rather than the empirical mutual information. This scheme was originally introduced by Polyanskiy in \cite{PPV}, where it is proven to achieve the best variable-length coding rate. While the analysis in \cite{PPV} concentrates on finding the best achievable size of the message set with a constraint on the average decoding time, in this work we seek the optimum decoding time for a fixed size of the message set. More importantly, the analysis introduced here extends naturally to the case of an unknown channel, where we use a novel universal decoder, as well as to joint source-channel coding with and without side information at the receiver.
\section{Thesis Outline} \label{sec:Outline}
The rest of the thesis is organized as follows. In Chapter \ref{ch:DefinitionsAndNotation} we define rateless codes and provide related definitions and notation. In Chapter \ref{ch:PreviousResults} we survey previous results related to universal communication and rateless codes. In Chapter \ref{ch:KnownChannel} we treat the case of known channel, for which we obtain an achievable rate using rateless codes. We also prove a converse theorem showing that this rate is asymptotically optimal, and we analyze the rate of convergence. The case of unknown channel is examined in Chapter \ref{ch:UnknownChannel}, where we develop a universal decoder and analyze its performance for a general DMC. In Chapter \ref{ch:Extensions} we extend the coding scheme for the case of message sets with non-equiprobable messages, and we also show how rateless coding can be used for problems with side information. Chapter \ref{ch:Summary} concludes the thesis.
\chapter{Definitions and Notation}
\label{ch:DefinitionsAndNotation}
Throughout this thesis, random variables will be denoted by capital letters and their realizations by the corresponding lowercase letters. Vectors are denoted by superscripts that indicate their length, for instance $X^n = [X_1,\ldots,X_n]$. Unless otherwise stated, all logarithms are taken to base 2. We focus on communication over a discrete memoryless channel (DMC) characterized by a transition probability $p(y|x)$, $x \in \mathcal{X} , y \in \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ are the input and output alphabets of the channel, respectively. With a slight abuse of notation, we use $p(\cdot|\cdot)$ also to denote the joint transition probabilities of the channel, thus $p(y^n|x^n) = \prod_{i=1}^n p(y_i|x_i)$. The capacity of the channel (in bits per channel use) is conventionally defined as $C = \max_{q(x)}I(X;Y)$, where $I(X;Y)$ is the mutual information between the input of the channel and its output, and the maximization is over all channel input priors $q(x)$. If $|\mathcal{X}| = |\mathcal{Y}|$, and $p(y|x)=1$ if $x=y$ and $p(y|x)=0$ otherwise, then the channel is said to be noiseless, and in that case $C = \log |\mathcal{X}|$. We also assume that noiseless feedback exists from the receiver to the transmitter.
A rateless code has the following elements:
\begin{enumerate}
\item Message set $\mathcal{W}$ containing $M$ messages. Without loss of generality we assume that $\mathcal{W}=\{1,\ldots,M\}$, with corresponding probabilities $\pi(1),\ldots,\pi(M)$. Occasionally, we define $K=\log M$ as the number of bits conveyed in a message.
\item Codebook $\mathcal{C} = \{\mathbf{c}_i\}_{i=1}^M$, where each codeword $\mathbf{c}_i \in \mathcal{X}^\infty$ is generated by drawing i.i.d. symbols according to a prior $q(x), x\in\mathcal{X}$.
\item Set of encoding functions $f_n:\mathcal{W} \rightarrow \mathcal{X}$, $n \geq 1$.
\item Set of decoding function $g_n:\mathcal{Y}^n \rightarrow \mathcal{W} \cup \{0\}$, $n \geq 1$.
\end{enumerate}
Unlike conventional codes, for which the rate is a fundamental property, the above description does not specify a working rate; hence the term \emph{rateless code}. To encode a message $w \in \mathcal{W}$, the encoder starts transmitting the codeword $\mathbf{c}_w$ over the channel. Upon receiving each channel output, the decoder can either decide on one of the messages $\hat{w}$ or decide to wait for further channel outputs, returning `$0$'. Through feedback, the decoder's decision is known to the encoder, which correspondingly decides whether to transmit further symbols from $\mathbf{c}_w$ or to proceed to the next message. We note that two different forms of feedback can be assumed here: channel feedback and decision feedback. In channel feedback, the encoder at time instant $t$ observes $Y^{t-1}$, the channel outputs so far, and by imitating the decoder's operation it becomes aware of any decision made by the decoder. In decision feedback, the encoder is only informed that a decision has been made, so that it can proceed to the next message. While channel feedback requires no intervention from the decoder in the feedback process, it essentially assumes that the feedback channel has the same bandwidth as the main channel. Decision feedback, in contrast, requires only one feedback bit per symbol.
We conclude this section with a few definitions required for the next sections.
\begin{definition} \label{def:StoppingTime}
A \emph{stopping time} $T$ of a rateless code is a random variable defined as
\begin{equation} \label{eq:StoppingTimeDef}
T = \min\{n: g_n(Y^n) \neq 0\}
\end{equation}
\end{definition}
\begin{definition} \label{def:EffectiveRate}
An \emph{effective rate} $R$ of a rateless code is defined as
\begin{equation} \label{eq:EffectiveRateDef}
R = \frac{\log M}{\E\{T\}}
\end{equation}
where $\E\{T\}=\E_q \{ \E_p \{T\}\}$, i.e. the averaging is done over all possible codebooks and channel realizations.
\end{definition}
Using the definition of stopping time, we can define the error event as the case in which the decoder stops, deciding on the wrong message. The error event conditioned on a particular message is defined as
\begin{equation} \label{eq:EmDef}
E_w = \{\hat{W} \neq w \ | \ W = w \}
\end{equation}
where $\hat{W}=g_{_T}(Y^T)$.
The average error probability for the entire message set is therefore
\begin{equation} \label{eq:PeDef}
P_e = \sum_{w=1}^M \pi(w) \cdot \Pr\{E_w\}
\end{equation}
\begin{definition}
For a given DMC, an $(R,M,\epsilon)$-code is a rateless code with effective rate $R$, containing $M$ messages and error probability $P_e \leq \epsilon$.
\end{definition}
\chapter{Previous Results}
\label{ch:PreviousResults}
As noted, the rateless coding scheme is a special case of communication over a channel with feedback. Shannon \cite{ShannonZEC} proved that the capacity of a DMC is not increased by adding feedback. However, adding feedback \emph{can} increase the zero-error capacity of the channel. In his well-known paper, Burnashev \cite{Burnashev} investigated the effect of feedback on communication over a DMC by analyzing the error exponent of such a channel. Introducing the notion of random transmission time, Burnashev obtained a bound on the mean transmission time for a fixed error probability, from which he derived the error exponent\footnote{Referred to as the \emph{reliability function}.} for a DMC with feedback. He also proved a converse theorem showing that the expected transmission time, hence also the error exponent, are asymptotically optimal. (That is, they coincide with the results of the converse theorem as the size of the message set grows to infinity.) The main result of \cite{Burnashev} is the following theorem.
\begin{untheorem}[Burnashev \cite{Burnashev}]
The optimum error exponent for a DMC with noiseless feedback is
\begin{equation}\label{eq:BurnasheErrorExp}
\lim_{M \to \infty} -\frac{1}{\E\{T\}} \log P_e = C_1\left(1 - \frac{R}{C} \right), \qquad 0 \leq R \leq C
\end{equation}
where $T$ is the transmission time, $R$ is defined in \eqref{eq:EffectiveRateDef} and
\begin{equation}\label{eq:C1Def}
C_1 \triangleq \max_{(x,x') \in \mathcal{X} \times \mathcal{X}} D\left(p(\cdot|x)||p(\cdot|x')\right)
\end{equation}
\end{untheorem}
Examining \eqref{eq:BurnasheErrorExp} we can observe that whenever $R \geq C$, the error exponent vanishes, which concurs with Shannon's result \cite{ShannonZEC}. Moreover, whenever the channel has at least two inputs that are completely distinguishable from one another, i.e. $p(y|x)>0$ and $p(y|x')=0$ for some $x,x' \in \mathcal{X}$ and $y \in \mathcal{Y}$, it holds that $D\left(p(\cdot|x)||p(\cdot|x')\right) = \infty$ and hence also $C_1 = \infty$ for that channel. Therefore, the error exponent in that case is infinite at \emph{every} rate below the channel capacity, which implies that the zero-error capacity coincides with the channel capacity $C$.
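To make the quantities in the theorem concrete, the following minimal sketch (the binary symmetric channel and its crossover probability are illustrative assumptions, not part of Burnashev's setting) computes $C_1$ and the resulting exponent $C_1(1-R/C)$ for a few rates.
\begin{verbatim}
# Minimal sketch (illustration only): Burnashev's coefficient
# C1 = max_{x,x'} D(p(.|x) || p(.|x')) and the exponent C1*(1 - R/C)
# for a binary symmetric channel with crossover probability p.
import numpy as np

p = 0.11
rows = np.array([[1 - p, p],      # p(.|x = 0)
                 [p, 1 - p]])     # p(.|x = 1)

def kl(a, b):
    mask = a > 0
    return float((a[mask] * np.log2(a[mask] / b[mask])).sum())

C1 = max(kl(rows[i], rows[j]) for i in range(2) for j in range(2) if i != j)
C  = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)   # BSC capacity [bits]
for R in (0.1, 0.25, 0.4):
    print("R = %.2f: exponent = %.3f" % (R, C1 * (1 - R / C)))
\end{verbatim}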
Also for the case of feedback channels, Shulman \cite{Nadav} developed a coding scheme providing reliable communication over an unknown channel, without compromising the rate. Introducing the concept of \emph{static broadcasting}, which is based on a random codebook and a universal sequential decoder, he demonstrated that it is possible to achieve vanishing error probability at a rate that tends to the capacity of the channel as the size of the message set grows indefinitely. Furthermore, Shulman showed that even if the statistics of the information source are unknown to the transmitter, this scheme achieves the optimal decoding length that would have been achieved if the source were compressed by an optimal source encoder and the channel were known at both ends. More formally, if $K$ information bits of a source $S$ are to be transmitted over an unknown channel $W$, then the average decoding length satisfies
\begin{equation}\label{eq:DecodingLengthNadav}
\lim_{K \to \infty} \frac{\E\{T\}}{K} = \frac{H(S)}{I(P;W)}
\end{equation}
where $P$ is the codebook generation prior and $I(P;W)$ is the mutual information between the input and the output of the channel $W$ when the input is drawn according to distribution $P$.
Shulman also used the coding scheme for source-encoding of correlated sources. He demonstrated that using static broadcasting, it is possible to achieve the Slepian-Wolf optimal rate region. Combining all into one communication scheme, the achievable decoding length is
\begin{equation}\label{eq:DecodingLengthNadavSW}
\lim_{K \to \infty} \frac{\E\{T\}}{K} = \frac{H(S|Z)}{I(P;W)}
\end{equation}
where $Z$ is the side information at the decoder. Shulman's work has been the main inspiration for this research.
For the case of an unknown channel, Tchamkerten and Telatar in \cite{Telatar} used a rateless coding scheme similar to the one defined in Chapter \ref{ch:DefinitionsAndNotation}, where the stopping condition is that the mutual information between (at least) one of the codewords and the channel output sequence exceeds a certain time-dependent threshold. The authors proved that this scheme can achieve the capacity of a general DMC.\footnote{Since no assumption has been made on the capacity-achieving prior, the authors only demonstrated that the rate approaches $I(PQ)$, where $P$ is the codebook generation prior and $Q$ is the transition probability of the channel.} Moreover, they demonstrated that for the class of binary symmetric channels with crossover probabilities $L \in [0,1/2)$, this coding scheme can achieve Burnashev's exponent at a rate bounded by any fraction of the channel capacity. The latter result is obtained by using a second coding phase, in which the transmitter indicates whether the decoder's decision is correct (an \emph{Ack/Nack} phase). Tchamkerten and Telatar also demonstrated that for the class of $Z$ channels with parameter $L \in [0,1)$, the achievable rate can be arbitrarily close to the channel capacity, while the error exponent is infinite. The latter result also coincides with Burnashev's exponent ($C_1$ in \eqref{eq:BurnasheErrorExp} is infinite in this case), since error-free communication is attainable for the $Z$ channel.
We note that all the above-mentioned results are asymptotic in the size of the message set. Recently, Polyanskiy, Poor and Verd{\'u} in \cite{PPV} introduced non-asymptotic results for communication over a DMC with feedback. Through the use of variable-rate coding and sequential decoding they obtained upper and lower bounds on the maximal message set size under fixed constraints on the error probability and the mean decoding length. The authors showed that for an error probability constraint $P_e \leq \epsilon$ and a mean decoding length constraint $\E \{T\} \leq \ell$, the maximal message set size $M^*(\ell,\epsilon)$ satisfies
\begin{equation}\label{eq:PolyanskiyUpperLower}
\frac{\ell C}{1-\epsilon} - \log \ell + O(1) \leq \log M^*(\ell,\epsilon) \leq \frac{\ell C}{1-\epsilon} + O(1)
\end{equation}
The setting of \cite{PPV}, as well as the coding scheme, is similar to the one defined later in Chapter \ref{ch:KnownChannel}. However, while in \cite{PPV} the optimization is on $M$, for fixed $\epsilon$ and $\ell$, we fix $\epsilon$ and $M$ and find the optimum mean decoding length. The analysis is slightly different, but the results of Chapter \ref{ch:KnownChannel} comply with \cite{PPV}. The analysis in Chapter \ref{ch:KnownChannel}, coming next, lays the ground for the derivation of our novel results for the case of unknown channel.
\chapter{Rateless Coding -- Known Channel}
\label{ch:KnownChannel}
\section{Sequential Decoder} \label{sec:SequentialDecoder}
We begin by introducing a rateless coding scheme for noisy channels and analyzing its effective rate, under certain constraints on the size of the message set and the error probability. As will be shown in the sequel, the effective rate is closely related to the channel capacity $C$. More precisely, we will show that under the conventional setting, in which the size of the message set is taken to infinity, the effective rate coincides with the capacity of the channel.
Consider a discrete memoryless source with a set of $M$ equiprobable messages, i.e. $\pi(i)=1/M$, $i=1,\ldots,M$. We use a rateless code as defined in Chapter \ref{ch:DefinitionsAndNotation}, where each codeword $\mathbf{c}_i$, $i=1,\ldots,M$ is generated by drawing i.i.d. symbols according to $q(x)$, the capacity-achieving prior of the channel. The source of randomness generating the codewords is shared by the encoder and the decoder, so that the codebook is known at both ends. The decoder uses the following decision rule:
\begin{equation} \label{eq:ChannelDecoderLin}
g_n(y^n)= \begin{cases}
w, \ \prod_{k=1}^n p(c_{w,k}|y_k) \geq A \cdot \prod_{k=1}^n q(c_{w,k}) \\
0, \ \text{if no such $w$ exists}
\end{cases}
\end{equation}
where $\{c_{w,k}\}_{k=1}^{\infty}$ are the symbols in $\mathbf{c}_w$. If the threshold crossing condition in \eqref{eq:ChannelDecoderLin} is satisfied by more than one codeword, we randomly choose one of them and declare an error. We note here that similar decoders have been proposed by Polyanskiy \cite{PPV} and Burnashev \cite[Ch.3]{Burnashev}.
The decision rule at \eqref{eq:ChannelDecoderLin} can be equivalently written as
\begin{equation} \label{eq:ChannelDecoderLog}
g_n(y^n)= \begin{cases}
w, \ z_{w,1}+\ldots+z_{w,n} \geq a \\
0, \ \text{if no such $w$ exists}
\end{cases}
\end{equation}
where
\begin{equation}
z_{w,k} = \log \frac{p(c_{w,k}|y_k)}{q(c_{w,k})}, \qquad k=1,\ldots,n
\end{equation}
and we define $a = \log A$.
The above-described coding scheme can be summarized as follows. Having selected a message, the encoder starts transmitting an infinite-length random codeword corresponding to that message. The decoder sequentially receives symbols from this codeword that have passed through the channel, and at each time instant $k$ calculates $z_{w,k}$ for $w=1,\ldots,M$. It then updates a set of $M$ accumulators, each corresponding to a possible message, and checks whether any of them crossed a prescribed threshold $a$. If none of the accumulators crossed the threshold, `$0$' is returned and the decoder waits for the next channel output; if exactly one accumulator crossed the threshold, the decoder makes a decision; and if more than one threshold crossing occurred, an error is declared. In the two latter cases, the encoder proceeds to the next codeword.
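The decision rule can also be exercised numerically. The following minimal simulation sketch (illustrative only; the binary symmetric channel, its crossover probability, the message set size and the target error probability are assumptions made for the example) implements the threshold rule of \eqref{eq:ChannelDecoderLog} with a uniform codebook prior and the threshold $a=\log M - \log\epsilon$ used later in the proof of Theorem \ref{Theorem1}; the resulting empirical effective rate can be compared with \eqref{eq:AchievableRateChannelDec}.
\begin{verbatim}
# Minimal sketch (illustration only): the sequential threshold decoder
# above, for a binary symmetric channel (BSC) with crossover probability
# p_e and uniform codebook prior q = (1/2, 1/2). For the BSC with a
# uniform prior, p(x|y) = p(y|x), so the per-symbol increment is
# log2(p(x|y)/q(x)) with q(x) = 1/2.
import numpy as np

rng  = np.random.default_rng(0)
p_e  = 0.11                        # BSC crossover probability
M    = 256                         # number of messages (K = 8 bits)
eps  = 1e-3                        # target error probability
a    = np.log2(M) - np.log2(eps)   # threshold a = log M - log eps [bits]

llr_same = np.log2((1 - p_e) / 0.5)    # increment when c_{w,k} == y_k
llr_diff = np.log2(p_e / 0.5)          # increment when c_{w,k} != y_k

def run_once(max_len=4096):
    w = rng.integers(M)                              # transmitted message
    codebook = rng.integers(0, 2, size=(M, max_len)) # i.i.d. Bernoulli(1/2)
    scores = np.zeros(M)
    for n in range(max_len):
        y = int(codebook[w, n]) ^ int(rng.random() < p_e)  # channel output
        scores += np.where(codebook[:, n] == y, llr_same, llr_diff)
        winners = np.flatnonzero(scores >= a)
        if winners.size:                             # stopping time reached
            return n + 1, bool(winners.size == 1 and winners[0] == w)
    return max_len, False

lengths, correct = zip(*(run_once() for _ in range(200)))
print("mean decoding length:", np.mean(lengths))
print("empirical error rate:", 1.0 - np.mean(correct))
print("empirical effective rate:", np.log2(M) / np.mean(lengths))
\end{verbatim}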
For the above-described scheme we have the following theorem.
\begin{theorem} \label{Theorem1}
For the decoder in \eqref{eq:ChannelDecoderLog} with $P_e \leq \epsilon$, the following effective rate is achievable:
\begin{equation}
R = \frac{C}{1+ \frac{C - \log \epsilon}{\log M}} \label{eq:AchievableRateChannelDec}
\end{equation}
\end{theorem}
\begin{proof}
Since $T$ is a stopping time of the i.i.d. sequence $Z_1,Z_2,\ldots$, Wald's equation \cite{Wald} implies
\begin{equation} \label{eq:ETForWald}
\E\{T\} = \frac{\E \{Z_1 + \ldots + Z_T\}}{\E\{Z\}},
\end{equation}
where $\E\{Z\}$ is the expectation of a single sample $Z_i$. If $X_i$ and $Y_i$ are the input and output of the channel, respectively, then by the definition of $Z_i$ we have
\begin{equation} \label{eq:EZForWald}
\E\{Z\} = \E\{Z_i\} = \E \left\{\log\frac{p(X_i|Y_i)}{q(X_i)}\right\} = C.
\end{equation}
Furthermore, since the stopping condition was not fulfilled at time instant $T-1$, we have
\begin{equation}
Z_1 + \ldots + Z_{T-1} < a
\end{equation}
which implies
\begin{equation} \label{eq:SumZForWald}
Z_1 + \ldots + Z_T < a + Z_T
\end{equation}
Combining \eqref{eq:ETForWald}, \eqref{eq:EZForWald} and \eqref{eq:SumZForWald} we obtain
\begin{equation} \label{eq:WaldC}
\E\{T\} < \frac{a+C}{C}.
\end{equation}
We now tune the threshold parameter $a$ to meet the error probability requirement. Suppose that the stopping time of the correct codeword is $T_w$. An error occurs if a competing codeword $\mathbf{c}_{w'}$, independent of $\{Y_k\}_{k=1}^{\infty}$, crosses the threshold before $\mathbf{c}_w$ does. Thus,
\begin{align}
\Pr \{E_w\} &= \Pr \left\{ \bigcup_{w' \neq w} \bigcup_{t=1}^{T_w}
\left\{ \frac{\prod_{k=1}^{t} p(C_{w',k}|Y_k)}{\prod_{k=1}^{t} q(C_{w',k})} > A \right\} \right\} \\
&\leq (M-1) \Pr \left\{ \bigcup_{t=1}^{T_w}
\left\{ \frac{\prod_{k=1}^{t} p(X_k|Y_k)}{\prod_{k=1}^{t} q(X_k)} > A \right\} \right\} \label{eq:ErrorProbUB} \\
&\leq (M-1) \Pr \left\{ \bigcup_{t=1}^{\infty}
\left\{ \frac{\prod_{k=1}^{t} p(X_k|Y_k)}{\prod_{k=1}^{t} q(X_k)} > A \right\} \right\} \label{eq:ErrorProbInf}
\end{align}
where \eqref{eq:ErrorProbUB} follows from the union bound for an arbitrary sequence $\{X_k\}_{k=1}^{\infty}$ drawn i.i.d. from $q(x)$, independently of $\{Y_k\}_{k=1}^{\infty}$. Note that the bound in \eqref{eq:ErrorProbInf} represents the probability that a randomly-chosen codeword will exceed the threshold at any time instant. Define a sequence of random variables
\begin{equation}\label{eq:uidef}
U_t = \begin{cases}
\frac{p(X_t|Y_t)}{q(X_t)}, \ \prod_{k=1}^{t-1} U_k \leq A \\
1, \ \text{otherwise}
\end{cases}
\end{equation}
If at instant $t$ the threshold at \eqref{eq:ErrorProbInf} is exceeded for the first time, then we have $U_k=p(X_k|Y_k)/q(X_k)$ for $k=1,\ldots,t$ and $U_k=1$ for all $k>t$. Therefore, it is easy to see that
\begin{equation}
\bigcup_{t=1}^{\infty}
\left\{ \frac{\prod_{k=1}^{t} p(X_k|Y_k)}{\prod_{k=1}^{t} q(X_k)} > A \right\}
\Leftrightarrow \prod_{t=1}^{\infty} U_t > A
\end{equation}
We can also see that $\E\{U_t\}=1$ for \emph{all} $t$ because
\begin{equation*}
\E \left\{ U_t|\prod_{k=1}^{t-1} U_k > A \right\} = 1
\end{equation*}
since $U_t = 1$ deterministically in this case, and
\begin{align}
\E \left\{ U_t|\prod_{k=1}^{t-1} U_k \leq A \right\} &= \E \left\{ \frac{p(X_t|Y_t)}{q(X_t)} \right\} \\
&= \E \left\{ \E \left\{ \frac{p(X_t|Y_t)}{q(X_t)}|Y_t \right\} \right\} \\
&= \E \left\{ \sum_{x \in \mathcal{X}} \frac{p(x|Y_t)}{q(x)} \cdot q(x) \right\} \label{eq:IndependentX} \\
&= 1
\end{align}
where \eqref{eq:IndependentX} follows since $X_t$ and $Y_t$ are independent. For an arbitrary $N$ we have
\begin{align}\label{eq:ProdU}
\E \left\{\prod_{t=1}^N U_t \right\} &= \E \left\{ \E \left\{\prod_{t=1}^N U_t | \prod_{t=1}^{N-1} U_t \right\} \right\} \\
&= \E \{ U_N \} \cdot \E \left\{\prod_{t=1}^{N-1} U_t \right\} \\
&= \E \left\{\prod_{t=1}^{N-1} U_t \right\} = \ldots = \E\{U_1\} = 1
\end{align}
Since the above holds for all $N$, we also have
\begin{equation}\label{eq:InfProdU}
\E \left\{\prod_{t=1}^\infty U_t \right\} = 1
\end{equation}
Returning to \eqref{eq:ErrorProbInf}, we get
\begin{align}
\Pr \{E_w\} &\leq (M-1) \Pr \left\{ \bigcup_{t=1}^{\infty}
\left\{ \frac{\prod_{k=1}^{t} p(X_k|Y_k)}{\prod_{k=1}^{t} q(X_k)} > A \right\} \right\} \label{eq:BoundErrorProbKnownCh} \\
&= (M-1) \Pr \left\{ \prod_{t=1}^{\infty} U_t > A \right\} \\
&\leq \frac{M-1}{A} \label{eq:Markov}
\end{align}
where \eqref{eq:Markov} follows from \eqref{eq:InfProdU} and Markov's Inequality.
Since the above holds for all $w \in \mathcal{W}$, we also have
\begin{equation}
P_e \leq \frac{M-1}{A}
\end{equation}
By choosing $a = \log M - \log \epsilon $, or equivalently $A = M/\epsilon$, we ensure that $P_e < \epsilon$. Substituting $a$ into \eqref{eq:WaldC} and using Definition \ref{def:EffectiveRate}, we obtain \eqref{eq:AchievableRateChannelDec}.
\end{proof}
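To get a feeling for the finite-$M$ behaviour of \eqref{eq:AchievableRateChannelDec}, the short sketch below (the capacity value and the error probability are illustrative assumptions) evaluates the achievable rate for several message set sizes.
\begin{verbatim}
# Minimal sketch (illustration only): evaluate the achievable rate of
# Theorem 1, R = C / (1 + (C - log2(eps)) / log2(M)), showing how it
# approaches the capacity C as the message set grows.
import math

C, eps = 0.5, 1e-3
for K in (10, 100, 1000, 10000):          # K = log2(M) information bits
    R = C / (1 + (C - math.log2(eps)) / K)
    print("K = %6d bits: R = %.4f  (R/C = %.3f)" % (K, R, R / C))
\end{verbatim}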
It is important to note that the encoding operation is independent of the working rate; the encoder needs to know the channel law only to generate the codebook. However, if the channel is known to belong to a family for which the capacity-achieving prior is known (e.g. the uniform prior for symmetric channels), then the optimal rate can be achieved even when the encoder is uninformed of the channel law. Furthermore, from a practical point of view, using the uniform prior instead of the capacity-achieving prior is known to perform relatively well in many cases. For instance, using a uniform prior for a binary-input channel loses at most 6\% of the capacity (see \cite[Ch.5]{Nadav}).
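As an illustration of the last remark, the sketch below (the $Z$-channel and its parameter are an arbitrary example, and the capacity is found by a simple grid search rather than analytically) compares the mutual information obtained with the uniform prior to the capacity of a binary-input channel.
\begin{verbatim}
# Minimal sketch (illustration only): mutual information of a binary-input
# DMC under the uniform prior versus its capacity (grid search over the
# input prior). The Z-channel below is an arbitrary example.
import numpy as np

def mutual_information(prior, W):
    """I(X;Y) in bits for input prior `prior` and channel matrix W[x, y]."""
    p_xy = prior[:, None] * W
    p_y  = p_xy.sum(axis=0)
    mask = p_xy > 0
    ratio = p_xy[mask] / (prior[:, None] * p_y[None, :])[mask]
    return float((p_xy[mask] * np.log2(ratio)).sum())

W = np.array([[1.0, 0.0],      # Z-channel: input 0 is noiseless,
              [0.3, 0.7]])     # input 1 flips to output 0 w.p. 0.3

I_uni = mutual_information(np.array([0.5, 0.5]), W)
cap   = max(mutual_information(np.array([1 - p, p]), W)
            for p in np.linspace(0.01, 0.99, 9801))
print("I(uniform) = %.4f, capacity ~ %.4f, loss = %.1f%%"
      % (I_uni, cap, 100 * (1 - I_uni / cap)))
\end{verbatim}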
\section{Coding Theorem for Known Channel} \label{sec:CodingThmKnownChannel}
We will now use the coding scheme developed in Section \ref{sec:SequentialDecoder} to prove the main result for rateless channel coding. For a fixed error probability, we will obtain an achievable rate using rateless codes. Then, we will prove that this rate is within $O(\log \log M / \log M)$ from the optimal rate achievable with this error probability. Before we get to the main theorem, we prove the following lemma, which facilitates some refinement in the achievable rate.
\begin{lemma} \label{Lemma1}
Suppose that an $(R,M,\epsilon)$-code exists for a DMC. Then for any $0<\alpha<1$, there also exists an $(R',M,\epsilon')$-code for the same channel, where
\begin{eqnarray}
R' & = & (1-\alpha)^{-1}R \\
\epsilon' & = & \alpha + \epsilon - \alpha \epsilon
\end{eqnarray}
\end{lemma}
\begin{proof}
To show that the triplet $(R',M,\epsilon')$ is achievable, we use the original code with randomized decision-making at the decoder. For each transmitted message, the decoder either terminates the transmission immediately and declares an error, with probability $\alpha$, or uses the original decision rule. Denote the stopping time of the original decoder and the modified one by $T$ and $T'$, respectively. The expected decision time of the modified decoder is
\begin{equation}
\E\{T'\} = (1-\alpha) \E\{T\},
\end{equation}
which implies
\begin{equation}
R' = (1-\alpha)^{-1} R.
\end{equation}
The error event in the modified scheme is a union of two non-mutually-exclusive events: error in the original decoder and the event of early termination. The probability of this union is
\begin{equation}
\epsilon' = \alpha + \epsilon - \alpha \epsilon.
\end{equation}
Finally, we note that the number of messages in the codebook remains unchanged---which completes the proof of the lemma.
\end{proof}
\begin{theorem}
For rateless codes, the following rate is achievable:
\begin{equation} \label{eq:AchievableRate}
R' = \begin{cases}
\frac{1 - 1/\log M}{1+ \frac{C + \log \log M}{\log M}} \cdot \frac{C}{1-\epsilon} & \epsilon > 1/\log M \\
\frac{C}{1+ \frac{C - \log \epsilon}{\log M}} & \epsilon \leq 1/\log M \\
\end{cases}
\end{equation}
\end{theorem}
We note that if $\epsilon$ is fixed and $M$ is large enough so that $\epsilon > 1/\log M$, the achievable rate has the following asymptotics:
\begin{equation} \label{eq:AchievableRateAsym}
R' = \frac{C}{1-\epsilon} \cdot \left(1 - O \left( \frac{\log \log M}{\log M} \right) \right)
\end{equation}
\begin{proof}
Theorem \ref{Theorem1} implies that the triplet $(R,M,\delta)$ is achievable for all $0 < \delta < 1$, where
\begin{equation}
R = \frac{C}{1+ \frac{C - \log \delta}{\log M}}
\end{equation}
By Lemma \ref{Lemma1}, we can also achieve $(R',M,\delta')$, where
\begin{eqnarray}
R' & = & \frac{C}{(1-\alpha)\left(1+ \frac{C - \log \delta}{\log M}\right)} \\
\delta' & = & \alpha + \delta - \alpha \delta
\end{eqnarray}
for all $0 < \alpha < 1$. By choosing
\begin{equation}
\alpha = \frac{\epsilon - \delta}{1 - \delta}
\end{equation}
we obtain
\begin{eqnarray}
R' & = & \frac{1 - \delta}{1+ \frac{C - \log \delta}{\log M}} \cdot \frac{C}{1-\epsilon} \\
\delta' & = & \epsilon
\end{eqnarray}
Since the foregoing analysis holds for all $0 < \delta \leq \epsilon$, we can choose $\delta = \min\{\epsilon, 1/\log M\}$ to obtain \eqref{eq:AchievableRate}.
\end{proof}
\begin{remark}
If $\epsilon \leq 1/ \log M$, the choice $\delta = \epsilon$ implies $\alpha = 0$, that is, no randomization at the decoder. This result could be anticipated, since the randomized decoder trades reliability for rate: it obtains a better effective rate at the cost of some compromise on the error probability. Hence, whenever the error probability constraint is more important than the working rate, randomization can only worsen matters.
\end{remark}
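As a numerical illustration (ours, with arbitrarily chosen capacity and error-probability values; all logarithms are taken in bits), the following Python sketch evaluates \eqref{eq:AchievableRate} and shows how the achievable rate approaches $C/(1-\epsilon)$ as the message set grows.
\begin{verbatim}
import numpy as np

def achievable_rate(C, logM, eps):
    """Evaluate the achievable rate of the theorem above (illustrative only)."""
    if eps > 1.0 / logM:
        return (1 - 1/logM) / (1 + (C + np.log2(logM)) / logM) * C / (1 - eps)
    return C / (1 + (C - np.log2(eps)) / logM)

C, eps = 0.5, 1e-2             # example capacity [bits/use] and error constraint
for K in (10, 100, 1000):      # K = log M, information bits per message
    print(f"K = {K:5d}:  R' = {achievable_rate(C, K, eps):.4f}"
          f"   (C/(1-eps) = {C/(1-eps):.4f})")
\end{verbatim}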
\section{Error Exponent}
Theorem 2 in the previous section provides a relation between the working rate and the allowed error probability. We will now investigate this dependency in the regime of low error probability by developing the error exponent induced by this coding scheme. Assuming that a low error probability is required, i.e. $\epsilon \leq 1/\log M$, randomization at the decoder is not applied, so \eqref{eq:AchievableRate} can be rewritten as
\begin{equation}
- \frac{R}{\log M} \log \epsilon = C - R - \frac{CR}{\log M}
\end{equation}
Recall that $R = \log M / \E\{T\}$, so
\begin{equation}
- \frac{\log \epsilon}{\E\{T\}} = C - R - \frac{CR}{\log M} \triangleq E(R)
\end{equation}
We can see that the error exponent is a \emph{linear} function of the rate, which is also the case in Burnashev's analysis \eqref{eq:BurnasheErrorExp} (albeit with a different coefficient). Furthermore, as $M$ grows, the error exponent converges to $C-R$ and the convergence is dominated by a term of order $O(1/\log M)$, or $O(1/K)$. This term can be interpreted as a penalty for using a finite message set.
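Reading the above relation as an equality that defines the achievable error probability, the short sketch below (illustrative only; the capacity and message size are arbitrary, and logarithms are in bits) evaluates $E(R)$ and the corresponding $\epsilon = 2^{-E(R)\E\{T\}}$ at a few operating rates.
\begin{verbatim}
import numpy as np

def error_exponent(R, C, logM):
    """E(R) = C - R - C*R/logM (bits per channel use)."""
    return C - R - C * R / logM

C, logM = 0.5, 100.0                    # example capacity and K = log M
for R in (0.1, 0.2, 0.3, 0.4):
    E = error_exponent(R, C, logM)
    ET = logM / R                       # mean transmission time at rate R
    print(f"R = {R:.2f}  E(R) = {E:.4f}  eps = 2^(-E*E[T]) = {2.0**(-E * ET):.2e}")
\end{verbatim}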
\section{Weak Converse}
In the previous section we have seen that if we use a codebook with $M$ messages and allow an error probability $P_e \leq \epsilon$, then we can achieve an effective rate with the following asymptotics:
\begin{equation}
R' = \frac{C}{1-\epsilon} \cdot \left(1 - O \left( \frac{\log \log M}{\log M} \right) \right)
\end{equation}
We will now prove that under the above constraints on the message set and the error probability, the best achievable rate has the same asymptotics. In other words, the achievable rate at \eqref{eq:AchievableRate} converges to the optimal rate, and the convergence is dominated by a term of order $O(1/\log M)$.
\begin{theorem}
Given a decoder with a random stopping time $T$\footnote{A fixed stopping time is a special case of a random stopping time, in which $T$ takes only one value.}, any rate for which the probability of error does not exceed $\epsilon$ satisfies
\begin{equation}
R' \leq \frac{C}{1-\epsilon} \cdot \left(1 + O \left( \frac{1}{\log M} \right) \right).
\end{equation}
\end{theorem}
\begin{proof}
Define
\begin{equation}
\mu(n) = H(W|Y^n) + nC.
\end{equation}
By \cite[Lemma 2]{Burnashev} we have
\begin{equation}
\E \{\mu(n+1)|Y^n\} - \mu(n) = \E \{H(W|Y^{n+1})-H(W|Y^n)|Y^n\} + C \geq 0
\end{equation}
which implies that $\mu(n)$ is a submartingale with respect to the process $\{Y_k\}_{k=1}^{\infty}$. Therefore we have
\begin{align}
\log M &= H(W) = \mu(0) \nonumber \\
&\leq \E \{\mu(T)\} \nonumber \\
&= \E \{H(W|Y^T)\} + C \cdot \E\{T\} \label{eq:SubmartingalIneq}
\end{align}
Furthermore, by \cite[Lemma 1]{Burnashev} we have
\begin{align}
\E\{H(W|Y^T)\} &\leq h \left(P_e\right) + P_e \cdot \log (M-1) \nonumber \\
&< 1 + \epsilon \cdot \log M \label{eq:Fano}
\end{align}
where \eqref{eq:Fano} follows from the requirement $P_e \leq \epsilon$, and from an upper bound on the binary entropy function. Combining \eqref{eq:SubmartingalIneq} and \eqref{eq:Fano} we obtain
\begin{equation} \label{eq:BoundLogM}
\log M < 1 + \epsilon \cdot \log M + C \cdot \E\{T\}
\end{equation}
which implies
\begin{equation} \label{eq:RPrimeWithET}
R' = \frac{\log M}{\E\{T\}} < \frac{C}{1-\epsilon} \cdot \left(1 + \frac{1}{C \cdot \E\{T\}}\right)
\end{equation}
Furthermore, from \eqref{eq:BoundLogM} we can see that
\begin{equation}
C \cdot \E\{T\} > (1 - \epsilon) \cdot \log M - 1
\end{equation}
and therefore \eqref{eq:RPrimeWithET} can be replaced by
\begin{equation}
R' = \frac{\log M}{\E\{T\}} \leq \frac{C}{1-\epsilon} \cdot \left(1 + O\left(\frac{1}{\log M}\right)\right) \label{eq:UpperBound}
\end{equation}
\end{proof}
\begin{remark}
While \eqref{eq:AchievableRate} approaches \eqref{eq:UpperBound} for large $M$, the upper bound is not tight for a finite $M$. Note that the converse used here is ``weak'', in that it is based on Fano's inequality, which is known to be loose in many cases. We conjecture that a strong converse can be found, which will be tighter (i.e. closer to \eqref{eq:AchievableRate}) even in the non-asymptotic realm.
\end{remark}
\begin{remark}
Equation \eqref{eq:AchievableRateAsym}, the achievable rate, is essentially equivalent to the left-hand side of \cite[Eq.18]{PPV}, and equation \eqref{eq:UpperBound}, the upper bound on the rate, is equivalent to the right-hand side of that equation. Note, however, that the formulation is slightly different: in \cite{PPV} the size of the message set $M$ is optimized under a constraint on the maximal transmission time, while here $M$ is fixed and the transmission time is minimized.
\end{remark}
\section{Further Discussions}
\subsection{Application for Gaussian Channels}
While the analysis in Sections \ref{sec:SequentialDecoder} and \ref{sec:CodingThmKnownChannel} is done for discrete channels, it can be easily extended to memoryless Gaussian channels. Suppose that $X_t$ and $Y_t$ are the input and output of an additive white Gaussian noise channel at time instant $t$, i.e.
\begin{equation} \label{eq:AWGNDef}
Y_t = X_t + V_t, \qquad t=1,2,\ldots
\end{equation}
where $\{V_t\}_{t=1}^\infty$ is a sequence of i.i.d. Gaussian random variables with zero mean and a known variance. The encoding and the decoding processes, as well as the expression for the resulting effective rate, are similar to those of the DMC, where $q(\cdot)$ is the codebook generation PDF and $p(\cdot|\cdot)$ is the transition PDF of the backward channel.
Specifically, consider the above-described setting where $V_k \sim N(0,\theta)$. Suppose that the codebook is Gaussian with power constraint $P$, i.e. $C_{m,k} \sim N(0,P)$ for all $m,k$. (Here again, $C_{m,k}$ is the $k$-th symbol of the $m$-th codeword.) The decoding rule is given by \eqref{eq:ChannelDecoderLin}, where
\begin{align}
p(x|y) &= \left( 2\pi \condvar \right)^{-1/2} \exp \left\{ -\frac{1}{2 \cdot \condvar} \left( x - \Wiener \cdot y \right)^2 \right\} \\
q(x) &= \left( 2\pi P \right)^{-1/2} \exp \left\{ -\frac{x^2}{2P} \right\}
\end{align}
The effective rate of the decoder is given in \eqref{eq:AchievableRateChannelDec}, where
\begin{equation}
C = \frac{1}{2} \log \left( 1 + \frac{P}{\theta}\right)
\end{equation}
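For concreteness, the following Python sketch (an illustrative simulation written for this discussion; the power, noise variance, message-set size and error constraint are arbitrary) implements the sequential threshold test for the AWGN setting and compares the empirical mean stopping time of the true codeword with the bound $(a+C)/C$ (cf. \eqref{eq:WaldC}). Only the statistic of the transmitted codeword is simulated, since the mean decoding time is governed by the time at which the true codeword crosses the threshold.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P, theta = 1.0, 1.0                    # signal power and noise variance (examples)
C = 0.5 * np.log2(1 + P / theta)       # channel capacity in bits
W = P / (P + theta)                    # Wiener coefficient, E[X|Y] = W*Y
cond_var = P * theta / (P + theta)     # Var[X|Y]
M, eps = 2**20, 1e-3                   # message-set size and error constraint
a = np.log2(M / eps)                   # decision threshold (bits)

def stopping_time():
    """Transmit one codeword and return the decoder's stopping time."""
    llr, t = 0.0, 0
    while llr <= a:
        x = rng.normal(0.0, np.sqrt(P))            # transmitted symbol ~ q(x)
        y = x + rng.normal(0.0, np.sqrt(theta))    # received symbol
        # z_t = log p(x|y) - log q(x), evaluated in bits
        log_p = -0.5*np.log2(2*np.pi*cond_var) - (x - W*y)**2 / (2*cond_var*np.log(2))
        log_q = -0.5*np.log2(2*np.pi*P) - x**2 / (2*P*np.log(2))
        llr += log_p - log_q
        t += 1
    return t

T = [stopping_time() for _ in range(200)]
print(f"empirical E[T] = {np.mean(T):.1f},  bound (a+C)/C = {(a + C)/C:.1f}")
\end{verbatim}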
\subsection{Limited Feedback Channel}
In the foregoing analysis, we assumed that the feedback channel must be used once per main channel use. In practice, however, it may be desirable to reduce the amount of data transmitted over the feedback channel. For instance, in the case of broadcasting to multiple users, the upstream channel may have a more stringent bandwidth constraint as it must be accessed by all users. It is therefore interesting to see how lowering the frequency of the feedback affects the performance of the rateless coding scheme. Suppose that we want to use the feedback channel only once per $s$ received symbols. The maximal number of excess symbols transmitted over the main channel (i.e. the number of symbols transmitted after a decoder without feedback limitation would acknowledge the message) is $s-1$, which implies an effective rate of
\begin{equation}\label{eq:RateLimitedFB}
R = \frac{C}{1+ \frac{(s-1)C - \log \epsilon}{\log M}}
\end{equation}
From \eqref{eq:RateLimitedFB} we see that limiting the feedback frequency has a negligible effect if either $s \ll (-\log \epsilon)/C$ or $s \ll (\log M)/C$. In the former case, the required confidence level is high, and in the latter case the messages are long. That is, in both cases the codewords are long relative to the capacity of the channel, which implies a long transmission time. Therefore, in both cases the excess decoding time is small compared to the entire transmission length, and the effect of limiting the feedback is negligible.
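A quick evaluation of \eqref{eq:RateLimitedFB} (ours, with arbitrary parameter values and logarithms in bits) makes this trade-off concrete:
\begin{verbatim}
import numpy as np

def rate_limited_feedback(C, logM, eps, s):
    """Effective rate when the feedback is used once every s channel uses."""
    return C / (1 + ((s - 1) * C - np.log2(eps)) / logM)

C, logM, eps = 0.5, 1000.0, 1e-3       # arbitrary example parameters
for s in (1, 10, 100, 1000):
    print(f"s = {s:5d}:  R = {rate_limited_feedback(C, logM, eps, s):.4f}")
\end{verbatim}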
\chapter{Rateless Coding -- Unknown Channel}
\label{ch:UnknownChannel}
In Chapter \ref{ch:KnownChannel} we assumed that the communication channel, characterized by $p(y|x)$, is known at the receiver end. Assume now that the underlying channel is unknown to the receiver. The capacity of the channel is known to be achievable in this scenario using sequential versions of the Maximal Mutual Information (MMI) decoder \cite{Nadav}, \cite{Telatar}. However, while these schemes provide reliable communication at a rate equal to the channel capacity, they assume that the size of the message set $M$ is infinite. In this chapter we address the question of whether universal communication is feasible with a finite message set and, if it is, what rates are achievable. As we shall see shortly, it is possible to achieve reliable communication over an unknown channel even when the message set is finite, and we can also bound the rate degradation due to the lack of information about the channel law.
\section{Achievable Rate for an Unknown Channel} \label{sec:RateUnknownChannel}
Suppose that we wish to communicate over a DMC with unknown (backward) transition probabilities
\begin{equation} \label{eq:ThetaParamsDef}
\theta_{ij} = \Pr\{X=i|Y=j\}, \qquad i = 1,\ldots,|\mathcal{X}|, \qquad j = 1,\ldots,|\mathcal{Y}|
\end{equation}
We use a coding scheme similar to the one described in Chapter \ref{ch:KnownChannel} with the following modification. Instead of using the true transition probability $p_{\tv}(x^t|y^t)$, which is unknown to the decoder, we use a \emph{universal} probability assignment defined as
\begin{equation}\label{eq:UniversalProb}
p_U (x^t|y^t) \triangleq \int_{\Lambda} w(\tv') p_{\tv'} (x^t|y^t) d\tv'
\end{equation}
where
\begin{equation}
\Lambda = \left\{\tv' \in [0,1]^{\XY} \ | \ \sum_{i=1}^{|\mathcal{X}|} \theta'_{ij} = 1, \quad j = 1,\ldots,|\mathcal{Y}|\right\}
\end{equation}
and the weight function $w(\cdot)$ is chosen to be Jeffreys prior\footnote{This is also a special case of the Dirichlet distribution.}, i.e.
\begin{equation} \label{eq:JeffreysPrior}
w(\tv') = \frac{1}{\BXY \sqrt{\prod_{i,j} \theta'_{ij}}}
\end{equation}
where
\begin{equation} \label{eq:DefBXY}
\BXY = \int_{\Lambda} \frac{d\tv'}{\sqrt{\prod_{i,j} \theta'_{ij}}}
\end{equation}
\begin{remark}
While the unknown channel is usually characterized by a set of transition probabilities
\begin{equation*}
\tilde{\theta}_{ij} = \Pr\{Y=j|X=i\}, \qquad i = 1,\ldots,|\mathcal{X}| \qquad j = 1,\ldots,|\mathcal{Y}|
\end{equation*}
the entire derivation here is done for the \emph{backward} channel parameterization given in \eqref{eq:ThetaParamsDef}. However, this poses no difficulty since the entire analysis assumes a known input prior $q(x)$, and therefore given $\{\tilde{\theta}_{ij}\}$, the parameters in \eqref{eq:ThetaParamsDef} are well-defined. Moreover, the region $\tilde{\Lambda}$, induced by $\{\tilde{\theta}_{ij}\}$ and $q(x)$, is clearly contained in the region $\Lambda$. Therefore, if a coding scheme is universal with respect to all possible realizations of the backward channel, it is also universal w.r.t. all possible realizations of the forward channel.
\end{remark}
The universal probability assignment implies the following decoding rule, which is the universal counterpart of \eqref{eq:ChannelDecoderLin}:
\begin{equation} \label{eq:ChannelDecoderUnv}
g_n(y^n)= \begin{cases}
w, \ p_U(\mathbf{c}_w|y^n) \geq A \cdot q(\mathbf{c}_w) \\
0, \ \text{if no such $w$ exists}
\end{cases}
\end{equation}
In Chapter \ref{ch:KnownChannel} we used Wald's Identity to bound the expected transmission time, thereby obtaining an effective rate for the sequential decoder. Unfortunately, in the universal case $p_U(\cdot|\cdot)$ is not necessarily multiplicative, so $\log p_U(\cdot|\cdot)$ cannot be expressed as the sum of i.i.d. random variables. Therefore, the expected transmission time in the universal case cannot be calculated directly by applying Wald's identity. Nevertheless, as we shall see shortly, we can use the results for the known channel case to obtain an upper bound for the transmission time in the universal case.
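Nevertheless, $p_U(\cdot|\cdot)$ can still be evaluated online: since the Jeffreys prior in \eqref{eq:JeffreysPrior} factorizes over the output symbols, the mixture \eqref{eq:UniversalProb} reduces to a Krichevsky--Trofimov (Dirichlet-$1/2$) estimate computed separately for each output symbol, and the chain rule then yields the per-symbol increments of $\log p_U$. The following Python sketch (an illustration added here; the toy sequences are arbitrary) makes this explicit.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def log_pu_increments(xs, ys, alphabet_size_x):
    """Yield log2 p_U(x_t | x^{t-1}, y^t) for the Jeffreys-prior mixture.

    The mixture factorizes over output symbols, so for each y the predictive
    probability of the next x is the Krichevsky-Trofimov estimate
    (N(x|y) + 1/2) / (N(y) + |X|/2), based on counts of earlier positions.
    """
    counts = defaultdict(lambda: np.zeros(alphabet_size_x))
    for x, y in zip(xs, ys):
        n_xy = counts[y]
        yield np.log2((n_xy[x] + 0.5) / (n_xy.sum() + alphabet_size_x / 2.0))
        n_xy[x] += 1.0

# Toy usage with binary input/output sequences (arbitrary example data).
xs = [0, 1, 0, 0, 1, 0, 0, 0]
ys = [0, 1, 0, 1, 1, 0, 0, 1]
log_pu = sum(log_pu_increments(xs, ys, alphabet_size_x=2))
print(f"log2 p_U(x^t|y^t) = {log_pu:.3f}")
\end{verbatim}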
The following lemma shows that given two sequences $x^t$ and $y^t$, the universal metric cannot be too far from the conditional probability assignment that is optimally fitted to $x^t$ and $y^t$.
\begin{lemma} \label{thm:UnvProbLemma}
For any two sequences $x^t$ and $y^t$ we have
\begin{equation}
\log \frac{p_{\htv} (x^t|y^t)}{p_U (x^t|y^t)} \leq \frac{\left(\XX-1\right)\YY}{2} \log \frac{t}{2\pi} + \YY \Lkappa_{\XX} + \left( \frac{\XX^2 \YY}{4} + \frac{\XY}{2} \right) \log e
\end{equation}
where
\begin{equation} \label{eq:ThetaOpt}
\htv = \arg \max_{\tv' \in \Lambda} p_{\tv'} (x^t|y^t)
\end{equation}
and we define
\begin{equation}
\Lkappa_{\XX} = \log \frac{\Gamma(1/2)^{\XX}}{\Gamma(\XX/2)}
\end{equation}
\end{lemma}
\begin{proof}
Note that
\begin{align}
p_{\htv}(x^t|y^t)&= \max_{\{\theta_{i,j}\}} \prod_{i,j} \theta_{i,j}^{N(x^t,y^t;i,j)} \\
&= \max_{\{\theta_{i,1}\}} \prod_i \theta_{i,1}^{N(x^t,y^t;i,1)} \cdot \max_{\{\theta_{i,2}\}} \prod_i \theta_{i,2}^{N(x^t,y^t;i,2)} \cdot \ldots \cdot
\max_{\{\theta_{i,\YY}\}} \prod_i \theta_{i,\YY}^{N(x^t,y^t;i,\YY)}
\end{align}
where
\begin{equation} \label{eq:NxyDef}
N(x^t,y^t;i,j) = \left| \left\{k \ : \ (x_k,y_k)=(i,j) \right\} \right|
\end{equation}
Since both $w(\cdot)$ and $p_{\tv}$ are multiplicative functions, we also have
\begin{align}
p_U (x^t|y^t) &= \int_{\Lambda} w(\tv') p_{\tv'} (x^t|y^t) d\tv' \\
&= \int_{\tilde{\Lambda}} w(\tilde{\tv}) \prod_i \tilde{\theta}_i^{N(x^t,y^t;i,1)} d\tilde{\tv} \cdot \int_{\tilde{\Lambda}} w(\tilde{\tv}) \prod_i \tilde{\theta}_i^{N(x^t,y^t;i,2)} d\tilde{\tv} \cdot \ldots \\
& \cdot \int_{\tilde{\Lambda}} w(\tilde{\tv}) \prod_i \tilde{\theta}_i^{N(x^t,y^t;i,\YY)} d\tilde{\tv}
\end{align}
where
\begin{equation}
\tilde{\Lambda} = \left\{\tilde{\tv} \in [0,1]^{\XX} \ | \ \sum_{i=1}^{|\mathcal{X}|} \tilde{\theta}_i = 1 \right\}
\end{equation}
From \cite[Lemma 1]{Barron} we know that
\begin{equation}\label{eq:BarronsIneq}
\log \frac{\max_{\{\theta_{i,j}\}} \prod_i \theta_{i,j}^{N(x^t,y^t;i,j)}}{\int_{\tilde{\Lambda}} w(\tilde{\tv}) \prod_i \tilde{\theta}_{i,j}^{N(x^t,y^t;i,j)} d\tilde{\tv}} \leq \frac{\XX-1}{2} \log \frac{t}{2\pi} + \Lkappa_{\XX} + \left( \frac{\XX^2}{4} + \frac{\XX}{2} \right) \log e
\end{equation}
for all $j = 1,\ldots,\YY$. Thus, we obtain
\begin{align}
\log \frac{p_{\htv} (x^t|y^t)}{p_U (x^t|y^t)} &= \log \prod_j \frac{\max_{\{\theta_{i,j}\}} \prod_i \theta_{i,j}^{N(x^t,y^t;i,j)}}{\int_{\tilde{\Lambda}} w(\tilde{\tv}) \prod_i \tilde{\theta}_{i,j}^{N(x^t,y^t;i,j)} d\tilde{\tv}} \\
&\leq \frac{\left(\XX-1\right)\YY}{2} \log \frac{t}{2\pi} + \YY \Lkappa_{\XX} + \left( \frac{\XX^2 \YY}{4} + \frac{\XY}{2} \right) \log e \\
&=\frac{\left(\XX-1\right)\YY}{2} \log t + \beta
\end{align}
where we define
\begin{equation}\label{eq:BetaDef}
\beta \triangleq \YY \Lkappa_{\XX} + \left( \frac{\XX^2 \YY}{4} + \frac{\XY}{2} \right) \log e - \frac{\left(\XX-1\right)\YY}{2} \log (2\pi)
\end{equation}
\end{proof}
We are now ready to prove the main theorem for rateless coding over an unknown channel.
\begin{theorem} \label{thm:UnvRate}
For the decoder in \eqref{eq:ChannelDecoderUnv} with $P_e \leq \epsilon$, the following effective rate is achievable:
\begin{equation}\label{eq:UnvEffectiveRate}
R = \frac{C \left( 1 - \frac{\hXY}{\log M \ln 2} \right)}{1 + \frac{C + \beta - \log \epsilon + \frac{\XY}{2} \left( \log \log M - \log C - \frac{1}{\ln 2} \right)}{\log M}}
\end{equation}
\end{theorem}
\begin{proof}
The stopping time in the above-described scheme is
\begin{equation} \label{eq:StoppingTimeUnv}
T = \min \left\{ t: \frac{p_U (x^t|y^t)}{\prod_{k=1}^t q(x_k)} > A \right\}
\end{equation}
since
\begin{equation}
\log p_U (x^t|y^t) = \log p_{\tv} (x^t|y^t) - \log \frac{p_{\tv} (x^t|y^t)}{p_U (x^t|y^t)}
\end{equation}
we have
\begin{align}
T &= \min \left\{ t: \frac{p_U (x^t|y^t)}{\prod_{k=1}^t q(x_k)} > A \right\} \\
&= \min \left\{ t: \log \frac{ \prod_{k=1}^t p_{\tv} (x_k|y_k)}{\prod_{k=1}^t q(x_k)} > \log A + \log \frac{p_{\tv} (x^t|y^t)}{p_U (x^t|y^t)} \right\} \\
&\leq \min \left\{ t: \log \frac{ \prod_{k=1}^t p_{\tv} (x_k|y_k)}{\prod_{k=1}^t q(x_k)} > \log A + \log \frac{p_{\htv} (x^t|y^t)}{p_U (x^t|y^t)} \right\} \label{eq:ThetaOptUse} \\
&< \min \left\{ t: \sum_{k=1}^t \log \frac{p_{\tv}(x_k|y_k)}{q(x_k)} > \log A + \frac{\XY}{2} \log t + \beta \right\} \label{eq:UnvProbLemmaUse}
\end{align}
where \eqref{eq:ThetaOptUse} follows since $p_{\htv} (x_k|y_k) \geq p_{\tv} (x_k|y_k)$ by definition \eqref{eq:ThetaOpt} and \eqref{eq:UnvProbLemmaUse} follows from Lemma \ref{thm:UnvProbLemma}.
From the same considerations as in the proof of Theorem \ref{Theorem1}, at the stopping time $T$ we necessarily have
\begin{equation} \label{eq:UnvStoppingTimeCond}
\sum_{k=1}^T \log \frac{p_{\tv}(X_k|Y_k)}{q(X_k)} \leq a + \frac{\XY}{2} \log T + \beta + \log \frac{p_{\tv}(X_T|Y_T)}{q(X_T)}
\end{equation}
where we define $a = \log A$. By \eqref{eq:UnvStoppingTimeCond} and Wald's Identity,
\begin{align}
\E \{T\} &= \frac{\E \left\{\sum_{k=1}^T \log \frac{p_{\tv}(X_k|Y_k)}{q(X_k)}\right\}}{\E\left\{\log \frac{p_{\tv}(X|Y)}{q(X)}\right\}} \\
&\leq \frac{a + \frac{\XY}{2} \cdot \E\{\log T\} + \beta + C}{C} \label{eq:BoundWithLog}
\end{align}
Since $\log_2 u \leq \frac{u}{v \ln 2} + \log_2 v - \frac{1}{\ln 2}$ for all $u,v>0$, \eqref{eq:BoundWithLog} implies
\begin{equation}
\E \{T\} \leq \frac{a + \frac{\XY}{2} \left( \log v - \frac{1}{\ln 2} \right) + \beta + C}{C \left( 1 - \frac{\hXY}{C \cdot v\ln 2} \right)}
\end{equation}
For $v = \frac{\log M}{C}$ we obtain
\begin{equation} \label{eq:UnvExpStoppingTime}
\E \{T\} \leq \frac{a + \frac{\XY}{2} \left( \log \log M - \log C - \frac{1}{\ln 2} \right) + \beta + C}{C \left( 1 - \frac{\hXY}{\log M \ln 2} \right)}
\end{equation}
which corresponds to the following effective rate:
\begin{equation}\label{eq:UnvEffectiveRateParam}
R = \frac{C \log M \left( 1 - \frac{\hXY}{\log M \ln 2} \right)}{a + \frac{\XY}{2} \left( \log \log M - \log C - \frac{1}{\ln 2} \right) + \beta + C}
\end{equation}
Similarly to the derivation in Chapter \ref{ch:KnownChannel}, we bound the error probability by
\begin{equation*} \label{eq:ErrorProbUnv}
\Pr \{E_w\} \leq (M-1) \Pr \left\{ \bigcup_{t=1}^{\infty}
\left\{ \frac{p_U(X^t|Y^t)}{q(X^t)} > A \right\} \right\}
\end{equation*}
where $\{X_k\}_{k=1}^{\infty}$ and $\{Y_k\}_{k=1}^{\infty}$ are independent sequences. Define
\begin{equation}\label{eq:uidef}
\Phi_t = \begin{cases}
\frac{p_U(X^t|Y^t)}{p_U(X^{t-1}|Y^{t-1}) \cdot q(X_t)}, \ \prod_{k=1}^{t-1} \Phi_k \leq A \\
1, \ \text{otherwise}
\end{cases}
\end{equation}
We can see that
\begin{equation}
\bigcup_{t=1}^{\infty}
\left\{ \frac{p_U(X^t|Y^t)}{q(X^t)} > A \right\}
\Leftrightarrow \prod_{t=1}^{\infty} \Phi_t > A
\end{equation}
Furthermore, we can see that $\E\{\Phi_t\}=1$ for all $t$ since
\begin{equation*}
\E \left\{ \Phi_t | \prod_{k=1}^{t-1} \Phi_k > A \right\} = 1
\end{equation*}
and
\begin{align}
\E \left\{ \Phi_t | \prod_{k=1}^{t-1} \Phi_k \leq A \right\}
&= \E \left\{ \frac{p_U(X^t|Y^t)}{p_U(X^{t-1}|Y^{t-1}) \cdot q(X_t)} \right\} \\
&= \E \left\{ \E \left\{ \frac{p_U(X^t|Y^t)}{p_U(X^{t-1}|Y^{t-1}) \cdot q(X_t)} | X^{t-1}, Y^t \right\} \right\} \\
&= \E \left\{ \frac{ \E \left\{ \int_{\Lambda} w(\tv') \frac{p_{\tv'} (x^t|y^t)}{q(x_t)} d\tv' | X^{t-1}, Y^t \right\}}{\int_{\Lambda} w(\tv') p_{\tv'} (x^{t-1}|y^{t-1}) d\tv'} \right\} \\
&= \E \left\{ \frac{ \sum_{x_t \in \mathcal{X}} q(x_t) \int_{\Lambda} w(\tv') \frac{p_{\tv'} (x^t|y^t)}{q(x_t)} d\tv'}{\int_{\Lambda} w(\tv') p_{\tv'} (x^{t-1}|y^{t-1}) d\tv'} \right\} \\
&= \E \left\{ \frac{ \int_{\Lambda} w(\tv') \sum_{x_t \in \mathcal{X}} p_{\tv'} (x^t|y^t) d\tv' }{\int_{\Lambda} w(\tv') p_{\tv'} (x^{t-1}|y^{t-1}) d\tv'} \right\} \\
&= \E \left\{ \frac{ \int_{\Lambda} w(\tv') p_{\tv'} (x^{t-1}|y^{t-1}) d\tv' }{\int_{\Lambda} w(\tv') p_{\tv'} (x^{t-1}|y^{t-1}) d\tv'} \right\} = 1
\end{align}
For an arbitrary $N$ we have
\begin{align}\label{eq:ProdPhi}
\E \left\{\prod_{t=1}^N \Phi_t \right\} &= \E \left\{ \E \left\{\prod_{t=1}^N \Phi_t | \prod_{t=1}^{N-1} \Phi_t \right\} \right\} \\
&= \E \{ \Phi_N \} \cdot \E \left\{\prod_{t=1}^{N-1} \Phi_t \right\} \\
&= \E \left\{\prod_{t=1}^{N-1} \Phi_t \right\} = \ldots = 1
\end{align}
Since the above holds for all $N$, we also have
\begin{equation}\label{eq:InfProdPhi}
\E \left\{\prod_{t=1}^\infty \Phi_t \right\} = 1
\end{equation}
Thus, similarly to the case of known channel, the error probability can be bounded by
\begin{align}
\Pr \{E_w\} &\leq (M-1) \Pr \left\{ \bigcup_{t=1}^{\infty}
\left\{ \frac{p_U(X^t|Y^t)}{q(X^t)} > A \right\} \right\} \label{eq:BoundErrorProbUnknownCh}\\
&= (M-1) \Pr \left\{ \prod_{t=1}^{\infty} \Phi_t > A \right\} \leq \frac{M-1}{A}
\end{align}
Here again, we choose $A = M/\epsilon$ to obtain $P_e < \epsilon$. Substituting $a = \log A = \log M - \log \epsilon$ into \eqref{eq:UnvEffectiveRateParam} we finally get \eqref{eq:UnvEffectiveRate}.
\end{proof}
\begin{remark}
Interestingly, the upper bound on the error probability in \eqref{eq:BoundErrorProbKnownCh}, obtained when the decoder uses the known channel law $p(x|y)$, applies for an arbitrary probability assignment $p_U(x|y)$; the only required constraint is that the latter be properly normalized, i.e. sum to unity over $x$ for every $y$.
\end{remark}
\begin{remark}
As in the case of known channel, we can use randomized decoder here to obtain the following rate:
\begin{equation}\label{eq:UnvEffectiveRateRand}
R = \frac{C \left( 1 - \frac{\hXY}{\log M \ln 2} \right)}{1 + \frac{C + \beta - \log \delta + \frac{\XY}{2} \left( \log \log M - \log C - \frac{1}{\ln 2} \right)}{\log M}} \cdot \frac{1-\delta}{1-\epsilon}
\end{equation}
for all $0 < \delta < \epsilon$. As we mentioned in Section \ref{sec:CodingThmKnownChannel}, if the required error probability is small, randomization should not be applied. However, if the error probability constraint is loose enough, a better rate may be obtained by optimizing $\delta$ in \eqref{eq:UnvEffectiveRateRand}.
\end{remark}
\section{Discussion}
\subsection{Comparison to the Known Channel Case}
Having obtained achievable rates for the cases of both known and unknown channels, it is interesting to compare these results and evaluate the rate degradation due to the unknown channel. For the case of a known channel, the effective rate at \eqref{eq:AchievableRateChannelDec} can be approximated by
\begin{equation}
R \thickapprox C \left(1 - \frac{C- \log \epsilon}{\log M} \right)
\end{equation}
For the case of unknown channel, we can approximate \eqref{eq:UnvEffectiveRate} by
\begin{align}
R_U &\thickapprox C \left(1 - \frac{C- \log \epsilon}{\log M} \right) \nonumber \\
&- C \left( \frac{\XY}{2} \frac{\log \log M}{\log M} + \frac{(\hXY-1)/\ln 2 + \beta + \log C}{\log M} \right) + O \left( \frac{1}{\log^2 M}\right)
\end{align}
Hence, the penalty for lack of channel knowledge amounts to
\begin{align} \label{eq:RateDegradation}
R-R_U &= C \left( \frac{\XY}{2} \frac{\log \log M}{\log M} + \frac{(\hXY-1)/\ln 2 + \beta + \log C}{\log M} \right) \\
&+ O \left( \frac{1}{\log^2 M}\right) \nonumber
\end{align}
The leading term in the latter expression behaves as $O(\log \log M / \log M) = O(\log K/K)$, multiplied by the product of the cardinalities of the channel input and output alphabets. It is interesting to compare this result with known results from universal source coding, where the \emph{redundancy}\footnote{The excess of the average codeword length above the entropy of the source.} is dominated by the cardinality of the alphabet of the source \cite{UnvPrediction}, and a term that behaves as $O(\log n/n)$, where $n$ is the source length.
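As a rough numerical illustration (ours; the channel dimensions, capacity and error probability are arbitrary, the constants follow \eqref{eq:BetaDef} and \eqref{eq:RateDegradation}, and $\hXY$ is read as $\XY/2$ as in the derivation above), the leading penalty term can be evaluated explicitly:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def kappa(X):
    """kappa_X = log2( Gamma(1/2)^X / Gamma(X/2) )."""
    return (X * gammaln(0.5) - gammaln(X / 2.0)) / np.log(2)

def beta(X, Y):
    """beta as defined in the text (all logarithms in bits)."""
    return (Y * kappa(X) + (X**2 * Y / 4.0 + X * Y / 2.0) * np.log2(np.e)
            - (X - 1) * Y / 2.0 * np.log2(2 * np.pi))

# Example: binary-input, binary-output channel; C and eps chosen arbitrarily.
X, Y, C, eps = 2, 2, 0.5, 1e-3
for K in (100, 1000, 10000):                  # K = log M
    penalty = C * (X * Y / 2.0 * np.log2(K) / K
                   + ((X * Y / 2.0 - 1) / np.log(2) + beta(X, Y) + np.log2(C)) / K)
    print(f"K = {K:6d}:  R - R_U ~= {penalty:.4f} bits/use")
\end{verbatim}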
\subsection{Induced Error Exponent}
Let us now examine Theorem \ref{thm:UnvRate} in light of the previous results. Equation \eqref{eq:UnvEffectiveRate} implies the following error exponent:
\begin{equation}\label{eq:UnvErrorExp}
- \frac{\log \epsilon}{\E\{T\}} = C - R - \frac{\XY}{2} \cdot \frac{\log \log M}{\log M} + O\left(\frac{1}{\log M}\right)
\end{equation}
As in the case of a known channel, we see that the error exponent is a linear function of the rate, but an additional term of order $O(\log \log M / \log M)$ is added. Here again, we interpret this term as a penalty for the lack of channel knowledge at the receiver. Furthermore, by taking $M \to \infty$, we can also see that \eqref{eq:UnvErrorExp} coincides with \cite[Proposition 1]{Telatar}.
\subsection{Training and Channel Estimation}
In many practical applications, communication over an unknown channel is done by means of channel estimation. In this approach, the transmission includes predefined \emph{training} signals, which are known to the receiver and are used to estimate the channel parameters. As an alternative to the universal communication scheme introduced in this chapter, we can use the following method. Prior to any message transmission, the transmitter sends a training sequence, which the receiver uses to estimate the channel. After the training phase, the transmitter sends the message. The receiver uses the \emph{estimated} channel parameters to decode the message, using, for instance, the decoding rule at \eqref{eq:ChannelDecoderLog}. A drawback of this approach is that even after the channel estimation phase, the residual error in the estimated channel parameters will degrade the performance of the decoder. Furthermore, improving the channel estimation accuracy requires long training sequences, which introduce a non-negligible overhead in the transmission time. Clearly, using training will not achieve the convergence rate of \eqref{eq:RateDegradation}.
\chapter{Extensions}
\label{ch:Extensions}
\section{Joint Source-Channel Coding} \label{sec:JointSC}
In the previous chapters we assumed that the messages conveyed over the channel were equiprobable, which is the case if, for instance, the source of information has been compressed and the message $W$ is the output of the source encoder. Assume now that the messages have arbitrary probabilities $\{\pi(1),\ldots,\pi(M)\}$. Each message now contains a different amount of information, which would translate into a different codeword length at the output of a source encoder. However, in rateless codes the codeword assigned to each message is always infinite, and the actual codeword length is determined by the decoder. (The effective length of the message depends on the decoder's stopping time.) It is therefore tempting to use rateless codes for an uncompressed source and try to achieve a good compression rate and reliable communication simultaneously. To simplify matters, we begin by tackling the case of a known channel and postpone the analysis for an unknown channel to Section \ref{sec:CompleteUnv}. We use the following generalized version of the decoder \eqref{eq:ChannelDecoderLog}.
\begin{equation} \label{eq:JointScDec}
g_n(y^n)= \begin{cases}
w, \ z_{w,1}+\ldots+z_{w,n} \geq a_w \\
0, \ \text{if no such $w$ exists}
\end{cases}
\end{equation}
where $a_w$ is a threshold that depends on the message $w$, and we define $a_w = \log A_w$. Repeating the derivation of the error probability done in Chapter \ref{ch:KnownChannel}, we get by Markov's inequality
\begin{equation}
\Pr\{E_w\} \leq \sum_{w' \neq w} \frac{1}{A_{w'}}
\end{equation}
By choosing
\begin{equation}
A_w = \frac{1}{\epsilon \cdot \pi(w)} \qquad \forall w \in \mathcal{W}
\end{equation}
we get a uniform bound on the error probability
\begin{equation}
\Pr\{E_w\} \leq \epsilon \cdot \sum_{w' \neq w} \pi(w') \leq \epsilon
\end{equation}
which also implies
\begin{equation}
P_e \leq \epsilon
\end{equation}
Thus, for an appropriate choice of message-dependent threshold values, the average probability of error for the entire message set is bounded by $\epsilon$. Recall, however, that the effective rate depends on the threshold value and therefore needs to be reexamined here. When different thresholds are used for different messages, the stopping time depends on which message crosses the threshold. We can therefore use Wald's equation \eqref{eq:WaldC} conditioned on the true message:
\begin{equation}
\E\{T|W=w\} \leq \frac{a_w + C}{C}
\end{equation}
where
\begin{equation}
a_w = \log A_w = - \log \pi(w) - \log \epsilon
\end{equation}
Averaging on the entire message set, we have
\begin{align}
\E\{T\} &= \E\{\E\{T|W\}\} \leq \frac{\E\{a_{_W}\} + C}{C} \nonumber \\
&= \frac{\E\{- \log \pi(W)\} - \log \epsilon + C}{C} \nonumber \\
&= \frac{H(W) - \log \epsilon + C}{C} \label{eq:JointScET}
\end{align}
where $H(W)$ is the entropy of a source symbol $W$ in bits. Let us now examine \eqref{eq:JointScET} in a practical setting. Suppose that we wish to convey blocks of $K$ source bits with a fixed probability of error $\epsilon > 0$. Since every source symbol is represented by $\log M$ bits, $K / \log M$ source symbols will be needed. Thus, the rate at which source bits can be conveyed over the channel will be
\begin{align}
R &= \frac{K}{\E\{T\}} \geq \frac{K \cdot C}{\frac{K \cdot H(W)}{\log M} - \log \epsilon + C} \\
&= \frac{C}{\mathscr{H}(W) + \frac{C - \log \epsilon}{K}} \\
&= \frac{C}{\mathscr{H}(W)} \cdot \frac{1}{1 + \frac{C - \log \epsilon}{\mathscr{H}(W) \cdot K}} \\
&= \frac{C}{\mathscr{H}(W)} \cdot \left(1 - O \left( \frac{1}{K} \right) \right) \label{eq:JointScRateAsym}
\end{align}
where we define $\mathscr{H}(W)=H(W)/ \log M$ as the per-bit entropy of the source.
Note that the encoder used here, as well as the codebook, are the same ones defined in Chapter \ref{ch:DefinitionsAndNotation}, and the only change is in the definition of the decoder. The encoder is uninformed of the statistics of the source and of the capacity of the channel, yet the rate approaches the optimum rate achievable by an informed encoder. We note the practical implication of such a scheme: the compression algorithms can be implemented and maintained at the decoder, while the encoder remains simple and source-independent.
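A minimal sketch of the decoder-side bookkeeping (ours; the source distribution, capacity and error constraint are arbitrary examples, with logarithms in bits) is given below; it constructs the message-dependent thresholds and verifies that the resulting bound on the mean transmission time matches \eqref{eq:JointScET}.
\begin{verbatim}
import numpy as np

pi = np.array([0.5, 0.25, 0.125, 0.125])   # example message probabilities
C, eps = 0.5, 1e-3                          # example capacity and error constraint

a_w = -np.log2(pi) - np.log2(eps)           # message-dependent thresholds (bits)
H = -(pi * np.log2(pi)).sum()               # source entropy H(W)

ET_bound = (np.dot(pi, a_w) + C) / C        # E{T} <= (E{a_W} + C)/C
print(f"H(W) = {H:.3f} bits,  E{{a_W}} = {np.dot(pi, a_w):.3f},"
      f"  E{{T}} bound = {ET_bound:.1f} channel uses")
print(f"check: (H(W) - log eps + C)/C = {(H - np.log2(eps) + C)/C:.1f}")
\end{verbatim}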
\section{Source Coding with Side Information} \label{sec:SI}
Suppose now that the source of information emits independent pairs of messages $(W_1,W_2) \in \mathcal{W}_1 \times \mathcal{W}_2$ according to a probability distribution $\pi_{_{W_1,W_2}}(w_1,w_2)$, which are encoded separately and pass through a noiseless channel. Suppose that $R_1$ and $R_2$ are the coding rates of $W_1$ and $W_2$, respectively. By the Slepian-Wolf theorem, if $W_1$ is encoded with rate $R_1 \geq H(W_1)$, then $W_2$ can be encoded independently with $R_2 = H(W_2|W_1)$. (This rate pair is a corner point in the achievable rate region.) We will now show that using rateless codes, we can approach this rate with some redundancy due to the use of a finite message set. The encoder of $W_1$ assigns to each message in $\mathcal{W}_1$ an infinite codeword $\mathbf{c}_{w_1} \in \{0,1\}^\infty$, $w_1 = 1,\ldots,|\mathcal{W}_1|$, and transmits it over the channel. The encoder of $W_2$ operates similarly to that of $W_1$ and independently of it, with codewords $\mathbf{d}_{w_2} \in \{0,1\}^\infty$, $w_2 = 1,\ldots,|\mathcal{W}_2|$. The codewords are assumed to be i.i.d.\ Bernoulli$(1/2)$ sequences. To reconstruct $W_1$, the decoder can use the decision rule \eqref{eq:JointScDec} to obtain an error probability of
\begin{equation} \label{eq:BoundErrorW1}
\Pr\{\hat{W_1} \neq W_1\} \leq \frac{\epsilon}{2}
\end{equation}
Since binary code is used and the channel is noiseless, we have $C=1$, so \eqref{eq:JointScET} implies that the expected transmission time for $W_1$ satisfies
\begin{equation} \label{eq:SlepianWolfR1}
R_1 = \E\{T_1\} \leq H(W_1) - \log \frac{\epsilon}{2} + 1
\end{equation}
Note that the coding rate is defined here as the average codeword length for the message set. Therefore, the effective rate equals the expected transmission time, rather than its reciprocal as in channel coding.
Having decoded message $W_1$, the decoder uses the following decision rule to reconstruct $W_2$:
\begin{equation}
g^{(2)}_n(y^n,w_1)= \begin{cases}
w_2, \ z_{w_2,1}+\ldots+z_{w_2,n} \geq a(w_1,w_2) \\
0, \ \text{if no such $w_2$ exists}
\end{cases}
\end{equation}
where
\begin{equation}
z_{w_2,k} = \log \frac{p(y_k|d_{w_2,k})}{p(y_k)}, \qquad k=1,\ldots,n
\end{equation}
Similar derivation for the error probability as in Section \ref{sec:JointSC} yields
\begin{equation}
\Pr\{\hat{W_2} \neq w_2 \ | \ W_1 = w_1, W_2 = w_2 \} \leq \sum_{w_2' \neq w_2} \frac{1}{A(w_1,w_2')}
\end{equation}
We choose
\begin{equation}
A(w_1,w_2) = \frac{1}{(\epsilon/2) \cdot \pi_{_{W_2|W_1}}(w_2|w_1)}
\end{equation}
so that
\begin{equation}
\Pr\{\hat{W_2} \neq w_2 \ | \ W_1 = w_1, W_2 = w_2 \} \leq \frac{\epsilon}{2} \cdot \sum_{w_2' \neq w_2} \pi_{_{W_2|W_1}}(w_2'|w_1) \leq \frac{\epsilon}{2}
\end{equation}
Therefore,
\begin{equation} \label{eq:BoundErrorW2}
\Pr\{\hat{W_2} \neq W_2\} \leq \frac{\epsilon}{2}
\end{equation}
Using \eqref{eq:BoundErrorW1}, \eqref{eq:BoundErrorW2} and the union bound, we have
\begin{equation}
\Pr\{\hat{W_1} \neq W_1 \bigcup \hat{W_2} \neq W_2\} \leq \epsilon
\end{equation}
Since $a(w_1,w_2) = - \log \frac{\epsilon}{2} - \log \pi_{_{W_2|W_1}}(w_2|w_1)$, we can use Wald's equation for the stopping time of decoding $W_2$ to obtain
\begin{align}
R_2 &= \E\{T_2\} = \E\{\E\{T_2|W_1,W_2\}\} \nonumber \\
&\leq \E\{a(W_1,W_2)\} + 1 \nonumber \\
&= \E\{- \log \pi_{_{W_2|W_1}}(W_2|W_1)\} - \log \frac{\epsilon}{2} + 1 \nonumber \\
&= H(W_2|W_1) - \log \frac{\epsilon}{2} + 1 \label{eq:SlepianWolfR2}
\end{align}
Combining \eqref{eq:SlepianWolfR1} and \eqref{eq:SlepianWolfR2}, we get
\begin{equation} \label{eq:SlepianWolfSumRate}
R_1 + R_2 \leq H(W_1,W_2) - 2 \log \frac{\epsilon}{2} + 2
\end{equation}
Similarly to Section \ref{sec:JointSC}, if we take blocks of $K$ source bits and a fixed error probability $\epsilon > 0$, we obtain
\begin{equation}
R_1 + R_2 \leq H(W_1,W_2) \cdot \left(1 + O \left( \frac{1}{K} \right) \right)
\end{equation}
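For concreteness, the following sketch (ours, with an arbitrary joint distribution and logarithms in bits) evaluates the two bounds \eqref{eq:SlepianWolfR1} and \eqref{eq:SlepianWolfR2} for a toy pair of sources.
\begin{verbatim}
import numpy as np

# Example joint distribution of (W1, W2); rows index w1, columns w2.
P = np.array([[0.40, 0.10],
              [0.05, 0.45]])
eps = 1e-2

p1 = P.sum(axis=1)
H1 = -(p1 * np.log2(p1)).sum()                       # H(W1)
H12 = -(P * np.log2(P)).sum()                        # H(W1, W2)
H2_given_1 = H12 - H1                                # H(W2|W1)

R1 = H1 - np.log2(eps / 2) + 1
R2 = H2_given_1 - np.log2(eps / 2) + 1
print(f"R1 <= {R1:.2f} bits, R2 <= {R2:.2f} bits, "
      f"sum <= {R1 + R2:.2f}  (H(W1,W2) = {H12:.2f})")
\end{verbatim}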
\section{Complete Universality} \label{sec:CompleteUnv}
We now consider the case of joint source-channel coding of an unknown source over an unknown channel, with an unknown amount of side-information at the receiver. First, we bring together the results of the previous sections to obtain a communication scheme for an uncompressed source over an unknown channel. As a straightforward generalization of the universal channel coding scheme in Section \ref{sec:RateUnknownChannel}, we use a fusion of the decoders \eqref{eq:ChannelDecoderUnv} and \eqref{eq:JointScDec}, i.e.
\begin{equation} \label{eq:JointScUnvDec}
g_n(y^n)= \begin{cases}
w, \ p_U(\mathbf{c}_w|y^n) \geq A_w \cdot q(\mathbf{c}_w) \\
0, \ \text{if no such $w$ exists}
\end{cases}
\end{equation}
where
\begin{equation}\label{eq:Aw}
A_w = \frac{1}{\epsilon \cdot \pi(w)}
\end{equation}
Similar derivations to those in Sections \ref{sec:RateUnknownChannel} and \ref{sec:JointSC} yield the following rate for an uncompressed source $W \in \{1,\ldots,M\}$ over an unknown channel with capacity $C$:
\begin{equation}\label{eq:JointScUnvRate}
R = \frac{C \left( 1 - \frac{\hXY}{\log M \ln 2} \right)}{\mathscr{H}(W) + \frac{C + \beta - \log \epsilon + \frac{\XY}{2} \left( \log \log M - \log C - \frac{1}{\ln 2} \right)}{\log M}}
\end{equation}
where $\mathscr{H}(W)$ is defined in Section \ref{sec:JointSC}. We note that while the encoder can be ignorant of the source statistics, the decoder needs to know $\pi(w), \ w \in \mathcal{W}$.
We now go one step further and assume that the decoder has no knowledge of the statistics of the source or the channel. Suppose that the source $S$ generates sequences of $L$ symbols from an alphabet $\mathcal{S}$, drawn i.i.d. according to a set of $|\mathcal{S}|$ unknown probabilities $\gv$. Each sequence is encoded as one message, hence $M = |\mathcal{S}|^L$. Instead of using the set of thresholds \eqref{eq:Aw}, which depends on the unknown probabilities, we use a universal probability measure \cite{UnvPrediction}
\begin{equation}
\ph(s^L) = \int u(\gv) \pi_{\gv}(s^L) \, d\gv
\end{equation}
so that
\begin{equation}
a_w = \log A_w = -\log \epsilon -\log \ph (s^L)
\end{equation}
If the weight function $u(\cdot)$ is chosen to be Jeffreys prior, we get (see \cite[Eq.17]{UnvPrediction})
\begin{equation}
\E\{a_w\} = -\log \epsilon + H(W) + \frac{|\mathcal{S}|-1}{2} \log \frac{L}{2 \pi e} + O(1)
\end{equation}
Hence, similarly to \eqref{eq:JointScUnvRate} we can achieve the following rate
\begin{equation}\label{eq:CompUnvRate}
R = \frac{C \left( 1 - \frac{\hXY}{\log M \ln 2} \right)}{\hat{\mathscr{H}}(W) + \frac{C + \beta - \log \epsilon + \frac{\XY}{2} \left( \log \log M - \log C - \frac{1}{\ln 2} \right)}{\log M}}
\end{equation}
where
\begin{equation}
\hat{\mathscr{H}}(W) = \mathscr{H}(W) + \frac{|\mathcal{S}|-1}{2} \frac{\log L}{\log M} + O\left(\frac{1}{\log M}\right)
\end{equation}
Recall that $L = \log_{|\mathcal{S}|} M$, so
\begin{equation}\label{eq:EmpEntropy}
\hat{\mathscr{H}}(W) = \mathscr{H}(W) + \frac{|\mathcal{S}|-1}{2} \frac{\log \log M}{\log M} + O\left(\frac{1}{\log M}\right)
\end{equation}
By plugging \eqref{eq:EmpEntropy} into \eqref{eq:CompUnvRate} we get
\begin{equation}\label{eq:CompUnvRateAsym}
R = \frac{C}{\mathscr{H}(W)} \cdot \left(1 - O \left( \frac{\log K}{K} \right) \right) + O \left( \frac{1}{K} \right)
\end{equation}
where $K = \log M$ is the number of encoded bits. Comparing \eqref{eq:CompUnvRateAsym} to \eqref{eq:JointScRateAsym}, we see that the leading term is unchanged and equals the optimal rate achievable by separated source-channel coding. However, the lack of information affects the rate of convergence, which is now dominated by a $O \left( \frac{\log K}{K} \right)$ term, as opposed to $O \left( \frac{1}{K} \right)$ for an informed decoder.
The implications of the latter result are far-reaching. We have shown that even if the statistics of both the channel and the source are unknown to the decoder, rateless coding not only achieves the best source-channel coding rate as $M \to \infty$, but also approaches the same limiting rate as a rateless scheme with an informed decoder. This observation has been made in \cite[Ch.4]{Nadav} for infinitely large message sets. The results obtained here coincide with those of \cite{Nadav}, and also quantify the redundancy caused by the lack of information on the source and the channel, and by the use of finite blocks.
\subsection*{Unknown Side Information at the Decoder}
Similarly to Section \ref{sec:SI}, if the source contains side information $V$ that is known non-causally at the decoder, we can further improve the communication rate. Combining the technique from Section \ref{sec:SI} with the derivation above, we obtain the following rate for universal joint source-channel coding with side information at the decoder:
\begin{equation}\label{eq:CompUnvRateSI}
R = \frac{C}{\mathscr{H}(W|V)} \cdot \left(1 - O \left( \frac{\log K}{K} \right) \right) + O \left( \frac{1}{K} \right)
\end{equation}
where $\mathscr{H}(W|V)$ is the conditional entropy of the source $W$ given the side information $V$, normalized by $\log M$. Since $\mathscr{H}(W|V) \leq \mathscr{H}(W)$, the side information improves the rate, even if the encoder is uninformed of the amount (or the existence) of the side information.
\chapter{Summary}
\label{ch:Summary}
In this study we developed and analyzed several communication schemes that are all based on the concept of \emph{rateless codes}. In rateless codes, each codeword has an infinite length and the decoding length is dynamically determined by the confidence level of the decoder. Throughout this study, we allowed the coding schemes to have a fixed error probability, while aiming to achieve shortest mean transmission time, or equivalently, the highest rate. This approach is different than the prevalent one, in which the communication rate is held fixed and the codebook is enlarged indefinitely so that the error probability vanishes. We demonstrated how rateless codes, combined with sequential decoding, can be used in basic communication scenarios such as communication over a DMC, but can also be used to solve more complex problems, such as communication over an unknown channel. The decoding methods introduced here enabled us to obtain results for finite message set, while previous studies were restricted to asymptotic results.
We began by describing rateless codes and surveyed some previous results related to such coding schemes. Then, we introduced the sequential decoder that uses a known channel law. Using Wald's theory and the notion of stopping time, we obtained an upper bound on the mean transmission time for a fixed error probability, and the resulting effective rate is shown to approach the capacity of the channel as the size of the message set, $M$, grows. We also obtained an upper bound on the rate for a fixed error probability. The upper bound is not tight for small $M$, but it converges to the achievable rate as $M \to \infty$. We conjecture that a stronger converse can be found, which will be tighter also in the non-asymptotic realm. Although we developed the above-mentioned scheme for a DMC, we also demonstrated that it is applicable to a memoryless Gaussian channel.
For the case of an unknown channel we introduced a novel decoding metric. Unlike previous studies, the universal decoding metric is not based on empirical mutual information, but on a mixture probability assignment. For an appropriate choice of mixture, we were able to bound the difference between the universal metric and the one used by an informed decoder. Thus, we used the results obtained for an informed decoder to upper bound the mean transmission time in the universal case.
We then applied rateless coding to more advanced scenarios. We showed how with only a minor change in the sequential decoder, we can easily use rateless codes as a joint source-channel coding scheme. We also used rateless coding for source coding with side information, obtaining the optimum Slepian-Wolf rate for this setting. Finally, we combined the techniques for universal channel coding, joint source-channel coding and source coding with side information and demonstrated that even without any information on the source, the channel or the amount (or even the existence) of side information---reliable communication is feasible, and the rate can be analyzed even for a finite message set.
\bibliographystyle{IEEETran}
The success of the negatively-charged nitrogen-vacancy (NV) center in diamond \textcolor{black}{for magnetic field sensing} is due to a \textcolor{black}{powerful combination of characteristics:} a long spin-coherence time, the ability to perform optically-detected magnetic resonance (ODMR) \textcolor{black}{spectroscopy} at room temperature, and a solid-state host environment which facilitates sample-sensor integration~\cite{Abe2018Tutorial:Magnetometry, Hong2013NanoscaleDiamond,Schirhagl2014Nitrogen-VacancyBiology,Balasubramanian2009UltralongDiamond}. One exciting emerging application is magnetic field imaging using \textcolor{black}{a diamond sensor comprising a diamond substrate with a thin layer of NV centers fabricated at the top surface}~\cite{Levine2019PrinciplesMicroscope,Tetienne2017QuantumGraphene,Barry2016OpticalDiamond,Kehayias2019ImagingCenters,Schlussel2018Wide-FieldDiamond}. In this NV ensemble imaging system, it is critical that the inhomogeneities across the \textcolor{black}{imaging area} are eliminated in order to reach the \textcolor{black}{sensitivity} floor given by \textcolor{black}{a} single-pixel resonance curve. To this end, the NV community has demonstrated many sensor growth and fabrication methods that increase NV density and homogeneity~\cite{Acosta2009DiamondsApplications,Ohno2014Three-dimensionalImplantation,Osterkamp2019EngineeringDiamond,Eichhorn2019OptimizingSensing,Tetienne2018SpinDiamond,Kleinsasser2016}, as well as quantum control methods that suppress external\textcolor{black}{-}field dependence and increase ensemble \textcolor{black}{spin} coherence~\cite{Bauch2018UltralongControl,Myers2017Double-QuantumCenters,Mamin2014MultipulseCenters,DeLange2012ControllingDiamond}. In this paper, we expand on these techniques to present a quantum control method compatible with \textcolor{black}{high-frame-rate} magnetic-field imaging. Our method, double-double quantum (DDQ) driving, mitigates \textcolor{black}{inhomogeneities} caused by variations of the \textcolor{black}{NV} resonance curve.
In NV ensemble ODMR, magnetic fields can be imaged by characterizing the photoluminescence of optically-excited NV centers \textcolor{black}{after} \textcolor{black}{probing} the NV spin \textcolor{black}{ground} states with radio-frequency (RF) \textcolor{black}{$\pi$}-pulses [Fig.~\ref{fig:NVexplanation}]. The resonant RF depends on magnetic field because of the Zeeman splitting of the \textcolor{black}{$|m_s = \pm1\rangle$} NV electronic spin states~\cite{Abe2018Tutorial:Magnetometry}. However, in addition to sensitivity to magnetic field, the resonances are also perturbed by inhomogeneities in electric field, temperature, and crystal strain~\cite{Dolde2011Electric-fieldSpins,Mittiga2018ImagingDiamond,Acosta2010TemperatureDiamond,Broadway2019MicroscopicSensors}. Due to the symmetry of the NV center, these \textcolor{black}{non-magnetic} perturbations affect the two NV electron spin resonances \textcolor{black}{($|m_s = 0\rangle\leftrightarrow|m_s = \pm1\rangle$)} in the same way and can be eliminated by characterizing both resonances~\cite{Mamin2014MultipulseCenters,Fang2013High-SensitivityCenters}.
For wide-field magnetic imaging of time-varying fields, \textcolor{black}{the} full characterization of the resonance curves \textcolor{black}{is too slow to capture the magnetic-field dynamics in many applications}. In \textcolor{black}{these dynamic applications}, shifts of one resonance curve can instead be mapped to changes in emitted NV photoluminescence (PL) intensity by applying single-frequency RF excitation~\cite{Pham2011MagneticEnsembles,McCoey2019RapidMicrobeads,Wojciechowski2018Camera-limitsSensor}. This \textcolor{black}{``single-quantum" (SQ)} imaging modality enables partial reconstruction of the local magnetic field with a higher frame-rate. The double-quantum (DQ) \textcolor{black}{modality}, which drives both \textcolor{black}{NV electron} spin transitions simultaneously by applying a two-tone RF pulse, eliminates pixel-to-pixel \textcolor{black}{non-magnetic} perturbations of the transition resonant frequencies~\cite{Mamin2014MultipulseCenters,Fang2013High-SensitivityCenters}. However, variations of the shape of the resonance curve also \textcolor{black}{cause changes in the emitted PL intensity and therefore generate spurious contrast which limits the magnetic sensitivity}. These variations arise from inhomogeneities in NV and other paramagnetic spin densities as well as external fields~\cite{Bauch2018UltralongControl}, \textcolor{black}{and as shown in this work, can severely limit the utility of the DQ modality for wide-field imaging applications}. By expanding to a four-tone DDQ driving scheme, we suppress \textcolor{black}{anomalous contrast due to resonance-}curve-shape variations \textcolor{black}{pixel-by-pixel} across \textcolor{black}{the} field of view. \textcolor{black}{This enables high-frame-rate imaging of time-dependent magnetic fields. We first demonstrate the SQ, DQ, and DDQ imaging modalities by imaging static fields, and show the DDQ signal is linearly proportional to the magnetic field projection along the NV symmetry axis. We then use the DDQ driving technique to image the dynamic magnetic field produced by a ferromagnetic nanoparticle tethered by a single DNA molecule to the diamond sensor surface.}
\begin{figure*}
\centering
\includegraphics[width=2.0\columnwidth]{ddqpaperfigure1.pdf}
\caption{Wide-field pulsed magnetic imaging using an NV ensemble. (a) NV electronic energy level diagram showing the ground, excited spin states ($|m_s =0\rangle$,$|m_s = \pm1\rangle$), singlet states, optical excitation (green arrow), emitted photoluminescence (red arrows), and spin-selective, non-radiative inter-system-crossing (gray arrows). (b) Schematic showing 50~nm ferromagnetic nanoparticles adhered to the diamond surface. The magnetic moments of the particles are oriented randomly. The 150~nm NV layer (pink) is fabricated on top of the diamond substrate (\textcolor{black}{gray}), and a single NV pointed along the (111) orientation is shown (red). (c) NV ground state energy level diagram showing the zero-field splitting ($D$), Zeeman splitting of the $|m_s = \pm1\rangle$ states ($2\gamma_{\text{NV}}\hat{z}_{\text{NV}}\cdot\vec{B}$), and $^{15}$N-NV hyperfine splitting (3.05 MHz). RF excitation (orange and blue arrows) rotates the NV spin between the $|m_s = 0\rangle$ and $|m_s = \pm1\rangle$ states. Two-tone RF excitation is simultaneously driven over the two $^{15}$N-NV hyperfine transitions to produce a single combined resonance for each NV electron spin state ($|m_s = \pm1\rangle$). (d) Laser and multi-tone RF $\pi$-pulses are repeated throughout the camera exposure to facilitate wide-field imaging. (e) A Lorentzian-shaped reduction in NV PL is observed when a RF scan is performed through \textcolor{black}{each} spin-transition frequency. The resonances are Zeeman split by the external magnetic field. The outer (inner) inflection points $f_1,f_4$ ($f_2,f_3$) are denoted by black squares (circles). For the resonance curves shown, the FWHM linewidth $\delta\nu = 300\ \text{kHz}$ and fractional optical contrast $C=0.03$, with optical pulse = \SI{500}{\nano\second}, RF pulse = \SI{3500}{\nano\second}, photon collection rate = $1.1\times10^{7} $\SI{}{\hertz} from a \SI{1}{\micro\meter^{2}} pixel (\SI{0.15}{\micro\meter^{3}} voxel), and integration time per data point = \SI{144}{\milli\second}.}
\label{fig:NVexplanation}
\end{figure*}
\section*{Experimental Methods}
The wide-field NV magnetic particle imaging (magPI) platform used in this work utilizes a diamond sensor with a near-surface, high density NV ensemble [Fig.~\ref{fig:NVexplanation}(b)]. A \SI{150}{\nano\meter} $^{15}$N doped, isotope-purified (99.999$\%$ $^{12}$C) layer was grown by chemical vapor deposition on an electronic-grade diamond substrate (Element Six). The sample was implanted with 25~keV He$^{+}$ at a dose of $5 \times 10^{11}$~ions/cm$^{2}$ to form vacancies, followed by a vacuum anneal at 900~$\degree$C for \SI{2}{\hour} for NV formation and an anneal in $\text{O}_{2}$ at 425~$\degree$C for \SI{2}{\hour} for charge state stabilization~\cite{Kleinsasser2016}. The resulting ensemble has NV density of $1.7\times10^{16}$~cm$^{-3}$ and ensemble spin coherence time $T^{*}_{2} = 2.5$~\SI{}{\micro\second} \textcolor{black}{(further details in Supplemental Material~\cite{DDQSM2020})}.
The NV electronic structure and optical and RF control are summarized in Fig.~\ref{fig:NVexplanation}. A 532 nm laser pulse is used to optically pump the NV ensemble into the $|m_s = 0\rangle$ triplet ground state. RF excitation drives transitions from this ground state into the $|m_s = \pm1\rangle$ spin states. Optical excitation from the $|m_s = \pm1\rangle$ states results in a reduction of PL intensity due to a spin selective, nonradiative inter-system-crossing~\cite{Gali2019AbDiamond}. This relaxation provides the spin-dependent PL contrast and initialization into the $|m_s=0\rangle$ state. Monitoring the emitted PL as a function of RF enables measurement of ODMR for each NV ground state spin transition [Fig.~\ref{fig:NVexplanation}(e)].
From a Lorentzian-shaped resonance curve [see Fig.~\ref{fig:NVexplanation}(e)], the \textcolor{black}{volume-normalized,} shot-noise-limited dc magnetic sensitivity is given by
\begin{equation}
\textcolor{black}{\eta_{SQ}^{V} \approx \frac{1}{\gamma_{\text{NV}}}\frac{\delta\nu}{C}\sqrt{\frac{V}{N}}},
\label{equation:sensitivity}
\end{equation}
where $C$ is the optical contrast (fractional depth of the resonance curve), $\delta\nu$ is the full width at half maximum (FWHM) linewidth, $\gamma_{\text{NV}}$ is the NV gyromagnetic ratio ($28$ \SI{}{\mega\hertz}/\SI{}{\milli\tesla}), $N$ is the photon detection rate, \textcolor{black}{and $V$ is the collection volume}~\cite{Abe2018Tutorial:Magnetometry}. In the magPI platform, the average \textcolor{black}{volume-normalized} sensitivity is \textcolor{black}{$\eta_{SQ}^{V}~\approx~ $\SI{31}{\nano\tesla\ \hertz^{-1/2}\ \micro\meter^{3/2}}}. All three sensor parameters $C$, $\delta\nu$, and $N$ vary across the imaging field of view.
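As a quick numerical check (added here for illustration), Eq.~\eqref{equation:sensitivity} can be evaluated with the single-pixel parameters quoted in the caption of Fig.~\ref{fig:NVexplanation}; the Lorentzian-lineshape prefactor $4/(3\sqrt{3})\approx0.77$ printed on the last line is a common convention in the literature and is included here only as an assumption, not as part of Eq.~\eqref{equation:sensitivity}.
\begin{verbatim}
import numpy as np

gamma_nv = 28e9          # Hz/T
delta_nu = 300e3         # Hz, FWHM linewidth
C = 0.03                 # fractional optical contrast
N = 1.1e7                # photons/s from a 1 um^2 pixel
V = 0.15                 # um^3 collection voxel

eta = (1.0 / gamma_nv) * (delta_nu / C) * np.sqrt(V / N)   # T um^(3/2) Hz^(-1/2)
print(f"eta (Eq. 1)            = {eta * 1e9:.0f} nT um^(3/2) Hz^(-1/2)")
print(f"with 4/(3*sqrt(3))     = {eta * 4/(3*np.sqrt(3)) * 1e9:.0f}"
      " nT um^(3/2) Hz^(-1/2)")
\end{verbatim}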
Optical power broadening of the resonance curve is eliminated by using pulsed excitation in which optical and RF fields are applied separately~\cite{Dreau2011AvoidingSensitivity}. Optical pulses and RF $\pi$-pulses (both \SI{}{\micro\second}-scale) are applied to the ensemble repeatedly to fill a sCMOS camera exposure (\SI{}{\milli \second}-scale). Each camera exposure is taken with a single set of RFs with a fixed pulse duration [Fig.~\ref{fig:NVexplanation}(d)], enabling pulsed NV ensemble control and readout with wide-field camera exposure times~\cite{Steinert2013MagneticResolution}.
RF excitation is delivered via a broadband microwave antenna with transmission resonance at the NV zero-field-splitting $D$~\cite{Sasaki2016BroadbandDiamond}. Each RF applied is mixed to create two equal tones separated by \SI{3.05}{\mega\hertz}, which enables simultaneous driving of the two $^{15}$N-NV hyperfine transitions~\cite{Doherty2012TheoryDiamond} [Fig.~\ref{fig:NVexplanation}(c)] and produces one combined resonance for each $|m_s=0\rangle\leftrightarrow|m_s = \pm1\rangle$ transition [Fig.~\ref{fig:NVexplanation}(e)]. Samarium cobalt ring magnets (SuperMagnetMan) are used to apply a \SI{1}{\milli\tesla} static external magnetic field along the (111)~NV orientation ($\hat{z}_{\text{NV}}$). More details about the experimental setup can be found in \textcolor{black}{the Supplemental Material.}
\textcolor{black}{We first use the magPI platform} to image the static dipolar magnetic field produced by a \SI{50}{\nano\meter} dextran coated $\text{Co}\text{Fe}_{\text{2}}\text{O}_{\text{4}}$ ferromagnetic nanoparticle (micromod Partikeltechnologie) deposited and dried onto the diamond sensor surface [Fig. 1(b)]. These bio-compatible particles produce nano-scale magnetic fields which lie in the dynamic range of the NV sensing ensemble defined by $\delta\nu$ of the resonance curves. For other imaging applications, the sensor dynamic range can be increased at the expense of magnetic sensitivity by RF broadening the resonance curve.
\textcolor{black}{We then demonstrate dynamic magnetic imaging using the DDQ technique on a tethered-particle-motion (TPM) assay~\cite{Finzi1995}. \SI{500}{\nano\meter} streptavidin coated ferromagnetic nanoparticles (micromod Partikeltechnologie 05-19-502) were tethered by 940~bp single DNA molecules to the diamond surface. The diamond-DNA-particle tethering protocol follows Ref. \cite{Kovari2018}. Fluid flowed through the sample chamber alters the orientation of the nanoparticle magnetic moment, and the changing magnetic field is imaged at a high frame-rate.}
\begin{figure*}
\includegraphics[]{ddqpaperfig2.pdf}
\centering
\caption{\textcolor{black}{Multiple mechanisms lead to changes in NV ODMR on a micron scale when imaging static magnetic dipole and strain fields. (a) Resonant frequency $\nu$ shifts due to magnetic field and crystal strain. (b) The FWHM linewidth $\delta \nu$ varies due to \textcolor{black}{gradients of magnetic and strain fields that inhomogeneously broaden the NV ODMR in each pixel. (c) Optical contrast $C$ also varies due to inhomogeneous broadening of the NV ODMR.}}}
\label{fig:sqcandlw}
\end{figure*}
\section*{Static Magnetic Imaging Modality}
\textcolor{black}{To measure static fields,} the full NV resonance curve can be measured in wide-field with arbitrarily long acquisition times. Taking a series of PL images over a range of RFs allows fitting of the entire resonant response in the measured range. Mapping the fitted resonant frequency at each pixel results in a partial reconstruction of the magnetic field as seen in Fig.~\ref{fig:sqcandlw}(a). Taking the difference between the $|m_s=0\rangle\leftrightarrow|m_s = \pm1\rangle$ resonant frequency maps eliminates shifts of the resonance due to fields other than the magnetic field, and enables imaging of the absolute magnetic field projection along the NV symmetry axis~\cite{Fescenko2019DiamondNanocrystals}, as seen in Fig.~\ref{fig:DDQcomparison}(a).
For quickly-varying magnetic fields, \textcolor{black}{imaging via the static magnetic imaging procedure} may not be possible. We thus require a dynamic imaging modality that reproduces the absolute magnetic field across the imaging field of view \textcolor{black}{without prior per-pixel calibration} and is compatible with high-frame-rate imaging.
\section*{Dynamic Magnetic Imaging Modalities}
\subsection*{Single Quantum Difference Imaging}
The simplest dynamic imaging modality uses RF excitation applied at one inflection point of \textcolor{black}{one of} the \textcolor{black}{NV} resonance curves [Fig.~\ref{fig:NVexplanation}(e)]. PL images taken with RF $\pi$-pulses applied are subtracted from PL images taken without RF to detect changes in emitted NV PL ~\cite{Wojciechowski2018Camera-limitsSensor}. We define a SQ difference image (DI) as
\begin{equation}
\text{SQ}(f_1) = \frac{I_{\text{off}} - I_{\text{on}}(f_1)}{I_{\text{off}}},
\label{equation:sqdi}
\end{equation}
in which $I_{\text{on}}(f_1)$ is the intensity image taken with applied RF $\pi$-pulses at $f_1$ and $I_{\text{off}}$ is the image taken with no applied RF. \textcolor{black}{We assume the NV ensemble is operating in the linear-response regime of the resonance curve, i.e. $\gamma_{\text{NV}}\hat{z}_{\text{NV}}\cdot\vec{B} < \delta \nu$. The per-pixel signal is then given by}
\begin{equation}
\text{SQ}^{\text{pp}}(f_1) = \frac{9}{8} C - \frac{3 \sqrt{3}}{4} \frac{C}{\delta\nu} (\nu(\vec{E}, \vec{B}, T, ...) - f_1).
\label{equation: single quantum difference signal}
\end{equation}
As discussed above, the resonant frequency $\nu$ depends on the magnetic field $\vec{B}$ and also varies with local electric field $\vec{E}$, temperature $T$, and crystal strain.
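For concreteness, the evaluation of Eq.~\ref{equation:sqdi} amounts to an element-wise operation on two camera frames; a minimal Python sketch, with synthetic NumPy arrays standing in for measured PL images, is
\begin{verbatim}
import numpy as np

def sq_image(I_off, I_on_f1):
    # single-quantum difference image: (I_off - I_on(f1)) / I_off
    return (I_off - I_on_f1) / I_off

# Synthetic placeholder frames (counts) standing in for sCMOS exposures
rng = np.random.default_rng(0)
I_off   = 1.0e4 + rng.normal(0.0, 1.0e2, size=(64, 64))
I_on_f1 = I_off * (1.0 - 0.02 * rng.uniform(0.5, 1.0, size=(64, 64)))

SQ = sq_image(I_off, I_on_f1)
print(SQ.mean(), SQ.std())
\end{verbatim}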
\begin{figure*}
\begin{center}
\includegraphics[width=1.8\columnwidth]{ddqcomparisonfig.pdf}
\end{center}
\caption{\textcolor{black}{Competing magnetic imaging modalities. (a) \textcolor{black}{`True'} static magnetic field projection map generated with the frequency scanning technique outlined in the above Static Magnetic Imaging Modality section (acquisition time \SI{12}{\second}). (b) Single quantum difference imaging (\SI{2.4}{\second}). The signal measured with the SQ DI modality is a convolution of the magnetic and strain fields, which are impossible to separate with a single measurement. (c) Double quantum difference imaging (\SI{2.4}{\second}). While the DQ DI modality has reduced the impact from the fields that homogeneously shift the NV centers, the DQ signal is more sensitive to the local contrast and linewidth variations of the NV sensing curves. (d) Double-double quantum difference imaging (\SI{2.4}{\second}). An inspection of the competing dynamic imaging schemes (b-d) reveals that both the SQ (b) and DQ (c) schemes are significantly compromised by spurious contrast caused by strain gradients and curve-shape variation, respectively, while the DDQ scheme (d) faithfully approximates the \textcolor{black}{`true'} magnetic field projection (a) with a decreased acquisition time.}}
\label{fig:DDQcomparison}
\end{figure*}
Additionally, variations in curve-shape, and thus $C$ [Fig.~\ref{fig:sqcandlw}(c)] and $\delta\nu$ [Fig.~\ref{fig:sqcandlw}(b)], cause variations of the resonance curve inflection point, contributing to the SQ signal. Microscopic and mesoscopic strain inhomogeneities result in inhomogeneous broadening of the resonance curve~\cite{Kehayias2019ImagingCenters}. Also, dephasing due to dipolar interactions between \textcolor{black}{sensing} NV centers and other paramagnetic impurities (e.g. P1 and NV) fundamentally limits the NV ensemble coherence, and thus, the ODMR linewidth~\cite{Kleinsasser2016,Bauch2018UltralongControl}.
The SQ DI enables imaging of some nano-scale magnetic structure, but the sensitivity of this technique is limited. In Fig.~\ref{fig:DDQcomparison}(b) we show a SQ DI image with its corresponding static magnetic field map in \ref{fig:DDQcomparison}(a). The SQ DI enables partial mapping of the magnetic field projection, \textcolor{black}{but is limited by contributions to the resonant frequency shift by strain gradients, as seen in the SQ-signal-color-gradient from upper-left to lower-right in Fig.~\ref{fig:DDQcomparison}(b).}
\subsection*{Double Quantum Difference Imaging}
Temperature, electric field, and strain shift the zero-field-splitting of the NV ground state, causing common-mode shifts of the $|m_s=0\rangle~\leftrightarrow~|m_s~=~\pm~1\rangle$ transitions~\cite{Levine2019PrinciplesMicroscope}. Conversely, the magnetic-field-induced Zeeman effect splits the two transitions. Thus, by probing the difference of the two transition resonant frequencies, the common-mode shifts can be subtracted out and the magnetic field projection can be measured directly. We use a DQ driving scheme~\cite{Fang2013High-SensitivityCenters,Myers2017Double-QuantumCenters}, applying two-tone RF $\pi$-pulses at the opposite inflection points of the two resonance curves simultaneously [Fig.~\ref{fig:NVexplanation}(e)]. We construct a DQ DI by subtracting the PL image taken with DQ RF driving from an image taken with no RF applied
\begin{equation}
\text{DQ}(f_{1},f_{4}) =
\frac{I_{\text{off}} - I_{\text{on}}(f_{1},f_{4})}{I_{\text{off}}},
\end{equation}
in which $I_{\text{on}}(f_1,f_4)$ is the image taken with applied RF $\pi$-pulses \textcolor{black}{at $f_1$ and $f_4$ simultaneously}, and $I_{\text{off}}$ is the image taken with no applied RF. \textcolor{black}{We again assume linear-response of the resonance curves and additionally assume that the two resonance curves have the same shape \textcolor{black}{(see Supplemental Material)}}. The per-pixel DQ signal is
\begin{multline}
\text{DQ}^{\text{pp}}(f_1, f_4) \approx \frac{9}{4} C - \frac{3 \sqrt{3}}{4} \frac{C}{\delta\nu}(f_{1} - f_{4})
\\ - \frac{3 \sqrt{3}}{4} \frac{C}{\delta\nu} (2 \gamma_{\text{NV}} \hat{z}_{\text{NV}} \cdot \vec{B}).
\label{equation: unsimplified DQ linear signal}
\end{multline}
By defining $\langle\vec{B}\rangle$ as the average magnetic field over the imaging field of view, Eq.~\ref{equation: unsimplified DQ linear signal} simplifies to
\begin{multline}
\text{DQ}^{\text{pp}}(f_1, f_4) = \frac{9}{4} C + \frac{3 \sqrt{3}}{4} \frac{C}{\delta\nu}2 \delta_0
\\ - \frac{3 \sqrt{3}}{4} \frac{C}{\delta\nu} \left( 2 \gamma_{\text{NV}} \hat{z}_{\text{NV}} \cdot \left(\vec{B} - \langle\vec{B}\rangle \right) \right),
\label{equation: double quantum difference signal}
\end{multline}
where $2\delta_0 = (f_4-f_1)-2\gamma_{\text{NV}}\hat{z}_{\text{NV}}\cdot\langle\vec{B}\rangle$. By applying $f_1$ and $f_4$ at the outer inflection points of the NV resonance curves \textcolor{black}{simultaneously} [Fig.~\ref{fig:NVexplanation}(e)], intensity changes induced by non-magnetic, common-mode shifts are cancelled out, while splittings caused by magnetic signal result in a sum of changes in PL intensity. Hence, for constant $C$ and $\delta\nu$, the DQ DI technique enables absolute magnetic imaging.~\cite{Fang2013High-SensitivityCenters}
\textcolor{black}{Although the contribution of strain-induced resonance shifts to the imaging has been eliminated, overcoming the sensitivity limits of the SQ DI, we demonstrate that} DQ DI has \textit{increased} the effect of variations in curve-shape on the magnetic imaging. \textcolor{black}{More specifically,} variations of $C$ and $\delta\nu$ still cause perturbations of the first two terms in Eq.~\ref{equation: double quantum difference signal}. This effect can be seen by comparing the map of $C$ in Fig.~\ref{fig:sqcandlw}(c) to the DQ DI in Fig.~\ref{fig:DDQcomparison}(c). \textcolor{black}{Thus, for practical applications of the DQ method to wide-field imaging, we find that curve-shape variations dominate and the DQ DI (Fig.~\ref{fig:DDQcomparison}(c)) is ineffective at reproducing a map of the magnetic field projection (Fig.~\ref{fig:DDQcomparison}(a)).}
\subsection*{Double-Double Quantum Difference Imaging}
\textcolor{black}{To suppress the imaging dependence on curve-shape, we apply bias RFs on either side of the resonance curves \cite{Gould2014}.} We construct a DDQ DI
\begin{equation}
\textcolor{black}{\text{DDQ} = 2\frac{
I_{\text{on}}(f_1,f_4) - I_{\text{on}}(f_2, f_3)}
{I_{\text{on}}(f_1,f_4) + I_{\text{on}}(f_2, f_3)}
}
\label{equation: ddq from images signal}
\end{equation}
where $I_{\text{on}}(f_1, f_4)$ ($I_{\text{on}}(f_2, f_3)$) is the image taken with RF applied at the outer (inner) inflection points of the two resonance curves \textcolor{black}{simultaneously}, as shown in Fig.~\ref{fig:NVexplanation}(e). \textcolor{black}{By applying DQ bias RFs on either side of the resonance curves, the effects of variations in the shape of the ODMR curve and external non-magnetic fields are mitigated. Here, the DDQ DI signal is normalized by dividing by the mean of the individual DQ frames $I_{\text{on}}(f_i,f_j)$.}
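A corresponding per-pixel sketch of Eq.~\ref{equation: ddq from images signal}, again with synthetic frames in place of measured PL images, reads
\begin{verbatim}
import numpy as np

def ddq_image(I_outer, I_inner):
    # DDQ difference image from the two DQ frames:
    #   I_outer = I_on(f1, f4), RF at the outer inflection points
    #   I_inner = I_on(f2, f3), RF at the inner inflection points
    return 2.0 * (I_outer - I_inner) / (I_outer + I_inner)

# Synthetic placeholder frames (counts)
rng = np.random.default_rng(1)
I_outer = 1.0e4 * (1.0 + 0.01 * rng.normal(size=(64, 64)))
I_inner = 1.0e4 * (1.0 + 0.01 * rng.normal(size=(64, 64)))
print(ddq_image(I_outer, I_inner).std())
\end{verbatim}
Only the two frames $I_{\text{on}}(f_1,f_4)$ and $I_{\text{on}}(f_2,f_3)$ enter, which is what enables the high frame rates discussed below.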
\begin{figure*}
\begin{center}
\includegraphics[width=2.1\columnwidth]{dynamics_oneline_withfits.pdf}
\end{center}
\caption{\textcolor{black}{DDQ imaging of the reorientation of a DNA-tethered magnetic nanoparticle under applied flow. In each panel, the observed DDQ image is compared with a \textcolor{black}{fitted DDQ image (inset)} to estimate the magnetic nanoparticle dipole orientation ($\theta$,~$\phi$), where the NV ensemble symmetry axis is (54.735\degree,~0\degree). For all DDQ DI in this figure, a Gaussian smoothing filter with $\sigma = $ \SI{533}{\nano\meter} is applied. (a) A time-averaged DDQ DI (\SI{8}{\second}) showing initial magnetic nanoparticle orientation before flow. (b) Representative DDQ frames (\SI{64}{\milli\second} of exposure) showing nanoparticle reorientation in response to an applied flow. (c) A time-averaged DDQ DI (\SI{8}{\second}) showing the final magnetic nanoparticle orientation with applied flow.}}
\label{fig:dynamics}
\end{figure*}
Because of the choice of RF, the per-pixel DDQ signal simplifies in a manner similar to the DQ signal in Eq.~\ref{equation: double quantum difference signal}, giving
\begin{equation}
\text{DDQ}^{\text{pp}} \approx \frac{3 \sqrt{3}}{2} \frac{C}{\delta\nu}\left(2 \gamma_{\text{NV}} \hat{z}_{\text{NV}} \cdot \left(\vec{B} - \langle\vec{B}\rangle\right) \right).
\label{equation: double double quantum difference signal}
\end{equation}
DDQ eliminates the first two terms of the DQ DI signal Eq.~\ref{equation: double quantum difference signal} to obtain a single term which is linearly proportional to $(\vec{B} - \langle \vec{B} \rangle)$. There is still multiplicative dependence on $C$ and $\delta\nu$, but because the shift of the resonant frequency far \textcolor{black}{($>$\SI{1}{\micro\meter})} from magnetic field sources falls off faster than the impact of spatial variations of curve-shape \textcolor{black}{due to inhomogeneous broadening}, there is no \textcolor{black}{DDQ signal} generated in regions with no magnetic field. The DDQ \textcolor{black}{DI} completely eliminates the large-scale, non-magnetic gradients in the SQ DI [Fig. $\ref{fig:DDQcomparison}$(b)] and suppresses the $C$ and $\delta\nu$ dependence [Fig.~\ref{fig:sqcandlw}(b-c)] of the DQ DI [Fig.~\ref{fig:DDQcomparison}(c)]. As shown in Fig.~\ref{fig:DDQcomparison}(d), DDQ DI provides similar magnetic sensitivity as the static magnetic projection map in Fig.~\ref{fig:DDQcomparison}(a), with a \textcolor{black}{greater than four}-fold \textcolor{black}{acquisition-time-reduction}. The static imaging modality requires enough images to fit both resonance curves; the DDQ modality instead extracts the magnetic field dependence of the resonances with only two images: $I_{\text{on}}(f_1,f_4)$ and $I_{\text{on}}(f_2,f_3)$. While the integration time of the DDQ image shown in Fig.~\ref{fig:DDQcomparison}(d) was chosen to match the signal-to-noise ratio of the magnetic field map in Fig.~\ref{fig:DDQcomparison}(a), DDQ enables even faster magnetic imaging \textcolor{black}{as demonstrated in the next section}.
\textcolor{black}{We emphasize the general conditions for applicability for the DDQ method: (i) the resonance curve shapes of the two NV spin transitions used must be matched by driving each transition with equal Rabi frequency, (ii) non-magnetic inhomogeneities across the imaging field of view must be smaller than the resonance FWHM linewidth in order to be suppressed, and (iii) sensor operation is in the linear regime, i.e. magnetic signals to be imaged are smaller than the resonance FWHM linewidth. We discuss errors associated with condition (i) in the Supplemental Material, and note criteria (ii) and (iii) are prerequisites for any intensity-based wide-field magnetic imaging involving NV ensembles.}
\textcolor{black}{
\section*{WIDE-FIELD DYNAMIC MAGNETIC MICROSCOPY}
To demonstrate that DDQ difference imaging can facilitate high-frame-rate imaging of dynamic fields, we image the changing magnetic field produced by a ferromagnetic nanoparticle tethered to the diamond sensor surface by a single DNA molecule. The approximate diamond-particle distance is \SI{400}{\nano\meter}. Fig.~\ref{fig:dynamics}a shows an \SI{8}{\second} time-averaged image of the field produced by the magnetic nanoparticle. A preferred orientation is observed due to the partial alignment of the ferromagnetic nanoparticle to the \SI{0.35}{\milli \tesla} external magnetic-field, oriented along $(\theta,\phi)=(54.735\degree,0\degree)$. Next, phosphate-buffered-saline is pulled from a reservoir through the sample channel by a syringe pump at 4 ml/min. The fluid flow exerts a hydrodynamic force and torque on the tethered-particle, causing it to reorient, changing the magnetic field at the diamond sensor surface. Fig.~\ref{fig:dynamics}b displays characteristic frames, in chronological order, showing time-resolved imaging of the nanoparticle moment reorientation at a \SI{15.6}{\hertz} frame-rate \textcolor{black}{(\SI{64}{\milli \second} per frame)}, with insets showing \textcolor{black}{fitted} DDQ images displaying the changing magnetic-moment direction in each frame. The fluid-flow-steady-state nanoparticle orientation is imaged with an \SI{8}{\second} time-averaged DDQ image in Fig.~\ref{fig:dynamics}c. The full dynamic magnetic imaging video \textcolor{black}{and simulation information} can be found in the Supplemental Material. This experiment represents the novel application of micron-scale dynamic magnetometry to a single-molecule biological system.
}
\section*{Conclusion and Outlook}
\textcolor{black}{Although} the NV community has made significant progress toward eliminating inhomogeneities in NV ensemble-based-sensors \textcolor{black}{through advanced NV fabrication}~\cite{Acosta2009DiamondsApplications}, quantum control methods \textcolor{black}{can significantly increase the sensitivity of these systems for magnetometry applications}~\cite{Bauch2018UltralongControl}. \textcolor{black}{However, existing wide-field schemes fail to reliably image magnetic fields due to micron-scale variation in the resonance-curve shape.} Here, we introduce a novel quantum control technique, double-double quantum difference imaging, that is suitable for mitigating inhomogeneities in wide-field dc magnetometry to enable imaging of time-varying fields. Using four-tone RF pulses and only a two-image sequence, we show both theoretically and experimentally that DDQ difference imaging not only mitigates \textcolor{black}{non-magnetic} perturbations of the NV resonant frequency but also variations of resonance curve-shape. \textcolor{black}{Static-field imaging reveals} that these resonance shape variations can be the dominant source of imaging noise in a state-of-the-art NV magnetic imaging surface. \textcolor{black}{Finally, we use the DDQ technique to perform wide-field magnetic microscopy of a dynamic, biological system, enabling high frame-rate orientation imaging of a magnetic nanoparticle tethered to the diamond sensor by a single DNA molecule. DDQ difference imaging eliminates the need for per-pixel calibration and enables high-frame-rate magnetic microscopy via NV photoluminescence intensity imaging.}
\newline
\newline
\textcolor{black}{\textit{During the revision process, we became aware of work demonstrating a similar technique with similar conditions of applicability to mitigate wide-field inhomogeneities in NV magnetic microscopy~\cite{Hart2020}. In relation to this work, our technique utilizes a simpler two-image (versus four-image) scheme, requires no phase control of the RF excitation, uses substantially lower RF power, and was used to perform dynamic magnetic imaging.}}
\section*{Acknowledgements}
This material is based on work supported by the National Science Foundation under Grant No. 1607869. Helium ion implantation measurements were carried out at the Environmental and Molecular Sciences Laboratory, a national scientific user facility sponsored by DOE's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory (PNNL). PNNL is a multiprogram laboratory operated for DOE by Battelle under Contract DE-AC05-76RL01830. K.M.I. acknowledges support from the Spintronics Research Network of Japan.
\section{Introduction}
The ground state of a quantum many-body system changes drastically as parameters in the Hamiltonian are driven across a quantum transition point.
This phenomenon is preceded in a finite-size system by closing of the energy gap between the ground and first excited states as a function of the system size growing toward the thermodynamic limit.
The rate of the gap closing is an important measure characterizing a quantum phase transition.
According to the finite-size scaling theory, when applied to a second order phase transition
with a divergent correlation length, physical quantities generally behave
polynomially as a function of the system size \cite{nishimori-ortiz,santos,osterloh}.
In contrast, the gap is expected to close exponentially fast at the first order
quantum phase transition:
the two ground states at opposite sides of the first order transition point have
significantly different properties and consequently their
overlap in a finite-size system is very -- typically exponentially -- small.
The overlap of the two states determines the energy gap since the overlap
corresponds to the off-diagonal elements of the effective two-level Hamiltonian
describing the system around the transition point and the gap is
directly related to the magnitude of the off-diagonal elements.
It is therefore expected that the order of quantum phase transition
in the thermodynamic limit is generally in one-to-one correspondence
with the rate of the gap closing toward the thermodynamic limit,
polynomially for the second order transition and exponentially for the first order transition \cite{privman,hatano}.
Studies of the energy gap are also important from the viewpoint of quantum annealing \cite{kadowaki,kadowaki2,finilla,morita,das,santoro,bapst},
in which parameters of the system are controlled in time and driven across a quantum critical point.
The rate of the gap closing directly affects the efficiency of quantum annealing.
An exponential closing of the gap implies an exponentially long computation time whereas a polynomial gap leads to a polynomial time \cite{farhi,seki,seoane,bapst2,jorg,young1,young2}.
Although the above-mentioned correspondence between the order of quantum phase
transition and the rate of the gap closing generally holds true in most systems, an
interesting counterexample has been found by Cabrera and Jullien
\cite{cabrera,cabrera2} who showed that the first order quantum phase
transition in the one-dimensional transverse-field Ising model with
antiperiodic boundary condition is accompanied by a polynomial closing of the energy gap.
See also Ref. \cite{laumann} for essentially the same result.
Furthermore, another very anomalous example has been given for the infinite-range
quantum $XY$ model, where the energy gap behaves in many different ways at first order
quantum phase transitions \cite{tsuda}.
It has been shown there that many types of gap closing (polynomial, exponential, and even factorial) coexist along a critical line in the phase diagram.
This example suggests that we should be very careful while relating
the type of quantum phase transition to the rate of the gap closing.
In the present paper, we first analyze the one-dimensional quantum isotropic $XY$ model with $s=1/2$, which exhibits a second order phase transition,
and show that this model is another example of the anomalous gap behavior.
Although the properties of one-dimensional quantum spin systems
have been studied from a number of different perspectives
\cite{sachdev,Henkel,lieb,katsura,pfeuty,mccoy,kurmann,hoeger,kenzelmann,antonella,Damski,CV2010,CNPV2014,CPV2015},
our focus is on the anomalous system-size dependence of the gap,
at variance from most of the previous studies.
We conclude that the energy gap behaves quite peculiarly (highly oscillatory) as a function of the system size,
in a very similar way to the case of the infinite-range quantum $XY$ model \cite{tsuda}.
This observation should be taken seriously from the perspective of efficiency of quantum annealing.
Next, we analyze the anisotropic $XY$ model and show that the energy gap also behaves anomalously as a function of the system size in some regions of the parameters.
Finally, we observe that a direct relation holds between the energy gap and the correlation function in the ground state. For instance, the anomalous oscillatory behavior of the gap precisely coincides with the oscillations of the correlation functions.
The organization of the paper is as follows. In Sec. I\hspace{-.1em}I, we define the model, diagonalize the Hamiltonian and obtain the energy gap.
In Sec. I\hspace{-.1em}I\hspace{-.1em}I, we analyze the size dependence of the energy gap using both analytical and numerical methods. In Sec. I\hspace{-.1em}V, we show that a relation is established between the energy gap and the correlation function in the systems. Finally, our conclusion is given in Sec. V.
\section{The Hamiltonian and its spectrum}
We study the one-dimensional quantum $XY$ model with $s=1/2$ in transverse and longitudinal fields, $\Gamma$ and $h$, respectively, and periodic boundary condition,
\begin{eqnarray}
H&=&-\frac{1}{2} \sum_{i=1}^{N}[(1+\gamma)\sigma_i^x\sigma_{i+1} ^x+(1-\gamma)\sigma_i^y\sigma_{i+1} ^y]-\Gamma \sum_{i=1}^{N} \sigma_i^z-h\sum_{i=1}^{N} \sigma_i^x \label{eq:hamiltonian1},
\end{eqnarray}
where $\gamma$ controls the anisotropy, and $\sigma_i^x, \sigma_i^y$ and $\sigma_i^z$ are the Pauli matrices acting at site $i$. For the moment, we assume that the system size $N$ is even.
Without loss of generality, we assume that $\Gamma \ge 0$ and $\gamma \ge0$.
We consider a quantum annealing protocol where the longitudinal field $h$ is driven in time for a given, fixed value of the transverse field $\Gamma$.
For $\gamma>0$, there is a first order quantum transition at $h=0$ between the phases with $\langle \sigma_j^x \rangle>0$ and $\langle \sigma_j^x \rangle<0$ (for $h<0$ and $h>0$, respectively), as long as $0\leq \Gamma < 1$, see Fig. 1 \cite{lieb,katsura,pfeuty,mccoy,kurmann,hoeger,kenzelmann,antonella}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.4\hsize]{souzu-crop.eps}
\end{center}
\caption{The phase diagram of the XY model in Eq.\ (\ref{eq:hamiltonian1}), which undergoes a first order phase transition across the surface marked in pink
between the phases with positive and negative expectation values of $\sigma_j^x$. Along the isotropic line of $\gamma=0$ marked in green, for $0\le \Gamma <1$, the transition is of the second order.}
\label{fig:phasediagram}
\end{figure}
This expectation value has a jump at $h=0$, which is the reason why we call it the first order transition.
In contrast, for $\gamma=0$, a second order transition takes place, as the
expectation value $\langle \sigma_j^x \rangle$ continuously changes across 0 at $h=0$ \cite{mccoy2}.
Our aim is to clarify the system-size dependence of the energy gap at $h=0$.
Notice that the longitudinal field in Eq.\ (\ref{eq:hamiltonian1}), $-h\sum_j\sigma_j^x$, induces transitions between states with even and odd values of
the $z$-component of the total spin $S_{\rm tot}^z=\frac{1}{2}\sum_j\sigma_j^z$, while other terms in the Hamiltonian conserve the parity.
For our protocol, it is therefore relevant to study the energy gap between the ground
and first excited states, where one has an odd and the other has even value of $S_{\rm tot}^z$.
The XY model in Eq.\ (\ref{eq:hamiltonian1}), for $h = 0$, can be diagonalized following the standard procedure \cite{lieb,katsura,pfeuty,mccoy,kurmann,hoeger,kenzelmann,antonella}.
Since the energy gap depends on the subtle finite-size effects in the energy spectrum, we repeat the derivation in some detail.
After the Jordan-Wigner transformation, $\sigma^z_i= 2 c_i^\dagger c_i-1 $, $\sigma^x_i - i \sigma^y_i = 2 c_i \prod_{j<i}(-\sigma^z_j)$, where $c_i$ are fermionic annihilation operators, the Hamiltonian is expressed as
\begin{eqnarray}
H=&-& \sum_{i=1}^{N-1} \left[(c_i^{\dagger}c_{i+1}-c_ic_{i+1}^{\dagger})+\gamma(c_i^{\dagger}c_{i+1}^{\dagger}-c_ic_{i+1}) \right]
\nonumber \\
&+& (-1)^{N_c} \left[ (c_N^{\dagger}c_{1}-c_N c_{1}^{\dagger})+\gamma(c_N^{\dagger} c_{1}^{\dagger}-c_N c_{1}) \right]
\nonumber \\
&-&2 \Gamma \sum_{i=1}^{N} (c_i^{\dagger} c_i -\frac{1}{2}) \label{eq:hamiltonian2} ,
\end{eqnarray}
where we will call $N_c \equiv \sum_i c_i^{\dagger} c_i$ sign operator.
After the Fourier transformation,
\begin{eqnarray}
c_j= \frac{1}{\sqrt{N}} \sum_{k}e^{ikj} a_k ,
\end{eqnarray}
the Hamiltonian in Eq.\ (\ref{eq:hamiltonian2}) is transformed as
\begin{eqnarray}
H&=& \sum_{k} H_k ,
\\
H{_k}&=& - \left[ 2a_k^{\dagger} a_k (\cos k +\Gamma) + \gamma ( e^{ik}a_k^{\dagger} a_{-k}^{\dagger} - e^{-ik}a_k a_{-k}) \right] + \Gamma \label{k4} ,
\end{eqnarray}
where the values of $k$ depend on the boundary condition: $k = k_1$ for periodic $c_{N+j} = c_j$ ($N_c$ odd) and $k = k_2$ for antiperiodic $c_{N+j} = -c_j$ ($N_c$ even), with
\begin{eqnarray}
k_1 &=&0 , \pm \frac{2}{N} \pi ,\cdots , \pm \frac{N-2}{N} \pi , \pi \label{k_1} ,
\\
k_2 &=& \pm \frac{1}{N} \pi , \pm \frac{3}{N} \pi , \cdots , \pm \frac{N-1}{N} \pi \label{k_2} .
\end{eqnarray}
For $\gamma=0$, the transformed Hamiltonian $H_k$ is already diagonal. The part of the Hamiltonian for a given absolute value of momentum reads
\begin{eqnarray}
H_k + H_{-k} = &-2&
\left[
\begin{array}{ccc}
a_k^{\dagger} && a_{-k}^{}
\end{array}
\right]
\left[
\begin{array}{ccc}
\cos k +\Gamma && i\gamma \sin k \\
-i\gamma \sin k && -\cos k - \Gamma
\end{array}
\right]
\left[
\begin{array}{ccc}
a_k\\
a_{-k}^{\dagger}
\end{array}
\right] -2 \cos k.
\end{eqnarray}
We diagonalize it using the Bogoliubov transformation
\begin{eqnarray}
\left[
\begin{array}{ccc}
d_k \\
d_{-k}^{\dagger} \\
\end{array}
\right]
&=& \left[
\begin{array}{ccc}
\cos \frac{\theta_k}{2} && i \sin \frac{\theta_k}{2} \\
i \sin \frac{\theta_k}{2} && \cos \frac{\theta_k}{2} \\
\end{array}
\right]
\left[
\begin{array}{ccc}
a_k \\
a_{-k}^{\dagger} \\
\end{array}
\right],
\\
\nonumber \\
\cos \theta_k &=& \left(\cos k +\Gamma\right) / \epsilon(k), \label{eq:costh}\\
\sin \theta_k &=& \gamma \sin k / \epsilon(k) \label{eq:sinth},\\
\label{epsilonk}
\epsilon (k) &=& \sqrt{ (\cos k+\Gamma)^2 + (\gamma \sin k)^2} \label{epsilon} .
\end{eqnarray}
Finally, the diagonalized Hamiltonian in each parity sector reads
(i) For $N_c$ odd
\begin{eqnarray}
H^{\mathrm{odd}} &=& -2 (\Gamma+1) d_0^{\dagger} d_0 - 2(\Gamma-1) d_{\pi}^{\dagger} d_{\pi} + 2\Gamma
\nonumber \\
&&- 2\sum_{k_3} ( d_{k_3}^{\dagger} d_{k_3} +d_{-k_3}^{\dagger} d_{-k_3} -1 ) \epsilon (k_3) , \\
k_3 &=& \frac{2}{N} \pi , \frac{4}{N} \pi , \cdots, \frac{N-2}{N} \pi .
\end{eqnarray}
The ground-state energy is
\begin{eqnarray}
E_0^{\mathrm{odd}} &=& -2 - 2\sum_{k_3} \epsilon(k_3) = -2 - \sum_{k_1} \epsilon(k_1) +\epsilon(0)+\epsilon(\pi) \nonumber
\\
&=&
\begin{cases}
-\sum_{k_1} \epsilon(k_1) & (\Gamma \le 1)
\\
2(\Gamma-1) -\sum_{k_1} \epsilon(k_1) & (\Gamma>1)
\end{cases} \label{eq:E_0^{odd}},
\end{eqnarray}
where we have to be careful to pick the state with correct fermionic parity.
(ii) For $N_c$ even
\begin{eqnarray}
H^{\mathrm{even}} &=& -2\sum_{k_4} ( d_{k_4}^{\dagger} d_{k_4} +d_{-k_4}^{\dagger} d_{-k_4} -1 ) \epsilon (k_4), \\
k_4 &=& \frac{1}{N} \pi , \frac{3}{N} \pi , \cdots, \frac{N-1}{N} \pi .
\end{eqnarray}
The ground-state energy is
\begin{eqnarray}
E_0^{\mathrm{even}} = - 2\sum_{k_4} \epsilon(k_4) = -\sum_{k_2} \epsilon(k_2) .
\end{eqnarray}
The true ground-state energy is given by one of these two possibilities, $E_0^{\mathrm{odd}}$ or $E_0^{\mathrm{even}}$, while the energy of the first excited state is the other one. Therefore the energy gap equals
\begin{eqnarray}
\Delta(N,\Gamma,\gamma) \equiv \left| E_0^{\mathrm{odd}}-E_0^{\mathrm{even}} \right| &=\begin{cases}
\displaystyle{\left| \sum_{k_2} \epsilon(k_2)-\sum_{k_1} \epsilon(k_1) \right| }& (\Gamma \le 1)
\\
\displaystyle{2(\Gamma-1) + \sum_{k_2} \epsilon(k_2) -\sum_{k_1} \epsilon(k_1)} & (\Gamma>1)
\end{cases} \label{eq:Delta2}
\end{eqnarray}
When the system size $N$ is odd, we diagonalize the Hamiltonian in the same way and obtain the energy gap as,
\begin{eqnarray}
\Delta(N,\Gamma,\gamma) =
\begin{cases}
\displaystyle{ \left| \sum_{k_5} \epsilon(k_5)-\sum_{k_6} \epsilon(k_6) \right| } & (\Gamma \le 1)
\\
\displaystyle{2(\Gamma-1) +\sum_{k_5} \epsilon(k_5) -\sum_{k_6} \epsilon(k_6) }& (\Gamma>1)
\end{cases} \label{eq:Deltaodd}
\end{eqnarray}
where
\begin{eqnarray}
k_5 &=&0 , \pm \frac{2}{N} \pi ,\cdots , \pm \frac{N-3}{N} \pi , \pm \frac{N-1}{N} \pi \label{k_5} ,
\\
k_6 &=&\pi, \pm \frac{1}{N} \pi , \pm \frac{3}{N} \pi , \cdots , \pm \frac{N-2}{N} \pi \label{k_6},
\end{eqnarray}
are wave numbers in the subspaces with odd and even parities, respectively.
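The finite-size gap defined above is straightforward to evaluate numerically. The following short Python sketch (given only for illustration) sums the single-particle energies of Eq.\ (\ref{epsilon}) over the two parity sectors for even $N$ according to Eq.\ (\ref{eq:Delta2}):
\begin{verbatim}
import numpy as np

def epsilon(k, Gamma, gamma):
    # single-particle energy epsilon(k)
    return np.sqrt((np.cos(k) + Gamma)**2 + (gamma*np.sin(k))**2)

def gap_even_N(N, Gamma, gamma):
    # energy gap Delta(N, Gamma, gamma) at h = 0 for even N
    k1 = 2.0*np.pi*np.arange(N)/N            # periodic sector (N_c odd)
    k2 = 2.0*np.pi*(np.arange(N) + 0.5)/N    # antiperiodic sector (N_c even)
    diff = epsilon(k2, Gamma, gamma).sum() - epsilon(k1, Gamma, gamma).sum()
    return abs(diff) if Gamma <= 1.0 else 2.0*(Gamma - 1.0) + diff

print(gap_even_N(100, 0.5, 0.0))    # isotropic chain
print(gap_even_N(100, 0.2, 0.15))   # anisotropic, Gamma**2 + gamma**2 < 1
\end{verbatim}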
\section{Energy gap as a function of the system size}
In this section, we derive the asymptotic form of the energy gap and show that it behaves in an irregular way as a function of the system size.
\subsection{Isotropic case $(\gamma = 0)$}
We start with the isotropic case of $\gamma = 0$, for which the Hamiltonian is already diagonal in Eq.\ (\ref{k4}),
\begin{eqnarray}
H&=& \sum_{k} \left[ -2a_k^{\dagger} a_k (\cos k +\Gamma) + \Gamma \right] \label{eq:H} ,
\end{eqnarray}
where the values of $k$ depend on the boundary condition as in Eqs.\ (\ref{k_1}) and (\ref{k_2}). We find it convenient not to use the general expression derived in Eq.\ \eqref{eq:Delta2} here, but we start with the above formula instead. For simplicity, we only consider even $N$ here.
We carefully identify the ground state in preparation for evaluating the energy gap.
An apparent candidate for the ground state is the state where all modes with wave numbers satisfying
\begin{align}
\cos k > -\Gamma \label{concon},
\end{align}
are occupied.
We check that those states have the number of fermions $N_c$ matching the respective parity sectors.
Indeed, as should be clear from Fig. \ref{fig:k1}, the series of wave numbers $k_1$ in
Eq.\ (\ref{k_1}) is consistent with odd $N_c$ and $k_2$ is consistent with even $N_c$.
Notice that the parity of $N_c$ coincides with the parity of $S_{\rm tot}^z$ up
to a constant $N/2$,
\begin{equation}
S_{\rm tot}^z=\frac{1}{2}\sum_{j=1}^N \sigma_j^z
=\sum_{j=1}^N \Big(c^{\dagger}_jc_j -\frac{1}{2}\Big) =N_c-\frac{N}{2},
\end{equation}
which is related to the statement that we consider the gap between states with even and odd values of $S_{\rm tot}^z$.
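This parity assignment can be checked directly by counting the occupied modes in each sector; a small Python verification, for an arbitrarily chosen even $N$ and $0\le\Gamma<1$, reads
\begin{verbatim}
import numpy as np

N, Gamma = 12, 0.3     # illustrative even system size and transverse field

j  = np.arange(1, N//2)
k1 = np.concatenate(([0.0], 2*np.pi*j/N, -2*np.pi*j/N, [np.pi]))   # periodic sector
j2 = np.arange(1, N//2 + 1)
k2 = np.concatenate(((2*j2 - 1)*np.pi/N, -(2*j2 - 1)*np.pi/N))     # antiperiodic sector

# modes with cos k > -Gamma are occupied in the lowest-energy state of each sector
n1 = np.count_nonzero(np.cos(k1) > -Gamma)
n2 = np.count_nonzero(np.cos(k2) > -Gamma)
print(n1 % 2, n2 % 2)  # prints 1 and 0: odd N_c for k_1, even N_c for k_2
\end{verbatim}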
\begin{figure}[!htbp]
\begin{center}
\includegraphics[height=55mm]{k1} \hspace{1.5cm}
\includegraphics[height=55mm]{k2}
\end{center}
\caption{Left panel: Wave numbers for the presumed ground state in the case of $k_1$ on the
complex-$k$ plane. The black points mark the occupied modes, and the white ones are for the unoccupied ones. The number of occupied modes is odd.
Right panel: Likewise, the number of occupied states for $k_2$ is even.}
\label{fig:k1}
\end{figure}
\subsubsection{Energy gap}
The lowest energy in each parity sector is given by Eq.\ (\ref{eq:H}), with $a_k^\dagger a_k =1$ for $k$ satisfying Eq.\ (\ref{concon}) and $a_k^\dagger a_k=0$ otherwise.
The energy gap reads
\begin{align}
\Delta (N,\Gamma,0)= \left| \sum_{|k_1| \le \frac{2 n }{N} \pi} \left[ -2\left(\cos k + \Gamma \right) \right]
- \sum_{|k_2| \le \frac{2 m -1}{N}\pi} \left[ -2\left(\cos k + \Gamma \right) \right] \right| , \label{gap}
\end{align}
where $\frac{2 n }{N}\pi$ and $\frac{2 m -1}{N}\pi$ are, respectively, the largest $k_1$ and $k_2$ satisfying Eq.\ (\ref{concon}).
Then,
\begin{align}
\sum_{|k_1| \le \frac{2 n }{N} \pi} \cos k = 1 + 2\sum_{j=1}^n \cos \frac{2j}{N} \pi = \frac{\displaystyle\sin \frac{(2n+1)\pi}{ N}} {\displaystyle\sin\frac{\pi}{N}}; ~ ~ ~ ~
\sum_{|k_2| \le \frac{2 m -1}{N}\pi} \cos k = \frac{\displaystyle\sin \frac{2m\pi}{N}}{\displaystyle\sin \frac{\pi}{N}}.
\label{k1sum}
\end{align}
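Both closed forms follow from elementary geometric sums and can be verified numerically in a few lines (the integers $N$, $n$ and $m$ below are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

N, n, m = 30, 7, 9   # illustrative integers with 2n < N and 2m - 1 < N

lhs1 = 1 + 2*np.cos(2*np.pi*np.arange(1, n + 1)/N).sum()
rhs1 = np.sin((2*n + 1)*np.pi/N)/np.sin(np.pi/N)

lhs2 = 2*np.cos((2*np.arange(1, m + 1) - 1)*np.pi/N).sum()
rhs2 = np.sin(2*m*np.pi/N)/np.sin(np.pi/N)

print(np.isclose(lhs1, rhs1), np.isclose(lhs2, rhs2))   # True True
\end{verbatim}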
Next, we find $n$ and $m$, as
\begin{align}
n= \left\lfloor \frac{Nx}{2\pi} \right\rfloor
,~~
m= \left\lfloor \frac{Nx}{2\pi} +\frac{1}{2} \right\rfloor
,~~
x = \arccos (-\G).
\label{nm}
\end{align}
Dividing the quantities in the floor symbol above into integer and fractional parts,
\begin{align}
\frac{Nx}{2\pi} &= \left\lfloor \frac{Nx}{2\pi} \right\rfloor + \delta_1
= n + \delta_1 ~~~(0\leq\delta_1<1), \label{Nx2pi}\\
\frac{Nx}{2\pi} + \frac{1}{2} &= \left\lfloor \frac{Nx}{2\pi}
+ \frac{1}{2} \right\rfloor + \delta_2 = m + \delta_2 ~~~(0\leq \delta_2<1) ,
\end{align}
we find that for $0\leq\delta_1<\frac{1}{2}$, $\delta_2=\delta_1+\frac{1}{2}$, and the energy gap is
\begin{align}
\Delta(N,\Gamma,0)=
2 \left|
\frac{\displaystyle\G\cos\left(\frac{\pi}{2N}(1-4\delta_1)\right)+\sqrt{1-\G^2}
\sin\left(\frac{\pi}{2N}\left( 1-4\delta_1 \right)\right)}
{\displaystyle\cos \frac{\pi}{2N}}-\G \right|,
\label{kekka1}
\end{align}
while for $\frac{1}{2} \leq\delta_1<1$, $\delta_2=\delta_1-\frac{1}{2}$ and the gap reads
\begin{align}
\Delta(N,\Gamma,0) = 2 \left| \frac{\displaystyle\G\cos\left(\frac{\pi}
{2N}(3-4\delta_1)\right)+\sqrt{1-\G^2}
\sin\left(\frac{\pi}{2N}\left(3-4\delta_1 \right)\right)}
{\displaystyle\cos \frac{\pi}{2N}}-\G \right|. \label{kekka2}
\end{align}
In both cases we can expand the above expressions in powers of $1/N$, obtaining the asymptotic formula
\begin{align}
\Delta(N,\Gamma,0) \approx \frac{\pi \sqrt{1-\Gamma^2}}{N} \left| 2 ~{\rm mod} \left( \frac{N \phi}{\pi} ,1 \right) -1 \right| +\mathcal{O}(N^{-2}),
\label{eq:isotropic_asymptotic}
\end{align}
where the angle $\phi$ is defined as
\begin{eqnarray}
\phi = \arccos(\Gamma).
\end{eqnarray}
The standard, polynomial decay of the gap is modified here by the factor $\left|2 ~{\rm mod} \left( \frac{N \phi}{\pi} ,1 \right) -1\right|$, which is a piecewise linear, continuous function of $\phi N$, oscillating between $0$ and $1$.
In Appendix A we present another derivation of the above formula using a more general approach, suitable for the anisotropic case as well.
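As an illustration of the quality of this expansion, the short Python sketch below compares the exact gap of Eq.\ (\ref{gap}), evaluated through Eqs.\ (\ref{k1sum}) and (\ref{nm}), with the leading term of Eq.\ \eqref{eq:isotropic_asymptotic}; the value of $\Gamma$ and the (even) system sizes are arbitrary.
\begin{verbatim}
import numpy as np

def gap_exact(N, Gamma):
    # exact gap for even N and gamma = 0
    x = np.arccos(-Gamma)
    n = int(np.floor(N*x/(2*np.pi)))
    m = int(np.floor(N*x/(2*np.pi) + 0.5))
    s1 = np.sin((2*n + 1)*np.pi/N)/np.sin(np.pi/N)   # sum of cos k over occupied k_1
    s2 = np.sin(2*m*np.pi/N)/np.sin(np.pi/N)         # sum of cos k over occupied k_2
    return abs(-2*(s1 + (2*n + 1)*Gamma) + 2*(s2 + 2*m*Gamma))

def gap_asymptotic(N, Gamma):
    # leading term of the asymptotic formula, phi = arccos(Gamma)
    phi = np.arccos(Gamma)
    return (np.pi*np.sqrt(1 - Gamma**2)/N)*abs(2*np.mod(N*phi/np.pi, 1) - 1)

for N in (50, 100, 200, 400):
    print(N, gap_exact(N, 0.3), gap_asymptotic(N, 0.3))
\end{verbatim}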
\subsubsection{Anomaly in the energy gap}
We plot typical examples of the energy gap as a function of the system size in Fig. \ref{fig:initial}, which have been calculated using the exact formulas in Eqs.\ (\ref{kekka1}) and (\ref{kekka2}).
\begin{figure}[t]
\begin{minipage}[]{.5\linewidth}
\centering
\includegraphics[width=1.0\hsize]{XX-some.eps}
\end{minipage}%
\begin{minipage}[]{.5\linewidth}
\centering
\includegraphics[width=1.0\hsize]{0999.eps}
\end{minipage}%
\caption{System-size dependence of the energy gap for the isotropic XY model in Eq.\ \eqref{eq:hamiltonian1} with $\gamma=0$ and $h=0$. Left panel: Green, red and blue dots show values of the energy gap for $\Gamma=0.1$, $\Gamma=0.3$ and $\Gamma=0.5$, respectively. Right panel: The energy gap for $\Gamma=0.999$.}
\label{fig:initial}
\end{figure}
Most interestingly, these figures look qualitatively very similar to those for the infinite-range quantum $XY$ model \cite{tsuda},
\begin{align}
H_\infty = -\frac{1}{4N} \sum_{i,j=1}^{N}
\left(\sigma^x_i \sigma^x_{j} + \sigma^y_i \sigma^y_{j} \right)
- \frac{\Gamma}{2} \sum_{i=1}^{N} \sigma^z_i
- \frac{h}{2} \sum_{i=1}^{N} \sigma^x_i \label{hamiltonian_IR}.
\end{align}
For convenience, we reproduce some of them in Fig. \ref{fig:IR}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.5\hsize]{IR.eps}
\end{center}
\caption{System-size dependence of the energy gap for the infinite-range XY model in Eq.\ \eqref{hamiltonian_IR} for the longitudinal field $h$=0 and various, fixed values of the transverse field $\Gamma$. Red, blue and green dots show the data for $\Gamma=\pi/10$, $\Gamma=1/3$ and $\Gamma=1-\pi/300$, respectively.}
\label{fig:IR}
\end{figure}
It is worth remarking that, in the infinite-range XY model, the gap can be tuned
to close polynomially, exponentially, or factorially by using an appropriate series of system sizes \cite{tsuda}.
Comparison of Figs. \ref{fig:initial} and \ref{fig:IR} strongly suggests that the same may be true for the one-dimensional model as well.
We discuss it in some detail below.
\subsubsection{Sensitivity of the energy gap to the change of magnetic field $\Gamma$}
There are several important points to notice in these results.
Firstly, a slight change in $\Gamma$ can lead to drastically different behavior.
In order to illustrate this, it is useful to reduce the exact formulas for the gap in Eqs.\ (\ref{kekka1}) and (\ref{kekka2}) for some specific values of $\Gamma$.
For example, for $\Gamma =\frac{1}{2}$, $x=\arccos(-\Gamma)=2\pi /3$, and hence $Nx/2\pi =N/3$.
Then, for $N=3l~(l\in\mathbb{N})$, $\delta_1=0$ according to Eq.\ (\ref{Nx2pi}) and the gap takes a simple form
\begin{equation}
\Delta(N=3l, \Gamma=1/2, 0) =\sqrt{3}\, \tan \frac{\pi}{2N}. \label{simple delta}
\end{equation}
On the other hand, when $N=3l+1$, $\delta_1=1/3$ and for $N=3l+2$, $\delta_1=2/3$ and the gap becomes respectively
\begin{eqnarray}
\Delta(N=3l+1, \Gamma=1/2, 0) &=& 2 \left| \frac{\displaystyle\frac{1}{2}\cos\frac{\pi}
{6N}-\frac{\sqrt{3}}{2}\sin\frac{\pi}
{6N}}{\displaystyle\cos \frac{\pi}{2N}}-\frac{1}{2} \right| \label{3l+1}, \\
\Delta(N=3l+2, \Gamma=1/2, 0) &=& 2 \left| \frac{\displaystyle\frac{1}{2}\cos\frac{\pi}
{6N}+\frac{\sqrt{3}}{2}\sin\frac{\pi}
{6N}}{\displaystyle\cos \frac{\pi}{2N}}-\frac{1}{2} \right|. \label{3l+2}
\end{eqnarray}
For large $N$, Eqs.\ (\ref{3l+1}-\ref{3l+2}) have the same system-size dependence since $\cos{\left(\pi/6N\right)} \gg \sin{\left(\pi/6N \right)}$.
Thus, the gap follows essentially two separate curves as a function of the system size, given by Eqs.\ (\ref{simple delta}) and (\ref{3l+1}) (or (\ref{3l+2})), as can indeed be seen
in Fig. \ref{fig:initial} for $\Gamma=0.5$.
A similar argument applies to $x=\arccos (-\Gamma)=\pi l/j$ with $l, j\in\mathbb{N}$ satisfying $1/2\leq l/j<1$, the latter condition coming from $0\le \Gamma <1$.
In such a case
\begin{equation}
\frac{Nx}{2\pi}=\frac{lN}{2j},
\end{equation}
and $\delta_1$ assumes a fixed value for selected series of system sizes.
For instance, for $N=ji~(i\in\mathbb{N})$, $\delta_1=0$ or $\frac{1}{2}$,
and for $N=ji+1$, $\delta_1=l/(2j)$ or $1/2+l/(2j)$.
Each of the series $N=ji$ and $N=ji+1$ gives a smooth curve describing $\Delta$.
Similar argument applies to other series of the form $N=ji+{\rm const}$.
Therefore, for some rational values of $\arccos (-\Gamma)/\pi$, the gap, as a function
of the system size, follows a finite number of different curves.
For irrational $\arccos (-\Gamma)/\pi$ this argument does not apply
and the gap behaves less regularly.
\subsubsection{Envelope of the energy gap}
Secondly, the {\it envelope} of the gap as a function of the system size is proportional to its inverse. This is apparent when looking at the asymptotic expression in Eq.\ \eqref{eq:isotropic_asymptotic}, and we show it as well in Fig. \ref{Ndelta} by plotting $N\Delta$ as a function of $N$ for $\Gamma =0.1$.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.5\hsize]{N01.eps}
\end{center}
\caption{System-size dependence of the gap multiplied by the size for $\gamma=0$ and $\G =0.1$.}
\label{Ndelta}
\end{figure}
This power-law dependence is indeed what is expected from the second order phase transition. One can easily construct a series of system sizes which exhibits such a behavior by sticking to $N$'s for which the oscillating term in Eq.\ \eqref{eq:isotropic_asymptotic} is of the order of one. This would be a {\it typical} behavior as well.
\subsubsection{Possibility of rapid decay of the gap}
Note however, that we may be able to choose such a series of system sizes that the gap closes very rapidly.
The reason is that the expressions in Eqs.\ (\ref{kekka1}) and (\ref{kekka2}) oscillate between positive and negative values as illustrated in Figs. \ref{fig:initial} and in the asymptotic formula in Eq.\ \eqref{eq:isotropic_asymptotic}. This implies that the gap is crossing zero if $N$ is allowed to take continuous values. Then, it may be possible to choose appropriate values of $N$ close to those zeros which give very small, possibly exponentially small, values of the gap.
If this is correct, which we failed to prove rigorously (unlike the case of
the infinite-range quantum $XY$ model \cite{tsuda}), the rate of the gap closing
could be tuned to behave exponentially by an appropriate choice of the
series of system size, or be outright zero for some specific system size and magnetic field $\Gamma$.
\subsubsection{Special case: independence of the system size}
Finally, Eq.\ (\ref{gap}) simplifies considerably when $\cos (\pi/N)<\Gamma$. In that case, the summations over $k_1$ and $k_2$ in Eq.\ (\ref{gap}) run over
all allowed values of wave numbers except for $k_1=\pi$ and the gap reduces to a simple form \cite{antonella},
\begin{align}
\Delta(N, \Gamma, 0) = \left| \sum_{k_1} \left[ -2\left(\cos k + \Gamma \right) \right]
-2(1-\Gamma)
- \sum_{k_2} \left[ -2\left(\cos k + \Gamma \right) \right] \right|
= 2(1-\G).
\end{align}
This last expression is independent of $N$ as can be observed in the right panel of Fig. \ref{fig:initial} in the range of $\cos (\pi/N)<\Gamma$.
\subsection{Anisotropic case $(0<\gamma<1)$}
In the anisotropic case, we derive asymptotic formulas for the energy gap and show that its behavior differs remarkably in various regions in the $(\gamma,\Gamma)$ plane.
Most intriguingly, we observe that the energy gap is oscillating in the incommensurate ferromagnetic phase, which is neighboring the isotropic critical line discussed earlier.
We are able to link the period of those oscillations with the oscillations of the correlation functions in those cases.
To that end, we start with Eqs.\ \eqref{eq:Delta2} and \eqref{eq:Deltaodd} and extend the procedure used in Ref. \cite{Damski} for the case of the Ising model, see Appendix A for details of the derivation.
We summarize the results below.
\noindent In the ferromagnetic phase for $\Gamma<1$, which is the most interesting for us,
\begin{align}
\Delta \approx \displaystyle {\left\{ \begin{array}{ll}
\displaystyle{ \left( \frac{-8\gamma(\Gamma^2 +\gamma^2-1-\Gamma \gamma\sqrt{\Gamma^2+\gamma^2-1})}{\pi (1-\gamma^2)} \right)^{1/2} \frac{\lambda_2^{-N}}{\sqrt{N} } \left(1 + \mathcal{O}(N^{-1}) \right) }, & (\text{$\Gamma^2 + \gamma^2>1$), } \\
\displaystyle{ 0}, & (\Gamma^2 + \gamma^2 =1), \\
\displaystyle{ \left| \frac{4\sqrt{2}}{\sqrt{\pi}} \left( \frac{\gamma^2(1-\Gamma^2)(1-\Gamma^2-\gamma^2)}{(1-\gamma^2)} \right)^{1/4} \frac{\alpha^N}{\sqrt{N}} \cos(\psi N+\psi_0/2) \left(1 + \mathcal{O}(N^{-1}) \right) \right| }, & (\Gamma^2 + \gamma^2 <1).
\end{array} \right. } \label{asymptotic form}
\end{align}
\noindent In the paramagnetic phase for $\Gamma>1$,
\begin{equation}
\Delta \approx \displaystyle{ 2(\Gamma-1) + \left(\frac{8\gamma \left( \Gamma^2+\gamma^2-1-\gamma \Gamma \sqrt{\Gamma^2 +\gamma^2 -1} \right) }{\pi(1-\gamma^2)} \right)^{1/2} \frac{\lambda_{2}^{N}}{\sqrt{N}} \left(1 + \mathcal{O}(N^{-1}) \right) },
\end{equation}
and, finally, on the critical line for $\Gamma=1$,
\begin{equation}
\Delta \approx \frac{\gamma\pi}{2 N} + \left(2\gamma- \frac{3}{2\gamma} \right) \frac{\pi^3}{48N^3} +\mathcal{O}(N^{-5}). \label{eq:gapIsing}
\end{equation}
For convenience, $\alpha$, $\lambda_2$, $\psi$ and $\psi_0$ appearing above are defined as in Ref. \cite{mccoy}
\begin{eqnarray}
\alpha &=& \sqrt{\frac{1-\gamma}{1+\gamma}}, \label{eq:alpha}
\\
\cos \psi &=& \frac{\Gamma}{(1-\gamma^2)^{\frac{1}{2}}}, \label{eq:cospsi}
\\
\psi_0 &=& \arg (\sqrt{1-\Gamma^2-\gamma^2}+i\gamma \Gamma),
\\
\lambda_2 &=& \frac{\Gamma - \left[ \Gamma^2 + \gamma^2 -1 \right]^{\frac{1}{2} } }{1-\gamma}. \label{eq:lambda2}
\end{eqnarray}
From these results, we notice that the energy gap oscillates as a function of $N$ for $\Gamma^2+\gamma^2<1$, which means that the ground state changes between subspaces with even and odd parity of $N_c$ as the system size $N$ is increasing.
On the other hand, the ground state for $\Gamma^2 + \gamma^2 >1$ always has even parity (this holds for $\Gamma >0$ and $\gamma > 0$; for negative $\Gamma$ or $\gamma$ one can easily map the model onto the one with positive values of the parameters by a suitable spin rotation, which changes the parity of the ground state for odd $N$ and $\Gamma<0$).
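For orientation, the quantities entering these asymptotic expressions are easily evaluated; a short Python sketch for a point in the oscillatory region $\Gamma^2+\gamma^2<1$ (parameter values chosen only for illustration) is
\begin{verbatim}
import numpy as np

Gamma, gamma = 0.2, 0.15    # illustrative point with Gamma**2 + gamma**2 < 1

alpha = np.sqrt((1 - gamma)/(1 + gamma))                 # decay parameter alpha
psi   = np.arccos(Gamma/np.sqrt(1 - gamma**2))           # oscillation wave number psi
psi0  = np.angle(np.sqrt(1 - Gamma**2 - gamma**2) + 1j*gamma*Gamma)

print("alpha =", alpha)
print("zeros of cos(psi*N + psi0/2) recur every", np.pi/psi, "sites")
\end{verbatim}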
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.5\hsize]{g=015-G=02andg=095-G=085}
\caption{System-size dependence of the energy gap for an anisotropic XY model in Eq.\ \eqref{eq:hamiltonian1} for $h=0$ and in the ferromagnetic phase.
Blue and red dots show the energy gap for $(\Gamma,\gamma)=(0.85,0.95)$ and $(\Gamma,\gamma)=(0.2,0.15)$, respectively. In the second case the gap falls toward zero very rapidly at periodically observed dips.}
\label{fig:3-3}
\end{center}
\end{figure}
We plot typical examples of the energy gap as a function of the system size in the ferromagnetic phase in Fig. \ref{fig:3-3}, which have been obtained by numerically evaluating Eq.\ (\ref{eq:Delta2}).
For $(\Gamma,\gamma)=(0.85,0.95)$, i.e. $\Gamma^2+\gamma^2>1$, the gap decays in a standard, exponential way. By contrast, for $(\Gamma,\gamma)=(0.2,0.15)$, i.e. $\Gamma^2+\gamma^2<1$, there are additional dips visible, which mark the oscillations.
\section{Relation between energy gap and correlation functions}
We notice that the connected correlation function in the infinite system,
\begin{eqnarray}
G_{z}^R &\equiv& \langle \sigma_z^i \sigma_z^{i+R} \rangle - \langle \sigma_z^i \rangle \langle \sigma_z^{i+R} \rangle,
\end{eqnarray}
has a qualitative behavior similar to that of the gap. We focus on the $\sigma^z\sigma^z$ correlation function here, but this observation is independent of the specific choice of local observables.
For convenience, we quote the asymptotic form of the correlation function, following Ref. \cite{mccoy}.
\noindent In the ferromagnetic phase, $\Gamma<1$,
\begin{align}
G_{z}^R \approx \displaystyle {\left\{
\begin{array}{ll}
\displaystyle{ - \frac{1}{2\pi} \lambda_2^{-2R-2} \left(1 + \mathcal{O}(R^{-1}) \right), } &( \Gamma^2 + \gamma^2>1),
\\[5mm]
\displaystyle{0}, & (\Gamma^2 + \gamma^2=1 ),
\\[5mm]
\displaystyle{ -\frac{4}{\pi} \frac{\alpha^{2R}}{ R^{2} } \Re \left[ e^{i\psi (R+1)} \left(\frac{1-e^{2i\psi} }{1-\alpha^2 e^{-2i\psi} } \right)^{1/2} \right] }
\\
\displaystyle{\Re \left[ e^{i\psi (R-1)} \left(\frac{1-\alpha^2 e^{-2i\psi} }{1-e^{-2i\psi} } \right)^{1/2} \right] \left(1 + \mathcal{O}(R^{-1}) \right), } & (\Gamma^2 + \gamma^2 <1),
\end{array}
\right. \label{eq:soukankansuu} }
\end{align}
where $\Re$ denotes the real part and $\alpha, \lambda_2$ and $\psi$ are defined in Eqs.\ \eqref{eq:alpha}-\eqref{eq:lambda2}.
\noindent In the paramagnetic phase, $\Gamma>1$,
\begin{equation}
G_{z}^R \approx \displaystyle{ -\frac{1}{2\pi} \frac{\lambda_{2}^{2R}}{R^2} } \left(1 + \mathcal{O}(R^{-1}) \right).
\end{equation}
For the critical line, $\Gamma=1$,
\begin{equation}
G_{z}^R \approx \displaystyle{ -\frac{4}{\pi^2R^2} } \left(1 + \mathcal{O}(R^{-2}) \right),
\end{equation}
and finally, for the isotropic case of $\gamma=0$ and $\Gamma < 1$,
\begin{equation}
G_{z}^R = \displaystyle{ -\frac{1}{\pi^2R^2} } \sin(\psi R )^2. \label{eq:Cisotropic}
\end{equation}
Comparison of Eqs.\ \eqref{asymptotic form}-\eqref{eq:gapIsing} and \eqref{eq:soukankansuu}-\eqref{eq:Cisotropic} reveals close similarities.
In the ferromagnetic phase, which is a first order transition from the viewpoint of our quantum annealing, the gap disappears in an exponential way on the length scale given by the correlation length in the system, which, of course, agrees with the standard qualitative prediction \cite{sachdev}.
Note however, that the period of oscillations of the gap as a function of the system size in the incommensurate phase ($\gamma^2 + \Gamma^2 <1$) coincides {\it precisely} with period of oscillations of the correlation function. Remarkably, the same observation holds true for the second order transition in the isotropic case. Indeed, for $\gamma = 0$, $\phi$ in Eq.\ \eqref{eq:isotropic_asymptotic} is exactly equal to $\psi$ in Eq.\ \eqref{eq:Cisotropic}. Trivially, this relation is also satisfied in other discussed cases, where neither the gap nor the correlation functions oscillate.
For the XY model discussed in this article this relation is not accidental and can be understood since both the correlation function and the energy gap in the finite system are closely related to the Fourier coefficients of $\cos \theta_k$ ($\sin \theta_k$) in Eqs.\ (\ref{eq:costh}-\ref{eq:sinth}) and of $\epsilon_k$ in Eq.\ \eqref{epsilonk}, respectively (see \cite{mccoy} and the Appendix A).
The asymptotic behavior of the Fourier coefficients in both cases is determined by the singularities of $\epsilon(k)$ in the complex plane, namely by $\lambda_2$. When $\lambda_2$ is real and positive there is no oscillatory behavior. On the other hand, for complex $\lambda_2$, its modulus sets the rate of decay $\alpha$ and its phase, $\psi$, determines the period of oscillation.
It is an interesting question whether such a relation holds true also for other models, especially since it can severely spoil the performance of the quantum annealing protocol.
While not directly relevant for us, we note that for systems in the thermodynamic limit there is a general and similar-looking relation linking the minima of the dispersion relation of the translationally invariant Hamiltonian (i.e. its spectrum) to the oscillations of the correlation function in the ground state \cite{Zauner2015}.
We discuss some further similarities between the gap and the correlation function in the {\it finite} system in Appendix B. In particular, if $\Gamma^2+\gamma^2<1$ and the fermion number is odd, a direct relation holds between the gap and the correlation function for finite size $N$ as follows,
\begin{eqnarray}
G_{z}^R(N , \Gamma , \gamma ) &\equiv& \langle \sigma_z^i \sigma_z^{i+R} \rangle - \langle \sigma_z^i \rangle \langle \sigma_z^{i+R} \rangle ,
\\
G_{z}^N(2N , \Gamma ,\gamma ) &=& \left( \frac{1}{2N} \frac{\partial}{\partial \Gamma} \Delta(N , \Gamma , \gamma) \right)^2.
\end{eqnarray}
This establishes a direct nontrivial relation between the correlation function and the energy gap.
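The right-hand side of this relation can be evaluated directly from the gap formula; a minimal sketch using a central finite difference in $\Gamma$ (with illustrative parameters, even $N$ and $\Gamma<1$) is
\begin{verbatim}
import numpy as np

def epsilon(k, Gamma, gamma):
    return np.sqrt((np.cos(k) + Gamma)**2 + (gamma*np.sin(k))**2)

def gap(N, Gamma, gamma):
    # |E_0^even - E_0^odd| for even N and Gamma < 1
    k1 = 2*np.pi*np.arange(N)/N
    k2 = 2*np.pi*(np.arange(N) + 0.5)/N
    return abs(epsilon(k2, Gamma, gamma).sum() - epsilon(k1, Gamma, gamma).sum())

def predicted_GzN(N, Gamma, gamma, dG=1e-6):
    # ((1/2N) dDelta/dGamma)**2, the predicted value of G_z^N(2N, Gamma, gamma)
    dDelta = (gap(N, Gamma + dG, gamma) - gap(N, Gamma - dG, gamma))/(2*dG)
    return (dDelta/(2*N))**2

print(predicted_GzN(20, 0.2, 0.15))
\end{verbatim}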
\section{Conclusion}
In this paper, we have analytically studied the energy gap of the one-dimensional quantum $XY$ model with $s=1/2$.
We first analyzed the energy gap of the isotropic $XY$ model to
show that the energy gap at the second order transition point behaves quite anomalously as a
function of the system size toward the thermodynamic limit.
This resembles the property of the energy gap of the infinite-range quantum $XY$ model,
where the gap decreases in many different ways: polynomially, exponentially, or
factorially, depending on the choice of the series of system sizes \cite{tsuda}.
Also, our expressions of the gap, Eqs.\ (\ref{kekka1}) and (\ref{kekka2}),
reproduce some of the results conjectured from numerical calculations in Ref. \cite{CV2010}.
In particular, their Eqs.\ (B3) and (B4) are corroborated by the asymptotic expansion in Eq.\ \eqref{eq:isotropic_asymptotic}.
It is an interesting problem to study what aspects of these one-dimensional
and infinite-range models are the key to the anomalous size dependence of the gap.
The continuous symmetry is one of the common features of these models, but we should
carefully investigate if this is the most essential property.
From the viewpoint of implementation of quantum annealing, it is reassuring that
the energy gap can be tuned to decrease polynomially as a function of the system
size even when the lower bound closes very quickly, because the right choice of the
series of system sizes allows us to avoid the problematic rapid closing of the gap.
This also means that one should be careful when interpreting the results
obtained from numerical simulations or other methods for a limited series
of system sizes because different properties may emerge for different series of system sizes.
Finally, we have analyzed the energy gap of the anisotropic $XY$ model at the first order transition point.
We obtained closed asymptotic expressions for the gap and notice similarities between the energy gap and the correlation function.
For instance, for both the anisotropic and isotropic cases the oscillations of the energy gap in the finite system precisely coincide with the oscillations of the correlation function in the thermodynamic limit. It is an interesting future problem to investigate whether or not similar relations hold in other models.
\subsection*{Acknowledgement}
We thank Junichi Tsuda, Kazuya Kaneko, Yuya Seki, Helmut Katzgraber and Ettore Vicari for useful discussions and comments.
The work of H.~N. was supported by the grant No. 26287086
from the Japan Society for the Promotion of Science and the work of M.M.R. by the Polish National Science Centre (NCN) grant DEC-2013/09/B/ST3/00239.
\section{Introduction}
The observation of gravitational waves (GWs) from compact binaries has become a primary avenue of scientific exploration. A plethora of events have flooded ground-based detectors, leading to the emergence of a novel era of GW astronomy and black hole (BH) spectroscopy. Systematic GW observations from LIGO and Virgo~\cite{LIGOScientific:2018mvr,Abbott:2020niy}, as well as future ground- and space-based interferometers~\cite{LISA:2017pwj,TianQin:2015yph,Hu:2017mde}, will improve our understanding of gravity in the strong-field regime~\cite{Berti:2015itd,Barack:2018yly,Perkins:2020tra}.
GWs carry pristine information on strong-field gravity, and in particular on compact objects and their environment. According to uniqueness theorems in general relativity, the merger of two isolated BHs eventually leads to a stationary BH described by at most three parameters: mass, charge (which is astrophysically expected to be negligible), and angular momentum~\cite{Robinson:1975bv,Bekenstein:1996pn,Chrusciel:2012jk,Cardoso:2016ryw}. This state is approached via a characteristic relaxation stage (the ``ringdown'') of the final distorted BH: the GW signal after coalescence is well described by a superposition of exponentially damped sinusoids.
The oscillation frequencies and decay time scales form a discrete set of complex numbers, the so-called quasinormal mode (QNM) frequencies, which contain specific information on the underlying geometry~\cite{Kokkotas:1999bd,Berti:2009kk,Konoplya:2011qq}. The relaxation (a consequence of the dissipative nature of GWs) implies, mathematically, that QNMs are (generically) not a complete set. The timescales involved are similar to the energy levels of atoms and molecules, and they can reveal the structure of the compact object producing the radiation~\cite{Chandrasekhar:1975zza,Leaver:1986gd,Cardoso:2008bp}.
The QNM spectrum of BHs in general relativity is well understood~\cite{Kokkotas:1999bd,Berti:2009kk,Konoplya:2011qq,Berti:2005ys}, while the QNM {\it content} of the signal generated by the coalescence of compact objects (i.e., the relative amplitudes and phases of the modes) is less understood, but there are good indications that several modes -- including higher overtones and different multipolar components -- are important to fully understand the signal~\cite{Leaver:1986gd,Berti:2006wq,Berti:2005ys,Baibhav:2017jhs}. The analysis of recent GW events shows evidence for more than one mode, even at relatively low signal-to-noise ratios~\cite{Isi:2019aib,Capano:2021etf}. Upgrades to the existing GW facilities should lead to routine detections of BH ringdown signals with higher signal-to-noise ratios, heralding the new field of BH spectroscopy~\cite{Berti:2005ys,Berti:2007zu,Berti:2016lat,Cardoso:2017cqb,Berti:2018vdi,Cardoso:2019rvt}.
\subsection{Black hole quasinormal mode instability}
To fully exploit the potential of BH spectroscopy we must better understand the relative excitation of each QNM~\cite{Nollert:1998ys}, the sensitivity of the BH response to fluctuations {\it close} to these resonances, and the possible instability of the spectrum itself under small perturbations of the scattering potentials. Because astrophysical BHs are not isolated, this last question is of paramount importance. Early investigations found that BH QNMs are exponentially sensitive to small perturbations, either due to (far-away or nearby) matter~\cite{Nollert:1996rf,Nollert:1998ys,Leung:1999rh,Leung:1999iq,Barausse:2014tra} or due to variations in the boundary conditions~\cite{Cardoso:2016rao,Cardoso:2016oxy,Cardoso:2019rvt}. The pioneering work by Nollert and Price~\cite{Nollert:1996rf,Nollert:1998ys} has been recently extended to more general types of ``ultraviolet'' (small-scale) perturbations~\cite{Daghigh:2020jyk,Jaramillo:2020tuu,Qian:2020cnz,Liu:2021aqh,Jaramillo:2021tmt}.
Spectral instabilities are common to other dissipative systems. A spectral analysis may be insufficient to understand the response of systems affected by such instabilities, requiring the development of alternative tools~\cite{Trefethen:2005,Sjostrand2019}.
Spectral instabilities can have important implications for physical systems. This is well illustrated by the case of hydrodynamics, where theoretical predictions of the onset of turbulent flow based on eigenvalue analyses agree poorly with experiments~\cite{Trefethen:1993}. Similarly, the introduction of non-Hermitian (non-selfadjoint) operators in PT-symmetric quantum mechanics entails that the associated spectra contain insufficient information to draw full, quantum-mechanically relevant conclusions~\cite{Krejcirik:2014kaa}. Most importantly, one-dimensional wave equations with dissipative boundary conditions -- analogous to those that apply to perturbed BHs in spherical symmetry -- suffer from similar limitations in their spectral predictions~\cite{Driscoll:1996}. The common feature among these different physical problems is their formulation in terms of non-selfadjoint operators.
For selfadjoint operators, the spectral theorem underlies the notion of normal modes (which provide an orthonormal basis) and guarantees the stability of the eigenvalues under perturbations. In other words, a small-scale perturbation to the operator leads to spectral values migrating in the complex plane within a region of size comparable to the scale of the perturbation. In stark contrast, the lack of such a theorem in the non-selfadjoint case entails, in general, the loss of completeness in the set of eigenfunctions as well as their orthogonality, possibly leading to spectral instabilities. Thus, the eigenvalues may show a strong sensitivity to small-scale perturbations. In these cases, the spectral points migrate to an extent that is orders of magnitude larger than the perturbation scale. This feature of non-selfadjoint operators (more generally, non-normal operators) is called spectral instability, and it is related to the loss of collinearity of ``left'' (bra) and ``right'' (ket) eigenvectors corresponding to a given eigenvalue.
\subsection{Pseudospectrum and universality of quasinormal modes}
The pseudospectrum is the formal mathematical concept capturing the extent to which systems controlled by non-selfadjoint operators exhibit spectral instabilities. Pseudospectral contour levels portray a ``topographical map'' of spectral migration, identifying the region in the complex plane where QNMs can migrate. Of particular importance is the behavior of pseudospectral contour levels at large real values of the QNM frequency: these are intimately related with the notion of QNM-free regions, namely the regions in the complex plane to where QNMs cannot migrate. QNM-free regions of general scatterers are known to belong to ``universal'' classes \cite{zworski2017mathematical}, whose parameters are controlled by the qualitative properties of the underlying system.
In the context of BH perturbation theory, Ref.~\cite{Jaramillo:2020tuu} presented a systematic framework to address BH QNM instability based on the notion of pseudospectrum, performing a comprehensive study of the Schwarzschild case. These results indicate a direct connection between the pseudospectral contour lines of the unperturbed Schwarzschild potential and the open branches (called ``Nollert-Price'' QNM branches in~\cite{Jaramillo:2020tuu})
formed by migrating perturbed QNM overtones, which resemble the w-mode spectra of neutron stars~\cite{Kokkotas:1992xak,ZhaWuLeu11}. Although QNMs are in principle ``free'' to move above the QNM-free regions bounded by the pseudospectral lines, the results in Refs.~\cite{Jaramillo:2020tuu,Jaramillo:2021tmt} show that QNM frequencies typically approach pseudospectral contours for perturbations of sufficiently large wave number (i.e., probing small scales) in patterns that seem independent of the detailed nature of such ultraviolet perturbations. Besides being useful to assess the spectral instability of BH QNMs, the universal asymptotics of pseudospectral contour lines are then good indicators of perturbed QNM branches, and therefore they hint at a possible universality in the asymptotics of QNM spectra of generic compact objects~\cite{Jaramillo:2020tuu}.
Another important property of BH QNM spectra is isospectrality. This property is a delicate feature of specific BH spacetimes~\cite{Chandrasekhar:1985kt,Nichols:2012jn,Cardoso:2019mqo,Moulin:2019bfh} and it is absent, for example, in compact stars. A better understanding of isospectrality breaking can offer hints of possible universal features in the QNM spectra of compact astrophysical objects. For Schwarzschild BHs, Ref.~\cite{Jaramillo:2021tmt} shows the existence of different regimes of isospectrality loss in different types of perturbed QNM branches. A systematic interpolation between BHs and compact stars -- in particular in terms of inner boundary conditions: see e.g. the work of Ref.~\cite{Maggio:2020jml}, based on the membrane paradigm -- can be used to improve our understanding of QNM isospectrality.
\subsection{The Reissner-Nordstr\"om spacetime}
In this paper we study the pseudospectrum of Reissner-Nordstr\"om (RN) BHs with mass $M$ and charge $Q$. As the closest non-trivial extension of Schwarzschild, the RN spacetime provides a well-controlled model to systematically extend the exploration of universality properties of BH pseudospectra initiated in \cite{Jaramillo:2020tuu,Jaramillo:2021tmt}, as well as to inquire into the proposed possible connection with the QNMs of generic compact objects. Indeed, the possibility of varying a parameter in a whole family of potentials makes it possible to test the universality hypotheses in a well-defined setting: if BH QNM universality, controlled by asymptotically similar pseudospectra, is violated in this simple case, this probably rules out universality in realistic settings where matter plays a role.
From a technical perspective, the RN solution allows us to test a geometrical aspect of universality, namely its ``spacetime slicing'' independence. In our approach, the calculation of BH pseudospectra relies on the so-called hyperboloidal framework~\cite{Zenginoglu:2007jw,Zenginoglu:2011jz,Ansorg:2016ztf,PanossoMacedo:2018hab,PanossoMacedo:2018gvw,PanossoMacedo:2019npm,Jaramillo:2020tuu}, where the dissipative boundary conditions at the BH horizon and in the wave zone are geometrically incorporated into the problem via a choice of constant-time slices intersecting future null infinity $\scri^+$ and the BH horizon ${\cal H}^+$. The hyperboloidal framework can be implemented using different slices, raising the question of the possible (gauge) dependence of the pseudospectrum on the adopted coordinates. The explicit construction of two independent coordinate systems for the RN spacetime~\cite{PanossoMacedo:2018hab}, reviewed in Sec.~\ref{section RN Hyperboloidal}, shows that different gauges yield consistent results, giving strong support to the geometrical nature of the pseudospectrum.
Finally, RN has features that are absent in the Schwarzschild case, such as the appearance of a family of near-extremal, long-lived modes in the extremal limit $Q\to M$~\cite{Kim:2012mh,Zimmerman:2015trm,Richartz:2015saa,Cardoso:2017soq}. These zero-quality factor modes can dominate the BH response to perturbations.
Extremal RN geometries are marginally stable under neutral massless scalar perturbations~\cite{Aretakis:2011ha,Aretakis:2011hc} and they can develop local horizon hair~\cite{Angelopoulos:2018yvt}. Moreover, gravitational-led QNMs with angular index $\ell$ coincide with electromagnetic-led QNMs with angular index $\ell-1$ in the extremal limit~\cite{Onozawa:1996ba,Okamura:1997ic,Kallosh:1997ug,Berti:2004md}, providing an intriguing testing ground for pseudospectral calculations that probe the near-resonance region. We will show that this symmetry is not broken away from the resonances.
\section{Reissner-Nordstr\"om perturbations in the hyperboloidal framework}
We are interested in static, spherically symmetric spacetimes described by the line element
\begin{equation}
ds^2 = -f(r) dt^2 + f(r)^{-1} dr^2 + r^2 \left({d \theta^2} + \sin^2\theta d\varphi^2\right),\label{line element}
\end{equation}
where $t=\text{constant}$ slices correspond to Cauchy surfaces which intersect the horizon bifurcation sphere and spatial infinity $i^0$. For charged BH spacetimes, described by the RN geometry, the (square of the) lapse function $f(r)$ is
\begin{align}
f(r)= 1-\dfrac{2M}{r} + \dfrac{Q^2}{r^2} = \left( 1 - \dfrac{r_+}{r}\right)\left( 1 - \dfrac{r_-}{r}\right),\label{lapse}
\end{align}
where $M$ and $Q$ are the BH mass and electric charge, while $r=r_-$ and $r=r_+$ are the Cauchy and event horizon radii, respectively, such that $f(r_\pm)=0$. The horizons of RN geometries are explicitly given by
\begin{equation}
\label{RN horizons}
r_{\pm} = M \pm \sqrt{M^2 - Q^2}.
\end{equation}
It is convenient to introduce a tortoise coordinate such that $r_*\in ]-\infty, +\infty[$ and $dr_*/dr=1/f(r)$. Explicitly, the tortoise coordinate reads
\beq
\label{tortoise RN}
\dfrac{r_*}{r_+} = \dfrac{r}{r_+} + \dfrac{1}{1-\kappa^2}\bigg[ \ln\left( \dfrac{r}{r_+} - 1 \right) - \kappa^4 \ln\left( \dfrac{r}{r_+} - \kappa^2 \right) \bigg],
\eeq
where\footnote{Note that Ref.~\cite{PanossoMacedo:2018hab} used a different definition for $\kappa$. One must replace $\kappa \rightarrow \kappa^2$ when comparing expressions from Ref.~\cite{PanossoMacedo:2018hab} with the ones presented here. } $\kappa \equiv {Q}/{r_+} \in [-1,1]$. In particular, the asymptotic regions in the BH exterior correspond to $r=r_+$ ($r_*\rightarrow -\infty$) and $r\rightarrow +\infty$ ($r_*\rightarrow +\infty$).
From Eq.~\eqref{RN horizons} we have $Q^2=r_+ r_-$, $2M = r_+ + r_-$, and therefore
\begin{equation}
\kappa^2=\dfrac{r_-}{r_+}, \quad \dfrac{M}{r_+} = \dfrac{1+\kappa^2}{2}.
\end{equation}
We can express $\kappa$ in terms of the more common dimensionless charge parameter $Q/M$ as
\beq
\kappa = \dfrac{{Q/M}}{1+\sqrt{1-(Q/M)^2}}.
\eeq
\subsection{Perturbations of charged black holes}
The dynamics of scalar, electromagnetic and gravitational fields in the RN background is described by a second-order partial differential (wave) equation of the form
\begin{equation}\label{evolution equation}
\left(\frac{\partial^2}{\partial t^2} -\frac{\partial^2}{\partial r_*^2} + V \right)\phi = 0.
\end{equation}
Here, $\phi$ is a master wavefunction, which is a combination of the fundamental perturbed quantities. The effective potential $V$ depends on
the nature of the field. We focus here on scalar fields, and on (polar) gravitoelectric fluctuations. The effective potential for such perturbations can be written in the compact form~\cite{Moncrief74a,Moncrief74b,Moncrief75,Chandrasekhar:1985kt,Berti:2009kk,PanossoMacedo:2018hab}
\begin{equation}
V= \dfrac{f(r)}{r^2}\left[ \ell(\ell+1) + \dfrac{r_+}{r}\left(\mu - \kappa^2 \nu \dfrac{r_+}{r}\right) \right],\label{eq:potential_RN}
\end{equation}
with
\begin{align}\label{potential factors}
\nu &= 3 n_p - 1, \quad \mu = n_p(1+\kappa^2) - (1-n_p)\mathfrak{m}_\pm, \nonumber\\
\mathfrak{m}_\pm &= \dfrac{1+\kappa^2}{4} \Bigg[1 \pm 3 \sqrt{1 + \dfrac{4\kappa^2 A}{(1+\kappa^2)^2}}\Bigg], \\
A &= \dfrac{4}{9}(\ell+2)(\ell-1),\nonumber
\end{align}
where $\ell$ is the angular index of the perturbation. In the above parametrization, scalar perturbations are recovered for $n_p=1$, whereas electromagnetic-led $(\mathfrak{m}_-)$ and gravitational-led $(\mathfrak{m}_+)$ perturbations (which reduce to electromagnetic and gravitational perturbations of Schwarzschild BHs in the uncharged limit) are recovered for $n_p=-1$.
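For concreteness, the parametrization above can be transcribed directly into a few lines of code. The following Python sketch (an illustrative transcription of ours, not the code used to produce the results below; all function names are placeholders) evaluates the effective potential for a given charge, angular index and perturbation type, with the radial coordinate measured in units of $r_+$:
\begin{verbatim}
# Illustrative transcription (ours) of the effective potential,
# Eqs. (eq:potential_RN)-(potential factors); r is measured in units of r_+.
import numpy as np

def potential_factors(kappa, ell, n_p=1, sign=+1):
    """Return (mu, nu); sign selects m_+ (gravitational-led) or m_-
    (electromagnetic-led) when n_p = -1, and is irrelevant for n_p = +1."""
    A = 4.0/9.0*(ell + 2.0)*(ell - 1.0)
    m_pm = (1 + kappa**2)/4.0*(1 + sign*3*np.sqrt(
        1 + 4*kappa**2*A/(1 + kappa**2)**2))
    nu = 3*n_p - 1
    mu = n_p*(1 + kappa**2) - (1 - n_p)*m_pm
    return mu, nu

def V_eff(r, kappa, ell, n_p=1, sign=+1):
    """Effective potential V(r) in units 1/r_+^2, with r given in units of r_+."""
    mu, nu = potential_factors(kappa, ell, n_p, sign)
    f = (1 - 1/r)*(1 - kappa**2/r)
    return f/r**2*(ell*(ell + 1) + (mu - kappa**2*nu/r)/r)

# Example: l=2 gravitational-led potential at r = 3 r_+ for Q/M = 0.5
QoverM = 0.5
kappa = QoverM/(1 + np.sqrt(1 - QoverM**2))
print(V_eff(3.0, kappa, ell=2, n_p=-1, sign=+1))
\end{verbatim}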
\subsection{The hyperboloidal framework}\label{section RN Hyperboloidal}
To study BH resonances and pseudospectra, we adopt an approach in which the relevant wave-like operators are considered on a compact spatial domain. The hyperboloidal approach provides a geometric framework to compactify the wave equation along spatial directions and, in particular, study QNMs. The advantage of this scheme lies in the fact that the outgoing boundary conditions at the event horizon and infinity, which are fundamental for dissipative systems like BHs, are geometrically imposed by shifting the Cauchy slice $\Sigma_t$ appropriately so that it intersects future null infinity $\scri^+$ and the BH event horizon ${\cal H}^+$. In what follows, we summarize the basic ingredients of this framework (see also Refs.~\cite{Zenginoglu:2007jw,Zenginoglu:2011jz,Ansorg:2016ztf,PanossoMacedo:2018hab,PanossoMacedo:2018gvw,PanossoMacedo:2019npm,Jaramillo:2020tuu} and references therein; in particular, Ref.~\cite{PanossoMacedo:2018hab} gives more details on the hyperboloidal framework in the RN spacetime employed in this work).
A practical way to introduce the hyperboloidal approach follows from the coordinate transformation
\begin{equation}
\begin{aligned}
\dfrac{t}{\lambda} = \tau - h(\sigma), \quad
\dfrac{r_*}{\lambda} = g(\sigma),
\end{aligned}\label{transformation}
\end{equation}
with $\lambda$ an appropriate length scale. The so-called height function~\cite{Zenginoglu:2007jw,Zenginoglu:2011jz} $h(\sigma)$ ``bends'' the original Cauchy slice $\Sigma_t$ so that $\tau=\text{constant}$ corresponds to hypersurfaces $\Sigma_\tau$ which penetrate the BH horizon and intersect null infinity. The function $g(\sigma)$ introduces a spatial compactification from $r_*\in\left]-\infty,\infty\right[$ to a bounded interval $\sigma\in\left[0,1\right]$. The wave zone is now explicitly included in the domain, with $\sigma=0$ and $\sigma=1$ representing future null infinity and the BH horizon, respectively.
Upon the hyperboloidal coordinate transformation \eqref{transformation}, a conformal rescaling of the line element \eqref{line element} can be performed via $d\tilde s^2 = \Omega^{2} ds^2$, with the conformal factor being directly associated to the radial coordinate $\sigma$ via $\Omega = \sigma/\lambda$. Future null infinity is then characterized by $\left.\Omega\right|_{\scri^+}=0$, whereas the conformal metric $d\tilde s^2$ is regular in the entire domain $\sigma\in[0,1]$. Ref.~\cite{PanossoMacedo:2018hab} provides explicit expressions for the conformal RN metric in the so-called minimal gauge. We will review this gauge in the next sections with focus, however, on the wave equation dictating the dynamics of scalar and gravitoelectric fluctuations propagating on this spacetime.
\subsubsection{Wave Equation}
Under the coordinate change~\eqref{transformation}, the wave equation \eqref{evolution equation} acquires the form
\begin{equation}\label{hyperboloidal evolution equation}
- \ddot \phi + L_1 \phi + L_2 \dot \phi = 0,
\end{equation}
with $\dot{} = \partial_\tau$, and the differential operators given by~\cite{Jaramillo:2020tuu}
\begin{align}\label{L1}
L_1 &= \frac{1}{w(\sigma)}\big[\partial_\sigma\left(p(\sigma)\partial_\sigma\right) - q_\ell(\sigma)\big], \\\label{L2}
L_2 &= \frac{1}{w(\sigma)}\big[2\gamma(\sigma)\partial_\sigma + \partial_\sigma\gamma(\sigma)\big].
\end{align}
The height and compactification functions $h(\sigma)$ and $g(\sigma)$ enter the above expression via
\begin{equation}\label{new operators}
\begin{aligned}
& w(\sigma)=\frac{g'^2-h'^2}{|g'|}, \ \ p(\sigma) = |g'|^{-1}, \\
& \gamma(\sigma)=\frac{h'}{|g'|}, \ \ \ \ \ \ \ \ \ q(\sigma)= \lambda^2 |g'|\;V.
\end{aligned}
\end{equation}
Fundamental for the hyperboloidal approach is the fact that $p(\sigma)$ vanishes at the domain boundaries $\sigma=0$ and $\sigma=1$. This property implies that the (Sturm-Liouville) operator $L_1$ is singular. Thus, the physically relevant solutions (describing ingoing waves at the horizon and outgoing waves at future null infinity) are those satisfying the equation's underlying regularity conditions.
The advantageous structure of Eq.~\eqref{hyperboloidal evolution equation} becomes obvious when one performs a first-order reduction in time, to rewrite it as a matrix evolution problem. By introducing $\psi=\dot \phi$, Eq.~\eqref{hyperboloidal evolution equation} reads
\begin{equation}\label{matrix evolution}
\dot u=i L u, \quad L =\frac{1}{i}\!
\left(
\begin{array}{c c}
0 & 1 \\
L_1 & L_2
\end{array}
\right), \quad u=\left(
\begin{array}{c}\phi \\ \psi \end{array}\right),
\end{equation}
which has the formal solution
\begin{equation}
\label{e:evolution_operator}
u(\tau,\sigma)=e^{iL\tau}u(0,\sigma)
\end{equation}
in terms of the (non-unitary: see Sec.~\ref{s:energy_norm} below) evolution operator $e^{iL\tau}$. By further performing a harmonic decomposition $u(\tau,\sigma)\sim u(\sigma)e^{i\omega\tau}$ in Eq.~\eqref{matrix evolution} we arrive at the eigenvalue equation
\begin{equation}\label{eigenvalue problem}
L u_n=\omega_n u_n,
\end{equation}
where $\omega_n$ is an infinite set of eigenvalues of the operator $L$, with $n\geq 0$ the mode number. Thus, the calculation of QNMs through the hyperboloidal framework ultimately translates to the eigenvalue problem of the operator $L$, which, in turn, contains information concerning the boundary conditions and the spacetime metric. Finally, since $\partial_t=(1/\lambda)\partial_\tau$, $t$ and $\lambda\tau$ ``tick'' at the same rate. Therefore, the QNMs $\omega_n$ conjugate to the two distinct temporal coordinates coincide up to a scaling constant $1/\lambda$, and the change in the time coordinate does not affect the QNM frequencies~\cite{Jaramillo:2020tuu}.
\subsubsection{Hyperboloidal framework in the Reissner-Nordstr\"om spacetime}\label{hyperboloidal}
In what follows, we choose the BH horizon as the characteristic length scale,\footnote{Note that $\lambda = 2r_+$ in Ref.~\cite{PanossoMacedo:2018hab}. This implies that the quantities $\rho(\sigma)$ and $h(\sigma)$ differ from those in~Ref.~\cite{PanossoMacedo:2018hab} by a factor of $2$.} so that $\lambda = r_+$. Ref.~\cite{PanossoMacedo:2018hab} introduces the so-called minimal gauge, in which a compactification in the radial coordinate follows from
\bea
\label{radial compactification}
r=r_+ \dfrac{\rho(\sigma)}{\sigma}, \quad \rho(\sigma) = 1 -\rho_1(1-\sigma),
\eea
where $\rho_1$ is a parameter yet to be fixed. With this choice, the horizon $r_+$ is fixed at $\sigma=1$ regardless of $\rho_1$. As expected, $r\rightarrow \infty$ is mapped to $\sigma=0$. Substituting Eq.~\eqref{radial compactification} into the tortoise coordinate \eqref{tortoise RN} leads to
\bea
\label{Tortoise RN 2}
\dfrac{r_*}{r_+} &=& \rho_1 + \dfrac{\ln(1-\rho_1)}{1-\kappa^2} + \dfrac{1-\rho_1}{\sigma} - (1+\kappa^2)\ln\sigma \nn \\
&+& \dfrac{ \ln(1-\sigma) - \kappa^4 \ln \left[ 1-\rho_1 - \sigma (\kappa^2 - \rho_1)\right]}{1-\kappa^2}.
\eea
The right-hand side of Eq.~\eqref{Tortoise RN 2} provides the overall structure for the function $g(\sigma)$. In particular, one can ignore any term not depending on $\sigma$ when defining $g(\sigma)$, because only $g'(\sigma)$ contributes to the expressions in Eq.~\eqref{new operators}. Finally, the height function in the minimal gauge reads
\bea
&h(\sigma) = g(\sigma) + h_0(\sigma), \nn \\
&h_0(\sigma) = 2\left[ (1+\kappa^2) \ln\sigma - \dfrac{1-\rho_1}{\sigma}\right].
\label{RN height}
\eea
There is still a remaining degree of freedom within the minimal gauge, that is, the choice of the parameter $\rho_1$. Below, we will describe the two available options providing us with different limits to extremality~\cite{PanossoMacedo:2018hab}: the usual extremal RN spacetime, and the near-horizon geometry given by the Robinson-Bertotti solution~\cite{Robinson59,Bertotti59}.
\smallskip
{\emph{\bf Areal radius fixing gauge.}} The simplest choice is to set $\rho_1=0$, so that $r=r_+/\sigma$. We refer to this case as the {\em areal radius fixing gauge}, since it implies $\rho(\sigma)=1$ in the above expressions. It then follows that the Cauchy horizon $r_-$ is located at $\sigma_- = \kappa^{-2}$, i.e. its location in the new compact coordinate $\sigma$ changes with the charge parameter $\kappa$. In particular, the Schwarzschild limit $\kappa=0$ yields the BH singularity $\sigma_-\rightarrow \infty$ ($r_- = 0$), whereas the event and Cauchy horizons coincide at $\sigma_+=\sigma_-=1$ in the extremal limit $|\kappa|=1$.
By fixing $\rho_1=0$, Eqs.~\eqref{Tortoise RN 2} and \eqref{RN height} lead to
\begin{align}\label{hyperboloidal1}
g(\sigma) &= \dfrac{1}{\sigma} - (1+\kappa^2) \ln\sigma + \dfrac{\ln(1-\sigma) -\kappa^4\ln(1-\kappa^2\sigma)}{1-\kappa^2}, \\ \label{hyperboloidal2}
h(\sigma) &= -\dfrac{1}{\sigma} + (1+\kappa^2) \ln\sigma + \dfrac{\ln(1-\sigma) -\kappa^4\ln(1-\kappa^2\sigma)}{1-\kappa^2}.
\end{align}
Inserting Eqs.~\eqref{hyperboloidal1} and \eqref{hyperboloidal2} into Eqs.~\eqref{new operators} yields
\begin{align}
p(\sigma) &= \sigma^2(1-\sigma)(1-\kappa^2\sigma), \\
w(\sigma) &= 4 \left[ 1 + \kappa^2(1+\kappa^2)(1-\sigma)\right] \left[1 + \sigma (1+\kappa^2)\right], \\
\gamma(\sigma) &= 1 - 2 \left[ 1 + \kappa^2(1+\kappa^2) \right]\sigma^2 + 2\kappa^2(1+\kappa^2) \sigma^3,\\
q(\sigma) &= \ell(\ell+1) +\sigma \left( \mu - \kappa^2 \nu \sigma \right),
\end{align}
which completely characterize the operator $L$ and the spectral problem of scalar and gravitoelectric perturbations in the RN spacetime. Here, the singular character of the operator $L$ becomes more evident, since $p(\sigma)$ clearly vanishes at $\sigma=0$ ($\scri^+$) and $\sigma=1$ (${\cal H}^+$), as well as at the Cauchy horizon $\sigma=\kappa^{-2}$.
From the differential equation perspective, it is important to note the different singular character of future null infinity and that of the event and Cauchy horizons. More specifically, $p(\sigma)$ behaves as $\sigma^2$ when $\sigma\rightarrow 0$, though it vanishes linearly as $\sigma \rightarrow 1$ or $\sigma \rightarrow \kappa^{-2}$, as long as $|\kappa|\neq 1$. In other words, in the subextremal case the operator $L$ possesses an essential singularity (irregular singular point) at future null infinity ($\sigma = 0$), whereas it has a removable singularity (regular singular point) at the horizons.
In the extremal case $|\kappa|=1$, however, both horizons coincide. As a consequence, $p(\sigma)$ also possesses an essential singularity at $\sigma=1$, i.e., it vanishes as $(1-\sigma)^2$. This property is a direct manifestation of the so-called discrete conformal isometry between the extremal horizon and spacetime boundaries at infinity~\cite{Lubbe:2013yia},
namely the duality between spatial infinity $i^0$ and the horizon bifurcation sphere, on the one hand, and between null infinity and the regular part of the horizon, on the other hand. Such a symmetry explains, for instance, the Aretakis instability of a massless scalar field at the extremal RN horizon~\cite{Aretakis:2011ha,Aretakis:2011hc} in terms of well-known results for the field's decay at future null infinity~\cite{Bizon:2012we}.
To better identify this symmetry in the extremal limit $|\kappa|=1$, it is convenient to map the radial coordinate $\sigma \in [0,1]$ into $x\in[-1,1]$ via the transformation
\beq
x = 2\sigma -1, \quad \sigma = \dfrac{1+x}{2}.
\eeq
Hence, the conformal isometry between the extremal horizon and spacetime boundaries at infinity is assessed via the mapping $x\rightarrow - x$. In the limit $|\kappa| \rightarrow 1$, the height and compactification functions in Eqs.~\eqref{hyperboloidal1} and \eqref{hyperboloidal2} read, in terms of the coordinate $x$:
\begin{align}
g(x) &= \dfrac{2}{1+x} - \dfrac{1+x}{1-x} - 2 \ln(1+x) + 2 \ln(1-x), \\
h(x) &= -\dfrac{2}{1+x} + \dfrac{1+x}{1-x} - 2 \ln(1+x) + 2 \ln(1-x),
\end{align}
which in turn leads to the following form for the functions entering the operator $L$ [cf.~Eq.~\eqref{new operators}]:
\beq
\label{extremal functions}
p(x)=\dfrac{(1-x^2)^2}{8},\,\, w(x)=2(4-x^2), \,\,
\gamma(x) = -\dfrac{x(3-x^2)}{2}.
\eeq
The symmetry of $p$ and $w$ as $x\rightarrow -x$ is evident.\footnote{Note that $p$ is actually associated with a second-order operator $\sim \partial^2_x$, which is also symmetric under $x\rightarrow-x$.} The odd symmetry of $\gamma \rightarrow -\gamma$ is in accordance with its definition in terms of the operator $L_2$. Indeed, $L_2$ remains invariant since $\partial_x \rightarrow - \partial_x$. Clearly, the symmetry of the operator $L_1$ ultimately depends on the behavior of the potential $q(x)$. We will study this feature in Sec.~\ref{sec Gravitoelectric}.
\smallskip
{\emph{\bf Cauchy horizon fixing gauge.}} A second option allows us to fix the Cauchy horizon $\sigma_-$ at a coordinate location that does not depend on the parameter $\kappa$. In particular, the choice $\rho_1 = \kappa^2$ fixes the Cauchy horizon at $\sigma_- \rightarrow \infty$~\cite{PanossoMacedo:2018hab}. We call this the {\em Cauchy horizon fixing gauge}. In the extremal limit $|\kappa|\rightarrow 1$, this gauge shows a discontinuous transition in the near-horizon geometry~\cite{PanossoMacedo:2018hab}.
With the choice $\rho_1 = \kappa^2$, and ignoring terms not depending on $\sigma$, one reads from Eqs.~\eqref{Tortoise RN 2} and \eqref{RN height}
\bea
g(\sigma) &=& \dfrac{1-\kappa^2}{\sigma} - (1+\kappa^2)\ln\sigma + \dfrac{ \ln(1-\sigma)}{1-\kappa^2}, \\
h(\sigma) &=& - \dfrac{1-\kappa^2}{\sigma} + (1+\kappa^2)\ln\sigma + \dfrac{ \ln(1-\sigma)}{1-\kappa^2}.
\eea
From these expressions, Eqs.~\eqref{new operators} yield
\bea
&& p(\sigma) = (1-\kappa^2)\dfrac{\sigma^2(1-\sigma)}{\rho(\sigma)^2}, \nn \\
&& q(\sigma) = \dfrac{1-\kappa^2}{\rho(\sigma)^2} \left[ \ell(\ell+1) + \dfrac{\sigma}{\rho(\sigma)}\left( \mu - \kappa^2 \nu \dfrac{\sigma}{\rho(\sigma)}\right)\right], \nn \\%\ell(\ell+1)+\sigma\left[ \mu \rho(\sigma) - \kappa^2 \nu \sigma \right]}{\rho(\sigma)^4} \nn \\
&& w(\sigma) = 4\dfrac{\sigma + \rho(\sigma)}{\rho(\sigma)^2}, \quad \gamma(\sigma) =1-\dfrac{2\sigma^2}{\rho(\sigma)^2}. \nn \\
\eea
The singular behavior in the extremal limit $|\kappa|=1$ is evident in the functions $p(\sigma)$ and $q(\sigma)$: both carry an overall factor $(1-\kappa^2)$, so that $L_1$ vanishes altogether. A regularization can however be implemented by rescaling $\tau$ with a $(1 - \kappa^2)$ factor: this is relevant in the Robinson-Bertotti solution limit (cf.~\cite{PanossoMacedo:2018hab}).
\section{Pseudospectrum}\label{pseudospectra}
Following Ref.~\cite{Jaramillo:2020tuu}, we start by discussing the intuitive notion of spectral stability, in which a perturbation of order $\epsilon$ to the underlying operator leads to perturbed QNMs migrating up to a distance of the same order $\epsilon$. This result is formally proven in the context of self-adjoint operators. More specifically, if one considers a linear operator $L$ on a Hilbert space $\mathcal{H}$ with a scalar product $\langle \cdot , \cdot \rangle$, then the adjoint $L^\dagger$ is the linear operator fulfilling $\langle L^\dagger u,v\rangle=\langle u,Lv\rangle$, for all $u, v$ in $\mathcal{H}$. The operator $L$ is normal if and only if $\left[L,L^\dagger\right]=0$. Clearly, self-adjoint operators $L=L^{\dagger}$ are normal. In this context, the spectral theorem for normal operators ensures that eigenfunctions form an orthonormal basis and that the eigenvalues are stable under perturbations of $L$.
On the contrary, non-normal operators lack a spectral theorem and lead to a weak control of eigenfunction completeness and eigenvalue stability. Thus, the analysis based solely on the spectrum may be misleading if the system is beset with spectral instability, where the perturbed eigenvalues may extend into far regions in the complex plane, despite a rather small change in the operator. Indeed, strong non-normality, occurring in quite generic settings, leads to a severely uncontrolled spectrum under perturbations of the governing operator~\cite{Trefethen:1993,Driscoll:1996,Davies99,Trefethen:2005,Davie07,Sjostrand2019,Jaramillo:2020tuu}.
The notion of pseudospectrum formally captures the sensitivity of the spectrum to perturbations~\cite{Trefethen:2005,Davie07,Sjostrand2019}. To introduce this concept, we first recall the definition of an eigenvalue as the value $\omega$ for which $\omega \mathbb{I}-L$ is singular. To intuitively enlarge this notion, one may inquire: what is the region in the complex plane in which $||\omega \mathbb{I}-L||$ is small, or equivalently, in which $||\omega \mathbb{I}-L||^{-1}$ is large?
Addressing this question naturally leads to the definition of the pseudospectrum~\cite{Trefethen:2005,Davie07,Sjostrand2019}, which states that the $\epsilon$-pseudospectrum $\sigma^\epsilon(L)$ with $\epsilon> 0$ is the set of $\omega\in \mathbb{C}$ for which $||\left(\omega\mathbb{I}-L\right)^{-1}||>\epsilon^{-1}$. The operator $\left(\omega\mathbb{I}-L\right)^{-1}$ is called the resolvent of $L$ at $\omega$. In turn, $||\left(\omega\mathbb{I}-L\right)^{-1}||$ diverges for $\omega\in\sigma(L)$, where $\sigma(L)$ is the spectrum of $L$, so that (from the very definition of the pseudospectrum) the spectrum -- a discrete set of numbers in the complex plane -- is contained in the $\epsilon$-pseudospectrum for every $\epsilon$. In other words, the $\epsilon$-pseudospectra $\sigma^\epsilon(L)$ are nested sets around the spectrum. Specifically, the norm of the resolvent maps values $\omega$ in the complex plane into real positive numbers, in such a way that the boundaries of $\sigma^\epsilon(L)$ are contour levels of the function defined by the norm of the resolvent. Such boundaries are then nested contour lines around the spectral points. As $\epsilon\rightarrow0$, $\sigma^\epsilon(L)\rightarrow\sigma(L)$. The structure of the resolvent encodes fundamental information concerning non-trivial structures in the complex plane that cannot be revealed by the spectrum alone. For instance, normal operators exhibit trivial resolvent structures that extend circularly up to order $\sim\epsilon$ around the spectrum (e.g.~Fig.~4 in Ref.~\cite{Jaramillo:2020tuu}). Non-normal operators may possess resolvent structures that extend far from the spectrum, which is an imprint of poor analytic behavior of the resolvent as a function of $\omega$.
The connection between the low regularity/analyticity of the resolvent and the underlying spectral instabilities follows from considering a perturbed operator $L+\delta L$, with perturbation norm $||\delta L||<\epsilon$. If the spectrum of the perturbed operator stays bounded in a vicinity of order $\sim\epsilon$ around $\sigma(L)$, then the operator displays spectral stability. On the other hand, if it migrates in the complex plane far from $\sigma(L)$ at distances that are orders of magnitude larger than $\epsilon$, then $L$ is spectrally unstable. This distinction can be made directly through the pseudospectrum, at the level of the non-perturbed operator $L$, without the need of systematically introducing perturbations to the operator.
We emphasize that the definition of the pseudospectrum, and therefore any statement on spectral instability, depends on the choice of the underlying scalar product $\langle \cdot , \cdot \rangle$. The scalar product fixes the norm that quantifies the notion of ``big'' or ``small'' perturbations $||\delta L||$. On physical grounds, and following~\cite{Driscoll:1996,Jaramillo:2020tuu}, we argue in favor of the system's energy as the most adequate norm to control the pseudospectra and assess spectral instability, because it conveniently encodes the size of the physical perturbation with respect to the nature of the problem (see \cite{GasJar21} for a systematic discussion of this point). On the computational side, calculating the pseudospectrum requires the numerical evaluation of the norm of the resolvent, according to the previous definition. The numerical scheme must consistently incorporate the particular choice for the norm. The next section summarizes the core concepts needed in this work.
\subsection{The energy norm}
\label{s:energy_norm}
A physically motivated choice is the so-called energy norm~\cite{Driscoll:1996,Jaramillo:2020tuu}, a natural way of framing the problem in terms of the physical energy contained in the field $\phi$ with dynamics dictated by the wave equation~\eqref{evolution equation}. Within the hyperboloidal formulation of the wave equation given by Eq.~\eqref{matrix evolution}, the energy norm for a ($\ell$-mode) vector $u$ reads~\cite{Jaramillo:2020tuu} (see \cite{GasJar21} for a full account of its relation with the total energy of the field on a spacetime slice)
\begin{align}\label{energy norm}
&||u||^2_{_{E}} = \Big|\Big|\begin{pmatrix}
\phi \\
\psi
\end{pmatrix}\Big|\Big|^2_{_{E}} \equiv E(\phi, \psi) \\
&=\frac{1}{2}\int_0^1 \left(w(\sigma)|\psi|^2
+ p(\sigma)|\partial_\sigma\phi|^2 + q(\sigma) |\phi|^2\right) d\sigma, \nn
\end{align}
where the subscript $E$ denotes the energy norm, and the integration limits correspond to the $[0,1]$ compact spatial interval in our hyperboloidal scheme. In turn, the energy scalar product that defines the norm reads
\bea
&& \langle u_1,\! u_2\rangle_{_{E}} = \Big\langle\begin{pmatrix}
\phi_1 \\
\psi_1
\end{pmatrix}, \begin{pmatrix}
\phi_2 \\
\psi_2
\end{pmatrix}\Big\rangle_{_{E}} \\
&&=
\frac{1}{2} \int_0^1 \left( w(\sigma)\bar{\psi}_1 \psi_2 + p(\sigma) \partial_\sigma\bar{\phi}_1\partial_\sigma\phi_2 + q(\sigma)\bar{\phi}_1 \phi_2 \right) d\sigma, \nn\label{energy scalar product}
\eea
from which $||u||^2_{_{E}} = \langle u, u\rangle_{_{E}}$ trivially holds. Due to the dissipative nature of the block operator $L_2$ --- see Eqs.~\eqref{L2} and \eqref{matrix evolution} --- which encodes the boundary conditions at null infinity and at the event horizon, the operator $L$ is not selfadjoint with respect to the scalar product \eqref{energy scalar product}~\cite{Jaramillo:2020tuu}: this justifies the non-unitary character of the evolution operator in Eq.~(\ref{e:evolution_operator}). We use Chebyshev spectral methods to numerically implement Eq.~\eqref{energy scalar product} in the calculation of the pseudospectrum.
\subsection{Chebyshev's spectral method}
There are various methods to compute BH QNMs~\cite{Leaver85,Iyer:1986np,Berti:2009kk,Konoplya:2011qq}. The differential operator \eqref{eigenvalue problem} can be discretized and turned into a matrix problem using Chebyshev's spectral method~\cite{Trefethen:2000,Trefethen:2005}.
The compactified domain of the operator $L$, $\sigma\in\left[0,1\right]$, is discretized with $N+1$ Chebyshev-Lobatto interpolation grid points, while for the discretization of the differential operators we utilize Chebyshev differentiation matrices~\cite{Trefethen:2000}. Overall, the resulting matrix $L$ has dimensions $\mathcal{N}\times \mathcal{N}$, with $\mathcal{N}=2(N+1)$, where the factor of $2$ comes from the first-order reduction in time, so $(N+1)$ values are used for each $\phi$ and $\psi$. Once $L$ is discretized, the BH QNMs follow straightforwardly from the eigenvalues of the matrix.
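As an illustration of this strategy, the following minimal Python sketch (a double-precision transcription written for this discussion, not the extended-precision implementation used to produce the figures below; names such as \texttt{build\_L} are placeholders) assembles the Chebyshev-discretized operator $L$ in the areal radius fixing gauge, using the functions $p$, $w$, $\gamma$ and $q$ of Sec.~\ref{hyperboloidal}, and extracts the QNM frequencies as its eigenvalues. At this precision only the slowest-damped overtones can be trusted:
\begin{verbatim}
# Minimal double-precision sketch of the Chebyshev discretization of L in the
# areal radius fixing gauge (illustrative only: the results quoted in the text
# use extended precision, and only the lowest overtones are reliable here).
import numpy as np

def cheb(N):
    """Chebyshev-Lobatto grid on [-1,1] and differentiation matrix (Trefethen)."""
    x = np.cos(np.pi*np.arange(N + 1)/N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0])*(-1.0)**np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0/c)/(dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def build_L(N, kappa, ell, n_p=1, sign=+1):
    """Discretized operator L of Eq. (matrix evolution), areal radius gauge."""
    D, x = cheb(N)
    sig = (x + 1.0)/2.0   # sigma in [0,1]: sigma=0 is scri+, sigma=1 the horizon
    Ds = 2.0*D            # d/dsigma = 2 d/dx
    A = 4.0/9.0*(ell + 2.0)*(ell - 1.0)
    m_pm = (1 + kappa**2)/4.0*(1 + sign*3*np.sqrt(
        1 + 4*kappa**2*A/(1 + kappa**2)**2))
    nu, mu = 3*n_p - 1, n_p*(1 + kappa**2) - (1 - n_p)*m_pm
    p = sig**2*(1 - sig)*(1 - kappa**2*sig)
    w = 4*(1 + kappa**2*(1 + kappa**2)*(1 - sig))*(1 + sig*(1 + kappa**2))
    gam = (1 - 2*(1 + kappa**2*(1 + kappa**2))*sig**2
           + 2*kappa**2*(1 + kappa**2)*sig**3)
    q = ell*(ell + 1) + sig*(mu - kappa**2*nu*sig)
    W = np.diag(1.0/w)
    L1 = W @ (Ds @ np.diag(p) @ Ds - np.diag(q))
    L2 = W @ (2*np.diag(gam) @ Ds + np.diag(Ds @ gam))
    Id, Z = np.eye(N + 1), np.zeros((N + 1, N + 1))
    return np.block([[Z, Id], [L1, L2]])/1j

QoverM = 0.5
kappa = QoverM/(1 + np.sqrt(1 - QoverM**2))
L = build_L(N=100, kappa=kappa, ell=2, n_p=-1, sign=+1)  # l=2 gravitational-led
omega = np.linalg.eigvals(L)                             # dimensionless r_+ omega
qnm = omega[(np.abs(omega.real) > 0.1) & (omega.imag > 0)]  # drop branch-cut modes
print(sorted(qnm, key=lambda z: z.imag)[:4])             # slowest-damped modes
\end{verbatim}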
To calculate matrix norms and pseudospectra, one must also implement the Chebyshev-discretized version of the energy scalar product norm~\eqref{energy scalar product} via
\begin{equation}
\langle u, v\rangle_{_E} = (u^*)^i G^{E}_{ij}v^j = u^*\cdot G^{E}\cdot v, \ \ u,v \in \mathbb{C}^\mathcal{N},
\end{equation}
with $u^*$ the conjugate transpose of $u$. The construction of the Gram matrix $G^E_{ij}$ corresponding to \eqref{energy scalar product} is detailed in Appendix A of Ref.~\cite{Jaramillo:2020tuu}, and the adjoint operator $L^\dagger$ reads
\begin{equation}\label{adjoint}
L^\dagger = \left(G^E\right)^{-1}\cdot L^*\cdot G^E.
\end{equation}
Finally, the $\epsilon$-pseudospectrum $\sigma^\epsilon_E(L)$ in the energy norm is given by~\cite{Jaramillo:2020tuu}
\begin{equation}
\label{pseudospectra energy norm}
\sigma^\epsilon_{_E} (L) = \{\omega\in\mathbb{C}: s_{_E}^\mathrm{min}(\omega \mathbb{I}- L)<\epsilon\},
\end{equation}
where $s_{_E}^\mathrm{min}$ denotes the smallest generalized singular value, computed from the adjoint taken with respect to the energy scalar product, i.e. the square root of the smallest eigenvalue of $M^\dagger M$:
\begin{equation}
\label{svd definition}
s_{_E}^\mathrm{min}(M) = \sqrt{\min \sigma\left(M^\dagger M\right)}, \quad M=\omega \mathbb{I}- L.
\end{equation}
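In practice, Eqs.~\eqref{adjoint} and \eqref{svd definition} reduce to standard linear algebra once the Gram matrix is available: writing the Cholesky factorization $G^E = F^\dagger F$, the energy-norm singular value becomes an ordinary one, $s_{_E}^\mathrm{min}(M) = s^\mathrm{min}(F M F^{-1})$, and the pseudospectrum is obtained by scanning $\omega$ over a grid in the complex plane. The following sketch (ours; it uses a small toy matrix and the Euclidean norm $G=\mathbb{I}$ as placeholders, whereas for the BH problem $L$ is the discretized operator discussed above and $G$ the Gram matrix discretizing Eq.~\eqref{energy scalar product}) illustrates the procedure:
\begin{verbatim}
# Sketch (ours) of the energy-norm pseudospectrum evaluation: given L and a
# Hermitian positive-definite Gram matrix G with <u,v>_E = u* G v, the adjoint
# is L_dag = G^{-1} L* G and s_E^min(w I - L) = sigma_min(F (w I - L) F^{-1}),
# where G = F^H F is the Cholesky factorization.
import numpy as np

def smin_energy(L, G, w):
    F = np.linalg.cholesky(G).conj().T     # G = F^H F, F upper triangular
    M = F @ (w*np.eye(L.shape[0]) - L) @ np.linalg.inv(F)
    return np.linalg.svd(M, compute_uv=False)[-1]

def pseudospectrum_map(L, G, re_vals, im_vals):
    """log10(s_E^min) sampled on a rectangular grid of complex frequencies."""
    return np.array([[np.log10(smin_energy(L, G, re + 1j*im)) for re in re_vals]
                     for im in im_vals])

# Toy demonstration with a small non-normal matrix and G = identity (Euclidean
# norm); in the BH problem L is the discretized operator of the previous sketch
# and G the Gram matrix discretizing the energy scalar product.
Ltoy = np.array([[1.0, 100.0],
                 [0.0, 2.0]])
Gtoy = np.eye(2)
levels = pseudospectrum_map(Ltoy, Gtoy, np.linspace(0.25, 2.75, 6),
                            np.linspace(-1.0, 1.0, 5))
print(levels.round(2))
\end{verbatim}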
\subsection{QNM-free regions: logarithmic boundaries and pseudospectra}\label{QNM free region}
We conclude our general discussion of the pseudospectrum with a short summary of ``universality classes''~\cite{trefethen2005spectra,zworski2017mathematical,Sjostrand2019}. As discussed in Sec.~\ref{pseudospectra}, the $\epsilon$-pseudospectrum $\sigma^\epsilon(L)$ determines the maximal region in the complex plane that QNMs can reach under perturbations of norm $\epsilon$ on the operator $L$. Equivalently, the regions beyond a given $\epsilon$-pseudospectrum boundary are called QNM-free (or resonance-free) regions.
The study of the asymptotics of QNM-free regions is crucial to assess the existence of QNM-free strips around the real axis, so that the notion of fundamental (or principal) resonance, understood as the closest to the real axis, makes sense. This is fundamentally related to the study of the decay of waves scattered by a resonator (in our case, the BH), and therefore is key in the context of GW ringdown.
The QNM-free regions fall into different ``universality classes'' according to their asymptotic behavior for large real parts of the frequencies:
\bea
\label{e:resonance_free_regions_1}
\mathrm{Im}(\omega) < F(\mathrm{Re}(\omega)), \ \ \mathrm{Re}(\omega)\gg 1,
\eea
where $F(x)$ is a real function controlling the asymptotics. The mathematical literature (cf. e.g. \cite{zworski2017mathematical,dyatlov2019mathematical}) identifies several possibilities, such as
\bea
\label{e:resonance_free_regions_2}
F(x) =
\left\{
\begin{array}{rcl}
&\hbox{(i)}& \ e^{\alpha x}, \ \alpha>0 \\
&\hbox{(ii)}& \ \tilde{C}, \\
&\hbox{(iii)}& \ C\ln(x), \\
&\hbox{(iv)}& \ \gamma \; x^\beta,\, \beta\in\mathbb{R},\, \gamma> 0
\end{array}
\right.
\eea
where $C, \tilde{C}, \alpha, \beta, \gamma$ are constants. The specific form of $F$ and the constants in Eqs.~(\ref{e:resonance_free_regions_2}) depend on qualitative features of the underlying effective potential $V$ and on the boundary conditions of the problem. Typical behaviors in our setting belong to cases (iii) or (iv) (see~\cite{zworski2017mathematical}). In particular, logarithmic resonance-free regions of class (iii) appear in the setting of generic scattering by impenetrable obstacles or by potentials (either of compact support or extending to infinity) when we allow for low regularity~\cite{Regge58,LaxPhi71,LaxPhi89,Vainb73,zworski2017mathematical,dyatlov2019mathematical,Sjoes90,Marti02,SjoZwo07}. In the case of potentials and/or boundary conditions with enhanced regularity, the more stringent power-law (iv) controls the QNM-free regions.
For the Schwarzschild potential, numerical investigations~\cite{Jaramillo:2021tmt} have shown that the QNM-free regions are asymptotically bounded from below by logarithmic curves of the form
\bea
\label{e:log_branches}
\mathrm{Im}(\omega) \sim C_1 + C_2 \ln \big[\mathrm{Re}(\omega) + C_3\big].
\eea
In the asymptotic regime $\mathrm{Re}(\omega)\gg 1$ the constants $C_1$ and $C_3$
can be neglected, leading to case (iii) in Eq.~(\ref{e:resonance_free_regions_2}). Nevertheless, it is remarkable that by adding nonzero constants $C_1$ and $C_3$ the logarithmic behavior holds also at intermediate values of $\mathrm{Re}(\omega)$ and even close to the actual QNM spectrum (in a region one can hardly consider as asymptotic).
These features will be investigated in detail below, confirming the preliminary results in~\cite{Jaramillo:2021tmt}
and, more importantly, providing a systematic extension and refinement that is crucial for the assessment of asymptotic universality.
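In practice, fits of the form \eqref{e:log_branches} can be performed with a standard nonlinear least-squares routine. The short sketch below (with synthetic placeholder data standing in for points read off a pseudospectral contour, not our actual contour output) illustrates the procedure:
\begin{verbatim}
# Sketch of the logarithmic fit Im(w) = C1 + C2*ln[Re(w) + C3] used to
# characterize pseudospectral contour lines; the data below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def log_branch(x, C1, C2, C3):
    return C1 + C2*np.log(x + C3)

re_w = np.linspace(1.0, 20.0, 40)                               # Re(w) samples
im_w = 0.3 + 1.1*np.log(re_w + 0.7) + 0.01*np.random.randn(re_w.size)  # mock Im(w)

popt, _ = curve_fit(log_branch, re_w, im_w, p0=[0.0, 1.0, 1.0])
print("C1, C2, C3 =", popt)
\end{verbatim}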
\section{The pseudospectrum of Reissner-Nordstr\"om black holes}
In what follows we discuss the pseudospectrum of subextremal and extremal RN BHs, calculated (unless specified otherwise) within the areal radius fixing gauge. To ensure agreement with the known values of the QNM frequencies~\cite{Chandrasekhar:1985kt,Kokkotas:1988fm,Richartz:2014jla,Richartz:2015saa} we typically use $N=200$ grid points and set the internal precision in all calculations to $20\times\text{MachinePrecision}\sim 300$ digits. We scanned the part of the complex plane shown in our plots with resolution $\sim250\times 150$, corresponding to $\sim 3.8\times 10^4$ points.
\begin{figure}[t]
\includegraphics[scale=0.34]{Scalar_Pseudospectra_Q=0_5M_l0_N=200}
\includegraphics[scale=0.34]{Gravitoelectric_Pseudospectra_Q=0_5M_Z1l1_N=200}
\includegraphics[scale=0.34]{Gravitoelectric_Pseudospectra_Q=0_5M_Z2l2_N=200}
\caption{Pseudospectra of a BH with charge $Q/M=0.5$. Top: $\ell=0$ scalar QNMs (red dots) and $\epsilon$-pseudospectra boundaries (white lines). Middle: $\ell=1$ electromagnetic-led QNMs and $\epsilon$-pseudospectra boundaries. Bottom: same for $\ell=2$ gravitational-led QNMs. The contour levels $\log_{10}(\epsilon)$ range from $-55$ (top level) to $-5$ (bottom level) in steps of $5$. The blue dots designate branch-cut (non-convergent) modes.}
\label{grel}
\end{figure}
In the asymptotically flat setting of RN spacetime and due to the inclusion of (non-regular) null infinity in the grid of the compactified hyperboloidal formulation, the counterpart of the power-law decay of asymptotically flat effective potentials -- which is responsible for the branch cut in the Green's function of Eq.~\eqref{evolution equation}~\cite{Leaver:1986gd} and for the polynomial late-time tails~\cite{Price:1971fb,Gundlach:1993tp,Gundlach:1993tn} -- is seeded into the operator $L$. This means that the operator $L$ contains both the discrete set of QNMs (the ``point spectrum'') and a continuous spectrum.
With the discretized approximate operator $L$, the continuous spectrum turns into a discrete set of points along the positive imaginary axis. In contrast to QNMs, which converge to known values~\cite{Chandrasekhar:1985kt,Kokkotas:1988fm,Richartz:2014jla,Richartz:2015saa}, these additional ``discrete'' eigenvalues of the approximate operator $L$ do not converge as $N\rightarrow\infty$. On the contrary, they accumulate on the positive imaginary axis along the expected branch cut. We will refer to these points as ``branch-cut'' modes. We must take these points into account when calculating pseudospectra, since they are fundamentally entangled with the discretized formulation of the spectral problem. For clarity we visualize them as blue points in the figures that follow (in contrast to converging QNMs, shown as red points).
\subsection{Spectral instability: the pseudospectra}
In this section we discuss, for illustration, the pseudospectrum of BHs with charge $Q/M=0.5$. Our results for scalar $\ell=0$, electromagnetic-led $\ell=1$ and gravitational-led $\ell=2$ perturbations are shown in Fig.~\ref{grel} (note the different units for the frequency, as compared with the Schwarzschild case in \cite{Jaramillo:2020tuu}, where a normalization $4M\omega$ is used). More specifically, Fig.~\ref{grel} displays both the spectra (red dots) and the underlying pseudospectra of the respective operators. As expected, the QNMs appear on the upper half of the complex plane, reflecting modal stability, and they concentrate in the vicinity of the imaginary axis. The pseudospectral contour lines, shown in white, correspond to different values of $\epsilon$, as shown in the log-scale color bar.
Although the pseudospectra of non-normal operators, such as those shown in Fig.~\ref{grel}, still form circular sets if we zoom arbitrarily close to the spectrum, their large-scale global structure presents open sets. These extend into large regions of the complex plane even for small $\epsilon$, indicating spectral instability. This property implies that small-scale perturbations $||\delta L||_E<\epsilon$ can lead to perturbed spectra which migrate into regions that are much further away than $\epsilon$. Such a picture is in sharp contrast with Fig.~\ref{spectral_stability} of Appendix \ref{stability} (see also Fig. 4 in \cite{Jaramillo:2020tuu} and the discussion in \cite{GasJar21}), where nested circular sets of radius $\sim\epsilon$ form around non-perturbed QNMs. In that sense, we can differentiate between spectral stability and instability according to the topographic structure of pseudospectra.
\begin{figure}\hspace{0cm}
\includegraphics[scale=0.39]{Pseudospectrum_vs_Q}
\caption{Comparison of pseudospectral levels for gravitational-led $\ell=2$ QNMs with varying BH charge. Here, $Q/M=0$, $0.5$ and $1$ correspond to red, green and blue contours, respectively. The red and green contour levels $\log_{10}(\epsilon)$ range from $-55$ (top level) to $-5$ (bottom level) in steps of $5$, while the blue ones range from $-50$ (top level) to $-5$ (bottom level) in steps of $5$.}
\label{Q_dependence}
\end{figure}
For the particular case of subextremal RN BHs, the structure of pseudospectra close to the imaginary axis depends on $\ell$. Higher angular indices move QNMs further away from the imaginary axis (higher real parts), as expected from the correspondence between QNMs and photon orbits~\cite{Berti:2005eb,Cardoso:2008bp}, and change the structure of the pseudospectra in a similar way. Asymptotically (beyond the region of non-perturbed QNMs), the pseudospectral boundaries have a logarithmic dependence, similar to that found in the Schwarzschild spacetime~\cite{Jaramillo:2021tmt}. We will return to this point in Sec.~\ref{asymptotics} below.
We observe a close similarity between the $\ell=2$ gravitational-led QNMs and pseudospectra in the bottom panel of Fig.~\ref{grel} and the $\ell=2$ gravitational QNMs and pseudospectra of the Schwarzschild spacetime (see Fig.~11 in~\cite{Jaramillo:2020tuu}, where $Q/M=0$). In Fig.~\ref{Q_dependence} we present a direct comparison of pseudospectral levels for selected values of $Q/M$. The pseudospectral contours for different BH charges are remarkably similar, consistently with the asymptotic universality discussed above, with a charge-dependent offset which increases as $Q/M\rightarrow1$ and as $\log_{10}(\epsilon)\rightarrow-\infty$.
\subsection{The areal radius and Cauchy fixing gauges}\label{sec:RadialCauchy}
Before discussing extremal geometries, it is important to address possible technical issues arising when taking the extremal limit. As discussed in Sec.~\ref{hyperboloidal}, the hyperboloidal areal radius fixing gauge and the Cauchy horizon fixing gauge have different limits to extremality, and the case $Q/M=1$ can only be studied within the areal radius fixing gauge.
If the pseudospectrum has a geometrical origin, it should depend only mildly on this gauge choice. We will show below that this is indeed the case by computing pseudospectra for subextremal BHs in two different gauges. Note that a similar issue appears when calculating the QNM spectrum in the extremal limit. For example, Leaver's continued fraction algorithm~\cite{Leaver90} is suitable for sub-extremal BHs, but it fails in a continuous limit to extremality. A modified version of Leaver's algorithm is only applicable if the extremal condition $Q/M=1$ is imposed at the onset of the calculation~\cite{Onozawa:1995vu,Richartz:2015saa}.
These algorithms consider the problem in the frequency domain, and the strategy to incorporate the boundary conditions and regularize the underlying ordinary differential equation can be understood from a spacetime perspective in terms of the hyperboloidal framework~\cite{PanossoMacedo:2018hab}. In particular, Ref.~\cite{PanossoMacedo:2018hab} shows that the Cauchy horizon fixing gauge naturally yields the regularization factor in the frequency domain employed by Leaver~\cite{Leaver90}. The technical failure in the algorithm to reach the extremal limit is then geometrically understood as a discontinuous transition to the near-horizon geometry. Similarly, the regularization factor in the frequency domain employed by Onozawa~\cite{Onozawa:1995vu} follows naturally if the wave equation is initially written in the areal radius fixing gauge, which has a well-behaved extremal limit.
\begin{figure}[t!]\hspace{-0.8cm}
\includegraphics[scale=0.39]{Radial_vs_Cauchy_gauge_Q=0_5M_Z2l2_N=200}
\caption{Comparison of the $\ell=2$ gravitational-led $\epsilon$-pseudospectra for a RN BH with $Q/M=0.5$. The pseudospectra in red were computed in the areal radius fixing gauge, while those in blue were computed in the Cauchy horizon fixing gauge. In both cases, the contour levels $\log_{10}(\epsilon)$ range from $-55$ (top level) to $-5$ (bottom level) in increments of $5$.}
\label{radial_vs_Cauchy}
\end{figure}
\begin{figure}
\includegraphics[scale=0.35]{Scalar_Pseudospectra_Q=M_l0_N=200}
\includegraphics[scale=0.35]{Scalar_Pseudospectra_Q=M_l0_N=200_zoom}
\caption{Top: $\ell=0$ scalar QNMs (red dots) and $\epsilon$-pseudospectra boundaries (white lines) of a RN BH with $Q/M=1$. The contour levels $\log_{10}(\epsilon)$ range from $-45$ (top level) to $-5$ (bottom level) in steps of $5$. Bottom: zoomed region around the first few QNMs of the top panel. The contour levels $\log_{10}(\epsilon)$ range from $-23$ (top level) to $-3$ (bottom level) in steps of $2$. The blue dots designate branch-cut (non-convergent) modes.}
\label{scalar}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.34]{Pseudospectra_l=0_RN_Q=0_5M_lower_half_zoom}
\includegraphics[scale=0.34]{Pseudospectra_l=0_RN_Q=M_lower_half_zoom}
\caption{Left: zoom-in of the region around the real axis (shown as a dashed black line) of $\ell=0$ scalar $\epsilon$-pseudospectral boundaries (white lines) for a RN BH with $Q/M=0.5$. Right: same, but for $Q/M=1$. A single zero-frequency QNM (shown as a red dot) exists in this case. The contour levels $\log_{10}(\epsilon)$ range from $-6$ (top level) to $-2$ (bottom level) in steps of $1$. The blue dots designate branch-cut (non-convergent) modes.}
\label{scalar_zoom}
\end{figure*}
\begin{figure}\hskip -2ex
\includegraphics[scale=0.39]{Pseudospectra_subextremal_vs_extremal_RN_contours}
\caption{Overplotted contours from Fig.~\ref{scalar_zoom}, with the same parameters and contour levels.}
\label{contour_overplot_zoom}
\end{figure}
Let us return to the issue of the slicing dependence of the pseudospectrum of subextremal RN BHs. Figure~\ref{radial_vs_Cauchy} shows the $\epsilon$-pseudospectra of gravitational-led QNMs for the two available hyperboloidal slices in a RN spacetime with $Q/M=0.5$, using exactly the same $\epsilon$ contour levels for both cases. We omit QNM frequencies from Fig.~\ref{radial_vs_Cauchy} for clarity. The pseudospectral contour lines reaching low overtones (large $\epsilon$) nearly coincide, but they separate as $\epsilon$ decreases. This can be understood in terms of a slight ``renormalization'' between the induced matrix norm in the areal radius and Cauchy fixing gauges, which is a consequence of using different functions in the construction of the operators $L_1$ and $L_2$. The (set of) pseudospectral contours are, to a very good approximation, gauge-independent, especially for small $\epsilon$. They have not only the same asymptotic logarithmic behavior (\ref{e:log_branches}) for both gauge choices, as expected, but also very similar values for the constants $C_1, C_2,$ and $C_3$. This demonstrates that the bulk properties of the pseudospectral levels do not depend on the choice of gauge, supporting the notion that the pseudospectrum is a geometrical property of the spacetime.
\subsection{The extremal limit}\label{sec extremal}
\subsubsection{Scalar potential}
We first consider scalar fields in the extremal limit. The pseudospectrum is depicted in Fig.~\ref{scalar}, extending to extremality the results presented in Fig.~\ref{grel}. The qualitative features of the pseudospectrum do not depend on the spin of the perturbing field. Note, however, that the spectra of extremal and near-extremal BHs are markedly different. In particular, near-extremal RN BHs have a family of slowly damped modes~\cite{Kim:2012mh,Zimmerman:2015trm,Richartz:2015saa,Cardoso:2017soq}. If we define $\delta \ll 1$ as
\beq
Q=M\left (1-\frac{\delta^2}{2}\right),
\eeq
then this family is well described by the purely damped modes
\beq
M\omega=i \delta (n+\ell +1),\qquad n=0,\,1,\,2...
\eeq
Our numerical results agree with these predictions. For example, for $Q=0.999M$ $(\delta=0.0447214)$ we find the fundamental modes $M\omega=0.045125 i$ ($\ell=0$) and $M\omega=0.0899565 i$ ($\ell=1$), to be compared with the predicted values $0.0447214 i$ and $0.0894427 i$, respectively.
The pseudospectrum in the extremal case (see Fig.~\ref{scalar}), however, has the same qualitative log-like asymptotic behavior as in subextremal BHs, despite the existence of a zero-frequency QNM with $M\omega=0$, associated to the Aretakis instability~\cite{Aretakis:2011ha,Aretakis:2011hc,Angelopoulos:2018yvt}. The existence of a mode which lies at the origin of the complex plane is of key importance to transient instabilities that are typically resolved with pseudospectra: see e.g. the discussion of the transition to turbulence in hydrodynamics in Ref.~\cite{Trefethen:1993}. In the right panel of Fig.~\ref{scalar_zoom} we observe that the pseudospectral contour levels cross to the lower half of the complex plane for $\log_{10}(\epsilon)=-4$, or $\epsilon=10^{-4}$, which translates to an unstable perturbed spectrum. In fact, the transition occurs for similar levels in Ref.~\cite{Trefethen:1993}. This could be tantalizing evidence of analogies between BH spacetime instabilities and hydrodynamics. Unfortunately the evidence is inconclusive, because we observe a similar behavior for subextremal BHs (left panel of Fig.~\ref{scalar_zoom}). Contour lines of extremal RN dive slightly deeper into the unstable QNM region, as shown in Fig.~\ref{contour_overplot_zoom}, but we cannot draw definite conclusions due to the existence of the branch-cut eigenvalues, which ``poison'' the spectrum of the discrete operators and accumulate arbitrarily close to the origin. The present analysis is -- to our knowledge -- the first attempt to resolve transient BH instabilities through pseudospectra, but further work is required to draw solid conclusions.
\begin{figure*}\hspace{-1cm}
\includegraphics[scale=0.36]{Gravitoelectric_Pseudospectra_Q=M_Z2l2_N=200}\hskip 3ex
\includegraphics[scale=0.41]{Gravitoelectric_Pseudospectra_Q=M_Z1l1_Z2l2_N=200_isospectrality}
\caption{Left: $\ell=2$ gravitational-led QNMs (red dots) and $\epsilon$-pseudospectra boundaries (white lines) of a RN BH with $Q/M=1$. Right: superimposed $\ell=1$ electromagnetic-led (black lines) and $\ell=2$ gravitational-led (green dashed lines) $\epsilon$-pseudospectral contours of a RN BH with $Q/M=1$. In both cases, the contour levels $\log_{10}(\epsilon)$ range from $-50$ (top level) to $-5$ (bottom level) in steps of $5$. The blue dots designate branch-cut (non-convergent) modes.}
\label{grelext}
\end{figure*}
\subsubsection{Gravitoelectric potential}\label{sec Gravitoelectric}
\begin{figure*}[t]
\includegraphics[scale=0.39]{Asymptotic_Pseudospectra_Q=0_5M_Z1l2_Z2l2_Scalarl2_N=200_near}\hskip 3ex
\includegraphics[scale=0.39]{Asymptotic_Pseudospectra_Q=0_5M_Z1l2_Z2l2_Scalarl2_N=200_far}
\caption{Superimposed scalar (red), electromagnetic-led (green) and gravitational-led (blue) pseudospectral levels for a RN BH with $Q/M=0.5$. All perturbations share the same angular index $\ell=2$. The contour levels $\log_{10}(\epsilon)$ for the left plot range from $-50$ (top level) to $-5$ (bottom level) in steps of $5$; for the right plot, they range from $-20$ (top level) to $-2$ (bottom level) in steps of $2$.}
\label{asympt}
\end{figure*}
\begin{figure}[t]
\includegraphics[scale=0.38]{kinkposition_for_varying_N_epsilon37}
\caption{Position on the real axis of the asymptotic kink $Re(M\omega)_\text{kink}$ for the $\ell=2$ gravitational-led pseudospectral levels of a RN BH with $Q/M=0.5$, as a function of the number of grid interpolation points $N$ (see right panel of Fig.~\ref{asympt}).}
\label{kink}
\end{figure}
We now return to gravitoelectric perturbations. We focus on $\ell-1$ electromagnetic-led and $\ell$ gravitational-led perturbations, which are known to share the same spectra at extremality~\cite{Onozawa:1996ba,Okamura:1997ic,Kallosh:1997ug,Berti:2004md}.
The left panel of Fig.~\ref{grelext} shows the QNM spectra and pseudospectra of $\ell=2$ gravitational-led perturbations, while in the right panel we overplot the pseudospectral contour levels of $\ell=1$ electromagnetic and $\ell=2$ gravitational-led perturbations. It is apparent that the isospectrality involved in this case is of a stronger nature. It is not only the spectra that coincide in both cases, but rather the entire pseudospectrum. This occurs because the Green's function as a whole is the same in the two cases.
To understand this claim, recall that the isospectrality between $\ell-1$ electromagnetic-led and $\ell$ gravitational-led fields on an extremal RN geometry is of a different nature from the axial/polar parity isospectrality in the Schwarzschild spacetime~\cite{Chandrasekhar:1985kt}. Here, the isospectrality follows from an invariance of the underlying potential under transformations exchanging the horizon and infinity: see Eq.~(22) in Ref.~\cite{Onozawa:1996ba}, where $r_*\rightarrow -r_*$.
As discussed in Sec.~\ref{hyperboloidal}, this symmetry is not only restricted to the gravitoelectric potential, but it is a discrete isometry underlying the conformal geometry of extremal BH spacetimes~\cite{Lubbe:2013yia}. In our compactified hyperboloidal coordinates, Eqs.~\eqref{extremal functions} capture the symmetry within the functions in the operator $L$. Comparing the functions present in $L$, the hyperboloidal gravitoelectric potential reads
\beq
q_{(\pm)} = \ell (\ell \mp x) - \left(1 - x^2 \right)
\eeq
for both the $\ell-1$ electromagnetic-led $(-)$ and the $\ell$ gravitational-led $(+)$ fields.
Thus, the symmetry $x\rightarrow -x$ which maps the horizon to infinity at the level of the conformal geometry is also responsible for mapping electromagnetic-led and gravitational-led potentials.
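Written out, this follows directly from the expression above,
\beq
q_{(-)}(-x)=\ell\left(\ell-x\right)-\left(1-x^{2}\right)=q_{(+)}(x),
\eeq
i.e., the reflection carries the electromagnetic-led potential into the gravitational-led one.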
It then follows that the operators for the gravitational and electromagnetic sectors in the extremal RN spacetime are essentially the same, i.e.,
\beq
L^{(+)}_\ell(x) = L^{(-)}_{\ell -1}(-x).
\eeq
As a consequence, not only do the eigenvalues $\omega^{(+)}_\ell$ and $\omega^{(-)}_{\ell-1}$ coincide, but so do the eigenvectors $v^{(+)}_{\ell}(x) = v^{(-)}_{\ell-1}(-x)$ and the entire pseudospectrum.
\subsection{Asymptotic universality of pseudospectra}\label{asymptotics}
Figure~\ref{asympt} demonstrates the asymptotic structure of three different spectral problems, namely the $\ell=2$ scalar, electromagnetic-led and gravitational-led QNMs of a RN BH with $Q/M=0.5$. The lower pseudospectral levels of the three potentials coincide already at small frequencies, and right beyond the respective QNMs, while higher pseudospectral levels begin to agree with each other only at larger frequencies. Therefore, besides the local region in the non-perturbed QNM vicinity, we find support for an asymptotic universality shared by a whole class of effective potentials, with similar logarithmic patterns.
An interesting aspect of the asymptotic pseudospectral contours is evident in the right panel of Fig.~\ref{asympt}, where we observe a kink at $\text{Re}(M\omega)\sim 6$. In the range of our analysis, the value of $\text{Re}(M\omega)$ at which the kink occurs decreases as $\log_{10}(\epsilon)$ decreases, slowly moving closer to the imaginary axis. We expect that the kink will eventually meet the imaginary axis for very small $\log_{10}(\epsilon)$.
Our numerical investigation indicates that this kink does not have a physical origin, but rather is due to the accumulation of numerical error. The curvature sign of the contour changes at the kink, so any fitting beyond the kink itself will not reflect the true logarithmic structure of the pseudospectra. In Fig.~\ref{kink} we verify that the kink is indeed a numerical artifact: as the number of interpolation grid points $N$ increases, the kink moves further away from the imaginary axis. This shows that the well-resolved region in the complex plane grows as we increase the resolution. A similar phenomenon occurs in the calculation of operator eigenvalues through matrix approximations: for a given resolution $N$ only certain eigenvalues can be trusted, but their number increases as $N$ grows. What we observe is the counterpart of this phenomenon at the level of the pseudospectrum, so we expect that the kink should disappear in the limit $N\rightarrow \infty$.
Taking into account the above limitations of fitting pseudospectral contour lines, we can explore if these branches agree with Eq.~\eqref{e:log_branches}. All branches we have checked in the range $\log_{10}(\epsilon)\in[-3,-35]$ can indeed be fitted accurately by a logarithmic function of the form \eqref{e:log_branches}, in agreement with the Schwarzschild findings in Ref.~\cite{Jaramillo:2021tmt}. (Note in passing that the asymptotics of pseudospectra for the P\"oschl-Teller potential are also described by the logarithmic expression~(\ref{e:log_branches})~\cite{Jaramillo:2020tuu}, but the constants $C_1, C_2$ and $C_3$ have very different values, reflecting the different nature of the P\"oschl-Teller potential at null infinity.) Even though the fits are quite accurate away from the QNM region, where logarithmic asymptotic behavior is expected, Eq.~\eqref{e:log_branches} is a good approximation even close to the QNM corresponding to some given $\epsilon$, in agreement with the discussion in Sec.~\ref{QNM free region}.
\section{Conclusions}
In this work we have presented a detailed study of the pseudospectrum from scalar and gravitoelectric perturbations of the RN spacetime. We observe the same qualitative behavior as in Ref.~\cite{Jaramillo:2020tuu} for all values of the BH charge $Q/M\in[0,1]$ and for all perturbing fields: the pattern of pseudospectral levels is typical of spectrally unstable systems, with a logarithmic asymptotic behavior in the large-frequency regime.
The hyperboloidal approach to BH perturbation theory plays a crucial role in recasting the underlying wave equation with dissipative boundary conditions into a form best-suited for studying QNMs as the eigenvalue problem of a non-selfadjoint operator. The RN spacetime allows us to examine the effect of different coordinate choices (i.e., different spacetime slicings) on the calculation of the pseudospectra. We have used the so-called areal radius fixing gauge and Cauchy horizon fixing gauges~\cite{PanossoMacedo:2018hab} and found that the behavior of the pseudospectra in the two gauges is nearly identical. This provides strong support to the geometrical nature of BH pseudospectra.
We have paid special attention to the extremal limit $Q/M \rightarrow 1$, characterized by a family of slowly damped modes along the imaginary axis, which approach the value $M\omega=0$ as $Q/M \rightarrow 1$~\cite{Kim:2012mh,Zimmerman:2015trm,Richartz:2015saa,Cardoso:2017soq}. If the underlying potential is slightly modified, a mode arbitrarily close to the real axis is, in principle, prone to cross into the region ${\rm Im}(M\omega) < 0$, and the field's dynamic evolution could display exponentially growing modes.
As observed in hydrodynamics~\cite{Trefethen:1993}, this behavior can be resolved via a pseudospectrum analysis. The pseudospectral contour lines around marginally stable QNMs bound the region where the QNM can migrate under perturbations. If the $\epsilon$-contour line crosses into the unstable region ${\rm Im}(M\omega) < 0$ for small values of $\epsilon$, the likelihood of having an exponentially growing dynamical evolution for a slightly perturbed system is high. We observe such crossings of the pseudospectral contour lines in the lower half of the complex plane for sufficiently small $\epsilon\sim 10^{-4}$, which even agree with the levels considered in the context of turbulence in Ref.~\cite{Trefethen:1993}. However, we cannot make any conclusive claims, because we observe a similar behavior even for subextremal RN BHs, presumably due to the presence of (non-convergent) eigenvalues which appear because of our discretization of the differential operators and because of the hyperboloidal compactification in asymptotically flat spacetimes.
Another noteworthy feature of extremal RN BHs is the isospectrality between the QNM frequencies of $\ell-1$ electromagnetic-led and $\ell$ gravitational-led perturbations. We show that the isospectrality is valid not only for the spectrum, but also for the pseudospectrum. This ``strong isospectrality'' is a consequence of a horizon-infinity symmetry which has already been identified as responsible for the gravitoelectric QNM isospectrality~\cite{Onozawa:1996ba}. Such a discrete symmetry is also apparent in the conformal geometry of extremal BH spacetimes~\cite{Lubbe:2013yia}, and it explains, for instance, the aforementioned transient extremal horizon instability under scalar perturbations~\cite{Aretakis:2011ha,Aretakis:2011hc} in terms of the field properties at future null infinity~\cite{Bizon:2012we,Angelopoulos:2018yvt}. Thus, the horizon-infinity symmetry for the extremal RN BH leads to the Green's functions of electromagnetic-led and gravitational-led perturbations agreeing as a whole, and not just at the poles.
The plurality of effective potentials describing perturbations of RN BHs allowed us to analyze the asymptotic behavior of pseudospectral contour lines for different spectral problems. By fixing the BH charge and an angular index for scalar, gravitational-led and electromagnetic-led perturbations, we find that beyond the QNM region, $\epsilon$-level sets practically coincide, regardless of the perturbation field. This suggests an ``asymptotic universality'' of pseudospectra. In fact, numerical results for the contour levels are always consistent with a logarithmic asymptotic behavior, in agreement with previous findings for Schwarzschild BHs~\cite{Jaramillo:2021tmt}.
The pseudospectral analysis presented here can be extended in multiple directions (see also \cite{Jaramillo:2020tuu} for a related list of possible perspectives). It would be interesting to study the superradiant amplification of charged scalar fields~\cite{Bekenstein:1973mi,Denardo:1973pyo}. It is also important to generalize our work to asymptotically de Sitter BHs: the regularity of the cosmological horizon removes the branch cut, and thus the problem becomes closer to the P\"oschl-Teller case studied in \cite{Jaramillo:2020tuu}, which corresponds to the de Sitter spacetime~\cite{Bizon:2020qnd}. By removing the branch cut, the spurious eigenvalues corresponding to the branch cut will also be absent. RN-de Sitter (RNdS) BHs have rich QNM spectra consisting of different mode families~\cite{Cardoso:2017soq}, and as such they provide a perfect testing ground for the pseudospectra of zero modes $M\omega=0$ of neutral scalar fields~\cite{Cardoso:2017soq}, which are prone to superradiant instabilities when the field is charged~\cite{Cardoso:2018nvb,Zhu:2014sya,Konoplya:2014lha,Destounis:2019hca}. Accelerating spacetimes share many similarities with RNdS BHs, with the cosmological horizon being replaced by an acceleration horizon~\cite{Griffiths:2009dfa}. These spacetimes should not be affected by non-convergent ``contaminations'' due to the absence of a non-regular null infinity in the grid, since the boundary conditions are imposed at the acceleration horizon instead~\cite{Hawking:1997ia,Destounis:2020pjk,Destounis:2020yav}.
Another important extension concerns asymptotically anti-de Sitter (AdS) spacetimes. Their timelike null-infinity provides a model for a geometrical ``box'' that suggests interesting analogies with QNM instability problems in optical cavities~\cite{SheJar20}. A study of asymptotically AdS spacetimes requires different boundary conditions. One should similarly introduce different boundary conditions when studying horizonless compact objects, where the BH spectra appear as intermediate time excitations, eventually giving way to ``echoes''~\cite{Cardoso:2016rao,Cardoso:2016oxy,Cardoso:2017cqb,Cardoso:2019rvt}. The relation between pseudospectra, ordinary QNMs and echoes deserves further study.
\section*{Acknowledgments}
K.D. is indebted to George Pappas for very helpful discussions during the early stage of this work. R.P.M. was partially supported by the European Research Council Grant No. ERC-2014-StG 639022-NewNGR ``New frontiers in numerical general relativity''. R.P.M. thanks the warm hospitality of CENTRA-Instituto Superior T\'ecnico (Lisboa) and J.A. Valiente-Kroon for fruitful discussions.
E.B. is supported by NSF Grants No. PHY-1912550 and AST-2006538, NASA ATP Grants No. 17-ATP17-0225 and 19-ATP19-0051, NSF-XSEDE Grant No. PHY-090003, and NSF Grant PHY-20043.
V.C.\ acknowledges financial support provided under the European Union's H2020 ERC
Consolidator Grant ``Matter and strong-field gravity: New frontiers in Einstein's
theory'' grant agreement no. MaGRaTh--646597.
J.-L.J. thanks E. Gasperin, O. Meneses Rojas, L. Al Sheikh and J. Sj\"ostrand for discussions, and acknowledges the support of the French ``Investissements d'Avenir'' program through project ISITE-BFC (ANR-15-IDEX-03), the ANR ``Quantum Fields interacting with Geometry'' (QFG) project (ANR-20-CE40-0018-02), the EIPHI Graduate School (ANR-17-EURE-0002) and the Spanish FIS2017-86497-C2-1 project (with FEDER contribution).
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements No 690904 and No 843152.
We thank FCT for financial support through Project~No.~UIDB/00099/2020 and through grants PTDC/MAT-APL/30043/2017 and PTDC/FIS-AST/7002/2020.
The authors would like to acknowledge networking support by the GWverse COST Action
CA16104, ``Black holes, gravitational waves and fundamental physics.''
Computations were performed on the ``Baltasar Sete-Sois'' cluster at IST, XC40 at YITP in Kyoto University and at Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT.
\section{Introduction}
Semileptonic decays of the pseudoscalar $B_{q}$ mesons are crucial tools to constrain the Standard Model (SM) parameters and to search for new physics beyond the SM. These decays provide the possibility of determining the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix and the leptonic decay constants, as well as of investigating the origin of CP violation.
When the LHC begins to operate, a large number of $B_{q}$ mesons will be produced. This will provide an experimental framework to study the $B_{q}$ decay channels. An important class of $B_{s}$ decays, and all $B_{u,d}$ decays, occur via the decay of the $b$ quark. Among these, the $b\rightarrow c$ transition plays a significant role, since it is the dominant one. Some of the corresponding decay channels are $B_{q}\rightarrow
D^{\ast}_{q}l\nu$ $(q=s, d, u)$, which proceed via the $b\rightarrow c$ transition.
These decays could give useful information about the structure of
the vector $D^{\ast}_{s}$ mesons. The observation
of two narrow resonances with charm and strangeness, $D_{sJ}(2317)$ in the $D_{s}\pi^{0}$ invariant mass distribution
~\cite{1}--\cite{6}, and $D_{sJ}(2460)$ in the $D_{s}^{\ast}\pi^{0}$
and $D_{s}\gamma$ mass distribution \cite{2,3,4,6,7,8}, has raised
discussions about the nature of these states and their quark
contents \cite{9,10}.
Analysis of the $D_{s_{0}}(2317)\rightarrow
D_{s}^{\ast}\gamma$, $D_{sJ}(2460)\rightarrow
D_{s}^{\ast}\gamma$ and $ D_{sJ}(2460)\rightarrow D_{s_{0}}(2317)\gamma$ indicates that the quark
content of these mesons is probably $\overline{c}s$ \cite{11}.
Form factors are central objects in the study of the semileptonic $B_{q}\rightarrow D^{\ast}_{q}l\nu$ decays. For the calculation of these form factors, reliable non-perturbative approaches are needed. Among all non-perturbative methods, the QCD sum rules approach has received special attention, since it is based on the QCD Lagrangian. QCD sum rules provide a framework which connects hadronic parameters with QCD parameters. In this method, hadrons are represented by their interpolating currents taken at large virtualities. The correlation function is calculated both in terms of hadrons and in the quark-gluon language, and the physical quantities are determined by matching these two representations of the correlator. The application of sum rules has been extended remarkably during the past twenty years and applied to a wide variety of problems (for a review see, for example, \cite{13}).
The aim of this paper is to analyze the semileptonic $B_{q}\rightarrow D^{\ast}_{q}l\nu$ decays using the three-point QCD sum rules method. Note that this problem has been studied for $B_{q}\rightarrow D^{\ast}_{q}l\nu$ $(q=s, d, u)$ in the constituent quark meson (CQM) model in \cite{zhao} and, for the $q=d, u$ ($B^{0}$, $B^{\pm}$) cases, experimentally \cite{Yao}. The application of the subleading Isgur-Wise form factors to $B\rightarrow D^{\ast}l\nu$ in heavy quark effective theory (HQET) is presented in \cite{neubert2} (see also \cite{grozin1,ovcinkov}). The present work takes into account SU(3) symmetry breaking and can be considered an extension of the form factor analysis of $D\rightarrow K^{*} e \nu$ presented in \cite{15}.
The paper is organized as follows. In section II, the sum rules expressions for the form factors relevant to these decays, their HQET limit and the $1/m_{b}$ corrections are obtained. The numerical analysis of the form factors and of their HQET limit at zero recoil and at other values of $y$, our conclusions and a comparison of our results with other approaches are presented in section III.
\section{Sum rules for the $B_{q}\rightarrow D^{\ast}_{q}\ell\nu$ transition form factors }
The $B_q \rightarrow D^{\ast}_{q}$ transitions occur via the
$b\rightarrow c$ transition at the quark level. At this level, the
matrix element for this transition is given by:
\begin{equation}\label{lelement}
M_{q}=\frac{G_{F}}{\sqrt{2}} V_{cb}~\overline{\nu}
~\gamma_{\mu}(1-\gamma_{5})l~\overline{c}
~\gamma_{\mu}(1-\gamma_{5}) b.
\end{equation}
To derive the matrix elements for $B_{q}\rightarrow
D^{\ast}_{q}l\nu$ decays, it is necessary to sandwich Eq. (\ref{lelement})
between initial and final meson states. The amplitude of the
$B_{q}\rightarrow
D^{\ast}_{q}l\nu$ decays can be written as follows:
\begin{equation}\label{2au}
M=\frac{G_{F}}{\sqrt{2}} V_{cb}~\overline{\nu}
~\gamma_{\mu}(1-\gamma_{5})l<D^{\ast}_{q}(p',\varepsilon)\mid~\overline{c}
~\gamma_{\mu}(1-\gamma_{5}) b\mid B_{q}(p)>.
\end{equation}
The aim is to calculate the matrix element
$<D^{\ast}_{q}(p',\varepsilon)\mid\overline{c}\gamma_{\mu}(1-\gamma_{5})
b\mid B_{q}(p)>$ appearing in Eq. (\ref{2au}). Both the vector and
the axial vector part of
$~\overline{c}~\gamma_{\mu}(1-\gamma_{5}) b~$ contribute to the
matrix element stated above. Considering Lorentz and parity
invariances, this matrix element can be parameterized in terms of
the form factors below:
\begin{equation}\label{3au}
<D^{\ast}_{q}(p',\varepsilon)\mid\overline{c}\gamma_{\mu} b\mid
B_q(p)>=i\frac{f_{V}(q^2)}{(m_{B_{q}}+m_{D^{\ast}_{q}})}\varepsilon_{\mu\nu\alpha\beta}
\varepsilon^{\ast\nu}p^\alpha p'^\beta,
\end{equation}
\begin{eqnarray}\label{4au}
< D^{\ast}_{q}(p',\varepsilon)\mid\overline{c}\gamma_{\mu}
\gamma_{5} b\mid B_{q}(p)> &=&i\left[f_{0}(q^2)(m_{B_{q}}
+m_{D^{\ast}_{q}})\varepsilon_{\mu}^{\ast}
\right. \nonumber \\
-
\frac{f_{+}(q^2)}{(m_{B_{q}}+m_{D^{\ast}_{q}})}(\varepsilon^{\ast}p)P_{\mu}
&-& \left.
\frac{f_-(q^2)}{(m_{B_{q}}+m_{D^{\ast}_{q}})}(\varepsilon^{\ast}p)q_{\mu}\right],
\end{eqnarray}
where $f_{V}(q^2)$, $f_{0}(q^2)$, $f_{+}(q^2)$ and $f_{-}(q^2)$ are
the transition form factors and $P_{\mu}=(p+p')_{\mu}$,
$q_{\mu}=(p-p')_{\mu}$. In order to calculate these form factors,
the QCD sum rules method is applied. Initially the following
correlator is considered:
\begin{equation}\label{6au}
\Pi _{\mu\nu}^{V;A}(p^2,p'^2,q^2)=i^2\int
d^{4}xd^4ye^{-ipx}e^{ip'y}<0\mid T[J _{\nu D^{\ast}_{q}}(y)
J_{\mu}^{V;A}(0) J_{B_{q}}(x)]\mid 0>,
\end{equation}
where $J _{\nu D^{\ast}_{q}}(y)=\overline{q}\gamma_{\nu} c$ and
$J_{B_{q}}(x)=\overline{b}\gamma_{5}q$ are the interpolating
currents of $D^{\ast}_{q}$ and $B_{q} $ mesons, respectively and
$J_{\mu}^{V}=~\overline{c}\gamma_{\mu}b $ and $J_{\mu}^{A}=~\overline{c}\gamma_{\mu}\gamma_{5}b$
are the vector and axial vector transition currents, respectively.
Two complete sets of intermediate states with the same quantum
numbers as the currents $J_{D^{\ast}_{q}}$ and $J_{B_{q}}$ are
inserted to calculate the phenomenological part of the correlation
function given in Eq. (\ref{6au}). After standard calculations, the
following equation is obtained:
\begin{eqnarray} \label{7au}
&&\Pi _{\mu\nu}^{V,A}(p^2,p'^2,q^2)=
\nonumber \\
&& \frac{<0\mid J_{D^{\ast}_{q}}^{\nu} \mid
D^{\ast}_{q}(p',\varepsilon)><D^{\ast}_{q}(p',\varepsilon)\mid
J_{\mu}^{V,A}\mid B_{q}(p)><B_{q}(p)\mid J_{B_{q}}\mid
0>}{(p'^2-m_{D^{\ast}_{q}}^2)(p^2-m_{Bq}^2)}+\cdots
\nonumber \\
\end{eqnarray}
where $\cdots$ represents contributions coming from higher states and continuum. The matrix
elements in Eq. (\ref{7au}) are defined as:
\begin{equation}\label{8au}
<0\mid J^{\nu}_{D^{\ast}_{q}} \mid
D^{\ast}_{q}(p',\varepsilon)>=f_{D^{\ast}_{q}}m_{D^{\ast}_{q}}\varepsilon^{\nu}~,~~<B_{q}(p)\mid
J_{B_{q}}\mid 0>=-i\frac{f_{B_{q}}m_{B_{q}}^2}{m_{b}+m_{q}},
\end{equation}
where $f_{D^{\ast}_{q}}$ and $f_{B_{q}}$ are the leptonic decay
constants of the $D^{\ast}_{q}$ and $B_{q}$ mesons, respectively. Using Eqs. (\ref{3au}), (\ref{4au}) and (\ref{8au}) and performing the summation over the polarizations of the $D^{\ast}_{q}$ meson in Eq. (\ref{7au}), the following expressions are derived:
\begin{eqnarray}\label{9amplitude}
\Pi_{\mu\nu}^{A}(p^2,p'^2,q^2)&=&\frac{f_{B_{q}}m_{B_{q}}^2}{(m_{b}+m_{q})}\frac{f_{D^{\ast}_{q}}m_{D^{\ast}_{q}}}
{(p'^2-m_{D^{\ast}_{q}}^2)(p^2-m_{B_{q}}^2)}\nonumber\\ &\times&
[-f_{0}g_{\mu\nu} (m_{B_{q}}+m_{D^{\ast}_{q}})
+\frac{f_{+}P_{\mu}p_{\nu}}{(m_{B_{q}}+m_{D^{\ast}_{q}})} \nonumber
+\frac{f_{-}q_{\mu}p_{\nu}}{(m_{B_{q}}+m_{D^{\ast}_{q}})}]\\&+&
\mbox{excited states,}\nonumber\\\Pi_{\mu\nu}^{V}(p^2,p'^2,q^2)&=&
-\varepsilon_{\alpha\beta\mu\nu}p^{\alpha}p'^{\beta}\frac{f_{B_{q}}m_{B_{q}}^2}{(m_{b}+m_{q})(m_{B_{q}}+m_{D^{\ast}_{q}})}\frac{f_{D^{\ast}_{q}}m_{D^{\ast}_{q}}}
{(p'^2-m_{D^{\ast}_{q}}^2)(p^2-m_{B_{q}}^2)}f_{V}
\nonumber \\
&+&\mbox{excited states.}
\end{eqnarray}
From the QCD (theoretical) side, $\Pi _{\mu\nu}(p^2,p'^2,q^2)$ can also be calculated with the help of the OPE in the deep space-like region where $p^2 \ll (m_{b}+m_{q})^2 $ and $p'^2 \ll (m_{c}+m_{q})^2$.
The theoretical part of the correlation function is calculated by
means of OPE, and up to operators having dimension $d=6$, it is
determined by the bare-loop (Fig. 1 a) and the power corrections
(Fig. 1 b, c, d) from the operators with $d=3$,
$<\overline{\psi}\psi>$, $d=4$, $m_{s}<\overline{\psi}\psi>$, $d=5$,
$m_{0}^{2}<\overline{\psi}\psi>$ and $d=6$,
$<\overline{\psi}\psi\bar \psi \psi>$. The $d=6$ operator is ignored
in the calculations. To calculate the bare-loop contribution, the double dispersion representation for the coefficients of the corresponding Lorentz structures appearing in the correlation function is used:
\begin{equation}\label{10au}
\Pi_i^{per}=-\frac{1}{(2\pi)^2}\int ds'\int
ds\frac{\rho_{i}(s,s',q^2)}{(s-p^2)(s'-p'^2)}+\textrm{ subtraction
terms.}
\end{equation}
\begin{figure}
\vspace*{-1cm}
\begin{center}
\includegraphics[width=10cm]{feyman.eps}
\end{center}
\caption{Feynman diagrams for $B_{q}\rightarrow D^{\ast}_{q}l\nu $
$(q=s, d, u)$ transitions.} \label{fig1}
\end{figure}
The spectral densities $\rho_{i}(s,s',q^2)$ can be calculated from
the usual Feynman integral with the help of Cutkosky rules, i.e., by
replacing the quark propagators with Dirac delta functions:
$\frac{1}{p^2-m^2}\rightarrow-2\pi\delta(p^2-m^2),$ which implies
that all quarks are real. After long but straightforward calculations, the following expressions for the corresponding spectral densities are obtained:
\begin{eqnarray}\label{11au}
\rho_{V}(s,s',q^2)&=&4N_{c}I_{0}(s,s',q^2)\left[{(m_{b}-m_{q})A+(m_{c}-m_{q})B}-m_{q}\right],\nonumber\\
\rho_{0}(s,s',q^2)&=&-2N_{c}I_{0}(s,s',q^2)\Bigg[2m_{q}^{3}-2m_{q}^{2}(m_{c}+m_{b})\nonumber\\
&+&m_{q}(q^{2}+s+s'-2m_{b}m_{c})+[q^{2}(m_{b}-m_{q})\nonumber\\
&+&s(3m_{q}-2m_{c}-m_{b})+s'(m_{q}-m_{b})]A+[q^{2}(m_{c}-m_{q})\nonumber\\
&+&s(m_{q}-m_{c})+s'(3m_{q}-2m_{b}-m_{c})]B
+4(m_{b}-m_{s})C\Bigg],\nonumber \\
\rho_{+}(s,s',q^2)&=&2N_{c}I_{0}(s,s',q^2)\Bigg[m_{q}+(3m_{q}-m_{b})A+(m_{q}-m_{c})B
\nonumber
\\&+&2(m_{q}+m_{b})D+2(m_{q}-m_{b})E
\Bigg]
,\nonumber \\
\rho_{-}(s,s',q^2)&=&2N_{c}I_{0}(s,s',q^2)\Bigg[-m_{q}+(m_{q}+m_{b})A-(m_{q}+m_{c})B
\nonumber
\\&+&2(m_{q}-m_{b})D+2(m_{b}-m_{q})E
\Bigg],\nonumber \\
\end{eqnarray}
where
\begin{eqnarray}\label{12}
I_{0}(s,s',q^2)&=&\frac{1}{4\lambda^{1/2}(s,s',q^2)},\nonumber\\
\lambda(a,b,c)&=&a^{2}+b^{2}+c^{2}-2ac-2bc-2ab,\nonumber \\
A&=&\frac{1}{(s'+s-q^{2})^{2}-4ss'}\Bigg[(-2m_{b}^{2}+q^{^2}+s-s')s'\nonumber \\&+&m_{q}^{2}(q^{^2}-s+s')
+m_{c}^{2}(-q^{^2}+s+s')\Bigg],\nonumber\\
B&=&\frac{1}{(s'+s-q^{2})^{2}-4ss'}\Bigg[m_{q}^{2}(q^{^2}+s-s')\nonumber \\&+&(-2m_{c}^{2}+q^{^2}-s+s')s
+m_{b}^{2}(-q^{^2}+s+s')\Bigg],\nonumber\\
C&=&\frac{1}{2[(s'+s-q^{2})^{2}-4ss']}\Bigg[m_{c}^{4}s+m_{b}^{4}s'\nonumber
\\&+&q^{2}[m_{q}^{4}+m_{q}^{2}(q^{^2}-s-s')+s s']
+m_{b}^{2}m_{c}^{2}(q^{^2}-s-s')\nonumber\\
&-&(q^{^2}+s-s')s'-m_{q}^{2}(q^{^2}-s+s')\nonumber\\
&-&m_{c}^{2}m_{q}^{2}(q^{^2}+s-s')+s(q^{^2}-s+s')\Bigg],\nonumber\\
D&=&\frac{1}{[(s'+s-q^{2})^{2}-4ss']^{2}}\Bigg[m_{q}^{4}[q^{4}-2q^{2}(s-2s')+(s-s')^{2}]\nonumber
\\&+&[6m_{b}^{4}+q^{^4}+q^{^2}(4s-2s')+(s-s')^{2}-6m_{b}^{2}(q^{^2}+s-s')]s'^{2}
\nonumber\\
&+&m_{c}^{4}[q^{^4}+s^{2}+4ss'+s'^{2}-2q^{^2}(s+s')]\nonumber\\
&-&2m_{q}^{2}s'[-2q^{^4}+(s-s')^{2}+3m_{b}^{2}(q^{^2}-s+s')+q^{^2}(s+s')]\nonumber\\
&-&2m_{c}^{2}m_{q}^{2}(q^{^2}+s^{2}+s s'-2s'^{2}+q^{^2}(-2s+s'))\nonumber\\
&+&s'[q^{^4}+q^{^2}s-2s^{2}-2q^{^2}s'+s
s'+s'^{2}+3m_{b}^{2}(-q^{^2}+s+s')]
\Bigg],\nonumber\\
E&=&\frac{1}{[(s'+s-q^{2})^{2}-4ss']^{2}}\Bigg[2m_{q}^{4}q^{4}+m_{q}^{2}q^{6}-m_{q}^{4}q^{2}s-m_{q}^{2}q^{4}s-m_{q}^{4}s^{2}\nonumber
\\&-&m_{q}^{2}q^{^2}s^{2}+m_{q}^{2}s^{3}-m_{q}^{4}q^{^2}s'-m_{q}^{2}q^{^4}s'+2m_{q}^{4}s
s'
\nonumber\\
&+&6m_{q}^{2}q^{^2}s s'+2q^{^2}s s'-m_{q}^{2}s^{2}s'-q^{^2}s^{2}s'-s^{3}s'\nonumber\\
&+&3m_{b}^{4}(q^{^2}-s+s')s'-m_{q}^{4}s'^{2}-m_{q}^{2}q^{^2}s'^{2}-m_{q}^{2}s s'^{2}\nonumber\\
&-&q^{^2}s s'^{2}+2s^{2}s'^{2}+m_{q}^{2}s'^{3}-s s'^{3}-3m_{c}^{4}s(-q^{^2}+s+s')\nonumber\\
&-&2m_{c}^{2}m_{q}^{2}[q^{^4}-2s^{2}+q^{^2}(s-2s')+s s'+s'^{2})]\nonumber\\
&+&s[q^{^2}+s^{2}+s s'-2s'^{2}+q^{^2}(-2s+s')]\nonumber\\
&+&2m_{b}^{2}\{-m_{q}^{2}(q^{^4}-2q^{^2}s+s^{2}+q^{^2}s'+s
s'-2s'^{2})\nonumber\\
&-&s'(q^{^4}+q^{^2}s-2s^{2}-q^{^2}s'+s s'+s'^{2})\nonumber\\
&+&m_{c}^{2}[q^{^4}+s^{2}+4s s'+s'^{2}-2q^{^2}(s+s')]\}
\Bigg].\nonumber\\
\end{eqnarray}
The subscripts V, 0 and $\pm$ correspond to the coefficients of the
structures proportional to $i\varepsilon_{\mu\nu\alpha\beta}p'^{\alpha}p^{\beta}$, $g_{\mu\nu}$ and $\frac{1}{2}(p_{\mu}p_{\nu}
\pm p'_{\mu}p_{\nu})$, respectively. In Eq. (\ref{11au}) $N_{c}=3$ is the number of colors.
The integration region for the perturbative contribution in Eq. (\ref{10au}) is determined from the condition that the arguments of the three $\delta$ functions must vanish simultaneously. The physical region in the $s$ and $s'$ plane is described by the following inequality:\\
\begin{equation}\label{13au}
-1\leq\frac{2ss'+(s+s'-q^2)(m_{b}^2-s-m_{q}^2)+(m_{q}^2-m_{c}^2)2s}{\lambda^{1/2}(m_{b}^2,s,m_{q}^2)\lambda^{1/2}(s,s',q^2)}\leq+1.
\end{equation}
From this inequality, $s$ is expressed in terms of $s'$, which provides the lower limit of the integration over $s$. For the contribution of the power corrections, i.e., the contributions of the operators with dimensions $d=3$, $4$ and $5$, the following results are obtained:
\begin{eqnarray}\label{14au}
f_{V}^{(3)}+f_{V}^{(4)}+f_{V}^{(5)}&=&\frac{1}{2}<\overline{q}q>\Bigg[-\frac{1}{rr'^{3}}
m_{c}^2(m_{0}^{2}-2m_{q}^2) \nonumber
\\&-&\frac{1}{3r^{2}r'^{2}}[-3m_{q}^2(m_{b}^2+m_{c}^2-q^{2})\nonumber
\\&+&m_{0}^{2}
(m_{b}^2+m_{b}m_{c}+m_{c}^2-q^{2})]\nonumber
\\&-&
\frac{1}{rr'^{2}}m_{c}m_{q}-
\frac{1}{r^{3}r'}m_{b}^{2}(m_{0}^{2}-2m_{q}^{2}) \nonumber
\\&+&\frac{1}{3r^{2}r'}(2m_{0}^{2}-3m_{b}m_{q}) +\frac{2}{rr'}\Bigg] ,
\nonumber \\
f_{0}^{(3)}+f_{0}^{(4)}+f_{0}^{(5)}&=&\frac{1}{4}<\overline{q}q>\Bigg[-\frac{1}{rr'^{3}}m_{c}^2(m_{0}^{2}-2m_{q}^2)
\nonumber \\
&\times&(m_{b}^{2}+2m_{b}m_{c}+m_{c}^{2}-q^{2})
\nonumber \\
&-& \frac{1}{3r^{2}r'^{2}}(m_{b}^{2}+2m_{b}m_{c}+m_{c}^{2}-q^{2})\nonumber \\
&\times& [-3m_{q}^2(m_{b}^2+m_{c}^2-q^{2})
+m_{0}^{2}(m_{b}^2+m_{b}m_{c}+m_{c}^2-q^{2})]\nonumber\\&-&
\frac{1}{3rr'^{2}}[m_{0}^{2}(m_{b}^2+3m_{b}m_{c}-q^{2})\nonumber\\&+&3(m_{c}-m_{q})m_{q}
(m_{b}^{2}+2m_{b}m_{c}+m_{c}^{2}-q^{2})]
\nonumber\\&-&\frac{1}{r^{3}r'}m_{b}^2(m_{0}^{2}-2m_{q}^{2})
(m_{b}^{2}+2m_{b}m_{c}+m_{c}^{2}-q^{2})\nonumber\\&+&\frac{1}{3r^{2}r'}
[-3(m_{b}-m_{q})m_{q}(m_{b}^{2}+2m_{b}m_{c}+m_{c}^{2}-q^{2})\nonumber\\&+&
m_{0}^{2}(m_{c}^2+3m_{b}m_{c}-q^{2})]\nonumber\\&+&\frac{1}{3rr'}(4m_{0}^{2}+
6m_{b}^{2}+12m_{b}m_{c}+6m_{c}^{2}\nonumber
\\&-&3m_{b}m_{q}+3m_{c}m_{q}-6m_{q}^{2}-6q^{2})\Bigg],
\nonumber \\
f_{+}^{(3)}+f_{+}^{(4)}+f_{+}^{(5)}&=&\frac{1}{4}<\overline{q}q>\Bigg[-\frac{1}{rr'^{3}}
m_{c}^2(m_{0}^{2}-2m_{q}^2)\nonumber \\
&+&\frac{1}{3r^{2}r'^{2}}
[-3m_{q}^2(m_{b}^2+m_{c}^2-q^{2})\nonumber \\
&+&m_{0}^{2}(m_{b}^2+m_{b}m_{c}+m_{c}^2-q^{2})]
\nonumber \\
&+&\frac{1}{rr'^{2}}m_{c}m_{q}
+\frac{1}{4r^{3}r'}m_{b}^2(m_{0}^{2}-2m_{q}^2)
\nonumber \\
&+&\frac{1}{3r^{2}r'}[-4m_{0}^{2}+3m_{q}(m_{b}+2m_{q})]
-\frac{1}{3rr'}\Bigg],\nonumber \\
f_{-}^{(3)}+f_{-}^{(4)}+f_{-}^{(5)}&=&\frac{1}{4}<\overline{q}q>\Bigg[-\frac{1}{rr'^{3}}
m_{c}^2(m_{0}^{2}-2m_{q}^2)\nonumber \\
&-&\frac{1}{3r^{2}r'^{2}}
[-3m_{q}^2(m_{b}^2+m_{c}^2-q^{2})\nonumber \\
&+&m_{0}^{2}(m_{b}^2+m_{b}m_{c}+m_{c}^2-q^{2})]
\nonumber \\
&-&\frac{1}{rr'^{2}}m_{c}m_{q}
-\frac{1}{r^{3}r'}m_{b}^2(m_{0}^{2}-2m_{q}^2)
\nonumber \\
&+&\frac{1}{r^{2}r'}m_{q}(-m_{b}+2m_{q})+\frac{2}{rr'}\Bigg] ,
\end{eqnarray}
where $r=p^{2}-m_{b}^{2}$ and $r'=p'^{2}-m_{c}^{2}$. Here we should mention that, considering the definition of the double dispersion relation in Eq. (\ref{10au}) and the parametrization of the form factors and the coefficients of the selected structures, with the replacements 1) $b\rightarrow c$ and $c\rightarrow s$, 2) $m_{q}\rightarrow 0$ and 3) neglecting the terms $\sim m_{s}^{2}$, Eqs. (\ref{11au}) and (\ref{14au}) reduce to the expressions for the spectral densities and the quark condensate contributions up to dimension five for the form factors $f_{V}$, $f_{0}$ and $f_{+}$ presented in appendix A of \cite{15}, which describes the form factors of $D\rightarrow K^{*} e \nu$.
By equating the phenomenological expression given in Eq. (\ref{9amplitude}) and the
OPE expression given by Eqs. (\ref{11au}-\ref{14au}), and applying
double Borel transformations with respect to the variables $p^2$ and
$p'^2$ ($p^2\rightarrow M_{1}^2,~p'^2\rightarrow M_{2}^2$) in order
to suppress the contributions of higher states and continuum, the
QCD sum rules for the form factors $f_{V}$, $f_{0}$, $f_{+}$ and
$f_{-}$ are obtained:
\begin{eqnarray}\label{15au}
f_{i}(q^2)=\kappa\frac{(m_{b}+m_{q})
}{f_{B_{q}}m_{B_{q}}^2}\frac{\eta}{f_{D_{q}^{\ast}}m_{D_{q}^{\ast}}}e^{m_{B_{q}}^2/M_{1}^2+m_{D_{q}^{\ast}}^2/M_{2}^2}
\nonumber
\\\times[\frac{1}{(2\pi)^2}\int_{(m_{c}+m_{q})^{2}}^{s_0'} ds' \int_{f(s')}^{s_0} ds\rho_{i}(s,s',q^2)e^{-s/M_{1}^2-s'/M_{2}^2}\nonumber
\\+\hat{B}(f_{i}^{(3)}+f_{i}^{(4)}+f_{i}^{(5)})],\nonumber\\
\end{eqnarray}
where $i=V,0$ and $\pm$, and $\hat B$ denotes the double Borel
transformation operator and $\eta=m_{B_{q}}+m_{D_{q}^{\ast}}$ for
$i=V,\pm $ and $\eta=\frac{1}{m_{B_{q}}+m_{D_{q}^{\ast}}}$ for $i=0$
are considered. Here $\kappa=+1$ for $i=\pm$ and $\kappa=-1$ for
$i=0$ and $V$. In Eq. (\ref{15au}), in order to subtract the
contributions of the higher states and the continuum, the
quark-hadron duality assumption is used, i.e., it is assumed that
\begin{eqnarray}
\rho^{higher states}(s,s') = \rho^{OPE}(s,s') \theta(s-s_0)
\theta(s'-s'_0).
\end{eqnarray}
In calculations the following rule for the double Borel
transformations is used:\\
\begin{equation}\label{16au}
\hat{B}\frac{1}{r^m}\frac{1}{r'^n}\rightarrow(-1)^{m+n}\frac{1}{\Gamma(m)}\frac{1}{\Gamma
(n)}e^{-m_{b}^{2}/M_{1}^2}e^{-m_{c}^{2}/M_{2}^2}\frac{1}{(M_{1}^{2})^{m-1}(M_{2}^{2})^{n-1}}.
\end{equation}
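As a simple illustration, for the two lowest powers appearing in Eqs. (\ref{14au}) this rule gives
\begin{equation}
\hat{B}\,\frac{1}{rr'}\rightarrow e^{-m_{b}^{2}/M_{1}^2}e^{-m_{c}^{2}/M_{2}^2},\qquad
\hat{B}\,\frac{1}{r^{2}r'}\rightarrow-\frac{1}{M_{1}^{2}}\,e^{-m_{b}^{2}/M_{1}^2}e^{-m_{c}^{2}/M_{2}^2},
\end{equation}
while higher powers of $1/r$ and $1/r'$ produce the corresponding higher inverse powers of the Borel parameters.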
Here, we should mention that the contributions of higher-dimensional operators are proportional to powers of the inverse heavy quark masses, so these contributions are suppressed.
Next, we present the infinite heavy quark mass limit of the form factors for the $B_{q}\rightarrow D^{\ast}_{q}l\nu$ transitions. In HQET, the following procedure is used (see \cite{ming,neubert1,kazem}). First, we use the parametrization
\begin{equation}\label{melau}
y=\nu\nu'=\frac{m_{B_{q}}^2+m_{D_{q}^{\ast}}^2-q^2}{2m_{B_{q}}m_{D_{q}^{\ast}}}
\end{equation}
where $\nu$ and $\nu'$ are the four-velocities of the initial and final meson states, respectively, and $y=1$ is the so-called zero recoil limit. Next, we obtain the $y$-dependent expressions of the form factors by taking $m_{b}\rightarrow\infty$ and $m_{c}=\frac{m_{b}}{\sqrt{z}}$, where $z$ is given by $\sqrt{z}=y+\sqrt{y^2-1}$, and setting the light quark masses to zero.
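It is useful to note that Eq. (\ref{melau}) can be inverted to express the momentum transfer in terms of $y$,
\begin{equation}
q^2=m_{B_{q}}^2+m_{D_{q}^{\ast}}^2-2\,y\,m_{B_{q}}m_{D_{q}^{\ast}},
\end{equation}
so that the zero recoil point $y=1$ corresponds to the maximal momentum transfer $q^2=(m_{B_{q}}-m_{D_{q}^{\ast}})^2$; for the $d$ case, with the meson masses used in the next section, this gives $q^2\simeq10.69~GeV^2$.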
In this limit the Borel
parameters take the form $M_{1}^{2}=2 T_{1} m_{b}$ and $M_{2}^{2}=2 T_{2}
m_{c}$ where $ T_{1}$ and $ T_{2}$ are the new Borel parameters.
The new continuum thresholds $\nu_{0}$, and
$\nu_{0}'$ take the following forms in this limit
\begin{equation}\label{17au}
\nu_{0}=\frac{s_{0}-m_{b}^2}{m_{b}},~~~~~~
\nu'_{0}=\frac{s'_{0}-m_{c}^2}{m_{c}},
\end{equation}
and the new integration variables are defined as:
\begin{equation}\label{18au}
\nu=\frac{s-m_{b}^2}{m_{b}},~~~~~~ \nu'=\frac{s'-m_{c}^2}{m_{c}}.
\end{equation}
The leptonic decay constants are rescaled:
\begin{equation}\label{21au}
\hat{f}_{B_{q}}=\sqrt{m_{b}}
f_{B_{q}},~~~~~~~\hat{f}_{D_{q}^{*}}=\sqrt{m_{c}} f_{D_{q}^{*}}.
\end{equation}
After the standard calculations, we obtain the y-dependent
expressions of the form factors as follows:
\begin{eqnarray}\label{22au}
f_{V}&=&\frac{(1+\sqrt{z})}{48
\hat{f}_{D_{q}^{*}}\hat{f}_{B_{q}}z^{1/4}}e^{(\frac{\Lambda}{T_{1}}
+\frac{\overline{\Lambda}}{T_{2}})}\Bigg\{\nonumber\\&&
\frac{3}{\pi^{2}(y+1)
\sqrt{y^{2}-1}}\int_{0}^{\nu_{0}}d\nu\int_{0}^{\nu_{0}'}d\nu'(\nu+\nu')
e^{-\frac{\nu}{2T_{1}}-\frac{\nu'}{2T_{2}}}
\theta(2y\nu\nu'-\nu^{2}-\nu'^2)\nonumber\\&+&16<\overline{q}q>\Bigg[1-\frac{m_{0}^{2}}{8}\Bigg(\frac{1}{2T_{1}^{2}}+\frac{1}{2T_{2}^{2}}
+\frac{1}{3T_{1}T_{2}}(1+\frac{1}{\sqrt{z}}+\frac{1}{z})\Bigg)\Bigg]\Bigg\},
\end{eqnarray}
\begin{eqnarray}\label{222au}
f_{0}&=&\frac{z^{1/4}}{16
\hat{f}_{D_{q}^{*}}\hat{f}_{B_{q}}(1+\sqrt{z})}
e^{(\frac{\Lambda}{T_{1}}+\frac{\overline{\Lambda}}{T_{2}})}\Bigg\{
\frac{3}{\pi^{2}
\sqrt{y^{2}-1}}\int_{0}^{\nu_{0}}d\nu\int_{0}^{\nu_{0}'}d\nu'(\nu+\nu')
e^{-\frac{\nu}{2T_{1}}-\frac{\nu'}{2T_{2}}}\nonumber\\
&&\theta(2y\nu\nu'-\nu^{2}-\nu'^2)+\frac{<\overline{q}q>\sqrt{z}}{3}\Bigg[
\Bigg(\frac{1}{2}+\frac{1}{2z}+\frac{1}{\sqrt{z}}\Bigg)\nonumber\\
&&\Bigg(16-m_{0}^{2}(\frac{1}{T_{1}^{2}}+\frac{1}{T_{2}^{2}})\Bigg)-
\frac{m_{0}^{2}}{T_{1}T_{2}}
\Bigg(1+\frac{1}{3z^{\frac{3}{2}}}+\frac{4}{3\sqrt{z}}
+\frac{1}{z}+\frac{\sqrt{z}}{3}\Bigg)\Bigg]\Bigg\},
\end{eqnarray}
\begin{eqnarray}\label{2222au}
f_{+}&=&\frac{(1+\sqrt{z})}{96
\hat{f}_{D_{q}^{*}}\hat{f}_{B_{q}}z^{1/4}}e^{(\frac{\Lambda}{T_{1}}+\frac{\overline{\Lambda}}{T_{2}})}\Bigg\{\nonumber\\
&& \frac{9}{\pi^{2}(y+1)
\sqrt{y^{2}-1}}\int_{0}^{\nu_{0}}d\nu\int_{0}^{\nu_{0}'}d\nu'(\nu+\nu')e^{-\frac{\nu}{2T_{1}}-\frac{\nu'}{2T_{2}}}
\theta(2y\nu\nu'-\nu^{2}-\nu'^2)\nonumber\\
&-&16<\overline{q}q>
\Bigg[1+\frac{m_{0}^{2}}{8}\Bigg(\frac{1}{2T_{1}^{2}}+\frac{1}{2T_{2}^{2}}
+\frac{1}{3T_{1}T_{2}}(1+\frac{1}{\sqrt{z}}+\frac{1}{z})\Bigg)\Bigg]\Bigg\},
\end{eqnarray}
\begin{eqnarray}\label{22222au}
f_{-}&=&-\frac{(1+\sqrt{z})}{96\hat{f}_{D_{q}^{*}}\hat{f}_{B_{q}}z^{1/4}}
e^{(\frac{\Lambda}{T_{1}}+\frac{\overline{\Lambda}}{T_{2}})}\Bigg\{\nonumber\\
&& \frac{9}{\pi^{2}(y+1)
\sqrt{y^{2}-1}}\int_{0}^{\nu_{0}}d\nu\int_{0}^{\nu_{0}'}d\nu'(\nu+\nu')e^{-\frac{\nu}{2T_{1}}-\frac{\nu'}{2T_{2}}}
\theta(2y\nu\nu'-\nu^{2}-\nu'^2)\nonumber\\
&+&16<\overline{q}q>\Bigg[1-\frac{m_{0}^{2}}{8}\Bigg(\frac{1}{2T_{1}^{2}}+\frac{1}{2T_{2}^{2}}
+\frac{1}{3T_{1}T_{2}}(1+\frac{1}{\sqrt{z}}+\frac{1}{z})\Bigg)\Bigg]\Bigg\},
\end{eqnarray}
where $\Lambda=m_{B_{q}}-m_{b}$ and
$\bar{\Lambda}=m_{D_{q}^{*}}-m_{c}$.
At the end of this section, we would like to present the $\frac{1}{m_{b}}$ corrections to the form factors in Eqs. (\ref{22au})-(\ref{22222au}), obtained using the subleading Isgur-Wise form factors in a manner similar to \cite{neubert2} (see also \cite{neubert1,grozin}). These corrections are given as:
\begin{eqnarray}\label{3333au}
f_{V}^{(1/m_{b})}&=&\frac{m_{B}+m_{D}^{*}}{\sqrt{m_{B}m_{D}^{*}}}
\Bigg\{\frac{\Lambda}{2m_{b}}+\frac{\Lambda}{m_{b}}[\rho_{1}(y)-\rho_{4}(y)]\Bigg\},\nonumber\\
f_{0}^{(1/m_{b})}&=&\frac{(y+1)\sqrt{m_{B}m_{D}^{*}}}{m_{B}+m_{D}^{*}}
\Bigg\{\frac{\Lambda}{2m_{b}}\frac{y-1}{y+1}+\frac{\Lambda}{m_{b}}[\rho_{1}(y)-\frac{y-1}{y+1}\rho_{4}(y)]\Bigg\},\nonumber\\
f_{+}^{(1/m_{b})}&=&\frac{1}{2}f_{V}^{(1/m_{b})},\nonumber\\
f_{-}^{(1/m_{b})}&=&-f_{+}^{(1/m_{b})},
\end{eqnarray}
where the explicit expressions for the $\rho_{i}(y)$ functions are given in \cite{neubert2}. The values of these functions at the zero recoil limit $(y=1)$ are given as
\begin{eqnarray}\label{555555au}
\rho_{1}(1)=\rho_{2}(1)=0,~~~~~ \rho_{3}(1)\simeq0,~~~~~
\rho_{4}(1)\simeq\frac{1}{3}.
\end{eqnarray}
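Inserting these values into Eq. (\ref{3333au}), the corrections at zero recoil reduce to
\begin{equation}
f_{V}^{(1/m_{b})}\Big|_{y=1}\simeq\frac{m_{B}+m_{D}^{*}}{\sqrt{m_{B}m_{D}^{*}}}\,\frac{\Lambda}{6m_{b}},\qquad
f_{0}^{(1/m_{b})}\Big|_{y=1}\simeq 0,
\end{equation}
with $f_{+}^{(1/m_{b})}=\frac{1}{2}f_{V}^{(1/m_{b})}$ and $f_{-}^{(1/m_{b})}=-f_{+}^{(1/m_{b})}$ as above.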
\section{Numerical analysis}
This section is devoted to the numerical analysis of the form factors $f_{V}(q^2)$, $f_{0}(q^2)$, $f_{+}(q^2)$ and $f_{-}(q^2)$. From the sum rules expressions of these form factors it is clear that the condensates, the leptonic decay constants of the $B_{q}$ and $D_{q}^{\ast}$ mesons, the continuum thresholds $s_{0}$ and $s'_{0}$ and the Borel parameters $M_{1}^2$ and $M_{2}^2$ are the main input parameters. In the numerical analysis, the values of the condensates are chosen at a fixed renormalization scale of about $1$ GeV. The values of the condensates are \cite{20}:
$<\overline{u}u>=<\overline{d}d>=-(240\pm10~MeV)^3$,
$<\overline{s}s>=(0.8\pm0.2)<\overline{u}u>$ and
$m_{0}^2=0.8~GeV^2$.
The quark masses are taken to be $m_{c}(\mu=m_{c})=1.275\pm0.015~GeV$, $m_{s}=95\pm25~MeV$, $m_{u}=(1.5-3)~MeV$, $m_{d}\simeq(3-5)~MeV$ \cite{Yao} and $m_{b}=(4.7\pm0.1)~GeV$ \cite{20}. The meson masses are chosen to be $m_{D_{s}^{\ast}}=2.112~GeV$, $m_{D_{u}^{\ast}}=2.007~GeV$, $m_{D_{d}^{\ast}}=2.010~GeV$, $m_{B_{s}}=5.3~GeV$, $m_{B_{d}}=5.2794~GeV$ and $m_{B_{u}}=5.2790~GeV$ \cite{Yao}. For the values of the leptonic decay constants of the $B_{q}$ and $D_{q}^{\ast}$ mesons, the results obtained from two-point QCD sum rules analyses are used: $f_{B_{s}}=0.209\pm0.038~GeV$ \cite{13} and $f_{D_{s}^{\ast}}=0.266\pm0.032~GeV$ \cite{11}. For the others, $f_{B_{d(u)}}=0.14\pm0.01~GeV$ and $f_{D_{d(u)}^{\ast}}=0.23\pm0.02~GeV$ \cite{Yao} are used. The threshold parameters
$s_{0}$ and $s_{0}' $ are also determined from the two-point QCD sum
rules: $s_{0} =(35\pm 2)~ GeV^2$ \cite{12} and $s_{0}' =(6-8)~ GeV^2
$ \cite{11}. The Borel parameters $M_{1}^2$ and $M_{2}^2 $ are not
physical quantities, hence form factors should not depend on them.
The reliable regions for the Borel parameters $M_{1}^2 $ and
$M_{2}^2$ can be determined by requiring that both the continuum
contribution and the contribution of the operator with the highest
dimension be small. As a result of the above-mentioned requirements,
the working regions are determined to be $ 10~ GeV^2 < M_{1}^2 <25~
GeV^2 $ and $ 4~ GeV^2 <M_{2}^2 <10 ~GeV^2$.
To determine the decay width of $B_{q} \rightarrow D_{q}^{\ast}l\nu$, the $q^2$ dependence of the form factors $f_{V}(q^2)$, $f_{0}(q^2)$, $f_{+}(q^2)$ and $f_{-}(q^2)$ is needed in the whole physical region $m_{l}^2 \leq q^2 \leq (m_{B_{q}}-m_{D_{q}^{\ast}})^2$. The values of the form factors at $q^2=0$ are given in Table 1.
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|} \hline
$f_{i}(0)$& $B_{s}\rightarrow D_{s}^{\ast}\ell \nu $ &
$B_{d}\rightarrow D_{d}^{\ast}\ell \nu$ & $B_{u}\rightarrow
D_{u}^{\ast}\ell \nu $
\\\cline{1-4}\hline\hline
$f_{V}(0)$ & $0.36\pm0.08$ & $0.47\pm0.13$ & $0.46\pm0.13
$\\\cline{1-4} $f_{0}(0)$ & $0.17\pm0.03$ & $0.24\pm0.05$ &
$0.24\pm0.05$\\\cline{1-4} $f_{+}(0)$ & $0.11\pm0.02$ &
$0.14\pm0.025$ & $0.13\pm0.025$\\\cline{1-4}$f_{-}(0)$
&$-0.13\pm0.03$& $-0.16\pm0.04$ & $-0.15\pm0.04$\\\cline{1-4}
\end{tabular}
\vspace{0.8cm} \caption{The values of the form factors at $q^2=0$.}
\label{tab:2}
\end{table}
The $q^2 $ dependence of the form
factors can be calculated from QCD sum rules (for details, see
\cite{15,16}). To obtain the $q^2$-dependent expressions of the form factors from QCD sum rules, $q^2$ should stay approximately $1~GeV^2$ below the perturbative cut, i.e., up to $10~GeV^2$. Our sum rules are also truncated at $\simeq10~GeV^{2}$, but in the interval $0 \leq q^2 \leq 10~GeV^2$ we can trust the sum rules. For the reliability of the extrapolation over the full physical region, the parametrizations of the form factors were chosen such that in the region $0 \leq q^2 \leq 10~GeV^2$ they coincide with the sum rules prediction. Figs. \ref{fig1}, \ref{fig2},
\ref{fig3} and \ref{fig4} show the dependence of the form factors
$f_{V}(q^2)$, $f_{0}(q^2)$, $f_{+}(q^2)$ and $f_{-}(q^2)$ on $q^2$.
To find the extrapolation of the form factors, we choose the
following two fit functions.\\ i)
\begin{equation}\label{17au}
f_{i}(q^2)=\frac{f_{i}(0)}{1+\alpha\hat{q}+\beta\hat{q}^2+\gamma\hat{q}^3+\lambda\hat{q}^4},
\end{equation}
where $\hat{q}=q^2/m_{B_{q}}^2$. The values of the parameters
$f_{i}(0),\alpha,\beta,\gamma$, and $\lambda$ are
given in Tables 2, 3 and 4.\\ ii)
\begin{equation}\label{17au}
f_{i}(q^2)=\frac{a}{(q^{2}-m_{B^{*}}^{2})}+\frac{b}{(q^{2}-m_{fit}^{2})}.
\end{equation}
The values of $a$, $b$ and $m_{fit}^{2}$ are given in Tables 5, 6 and 7. For details about the fit parametrization (ii), which is theoretically more reliable, and about some other fit functions, see \cite{damir,damirinbali}. These two parametrizations coincide well with the sum rules predictions in the whole physical region $0 \leq q^2 \leq 10~GeV^2$ and also in the $q^{2}<0$ region. For higher $q^{2}$, starting from the upper limit of the physical region, the two fit functions deviate from each other, and this behavior is almost the same for all form factors. As an example, we present the deviation of the above-mentioned fit functions in Fig. \ref{fig6}. From this figure, we see that outside the physical region fit (i) grows more rapidly than fit (ii). The fit parametrization (ii) shows that the $m_{B^{*}}$ pole lies outside the allowed physical region; related to that, one could calculate hadronic parameters such as $g_{BB^{*}D^{*}}$ (see \cite{damir,damirinbraunu}).
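Note that at $q^2=0$ fit (i) reduces to $f_{i}(0)$ by construction, while fit (ii) gives
\begin{equation}
f_{i}(0)=-\frac{a}{m_{B^{*}}^{2}}-\frac{b}{m_{fit}^{2}},
\end{equation}
which relates the parameters in Tables 5-7 to the values of the form factors at $q^{2}=0$.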
\begin{table}[h] \centering
\begin{tabular}{|c||c|c|c|c|c|} \hline
& f(0) & $\alpha$ & $\beta$& $\gamma$& $\lambda$\\\cline{1-6} \hline \hline
$f_{V}$ & 0.38 & -2.53 & 2.77& -2.41& 0.03\\\cline{1-6}
$f_{0}$ & 0.18 & -1.77 & 0.98& -0.23& -3.50\\\cline{1-6}
$f_{+}$ & 0.12 & -2.90 & 3.66& -3.72& -1.69\\\cline{1-6}
$f_{-}$ & -0.15 & -2.63 & 2.72& -0.99& -6.48\\\cline{1-6}
\end{tabular}
\vspace{0.8cm}
\caption{Parameters appearing in the fit function (i) for form
factors of the $B_{s}\rightarrow D_{s}^{\ast}(2112)\ell\nu$ at
$M_{1}^2=19~GeV^2$, $M_{2}^2=5~GeV^2.$} \label{tab:1}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|c|c|} \hline
& f(0) & $\alpha$ & $\beta$& $\gamma$& $\lambda$\\\cline{1-6}\hline \hline
$f_{V}$ & 0.46 & -2.90 & 2.99& 0.67& -5.04\\\cline{1-6}
$f_{0}$ & 0.24 & -0.21 & 2.19& -1.68& -2.15\\\cline{1-6}
$f_{+}$ & 0.13 & -4.21 & 9.52& -16.86& 12.97\\\cline{1-6}
$f_{-}$ & -0.15 & -3.93 & -8.03& -13.48&9.15 \\\cline{1-6}
\end{tabular}
\vspace{0.8cm}
\caption{Parameters appearing in the fit function (i) for form
factors of the $B_{u}\rightarrow D_{u}^{\ast}(2007)\ell\nu$ at
$M_{1}^2=19~GeV^2$, $M_{2}^2=5~GeV^2.$} \label{tab:1}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|c|c|} \hline
& f(0) & $\alpha$ & $\beta$& $\gamma$& $\lambda$\\\cline{1-6}\hline \hline
$f_{V}$ & 0.47 & -3.08 & 4.83& -5.95& 2.95\\\cline{1-6}
$f_{0}$ & 0.24 & -2.20 & 2.18& -1.83& -1.90\\\cline{1-6}
$f_{+}$ & 0.14 & -4.13 & 8.99& -15.10& 10.65\\\cline{1-6}
$f_{-}$ & -0.16 & -3.87 & 7.73& -12.71& 8.26\\\cline{1-6}
\end{tabular}
\vspace{0.8cm}
\caption{Parameters appearing in the fit function (i) for form
factors of the $B_{d}\rightarrow D_{d}^{\ast}(2010)\ell\nu$ at
$M_{1}^2=19~GeV^2$, $M_{2}^2=5~GeV^2.$} \label{tab:1}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|} \hline
& a & b & $m_{fit}^{2}$\\\cline{1-4} \hline \hline
$f_{V}$ & 55.03 & -54.30 & 23.18\\\cline{1-4}
$f_{0}$ & 1.43 & -4.32 & 18.80\\\cline{1-4}
$f_{+}$ & 1.14 & -2.57 & 14.88\\\cline{1-4}
$f_{-}$ & -2.80 & 3.43 & 14.60\\\cline{1-4}
\end{tabular}
\vspace{0.8cm}
\caption{Parameters appearing in the fit function (ii) for form
factors of the $B_{s}\rightarrow D_{s}^{\ast}(2112)\ell\nu$ at
$M_{1}^2=19~GeV^2$, $M_{2}^2=5~GeV^2.$} \label{tab:1}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|} \hline
& a & b & $m_{fit}^{2}$\\\cline{1-4} \hline \hline
$f_{V}$ & 118.69 & -108.48 & 23.43\\\cline{1-4}
$f_{0}$ & 4.54 & -5.12 & 20.74\\\cline{1-4}
$f_{+}$ & 7.79 & -5.84 & 14.57\\\cline{1-4}
$f_{-}$ & -6.72 & 5.46 & 14.02\\\cline{1-4}
\end{tabular}
\vspace{0.8cm}
\caption{Parameters appearing in the fit function (ii) for form
factors of the $B_{u}\rightarrow D_{u}^{\ast}(2007)\ell\nu$ at
$M_{1}^2=19~GeV^2$, $M_{2}^2=5~GeV^2.$} \label{tab:1}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|} \hline
& a & b & $m_{fit}^{2}$\\\cline{1-4} \hline \hline
$f_{V}$ & 115.74 & -106.73 & 23.41\\\cline{1-4}
$f_{0}$ & 10.43 & -12.85 & 20.66\\\cline{1-4}
$f_{+}$ & 5.50 & -5.07 & 14.58\\\cline{1-4}
$f_{-}$ & -5.36 & 4.90 & 14.03\\\cline{1-4}
\end{tabular}
\vspace{0.8cm}
\caption{Parameters appearing in the fit function (ii) for form
factors of the $B_{d}\rightarrow D_{d}^{\ast}(2010)\ell\nu$ at
$M_{1}^2=19~GeV^2$, $M_{2}^2=5~GeV^2.$} \label{tab:1}
\end{table}
In deriving the numerical values of the ratios of the form factors in the HQET limit, we take the values of $\Lambda$ and $\overline{\Lambda}$ obtained from two-point sum rules, $\Lambda=0.62~GeV$ \cite{huang} and $\overline{\Lambda}=0.86~GeV$ \cite{dai}. The following ratios of the form factors are defined:
\begin{eqnarray}\label{rler}
R_{1(2)[3]}&=&\Bigg[1-\frac{q^{2}}{(m_{B}+m_{D^{*}})^{2}}\Bigg]\frac{f_{V(+)[-]}(y)}{f_{0}(y)},
\nonumber\\
R_{4(5)}&=&\Bigg[1-\frac{q^{2}}{(m_{B}+m_{D^{*}})^{2}}\Bigg]\frac{f_{+(-)}(y)}{f_{V}(y)},
\nonumber\\
R_{6}&=&\Bigg[1-\frac{q^{2}}{(m_{B}+m_{D^{*}})^{2}}\Bigg]\frac{f_{-}(y)}{f_{+}(y)}.
\end{eqnarray}
The numerical values of the above-mentioned ratios, together with a comparison of our results with the predictions of \cite{neubert2}, which presents the application of the subleading Isgur-Wise form factors to $B\rightarrow D^{\ast}\ell\nu$, are shown in Table 8. Note that the values in this Table are obtained with $T_{1}=T_{2}=2~GeV$, corresponding to $M_{1}^{2}=19~GeV^{2}$ and $M_{2}^{2}=5~GeV^{2}$, which are used in Tables 2-7.
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|} \hline
y& 1 (zero recoil) & $1.1$ & $1.2$& $1.3$& $1.4$&1.5\\\cline{1-7}
$q^2 (GeV^{2})$ & 10.69 & 8.57 & 6.45& 4.33& 2.20&0.08\\\cline{1-7}
\hline \hline
$R_{1}$ & 1.34 & 1.31 & 1.25& 1.19& 1.10&0.95\\\cline{1-7}
$R_{2}$ & 0.80 & 0.99 & 1.10& 1.22& 1.30&1.41\\\cline{1-7}
$R_{3}$ & -0.80 & -0.79 & -0.80& -0.81& -0.80&-0.80\\\cline{1-7}
$R_{4}$ & 0.50 & 0.64 & 0.77& 0.94& 1.20&1.46\\\cline{1-7}
$R_{5}$ & -0.50 & -0.51 & -0.56& -0.62& -0.71&-0.89\\\cline{1-7}
$R_{6}$ & -0.80 & -0.67 & -0.64& -0.61& -0.55&-0.53\\\cline{1-7}
$R_{1}$ \cite{neubert2}& 1.31 & 1.30 & 1.29& 1.28& 1.27&1.26\\\cline{1-7}
$R_{2}$ \cite{neubert2}& 0.90 & 0.90 & 0.91& 0.92& 0.92&0.93\\\cline{1-7}
\end{tabular}
\vspace{0.8cm}
\caption{The values for the $R_{i}$ and comparison of $R_{1, 2}$
values with the predictions of \cite{neubert2}.} \label{tab:4}
\end{table}
Table 8 shows a good consistency between our results and the prediction of \cite{neubert2} for $R_{1}$ at the zero recoil limit, $y=1.1$ and $1.2$, but for the other values of $y$ the variation of our results is somewhat larger. The values of $R_{2}$ show an approximate agreement between the two predictions, although the variation of $R_{2}$ in our work is again somewhat larger than in \cite{neubert2}. For both $R_{1}$ and $R_{2}$, our study and the predictions of \cite{neubert2} show the same behavior, i.e., $R_{1}$ decreases when $y$ is increased, while an increase in $y$ leads to an increase in $R_{2}$. From this Table, we also see that $R_{4}$ is sensitive to changes in the value of $y$, whereas $R_{3}$, $R_{5}$ and $R_{6}$ vary slowly with $y$. Our numerical analysis of the $1/m_{b}$ corrections to the form factors in Eq. (\ref{3333au}) shows that this correction increases the HQET limit of the form factors $f_{V}$ and $f_{+}$ by about $7.1\%$ and $6\%$, respectively, while it does not change $f_{0}$ and decreases $f_{-}$ by about $6.5\%$.
The next step is to calculate the differential decay width in terms of the form factors.
After some calculations, the following expression for the differential decay rate
\begin{eqnarray}\label{28au}
\frac{d\Gamma}{dq^2}&=&\frac{1}{8\pi^4m_{B_{q}}^2}\mid\overrightarrow{p'}\mid
G_{F}^2\mid V_{cb}\mid^2\{(2A_{1}+A_{2}q^2)[\mid
f'_{V}\mid^2(4m_{B_{q}}^2\mid\overrightarrow{p'}\mid^2)+\mid
f'_{0}\mid^2]\}\nonumber\\
&+&\frac{1}{16\pi^4m_{B_{q}}^2}|\overrightarrow{p'}|
G_{F}^2|V_{cb}|^2\left\{(2A_{1}+A_{2}q^2)\Bigg[\mid
f'_{V}\mid^2(4m_{B_{q}}^2\mid\overrightarrow{p'}\mid^2 \right.
\nonumber \\ &+&
m_{B_{q}}^2\frac{\mid\overrightarrow{p'}\mid^2}{m_{D_{q}^{\ast}}^{2}}
(m_{B_{q}}^2-m_{D_{q}^{\ast}}^2-q^2))+\mid f'_{0}\mid^2 \nonumber \\
&-& \mid
f'_{+}\mid^2\frac{m_{B_{q}}^2\mid\overrightarrow{p'}\mid^2}{m_{D_{q}^{\ast}}^2}(2m_{B_{q}}^2
+2m_{D_{q}^{\ast}}^2 -q^2)-\mid
f'_{-}\mid^2\frac{m_{B_{q}}^2\mid\overrightarrow{p'}\mid^2}{m_{D_{q}^{\ast}}^2}q^2
\nonumber\\&-& 2 \left.
\frac{m_{B_{q}}^2\mid\overrightarrow{p'}\mid^2}{m_{D_{q}^{\ast}}^2}(Re(f'_{0}
f_{+}^{'\ast}+f'_{0}
f_{-}^{'\ast}+(m_{B_{q}}^2-m_{D_{q}^{\ast}}^2)f'_{+}f_{-}^{'\ast}))\right]
\nonumber \\ &-&
2A_{2}\frac{m_{B_{q}}^2\mid\overrightarrow{p'}\mid^2}{m_{D_{q}^{\ast}}^2}
\Bigg[\mid f'_{0}\mid^2+(m_{B_{q}}^2-m_{D_{q}^{\ast}}^2)^2\mid
f'_{+}\mid^2+q^4\mid f'_{-}\mid^2 \nonumber \\ &+& 2(
\left.m_{B_{q}}^2-m_{D_{q}^{\ast}}^2)Re(f'_{0}f_{+}^{'\ast})
+2q^2f'_{0}f_{-}^{'\ast}+2q^2(m_{B_{q}}^2-m_{D_{q}^{\ast}}^2)Re(f'_{+}f_{-}^{'\ast})
\Bigg]\right\},
\nonumber \\
\end{eqnarray}
is obtained,
where\\
\begin{eqnarray}\label{30au}
\mid\overrightarrow{p'}\mid&=&\frac{\lambda^{1/2}(m_{B_{q}}^2,m_{D_{q}^{\ast}}^2,q^2)}{2m_{B_{q}}},\nonumber\\
A_{1}&=&\frac{1}{12q^2}(q^2-m_{l}^2)^2I_{0},\nonumber\\
A_{2}&=&\frac{1}{6q^4}(q^2-m_{l}^2)(q^2+2m_{l}^2)I_{0},\nonumber\\
I_{0}&=&\frac{\pi}{2}(1-\frac{m_{l}^2}{q^2}),\nonumber\\
f_{0}'&=&f_{0} (m_{D_{q}^{*}}+m_{B_{q}}),\nonumber\\
f_{V}'&=& \frac{f_{V}}{(m_{D_{q}^{*}}+m_{B_{q}})},\nonumber\\
f_{+}'&=& \frac{f_{+}}{(m_{D_{q}^{*}}+m_{B_{q}})},\nonumber\\
f_{-}'&=& \frac{f_{-}}{(m_{D_{q}^{*}}+m_{B_{q}})}.
\end{eqnarray}
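In terms of the differential rate above, the branching ratio used below is obtained (in units where $\hbar=1$) as
\begin{equation}
\textbf{\emph{B}}(B_{q}\rightarrow D_{q}^{\ast}\ell\nu)=\tau_{B_{q}}\int_{m_{l}^{2}}^{(m_{B_{q}}-m_{D_{q}^{\ast}})^{2}}\frac{d\Gamma}{dq^{2}}\,dq^{2}.
\end{equation}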
We now evaluate the branching ratios of these decays. Taking into account the $q^2$ dependence of the form factors, performing the integration over $q^2$ in the interval $m_{l}^2\leq q^2\leq(m_{B_{q}}-m_{D_{q}^{\ast}})^2$ and using the total lifetimes $\tau_{B_{u}}=1.638\times10^{-12}~s$, $\tau_{B_{d}}=1.53\times10^{-12}~s$ \cite{Yao} and $\tau_{B_{s}}=1.46\times10^{-12}~s$ \cite{26}, the branching ratios, which are the same for both fit functions, are obtained as:
\begin{eqnarray}\label{31au}
\textbf{\emph{B}}(B_s\rightarrow
D_{s}^{\ast}(2112)\ell\nu)&=&(1.89-6.61) \times 10^{-2},\nonumber\\
\textbf{\emph{B}}(B_d\rightarrow
D_{d}^{\ast}(2010)\ell\nu)&=&(4.36-8.94) \times 10^{-2},\nonumber\\
\textbf{\emph{B}}(B_u\rightarrow
D_{u}^{\ast}(2007)\ell\nu)&=&(4.57-9.12) \times 10^{-2}.
\end{eqnarray}
The ranges quoted above reflect the different lepton masses $(m_{e}, m_{\mu}, m_{\tau})$ as well as the errors in the values of the input parameters. Finally, we compare our results for the branching ratios with the predictions of the CQM model \cite{zhao} and with the existing experimental data in Table 9. From this Table, we see a good agreement among the phenomenological models and the experiment for the $u$ and $d$ cases. However, for the $s$ case our result is about 1.7 times smaller than that of the CQM model. There is also a similar trend between the present results and the experiment: in the experiment, the branching ratio decreases from $u$ to $d$, and in our results it decreases from $u$ to $s$. The order of magnitude of the branching fraction obtained here for the $B_{s}\rightarrow D^{\ast}_{s}\ell\nu$ decay shows that this transition could also be detected at the LHC in the near future. For present and future experiments on semileptonic decays based on the $b\rightarrow c l \nu$ transition see \cite{Aubert}--\cite{Drutskoy} and references therein. The comparison of experimental results with phenomenological models such as QCD sum rules could give useful information about the strong interaction inside the $D_{s}^{*}$ and its structure.
In conclusion, the form factors of the $B_{q}\rightarrow D^{\ast}_{q}\ell\nu$ decays were calculated using the QCD sum rules approach. The HQET limit of the form factors as well as the $1/m_{b}$ corrections to this limit were also obtained. A comparison of the form factors in the HQET limit with the results obtained from the subleading Isgur-Wise form factors at the zero recoil limit and at other values of $y$ was presented. Taking into account the $q^{2}$ dependence of the form factors, the total decay widths and branching ratios of these decays were evaluated. Our results are in good agreement with those of the CQM model and with the existing experimental data. The result for the $B_{s}\rightarrow D^{\ast}_{s}\ell\nu$ case shows the possibility of detecting this decay channel at the LHC in the near future.
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|} \hline
& $B_{s}\rightarrow D_{s}^{\ast}\ell \nu $ & $B_{d}\rightarrow
D_{d}^{\ast}\ell \nu$ & $B_{u}\rightarrow D_{u}^{\ast}\ell \nu $
\\\cline{1-4}\hline\hline
Present study & $(1.89-6.61) \times 10^{-2}$ & $(4.36-8.94) \times
10^{-2}$ & $(4.57-9.12) \times 10^{-2}$\\\cline{1-4} CQM model &
$(7.49-7.66) \times 10^{-2}$ & $(5.9-7.6) \times 10^{-2}$ &
$(5.9-7.6) \times 10^{-2}$\\\cline{1-4} Experiment & - & $(5.35 \pm
0.20) \times 10^{-2}$ & $(6.5 \pm 0.5) \times 10^{-2}$\\\cline{1-4}
\end{tabular}
\vspace{0.8cm} \caption{Comparison of the branching ratios of the $B_{q}\rightarrow D^{\ast}_{q}\ell\nu$ decays in the present study, the CQM model \cite{zhao} and the experiment \cite{Yao}.} \label{tab:2}
\end{table}
\section{Acknowledgment}
The authors would like to thank T. M. Aliev and A. Ozpineci for
useful discussions, and also TUBITAK, the Scientific and Technological
Research Council of Turkey, for the financial support provided under
project 103T666.
\newpage
\section{Introduction}
\citet{fanaroff74} introduced the first classification scheme for
extragalactic radio sources with large-scale structures (i.e., greater than
$\sim$15-20 kpc in size) based on the ratio $R_{FR}$ of the distance between
the regions of highest surface brightness on opposite sides of the central
host galaxy to the total extent of the source up to the lowest brightness
contour in the radio images. Radio sources with $R_{FR}<$0.5 were placed in
Class I (i.e., the edge-darkened FR~Is) and sources with $R_{FR}>$0.5 in Class
II (i.e., the edge-brightened FR~IIs). This morphology-based classification
scheme was found to be linked to their intrinsic power; Fanaroff and Riley
found that all sources in their sample with luminosity at 178 MHz smaller than
2$\times$10$^{25}$ W Hz$^{-1}$ sr$^{-1}$ (for a Hubble constant of 50
$\rm{\,km \,s}^{-1}$\ Mpc$^{-1}$) were classified as FR~I while the brighter sources all were
FR~II. The luminosity distinction between FR classes is fairly sharp at 178
MHz; their separation is even cleaner in an optical-radio luminosity plane,
implying that the FR~I/FR~II dichotomy depends on both optical and radio
luminosity \citep{ledlow96}.
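
As a schematic illustration of the criteria summarized above (not part of
the original works), the classification can be expressed in a few lines of
Python; the numerical threshold is the 178 MHz value quoted above for a
Hubble constant of 50 $\rm{\,km \,s}^{-1}$\ Mpc$^{-1}$.
\begin{verbatim}
# Sketch of the Fanaroff-Riley criteria described in the text.
# R_FR: ratio of the separation of the brightest regions on opposite sides
# of the host to the total source extent.
L178_THRESHOLD = 2e25  # W Hz^-1 sr^-1 at 178 MHz (value quoted above)

def fr_class_morphology(r_fr):
    """Edge-darkened (FR I) if R_FR < 0.5, edge-brightened (FR II) otherwise."""
    return "FR I" if r_fr < 0.5 else "FR II"

def fr_class_luminosity(l178):
    """Luminosity-based division found by Fanaroff & Riley."""
    return "FR I" if l178 < L178_THRESHOLD else "FR II"

print(fr_class_morphology(0.3), fr_class_luminosity(5e24))  # FR I FR I
print(fr_class_morphology(0.8), fr_class_luminosity(1e26))  # FR II FR II
\end{verbatim}
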
The two Fanaroff-Riley classes do not, however, correspond to a division in
terms of the optical spectroscopic properties of their hosts.
\citet{laing94} defined low and high excitation galaxies (LEGs and HEGs) based
on the ratios of the diagnostic optical emission lines in a scheme similar to
that adopted to distinguish LINERs and Seyferts in radio quiet AGN
(\citealt{kewley06}). While all FR~Is for which a reliable classification can
be obtained are LEGs, both LEGs and HEGs are found among the FR~IIs (e.g.,
\citealt{buttiglione10}). Buttiglione et al. find that LEGs and HEGs also differ
in other respects. While LEGs do not show prominent broad lines, such lines
are observed in $\sim 30\%$ of HEGs; narrow emission lines are a factor of
$\sim$10 brighter in HEGs than in LEGs at the same radio luminosity. Also, HEGs
show bluer colors than LEGs \citep{smolcic09}. \citet{baldi08} identify
compact knots in the UV images of 3C HEG radio galaxies, a morphological
evidence of recent star formation extending over 5$-$20 kpc; conversely, LEGs
hosts are usually red, passive galaxies. These results suggest that the radio
galaxies belonging to the two spectroscopic classes correspond to different
manifestations of the radio-loud AGN phenomenon
\citep{hardcastle07,buttiglione10,best12}.
The recent multiwavelength large-area surveys are a unique tool to further
explore the connection between the morphological and spectroscopic classes of
radio galaxies, providing us with large samples of radio emitting AGN
extending to lower luminosities than in previous studies.
In \citet{capetti16} we created a catalog of 219 edge-darkened FR~I radio
galaxies called FRI{\sl{CAT}}. We found that the FRI{\sl{CAT}}\ hosts are remarkably homogeneous,
as they are all luminous red early-type galaxies (ETGs) with large black hole
masses and spectroscopically classified as LEGs. All these properties are
shared by the hosts of more powerful FR~Is in the 3C sample. They do not show
significant differences from the point of view of their colors with respect to
the general population of massive ETGs. The presence of an active nucleus (and
its level of activity) does not appear to affect the hosts of FR~Is.
We now extend this study to the population of edge-brightened FR~II radio
galaxies with the main aim of comparing the properties of FR~Is and FR~IIs
by also considering their spectroscopic classification.
This paper is organized as follows. In Sect.\ 2 we present the selection
criteria of the sample of FR~IIs. The radio and optical properties of the
selected sources are presented in Sect.\ 3. Sect.\ 4 is devoted to results
and conclusions.
Throughout the paper we adopt the same cosmology parameters used in
\citet{capetti16}, i.e., $H_0=67.8 \, \rm km \, s^{-1} \, Mpc^{-1}$,
$\Omega_{\rm M}=0.308$, and $\Omega_\Lambda=0.692$ \citep{ade16}.
For our numerical results, we use c.g.s. units unless stated
otherwise. Spectral indices $\alpha$ are defined in the usual convention on
the flux density, $S_{\nu}\propto\,\nu^{-\alpha}$. The SDSS magnitudes are in
the AB system and are corrected for the Galactic extinction; {\em WISE}
magnitudes are instead in the Vega system and are not corrected for extinction
since, as shown by, for example, \citet{dabrusco14}, such correction affects
mostly the magnitude at 3.4 $\mu$m of sources lying at low Galactic latitudes
(and by less than $\sim$3\%).
\section{Sample selection}
\label{sample}
We searched for FR~II radio galaxies in the sample of 18,286 radio sources
built by Best \& Heckman (2012; hereafter the BH12 sample) by limiting our
search to the subsample of objects in which, according to these authors, the
radio emission is produced by an active nucleus. They cross-matched the
optical spectroscopic catalogs produced by the group from the Max Planck
Institute for Astrophysics and Johns Hopkins University \citep{bri04,tre04}
based on data from the data release 7 of the Sloan Digital Sky Survey
(DR7/SDSS; \citealt{abazajian09}),\footnote{Available at {\tt
http://www.mpa-garching.mpg.de/SDSS/}.} with the National Radio Astronomy
Observatory Very Large Array Sky Survey (NVSS; \citealt{condon98}) and the
Faint Images of the Radio Sky at Twenty centimeters survey (FIRST;
\citealt{becker95,helfand15}) adopting a radio flux density limit of 5 mJy in
the NVSS. We focused on the sources with redshift $z < 0.15$.
We adopted a purely morphological classification based on the radio
structure shown by the FIRST images. We visually inspected the FIRST
images of each source and preserved those with an edge brightened
morphology in which at least one of the emission peaks lies at a
distance of at least 30 kpc from the center of the optical host. The
30 kpc radius corresponds to 11$\farcs$4 for the farthest objects; the
$z<0.15$ redshift limit ensures that all the selected sources are well
resolved with the 5$\arcsec$ resolution of the FIRST images. The three
authors performed this analysis independently and we included only the
sources for which a FR~II classification is proposed by at least two
of us. We allowed for the presence of diffuse emission leading to X-,
Z-, or C-shaped morphologies, but not extending at larger distances
with respect to the emission peaks, thus excluding wide angle tail
sources \citep{owen76}. Most of these sources are double, i.e., they
do not show nuclear radio emission; the lack of this precise position
reference requires a further check of the original optical
identifications. We discarded three objects in which the
identification of the host is not secure.
The resulting sample, to which we refer as FRII{\sl{CAT}}, is formed by 122 FR~IIs
whose FIRST images are presented in the Appendix. Their main properties
are presented in Table \ref{tab}, where we report the SDSS name, redshift, and
NVSS 1.4 GHz flux density (from BH12). The [O~III] line flux, the r-band SDSS
AB magnitude, $m_r$, the Dn(4000) index (see Section 3 for its definition),
and the stellar velocity dispersion $\sigma_*$ are instead from the MPA-JHU
DR7 release of spectrum measurements. The concentration index $C_r$ was
obtained for each source directly from the SDSS database. For the sake of clarity,
uncertainties are not shown in the table; we estimated a median uncertainty of
0.09 on $C_r$, of 0.03 on Dn(4000), of 0.005 magnitudes on $m_r$, and of 10
$\rm{\,km \,s}^{-1}$\ on $\sigma_*$. We also list the resulting radio and line luminosity and
the black hole masses estimated from the stellar velocity dispersion and the
relation $\sigma_* - M_{\rm BH}$ of \citet{tremaine02}. The uncertainty in the
$M_{\rm BH}$ value is dominated by the spread of the relation used (rather
than by the errors in the measurements of $\sigma_*$) resulting in an
uncertainty of a factor $\sim$ 2. Finally, we give the classification (from
BH12) into LEGs and HEGs based on the optical emission line ratios in their
SDSS spectra.
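
To illustrate the kind of conversion involved, the following Python sketch
evaluates the $\sigma_* - M_{\rm BH}$ relation; the coefficients used are
the commonly quoted values of the \citet{tremaine02} calibration and are an
assumption of this sketch rather than quantities taken from the catalog.
\begin{verbatim}
import math

def mbh_from_sigma(sigma_kms, alpha=8.13, beta=4.02):
    """M_BH (solar masses) from the stellar velocity dispersion via
    log(M_BH/M_sun) = alpha + beta * log(sigma/200 km/s).
    Default coefficients are the commonly quoted Tremaine et al. (2002)
    values, assumed here."""
    return 10.0 ** (alpha + beta * math.log10(sigma_kms / 200.0))

# A dispersion of ~200 km/s corresponds to log M_BH ~ 8.1,
# comparable to the bulk of the FRIICAT hosts discussed in the text.
print(f"{math.log10(mbh_from_sigma(200.0)):.2f}")  # -> 8.13
\end{verbatim}
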
\section{FRII{\sl{CAT}}\ hosts and radio properties}
\label{hosts}
\subsection{Hosts properties}
The majority (107) of the selected FR~IIs are classified as LEG, but there
are also 14 HEG and just one source that cannot be classified spectroscopically
because of the lack of emission lines, namely J1446+2142.
The distribution of absolute magnitude of the FRII{\sl{CAT}}\ hosts covers the range
$-20 \gtrsim M_r \gtrsim -24$ (see Fig. \ref{mhist}, left panel). The
distribution of black hole masses (Fig. \ref{mhist}, right panel) is rather
broad. Most sources have $8.0 \lesssim \log M_{\rm BH} \lesssim 9.0 M_\odot$,
but a tail toward smaller values, down to $M_{\rm BH} \sim10^{6.5} M_\odot$,
that includes $\sim 15\%$ (13 LEGs and 4 HEGs) of the sample.
The FR~II HEGs are overall less luminous and harbor less massive black holes
than the FR~II LEGs; the medians of their distributions are
$<M_r({\rm HEG})> = -21.97 \pm 0.17$, $<M_r({\rm LEG})> = -22.62 \pm 0.06$,
$<\log M_{\rm BH}({\rm HEG})> = 8.21 \pm 0.11$, and $<\log M_{\rm BH}(LEG)> =
8.46 \pm 0.04$, respectively. The comparison between the FRII{\sl{CAT}}\ LEG sources
and the FRI{\sl{CAT}}\ hosts ($< M_r({\rm FR~I}) > = -22.69 \pm 0.03$ and $<\log M_{\rm
BH}({\rm FR~I})> = 8.55 \pm 0.02$) indicates that only marginal (and not
statistically significant) differences are present between the medians of these
distributions.
The Dn(4000) spectroscopic index, defined according to
\citet{balogh99} as the ratio between the flux density measured on the
red side and blue side of the Ca~II break, is an indicator of the
presence of young stars or of nonstellar emission. Low-redshift
($z < 0.1$) red galaxies have Dn(4000)$= 1.98 \pm 0.05$, a
value that decreases to $1.95 \pm 0.05$ for $0.1 < z < 0.15$
galaxies \citep{capetti15}.
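
A schematic computation of the index from a rest-frame spectrum is sketched
below in Python; the wavelength windows are the narrow bands commonly
attributed to \citet{balogh99} and are an assumption of the sketch rather
than values taken from this paper.
\begin{verbatim}
import numpy as np

def dn4000(wavelength_aa, flux_density,
           blue=(3850.0, 3950.0), red=(4000.0, 4100.0)):
    """Dn(4000): mean flux density redward of the Ca II break divided by
    the mean flux density blueward of it.  Window edges follow the narrow
    definition commonly attributed to Balogh et al. (1999) (assumed)."""
    w = np.asarray(wavelength_aa)
    f = np.asarray(flux_density)
    blue_mask = (w >= blue[0]) & (w <= blue[1])
    red_mask = (w >= red[0]) & (w <= red[1])
    return f[red_mask].mean() / f[blue_mask].mean()

# Toy spectrum with a step across the break -> Dn(4000) = 2.0
w = np.linspace(3800, 4200, 401)
f = np.where(w < 3975, 1.0, 2.0)
print(dn4000(w, f))
\end{verbatim}
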
The concentration index $C_r$, which is defined as the ratio of the
radii including 90\% and 50\% of the light in the $r$ band, can be
used for a morphological classification of galaxies, in which
early-type galaxies have higher values of $C_r$ than late-type
galaxies. Two thresholds have been suggested to define ETGs: a more
conservative value at $C_r \gtrsim$ 2.86 \citep{nakamura03,shen03} and
a more relaxed selection at $C_r \gtrsim$ 2.6
\citep{strateva01,kauffmann03b,bell03}. \citet{bernardi10} found that
the second threshold of the concentration index corresponds to a mix
of E+S0+Sa types, while the first mainly selects elliptical galaxies,
removing the majority of Sas, but also some Es and S0s.
In Fig. \ref{crdn} we show the concentration index $C_r$ versus the Dn(4000)
index (left panel) for the FRII{\sl{CAT}}\ sources. More than $\sim$90\% of the LEG
FR~II hosts lie in the region of high $C_r$ and Dn(4000) values, indicating
that they are red ETGs. The HEG FR~II are still ETGs, but they show generally
lower values of Dn(4000).
We also consider the $u-r$ color of the galaxies, obtained from SDSS
imaging and thus referred to the whole galaxy rather than just the
3\arcsec\ circular region covered by the SDSS spectroscopic aperture. In
Fig.~\ref{crdn} (central panel) we show the $u-r$ color versus the absolute
r-band magnitude $M_r$ of the hosts. As already found by considering the
Dn(4000) index, the FR~II HEGs show bluer color than the FR~II LEGs; in this
latter class, only two sources have a $u-r$ color that is smaller than the threshold
separating red and blue ETGs \citep{schawinski09}.
In Fig. \ref{crdn}, right panel, we show the comparison of the
{\em{WISE}} mid-IR colors of FRII{\sl{CAT}}\ and FRI{\sl{CAT}}\ sources; the associations
with the {\em{WISE}} catalog are computed by adopting a 3\farcs3
search radius \citep{dabrusco13}. All but one (J1121+5344) of the
FRII{\sl{CAT}}\ sources have a {\em{WISE}} counterpart, but 25 of these sources
are undetected in the $W3$ band. The LEG FR~IIs have mid-IR colors
similar to those of the FRI{\sl{CAT}}\ sources; their mid-IR emission is
dominated by their host galaxies since they fall in the same region of
elliptical galaxies \citep{wright10}. Only five LEGs have
$W2-W3 > 2.5$, exceeding the highest value measured for FR~Is.
Conversely, HEGs reach mid-IR colors as high as $W2-W3$=4.3, colors similar to
those of Seyfert and starburst galaxies (e.g., \citealt{stern05}). Their red
colors are likely due to a combination of star-forming regions and/or emission
from hot dust within a circumnuclear dusty torus.
Overall, we found 10 LEG FR~IIs whose properties do not conform with
the general behavior of their class, for example, showing blue colors
or being associated with small black holes. In some cases, this is due
to relatively large errors particularly in the measurement of
$\sigma_*$, a possible uncertain identification of their spectroscopic
class, or a substantial contribution from a bright nonthermal
nucleus. However, there are three objects (namely J0755+5204,
J1158+3006, and J1226+2538) for which we obtain estimates of the black
hole mass of $\log M_{\rm BH} \sim 6.5 - 6.8$; based on their $C_r$
value these objects are late-type galaxies and two of them also show
blue optical colors (and red mid-IR colors). These properties are all
typical of radio quiet AGN. This contrasts with the observed radio
power ($\log \nu L_r \sim 40.5 - 40.9$) and morphology.
\begin{figure*}
\includegraphics[width=9.5cm]{mrhist.ps}
\includegraphics[width=9.5cm]{mbhhist.ps}
\caption{Distributions of the $r$-band absolute magnitude (left) and black
hole masses (right). The black histograms are for FRII{\sl{CAT}}\ (cyan for the HEG
FRII{\sl{CAT}}\ subsample), the red histograms for the FRI{\sl{CAT}}. The FRI{\sl{CAT}}\ histograms are all scaled by the relative number of FR~I and FR~II, i.e., by 122/219.}
\label{mhist}
\end{figure*}
\begin{figure*}
\includegraphics[width=6.2cm]{crdn.epsi}
\includegraphics[width=6.2cm]{ur.epsi}
\includegraphics[width=6.2cm]{wise.epsi}
\caption{Left: concentration index $C_r$ vs. Dn(4000) index for the FRII{\sl{CAT}}\ and
FRI{\sl{CAT}}\ samples indicated by black and red dots, respectively. The HEG FR~II are
represented by cyan circles. Center: absolute $r$-band magnitude, $M_r$,
vs. $u-r$ color. Right: {\em{WISE}} mid-IR colors.}
\label{crdn}
\end{figure*}
\subsection{Radio properties}
\begin{figure*}
\includegraphics[width=9.5cm]{lrhist.ps}
\includegraphics[width=9.5cm]{ledlow.ps}
\caption{Left panel: radio luminosity distribution of the FRII{\sl{CAT}}\ (black, cyan
for the HEGs) and FRI{\sl{CAT}}\ sources (red). The dotted vertical line indicates the
transition power between FR~I and FR~II reported by \citet{fanaroff74}.
Right panel: radio luminosity (NVSS) vs. host absolute magnitude, $M_r$, for
FRII{\sl{CAT}}\ and FRI{\sl{CAT}}\ (black and red, respectively; cyan for the HEG FR~II). The
dotted line shows the separation between FR~I and FR~II reported by
\citet{ledlow96} to which we applied a correction of 0.34 mag to account for
the different magnitude definition and the color transformation between the
SDSS and Cousin systems.}
\label{lr}
\end{figure*}
The radio luminosity at 1.4 GHz of the FRII{\sl{CAT}}\ covers the range $L_{1.4}$ =
$\nu_{\rm r} l_{\rm r}$ $\sim 10^{39.5} - 10^{42.5}$ $\>{\rm erg}\,{\rm s}^{-1}$ (Fig. \ref{lr},
left panel), reaching a radio power almost two orders of magnitude lower than
the FR~IIs in the 3C sample. The HEGs are brighter than LEGs (with medians of $\log
L_{1.4} = 41.37$ and 40.76, respectively) and LEGs are brighter than the
FRI{\sl{CAT}}\ sources by a factor $\sim$3; 90\% of the FRII{\sl{CAT}}\ fall below the
separation between FR~Is and FR~IIs originally reported by \citet{fanaroff74}
which translates, with our adopted cosmology and by assuming a spectral index
of 0.7 between 178 MHz and 1.4 GHz, into $L_{1.4} \sim 10^{41.6}$ $\>{\rm erg}\,{\rm s}^{-1}$.
Similarly, we find that $\sim$75\% of the FRII{\sl{CAT}}\ sources (and including also
four HEGs) are located {\sl below} the dividing line in the optical-radio
luminosity plane defined by \citet{ledlow96};\footnote{We shifted the dividing
line to the right of the diagram to include a correction of 0.12 mag to
scale our total host magnitude to the M$_{24.5}$ used by these authors, and
an additional 0.22 mag to convert the Cousin system into the SDSS system
\citep{fukugita96}.} see Fig. 3, right panel.
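
As a minimal numerical sketch of the spectral rescaling used above
(assuming the convention $S_{\nu}\propto\nu^{-\alpha}$ of Sect.~1 and
$\alpha=0.7$), the following Python lines convert a monochromatic
luminosity between 178 MHz and 1.4 GHz; the input value is a placeholder,
and the solid-angle and unit-conversion factors needed to reproduce the
quoted $10^{41.6}$ erg s$^{-1}$ are not included.
\begin{verbatim}
# Rescale a monochromatic luminosity along a power law S_nu ~ nu^-alpha
# and form nu*L_nu.  The input number is a placeholder, not a catalogue value.
ALPHA = 0.7

def l_nu_at(nu_out_hz, nu_in_hz, l_nu_in, alpha=ALPHA):
    """L_nu(nu_out) = L_nu(nu_in) * (nu_out/nu_in)^(-alpha)."""
    return l_nu_in * (nu_out_hz / nu_in_hz) ** (-alpha)

l178 = 1.0e32                      # hypothetical L_nu at 178 MHz (erg/s/Hz)
l14 = l_nu_at(1.4e9, 178e6, l178)  # L_nu at 1.4 GHz
print(l14, 1.4e9 * l14)            # and nu*L_nu at 1.4 GHz (erg/s)
\end{verbatim}
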
FR~IIs show a large spread in both radio and [O~III] line luminosities
(see Fig. \ref{lrlo3}). In this plane, the FR~II LEGs cover
essentially the same region of the FR~Is with just a tail toward
higher power both in line and in radio; no correlation is seen between
these two quantities. The FR~II HEGs generally have higher ratios
between $L_{\rm[O~III]}$ and $L_{1.4}$ than LEGs, which is an effect
already found in the 3C sample \citep{buttiglione10}. The HEGs in
FRII{\sl{CAT}}\ are mostly located above the correlation defined by the 3C
HEGs. A linear fit including both samples is indeed shallower (with a
slope of 0.91) than that obtained from the 3C sources alone (whose
slope is 1.15).
\subsection{Comparison with previous works}
As discussed in Sect. 2, we decided to maintain the traditional
morphological visual classification into edge-brightened FR~IIs and
edge-darkened FR~Is rather than adopting quantitative methods such as those
used by \citet{lin10}. The comparison of our classification with that
proposed by these authors indicates that, among the 96 sources in common,
$\sim 80\%$ of the FRII{\sl{CAT}}\ sources are classified as class {\sl a} in their
nomenclature, i.e., sources with two hot spots on either side of the
galaxy. Most of the remaining objects fall in class {\sl b}, in which the
emission peak is coincident with the host galaxy; however, the inspection of
these FR~II radio sources does not show any clear distinguishing feature
from those in the main class, other than having a relatively brighter
central source in addition to the two lobes.
The main drawback of our scheme, based on the subjective visual
inspection of radio images, is the relatively high fraction of
sources of uncertain classification. The rather strict criteria
adopted for a positive classification as FR~I in \citet{capetti16}
and here for the FR~II enabled us to select only 219 FR~Is and 122
FR~II; more than half of the 714 radio galaxies extended over more than
30 kpc cannot be allocated to either FR class. On the other hand, this
strategy allows us to select samples that are very uniform from a
morphological point of view and that are optimally suited for our
main purpose, i.e., the comparison of the properties of the two
classes.
More recently, \citet{miraghaei17} performed an analysis on the same initial
sample, with an apparently similar selection strategy based on visual
inspection. However, the resulting sample of both FR~I and FR~II differ
significantly from those we obtained with only $\sim$ 25\% of objects in
common for both classes, even restricting the comparison to the same range of
redshift, $0.03 < z < 0.15$. This mismatch is likely due to the different
requirements (based, e.g., on linear instead of angular sizes and different
radio flux limits); most importantly they considered only sources
corresponding to multicomponents objects in FIRST and this rejects most of
the edge darkened sources we included in FRI{\sl{CAT}}. Overall, their results do not
strongly differ from ours, probably because (leaving aside the HEGs) the
properties of low z radio AGN are very homogeneous regardless of their radio
morphology. We note, however, that we do not find significant differences in
the $C_r$ values between FR~I and FR~II hosts.
\section{Discussion and conclusions}
The properties of the FRII{\sl{CAT}}\ sources differ between those
spectroscopically identified as LEGs or HEGs. The HEGs have lower
optical luminosities, smaller black hole masses, and higher radio
luminosities than LEGs, although a substantial overlap
between the two classes exists for all these quantities. The clearest
differences are related to the ratio between line and radio
luminosities and to their colors; HEGs are bluer in the optical and
redder in the mid-IR. These results confirm the conclusions of
previous studies (e.g.,
\citealt{baldi08,buttiglione10,baldi10b,best12}).
The population of the LEG FR~IIs included in the FRII{\sl{CAT}}\ is remarkably
uniform. They are all luminous red ETGs with large black hole masses ($M_{\rm
BH} \gtrsim 10^8 M_\odot$); only $\sim$ 10\% of the LEG\ FR\ IIs depart from
this general description. All these properties are shared with the hosts of
the FRI{\sl{CAT}}\ sources. The distributions of $M_{\rm BH}$ and $M_{r}$ differ with a
statistical significance higher than 95\% according to the Kolmogorov-Smirnov
test; this is due to the presence of a tail of low $M_{\rm BH}$ sources,
reaching values as low as 3$\times 10^6 M_\odot$. However, the medians of
$M_{\rm BH}$ and $M_{r}$ differ only marginally, by less than 0.1 dex. Even
the median radio luminosity of LEG FR~II is just a factor $\sim$3 higher than
that measured in FR~I. Apparently, the difference in radio morphology between
edge-brightened and edge-darkened radio sources does not translate into a
clear separation between the nuclear and host properties, while the
spectroscopic classes, LEG and HEG, do.
The FRII{\sl{CAT}}\ sample unveils a population of FR~IIs of much lower radio
power with respect to those obtained at high radio flux thresholds,
extending it downward by two orders of magnitude. The correspondence
of the morphological classification of FR~I and FR~II with a
separation in radio power that is observed, for example, in the 3C
sample, disappears. This conclusion is in line with previous results
\citep{best09,lin10,wing11}. A radio source produced by a low power
jet can be edge brightened or edge darkened and the outcome is not due
to differences in the optical properties of the host galaxy.
Nonetheless, \citet{capetti16} find that the connection between radio
morphology and host properties is preserved in FR~Is; there is a
well-defined threshold of radio power above which an edge darkened
radio source does not form and this limit has a strong positive
dependence on the host luminosity. This effect was originally seen by
\citet{ledlow96} but partly lost in subsequent studies; we believe
that we recover it because of the stricter criteria we adopted for the
selection of FR~Is.
It can be envisaged that brighter galaxies are associated with denser and more
extended hot coronae that are able to disrupt more powerful jets. But the
large population of low power FR~IIs indicates that the situation is more
complex; there is a large overlap of radio power between FR~Is and FR~IIs and
radio power is believed to be a robust proxy for the jet power (e.g.,
\citealt{willott99,birzan04}). Apparently, jets of the same power expanding in
similar galaxies can form both FR~I and FR~II. This indicates that the optical
properties of the host and radio luminosity are not the only parameters
driving the evolution of low power radio sources. Further studies of, for
example, the X-ray properties and the larger scale environment are needed to clarify
this issue.
\begin{figure}
\includegraphics[width=9.5cm]{lrlo3.ps}
\caption{Radio (NVSS) vs. [O~III] line luminosity of the FRII{\sl{CAT}}\ (black),
FRI{\sl{CAT}}\ (red), 3C-FR~I (green), and 3C-FR~II samples (blue). The green
(blue) line shows the linear correlation between these two quantities derived
from the FR~Is (FR~IIs) of the 3C sample \citep{buttiglione10}.
The dashed blue-cyan line is instead the linear fit on both the 3C
and the FRII{\sl{CAT}}\ HEGs.}
\label{lrlo3}
\end{figure}
\bibliographystyle{aa}
\section{Introduction}
\vfigbegin
\image{0.7\columnwidth}{!}{lace_Ammann_Beenker_lowres.jpg}
\vfigend{fig:lace2}{`Nodding bur-marigold', Veronika Irvine 2019: dual of Ammann-Beenker tiling worked in DMC~Cordonnet~Special~80 cotton thread.}{Lace Ammann-Beenker}
Bobbin lace is a 500-year-old fibre art in which threads are braided together to form a fabric characterized by many holes.
In bobbin lace, a closed region, which may be part of the background or, less commonly, part of the foreground\footnote{Traditionally, regions of a foreground figure or motif are filled with a plain (cloth-stitch) or triaxial (half-stitch) weave or a combination of both. Contemporary artists, such as Pierre Fouch\'{e}, are starting to include decorative patterns in foreground regions.}, is filled with a decorative pattern. A variety of patterns are used to distinguish different forms, provide shading and create aesthetic interest.
Traditionally, these patterns, called \emph{grounds} or \emph{fillings} by lacemakers, consist of a small design repeated at regular intervals horizontally and vertically.
Contemporary lace pieces also employ very regular grounds, although modern lacemakers have experimented with `random' fillings in which stitches in a traditional ground are performed without a fixed sequence~\cite[Chapter~17]{moderne}.
For a more detailed description of bobbin lace and an example pattern, we refer the reader to previous work by Irvine and Ruskey~\cite{lace1}.
Regular fillings with crystallographic symmetry create pleasing designs and in small areas provide sufficient complexity to be interesting. However, modern lacemakers are tending towards large scale, abstract pieces with sizeable areas of a homogeneous texture. At this scale, the simplicity of translational repetition can dominate the piece, overwhelming the more subtle reflection and rotation symmetries. From a distance, the viewer perceives only a regular grid of holes which, on closer inspection, may give rise to a perception of motion due to a simultaneous lightness contrast illusion similar to the scintillating grid~\cite{ninio} (in particular when realized in white thread on a black background).
We are interested in breaking up the integral regularity of translation in the pattern while still maintaining local areas of symmetry. This characteristic appears in the famous Penrose tilings, which belong to the larger family of quasiperiodic tilings. We will formally define quasiperiodicity in Section~\ref{sec:problem} but for now we can think of it informally as uniform repetition within a pattern that globally lacks periodic symmetry.
A quasiperiodic tiling contains small patches of recognizable symmetry, providing a familiar handle for the viewer to approach the design. The recognizable patches are arranged in a large variety of ways challenging the viewer to identify them.
Several philosophers have attributed aesthetic appeal to just such a relationship. The 18th century Scottish philosopher Hutcheson proposed a calculus of beauty which he described as ``\emph{the compound ratio of uniformity and variety}'' \cite{hutcheson}.
Another 18th century British philosopher, Gerard, ascribed aesthetic pleasure to an activity of the imagination:
\begin{adjustwidth}{1cm}{1cm}
\emph{``\dots uniformity, when perfect and unmixed, is apt to pall upon the sense, to grow languid, and to sink the mind into an uneasy state of indolence. It cannot therefore alone produce pleasure, either very high, or of very long duration. \dots Variety in some measure gratifies the sense of novelty, as our ideas vary in passing from the contemplation of one part to that of another. This transition puts the mind in action, and gives it employment, the consciousness of which is agreeable.''}~\cite{gerard}
\end{adjustwidth}
Closer to our own time, art historian Gombrich explored the importance of decoration and conjectured:``\emph{Aesthetic delight lies somewhere between boredom and confusion}.''~\cite{gombrich}
In this paper we explore the role that quasiperiodicity can play in lacemaking. To do that, we extend the mathematical model for bobbin lace so that it can describe non-periodic patterns. We look at the P3 tiling, consisting of thick and thin rhombs, as well as the Ammann bar decorations of this tiling and derive workable bobbin lace patterns from them. Several new quasiperiodic lace patterns are then realized in thread.
\section{Problem description}
\label{sec:problem}
Let us start by briefly summarizing previous work on creating a mathematical model for bobbin lace grounds \cite{irvinePhD, lace1, lace2}.
A bobbin lace ground is a small pattern used to fill a closed shape.
Although the lace that includes this ground is obviously finite in size, rarely larger than a tablecloth, we can imagine that the ground may extend arbitrarily far in any direction and is therefore unbounded. Traditionally, the ground fills this space by translating a small rectangular patch of the pattern in two orthogonal directions such that the copies fit together edge to edge. In the mathematics of tiling theory, such patterns are called \emph{periodic}.
Bobbin lace is created by braiding four threads at a time. Threads travel from one 4-stranded braid to the next in pairs. We can therefore divide the ground into two independent components: a drawing that captures the flow of pairs of threads from one braid to another, and a description of the braid formed each time four threads, or more specifically two pairs, meet (as shown in Figure~\ref{fig:graph}).
More formally, we can represent a bobbin lace ground as the pair $(\Gamma(G),\zeta(v))$ where $G$ is a directed graph embedding that captures the flow of pairs of threads from one braid to another,
$\Gamma(G)$ is a specific drawing of $G$ that assigns a position to every vertex, and $\zeta(v)$ is a mapping from a vertex $v \in V(G)$ to a mathematical braid word which specifies the over and under crossings performed on the subset of four threads meeting at $v$.
\vfigbegin
\imagepdftex{0.9\columnwidth}{graph2.pdf_tex}
\vfigend{fig:graph}{From bobbin lace ground to mathematical representation as a graph drawing and a set of braid words, one for each vertex in the graph}{Graph drawing representation}
We are interested in ground patterns that can actually be realized by a lacemaker. To this end, Irvine and Ruskey identified four necessary conditions that the drawing $\Gamma(G)$ must meet:
\begin{itemize}
\item[CP1.] \textbf{2-2-regular digraph:} Two pairs come together to form a braid and, when the braid is complete, the four threads continue on, pair-wise, to participate in other braids. Therefore, $G$ is a directed 2-2-regular digraph, meaning that every vertex has two incoming edges and two outgoing edges.
\item[CP2.] \textbf{Connected filling of unbounded size:} The purpose of a bobbin lace ground is to fill a simple region of any size with a continuous fabric. One way to accommodate large sizes is to choose a pattern that can cover the infinite plane. In traditional lace, this is accomplished with a periodic pattern. As shown on the right side of Figure~\ref{fig:graph}, a parallelogram with the smallest area that captures the translational period of the pattern, called a unit cell, can represent the entire pattern. Threads leaving the bottom edge of the parallelogram continue as threads entering the top edge of the next repeat; threads leaving the right edge enter the next repeat on the left edge and vice versa.
We refer to this representation as a \emph{flat torus}.
For the resulting fabric to hold together as one piece, it is important that translated copies of the pattern are connected to each other. We express this formally by saying that the graph embedding of $G$ must have an oriented genus of 1 (i.e. it must be an embedding on the torus).
\item[CP3.] \textbf{Partially ordered:} Bobbin lace is essentially an alternating braid and therefore the thread crossings must have a partial order, or put another way, all thread crossings must happen in the forward direction. For the graph embedded on the torus, this means all directed circuits of $G$ are non-contractible. A contractible circuit can be reduced to a point by shortening the lengths of its edges.
\item[CP4.] \textbf{Thread conserving:} Loose ends, caused by cutting threads or adding new ones, are undesirable because they inhibit the speed of working, can fray or stick out in an unsightly manner and, most importantly, degrade the strength of the fabric. For the model, this means that, once started, a rectangular region of arbitrary width $w$ can be worked to any length without the addition or termination of threads (assuming that sufficient thread has been wound around the bobbins). We refer to this ability to extend the pattern indefinitely in one direction, using a fixed set of threads, as ``conservation of threads''.
We say that a pair of paths in a graph drawing is \emph{osculating} if, when the two paths meet, they do not cross transversely but merely kiss, i.e., touch and continue, without crossing, as shown in Figure~\ref{fig:partition}.
In order to ensure that the pattern conserves threads, when the edges of $\Gamma(G)$ are partitioned into directed osculating circuits we must ensure that each circuit in the partition is in the $(1,0)$-homotopy class of the torus (Adams gives an excellent description of the torus-knot naming convention~\cite{adams}). That is, each circuit must wrap once around the minor (vertical) axis of the flat torus and zero times around its major (non-vertical) axis. This ensures that the threads do not have a net drift left or right but rather return to the same horizontal position at the start of each vertical repeat.
\end{itemize}
\begin{center}
\begin{minipage}{\textwidth}
\begin{center}
\imagepdftex{0.9\columnwidth}{conservation_periodic.pdf_tex}
\captionof{figure}[]{At a 2-in, 2-out vertex, two paths meet in either a transverse or an osculating manner. An example of a graph drawing partitioned into osculating paths.}
\label{fig:partition}
\end{center}
\end{minipage}
\end{center}
For a more detailed description of these conditions, we refer the reader to previous work by Irvine and Ruskey~\cite{lace1} or Biedl and Irvine~\cite{lace2}.
Irvine used conditions $CP1$--$CP4$ to prove that there exist an infinite number of periodic bobbin lace grounds~\cite{irvinePhD}. She also performed a combinatorial search to generate several million examples for graph drawings with up to 20 vertices~\cite{irvinePhD}.
However, the conditions introduced previously are only applicable to periodic patterns. Here, we introduce a new set of conditions that cover any infinite pattern, including those that are non-periodic.
Although we represent the flow of threads as a drawing of one-dimensional curves, lace is realized in a physical medium (such as cotton thread) which has width. There are physical limitations on how close together two braids can be before they will overlap or how close two pins can be before the braid between them buckles out of plane. Conversely, if a hole in the lace ground is too wide, it may be wider than the region we are trying to fill, slicing the patch of fabric into two unconnected parts.
We must therefore specify a set of minimum and maximum feature sizes, namely a lower and upper bound on the length of an edge and a lower and upper bound on the distance between two vertices of a face.
In previous work on periodic grounds, we took these bounds for granted because they are determined by the boundary of the unit cell and $CP4$. Here we will include them in a formal way by first defining a few terms:
\begin{definition}\cite{grunbaum}
The faces of a graph drawing are \emph{uniformly bounded} if there exist positive real numbers $r$ and $R$, $r < R$, such that every face contains a ball of radius $r$ and every face is contained in a ball of radius $R$.
\end{definition}
In any tiling with a finite number of distinct tile shapes (or specifically, a graph drawing with a finite number of distinct face shapes), the tiles will be uniformly bounded.
\begin{definition}\cite{senechal}
A \emph{Delone point set} is uniformly discrete and relatively dense.
A set of points $\mathcal{P}$ in $\mathbb{R}^n$ is \emph{uniformly discrete} if there exists a positive real number $d$ such that given any two points $x$ and $y$ in $\mathcal{P}$, the distance between them is at least $2d$.
A set of points $\mathcal{P}$ in $\mathbb{R}^n$ is \emph{relatively dense} if there exists a positive real number $D$ such that every ball with radius greater than $D$ contains at least one point of $\mathcal{P}$ in its interior.
\end{definition}
\begin{itemize}
\item[C0.] \textbf{Bounded in feature size:} The faces of the graph drawing are uniformly bounded and the vertices form a Delone
point set.
\end{itemize}
The remaining conditions are a generalization of $CP1$--$CP4$.
\begin{itemize}
\item[C1.] \textbf{2-2-regular digraph:} The underlying graph $G$, representing the flow of pairs of threads between braids, is a 2-2-regular digraph.
\item[C2.] \textbf{Connected filling of unbounded size:} The lace must fill a 2D region of unbounded size with a continuous fabric. The graph $G$ must therefore be infinite (also implied by $C0$) and connected.
\item[C3.] \textbf{Partially ordered:} Bobbin lace is braided, which means that all crossings must happen in the forward direction; therefore, the planar embedding of $G$ cannot contain a directed cycle. This implies that the planar embedding of $G$ is simple (faces have degree at least 3): it does not contain any self-loops; either the edges of a bigon (a face of degree two) form a cycle or the edges and vertices of a bigon represent a continuous braid made on the same four threads which, by our mapping, corresponds to a single vertex in the graph.
\item[C4.] \textbf{Thread conserving:} For threads to be conserved, there must exist a partition of the plane graph drawing, $\Gamma(G)$, into a set of \emph{well-behaved} osculating paths. A path is well-behaved if there exists a line $\ell$ and a finite distance $s$ such that every point on the path is within a perpendicular distance $s$ of $\ell$.
\end{itemize}
It is worth noting that a necessary condition for $C3$ is that the outgoing edges of an infinite digraph embedded in the plane must be \emph{rotationally consecutive} (i.e., adjacent to one another in clockwise order around their common endpoint), a result that was demonstrated by Irvine and Ruskey~\cite{lace1}. This rotationally consecutive ordering coupled with condition $C1$--a regular digraph with in-degree equal to out-degree--also ensures that we can partition the graph into a unique set of osculating paths~\cite{lace1}.
Conditions $C0$--$C4$ give a generalized model of bobbin lace grounds that encompass both periodic and non-periodic patterns. Every lace pattern can be represented as a planar infinite graph. Every simple planar graph can be rendered as a straight-line graph drawing~\cite{fary}, which in turn defines a tiling of the plane. We can now define a quasiperiodic tiling and ask the question `Does there exist a quasiperiodic drawing of an infinite graph that meets conditions $C0$--$C4$?'.
\begin{definition}\cite{durand, delvenne}
A \emph{quasiperiodic tiling} is a tiling $T$ such that for every patch $P$ (a finite, simply connected subset of tiles in $T$), there exists a real number $b>0$ such that a ball of radius $b$, centered on any point in the tiling,
contains a copy of $P$. In addition, the tiling is not periodic, that is, a copy $T'$ of $T$ cannot be superimposed on $T$ by a non-trivial translation.
\end{definition}
In the rest of this paper, we will look at different families of quasiperiodic patterns. When trying to assess whether a quasiperiodic pattern is a good candidate for a bobbin lace ground, we will pay particular attention to two conditions: Do all of the vertices have degree four ($C1$)? Can we assign a direction to the edges to give a well-behaved osculating partition ($C4$)?
In the next section, we start our exploration with a very simple family of non-periodic patterns composed from parallelograms arranged in a grid.
\section{Parallelogram tiling from two sets of parallel lines}
\label{sec:two}
Obviously a periodic grid will satisfy $C0$--$C4$ so we ask the question ``What minimal alterations will break periodicity while still meeting these conditions?''
To start, consider a simple periodic bigrid formed by overlaying two infinite sets of regularly spaced parallel lines, $A$ and $B$, at a relative angle of $\alpha > 0$, where $\alpha$ is the small angle between lines $A_i$ and $B_j$ for any $A_i \in A$ and $B_j \in B$.
To turn this into a planar graph drawing, we place a vertex at every crossing of two lines and assign an edge between pairs of vertices that are consecutive along a line. We observe that the faces in this drawing are all parallelograms and the planarity of the drawing satisfies $C2$.
To assign a direction to each edge of the drawing, we first choose a vector that is not perpendicular to any set of lines and then rotate the drawing so that this vector points up. We can now unambiguously assign a downward direction to each edge because there are no horizontal edges. The directed graph drawing thus created has two useful properties. First, each vertex has two incoming and two outgoing edges such that the outgoing edges are rotationally consecutive, satisfying $C1$. Second, no directed cycle can exist because there are no edges directed `up' to complete the cycle, satisfying condition $C3$.\footnote{The edge directions specified here fix a start and end vertex for each edge. Once the edge directions have been assigned, rotating the drawing such that an edge is no longer strictly pointing down is not a problem because the topological direction of the edges is still preserved.}
Adherence to conditions $C1$--$C3$ is determined by the topology of the pattern. As long as any change we make to the geometry of the drawing does not alter its topology, the drawing will still meet these conditions.
We now consider modifications to the geometry. We can turn our drawing into a non-periodic pattern by changing the spacing between lines.
For simplicity, we will restrict the allowed spacings to two values: long $(L)$ and short $(S)$. Under this restriction, the faces in the drawing are limited to four parallelogram classes: $SS$, $LL$, $SL$ and $LS$. The largest circle that fits inside every tile of the tile set has radius $(|S|\sin{\alpha})/2$ and the smallest circle that can contain every tile of the tile set has radius less than $|L|$. No pair of vertices in the tiling are closer to each other than $|S|/\sin{\alpha}$, and every vertex is at most $2|L|$ from some other vertex. Since $\alpha$ is greater than $0$, the vertices form a Delone point set, the faces are uniformly bounded, and $C0$ is satisfied.
Our goal is to create a pattern that is not periodic but that still has some regularity. To achieve this, instead of randomly choosing whether to use $L$ or $S$ spacing between two lines in a set, we will use a quasiperiodic word.
An infinite word $W$ is \emph{quasiperiodic} if and only if, for every finite substring $w$ of $W$, there exists an integer $b$ such that every substring of length $b$ in $W$ contains at least one copy of $w$~\cite{delvenne}. We can think of it as a quasiperiodic tiling in one dimension.
In Figure~\ref{fig:bigrid}, we show a pattern that uses the infinite binary Fibonacci word to determine the spacing.
The sequence of characters in the Fibonacci word is defined by the following recursive expansion relation: $\sigma(L)=LS$, $\sigma(S)=L$. For example, starting with the word $L$, we have $\sigma(L)=LS$, $\sigma^2(L)=LSL$, $\sigma^3(L)=LSLLS$, $\sigma^4(L)=LSLLSLSL$ and so on.
\begin{center}
\begin{minipage}{\textwidth}
\begin{center}
\imagepdftex{0.6\columnwidth}{bigrid2.pdf_tex}
\captionof{figure}[]{A simple quasiperiodic lace pattern created by superimposing two sets of parallel lines with interline spacing determined by the Fibonacci word: a) pattern with one path from osculating partition highlighted in bold red, b) pattern worked as bobbin lace, tallies (fat braids) at vertices incident to four squares of length $L$.}
\label{fig:bigrid}
\end{center}
\end{minipage}
\end{center}
To test compliance with $C4$, we will use the \emph{Hamming weight} of a binary word, which can be defined here as the number of occurrences of $S$ within that word. For example, substring $w=LLSLLSLSL$ of the Fibonacci word has Hamming weight $3$. Lothaire showed that the Fibonacci word is \emph{balanced}: given any two equal length substrings, their Hamming weights differ by at most one~\cite{lothaire}.
We partition our graph drawing into a set of osculating paths and consider one such path $P$. If $P$ follows two consecutive edges in line set $A$, it must transversely cross another path, contradicting the definition of an osculating path, therefore, the steps in $P$ alternate between an edge from line set $A$ and an edge from line set $B$.
Consider $\ell$, the least squares regression line (also known as the Deming regression line) derived from the vertex positions of $P$. A least squares regression line minimizes the sum, over all vertices $v$ in $P$, of the perpendicular distance from $v$ to $\ell$~\cite{glaister}.
A walk of $2k$ steps along path $P$ can be described by two equal length substrings of the Fibonacci word which we shall call $w_A$ (steps along consecutive lines in $A$) and $w_B$ (steps along consecutive lines in $B$). Because the line spacing of our bigrid is based on a balanced word, $w_A$ and $w_B$ have the same number of $S$ steps, plus or minus one, for all values of $k$.
Therefore, for any walk along $P$, the distance travelled along lines in set $A$, in a left to right direction relative to $\ell$ is equal or very nearly equal to the distance travelled along lines in set $B$, right to left relative to $\ell$. When a one step discrepancy occurs, it does not accumulate because $w_A$ and $w_B$ are balanced for all values of $k$.
We observe that $\ell$ is parallel to the bisector of the angle between $A$ and $B$. For the Fibonacci word, the maximum perpendicular distance from a vertex of $P$ to the line $\ell$ is $\frac{1}{2}(L+S)\sin{\alpha}$.
The final step to satisfy $C4$ is to orient the pattern so that line $\ell$ is vertical.
The Fibonacci word belongs to a larger family called the Sturmian words. This family of infinite binary words is characterized as being non-periodic and having a balanced Hamming weight~\cite{lothaire}. Additional lace patterns can be obtained by varying the line spacing using an alternative Sturmian word, such as the Octonacci or Thue--Morse sequences~\cite{walter}.
\begin{theorem}
The graph drawing induced by a bigrid, with line spacing determined by a Sturmian word, has a set of edge directions that satisfies conditions $C0$--$C4$ and thus forms a workable bobbin lace pattern.
\end{theorem}
In Figure~\ref{fig:bigrid}, we have our first example of a quasiperiodic bobbin lace ground. We also note that it has some local
$D_2$ and $D_4$ dihedral symmetry.
To illustrate the significance of geometry in the conservation of threads, we present
a counterexample that is homeomorphic to both the 2D periodic lattice and the 2D Fibonacci tiling discussed above. The topology of our counterexample obeys $C1$--$C3$ and the geometry even observes $C0$. The interline spacing, however, does not result in well-behaved osculating paths.
In our counterexample, the spacing between south-easterly lines is specified by the bi-infinite word $W_A:=\dots S^{64}L^{16}S^{4} LL S^{4}L^{16}S^{64}\dots$ and the spacing between south-westerly lines by $W_B:=\dots L^{64}S^{16}L^{4} SS L^{4}S^{16}L^{64}\dots$.
In both sequences, the number of consecutive, equally-spaced lines (i.e. consecutive $S$s or consecutive $L$s) grows exponentially out from the center, indicated by $O$ in Figure~\ref{fig:zigzag}. The key difference between $W_A$ and $W_B$ is that they are out of phase: when $W_A$ contains a stretch of $S$s, $W_B$ contains an equal number of $L$s. When the osculating path travels through a region of $SL$ parallelograms, it has a net negative slope; when the path moves into a region of $LS$ parallelograms, the net slope becomes positive. The area occupied by connected parallelograms of the same type grows exponentially as we move out from $O$ causing the osculating path to travel increasingly larger distances from the ideal least squares regression line. From a bobbin lacemaker's perspective, this means that in order to use the same set of threads from top to bottom to cover a rectangular patch with this ground, the minimum width of the patch must increase as the length of the patch increases, a contradiction to $C4$.
\vfigbegin
\imagepdftex{\columnwidth}{zigzag4.pdf_tex}
\vfigend{fig:zigzag}{Counterexample: Two sets of parallel lines for which an osculating path deviates an increasing amount from the least squares regression line, calculated from the positions of the vertices.}{Counterexample}
As shown in Figure~\ref{fig:bigrid}, we have now achieved the first part of our goal: to demonstrate the existence of non-periodic bobbin lace grounds. Aesthetically however, the pattern of holes produced by two sets of parallel lines is quite simple. In appearance, they resemble woven cloth; woven patterns derived from Sturmian words were previously introduced by Ahmed~\cite{ahmed}.
In bobbin lace, threads can travel in more than two directions which might permit more complex patterns.
In the next section we will demonstrate that the P3 tiling gives rise to a bobbin lace pattern.
\section{Lace pattern from the Penrose thin and thick rhombs}
The quasiperiodic tiling commonly known as the \emph{Penrose thick and thin rhomb} tiling (or, for brevity, the \emph{P3} tiling) was discovered independently by Roger Penrose and Robert Ammann in the late 1970s~\cite{grunbaum}. One of its remarkable properties is that it has regions of fivefold symmetry which occur at larger and larger scales, ultimately leading to two particular configurations that have global 5-fold dihedral symmetry. This tendency toward fivefold symmetry also shows up in the Bragg diffraction pattern of P3. Recall that global 5-fold symmetry cannot occur in a periodic pattern.
The set of prototiles used in P3 consists of two rhombs with interior angles as indicated in Figure~\ref{fig:penrose}a. It is possible to create a periodic tiling from one or both of these prototiles, so the shapes are decorated in a way that prohibits periodic configurations. Such decorations are called \emph{local matching conditions}. For example, in Figure~\ref{fig:penrose}a, the edges are marked with arrows. Only tile edges with the same arrow shape and direction can be placed together as shown in Figure~\ref{fig:penrose}b.
\vfigbegin
\imagepdftex{\columnwidth}{penrose3.pdf_tex}
\vfigend{fig:penrose}{P3 tiling: a) prototiles with matching rules, b) a patch of P3, c) deflation rules d) two steps of deflation applied to the patch in (b).}{P3 decorated rhombs}
There are several ways to construct a patch of P3. In this paper, we will look at two methods which will provide insight into how a bobbin lace pattern can be derived from this tiling.
\textbf{Deflation and matching:} A recursive process can be used to grow a small patch of tiles into a patch of unbounded size. It starts with a finite configuration known to appear in the pattern. For example, for P3 it could be as simple as a single thin or thick rhomb. At each iteration, the patch is scaled up by a factor, which for P3 is $\tau = (1+\sqrt{5})/2$, and each tile in the patch is subdivided (deflated) into tiles or parts of tiles from the tile set. Note that the orientations of the new tiles are determined by the markings of the original tile. For P3, the subdivision of thin and thick rhombs is shown in Figures~\ref{fig:penrose}c~and~d.
This deflation can be iterated to produce a patch of any desired size, and defines a tiling of the entire plane in the limit.
A quasiperiodic tiling that can be generated using the deflation and matching technique has the property of being \emph{self-similar}.
\textbf{Generalized dual method (GDM):} This method, discovered by de Bruijn~\cite{deBruijn1981}, creates a tiling from its topological dual. It starts with $n$ sets of equally spaced parallel lines, called an \emph{$n$-grid} or \emph{multigrid}, and a star of unit vectors, each vector being orthogonal to one of the sets of lines.
For P3, the orientation star, given by $\{\vec{e_i} = (\cos(2\pi i/5), \sin(2\pi i/5))\}_{i=1}^{5}$, is regular (angles are equal) as shown in Figure~\ref{fig:pentagrid}b.
The line sets are in \emph{general position}: that is, at any point only two lines intersect.\footnote{
For details on how to avoid degenerate intersections in the multigrid, we refer the reader to more in-depth discussions~\cite{socolar, boyle, egan}.
}
As in the previous section, we derive a planar graph embedding from the multigrid by placing a vertex at the intersection point of every pair of lines.
The next step is to construct the dual of the multigrid. Given a primal graph embedding, its \emph{dual} is another graph embedding in which every face in the primal becomes a vertex in the dual, every vertex in the primal becomes a face in the dual, and an edge in the dual is incident to a pair of vertices that correspond to adjacent faces in the primal.
To create a quasiperiodic tiling from a multigrid, the GDM-dual is defined using the following rule: Within a set of parallel lines, each line is assigned an integer index $k$.
Each face of the multigrid is assigned an $n$-tuple $\textbf{p}{=}(p_1,\cdots,p_{n})$ consisting of ordinal positions, one for each of the $n$ sets of lines. That is, a face that lies between lines $k$ and $k+1$ in the $i$th set of lines will have $p_i=k$. The position of a vertex in the dual is then given by $f(\textbf{p}){=}\sum_{i=1}^{n}p_i \vec{e_i}$.
\begin{center}
\begin{minipage}{\textwidth}
\begin{center}
\imagepdftex{0.9\columnwidth}{pentagrid.pdf_tex}
\captionof{figure}[]{Generalized dual method for P3 tiling: a) pentagrid, b) orientation star, c) intersections with a single line, d) corresponding tiles in GDM-dual.}
\label{fig:pentagrid}
\end{center}
\end{minipage}
\end{center}
The intersection of two lines in the multigrid results in a rhombus in the GDM-dual.
In Figures~\ref{fig:pentagrid}c~and~d, we demonstrate the GDM for P3 by following a single line $r$ in the 5-grid (also known as \emph{pentagrid}).
The result is a sequence of rhombi, each of which has a pair of edges perpendicular to $r$. This sequence of rhombi is called a `stack' which we define more formally as follows:
\begin{definition}[\cite{deBruijn2013}]
Select an edge $e$ of a parallelogram $p$ in an edge-to-edge tiling by parallelograms. In the tiling, there are two parallelograms $p'$ and $p''$ that are adjacent to $p$ and have an edge parallel to $e$. We will call $p'$ and $p''$ the $e$-neighbours of $p$. A chain of tiles is obtained by taking the $e$-neighbours of $p'$ and $p''$ and iteratively extending this relationship across the tiling. The resulting bi-infinite sequence of tiles is called a \emph{stack}.
\end{definition}
Applying the GDM to a pentagrid, the set of lines parallel to $r$ will result in a set of stacks, all of which have edges perpendicular to $r$. This collection of stacks is called a \emph{stack family}.
Two relationships in the GDM-dual are worth noting. First, the acute and obtuse angles formed by the intersection of two lines in the multigrid correspond to the angles of either a thin rhomb or a thick rhomb in the tiling. Second, in the primal multigrid, only two lines intersect at a point. Thus, in the dual tiling, each tile belongs to exactly two stacks.
Now that we are familiar with the P3 tiling, let us turn to the question of whether it can be a pattern for bobbin lace.
The pentagrid has the right topology for bobbin lace, but its geometry does not satisfy $C0$. The P3 tiling, on the other hand, is well behaved geometrically but its graph structure is not workable. We define a kind of dual that marries the topology of the pentagrid with the geometric arrangement of P3, ultimately producing a pattern that can be rendered in lace.
This dual, which we shall call the \emph{centroid-dual} of the tiling, is created by applying the following rule to the P3 tiling: a vertex in the dual graph drawing is placed at the centroid of the corresponding face in the primal tiling.
The centroid of a face is its center of gravity,
which can be determined from the positions of the vertices on the boundary of the face by assuming that the face polygon has uniform density.
An example of the resulting centroid-dual is shown in Figure~\ref{fig:dualP3Proof}c.
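
For reference, the centroid of a face with uniform density can be computed
from its boundary vertices with the standard area-weighted (shoelace)
formula; the Python sketch below is a generic implementation and is not
specific to P3.
\begin{verbatim}
def polygon_centroid(vertices):
    """Area centroid (center of gravity, uniform density) of a simple polygon
    given as a list of (x, y) vertices in boundary order.  For the rhombic
    faces of P3 this coincides with the intersection of the diagonals."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6.0 * a), cy / (6.0 * a))

print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))  # (0.5, 0.5)
\end{verbatim}
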
To assign a direction to each edge of the centroid-dual, we note that there is a homeomorphism between the centroid-dual and the pentagrid~\cite{deBruijn2013}. We first orient the lines of the pentagrid by choosing a vector that is not perpendicular to any set of lines and rotating the pentagrid so that this vector points up. A down direction can be unambiguously assigned to each of the lines in the pentagrid. Each edge of the centroid-dual is given the same direction as its topological counterpart in the pentagrid.
Note that an edge in the centroid-dual may be horizontal or even angled slightly up but its corresponding edge in the pentagrid always points down.
\begin{theorem}
\label{thm:penrosep3}
The edges of the centroid-dual of the P3 tiling can be assigned directions in such a way that the resulting graph satisfies $C0$--$C4$. Therefore, the centroid-dual of the P3 tiling is a workable bobbin lace pattern.
\end{theorem}
\begin{proof}
$C0$: All quasiperiodic patterns that can be created by the deflation and matching method, such as the P3 tiling, have vertices that form a Delone point set~\cite{senechal2008}. Therefore, there exists a distance $d$ such that every pair of vertices in P3 is separated by a distance at least $2d$ and a distance $D$ such that every circle of radius $D$ contains at least one point.
The two prototiles of P3, the thin and thick rhombs, are uniformly bounded. Therefore there exists a largest incircle of radius $r$ and a smallest containing circle of radius $R$.
P3 satisfies $C0$, but we need to prove that this is also true of the centroid-dual of P3.
First, let us prove that the vertices of the centroid-dual of P3 form a Delone point set.
We observe that the incircle of a rhomb is centered at its centroid.
The incircles of any pair of rhombs in the tiling cannot overlap; therefore, the distance between any pair of vertices in the centroid-dual is at least $2r$.
In the primal graph, a circle of radius $D$ contains at least one vertex $v$. This vertex is incident to a face $f$ which is bounded by a circle of radius $R$. The centroid of $f$ is at most a distance $R$ from $v$. A circle of radius $D+R$ will therefore contain at least one vertex of the centroid-dual.
Now we must prove that the prototiles of the centroid-dual of P3 are uniformly bounded. De Bruijn showed that there are eight distinct vertex configurations in P3, as shown in Figure~\ref{fig:dualP3C0}a~\cite{deBruijn1981}. Each face in the centroid-dual maps to a vertex in the primal. The centroid of a rhomb is located at the intersection of its diagonals. Therefore, we can derive the set of dual prototiles directly from the primal vertex configurations, as shown in Figure~\ref{fig:dualP3C0}b. We conclude that the centroid-dual tiling is 7-hedral and therefore uniformly bounded.
\begin{center}
\begin{minipage}{\textwidth}
\begin{center}
\imagepdftex{\columnwidth}{centroid_dual_p3.pdf_tex}
\captionof{figure}[]{Demonstrating that the centroid-dual tiling of P3 is 7-hedral: (a) eight vertex configurations of P3, (b) prototiles of centroid-dual in bold red.}
\label{fig:dualP3C0}
\end{center}
\end{minipage}
\end{center}
$C1$: Every face in the primal graph drawing is a rhomb and therefore every vertex in the dual graph drawing has degree 4.
Each vertex in the centroid-dual is homeomorphic to the intersection of two directed lines in the pentagrid. Therefore, by homeomorphism, each vertex in the dual graph has in-degree~2 and out-degree~2.
$C2$: The P3 tiling is planar, as is its dual. Every vertex in an edge-to-edge tiling by rhombs has degree 3 or more; therefore, every face in the dual also has degree at~least~3.
$C3$:
The pentagrid was rotated so that all of its line sets are directed downward. Since it has no upward edges, the pentagrid has no directed cycles. By homeomorphism, the same can be said of the centroid-dual drawing.
The proof of $C4$ requires a few steps. First we will prove that a stack in P3 is well-behaved.
A stack is well-behaved if there exists a line $\ell$ such that every tile vertex in the stack is within a finite distance $s$ from $\ell$.
We will then show that in P3, every stack in a stack family contains an osculating path from the centroid-dual of P3 that travels strictly within the stack. Finally, we will show that the stacks are uniformly distributed and therefore any osculating path in the centroid-dual that is not contained within a stack is sandwiched between well-behaved osculating paths and is therefore also well-behaved.
We start by demonstrating that a stack in P3 is well-behaved. We phrase the lemma in more general terms for future use.
\begin{lemma}
Let $T$ be a quasiperiodic tiling generated by applying the generalized dual method to a multigrid consisting of $n$ sets of equally spaced lines with a regular orientation star. The stacks of $T$ are well-behaved.
\label{thm:wellstack}
\end{lemma}
\begin{proof}
Select a line $k$ from the multigrid. For ease of explanation, let us assume that $k$ is a vertical line belonging to $A_0$, the set of vertical lines. See, for example, the vertical red line in Figure~\ref{fig:pentagrid}c.
Because the orientation star is regular, the remaining sets of parallel lines can be paired up, with set $A_i$ being the set that intersects $k$ at angle $\theta_i$ and $A_i'$ being the set that intersects line $k$ at angle $-\theta_i$, for $1 \le i \le \floor{(n-1)/2}$.
When $n$ is even, the one remaining set, $A_{\ceil{n/2}}$, will be horizontal.
The translational spacing between lines within the set is the same for all sets, therefore, the distance between the points where lines in set $A_i$ intersect line $k$ is equal to the distance between the points where lines of set $A_i'$ intersect line $k$.
The subsequence formed by the intersections of $A_i$ and $A_i'$ with line $k$ must therefore alternate (one set of intersections cannot `catch up and pass' the other). This alternation between lines with opposite angle is known as the alternation condition and applies for all values of $i$.
The rhombs in the GDM-dual resulting from an intersection of line $k$ and any line from $A_i$ tilt to the left of the line $k$, making the stack bend to the left. Rhombs resulting from an intersection of line $k$ with a line from $A_i'$ tilt to the right with the same magnitude, making the stack bend an equal amount to the right. Because the rhombs in the stack alternate between $A_i$ and $A_i'$, the amount of bend left or right is balanced making the stack well-behaved.
When the number of non-vertical sets is odd, the orthogonal intersection of a line from $A_{\ceil{n/2}}$ with line $k$ corresponds to a square that is edge aligned with line $k$ and therefore has no bending effect on the stack.
\end{proof}
Now we will show that each stack in P3 contains an osculating path and, by inclusion, the path is therefore also well-behaved.
\begin{lemma}
Let $T$ be a P3 tiling and let $T'$ be a deflation of $T$ according to the deflation and matching method.
For each stack $s$ of $T$, there exists a path in the osculating partition of the centroid-dual of $T'$ that does not cross the boundary edges of $s$.
\end{lemma}
\begin{proof}
Proof by exhaustion: We will consider each of the tile configurations that can occur in a stack of $T$ (up to rotation by $\pi$, vertical or horizontal reflection) and demonstrate that each configuration includes a subpath of an osculating path in the centroid-dual of $T'$. Further, when two configurations are joined together, their osculating subpaths connect.
Once again, from de Bruijn's extensive analysis of P3, we know that there are seven arrangements of tiles to consider (see Figure~\ref{fig:dualP3Proof}a).
\begin{lemma}\cite{deBruijn2013}
A \emph{central tile configuration} is a central tile and its four direct neighbours, each neighbour having one edge in common with the central tile. For P3, there exist seven distinct central tile configurations up to rotation. Further, the matching conditions can be applied to each configuration in only one way.
\end{lemma}
Let $t$ be the central tile of a configuration. For convenience of discussion, we will assume that the edge $e$ which defines the stack $s$ is horizontal. In Figure~\ref{fig:dualP3Proof}a, we have highlighted one stack through the central tile of each central tile configuration in white. Every tile belongs to two stacks; however, choosing the alternate stack $s'$ in each configuration and orienting $s'$ so the defining edge is horizontal will result in the same set of drawings up to reflection in a vertical mirror.
Within $t$, select the edges of the centroid-dual that extend from a horizontal edge of $t$ to a non-horizontal edge of $t$ or vice versa, see Figure~\ref{fig:dualP3Proof}d. We observe that in all seven central tile configurations these edges exist and connect to form a direct path from the top of $t$ to the bottom of $t$. Further, the tile preceding $t$ in $s$ and the tile following $t$ in $s$ also have such a path and these three paths are all connected.
The path thus formed, contained within the boundary of $s$, does not transversely cross a pair of edges in the centroid-dual drawing, so it is a subpath of an osculating path. Under rotation by $\pi$ or reflection in the horizontal or vertical direction, the selected edges remain unchanged.
The continuation in preceding and succeeding tiles proves that the subpaths connect to form an infinite path.
\vfigbegin
\imagepdftex{\columnwidth}{dualP3_proof3.pdf_tex}
\vfigend{fig:dualP3Proof}{Proof of $C4$ for the centroid-dual of P3; white tiles indicate stacks: a) the 7 central tile configurations of the P3 tiling $T$, b) deflation of the configurations in (a), c) centroid-dual of (b), d) overlay of (a) and (c), e) a patch of P3 overlaid with the osculating partition of the centroid-dual.}{Proof of C4 for centroid-dual of P3}
\end{proof}
Finally we show that the stacks of P3 are uniformly distributed throughout the tiling
and therefore there is also a uniform distribution of well-behaved paths from the osculating partition of the centroid-dual.
\begin{lemma}
\label{thm:uniformstack}
In P3, the stacks of a stack family are uniformly distributed throughout the tiling.
\end{lemma}
\begin{proof}
Let $S_e$ be the stack family defined by all edges parallel to $e$.
Select a tile $t$ from one stack $s$ in $S_e$. A second stack $r$, defined by edge $f$, also passes through $t$ and intersects not only $s$ but all stacks in $S_e$.
The tiles at these intersection points have two sides parallel to $e$ and two sides parallel to $f$ and therefore are all translations of $t$ along $r$. We know from the generalized dual method that these congruent tiles
are dual to a periodic set of lines intersecting with the line dual to $r$. They are therefore uniformly distributed along $r$. We can therefore conclude that the stacks of $S_e$ are uniformly distributed in the tiling.
\end{proof}
By Lemmas~\ref{thm:wellstack}~and~\ref{thm:uniformstack}, we have shown that there exists a set $W$ of uniformly distributed, well-behaved osculating paths in the oriented centroid-dual of the P3 tiling. The set $W$ is a subset of the osculating partition of the tiling. However, any osculating path not in $W$ is sandwiched between two paths that are in $W$ and must therefore also be well-behaved. This concludes the proof of $C4$.
\end{proof}
We have demonstrated that the P3 tiling gives rise to a lace pattern. Based on empirical exploration, we conjecture that other self-similar quasiperiodic tilings, such as the Ammann-Beenker tiling used to create the lace in Figure~\ref{fig:lace2}, will also produce lace patterns.
\begin{conjecture}
Let $T$ be a quasiperiodic tiling by edge-to-edge rhombs created by applying the generalized dual method to sets of periodically spaced parallel lines and let $T'$ be the centroid-dual of $T$. For all $T$ there exists a set of edge directions of $T'$ that satisfies $C0$--$C4$ and thus forms a workable bobbin lace pattern.
\end{conjecture}
\section{Lace pattern from the Ammann bar decoration of P3}
There are several ways in which the P3 tiles can be decorated to enforce the local matching rules. One of the best known is the set of arcs proposed by Conway, which form attractive curved designs with local fivefold symmetry. Another significant decoration is the Ammann bars, shown in Figure~\ref{fig:ammann}a. The tiles are decorated with line segments, and the local matching rule requires that adjacent tiles extend the segments, without bending, to form infinite lines in the limit.
Ammann bars were discovered by several people in the late 1970s~\cite{grunbaum} and named after Ammann because he was
the first to recognize that they clearly illustrate how the matching rules affect not just adjacent tiles but also tiles at a distance.
As with the pentagrid, the Ammann bars fall into five families of parallel lines in general position.
The orientation star of the sets is regular, with successive directions separated by $\pi/5$. A key difference between the Ammann bars and the pentagrid is that the spacing between lines is not periodic but rather a sequence of long and short spaces corresponding to a Fibonacci word. Whereas the faces of the pentagrid can be vanishingly small and belong to an infinite set of shapes, the Ammann bars divide the plane into a finite set of distinct tile shapes.
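
For illustration only (a hypothetical sketch, not taken from the original sources), the long/short spacing sequence within one family of Ammann bars can be generated as a Fibonacci word by repeated substitution; the absolute spacings below are placeholders, with the long-to-short ratio taken to be the golden ratio.
\begin{verbatim}
# Hypothetical sketch: Fibonacci word via the substitution L -> LS, S -> L,
# turned into line positions for one family of bars.  The absolute spacings
# are placeholders; only their ratio (taken here as the golden ratio) matters.
def fibonacci_word(iterations):
    w = "L"
    for _ in range(iterations):
        w = "".join("LS" if c == "L" else "L" for c in w)
    return w

PHI = (1 + 5 ** 0.5) / 2
long_s, short_s = PHI, 1.0
word = fibonacci_word(6)                   # "LSLLSLSLLSLLS..."

positions, x = [0.0], 0.0
for c in word:
    x += long_s if c == "L" else short_s
    positions.append(x)
print(word[:13], positions[:5])
\end{verbatim}
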
We can create a graph drawing from the Ammann bars, just as we have done in previous examples, by placing a vertex at every line intersection and an edge between consecutive pairs of vertices along a line. A set of edge directions is assigned by choosing an up vector, rotating the tiling until no line in the Ammann bars is horizontal and directing all lines downward.
\begin{theorem}
There exists a set of directions for the edges of the graph drawing derived from the Ammann bar decoration of the P3 tiling that satisfies conditions $C0$--$C4$ and thus forms a workable bobbin lace pattern.
\end{theorem}
\begin{proof}
$C0$: A patch of the Ammann bar decoration for P3 can be obtained via iterative substitution, independent of the tiling it decorates. It is self-similar; therefore, the vertices form a Delone point set~\cite{senechal}. The Ammann bars divide the plane into cells drawn from a finite set of prototiles, from which we can observe that the faces are uniformly bounded.
$C1$: Each vertex is the intersection of two directed lines, therefore, it has in-degree~2 and out-degree~2.
$C2$: A vertex is placed at every line crossing, so the graph embedding is planar. Further, the straight lines prevent self-loops or bigons.
$C3$: Since the graph drawing has no upward edges it has no directed cycles.
$C4$: As shown in Figure~\ref{fig:ammann}, we again exhaustively examine each of the seven de Bruijn configurations and show that every stack in the P3 tiling contains an osculating path from the Ammann bar decoration. Using Lemmas~\ref{thm:wellstack}~and~\ref{thm:uniformstack}, which establish that the rhombs of the P3 tiling form well-behaved stacks that are uniformly distributed throughout the tiling, we can conclude that an osculating partition of the Ammann bars of P3 contains a uniform distribution of well-behaved paths. Because the remaining paths are sandwiched between well-behaved paths, all paths in the partition are well-behaved.
\vfigbegin
\imagepdftex{\columnwidth}{ammann2.pdf_tex}
\vfigend{fig:ammann}{Ammann bars for P3: a) prototiles with bar decorations, b) a patch of P3, c) a path from the osculating partition appears in each of de Bruijn's central tile configurations.}{Ammann bars}
\end{proof}
The Ammann bar decoration of P3 can be generalized to decorate a family of quasiperiodic tilings. The defining characteristic of these decorations is that the line segments in adjacent tiles align to form straight lines in the infinite tiling, lines which we refer to as \emph{Ammann lines}. Boyle and Steinhardt give a detailed analysis of these patterns~\cite{boyle} and use this generalization to generate new tilings with matching rules. We note that some of their Ammann line patterns, such as the new 12-fold-B and 12-fold-C patterns, are not in general position.
\begin{conjecture}
For every set of Ammann lines in general position there exists a set of edge directions for the associated graph drawing that satisfies $C0$--$C4$. Therefore Ammann lines form a workable bobbin lace pattern.
\end{conjecture}
\section{Assigning a braid word mapping}
As mentioned in Section~\ref{sec:problem}, a lace pattern consists of two elements: a graph drawing and a mapping from each vertex to a braid word.
Deciding which braid word to use at each vertex is an artistic choice: any combination of crosses and twists containing at least one cross will work.
In a periodic pattern, we would typically select a unit cell and assign a braid word to each vertex in that parallelogram, paying attention to any symmetry elements that we wish to emphasize and considering how symmetrically related vertices are positioned in space.\footnote{When one vertex is rotated relative to another, the edge directions around the two vertices may be quite different. It may not be possible to construct two braid words that look sufficiently similar in both orientations.
}
The same braid word mapping would then be applied to all translations of the unit cell. We can therefore describe the mapping of an unbounded number of vertices with a finite number of braid words.
In a quasiperiodic pattern, there is no unit cell. How does this affect the assignment of braid words?
Fortunately, for the patterns explored here, there is a repeating unit called a quasi-unit cell~\cite{steinhardt1999}. Roughly speaking, a unit cell tiles the plane without gaps or overlaps whereas copies of a quasi-unit cell are allowed to overlap, creating a covering (as opposed to a tiling) of the plane. So once again we are able to consider a single subpatch of the pattern, make some artistic choices and apply them to the entire pattern.
The quasi-unit cell has a finite number of vertices but, unlike the unit cell, it can appear in a finite number of distinct orientations, all of which must be considered. To simplify the process, we first look at the number of twists that we want to appear in two threads travelling between braids, label the edges of the drawing accordingly, and use the edge labels to determine the braid word at each vertex for each rotation. This process is illustrated in Figure~\ref{fig:design} for the lace appearing in Figure~\ref{fig:lace2}.
\vfigbegin
\imagepdftex{\columnwidth}{design2.pdf_tex}
\vfigend{fig:design}{Quasi-unit cell and braid word considerations for `Nodding bur-marigold': a) quasi-unit cell overlaid on Ammann-Beenker tiles, b) edges labelled with twist information, c) three copies of the quasi-unit cell showing the overlap of a face and two different braid words assigned to the same vertex of the quasi-unit cell based on rotation.}{Design}
\section{Artistic results and conclusion}
\vfigbegin
\imagepdftex{\columnwidth}{lace_p3_bars.pdf_tex}
\vfigend{fig:laceP3bars}{`Ammann's web', Veronika Irvine 2019: Ammann P3 bar pattern worked in DMC~Cordonnet~Special~80 cotton thread.}{Lace Ammann bars of P3}
Using the patterns identified in the previous sections, we created the lace samples shown in Figures~\ref{fig:lace2},~\ref{fig:bigrid},~\ref{fig:laceP3bars}~and~\ref{fig:laceP3dual}. To the best of our knowledge, these are the first quasiperiodic bobbin lace pieces.
Traditional bobbin lace is monochromatic. Because the regularity in quasiperiodic patterns is less obvious, we use colour to draw attention to some of the repeated elements. In Figure~\ref{fig:lace2}, the petal shape is emphasized by yellow threads which are wrapped around two continuous white threads. The wrapping, which is a variation on the traditional grand Venetian cord~\cite{cook}, is performed as the lace is constructed. In Figure~\ref{fig:laceP3bars}, we used hot pink threads to emphasize the five directions of the parallel line sets. Where the pink threads intersect, they form a pentagram. The pink pentagram is inside a white pentagon which forms the core of a white pentagram, which intersects a larger white pentagon. These concentric pentagrams continue to the edges of the piece. Larger copies are a little harder to spot amongst the crisscrossing lines. In Figure~\ref{fig:laceP3bars}, it is also possible to spot nearly circular shapes formed by copies of the quasi-unit cell.
\vfigbegin
\imagepdftex{0.75\columnwidth}{lace_p3_centroid.pdf_tex}
\vfigend{fig:laceP3dual}{Two variations worked from the centroid-dual of P3 pattern.}{Lace P3}
As a practical observation, we note that the faces in the Ammann bar pattern range significantly in size. This makes it challenging to choose the scale of the pattern relative to the thickness of the thread being used. Edge lengths in the centroid-dual of P3 are more regular and therefore much easier to work with.
In this paper we have laid the groundwork for exploring non-periodic patterns as bobbin lace grounds by presenting a generalized model.
We have proven the existence of simple quasiperiodic lace patterns based on Sturmian words as well as two richer self-similar patterns based on the P3 tiling and its Ammann bar decoration. We have conjectured that the larger families to which these patterns belong also satisfy the conditions of bobbin lace.
\section*{Acknowledgements}
We would like to thank Robert Lang for making the recommendation to explore Ammann bars and Latham Boyle for his help with understanding generalized Ammann patterns. This research was supported by NSERC.
\section{Introduction}
In recent years, there has been increasing interest in so-called
multiferroic
materials, displaying simultaneously spontaneous ferroelectric (FE) polarization
and ferro- or
antiferromagnetic (AFM) ordering. Multiferroics exhibit a rich variety
of fundamental
physical phenomena, and it is generally believed that they have a
potential for novel applications in non-volatile memories \cite{Scott07,Roy12},
magnonics~\cite{Kruglyak10} and magnetic sensors \cite{Nan08}. These
applications would rely on the coupling of order parameters on various time
scales, from quasi-static to ultrafast. However, the understanding of the
microscopic mechanism of the magnetodielectric coupling is still
a fundamental problem of solid state physics. The static and dynamic
magnetoelectric (ME) couplings can have different origins. Owing to the
static ME coupling, the macroscopic FE polarization emerges in the cycloidal
or transverse conical modulated magnetic structures; this polarization can
change with magnetic field. In contrast, the dynamic ME coupling generates an
oscillatory polarization and leads to a dielectric dispersion in the terahertz
(THz) region. Indeed, THz studies of
multiferroics revealed a new kind of electric-field-active spin excitations
contributing to the dielectric permittivity $\varepsilon =
\varepsilon^{\prime}-\mbox{i}\varepsilon^{\prime\prime}$, called
electromagnons (EMs)~\cite{Pimenov06}. Their characteristic feature is a
coupling with polar phonons, which
manifests itself in the spectra by a transfer of dielectric strength from
phonons to
EMs on cooling \cite{ValdesAguilar07}. In contrast to ferromagnetic and AFM
resonances, which are magnons from the Brillouin zone (BZ) center contributing
to the
magnetic permeability $\mu= \mu^{\prime}-\mbox{i}\mu^{\prime\prime}$, the EMs
can be activated also outside of the BZ center
\cite{ValdesAguilar09,Takahashi12,Stenberg12,Mochizuki10}. The
understanding of this fact is not trivial, because the photons which excite EMs
have wavevectors much smaller than those of the EMs. Thus, to date, there are several
different theories attempting to explain the observed properties of EMs in
various materials \cite{ValdesAguilar09,Stenberg12,Mochizuki10,Khomskii09}.
The EMs were discovered first in TbMnO$_3$ and GdMnO$_3$ \cite{Pimenov06}
which belong to multiferroics denoted~\cite{Khomskii09} as type II, where the FE
order is induced by a special magnetic ordering. Since then, EMs were confirmed
in numerous type-II
multiferroics~\cite{ValdesAguilar09,ValdesAguilar07,Sushkov07,Sushkov08,Pimenov08,Kida09,Seki10,Kezsmarki11,Shuvaev11}.
Other reports of EMs in type-I multiferroics (e.g.
BiFeO$_{3}$~\cite{Cazayous08,Talbayev11,Komandin10} or hex-YMnO$_3$
\cite{Pailhes09}) appear inconclusive, since no transfer of the dielectric
strength from polar phonons to EMs was observed~\cite{Cazayous08,Pailhes09}.
Also, recent infrared (IR) and THz studies did not confirm the EM in hex-YMnO$_3$
\cite{Kadlec11}.
Here we report experiments which reveal an excitation identified as an EM in
the ferrimagnetic $\varepsilon$ phase of $\rm Fe_2O_3$. Thanks to
its chemical simplicity, this phase appears also as a suitable model system for
theoretical studies of electromagnonic excitations. While $\varepsilon$-$\rm
Fe_2O_3$ is quite rare and less known than the $\alpha$ (hematite) or
$\gamma$ (maghemite) phases of $\mbox{Fe}_2\mbox{O}_3$~\cite{Machala11}, its
properties make it attractive for applications, such as electromagnetic-wave absorbers
and memories~\cite{Namai08,Namai12,Ohkoshi07}. Owing to limited phase stability, it can
be synthesized only in the form of nanoparticles tens of nanometers in size
\cite{Namai12,Tucek10}, epitaxial thin films~\cite{Gich10} or nanowires a few micrometers long \cite{Ding07}.
Below 480--495\,K, it is ferrimagnetic~\cite{Jin04,Sakurai05}; at room temperature, it
has a collinear spin structure~\cite{Tucek11} and exhibits a coercive
field of $H_{\rm c}\approx 2\,\mbox{T}$~\cite{Jin04}---the highest known value among
metal oxides. The crystal lattice has a temperature-independent
non-centrosymmetric orthorhombic structure with the $Pna2_1$ space group
\cite{Tronc98} (magnetic space group $Pn^{\prime}a2^{\prime}_1$). It consists of
three crystallographically non-equivalent
$\rm FeO_6$ octahedra, forming chains along the $a$ direction, and one type of $\rm
FeO_4$ tetrahedra~\cite{Tucek10,Gich06}. Compared to isostructural GaFeO$_{3}$, the
low-temperature phase diagram of $\varepsilon$-$\rm Fe_2O_3$ is complex---below
150\,K, a series of magnetic phase transitions occurs. Below $T_{\rm m}=110\,\mbox{K}$, an incommensurate
magnetic ordering appears where the magnetic structure
modulation has a periodicity of about 10
unit cells \cite{Gich06}. Near
$T_{\rm m}$, a drop in $\varepsilon^{\prime}$ was observed, and magnetocapacitive
measurements revealed a quadratic coupling \cite{Gich06a}.
Room-temperature microwave measurements provided evidence of a strong ferromagnetic
resonance (FMR) near 0.74\,\mbox{meV} (frequency of 180\,GHz) which can be tuned by
doping with Al, Ga or Rh~\cite{Namai08,Ohkoshi07,Namai12}. In order to gain insight into
the dynamic ME properties of $\varepsilon$-$\rm Fe_2O_3$, we obtained
THz, IR and inelastic neutron scattering (INS) spectra of $\varepsilon$-$\rm Fe_2O_3$ nano-grain ceramics
upon cooling down to 10\,K, providing information about polar and magnetic excitations.
\section{Samples and experimental methods}
The nanoparticles of $\varepsilon$-$\rm Fe_2O_3$ were synthesized by sol-gel
chemistry. $\rm SiO_2$-$\rm Fe_2O_3$ composite gels containing 30 wt.\,\% of $\rm
Fe_2O_3$ were prepared from iron nitrate nonahydrate (Sigma-Aldrich $>98\%$) and
tetra\-ethoxy\-silane (TEOS, Sigma-Aldrich 98\%) in hydroethanolic medium at
TEOS:H$_2$O:EtOH = 1:6:6 molar ratio. Iron nitrate was first dissolved and then
TEOS added dropwise to the mixture under stirring. The sol was poured into 5\,cm
diameter petri dishes that were closed with their covers, and gelation took place
over 4 to 5 weeks. The gels were dried overnight in an oven at 70\,\dgC,
crushed and thermally treated in air for 3 hours at 1100\,\dgC\
(heating rate 80\,\dgC/h). The resulting material was a composite of
$\varepsilon$-$\rm Fe_2O_3$ nanoparticles of about 25\,nm in diameter dispersed
in an amorphous $\rm SiO_2$ matrix as checked by X-ray diffraction (XRD) which
did not reveal any trace of other $\rm Fe_2O_3$ polymorphs. The silica was
removed by stirring the composite powder for 12\,h in a 12M aqueous NaOH
solution at 80\,\dgC\ under reflux. XRD patterns recorded after the silica
removal revealed that the microstructure and the phase stability of
$\varepsilon$-$\rm Fe_2O_3$ nanoparticles were not affected by the etching
process. The nanoparticles were further processed by spark plasma sintering
(SPS) in order to prepare a pellet suitable for dielectric, terahertz (THz) and
IR measurements by pressing the
$\varepsilon$-$\rm Fe_2O_3$ powder in a graphite mould for 4 minutes at
350\,\dgC\ under 100\,MPa. The XRD analysis of the sintered pellet showed that
the SPS process did not induce any grain growth or phase transformation. Finally, the SPS pellets were polished to thin disks with a thickness of 1.2\,mm. Some
IR and THz measurements were performed on $\varepsilon$-$\rm Fe_2O_3$ pellets
with a diameter of about 6\,mm, which were prepared from powder at room
temperature using a standard tabletop manual hydraulic press (Perkin Elmer). The
spectra were qualitatively the same; only the value of the high-frequency IR
reflectance was affected by the roughness of the sample surface, which could not
be polished.
IR reflectance measurements with a resolution of 0.25\,meV were
performed using the Fourier transform infrared spectrometer Bruker
IFS-113v in near-normal reflectance geometry with an incidence angle of
$11^{\circ}$. An Oxford Instruments Optistat optical cryostat with
polyethylene windows was used for sample cooling down to 10\,K, and a
liquid-He-cooled Si
bolometer operating at 1.6\,K was used as a detector. We also measured far-IR
reflectivity with applied magnetic field up to 13\,T. To this aim,
another Bruker IFS-113v spectrometer and a custom-made superconducting
magnetic cryostat allowing the measurements at 2 and 4\,K were used.
Time-domain THz
spectroscopy was based on measurements of sample transmittance using custom-made
spectrometers based on Ti:sapphire femtosecond lasers; one with an Optistat
cryostat with mylar windows for measurements without magnetic field but with a
higher frequency resolution, allowing us to discern the FMR profile, and one
with an Oxford Instruments Spectromag cryostat, enabling measurements with
magnetic
field of up to 7\,T. Here, the Voigt configuration was used with the external
static magnetic field
$B_{\rm ext}$ perpendicular to the magnetic component of the THz radiation
$B_{\rm THz}$. Similar
effects were observed also for $B_{\rm ext}\parallel B_{\rm THz}$.
INS experiments were performed between 10 and 190\,K using about 3\,g of
loose $\varepsilon \mbox{-Fe}_2\mbox{O}_3$ nanopowder in the IN4
time-of-flight spectrometer at the Institut Laue-Langevin in Grenoble, France.
\section{Results and discussion}
\subsection{Broad-band study of the electromagnetic response.}
\begin{figure}
\raggedright
\hspace*{2.5pt}\includegraphics[width=0.867\columnwidth]{Fig1a.eps}
\includegraphics{Fig1bc.eps}
\caption{(a) Lines: IR reflectivity spectra showing polar phonons. Symbols
below 8\,\mbox{meV}: data calculated from THz spectra. The
inset shows in detail the low-energy part where, below 100\,K and
10\,\mbox{meV}, a new reflection band appears due to the EM. (b), (c):
Fits of the complex permittivity in the far IR
region, obtained from the IR reflectivity spectra using a sum of
harmonic oscillators
(lines), compared to data obtained from THz spectroscopy
(symbols).}
\label{fig:IRtemp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{Fig2.eps}
\caption{Temperature dependence of the plasma frequencies
(defined as $\Omega_{\rm pj}=\sqrt{\Delta\varepsilon_j}\omega_j$) of the
10-meV-mode attributed to EM and of the TO1 and TO2 phonons. The
dielectric strengths $\Delta\varepsilon_j$ were evaluated by fitting using a
model with harmonic oscillators.}
\label{fig:transfer}
\end{figure}
Fig.\ \ref{fig:IRtemp}a shows the far and mid-IR reflectivity spectra
displaying polar optical phonons
of $\varepsilon$-$\rm Fe_2O_3$ between 10 and 300\,K. Figs.\ \ref{fig:IRtemp}b, c
show the far-IR $\varepsilon(E)$ spectra calculated from
the fits of IR reflectivity together with the experimental THz data.
To this purpose, we used a model involving 35 harmonic oscillators;
this number is lower than the number of IR active modes provided by the
factor group analysis (see Appendix A); apparently, a part of the modes
are too weak to be observed. Upon cooling, all phonons above 12\,meV
exhibit the usual behavior---their intensity increases due to reduced
phonon damping at low temperatures. The TO1 phonon near 11\,meV exhibits an
anomalous behavior: on cooling, its intensity increases only down to
115\,K. Below this temperature, it markedly weakens,
while a supplementary broad reflectivity peak develops below
$E\sim 10\,\mbox{meV}$ and becomes more intense upon cooling (see the inset of Fig.\ \ref{fig:IRtemp}a).
This transfer of strengths involves also the TO2 phonon (see Fig.\
\ref{fig:transfer}), evidencing a coupling
among these three polar modes. Despite the lattice distortions
which occur between 150\,K and 75\,K, the crystal symmetry of $\varepsilon
\mbox{-Fe}_2\mbox{O}_3$ does not change with temperature
\cite{Gich06,Tseng09}. This is further confirmed by our IR reflectivity
spectra, displaying a temperature-independent number of polar phonons;
should a structural phase transition occur, it would imply a change
of the
factor group analysis and different phonon selection rules. Given the high
number of atoms in the unit cell, multiple new reflection bands throughout the
IR range would be observed. Therefore, a structural origin of the new mode can be
excluded.
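
To make the fitting model used above explicit, the following sketch (a hypothetical illustration; the oscillator parameters are placeholders rather than the actual fitted values) evaluates the complex permittivity as a sum of damped harmonic oscillators, the corresponding normal-incidence reflectivity, and the plasma frequencies $\Omega_{\rm pj}=\sqrt{\Delta\varepsilon_j}\,\omega_j$ plotted in Fig.~\ref{fig:transfer}.
\begin{verbatim}
# Hypothetical sketch of the fitting model: complex permittivity as a sum of
# damped harmonic oscillators, with the sign convention eps = eps' - i*eps''
# used in this paper.  All parameter values are placeholders, not fitted ones.
import numpy as np

eps_inf = 5.0
# (resonance energy E_j [meV], dielectric strength d_eps_j, damping g_j [meV])
oscillators = [(10.0, 1.5, 4.0),   # broad EM-like band (illustrative)
               (11.0, 0.8, 1.0),   # TO1-like phonon (illustrative)
               (13.5, 0.6, 1.2)]   # TO2-like phonon (illustrative)

def permittivity(E):
    eps = np.full_like(E, eps_inf, dtype=complex)
    for Ej, dej, gj in oscillators:
        eps += dej * Ej**2 / (Ej**2 - E**2 + 1j*gj*E)
    return eps

E = np.linspace(1.0, 20.0, 400)                   # photon energy [meV]
refl = np.abs((np.sqrt(permittivity(E)) - 1) /
              (np.sqrt(permittivity(E)) + 1))**2  # normal-incidence reflectivity
plasma = [np.sqrt(dej)*Ej for Ej, dej, gj in oscillators]  # Omega_p = sqrt(d_eps)*E_j
print("plasma energies [meV]:", [round(p, 2) for p in plasma])
\end{verbatim}
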
Another option to be considered is the polar phonon splitting due to
exchange coupling below AFM phase transitions which was reported in various
transition-metal monoxides and chromium spinels \cite{Kant12}; the mode
splitting increased on cooling below the N\'eel temperature. However, this
explanation cannot be valid as we observe an
opposite temperature dependence---the new mode appears below $T_{\rm m}$ at low
energies and hardens towards the TO1 phonon energy on cooling, i.e.\ their
energy difference decreases.
Finally, one cannot a priori exclude the hypothesis of activation of the
TO1 phonon branch from the area of the BZ near its edge. This would require
a
folding of the structural BZ which could be caused by a transfer of the magnetic
BZ folding (linked to incommensurability) via magnetostriction. Nevertheless,
in the X-ray diffraction studies, no appropriate satellite reflections were
observed. Even supposing these satellite reflections to be very weak, one would
expect the off-center phonons to activate also at higher energies, which we did
not observe. This hypothesis therefore seems unlikely. Based on further
experimental evidence, especially in view of an analogous temperature
behavior observed by INS, we argue below that the reflection band
activated below $T_{\rm m}$ is most probably an EM.
\begin{figure}
\centering
\includegraphics[width=0.51\columnwidth]{Fig3ab.eps}%
\begin{minipage}[b]{0.48\columnwidth}%
\includegraphics[width=\textwidth]{Fig3cd.eps}\\
\includegraphics[width=\textwidth]{Fig3e.eps}
\end{minipage}
\caption{Temperature dependence of the spectra of the (a) real and (b)
imaginary parts of the $\varepsilon\mu$ product, obtained by
THz spectroscopy. Spectra of $\mu^{\prime}$ (c),
$\mu^{\prime\prime}$ (d), corresponding to the FMR mode, obtained by fitting the THz
spectra. (e) Temperature dependence of the FMR
energy and strength $\Delta\mu \omega_{\rm FMR}^{2}$ derived from parts c, d.}
\label{fig:magnonT}
\end{figure}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.48\textwidth]{Fig4ab.eps}
\includegraphics[width=0.48\textwidth]{Fig4cd.eps}
\caption{(a), (b): Spectra of complex refractive index $N\equiv
n-\mbox{i}\kappa$ of $\varepsilon \mbox{-Fe}_2\mbox{O}_3$ measured by
THz spectroscopy at
$T=100\,\mbox{K}$ as a function of applied magnetic field. Inset:
$B$-dependence of the FMR frequency, determined as the peak in
$\kappa(E)$ spectra. (c), (d):
Changes of the value of $n$, $\kappa$, determined within
$\pm0.001$, for $E=5\,\mbox{meV}$ as a function of temperature and
increasing magnetic field (except at 75\,K).
\label{fig:NkTHz100K}}
\end{figure*}
The temperature dependent THz spectra (see Fig. \ref{fig:magnonT}) reveal the
sharp FMR which was previously reported at room
temperature~\cite{Namai08,Namai12}. To quantify its temperature behavior, we
used the harmonic oscillator model for all phonons and one term accounting for
the FMR in $\mu(E)$, while assuming a smooth dependence of $\varepsilon(E)$ in
this interval. The resulting spectra, matching well the measured data, are shown
in Fig.\ \ref{fig:magnonT}c,d. From the fit parameters, we derived the
temperature dependence of the magnon strength and FMR energy (see Fig.\
\ref{fig:magnonT}e). We observe a sharp drop in the resonance energy between
150\,K and 75\,K, very similar to that of the coercive field $H_{\rm
c}(T)$~\cite{Gich05}. This can be explained by the fact that the FMR energy is
proportional to the magnetocrystalline anisotropy field $H_a$. As the sample
consists of randomly oriented particles with a uniaxial magnetic anisotropy,
$H_a$ is proportional to the $H_{\rm c}$ value \cite{Ohkoshi07}.
Furthermore, we measured THz time-domain spectra with external magnetic field
ranging from 0 to 7\,T. Because of the high absorption of the EM, lying near 10
meV, the sample was opaque above 7\,meV. Therefore, we could measure only the
low-frequency wing of the EM. When the magnetic field is applied, two types of
changes in the THz spectra can be observed: an increase of the FMR frequency
corresponding to the peak of the $\kappa(E)$ spectra, and a change of the slope
of both real and imaginary parts of the index of refraction, indicating shifts
of the EM frequency with magnetic field. An example of the former behavior at
$T=100\,\mbox{K}$ is shown in Fig.~\ref{fig:NkTHz100K}a, b; the FMR frequency,
upon applying a static magnetic field of $B=7\,\rm T$, increases from 0.6 to
1.3\,\mbox{meV} (see inset of Fig.~\ref{fig:NkTHz100K}a, b). The latter
phenomenon is illustrated by Fig.~\ref{fig:NkTHz100K}c, d which traces the
values of the complex refractive index at $E=5\,\mbox{meV}$ as a function of
temperature and applied magnetic field. While changes only close to the
sensitivity level were detected at temperatures of 10 and 300\,K (not shown in
Fig.~\ref{fig:NkTHz100K}), there is a clear $B$-dependence of the spectra at
intermediate temperatures. The highest sensitivity was observed at 100\,K, close
to the magnetic phase transition. Also, at $T=75\,\mbox{K}$, a marked
hysteresis in $B$ occurs, similarly to the temperature hysteresis observed by
radio-frequency impedance spectroscopy techniques near this temperature (see
Figure~\ref{fig:SupImped}); this observation will be discussed below. At
$T\ll T_{\rm m}$, where the magnetic structure is probably stable, the
changes of $N$ with magnetic field are smaller. This explains also why we did
not detect any significant changes of the far-IR spectra with magnetic field at
$T=2$\,K.
\begin{figure}[h]
\centering
\includegraphics[width=0.42\textwidth]{Fig5a.eps}\\
\includegraphics[width=0.38\textwidth]{Fig5b.eps}\\
\includegraphics[width=0.4\textwidth]{Fig5c.eps}
\caption{(a) Temperature hysteresis of the dielectric permittivity
(black lines, left
axis) and
losses (red lines, right axis) observed at 300\,kHz. (b) Temperature dependence of the permittivity at
1\,THz measured on heating. The dashed line is a guide to the eyes. The
values at 300\,kHz are systematically higher than at 1\,THz due to a small
dielectric relaxation between
these two frequencies; one can see a similar permittivity peak near 75\,K
in
both experiments. (c) Temperature dependence
of relative changes of the 1\,kHz-permittivity due to magnetic field with $B=
9\,\mbox{T}$ (taken on heating).}
\label{fig:SupImped}
\end{figure}
In the frequency range from $f=$10\,Hz to 1\,MHz, the complex permittivity
$\varepsilon$ was
measured by impedance spectroscopy as a function of temperature (see Fig.\ \ref{fig:kHz}).
No sign of a FE phase transition was detected. Above 200\,K, both $\varepsilon^{\prime}(T)$ and
$\varepsilon^{\prime\prime}(T)$ increase due to the leakage
conductivity and the related Maxwell-Wagner polarization. Between 100 and
200\,K, we observed a step-like decrease of $\varepsilon^{\prime}(T)$ towards lower
temperatures and maxima in losses
$\tan\delta(T,f)=\varepsilon^{\prime\prime}(T,f)/\varepsilon^{\prime}(T,f)$, which is typical
of a dielectric relaxation. The temperature dependence of the relaxation time
$\tau(T)$ obtained from the peaks of $\tan\delta(T,f)$
follows an Arrhenius
behavior, $\tau(T)=\tau_{0}\mbox{e}^{{E_0}/{k_{\rm B}T}}$ with $k_{\rm B}$ denoting the
Boltzmann constant, $\tau_{0}=(1.5\pm 0.2)\texp{-12}\,\mbox{s}$ and $E_0=(0.195\pm
$0.002)\,\mbox{eV}$. The origin of this relaxation is not clear; however, similar effects
are known from several perovskite rare-earth manganites, including the multiferroics
TbMnO$_3$ and DyMnO$_3$~\cite{Schrettle09}. We attribute the relaxation
to thermally activated vibrations of the FE domain walls or magnetic domain walls which can be
polar \cite{Pyatakov11}. The huge room-temperature coercive field $H_{\rm c}$ is
the consequence of a single-domain magnetic structure of the
nanograins~\cite{Namai12}. Below 200\,K, $H_{\rm c}$ strongly decreases
due to a transition to a polydomain structure \cite{Gich05} which explains why
the dielectric relaxation exists only in this temperature range.
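
As a simple numerical check (hypothetical sketch), the fitted Arrhenius parameters quoted above imply relaxation times whose loss-peak frequencies $f_{\max}=1/(2\pi\tau)$ sweep through the measured frequency window between roughly 100 and 200\,K.
\begin{verbatim}
# Hypothetical sketch: Arrhenius relaxation, tau(T) = tau0 * exp(E0/(kB*T)),
# evaluated with the fitted parameters quoted in the text.
import numpy as np

kB   = 8.617e-5     # Boltzmann constant [eV/K]
tau0 = 1.5e-12      # attempt time [s]
E0   = 0.195        # activation energy [eV]

def tau(T):
    return tau0 * np.exp(E0 / (kB * T))

for T in (100.0, 150.0, 200.0):
    f_max = 1.0 / (2.0 * np.pi * tau(T))
    print(f"T = {T:5.1f} K   tau = {tau(T):.2e} s   loss peak near {f_max:.2e} Hz")
\end{verbatim}
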
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Fig6.eps}
\caption{Temperature dependence of the real permittivity
$\varepsilon^\prime$ (left) and dielectric losses $\tan\delta$ (right),
measured upon heating by impedance spectroscopy. Inset:
dependence of the polarization on the applied 50\,Hz ac bias at 120\,K
(black) and 15\,K (red).}\label{fig:kHz}
\end{figure}
The inset of Fig.\ \ref{fig:kHz} shows the measured dependences of the
polarization on applied electric field. Neither open FE hysteresis loops nor signs
of saturation were observed under the applied fields. Since the $Pna2_1$ crystal structure of
$\varepsilon \mbox{-Fe}_2\mbox{O}_3$ corresponds to a pyroelectric space group,
we cannot exclude that an applied electric field with an intensity higher than
the one we used (beyond 5\,kV/cm, our sample became leaky) would switch the
polarization and that $\varepsilon \mbox{-Fe}_2\mbox{O}_3$ is in fact
FE. Actually, one of us recently investigated strained epitaxial $\varepsilon
\mbox{-Fe}_2\mbox{O}_3$ thin films and, under an applied electric field one
order of magnitude stronger, observed room-temperature FE
switching.\cite{Gich-prep} Since the crystal symmetry of $\varepsilon
\mbox{-Fe}_2\mbox{O}_3$ does not change with temperature \cite{Nizn-priv},
one cannot exclude that the $\varepsilon \mbox{-Fe}_2\mbox{O}_3$ nanograins are
also FE already above the ferrimagnetic phase transition occurring near 490\,K;
in any case, it is at least pyroelectric. Consequently, $\varepsilon
\mbox{-Fe}_2\mbox{O}_3$ would belong to type-I multiferroics.
Near 75\,K, a small peak in $\varepsilon^{\prime}(T)$ was observed in our
impedance spectroscopy measurements (as marked by
the arrow in Fig.\ \ref{fig:kHz}). This peak is rather weak on cooling, but it
becomes more distinct on heating, and it exhibits a temperature hysteresis of
$\approx$15\,K (see also Fig.~\ref{fig:SupImped}). This is reminiscent of a
dielectric anomaly typical for pseudoproper or improper FE phase transitions,
such as those in perovskite rare-earth manganites. However, this hypothesis is
not confirmed by the polarization measurements shown in Fig.\
\ref{fig:kHz}, and the X-ray and neutron diffraction investigations did not
reveal any structural changes near 75\,K either \cite{Gich06,Tseng09}. In
type-II multiferroics, a narrow dielectric peak is seen at $T_{\rm c}$ only at
frequencies below 1\,MHz and its intensity strongly decreases with rising
frequency \cite{Schrettle09}. By contrast, in our impedance spectra, the
peak is present at all frequencies up to the THz region (see
Fig.~\ref{fig:SupImped}b), although it is partly covered by the stronger
dielectric relaxation at low frequencies. Therefore, this anomaly must
originate from phonons or an EM. As the observed dielectric anomaly occurs at a
temperature close to the lowest-temperature magnetic phase transition
\cite{Gich06}, we propose that it arises from the transfer of the dielectric
strength from the TO1 and TO2 phonons to the EM (see Fig.\
\ref{fig:transfer}). We note that in single-crystal
multiferroics, a step-like increase of the permittivity often occurs below the
temperature where the electromagnon activates~\cite{Sushkov08}. Our observations
on nanograin samples are somewhat different---while a step-like increase of
$\varepsilon^{\prime}$ below $\approx130$\,K, superimposed with the narrow-range anomaly near 75\,K,
was detected in the THz range (see Fig.~\ref{fig:SupImped}b), only the anomaly
near 75\,K manifests itself in the kHz range (see Fig.~\ref{fig:SupImped}a). We
suppose that the step in the low-frequency permittivity is screened by the observed
dielectric relaxation in the microwave range.
We also investigated the dependence of the permittivity at 1\,kHz on
external magnetic field up to 9\,T. We found that $\varepsilon'(B)$ exhibits
the highest changes (almost 2\%) near 70 and 130\,K (see Fig.~\ref{fig:SupImped}c). Both
of these anomalies are clearly linked to the changes of magnetic
structure~\cite{Gich06}. We
suppose that the lower-temperature change
corresponds to the EM anomaly observed also in THz experiments, while that
observed near 130\,K is due to the relaxation linked to the magnetic and
simultaneously polar domain walls.
\begin{figure}
\includegraphics[width=0.485\columnwidth]{Fig7a.eps}
\includegraphics[width=0.445\columnwidth]{Fig7b.eps}
\includegraphics[width=0.45\columnwidth]{Fig7c.eps}
\includegraphics[width=0.48\columnwidth]{Fig7d.eps}
\caption{(a), (b), (c): Bose-Einstein-factor-normalized INS intensity as a
function of momentum $Q$ and energy $E$ transfers for $T=10$, 80 and
170\,K. Near $Q=1.4\,\mbox{\AA}^{-1}$, a magnon branch with a cut-off
energy of $\approx11\,\mbox{meV}$ can be seen. (d): DOS determined by
integrating over the regions marked by black solid lines
in (a)--(c). Inset of (d): scheme of the magnon dispersion branch in
reciprocal lattice units, involving the FMR and EM near the BZ center and
boundary, respectively. \label{fig:neutron_map}}
\end{figure}
\subsection{Neutron scattering.}
In order to further explore the hypothesis of an EM, we performed time-of-flight
INS experiments which allow measuring the phonon and magnon density of states
(DOS) in the meV energy range. As the nanopowder does not allow us to determine
directly the phonon and magnon dispersion branches in the BZ, the data represent
an orientation-averaged scattering function $S(Q,E)$ where $Q$ is the total
momentum transfer and $E$ the energy transferred between the crystal lattice and
the neutrons (see Fig.\ \ref{fig:neutron_map}). The data reveal a steep column
of intense scattering, emanating from magnetic Bragg peaks at $Q =
1.4\,\mbox{\AA}^{-1}$, and extending up to $E \sim 10\,\mbox{meV}$. The weaker
columns at $Q > 2\,\mbox{\AA}^{-1}$ are due to scattering in higher-order BZs.
The fact that the area of most intense scattering is located at low $Q$ shows
unambiguously \cite{Shirane06} that the dominant contribution to the low-$Q$
scattering comes from spin waves.
A qualitatively similar magnon response was
recently observed in INS spectra of polycrystalline BiFeO$_{3}$
\cite{Delaire12}; the spin wave character of the excitation was confirmed by INS
on BiFeO$_{3}$ crystals, where the magnon dispersion branch was directly
measured \cite{Jeong12}. In our data, the scattering from the spin waves becomes weaker on
cooling due to the decreasing Bose-Einstein factor. Around 10\,meV, a distinct
scattering peak persists down to low temperatures, corresponding to a maximum of
the magnon DOS; this evidently reflects a flattening of the dispersion branch near the BZ
boundary. Moreover, the energy at the maximal magnon DOS, as well as its
temperature evolution, corresponds to that of the newly IR-activated mode (see
Fig.\ \ref{fig:neutron_map}d).
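
For reference, a minimal sketch of the Bose-factor normalization used in Fig.~\ref{fig:neutron_map} (assuming the standard detailed-balance factor for neutron energy loss; the array names and values are placeholders):
\begin{verbatim}
# Hypothetical sketch: divide out the thermal population factor [n_B(E,T) + 1]
# (neutron energy loss side), with n_B the Bose-Einstein occupation number.
# Array contents are placeholders for the measured S(Q,E).
import numpy as np

kB = 0.08617   # Boltzmann constant [meV/K]

def population_factor(E_meV, T_K):
    """n_B(E,T) + 1 for energy transfer E > 0."""
    return 1.0 / (1.0 - np.exp(-np.asarray(E_meV) / (kB * T_K)))

E = np.linspace(0.5, 12.0, 50)              # energy transfer [meV]
S_QE = np.random.rand(100, E.size)          # stand-in for measured S(Q,E)
S_norm = S_QE / population_factor(E, 170.0) # Bose-factor-normalized intensity
\end{verbatim}
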
The inset of Fig.\ \ref{fig:neutron_map}d shows a schematic view of an acoustic-like magnon
dispersion branch
giving rise to the observed excitations, both the one below 10.5\,meV (at the BZ
boundary) and the FMR near 0.5\,meV (in the BZ center). This dispersion
behavior is similar to that observed in the ferrimagnetic $\rm HoFe_2$ \cite{Rhyne78}, which exhibits a
slightly higher Curie temperature of 597\,K. In $\varepsilon
\mbox{-Fe}_2\mbox{O}_3$, the optic-like magnon
branches lie probably above 12\,meV, beyond the energy range used in our INS
experiments. We suggest that this acoustic-like magnon is activated in the IR spectra due to the loss of magnetic
translation symmetry in the incommensurate magnetic phase below $T_{\rm m}$.
Such an activation is analogous to that of phonons with $q\neq0$ in structurally
modulated crystals \cite{Petzelt81}. We suppose that the large damping of the
newly activated excitation can be explained by an activation of the magnon DOS in
the IR spectra. Since the observed spin-wave excitation is coupled with the
lowest-energy TO1 phonon, it must be excited by the electric component of the
electromagnetic radiation; at the same time, it has to contribute to dielectric
permittivity. Therefore, the excitation seen near 10\,meV must be an EM.
\section{Conclusion}
In conclusion, in $\varepsilon \mbox{-Fe}_2\mbox{O}_3$, we have discovered
an excitation, appearing simultaneously with the modulation of the magnetic
structure, at energies below the TO1 phonon. We attribute this excitation to
an EM whose energy corresponds to a magnon from the BZ boundary. We did not
observe any other excitation at lower energies, in contrast to type-II multiferroics.
There, the Dzyaloshinskii-Moriya (D.-M.) interaction breaks the center of
symmetry, induces ferroelectricity
\cite{Khomskii09}, and the EMs are activated thanks to magnetostriction
(an ($\textbf{S}_{i}\cdot\textbf{S}_{j}$)-type interaction)
\cite{ValdesAguilar09}. In $\varepsilon \mbox{-Fe}_2\mbox{O}_3$, the crystal
structure is acentric at all temperatures, which allows the D.-M.\
interaction to be active in the originally collinear ferrimagnetic structure~\cite{Fennie08};
the D.-M.\ interaction tilts the spins and finally induces an incommensurately
modulated magnetic structure below $T_{\rm m}=110\,\mbox{K}$, where the EM activates due to
magnetostriction.
Up to now, EMs were reported mainly in type-II multiferroics. Previous reports
of EMs in type-I multiferroics lacked evidence of their coupling with
polar phonons, e.g.\ in BiFeO$_3$~\cite{Cazayous08,Talbayev11,Komandin10} or
hex-YMnO$_3$ \cite{Pailhes09}. Our results indicate that $\varepsilon
\mbox{-Fe}_2\mbox{O}_3$ belongs to type-I multiferroics; it is pyroelectric and
perhaps FE even above the ferrimagnetic phase transition \cite{Nizn-priv} at
490\,K, but the EM is activated only below $T_{\rm m}$,
corresponding to the onset of the incommensurately modulated magnetic structure.
In our case, a clear transfer of dielectric strength from a low-energy phonon to
the zone boundary magnon was observed.
Finally, we would like to stress that EMs were previously identified only
in single crystals using a thorough polarization analysis of measured spectra.
Here we have determined an EM from unpolarized IR and THz spectra of nanograin
ceramics showing its coupling with a TO1 phonon. Simultaneously, we have shown
from INS experiments made on powder that the EM in $\varepsilon$-$\rm Fe_2O_3$
comes from the BZ boundary. This combination of experimental methods provides a
guideline for an unambiguous determination of EMs in materials where
sufficiently large single crystals for polarized IR and THz measurements are not
available.
\begin{acknowledgments}
This work was supported by the Czech Science Foundation (project\ P204/12/1163).
The experiment in ILL Grenoble was carried out at the IN4 spectrometer
within the project LG11024 financed by the Ministry of Education of the Czech
Republic. M.G.\ acknowledges funding from the Spanish Ministerio de Econom\'\i a y Competitividad (projects
RyC-2009-04335, MAT 2012-35324 and CONSOLIDER-Nanoselect-CSD2007-00041) and the European
Commission (FP7-Marie Curie Actions, PCIG09-GA-2011-294168). S.K.\ thanks Petr
Br\'{a}zda for stimulating our $\varepsilon$-$\rm Fe_2O_3$ research and
S.\ Artyukchin for a helpful discussion.
\end{acknowledgments}
\section{Introduction}
In anticipation of the start of the LHC experiments, many different theoretical views on the underlying physics at such large collision energies, as well as numerous predictions for observables, were presented; a large collection of predictions for heavy-ion collisions was assembled in \cite{LHCpred}. The same took place on the eve of the RHIC experiments. At that ``pre-RHIC'' time, the future results on the correlation femtoscopy of particles, which is the topic of this note, were awaited with great interest. One of the reasons was the hope of finding an interferometry signature of the quark-gluon plasma (QGP). A very large value of the
ratio of the two transverse interferometry radii, $R_{out}$ to $R_{side}$, was predicted as a signal of QGP formation \cite{Bertsch}. While the $R_{side}$ radius
is associated with the transverse homogeneity length \cite{Sin}, $R_{out}$
contains, in addition, other contributions, in particular
one related to the duration of the pion emission. Since
the lifetime of the system should obviously grow with collision
energy if the latter is accompanied by an increase of the initial energy
density and/or by a softening of the equation of state due to a phase
transition between hadron matter and the QGP,
the duration of pion emission should also grow with energy, and so
the $R_{out}/R_{side}$ ratio could increase.
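
This expectation can be illustrated by the approximate Gaussian-source relation (a standard estimate quoted here for orientation only, valid when $x$--$t$ correlations of the emission points are neglected)
\begin{equation}
R_{out}^2 \simeq R_{side}^2 + \beta_T^2 \langle \Delta t^2 \rangle,
\end{equation}
where $\beta_T$ is the transverse velocity of the pair and $\langle \Delta t^2 \rangle$ characterizes the duration of particle emission: a prolonged emission enhances $R_{out}$ while leaving $R_{side}$ unaffected.
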
The RHIC experiments brought an unexpected result: the ratio
$R_{out}/R_{side}\approx 1$ is similar to, or even smaller than, that at SPS.
Another surprise concerned the absolute values of the radii. Naively, it was expected that when the energy of the colliding nuclei
increases, the pion interferometry volume $V_{int}$ (the product of
the interferometry radii in three orthogonal directions) will rise,
at the same maximal centrality for Pb+Pb and Au+Au collisions, just
proportionally to $\frac{dN_{\pi}}{dy}$. However, when the experiments
at RHIC started, the increase of the interferometry volume with energy
turned out to be essentially smaller than implied by this proportionality.
Both these unexpected results were called the RHIC HBT puzzle
\cite{HBTpuzzle}.
For a long period this puzzle was not solved in hydrodynamic/hybrid models of A+A collisions that reproduce well the single-particle transverse spectra and their azimuthal anisotropy in non-central collisions described by the $v_2$ coefficients. Only a few years ago did the main factors which allow one
to describe simultaneously the spectra and the femtoscopic
scales at RHIC become clear. They are \cite{sin1}-\cite{Pratt}: a relatively hard equation of state, because of the crossover transition (instead of a first-order one) between the quark-gluon and hadron phases and because of the
nonequilibrium composition of hadronic matter;
the presence of pre-thermal transverse flows and their anisotropy developed by the thermalization time; and
an `additional portion' of the transverse flow
owing to the shear viscosity effect and fluctuations of the initial
conditions. Taking these factors into account makes it possible to describe well the pion and kaon spectra together with the femtoscopy data of RHIC within a realistic freeze-out picture with a gradual decay of the
nonequilibrium fluid into observed particles \cite{sin3}.
Now, when the heavy-ion experiments at the LHC have already started and the ALICE Collaboration has published the first results on femtoscopy in A+A collisions at $\sqrt{s_{NN}}=2.76$ TeV \cite{Alice}, the main question is whether the understanding of the physics responsible for the space-time evolution of matter in Au+Au collisions at RHIC can be extrapolated to the LHC energies, or whether some new ``LHC HBT puzzle'' should be anticipated, just as happened in going from SPS to RHIC energies. In this note we describe the physical mechanisms responsible for the peculiarities of the energy
dependence of the interferometry radii, which thereby solve the RHIC HBT puzzle, present the quantitative predictions given earlier for the LHC within the hydrokinetic model \cite{sin4}, compare them with the recent ALICE LHC results and draw the corresponding conclusions.
\section{Hydro-kinetic approach to A+A collisions}
Let us briefly describe the main features of the HKM \cite{PRL,PRC}.
It incorporates hydrodynamical expansion of the systems formed in
\textit{A}+\textit{A} collisions and their dynamical decoupling
described by escape probabilities.
{\it Initial conditions}--- Our results all refer to the
central rapidity slice, where we use the boost-invariant Bjorken-like
initial condition. We take the proper time of thermalization of
quark-gluon matter to be
$\tau_0=1$ fm/c; at present there are no theoretical arguments permitting a smaller value. The initial energy density in the transverse plane
is supposed to be Glauber-like \cite{Kolb}, i.e. proportional to
the participant nucleon density for Pb+Pb (SPS) and Au+Au (RHIC,
LHC) collisions with zero impact parameter. The height of the
distribution - the maximal initial energy density -
$\epsilon(r=0)=\epsilon_0$ is a fitting parameter. From the analysis
of the pion transverse spectra we choose it for the top SPS energy to be
$\epsilon_0 = 9$ GeV/fm$^3$ ($\langle\epsilon\rangle_0$ = 6.4
GeV/fm$^3$), and for the top RHIC energy $\epsilon_0 = 16.5$ GeV/fm$^3$
($\langle\epsilon\rangle_0$ = 11.6 GeV/fm$^3$). The brackets $\langle...\rangle$
denote the mean value over the distribution associated with the
Glauber transverse profile. We also demonstrate results at
$\epsilon_0 = 40$ GeV/fm$^3$ and $\epsilon_0 = 60$ GeV/fm$^3$. In the hydrokinetic model $\epsilon_0 = 40$ GeV/fm$^3$ corresponds to a charged-particle multiplicity $dN_{ch}/d\eta \approx 1500$. We suppose that soon after thermalization the matter
created in A+A collisions at the energies considered is in the quark-gluon
plasma (QGP) state.
At the time of thermalization, $\tau_0=1$ fm/c, the system has already
developed collective transverse velocities \cite{sin1,JPG}. The
initial transverse rapidity profile is supposed to be linear in
the radius $r_T$:
\begin{equation}
y_T=\alpha\frac{r_T}{R_T} \label{yT},
\end{equation}
where $\alpha$ is the second fitting parameter and
$R_T=\sqrt{\langle r_T^2\rangle}$. Note that the fitting parameter $\alpha$
should also absorb a positive correction for the otherwise underestimated
resulting transverse flow, since in this work we do not account in
a direct way for viscosity effects \cite{Teaney}, neither at the QGP
stage nor at the hadronic one. In the formalism of the HKM \cite{PRC} the
viscosity effects at the hadronic stage are incorporated in the
mechanisms of the back reaction of particle emission on the hydrodynamic
evolution, which we ignore in the current calculations. Since the
corrections to the transverse flows depend on unknown viscosity
coefficients, we use the fitting parameter $\alpha$ to
describe this ``additional unknown portion'' of the flows, caused by both
factors: the development of the pre-thermal flows and the viscosity
effects in the quark-gluon plasma. The best fits of the pion transverse
spectra at SPS and RHIC are obtained at $\alpha=0.194$ ($\langle v_T
\rangle= 0.178$) for the SPS energies and $\alpha=0.28$ ($\langle
v_T\rangle=0.25$) for the RHIC ones. The latter value is used also for
the LHC energies, aiming to analyze just the influence of the energy density
increase.
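For orientation, the transverse flow velocity corresponding to the profile (\ref{yT}) is simply
\begin{equation}
v_T(r_T)=\tanh y_T=\tanh\!\left(\alpha\,\frac{r_T}{R_T}\right),
\end{equation}
so that, e.g., at $r_T=R_T$ one has $v_T=\tanh 0.28\approx 0.27$ for the RHIC/LHC value of $\alpha$; the quoted mean values $\langle v_T\rangle$ are somewhat smaller because they average this profile over the transverse distribution of the fireball.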
{\it Equation of state}--- Following Ref. \cite{PRC}, at
high temperatures we use the EoS \cite{Laine} adjusted to the lattice QCD
data with baryonic chemical potential $\mu_B =0$ and matched
with a chemically equilibrated multi-component hadron resonance gas at
$T=175$ MeV. Such an EoS should be a good approximation for the RHIC
and LHC energies; as for the SPS energies, we utilize it just to
demonstrate the energy-dependent mechanism of formation of the
space-time scales\footnote{A good description of the spectra and HBT radii at the SPS energies with a realistic EoS within the hydrokinetic model is presented in Ref.~\cite{Toneev}.}. We suppose chemical freeze-out of the hadron
gas at $T_{ch}=165$ MeV \cite{PBM1}. It guarantees the correct
particle number ratios for all quasi-stable particles (here we
calculate only pion observables), at least for RHIC. Below $T_{ch}$ the
composition of the hadron gas changes only due to resonance
decays into the expanding fluid. We include 359 hadron states made of u,
d, s quarks with masses up to 2.6 GeV. The EoS in this chemically
non-equilibrated system now depends on the particle number
densities $n_i$ of all the 359 particle species $i$:
$p=p(\epsilon,\{n_i\})$. Since the energy densities in the expanding
system do not directly correlate with the resonance decays, all the
variables in the EoS depend on the space-time point, and so the
evaluation of the EoS is incorporated into the hydrodynamic code. We
calculate the EoS below $T_{ch}$ in the Boltzmann approximation of an
ideal multi-component hadron gas.
{\it Evolution}--- At temperatures higher than $T_{ch}$ the
hydrodynamic evolution describes the quark-gluon and hadron
phases, which are in chemical equilibrium with zero baryonic chemical
potential. The evolution is governed by the conservation law for
the energy-momentum tensor of a perfect fluid:
\begin{equation}
\partial_\nu T^{\mu\nu}(x)=0
\label{conservation}
\end{equation}
At $T<T_{ch}=165$ MeV the system evolves as a chemically
non-equilibrated hadronic gas. The concept of chemical freeze-out
implies that afterwards only elastic collisions and resonance decays
take place, because of the relatively small
densities combined with the fast rate of expansion at the last stage.
Thus, in addition to (\ref{conservation}), equations accounting
for particle number conservation and resonance decays are added.
If one neglects the thermal motion of heavy resonances, the equations
for the particle densities $n_i(x)$ take the form:
\begin{equation}
\partial_\mu(n_i(x) u^\mu(x))=-\Gamma_i n_i(x) + \sum\limits_j b_{ij}\Gamma_j
n_j(x)
\label{paricle_number_conservation}
\end{equation}
where $b_{ij}=B_{ij}N_{ij}$ denotes the average number of $i$-th
particles coming from an arbitrary decay of the $j$-th resonance,
$B_{ij}=\Gamma_{ij}/\Gamma_{j,tot}$ is the branching ratio, and $N_{ij}$ is
the number of $i$-th particles produced in the $j\rightarrow i$ decay
channel. We can also account for recombination in the processes of
resonance decays into the expanding medium simply by utilizing an
effective decay width $\Gamma_{i,eff}=\gamma\Gamma_i$. We use
$\gamma = 0.75$, thus supposing that nearly 30\% of the resonances
recombine during the evolution. The equations
(\ref{conservation}) and the 359 equations
(\ref{paricle_number_conservation}) are solved simultaneously with the
calculation of the EoS, $p(x)=p(\epsilon(x),\{n_i(x)\})$, at each
point $x$.
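For reference, the Boltzmann approximation used here amounts to the standard ideal-gas relations
\begin{equation}
p=T\sum_i n_i, \qquad
\epsilon=\sum_i n_i\left[\,3T+m_i\,\frac{K_1(m_i/T)}{K_2(m_i/T)}\right],
\end{equation}
with $K_{1,2}$ the modified Bessel functions; at each space-time point the local temperature $T$ can be found from the second relation for the given $\epsilon$ and $\{n_i\}$, after which the pressure follows from the first one.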
{\it System's decoupling and spectra formation} --- During the
matter evolution, in fact already at $T\leq T_{ch}$, hadrons continuously
leave the system. Such a process is described by means of the
emission function $S(x,p)$, which for pions is expressed through the
{\it gain} term, $G_{\pi}(x,p)$, in the Boltzmann equations and
the escape probabilities ${\cal P}_{\pi}(x,p)=\exp\left(-\int\limits_{t}^{\infty} ds\, R_{\pi+h}(s,{\bf r}+\frac{{\bf p}}{p^0}(s-t),p)\right)$:
$S_{\pi}(x,p)=G_{\pi}(x,p){\cal P}_{\pi}(x,p)$ \cite{PRL,PRC}. For
pion emission in the relaxation time approximation $G_{\pi}\approx
f_{\pi}R_{\pi+h}+G_{H\rightarrow\pi}$, where $f_{\pi}(x,p)$ is the
pion Bose-Einstein phase-space distribution, $R_{\pi+h}(x,p)$ is the
total collision rate of a pion carrying momentum $p$ with all
the hadrons $h$ in the system in the vicinity of the point $x$, and the term
$G_{H\rightarrow\pi}$ describes the inflow of pions into the
phase-space point $(x,p)$ due to resonance decays. It is
calculated according to the kinematics of the decays with the simplification
that the spectral function of the resonance $H$ is
$\delta(p^2-\langle m_H\rangle^2)$. The cross-sections in the
hadronic gas, which determine via the collision rate $R_{\pi+h}$ the
escape probabilities ${\cal P}(x,p)$ and the emission function $S(x,p)$,
are calculated in accordance with the UrQMD method \cite{UrQMD}. The
spectra and correlation functions are found from the emission
function $S$ in the standard way (see, e.g., \cite{PRL}).
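Schematically, this standard procedure amounts to
\begin{equation}
p^0\frac{d^3N}{d^3p}=\int d^4x\, S(x,p), \qquad
C(q,P)\approx 1+\frac{\left|\int d^4x\, S(x,P)\,e^{iqx}\right|^2}{\left(\int d^4x\, S(x,P)\right)^2},
\end{equation}
where $q=p_1-p_2$ and $P=(p_1+p_2)/2$ for the pion pair, and the second (smoothness) approximation is the commonly used one from which the Gaussian interferometry radii are extracted.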
\begin{figure*}[htb]
\centering
\includegraphics[scale=0.25]{s-sps.eps}
\includegraphics[scale=0.25]{s-rhic.eps}
\includegraphics[scale=0.25]{s-lhc60.eps}
\includegraphics[scale=0.25]{rout.eps}
\includegraphics[scale=0.25]{rside.eps}
\includegraphics[scale=0.25]{rlong.eps}
\includegraphics[scale=0.25]{spectra.eps}
\includegraphics[scale=0.25]{rors.eps}
\vspace{-0.1in}
\caption{The $p_T$-integrated emission functions of negative pions
for the top SPS, RHIC and LHC energies (top); the interferometry
radii (middle); the $R_{out}/R_{side}$ ratio and transverse momentum
spectra (bottom) of negative pions at different energy densities,
all calculated in the HKM model. The experimental data are taken from the
CERES \cite{ceres} and NA-49 \cite{na49-spectra, na49-hbt} Collaborations (SPS, CERN),
the STAR \cite{star-spectra, star-hbt} and PHENIX \cite{phenix-spectra, phenix-hbt} Collaborations (RHIC, BNL), and the ALICE Collaboration (LHC, CERN) \cite{Alice}.}
\end{figure*}
\section{Results and conclusions}
The pion emission function per unit (central) rapidity, integrated
over the azimuthal angle and transverse momenta, is presented in Fig.
1 for the top SPS, RHIC and LHC energies as a function of the transverse
radius $r$ and proper time $\tau$. The two fitting parameters,
$\epsilon_0$ and $\alpha$ (equivalently $\langle v_T \rangle$), are fixed as discussed above;
$\epsilon_0$ is also marked in the figures. The pion transverse momentum spectrum, its
slope as well as its absolute value, and the interferometry radii,
including the $R_{out}$ to $R_{side}$ ratio, are in good agreement
with the experimental data both for the top SPS and RHIC energies.
As one can see, particle emission lasts throughout the total lifetime of the
fireballs; in the central part, ${\bf r}\approx 0$, the duration is
about half of the lifetime. Nevertheless, according to the
results of \cite{PRC, freeze-out}, the Landau/Cooper-Frye prescription
of sudden freeze-out can be applied in a generalized form
accounting for the momentum dependence of the freeze-out hypersurface
$\sigma_p(x)$; now $\sigma_p(x)$ corresponds to the {\it maximum of the
emission function} $S(t_{\sigma}({\bf r},p),{\bf r},p)$ at fixed
momentum ${\bf p}$ in an appropriate region of ${\bf r}$. This
finding allows one to keep in mind the known results
based on the Cooper-Frye formalism, applying them to the surface of
maximal emission for given $p$. Then the typical features of the
energy dependence can be understood as follows. The inverse slope of the
spectra, $T_{eff}$, grows with energy since, as one sees from
the emission functions, the duration of the expansion increases with the
initial energy density and, therefore, the pressure-gradient-driven fluid elements acquire larger
transverse collective velocities $v_T$ by the time they reach the decoupling
energy densities. Therefore the blue shift of the spectra becomes
stronger. A rise of the transverse collective flow with energy leads
to some compensation of the increase of $R_{side}$: qualitatively, the
homogeneity length at the decoupling stage is $R_{side}=
R_{Geom}/\sqrt{1+\langle v_{T}^2\rangle m_{T}/2T}$ (see, e.g.,
\cite{AkkSin}). So, despite the significant increase of the
transverse emission region, $R_{Geom}$, seen in Fig. 1, the
magnification of the collective flow partially compensates it, leading
to only a moderate increase of $R_{side}$ with energy.
Since the temperatures in the regions of maximal emission
decrease very slowly when the initial energy density grows (e.g., the
temperatures for SPS, RHIC and LHC are, correspondingly, 0.105, 0.103
and 0.095 GeV for $p_T=0.3$ GeV/c), $R_{long}\sim
\tau\sqrt{T/m_T}$ \cite{Averch} grows proportionally to the increase
of the proper time associated with the hypersurface
$\sigma_{p_T}(x)$ of {\it maximal} emission. As we see from Fig. 1,
this time grows quite moderately with the collision energy.
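For orientation, at $p_T=0.3$ GeV/c the pion transverse mass is $m_T=\sqrt{m_\pi^2+p_T^2}\approx 0.33$ GeV, so that for the quoted decoupling temperatures
\begin{equation}
\sqrt{T/m_T}\approx\sqrt{0.10/0.33}\approx 0.56
\end{equation}
at all three energies, and the growth of $R_{long}$ therefore traces almost directly the growth of the proper time of maximal emission.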
A non-trivial result concerns the energy behavior of the
$R_{out}/R_{side}$ ratio. It slowly drops as the energy grows and
apparently saturates at fairly high energies at a value close
to unity (Fig. 1). To clarify the physical reason for this, let us make a
simple semi-quantitative analysis. As one can see in Fig. 1, the
hypersurface of maximal emission can be approximated as
consisting of two parts: the ``volume'' emission ($V$) at $\tau
\approx const$ and the ``surface'' emission ($S$). A similar picture
within the Cooper-Frye prescription, which generalizes the
blast-wave model \cite{blast-wave} by including the surface emission, has
been considered in Ref. \cite{Marina}. If the hypersurface of
maximal emission $\tilde{\tau}(r)$ is a double-valued function, as in
our case, then at some transverse momentum $p_T$ the transverse
spectra and HBT radii are formed mostly by the two contributions
from the different regions: with the homogeneity lengths
$\lambda_{i,V}=\sqrt{\langle(\Delta r_i)^2\rangle}$ ($i$ = side, out) at the
$V$-hypersurface and with the homogeneity lengths $\lambda_{i,S}$ at
the $S$-hypersurface. Similarly to Ref. \cite{AkkSin}, one can apply at
$m_T/T\gg1$ the saddle-point method when calculating the single- and
two-particle spectra, using the boost-invariant measures
$\mu_V=d\sigma^V_{\mu}p^{\mu}= \widetilde{\tau}(r)r dr d\phi d\eta
(m_T\cosh(\eta-y)-p_T\frac{d\widetilde{\tau}(r)}{dr}\cos(\phi -
\alpha))$ and $\mu_S=d\sigma^S_{\mu}p^{\mu}= \widetilde{r}(\tau)
\tau d\tau d\phi d\eta
(-m_T\cosh(\eta-y)\frac{d\widetilde{r}(\tau)}{d\tau}+p_T\cos(\phi -
\alpha))$ for the $V$- and $S$-parts of the freeze-out hypersurface,
respectively (here $\eta$ and $y$ are the space-time and particle
pair rapidities, with a similar correspondence for the angles $\phi$ and
$\alpha$; note also that
$\frac{p_T}{m_T}>\frac{d\widetilde{r}(\tau)}{d\tau}$ \cite{PRC,
freeze-out}). Then, ignoring for simplicity the
interference (cross-terms) between the surface and volume
contributions, one can write
\begin{eqnarray}
R_{side}^2 = c_V^2\lambda_{side,V}^2+c_S^2\lambda_{side,S}^2 \label{3} \\
R_{out}^2 = c_V^2\lambda_{out,V}^2+c_S^2\lambda_{out,S}^2(1-
\frac{d\tilde{r}}{d\tau})^2, \label{4}
\end{eqnarray}
where the coefficients satisfy $c_V^2+c_S^2\leq1$ and we take into account
that at $p^0/T\gg1$ for pions $\beta_{out}=p_{out}/p^0 \approx 1$.
All the homogeneity lengths depend on the mean transverse momentum of the
pion pairs, $p_T$. The slope $\frac{d\tilde{r}}{d\tau}$ in the region
of homogeneity expresses the strength of the $r-\tau$ correlations
between the space and time points of particle emission at the
$S$-hypersurface $\tilde{r}(\tau)$. The picture of emission in Fig.
1 shows that when the energy grows the correlations between the time
and radial points of the emission become positive,
$\frac{d\tilde{r}}{d\tau}> 0$, and that they increase with the energy
density. The positivity is caused by the initial radial flows
\cite{sin1} $u^r(\tau_0)$, which are developed at the pre-thermal
stage, and the strengthening of the $r-\tau$ correlations happens
because the non-central fluid elements, which after
their expansion produce the surface emission, need more time
$\tau_i(\epsilon_0)$ to reach the decoupling density if they
initially have a higher energy density $\epsilon_0$. (Let us
characterize this effect by the parameter
$\kappa=\frac{d\tau_i(\epsilon_0)}{d\epsilon_{0}} > 0 $.) Then,
before they decay, the fluid elements run up to a larger radial
freeze-out position $r_i$: if $a$ is the average Lorentz-invariant
acceleration of those fluid elements during the system expansion,
then roughly, for the $i$-th fluid element which decays at time $\tau_i$,
we have at $a\tau_i \gg 1$: $r_i(\tau_i)\approx
r_i(\tau_0)+\tau_i+(u_i^r(\tau_0)-1)/a$. Then the level of the $r-\tau$
correlations within the homogeneous freeze-out ``surface'' region,
which is formed by the expanding matter that initially, at $\tau_0$,
occupies the region between the transverse radii $r_1(\tau_0)$ and
$r_2(\tau_0)>r_1(\tau_0)$, is
\begin{equation}
\frac{d\tilde{r}}{d\tau} \approx \frac{r_1(\tau_1)-r_2(\tau_2)}{\tau_1-\tau_2}
\approx 1-\frac{R}{\epsilon_0\kappa}\label{5}
\end{equation}
and, therefore, the strength of the $r-\tau$ correlations grows with
energy: $\frac{d\tilde{\tau}}{dr}\rightarrow 1$. Note that here we
account for $\tau_2 - \tau_1 \approx \kappa(\epsilon_0(r_2(\tau_0))
- \epsilon_0(r_1(\tau_0)))$ and that
$\frac{d\epsilon_0(r)}{dr}\approx -\frac{\epsilon_0}{R}$, where
$\epsilon_0\equiv\epsilon_0(r=0)$ and $R$ is the radius of the nucleus. As a
result, the second $S$-term in Eq. (\ref{4}) tends to zero at large
$\epsilon_0$, reducing, therefore, the $R_{out}/R_{side}$ ratio. In
particular, if $\lambda_{side,V}^2 \gg \lambda_{side,S}^2$ then,
accounting for the similarity of the volume emission in our
approximation and in the blast-wave model, where, as is known,
$\lambda_{side,V} \approx \lambda_{out,V}$, one gets
$\frac{R_{out}}{R_{side}}\approx 1 + const\cdot
\frac{R}{\epsilon_0\kappa}\rightarrow 1$ at $\epsilon_0 \rightarrow
\infty$. It is worth noting that the measure $\mu_S$ also tends to
zero when $\frac{d\tilde{\tau}}{dr}\rightarrow 1$, which again reduces
the surface contribution to the $side$- and $out$-radii at large $p_T$.
The presented qualitative analysis demonstrates the main
mechanisms responsible for the non-trivial behavior of the $R_{out}$ to
$R_{side}$ ratio exposed in the HKM calculations, see Fig. 1
(bottom). The very recent first LHC data for Pb+Pb collisions
presented by the ALICE Collaboration \cite{Alice} in fact confirm the
physical picture of the space-time evolution, discussed above, responsible for the formation of the
HBT radii and of the $R_{out}$ to $R_{side}$ ratio, see Fig. 1. The transverse femtoscopy scales, predicted for the charged multiplicity
$dN_{ch}/d\eta$=1500 in HKM at the initial energy density $\epsilon_0=40$ GeV/fm$^3$, are quite close to the experimental data associated with $dN_{ch}/d\eta\approx 1600$ at the collision energy $\sqrt{s}=2.76$ TeV. As for the longitudinal HBT radius, $R_{long}$, it is underestimated in HKM by around 20\%. As a result, HKM gives a smaller interferometry volume than is observed at the LHC. The reason could be that HKM describes a gradual decay of the system which evolves hydrodynamically until fairly large times. It is known \cite{AkkSin2} that for an isentropic and chemically frozen hydrodynamic evolution the interferometry volume increases quite moderately with the growth of the initial energy density in collisions of the same/similar nuclei. The RHIC results support such a theoretical view (see the solid line in Fig. 2), while the ALICE Collaboration observes a significant increase of the interferometry volume at the LHC. One should, thus, change the global fit of $V_{int}(dN/d\eta)$ for A+A collisions to a steeper slope (upper dashed line). However, no single linear fit can be extrapolated to the $V_{int}(dN/d\eta)$ dependence discovered by the ALICE Collaboration in p+p collisions \cite{Alice2} (bottom dashed line in Fig. 2). Could one call these two observed peculiarities the ``LHC HBT puzzle''? In our opinion, at least qualitatively, it is not puzzling. The essential growth of the interferometry volume in Pb+Pb collisions at the first LHC energy can be caused by an increase of the duration of the last, strongly non-equilibrium stage of the matter evolution, which cannot be treated on a hydrodynamic basis; one should instead use hadronic cascade models like UrQMD. At such a late stage the results obtained in \cite{AkkSin2} for an isentropic and chemically frozen evolution are violated. As for the different linear $V_{int}(dN/d\eta)$ dependences in A+A and p+p collisions, the interferometry volume depends not only on the multiplicity but also on the initial size of the colliding systems \cite{AkkSin2}. Therefore, qualitatively, we see no puzzle in the newest HBT results obtained at the LHC in Pb+Pb and p+p collisions, but the final conclusion can be drawn only after a detailed quantitative analysis.
{\it Summary}---We conclude that the energy behavior of the pion
interferometry scales can be understood on the same hydrokinetic basis as for the SPS and RHIC energies, supplemented by a hadronic cascade model at the latest stage of the evolution.
In this approach the EoS accounts for a crossover
transition between quark-gluon and hadron matter at high collision
energies and for the non-equilibrated expansion of the
hadron-resonance gas at the later stage.\\
\begin{figure*}[htb]
\centering
\includegraphics[scale=0.7]{Vint_paper1}
\vspace{-0.1in}
\caption{Illustration of the dependence of the pion interferometry volume on the charged-particle multiplicity for central heavy ion collisions at the AGS, SPS, RHIC and LHC energies, and comparison with the results in p+p collisions (bottom left). All the HBT radii are taken at the pion transverse momentum $p_T=$ 0.3 GeV. For A+A collisions the data are taken from Fig. 4 of Ref. \cite{Alice} (see all details there); for p+p collisions the points for $p_T=$ 0.3 GeV are interpolated from the results of Ref. \cite{Alice2}. The solid line corresponds to a linear fit of the $V_{int}(dN_{ch}/d\eta)$ dependence only for the top SPS and the RHIC energies; the upper dashed line is a fit for all the A+A energies including the newest LHC point at $\sqrt{s}=2.76$ TeV; the bottom dashed line is the linear fit to the ALICE LHC results for p+p collisions at energies of 0.9 and 2.76 TeV.}
\end{figure*}
The HKM allows one to treat correctly the process of particle
emission from the expanding fireball, which is not sudden and lasts for about
the system's lifetime. It also takes into account the prethermal
formation of transverse flows. Then the
main mechanisms that lead to the paradoxical behavior of the
interferometry scales find a natural explanation. In particular, the
slow decrease and apparent saturation of the $R_{out}/R_{side}$ ratio
around unity at high energy happen due to a strengthening of the
positive correlations between the space and time positions of the pions
emitted at the radial periphery of the system. Such an effect is a
consequence of the two factors accompanying an increase of the collision
energy: the development of the pre-thermal collective transverse flows
and an increase of the initial energy density in the fireball. The predictions of the HKM for the LHC energies are quite close to the first experimental data in Pb+Pb collisions at the LHC.
\section*{Acknowledgments}
Yu.S. thanks P. Braun-Munzinger for supporting this study within the EMMI/GSI organizations as well as
for fruitful and very stimulating discussions. The research was carried
out in part within the scope of the EUREA: European Ultra Relativistic Energies Agreement (European Research Group
GDRE: Heavy ions at ultrarelativistic energies) and is supported by the State Fund for
Fundamental Researches of Ukraine (Agreement of 2011) and the National Academy of Sciences of Ukraine (Agreement of 2011).
\section{Introduction}
It is well established in gauge field theories that the vacuum expectation value of a field $\left\langle \phi\right\rangle$ can take on non-zero values when the ground state, at the minimum of the zero-temperature potential or ``true vacuum'', breaks the symmetry of the underlying Lagrangian \cite{vil,hin}. In the hot early universe, thermal effects lead to an effective potential with a temperature dependence, such that the vacuum starts in a symmetric ``false'' vacuum and only later cools to its ground state below a certain critical temperature~\cite{linde,vil,ba,va84,vil85}. In general however the expansion of the universe occurs too quickly for the system to find its true ground state at all points in space~\cite{ki}. Depending upon the topology of the manifold of degenerate vacua, topologically stable defects form in this process, such as domain walls, cosmic strings, and monopoles~\cite{vil,hin,va84,vil85,sak06}. In particular, any field theory with a broken U(1) symmetry will have classical solutions extended in one dimension, and in cosmology these structures generically form a cosmic network of macroscopic, quasi-stable strings that steadily unravels but survives to the present day, losing energy primarily by gravitational radiation ~\cite{vi81,ho3,vi85,batt,turok,allen01}. A dual superstring description for this physics is given in terms of one-dimensional branes, such as D-strings and F-strings~\cite{vi05,pol,dv,sas,jones1,pol06}.
In this paper we calculate the spectrum of background gravitational radiation from cosmologies with strings. We undertake calculations at much lower string masses than previously, with several motivations:
\begin{enumerate}
\item
Recent advances in millisecond pulsar timing have reached new levels of precision and are providing better limits on low frequency backgrounds; the calculations presented here provide a precise connection between the background limits and fundamental theories of strings and inflation~\cite{fir05,jenet,jones,tye06}.
\item
The \emph{Laser Interferometer Space Antenna} (LISA) will provide much more sensitive limits over a broad band around millihertz frequencies. Calculations have not previously been made for theories in this band of sensitivity and frequency, and are needed since the background spectrum depends significantly on string mass~\cite{lisa,armstrong,ci}.
\item
Recent studies of string network behavior strongly suggest (though they have not yet proven definitively) a high rate of formation of stable string loops comparable to the size of the cosmic horizon. This results in a higher net production of gravitational waves since the loops of a given size, forming earlier, have a higher space density. The more intense background means experiments are sensitive to lower string masses~\cite{ring,van2,martins}.
\item
Extending the observational probes to the light string regime is an important constraint on field theories and superstring cosmology far below the Planck scale. The current calculation provides a quantitative bridge between the parameters of the fundamental theory (especially, the string tension), and the properties of the observable background~\cite{ki04,sak06}.
\end{enumerate}
The results of our study confirm the estimates made in an earlier exploratory analysis, that also gives simple scaling laws, additional context and background~\cite{ho}. Here we add a realistic concordance cosmology and detailed numerical integration of various effects. We estimate that the approximations used to derive the current results are reliable to better than about fifty percent, and are therefore useful for comparison of fundamental theory with real data over a wide range of string parameters.
Cosmic strings' astrophysical properties are strongly dependent upon two parameters: the dimensionless string tension $G\mu$ (in Planck units where $c=1$ and $G={m_{pl}}^{-2}$) or $G\mu/c^2$ in SI units, and the interchange probability p. Our main conclusion is that the current pulsar data~\cite{jenet,det,sti} already place far tighter constraints on string tension than other arguments, such as microwave background anisotropy or gravitational lensing. Recent observations from WMAP and SDSS \cite{wy} have constrained the tension to $G\mu< 3.5\times10^{-7}$, owing to the lack of cosmic microwave background anisotropy or structure formation consistent with heavier cosmic strings. From \cite{wy} it is found that up to a maximum of 7\% (at the 68\% confidence level) of the microwave anisotropy can be cosmic strings. In other words, for strings light enough to be consistent with current pulsar limits, there is no observable effect other than their gravitational waves. The limit is already a powerful constraint on superstring and field theory cosmologies. In the future, LISA will improve this limit by many orders of magnitude.
In general, lower limits on the string tension constrain theories farther below the Planck scale. In field theories predicting string formation the string tension is related to the energy scale of the theory $\Lambda_s$ through the relation $G\mu\propto\Lambda_s^2/m_{pl}^2$~\cite{vil,ki,ba}. Our limits put an upper limit on $\Lambda_s$ in Planck masses given by $\Lambda_s<10^{-4.5}$, already in the regime associated with Grand Unification; future sensitivity from LISA will reach $\Lambda_s\approx 10^{-8}$, a range often associated with a Peccei-Quinn scale, inflationary reheating, or supersymmetric B-L breaking scales~\cite{jean}. In the dual superstring view, some current brane cosmologies predict that the string tension will lie in the range $10^{-6}<G\mu<10^{-11}$~\cite{tye05,fir05,tye06}; our limits are already in the predicted range and LISA's sensitivity will reach beyond their lower bound.
Cosmic strings radiate the bulk of their gravitational radiation from loops~\cite{an,vil,vi85,turok,vi81}. For a large network of loops we can ignore directionality for the bulk of their radiation which then forms a stochastic background~\cite{allen96}. Each loop contributes power to the background:
\begin{equation}
\frac{dE}{dt}=\gamma G \mu^2 c,
\end{equation}
where $\gamma$ is generally given to be on the order of 50 to 100 \cite{ho,vil,burden85,ga87,allen94}.
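To give a sense of scale, restoring SI units this loop luminosity reads
\begin{equation}
\frac{dE}{dt}=\gamma\left(\frac{G\mu}{c^2}\right)^{2}\frac{c^5}{G}
\approx 1.8\times10^{30}\ \mathrm{W}
\qquad \mathrm{for}\ \gamma=50,\ G\mu/c^{2}=10^{-12},
\end{equation}
independent of the loop size; lighter strings are therefore much weaker emitters, which is why their loops survive correspondingly longer.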
Gravitational radiation from strings is characterized for our purposes by the dimensionless luminosity of gravitational waves $\gamma$, which depends on the excitation spectrum of typical radiating modes on string loops. Loop formation is characterized by $\alpha$, the typical size of newly formed loops as a fraction of the Hubble scale $H(t)^{-1}$. Estimates based on numerical simulations have in the past suggested very small values, leading to the hypothesis that $\alpha \sim G \mu$ or smaller~\cite{sie2,bennett88,allen90}; those ideas suggest that all but a fraction $\sim G \mu$ or smaller of loops shatter into tiny pieces. Recent simulations designed to avoid numerical artifacts suggest a radically different picture, that in fact a significant fraction of loops land on stable non-self-intersecting orbits at a fraction $\sim0.1$ of the horizon scale~\cite{van1,van2,ring,martins}. Our study is oriented towards this view, which leads to a larger density of loops and a more intense background for a given string mass.
For units of the gravitational energy in a stochastic background a convenient measure we adopt is the conventional dimensionless quantity given by
\begin{equation}
\label{eqs:omega}
\Omega_{gw}(f)=\frac{1}{\rho_c}\frac{d\rho_{gw}}{d\ln f},
\end{equation}
where $\rho_{gw}$ is the energy density in the gravitational waves and $\rho_c$ is the critical density. $\Omega_{gw}(f)$ gives the energy density of gravitational waves per log of the frequency in units of the critical density. $\Omega_{gw}$ is proportional to the mean square of the strain on spacetime from a stochastic background of gravitational waves.
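For reference, in the convention commonly used for such comparisons, $\Omega_{gw}$ is related to the characteristic strain amplitude $h_c(f)$ of the background by
\begin{equation}
\Omega_{gw}(f)=\frac{2\pi^2}{3H_o^2}\,f^2\,h_c^2(f),
\end{equation}
where $H_o$ is the present-day Hubble rate; this is the form in which the spectra computed below can be compared with detector sensitivity curves.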
There has been much interest in cusps and kinks as sources of gravitational radiation from cosmic strings \cite{da00,da01,da05,sie}. In this paper an estimate of the radiation power from cusps and kinks is included as higher frequency behavior in the loop gravitational wave spectrum, but the beaming and bursts of gravitational waves are not discussed explicitly. As estimated in \cite{ho}, in the regime of very light strings, observable bursts are expected to be rare, and harder to detect than the stochastic background.
Finally we would like to state some additional uncertainties associated with the model of strings described below:
\begin{itemize}
\item{ There is the possibility of multiple scales in loop formation: some string theories suggest multiple
stable intersection points and loops connected by these intersections. This may strongly affect the
accuracy of the one scale model, and indeed some of these theories also have significant other modes of decay that invalidate the
entire framework used here~\cite{jackson}.}
\item{ The reconnection probability p can be less than one. This may affect loop formation and the gravitational wave background. In general it is thought that this effect always increases the predicted background although it is not agreed by how much~\cite{dv}.}
\item{ There may be transient behavior of cosmic strings due to the formation process of cosmic quenching. The model below
assumes that transient effects have long since settled out by the time our model begins; this seems likely at LISA frequencies which
correspond to initial loop sizes much larger than the horizon at the end of inflation~\cite{jones1}.}
\end{itemize}
Further numerical string network studies will be needed to resolve these issues.
\section{Model of String Loop Populations}
\subsection{Loop Formation}
The behavior of strings on cosmological scales has been thoroughly discussed in the literature~\cite{vil,vil85,hin,ho3,van1,van2,turok1984,allen90,bennett88}. By the Kibble mechanism strings are created as a random walk with small coherence length, and they evolve following the Nambu-Goto action. The strings form in large threadlike networks \cite{van1}, and on scales longer than the Hubble length $cH(t)^{-1}$ the network is frozen and stretches with expansion. The strings move with speed c, and they interact to form loops with a probability p by different mechanisms: one involves two strings, the other a single string forming a loop. These loops break off from the infinite string network and become the dominant source of gravitational waves. They oscillate with a fundamental frequency given by $f=2c/L$, where L is the length of the loop. In general the loops form a discrete set of frequencies with higher modes given by $f_n=2nc/L$.
The exchange probability p measures the likelihood that two crossing string segments interact and form new connections to one another. If p=0 then the string segments simply pass through each other; but if p=1 then they can exchange ``partners'', possibly forming a loop. The value of p depends upon the model~\cite{dv,hash,eto}; it is close to unity in models most commonly discussed but in principle is an independent parameter (and in a broad class of models, the only one) that can modulate the amplitude of the spectrum for a given string tension. In this paper it is taken to be p=1 and is parameterized as part of the number of loops formed at a given time~\cite{ho,vi81,ho3}. The number of loops is normalized to the previous results of R.R. Caldwell and B. Allen~\cite{ca}, which is described in detail later in this section.
\subsection{Loop Radiation}
For the dynamics of a string the Nambu-Goto action is used:
\begin{equation}
S=-\mu \int \sqrt{-g^{(2)}} d\tau d\sigma,
\end{equation}
where $\tau$ and $\sigma$ are parameterized coordinates on the world sheet of the loop, and $g^{(2)}$ is the determinant of the induced metric on the worldsheet. The energy-momentum tensor of the string can be given by \cite{vil},
\begin{equation}
T^{\mu\nu}(t,\textbf{x})=\mu \int d\sigma \left(\dot{x}^{\mu}\dot{x}^{\nu}-\frac{dx^{\mu}}{d\sigma} \frac{dx^{\nu}}{d\sigma} \right) \delta^{(3)}(\textbf{x}-\textbf{x}(t,\sigma)),
\end{equation}
where $\tau$ is taken along the time direction. The radiated power from a loop at a particular frequency, $f_n$, into a solid angle $\Omega$ is,
\begin{equation}
\frac{d\dot{E_n}}{d\Omega}=4\pi G f_n^2 \left(T_{\mu\nu}^*(f_n,\textbf{k}) T^{\mu\nu}(f_n,\textbf{k})-\frac{1}{2} \left|T_{\nu}^{\nu}(f_n,\textbf{k})\right|^2 \right).
\end{equation}
Here $T_{\mu\nu}(f_n,\textbf{k})$ is the Fourier transform of $T_{\mu\nu}(t,\textbf{x})$ \cite{vil}. Dimensionally the power is given by $G\mu^2$~\cite{turok} with numerical coefficients determined by the frequency mode and the luminosity of the loop.
In general for the power output of a loop at a given mode we can write $\dot{E_n}=P_n G \mu^2 c$~\cite{vil,ca}, where directionality is ignored by effectively inserting the loop into a large network of loops and the $P_n$ are dimensionless power coefficients. Note that c has been reinserted. The power coefficients are generally given the form $P_n\propto n^{-4/3}$ \cite{vil} which will create an $f^{-1/3}$ high frequency tail to the power spectrum of a loop. The total power from a loop is a sum of all the modes:
\begin{subequations}
\begin{eqnarray}
\dot{E}&=&\sum_{n=1}^{\infty}P_n G \mu^2 c,\\
&=&\gamma G \mu^2 c.
\end{eqnarray}
\end{subequations}
Thus $\gamma$ measures the output of energy and is labeled the ``dimensionless gravitational wave luminosity'' of a loop. For most calculations we use the value $\gamma=50$~\cite{allen94}.
For the majority of our calculations we assume only the fundamental mode contributes, effectively a delta function for $P(f)$, and $f=2c/L$ is the only emission frequency of each loop. This assumption is relatively good. To check this, some calculations include higher frequency contributions and show that the effect on the background is small overall, with almost no difference at high frequencies. This is discussed in more detail in Results. The high frequency additions also represent the contribution from kinks and cusps, which add to the high frequency region with an $f^{-1/3}$ tail from each loop. This high frequency dependence is the same as that for the high frequency modes, so their contributions should be similar, particularly for light strings. Light strings contribute at higher frequencies because the loops decay more slowly and more small loops are able to contribute at their fundamental frequency longer throughout cosmic history.
The populations of loops contributing to the background fall naturally into two categories~\cite{ho}: the high redshift ``H'' population of long-decayed loops, and the present-day ``P'' population, where the redshift is of order unity or less. The spectrum from the H population, which has long decayed and whose radiation is highly redshifted, is nearly independent of frequency for an early universe with a $p=\rho/3$ equation of state. The P loops have a spectrum of sizes up to the Hubble length; they create a peak in the spectrum corresponding approximately to the fundamental mode of the loops whose decay time is the Hubble time today, and a high frequency tail mainly from higher modes of those loops. For all of the observable loops (that is, less than tens of light years across), formation occurred during the radiation era (that is, when the universe was less than $\sim 10^5$ years old). As in Hogan \cite{ho} we use scaling to normalize to the high frequency results from R.R. Caldwell and B. Allen \cite{ca}: $\Omega_{gw}=10^{-8} (G\mu/10^{-9})^{1/2} p^{-1} (\gamma \alpha/5)^{1/2}$, where p=1. At the high frequency limit of our curves this relation is used to normalize the parameterization for the number of loops created at a given time.
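As a quick numerical illustration of this normalization (a minimal Python sketch, not the full loop-network calculation developed below; the function name and default arguments are ours), the red-noise plateau implied by the scaling relation above can be evaluated directly:
\begin{verbatim}
# Red-noise plateau from the scaling relation
# Omega_gw ~ 1e-8 (Gmu/1e-9)^(1/2) p^-1 (gamma*alpha/5)^(1/2)
def omega_plateau(Gmu, alpha=0.1, gamma=50.0, p=1.0):
    return 1e-8 * (Gmu / 1e-9)**0.5 * (gamma * alpha / 5.0)**0.5 / p

for Gmu in (1e-9, 1e-12, 1e-16):
    print("Gmu = %.0e : Omega_gw ~ %.1e" % (Gmu, omega_plateau(Gmu)))
# Gmu = 1e-09 : Omega_gw ~ 1.0e-08
# Gmu = 1e-12 : Omega_gw ~ 3.2e-10
# Gmu = 1e-16 : Omega_gw ~ 3.2e-12
\end{verbatim}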
\subsection{Loop Lengths}
Loop sizes are approximated by the ``one-scale'' model in which the length of created loops scales with the size of the Hubble radius. The infinite strings create loops with a size that scales with the expansion, creating a distribution with a characteristic size $\alpha c/H$.
The expression for the average length of newly formed loops is $\left<L(t)\right>=\alpha c H(t)^{-1}$, where $\alpha$ measures the length of the loop as a fraction of the Hubble radius. It is assumed that the newly formed, stably orbiting loops form a range of lengths at a given time, and this range peaks near $\alpha$. For the numerical calculations, and to facilitate comparison with previous work, we use a delta function for the length. Thus only one length of loop is created at a given time and this is the average length defined above. Because of the averaging introduced by the expansion, this introduces only a small error unless the distribution of sizes is large, on the order of several orders of magnitude or more~\cite{ca}, which would no longer be a one scale model. As a check, models of loop formation with several loop sizes are also analyzed, and it is found that the larger loops tend to dominate the background over the smaller.
The loops start to decay as soon as they are created at time $t_c$, and are described by the equation:
\begin{equation}
\label{eqs:length}
L(t_c,t)=\alpha c H(t_c)^{-1}-\gamma G \mu \frac{t-t_c}{c}.
\end{equation}
(This expression is not valid as $L\rightarrow0$ but is an adequate approximation. As the length approaches zero Eqn.~\ref{eqs:length} becomes less accurate, but in this limit the loops are small and contribute only to the high frequency red noise region of the gravitational wave background. This region is very insensitive to the exact nature of the radiation so Eqn.~\ref{eqs:length} is accurate within the tolerances of the calculations.)
Loops that form at a time $t_c$ decay and eventually disappear at a time $t_d$. This occurs when $L(t_c,t_d)=0$ and from Eqn.~\ref{eqs:length}:
\begin{subequations}
\begin{eqnarray}
0&=&\alpha c H(t_c)^{-1}-\gamma G \mu \frac{t_d-t_c}{c},\\
t_d&=&t_c+\frac{\alpha c^2 H(t_c)^{-1}}{\gamma G \mu}.
\end{eqnarray}
\end{subequations}
By the time $t_d$, all of the loops created before time $t_c$ have decayed, so they no longer contribute to the background of gravitational waves. This is discussed in more detail below.
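For orientation, writing the string tension in the dimensionless combination $G\mu/c^2$, the implied loop lifetime is
\begin{equation}
t_d-t_c=\frac{\alpha}{\gamma\, G\mu/c^2}\,H(t_c)^{-1}\approx 2\times10^{9}\,H(t_c)^{-1}
\qquad \mathrm{for}\ \alpha=0.1,\ \gamma=50,\ G\mu/c^{2}=10^{-12},
\end{equation}
i.e. loops of light strings survive for an enormous number of expansion times at their formation epoch, which is what allows loops formed deep in the radiation era to contribute to the background today.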
\subsection{Loop Number Density}
For the number density of strings we parameterize the number created within a Hubble volume at a given time by $N_t / \alpha$. The newly formed loops redshift with the expansion of the horizon by the inverse cube of the scale factor $a(t)^{-3}$:
\begin{eqnarray}
n(t,t')= \frac{N_t}{\alpha} \left(\frac{H(t)}{c}\right)^3 \left(\frac{a(t)}{a(t')}\right)^3.
\end{eqnarray}
$n(t,t')$ is the number density of loops created at time $t$ as seen by an observer at time $t'$, and the redshift has been incorporated into the function. In the Appendix there is a description of the calculation of the scale factor. By analyzing the high frequency background equation given by C.J. Hogan \cite{ho}, based on R.R. Caldwell and B. Allen \cite{ca}, the curves are normalized by setting $N_t=8.111$.
At an observation time $t'$ the total number density of loops is found by summing over previous times $t$ excluding loops that have disappeared,
\begin{equation}
\label{eqs:nsum}
n_s(t')=\sum_{t=t_e}^{t'} n(t,t'),
\end{equation}
where $t_e$ is the time before which all the loops have decayed when making an observation at time $t'$. The earliest time in our sum, $t_e$, is found by solving Eqn.~\ref{eqs:length}:
\begin{eqnarray}
t'=t_e+\frac{\alpha c^2 H(t_e)^{-1}}{\gamma G \mu},
\end{eqnarray}
at a given $t'$. Thus $t_e(t')$ tells us that at some time $t'$, all of the loops formed before $t_e(t')$ have decayed and are not part of the sum. For large loops and light strings $t_e$ trends toward earlier in the history of the universe for a given $t'$, allowing more loops to contribute longer over cosmic history. For the final calculations, which are described in detail in the next section, $t'$ and the number density are summed over the age of the universe.
As the loops decay the energy lost becomes gravitational radiation, which persists in the universe and redshifts with expansion. This gravitational wave energy is detectable as a stochastic background from the large number of emitting loops.
\section{Gravitational Radiation from a Network of Loops}
\subsection{Frequency of Radiation}
The frequency of radiation is determined by the length of the loop, and has normal modes of oscillation denoted by $n$. The frequency of gravitational radiation from a single loop is given by,
\begin{eqnarray}
f_n(t_c,t)=\frac{2n c}{L(t_c,t)},
\end{eqnarray}
where $t_c$ is the creation time of the loop and $t$ is the time of observation. There is also redshifting of the frequency with expansion, which must be accounted for as this strongly affects the shape of the background curves. The strings radiate most strongly at the fundamental frequency, so $n=1$ for most of the calculations. This also shortens computation times and introduces only small errors in the amplitude of the background, as shown in Results. Ultimately the higher modes do not greatly alter the results at the resolution of the computations.
\subsection{Gravitational Wave Energy}
A network of loops at a given time $t'$ radiates gravitational wave energy per volume at the rate:
\begin{equation}
\label{eqs:rhogw}
\frac{d\rho_{gw}(t')}{dt}=\gamma G \mu^2 c\;n_s(t').
\end{equation}
Recall that $n_s(t')$ is the number density $n(t,t')$ summed over previous times $t$ at a time of observation $t'$; given by Eqn.~\ref{eqs:nsum}.
To find the total gravitational wave energy density at the present time the energy density of gravitational waves must be found at previous times $t'$ and then summed over cosmic history. Since the energy density is given by Eqn.~\ref{eqs:rhogw} the total density is found by summing over $n_s(t')$ for all times $t'$ while redshifting with the scale factor, $a^{-4}$. Thus the total gravitational energy per volume radiated by a network of string loops at the current age of the universe is given by:
\begin{subequations}
\label{eqs:densitytime}
\begin{eqnarray}
\rho_{gw}(t_{univ})=\gamma G \mu^2 c \int^{t_{univ}}_{t_0} \frac{a(t')^4}{a(t_{univ})^4} n_s(t') dt',\\
=\gamma G \mu^2 c \int^{t_{univ}}_{t_0} \frac{a(t')^4}{a(t_{univ})^4} \left(\sum^{t'}_{t=t_e} n(t,t')\right)dt'.
\end{eqnarray}
\end{subequations}
Here $t_{univ}$ is the current age of the universe and $t_0$ is the earliest time of loop formation. The internal sum is over $t$ and the overall integral is over $t'$. In practice we can make $t_0$ small so as to include the earlier, higher frequency H loops formed during the radiation era.
Eqs.~\ref{eqs:densitytime} do not have explicit frequency dependence, which is needed to calculate the background gravitational wave energy density spectrum $\Omega_{gw}(f)$ determined by Eqn.~\ref{eqs:omega}. This requires the gravitational energy density be found as a function of the frequency: the number density as a function of time is converted to a function of frequency at the present time. This is discussed in detail in the Appendix.
For most calculations it is assumed each length of loop radiates at only one frequency, $f=2c/L$. The number density, as a function of frequency and time $t'$, is redshifted and summed from the initial formation time of loops $t_0$ to the present age of the universe:
\begin{equation}
\rho_{gw}(f)=\gamma G \mu^2 c \int_{t_0}^{t_{univ}} \frac{a(t')^4}{a(t_{univ})^4} n(f\frac{a(t')}{a(t_{univ})},t') dt'.
\end{equation}
Note that the frequency is also redshifted.
We have checked the effects of including higher mode radiation from each loop. The time rate of change of the gravitational energy density of loops at time $t'$ is found by summing over all frequencies $f_j$ and weighting with the power coefficients $P_j$:
\begin{equation}
\frac{d\rho_{gw}(f',t')}{dt'}=\sum_{j=1}^{\infty} P_j G\mu^2 c\: n(f_j',t').
\end{equation}
This is then integrated over cosmic time and redshifted to give the current energy density in gravitational waves:
\begin{eqnarray}
\rho_{gw}(f)&=&G\mu^2 c \int_{t_0}^{t_{univ}}dt' \frac{a(t')^4}{a(t_{univ})^4}
\nonumber\\
& &\times\left\{\sum_{j=1}^{\infty} P_j\:n(f_j\frac{a(t')}{a(t_{univ})},t')\right\} .
\end{eqnarray}
The power coefficients $P_j$ are functions of the mode and have the form $P_j\propto j^{-4/3}$~\cite{vil,ca}. This includes the behavior of cusps and kinks, which also contribute an $f^{-1/3}$ tail to the power spectrum. Depending upon the percentage of power in the fundamental mode, the sum to infinity is not needed. As an example, if 50\% of the power is in the fundamental mode only the first six modes are needed. In general it is found that the higher modes do not significantly affect the values of the background radiation. Details are given in Results.
From the gravitational radiation density we can find the density spectrum readily by taking the derivative with respect to the log of the frequency and dividing by the critical density:
\begin{subequations}
\begin{eqnarray}
\Omega_{gw}(f)&=&\frac{f}{\rho_c}\frac{d\rho_{gw}(f)}{df},\\
&=&\frac{8 \pi G f}{3 H_o^2} \frac{d\rho_{gw}(f)}{df}.
\end{eqnarray}
\end{subequations}
A convenient measure is to take $\Omega_{gw} h^2$ to eliminate uncertainties in $H_o=H(t_{univ})=100 h\ km\ s^{-1} Mpc^{-1}$ from the final results.
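As a minimal illustration of how a given $\Omega_{gw}h^2$ maps onto a strain level (a Python sketch using the standard conversion $\Omega_{gw}=(2\pi^2/3H_o^2)f^2h_c^2$ quoted in the Introduction; the constant below assumes $h=1$ units and the plateau value inserted is only representative of the light-string curves shown later):
\begin{verbatim}
import math

H0 = 3.24e-18   # Hubble rate for h = 1 (100 km/s/Mpc), in s^-1

def h_c(omega_gw_h2, f):
    # characteristic strain from Omega_gw h^2 at frequency f [Hz]
    return math.sqrt(3.0 / (2.0 * math.pi**2) * omega_gw_h2) * H0 / f

print("h_c ~ %.1e at f = 1 mHz" % h_c(3e-10, 1e-3))
# h_c ~ 2.2e-20 at f = 1 mHz
\end{verbatim}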
\section{Results}
\subsection{General Results}
All of the computations show the expected spectrum for loops: old and decayed ``H'' loops leave a flat spectrum at high frequencies and matter era ``P'' loops contribute to a broad peak, with a smooth transition region connecting them that includes comparable contributions from both. The frequency of the peak is strongly dependent upon the string tension, with lighter strings leading to a higher frequency peak. This is a result of the lighter loops decaying at a lower rate, so smaller loops survive for a Hubble time. This behavior is exhibited in Fig.~\ref{fig1}.
Fig. 2 shows the peaks of power output per unit volume per log frequency as a function of frequency at different times in cosmic history. The most recent times contribute to the broad peak, while earlier times contribute to the flat red noise region. Although the power output per volume is larger earlier, the summation time is foreshortened so the contributions overlap to form the lower amplitude flat region in the background.
\begin{figure*}
\includegraphics[width=.9\textwidth]{1.eps}
\caption{\label{fig1} The gravitational wave energy density per log frequency from cosmic strings is plotted as a function of frequency for various values of $G\mu$, with $\alpha=0.1$ and $\gamma=50$. Note that current millisecond pulsar limits have excluded string tensions $G\mu>10^{-9}$ and LISA is sensitive to string tensions $\sim10^{-16}$ using the broadband Sagnac technique, shown by the bars just below the main LISA sensitivity curve. The dotted line is the Galactic white dwarf binary background (GCWD-WD) from A.J. Farmer and E.S. Phinney, and G. Nelemans, et al.,~\cite{farm,nel}. The dashed lines are the optimistic (top) and pessimistic (bottom) plots for the extra-Galactic white dwarf binary backgrounds (XGCWD-WD)~\cite{farm}. Note the GCWD-WD eliminates the low frequency Sagnac improvements. With the binary confusion limits included, the limit on detectability of $G\mu$ is estimated to be $>10^{-16}$.}
\end{figure*}
\begin{figure}
\includegraphics[width=.48\textwidth]{2.eps}
\caption{\label{fig2}The power per unit volume of gravitational wave energy per log frequency from cosmic strings is measured at different times in cosmic history as a function of frequency for $G \mu=10^{-12}$, $\alpha=0.1$, and $\gamma=50$. Note that early in the universe the values were much larger, but the time radiating was short. The low frequency region is from current loops and the high frequency region is from loops that have decayed. This plot shows the origin of the peak in the spectrum at each epoch; the peaks from earlier epochs have smeared together to give the flat high frequency tail observed today.}
\end{figure}
The differences in these calculations from previous publications can be accounted for by the use of larger loops and lighter strings as predicted in~\cite{ho}. The change in cosmology, i.e. the inclusion of a cosmological constant, is not a cause of significant variation. Since the cosmological constant becomes dominant in recent times the increased redshift and change in production rate of loops does not strongly affect the background.
\subsection{Varying Lengths}
The effect of changing the size of the loops is shown in Fig. 3. A larger $\alpha$ leads to an overall increase in the amplitude of $\Omega_{gw}$ but a decrease in the amplitude of the peak relative to the flat portion of the spectrum. As expected there is little if any change in the frequency of the peak. In general the red noise portion of the spectrum scales with $\alpha^{1/2}$ and this is seen in the high frequency limits of the curves in Fig. 3 \cite{ho}. An increase in $\alpha$ leads to longer loops and an increase in the time that a given loop can radiate. This leads to an overall increase in amplitude without an effect on the frequency dependence of the spectrum peak.
\begin{figure}
\includegraphics[width=.48\textwidth]{3.eps}
\caption{\label{fig3} The gravitational wave energy density per log frequency from cosmic strings is shown with varying $\alpha$ for $G\mu=10^{-12}$ and $\gamma=50$. Here $\alpha$ is given values of 0.1, $10^{-3}$, and $10^{-6}$ from top to bottom. Note the overall decrease in magnitude, but a slight increase in the relative height of the peak as $\alpha$ decreases; while the frequency remains unchanged. The density spectrum scales as $\alpha^{1/2}$, and larger loops dominate the spectrum over small loops.}
\end{figure}
In Fig.~\ref{fig4} a smaller $\alpha$ is plotted and shows the reduction in the background. The general shape of the background remains unchanged, except for some small differences in the heavier strings. For heavier strings the amplitude of the peak tends to increase relative to the red noise portion of the background spectrum, and the frequency of the peak shifts less with $G\mu$. These differences become more pronounced for heavy strings and very small loops; in this limit our results are found to match very closely those computed in~\cite{ca}.
\begin{figure}
\includegraphics[width=.48\textwidth]{4.eps}
\caption{\label{fig4}The gravitational wave energy density per log frequency from cosmic strings for $\alpha=10^{-5}$ and $\gamma=50$. Smaller $\alpha$ reduces the background for equivalent string tensions, with a LISA sensitivity limit of $G\mu>10^{-14}$ if the Sagnac sensitivity limits are used. The bars below the main LISA sensitivity curve are the Sagnac improvements to the sensitivity, while the dotted line is the Galactic white dwarf binary background (GCWD-WD) from A.J. Farmer and E.S. Phinney, and G. Nelemans, et al.,~\cite{farm,nel}. The dashed lines are the optimistic (top) and pessimistic (bottom) plots for the extra-Galactic white dwarf binary backgrounds (XGCWD-WD)~\cite{farm}. Note the GCWD-WD eliminates the low frequency Sagnac improvements, and increases the minimum detectable $G\mu$ to $>10^{-12}$. $G\mu$ ranges from the top curve with a value $10^{-7}$ to the bottom curve with $10^{-15}$.}
\end{figure}
\subsection{Luminosity}
A residual factor of the order of unity arises from uncertainty in the typical radiation losses from loops when all modes are included averaged over an entire population. Changing the dimensionless luminosity $\gamma$ affects the curves as shown in Fig.~\ref{fig5}. Larger $\gamma$ decreases the amplitude of the red noise region of the spectrum, but increases the lower frequency region. Smaller $\gamma$ increases the amplitude of the peak and the flat portion of spectrum, while leading to a decrease in the high frequency region. This occurs because the loops decay more quickly for larger $\gamma$ and have less time to contribute over the long time periods of cosmology. In the matter dominated era the higher luminosity allows the loops to contribute more to the high frequency region, although it does not increase the peak amplitude. Higher luminosity also decreases the frequency of the overall peak by contributing more power in current loops. The power and change in length depend on the luminosity by $\dot{E}\propto\gamma$ and $\dot{L}\propto-\gamma$. Note that the gravitational wave power depends on the square of the mass density but the first power of the luminosity, thus different behavior is expected varying $\gamma$ or $\mu$. An analytic description of the scaling of the amplitude and frequency of $\Omega_{gw}$ with $\gamma$ is given in~\cite{ho}.
\begin{figure}
\includegraphics[width=.48\textwidth]{5.eps}
\caption{\label{fig5}The energy density in gravitational waves per log frequency from cosmic strings with varying luminosity $\gamma$ for $G\mu=10^{-12}$ and $\alpha=0.1$. Here the $\gamma$'s are given as follows at high frequencies: 50 for top curve, 75 for the middle curve, and 100 for the bottom curve. The peak frequency, peak amplitude, and amplitude of flat spectrum all vary with the dimensionless gravitational wave luminosity $\gamma$.}
\end{figure}
\subsection{Multiple Lengths}
Some models indicate several characteristic scales for the loop populations, with some combination of several scales. This implies multiple distributions of lengths, which can be described as a combination of different $\alpha$'s. Figures~\ref{fig1}, \ref{fig3}, and \ref{fig4} indicate the eventual dominance of the larger loops over the smaller in the final spectrum. Even though, in this model, the number of loops created goes as $\alpha^{-1}$, their contribution to the background is still much smaller and scales as $\alpha^{1/2}$. Thus an admixture of loop scales tends to be dominated by the largest stable radiating loops, which is shown in Fig.~\ref{fig6}.
\begin{figure}
\includegraphics[width=.48\textwidth]{6.eps}
\caption{\label{fig6} The gravitational wave energy density per log frequency from cosmic strings for $\gamma$=50, $G\mu=10^{-12}$, and a combination of two lengths ($\alpha$'s). 90\% of the loops shatter into loops of size $\alpha=10^{-5}$ and 10\% remain at the larger size $\alpha=0.1$. The overall decrease is an order of magnitude compared to just large alpha alone. From bottom to top the values of $G\mu$ run from $10^{-17}$ to $10^{-8}$. }
\end{figure}
In Fig.~\ref{fig6} we assume 90\% of the loops shatter into smaller sizes $\alpha=10^{-5}$ while 10\% remain larger with $\alpha=0.1$. A ``two scale'' model is used and two relatively narrow distributions are assumed, as in the one scale model. It is imagined that the initial loops are large and splinter into smaller loops, with the percentage that shatters dependent upon the model. Overall the amplitude decreases by an order of magnitude below a pure $\alpha=0.1$ amplitude, but it is an order of magnitude above the $\alpha=10^{-5}$ spectra. Thus the large loops, even though they form a small fraction, contribute more to the background. Because of this weighting a one-scale model is likely to be suitable if appropriately normalized. Note that a string simulation requires a dynamic range of scale greater than $\alpha$ to make a good estimate of $\alpha$ and to choose the right normalization.
\subsection{High Frequency Modes, Kinks, and Cusps}
Calculations are also done which include higher mode behavior of the loops; this sends some of the gravitational wave energy into higher frequencies for each loop. Adding higher modes tends to smooth out the curves and slightly lower the peaks of the spectra. When applied to the power per volume per log frequency as in Fig.~\ref{fig2} we see a less steep high frequency tail at each time and a slight decrease in the amplitude of the peak. From Fig.~\ref{fig7} a comparison of the fundamental mode alone versus the fundamental mode with higher modes included shows that: a) the peak decreases, b) the peaked portion widens slightly, and c) the red noise amplitude remains relatively unchanged.
Fig.~\ref{fig7} demonstrates a calculation in which $\sim50\%$ of the power is in the fundamental mode and the power coefficients go as $n^{-4/3}$ in the higher modes. For the fundamental mode $P_1=25.074$, and the next five modes are $P_2=9.95$, $P_3=5.795$, $P_4=3.95$, $P_5=2.93$, and $P_6=2.30$. Their sum is $\sum P_n=\gamma=50$. The amplitude of the peak shifts from $1.202\times10^{-9}$ to $1.122\times10^{-9}$, a 6.7\% decrease. The frequency of the peak shifts more substantially, from $3.162\times10^{-7}\:Hz$ to $7.080\times10^{-7}\:Hz$, an increase of 124\%. This increase in frequency of the peak does not affect the limits of detection by LISA because the amplitude shrinks very little. Note that the high frequency red noise region is essentially unchanged, and for $G\mu=10^{-12}$ this is the region in which LISA is sensitive.
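As a simple numerical cross-check (not part of the original calculation), the quoted mode powers follow from $P_n=P_1\,n^{-4/3}$ with $P_1=25.074$; the short script below, which assumes nothing beyond these two inputs, reproduces $P_2,\dots,P_6$ and their sum $\sum P_n=\gamma=50$:
\begin{verbatim}
# Hedged sketch: reproduce the quoted mode powers P_n = P_1 * n^(-4/3).
P1 = 25.074
P = [P1 * n ** (-4.0 / 3.0) for n in range(1, 7)]
print([round(p, 3) for p in P])  # [25.074, 9.951, 5.795, 3.949, 2.933, 2.3]
print(round(sum(P), 2))          # ~50.0, i.e. gamma
\end{verbatim}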
\begin{figure}
\includegraphics[width=.48\textwidth]{7.eps}
\caption{\label{fig7}The gravitational wave energy density per log frequency for $\gamma$=50, $G\mu=10^{-12}$, $\alpha=0.1$; plotted in one case (dashed) with only the fundamental mode, while for another case (solid) with the first six modes. The dashed curve has all power radiated in the fundamental mode, while the solid curve combines the fundamental with the next five modes: $P_1=25.074,\ P_2=9.95,\ P_3=5.795,\ P_4=3.95,\ P_5=2.93,\ P_6=2.30$. Note the slight decrease in the peak and its shift to a higher frequency. The LISA sensitivity is added as a reference.}
\end{figure}
Plots of spectra for other $G \mu$ with high frequency modes show changes similar to those in Fig.~\ref{fig7}. The peak amplitude drops slightly and the frequency of the peak is increased. Overall, the limits from Table~\ref{table1} remain unchanged by the introduction of 50\% of the power in the higher frequency modes.
Cusps and kinks in the loops are responsible for high frequency ``bursts'' of gravitational wave energy~\cite{da00,da01,da05,sie}. They are two different manifestations of string behavior: cusps are catastrophes that occur approximately once every oscillation, emitting a directed burst of gravitational radiation, while kinks are small wrinkles which propagate on the loops, emitting directed gravitational radiation at higher frequencies. Light strings make it less likely that these random bursts will be detected; in particular, they are more difficult to pick out from their own confusion background~\cite{ho}. Since cusps, kinks, and higher modes add an $f^{-1/3}$ tail to the power spectrum of each loop, it is reasonable to expect that they will have a similar effect on $\Omega_{gw}$. The general behavior of high frequency contributions is represented in Fig.~\ref{fig7}.
\section{Millisecond Pulsar Limits}
\subsection{Limits on Background}
Millisecond pulsars have been used as sources to indirectly measure a gravitational radiation background for a number of years \cite{det,sti,kaspi,lommen,thors,mch}. Limits on the strain, and correspondingly the energy density spectrum $\Omega_{gw}$, have been estimated recently at three frequencies: $1/20\:yr^{-1}$, $1/8\:yr^{-1}$, and $1\:yr^{-1}$~\cite{jenet}. The limits on the strain vary with the frequency dependence assumed for the source of gravitational radiation at the corresponding frequencies:
\begin{equation}
\label{eqs:Aheqn}
h_c(f)=A \left(\frac{f}{yr^{-1}}\right)^{\beta},
\end{equation}
where $\beta$ is the power of the frequency dependence of the source and is denoted as $\alpha$ in F.A. Jenet, et al.,\cite{jenet}. A table of values is provided in that source for various cosmological sources of stochastic gravitational radiation, with $\beta=-7/6$ used for cosmic strings. From~\cite{jenet}, the cosmic string limits are:
\begin{subequations}
\label{eqs:limits}
\begin{eqnarray}
\Omega_{gw}(1\ yr^{-1}) h^2 &\leq& 9.6\times10^{-9},\\
\Omega_{gw}(1/8\ yr^{-1}) h^2 &\leq& 1.9\times10^{-8},\\
\Omega_{gw}(1/20\ yr^{-1}) h^2 &\leq& 2.6\times10^{-8}.
\end{eqnarray}
\end{subequations}
Results of our calculations show that for heavier strings $\beta$ is approximately -1 at the measured frequencies, but for lighter strings it deviates from this value. The variation of $\beta$ is summarized in Table~\ref{table2}, which has been calculated at $f=1/20\:yr^{-1}$ and $f=1\:yr^{-1}$. This change in $\beta$ results in limits that differ slightly from those in Eqs.~\ref{eqs:limits}, but only at low frequencies \cite{jenet}.
In Figures~\ref{fig1} and \ref{fig4} the limits from millisecond pulsars \cite{jenet} are shown, with allowed values of $\Omega_{gw}$ indicated by the direction of the arrow. For large loops, $\alpha=0.1$, the maximum allowed $G \mu$ is about $10^{-9}$ (Fig.~\ref{fig1}). Decreasing $\alpha$ decreases the amplitude of the background, allowing for heavier strings within the pulsar measurement limits. A more complete list is displayed in Table~\ref{table1}.
\subsection{Dependence of Characteristic Strain on Frequency}
From the curves of Fig.~\ref{fig1} one can find the frequency dependence of the characteristic strain as given by $h_c(f)=A(f/yr^{-1})^{\beta}$. Again, $\beta$ is the power law dependence of $h_c$ on frequency, and is generally written as $\alpha$ with a numerical value of -7/6 \cite{jenet,mag,mag00}. In this paper $\beta$ is used to prevent confusion with the length of loops. If $\Omega_{gw}$ is known, then using Eqn.~\ref{eqs:charstrain} we find:
\begin{equation}
\Omega_{gw}(f)=\frac{2 \pi^2 A^2}{3 H_o^2} \frac{f^{2(1+\beta)}}{(3.17\times10^{-8}\:Hz)^{2 \beta}}.
\end{equation}
With this equality $\beta$ is found by computing $d\ln(\Omega_{gw})/d\ln f$ at $f=1\:yr^{-1}=3.17\times10^{-8}Hz$ and setting this equal to $2(1+\beta)$. The results are listed in Table~\ref{table2}.
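The same log-derivative can be evaluated numerically on any tabulated spectrum. The sketch below illustrates the procedure; the spectrum used there is a toy broken power law standing in for the computed $\Omega_{gw}(f)$, not one of our actual spectra:
\begin{verbatim}
# Hedged sketch: beta from d ln(Omega_gw)/d ln f = 2(1 + beta) at f = 1/yr.
# The spectrum below is a toy stand-in, NOT the computed cosmic-string spectrum.
import numpy as np

f = np.logspace(-9, -6, 400)                   # Hz
f_yr = 3.17e-8                                 # 1 yr^-1 in Hz
omega = 1e-10 * (f / f_yr) ** 0.5 / (1.0 + (f / (5.0 * f_yr)) ** 1.2)

dlnO_dlnf = np.gradient(np.log(omega), np.log(f))
beta = 0.5 * dlnO_dlnf - 1.0
print(beta[np.argmin(np.abs(f - f_yr))])       # beta at f = 1 yr^-1
\end{verbatim}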
\begin{table}
\caption{\label{table2}$\beta$ for given $G\mu$ at $f=1/20\:yr^{-1}$ and $f=1\ yr^{-1}$, $\alpha=0.1$, $\gamma=50$, where $h_c\propto f^\beta$.}
\begin{ruledtabular}
\begin{tabular}{c c c}
& $\beta$ & $\beta$ \\
$G\mu$ & $f=1/20\:yr^{-1}$ & $f=1\:yr^{-1}$ \\
\hline
$10^{-6}$& -1.00306 & -1.00031 \\
$10^{-8}$& -1.05314 & -1.00894 \\
$10^{-9}$& -1.05985 & -1.03853 \\
$10^{-10}$& -0.931749 & -1.06593 \\
$10^{-11}$& -0.735778 & -0.987882 \\
$10^{-12}$& -0.682874 & -0.778679 \\
$10^{-13}$& -0.676666 & -0.688228 \\
$10^{-14}$& -0.676034 & -0.676054 \\
$10^{-15}$& -0.675971 & -0.674794 \\
$10^{-16}$& -0.675965 & -0.674668 \\
\end{tabular}
\end{ruledtabular}
\end{table}
From Fig.~\ref{fig1} the change in slope with $G\mu$ is evident at the measured frequency $f_o=1\ yr^{-1}$. For large values of $G\mu$, $f_o$ is in the flat red noise section of the spectrum so the slope of $\Omega_{gw}$ goes to zero, and $\beta$ approaches -1. As the string tension is reduced, $\beta$ is measured first at the high frequency tail so the slope is negative, and thus $\beta$ is more negative. Once over the peak on the high frequency side, the slope becomes positive, thus $\beta$ must be greater than -1. The characteristic strain power law dependence $\beta$ then approaches a value of -0.67.
This dependence on $G\mu$ changes the value of the constant $A$ in Eqn.~\ref{eqs:Aheqn} and thus the limits on $\Omega_{gw}$ \cite{jenet}. Higher mode dependence only modestly changes the values of $\beta$, since it approaches the limits -1 and -2/3 for high and low frequencies respectively. The frequency at which it takes on these values depends upon $G \mu$, as seen in Fig.~\ref{fig1}.
\section{Future LISA Sensitivity}
\subsection{Physical Principles}
The pending launch of LISA will give new opportunities to observe gravitational waves in general, and a stochastic background in particular~\cite{ci,allen96,cornish,lisa}. There are a number of theoretical sources for this background, including cosmic strings \cite{mag,kosow,ferrari99,postov,ho4}. LISA sensitivity has been modeled and methods for improving sensitivity by calibrating noise and integrating over a broad band have been proposed \cite{shane,hoben,corn}. The cosmic string background density spectrum calculated here is compared to the potential for detection by LISA.
Detectors such as LISA measure the strain on spacetime, so a relation between the background spectrum and strain is needed. The rms strain is denoted by $h_{rms}(f)$ or $\bar{h}(f)$, and for an isotropic, unpolarized, stationary background~\cite{jenet,hoben,mag00,allen96}:
\begin{equation}
\Omega_{gw}(f)=\frac{4 \pi^2}{3 H_o^2} f^3 h^2_{rms} (f).
\end{equation}
Another measure is the characteristic strain spectrum $h_c(f)^2=2fh^2_{rms}(f)$, which, when substituted, gives:
\begin{equation}
\label{eqs:charstrain}
\Omega_{gw}(f)=\frac{2 \pi^2}{3 H_o^2} f^2 h_c^2(f).
\end{equation}
In the literature, $S_h(f)=h_{rms}(f)^2$ is often used and referred to as the spectral density \cite{mag00}. These relations are important as they relate the energy density spectrum $\Omega_{gw}(f)$ with the strain that LISA will detect. This limit on the detectable strain is shown as a curve on our plots and is taken from~\cite{shane}.
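As an illustration of how Eqn.~\ref{eqs:charstrain} connects the two descriptions, the sketch below inverts the pulsar bound $\Omega_{gw}(1\ yr^{-1}) h^2\leq9.6\times10^{-9}$ of Eqs.~\ref{eqs:limits} into the corresponding characteristic strain at $f=1\:yr^{-1}$, writing $H_o=100\,h\:km\,s^{-1}\,Mpc^{-1}$ so that the factor $h^2$ cancels; this is only a consistency exercise, not a new limit:
\begin{verbatim}
# Hedged sketch: invert Omega_gw = 2 pi^2 f^2 h_c^2 / (3 H_o^2) at f = 1/yr.
import numpy as np

H100 = 100.0e3 / 3.0857e22      # 100 km/s/Mpc in s^-1
f = 3.17e-8                     # 1 yr^-1 in Hz
omega_h2 = 9.6e-9               # pulsar limit on Omega_gw h^2 at 1/yr

h_c = np.sqrt(3.0 * H100**2 * omega_h2 / (2.0 * np.pi**2 * f**2))
print(h_c)                      # implied characteristic strain at f = 1/yr
\end{verbatim}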
In this paper the LISA sensitivity curve is calculated with the following parameters: SNR=1.0, arm length=5$\times10^9\:m$, optics diameter=$0.3\:m$, laser wavelength=$1064\:nm$, laser power=$1.0\:W$, optical train efficiency=0.3, acceleration noise=$3\times10^{-15}\:m/(s^2\sqrt{Hz})$, position noise=$2\times10^{-11}\:m/\sqrt{Hz}$.
Extracting the background signal by combining signals over a broad band and an extended time can increase the sensitivity of the detector. One method, known as the ``Sagnac technique'', is shown on plots of the background \cite{hoben,corn}; see, for example, Fig.~\ref{fig1}.
\subsection{Limits for LISA Sensitivity}
For both Figures~\ref{fig1} and \ref{fig4} there are limits on the detectability of the background due to strings of various mass densities by LISA from~\cite{shane}. From Fig.~\ref{fig1}, with $\alpha=0.1$, the minimum value of the string tension measurable is $G\mu\sim10^{-16}$. In Fig.~\ref{fig4}, which plots a smaller $\alpha=10^{-5}$, the minimum is $G\mu\sim10^{-12}$. These values and others are listed in Table~\ref{table1}. Sagnac observables increase our available spectrum and change the detectable string tensions as indicated \cite{hoben}.
Shown in Table~\ref{table1} are the limits on string tensions for string sizes $\alpha$ from 0.1 to $10^{-6}$, along with the corresponding minimum tension that LISA can detect. The Galactic and extra-Galactic binary confusion limits are taken into account in this table. For $\alpha<10^{-2}$ the maximum string tension will be greater than $10^{-8}$, although these values are excluded by WMAP and SDSS~\cite{wy}. It is clear from the table and figures that decreasing $\alpha$ increases the maximum string tension allowed as well as increasing the minimum string tension detectable by LISA.
\begin{table}
\caption{\label{table1}Millisecond Pulsar Limits and LISA sensitivity for $\gamma=50$. WMAP and SDSS \cite{wy} have excluded $G\mu\geq10^{-7}$.}
\begin{ruledtabular}
\begin{tabular}{c c c}
& Millisecond Pulsar & Minimum LISA \\
$\alpha$ & Limits, $G\mu<$ & Sensitivity, $G\mu\geq$\\
\hline
0.1 & $10^{-9}$ & $10^{-16}$ \\
$10^{-2}$ & $10^{-8}$ & $10^{-15}$\\
$10^{-3}$ & - & $10^{-14}$\\
$10^{-4}$ & - & $10^{-13}$\\
$10^{-5}$ & - & $10^{-12}$\\
$10^{-6}$ & - & $10^{-11}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Confusion Noise from Binary Systems}
Galactic white dwarf binaries add a background ``confusion'' that shrouds portions of LISA's spectrum. This background has been studied and predictions given for its spectrum~\cite{nel}. There are also extra-Galactic binary sources which similarly create confusion noise~\cite{farm}. These confusion limits are shown on plots of the gravitational radiation spectra, and they affect the minimum values of string tension that are detectable.
\section{Conclusions}
Broadly speaking, the results here confirm with greater precision and reliability the estimates given in \cite{ho}. Our survey of spectra provides a set of observational targets for current and future projects able to detect stochastic gravitational wave backgrounds. For light strings, the string tension significantly affects the spectrum of gravitational radiation from cosmic string loops. In addition to a reduction in the overall radiation density, the most significant change is the shift of the peak of the spectra to higher frequencies for light strings. Near the current pulsar limits, the peak happens to lie close to the frequency range probed by the pulsars, so spectra as computed here are needed as an input prior for establishing the value of the limits themselves. For much lighter strings, the peak is at too high a frequency for strings to be detectable by pulsar timing as the spectrum falls below other confusion backgrounds. Lighter strings will be detectable by LISA. For the lightest detectable strings the peak happens to lie in the LISA band and the detailed spectrum must again be included in mounting observational tests. The spectrum from strings is quite distinctive and not similar to any other known source.
A high probability of forming loops with stable, non-self-intersecting orbits, as suggested by recent simulations, leads to larger string loops at formation, which in turn give an increased output of gravitational wave energy and improved possibilities of detection for very light strings. Recent simulations have shown stable radiating loops form at a significant fraction of the horizon, of order $\alpha=0.1$; for loops of this size our calculations show that the maximum string tension allowed by current millisecond pulsar measurements is $G\mu<10^{-9}$, and the minimum value detectable by LISA above estimated confusion noise is $G\mu\approx10^{-16}$. In field theories, the string tension is typically related to the scale by $G\mu\propto\Lambda_s^2/m_{pl}^2$, so the maximum detectable scale in Planck masses currently allowed by millisecond pulsars is $\Lambda_s<10^{-4.5}$, or around the Grand Unification scale; with LISA, the limit will be about $\Lambda_s\sim10^{-8}$ or $10^{11}\:GeV$. These results suggest that gravitational wave backgrounds are already a uniquely deep probe into the phenomenology of Grand Unification and string theory cosmology, and will become more powerful in the future. The most important step in establishing these arguments for an unambiguous limit on fundamental theories will be a reliable quantitative calculation of $\alpha$.
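The conversion quoted above can be checked directly. Taking $G\mu\simeq\Lambda_s^2/m_{pl}^2$, i.e. setting the proportionality constant to unity as an order-of-magnitude assumption, the sketch below recovers $\Lambda_s\sim10^{-4.5}\,m_{pl}$ for the pulsar bound and $\Lambda_s\sim10^{-8}\,m_{pl}\approx10^{11}\:GeV$ for the LISA limit at $\alpha=0.1$:
\begin{verbatim}
# Hedged sketch: Lambda_s/m_pl ~ sqrt(G mu), proportionality constant set to 1.
import numpy as np

m_pl_GeV = 1.22e19                        # Planck mass in GeV
for Gmu in (1e-9, 1e-16):                 # pulsar bound, LISA reach (alpha = 0.1)
    scale = np.sqrt(Gmu)                  # Lambda_s in Planck units
    print(Gmu, scale, scale * m_pl_GeV)   # ... and in GeV
\end{verbatim}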
\section{Introduction}
During the last two decades, intense activity of both theorists and experimentalists has led to a better understanding of QCD at very high energy and high density, especially with the advent of RHIC, which provided a very good opportunity to test new ideas such as the theory of the Color Glass Condensate (CGC) (see ref. \cite{JKrev} and references therein), which describes the physics of saturation in the initial state of heavy ion collisions.
\section{Hadron production in d-A collisions}
We focus our study on the
$R_{CP}$ (Central/Peripheral collisions)
ratio defined as
\begin{equation}\label{RpA}
R_{CP}=\frac{N^{P}_{coll}\frac{dN^{dA\rightarrow hX}}{d\eta
d^{2}{\bf k} }\vert_{C}}{N^{C}_{coll}\frac{dN^{dA\rightarrow
hX}}{d\eta d^{2}{\bf k}}\vert_{P}}.
\end{equation}
${\bf k}$ and $\eta$ are respectively the transverse momentum and the pseudo-rapidity of the observed hadron. $N_{coll}$ is the number of collisions in dA; it is roughly twice the number of collisions in pA (proton-gold). The centrality dependence of $R_{CP}$ is related to the dependence of $N^{dA\rightarrow hX}=d\sigma^{dA\rightarrow hX}/d^{2}{\bf b}$ and $N_{coll}({\bf b})$ on the impact parameter of the collision. In this paper, we address the predictions of the CGC for this ratio. We always assume that cross-sections depend on the impact parameter only through the number of participants, to which the saturation scale is proportional: $Q_{sA}^2({\bf b})\simeq Q_{sA}^2(0)N_{part.Au}({\bf b})/N_{part.Au}(0)$, where $N_{part.Au}$ is the number of participants in the gold nucleus in d-Au collisions \cite{KLN}.
Also, we use Table 2 in ref. \cite{BRAHMS1} which gives the number
of participants $N_{part}$ and the number of collisions $N_{coll}$
for several centralities.
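To make the centrality treatment explicit, the short sketch below evaluates Eq.~(\ref{RpA}) together with the scaling $Q_{sA}^2({\bf b})\simeq Q_{sA}^2(0)N_{part.Au}({\bf b})/N_{part.Au}(0)$. The participant and collision numbers, the yields, and the central saturation scale used there are illustrative placeholders, not the values of Table 2 of ref.~\cite{BRAHMS1}:
\begin{verbatim}
# Hedged sketch of the R_CP definition and of the N_part scaling of Q_s^2.
# All numbers are illustrative placeholders, not the BRAHMS Table 2 values.
Ncoll = {"central": 14.0, "peripheral": 3.0}        # hypothetical
Npart_Au = {"central": 12.0, "peripheral": 3.5}     # hypothetical
Qs2_central = 9.0                                   # GeV^2, hypothetical

# saturation scale scaled by the number of participants in the gold nucleus
Qs2 = {c: Qs2_central * Npart_Au[c] / Npart_Au["central"] for c in Npart_Au}

def R_CP(yield_central, yield_peripheral):
    """Central-to-peripheral ratio normalised by the number of collisions."""
    return (Ncoll["peripheral"] * yield_central) / (Ncoll["central"] * yield_peripheral)

print(Qs2)
print(R_CP(4.2e-3, 1.0e-3))                         # placeholder yields
\end{verbatim}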
\subsection{Mid-rapidity}\label{subsec:prod}
At mid-rapidity, $x_d=x_A=k_\perp/\sqrt{s}\simeq 10^{-2}$; for such small values of the momentum fraction carried by partons in the deuteron and in the nucleus (small, but not small enough to require quantum evolution), gluons dominate the dynamics. We therefore focus on gluon production in the semi-classical picture given by the Mueller-Kovchegov formula \cite{KovMul}:
\begin{equation}\label{cross}
\frac{d\sigma^{dA\rightarrow gX}}{d\eta d^{2}{\bf k} d^{2}{\bf
b}}=\frac{C_{F}\alpha_{s}}{\pi^{2}}\frac{2}{{\bf
k}^{2}}\int_{0}^{~1/\Lambda}du\ln\frac{1}{u\Lambda}\partial_{u}[u\partial_{u}N_{G}(u,{\bf
b })]J_{0}(\vert {\bf k}\vert u),
\end{equation}
where $u=\vert {\bf z}\vert$ and $\Lambda\sim\Lambda_{QCD}$ is an infrared cut-off.\\
The CGC approach yields a
Glauber-Mueller form for the gluon dipole forward scattering amplitude
\begin{equation}\label{NG}
N_{G}({\bf z},{\bf b})=1-{\bf exp}({-\frac{1}{8}{\bf
z}^2Q^{2}_{s}({\bf b})\ln\frac{1}{{\bf z}^2\Lambda^2}}).
\end{equation}
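A minimal numerical sketch of Eq.~(\ref{cross}) with the Glauber-Mueller amplitude of Eq.~(\ref{NG}) is given below; the saturation scale, infrared cut-off, coupling and transverse momenta are illustrative placeholders rather than the parameters used for the figures, and the overall impact-parameter normalization is left out:
\begin{verbatim}
# Hedged sketch: numerical evaluation of the Mueller-Kovchegov gluon spectrum.
import numpy as np
import sympy as sp
from scipy.integrate import quad
from scipy.special import j0

CF, alpha_s = 4.0 / 3.0, 0.2
Qs2, Lam = 9.0, 0.2                       # GeV^2 and GeV, placeholder values

u = sp.symbols('u', positive=True)
NG = 1 - sp.exp(-sp.Rational(1, 8) * u**2 * Qs2 * sp.log(1 / (u**2 * Lam**2)))
core = sp.lambdify(u, sp.diff(u * sp.diff(NG, u), u), 'numpy')   # d_u[u d_u N_G]

def gluon_spectrum(kt):
    """dsigma/(deta d^2k d^2b), up to the impact-parameter normalisation."""
    val, _ = quad(lambda x: np.log(1.0 / (x * Lam)) * core(x) * j0(kt * x),
                  1e-6, 1.0 / Lam, limit=400)
    return (CF * alpha_s / np.pi**2) * (2.0 / kt**2) * val

for kt in (1.0, 2.0, 4.0):                # GeV
    print(kt, gluon_spectrum(kt))
\end{verbatim}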
In Fig.~1 (data from Ref.~\cite{BRAHMS1}) we see that the perturbative estimate for the saturation scale, $Q_{sC}^2=2$ GeV$^2$, is not sufficient to reproduce the experimental Cronin peak. Thus, we need to enhance $Q_{s}^2$ by a non-perturbative component \cite{AG,KKT2}, up to a value of $Q_{s}^2=9$ GeV$^2$, in order to get a meaningful comparison with the data (we have not included error bars in the figures).
\begin{figure}[h]
\centering
\includegraphics[width=5cm]{Fig3.eps}
\caption{
$R_{CP}$ for $Q^2_{s.C}=9$ GeV$^2$ (thick lines) and
$Q^2_{s.C}=2$ GeV$^2$ (thin lines). Full lines correspond to
central over peripheral collisions (full experimental dots), dashed lines correspond to semi-central over peripheral collisions
(empty experimental dots).}
\end{figure}
\subsection{Forward rapidity}
At forward rapidity the treatment is different: indeed, $x_d=k_\perp e^\eta/\sqrt{s} \simeq 10^{-1}-1$, so we treat the parton emerging from the deuteron in the framework of QCD factorization. In the nucleus, $x_A=k_\perp e^{-\eta}/\sqrt{s}\simeq 10^{-4}-10^{-3}$: gluons dominate, and in this very small-$x$ region we should apply $k_t$-factorization. The hadron production cross-section is written as follows:
\begin{equation}
\frac{d\sigma^{dA\rightarrow hX}}{d\eta d^{2}{\bf k} d^{2}{\bf
b}}=\frac{\alpha_{s}(2\pi)}{ C_{F}}\sum_{i=g,u,d}\int_{z_0}^1
dz\frac{\varphi_{A}({\bf k}/z,Y+\eta+\ln z,b)}{{\bf k}^{2}}[
f_{i}(x_{d}/z,{\bf k}^{2}/z^{2})D_{h/i}(z,{\bf k}^2)],
\end{equation}
where $ f_{u,d}(x,{\bf k}^{2})=(C_{F}/N_{c})xq_{u,d}(x,{\bf
k}^{2})$ and $f_{g}(x,{\bf k}^{2})=xg(x,{\bf k}^{2})$ are the
parton distributions inside the proton; $D_{h/i}(z,{\bf k})$ are fragmentation functions of the parton $i$ into the hadron $h$; and $z_0=(k_{\bot}/\sqrt{s})e^{\eta}$. $\varphi_A$ is the unintegrated gluon distribution in the nucleus; it is related to the Fourier transform of the forward dipole scattering amplitude: $\varphi_{A}(L,y)\propto \frac{d^2}{dL^2}\tilde{N}(L,y)$. At large $y$, the BK equation~\cite{BK} (derived in the framework of the CGC) provides the following expression \cite{munp2,MuT,IIMc}:
\begin{equation}\label{BKsol}
\tilde{N}(L,Y)\propto L\exp[-\gamma_{s}L-\beta (y) L^{2}].
\end{equation}
It has a remarkable geometric scaling behavior in the variable $L=\ln ({\bf k}^{2}/Q_{s}^{2}({\bf b},y))+L_0$ when $y$ goes to infinity; $L_0$ is a constant fixed as in \cite{MuT}. $\gamma_s\simeq 0.628$ is the anomalous dimension of the BFKL dynamics in the geometric scaling region \cite{MuT,IIMc}, and $\beta(y)\propto1/y$. We used the fit to the HERA data performed in ref.~\cite{IIM} in order to fix some free parameters.
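For reference, the unintegrated gluon distribution entering the cross-section above can be generated directly from the scaling form of Eq.~(\ref{BKsol}); the sketch below simply differentiates $\tilde{N}$ twice with respect to $L$, with the overall normalization and the constant in $\beta(y)\propto 1/y$ left unspecified:
\begin{verbatim}
# Hedged sketch: phi_A ~ d^2 Ntilde/dL^2 for Ntilde = L exp(-gamma_s L - beta L^2).
import sympy as sp

L, y, c = sp.symbols('L y c', positive=True)   # c is an unspecified constant
gamma_s = sp.Rational(628, 1000)
beta = c / y                                   # beta(y) proportional to 1/y
Ntilde = L * sp.exp(-gamma_s * L - beta * L**2)
phi_A = sp.simplify(sp.diff(Ntilde, L, 2))     # up to an overall normalisation
print(phi_A)
\end{verbatim}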
In Fig.~2 (data from Ref.~\cite{BRAHMS1}) we show our results for $R_{CP}$ at different rapidities. The agreement of the CGC-inspired BK description with the data at forward rapidity is quite good, even at $\eta=1$, where our approach is, however, no longer valid.
\begin{figure}[hbtp]
\centering
\begin{tabular}{c c c }
\qquad\includegraphics[width=4cm]{Fig2a.eps} &\qquad \includegraphics[width=4cm]{Fig2b.eps} &\qquad \includegraphics[width=4cm]{Fig2c.eps}\\
\qquad (a)& \qquad(b)& \qquad(c)
\end{tabular}
\caption{
$R_{CP}$ at different rapidities $\eta=1,
2.2$ and $3.2$. Full lines correspond to
central over peripheral collisions (full experimental dots), dashed lines correspond to semi-central over peripheral collisions
(empty experimental dots).}
\end{figure}
\section{Summary}
At mid-rapidity, the CGC is not sufficient to yield a quantitative description of the Cronin peak, whereas at forward rapidity we obtain good quantitative agreement with the data: not only is $R_{CP}$ reproduced, but also the hadron spectra \cite{AY}.
It should be noted that the main features of $R_{CP}$ may in fact be understood within the approximate form (valid when $k_{\bot}\gtrsim Q_s$)
\begin{equation}
R_{CP}\simeq
\left(\frac{N^C_{part}}{N^P_{part}}\right)^{\gamma_{eff}-1}.
\end{equation}
At forward rapidity
$\gamma_{eff}\simeq\gamma_s+\beta(\eta)\ln(k_{\bot}^2/Q_s^2)$ is a
decreasing function of $\eta$ and an increasing function of
$k_{\bot}$. This allows us to understand the qualitative behavior
shown by data and in particular the inversion of the centrality
dependence compared to mid-rapidity.
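A minimal numerical illustration of this approximate form is given below; except for $\gamma_s$, all inputs (the participant numbers, the constant in $\beta(\eta)$, and the saturation scale) are hypothetical placeholders chosen only to show the trend with $\eta$ and $k_\bot$:
\begin{verbatim}
# Hedged sketch: R_CP ~ (Npart_C/Npart_P)^(gamma_eff - 1),
# gamma_eff = gamma_s + beta(eta) ln(k^2/Qs^2).  Placeholder inputs throughout.
import numpy as np

gamma_s, c, Qs2 = 0.628, 0.3, 4.0        # c and Qs2 (GeV^2) are hypothetical
Npart_C, Npart_P = 13.6, 3.1             # hypothetical participant numbers

def R_CP(kt, eta):
    gamma_eff = gamma_s + (c / eta) * np.log(kt**2 / Qs2)
    return (Npart_C / Npart_P) ** (gamma_eff - 1.0)

for eta in (1.0, 2.2, 3.2):
    print(eta, [round(R_CP(kt, eta), 3) for kt in (2.0, 3.0, 4.0)])
\end{verbatim}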
At very large $\eta$ the anomalous dimension stabilizes at
$\gamma_{eff}=\gamma_s$, which could be tested at the LHC.
\section*{References}
\section{INTRODUCTION}
The Einstein-Maxwell-dilaton model with a Liouville potential (the EMdL model) follows from low energy string theories and can be obtained by dimensional reduction of a higher-dimensional Einstein-Maxwell theory with a cosmological constant. Some exact solutions to the EMdL equations, including solutions representing black holes, were derived and analyzed in \cite{7}. Later it was demonstrated that no static, spherically symmetric, asymptotically flat or (anti)-de Sitter ((a)dS) solutions to $d$-dimensional ($d\geqslant 3$) EMdL equations exist \cite{15}. Static, spherically symmetric, non-(a)dS, non-asymptotically flat solutions to $d$-dimensional ($d\geqslant 4$) EMdL equations which represent black holes were discussed in \cite{12}. In particular, three families of black hole solutions were constructed and analyzed. The first family of solutions has two horizons (or one extremal horizon) if $a^2<2/(d-2)$, where $a$ is the dilaton coupling constant (see the action \eq{Action2} below), and there is one horizon if $a^2>2/(d-2)$. A black hole solution does not exist for $a^2=2/(d-2)$. The second family of solutions has only one horizon, while the third family exists only for non-zero value of electric charge. Due to the complicated form of the lapse function an explicit location of the horizons of the third family is not known. However, in the 4-dimensional case a single horizon exists for $a=-1$. If the cosmological constant $\Lambda$ is positive and $a^2<1/3$ or $a^2>1$, there can be none, one, or two horizons, or an extremal horizon. If $\Lambda<0$ or $1/3<a^2<1$, there is a single horizon. Static topological black hole solutions\footnote[1]{For a review on topological black holes see, e.g. \cite{19,20}. } to 4-dimensional EMdL equations with $\Lambda<0$, whose horizon surface has zero or negative constant curvature, were obtained in \cite{18}, and static black plane solutions were studied in \cite{17}. Static black hole solutions to the $d$-dimensional EMdL model whose horizon surface has zero, positive, or negative constant curvature were constructed in \cite{16}. Black hole solutions to $d$-dimensional EMdL equations were constructed in \cite{1} by using dimensional reduction of $(d+1)$-dimensional black string solutions of Einstein gravity with a negative cosmological constant and by applying an $SL(2,R)$ transformation to the $(d-1)$-dimensional action. These solutions have a dilaton field diverging at asymptotic infinity, such that the corresponding Liouville potential vanishes. Cylindrically symmetric solutions to $d$-dimensional EMdL equations were analyzed, and 4-dimensional cylindrically symmetric solutions representing black holes and gravitating solitons were found in \cite{6}. A general $d$-dimensional, cylindrically symmetric solution was obtained for a certain relation between coupling constants. Spherically symmetric, dyonic black hole solutions to $d$-dimensional EMdL equations were derived in \cite{14}.
Given the relatively rich variety of solutions to the EMdL equations which possess regular horizon(s), one is interested to know what types of regular horizons exist within the EMdL model and what their properties are. It is of particular interest to study regular extremal horizons within this model. Regular extremal horizons within other models and their near-horizon geometry were studied, e.g. in \cite{Ha1,Ha1a,Ha2,Ha3}. In particular, in \cite{Ha1} four- and $5$-dimensional extremal horizons and their near-horizon geometry were studied within a more general model which includes many abelian vector fields and scalar fields and which can be reduced to the EMdL model.
In this paper we shall study static Killing horizons of arbitrary geometry and topology within the EMdL model in $d$-dimensional ($d\geqslant 4$) space-times with electrostatic field. By ``static horizons of arbitrary geometry and topology" we mean such horizons whose corresponding space-time can be foliated by $(d-2)$-dimensional equipotential surfaces with no vanishing gradient.
Given the EMdL action, we are interested to know if there are associated static solutions to the corresponding EMdL equations which have regular Killing horizons. Here we shall consider both non-degenerate and degenerate (extremal) Killing horizons. Such horizons can be horizons associated with black objects or space-time cosmological horizons. To find if such horizons exist, one needs to solve the EMdL equations with no spatial symmetry imposed, which is a formidable problem. Here we shall undertake a less difficult task. Namely, we assume that space-time metric functions and the model fields are real analytic functions in the vicinity of a space-time Killing horizon. Substituting analytic expansions of the metric functions and fields near the horizon into the EMdL equations we derive equations for the expansion coefficients. Solving these equations, we can derive the higher order expansion coefficients in terms of those corresponding to the metric functions and fields defined on the horizon. These expansions allow us to formulate necessary conditions for existence of regular extremal horizons. We call the conditions necessary because the space-time outside a regular extremal horizon may have singularities which are located beyond the radius of convergence of the expansions. Existence or non-existence of such singularities can be established if a global solution is constructed.
To study geometric properties of regular Killing horizons we restrict ourselves to the space-time curvature invariants: the Ricci scalar, the square of the Ricci tensor, and the Kretschmann scalar. Substituting the expansions into expressions for the space-time curvature invariants we derive relations between the space-time curvature invariants calculated on the Killing horizon and geometric invariants of the horizon surface. One of such relations corresponding to an electrostatic 4-dimensional space-time was derived in \cite{AFS},
\begin{eqnarray}\n{A24.IV}
{\cal K}\rvert_{{\cal H}}&=&3\left({\cal R}\rvert_{{\cal H}}+\frac{1}{4}F^2\rvert_{{\cal H}}\right)^2+\frac{1}{8}F^4\rvert_{\cal H}\,.
\end{eqnarray}
Here ${\cal K}\rvert_{\cal H}$ is the Kretschmann scalar calculated on the space-time horizon ${\cal H}$, ${\cal R}\rvert_{\cal H}$ is the Ricci scalar of the horizon surface, and $F^2\rvert_{\cal H}$ is the electromagnetic field invariant calculated on the horizon. This relation is a generalization of an analogous relation corresponding to a $4$-dimensional vacuum static space-time with a Killing horizon \cite{FS}. It was used in \cite{FS} to calculate vacuum energy density near a static $4$-dimensional black hole using Page's \cite{P} and Brown's \cite{B} approximations. An analogous relation was derived for horizon of a 5-dimensional vacuum static space-time in \cite{ASP},
\begin{eqnarray}\n{A24.IVb}
{\cal K}\rvert_{{\cal H}}&=&6{\cal R}_{AB}{\cal R}^{AB}\rvert_{\cal H}\,.
\end{eqnarray}
Here ${\cal R}_{AB}{\cal R}^{AB}\rvert_{\cal H}$ is the square of the Ricci tensor of the 3-dimensional horizon surface.
In this paper we derive relations which generalize the relations above to horizons of static $d$-dimensional space-times of the EMdL model.
Let us note that throughout our paper we shall consider only secondary scalar hair which are induced by the electrostatic field and a Liouville potential, in other words, we shall consider a dilaton field $\varphi$ \cite{Ortin}.
Our paper is organized as follows: in Sec. II we present the EMdL equations. In Sec. III using Israel's description we define general form of a static space-time metric and bring the EMdL equations to the form which is convenient for our analysis. Section IV contains expressions for the space-time curvature invariants. In Sec. V we analyze space-time geometry near non-degenerate and extremal horizons and derive the relations between the space-time curvature invariants calculated on space-time horizon and geometric quantities of the horizon surface. We shall present necessary conditions for existence of static extremal horizons within the EMdL model. Summary of our results is presented in Sec. VI.
In this paper we use the following convention of units: $G_{(d)}=c=1$, where $G_{(d)}$ is the $d$-dimensional gravitational constant. The space-time signature is $+(d-2)$, and the sign conventions are that adopted in \cite{MTW}.
\section{The EM${\text d}$L equations}
The EMdL model in a $d$-dimensional space-time is defined by the following action:
\begin{eqnarray}
&&\hspace{0.5cm}S[g_{\alpha\beta},A_{\alpha},\varphi]=\frac{1}{16\pi}\int d^{d}x \sqrt{-g} ~{\cal L}\,,\nonumber\\
&&\hspace{-0.6cm}{\cal L}=R-\frac{1}{2}g^{\alpha\beta}(\nabla_\alpha\varphi)(\nabla_\beta\varphi)-\frac{1}{4}e^{a\varphi}F^{2}-2\Lambda e^{-b\varphi}\n{Action2}.
\end{eqnarray}
Here $a$, $b$, and $\Lambda$ are arbitrary coupling constants, $\varphi$ is the dilaton field, $F^2= F_{\alpha\beta}F^{\alpha\beta}$, $F_{\alpha\beta}\equiv\nabla_{\alpha}A_{\beta}-\nabla_{\beta}A_{\alpha}$ is the electromagnetic field tensor, where $A_{\alpha}$ is the electromagnetic $d$-potential, and $\Lambda e^{-b\varphi}$ is a Liouville potential which one may consider as an effective ``cosmological constant". Here and in what follows, $\nabla_\alpha$ denotes the covariant derivative defined with respect to a $d$-dimensional metric $g_{\alpha\beta}$.
The action \eq{Action2} covers the following cases: the case $b=0$ reduces the Liouville potential to the cosmological constant $\Lambda$; the case $a=0$ is the Einstein-Maxwell theory with the scalar kinetic term and the Liouville potential. In a 10-dimensional space-time, if $F_{\alpha\beta}=0$, the action describes tachyon-free non-supersymmetric string theory (see, e.g., \cite{Tac1,Tac2,Tac3,Tac4,Tac5}). For specific dimension-dependent values of the coupling constants $a$ and $b$ the action \eq{Action2} is the Kaluza-Klein reduction of $(d+1)$-dimensional general relativity with a cosmological constant and rotation or twist (see, e.g., \cite{RTtheory}).
The EMdL equations derived from the action above are the following:
\begin{eqnarray}
&&R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}R=8\pi T_{\alpha\beta}~,\nonumber\\
\hspace{-0.3cm}8\pi T_{\alpha\beta}&=&\frac{1}{2}\,(\nabla_{\alpha}\varphi)(\nabla_{\beta}\varphi)-\frac{1}{4}\,(\nabla_{\gamma}\varphi)(\nabla^{\gamma}\varphi)g_{\alpha\beta}\nonumber\\
&+&\frac{1}{2}e^{a\varphi}\left(F_{\alpha}^{~\gamma}F_{\beta\gamma}
-\frac{1}{4}F^2g_{\alpha\beta}\right)-e^{-b\varphi}\Lambda g_{\alpha\beta},\n{1.1a.IV}\\
&&\nabla_{\beta}(e^{a\varphi}F^{\alpha\beta})=0\,,\n{1.1c}\\
&&\nabla_{\alpha}\nabla^{\alpha}\varphi=\frac{1}{4}ae^{a\varphi}F^2-2b\Lambda e^{-b\varphi}\n{1.1d}.
\end{eqnarray}
It is convenient to rewrite the Einstein equation \eq{1.1a.IV} in the following form:
\begin{eqnarray}
R_{\alpha\beta}&=&\frac{1}{2}(\nabla_{\alpha}\varphi)(\nabla_{\beta}\varphi)+\frac{2\Lambda e^{-b\varphi}}{d-2}g_{\alpha\beta}\nonumber\\
&+&\frac{1}{2}e^{a\varphi}\left(F_{\alpha}^{~\gamma}F_{\beta\gamma}
-\frac{F^2g_{\alpha\beta}}{2(d-2)}\right).\n{1.3b}
\end{eqnarray}
In the next section we present these equations in the form corresponding to a static space-time.
\section{Static EM$\text{d}$L space-time}
Here we shall focus on static, $d$-dimensional ($d\geqslant4$) space-times of arbitrary geometry and topology (in the sense discussed in the Introduction) and use Israel's description \cite{3,4,5}. Such space-times admit a Killing vector $\xi^{\alpha}=\delta^{\alpha}_{\,\,\, 0}$, ($x^0\equiv t$, and $t$ is coordinate time), which is timelike in the domain of interest: $\xi^{\alpha}\xi_{\alpha}=g_{00}\equiv-k^2<0$, and hypersurface $t=const$ orthogonal. Thus, the space-time metric $g_{\alpha\beta}$, $(\alpha,\beta,...=0,...,d-1)$ can be presented in the following form:
\begin{eqnarray}\n{A1.IV}
&&\hspace{-1cm}ds^2=g_{\alpha\beta}dx^{\alpha}dx^\beta=-k^2dt^2+\gamma_{ab}dx^adx^b\,,
\end{eqnarray}
where $\gamma_{ab}=\gamma_{ab}(x^c)$, $(a,b,c,...=1,...,d-1)$, is the metric on a $(d-1)$-dimensional hypersurface $t=const$.
Assuming that $(\nabla_\alpha k)(\nabla^\alpha k)$ vanishes nowhere in the domain of interest, we can define $k$ as one of the space-time coordinates in the domain and consider $(d-2)$-dimensional equipotential spacelike surfaces $\Sigma_k$ of constant $k$ and $t$. The norm of the spacelike vector $\delta_\alpha^{~k}=\nabla_\alpha k$, which is orthogonal to $\Sigma_k$, is
\begin{equation}\n{A5.IV}
\kappa^2(k,x^A)\equiv\delta_\alpha^{~k} g^{\alpha\beta} \delta^{~k}_{\beta}=g^{kk}=-\frac{1}{2}(\nabla^{\alpha}\xi^{\beta})(\nabla_{\alpha} \xi_{\beta})\,.
\end{equation}
Thus, the metric \eq{A1.IV} can be written in the following form:
\begin{eqnarray}\n{A4.IV}
&&\hspace{-0.5cm}ds^2=-k^2dt^2+\kappa^{-2}dk^2+h_{AB}dx^Adx^B.
\end{eqnarray}
Here $h_{AB}=h_{AB}(k,x^C)$, $(A,B,C,...=2,...,d-1)$ is the metric on $\Sigma_k$. The space-time \eq{A4.IV} has a horizon ${\cal H}$ defined by $k=0$, which is a Killing horizon. The $(d-2)$-dimensional horizon surface is defined by $k=0$ and $t=const$.
In this section we write the EMdL equations \eq{1.1c}-\eq{1.3b} corresponding to the metric \eq{A4.IV}.
To begin with, we consider $(d-1)$-dimensional hypersurfaces of $t=const$ and use the Gauss and Codazzi relations (see, e.g., \cite{3, Eis})
\begin{eqnarray}
\hspace{-0.09cm}R_{a\beta\gamma b}\hspace{0.1cm} n^{\beta}n^{\gamma}k&=&\gamma_{ac}{\cal \bar{S}}_{b~,t}^{~c}+\epsilon k_{|ab}+k{\cal \bar{S}}_{ac}{\cal \bar{S}}_b^{~c}\,,\n{CC1a}\\
\hspace{-0.09cm}R_{\alpha bcd}\hspace{0.1cm} n^{\alpha}&=&{\cal \bar{S}}_{bc|d}-{\cal \bar{S}}_{bd|c}\,,\n{CC2}\\
\hspace{-0.09cm}R_{abcd}&=&{\bar R}_{abcd}+\epsilon({\cal \bar{S}}_{ad}{\cal \bar{S}}_{bc}-{\cal \bar{S}}_{ac}{\cal \bar{S}}_{bd})\,,\n{CC1}
\end{eqnarray}
where $n^{\alpha}=\xi^{\alpha}/k$ is a unit time-like vector orthogonal to a $(d-1)$-dimensional hypersurface $t=const$, $\epsilon\equiv n^{\alpha}n_{\alpha}=-1$, ${\cal \bar{S}}_{ab}$ is the extrinsic curvature of a hypersurface $t=const$ defined as
\begin{equation}
{\cal \bar{S}}_{ab}\equiv\frac{1}{2}k^{-1}\gamma_{ab,t}\,,\n{S4t}
\end{equation}
and ${\bar R}_{abcd}$ is the Riemann tensor corresponding to the metric $\gamma_{ab}$. Here and in what follows, the barred geometric quantities correspond to $(d-1)$-dimensional hypersurfaces $t=const$, and the stroke stands for the covariant derivative defined with respect to the metric $\gamma_{ab}$.
Contraction of the relations \eq{CC1a}-\eq{CC1} with the use of the metric \eq{A1.IV} gives
\begin{eqnarray}
&&\hspace{-0.09cm}R_{\alpha\beta}n^{\alpha}n^{\beta}=-{\cal \bar{S}}_{ab}{\cal \bar{S}}^{ab}-\epsilon k^{-1}k_{|a}^{~|a}-k^{-1}{\cal \bar{S}}_{,t}\,,\n{CC3}\\
&&\hspace{-0.09cm}R_{\alpha b}n^{\alpha}=-{\cal \bar{S}}_{,b}+{\cal \bar{S}}_{b~|c}^{~c}\,,\n{CC4}\\
&&\hspace{-0.09cm}R_{ab}={\bar R}_{ab}-\epsilon {\cal \bar{S}}{\cal \bar{S}}_{ab}-k^{-1}k_{|ab}-\epsilon k^{-1}\gamma_{ac}{\cal \bar{S}}_{b~,t}^{~c}\,,\n{CC5}
\end{eqnarray}
where ${\cal \bar{S}}\equiv\gamma^{ab} {\cal \bar{S}}_{ab}$.
Since the metric \eq{A1.IV} is static, ${\cal \bar{S}}_{ab}$ vanishes. Thus, the expressions \eq{CC3}-\eq{CC5} imply that for the static space-time \eq{A1.IV} the Ricci tensor components are
\begin{eqnarray}
\hspace{-0.6cm}R_{tt}=k k_{|a}^{~|a}\,,\hspace{0.3cm} R_{ta}=0
\,,\hspace{0.3cm} R_{ab}={\bar R}_{ab}-k^{-1}k_{|ab}\,.\n{R11}
\end{eqnarray}
Let us now write the components of the $d$-dimensional Riemann and Ricci tensors in terms of geometric quantities corresponding to a $(d-2)$-dimensional surface $\Sigma_k$. Applying the replacements $t\rightarrow k$, $k\rightarrow \kappa^{-1}$, $n^{\alpha}\rightarrow \delta^{\alpha}_{~k} \kappa$, $\epsilon\rightarrow 1$ to the Gauss and Codazzi relations \eq{CC1a}-\eq{CC1} we derive
\begin{eqnarray}
&&\hspace{-0.3cm}R_{AkkB}=\kappa^{-1}\left(h_{AC}{\cal S}_{B\,\,\,,k}^{\,\,\,\,C}+(\kappa^{-1})_{;AB}+\kappa^{-1}{\cal S}_{AC}{\cal S}_{B}^{\,\,\,\,C}\right)\,,\n{A6.IV}\nonumber\\\\
&&\hspace{-0.3cm}R_{kABC}=\kappa^{-1}({\cal S}_{AB;C}-{\cal S}_{AC;B})\,,\n{A7.IV}\\
&&\hspace{-0.3cm}R_{ABCD}={\cal R}_{ABCD}+{\cal S}_{AD}{\cal S}_{BC}-{\cal S}_{AC}{\cal S}_{BD}\,,\n{A8.IV}
\end{eqnarray}
where ${\cal R}_{ABCD}$ is the Riemann tensor of a $(d-2)$-dimensional surface $\Sigma_k$. Here and in what follows, the semicolon stands for the covariant derivative defined with respect to the $(d-2)$-dimensional metric $h_{AB}$, and ${\cal S}_{AB}$ is the extrinsic curvature of a surface $\Sigma_k$ defined as
\begin{equation}\n{A9.IV}
{\cal S}_{AB}\equiv\frac{1}{2}\kappa h_{AB,k}\,.
\end{equation}
For the metric \eq{A4.IV} we derive
\begin{eqnarray}
\hspace{-0.3cm}k_{|kk}&=&\kappa^{-1}\kappa_{,k}\,,\hspace{0.3cm}
k_{|Ak}=k_{|kA}=\kappa^{-1}\kappa_{,A}\,,\n{4.a}\\
\hspace{-0.3cm}k_{|AB}&=&\kappa{\cal S}_{AB}\,,\hspace{0.3cm}
k_{|a}^{~|a}=\kappa (\kappa_{,k}+{\cal S})\,,\hspace{0.3cm}{\cal S}\equiv {\cal S}_{A}^{\,\,\,\,A}\,.\n{4.d}
\end{eqnarray}
Using the Gauss and Codazzi relations \eq{A6.IV}-\eq{A8.IV} together with the relations \eq{4.a} and \eq{4.d}, we can write the components of the $d$-dimensional Ricci tensor \eq{R11} in terms of geometric quantities of a $(d-2)$-dimensional surface $\Sigma_k$ as follows:
\begin{eqnarray}
&&\hspace{-0.25cm}R_{tt}=k\kappa(\kappa_{,k}+{\cal S})\,,\n{5.a}\\
&&\hspace{-0.25cm}R_{kk}=-\kappa^{-1}\left({\cal S}_{,k}+(\kappa^{-1})_{;A}^{~;A}+\kappa^{-1}{\cal S}_{A}^{~B}{\cal S}_{B}^{~A}+k^{-1}\kappa_{,k}\right)\,,\n{5.b}\nonumber\\\\
&&\hspace{-0.25cm}R_{Ak}=-\kappa^{-1}\left({\cal S}_{,A}-{\cal S}_{A~;B}^{~B}+k^{-1}\kappa_{,A}\right)\,,\n{5.c}\\
&&\hspace{-0.25cm}R_{AB}={\cal R}_{AB}-{\cal S}{\cal S}_{AB}-\kappa(\kappa^{-1})_{;AB}-\kappa h_{AC}{\cal S}_{B~,k}^{~C}\nonumber\\
&&\hspace{0.6cm}-\,k^{-1}\kappa\,{\cal S}_{AB}\,\n{5.d}.
\end{eqnarray}
Here ${\cal R}_{AB}$ is the Ricci tensor of a $(d-2)$-dimensional surface $\Sigma_k$.
Let us now define the electromagnetic field tensor $F_{\alpha\beta}$ in the static space-time \eq{A4.IV}. We consider the electrostatic $d$-potential
\begin{equation}
A_{\mu}=-\Phi \delta^t_{~\mu},
\end{equation}
where $\Phi=\Phi(k, x^A)$ is an electrostatic potential. The corresponding components of the electromagnetic field tensor read
\begin{eqnarray}
\hspace{-0.3cm}F_{at}=-F_{ta}=-\Phi_{,a}\,,\hspace{0.3cm} F_{ab}=0\,.\n{1.4a}
\end{eqnarray}
They give
\begin{eqnarray}
&&\hspace{-0.3cm}{F}^2=-2k^{-2} \Phi_{,a}\Phi^{,a}=-2k^{-2}(\kappa^2\Phi_{,k}^2+\Phi_{,A}\Phi^{,A})\,,\n{1.4b} \\
&&\hspace{-0.3cm}F_{t}^{~\alpha}F_{t\alpha}=\Phi_{,a}\Phi^{,a}=(\kappa^2\Phi_{,k}^2+\Phi_{,A}\Phi^{,A})\,,\n{1.4bba}\\
&&\hspace{-0.3cm}F_{a}^{~\alpha}F_{b\alpha}=-k^{-2}\Phi_{,a}\Phi_{,b}\,.\n{1.4c}
\end{eqnarray}
The dilaton field $\varphi$ does not depend on time, i.e. $\varphi=\varphi(k, x^A)$.
\begin{widetext}
Using Eqs. \eq{5.a}-\eq{1.4c} we can present the EMdL equations \eq{1.1c}-\eq{1.3b} in the following form:
\newline the Maxwell equation \eq{1.1c}
\begin{eqnarray}
\hspace{-1cm}k(k^{-1}\sqrt{h}\kappa e^{a\varphi} \Phi_{,k})_{,k}+(\kappa^{-1}\sqrt{h} e^{a\varphi}\Phi^{,A})_{,A}=0\,, \n{1.5ce}\end{eqnarray}
the Klein-Gordon equation \eq{1.1d}
\begin{eqnarray}
\hspace{-1cm}(k\kappa \sqrt{h}\varphi_{,k})_{,k}+(k\kappa^{-1}\sqrt{h}h^{AB}\varphi_{,A})_{,B}=-\frac{a}{2}k^{-1}\kappa^{-1}\sqrt{h}e^{a\varphi}(\kappa^2\Phi_{,k}^2+\Phi_{,C}\Phi^{,C})-2b\Lambda k\kappa^{-1}\sqrt{h}e^{-b\varphi}\,,\n{1.5cf}
\end{eqnarray}
and the Einstein equation \eq{1.3b}
\begin{eqnarray}
&&\hspace{3cm}k\kappa(\kappa_{,k}+{\cal S})=\frac{d-3}{2(d-2)}\,e^{a\varphi}(\kappa^2\Phi_{,k}^2+\Phi_{,A}\Phi^{,A})-\frac{2\Lambda k^2}{d-2}e^{-b\varphi}\,,\n{1.5ca}\\
&&\hspace{-1cm}-\kappa^{-1}({\cal S}_{,k}+(\kappa^{-1})_{;A}^{~;A}+\kappa^{-1}{\cal S}_{A}^{~B}{\cal S}_{B}^{~A}+k^{-1}\kappa_{,k})=\frac{1}{2}\,\varphi_{,k}\varphi_{,k}-\frac{k^{-2}\kappa^{-2}}{2(d-2)}e^{a\varphi}\left[(d-3)\kappa^2\Phi_{,k}^2-\Phi_{,A}\Phi^{,A}\right]+\frac{2\Lambda\kappa^{-2}}{d-2}\,e^{-b\varphi},\n{1.5cb}\\
&&\hspace{3cm}-\kappa^{-1}({\cal S}_{,A}-{\cal S}_{A~;B}^{~B}+k^{-1}\kappa_{,A})=\frac{1}{2}\,\varphi_{,A}\varphi_{,k}-\frac{1}{2}k^{-2}e^{a\varphi}\Phi_{,k}\Phi_{,A}\,,\n{1.5cc}\\
&&\hspace{-1cm}{\cal R}_{AB}-{\cal S}{\cal S}_{AB}-\kappa(\kappa^{-1})_{;AB}-\kappa h_{AC}{\cal S}_{B,k}^{~C}-k^{-1}\kappa\,{\cal S}_{AB}=\frac{1}{2}\,\varphi_{,A}\varphi_{,B}-\frac{1}{2}k^{-2}e^{a\varphi}\Phi_{,A}\Phi_{,B}\nonumber\\
&&\hspace{7cm}+\frac{k^{-2}}{2(d-2)}e^{a\varphi}(\kappa^2\Phi_{,k}^2+\Phi_{,C}\Phi^{,C})h_{AB}+\frac{2\Lambda}{d-2}e^{-b\varphi} h_{AB}\,.\n{1.5cd}
\end{eqnarray}
Here $h\equiv det(h_{AB})$.
Taking the trace of Eq. \eq{1.5cd} and using Eqs. \eq{1.5ca} and \eq{1.5cb} we derive
\begin{eqnarray}
{\cal R}-{\cal S}^2+{\cal S}_A^{~B}{\cal S}_B^{~A}+2k^{-1}\kappa \kappa_{,k}&=&\frac{1}{2}(\varphi_{,A}\varphi^{,A}-\kappa^2\hspace{0.1cm}\varphi_{,k}\varphi_{,k})+\frac{k^{-2}}{2(d-2)}e^{a\varphi}\left[(3d-8)\kappa^2\Phi_{,k}^2+(d-4)\Phi_{,A}\Phi^{,A}\right]\nonumber\\
&+&\frac{2d-8}{d-2}\Lambda e^{-b\varphi}\,.\n{6}
\end{eqnarray}
\end{widetext}
In Sec. V we construct approximate solutions to these equations in the vicinity of space-time Killing horizon ${\cal H}$.
\section{Space-time Curvature invariants}
In this section we present expressions of the space-time curvature invariants corresponding to the static space-time \eq{A4.IV} such as the Ricci scalar $R\equiv g^{\alpha\beta}R_{\alpha\beta}$, the square of the Ricci tensor $R_{\alpha\beta}R^{\alpha\beta}\,$, and the Kretschmann scalar ${\cal K}\equiv R_{\alpha\beta\gamma\delta}\hspace{0.09cm}R^{\alpha\beta\gamma\delta}$, in terms of geometric quantities of a $(d-2)$-dimensional surface $\Sigma_k$.
Using Eqs. \eq{5.a}-\eq{5.d} we derive the $d$-dimensional Ricci scalar
\begin{eqnarray}\n{Ri}
R&=&-2\left(k^{-1}\kappa\kappa_{,k}+k^{-1}\kappa{\cal S}+\kappa {\cal S}_{,k}+\kappa (\kappa^{-1})_{;A}^{~;A}\right)\nonumber\\
&-&{\cal S}_{A}^{~B}{\cal S}_B^{~A}+{\cal R}-{\cal S}^2\,,
\end{eqnarray}
where ${\cal R}={\cal R}_A^{~A}$ is the Ricci scalar defined on $\Sigma_k$.
The square of the $d$-dimensional Ricci tensor reads
\begin{eqnarray}
&&\hspace{-0.3cm}R_{\alpha\beta}R^{\alpha\beta}=k^{-2}\kappa^2(\kappa_{,k}+{\cal S})^2\nonumber\\
&&\hspace{-0.39cm}+\,\kappa^2\left({\cal S}_{,k}+(\kappa^{-1})_{;A}^{~;A}
+\kappa^{-1}{\cal S}_A^{~C}{\cal S}_C^{~A}+k^{-1}\kappa_{,k}\right)^2\nonumber\\
&&\hspace{-0.39cm}+\,2\left({\cal S}_{,A}-{\cal S}_{A~;B}^{~B}+k^{-1}\kappa_{,A}\right)\left({\cal S}^{,A}-{\cal S}^{AC}_{~;C}+k^{-1}\kappa^{,A}\right)\nonumber\\
&&\hspace{-0.39cm}+\left[{\cal R}_{AB}-{\cal S}{\cal S}_{AB}-\kappa(\kappa^{-1})_{;AB}-\kappa h_{AC}{\cal S}_{B~,k}^{~C}-k^{-1}\kappa {\cal S}_{AB}\right]\nonumber\\
&&\hspace{-0.39cm}\times\left[{\cal R}^{AB}-{\cal S}{\cal S}^{AB}-\kappa(\kappa^{-1})^{;AB}-\kappa h^{BD}{\cal S}_{D~,k}^{~A}-k^{-1}\kappa {\cal S}^{AB}\right].\nonumber\\\n{RT}
\end{eqnarray}
To calculate the Kretschmann scalar we need to derive the $d$-dimensional Riemann tensor components. Using Eqs. \eq{CC1a}-\eq{S4t} for the static space-time \eq{A1.IV} we derive
\begin{eqnarray}\n{A16.IV}
R_{attb}=-kk_{|ab} \,,\hspace{0.3cm} R_{tabc}=0\,,\hspace{0.3cm}
R_{abcd}={\bar R}_{abcd}\,.
\end{eqnarray}
Then the Kretschmann scalar of the static space-time \eq{A4.IV} can be presented in the following form:
\begin{eqnarray}\n{A17.IV}
\hspace{-0.5cm}{\cal K}&\equiv&R_{\alpha\beta\gamma\delta}\hspace{0.09cm}\hspace{-0.09cm}R^{\alpha\beta\gamma\delta}=4k^{-2}k_{|ab}k^{|ab}+4R_{kABC}R^{kABC}\nonumber\\
&+&4R_{AkkB}R^{AkkB}+R_{ABCD}R^{ABCD}\,.
\end{eqnarray}
For $d\geqslant 5$ the $(d-2)$-dimensional Riemann tensor components ${\cal R}_{ABCD}$ corresponding to the metric $h_{AB}$ can be presented as follows (see, e.g., \cite{Ch}, p. 32):
\begin{eqnarray}\n{A19.IV}
{\cal R}_{ABCD}&=&{\cal C}_{ABCD}+\frac{1}{(d-4)}(h_{AC}{\cal R}_{BD}\nonumber\\
&+&h_{BD}{\cal R}_{AC}-h_{AD}{\cal R}_{BC}-h_{BC}{\cal R}_{AD})\nonumber\\
&-&\frac{1}{(d-3)(d-4)}\hspace{0.1cm}{\cal R}\left(h_{AC}h_{BD}-h_{AD}h_{BC}\right)\,,\nonumber\\
\end{eqnarray}
where ${\cal C}_{ABCD}$ is the Weyl tensor defined on $\Sigma_k$. For $d=5$ the Weyl tensor ${\cal C}_{ABCD}$ vanishes identically. This expression implies
\begin{eqnarray}\n{A20.IV}
{\cal R}_{ABCD}{\cal R}^{ABCD}&=&{\cal C}_{ABCD}{\cal C}^{ABCD}+\frac{4}{d-4}{\cal R}_{AB}{\cal R}^{AB}\nonumber\\
&-&\frac{2}{(d-3)(d-4)}\hspace{0.1cm}{\cal R}^2\,.
\end{eqnarray}
For $d=4$ the 2-dimensional Riemann tensor components corresponding to the metric $h_{AB}$ have the following form:
\begin{equation}
{\cal R}_{ABCD}=\frac{1}{2}(h_{AC}h_{BD}-h_{AD}h_{BC}){\cal R}\,.\n{Rin4}
\end{equation}
This expression implies
\begin{eqnarray}
{\cal R}_{ABCD}{\cal R}^{ABCD}={\cal R}^2\,,\hspace{0.3cm}{\cal R}_{AB}=\frac{1}{2}h_{AB}{\cal R}\,.\n{Eq4d}
\end{eqnarray}
Using the expressions \eq{A17.IV}-\eq{Eq4d} we derive the Kretschmann scalar for the static $d$-dimensional ($d\geqslant 5$) space-time \eq{A4.IV}
\begin{eqnarray}\n{A21.IV}
{\cal K}&=&4k^{-2}\kappa^2\left(\kappa_{,k}^2+2\kappa^{-2}\kappa_{,A}\kappa^{,A}+{\cal S}_{AB}{\cal S}^{AB}\right)\nonumber\\
&+&4\kappa^2\left[h_{AC}h^{BD}{\cal S}_{B~,k}^{~C}{\cal S}_{D~,k}^{~A}
+2\kappa^{-1}{\cal S}_{AC}{\cal S}^{AB}{\cal S}_{B~,k}^{~C}\right.\nonumber\\
&+&2\kappa^{-1}(\kappa^{-1})_{;AB}{\cal S}^{A}_{~C}{\cal S}^{BC}
+2(\kappa^{-1})_{;A}^{~;B}{\cal S}_{B~,k}^{~A}\nonumber\\
&+&\left(\kappa^{-1})_{;AB}(\kappa^{-1})^{;AB}\right]
+8{\cal S}^{AB;C}({\cal S}_{AB;C}-{\cal S}_{AC;B})\nonumber\\
&+&{\cal C}_{ABCD}{\cal C}^{ABCD}+2{\cal C}_{ABCD}{\cal S}^{AD}{\cal S}^{BC}\nonumber\\
&+&2{\cal C}_{ABCD}{\cal S}^{AC}{\cal S}^{BD}+\frac{4}{d-4}\left({\cal R}_{AB}{\cal R}^{AB}\right.\nonumber\\
&-&\left.2{\cal S}{\cal S}^{AB}{\cal R}_{AB}+2{\cal R}_{AC}{\cal S}_B^{~C}{\cal S}^{AB}\right)\nonumber\\
&-&\frac{2}{(d-3)(d-4)}{\cal R}\left({\cal R}-2{\cal S}^2+2{\cal S}_{AB}{\cal S}^{AB}\right)\nonumber\\
&+&2{\cal S}_{AB}{\cal S}^{AB}{\cal S}_{CD}{\cal S}^{CD}+2{\cal S}_{AC}{\cal S}^{BC}{\cal S}_{BD}{\cal S}^{AD}\,,
\end{eqnarray}
and for the static $4$-dimensional space-time \eq{A4.IV}
\begin{eqnarray}
{\cal K}&=&4k^{-2}\kappa^2\left(\kappa_{,k}^2+2\kappa^{-2}\kappa_{,A}\kappa^{,A}+{\cal S}_{AB}{\cal S}^{AB}\right)\nonumber\\
&+&4\kappa^2\left[h_{AC}h^{BD}{\cal S}_{B~,k}^{~C}{\cal S}_{D~,k}^{~A}\right.
+2\kappa^{-1}{\cal S}_{AC}{\cal S}^{AB}{\cal S}_{B~,k}^{~C}\nonumber\\
&+&2\kappa^{-1}(\kappa^{-1})_{;AB}{\cal S}^{A}_{~C}{\cal S}^{BC}
+2(\kappa^{-1})_{;A}^{~;B}{\cal S}_{B~,k}^{~A}\nonumber\\
&+&\left.(\kappa^{-1})_{;AB}(\kappa^{-1})^{;AB}\right]
+8{\cal S}^{AB;C}({\cal S}_{AB;C}-{\cal S}_{AC;B})\nonumber\\
&+&{\cal R}^2+2{\cal R}\left({\cal S}_{AB}{\cal S}^{AB}-{\cal S}^2\right)+2{\cal S}_{AB}{\cal S}^{AB}{\cal S}_{CD}{\cal S}^{CD}\nonumber\\
&+&2{\cal S}_{AC}{\cal S}^{BC}{\cal S}_{BD}{\cal S}^{AD}\,.\n{A21.IVb}
\end{eqnarray}
The expressions \eq{Ri}, \eq{RT} and \eq{A21.IV}, \eq{A21.IVb} define the space-time curvature invariants everywhere in the space-time \eq{A4.IV}. In the next section we calculate the space-time curvature invariants on its Killing horizon ${\cal H}$.
\section{Geometry Near the Horizon}
In this section we derive asymptotic behavior of the metric \eq{A4.IV}, its related geometric quantities, the electrostatic potential $\Phi$, and the dilaton field $\varphi$ near the Killing horizon ${\cal H}$ ($k=0$).
Equations \eq{1.4b}, and \eq{1.5ce}-\eq{6}, together with the definition \eq{A9.IV} provide a complete system of equations for determining such an asymptotic behavior. Assuming that the metric functions $\kappa$ and $h_{AB}$, and the fields $\Phi$ and $\varphi$ are real analytic functions of $k$ in the vicinity of ${\cal H}$, we construct the following series expansions:
\begin{eqnarray}\n{A22.IV}
&&\hspace{1.5cm}{\cal A}=\sum_{n\geqslant0}{\cal A}^{[n]}k^{n}\,,\nonumber\\
&&{\cal A}=\{\kappa, h_{AB},{\cal S}_{AB},{\cal S},{\cal R}_{AB},{\cal R},\Phi,F^2,\varphi\}\,,
\end{eqnarray}
which converge in the vicinity of ${\cal H}$. Here the first term ${\cal A}^{[0]}$ corresponds to the value of ${\cal A}$ calculated on the horizon, i.e. ${\cal A}^{[0]}\equiv{\cal A}\rvert_{\cal H}$. We have two types of quantities, $d$-dimensional and $(d-2)$-dimensional which are defined on $\Sigma_k$ surfaces. If ${\cal A}$ is a $d$-dimensional quantity, e.g. $F^2$ or ${\cal K}$, then ${\cal A}\rvert_{\cal H}$ is its value on the $(d-1)$-dimensional horizon ${\cal H}$ defined by $k=0$; if ${\cal A}$ is a $(d-2)$-dimensional quantity, e.g. ${\cal S}_{AB}$ or ${\cal R}_{AB}$, then ${\cal A}\rvert_{\cal H}$ is its value on the $(d-2)$-dimensional horizon surface $\Sigma_k$ defined by $t=const$, $k=0$.
A substitution of the expansions \eq{A22.IV} into the EMdL equations \eq{1.5ce}-\eq{6} gives equations for the expansion coefficients ${\cal A}^{[n]}$. The lowest order terms corresponding to $k^{-2}$ and $k^{-1}$ give
\begin{eqnarray}
&&\kappa^{[0]}\Phi^{[1]}\sqrt{h}^{[0]}e^{a\varphi^{[0]}}=0\,,\n{A}\\
&&\kappa^{[0]}=const\,,\hspace{0.3cm} \Phi^{[0]}=const\,,\n{B}\\
&&\kappa^{[0]}\kappa^{[1]}=0\,,\hspace{0.3cm} \kappa^{[0]}{\cal S}_{AB}^{[0]}=0\,.\n{C}
\end{eqnarray}
Equation \eq{B} means that the horizon surface gravity and the electrostatic potential are constant on the horizon. Assuming that $\kappa^{[0]}\neq0$ and $\sqrt{h}^{[0]}\neq0$, equations \eq{A} and \eq{C} imply
\begin{equation}
\Phi^{[1]}=0\,,\hspace{0.3cm} \kappa^{[1]}=0\,,\hspace{0.3cm} {\cal S}_{AB}^{[0]}=0\,.\n{D}
\end{equation}
According to the definition \eq{A9.IV}, we have
\begin{eqnarray}
\hspace{-0.6cm}{\cal S}_{AB}&=&{\cal S}_{AB}^{[0]}+{\cal S}_{AB}^{[1]}k+...\nonumber\\
&=&\frac{1}{2}\kappa^{[0]}h_{AB}^{[1]}+\frac{1}{2}(\kappa^{[1]}h_{AB}^{[1]}+2\kappa^{[0]}h_{AB}^{[2]})k+...~.\n{E}
\end{eqnarray}
Thus, equations \eq{D} and \eq{E} imply
\begin{equation}
h_{AB}^{[1]}=0\,.\n{F}
\end{equation}
According to Eq. \eq{E} for $\kappa^{[0]}=0$ we again have ${\cal S}_{AB}^{[0]}=0$. Thus, in both cases $\kappa^{[0]}=const\neq 0$ and $\kappa^{[0]}=0$ the horizon surface is a totally geodesic surface.
The case $\kappa^{[0]}=const\neq0$ corresponds to a non-degenerate Killing horizon, while the case $\kappa^{[0]}=0$ corresponds to a degenerate (extremal) Killing horizon. In what follows, we shall consider these cases separately.
\subsection{Non-degenerate Horizon}
The higher order expansion coefficients ${\cal A}^{[n]}$ corresponding to $\kappa^{[0]}=const\neq0$ can be derived by plugging the expansions \eq{A22.IV} into the EMdL equations \eq{1.5ce}-\eq{6} and by using Eqs. \eq{B}, \eq{D}-\eq{F}, together with Eq. \eq{1.4b}. We derive the following first and second order expansion coefficients:
\begin{widetext}
\begin{eqnarray}
&&\kappa^{[2]}=(4\kappa^{[0]})^{-1}\left(\frac{1}{2}\varphi_{~,A}^{[0]}\varphi^{[0],A}-{\cal R}^{[0]}-\frac{3d-8}{4(d-2)} e^{a\varphi^{[0]}}{F}^{2[0]}\right.
\left.+\frac{2d-8}{d-2}\Lambda e^{-b \varphi^{[0]}}\right) \,,\n{8.c}\\
&&h_{AB}^{[2]}=-(\sqrt{2}\kappa^{[0]})^{-2}\left(\frac{1}{2}\varphi_{~,A}^{[0]}\varphi_{~,B}^{[0]}-{\cal R}_{AB}^{[0]}-\frac{e^{a\varphi^{[0]}}{F}^{2[0]}}{4(d-2)}h_{AB}^{[0]}+\frac{2\Lambda e^{-b\varphi^{[0]}}}{d-2}h_{AB}^{[0]}\right)\,,\n{8.db}\\
&&{\cal S}_{AB}^{[2]}=0
\,,\hspace{0.3cm}~~~\Phi^{[2]}=\pm\frac{\sqrt{-{F}^{2[0]}}}{2\sqrt{2}\kappa^{[0]}}\,,\n{8.a}\\
&&\varphi^{[1]}=0\,,\hspace{0.3cm}\varphi^{[2]}=-(\kappa^{[0]})^{-2}\left(\frac{1}{4}\varphi^{[0]~;A}_{~~;A}-\frac{a}{16}e^{a\varphi^{[0]}}{F}^{2[0]}+b\Lambda e^{-b\varphi^{[0]}}\right).\n{8.df}
\end{eqnarray}
\end{widetext}
As we said in the Introduction, we shall consider only secondary scalar hair induced by the electrostatic field and Liouville potential. According to the expression \eq{8.df}, we have to put
\begin{equation}
\varphi^{[0]}=const\,. \n{h62}
\end{equation}
Hence, all the terms in the expressions above which involve derivatives of $\varphi^{[0]}$ vanish. The condition \eq{h62} is a necessary condition which excludes a non-zero scalar kinetic term in the action \eq{Action2} for vanishing electrostatic field and Liouville potential.
The derived expansion coefficients define the metric and the fields near non-degenerate horizon of a general static $d$-dimensional analytic solution to the EMdL equations. Given the coupling constants $a$, $b$, and $\Lambda$, such a solution is defined in terms of the three constants $\kappa^{[0]}$, $\varphi^{[0]}$, and $\Phi^{[0]}$, and $1+(d-1)(d-2)/2$ functions ${F}^{2[0]}$ and $h_{AB}^{[0]}$.
Let us now calculate the space-time curvature invariants \eq{Ri}, \eq{RT}, and \eq{A21.IV}, \eq{A21.IVb} on a non-degenerate horizon using the expansion coefficients derived above.
Substituting the expansions \eq{A22.IV} into the expressions \eq{Ri} and \eq{RT} and taking the terms corresponding to $k^0$ we derive the Ricci scalar and the square of the Ricci tensor calculated on the horizon ${\cal H}$
\begin{eqnarray}
&&\hspace{-1cm}{R}|_{\cal H}=\frac{d-4}{4(d-2)}e^{a\varphi^{[0]}}F^{2[0]}+\frac{2d\Lambda}{d-2} e^{-b\varphi^{[0]}},\n{Riex}\\
&&\hspace{-1cm}{ R}_{\alpha\beta}\hspace{0.01cm}{R}^{\alpha\beta}|_{\cal H}=
\frac{(2d^2-11d+16)}{16(d-2)^2}e^{2a\varphi^{[0]}}(F^{2[0]})^2\nonumber\\
&&\hspace{-0.5cm}+\frac{(d-4)\Lambda}{(d-2)^2}e^{(a-b)\varphi^{[0]}}F^{2[0]}+\frac{4d\Lambda^2}{(d-2)^2}e^{-2b\varphi^{[0]}}.\n{RTex}
\end{eqnarray}
Note that ${R}|_{\cal H}$ is negative for $\Lambda<0$.
To calculate the Kretschmann scalar ${\cal K}$ on the horizon we substitute the expansions \eq{A22.IV} into the expressions \eq{A21.IV} and \eq{A21.IVb}, and take the terms corresponding to $k^0$. For a $d$-dimensional $(d\geqslant 5$) static space-time we derive
\begin{eqnarray}\n{A23.IV}
{\cal K}\rvert_{{\cal H}}&=&{\cal C}_{ABCD}^{[0]}{\cal C}^{ABCD[0]}+\frac{2(d-2)}{(d-4)}{\cal R}^{[0]}_{AB}{\cal R}^{AB[0]}\nonumber\\
&+&\frac{(d-2)(d-5)}{(d-4)(d-3)}({\cal R}^{[0]})^2\nonumber\\
&+&\left(\frac{3}{2} F^{2[0]}e^{a\varphi^{[0]}}-4\Lambda e^{-b\varphi^{[0]}}\right){\cal R}^{[0]}\nonumber\\
&+&\frac{(9d^2-46d+60)}{16(d-2)^2}e^{2a\varphi^{[0]}}(F^{2[0]})^2\nonumber\\
&-&\frac{(3d^2-18d+28)}{(d-2)^2}\Lambda e^{(a-b)\varphi^{[0]}}F^{2[0]}\nonumber\\
&+&\frac{4(d^2-6d+12)}{(d-2)^2}\Lambda^2 e^{-2b\varphi^{[0]}},
\end{eqnarray}
where ${\cal C}_{ABCD}^{[0]}$ is the Weyl tensor corresponding to the horizon surface. For $d=5$ we have ${\cal C}_{ABCD}=0$. The Kretschmann scalar calculated on the horizon of a 4-dimensional static space-time is
\begin{eqnarray}
{\cal K}\rvert_{{\cal H}}&=&3({\cal R}^{[0]})^2+\left(\frac{3}{2}F^{2[0]}e^{a\varphi^{[0]}}-4\Lambda e^{-b\varphi^{[0]}}\right){\cal R}^{[0]}\nonumber\\
&+&\frac{5}{16}e^{2a\varphi^{[0]}}(F^{2[0]})^2-F^{2[0]}\Lambda e^{(a-b)\varphi^{[0]}}\nonumber\\
&+&4\Lambda^2 e^{-2b\varphi^{[0]}}.\n{A23.IVb}
\end{eqnarray}
The expressions \eq{A23.IV} and \eq{A23.IVb} are generalizations of the known expressions \eq{A24.IV} and \eq{A24.IVb}. According to the expressions \eq{Riex}-\eq{A23.IVb}, if the fields $F^2$ and $\varphi$ are finite on a non-degenerate horizon and if the horizon surface is regular, i.e. ${\cal R}^{[0]}$, ${\cal R}^{[0]}_{AB}{\cal R}^{AB[0]}$, and ${\cal C}_{ABCD}^{[0]}{\cal C}^{ABCD[0]}$ are finite, then the horizon is regular.
Let us mention a few examples of static solutions of the EMdL model which have non-degenerate Killing horizon. Exact black hole solutions to the EMdL equations with non-degenerate horizon can be found in \cite{7} and \cite{Ortin} (see p. 350). Spherically symmetric solutions representing black holes and black strings without a Liouville potential were derived in \cite{Jutta1, Jutta2}. Distorted, axisymmetric, charged, 4-dimensional dilaton black holes were constructed in \cite{10}. The $d$-dimensional Reissner-Nordstr\"om solution with a cosmological constant was derived in \cite{13}. Distorted, static, axisymmetric, four and $5$-dimensional vacuum and $4$-dimensional electrovacuum black holes were studied in e.g. \cite{FS1, Geroch, ASP} and \cite{Bre2, Step, AFS}, respectively.
Note that the $d$-dimensional Rindler space-time corresponds to $\kappa=const$, $\varphi=\Phi=\Lambda=0$, and the flat Riemannian metric $h_{AB}$.
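As a quick check, for the Rindler space-time one has $F^{2[0]}=\Lambda=0$ and ${\cal R}^{[0]}={\cal R}^{[0]}_{AB}={\cal C}^{[0]}_{ABCD}=0$, so all the invariants \eq{Riex}-\eq{A23.IVb} vanish on the horizon, as they must for a flat space-time.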
\subsection{Extremal Horizon}
Let us now consider the extremal case $\kappa^{[0]}=0$. Plugging the expansions \eq{A22.IV} into the EMdL equations \eq{1.5ce}-\eq{6} we derive
\begin{eqnarray}
\kappa^{[1]}=const\,,\hspace{0.3cm} \Phi^{[1]}=const\,.\n{kconst}
\end{eqnarray}
The constant values of $\kappa^{[1]}$ and $\Phi^{[1]}$ are calculated in terms of the coupling constants $a$, $b$, and $\Lambda$, the electrostatic field potential $\Phi^{[0]}$, and the dilaton field $\varphi^{[0]}$ from the following equations:
\begin{eqnarray}
&&\hspace{-0.33cm}4\Lambda e^{-b\varphi^{[0]}}+(\kappa^{[1]})^2\left(2(d-2)-(d-3)(\Phi^{[1]})^2e^{a\varphi^{[0]}}\right)=0,\nonumber\\\n{68a}\\
&&\hspace{-0.33cm}2\varphi^{[0]~;A}_{~~;A}+a(\kappa^{[1]})^2(\Phi^{[1]})^2e^{a\varphi^{[0]}}+4b\Lambda e^{-b\varphi^{[0]}}=0\,.\n{68b}
\end{eqnarray}
The first equation implies that for general values of the coupling constants and $\kappa^{[1]}$, $\Phi^{[1]}$ we have
\begin{equation}
\varphi^{[0]}=const\,. \n{h70}
\end{equation}
Thus, in Eq. \eq{68b} the first term vanishes. Note that in the extremal case the condition \eq{h70} follows directly from the EMdL equations, while in the case of a non-degenerate horizon the same condition \eq{h62} is imposed by hand, in accordance with the secondary hair condition.
The Ricci tensor of the horizon surface is defined by the following expression:
\begin{eqnarray}
{\cal R}_{AB}^{[0]}&=&\frac{1}{2(d-2)}\left(4\Lambda e^{-b\varphi^{[0]}}+(\kappa^{[1]})^2(\Phi^{[1]})^2e^{a\varphi^{[0]}}\right)h_{AB}^{[0]}.\nonumber\\
\n{68c}
\end{eqnarray}
According to this expression, the extremal horizon surface is an Einstein space. It was shown in \cite{Tod} that spatially compact extremal horizons in $d$-dimensional ($d\geqslant 4$) adS space-times are compact Einstein spaces of negative curvature.
The electromagnetic field invariant \eq{1.4b} calculated on the extremal horizon is given by
\begin{equation}
F^{2[0]}=-2(\kappa^{[1]})^2(\Phi^{[1]})^2.\n{68d}
\end{equation}
According to the expressions \eq{68c} and \eq{68d}, for
\begin{equation}
\Lambda<\frac{1}{8}F^{2[0]} e^{(a+b)\varphi^{[0]}}\,
\end{equation}
the extremal horizon surface has negative curvature.
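Indeed, using Eq. \eq{68d} to eliminate $(\kappa^{[1]})^{2}(\Phi^{[1]})^{2}$ in Eq. \eq{68c} gives
\[
{\cal R}_{AB}^{[0]}=\frac{1}{2(d-2)}\left(4\Lambda e^{-b\varphi^{[0]}}
-\frac{1}{2}\,e^{a\varphi^{[0]}}F^{2[0]}\right)h_{AB}^{[0]}\,,
\]
and the expression in the brackets is negative precisely when the above inequality is satisfied.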
Solving Eqs. \eq{68a} and \eq{68b} for $\kappa^{[1]}$ and $\Phi^{[1]}$, we can derive the Ricci tensor \eq{68c}, the electromagnetic field invariant \eq{68d}, and the space-time curvature invariants $R$, $R_{\alpha\beta}R^{\alpha\beta}$, and ${\cal K}$ calculated on the extremal horizon.
According to Eqs. \eq{68a}-\eq{68d}, the metric and the fields depend on values of the coupling constants $a$, $b$, and $\Lambda$. There are six cases, which we shall consider separately.
\subsubsection{Case $a\neq0$, $b\neq0$, $\Lambda\neq0$ }
In this case we have
\begin{eqnarray}\n{74AB}
&&\hspace{-0.25cm}(\kappa^{[1]})^2=-2\Lambda e^{-b\varphi^{[0]}}\frac{[a+b(d-3)]}{a(d-2)}\,,\nonumber\\
&&\hspace{-0.25cm} (\Phi^{[1]})^2=\frac{2b(d-2)e^{-a\varphi^{[0]}}}{a+b(d-3)}\,,\hspace{0.3cm} F^{2[0]}=\frac{8b\Lambda}{a}e^{-(a+b)\varphi^{[0]}},\nonumber\\
&&\hspace{-0.25cm}{\cal R}_{AB}^{[0]}=\frac{2\Lambda (a-b)}{a(d-2)}e^{-b\varphi^{[0]}}h_{AB}^{[0]},~
{\cal R}^{[0]}=\frac{2\Lambda }{a}(a-b)e^{-b\varphi^{[0]}}.\nonumber\\
\end{eqnarray}
A solution with real values of $\kappa^{[1]}$ and $\Phi^{[1]}$ exists for $\Lambda>0$, when $a>0$ and $b<-a/(d-3)$, or $a<0$ and $b>-a/(d-3)$, and for $\Lambda<0$, when $a>0$ and $b>0$, or $a<0$ and $b<0$.
In the case $a=b\neq0$ the extremal horizon surface is a Ricci-flat Riemannian space. Some examples of {\em Case 1} are discussed in \cite{16} and, for 4-dimensional space-times, in \cite{18} and \cite{17}.
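As a simple illustration of this case, take $d=4$ and $a=b\neq0$. Then Eqs. \eq{74AB} reduce to
\[
(\kappa^{[1]})^{2}=-2\Lambda e^{-a\varphi^{[0]}}\,,\qquad
(\Phi^{[1]})^{2}=2 e^{-a\varphi^{[0]}}\,,\qquad
F^{2[0]}=8\Lambda e^{-2a\varphi^{[0]}}\,,
\]
so a real solution requires $\Lambda<0$, in agreement with the conditions listed above; the horizon surface is Ricci flat and $F^{2[0]}<0$, as appropriate for an electrostatic field.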
Substituting the expansions \eq{A22.IV} into the expressions \eq{Ri}, \eq{RT}, and \eq{A21.IV}, \eq{A21.IVb} and taking the terms corresponding to $k^0$ we derive the following space-time curvature invariants calculated on the extremal horizon: the Ricci scalar
\begin{eqnarray}
&&R|_{{\cal H}}=\frac{2\Lambda e^{-b\varphi^{[0]}}}{a(d-2)}[ad+b(d-4)]\,,
\end{eqnarray}
the square of the Ricci tensor
\begin{eqnarray}
&&\hspace{-1.2cm}R_{\alpha\beta}R^{\alpha\beta}|_{{\cal H}}=\frac{4\Lambda^2 e^{-2b\varphi^{[0]}}}{a^2(d-2)^2}\nonumber\\
&&\hspace{-0.5cm}\times[a^2d+2ab(d-4)+b^2(2d^2-11d+16)]\,,
\end{eqnarray}
the Kretschmann scalar ($d\geqslant 5$)
\begin{eqnarray}
{\cal K}|_{\cal H}&=&\frac{8\Lambda^2e^{-2b\varphi^{[0]}}}{a^2(d-2)}\left(\frac{2[a+b(d-3)]^2}{d-2}+\frac{(a-b)^2}{d-3}\right)\nonumber\\
&+&{\cal C}_{ABCD}^{[0]}{\cal C}^{ABCD[0]}\,,
\end{eqnarray}
where for $d=5$ we have ${\cal C}_{ABCD}=0$, and the Kretschmann scalar ($d=4$)
\begin{eqnarray}
{\cal K}|_{\cal H}&=&\frac{8\Lambda^2(a^2+b^2)}{a^2}e^{-2b\varphi^{[0]}}.
\end{eqnarray}
\subsubsection{Case $a\neq 0$, $b=0$, $\Lambda\neq 0$}
This case follows from the previous case by setting $b=0$. According to the expression \eq{74AB} for $\kappa^{[1]}$,
a real solution exists only for $\Lambda<0$. The electromagnetic field invariant vanishes on the horizon $F^{2[0]}=0$. Note also that in this case the space-time curvature invariants: the Ricci scalar, the square of the Ricci tensor, and the Kretschmann scalar calculated on the extremal horizon do not depend on the dilaton coupling constant $a$.
\subsubsection{Case $a\neq 0$, $\Lambda=0$}
According to Eqs. \eq{68a} and \eq{68b}, in this case we have $\kappa^{[1]}=0$. Note that in contrast to the general {\em Case 1}, in this particular case we have to set by hand $\varphi^{[0]}=const$ in Eq. \eq{68b}, in accordance with the secondary scalar hair condition. Calculating the higher order terms in the expansions \eq{A22.IV} we derive
\begin{equation}
\kappa^{[2]}=\kappa^{[3]}=...=0\,,\hspace{0.3cm} \Phi^{[2]}=const\,,\hspace{0.3cm} \Phi^{[3]}=const,...~.
\end{equation}
Here the constant values of $\Phi^{[n]}$, $n\geqslant 1$ are arbitrary and not related to each other. Thus, the electrostatic potential $\Phi$ is not defined and $\kappa\equiv0$. This implies that there are no extremal horizons corresponding to this case within the analytic expansions \eq{A22.IV}. Known static and spherically symmetric solutions corresponding to this case have a singular extremal horizon, see e.g. \cite{7} and \cite{Ortin} (p. 350).
\subsubsection{Case $a=b=0$, $\Lambda\neq0$}
In this case the secondary scalar hair condition implies that Eq. \eq{68b} is an identity. Using Eqs. \eq{68a} and \eq{68d} we can express $\kappa^{[1]}$ and $\Phi^{[1]}$ in terms of $F^{2[0]}=const$ as follows:
\begin{eqnarray}
&&(\kappa^{[1]})^2=-\frac{8\Lambda+(d-3)F^{2[0]}}{4(d-2)}\,,\\
&&(\Phi^{[1]})^2=\frac{2(d-2)F^{2[0]}}{8\Lambda+(d-3)F^{2[0]}}\,.
\end{eqnarray}
Note that a solution with real values of $\kappa^{[1]}$ and $\Phi^{[1]}$ exists only for $8\Lambda+(d-3)F^{2[0]}<0$.
Substituting these expressions into Eq. \eq{68c} we derive
\begin{eqnarray}
\hspace{-0.3cm}{\cal R}_{AB}^{[0]}=\frac{8\Lambda-F^{2[0]}}{4(d-2)}h_{AB}^{[0]}\,,\hspace{0.3cm}
{\cal R}^{[0]}=2\Lambda-\frac{1}{4}F^{2[0]}\,.
\end{eqnarray}
Note that electrically charged or neutral (a)dS black holes belong to this case (see, e.g. \cite{13}).
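For instance, setting $\Lambda=0$ in the expressions above gives
\[
(\kappa^{[1]})^{2}=-\frac{(d-3)F^{2[0]}}{4(d-2)}\,,\qquad
(\Phi^{[1]})^{2}=\frac{2(d-2)}{d-3}\,,\qquad
{\cal R}^{[0]}=-\frac{1}{4}F^{2[0]}\,,
\]
so that for an electrostatic field ($F^{2[0]}<0$) the extremal horizon surface is an Einstein space of positive curvature, as for the round horizon sphere of the extremal Reissner--Nordstr\"om solution.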
The space-time curvature invariants calculated on the extremal horizon corresponding to this case are the following:
the Ricci scalar \eq{Ri}
\begin{eqnarray}
&&R|_{{\cal H}}=\frac{8\Lambda d+(d-4)F^{2[0]}}{4(d-2)}\,,
\end{eqnarray}
the square of the Ricci tensor \eq{RT}
\begin{eqnarray}
R_{\alpha\beta}R^{\alpha\beta}|_{{\cal H}}&=&\frac{1}{16(d-2)^2}\biggl[64d\Lambda^2+16\Lambda F^{2[0]}(d-4)\nonumber\\
&+&(F^{2[0]})^2(2d^2-11d+16)\biggr]\,,
\end{eqnarray}
the Kretschmann scalar \eq{A21.IV} ($d\geqslant 5$)
\begin{eqnarray}
{\cal K}|_{{\cal H}}&=&{\cal C}_{ABCD}^{[0]}{\cal C}^{ABCD[0]}+\frac{1}{8(d-2)^2(d-3)}\nonumber\\
&\times&\biggl[64\Lambda^2(3d-8)+16\Lambda F^{2[0]}(2d-5)(d-4)\nonumber\\
&+&(F^{2[0]})^2(2d^3-18d^2+55d-56)\biggr]\,,
\end{eqnarray}
where for $d=5$ we have ${\cal C}_{ABCD}=0$, and
the Kretschmann scalar \eq{A21.IVb} ($d=4$)
\begin{equation}
{\cal K}|_{\cal H}=8\Lambda^2+\frac{1}{8}(F^{2[0]})^2\,.
\end{equation}
\subsubsection{Case $a=0$, $\Lambda=0$}
This case follows from the previous case by setting $\Lambda=0$. The $d$-dimensional extremal Reissner-Nordstr\"om black hole belongs to this case (see, e.g. \cite{13}).
\subsubsection{Case $a=0$, $b\neq0$, $\Lambda\neq0$}
From Eqs. \eq{68b} and \eq{h70} we conclude that in this case no extremal horizons corresponding to the analytic expansions \eq{A22.IV} exist.
The conditions presented in the cases above are necessary conditions for the existence of static extremal horizons within the EMdL model. One should keep in mind that the space-time region exterior to such an extremal horizon may have singularities located beyond the radius of convergence of the expansions \eq{A22.IV}. In addition, one has to rule out points in the exterior region where $\kappa\equiv 0$, i.e. points where the gradient of the equipotential surfaces $\Sigma_k$ vanishes (see Eq. \eq{A5.IV}). At such points Israel's description becomes inapplicable. If the space-time is regular at such a point, one may consider another coordinate system in its vicinity. Moreover, the condition \eq{h70} may not by itself imply the secondary hair condition, and one may need to impose an additional condition to ensure that the expansion coefficients $\varphi^{[n]}$, $n\geqslant 1$, vanish identically for vanishing electrostatic field and Liouville potential. All these important issues apply to non-degenerate horizons as well. They require a detailed and involved analysis of the particular space-times at hand, which we shall not undertake here. The interested reader can find explicit examples of near-horizon geometries of extremal static and stationary horizons in, e.g., \cite{Ha1,Ha1a,Ha2,Ha3}.
\section{Summary}
Let us summarize our results. We studied the geometric properties of static non-degenerate and degenerate (extremal) Killing horizons of arbitrary geometry and topology within the EMdL model, under the assumption that the metric functions and fields are analytic in the vicinity of the horizons. Such horizons have zero extrinsic curvature and constant surface gravity, and the electrostatic potential is constant on them. The presence of a dilaton field corresponding to secondary scalar hair imposes an additional condition: the dilaton field should be constant on non-degenerate horizons. However, in the case of an extremal horizon this condition follows directly from the EMdL equations, except for {\em Case 3}, which corresponds to zero Liouville potential, and for {\em Case 4}, which corresponds to $a=b=0$.

We derived the relations between the space-time curvature invariants (the Ricci scalar, the square of the Ricci tensor, and the Kretschmann scalar) calculated on static EMdL horizons and the geometric quantities corresponding to the horizon surface. These relations generalize the analogous known relations for horizons of static four and 5-dimensional vacuum and 4-dimensional electrovacuum space-times (see Eqs. \eq{A24.IV} and \eq{A24.IVb}). We have shown that all static extremal horizon surfaces of the EMdL model which correspond to the analytic expansions \eq{A22.IV} of the metric functions and fields are Einstein spaces, and we presented the necessary conditions for the existence of static extremal horizons within the EMdL model. In the case $a=b\neq0$ and $\Lambda\neq0$ the surface of such an extremal horizon is Ricci flat. In contrast to a non-extremal horizon, in the general {\em Case 1} the electromagnetic field invariant $F^2$ calculated on the horizon is not an arbitrary constant; it is defined in terms of the coupling constants $a$, $b$, and $\Lambda$, and the dilaton field $\varphi$ calculated on the horizon. In the particular case $b=0$ ({\em Case 2}), which requires a negative cosmological constant $\Lambda<0$, the electromagnetic field invariant $F^2$ vanishes on the extremal horizon. It is interesting that in this case the space-time curvature invariants, i.e. the Ricci scalar, the square of the Ricci tensor, and the Kretschmann scalar, calculated on the extremal horizon do not depend on the dilaton coupling constant $a$. We also found that in the case of zero cosmological constant ({\em Case 3}) or vanishing dilaton coupling constant ({\em Case 6}) extremal horizons do not exist. Exact analytic solutions to the EMdL model support these necessary conditions.

We believe that the approach presented here can be used for studying other models which contain more fields. This approach allows one to construct the geometry near the horizon of a static space-time at hand. Once such a geometry is known, one can define the corresponding global space-time structure and analyze its properties. The geometric properties of the horizons presented here may be important for applications to holographic models, as well as for understanding the properties of space-time horizons in general.
\begin{acknowledgments}
The authors wish to thank Hari K. Kunduri for useful suggestions and for bringing our attention to the paper \cite{Tod}. One of the authors (A. A. S.) is grateful to the Natural Sciences and Engineering Research Council of Canada for the financial support.
\end{acknowledgments}
D-branes \cite{dlp,gr}
have emerged as important objects in string theory. It is thus
important to understand various properties of these objects.
Properties of bound states at threshold are important
tests for string duality conjectures \cite{wit1,sen}.
In this paper we explore, non-threshold BPS
\footnote{All these states should form an algebra, which
may elucidate some underlying structure of string theory \cite{hamo}.}
bound states between D-branes of dimension
$p$, $p+2$ and $ p+4$.
This is done by probing them with another D-brane.
In particular
we investigate the short distance structure of these bound states.
D-branes
can be used to probe sub-string distances
\footnote{See \cite{stpr} for a treatment of
strings scattered off D-branes.}.
There is
a correspondence between the infra-red world-volume
theory and the ``space-time'' description of the D-brane moving
in a background \cite{dbac}. At sub-string scales, one does not expect
to have a space-time description. In fact, one generally does not know
an appropriate description of the physics. However,
when D-branes are involved
one can readily see that the short distance physics is governed
by the light modes of the open-superstring \cite{dkps}
(or by the light modes of the closed string together with an infinite
tower of massive modes), so a space time description is not
available, but the physics is under control.
In cases where some of the supersymmetries are not
broken another description
comes in, namely that of a moduli space of a
supersymmetric world-volume theory.
If one-quarter of the supersymmetries
are unbroken, the metric on the moduli space is protected from
higher loop corrections.
In effect, a space-time description means that low-velocity particles
follow geodesics of some metric, but this is exactly what a moduli
space means, so even at sub-string scales there are situations
where the physics of two objects (or more) can be effectively
described by a space-time \cite{dkps}. However the crucial dependence
of this description on the probing agent means that it is not
a universal description, and thus might differ from our notion of
space-time.
Classically, the D-branes are singular at $r=0$. The
world-volume description of the region near the D-brane shows
that as another D-brane approaches the ``singularity'', the physics
is described by a transition
to another branch, and is not singular.
The string description of the bound states we are going to consider is given by open superstrings ending on the brane with some world-volume gauge fields turned on. As discussed in \cite{tow,cjp}, this endows a $p$-brane with the $RR$ charge of a $(p-2)$-brane. Further, these are BPS states \cite{gutgre}, thus this string description is a reasonable one for these bound states.
We compare this
description to a supergravity description at long distances and
find that they agree.
After preliminaries in section $(2)$, the bound state of a two-brane
and a zero-brane $(2-0)$ is studied in section (3). We compare the
string description to a supergravity description by comparing the
long-range potential between a zero-brane and the $(2-0)$ bound state.
In section (4) we treat the $(4-2)$ bound state. We compute the
long-range, velocity dependent potential between the bound state
and a zero-brane and compare them to a supergravity calculation.
We also compute the phase shift the scattered zero-brane acquires
after scattering from the bound-state at short distances. From the phase
shift
we compute the absorption probability of
the zero-brane by this bound state
and the size of the bound-state.
Section (5) is devoted to the study of the $(4-2-2-0)$
bound state, where the two two-branes are orthogonally embedded in the
four-brane. We end with conclusions.
\section{Probing the ($p$, $p-2$) bound state}
Starting with the action of a $p$ brane moving in a background,
let us concentrate on the coupling to
the RR sector \cite{doug}
\begin{equation}
I_{RR}=T_{p}\int_{W_{p+1}}Str \ \ C \wedge e^{2\pi \alpha' F}.
\label{birr}
\end{equation}
Here $T_{p}=\sqrt{\pi}(4\pi^{2} \alpha ')^{(3-p)/2}$ \cite{pol,cjp},
$F=dV-\frac{B}{2\pi\alpha '}$, $V$ is
the world volume gauge field and $B$ is the two-form NS-NS gauge field.
A $p$ brane with a constant magnetic field of the world volume
gauge field strength
$F$ will carry an $RR$ charge depending on the form of $F$.
Here we are assuming that the $p$ brane is compactified on a torus.
One can think of this
as representing the bound state of a $p$ brane with various
lower dimensional branes. We will compare our result to
the supergravity description of some of these objects.
A constant magnetic
field on the $p$
brane is relatively easy to treat and we will mainly scatter zero-branes
off various configurations in order
to learn about the properties of the bound states.
This will be done by computing the one loop vacuum amplitude for open
superstring with the appropriate boundary conditions \cite{pol}.
The one loop vacuum amplitude is the phase shift of the
probe after the scattering \cite{bac}, and defines a potential $V(r^{2})$
through the equation
\begin{equation}
A(b,v)=-\int d\tau V(r^{2}=b^{2}+\tau^{2}\frac{v^2}{1-v^{2}}),
\label{defpot}
\end{equation}
where $b$ is the impact parameter and $v$ is the velocity of the
D-brane probe.
The short distance
behavior is governed by the light open string modes while the
long distance is governed by the light closed string modes. Thus when
approximating the integrals one should take care to match those two
approximations \cite{dkps}.
Given two identical parallel D-branes with the same condensation $F$ on
their world-volume, it is known that the one loop vacuum amplitude
is just the same as when the condensation is zero, except for
a multiplicative
factor of $(1+(2\pi \alpha' f)^2)$ \cite{acny,bacpor,gutgre}
($f$ is the non-zero entry of $F$).
This factor expresses the change in the mass of the brane, due
to the binding with a lower brane, and the change in the $RR$ charge.
Thus the mass of the bound state of a $p$-brane and a $(p-2)$-brane
is
\begin{equation}
m^{2}_{(p, p-2)}=m^{2}_{p}+m^{2}_{p-2},
\end{equation}
which is dual to the mass formula in \cite{wit}.
The brane's original charge is unmodified, but there is an additional charge
density of
order $f$ on each brane, which is the $RR$ charge of the lower dimensional
brane. Some D-brane configurations, with world volume gauge
field turn on, were considered in \cite{gutgre,wit,ck,li,bf,bdl,as}.
For completeness we give the following.
In terms of $q=e^{-\pi t}$, we define
\begin{eqnarray}
f_{1}(q) & = & q^{1/12} \prod_{n=1} (1-q^{2n}). \\
f_{2}(q) & = & \sqrt{2} q^{1/12} \prod_{n=1} (1+q^{2n}). \\
f_{3}(q) & = & q^{-1/24} \prod_{n=1} (1+q^{2n-1}). \\
f_{4}(q) & = & q^{-1/24} \prod_{n=1} (1-q^{2n-1}).
\end{eqnarray}
In the limit $t \rightarrow 0$ one has
\begin{eqnarray}
f_{1}(q) & \rightarrow & \frac{1}{\sqrt{t}}e^{-\pi/(12t)}. \\
f_{2}(q) & \rightarrow & e^{\pi/(24t)}(1-e^{-\pi/t}). \\
f_{3}(q) & \rightarrow & e^{\pi/(24t)}(1+e^{-\pi/t}). \\
f_{4}(q) & \rightarrow & \sqrt{2}e^{-\pi/(12t)}.
\end{eqnarray}
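As an aside, these limits are easy to check numerically. The short script below is our own illustration (the values of $t$ and the cut-off \texttt{nmax} are arbitrary); it compares the product representation of $f_{1}(q)$, $q=e^{-\pi t}$, with the $t\to0$ asymptotics quoted above.
\begin{verbatim}
# Numerical sanity check (illustration only): as t -> 0 the product
# representation of f_1(q), q = exp(-pi t), should approach
# t**(-1/2) * exp(-pi/(12 t)).
import math

def f1(t, nmax=4000):
    q = math.exp(-math.pi * t)
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1.0 - q**(2 * n)
    return q**(1.0 / 12.0) * prod

for t in (0.2, 0.1, 0.05):
    exact = f1(t)
    asym = t**(-0.5) * math.exp(-math.pi / (12.0 * t))
    print(t, exact, asym, exact / asym)   # the ratio approaches 1
\end{verbatim}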
We will also find it convenient to have the behavior of
the $\Theta(\nu t,it)$ (Jacobi theta functions)
in the limit $t \rightarrow 0$
\begin{eqnarray}
\frac{\Theta_{1}(i\epsilon t,it)}{\Theta^{'}_{1}(0,it)} & \rightarrow &
-e^{(\pi \epsilon^{2} t)}\frac{t}{i}\frac{\sin (\pi \epsilon)}{\pi}. \\
\frac{\Theta_{2}(i\epsilon t,it)}{\Theta_{2}(0,it)} & \rightarrow &
e^{(\pi \epsilon^{2} t)}(1+4\sin^{2}(\pi \epsilon)e^{-\pi/t}). \\
\frac{\Theta_{3}(i\epsilon t,it)}{\Theta_{3}(0,it)} & \rightarrow &
e^{(\pi \epsilon^{2} t)}(1-4\sin^{2}(\pi \epsilon)e^{-\pi/t}). \\
\frac{\Theta_{4}(i\epsilon t,it)}{\Theta_{4}(0,it)} & \rightarrow &
e^{(\pi \epsilon^{2} t)}\cos(\pi \epsilon).
\end{eqnarray}
\subsection{Compact Branes}
When some of the space times coordinate are compact their
effect on the configuration of
the D-branes depends on whether the compact dimensions are
an $NN$, $ND$ or $DD$
directions. In the case a $NN$ direction is compact the
integral over the momentum
in that direction becomes a sum over the allowed momenta.
If the compact
direction is a $DD$ direction there is no momentum integral
to begin with, however
there are infinite number of open string configuration
that wrap around
the compact direction. Thus the mass of the open string is now
$M^2 =\frac{b^2}{(2\pi\alpha')^{2}} + \frac{1}{\alpha '}
\sum (oscillators) + (nR/\alpha')^2$,
$R$ is the radius of the compact direction,
and there is a string
configuration for each $n$.
In the case of a $ND$
direction there are no momentum
integrals and no winding modes,
thus there is no change in the
one loop computation. This is of course what one expects from T-duality, which changes a $DD$ direction to an $NN$ direction but leaves the number of $ND$ directions the same.
The one loop amplitude for a configuration of a $p$ brane
and an $l$ brane moving parallel
to each other with one $NN$ direction compactified is ($L=2\pi R$)
\begin{equation}
A=\frac{C_{l-1}}{2\pi}\int \frac{dt}{t} e^{-(\frac{b^2 t}{2\pi \alpha'})}
(8\pi^2 \alpha' t)^{-(\sharp NN -1)/2}
\Theta_{3}(0,8i\pi^{2} \alpha ' t/L^2)
B \times J.
\label{comnn}
\end{equation}
$B$ and $J$ are the usual contribution from the bosonic and
fermionic oscillators
respectively.
Similarly for a compactified $DD$ direction one finds
\begin{equation}
A=\frac{C_{l}}{2\pi}\int \frac{dt}{t} e^{-(\frac{b^2 t}{2\pi \alpha'})}
(8\pi^2 \alpha' t)^{-(\sharp NN)/2} \Theta_{3}(0,itL^2/2\pi^{2}\alpha ')
B \times J.
\label{comdd}
\end{equation}
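As a consistency check, recall that T-duality along the compact direction replaces $L$ by $\tilde L=4\pi^{2}\alpha'/L$ (i.e. $R\to\alpha'/R$); substituting $\tilde L$ into the theta function of equation (\ref{comdd}) gives
\[
\Theta_{3}\!\left(0,\,\frac{it\tilde L^{2}}{2\pi^{2}\alpha'}\right)
=\Theta_{3}\!\left(0,\,\frac{8i\pi^{2}\alpha' t}{L^{2}}\right),
\]
which is precisely the theta function appearing in equation (\ref{comnn}).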
For example, in the case of $p=6, l=2$ and one of the $NN$ directions being
compact, one can calculate
the $v^2$ term of the potential
to be ($\beta=\frac{r^2 L^2}{16\pi^{2} \alpha'^{2}}$ )
\begin{equation}
V=-\frac{\pi C_1 v^2 L^2}{(8\pi^2 \alpha')^{2}}
\frac{\coth(\sqrt{\beta})}{\sqrt{\beta}},
\end{equation}
from which the moduli space metric can be read off.
When $\beta$ is large the potential falls like $r^{-1}$, as expected for a six-brane, and when $\beta$ is small the potential falls like $r^{-2}$, as expected for a five-brane.
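Explicitly,
\[
\frac{\coth\sqrt{\beta}}{\sqrt{\beta}}\simeq\frac{1}{\sqrt{\beta}}=\frac{4\pi\alpha'}{rL}
\quad(\beta\gg1)\,,\qquad
\frac{\coth\sqrt{\beta}}{\sqrt{\beta}}\simeq\frac{1}{\beta}=\frac{16\pi^{2}\alpha'^{2}}{r^{2}L^{2}}
\quad(\beta\ll1)\,,
\]
so that in the first limit $V\propto L/r$, while in the second the factors of $L$ cancel and $V\propto 1/r^{2}$.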
From equation (\ref{comnn}), the important scale that determines the behavior of a system with compact $NN$ directions is $bL$. If $bL$ is large then the system will behave as if it is uncompactified, and vice versa. So if we probe deeply a system compactified along an $NN$ direction, it will behave as if the compactification scale is small. Similarly, if $L$ is small but we go far away, the system will behave as if it is uncompactified. For a compact $DD$ direction the relevant scale is of course $b/L$.
In the next sections we will have to deal with compact dimensions that are different from $NN$, $DD$ or $ND$. We will be faced with compact coordinates that satisfy a $D$ or $N$ boundary condition on one end of the string and some condensation on the other end; we shall call these $NF$ and $DF$ conditions. When an $NF$ or $DF$ direction is compact things are different. For a $DF$ condition there will not be any momentum integral, but there will be something like a winding mode, and vice versa for the $NF$ coordinates. In order to avoid this complication we will always assume that the radius of compactification is large enough to neglect those effects, even in the large $r$ limit, and we will treat those directions as if they were uncompactified (from the modes' point of view).
\section{(2-0) bound state}
For the two-brane there is only one relevant term in the expansion
(\ref{birr}) and it is
$A \wedge F$, where $A$ is the $RR$ gauge field carried by the zero-brane.
We will assume that the two-brane is compactified on $T^{2}$.
If one chooses
\[
F=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & f \\
0 & -f & 0
\end{array}
\right)
\]
then,
\begin{equation}
\int A \wedge F \rightarrow \int f d^{2}\sigma \int A d \tau.
\end{equation}
Requiring $\int f =2\pi$ gives the two-brane action a term
$T_{0}\int Ad\tau$, which is the coupling of a zero-brane to an $RR$
background.
As $f=$const, the zero-brane $RR$ charge of this configuration
is proportional to
$fL^2$ where $L^2$ is the area of the compactified two-brane.
Let us compute the velocity-dependent potential between the (2-0) bound
state and another zero-brane moving with velocity $v$.
The one loop amplitude (the phase shift) takes the form
($\tan(\pi \epsilon) = 2\pi\alpha' f$, $\tanh(\pi \nu)= v$),
\begin{equation}
A=\frac{1}{2 \pi}\int \frac{dt}{t} e^{-(\frac{b^2 t}{2\pi \alpha'})} B \times J,
\label{a20}
\end{equation}
\begin{eqnarray}
B& = &\frac{1}{2}f_{1}^{-6}\Theta^{-1}_{4}(i\epsilon t)
\frac{\Theta_{1}' (0)}{\Theta_{1}(\nu t)}, \\
J& = &\{ -f_{2}^{6}
\frac{\Theta_{2} (\nu t)}{\Theta_{2}(0)}
\Theta_{3}(i\epsilon t, it)+f_{3}^{6}\Theta_{2}(i\epsilon t, it)
\frac{\Theta_{3} (\nu t)}{\Theta_{3}(0)} \nonumber\\
& + & if_{4}^{6}
\frac{\Theta_{4} (\nu t)}{\Theta_{4}(0)}\Theta_{1}(i\epsilon t)\}.
\label{bj20}
\end{eqnarray}
The existence of the $NS(-1)^F$ sector, the third term in equation (\ref{bj20}), is a consequence of the new boundary conditions for the open superstring. Instead of having two $ND$ coordinates, which give fermionic zero modes in that sector, those two coordinates now have different boundary conditions, $FD$ \cite{acny},
\begin{eqnarray}
\partial_{\sigma}X^{\mu}+2\pi \alpha' F^{\mu}_{\nu}\partial_{\tau}X^{\nu}
& = & 0 \ \ \ (\sigma=0), \\
\partial_{\tau} X^{\mu} & = & 0 \ \ \ (\sigma=\pi).
\end{eqnarray}
Similarly for the fermionic coordinates \cite{bacpor}.
We treat the case where the non-zero components of $F$ are constant.
One can solve for the modes and compute the one loop amplitude.
A short cut to the right answer is to start with the expression
in \cite{bacpor} which is for the case of an electric field on both
branes (in their case a $9$-brane).
To get a magnetic field one just substitutes
$\epsilon \rightarrow i\epsilon$ and to get a Dirichlet boundary condition
on one of the branes one can formally take the condensation on that brane
to $\infty$. This has the following effect, one substitutes
\begin{eqnarray}
\Theta_{1}(i\epsilon) & \rightarrow & i\Theta_{4}(i\epsilon). \\
\Theta_{2}(i\epsilon) & \rightarrow & \Theta_{3}(i\epsilon). \\
\Theta_{3}(i\epsilon) & \rightarrow & \Theta_{2}(i\epsilon). \\
\Theta_{4}(i\epsilon) & \rightarrow & i\Theta_{1}(i\epsilon).
\end{eqnarray}
Furthermore, because of the Dirichlet boundary condition on
one end there are
no zero-modes in the bosonic sector. The velocity dependence is
as in \cite{bac}, thus we end up with equations (\ref{a20}-\ref{bj20}).
Of course, when $\epsilon$ goes to zero in equations (\ref{a20}-\ref{bj20})
one gets back
just the expression for a zero-brane scattered off a two-brane.
Let us check that one gets the right charge for the zero-brane inside the two-brane. Taking only the $RR$ sector (the third term in equation (\ref{bj20})) one finds that the charge per unit volume of the zero-brane inside the two-brane is proportional to $T_{2}\tan(\pi \epsilon)=2\pi \alpha ' f\, T_{2}$, exactly as expected, and the $RR$ sector has the right sign to represent the interaction of two zero-branes of the same charge.
One can compute the one loop amplitude in various limits.
When the distance between the branes $r$ is large
one gets for the velocity dependent potential
\begin{equation}
V=-\Gamma(5/2)\frac{(2+ 2\sin^{2}(\pi \epsilon) +2\sinh^{2}(\pi \nu) -
4\sin(\pi \epsilon)\cosh(\pi\nu))}
{\cos(\pi \epsilon)\sqrt{8\pi^{2} \alpha '}}
(\frac{2\pi \alpha '}{r^{2}})^{5/2}.
\end{equation}
Where $\cosh(\pi\nu)=\frac{1}{\sqrt{(1-v^{2})}}$ and
$\sinh(\pi\nu)=\frac{v}{\sqrt{(1-v^{2})}}$.
These results of course hold, with some modifications, for all T-dual configurations. For instance, the long range potential between a bound state of a four-brane and a two-brane, and another two-brane parallel to the one inside the four-brane, is ($Q_{4}$ is the four-brane charge)
\begin{equation}
V_{string} \sim -Q_{4}\frac{(2+ 2\sin^{2}(\pi \epsilon)
+2\sinh^{2}(\pi \nu) -
4\sin(\pi \epsilon)\cosh(\pi\nu))}{\cos(\pi \epsilon)}
r^{-3}.
\label{422st}
\end{equation}
We can compare this result with the conjectured supergravity description of the bound state of a two-brane inside the four-brane \cite{ilpt}. The supergravity configuration of a two-brane inside a four-brane was first derived as the $D=8$ dyonic membrane.
It will be convenient
to write down its eleven-dimensional interpretation. The metric takes the
form
\begin{eqnarray}
ds_{11}^{2} & = & (H \tilde{H})^{1/3}[H^{-1}
(-dt^{2}+dy_{1}^{2} +dy_{2}^{2})
+\tilde{H} ^{-1}(dy_{3}^{2}+dy_{4}^{2}+dy_{5}^{2}) \nonumber \\
& + & dx_{1}^{2}+ \cdots dx_{5}^{2}]. \nonumber \\
F_{4}^{(11)} & = &\frac{1}{2}\cos (\zeta) \star dH +
\frac{1}{2}\sin (\zeta) dH^{-1} \wedge dt \wedge dy_{1}
\wedge dy_{2} \nonumber\\
& + & \frac{3 \sin (2\zeta)}{2\tilde{H} ^{2}}dH\wedge dy_{3} \wedge dy_{4}
\wedge dy_{5}.
\label{sg42}
\end{eqnarray}
Here $H=1+\frac{\gamma}{r^{3}}$,
$\tilde{H}=1+\frac{\gamma \cos^{2}(\zeta)}{r^{3}}$ and $\star$ is the
Hodge dual in $R^{5}(x_{1} \cdots x_{5})$.
Now this is the metric of the eleven-dimensional two-brane inside a five-brane, but when one considers any of the $y_{i}$, $i=3,4,5$, as the eleventh direction we get a two-brane inside a four-brane. Further, the two-brane charge is $Q_{2} \sim \gamma \sin (\zeta)$ and the four-brane charge is $Q_{4} \sim \gamma \cos (\zeta)$. If we choose $y_{5}$ as the eleventh dimension (so its radius is small), the other two $y$'s are also compactified, but on large circles, as discussed in section (2).
Now one can calculate the velocity
dependent potential between a two-brane
and this bound state, where the two-branes are parallel,
using the metric and gauge fields in
equation (\ref{sg42}). This is easiest done in the static gauge, and
one can readily use the formulas in \cite{dklt} to find,
\begin{equation}
V_{sugra} \sim \frac{\gamma}{r^{3}}
[4sin(\zeta) +2\cos^{2}(\zeta)-4-v^{2}\cos^{2}(\zeta)]
\label{422sug}
\end{equation}
Comparing this to equation (\ref{422st}) we find that they do not agree. The reason is that while in the supergravity calculation we have worked in the ``static gauge,'' in which the timelike parameter of the world volume is equal to $X_{0}$, this is not the case in the string calculation. These expressions therefore give the potentials in two different reference frames. The string calculation can easily be converted to this frame.
Observe that in the string calculation
we have taken the expression for the potential of the form
\begin{equation}
A=-\int d \tau V(r^{2}=b^{2} +\tau^{2} \sinh^{2}(\pi \nu)),
\label{av}
\end{equation}
so that $\tau \neq X_{0}$. In order to get the string theory answer
corresponding to the observer $\tau=X_{0}$ one just needs to multiply
equation (\ref{422st}) by a factor
$\frac{v}{\sinh(\pi\nu)}=\cosh^{-1}(\pi\nu)$. Then the two expressions,
that of the string theory and that of the supergravity agree to order
$v^{2}$ when one identifies $\zeta=\pi\epsilon$.
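To make the agreement explicit: using $2+2\sin^{2}(\pi \epsilon)-4\sin(\pi \epsilon)=2\left(1-\sin(\pi \epsilon)\right)^{2}$ and expanding to order $v^{2}$, equation (\ref{422st}) multiplied by $\cosh^{-1}(\pi\nu)$ becomes
\[
V \sim -\frac{Q_{4}}{r^{3}}\,
\frac{2\left(1-\sin(\pi \epsilon)\right)^{2}+v^{2}\cos^{2}(\pi \epsilon)}{\cos(\pi \epsilon)}\,,
\]
while, with $4\sin(\zeta)+2\cos^{2}(\zeta)-4=-2\left(1-\sin(\zeta)\right)^{2}$ and $\gamma\cos(\zeta)\sim Q_{4}$, equation (\ref{422sug}) takes exactly the same form once $\zeta=\pi\epsilon$.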
When $r$ becomes very small the appropriate expansion is of
$t \rightarrow \infty$. We introduce a
cutoff $\Lambda$, and the $t$ region
$0 \rightarrow \Lambda$ is governed by the light closed string modes
which make a non-singular contribution of order $\sim r^2$. Then
one gets (for $v=0$)
\begin{equation}
A \approx \int_{\Lambda}^{\infty}
\frac{dt}{t} e^{-t(\frac{b^2 }{2\pi \alpha'}-\pi/2 +\pi\epsilon)}
(8\pi^2 \alpha' t)^{-(1/2)}
\end{equation}
Now when $\frac{b^2}{2\pi \alpha'} <\pi/2 -\pi\epsilon$ a tachyon develops in the open string spectrum and the expression becomes complex \cite{bansus}. However, this happens at a slightly smaller distance than in the pure two-brane case. What happens when $\epsilon$ grows? The largest it can be is $\epsilon=1/2$. Heuristically, as $\epsilon$ grows one gets more and more ``towards'' a Dirichlet boundary condition on the two-brane, and the tachyonic instability starts at smaller distances.
One can see that the constant
term and the $v^{2}$ term in the potential go to zero, while the
$v^{4}$ term grows.
Notice that a bound state configuration of $(2-0)$ is dual
to a bound state of a D-string and an elementary string.
In the first case this is described by turning on a magnetic field on the world-volume, and in the second by turning on an electric field on the world-volume \cite{wit}.
\section{(4-2) bound state}
The world-volume of the four-brane is five-dimensional, and the
coupling we are
going
to consider in this section is $C \wedge F$, where C is the $RR$
three form gauge
field coupled to the two-brane. The four-brane will be wrapped around
$T^{2}$.
Taking
\[
F=\left(
\begin{array}{ccccc}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & f & 0 & 0 \\
0 & -f & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{array}
\right)
\]
the $F\wedge F$ coupling will not contribute,
so we will have a bound state of a two-brane
and a four-brane. Of course one can orient the two-brane inside the four-brane in more than one way, by choosing which elements of $F$ will be non-zero. The membrane world volume occupies the directions where $F=0$, so if $F_{12} \neq 0$ then the membrane lies in the $3,4$ directions inside the four-brane, and it is the $1,2$ directions which are compact (not the directions the two-brane occupies).
The phase shift for a moving zero-brane probe (with velocity $v$)
in the presence of the $(4-2)$ bound state
is ($\tanh(\pi \nu) =v$, $\tan(\pi \epsilon)=2\pi \alpha' f$),
\begin{equation}
A=\frac{1}{2\pi}\int \frac{dt}{t} e^{-(\frac{b^2 t}
{2\pi \alpha'})} B \times J.
\end{equation}
\begin{eqnarray}
B & = & \frac{1}{2}f_{1}^{-4}f_{4}^{-2}\Theta^{-1}_{4}(i\epsilon t)
\frac{\Theta_{1}' (0)}{\Theta_{1}(\nu t)}. \\
J & = & \{ -f_{2}^{4}f_{3}^{2}
\frac{\Theta_{2} (\nu t)}{\Theta_{2}(0)}
\Theta_{3}(i\epsilon t, it)+f_{3}^{4}f_{2}^{2}\Theta_{2}(i\epsilon t, it)
\frac{\Theta_{3} (\nu t)}{\Theta_{3}(0)}\}.
\end{eqnarray}
As $\epsilon \rightarrow 0$ one gets the result for a zero-brane
and a pure four-brane.
This expression can be evaluated at various limits.
Let us first compare the string
description with the supergravity description.
The appropriate limit is then
$r \rightarrow \infty$, so $t \rightarrow 0$, which is the range
when the mass-less
closed string modes dominate.
One finds a velocity dependent potential,
\begin{equation}
V=-\Gamma(3/2)\frac{(\sin^{2}(\pi \epsilon)
+\sinh^{2}(\pi\nu))}{\cos(\pi \epsilon)\sqrt{8\pi^{2} \alpha '}}
(\frac{2\pi \alpha '}{r^{2}})^{3/2}.
\label{420st}
\end{equation}
We turn now to the supergravity description.
If there is a zero-brane interacting with the $(4-2)$ bound state,
one can compute \cite{gil} the
velocity dependent potential from null
geodesics on the metric (\ref{sg42}).
Then one finds
\begin{equation}
V_{sugra} \sim \frac{-Q_{4}}{r^{3}}\frac{(\frac{v^{2}}{1-v^2}+\sin^{2}(\zeta))}
{\cos(\zeta)}
\label{sgpot}
\end{equation}
Equation (\ref{sgpot}) agrees with equation (\ref{420st})
when one identifies $\zeta=\pi\epsilon$
\footnote{Notice that we do not have the problem of converting
the string calculation to the static gauge, because the
supergravity calculation is done differently than in the previous section}.
In the string
description $\epsilon=1/2$ describes an infinite condensation of
two-branes on the four-brane. On the supergravity side this is the case
$\zeta=\pi/2$, which makes the supergravity solution into a pure two-brane.
A condensation of infinitely many two-branes on the four-brane
has turned it into a two-brane.
Let us now turn to the case where $r$ is smaller than the string scale. As $v \rightarrow 0$ the potential between the zero-brane and the $(4-2)$ bound state is
\begin{equation}
V=-\frac{\Gamma(-1/2)}{\sqrt{8\pi^2 \alpha '}}
[(\frac{r^2}{2\pi \alpha '}-\pi \epsilon)^{1/2}
+ (\frac{r^2}{2\pi \alpha '}+\pi \epsilon)^{1/2}- 2(\frac{r^2}
{2\pi \alpha '})^{1/2}]
\end{equation}
The first two terms are from the $NS$ sector of the open string and the last one from the $R$ sector. The potential exhibits the characteristics of a stretched string between the branes, a feature that is due to the probing agent, the zero-brane. The difference in the ground state energies of the different sectors translates into different effective lengths for the stretched string.
Now if one views $\epsilon$ as a parameter that can change, then
this expression tells us that $\epsilon$ will want to grow.
This means that if we have fixed the charges on the brane, then it is
the volume of the compactified brane that will tend to decrease.
To order $(\pi\epsilon)^{2}$
the potential becomes
\begin{equation}
V=-\frac{\sqrt{\pi}(\pi \epsilon)^{2}}{2\sqrt{8\pi^2 \alpha '}}
(\frac{2\pi \alpha '}{r^{2}})^{3/2}
\end{equation}
So
for small $r$ and large $r$ the potentials agree to order
$\epsilon^2$, which will
enable us later to approximate some integrals in a simple way.
This agreement is a
residue of the
supersymmetry present when $\epsilon=0$.
At non-zero velocity one can compute the phase shift of the
scattered zero-brane.
For small $r$ we can take the $t$
integral limits from $0 \rightarrow \infty$
because in this case the $t \rightarrow 0$ limit of the open string
is the same as that of the closed string.
We consider the case where the velocity ($v$) is small. Then
one finds ($\pi \nu \approx v$)
\begin{equation}
A=\int \frac{dt}{t} e^{-(\frac{b^2 t}{2\pi \alpha'})}(\tan(\frac{vt}{2})+
\frac{\cosh(\pi \epsilon t)
-1}{\sin(vt)})
\label{a42s}
\end{equation}
This gives,
\begin{equation}
e^{iA}= \frac{\Gamma[\frac{ib^2}{4v\pi \alpha '}+
\frac{1}{2}-\frac{i\pi\epsilon}{2v}]
\Gamma[\frac{ib^2}{4v\pi \alpha '}+\frac{1}{2}+\frac{i\pi\epsilon}{2v}]}
{\Gamma[\frac{ib^2}{4v\pi \alpha '}+1]\Gamma[\frac{ib^2}{4v\pi \alpha '}]}.
\label{ps42}
\end{equation}
Equation (\ref{a42s}) exhibits a tachyonic instability at $b^2 < 2\pi^{2} \alpha ' \epsilon$; after analytic continuation this translates into a large imaginary part of equation (\ref{ps42}), which means a very small norm for the scattered wave function (i.e., absorption).
An incoming zero-brane in a plane wave state with velocity in the $z$
direction will be
multiplied by the phase shift (equation \ref{ps42})
after scattering, so
\begin{equation}
e^{ikz} \rightarrow e^{ikz +iA(b^{2}=x^{2}_{\perp},v)}.
\end{equation}
The norm of the outgoing wave function is given by,
\begin{equation}
|e^{iA}|=[\frac{\sinh^{2}(\frac{b^2}{4v \alpha '})}{\cosh^{2}
(\frac{b^2}{4v \alpha '})+
\sinh^{2}(\frac{\pi^2 \epsilon}{2v})}]^{1/2}.
\label{norm}
\end{equation}
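As a cross-check (our own illustration, with arbitrarily chosen parameter values and assuming the \texttt{mpmath} library is available), the modulus of the Gamma-function expression (\ref{ps42}) can be compared numerically with the closed form (\ref{norm}):
\begin{verbatim}
# Numerical cross-check of |exp(iA)|: Eq. (ps42) versus Eq. (norm).
from mpmath import mp, gamma, sinh, cosh, sqrt, pi, mpf

mp.dps = 30                         # working precision
alphap = mpf(1)                     # alpha', arbitrary units
b2, v, eps = mpf('2.3'), mpf('0.05'), mpf('0.1')

x = b2 / (4 * v * pi * alphap)      # real part of the Gamma arguments
y = pi * eps / (2 * v)              # shift due to the condensation

num = gamma(0.5 + 1j * (x - y)) * gamma(0.5 + 1j * (x + y))
den = gamma(1 + 1j * x) * gamma(1j * x)
modulus_gammas = abs(num / den)

modulus_closed = sqrt(sinh(b2 / (4 * v * alphap))**2 /
                      (cosh(b2 / (4 * v * alphap))**2
                       + sinh(pi**2 * eps / (2 * v))**2))

print(modulus_gammas, modulus_closed)   # the two results coincide
\end{verbatim}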
If the norm of the wave function is much less than $1$, it signals the
breakdown of the WKB-Eikonal approximation. For low $v$ this will
happen when $b^2 < 2\pi^{2} \alpha ' \epsilon$.
If we Fourier transform the outgoing wave we will get the scattering
amplitude as a function of the incoming momenta ($k=\frac{v}{g}$)
\begin{equation}
f(k,\theta) \sim \exp{[-\sqrt{2} k \sin(\theta/2)
(\sqrt{\pi\epsilon +(gk)^2}-
\pi\epsilon)^{1/2}]}.
\label{sca42}
\end{equation}
In the limit $v \gg \epsilon$ we get
\begin{equation}
f(k,\theta) \sim e^{-\sqrt{2}\sin(\theta/2)(kl_{p}^{11})^{3/2}}
\end{equation}
as in \cite{dkps}, which shows that the
physical scale is the eleven dimensional
Planck length
$l_{p}^{11}=g^{1/3}l_{s}$ \cite{kp}.
In the limit $\epsilon \gg v$ one finds
\begin{equation}
f(k,\theta) \sim e^{-\sin(\theta/2) \frac{g}{\sqrt{\pi\epsilon}} k^{2}}
\end{equation}
However in this limit it is easy to see that the approximation
is not valid.
Now $(1-|e^{iA}|^{2})$ is the probability of absorbing the zero-brane by this bound state at impact parameter $b$. At very low velocities ($v \ll \epsilon$) one sees that the probability is $\sim 1$ for $b^{2}/2\pi \alpha ' <\pi \epsilon$ and zero otherwise. One can interpret a scale $r_{0}$, below which there is a large absorption probability, as giving the effective scale of the bound state. Of course this depends on the probing agent; thus the above result is what we expect from a state with characteristic length scale, in string units, of $\sqrt{\pi\epsilon}$. For very small $\pi \epsilon$ this gives a characteristic scale for the bound state (as seen by the zero-brane) of $\sim \frac{2 \pi \alpha '}{L}$.
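Keeping track of the numerical factors: for small $\epsilon$ one has $\pi\epsilon\approx2\pi\alpha' f$, and if, as in section (3), the flux on the compact $T^{2}$ of area $L^{2}$ is taken to be $\int f=2\pi$, i.e. $f=2\pi/L^{2}$, then
\[
r_{0}\sim\sqrt{2\pi^{2}\alpha'\epsilon}=\sqrt{2\pi\alpha'\,(\pi\epsilon)}
\sim\frac{2\pi\alpha'}{L}\sqrt{2\pi}\,,
\]
which is the scale quoted above, up to a numerical factor.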
The probability of absorbing a zero-brane by this bound state when the
zero-brane is an incoming state $e^{ikz}\phi(b)$ is
\begin{equation}
P_{abs}=\int d^{4} b |\phi(b)|^{2}(1-\|e^{iA}\|^{2})
\end{equation}
If one assumes that $\pi^{2} \epsilon \ll v$ and that $\phi(b)$
is zero outside a region of volume $V_{4}$ and constant in it,
then one finds
\begin{equation}
P_{abs}\approx \frac{\Omega_{3}}{2V_{4}}[(4v\alpha')^{2} \ln 2+
(\pi \epsilon)^{2}(2\pi \alpha')^{2}(\frac{2}{3}\ln 2-\frac{1}{6})]
\label{abs}
\end{equation}
where $\Omega_{3}$ is the area of the unit three-sphere.
The first term in (\ref{abs}) is present for the pure four-brane, where
it is interpreted as a signature for resonances \cite{dfs}. The
second term represents the effective size of the bound state.
\section{Probing the (4-2-2-0) bound state}
We take the four-brane to be wrapped around $T^{4}$. If one chooses
\[
F=\left(
\begin{array}{ccccc}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & f & 0 & 0 \\
0 & -f & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & f \\
0 & 0 & 0 & -f & 0
\end{array}
\right)
\]
then the four-brane will be endowed with two-brane charge (two orthogonal
two-branes) and
zero-brane charge, due to the coupling
\begin{equation}
\frac{1}{2}A\wedge F \wedge F + C \wedge F.
\end{equation}
The mass of this bound state is
\begin{equation}
m^{2}_{4-2-2-0}=m^{2}_{4} +m^{2}_{2}+m^{2}_{2}+m^{2}_{0}
\end{equation}
The phase shift of a moving zero-brane in the
background of this
bound state is
\begin{equation}
A=\frac{1}{2\pi}\int \frac{dt}{t} e^{-(\frac{b^2 t}{2\pi \alpha'})}
B \times J.
\label{amp4220}
\end{equation}
\begin{eqnarray}
B& = &\frac{1}{2}f_{1}^{-4}\Theta^{-2}_{4}(i\epsilon t)
\frac{\Theta_{1}' (0)}{\Theta_{1}(\nu t)}. \\
J& = &\{ -f_{2}^{4}
\frac{\Theta_{2} (\nu t)}{\Theta_{2}(0)}
\Theta_{3}^{2}(i\epsilon t, it)+f_{3}^{4}\Theta_{2}^{2}(i\epsilon t, it)
\frac{\Theta_{3} (\nu t)}{\Theta_{3}(0)} \nonumber \\
& + & f_{4}^{4}
\frac{\Theta_{4} (\nu t)}{\Theta_{4}(0)}\Theta_{1}^{2}(i\epsilon t)\}.
\label{bj4220}
\end{eqnarray}
When $v=0$ one finds that the one loop amplitude vanishes, due
to one of the identities of the Jacobi functions \cite{witwat}.
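With the conventions above, the relevant identity is the quartic relation (see \cite{witwat})
\[
\Theta_{1}^{2}(z)\,\Theta_{4}^{2}(0)=\Theta_{3}^{2}(z)\,\Theta_{2}^{2}(0)-\Theta_{2}^{2}(z)\,\Theta_{3}^{2}(0)\,;
\]
setting $z=i\epsilon t$ and using $f_{1}f_{2}^{2}=\Theta_{2}(0)$, $f_{1}f_{3}^{2}=\Theta_{3}(0)$, $f_{1}f_{4}^{2}=\Theta_{4}(0)$, one sees that at $\nu=0$ the combination in the curly brackets of (\ref{bj4220}) vanishes.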
This means that a configuration
of a (4-2-2-0) bound state of this type together with an additional
zero-brane is a stationary solution of the supergravity equations, and
should preserves some super-symmetries, probably a quarter
\footnote{After writing this work, I have learned that a T-dual
description of this configuration was discussed in \cite{bdl}}.
In the limit of large $r$ one finds a potential
\begin{equation}
V=-\Gamma(3/2)\frac{2(1-\cosh (\pi\nu))\sin^{2}(\pi \epsilon)+
\sinh^{2}(\pi\nu)}{\cos^{2}(\pi \epsilon)
\sqrt{(8\pi^{2} \alpha ')}}(\frac{2\pi \alpha '}{r^{2}})^{3/2}.
\end{equation}
Notice that to order $v^{2}$ one gets the same result as in the case
of a zero-brane
scattered off a four-brane \cite{gil}.
In the limit of small $r$ one finds
\begin{equation}
A=\int \frac{dt}{t} e^{-(\frac{b^2 t}{2\pi \alpha'})} \tan(vt/2).
\end{equation}
This is exactly as in the case of the zero-brane and the pure four-brane.
One can evaluate the phase shift as in \cite{dkps},
\begin{equation}
e^{iA}= \frac{\Gamma[\frac{ib^2}{4v\pi \alpha '}+\frac{1}{2}]
\Gamma[\frac{ib^2}{4v\pi \alpha '}+\frac{1}{2}]}
{\Gamma[\frac{ib^2}{4v\pi \alpha '}+1]\Gamma[\frac{ib^2}{4v\pi \alpha '}]}.
\end{equation}
Thus the moduli space
is the same as in the case of a zero-brane moving in the background of
a four-brane,
\begin{equation}
ds^{2}=\frac{1}{g}(1+\frac{g}{2r^{3}})(dr^{2}+d\Omega^{2}_{4}).
\end{equation}
Notice however that although the results are similar to those in the
case where there is a zero-brane scattered off a pure four-brane, the
physics is different. For instance in the latter case at large distances
the physics is governed by gravity alone, while in the $(4-2-2-0)$ case
there are gauge field interactions as well.
If we assume that the condensation on the four-brane is not the same in both directions, that is
\[
F=\left(
\begin{array}{ccccc}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & f_1 & 0 & 0 \\
0 & -f_1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & f_2 \\
0 & 0 & 0 & -f_2 & 0
\end{array}
\right)
\]
and $f_1 \neq f_2$, then the one loop amplitude has the same form as equations (\ref{amp4220})-(\ref{bj4220}), with the change
$\Theta_{j}^{2}(i\epsilon t) \rightarrow
\Theta_{j}(i\epsilon_1 t)\Theta_{j}(i\epsilon_2 t)$.
The configuration of this bound state with a zero-brane is not static any more. Assuming we take a configuration of a four-brane, two two-branes and a zero-brane, the different $f$'s represent different areas for the two $T^2$'s.
At short distances this gives a potential
\begin{equation}
V=-\frac{\Gamma(-1/2)}{\sqrt{8\pi^2 \alpha '}}
[(\frac{b^2}{2\pi \alpha '}-\pi (\epsilon_1-\epsilon_2))^{1/2}
+ (\frac{b^2}{2\pi \alpha '}+\pi (\epsilon_1 - \epsilon_2))^{1/2}
- 2(\frac{b^2}{2\pi \alpha '})^{1/2}]
\end{equation}
As in the previous section, $(\epsilon_{1}-\epsilon_{2})$ will tend to grow. In this case this means that there is a force that will make the area of one of the $T^2$'s larger than the other. So the space-time will tend to develop very different scales in some directions.
The phase shift for the scattering of the zero-brane can be computed
for small $r$ and it has the form of equation (\ref{ps42}) with
the substitution $\epsilon \rightarrow (\epsilon_1-\epsilon_2) $.
This is also true for equations (\ref{sca42}-\ref{abs}).
The characteristic scale of the bound state
is $r^{2}_{0}\sim 2\pi^{2} \alpha' (\epsilon_1 -\epsilon_2)$.
\section{Conclusions}
In this paper we give a string description of the bound states of
$(p, p+2, p+4) $ D-branes.
We compare the string description with a supergravity description,
using the interpretation of the dyonic membrane as the bound state of
a two-brane inside a four-brane. Both descriptions agree at large
distances.
We compute the velocity
dependent potential between a zero-brane probe and these bound states and
the phase shift of a scattered zero-brane at short distances.
The size of the bound state as seen by a zero-brane is estimated by
looking at the absorption cross section.
We find that the size of the bound state ($r_{0}$) is related
to the scale of the compact dimension ($L$) of the higher brane,
as $r_0 \sim L^{-1}$.
The largest it could be is one half the string length.
In a certain range of parameters one finds that the high energy scattering
at fixed angle is governed by the eleventh dimensional Planck scale.
We have found that a special $(4-2-2-0)$ bound state does not exert a force
on an additional zero-brane.
This is evidence that there should exist a solution of the supergravity equations that corresponds to a (4-2-2-0) bound state and another zero-brane, and that preserves a quarter of the supersymmetries. It will be interesting to find the corresponding supergravity solution.
The moduli space, in this case, turns out to be the same as in the pure four-brane zero-brane case. This is also true for the long range interaction between this bound state and a zero-brane, even though the physics of the pure four-brane zero-brane system looks very different.
We saw that taking the limit of infinite condensation on one of the branes transforms it into a $(p-2)$-brane; this was seen both in the string and in the supergravity descriptions. This may suggest that a $p$-brane is made out of infinitely many $(p-2)$-branes \cite{tow,bfss}.
\centerline{\bf{Acknowledgments}}
I would like to thank S. Deser, S. Mathur and S. Ramgoolam
for many helpful
discussions.
\label{due}
\input{due}
\subsection{Generic Number of Flavor}
\label{general}
\input{generic}
\subsubsection{The Skyrme--Faddeev model}
\label{skyrme}
Let us briefly review the effective low-energy pion Lagrangian
corresponding to the given pattern of the $\chi$SB; see Eq. (\ref{pater})
with $n_f=2$. We describe the pion dynamics by the $O(3)$ nonlinear sigma
model (in four dimensions)
\begin{equation}
\mathcal{L}_{\mathrm{eff}}=\frac{F_{\pi}^{2}}{2}\,\, \partial_{\mu}\vec{n}
\cdot\partial^{\mu}\vec{n} +\mbox{higher derivatives} \ ,
\label{senza il fermione}
\end{equation}
where the three-component field $\vec n$ is a vector in the \emph{flavor}
space, subject to the condition
\begin{equation}
\vec n^{\,2} =1\ . \label{ts}
\end{equation}
As in ordinary QCD, higher derivative terms are in general present in a low-energy effective theory expansion, and they are needed for the soliton stabilization.
The ``plain"
vacuum corresponds to a constant value of $\vec n$, which we are free to
choose as $\langle n_3\rangle =1$.
Usually the higher derivative term is chosen as follows (for a review, see
\cite{manton}):
\begin{equation}
\delta \mathcal{L}_{\mathrm{eff}} = -\frac{\lambda}{4}\,
\left(\partial_{\mu} \vec{n}\times\partial_{\nu}\vec{n} \right)
\cdot\left(\partial^{\mu}\vec{n} \times\partial^{\nu}\vec{n} \right)\,.
\label{senza il fermionep}
\end{equation}
Equations (\ref{senza il fermione}) and (\ref{senza il fermionep})
constitute the Skyrme--Faddeev (or the Faddeev--Hopf) model. Note that the
WZNW term does not exist in this
model, simply because $\pi_4(S^2)$ is not trivial.
To have a finite soliton energy, the vector $\vec n$ for the soliton solution
must tend to its vacuum value at the spatial infinity,
\begin{equation}
\vec n\to \{0,0,1\}\,\,\,\mbox{at}\,\,\, \left| \vec x \right| \to\infty\,.
\label{vac}
\end{equation}
Two elementary excitations near the vacuum $n_3 =1$,
\[
\frac{1}{\sqrt 2}\left(n_1\pm i\, n_2\right),
\]
can be identified with the pions.
The boundary condition (\ref{vac})
compactifies the space to $S^3$. Since $\pi_3 (S^2) = \mathbb{Z}$, see Table~
\ref{tabone}, solitons represent topologically nontrivial maps $S^3\to S^2$. As was noted in \cite{Faddeev}, there is an associated integer topological charge $n$, the Hopf invariant, which gives the soliton number.
The solitons of the Skyrme--Faddeev model are of the knot type. The
simplest of them is toroidal; it looks like a ``donut,'' see, e.g., \cite{manton}.
Qualitatively, it is rather easy to understand, in the limit when
the ratio of the periods is a large number, that the Hopf
topological number combines an instanton number in two dimensions,
with a twist in the perpendicular dimension. Let us slice the ``donut"
soliton by a perpendicular plane $AB$. In the vicinity of this plane,
the soliton can be viewed as a cylinder, so that the problem becomes
effectively two-dimensional. In two dimensions the, O(3) sigma model
has Polyakov--Belavin instantons \cite{Polyakov} whose topological
stability is ensured by the existence of the corresponding
topological charge. The Polyakov--Belavin instanton has an
orientational collective coordinate describing its rotation in the
unbroken U(1) subgroup. In two dimensions for each given instanton,
this collective coordinate is a fixed number. In the Hopf soliton of
the type shown in Fig.~\ref{donu}, as we move the plane $AB$ in the
direction indicated by the arrow, this collective coordinate changes
(adiabatically), so that the $2\pi$ rotation of the plane $AB$ in
the direction of the arrow corresponds to the $2\pi$ rotation of the
orientational modulus of the Polyakov--Belavin instanton. This is
the twist necessary to make the Hopf soliton topologically stable.
\begin{figure}[th]
\begin{center}
\leavevmode \epsfxsize 8.5 cm \epsffile{donut.eps}
\end{center}
\caption{{\protect\footnotesize The simplest Hopf soliton, in the adiabatic
limit, corresponds to a Belavin-Polyakov soliton closed into a donut after a $
2\protect\pi$ twist of the internal phase.}}
\label{donu}
\end{figure}
The Hopf charge {\it cannot} be written as an integral of any density
that is local in the field $\vec n$.
We introduce now a different, but equivalent, formulation.
In the $n_f=2$ case, there are two ways to parametrize the target space $S^2$.
One can use a vector $\vec{n}$ subject to the constraint $|\vec{n}| =1$, the one just discussed.
Another approach, which goes under the name of the
gauged formulation of the $CP^1$ sigma model, is to use a complex doublet $z_i$ subject to the constraint $z_i^* z_i =1$. This leaves us with an $S^3$ sphere. We have to further reduce it by gauging the phase rotation $z_i \to e^{i \theta} z_i$. This Hopf fibration leaves us exactly with the sphere $S^2$. The map between the two formulations is
$$
\vec{n} = z^*_i \vec{\tau} z_i\,.
$$
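Note that $z^{*}_{i}z_{i}=1$ indeed implies $\vec n^{\,2}=1$: using the $SU(2)$ Fierz identity $\tau^{a}_{ij}\tau^{a}_{kl}=2\delta_{il}\delta_{kj}-\delta_{ij}\delta_{kl}$ one finds
\[
n^{a}n^{a}=\left(z^{*}_{i}\tau^{a}_{ij}z_{j}\right)\left(z^{*}_{k}\tau^{a}_{kl}z_{l}\right)
=2\left(z^{*}_{i}z_{i}\right)^{2}-\left(z^{*}_{i}z_{i}\right)^{2}=1\,,
\]
and $\vec n$ is manifestly invariant under the phase rotation $z_{i}\to e^{i\theta}z_{i}$, which is why the latter can be gauged.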
The derivatives acting on the doublet $z_i$ are the covariant derivatives
$$D_{\mu} = \partial_{\mu} -iA_{\mu}$$ where
\beq
A_{\mu} = -\frac{i}{2}\left[ z_i^*
({\partial}_{\mu}z_i)-({\partial}_{\mu}z_i^*) z_i\right] .
\eeq
This gauge field is related to the topological current of the vortex by
\begin{eqnarray}
J^{\mu} &=& \frac14 \epsilon^{\mu \nu \rho} \epsilon^{abc} n^a \partial_{\nu} n^b \partial_{\rho} n^c \\
&=& \epsilon^{\mu \nu \rho} \partial_{\nu} A_{\rho} \ ,
\end{eqnarray}
where the current is normalized to $2\pi$ per unit of topological charge.
It is now possible to express the Hopf charge (the charge of $\pi_3(S^2)=\mathbb{Z}$) as a local function of the gauge field $A$, simply using the Chern--Simons term for the auxiliary gauge field:
\beq
s=\frac{1}{4\pi^2} \int d^3x \epsilon^{\mu\nu\rho} A_{\mu} \partial_{\nu} A_{\rho}\,.
\label{hopfnumb1}
\eeq
We can verify this formula in the following way.
Since the Hopf charge is a topological invariant, we can compute it on a configuration with cylindrical symmetry, a vortex in the $x,y$ plane, extended in the $z$ direction, that makes a $2\pi$ twist of the internal phase, as $z$ goes from $-\infty$ to $+\infty$.
Any configuration in this topological class must give the same answer for the Hopf integral. We can thus choose an adiabatic rotation, in which the $z$ variation happens on a scale much longer than the vortex size. This allows the following simplifications. We can neglect the term $A_i J_i$ and keep only $A_0 J_0$. Furthermore, for the same reason, the integral factorizes as $\int dz\, A_0 \int d^2 x\, J_0$. The integral of $J_0$ gives the total magnetic flux, which is $2\pi$. The integral of $A_0$ gives the total phase rotation, which is also $2\pi$. This fixes the $4\pi^2$ normalization factor in (\ref{hopfnumb1}).
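Schematically (a restatement of the estimate just described, not an independent computation),
\beq
s\simeq \frac{1}{4\pi^2}\int dz\, A_0 \int d^2 x\, J_0
=\frac{1}{4\pi^2}\,(2\pi)\,(2\pi)=1\,,
\eeq
in agreement with the normalization of (\ref{hopfnumb1}).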
\subsubsection{Introducing massive fermions}
\label{massive}
The Skyrme--Faddeev model neglects excitations belonging to the odd sector of
the Hilbert space. Let us now include them.
We can write an effective Lagrangian that includes both the pions and the
fermions as follows:
\begin{equation}
\mathcal{L}_{\mathrm{eff}}=F_{\pi }^{2}\left[ \frac{1}{2}\partial _{\mu }%
\vec{n}\cdot \partial ^{\mu }\vec{n}+\bar{\Psi}\left( i\gamma ^{\mu
}\partial _{\mu }-g\vec{n}\cdot \vec{\tau}\right) \Psi +\dots \right]
\label{lagrangiana}
\end{equation}%
where $\Psi $ is a Majorana spinor defined as
\begin{equation}
\Psi =\frac{1}{2F_{\pi }}\left(
\begin{array}{c}
\psi \\
\sigma _{2}\tau _{2}\psi ^{\ast }%
\end{array}%
\right) \ . \label{Majorana}
\end{equation}%
If we expand around $\vec{n}$ rotated in the third direction, we find as
expected the two pions plus a massive Dirac fermion:
\begin{equation}
\mathcal{L}_{\mathrm{eff}}= \partial _{\mu }\pi ^{*}\partial
^{\mu }\pi + \bar{\Psi}_{\mathrm{D}}\left( i\gamma ^{\mu }\partial _{\mu }-m\right)
\Psi _{\mathrm{D}}+\mathrm{interactions~} \ , \label{lagrangianslowly}
\end{equation}%
where the mass is $m=gF_{\pi }$.
\subsubsection{Fermion impact on the Hopf soliton}
\label{impa}
In a slowly varying field configuration background, that is, a
sufficiently wide soliton, the induced fermion quantum numbers can be
evaluated using the Goldstone--Wilczek (GW) technique \cite{Goldstone:1981kk}.
Let us now return to our problem in $3+1$ dimensions. The Hopf
term becomes a current%
\begin{equation}
j_{\mathrm{Hopf}}^{\mu }=\frac{1}{4\pi ^{2}}\epsilon ^{\mu \nu \rho \sigma
}A_{\nu }^{3}\partial _{\rho }A_{\sigma }^{3} \ ,
\end{equation}%
and it is normalized so that $\int d^{3}x\,j_{\mathrm{Hopf}}^{0}=1$ on
the background of a Hopf Skyrmion of charge $1$.
For a slowly varying field configuration, we can use the Goldstone--Wilczek method. We orient
$\vec{n}$ in the third direction and we obtain the Lagrangian
(\ref{lagrangianslowly}), that is one massive Dirac fermion $\Psi
_{\mathrm{D}}$ plus pions.
The fermion currents are $j^{\mu
}=\bar{\Psi}_{\mathrm{D}}\gamma ^{\mu }\Psi _{\mathrm{D}}$ and
$j_{5}^{\mu }=\bar{\Psi}_{\mathrm{D}}\gamma ^{\mu }\gamma ^{5}\Psi
_{\mathrm{D}}$, with the respective charges $Q=\int d^{3}x\,j^{0}$ and
$Q_{5}=\int d^{3}x\,j_{5}^{0}$. $Q$ corresponds to the exactly
conserved $U(1)$ charge, while $Q_{5}$ is the fermion number, conserved modulo $2$. The induced $Q$ charge is zero simply because the left
left
fermion and the right fermion give opposite contributions.
The induced $Q_{5}$ is not zero.
We do not need to compute the corresponding diagram from scratch,
since this has already been done: it is nothing but the ABJ
anomaly. The anomaly of
the axial current is%
\begin{equation}
\partial _{\mu }j_{5}^{\mu }=\frac{1}{8\pi ^{2}}\widetilde{F}^{\mu \nu
}F_{\mu \nu }+\mathrm{mass~,}
\end{equation}%
which can just be rewritten as%
\begin{equation}
\partial _{\mu }j_{5}^{\mu }=\partial _{\mu }j_{\mathrm{Hopf}}^{\mu }+%
\mathrm{mass~.}
\end{equation}%
The induced $Q_{5}$ charge is equal to the Hopf charge, modulo $2$.
We are now ready to discuss the stability of the Skyrmion.
The theory contains three kinds of particles, whose charges are summarized in Table~\ref{tabtwo}.%
\begin{table}[tbp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& \textrm{Charge }$Q$ & $F $\rule{0mm}{5mm} \\ \hline
$\psi $ & $1$ & $1$\rule{0mm}{5mm} \\ \hline
$\pi $ & $2$ & $0$\rule{0mm}{5mm} \\ \hline
\textrm{Skyrmion/exotics} & {$
\begin{array}{c}
0 \\
1
\end{array}
$} & {$
\begin{array}{c}
1 \\
0
\end{array}
$} \rule{0mm}{8mm} \\ \hline
\end{tabular}
\end{center}
\caption{{\protect\footnotesize $Q$ and $F$ mod 2 for nonexotic and exotic
hadrons.}}
\label{tabtwo}
\end{table}
The Hopf Skyrmion can have charges $Q$ and $F$ respectively $0$ and $1$, if
there is no fermion zero mode crossing in the process of evolution from the
topologically trivial background to that of the Hopf Skyrmion, or $1$ and $0$
if there is a fermion zero mode crossing.\footnote{
A relevant discussion of the fermion zero mode crossings in 2+1 dimensions
can be found in \cite{Carena:1990vy}.}
In both cases the lightest exotic hadrons represented by the Hopf Skyrmion
are stable. They cannot decay into any number of pions and/or pions plus
``ordinary" baryons with mass $O(n^0)$. Note that this is a $\mathbb{Z}_{2}$
stability: two Hopf Skyrmions can annihilate and decay into an array of
$\pi$'s and $\psi$'s. For nonexotic hadron excitations, which can be seen in a
constituent model and have mass $O(n^0)$, the combination $Q+F$ is always
even, while for exotic hadrons with mass $O(n^2)$ the sum $Q+F$ is odd.
The Goldstone--Wilczek (GW) method is not an exact procedure. It becomes exact only in the limit where the soliton is
so large that the variations of the fields out of which it is built can be treated adiabatically.
To compute exactly the fermion number induced on a soliton background, a complete analysis of the Dirac operator and its spectrum is needed (see, for example, \cite{Jackiw:1975fn,Niemi:1984vz}).
One of the first signals of the breakdown of the GW technique is a zero mode crossing in the Dirac spectrum.
The large-$n$ limit is a weak coupling limit for ${\cal L}_{\rm eff}$, but it {\it is not} the limit in which the Goldstone--Wilczek approximation becomes exact.
In the large-$n$ limit the soliton size is constant, i.e.\ of order $n^0$, while the coupling constant decreases like $1/n$.
Whether or not the GW method is valid is encoded in the relation between the quadratic and the quartic terms of the effective Lagrangian; these are the quantities that determine the size of the soliton. As noted above, a zero mode crossing can change the quantum numbers of the soliton, but it does not alter the conclusion about the $\mathbb{Z}_2$ stability.
\subsubsection{Low-energy effective action}
\label{lowenergy}
In order to generalize to the case $n_f>2$, one must express the coset (\ref{target})
in a way that makes ``evident'' the action of the
$\mathrm{SU}(n_f)$ symmetry.
In the case of ${\rm SU}(2)/{\rm U}(1)$,
it was easy since using the representation with the unit vector $\vec{n}$ makes
evident how it transforms under ${\rm SU}(2)$ rotations. The fermion interaction also follows easily, see (\ref{lagrangiana}). However,
the $n_f =2$ case can be somewhat misleading for the generalization to higher $n_f$.
${\rm SU}(2)$ can be represented as the sphere $S^3$ in the
four-dimensional vector space generated by the identity and the Pauli
matrices $\sigma_i$. Intersecting this sphere with the hyperplane
generated by the Pauli matrices,
we get an $S^2$ that is in one-to-one correspondence with the coset space
SU(2)/U(1).
Moreover, this intersection tells us exactly how the
${\rm SU}(2)$ symmetry acts on the coset; it is the space $\{ \vec{n}\}$ of
the unit vectors.
Another possibility, though, is to intersect the sphere with
the hyperplane of the symmetric matrices generated by
$1,\, \sigma_1,\, \sigma_3$.
This is again a sphere $S^2$ and is again in
one-to-one correspondence with the coset manifold. There is no
contradiction with the symmetry properties since for ${\rm SU}(2)$ the
adjoint representation is equivalent to the two-index symmetric and
traceless representation.
This is a consequence of equivalence between the
fundamental and the antifundamental representations in SU(2).
To generalize this construction to
higher $n_f$, we have to use the symmetric matrices. The space we get is
in one-to-one correspondence with the coset $\mathcal{M}_{n_f}$ and is
an explicit realization of its symmetric properties under the action
of the $\mathrm{SU}(n_f)$ group. We have thus a two-index symmetric matrix
that can be saturated by the fermion bilinear
$\psi^{a\alpha}\psi^{b\beta}\epsilon_{\alpha\beta}$.
The proper mathematical way to describe this is by using the Cartan embedding.
The general element of the quotient $\mathcal{M}_{n_f}=
\mathrm{SU}(n_f)/\mathrm{SO}(n_f)$ can be written
in a compact form as $U\cdot\mathrm{SO}(n_f)$, where $U$ is an $\mathrm{SU}(n_f)$ matrix
(different $U$'s in $\mathrm{SU}(n_f)$ correspond to the same
$\mathcal{M}_{n_f}$ element, modulo a right product with an arbitrary
$\mathrm{SO}(n_f)$ element). The map
\beq
U\cdot\mathrm{SO}(n_f) \rightarrow W=U\cdot U^t \,,
\label{mapm}
\eeq
where the superscript $t$ denotes transposition,
is well-defined on the quotient because for the $\mathrm{SO}(n_f)$ matrices
the inverse is equal to the transposed matrix.
Equation (\ref{mapm}) presents a one-to-one map between $\mathcal{M}_{n_f}$
and the submanifold of
the matrices of $\mathrm{SU}(n_f)$, which are both unitary and symmetric.
The Lagrangian of the Skyrme model with the target space $\mathcal{M}_{n_f}$
can be computed by evaluating the Lagrangian of the $\mathrm{SU}(n_f)$ Skyrme model
on the symmetric unitary matrix $W$,
\begin{eqnarray}
\mathcal{L} &=& \frac{F_\pi^2}{4} \mathcal{L}_2+ \frac{1}{e^2} \mathcal{L}_4 \nonumber\\[2mm]
&\equiv & \frac{F_\pi^2}{4} {\rm Tr} \left(\partial_\mu W \partial^\mu W^\dagger\right)+
\frac{1}{e^2} {\rm Tr} \left[ (\partial_\mu W) W^\dagger,(\partial_\nu W) W^\dagger \right]^2
\,.
\label{lagretta}
\end{eqnarray}
\subsubsection{Gauged formulation}
In the $n_f=2$ case, there are two ways to parametrize the target space $S^2$.
One can use a vector $\vec{n}$ subject to the constraint $|\vec{n}| =1$. This is the so-called O(3) formulation.
Another approach is the $z_i$ formulation where it is possible to express the Hopf charge (the charge of $\pi_3(S^2)=Z$) as a local function of the gauge field $A$. An equivalent local expression in terms of the $\vec{n}$ field is impossible.
Generalization to higher $n_f$ \textit{ is not} achieved by extending the doublet to a complex
$n_f$-plet. For $n_f=2$, this strategy works because ${\rm SU}(2)$ is equivalent to the
sphere $S^3$. In order to generalize to higher $n_f$, we need to start with an $\mathrm{SU}(n_f)$
sigma model and then gauge an $\mathrm{SO}(n_f)$ subgroup.
Let us consider the exact sequence
\[ \ldots \rightarrow \pi_3\left({\rm SO}(k)\right) \rightarrow \pi_3\left({\rm SU}(k)\right)
\rightarrow
\pi_3\left({\rm SU}(k)/{\rm SO}(k)\right)\rightarrow \pi_2\left({\rm SO}(k)\right)
\rightarrow \ldots \]
For every $k$, we have $\pi_2\left({\rm SO}(k)\right)=0$. Therefore,
every non-zero element of $\pi_3\left({\rm SU}(k)/{\rm SO}(k)\right)$
can be lifted to a non-zero element of $ \pi_3\left({\rm SU}(k)\right)$
(for $n_f>2$ this lifting is not unique, as we will discuss below).
Then we can calculate the $S^3$ winding number
of the lifted 3-cycle,
using the $\mathrm{SU}(n_f)$ result,
\beq
s=-\frac{i}{24 \pi^2} \int_{S^3} {\rm Tr} \, (U^\dagger dU)^3 \rule{0mm}{8mm}\,.
\eeq
It is possible to present the topological winding number as an
${\rm SU}(n_f)$ Chern--Simons current.
Let us introduce
\beq \mathcal{A}_\mu=i U^\dagger \partial_\mu U\,.
\eeq
Then
\beq s=\frac{1}{8 \pi^2} \int d^3 x K^0, \,\,\,\,
K^\mu=\epsilon^{\mu \nu \rho \sigma} \,
\mathrm{Tr} \left(\mathcal{A}_\nu \partial_\rho \mathcal{A}_\sigma-
\frac{2}{3}\,i \, \mathcal{A}_\nu \mathcal{A}_\rho \mathcal{A}_\sigma\right).
\label{chs}\eeq
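As a rough consistency sketch (assuming the abelian embedding $\mathcal{A}_\mu = A_\mu\,\tau^3$, a choice not spelled out here), the cubic term in $K^\mu$ drops out and
\beq
s=\frac{1}{8\pi^2}\int d^3x\,\epsilon^{0\nu\rho\sigma}\,
{\rm Tr}\left(\tau^3\tau^3\right) A_\nu \partial_\rho A_\sigma
=\frac{1}{4\pi^2}\int d^3x\,\epsilon^{0\nu\rho\sigma} A_\nu \partial_\rho A_\sigma\,,
\eeq
which has the same structure as the Hopf charge (\ref{hopfnumb1}).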
As previously discussed, $s$ is defined modulo $4$ for $n_f=3$
and modulo $2$ for $n_f>3$, due to the arbitrariness in the choice of the lift $U$: two lifts differ by an element of the image of $\pi_3\left({\rm SO}(n_f)\right)$ in $\pi_3\left({\rm SU}(n_f)\right)$, which is generated by $4$ for $n_f=3$ and by $2$ for $n_f>3$.
\subsubsection{The fermion interaction}
Let us consider an $\mathrm{SU}(n_f)$ representative $U$
of a quotient class in $\mathcal{M}_{n_f}$.
The $\mathrm{SU}(n_f)$ symmetry group acts on $U$ as
$U \rightarrow R\cdot U$.
The action on the Cartan embedding image ($W=U\cdot U^t$) is
\begin{equation}
W \rightarrow R\cdot W\cdot R^t\,.
\end{equation}
Due to this property, we can write down the fermion coupling
for arbitrary $n_f$ as
\begin{equation}
-\frac{g}{2}\left\{ W^{fg} \psi_{\alpha f} \psi^\alpha_{g}+ \mathrm{h.c.}
\right\}.
\end{equation}
To the lowest order, the effective Lagrangian that includes both
pions and the fermions $\psi_{\alpha a}$ is
\beq
\mathcal{L}=\frac{F_\pi^2}{4} {\rm Tr} \, (\partial_\mu W \partial^\mu W^\dagger)+
\bar{\psi}_{f \dot{\alpha}} i \partial^{\dot{\alpha} \alpha} \psi_{f \alpha}
-\frac{g}{2}\left\{ W^{fg} \psi_{\alpha f} \psi^\alpha_{g}+ \mathrm{h.c.} \right\}.
\label{leffe}
\eeq
If we expand around the vacuum where $W$ is given by the identity matrix,
the fermionic part of the Lagrangian is given by
\beq \mathcal{L}_{\rm ferm}=
\bar{\psi}_{f \dot{\alpha}} i \partial^{\dot{\alpha} \alpha} \psi_{f \alpha}
-g \left\{ \psi^\alpha_f \psi_{\alpha f} + \mathrm{H.c.} \right\}.\eeq
Of course, there are
interactions between these fermions and the Goldstone bosons.
\subsubsection{Skyrmions in \boldmath{$\mathrm{SO}(n)$} QCD}
\label{SOgaugetheory}
Now we consider another parental theory: SO$(n)$ gauge theory with
$n_f$ Weyl quarks in the vectorial representation. Such a theory
can be viewed as a ``parental" microscopic theory because it has
the chiral symmetry breaking
\begin{equation}
\mathrm{SU}(n_f)\times \mathbb{Z}_{4 n_f}\to \mathrm{SO}(n_f) \times
\mathbb{ Z} _{2} \,, \label{paterso}
\end{equation}
which, apart from the discrete factors, is the same as SU$(n)$
Yang--Mills with adjoint Weyl quarks, see Eq.~(\ref{pater}).
The low-energy effective Lagrangian is again a nonlinear sigma model
with the target space
${\cal M}_{n_f}$.
The ``baryon number" symmetry,
which rotates all charge-$1$ Weyl quarks is also anomalous;
the anomaly-free part is $\mathbb{Z}_{4 n_f}$.
This discrete symmetry is then broken down to $\mathbb{Z}_2$ by the fermion condensate.
There are
some differences from adjoint QCD.
One of them is that the coupling constant squared, $F_{\pi}^{2}$,
scales as $n$ rather than $n^2$. This means, in turn, that now the
Skyrmion soliton is an object whose mass scales as $n$.
Moreover, the fermion $\psi$ (see Eq.~(\ref{mafe})) is absent
in the spectrum.
The Skyrmion in the SO$(n)$ theory has already been matched with
the stable particle construction in the microscopic theory.
This identification is due to Witten \cite{Witten:1983tx}. He
argued that the Skyrmion corresponds to the baryon constructed of
$n$ quarks,
\begin{equation}
\label{barionSO} \epsilon_{\alpha_1 \alpha_2 \dots \alpha_{n}}
q^{\alpha_1} q^{\alpha_2} \dots q^{\alpha_{n}}\,.
\label{62}
\end{equation}
As was discussed in Ref.~\cite{Wittenbaryonvertex},
the gauge theory actually has an $\mathrm{O}(n)$ symmetry;
the quotient $\mathbb{Z}_2=\mathrm{O}(n)/\mathrm{SO}(n)$
acts as a global symmetry group.
All particles built with the $\epsilon_{\alpha_1 \alpha_2 \dots \alpha_{n}}$
symbol are odd under this symmetry. This means
that the baryon (\ref{62}) is stable under decay into massless
Goldstone bosons while two baryons can freely annihilate.
From Eq.~(\ref{62}),
we can infer information about other quantum numbers of the
Skyrmion. Its $\mathbb{Z}_2$ fermion number is given by $n$ modulo $2$, and
its flavor representation is contained in the tensor product of $n$ vectorial
representations. This is consistent with the computation carried out in conventional
QCD (with fundamental quarks)
in Ref.~\cite{Balachandran:1982dw}.
By the same token, we can argue that there is a similar contribution
to the fermion number of the Skyrmion in adjoint QCD,
which is proportional to $n^2-1$.
As discussed in Sect.~\ref{stability}, the composite fermion $\psi$
(which is absent in the SO$(n)$ theory)
will give an extra contribution to the Skyrmion fermion number, shifting it
by one unit.
\subsubsection{WZNW term}
\label{WZNW}
We can write the WZNW term for the $\mathcal{M}_{n_f}$ sigma model ($n_f\geq 3$)
by evaluating the ${\rm SU}(n_f)$ Wess--Zumino--Novikov--Witten term on the symmetric unitary matrices $W$
introduced in Eq.~(\ref{mapm}), namely
\begin{eqnarray}
\Gamma
&=&
-\frac{i}{240 \pi^2} \int_{B_5} d\Sigma^{\mu \nu \rho \sigma
\lambda} \mathrm{Tr} \left[ (W^\dagger \partial_\mu W)\cdot (W^\dagger
\partial_\nu W) \right.
\nonumber\\[3mm]
&\cdot&
\left. (W^\dagger \partial_\rho W) \cdot (W^\dagger \partial_\sigma W)\cdot
(W^\dagger \partial_\lambda W) \right].
\label{weszu}
\end{eqnarray}
In order to compute the WZNW term for the $\mathcal{M}_{n_f}$ sigma model,
we need to take the result for $\mathrm{SU}(n_f)$
and restrict it to the submanifold of the unitary symmetric matrices.
There is a subtle difference regarding the possible coefficients allowed
for $\Gamma$ in the action.
In the Lagrangian of the $\mathrm{SU}(n_f)$ sigma model, relevant for
QCD Skyrmions, the WZNW term must have just integer coefficient $k$,
\begin{equation}
\mathcal{L}= \mathcal{L}_2+ k \, \Gamma + \rm{Higher \, order \, terms}\,.
\end{equation}
This is due to the fact that the integral of this term
on an arbitrary $S^5$ submanifold of $\mathrm{SU}(n_f)$ must be an integer multiple of $2 \pi$.
In the $\mathcal{M}_{n_f}$ sigma model relevant
for adjoint QCD, we need to use the same normalization prescription.
The main difference is that if we integrate $\Gamma$ on
the minimal $S^5$ that we can build inside
the submanifold of symmetric unitary matrices of $\mathrm{SU}(n_f)$, the result will be
$4 \pi$ rather than $2 \pi$, as we get for the generator of $\pi_5\left(\mathrm{SU}(n_f)\right)$.
Therefore, if we restrict ourselves to this subspace
it is consistent to also consider half-integer values of $k$.
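In formula form (again a restatement, with no new input): the weight $e^{ik\Gamma}$ of the WZNW term must be single-valued, and on this submanifold the ambiguity of $\Gamma$ is a multiple of $4\pi$, so
\beq
k\cdot 4\pi \in 2\pi\,\mathbb{Z}
\quad\Longleftrightarrow\quad
k\in \frac{1}{2}\,\mathbb{Z}\,.
\eeq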
To determine the actual value of $k$ by anomaly matching, let us gauge the U(1) subgroup generated by
\begin{equation}
Q=\pmatrix{ 0 & i & 0 \cr -i & 0 & 0 \cr 0 & 0 & 0 }.
\end{equation}
Let us take
\begin{equation}
T_{\kappa_1}=\pmatrix{ 1 & 0 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & -2 }\,,
\end{equation}
which corresponds to the Goldstone boson $\pi_3$. We then find
\begin{equation}
\langle\partial_\mu J^\mu_{\kappa_1}\rangle =\frac{n^2-1}{16
\pi^2} \epsilon^{\kappa \nu \lambda \rho} F_{\kappa \nu} F_{\lambda
\rho}\,.
\end{equation}
At this point,
we can match this value with the one found from the
low-energy theory.
We obtain in this way that the coefficient in
front of the WZNW term is
\begin{equation}
k=\frac{n^2-1}{2}.
\end{equation}
The crucial $1/2$ factor comes from the fact that
we consider a theory with the Weyl fermions rather than
Dirac fermions as is the case in QCD.
Note that $k$ is half-integer for $n$ even
and integer for $n$ odd.
Using the arguments discussed previously,
it is straightforward to compute the coefficient $k$ of the WZNW term in the
low-energy effective action of the SO$(n)$ theory. The triangle diagram
is completely similar to that in the adjoint QCD case. The coefficient comes out
different
due to a different number of ultraviolet degrees of freedom.
The result is
\begin{equation}
k=\frac{n}{2}.
\end{equation}
It immediately follows that for $n$ odd the Skyrmion is a fermion
while for $n$ even it is a boson.
\begin{figure}[h!tb]
\epsfxsize=4.5cm \centerline{\epsfbox{anomaly.eps}}
\caption{{\footnotesize The WZNW term is responsible for anomaly
matching between the ultraviolet (microscopic) theory and the low-energy effective
Lagrangian (macroscopic description). The anomalies
in question are given by triangle graphs symbolically depicted in this figure, with flavor
currents in the vertices; they are blind with respect to the gauge indices. The only
information about the gauge structure comes from the trace in the
loop. }} \label{anomalyfig}
\end{figure}
Next, we have to evaluate the WZNW term (\ref{weszu}) on a $2\pi$ rotation of the Skyrmion.
The result is twice as large as the one we get in QCD,
\beq \Gamma= 2 \pi\,.
\eeq
With our conventions for the coefficient
$k$, it can be an integer or half-integer, depending on the number
of colors $n$.
$k$ is half-integer for $n$ even
and an integer for $n$ odd.
It immediately follows that the Skyrmion is quantized as a fermion for
$n$ even and as a boson for $n$ odd.
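In both cases the statistics can be read off from the phase picked up under the $2\pi$ rotation (a restatement of the quantization argument, with no new input): since $\Gamma=2\pi$,
\beq
e^{\,i k \Gamma}=e^{\,2\pi i k}=
\left\{
\begin{array}{ll}
+1\,, & k\in\mathbb{Z}\,,\\[1mm]
-1\,, & k\in\mathbb{Z}+\frac{1}{2}\,,
\end{array}
\right.
\eeq
so a half-integer $k$ makes the Skyrmion a fermion, while an integer $k$ makes it a boson.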
\subsubsection{Skyrmion stability due to anomaly}
\label{stability}
This section is central for the understanding of the
Skyrmion stability in the microscopic theory.
In order to generalize to higher $n_f$, one must consider
the triangle anomaly $${\rm U}(1)-\mathrm{SO}(n_f)-\mathrm{SO}(n_f)\,.$$
The U(1) corresponds to the fermion number. For $\mathrm{SO}(n_f)$ we introduce an
auxiliary gauge field. The anomaly is
\begin{equation}
\partial_{\mu}J_{F}^{\mu}=\frac{1}{16 \pi^2} \mathrm{Tr}(F^{\mu\nu}
\widetilde{F}_{\mu\nu}) = \frac{1}{8 \pi^2} \partial_\mu K^{\mu},
\end{equation}
where $F_{\mu\nu}=F^k_{\mu\nu} T^k$, with $T^k$ standing for the generators of
$\mathrm{SO}(n_f)$ (normalized as ${\rm Tr}\, (T_jT_k)=\delta_{jk}$),
and $K_\mu$ is given in Eq.~(\ref{chs}).
The net effect of the baryon $\psi$ with mass $O(n^0)$ is to shift the
Skyrmion fermion
number by one unit, without changing its statistics.
For $n$ odd, the Skyrmion is a boson with an odd fermion number.
For $n$ even, it is a fermion with an even fermion number.
The relationship between the Skyrmion statistics and fermion number is abnormal.
In both cases, it is a $\mathbb{Z}_2$-stable object,
because in the ``perturbative" spectrum the normal relationship
between the fermion number and statistics takes place.
\section{Introduction}
\input{intro}
\section{Orientifold QCD}
\label{twoindex}
\input{twoindex}
\section{Adjoint QCD}
\label{adjoint}
\input{adjoint}
\section{Conclusion}
\input{conclu}
\section*{Acknowledgements}
The first part of the project began when I was in Copenhagen, funded by the Marie
Curie Grant MEXT-CT-2004-013510. I want to thank F.~Sannino for useful conversations there.
The second part of the paper has been carried out in Minneapolis at FTPI. I am grateful to R.~Auzzi for useful discussions and the collaboration \cite{Auzzi:2008hu} on which the last part of this contribution is based.
I want to thank especially M.~Shifman for many useful discussions and collaborations
\cite{Bolognesi:2007ut,Auzzi:2008hu}.
I want also to thank A.~Armoni for comments on the manuscript.
This review grew out of a seminar given in September 2007 at the Isaac Newton Institute in Cambridge. I want to thank the people there for the invitation and for useful discussions.
My work is now supported by DOE grant DE-FG02-94ER40823.
\subsection{Large $n$ Limit}
\label{largen}
In order to have a well-defined large $n$ limit, we keep the 't~Hooft coupling $g^{2}\,n$ fixed.
At large $n$, the theory reduces to an infinite tower of weakly coupled hadrons
whose interaction strength vanishes like $n^{-2}$. The large $n$ behavior of
orientifold QCD is very similar to that of theories with fermions in the adjoint
representation. The dependence upon the number of colors of
the meson coupling can be evaluated using the planar diagrams presented in
Figure \ref{effepai} and paying attention to the hadron wave function
normalization.
\begin{figure}
[h!t]
\begin{center}
\leavevmode \epsfxsize 6.5 cm \epsffile{effepai.eps}
\caption{{\protect\footnotesize The $n$ dependence of the meson
coupling $F_{\pi}$.}}%
\label{effepai}%
\end{center}
\end{figure}
We will denote the decay constant of the typical meson by $F_{\pi}$.
Using a double line notation, the Feynman diagrams can be arranged according to
the topology of the surface related to the diagram. The $n$ powers of the
Feynman diagrams can be read off from two topological properties of the
surface: the number of handles and the number of holes. Every handle carries a
factor $n^{-2}$, and every hole carries a factor $n^{-1}$. In the ordinary 't
Hooft limit, where the quarks are taken in the fundamental representation, the
holes are given by the quark loops. In the higher-representation case, quarks
are represented by double lines, just like the gluons, and so there are no holes. The contribution to $F_{\pi}$ in the large $n$ limit can thus be
arranged as in Figure \ref{torus} where the leading order is a quark closed
double line with planar quarks and gluons inside, and the next subleading
order is given by adding a handle. The leading order scales like $n^{2}$ while
the subleading order scales like $n^{0}$.
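A compact way to summarize this counting (standard large-$n$ bookkeeping, not spelled out in the figure) is that a connected diagram drawn on a surface with $h$ handles and $b$ holes scales as
\beq
n^{\,2-2h-b}\,,
\eeq
so for two-index quarks, where $b=0$, the leading contribution scales as $n^2$ and each additional handle costs a factor of $1/n^{2}$, consistent with Figure \ref{torus}.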
\begin{figure}
[h!t]
\begin{center}
\leavevmode \epsfxsize 6.5 cm \epsffile{torus.eps}
\caption{\footnotesize First-order and second-order
contributions to
the three-meson interaction.}
\label{torus}
\end{center}
\end{figure}
The previous color counting is not affected by the addition of a finite number
of flavors.
\subsection{Effective Lagrangians, Anomalies, and Skyrmions}
\label{effective}
With $n_{f}$ {\it massless} flavors, the Lagrangian (\ref{lagrangian}) has
global symmetry ${\rm SU}(n_{f})_{{\rm L}}\times {\rm SU}(n_{f})_{{\rm R}}$. The global chiral
symmetry is expected to dynamically break to its maximal diagonal subgroup by the quark condensate.
The low-energy effective
Lagrangian describes the dynamics of the massless mesons that are the
Nambu-Goldstone bosons of the spontaneous chiral symmetry breaking. Written in
terms of the matrix $U(x)=\exp\left(i \pi(x) / F_{\pi}\right) $,
where $\pi(x)$ is the Goldstone boson matrix, the effective Lagrangian is%
\begin{equation}
S_{\mathrm{eff}}=\frac{1}{16}F_{\pi}^{2}\int d^{4}x\,\left\{ \mathrm{Tr}%
\partial_{\mu}U\partial^{\mu}U^{-1}+\mathrm{higher~derivatives}\right\}
+ k \Gamma_{\mathrm{WZNW}} \ .
\end{equation}
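As a quick check of the normalization (a routine expansion, using only the definitions above), writing $U=\exp\left(i\pi/F_{\pi}\right)$ and keeping the quadratic terms gives
\beq
S_{\mathrm{eff}}\supset \frac{1}{16}\int d^{4}x\,\mathrm{Tr}\,
\partial_{\mu}\pi\,\partial^{\mu}\pi+\dots\,,
\eeq
so the $F_{\pi}^{2}$ prefactor cancels in the Goldstone kinetic term, while every additional meson leg costs a power of $1/F_{\pi}$; this is the effective-Lagrangian counterpart of the weak coupling among hadrons at large $n$.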
The topological Wess-Zumino-Novikov-Witten (WZNW) term $\Gamma_{\mathrm{WZNW}}$ is crucial in order
to satisfy the 't Hooft anomaly conditions at the effective Lagrangian level.
Gauging the WZNW term with respect to the electromagnetic interactions yields
the familiar $\pi^{0} \rightarrow 2\gamma$ anomalous decay. The WZNW term can be
written as
\beq
\Gamma_{\mathrm{WZNW}}=-\frac{i}{240\pi^{2}}\,\int_{\mathcal{M}^{5}}%
\epsilon^{\mu\nu\rho\sigma\tau}\mathrm{Tr}\left( \partial_{\mu}%
UU^{-1}\partial_{\nu}UU^{-1}\partial_{\rho}UU^{-1}\partial_{\sigma}%
UU^{-1}\partial_{\tau}UU^{-1}\right) \ ,
\eeq
where the integral must be performed over a five-dimensional manifold whose
boundary is ordinary Minkowski space. Quantum consistency of the theory
requires $k$ to be an integer. Matching with the underlying anomaly
computation requires $k$ to equal the number of color degrees of freedom carried by
the quarks: $k=n$ in the case of the fundamental representation, while $k=n(n\pm1)/2$ for orientifold QCD.
The low-energy effective theory supports solitonic excitations. In order to obtain
classically stable configurations, it is necessary to include at least a four-derivative term in addition to the usual two-derivative term.
Higher-derivative terms are certainly present in the low-energy effective Lagrangian and are crucial for the Skyrmion stability.
The Skyrmion is a texture-like solution of the effective Lagrangian
arising from the nontrivial third homotopy group of the possible
configurations of the matrix $U(x)$ (namely $\pi_{3}\left( {\rm SU}(n_{f})\right)
=\mathbb{Z}$). In the large $n$ limit, we can treat the effective Lagrangian as
classical, and, since the $n$ dependence appears only as a multiplicative
factor, the size and the mass of the Skyrmion scale, respectively, as $n^{0}$
and $n(n\pm1)/2$. Following \cite{Witten:1983tx}, we can read off the statistics and the
baryon number of the Skyrmion from the coefficient of the WZNW term. The baryon
number of the Skyrmion is $n(n\pm1)/2$ times the baryon number of the quarks,
and its statistics is fermionic or bosonic according to whether $n(n\pm1)/2$
is odd or even.
The results we have just obtained all point in the same direction. There
should exist in the spectrum of the theory a stable baryon that in the large
$n$ limit could be identified with the Skyrmion. This baryon should be
composed of $n(n\pm1)/2$ quarks, and its mass should scale as
$n^{2}$ in the large $n$ limit.
\subsection{The Puzzle}
\label{puzzle}
Now we introduce the problem we are going to face.
It has been noted in
\cite{Armoni:2003jk} that, at least at first glance, the identification
between baryons and Skyrmions in the large $n$ limit, for orientifold QCD, is problematic.
A natural
choice for the wave function of the baryon is the following
\begin{equation}
\label{naivebaryon}
\epsilon_{\alpha_{1}\alpha_{2}\dots\alpha_{n}}\epsilon_{\beta_{1}\beta
_{2}\dots\beta_{n}}\,Q^{\alpha_{1}\beta_{1}}Q^{\alpha_{2}\beta_{2}}\dots
Q^{\alpha_{n}\beta_{n}} \ . \label{firstguess}
\end{equation}
This baryon is formed of $n$ quarks and two epsilon tensors that saturate the indices. The first guess, since the number of constituents is $n$, is
that its mass scales like $n$ in the large $n$ limit. The mass of the
Skyrmion scales instead like $F_{\pi}^{2}$, which, for quarks in two-index representations, grows as $n^{2}$. This is the first discrepancy between the baryon (\ref{firstguess}) and the Skyrmion.
Let us recall, for a moment, the well-known case of ordinary QCD and briefly
consider the large $n$ behavior of its baryon.
The gauge wave function is%
\begin{equation}
\epsilon_{\alpha_{1}\dots\alpha_{n}}\,Q^{\alpha_{1}}\dots Q^{\alpha_{n}}~,
\label{ordinarybaryon}%
\end{equation}
and it is antisymmetric under the exchange of any two quarks. Since the quarks are
fermions, the total wave function $\psi_{\mathrm{gauge}}\psi
_{\mathrm{spin/flavor}}\psi_{\mathrm{space}}$ must be antisymmetric under the
exchange of two quarks. The simplest choice is to take a completely symmetric
spin wave function and a completely symmetric spatial wave function.%
\begin{equation}%
\begin{tabular}
[c]{ccc}%
$\psi_{\mathrm{gauge}}$ & $\psi_{\mathrm{spin/flavor}}$ & $\psi
_{\mathrm{space}}$\\
$-$ & $+$ & $+$%
\end{tabular}
\end{equation}
In the large $n$ limit, the problem can be approximated by a system of free
bosons in a mean field potential $V_{\mathrm{mean}}\left( r\right) $ created
by the quarks themselves. The ground state is a Bose-Einstein condensate; the
quarks are all in the ground state of the mean field potential. The large $n$
behavior of the baryon is the following%
\begin{equation}
R\sim\mathcal{O}\left( 1\right) ~,\qquad M\sim\mathcal{O}\left( n\right) \ ,
\end{equation}
where $R$ is the size of the baryon and $M$ its mass.
The key point in obtaining this result is that the many-body problem becomes
enormously simplified by the fact that the coupling constant scales like
$g^{2} \sim 1/n$ in the large $n$ limit. To find the mass in this many
body problem, we have to sum up all the contributions from $k$-body
interactions. The one-body contribution is simply $n$ times the mass of the
single quark. The two-body interaction is of order $1/n$ but
an additional combinatorial factor ${n \choose 2}$ is needed, and we obtain a
contribution to the energy of order $n$. In general, any $k$-body interaction
is of order $1/n^{k-1}$ in the planar limit, and multiplied by
the combinatorial factor ${n \choose k}$, it gives a contribution of order $n$.
The same argument implies that the mean field potential $V_{\mathrm{mean}}(r)$
is constant in the large $n$ limit and so also the typical size of baryon $R$ (roughly the width of the ground state wave function).
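Schematically (a rough bookkeeping of the argument above, with $m_{q}$ the single-quark contribution and $\varepsilon_{k}$ a typical $k$-body matrix element of order one, both symbols introduced here only for illustration),
\beq
M\;\sim\; n\, m_{q}\;+\;\sum_{k\geq 2}{n \choose k}\,\frac{\varepsilon_{k}}{n^{\,k-1}}
\;\sim\; n\, m_{q}\;+\;n\sum_{k\geq 2}\frac{\varepsilon_{k}}{k!}\;\sim\;\mathcal{O}\left( n\right)\,,
\eeq
so every $k$-body term contributes at the same order $n$, as stated above.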
These arguments are consistent with the low-energy effective Lagrangian point
of view. This Lagrangian is $L_{\rm eff}\sim n\left( \partial U\partial
U+\partial U\partial U\partial U\partial U+\dots\right)$, where $U$ is an
${\rm SU}(n_{f})$ matrix. Since $n$ is an overall multiplicative factor, the radius
of the Skyrmion is of order one while its mass is of order $n$.
Now let us go back to orientifold QCD.
The first step toward the solution of the puzzle is to realize that the naive expectation
that the mass of (\ref{firstguess}) scales like $n$ is not correct. The reason
is the following. The gauge wave function (\ref{firstguess}) is symmetric
under the exchange of two quarks. Since the total wave function must be
antisymmetric, this means that the space wave function must be antisymmetric.
The large $n$ baryon must
thus be approximated as a set of free {\it fermions} in a mean field potential.
Since fermions cannot all be in the same ground state, there is an extra term
in the energy coming from the Fermi zero temperature pressure. At this point,
one could hope that this extra term could compensate for the mismatch and make the
baryon mass scale like $n^{2}$. A more detailed analysis shows that this is not true.
In higher-representation QCD, the simplest baryon is (\ref{naivebaryon}).
If we exchange two quarks, say, for example, $Q^{\alpha_{1}\beta_{1}}$ and
$Q^{\alpha_{2}\beta_{2}}$, this is equivalent to the exchange of
$\alpha_{1}\alpha_{2}$ in $\epsilon_{\alpha_{1}\alpha_{2}\dots\alpha_{n}}$ and
$\beta_{1}\beta_{2}$ in $\epsilon_{\beta_{1}\beta_{2}\dots\beta_{n}}$. The
result is that the gauge wave function is symmetric under the exchange of two
quarks. This means that in order to have a total wave function that is
antisymmetric under the exchange, the spatial wave function $\psi_{\mathrm{space}%
}$ must be antisymmetric.%
\begin{equation}%
\begin{tabular}
[c]{ccc}%
$\psi_{\mathrm{gauge}}$ & $\psi_{\mathrm{spin/flavor}}$ & $\psi
_{\mathrm{space}}$\\
$+$ & $+$ & $-$%
\end{tabular}
\end{equation}
In the large $n$ limit, the problem can be approximated by a system
of free fermions in a mean field potential $V_{\mathrm{mean}}(r)$.
The ground state is a degenerate Fermi gas and is obtained by
filling all the lowest energy states of the mean field potential up to
the Fermi surface. Now there are two kinds of forces that enter
the game:
\begin{itemize}
\item[1)] Gauge forces scale like $n$ and are both repulsive and attractive;
\item[2)] Fermi zero-temperature pressure scales like $n^{4/3}$ and is only
repulsive.
\end{itemize}
We can thus immediately infer that the simplest baryon cannot be
matched with the Skyrmion: the mass of the Skyrmion goes like
$n^{2}$, while the mass of this baryon obviously cannot grow faster than
$n^{4/3}$.
Another discrepancy for the candidate baryon (\ref{firstguess}) comes from the
WZNW term of the effective Lagrangian. From this term, we can read off the
statistics and the baryon number of the Skyrmion. The baryon number is
$n(n\pm1)/2$, where $\pm$ stands, respectively, for the symmetric and
antisymmetric representations, and the statistics is fermionic or bosonic
according to whether $n(n\pm1)/2$ is odd or even. There is no way to recover
this number from the baryon (\ref{firstguess}).
The topological stability of the Skyrmion in the effective Lagrangian
indicates that, at least in the large $n$ limit, there should exist a stable
state composed of $n(n\pm1)/2$ quarks whose mass scales like
$n^{2}$. This is possible if there exists a color singlet wave function that
not only is composed of $n(n\pm1)/2$ quarks but is also completely
antisymmetric under their exchange. In what follows we are going to show that this
function exists and that $n(n\pm1)/2$ is the {\it unique} number of quarks
for which it exists. This also confirms the stability of these Skyrmion-baryons. In
fact, any baryonic particle with a smaller number of quarks must have the extra
contribution to its mass coming from the spatial Fermi statistics.
\subsection{The Matching of the Skyrmion}
\label{stablebaryons}
We have seen in the previous section that the simplest baryon (\ref{firstguess}) has a gauge wave
function that is symmetric under the exchange of two quarks. This has a drastic
consequence on its mass-vs.-$n$ behavior. In the
following, we will construct the only possible gauge wave function that is
completely antisymmetric under the exchange of any two quarks. We will find that the
required number of quarks, as expected from the Skyrmion analysis, must be
$n(n\pm1)/2$.
First of all, we introduce a diagrammatic representation of baryons that will be very useful in what follows. As described in Figure \ref{legenda}, we use {\it points} to indicate quarks $Q^{\alpha \beta}$ and {\it lines} to indicate epsilon tensors (or baryon vertices) $\epsilon_{\alpha_1 \dots \alpha_n}$. To have a gauge singlet baryon, we need to build a diagram of points and lines such that: $1)$ two lines pass through every point; $2)$ every line passes through $n$ points. Only in the antisymmetric case is it possible for the same line to pass twice through the same point.
\begin{figure}[h!t]
\begin{center}
\leavevmode \epsfxsize 9.5 cm \epsffile{legenda.eps}
\end{center}
\caption{\footnotesize We diagrammatically represent quarks with points and epsilon tensors with lines.
Two lines pass through every point. Every line connects $n$ points. In the case of the antisymmetric representation, a line can pass twice through the same point.}
\label{legenda}
\end{figure}
For example, the diagram corresponding to the simplest baryon (\ref{naivebaryon}) is given in Figure \ref{basic}.
\begin{figure}[h!t]
\begin{center}
\leavevmode \epsfxsize 6.5 cm \epsffile{basic.eps}
\end{center}
\caption{\footnotesize Diagrammatic representation for the simplest baryon of Eq.~(\ref{naivebaryon}).}
\label{basic}
\end{figure}
We shall now proceed to discuss separately the case of symmetric and antisymmetric representations.
\subsubsection{The symmetric representation}
We start from the simplest case: two colors $n=2$. We want to construct a gauge invariant
wave function that contains three quarks $\,Q^{\{\alpha_{1}\beta_{1}\}}$,
$Q^{\{\alpha_{2}\beta_{2}\}}$, and $Q^{\{\alpha_{3}\beta_{3}\}}$, and that is completely
antisymmetric under the exchange of any two of them. A natural guess to try is the triangular diagram of Figure \ref{duesymmetric}.%
\begin{figure}[th]
\begin{center}
\leavevmode \epsfxsize 2.2 cm \epsffile{duesymmetric.eps}
\end{center}
\caption{\footnotesize Diagrammatic representation for the simplest baryon for $n=2$ (Eq.~(\ref{duesymmetricbaryon})).}
\label{duesymmetric}
\end{figure}
The gauge wave function
that corresponds to the diagram is\footnote{Note in particular that the symmetric
representation for ${\rm SU}(2)$ is equivalent to the adjoint representation and a
gauge invariant antisymmetric wave function can easily be written as
$\epsilon_{abc}Q^{a}Q^{b}Q^{c}$ where $a,b,c$ are triplet indices. This wave
function is exactly the same as that of Eq. (\ref{duesymmetricbaryon}).}
\begin{equation}
\epsilon_{\alpha_{2}\alpha_{1}}\epsilon_{\beta_{2}\alpha_{3}}\epsilon
_{\beta_{1}\beta_{3}}\,Q^{\{\alpha_{1}\beta_{1}\}}Q^{\{\alpha_{2}\beta_{2}%
\}}Q^{\{\alpha_{3}\beta_{3}\}}~. \label{duesymmetricbaryon}%
\end{equation}
Let us now prove that this wave function is indeed antisymmetric under the exchange of two quarks.
We can proceed with the following algebraic steps:
\begin{eqnarray}
& ~~~~~~\,\epsilon_{\alpha_{1}\alpha_{2}}\epsilon_{\beta_{1}\alpha_{3}%
}\epsilon_{\beta_{2}\beta_{3}}\,Q^{\{\alpha_{1}\beta_{1}\}}Q^{\{\alpha
_{2}\beta_{2}\}}Q^{\{\alpha_{3}\beta_{3}\}} \nonumber \\
& =-\epsilon_{\alpha_{2}\alpha_{1}}\epsilon_{\beta_{1}\alpha_{3}}%
\epsilon_{\beta_{2}\beta_{3}}\,Q^{\{\alpha_{1}\beta_{1}\}}Q^{\{\alpha_{2}%
\beta_{2}\}}Q^{\{\alpha_{3}\beta_{3}\}}\nonumber\\
& =-\epsilon_{\alpha_{2}\alpha_{1}}\epsilon_{\beta_{1}\beta_{3}}%
\epsilon_{\beta_{2}\alpha_{3}}\,Q^{\{\alpha_{1}\beta_{1}\}}Q^{\{\alpha
_{2}\beta_{2}\}}Q^{\{\beta_{3}\alpha_{3}\}}\nonumber\\
& =-\epsilon_{\alpha_{2}\alpha_{1}}\epsilon_{\beta_{1}\beta_{3}}%
\epsilon_{\beta_{2}\alpha_{3}}\,Q^{\{\alpha_{1}\beta_{1}\}}Q^{\{\alpha
_{2}\beta_{2}\}}Q^{\{\alpha_{3}\beta_{3}\}} \ .
\label{pedestrianproof}
\end{eqnarray}
The first line corresponds to Eq.(\ref{duesymmetricbaryon}) with quarks $1$ and $2$ exchanged (the exchange is done in the epsilon tensors).
The three algebraic steps are the following:
\begin{itemize}
\item[(A$\rightarrow$B)] Exchange of $\alpha_{1}$ and $\alpha_{2}$ in the
$\epsilon$ brings a minus factor;
\item[(B$\rightarrow$C)] Renaming $\alpha_{3}$ as $\beta_{3}$, which
has no consequences;
\item[(C$\rightarrow$D)] Exchange of $\alpha_{3}$ and $\beta_{3}$ in the quark
also has no consequences.
\end{itemize}
In the final line, we recover exactly the wave function (\ref{duesymmetricbaryon}) but with a minus sign in front of it.
We now prove a general theorem that, besides confirming the existence and uniqueness of this fully antisymmetric gauge wave function of $n(n+1)/2$ quarks, also gives the recipe to construct it.
\begin{proposition}
\label{proposition1}
\ There is one and only one gauge wave function that is a gauge singlet and
completely antisymmetric under the exchange of two quarks. This wave function is
composed of $n(n+1)/2$ quarks $Q^{\{\alpha\beta\}}$ and spans the
completely antisymmetric subspace of the tensor product of $n(n+1)/2$
quarks $Q^{\{\alpha\beta\}}$.
\end{proposition}
\begin{proof}
Call $S$ the number of quarks in a hypothetical gauge wave function that
satisfies the previous conditions. We need two facts to prove the
proposition: ${\it 1)}$ Two indices $\alpha_{i}$ and $\beta_{i}$ of the same quark $Q^{\{\alpha_{i}\beta_{i}\}}$ {\it cannot} belong to the same
saturation line since they are symmetric under the exchange; ${\it 2)}$
Two quarks $Q^{\{\alpha
_{i}\beta_{i}\}}$ and $Q^{\{\alpha_{j}\beta_{j}\}}$ can be connected by {\it at most} one line. The reason is simply that exchanging them would give a plus sign
instead of the required minus sign. At this point, we are ready to build the fully antisymmetric wave function. We follow Figure \ref{generalsymmetric}.
\begin{figure}[th]
\begin{center}
\leavevmode \epsfxsize 6.5 cm \epsffile{generalsymmetric.eps}
\end{center}
\caption{\footnotesize The triangular diagram for the Skyrmion-baryon in the case of the two-index symmetric representation: $n(n+1)/2$ points connected by $n+1$ lines.}
\label{generalsymmetric}
\end{figure}
We start from the first quark $Q^{\{\alpha_{1}\beta_{1}\}}$ and draw the first saturation line that departs from this quark. This implies the presence of other $n-1$ quarks that we call $Q^{\{\alpha_{2}\beta_{2}\}} , \dots, Q^{\{\alpha_{n}\beta_{n}\}}$.
Let us now consider $Q^{\{\alpha_{2}\beta_{2}\}}$. One index is already saturated, and from the other one a new saturation line must depart. Due to property ${\it 2)}$, $n-2$ new quarks must be added. One of them, $Q^{\{\alpha_{n+1}\beta_{n+1}\}}$, can be in common with $Q^{\{\alpha_{1}\beta_{1}\}}$. We then take $Q^{\{\alpha_{3}\beta_{3}\}}$ and start another saturation line. It can pass through $Q^{\{\alpha_{n+2}\beta_{n+2}\}}$, but then $n-2$ other distinct quarks must be added. We go on in this way until we reach $Q^{\{\alpha_{n}\beta_{n}\}}$ and complete the saturation by adding the last line. In total we have $n + (n-1) + (n-2) +\dots +1 = n(n+1)/2$ quarks, whose indices are saturated by $n+1$ epsilon tensors.
What we have shown so far is that at
least $n(n+1)/2$ quarks are necessary if we require the complete antisymmetry of the gauge wave function. This is equivalent to saying that we have a lower bound on the number of quarks: $S \geq n(n+1)/2$.
Now we need to prove the existence and uniqueness of this wave function. Consider the tensorial
product of a certain number of quarks $Q^{\{\alpha\beta\}}$. Every quark must be
considered as a vector space of dimension $n(n+1)/2$ over which the
group ${\rm SU}(n)$ acts as a linear representation. Now we take the subspace of the
tensor product that is completely antisymmetric under the exchange. This subspace
is obviously closed under the action of the gauge group. If the number of
quarks is greater than $n(n+1)/2$, this subspace has dimension zero, and this is equivalent to an upper-bound on the number of quarks: $S \leq n(n+1)/2$.
Due to the previously found lower-bound, we can say that $S$ must be exactly $n(n+1)/2$.
If the number of quarks is exactly $n(n+1)/2$, the antisymmetric subspace
has exactly dimension one. We have thus proved that the completely antisymmetric space
of $n(n+1)/2$ quarks $Q^{\{\alpha\beta\}}$ is also a singlet of the
gauge group, since it is one-dimensional and must be closed under the gauge transformations.
\end{proof}
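A compact restatement of the dimension count used in the proof (assuming the standard identification of the one-quark color space with ${\rm Sym}^{2}\,\mathbb{C}^{\,n}$) is
\beq
\dim\,\Lambda^{S}\!\left({\rm Sym}^{2}\,\mathbb{C}^{\,n}\right)=
{n(n+1)/2 \choose S}\,,
\eeq
which vanishes for $S>n(n+1)/2$ and equals one precisely at $S=n(n+1)/2$; the resulting one-dimensional subspace is the gauge singlet constructed above. The same count with ${\rm Sym}^{2}$ replaced by $\Lambda^{2}$ gives the analogous number, $n(n-1)/2$, for the antisymmetric representation discussed below.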
The gauge wave function for general $n$ can be obtained by generalizing the
one of Figure \ref{duesymmetric} for $n=2$.
The baryon for $n=2$ does not need to be
antisymmetrized, because it is already antisymmetric under the exchange of any pair
of quarks. For generic $n$, antisymmetrization is needed. In the case $n=3$, for example, the antisymmetrizations with respect to the four quarks
$Q^{\{\alpha_{1}\beta_{1}\}}$, $Q^{\{\alpha_{2}\beta_{2}\}}$, $Q^{\{\alpha
_{3}\beta_{3}\}}$ and $Q^{\{\alpha_{4}\beta_{4}\}}$ are enough to guarantee the
complete antisymmetrization. It can be seen that the antisymmetrization with respect to the exchange
$Q^{\{\alpha_{1}\beta_{1}\}}\leftrightarrow Q^{\{\alpha_{2}\beta_{2}\}}$
implies the one with respect to $Q^{\{\alpha_{3}\beta_{3}\}}\leftrightarrow
Q^{\{\alpha_{5}\beta_{5}\}}$, and similarly for the two exchanges $Q^{\{\alpha
_{2}\beta_{2}\}}\leftrightarrow Q^{\{\alpha_{4}\beta_{4}\}}$ and
$Q^{\{\alpha_{3}\beta_{3}\}}\leftrightarrow Q^{\{\alpha_{6}\beta_{6}\}}$. We
thus have a sufficient number of exchanges to generate the complete
permutation group.
\subsubsection{The antisymmetric representation}
We now consider the quarks in the antisymmetric representation. Our goal is a gauge invariant and antisymmetric
wave function that contains $n(n-1)/2$ quarks $Q^{[\alpha\beta]}$.
As we did in the previous subsection, we start with the simplest cases and
then generalize. For $n=2$, we have $n(n-1)/2=1$, and it is easy to find such a wave
function. It is just $\epsilon_{\alpha\beta}\,Q^{[\alpha\beta]}$. For $n=3$, we need a
wave function that contains three quarks. To guess it directly from
$Q^{[\alpha\beta]}$ is not easy, but we can use an indirect trick. The antisymmetric
representation, for $n=3$, is equivalent to the anti-fundamental representation; this follows from $\widetilde
{Q}_{\gamma}=\frac{1}{2}\epsilon_{\gamma\alpha\beta}Q^{[\alpha\beta]}$. We
know how to write a baryon for the anti-fundamental representation; it is the usual one $\epsilon^{\gamma\rho\tau}\widetilde{Q}_{\gamma}\widetilde{Q}_{\rho}%
\widetilde{Q}_{\tau}$.
Substituting the relationship between $\widetilde{Q}_{\gamma}$ and $Q^{[\alpha
\beta]}$, we obtain%
\begin{equation}
\frac{1}{2}(\epsilon_{\alpha_2\beta_2\alpha_1}\epsilon_{\alpha_3
\beta_3\beta_1}-\epsilon_{\alpha_3\beta_3\alpha_1}\epsilon_{\alpha_2\beta_2\beta_1})\,Q^{[\alpha_1\beta_1]}Q^{[\alpha_2\beta_2]}%
Q^{[\alpha_3\beta_3]}~. \label{treantisymmetricbaryon}%
\end{equation}
The diagram corresponding to this baryon is given in Figure \ref{treantisymmetricfig}.
\begin{figure}[th]
\begin{center}
\leavevmode \epsfxsize 2.2 cm \epsffile{treantisymmetric.eps}
\end{center}
\caption{\footnotesize Diagrammatic representation for the simplest baryon of Eq.~(\ref{treantisymmetricbaryon}).}
\label{treantisymmetricfig}
\end{figure}
We know, by construction, that this wave function is antisymmetric under the
exchange of any couple of quarks.
We now generalize this result.
\begin{proposition}
\label{proposition2}
\ There is one and only one gauge wave function that is a gauge singlet and
completely antisymmetric under the exchange of two quarks. This wave function is
composed of $n(n-1)/2$ quarks $Q^{[\alpha\beta]}$ and spans the completely
antisymmetric subspace of the tensor product of $n(n-1)/2$ quarks
$Q^{[\alpha\beta]}$.
\end{proposition}
\begin{proof}
Denote by $A$ the number of quarks in a hypothetical gauge wave function that
satisfies the previous conditions. The reason why $A$ can be smaller than $S$
is that it is now possible for a quark to have both indices on the
same saturation line. For the proof we need the following two
basic facts: ${\it 1)}$ at most one quark can have both of its indices on the same saturation line; otherwise the wave function would be symmetric
under the exchange of two such quarks; ${\it 2)}$ two quarks $Q^{[\alpha
_{i}\beta_{i}]}$ and $Q^{[\alpha_{j}\beta_{j}]}$ can be connected by {\it at most} one line. The reason is the
same as in the case of the symmetric representation. At this point, we are ready to build the fully antisymmetric wave function. We follow Figure \ref{generalantisymmetric}.
\begin{figure}[th]
\begin{center}
\leavevmode \epsfxsize 6.5 cm \epsffile{generalantisymmetric.eps}
\end{center}
\caption{{\protect\footnotesize Diagram for the Skyrmion-Baryon in the case of the two-index antisymmetric representation. }}
\label{generalantisymmetric}
\end{figure}
We start from the first quark $Q^{[\alpha_{1}\beta_{1}]}$ and draw the first saturation line that departs from this quark. This implies the presence of at least $n-2$ other quarks, which we call $Q^{[\alpha_{2}\beta_{2}]} , \dots, Q^{[\alpha_{n-1}\beta_{n-1}]}$.
Let us consider $Q^{[\alpha_{2}\beta_{2}]}$. One index is already saturated, and from the other one a new saturation line must depart. Due to property ${\it 2)}$, at least $n-2$ new quarks must be added. One of them, $Q^{[\alpha_{n}\beta_{n}]}$, can have both indices on the same line. We then consider $Q^{[\alpha_{3}\beta_{3}]}$ and start another saturation line. It can pass through $Q^{[\alpha_{n+1}\beta_{n+1}]}$, but then $n-3$ other distinct quarks must be added. We go on in this way until we reach $Q^{[\alpha_{n-1}\beta_{n-1}]}$ and complete the saturation by adding the last line. In total, we have $(n-1) + (n-2) + \dots +1 = n(n-1)/2$ quarks, whose indices are saturated by $n-1$ epsilon tensors.
We have just proved that at
least $n(n-1)/2$ quarks are necessary if we require the complete antisymmetry of the gauge wave function; this is the lower bound $A \geq n(n-1)/2$.
The proof of the existence and uniqueness of this wave function is exactly the same as for the symmetric representation. From the antisymmetric subspace of the tensor product, we find the upper bound $A \leq n(n-1)/2$. This implies that the unique possibility is exactly $A=n(n-1)/2$. From the fact that the antisymmetric subspace is, in this case, one-dimensional, the gauge invariance follows.
\end{proof}
For example, the baryon for $n=4$ is given by the diagram plus the needed antisymmetrizations,
\begin{eqnarray}
& \left( \sum_{\sigma\in S_6} \mathrm{sign}(\sigma)\, \epsilon_{\alpha
_{\sigma(4)}\beta_{\sigma(4)}\alpha_{\sigma(2)}\alpha_{\sigma(1)}}\,
\epsilon_{\alpha_{\sigma(5)}\beta_{\sigma(5)}\beta_{\sigma(2)}\alpha_{\sigma(3)}%
}\,\epsilon_{\alpha_{\sigma(6)}\beta_{\sigma(6)}\beta_{\sigma(1)}\beta
_{\sigma(3)}} \right) \nonumber\\
& Q^{[\alpha_{1}\beta_{1}]}Q^{[\alpha_{2}\beta_{2}]}Q^{[\alpha_{3}%
\beta_{3}]} Q^{[\alpha_{4}\beta_{4}]}Q^{[\alpha_{5}\beta_{5}]}%
Q^{[\alpha_{6}\beta_{6}]} \ .
\end{eqnarray}
\subsection{More on the Antisymmetric Representation}
\label{moreantisymmetric}
In this section, we want to consider in more detail the case of the
antisymmetric representation. There is a peculiarity of the simplest baryon that we did not mention previously.
The baryon previously introduced as the simplest one is that of Figure \ref{basic}, namely
\begin{equation}
\epsilon_{\alpha_{1}\alpha_{2}\dots\alpha_{n}}\epsilon_{\beta_{1}\beta
_{2}\dots\beta_{n}}\,Q^{[\alpha_{1}\beta_{1}]}Q^{[\alpha_{2}\beta_{2}]}\dots
Q^{[\alpha_{n}\beta_{n}]}~ \ . \label{minimal}%
\end{equation}
We have now to make a distinction between $n$ even and $n$ odd. In the case of
$n$ even, (\ref{minimal}) is not the minimal baryon, since we can construct a
gauge invariant wave function using only $n/2$ quarks:%
\begin{equation}
\epsilon_{\alpha_{1}\alpha_{2}\dots\alpha_{n/2}\beta_{1}\beta_{2}\dots
\beta_{n/2}}\,Q^{[\alpha_{1}\beta_{1}]}Q^{[\alpha_{2}\beta_{2}]}\dots
Q^{[\alpha_{n/2}\beta_{n/2}]}~.
\end{equation}
This baryon is symmetric under the exchange of two quarks, and so there is no
difference with respect to the previous one as far as the mass-vs-$n$ dependence is concerned.
In the case of odd $n$, instead, we can prove that the minimal
baryon (\ref{minimal}) is identically zero, through the following algebraic
passages:%
\begin{eqnarray}
& ~~~~~~~~~~~~~~~~~\epsilon_{\alpha_{1}\alpha_{2}\dots\alpha_{n}}%
\epsilon_{\beta_{1}\beta_{2}\dots\beta_{n}}\,Q^{[\alpha_{1}\beta_{1}%
]}Q^{[\alpha_{2}\beta_{2}]}\dots Q^{[\alpha_{n}\beta_{n}]}\nonumber\\
& =\,\left( -1\right) ^{n}\epsilon_{\alpha_{1}\alpha_{2}\dots
\alpha_{n}}\epsilon_{\beta_{1}\beta_{2}\dots\beta_{n}}\,Q^{[\beta
_{1}\alpha_{1}]}Q^{[\beta_{2}\alpha_{2}]}\dots Q^{[\beta_{n}\alpha_{n}%
]}\nonumber\\
& =~~~~~~~~~-\epsilon_{\beta_{1}\beta_{2}\dots\beta_{n}}\epsilon
_{\alpha_{1}\alpha_{2}\dots\alpha_{n}}Q^{[\alpha_{1}\beta_{1}]}%
Q^{[\alpha_{2}\beta_{2}]}\dots Q^{[\alpha_{n}\beta_{n}]}~.
\label{passages}%
\end{eqnarray}
In the first passage, we have exchanged the $\alpha$ and the $\beta$ indices in
every quark. Since $n$ is odd and the quarks are in the antisymmetric representation,
this step brings down a minus sign. In the second step we have just renamed
$\alpha_{i}$ with $\beta_{i}$ and vice versa, and this has no consequences. The
last line of (\ref{passages}) is equal to minus the first line (apart from an
irrelevant exchange in the position of the two epsilons), and thus the wave
function must be zero. We can also prove a stronger statement:
\begin{proposition}
For $n$ odd and quarks in the antisymmetric representation, it is not possible
to write a gauge invariant wave function that is completely symmetric under the
exchange of two quarks.
\end{proposition}
\begin{proof}
Consider a generic wave function that is gauge invariant and symmetric under the
exchange of two quarks. We are going to prove that it is identically zero.
This wave function is composed by a number of quarks that we generically
denote by $M$. $M_{\alpha\beta}$ of these quarks are of type $Q^{[\alpha
\beta]}$ and $M_{\alpha\beta}$ are of type $Q^{[\alpha\beta]}$ so that we
can write%
\begin{equation}
M=M_{\left( \alpha\beta\right) }+M_{\left( \alpha\beta\right) }~.
\label{emme}%
\end{equation}
The $M$ quarks can be divided into various \emph{connected} components, where
the connection is given by the epsilon contractions and the quarks
$Q^{[\alpha\beta]}$. Let us assume for the moment that we have only one
connected component. It is easy to see that $M_{\alpha\beta}$ must be odd. We
will now use the same argument we used to show that (\ref{passages}) is
identically zero. Namely, we will show that the wave function is equal to minus
itself. First we exchange all the $\alpha$ indices with their $\beta$ partners
and this contributes a minus sign since $M_{\alpha\beta}$ is odd. Then we make
a suitable number of exchanges between the quarks $Q^{[\alpha\beta]}$ in order
to recover the original epsilon structure. These exchanges do not affect the
wave function since by definition it is symmetric under exchanges of two
quarks. So we have recovered the original wave function but with a minus sign
in front.
We now have to consider the more general situation in which the $M$ quarks are
divided into various disconnected components. It can easily be seen that in this
case the sub-connected components must be closed under the exchange of two
generic quarks. Put in another way, if the global wave function is symmetric
under the exchange of two quarks, then the sub-connected wave functions are also
symmetric under the exchange of two quarks.
\end{proof}
The previous proposition does not exclude the possible existence of a gauge
invariant wave function with fewer than $n(n-1)/2$ quarks, in a non-singlet representation of the permutation group. In this case, the baryon is
not a simple product of gauge, spin, and space wave function but a sum
$\sum_{i}\psi_{\mathrm{gauge}}^{i}\psi_{\mathrm{spin}}^{i}\psi_{\mathrm{space}%
}^{i}$, where $\psi_{\mathrm{gauge}}^{i}$ is the non-singlet representation of
the permutation group.
\subsection{Stability of the Skyrmion}
\label{stabilitytwo}
We now want to discuss the issue of the stability of the Skyrmion. The
Skyrmion corresponds to the baryon that contains $n(n\pm 1)/2$ quarks and is fully antisymmetric in the gauge wave function. The
mass is thus proportional to the number of constituent quarks. Seen from the
low-energy effective Lagrangian, the Skyrmion is absolutely stable. In the full
theory, on the other hand, we should consider the possibility of decay into
baryons with lower numbers of constituent quarks, for example, the baryon
$\epsilon_{\alpha_{1}\alpha_{2}\dots\alpha_{n}}\epsilon_{\beta_{1}\beta
_{2}\dots\beta_{n}}\,Q^{\alpha_{1}\beta_{1}}Q^{\alpha_{2}\beta_{2}}\dots
Q^{\alpha_{n}\beta_{n}}$. These states are not visible from the low-energy
effective Lagrangian. As we have seen in Section \ref{stablebaryons}, baryons
with a number of constituent quarks lower than $n(n\pm 1)/2$ cannot be in a fully antisymmetric gauge wave function. This implies that
the Skyrmion is the state that minimizes the mass per unit of baryon number.
Let us consider an explicit example in more detail. A Skyrmion that contains
$n(n\pm 1)/2$ quarks can decay into $(n\pm 1)/2$ baryons
composed of $n$ quarks each. The baryon number is conserved, and so this decay
channel is in principle possible. In order to analyze the energetics of this
baryon, we now propose a toy model to schematize the fundamental baryon. We
have $n$ quarks and $2$ baryon vertices. Every quark is attached to two
fundamental strings and every baryon vertex to $n$ fundamental strings (see
Figure \ref{toymodel} for an example).
\begin{figure}[h]
\begin{center}
\leavevmode \epsfxsize 6.5 cm \epsffile{toymodel.eps}
\end{center}
\caption{{\protect\footnotesize A model of the baryon (here for four colors).
Every quark is attached to two confining strings and every baryon vertex to
$n$ confining strings.}}
\label{toymodel}
\end{figure}
Baryon vertices have a mass of order $n$; we can thus neglect their dynamics
and consider them at rest and positioned in what we define to be the center of
the baryon. In this approximation, the quarks do not interact directly with
one another; they live in a mean potential given by the string tension
multiplied by the distance from the center%
\begin{equation}
V_{\mathrm{mean}}(R)=2T_{\mathrm{string}}\left\vert R\right\vert
~.\label{confiningpotential}%
\end{equation}
Quarks are antisymmetric in the space wave function, and so they fill the
energy levels up to the Fermi surface (see Figure \ref{potentialbaryon}). We
indicate as $R_{\mathrm{F}}$ and $P_{\mathrm{F}}$, respectively, the Fermi
radius and momentum. The total energy and the number of quarks $n$ are given
by the following integrals over the phase space:%
\begin{eqnarray}
\int^{R_{\mathrm{F}}}\int^{P_{\mathrm{F}}}\frac{d^{3}Rd^{3}P}{\left(
2\pi\right) ^{3}}\left( P+V_{\mathrm{mean}}(R)\right) & =E~,\nonumber\\
\int^{R_{\mathrm{F}}}\int^{P_{\mathrm{F}}}\frac{d^{3}Rd^{3}P}{\left(
2\pi\right) ^{3}} & =n~.\label{integrals}%
\end{eqnarray}
Since the quarks are massless, we take the Hamiltonian to be
$P+V_{\mathrm{mean}}(R)$. From now on, we ignore numerical factors such as the
phase space volume element; at this level of approximation they are not
important. The second equation of (\ref{integrals}) gives a relationship between
the Fermi momentum and the Fermi radius, namely $P_{\mathrm{F}}\sim
n^{1/3}/R_{\mathrm{F}}$. The first equation of (\ref{integrals}) gives the
following expression for the energy as a function of the radius
\begin{equation}
E\sim\frac{n^{4/3}}{R_{\mathrm{F}}}+T_{\mathrm{string}}nR_{\mathrm{F}}~.
\end{equation}
Minimizing, we obtain $R_{\mathrm{F}}\sim n^{1/6}/\sqrt{T_{\mathrm{string}}}$ and consequently $P_{\mathrm{F}}\sim n^{1/6}\sqrt{T_{\mathrm{string}}}$. The mass of the baryon is thus given by%
\begin{equation}
M_{n-\mathrm{Baryon}}\sim n^{7/6}\sqrt{T_{\mathrm{string}}}~.
\end{equation}
The important thing to note is the $n^{7/6}$ dependence. The mass per unit of
baryon number grows as $n^{1/6}$.
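For the reader who wants to trace these scalings (keeping in mind that we are ignoring all numerical factors), the two phase-space integrals give
\[
n\sim R_{\mathrm{F}}^{3}P_{\mathrm{F}}^{3}~,\qquad
E\sim R_{\mathrm{F}}^{3}P_{\mathrm{F}}^{4}+T_{\mathrm{string}}R_{\mathrm{F}}^{4}P_{\mathrm{F}}^{3}~,
\]
and eliminating $P_{\mathrm{F}}$ reproduces the energy written above; the stationarity condition $\partial E/\partial R_{\mathrm{F}}=0$ then gives $R_{\mathrm{F}}^{2}\sim n^{1/3}/T_{\mathrm{string}}$, from which the $n^{7/6}$ behavior of the mass follows.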
\begin{figure}[h]
\begin{center}
\leavevmode \epsfxsize 8 cm \epsffile{potentialbaryon.eps}
\end{center}
\caption{{\protect\footnotesize The mean potential for our toy model of the baryon.}}
\label{potentialbaryon}
\end{figure}
This approximation breaks down when the Fermi energy $n^{1/6}\sqrt
{T_{\mathrm{string}}}$ becomes much greater than the dynamical scale. Due to
asymptotic freedom, the highly energetic quarks do not feel a confining
potential like (\ref{confiningpotential}) but instead a Coulomb-like
potential. The mass per unit of baryon number stops growing as $n^{1/6}$ and, presumably,
saturates to a constant.
This toy model is certainly a very crude approximation, but we think it captures some qualitative behavior of this simplest baryon.
In particular, the model shows how the Fermi statistics of the quarks are responsible for the stability of the Skyrmion.
The simplest baryon, in the very large-$n$ limit, consists of a core of quarks in a confining potential, plus a cloud of quarks in a Coulomb-like potential. The mass per unit of baryon number exceeds that of the Skyrmion by the quantity $\Delta$ (see Figure \ref{potentialbaryon}), due to the Fermi statistics of the quarks. The Skyrmion is thus stable against decay into $\sim n$ of these simplest baryons.

Regarding the mass-vs.-$n$ dependence of the Skyrmion, there is a final issue we have to discuss. The Skyrmion has $n(n \pm 1)/2$ constituent quarks, symmetric in the space wave function. The mass, as we said, is proportional to $n^2$ since all the quarks can occupy the ground state of the mean potential. But we should also consider higher order corrections, from gluon exchange, and verify that they all scale like $n^2$. In ordinary QCD, that is indeed the case. A two-body interaction has a $\sim n^2$ enhancement due to the possibility of choosing any pair out of the $n$ quarks. On the other hand, there is also a $1/n$ suppression from the gauge coupling that enters in the gluon exchange. At any order, the corrections always scale like $n$. If we simply repeat the same argument for orientifold QCD, we run into a problem. Since we have $\sim n^2$ constituent quarks, the combinatorial enhancement factor is now $\sim n^4$. We still have a suppression of $1/n$ from the gauge coupling, and as a result the two-body interaction seems to grow as $\sim n^3$. Things get even worse if we consider higher order interactions. Needless to say, this is a problem. Our result, the identification of the Skyrmion with the $n(n \pm 1)/2$ baryon, deeply relies on the fact that this object has a mass that scales like $\sim n^2$.

This issue has been considered, and successfully solved, in Ref.~\cite{Cherman:2006iy}. The point is that, in the previous paragraph, we overestimated the combinatorial factor. The gauge structure of the baryon forbids many gluon exchanges between quarks, reducing the combinatorial factor from order $n^4$ to order $n^3$. Suppose we want to exchange a gluon between quark $Q^{12}$ and quark $Q^{34}$, where the numbers refer to the gauge space. We can do it with the gluon $A_{\mu}^{23}$, for example. The outcome is that the two quarks exchange the gauge numbers carried by the gluon: the quarks $Q^{12}$ and $Q^{34}$ become $Q^{13}$ and $Q^{24}$. But the baryon already contains quarks $Q^{13}$ and $Q^{24}$ in its wave function, and the completely antisymmetric structure forbids repetitions. That means that this gluon exchange is not allowed. The only exchanges allowed are the ones between quarks that share at least one index. Quarks $Q^{\alpha \beta}$ and $Q^{\alpha \gamma}$ can interact by exchanging the gluon $A_{\mu}^{\beta \gamma}$ or the diagonal one $A_{\mu}^{\alpha \alpha}$. This reduces the total combinatorial factor from $n^4$ to $n^3$. This, together with the $1/n$ suppression from the gauge coupling, gives a contribution of order $n^2$, which is exactly what we expect from the Skyrmion-baryon identification. Repeating the same argument for higher-body interactions still gives a total result of order $\sim n^2$.
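To make the counting explicit: a given quark $Q^{[\alpha\beta]}$ shares an index with roughly $2n$ of the other quarks, so the number of allowed pairs is of order
\[
\frac{n(n\pm 1)}{2}\times 2n\times\frac{1}{2}\sim n^{3}~,
\]
and the $1/n$ suppression from the gauge coupling brings the two-body correction back to order $n^{2}$, as stated above.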
\section{Matchings}
Write
\[
f: A \equiv B
\]
and say `$f$ matches $A$ with $B$'
to mean that we know a suitable bijection
$f$ from $A$ to $B$,
together with its inverse
\[
f^{-1}:A \equiv B
.
\]
Write
\[
A \equiv B
\]
and say `$A$ matches $B$'
to mean that we know (or know we \emph{could} know)
some $f: A \equiv B$.
We have
\[
A \equiv A
;\;\;\;
A \equiv B \implies B \equiv A
;\;\;\;
A \equiv B \land B \equiv C \implies A \equiv C
.
\]
(We refrain from saying that $\equiv$ is an equivalence relation
since it
is inherently time-dependent.)
We can add and multiply matchings:
\[
A \equiv B \land C \equiv D
\implies
A + C \equiv B + D
,
\]
where $+$ denotes disjoint union, and
\[
A \equiv B \land C \equiv D
\implies
A \times C \equiv B \times D
.
\]
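In code, and anticipating the Python used in the appendix, these two operations might be realized along the following lines; the tagged-pair representation of a disjoint union is a convention we adopt here, not something forced by the notation above.
\begin{verbatim}
# Sketch: combining matchings.  An element of A + C is represented as
# (0, a) or (1, c); an element of A x C as an ordinary pair (a, c).
def match_sum(f, g):
    def h(tagged):
        tag, x = tagged
        return (0, f(x)) if tag == 0 else (1, g(x))
    return h

def match_product(f, g):
    def h(pair):
        x, y = pair
        return (f(x), g(y))
    return h
\end{verbatim}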
\section{Respectful subtraction}
Before addressing subtraction in general, let's begin with
\emph{respectful subtraction}, an important special case.
It's so simple that it hardly deserves to be called subtraction.
\begin{comment}
Still, it may be helpful to do so, so that we can start by recognizing
that we're facing a subtraction problem, and then see, `Oh yes,
this is just respectful subtraction.'
\end{comment}
\begin{definition}
For $f: A+C \equiv B+D$, $g:C \equiv D$, write
$g \ll f$ and say `$g$ respects $f$' if
\[
\forall x \in C
\;
(f(x) \in D \implies g(x)=f(x))
.
\]
\end{definition}
\begin{prop}[Respectful subtraction]
If
\[
f: A+C \equiv B+D
;\;\;\;
g:C \equiv D
;\;\;\;
g \ll f
\]
then
\[
\without{f}{g}: A \equiv B
,
\]
where
\[
\without{f}{g} (x) =
f(x) {\ \kwstyle{if}\ } f(x) \in B {\ \kwstyle{else}\ } f(g^{-1}(f(x)))
.
\]
Moreover,
\[
\without{f}{g} \ll f
\]
and
\[
\without{f}{(\without{f}{g})} = g
.
\]
\end{prop}
{\bf {\medskip}{\noindent}Proof: }
Without loss of generality,
assume $f$ is the identity on $A+C=B+D$.
$g$ fixes $C \cap D$ and matches $C \setminus D$ to $D \setminus C$.
$\without{f}{g}$ fixes $A \cap B$ and, taking its cue from $g^{-1}$,
matches $A \setminus B=D \setminus C$ to $B \setminus A=C \setminus D$.
$\quad \spadesuit$
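In the style of the appendix, the respectful case needs at most one corrective step; in the sketch below \texttt{B} is any container supporting membership tests, and the function names are ours.
\begin{verbatim}
# Sketch: respectful subtraction.  f matches A+C with B+D, g matches C
# with D and respects f; the result matches A with B.
def subtract_respectful(f, g_inv, B):
    def h(x):
        y = f(x)
        return y if y in B else f(g_inv(y))
    return h
\end{verbatim}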
\section{Subtraction}
\begin{prop}[Subtraction] \label{subtraction}
If
\[
f: A+C \equiv B+D
;\;\;\;
g:C \equiv D
\]
with $D$ finite,
then
\[
\without{f}{g}: A \equiv B
,
\]
where
\[
\without{f}{g}(x) =
(
y:=f(x);\;
{\kwstyle{while}\ } y \in D
{\ \kwstyle{do}\ } y:=f(g^{-1}(y));\;
{\kwstyle{return}\ } y
)
.
\]
\end{prop}
{\bf {\medskip}{\noindent}Proof: }
This goes way back---see \cite{doyle:category}.
$\quad \spadesuit$
\\
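The while loop translates directly into Python in the style of the appendix; as in the proposition, $D$ must be finite for the loop to terminate, and the names below are ours.
\begin{verbatim}
# Sketch: general subtraction.  f matches A+C with B+D, g matches C
# with D; membership in D is what drives the loop.
def subtract(f, g_inv, D):
    def h(x):
        y = f(x)
        while y in D:
            y = f(g_inv(y))
        return y
    return h
\end{verbatim}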
When $g \ll f$ we're back to respectful subtraction:
\begin{prop}
If
$g \ll f$ then
\[
\without{f}{g} (x) =
f(x) {\ \kwstyle{if}\ } f(x) \in B {\ \kwstyle{else}\ } f(g^{-1}(f(x)))
.
\quad \spadesuit
\]
\end{prop}
The fact that $\without{f}{g} \ll f$ is general:
\begin{prop} \label{prop2}
$\without{f}{g} \ll f$.
$\quad \spadesuit$
\end{prop}
\begin{comment}
{\bf {\medskip}{\noindent}Proof: }
We have
\[
f^{-1}:B+D \equiv A+C
,
\]
\[
\without{f}{g}:A \equiv B,
\]
Let $b \in B$, and suppose
$f^{-1}(b) \in A$:
Then
$f(f^{-1}(b)) = b \in B$,
hence
$(\without{f}{g})(f^{-1}(b)) = b$.
$\quad \spadesuit$
\end{comment}
Idempotence of subtraction characterizes respectfulness:
\begin{prop} \label{prop3}
\[
\without{f}{(\without{f}{g})} = g
\;\iff\;
g \ll f
.
\quad \spadesuit
\]
\end{prop}
\begin{comment}
{\bf {\medskip}{\noindent}Proof: }
$\implies$:
Suppose $\without{f}{(\without{f}{g})} = g$.
By Proposition \ref{prop2},
\[
\without{(\without{f}{g})} \ll f
,
\]
so $g \ll f$.
$\impliedby$:
Suppose $g \ll f$.
Let
\[
\bar{f} = \without{f}{(\without{f}{g})}: C \rightarrowtail D
.
\]
We want to show that
$\bar{f} = g$.
Take $d \in D$.
There are two cases.
Case 1:
$\bar{f}(d) = f^{-1}(d) \in C$.
Since $g \ll f$
\[
\bar{f}(d) = f^{-1}(d) = g(f(f^{-1}(d))) = g(d)
.
\]
Case 2:
$f^{-1}(d) \in A$ and
\[
\bar{f}(d) = f^{-1} ( (\without{f}{g}) (f^{-1}(d)))
.
\]
Let $a = f^{-1}(d)$,
so that
\[
\bar{f}(d) = f^{-1} ( (\without{f}{g}) (a))
.
\]
Since $g \ll f$ and $f(a) \in D$ we have
\[
(\without{f}{g}) (a) = f(g(f(a)) = f(g(d))
,
\]
so
\[
\bar{f}(d) = f^{-1} ( f(g(d))) = g(d)
.
\quad \spadesuit
\]
\noindent
{\bf Aargh!} Couldn't we substitute a couple of pictures for all this gunk?
\end{comment}
This gives us a closure operation:
\begin{prop} \label{prop4}
\[
\without{f}{(\without{f}{(\without{f}{g})})} = \without{f}{g}
,
\]
so
\[
\without{f}{(\without{f}{(\without{f}{(\without{f}{g})})})}
= \without{f}{(\without{f}{g})}
.
\quad \spadesuit
\]
\end{prop}
\section{Inclusion-exclusion}
Subtraction generalizes to inclusion-exclusion.
Let $P$ be a poset with
$\{q:q\leq p \}$ finite for all $p$.
Given a family of finite sets $A_p, p \in P$, let
\[
A_{\leq p} = \sum_{q \leq p} A_q
,
\]
etc.
\begin{prop}[Inclusion-exclusion] \label{incexc}
\[
\forall p
\;
A_{\leq p} \equiv B_{\leq p}
\;\;\implies\;\;
\forall p
\;
A_p \equiv B_p
.
\]
\end{prop}
{\bf {\medskip}{\noindent}Proof: }
By induction:
Assuming
\[
\forall q <p
\;
A_q \equiv B_q
\]
(true if $p$ is minimal)
we have
\[
A_{<p} \equiv B_{<p}
.
\]
Subtract from
\[
A_{\leq p} \equiv B_{\leq p}
\]
to get
\[
A_p \equiv B_p
.
\quad \spadesuit
\]
To be more explicit,
define the projection map
\[
\pi_A: \sum_p A_p \to P
,
\;
\pi(x) = p \iff x \in A_p
.
\]
\begin{prop}
If
\[
g_p: A_{\leq p} \equiv B_{\leq p}
\]
then
\[
f_p: A_p \equiv B_p
\]
where
\[
f_p(x)=F(p,x)
,
\]
\[
F(p,x) =
(
y:=g_p(x);\;
q:=\pi_B(y);\;
{\kwstyle{return}\ }
y {\ \kwstyle{if}\ } q=p
{\ \kwstyle{else}\ }
F(p,\bar{F}(q,y))
)
;\;
\]
\[
\bar{F}(p,x) =
(
y:=g_p^{-1}(x);\;
q:=\pi_A(y);\;
{\kwstyle{return}\ }
y {\ \kwstyle{if}\ } q=p
{\ \kwstyle{else}\ }
\bar{F}(p,F(q,y))
)
.
\]
\end{prop}
{\bf {\medskip}{\noindent}Proof: }
This is what you get if you trace it through.
$\quad \spadesuit$
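The mutual recursion is equally easy to transcribe; in the sketch below \texttt{g} and \texttt{g\_inv} are dictionaries of matchings (and their inverses) indexed by the elements of $P$, and \texttt{pi\_A}, \texttt{pi\_B} are the projection maps defined above.
\begin{verbatim}
# Sketch: the inclusion-exclusion matching f_p, following F and Fbar.
def make_F(g, g_inv, pi_A, pi_B):
    def F(p, x):
        y = g[p](x)
        q = pi_B(y)
        return y if q == p else F(p, Fbar(q, y))
    def Fbar(p, x):
        y = g_inv[p](x)
        q = pi_A(y)
        return y if q == p else Fbar(p, F(q, y))
    return F
\end{verbatim}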
\section{Extreme division}
Subtraction will get you a long way toward automating the process
of generating bijections.
But sometimes you will want to divide, and that's when
things can get scary.
(Like the old Marchant mechanical calculators, which would make
a satisfying `chunk' when you hit $+$ or $-$,
but would make a terrifying racket, with the carriage scurrying to and fro,
when you hit the ${\overset{\mbox{\tiny Auto}}{\div}}$ key.)
The way to keep things under control is to make sure you are multiplying
polynomials (in one or many variables),
with the polynomial you are dividing
by having a unique extreme monomial $\omega$
for some linear function on the space of
degrees (in other words a singleton monomial on the boundary of its
Newton polytope).
In this case you can use `extreme division', whereby you recursively
subtract the mapping based on multiplication by $\omega$.
Suppose
\[
F:A \times C \equiv B \times C
\]
and
\[
G:B \times C \equiv A \times C
.
\]
(We may choose $G=F^{-1}$, but we don't require this.)
For $\omega \in C$,
define
\[
\operatorname{xdiv}((F,G),\omega)
=
(f,g)
\]
where $f,g$ are the
partial functions $f$ on $A$ and $g$ on $B$
defined via the mutual recursion equations
\[
f(x)
=
((y,z):=F((x,\omega))
;\,
{\kwstyle{while}\ }
z \neq \omega
{\ \kwstyle{do}\ }
(y,z):=F((g(y),z))
;\,
{\kwstyle{return}\ }
y)
\]
\[
g(x)
=
((y,z):=G((x,\omega))
;\,
{\kwstyle{while}\ }
z \neq \omega
{\ \kwstyle{do}\ }
(y,z):=G((f(y),z))
;\,
{\kwstyle{return}\ }
y)
\]
If both $f$ and $g$ are total, we say that
the pair $(F,G)$ is
\emph{X-divisible for $\omega$},
and that $\omega$ is an \emph{extreme point} for the pair $(F,G)$.
This terminology springs from the following proposition.
\begin{prop}[Extreme division] \label{xdiv}
If $A,B,C$ are multinomials, and $F,G$ match terms of
$A \cdot C$ and $B \cdot C$,
then $(F,G)$ is X-divisible for any extreme monomial $\omega$ of $C$.
\end{prop}
{\bf {\medskip}{\noindent}Proof: }
By induction.
$\quad \spadesuit$
Extreme division is what Conway and Doyle
\cite{conwaydoyle:three}
had in mind when they wrote,
`There is more to division than repeated subtraction.'
This must be a great truth, because its negation
would appear to be at least as true.
\section{Mode d'emploi}
Contrary to what we may appear to be claiming in Proposition \ref{xdiv},
the bijection yielded by the $\mathrm{xdiv}$ algorithm may fail to be a `matching',
because it may take too long to compute,
or otherwise fail to qualify as `suitable',
the admittedly slippery condition that we slipped in as
part of the definition of a matching.
The same goes for bijections obtained by inclusion-exclusion.
This is why we have taken care to announce \ref{incexc} and \ref{xdiv}
as `Propositions', yielding bijections
proposed for consideration as `matchings'.
This is contrary to mathematical custom,
and wrong-headed,
but useful nevertheless.
Now it will often happen that a slow quotient bijection can be speeded up
immensely by `memoizing', meaning that values
of $f$ and $g$ are automatically saved so that they don't get
computed over and over.
This in itself may make the bijection `suitable'.
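Concretely, memoizing is a one-line change in Python; the decorator below is from the standard library and requires only that the arguments be hashable (tuples rather than lists).
\begin{verbatim}
# Sketch: memoizing a slow bijection so values are computed only once.
from functools import lru_cache

def memoized(f):
    return lru_cache(maxsize=None)(f)
\end{verbatim}
Wrapping the functions returned by xdiv, or the mutually recursive $F$ and $\bar F$ above, in this way saves every value already computed.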
Better still is to be able to see `what the bijection is doing',
so that it can be defined, and proven to be a suitable bijection,
without reference to its origin as a quotient.
Here's a case in point.
Everyone knows that
\[
\binom{n}{k}
=
\binom{n}{n-k}
,
\]
and if you ask why,
they will either tell you to match a $k$-subset to
the complementary $(n-k)$-subset,
or compute
\[
\binom{n}{k}
=
\frac{n!}{k!(n-k)!}
=
\frac{n!}{(n-k)!(n-(n-k))!}
=
\binom{n}{n-k}
.
\]
Taking our motto to be `follow the algebra',
we recast the computation as
\[
\binom{n}{k} k! (n-k)!
=
n!
=
\binom{n}{n-k} (n-k)! k!
=
\binom{n}{n-k} k! (n-k)!
,
\]
where every step is backed by a matching.
Now divide.
The xdiv algorithm yields a very inefficient computation of a very simple
matching
(see the code in the appendix):
\clearpage
\begin{verbatim}
([[0, 1], [2, 3, 4]], [[0, 1, 2], [3, 4]])
([[0, 2], [1, 3, 4]], [[0, 1, 3], [2, 4]])
([[0, 3], [1, 2, 4]], [[0, 2, 3], [1, 4]])
([[0, 4], [1, 2, 3]], [[1, 2, 3], [0, 4]])
([[1, 2], [0, 3, 4]], [[0, 1, 4], [2, 3]])
([[1, 3], [0, 2, 4]], [[0, 2, 4], [1, 3]])
([[1, 4], [0, 2, 3]], [[1, 2, 4], [0, 3]])
([[2, 3], [0, 1, 4]], [[0, 3, 4], [1, 2]])
([[2, 4], [0, 1, 3]], [[1, 3, 4], [0, 2]])
([[3, 4], [0, 1, 2]], [[2, 3, 4], [0, 1]])
\end{verbatim}
This follow-the-algebra matching
differs only slightly from taking the complementary set.
It's arguably better.
Do you agree?
\clearpage
\section*{Appendix}
\begin{verbatim}
"""
gauss 000 - use extreme division to match (n choose k) to (n choose n-k)
"""
def xdiv(FG,omega):
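    # F matches A*C with B*C and G matches B*C with A*C; omega is an extreme
    # point.  When (F,G) is X-divisible, f matches A with B and g matches B with A.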
F,G=FG
def f(x):
y,z=F((x,omega))
while z!=omega:
y,z=F((g(y),z))
return y
def g(x):
y,z=G((x,omega))
while z!=omega:
y,z=G((f(y),z))
return y
return [f,g]
import itertools
def makeintolist(a): return [x for x in a]
def sublist(a,c): return [a[x] for x in c]
def num(n): return [x for x in range(n)]
def combinations(a,k):
    return [makeintolist(c) for c in itertools.combinations(a,k)]
def complement(s,a): return [x for x in s if x not in a]
def separate(s,a): return [a,complement(s,a)]
"""
choose(n,k) lists the ways for separating n into pieces of size k and n-k
"""
def choose(n,k):
s=range(n)
return [separate(s,a) for a in combinations(s,k)]
"""
compress(l) replaces elements of l by their relative ranks
"""
def compress(l):
m=sorted(l)
return [m.index(x) for x in l]
"""
binom(sigma,k) maps the permutation sigma to choose(n,k) * k! * (n-k)!
"""
def binom(sigma,k):
a=sigma[:k]
b=sigma[k:]
return [[sorted(a),sorted(b)],[compress(a),compress(b)]]
"""
monib is the inverse of binom
"""
def monib(abcd):
((a,b),(c,d))=abcd
return sublist(a,c)+sublist(b,d)
"""
flipkl maps choose(n,k) * k! * (n-k)! to choose(n,k) * (n-k)! * k!
We need this because we decided to cast division in terms of matchings
between A*C and B*C rather than A*C and B*D
"""
def flipkl(abcd):
((a,b),(c,d))=abcd
return [[a,b],[d,c]]
"""
Our F and G would be the same, except for the flipping.
"""
def F(abcd):
((a,b),(c,d))=abcd
k=len(a)
l=len(b)
return flipkl(binom(monib(abcd),l))
def G(abcd):
((a,b),(c,d))=abcd
k=len(a)
l=len(b)
return binom(monib(flipkl(abcd)),l)
def omega(n,k): return [num(k),num(n-k)]
def match(n,k): return xdiv([F,G],omega(n,k))
"""
TESTING
"""
def column(l):
print(*l,sep='\n')
n=5
k=2
A=choose(n,k)
(f,g)=match(n,k)
B=[f(ab) for ab in A]
column(zip(A,B))
C=[g(ab) for ab in B]
print(A==C)
\end{verbatim}
\begin{comment}
\section{Application: derangements}
Let $D_n$ denote derangements (permutations
without fixed points) of $[n]= \{1,\ldots,n\}$;
The sets $D_n$ satisfy the \emph{two-term recurrence}
\[
D_0 = \{()\}
;\;\;\;
D_1=\{\}
;\;\;\;
D_n \equiv [n-1] \times D_{n-1} + [n-1] \times D_{n-2}
.
\]
There are a variety of possibilities for the matchings here:
Take your choice.
Now, letting $\mathrm{One}$ denote a one-element set, we have
\begin{align*}
D_0
&\equiv
\mathrm{One}
\\
\\
D_1 + \mathrm{One}
&\equiv
[1] \times D_0
\\
\\
D_2
&\equiv
[1] \times D_1 + [1] \times D_0
\\&\equiv
[1] \times D_1 + D_1 + \mathrm{One}
\\&\equiv
[2] \times D_1 + \mathrm{One}
\\
\\
D_3 + \mathrm{One}
&\equiv
[2] \times D_2 + [2] \times D_1 + \mathrm{One}
\\&\equiv
[2] \times D_2 + D_2
\\&\equiv
[3] \times D_1
\\
\\
D_4
&\equiv
[3] \times D_3 + [3] \times D_2
\\&\equiv
[3] \times D_3 + D_3 + \mathrm{One}
\\&\equiv
[4] \times D_3 + \mathrm{One}
\\
\\
&\ldots
\end{align*}
If we're OK with matchings between signed sets,
as in \cite{doyle:category},
we can dispense with shuttling
the term $\mathrm{One}$ from one side to the other,
and write the \emph{one-term recurrence}
\begin{eqnarray*}
D_n
&\equiv&
[n-1] \times D_{n-1} + [n-1] \times D_{n-2}
\\&\equiv&
[n-1] \times D_{n-1} + D_{n-1} - (-)^{n-1}\, \mathrm{One}
\\&\equiv&
[n] \times D_{n-1} + (-)^n\, \mathrm{One}
\end{eqnarray*}
Going the other way,
we can derive the two-term recurrence from the one-term recurrence.
\begin{eqnarray*}
D_n
&\equiv&
[n]\times D_{n-1} + (-)^n\, \mathrm{One}
\\&\equiv&
[n-1]\times D_{n-1} + D_{n-1} + (-)^n\, \mathrm{One}
\\&\equiv&
[n-1]\times D_{n-1} +[n-1]\times D_{n-2}
.
\end{eqnarray*}
See Ferreri
\cite{ferreri:derange}
to see this worked out in detail.
\section{Application: non-derangements}
Let $\bar{D}_n = S_n \setminus D_n$ denote \emph{non-derangements} of $[n]$.
We have
\[
S_0 \equiv \mathrm{One}
;\;\;\;
S_n \equiv [n] \times S_{n-1}
,
\]
hence
\begin{eqnarray*}
D_n + \bar{D}_n +(-)^n\, \mathrm{One}
&\equiv&
[n] \times D_{n-1} + [n] \times \bar{D}_{n-1} + (-)^n\, \mathrm{One}
\\&\equiv&
D_n + [n] \times \bar{D}_{n-1}
.
\end{eqnarray*}
Subtract to get
\[
\bar{D}_n +(-)^n\, \mathrm{One}
\equiv
[n] \times \bar{D}_{n-1}
.
\]
The subtraction we've just done was most likely respectful;
that will depend on the matchings you chose above.
If not, you can revise your choices, or proceed to the next section,
on general subtraction.
As with derangements, from this one-term recurrence
we can get the two-term recurrence
\[
\bar{D}_n =
[n-1]\times \bar{D}_{n-1} +[n-1]\times \bar{D}_{n-2}
.
\]
Alternatively, we can get this two-term recurrence
for non-derangements by subtracting
the two-term recurrence for derangements
from
\[
S_n \equiv [n-1] \times S_{n-1} + [n-1] \times S_{n-2}
;
\]
from there we can derive the one-term recurrence
as with derangements.
By combining induction and subtraction, we can generate matchings
to prove these identities in diverse ways.
Beware that
different routes to the same identity may sometimes
yield different
bijections.
To see how this all plays out,
see Ferreri
\cite{ferreri:derange}.
\section{Application: derangements again}
We turn to the familiar inclusion-exclusion for derangements:
\[
|D_n|
=
\sum_{k=0}^n (-1)^k \binom{n}{k} (n-k)!
.
\]
Doing some fiddling with binomial coefficients we get
\begin{eqnarray*}
|D_n|
&=&
\sum_{k=0}^{n-1} (-1)^k \binom{n}{k} (n-k)!
+ (-1)^n
\\&=&
\sum_{k=0}^{n-1} (-1)^k n \binom{n-1}{k} (n-1-k)!
+ (-1)^n
\\&=&
n |D_n-1|
+ (-1)^n
.
\end{eqnarray*}
We already know this from the one-term recurrence
\[
D_n \equiv [n] \times D_{n-1} + (-)^n\,\mathrm{One}
,
\]
which we derived above from the two-term recurrence.
Here we want to use inclusion-exclusion of matchings.
Specifically,
letting $E_n$ denote permutations of $[n]$
with exactly one fixed point,
we have
\[
E_n \equiv [n] \times D_{n-1}
.
\]
We will use inclusion-exclusion to show that
\[
D_n \equiv E_n +(-)^n\,\mathrm{One}
.
\]
Let $\mathrm{fix}(\sigma)$ denote the fixed points of $\sigma$.
Define
\[
A_X = \{\sigma: \mathrm{fix}(\sigma) =X\} \subseteq [n]!
,
\]
\[
B_X = \{(k,\sigma): \mathrm{fix}(\sigma) = X+\{k\}\} \subseteq [n] \times [n]!
,
\]
where the disjoint union $X+\{k\}$ here implies that $k \notin X$.
We do this because
\[
A_\emptyset = D_n
;\;\;
B_\emptyset \equiv E_n
.
\]
We have
\[
A_{\supseteq X}
=
\{\sigma: \mathrm{fix}(\sigma) \supseteq X\}
,
\]
\[
B_{\supseteq X}
=
\{
(k,\sigma: \mathrm{fix}(\sigma) \supseteq X+\{k\}\}
,
\]
and can find matchings to yield
\[
A_{\supseteq X}
\equiv
B_{\supseteq X}
\;
\mbox{if $|X|<n$}
.
\]
But this breaks down for $X=[n]$:
\[
A_{[n]} = \{\mathrm{id}_{[n]}\} \equiv \mathrm{One}
;\;\;
B_{[n]} = \{\}
.
\]
So now we get a little more serious about signed sets, and set
\[
\bar{B}_X = B_X + (-)^{n-|X|}\,\mathrm{One}
.
\]
Choose matchings showing that $(1-1)^{n-k} = 0$ for $k<n$, and use them to
get
\[
\bar{B}_{\geq X} \equiv B_{\geq X}
\;
\mbox{if $|X|<n$}
.
\]
Now for all $X$ we have
\[
A_{\geq X} \equiv \bar{B}_{\geq X}
,
\]
so
\[
A_X \equiv \bar{B}_X
\equiv B_X + (-)^{n-|X|}\mathrm{One},
\]
and in particular
\[
D_n
= A_\emptyset
\equiv
B_\emptyset + (-1)^n\,\mathrm{One}
= E_n + (-1)^n\,\mathrm{One}
.
\]
Taking this a bit further,
we get information about permutations with multiple fixed points.
Let
\[
C(n,k) = \{X \subseteq [n]:|X|=k\}
,
\]
\[
A(n,k) = \sum_{X \in C(n,k)} A_X
,
\]
\begin{eqnarray*}
B(n,k)
&=&
\sum_{X \in C(n,k)} B_X
\\&\equiv&
\sum_{X \in C(n,k} A_n - (-)^{n-k}\mathrm{One}
\\&\equiv&
A(n,k) - (-)^{n-k} C(n,k)
.
\end{eqnarray*}
Now
\[
B(n,k) \equiv [k+1] \times A(n,k+1)
,
\]
so
\[
[k+1] \times A(n,k+1) \equiv
A(n,k) - (-1)^{n-k} C(n,k)
.
\]
Taking sizes,
the number $a_{n,k}$ of permutations of $[n]$ having $k$ fixed points
satisfies
\[
(k+1)a_{n,k+1}
=
a_{n,k} - (-1)^{n-k}\binom{n}{k}
.
\]
From this we get the well-known fact that
the distribution of the number of fixed points
is asymptotically Poisson:
\[
\lim_{n->\infty} a_{n,k}/n! = e^{-1}\frac{1}{k!}
.
\]
\end{comment}
\section{Introduction}
A $J$-Hermitian metric $g$ on a complex manifold $(M,J)$ is called SKT (\emph{strong K\"ahler with torsion}) or \emph{pluriclosed}
if the fundamental 2-form $\omega(\cdot,\cdot)=g(J\cdot,\cdot)$ satisfies
$$
\partial\ov \partial \omega=0
$$
(see for instance \cite {GHR}).
For complex surfaces a Hermitian metric satisfying the SKT condition is {\emph{standard}} in the terminology of Gauduchon \cite{Gau2} and on a compact manifold a standard metric can be found in the conformal
class of any given Hermitian metric. However, the theory is completely different in higher dimensions.
The study of SKT metrics is strictly related to the study of the geometry of the Bismut connection. Indeed, any Hermitian manifold $(M, J,g)$ admits a unique connection $\nabla^B$ preserving $g$ and $J$ and such that the tensor
$$
c (X, Y, Z) = g(X, T^B(Y , Z ))
$$
is totally skew-symmetric, where by $T^B$ we denote the torsion of $\nabla^B$ (see \cite{Gau}).
This connection was introduced by Bismut in \cite{Bismut} to prove a local index
formula for the Dolbeault operator for non-K\"ahler manifolds.
The torsion $3$-form $c$ is related to the fundamental form $\omega$ of $g$ by
$$
c(X, Y, Z) = - d \omega (JX, JY, JZ)
$$
and it is well known that $
\partial\ov \partial \omega=0$ is equivalent to $ dc=0$.
SKT metrics have a central role in type II string theory, in $2$-dimensional supersymmetric
$\sigma$-models (see \cite{GHR,strominger}) and they have also relations with generalized
K\"ahler geometry (see for instance
\cite{GHR,Gu,Hi2,AG,CG,FT}).
Indeed, by \cite{Gu,AG} it follows that a generalized K\"ahler
structure on a $2n$-dimensional manifold $M$ is equivalent to a pair of SKT structures $(J_+, g)$ and $(J_-, g)$ such that
$ d^c_+ \omega_+ = -d^c_- \omega_- $, where $\omega_{\pm} ( \cdot, \cdot) = g( J_{\pm} \cdot, \cdot)$ are the
fundamental $2$-forms associated to the Hermitian structures
$(J_{\pm}, g)$ and $d^c_{\pm} = i (\overline \partial_ {\pm} -
\partial_{\pm})$. The closed $3$-form $ d^c_+ \omega_+$ is called the
{\it torsion} of the generalized K\"ahler structure and the structure is said to be {\em untwisted} or {\em twisted} according to whether the cohomology class $[d^c_+ \omega_+] \in H^3 (M, \R)$ vanishes or not. In particular, any K\"ahler metric $(J, g)$ determines a generalized K\"ahler structure by setting $J_+ = J$ and $J_-
= \pm J$.
Recently, SKT metrics have been studied by many
authors. For instance, new simply-connected compact SKT examples have been
constructed by Swann in \cite{Sw} via the twist construction and SKT structures on $S^1$-bundles over almost contact manifolds have been studied in \cite{FFUV}. Moreover, in \cite{FT2} it has been shown that the blow-up of an SKT manifold at a point or along a compact submanifold admits an SKT
metric.
For real Lie groups admitting left-invariant SKT metrics there are some classification results in dimension $4$, $6$ and $8$. More precisely,
$6$-dimensional (resp. $8$-dimensional) SKT nilpotent Lie groups have been classified in \cite{FPS} (resp. in \cite{EFV} and for a particular class in \cite{RT}) and a classification of SKT
solvable Lie groups of dimension $4$ has been obtained in \cite{MS}.
General results {are known also} for nilmanifolds, i.e. compact quotients of simply connected nilpotent Lie groups $G$ by discrete subgroups $\Gamma$. Indeed, in \cite{EFV} it has been proved that
if $(M=G/\Gamma,J)$ is a nilmanifold (not a torus) endowed with an invariant complex structure $J$ and if there exists a $J$-Hermitian {\rm SKT} metric $g$ on $M$, then $G$ must be $2$-step nilpotent and $M$ is a total space of a principal holomorphic torus bundle over a torus.
No general restrictions for the existence of SKT and generalized K\"ahler structures
are known in the case of
solvmanifolds, i.e. compact quotients of solvable Lie
groups by discrete subgroups. A structure theorem by
\cite{Ha2} states that a solvmanifold carries a K\"ahler
structure if and only if it is covered by a complex torus which has a
structure of a complex torus bundle over a complex torus.
As far as we know, the only known solvmanifolds carrying a
generalized K\" ahler structure are the Inoue surface of type ${\mathcal S}^0$ defined in \cite{In} and ${\mathbb T}^{2k}$-bundles over the Inoue surface of type ${\mathcal S}^0$ constructed in \cite{FT}.
A quaternionic analogous of K\"ahler manifolds is given by {\em
hyper-K\"ahler with torsion} (shortly {\em HKT}) manifolds, that are hyper-Hermitian manifolds $(M^{4n}, J_1, J_2, J_3, h)$
admitting a hyper-Hermitian connection with totally skew-symmetric
torsion, i.e. for which the three Bismut connections associated to
the three Hermitian structures $(J_r, h)$, $r =1,2,3$ coincide.
This geometry was introduced by Howe and Papadopoulos \cite{HP} and
later studied for instance in~\cite{GP, FG, BDV, BF, Sw}.
In \cite{BF} it was shown that the
tangent Lie algebra of an HKT Lie algebra may admit an HKT
structure,
constructing in this way a family of new compact strong HKT manifolds.
In this paper we adapt the previous construction to the SKT and generalized K\"ahler case. Starting from a $2n$-dimensional SKT (generalized K\"ahler) Lie algebra $\mathfrak g$ and using a suitable connection on $\mathfrak g$ we construct a $4n$-dimensional SKT (generalized K\"ahler) Lie algebra. We apply the previous procedure to
some of the $4$-dimensional SKT Lie algebras, obtaining in this way new SKT examples in dimension 8 and recovering the generalized K\"aher example found in \cite{FT}.
{The existence of an SKT metric $\omega$ on a complex manifold $(M, J)$ such that $\partial\omega=\overline{\partial}\beta$ for a $\partial$-closed (2,0)-form $\beta$ is equivalent to the existence of a symplectic form taming $J$ (\cite{EFV}).}
We recall that an almost complex structure $J$ on a compact $2n$-dimensional symplectic manifold $(M, \Omega)$ is said to be \emph{tamed} by $\Omega$ if
$$
\Omega(X,JX)>0
$$
for any non-zero vector field $X$ on $M$. When $J$ is a complex structure (i.e. $J$ is integrable) and $\Omega$ tames $J$, the pair $(\Omega, J)$ has been called a {\em Hermitian-symplectic structure} in \cite{ST}.
Although any symplectic structure always admits tamed almost complex structures,
it is still an open problem to find an example of a compact complex manifold admitting a
taming symplectic structure but no K\"ahler structures. From \cite{ST,LiZhang} there exist no compact examples in dimension $4$.
Moreover, the study of taming symplectic structures in dimension $4$ is related to a more general conjecture of Donaldson (see for instance \cite{donaldson,weinkove,LiZhang}).
In \cite{EFV} some negative results for the existence of taming symplectic structures on compact quotients of Lie groups by discrete subgroups were obtained.
It was shown that if $M$ is
a nilmanifold (not a torus) endowed with an invariant complex structure $J$, then $(M, J)$ does
not admit any symplectic form taming $J$.
{The taming symplectic structures are related to static solutions of
a new metric flow on complex manifolds (see \cite{ST}).} Indeed, Streets and
Tian constructed an elliptic flow using the Ricci tensor associated to the Bismut connection instead of
the Levi-Civita connection, and it turns out that this flow preserves the SKT condition and that the existence of some particular type of static SKT metrics implies the existence
of a taming symplectic structure on the complex manifold (\cite{ST2}).
Static SKT metrics on Lie groups
have been also recently studied in \cite{En}.
In the last section of the paper we prove that a 4-dimensional Lie algebra $\mathfrak{g}$ endowed with a complex structure $J$ admits a taming symplectic structure if and only if $(\mathfrak{g},J)$ admits a K\"ahler metric. Moreover, under this condition, every SKT metric induces a symplectic form taming $J$.
\medskip
\noindent \textbf{Acknowledgements.} This work has been partially
supported {by} Project MICINN (Spain) MTM2008-06540-C02-01/02,
Project MIUR ``Geometria Differenziale e Analisi Globale" and
by GNSAGA of INdAM.
The second author wants to thank the organizers for the invitation and their wonderful hospitality in Brno.
\section{Preliminaries}
Let $M$ be a $2n$-dimensional manifold.
We recall that an almost complex structure $J$ on M is
integrable if and only if
the Nijenhuis tensor
$$N(X,Y)= J([X,Y]-[JX,JY])-([JX,Y]+[X,JY])$$
vanishes for all vector fields $X,Y$. In this case $J$ is called a complex structure on $M$.
A Riemannian metric $g$ on a complex manifold $(M, J)$ is said to
be Hermitian if it is compatible with $J$, i.e. if $g (JX, JY) = g
(X, Y)$ for any $X, Y$. In \cite{Gau}, Gauduchon proved that if
$(M,J,g)$ is an Hermitian manifold, then there is a $1$-parameter family of
canonical Hermitian connections on $M$ characterized by the properties
of their torsion tensor. In particular,
the {\em Bismut connection} is the unique connection $\nabla^B$ such that
$$
\nabla^B J = \nabla^B g = 0
$$
and its torsion tensor
$$c (X, Y, Z) = g (X, T^B (Y, Z))$$ is totally skew-symmetric, where
$T^B$ is the torsion of $\nabla^B$. The geometry associated to the
Bismut connection is called KT geometry and when $c =0$ it coincides
with the usual K\"ahler geometry.
\begin{definition} Let $(M, J,g)$ be a Hermitian manifold.
If the torsion 3-form $c$ of the Bismut connection is $d$-closed or equivalently if $ \partial \overline \partial \omega=0$, then the Hermitian metric $g$ on a
complex manifold $(M, J)$ is called {\em{strong K\"ahler with torsion}} (shortly SKT).
\end{definition}
An interesting case is when $g$ is compatible with two complex structures $J_+$ and $J_-$.
We recall the following
\begin{definition} A Riemannian manifold $(M, g)$ is called
{\em generalized K\"ahler} if it has a pair
of SKT structures $(J_+, g)$ and $(J_-, g)$ for which $c_-= - c_+$,
where $c_{\pm}$ denotes the torsion $3$-form of the Bismut connection associated to the SKT structure $(J_{\pm}, g)$.
\end{definition}
{G}eneralized K\"ahler structures were introduced in \cite{GHR} and studied by M. Gualtieri in his
PhD thesis \cite{Gu} in the more general context of generalized
complex geometry, which contains complex and symplectic geometry as extremal special cases and shares important properties with them.
When $M$ is $4$-dimensional, by \cite{AG} there are
two classes of generalized K\"ahler structures, according to whether the complex structures $J_+$ and $J_-$ induce
the same or different orientations on $M$. In \cite{AG}
compact $4$-dimensional generalized K\"ahler manifolds $(M^4, J_{\pm}, g)$ for which $J_+$ and $J_-$
commute have been classified.
In this paper we consider Lie algebras endowed with SKT structures which induce left-invariant SKT structures on the corresponding simply connected Lie groups.
Let $\mathfrak g$ be a Lie algebra with an (integrable) complex
structure $J$ and an inner product $g$ compatible with $J$. If the
associated K\"ahler form $\omega (X,Y)= g(JX,Y)$ satisfies
$d\omega=0$, where
$$
d \omega(X, Y, Z) = - \omega ([X, Y], Z) - \omega([Y, Z], X) -
\omega([Z, X], Y),
$$
for any $X, Y, Z \in {\mathfrak g}$,
the Hermitian Lie algebra $(\mathfrak g , J,g)$ is K\"ahler. Equivalently, $(\mathfrak g , J,g)$ is K\"ahler
if and only if $\nabla^{g} J =0$, where $\nabla^{g} $ is the Levi-Civita connection of $g$. If $\partial \overline \partial \omega =0$, the Hermitian Lie algebra $(\mathfrak g , J,g)$ is SKT.
We recall that for a Lie group $G$ with a
left-invariant Hermitian structure {$(J,g)$} the Bismut connection $\nabla^B$ on $G$ is given by
the following equation
{\begin{equation} \label{Bismut}
\begin{array}{l}
g(\nabla^B_XY,Z) = \frac{1}{2} \{ g([X,Y]-[JX,JY],Z) - g([Y,Z]+[JY,JZ],X)\\
\phantom{g(\nabla^B_XY,Z) = }\, -g([X,Z]-[JX,JZ],Y) \}
\end{array}
\end{equation}}
for any $X, Y, Z \in {\mathfrak g}$ (see \cite{DF}).
If $G$ is nilpotent and admits a left-invariant SKT structure {$(J, g)$}, then by \cite{EFV} $G$ has to be $2$-step nilpotent and $J$ has to preserve the center of $G$. {Moreover, nilpotent Lie groups
cannot admit any left-invariant generalized K\"ahler structure unless they are abelian \cite{Ca}.}
In the solvable case there is a classification of SKT
Lie groups of dimension $4$ \cite{MS}, but there are no general results in higher dimensions.
Examples of SKT and generalized K\"ahler solvable Lie groups admitting compact quotients
have been shown in \cite{FT, FT3}.
\section{SKT structures on tangent Lie algebras}
Let $\mathfrak g$ be a $2n$-dimensional Lie algebra endowed with a complex
structure $J$ and an inner product $g$ compatible with $J$. Assume that $D$ is a flat connection on
$\mathfrak g$ preserving the Hermitian structure, i.e. such that $D g=0$ and $D J =0$.
Consider the tangent Lie algebra $T_{D} \, \mathfrak{g} :=
{\mathfrak g} \ltimes_{D} {\R^{2n}}$ {endowed} with the Lie
bracket
\begin{equation}\label{t_D} [(X_1,X_2), (Y_1,Y_2)]=([X_1,
Y_1] , D _{X_1} Y_2 - D_{Y_1} X_2 )
\end{equation}
and {with the} complex structure
\begin{equation} \label{complexontg}
{\tilde J}(X_1, X_2)= (J X_1, J X_2).
\end{equation}
Since $D$ is flat, the Lie bracket \eqref{t_D} on $T_{D} \, \mathfrak{g}$ satisfies the Jacobi identity. The integrability of the complex
structure ${\tilde J}$ on $T_{D} \, \mathfrak{g}$ follows from
the fact that $J$ is integrable and parallel with respect
to $D$ (see \cite[Proposition~3.3]{BD}).
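To make the role of flatness explicit, note that the Jacobi identity for the bracket \eqref{t_D} holds automatically in the $\mathfrak{g}$-component, while in the $\R^{2n}$-component a direct computation gives
\[
\sum_{\mathrm{cyclic}}\big[[(X_1,X_2),(Y_1,Y_2)],(Z_1,Z_2)\big]_{\R^{2n}}
=-R^{D}(X_1,Y_1)Z_2-R^{D}(Y_1,Z_1)X_2-R^{D}(Z_1,X_1)Y_2~,
\]
where $R^{D}(X,Y)=D_XD_Y-D_YD_X-D_{[X,Y]}$ denotes the curvature of $D$; the flatness of $D$ is thus precisely what is needed.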
Let $\tilde g$ be the inner product on
$T_{D} \, \mathfrak{g}$ induced by $g$ such that $( {\mathfrak g}, 0)$
and $(0, {\mathfrak g})$ are orthogonal. Then $\tilde g$ is
compatible with ${\tilde J}$, that
is, $(T_D\,{\mathfrak g}, \tilde J, \tilde g)$ is a
Hermitian Lie algebra.
In a similar way as for the HKT case (see \cite[Proposition 4.1]{BF}) we can prove the following
\begin{prop} \label{properties} Let $({\mathfrak g}, J, g)$ be
a Hermitian Lie algebra and $D$ a flat connection such that $Dg=0$ and
$D J =0$. Then the Hermitian structure $(\tilde J, \tilde g)$ on $T_D \, \mathfrak{g}$ is SKT if and only if $(J, g)$ is SKT on $\mathfrak{g}$.
\end{prop}
\begin{proof} The proof is already contained in \cite{BF}. Indeed, by a direct computation we have
that the Bismut connection $\tilde \nabla^B$ of the new Hermitian
structure $(\tilde J, \tilde g)$ on $T_{D} \,
{\mathfrak g} $ is related to the Bismut connection $\nabla^B$ of
the Hermitian structure $(J, g)$ on ${\mathfrak g}$ by
\begin{equation} \label{exprtildenablaB}
\tilde g (\tilde \nabla^B_{(X_1, X_2)} (Y_1, Y_2), (Z_1, Z_2)) = g(\nabla^B_{X_1} Y_1, Z_1) + g (D_{X_1} Y_2, Z_2),
\end{equation}
for any $X_i, Y_i, Z_i \in {\mathfrak g}$, $i = 1,2,3$. Therefore,
denoting by $\tilde c$ and $c$ the torsion $3$-forms of the Bismut connections on $T_{D} \, {\mathfrak g}$ and $\mathfrak g$,
respectively, we obtain
\begin{equation}\label{eqc}
\tilde c ((X_1, X_2), (Y_1, Y_2), (Z_1, Z_2)) = c (X_1, Y_1, Z_1),
\end{equation}
and
\begin{equation}\label{eqdc}
d\tilde c \, ((X_1, X_2), (Y_1, Y_2), (Z_1, Z_2), (W_1, W_2)) = dc
(X_1, Y_1, Z_1, W_1).
\end{equation}
This shows that the strong condition is
preserved.
\end{proof}
\begin{rem}
To construct the tangent Lie algebra we consider the flat connection $D$ as a representation of $\mathfrak{g}$ on $\R^{2n}$. If we choose the adjoint representation {${\rm ad}$} of $\mathfrak g$ on $\mathfrak g$, then the semidirect product {$\mathfrak{g}\ltimes_{{\rm ad}}\R^{2n}$} is the Lie algebra of the Lie group $TG$, the tangent bundle over $G$ \cite{BD}. In this case the conditions $DJ=Dg=0$ are satisfied if and only if $(\mathfrak{g},J)$ is a complex Lie algebra and the inner product $g$ is bi-invariant. Therefore this construction allows us to lift an invariant SKT structure $(\hat g,\hat J)$ from a Lie group $G$ to its tangent bundle $TG$ if and only if $(G,\hat J)$ is a complex Lie group and $\hat g$ is a bi-invariant metric.
\end{rem}
Since a generalized K\"ahler
structure on a $2n$-dimensional Lie algebra $\mathfrak g$ is equivalent to a pair of SKT structures $(J_+, g)$ and $(J_-, g)$, such that
$ c_+ = - c_-$, {as a consequence of the previous proposition we can prove} the following
\begin{cor}\label{cor}
Let $({\mathfrak g}, J_{\pm}, g)$ be
a generalized K\"ahler Lie algebra and $D$ a flat connection such that $Dg=0$ and
$D J_{\pm} =0$. Then, $(\tilde J_{\pm}, \tilde g)$ on $T_D \, \mathfrak{g}$ is generalized K\"ahler.
\end{cor}
\begin{proof}
We know that $\tilde J_-$ and $\tilde J_+$ are integrable and that the metric $\tilde g$ is still compatible with both complex structures. Moreover, if $({\mathfrak g}, J_{\pm}, g)$ is a generalized K\"ahler Lie algebra, then $c_+=-c_-$ and $dc_+=dc_-=0$. Therefore, {equations \eqref{eqc} and \eqref{eqdc} yield} $\tilde c_+=-\tilde c_-$ and $d\tilde c_+=d\tilde c_-=0$, i.e. $(T_D \, \mathfrak{g},\tilde J_{\pm}, \tilde g)$ is a generalized K\" ahler Lie algebra.
\end{proof}
For Hermitian Lie algebras ${\mathfrak g}$ such that the commutator $[{\mathfrak g}, {\mathfrak g}]$ does not coincide with ${\mathfrak g}$ we can show that it is always possible to find an Hermitian flat connection $D$.
\begin{prop}\label{connection}
Let $(\mathfrak{g},J,g)$ be a 2n-dimensional SKT Lie algebra such that $\mathfrak{g}^1=[\mathfrak{g},\mathfrak{g}]\subsetneq\mathfrak{g}$. Then $\mathfrak{g}$ admits a flat connection $D$ such that $DJ=Dg=0$.
\end{prop}
\begin{proof}
Let $X\in\mathfrak{g}\setminus\mathfrak{g}^1$, and choose a basis $\{e_i\}$ of $\mathfrak{g}$ such that $e_{2n}=X$ and $e_1,\dots,e_{2n-1}$ span a subspace containing $\mathfrak{g}^1$ (which is possible since $\mathfrak{g}^1\subsetneq\mathfrak{g}$). We define
\[
\begin{cases}
D_{e_i}Y=0 \quad i=1,\dots,2n-1 \\
D_{e_{2n}}Y=JY.
\end{cases}
\]
It is easy to verify that $D$ is flat. Moreover, it {satisfies} the conditions
\[
g(D_XY,Z)=-g(Y,D_XZ) \qquad\ J(D_XY)=D_X(JY)
\]
since $g$ is Hermitian.
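Explicitly, writing $Z=\sum_i z_ie_i$ one has $D_Z=z_{2n}\,J$, so that
\[
R^{D}(X,Y)=D_XD_Y-D_YD_X-D_{[X,Y]}=x_{2n}y_{2n}(J^2-J^2)-([X,Y])_{2n}\,J=0~,
\]
where $x_{2n}$, $y_{2n}$ and $([X,Y])_{2n}$ denote the coefficients along $e_{2n}$: with the above choice of basis, $[X,Y]\in\mathfrak{g}^1$ has no component along $e_{2n}$.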
\end{proof}
\begin{rem}
Note that the strict inclusion $[\mathfrak{g},\mathfrak{g}]\subsetneq\mathfrak{g}$ holds for every solvable Lie algebra $\mathfrak{g}$. {Therefore, by applying the previous proposition,} we have that every SKT solvable Lie algebra admits an SKT tangent Lie algebra.
\end{rem}
{Next we will} apply the previous construction to $4$-dimensional SKT Lie algebras. We recall that in the solvable case a $4$-dimensional SKT Lie group is unimodular if and only if it admits a compact quotient by a discrete subgroup (see \cite{MS}).
By \cite{Ha3} a complex (non-K\"ahler) surface diffeomorphic to a
$4$-dimensional compact homogeneous manifold $X =\Theta \backslash L$, where $\Theta$ is a uniform
discrete subgroup of $L$,
and which does
not admit any K\" ahler structure is one of the following:\vskip.1truecm\noindent
a) Hopf surface;\vskip.1truecm\noindent
b) Inoue surface of type ${\mathcal S}^0$;\vskip.1truecm\noindent
c) Inoue surface of type ${\mathcal S}^{\pm}$;\vskip.1truecm\noindent
d) primary Kodaira surface;\vskip.1truecm\noindent
e) secondary Kodaira surface;\vskip.1truecm\noindent
f) properly elliptic surface with first odd Betti number.\smallskip
All the previous complex (non-K\"ahler) surfaces admit an invariant SKT structure (i.e., induced by a SKT structure on the Lie algebra of $L$)
and by \cite{AG, FT} an Inoue surface of type ${\mathcal S}^0$ {admits} an invariant generalized K\"ahler structure. A $\T^2$-bundle over the Inoue surface of type ${\mathcal S}^0$ was considered in \cite{FT}
in order to construct a $6$-dimensional compact solvmanifold with a non-trivial
generalized K\"ahler structure. A similar construction
can be
done for any of the non-K\"ahler complex homogeneous surfaces, {using} the description of $L$ and $\Theta$ in
\cite{Ha3}. Indeed, in \cite{FT2} it was proved that on any non-K\"ahler compact homogeneous complex surface $X= \Theta \backslash L$
there exists a non-trivial compact $\T^2$-bundle
$M$ carrying a locally conformally balanced SKT metric.
In the sequel we will use tangent Lie algebras associated to $4$-dimensional solvmanifolds to produce examples of SKT metrics and generalized K\"ahler structures in higher dimensions.
\begin{ex}
Consider the $4$-dimensional solvable Lie algebra $\mathfrak{g}_1$ with structure equations
{\[
\begin{cases}
de^1= e^2 \wedge e^4 \\
de^2=-e^1 \wedge e^4 \\
de^3=e^1 \wedge e^2 \\
de^4=0.
\end{cases}
\]}
{To simplify the notations, we will denote $\mathfrak{g}_1$ by $(e^{24},-e^{14},e^{12},0)$.} On $\mathfrak{g}_1$ we define the integrable complex structure $Je^{2i-1}=e^{2i}$ for $i=1,2$ and the SKT inner product $g = \sum_{j = 1}^4 e^j \otimes e^j$. By \cite{Ha3}, $(\mathfrak{g}_1,J)$ is the Lie algebra corresponding to the \emph{secondary Kodaira surface}.\\
Since $e_4\notin [\mathfrak{g}_1,\mathfrak{g}_1]$, we can consider the flat connection $D$ defined in the proof of Proposition \ref{connection}.
{The tangent Lie algebra $T_D\,\mathfrak{g}_1$} has structure equations
\[
(
f^{24},
-f^{14},
f^{12},
0,
-f^{46},
f^{45},
-f^{48},
f^{47}
).
\]
Combining Propositions \ref{connection} and \ref{properties} the induced Hermitian structure $(\tilde J,\tilde g)$ is SKT.
Moreover, it is easy to verify that the Lie algebra ${\tilde \mathfrak{g}}_1 = T_D\,\mathfrak{g}_1$ is 3-step solvable and unimodular. The simply connected Lie group $\tilde G_1$ with Lie algebra $\tilde {\mathfrak g}_1$ is isomorphic to the semidirect product $\R \ltimes_{\mu} (H_3 \times \C^2)$ where $H_3$ is the real $3$-dimensional Heisenberg Lie group and $\mu$ is the automorphism
$$
\mu(t): (x + i y, u, z_1, z_2) \rightarrow (e^{i\frac{\pi}{2}t} (x + i y), u, e^{i\frac{\pi}{2}t} z_1, e^{i\frac{\pi}{2}t} z_2)
$$
by identifying the matrix
$$
\left ( \begin{array}{ccc} 1&x&u\\ 0&1&y\\ 0&0&1 \end{array} \right )
$$
in $H_3$ with $(x + i y, u) \in \C \ltimes \R$.
Arguing as in \cite{FT3} it is possible to show that ${\tilde G}_1$ admits a uniform discrete subgroup.
More generally, if $\hat D$ is a flat Hermitian connection on $({\mathfrak g}_1, J, g)$, then it is given with respect to the basis $\{e_i\}$ by
\begin{equation} \label{Dgenkodsec}
\hat D_{e_i} =0, \, i = 1,2,3, \quad \hat D_{e_4} = \left ( \begin{array}{cccc}
0&a_{1,2}&a_{1,3}&a_{1,4} \\ -a_{1,2} & 0 & - a_{1,4} &a_{1,3}\\ - a_{1,3} & a_{1,4}&0&a_{3,4}\\ - a_{1,4} & - a_{1,3} & - a_{3,4} & 0 \end{array} \right ), \, a_{i,j} \in \R
\end{equation}
The connection $D$ considered before {coincides} with $\hat D$ when $a_{1,2}=a_{3,4}=1,\ a_{1,3}=a_{1,4}=0$. Note that different choices of the coefficients can lead to {non-isomorphic} Lie algebras. Indeed, when $a_{1,2}=a_{1,3}=a_{1,4}=0$ we obtain that $T_{\hat D}\,\mathfrak{g}_1 \cong\R^2\times\mathfrak h$ for a 6-dimensional Lie algebra $\mathfrak h$, so $T_{\hat D}\,\mathfrak{g}_1 \ncong \tilde\mathfrak{g}_1$.
\end{ex}
\begin{ex}
We start {by} considering the $4$-dimensional solvable Lie algebra
\[
\mathfrak{g}_2= (a e^{14}+b e^{24}, - b e^{14} + a e^{24}, -2a e^{34}, 0), \quad {a,b \in \R - \{ 0 \},}
\]
{endowed with the two integrable complex structures $J_\pm$ defined by
\[
J_\pm e^{1}=e^{2} \qquad J_\pm e^3=\pm e^4
\]
and the inner product $g= \sum_{j = 1}^4 e^j \otimes e^j$. By \cite{Ha3}, $(\mathfrak{g}_2,J_+)$ corresponds to the \emph{Inoue surface of type ${\mathcal S}^0$}.} Defining $\omega_\pm(\cdot,\cdot) = g(J_\pm\cdot,\cdot)$ we obtain $d^c_+\omega_+=-d^c_-\omega_-=2ae^{123}$ and $dd^c_+\omega_+=dd^c_-\omega_-=0$, so $(J_\pm,g)$ is a generalized K\"ahler structure on $\mathfrak{g}_2$. We note that $e_4\notin [\mathfrak{g}_2,\mathfrak{g}_2]$, so applying Proposition \ref{connection} the connection $D$ defined by
\[
\begin{cases}
D_{e_i}=0\quad i=1,2,3 \\
D_{e_4}=J_+
\end{cases}
\]
is flat and {satisfies} $DJ_+=Dg=0$.
Thus, the induced Hermitian structure $(\tilde J_+,\tilde g)$ on {the} Lie algebra $T_D\,\mathfrak{g}_2$ with structure equations
\[
(a\,f^{14}+b\,f^{24},
-b\,f^{14}+a\,f^{24},
-2a\,f^{34},
0,
-f^{46},
f^{45},
-f^{48},
f^{47}
)
\]
is SKT. Moreover, since $J_+$ and $J_-$ commute, $DJ_-=0$; hence by Corollary \ref{cor} $(\tilde J_\pm,\tilde g)$ is a generalized K\"ahler structure on $T_D\,\mathfrak{g}_2$.
Again, $T_D\,\mathfrak{g}_2$ is 2-step solvable and unimodular. This generalized K\"ahler Lie algebra was already introduced in \cite{FT} and it was shown that the corresponding simply connected Lie group admits a compact quotient by a discrete subgroup.
More generally, a flat Hermitian connection $\hat D$ is expressed by \eqref{Dgenkodsec} with respect to the basis $\{e_i\}$. Moreover, for every choice of the coefficients $a_{1,2},a_{1,3},a_{1,4},a_{3,4}$ we find that $\hat D$ and $J_-$ commute, i.e. $\hat DJ_-=0$. So $(T_{\hat D}\,\mathfrak{g}_2,\tilde J_\pm,\tilde g)$ is a generalized K\"ahler Lie algebra for every $\hat D$.
\end{ex}
\begin{ex} Consider the $4$-dimensional nilpotent Lie algebra
\[
\mathfrak g_3=(0,0, e^{12},0)
\]
endowed with the integrable complex structure $J$ defined by $Je^{2i-1}=e^{2i}$ for $i=1,2$ and the SKT inner product $g = \sum_{j = 1}^4 e^j \otimes e^j$ {corresponding} to the \emph{primary Kodaira surface}. This is a 2-step nilpotent Lie algebra. Indeed, $[\mathfrak{g}_3,\mathfrak{g}_3]= \mathrm{span}\,\langle e_3\rangle$, {so, in order to generate the action induced by a Hermitian flat connection $D$ on $\R^4$, we need not only $e_4$ (as in the previous example) but also $e_1$ and $e_2$.}
In fact, every connection in the form
\[
D_{e_3} =\mathbf{0}, \qquad D_{e_i} = \left ( \begin{array}{cccc}
0&a_{i,1}&a_{i,2}&a_{i,3} \\ -a_{i,1} & 0 & - a_{i,3} &a_{i,2}\\ - a_{i,2} & a_{i,3}&0&a_{i,4}\\ - a_{i,3} & - a_{i,2} & - a_{i,4} & 0 \end{array} \right )\ \ i=1,2,4
\]
with $a_{i,j}\in\R$ satisfying the conditions
\[
\begin{array}{cc}
a_{2,2}(a_{1,1}-a_{1,4})=a_{1,2}(a_{2,1}-a_{2,4})\phantom{\frac{1}{2}} &\ a_{1,3}\,a_{2,2}=a_{1,2}\,a_{2,3} \\
a_{3,3}\,a_{2,2}=a_{3,2}\,a_{2,3}\phantom{\frac{1}{2}} &\ a_{2,2}(a_{3,1}-a_{3,4})=a_{3,2}(a_{2,1}-a_{2,4})
\end{array}
\]
is flat and $Dg=DJ=0$. The tangent Lie algebra $T_D\,\mathfrak{g}_3$ is 2-step solvable and unimodular as in the previous cases, but in general it is not nilpotent.
\end{ex}
\section{Taming symplectic forms on $4$-dimensional Lie groups}
We recall that on a complex manifold $(M,J)$ a \emph{taming symplectic form} is a symplectic form $\Omega$ on $M$ such that $\Omega(X,JX)>0$ for every non-zero vector field $X$ of $M$. {This is equivalent to the existence of an SKT metric $\omega$ such that $\partial\omega=\overline{\partial}\beta$ for a $\partial$-closed (2,0)-form $\beta$ (\cite{EFV})}. If $M$ is compact and $(M,J)$ admits a K\"ahler metric, the converse is also true:
\begin{prop}
Let $(M,J)$ be a compact complex manifold that admits a K\"ahler metric. Then every SKT metric induces a taming symplectic form.
\end{prop}
\begin{proof}
Since $(M,J)$ is compact and admits a K\"ahler metric, the $\partial\overline{\partial}$-lemma holds. Let $\omega$ be the fundamental 2-form of an SKT metric, i.e. $\partial\overline{\partial}\omega=0$. Applying the $\partial\overline{\partial}$-lemma to $\partial\omega$ we obtain $\partial\omega=\partial\overline{\partial}\gamma$ for some (1,0)-form $\gamma$ on $M$. Then $\partial\omega=\overline{\partial}(-\partial\gamma)$ with $\partial(-\partial\gamma)=0$, so $\omega$ induces a taming symplectic form.
\end{proof}
If $(M,J)$ is compact and 4-dimensional, then it admits a taming symplectic form if and only if it admits a K\"ahler metric \cite[Theorem 1.5]{LiZhang}, so every SKT metric on a compact 4-dimensional {K\"ahler} manifold induces a symplectic form that tames the complex structure. One can wonder whether the same holds for non-compact manifolds. We verify that it is still true in the case of invariant Hermitian structures on $4$-dimensional simply connected Lie groups.
\begin{prop}
Let $(\mathfrak{g},J)$ be a 4-dimensional Lie algebra endowed with a complex structure. Then:
\begin{enumerate}
\item $\mathfrak{g}$ admits a taming symplectic form if and only if it admits a K\"ahler metric;
\item if $\mathfrak{g}$ admits a K\"ahler metric, every SKT metric induces a taming symplectic form.
\end{enumerate}
\end{prop}
\begin{proof}
It is well known that a non-solvable Lie algebra of dimension 4 is unimodular, so by \cite{LM} it does not admit any symplectic structure. Moreover, an SKT solvable Lie algebra of dimension 4 admits a compact quotient if and only if it is unimodular. Therefore, to study taming symplectic forms on non-compact Lie algebras it is sufficient to consider the non-unimodular case. Using the classification of SKT structures on 4-dimensional Lie algebras in \cite{MS}, for every non-unimodular Lie algebra $\mathfrak{g}$ that admits an SKT structure $(J,\omega)$ we provide a $\partial$-closed (2,0)-form $\beta$ such that $\partial\omega=\overline{\partial}\beta$ and a $J$-compatible K\"ahler metric $\omega_k$.
Before starting, we note that since $\mathfrak{g}$ is 4-dimensional, the space of (2,0)-forms is generated by $\alpha^{12}$, where $\{ \alpha^1,\alpha^2 \}$ is a basis for (1,0)-forms. Thus $\beta$ is $\partial$-closed and is in the form $a\,\alpha^{12}$, where $a\in\C$.
Following the notations of \cite{MS}, we study the non-unimodular SKT 4-dimensional Lie algebras:
\begin{itemize}
\item $\R\times\mathfrak r_{3,0} = (0,e^{21},0,0)$. Every SKT metric {has} the form $\omega=e^{12}+e^{34}$ with $Je^1=e^2,\ Je^3=e^4$ and structure equations
\[
(0,0,0,u_1e^{12}+\sqrt{u_1w_1}(e^{14}-e^{23})+w_1e^{34})
\]
where the coefficients are real and $w_1>0,\ u_1\geqslant 0$. We find that $\partial\omega=\overline{\partial} (a\,\alpha^{12})$ with $a=i\,\frac{\sqrt{u_1w_1}}{2w_1}$.
Moreover, the (1,1)-form
\[
\omega_k=ne^{12}+me^{34}+m\,\frac{\sqrt{u_1w_1}}{w_1}(e^{14}-e^{23})
\]
with the conditions $n>m\,\frac{u_1}{w_1},\ m>0$ is closed and positive, so it is a K\"ahler metric with respect to $J$.
\
\item $\mathfrak{aff}_\R\times\mathfrak{aff}_\R = (0,e^{21},0,e^{43})$. Every SKT metric {has} the form $\omega=e^{12}+e^{34}+t(e^{13}+e^{24})$ with $Je^1=e^2,\ Je^3=e^4$ and structure equations
\[
(0,0,x_1e^{12}+x_3(e^{14}-e^{23})+y_2e^{34},u_1e^{12}+u_3(e^{14}-e^{23})+v_2e^{34}),
\]
where $de^2$ and $de^4$ are linearly independent and the real coefficients satisfy
\[
\begin{array}{lll}
y_{{2}}x_{{1}}-y_{{2}}u_{{3}}+v_{{2}}x_{{3}}-{x_{{3}}}^{2}=0 &&
u_{{1}}v_{{2}}-u_{{1}}x_{{3}}+u_{{3}}x_{{1}}-{u_{{3}}}^{2}=0 \\
u_{{3}}x_{{3}}-y_{{2}}u_{{1}}=0 &&
(u_1-x_3)(v_2+x_3)-(u_3+x_1)(u_3-y_2)=0.
\end{array}
\]
We find that $\partial\omega=\overline{\partial} (a\,\alpha^{12})$ with $a=-\frac{t}{2}+i\,\frac{u_3-y_2}{2(x_3+v_2)}$. More generally, every $d$-closed 3-form is exact because $b_3^{\,\text{inv}}=\dim H^3(\mathfrak{g})=0$.
Moreover, the (1,1)-form
\[
\omega_k=ne^{12}+me^{34}+p(e^{14}-e^{23})
\]
with the conditions $nx_3-mu_1+p(u_3-x_1)=0,\ nm>p^2$ and $m>0$ is closed and positive, so it is a K\"ahler metric with respect to $J$. Note that the conditions above admit a solution for every {choice} of the coefficients $x_1,x_3,u_1,u_3$. Indeed, fixing $p$, the first condition can be written as $nx_3 = mu_1-p(u_3-x_1)$, so we can choose $n$ and $m$ as large as we need in order to satisfy $nm>p^2$.
\
\item $\mathfrak r'_{4,\lambda,0} = (0,\lambda\, e^{21},e^{41},-e^{31})$, with $\lambda>0$. Every SKT metric {has} the form $\omega=e^{12}+e^{34}$ with $Je^1=e^2,\ Je^3=e^4$ and structure equations
\[
(0,x_1e^{12},y_1e^{12}+y_3e^{14},-y_3e^{13})
\]
where the coefficients are real, $y_3\neq0,\ x_1>0$ and $y_1\geqslant 0$. Under these conditions $\lambda=\vert{\frac{x_1}{y_3}}\vert$. We find that $\partial\omega=\overline{\partial} (a\,\alpha^{12})$ with $a=-\frac{y_1}{2(x_1^2+y_3^2)} (x_1+iy_3)$.
Moreover, the (1,1)-form
\[
\omega_k=ne^{12}+me^{34}+m\,\frac{y_1y_3}{x_1^2+y_3^2}(e^{14}-e^{23})-m\,\frac{y_1x_1}{x_1^2+y_3^2}(e^{13}+e^{24})
\]
with the conditions $n>m\,\frac{y_1}{x_1^2+y_3^2}(y_3-x_1),\ n,m>0$ is closed and positive, so it is a K\"ahler metric with respect to $J$.
\
\item $\mathfrak d_{4,2} = (0,2\,e^{21},-e^{31},e^{41}+e^{32})$. Every SKT metric has the form $\omega=e^{12}+e^{34}$ with $Je^1=e^2,\ Je^3=e^4$ and structure equations
\[
(0,x_1e^{12},y_1e^{12}-\frac{1}{2}x_1e^{13},u_1e^{12}+\frac{1}{2}x_1e^{14}-x_1e^{23})
\]
where the coefficients are real and $x_1>0$. We find that $\partial\omega=\overline{\partial} (a\,\alpha^{12})$ with $a=\frac{1}{3x_1} (-y_1+iu_1)$.
Moreover, the (1,1)-form
\[
\omega_k=ne^{12}+me^{34}+m\,\frac{2u_1}{3x_1}(e^{14}-e^{23})
\]
with the conditions $n>m\,\frac{4u_1^2}{9x_1^2},\ m>0$ is closed and positive, so it is a K\"ahler metric with respect to $J$.
\
\item $\mathfrak d'_{4,\lambda} = (0,\lambda\, e^{21}+e^{31},-e^{21}+\lambda\, e^{31},2\lambda\, e^{41}+e^{32})$, with $\lambda>0$. Every SKT metric has the form $\omega=e^{12}+e^{34}+t(e^{13}+e^{24})$ with $Je^1=e^2,\ Je^3=e^4$ and structure equations
\[
\left\{
\begin{aligned}
&d {e}^1 = 0 \\
&d {e}^2 = -k(1+q^2)\, {e}^{12}-kqr( {e}^{14}- {e}^{23})-kr^2\, {e}^{34} \\
&d {e}^3 = \frac{z_3q}{r}\, {e}^{12}-\frac{k}{2}\, {e}^{13}+z_3\, {e}^{14} \\
&d {e}^4 = \frac{q}{r}(kq^2+\frac{k}{2})\, {e}^{12}-z_3\, {e}^{13}+(kq^2-\frac{k}{2}) {e}^{14}-kq^2 {e}^{23}+kqr\, {e}^{34},
\end{aligned}
\right.
\]
with $q,r,k\in\R$ such that $q^2+r^2=1,\ r>0$ and $k,z_3\neq0$. Under these conditions $\lambda=\vert{\frac{k}{2z_3}}\vert$. We find that $\partial\omega=\overline{\partial} (a\,\alpha^{12})$ with $a=-\frac{t}{2}-i\frac{q}{2r}$. More generally, every $d$-closed 3-form is exact because $b_3^{\,\text{inv}}=\dim H^3(\mathfrak{g})=0$.
Moreover, the (1,1)-form
\[
\omega_k=m\,\frac{1+q^2}{r^2}e^{12}+me^{34}+m\,\frac{q}{r}(e^{14}-e^{23})
\]
with the condition $m>0$ is closed and positive, so it is a K\"ahler metric with respect to $J$.
\
\item $\mathfrak d_{4,\frac{1}{2}} = (0,\frac{1}{2}\,e^{21},\frac{1}{2}\,e^{31},e^{41}+e^{32})$. Every SKT metric has the form $\omega=e^{12}+e^{34}+t(e^{13}+e^{24})$ with $Je^1=e^2,\ Je^3=e^4$, with the structure equations considered in the previous case and the additional condition $z_3=0$. As before, we find that $\partial\omega=\overline{\partial} (a\,\alpha^{12})$ with $a=-\frac{t}{2}-i\frac{q}{2r}$ and that the (1,1)-form
\[
\omega_k=m\,\frac{1+q^2}{r^2}e^{12}+me^{34}+m\,\frac{q}{r}(e^{14}-e^{23})
\]
with the condition $m>0$ is a K\"ahler metric with respect to $J$.
\end{itemize}
\end{proof}
\section{Introduction}
Studies of the Galactic halo can help constrain the formation history of the Milky Way and the galaxy formation process in general. For example, in a recent theoretical study, \citet{har14} suggested there may be between 300 to 700 low luminosity ($<10^3$ $L_\sun$) dwarf satellite galaxies orbiting the Milky Way within 300 kpc of the Sun. However, the census of such low luminosity galaxies is currently complete only within $\sim45$ kpc of the Sun (Table 3 of \citealt{kop08}), and only at high galactic latitudes ($|b|>25\arcdeg$). A deeper, wider, and more complete census of Milky Way dwarfs would be extremely valuable, as it would allow us to test our assumptions about $\Lambda$CDM cosmology and galaxy formation, by comparing the observed distribution and properties of discovered dwarfs against those present in state-of-the-art hydrodynamic simulations, such as the APOSTLE \citep{saw16} and FIRE/\texttt{Gizmo} simulation suites \citep{hop14}.
The Galactic halo also contains remnants of accreted satellites (i.e., dwarf galaxies and globular clusters) that were disrupted by tidal forces and stretched into stellar tidal streams and clouds \citep[e.g., ][]{iba01, bel07, ses15, ber16}. Stellar streams are especially interesting because their orbits are sensitive to the properties of the Galactic potential (e.g., its shape and the total mass of the Milky Way) and thus can be used to constrain it over the range of distances spanned by the streams (e.g., \citealt{kop10, new10, ses13, bel14}). For example, the total mass of the Milky Way is currently uncertain by a factor of two, and its more precise measurement using stellar streams may help resolve (or further aggravate) some apparent issues in the theory of galaxy formation, such as the so-called ``Too-Big-To-Fail'' problem \citep{bk11, wan12}. A more precise measurement of the total mass requires detailed modeling of stellar streams, such as the Sagittarius stream, as well as precise 3D kinematics and positions of stars in distant streams \citep{apw14}. While the Gaia mission \citep{per01} can and will deliver precise proper motions of halo stars, in most cases Gaia's parallax estimates will only be marginal beyond a few kiloparsec. Studies focusing on the 6D phase space structure of the Galactic halo will need to rely on tracers with precise (spectro-)photometric distances for the foreseeable future, making ``standard candles'' such as RR Lyrae stars enormously valuable.
To measure the total mass of the Milky Way and find the faintest dwarf satellites, we need to trace the spatial and kinematic structure and substructure (i.e., stellar streams) of the Galactic halo over the greatest possible distances and with the highest possible precision in distance and velocity, and the best tracers\footnote{Tracers are objects whose distribution reflects the distribution of the majority of stars (hopefully, in the least biased way).} for this task are RR Lyrae stars.
RR Lyrae stars are old (${\rm age}>10$ Gyr), metal-poor (${\rm [Fe/H] < -0.5}$ dex), pulsating horizontal branch stars with periodically variable light curves (periods ranging from 0.2 to 0.9 days; \citealt{smi04}). They are bright stars ($M_{\rm V}=0.6\pm0.1$ mag) with distinct light curves which makes them easy to identify with time-domain imaging surveys, even to large distances (5-120 kpc for surveys with a $14 < V < 21$ magnitude range; e.g., \citealt{ses10}). These properties, and the fact that {\em almost every Milky Way dwarf satellite galaxy has at least one RR Lyrae star} (including the faintest one, Segue 1; \citealt{sim11}), open up the exciting possibility of locating very low-luminosity Milky Way dwarf satellites by using distant RR Lyrae stars, as first proposed by \citet{ses14} (also see \citealt{bw15}).
RR Lyrae stars are also precise standard candles (i.e., their intrinsic brightness is well-determined). While distances to RR Lyrae stars can be measured with 3\% uncertainty using optical data (\autoref{distance_precision}), thanks to a tight period-luminosity relation in the near-infrared, distances to RR Lyrae stars can be measured with $2\%$ or better precision using, for example, $K$-band observations \citep{bra15, bea16}. Having precise distances is crucial for measuring tangential velocities\footnote{Radial velocities of RR Lyrae stars are straightforward to measure \citep{sesar12}.} and thus the Galactic potential, as the uncertainty in tangential velocity increases proportionally with the uncertainty in distance.
As made evident by several existing catalogs of Milky Way halo RR Lyrae stars (e.g., \citealt{viv01, mic08, kel08, ses10, ses13, dra13, aba14}), selection of RR Lyrae stars has become an almost routine procedure, as long as one has access to $\sim40$ or more observations per star {\em in a single photometric bandpass}. While very useful for many Galactic studies, the above catalogs are not ideal: they are either deep with limited sky coverage (e.g., the SDSS Stripe 82 catalog covers 100 deg$^2$ and is complete up to 110 kpc, \citealt{ses10}), or have wide coverage but are not very deep (e.g., the CRTS catalog covers 20,000 deg$^2$ and is complete up to 30 kpc, \citealt{dra13}). In addition, none of the above catalogs cover the Galactic plane, and thus cannot support studies of the old ($>10$ Gyr) Galactic disk. Currently the only existing imaging survey that has the potential to overcome all of the above drawbacks, and provide a deep and wide-area catalog of RR Lyrae stars in the northern skies (${\rm Dec}>-30\arcdeg$), is the Pan-STARRS1 (PS1) $3\pi$ survey.
\begin{figure}
\plotone{PS1_multiband_timeseries.pdf}
\caption{
Multi-epoch PS1 $grizy$ photometry (i.e., light curves) of a faint RR Lyrae star. Note that the observations in different bands are not synchronous, and that the light curves are sparsely covered in time: for this object, there are a total of 45 observations over 4 years, which spans about 3000 typical RR Lyrae periods.
\label{PS1_multi-band_timeseries}}
\end{figure}
Even though the PS1 $3\pi$ survey holds a great potential for Galactic studies due to its depth and sky coverage, it is a challenging data set for selection of RR Lyrae stars due to its sparse temporal coverage, cadence, and asynchronous multi-band observations (see \autoref{PS1_multi-band_timeseries}). As we described in our previous work \citep{her16}, we overcame these challenges by characterizing time-series of PS1 sources using three statistics: a $\chi^2$-based variability indicator, a variability amplitude (in the $r_{\rm P1}$-band) $\omega_r$, and a variability time-scale $\tau$, where the latter two were obtained by fitting a damped random walk model to observed PS1 {\it multi-band} structure functions (see Section 3.2 of \citealt{her16}). When applied to the second internal PS1 data release (PV2), our approach yielded a candidate sample of $\sim150,000$ RR Lyrae stars covering three quarters of the sky and reaching up to 120 kpc from the Sun.
Building on the work by \citet{her16}, in this paper we use the final PS1 data release (PV3) to significantly increase the completeness and purity of the PS1 sample of RR Lyrae stars. Compared to \citet{her16}, we achieve higher completeness and purity 1) by having more observation epochs per object (72 in PV3 vs.~55 in PV2), 2) by excising fewer and thereby retaining more of these observations (using a machine-learning algorithm that more efficiently identifies bad photometric data), 3) by building a more detailed machine-learned model of RR Lyrae stars in data space, and 4) by developing and running CPU-intensive multi-band light curve fitting on PS1 time-series, thereby directly determining the RR Lyrae periods. The purer samples of RR Lyrae stars that this work delivers are especially important for studies of the Galactic halo (e.g., when searching for low-luminosity dwarf satellites), as stars incorrectly identified as RR Lyrae stars may cause the appearance of spurious halo substructures \citep{ses10}.
\section{Data: PS1 $3\pi$ Light Curves }\label{Sec:PS1_3pi}
From an observational point of view, RR Lyrae stars are A-F type stars with distinct, periodically variable light curves. In the following sections, we describe data that capture these properties of RR Lyrae stars.
Pan-STARRS1 \citep[PS1;][]{kai10} is a wide-field optical/near-IR survey telescope system located at Haleakal\={a} Observatory on the island of Maui in Hawai`i. The largest survey undertaken by the telescope, the PS1 $3\pi$ survey \citep{cha11}, has observed the entire sky north of declination $-30\arcdeg$ in five filter bands \citep{stu10,ton12}, reaching $5\sigma$ single epoch depths of about 22.0, 22.0, 21.9, 21.0 and 19.8 magnitudes in $g_{\rm P1}$, $r_{\rm P1}$, $i_{\rm P1}$, $z_{\rm P1}$, and $y_{\rm P1}$ bands, respectively. The uncertainty in photometric calibration of the survey is $\la0.01$ mag \citep{sch12}, and the astrometric precision of single-epoch detections is 10 milliarcsec \citep{mag08}.
The PS1 $3\pi$ survey aimed to observe each position in two pairs of exposures per filter per year, where the observations within each so-called transit-time-interval pair were taken $\sim25$ minutes apart and in the same band. Thus, the survey should have obtained about 16 observations in each band (for a total of 80), but due to bad weather and telescope downtime, fewer epochs were observed ($\sim 70$ on average).
Unlike \citet[see their Section 2.2]{her16}, we do not use bit-flags or other {\em ad hoc} procedures to exclude detections that may appear as non-astrophysical photometric outliers in PS1 time-series data (e.g., badly calibrated data, blended objects, etc.). We define a non-astrophysical photometric outlier as a photometric measurement that deviates by more than $2.5\sigma$ from its ``expected'' value, where $\sigma$ is the total photometric uncertainty of that detection. Instead, to identify and remove non-astrophysical photometric outliers, we employ a machine-learned model that uses other properties associated with a detection (e.g., its position on the chip, level of agreement with a Point-Spread-Function model, seeing, etc.) to {\em predict} whether a detection will be a $2.5\sigma$ outlier or not (Sesar et al., {\it in prep.}).
Validation tests have shown that our machine-learned outlier model identifies 80\% of all true $2.5\sigma$ outliers, and only misclassifies one good observation for every true $2.5\sigma$ outlier. For comparison, the outlier rejection approach adopted by \citet{her16} identifies almost all of the $2.5\sigma$ outliers, but it misclassifies eight good observations for every true $2.5\sigma$ outlier.
After removing photometric outliers from PV3 time-series (using our machine-learned outlier model), the average number of observations per object is 67 (out of the initial 72 observations). If we had used the outlier rejection method of \citet{her16}, the number of observations per object would have decreased to $\sim30$.
To select objects with enough epochs for multi-band light curve fitting (\autoref{multi-band_light_curve_fitting}), and signal-to-noise ratios appropriate for variability studies, we require that PS1 light curves have (after outlier rejection):
\begin{itemize}
\item at least two epochs in the $g_{\rm P1}$, $r_{\rm P1}$, and $i_{\rm P1}$ bands, and at least a total of two epochs in the ``red'' bands ($z_{\rm P1}$ and $y_{\rm P1}$),
\item a total of at least 23 epochs,
\item and an uncertainty-weighted mean magnitude of $15 < \langle m \rangle < 21.5$ in at least one of the PS1 $g_{\rm P1}, r_{\rm P1}, i_{\rm P1}$ bands.
\end{itemize}
Dereddened optical colors are useful as they provide a rough estimate of the spectral type, and could help with the identification of RR Lyrae stars (which are A-F type stars). Thus, we correct observed PS1 magnitudes for extinction using the extinction coefficients of \citet[see their Table 6]{sf11} and the \citet{sch14} dust map, and calculate $\langle g \rangle - \langle r\rangle$, $\langle r \rangle - \langle i\rangle$, $\langle i \rangle - \langle z\rangle$, $\langle z \rangle - \langle y\rangle$, and $\langle g \rangle - \langle i\rangle$ colors, where $\langle \rangle$ indicates an uncertainty-weighted mean magnitude. If for some reason an object is not observed in a particular PS1 band, the value of the color involving that band is reset to 9999.99.
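As an illustration of this step, the following minimal Python sketch computes an uncertainty-weighted mean magnitude and a dereddened $\langle g \rangle - \langle r\rangle$ color for a single object; the array names, the sentinel handling, and the quoted extinction coefficients are illustrative assumptions rather than the exact pipeline implementation.
\begin{verbatim}
import numpy as np

def weighted_mean_mag(mag, mag_err):
    """Uncertainty-weighted mean magnitude in one PS1 band."""
    w = 1.0 / mag_err**2
    return np.sum(w * mag) / np.sum(w)

def dereddened_gr(mean_g, mean_r, ebv, R_g=3.172, R_r=2.271):
    """<g> - <r> color corrected for extinction.

    R_g and R_r are illustrative PS1 extinction coefficients in the
    spirit of Schlafly & Finkbeiner (2011, their Table 6); if either
    band is missing, the 9999.99 sentinel is returned.
    """
    if mean_g > 9000.0 or mean_r > 9000.0:
        return 9999.99
    return (mean_g - R_g * ebv) - (mean_r - R_r * ebv)
\end{verbatim}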
To extract variability information from multi-band PS1 light curves, we calculate the variability indicator $\hat{\chi}^2$ (Equation 1 of \citealt{her16}), and fit a damped random walk model to PS1 multi-band structure functions. From the best-fit damped random walk model we measure the variability amplitude $\omega_r$ (in the $r_{\rm P1}$-band), and the variability time-scale $\tau$ (see Sections 3.1 to 3.3 of \citealt{her16}). As \citet{her16} have shown, these three parameters are very useful for separating different types of variable sources (e.g., quasars and RR Lyrae stars).
\section{Light Curve and Period Fitting}\label{Sec:lightcurve_fitting}
In this Section we describe several approaches to fitting the multi-band light curves, which will result in a period determination and a fit to the phased light curve.
\subsection{Multi-band Periodogram}\label{multi-band_periodogram}
A more detailed separation of variables can be obtained by studying the properties of phased (i.e., period-folded) light curves, such as the amplitude and shape \citep{ric11,dub11,elo16}. However, light curve folding requires an assumed or known period.
To measure the period of variability of a PS1 light curve, we use the multi-band periodogram of \citet{vi15} as implemented in \texttt{gatspy}, an open-source Python package for general astronomical time-series analysis\footnote{\url{http://www.astroML.org/gatspy/}} \citep{van15}. Briefly, the algorithm of \citet{vi15} models the phased light curves in each band as an arbitrary truncated Fourier series, with the period and optionally the phase, shared across all bands.
Since the phase offsets between RR Lyrae $griz$ light curves are smaller than 1\% \citep{ses10}, we adopt the \citet{vi15} shared-phase model when calculating the multi-band periodogram (i.e., we set \texttt{gatspy} parameters $N_{\rm base} = 1$ and $N_{\rm band} = 0$; see Section 5 of \citealt{vi15} for details). Furthermore, since RR Lyrae stars have periods between 0.2 and 0.9 days, we limit the period search to the same range.
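A minimal sketch of this calculation with \texttt{gatspy} is given below; the flat arrays \texttt{t}, \texttt{mag}, \texttt{err}, and \texttt{filts} (one entry per epoch, with \texttt{filts} holding the band label) follow our own naming convention and are not part of the package.
\begin{verbatim}
import numpy as np
from gatspy.periodic import LombScargleMultiband

# Shared-phase multi-band model (N_base = 1, N_band = 0).
model = LombScargleMultiband(Nterms_base=1, Nterms_band=0)
model.fit(t, mag, err, filts)

# Evaluate the periodogram over the RR Lyrae period range and
# take the period of the highest peak.
periods = np.linspace(0.2, 0.9, 200000)
power = model.periodogram(periods)
best_period = periods[np.argmax(power)]
\end{verbatim}
The top 20 periods and powers used later in \autoref{second_classifier} correspond to the 20 highest peaks of \texttt{power}.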
To test the accuracy of the \citet{vi15} period-finding algorithm on PS1 data, we use \texttt{gatspy} to calculate multi-band periodograms for 440 RR Lyrae stars previously studied by \citet{ses10}, but with the PS1 data at hand. From each periodogram, we select the period associated with the highest periodogram peak. We considered the periods measured by \citet{ses10} from more densely sampled Sloan Digital Sky Survey (SDSS; \citealt{yor00}) Stripe 82 observations to be ``true'' periods (see Section 2 of \citealt{ses10} for details on SDSS Stripe 82). We define a period to be accurately measured if the selected multi-band periodogram peak is within 2 sec of the true period.
We find that the \citet{vi15} period-finding algorithm accurately identifies the true period for 53\% of RR Lyrae stars observed by PS1 (37\% if the accuracy of 1 sec is required). Changing the \texttt{gatspy} $N_{\rm base}$ and $N_{\rm band}$ parameters did not significantly improve this result.
The use of a mathematical, rather than physical, multi-band light curve model is one of the reasons why the \citet{vi15} period-finding algorithm fails to identify the true period for half of the RR Lyrae stars observed by PS1. When the light curves are sparse, there is no guarantee that the resulting best-fit multi-band model will be physical. If the light curve model is not constrained by external information about the physics of the problem at hand, inaccurate or non-robust period estimates may result.
\begin{figure}
\plotone{multiband_fit.pdf}
\caption{
An unsuccessful attempt to accurately measure the period of the RR Lyrae star from \autoref{PS1_multi-band_timeseries}, using the multi-band periodogram of \citet{vi15}. Even though the best-fit multi-band model (sine curves) agrees with the phased PS1 light curves (symbols with errorbars) in the minimum $\chi^2$ sense, the period inferred by this modeling approach is incorrect. This happens because the algorithm permits light curve models that are not physical: the model's colors do not change as a function of phase.
\label{multi-band_fit}}
\end{figure}
An example of a best-fit, but non-physical (mathematical) multi-band model is shown in \autoref{multi-band_fit}. The $g-r$ color predicted by the best-fit model does not change as a function of phase (i.e., the difference between the green and red line), while in reality, it is well known that RR Lyrae stars have bluer $g-r$ color when they are brightest (e.g., \citealt{ses10}). Furthermore, the model predicts $i-z\sim0.2$ mag, while in reality $i-z\sim0$ mag. Due to a combination of a non-physical multi-band model and sparse PS1 data, the \citet{vi15} algorithm is unable to accurately measure the pulsation period of that particular star.
Since the \citet{vi15} algorithm fails to accurately measure the period for almost half of the RR Lyrae stars with the PS1 data at hand, we cannot use it to phase the light curves. However, we still calculate and use the multi-band periodogram in this work in the subsequent classification, as it improves the selection of RR Lyrae stars (see \autoref{second_classifier} below).
\subsection{Multi-band Light Curve Fitting and Periods}\label{multi-band_light_curve_fitting}
We now show that it is possible to accurately measure periods of RR Lyrae stars observed by PS1, by using a more realistic and physically constrained multi-band light curve model.
In principle, such a model could be obtained by extracting theoretical $griz$ light curves from pulsation models, such as those created by \citet{mar06}. However, a comparison of theoretical \citep{mar06} and empirical (i.e., observed) SDSS $ugriz$ light curves by \citet[see their Figure 8]{ses10} has shown differences between the two light curve sets that cannot be explained by observational uncertainties. Due to these differences, we are reluctant to use theoretical multi-band models of RR Lyrae stars when measuring periods.
Instead of using theoretical multi-band models, we adopt a set of 483 empirical $griz$ models. These models consist of $griz$ light curve templates that were fitted by \citet{ses10} to observed $griz$ light curves of 483 RR Lyrae stars in SDSS Stripe 82. The curves in \autoref{template_fit} illustrate one of the 483 empirical multi-band models. The set contains 379 type $ab$ multi-band templates, corresponding to RR Lyrae stars pulsating in the fundamental mode (RRab stars), and 104 type $c$ multi-band templates, corresponding to RR Lyrae stars pulsating in the first overtone (RRc stars).
\begin{figure}
\plotone{template_fit.pdf}
\caption{
Phased PS1 light curves of the object shown in \autoref{PS1_multi-band_timeseries}, folded using the best-fit period measured from multi-band template fitting of PS1 data (see \autoref{multi-band_light_curve_fitting} for details). The best-fit multi-band template ($g,r,i,z\&y$) is overplotted. Even though this star is very faint ($r\sim20.5$ mag) and its light curve is sparsely sampled in PS1 (a total of 45 observations across 3000 periods), the period of $\sim0.51$ days measured using multi-band template fitting agrees within 2 sec with the value measured by \citet{ses10} from more densely sampled SDSS Stripe 82 data.
\label{template_fit}}
\end{figure}
We define the $k$-th empirical multi-band model, which is a function of the pulsation phase $\phi$, as:
\begin{align}
g(\phi) &= FA_{\rm g}T_{\rm g}(\phi) + g_{\rm 0} - r_{\rm 0} + r^\prime\label{multi-band_templates_equations} \\
r(\phi) &= FA_{\rm r}T_{\rm r}(\phi) + r^\prime \nonumber \\
i(\phi) &= FA_{\rm i}T_{\rm i}(\phi) + i_{\rm 0} - r_{\rm 0} + r^\prime \nonumber \\
z(\phi) &= FA_{\rm z}T_{\rm z}(\phi) + z_{\rm 0} - r_{\rm 0} + r^\prime, \nonumber
\end{align}
where $T_{\rm m}(\phi)$ is the best-fit template light curve, $A_{\rm m}$ is the (known and fixed) amplitude of this template, and $m_{\rm 0}$ is the (known and fixed) best-fit magnitude at peak brightness (i.e., at $\phi = 0$) in the $m=g,r,i,z$ band of the $k$-th RR Lyrae star in SDSS Stripe 82 (see Table 2 of \citealt{ses10} for values of $A_{\rm m}$ and $m_{\rm 0}$). Note that the free parameter $r^\prime$ acts as a zero-point offset in our model, since the $griz$ light curves have been normalized by subtracting $r_{\rm 0}$ from each light curve. The free parameter $F$ allows the amplitudes of model $griz$ light curves to vary by up to 20\% from their original values (which are listed in Table 2 of \citealt{ses10}).
Qualitative inspection of phased PS1 $z$ and $y$ band light curves has shown that the two are roughly similar within photometric uncertainties. Therefore, we treat all $y$-band observations as $z$-band observations in the remainder of the analysis.
Assuming a period of $P$ days and a phase offset $\phi_0$, we calculate the phase of each PS1 observation epoch as
\begin{equation}
\phi(t~|~ P, \phi_0) = \frac{(t - 2400000)\, {\rm modulo}\, P}{P} + \phi_0\label{phase},
\end{equation}
where the time of observation $t$ is in units of heliocentric Julian days, and $-0.5 \leq \phi_0 < 0.5$. The purpose of the phase offset $\phi_0$ is to make sure the maximum light of the best-fit multi-band template occurs at $\phi = 0$. Note that the phase of an observation needs to be in the $0 \leq \phi < 1$ range. If it is outside of that range, one should add or subtract 1.
To find the best-fit values of $F$, $r^\prime$, $\phi_0$, and $P$ parameters for a given multi-band template, $k$, we minimize a $\chi^2$-like statistic calculated as
\begin{equation}
\chi^2_k = \sum_{m=g,r,i,z}\sum_{n=1}^{N_{\rm m,obs}}\left(\frac{m_{\rm m,n} - m_k\bigl (\phi(t_{\rm m,n}~|~ P, \phi_0)~|~F,r^\prime\bigr )}{\sigma_{\rm m,n}}\right)^2,
\end{equation}
where $t_{\rm m,n}$, $m_{\rm m,n}$, and $\sigma_{\rm m,n}$ are the time, magnitude, and the photometric uncertainty of the $n$-th observation in the $m=g,r,i,z$ band (e.g., $t_{\rm g, 1}$, $g_{\rm 1}$, $\sigma_{\rm g, 1}$). The best-fit parameters (period, phase offset, etc.) are measured using the Differential Evolution algorithm of \citet{sp97} as implemented in \texttt{scipy}, an open-source Python package for scientific computing\footnote{\url{http://www.scipy.org}} \citep{scipy1, scipy2}.
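To make the fitting procedure concrete, the Python sketch below evaluates the model of \autoref{multi-band_templates_equations}, the phases of \autoref{phase}, and the $\chi^2_k$ statistic for a single multi-band template, and minimizes the latter with \texttt{scipy}. The \texttt{data} and \texttt{template} dictionaries and the bound adopted for the zero-point $r^\prime$ are illustrative assumptions; the period bounds shown are those used for type $ab$ templates (see below).
\begin{verbatim}
import numpy as np
from scipy.optimize import differential_evolution

def phase(t, P, phi0):
    """Pulsation phase of each epoch, wrapped to [0, 1)."""
    phi = ((t - 2400000) % P) / P + phi0
    return phi % 1.0

def chi2_template(params, data, template):
    """Chi^2-like statistic defined in the text, for one template.

    data:     dict mapping band -> (t, mag, err) arrays, with y-band
              epochs folded into the z band.
    template: dict with normalized template interpolators T[band],
              amplitudes A[band], and peak magnitudes m0[band]
              (m0['r'] = r_0), taken from Table 2 of Sesar et al. (2010).
    """
    P, phi0, F, r_prime = params
    chi2 = 0.0
    for band in ('g', 'r', 'i', 'z'):
        t, mag, err = data[band]
        phi = phase(t, P, phi0)
        model = (F * template['A'][band] * template['T'][band](phi)
                 + template['m0'][band] - template['m0']['r'] + r_prime)
        chi2 += np.sum(((mag - model) / err)**2)
    return chi2

# Bounds on (P, phi_0, F, r') for a type ab template; the bound on r'
# is an illustrative magnitude range.
bounds = [(0.4, 0.9), (-0.5, 0.5), (0.8, 1.2), (14.0, 22.0)]
result = differential_evolution(chi2_template, bounds,
                                args=(data, template), seed=42)
best_P, best_phi0, best_F, best_rprime = result.x
\end{verbatim}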
We perform multi-band light curve fitting in two runs. In the first run, we fit every multi-band template to a PS1 multi-band time-series, and for each template record the best-fit $\chi^2$ and model parameter values. When fitting a type $ab$ multi-band template, we constrain the minimization to periods ranging from 0.4 to 0.9 days. The minimization is constrained to periods ranging from 0.2 to 0.5 days when fitting a type $c$ multi-band template. The above period ranges are typical of type $ab$ (RRab) and type $c$ (RRc) RR Lyrae stars pulsating in the fundamental or the first-overtone mode, respectively. For illustration, \autoref{period_vs_chi2} shows the 483 best-fit $\chi^2_k$ and period values, one for each template light curve, measured for a faint RR Lyrae star and a faint non-RR Lyrae object. Clearly, there is a global minimum for the RR Lyrae light curve among the $\chi^2_k$ values.
\begin{figure}
\plotone{period_vs_chi2.pdf}
\caption{
The symbols show best-fit periods and associated $\chi^2$ values obtained by fitting each of the 483 multi-band templates to PS1 light curves of a faint non-RR Lyrae object (open grey points), and the same faint RR Lyrae star shown in Figures~\ref{PS1_multi-band_timeseries} to~\ref{template_fit} (red points). The vertical dashed line shows the RR Lyrae star's true period measured by \citet{ses10} from SDSS Stripe 82 data. For this RR Lyrae star, the period associated with the best-fit template (i.e., the template with the smallest $\chi^2$ value) is consistent within 2 sec with the star's true period, indicating a successful period recovery. Note the classification power of the template fitting $\chi^2$ statistic, as it clearly separates the RR Lyrae star from the non-RR Lyrae object, even though both objects have similarly sampled PS1 light curves and signal-to-noise ratios.
\label{period_vs_chi2}}
\end{figure}
In a second round of light curve fitting, we fit only type $ab$ or type $c$ templates, depending on the type of the best-fit template (i.e., the template with the lowest $\chi^2$ value) found during the first run. This time, the period range is restricted to $\pm2$ minutes around the period associated with the best-fit template from the first run. At the end of the second fitting iteration, we save only the best-fit $\chi^2$ and model parameter values associated with the best-fit template (of the second run).
\begin{figure}
\plotone{period_comparison.pdf}
\plotone{deltaP_cumhist.pdf}
\caption{
Accuracy, precision and robustness of the RR Lyrae period estimates obtained using our multi-band light curve template fitting of PS1 light curves. The top panel compares periods measured from Stripe 82 data (by \citealt{ses10}) with those measured from PS1 data using multi-band template fitting. The dashed lines show the 1-day beat frequency aliases. The bottom panel quantifies the period recovery: the period is accurately recovered (i.e., within 2 sec) for 87\% of RRab and 74\% of RRc stars. \label{period_comparison}}
\end{figure}
The result of applying this procedure to the PS1 light curves of 440 RR Lyrae stars in SDSS Stripe 82 is illustrated in \autoref{period_comparison}. Our multi-band template fitting method accurately measures periods for 85\% of RR Lyrae stars (87\% of RRab and 74\% of RRc stars), a 32\% improvement in period recovery over the \citet{vi15} algorithm. Within 1 sec, the period is recovered for 73\% of RR Lyrae stars (a 36\% improvement versus the \citealt{vi15} algorithm). If the period fitting returns a discrepant value, this can predominantly be attributed to 1-day beat frequency aliasing (see \autoref{period_comparison}).
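For reference, the recovery fractions quoted above follow from a direct comparison of the template-fitting and Stripe 82 periods; a minimal sketch, with \texttt{P\_fit} and \texttt{P\_true} holding the two sets of periods in days, is
\begin{verbatim}
import numpy as np

tol = 2.0 / 86400.0                      # 2 seconds, expressed in days
recovered = np.abs(P_fit - P_true) < tol
print('accurately recovered: {:.0f}%'.format(100 * recovered.mean()))

# 1-day beat-frequency aliases of the true period
# (dashed lines in the top panel of the figure)
P_alias_plus = P_true / (1.0 + P_true)
P_alias_minus = P_true / (1.0 - P_true)
\end{verbatim}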
When fitting multi-band templates to PS1 light curves of 440 RR Lyrae stars in SDSS Stripe 82, there is a possibility that some Stripe 82 RR Lyrae stars will be best fit with their own multi-band templates (recall that multi-band templates were constructed from observed SDSS light curves of individual Stripe 82 RR Lyrae stars). This ``self-fitting'' can be considered as a form of overfitting\footnote{Because the correct period is more likely to be found for a star when its own template is used.} which, if it happens frequently, may inflate our estimate of the accuracy of period recovery. However, only 6 out of 440 Stripe 82 RR Lyrae stars are fit with their own multi-band templates, indicating that self-fitting does not happen frequently and that it does not significantly inflate our estimate of the accuracy of period recovery. The lack of self-fitting implies that many multi-band templates are quite similar to each other, and suggests that there is a potential for a computational speedup by removing redundant multi-band templates from the set.
As \autoref{period_vs_chi2} illustrates, the multi-band light curve fitting also provides useful information for separating RR Lyrae stars and non-RR Lyrae objects. For example, the average and best-fit $\chi^2$ values measured for the RR Lyrae star are vastly lower than the corresponding values measured for the non-RR Lyrae object, even though both objects have PS1 light curves of similar signal-to-noise ratio and sampling. This result is not unexpected, since $\chi^2$ measures the statistical agreement between the (observed) phased PS1 light curve and a best-fit empirical multi-band light curve model of an RR Lyrae star, and thus quantifies how ``RR Lyrae-like'' an object is.
In order to further characterize how RR Lyrae-like a PS1 multi-band light curve is, we measure additional properties of phased PS1 light curves, such as the entropy of the phased light curve, the \citet{ste96} $J$ index, as well as $\sim20$ other properties (called {\em features} in machine learning), and describe them in more detail in \autoref{extraction_of_features}.
\subsection{Resulting RR Lyrae Distance Precision}\label{distance_precision}
Along with measuring an accurate period ($87\%$ of the time for RRab stars), an important aspect of multi-band light curve fitting is also the increased precision in estimating the star's average flux (or magnitude). As RR Lyrae stars follow a tight period-absolute magnitude-metallicity (PLZ) relation, we show that PS1 data constrain distances of RR Lyrae stars with a 3\% precision, even if the metallicity of an RR Lyrae star is unknown.
Theoretical and empirical studies \citep[e.g.,][]{cat04,mar15,sol06,bra15} have shown that the absolute magnitudes of RR Lyrae stars can be modeled as
\begin{equation}
M = \alpha\log_{\rm 10}(P/P_{\rm ref}) + \beta({\rm [Fe/H]} - {\rm [Fe/H]_{\rm ref}}) + M_{\rm ref} + \epsilon,\label{PLZ}
\end{equation}
where $P$ is the period of pulsation, $M_{\rm ref}$ is the absolute magnitude at some reference period $P_{\rm ref}$ and metallicity ${\rm [Fe/H]_{\rm ref}}$ (here chosen to be $P_{\rm ref}=0.6$ days and ${\rm [Fe/H]_{\rm ref}}=-1.5$ dex), and $\alpha$ and $\beta$ describe the dependence of the absolute magnitude on period and metallicity, respectively. The term $\epsilon$ is a Gaussian random variable with mean 0 and standard deviation $\sigma_{\rm M}$ that models the intrinsic scatter in the absolute magnitude, convolved with unaccounted measurement uncertainties.
To constrain the PLZ relations for RRab stars in PS1 bandpasses, we use a probabilistic approach described in detail in \autoref{PLZ_derivation}, where the data include metallicities and distance moduli of PS1 RR Lyrae stars in five Galactic globular clusters. The end product of this approach is a joint posterior distribution of all model parameters. To describe the marginal posterior distributions of individual model parameters, we measure the median, the difference between the 84th percentile and the median, and the difference between the median and the 16th percentile of each marginal posterior distribution (for a Gaussian distribution, these differences are equal to $\pm1$ standard deviation). We report these values in \autoref{PS1_PLZ}.
\capstartfalse
\begin{deluxetable*}{ccccc}
\tablecolumns{5}
\tablecaption{PLZ Relations for PS1 bandpasses\label{PS1_PLZ}}
\tablehead{
\colhead{Band} & \colhead{$\alpha$} & \colhead{$\beta$} & \colhead{$M_{\rm ref}$} & \colhead{$\sigma_{\rm M}$} \\
\colhead{ } & \colhead{(mag dex$^{-1}$)} & \colhead{(mag dex$^{-1}$)} & \colhead{(mag)} & \colhead{(mag)}
}
\startdata
$g_{\rm P1}$ & $-1.7\pm0.3$ & $0.08\pm0.03$ & $0.69\pm0.01(rnd)\pm0.03(sys)$ & $0.07\pm0.01$ \\
$r_{\rm P1}$ & $-1.6\pm0.1$ & $0.09\pm0.02$ & $0.51\pm0.01(rnd)\pm0.03(sys)$ & $0.06\pm0.01$ \\
$i_{\rm P1}$ & $-1.77\pm0.08$ & $0.08\pm0.02$ & $0.46\pm0.01(rnd)\pm0.03(sys)$ & $0.05\pm0.01$ \\
$z_{\rm P1}^a$ & $-2.2\pm0.2$ & $0.06\pm0.02$ & $0.46\pm0.01(rnd)\pm0.03(sys)$ & $0.05\pm0.01$
\enddata
\tablenotetext{a}{The PLZ relation for the $z_{\rm P1}$ band was derived using $z_{\rm P1}$ and $y_{\rm P1}$ band observations, since a qualitative inspection of phased PS1 $z$ and $y$ band light curves has shown that the two are roughly similar within photometric uncertainties.}
\end{deluxetable*}
\capstarttrue
Overall, the PLZ relations behave as expected: as the bandpass moves to redder wavelengths, the dependence on the period increases (i.e., the $\alpha$ parameter becomes more negative), the dependence on the metallicity (i.e., the $\beta$ parameter) and the scatter $\sigma_{\rm M}$ decrease, and the reference absolute magnitude becomes brighter. Similar trends were also observed in previous theoretical and observational studies \citep[e.g.,][]{mar15,bra15}. Since the PLZ relation for the $i_{\rm P1}$ band is most tightly constrained and has low metallicity dependence, we use it hereafter when measuring distances.
The metallicity information is not available for the vast majority of stars in PS1. To estimate the uncertainty in absolute $i_{\rm P1}$ magnitudes when the metallicity is unknown, we assume that RR Lyrae stars are drawn from the halo metallicity distribution function, represented with a Gaussian distribution centered on $-1.5$ dex and with a standard deviation of 0.3 dex \citep{tomoII}. The resulting uncertainty in $M_{\rm i_{\rm P1}}$ is then $\sigma_{\rm M_{\rm i_{\rm P1}}}=0.06(rnd) \pm0.03(sys)$ mag, and the expression for $M_{\rm i_{\rm P1}}$ simplifies to
\begin{equation}
M_{\rm i_{\rm P1}} = -1.77\log_{\rm 10}(P/0.6) + 0.46.\label{abs_mag_i_band}
\end{equation}
To calculate distance moduli of PS1 RR Lyrae stars, we use flux-averaged $i_{\rm P1}$-band magnitude and \autoref{abs_mag_i_band}. For the uncertainty in distance modulus, we adopt $\sigma_{\rm DM} = \sigma_{\rm M_{\rm i_{\rm P1}}}=0.06(rnd) \pm0.03(sys)$ mag. This corresponds to a distance precision of $\sim3\%$, as long as dust extinction is not an important issue.
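In practice, the heliocentric distance of an RRab star then follows directly from its best-fit period and flux-averaged, dereddened $\langle i_{\rm P1} \rangle$ magnitude, as in the short sketch below (the function names are ours):
\begin{verbatim}
import numpy as np

def rrab_absolute_mag_i(period_days):
    """Absolute i_P1 magnitude, assuming the halo-mean metallicity."""
    return -1.77 * np.log10(period_days / 0.6) + 0.46

def rrab_distance_kpc(mean_i, period_days):
    """Heliocentric distance from the flux-averaged, dereddened <i_P1>."""
    dm = mean_i - rrab_absolute_mag_i(period_days)   # distance modulus
    return 10.0**(dm / 5.0 + 1.0) / 1000.0           # pc -> kpc

# Example: P = 0.55 d and <i_P1> = 20.0 mag give a distance of ~78 kpc.
\end{verbatim}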
To validate \autoref{abs_mag_i_band}, we compute median distance moduli for three dwarf spheroidal galaxies, using the PS1 data for their RR Lyrae stars. We find $DM=19.51\pm0.03$ mag for Draco, $DM=19.67\pm0.03$ mag for Sextans, and $DM=19.17\pm0.03$ mag for Ursa Minor, where the uncertainty in distance moduli is dominated by the systematic uncertainty in absolute magnitude. The values for Draco and Sextans agree well with the literature values of $19.40\pm0.17$ and $19.67\pm0.1$ \citep{bon04,lee09}, respectively. The distance modulus we measure for Ursa Minor agrees with the $19.18\pm0.12$ mag measurement of \citet{mb99}, but disagrees with the $19.4\pm0.1$ mag value measured by \citet{car02}. The rms scatter of distance moduli of RR Lyrae stars in these dwarf galaxies is $\sigma_{DM}\approx 0.05$ mag. This scatter empirically verifies the intrinsic scatter of $\sigma_{\rm M}=0.05$ mag we measured when fitting the PLZ relation in the $i_{\rm P1}$ band.
\subsection{WISE Data}\label{wise}
Quasars (QSOs) are one of the biggest sources of contamination when selecting RR Lyrae stars, especially at faint magnitudes (i.e., as the probed volume of the Universe increases). They overlap with RR Lyrae stars in $g-r$ and redder optical colors (e.g., Figure 4 of \citealt{ses07}), and may look as variable as RR Lyrae stars when observed in sparse datasets (such as PS1, Figure 3 of \citealt{her16}). Because most QSOs have a hot dust torus, they show an excess of radiation in the mid-infrared part of the spectrum, and have the WISE \citep{wri10} mid-infrared color $W12=W1-W2 > 0.5$ mag (Figure 2 of \citealt{nik14}). RR Lyrae stars, on the other hand, have $W12 \sim 0$ mag.
To better separate QSOs and RR Lyrae stars, we supplement PS1 data with the $W12$ color provided by the all-sky WISE mission by matching PS1 and WISE positions using a $1\arcsec$ radius. If a PS1 object does not have a WISE $W1$ or $W2$ measurement, or those measurements have uncertainties greater than 0.3 mag (i.e., the WISE detection is less than $5\sigma$ above the background), we reset its $W12$ color to 9999.99 (this happens for about 50\% of objects). We also calculate the $\langle i\rangle - W1$ color, and set its value to 9999.99 if one of its magnitudes is missing.
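The cross-match and color construction can be sketched with \texttt{astropy} as follows; the catalog array names and column handling are illustrative:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

ps1 = SkyCoord(ra=ps1_ra * u.deg, dec=ps1_dec * u.deg)
wise = SkyCoord(ra=wise_ra * u.deg, dec=wise_dec * u.deg)

# Nearest WISE neighbor of each PS1 object, kept if within 1 arcsec.
idx, sep2d, _ = ps1.match_to_catalog_sky(wise)
matched = sep2d < 1.0 * u.arcsec

# W1 - W2 color; keep the 9999.99 sentinel for unmatched objects or
# detections with uncertainties above 0.3 mag (i.e., below 5 sigma).
w12 = np.full(len(ps1_ra), 9999.99)
good = matched & (w1_err[idx] < 0.3) & (w2_err[idx] < 0.3)
w12[good] = w1[idx[good]] - w2[idx[good]]
\end{verbatim}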
\section{RR Lyrae Identification}\label{Sec:Method}
We wish to build a model that returns the probability of an object to be an RR Lyrae star, given the data from \autoref{Sec:PS1_3pi} and the light curve fits from \autoref{Sec:lightcurve_fitting}.
We will address this problem with a supervised machine learning approach, where we use a training set (labeled or classified objects and their data) to infer a function that determines the class of unlabeled objects from their data. Since we have a reliable training set (see \autoref{training_set}), supervised learning techniques (\autoref{classifiers}) represent a natural choice for building a classification model for a selection of RR Lyrae stars. The light curve fitting and its $\chi^2$ value (\autoref{multi-band_light_curve_fitting}) play a crucial role in this process.
\subsection{Training Set}\label{training_set}
To ``learn'' how to classify objects, supervised algorithms need to be trained using a subset of the data in which each object is labeled, i.e., its class is known in advance.
Our main training set consists of 1.9 million PS1 objects located in the SDSS Stripe 82 region ($RA > 310\arcdeg$ or $RA < 59\arcdeg$, $|Dec| < 1.25\arcdeg$) that are brighter than 21.5 mag, have at least 23 observations (\autoref{Sec:PS1_3pi}), and are at least $24\arcmin$ (2 tidal radii; \citealt{har96}, 2010 edition) away from the center of globular cluster NGC 7089. To label the objects in the training set, we match them to \citet{ses10} and \citet{suv12} catalogs of RR Lyrae stars. If the position of an RR Lyrae star in one of these two catalogs matches the position of the closest PS1 object within $1\arcsec$, we label the PS1 object as an ``RR Lyrae'' (class 1; there are 462 such matches, and only 3 RR Lyrae stars do not have a PS1 match). The remaining PS1 objects are labeled as ``non-RR Lyrae'' (class 0). Out of the 462 matched RR Lyrae stars, we know that 364 are of type $ab$ (RRab) and 98 are of type $c$ (RRc).
Since the SDSS Stripe 82 observations are slightly deeper and have 6 times more epochs than PS1 observations, we consider the \citet{ses10} and \citet{suv12} RR Lyrae catalogs to be 100\% pure and complete up to the adopted faint PS1 magnitude limit (and likely beyond). Consequently, we consider the labels of PS1 objects in SDSS Stripe 82 as the ``ground truth'' when measuring the efficiency of our selection method (i.e., the selection completeness and purity).
In SDSS Stripe 82, the majority of previously identified RR Lyrae stars are located within $\sim30$ kpc of the Sun (400 stars or $83\%$ of the sample, see Figure 10 of \citealt{ses10}), and are thus fairly bright ($r_{\rm P1} < 18.5$). This distribution is the result of Galactic structure, and is not a selection effect. To enhance the training set with fainter, and thus more distant RR Lyrae stars, we use RR Lyrae stars in the Draco dwarf spheroidal galaxy, located at a heliocentric distance of $\sim80$ kpc \citep{kin08}. If the position of an RR Lyrae star in the \citet{kin08} catalog matches the position of the closest PS1 object (with at least 23 observations) within $1\arcsec$, we label the PS1 object as an ``RR Lyrae'' (there are 261 such matches, and 5 RR Lyrae stars do not have a PS1 match). Out of 261 matched RR Lyrae stars, 205 are of type $ab$ (RRab), 30 are of type $c$ (RRc), and 25 are $d$-type RR Lyrae stars (RRd) that pulsate simultaneously in the fundamental mode and first overtone.
\subsection{Supervised Learning}\label{classifiers}
To build this classification model we use \texttt{XGBoost}\footnote{\url{https://github.com/dmlc/xgboost}} \citep{xgb}, an open-source implementation of the {\em gradient tree boosting} supervised machine learning technique \citep{fri01}.
We use gradient tree boosting because the technique produces a prediction model in the form of an ensemble of decision trees, and because tree-based models are robust to uninformative features\footnote{In machine learning, a feature is an individual measurable property of a phenomenon being observed (e.g., period of variability, color, light curve amplitude, etc.).} \citep{has09,ric11,dub11}. This fact supports the usage of a large number of features when building the classification model, even when some of them may not be useful. By permitting the classification algorithm to consider even seemingly uninformative features, we allow it to consider potential correlations between data that may improve the classification in the end.
Given the resilience of gradient tree boosting to uninformative features, and the improvement in classification that additional features may bring, the best approach seems to be to train the classifier using the full set of features. However, this is impractical for the data set at hand. While calculating mean optical colors, low-level variability statistics (\autoref{Sec:PS1_3pi}), and the multi-band periodogram takes less than a second per object, multi-band light curve fitting takes $\sim30$ min per object. Given that our training set contains about 1.9 million objects, calculating all these features for all objects in the training set would be computationally prohibitive.
Instead of training a single classifier using the full set of features, we build three progressively more detailed classifiers using progressively smaller, but purer training sets (purer in the sense that the fraction of RR Lyrae stars in the training set increases). We describe these classifiers in Sections~\ref{first_classifier} to~\ref{third_classifier}, but first give an overview of how a classifier is trained in \autoref{classifier_training}.
\subsection{Overview of Classifier Training}\label{classifier_training}
In brief, the steps in training a classifier are to:
\begin{enumerate}
\item Select training objects and input features.
\item Tune \texttt{XGBoost} hyperparameters and train the classifier.
\item Measure the classification performance with a purity vs.~completeness curve.
\end{enumerate}
While the first step is fairly self-explanatory, the remaining steps require further explanation.
The gradient tree boosting technique produces a prediction model in the form of an ensemble of decision trees. The number of trees in the ensemble, the maximum depth of a tree, the fraction of features that are considered when constructing a tree, and many other parameters that affect the manner in which the trees are grown and pruned, can be controlled via parameters\footnote{See \url{https://xgboost.readthedocs.io/en/latest//parameter.html} for the full list.} exposed by the \texttt{XGBoost} package. By properly tuning these model {\em hyperparameters}, we can ensure that the classification produced by the model is not sub-optimal (e.g., not overfitted).
Before tuning the hyperparameters, we select input features, and shuffle and split the training set into two equal-sized sets which we call {\em development} and {\em evaluation} sets. We use stratified splitting, i.e., we make sure that the ratio of RR Lyrae and non-RR Lyrae objects is equal in both sets.
To find the optimal hyperparameters, we use the \texttt{GridSearchCV} function in the \texttt{scikit-learn} open-source package for machine learning\footnote{\url{http://scikit-learn.org}} \citep{scikit-learn}. \texttt{GridSearchCV} selects test values of hyperparameters from a grid, and then measures the performance of the classification model (for the given hyperparameters) using ten-fold stratified cross-validation on the development set. In detail, the development set is split into ten subsets (using stratified splitting, see above), the model is trained on nine subsets, and the probability of being an RR Lyrae star\footnote{Computed as the mean predicted class probabilities of the trees in the forest. Given a single tree, the probability that an object is of the RR Lyrae class (according to that tree) is equal to the fraction of training samples of the RR Lyrae class in the leaf in which the object ends up.} (hereafter, the classification score\footnote{In this work, we use three classifiers and label their scores as $score_{\rm j}$, where $j=1,2,3$.}) is obtained from the trained model for objects in the tenth (i.e., withheld) subset. The performance of the classification is evaluated on the withheld set using some metric (see Sections~\ref{first_classifier} to~\ref{third_classifier} for details), and the whole procedure is repeated nine more times, each time with a different withheld set. The average of the ten performance evaluations is stored, and the set of hyperparameters with the best average performance is used when training the classifier (step 3).
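A condensed sketch of this tuning step is shown below, assuming the feature matrix \texttt{X} and labels \texttt{y} have already been assembled; the hyperparameter grid is illustrative rather than the exact grid we searched, and average precision is used as a proxy for the area under the purity vs.~completeness curve.
\begin{verbatim}
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     train_test_split)
from xgboost import XGBClassifier

# Stratified 50/50 split into development and evaluation sets.
X_dev, X_eval, y_dev, y_eval = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

param_grid = {                       # illustrative hyperparameter grid
    'n_estimators': [100, 300, 500],
    'max_depth': [3, 5, 7],
    'colsample_bytree': [0.5, 0.8, 1.0],
}

search = GridSearchCV(
    XGBClassifier(), param_grid,
    scoring='average_precision',     # ~ area under purity vs. completeness
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
search.fit(X_dev, y_dev)
best_model = search.best_estimator_
\end{verbatim}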
To verify whether the choice of the development set significantly affects the tuning of hyperparameters, we evaluate the performance of the tuned model on the evaluation set (which was not used by \texttt{GridSearchCV} during hyperparameter optimization), and then repeat the tuning process, but this time we use the evaluation set for tuning and the development set for evaluation. We find that the tuning procedure returns similar values of hyperparameters, regardless of the choice of the development set, indicating that the tuning of hyperparameters is not significantly biased by our choice of the development set.
\begin{figure}
\plotone{purity_vs_completeness.pdf}
\caption{
The power of the multi-band light curve fitting in the classification of RR Lyrae stars. The figure shows purity vs.~completeness curves produced by progressively more detailed classifiers described in Sections~\ref{first_classifier} to~\ref{third_classifier}. The ideal classifier should approach the top right corner of the diagram. The square and star symbols show the purity and completeness of the classification with the adopted choice of scores returned by the first and second classifier ($score_{\rm 1} >0.01$ and $score_{\rm 2} > 0.13$, respectively). The initial completeness is 99\% due to initial data quality cuts (\autoref{Sec:PS1_3pi}). Using the final classifier, we can select samples of RR Lyrae stars that are, for example, 90\% complete and 90\% pure.
\label{purity_completeness_curves}}
\end{figure}
Once the hyperparameters are tuned and the classifier is trained, we evaluate the performance of the classification using a purity vs.~completeness curve (see \autoref{purity_completeness_curves} for examples). To measure this curve, we use the classification scores of objects in the Stripe 82 part of the training set (see \autoref{training_set} for details on this set). The classification scores of these objects were calculated using the ten-fold cross-validation on the full (Stripe 82 and Draco) training set. For any threshold on the score and knowing the true class of each object in SDSS Stripe 82, we obtain the fraction of recovered RR Lyrae stars (completeness), and the fraction of RR Lyrae stars in the selected sample (purity).
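In \texttt{scikit-learn} terms, the purity vs.~completeness curve is simply a precision-recall curve evaluated on the cross-validated scores of the Stripe 82 objects:
\begin{verbatim}
from sklearn.metrics import precision_recall_curve

# labels: 1 for Stripe 82 RR Lyrae stars, 0 otherwise
# scores: cross-validated classification scores of the same objects
purity, completeness, thresholds = precision_recall_curve(labels, scores)
\end{verbatim}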
\subsection{First Classification Step: Optical/IR Colors and Variability}\label{first_classifier}
To train the first classifier, we use the full training set of 1.9 million objects (\autoref{training_set}), and adopt their variability statistics, as well as average PS1 and WISE colors, as input features for the classifier (for a total of 10 features, see Sections~\ref{Sec:PS1_3pi} and~\ref{wise} for details); we do not use the light curve fitting of Section~\ref{Sec:lightcurve_fitting}. When tuning the classifier, we select the values of hyperparameters that maximize the area under the purity vs.~completeness curve. The black dotted line in \autoref{purity_completeness_curves} characterizes the performance of the trained classifier.
Our first classification outperforms the one obtained by \citet{her16} for all choices of sample purity and completeness (i.e., for all thresholds on the classification score). This is perhaps foremost attributable to a substantially greater number of observations per object in our dataset (67 vs.~35 in \citealt{her16}). The hyperparameter tuning and the use of a different machine learning algorithm (\texttt{XGBoost} vs.~\texttt{scikit-learn} Random Forest) may also contribute to better performance.
Using a cut on the first classification score, $score_{\rm 1} > 0.01$, we are able to reduce the number of objects under consideration by more than three orders of magnitude (from about 1.9 million to $\sim1500$), while losing only $2\%$ of RR Lyrae stars. However, the purity of the selected sample is still unacceptably low (only 34\%). In order to improve the purity of the selected sample we need to train the classifier using additional features, such as the multi-band periodogram (\autoref{multi-band_periodogram}).
\subsection{Second Classification Step: Multi-band Periodogram}\label{second_classifier}
For the 1568 objects that pass the first classification cut ($score_{\rm 1} > 0.01$), we calculate multi-band periodograms and extract the top 20 periods from each periodogram. Along with the periods, we also extract the power of each period (i.e., the height of the periodogram at that period).
As \autoref{power_vs_score} illustrates, the multi-band periodogram contains useful information for separating RR Lyrae stars from non-RR Lyrae objects. In principle, we could improve the purity of the selection by simply keeping all objects with $power_{\rm 0} > 0.4$, without a loss of completeness. On the other hand, we may achieve even better classification if we provide the {\em entire} set of periods and their powers to \texttt{XGBoost}, and let the algorithm decide which features to use.
\begin{figure}
\plotone{power1_vs_score.pdf}
\caption{
This plot shows that the multi-band periodogram contains useful information for separating RR Lyrae stars (solid circles) from non-RR Lyrae objects (open circles). Even though some true RR Lyrae stars may have low $score_{\rm 1}$ values (i.e., are not recognized by the first classifier as likely RR Lyrae stars), the power of the top period clearly separates them from non-RR Lyrae objects (e.g., $power_{\rm 0} \ga0.4$).
\label{power_vs_score}}
\end{figure}
To improve the classification of RR Lyrae stars, we create a new feature set by combining 10 features used by the first classifier, with the top 20 periods and their powers obtained from the multi-band periodogram (for a total of 50 features). As the training set, we use $\sim1500$ objects remaining from the initial training set after the $score_{\rm 1} > 0.01$ cut. When tuning hyperparameters, we adopt values that optimize the area under the purity vs.~completeness curve. The blue dashed line in \autoref{purity_completeness_curves} characterizes the performance of the second classifier.
We find that the addition of multi-band periodogram data improves the selection of RR Lyrae stars, as evidenced by higher sample purity at a given completeness (blue dashed line in \autoref{purity_completeness_curves}). For example, at 90\% completeness, adding multi-band periodogram data increases the purity of the selected sample by 15\% (with respect to the purity delivered by the first classifier, black dotted line).
\subsection{Final Classification Step: Multi-band Light Curve Fitting}\label{third_classifier}
Given the information in hand, a nearly optimal classification may be obtained by also including features extracted from phased multi-band light curves (\autoref{multi-band_light_curve_fitting} and~\autoref{extraction_of_features}), but first, we need to fit multi-band light curves to objects under consideration.
Since multi-band light curve fitting is computationally quite expensive ($\sim30$ min per CPU and per object), we only do it for 910 training objects that have $score_{\rm 2} > 0.13$, where $score_{\rm 2}$ is the classification score produced by the second classifier. According to \autoref{purity_completeness_curves}, this selection cut returns a sample with 66\% purity and 95\% completeness. By using this cut, we avoid fitting objects that are not likely to be RR Lyrae stars (i.e., we do not waste CPU time), but at the same time, do not reject many true RR Lyrae stars (i.e., the completeness decreases by 2\% due to this cut, with respect to the 97\% completeness obtained after the $score_{\rm 1} > 0.01$ cut). In principle, we could have reached the same sample purity using a cut on the first classification score ($score_{\rm 1}$), but the decrease in completeness would have been much greater (a 6\% decrease).
In Sections~\ref{first_classifier} and~\ref{second_classifier}, we have trained classifiers using non-RR Lyrae objects (class 0) and RR Lyrae stars (class 1), that is, we have performed {\em binary} classifications. The reason for this two-step procedure was practical. In order to make the multi-band template fitting computationally feasible, we had to reduce the number of objects under consideration to a manageable level (by increasing the purity of the selected sample), while retaining as many true RR Lyrae stars as possible (i.e., by keeping the selection completeness as high as possible). Doing this required only knowing whether an object is likely an RR Lyrae star or not. By using cuts on $score_{\rm 1}$ and $score_{\rm 2}$ (binary) classification scores, we were able to reduce the number of objects from about 1.9 million to 900, while retaining $95\%$ of RR Lyrae stars.
We now take a further step, determining whether an object is a non-RR Lyrae object (class 0), a type $ab$ (class 1), or a type $c$ or $d$ RR Lyrae star (class 2) through a {\em multiclass} classification.
To train the final (multiclass) classifier, we use 910 training objects that have $score_{\rm 2} > 0.13$ (and of course, $score_{\rm 1} > 0.01$). Of these 910 objects, 541 are RRab stars, and 144 are RRc or RRd stars (based on \citealt{ses10} and \citealt{kin08} classifications). The remaining 225 objects are non-RR Lyrae objects. The feature set consists of 50 features employed by the second classifier, and 20 features extracted from phased multi-band light curves (\autoref{extraction_of_features}). Since we are training a multiclass classifier, when tuning hyperparameters we adopt values that minimize the logistic (or cross-entropy) loss \citep{PRML}. The thick red line in \autoref{purity_completeness_curves} characterizes the purity and completeness of the selection as a function of the threshold on $score_{\rm 3,ab} + score_{\rm 3,c}$, where $score_{\rm 3,ab}$ and $score_{\rm 3,c}$ are RRab and RRc classification scores, respectively.
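A sketch of this final, multiclass step is given below; class labels 0, 1, and 2 stand for non-RR Lyrae objects, RRab stars, and RRc/RRd stars, respectively, and the threshold shown on the combined score is only an example.
\begin{verbatim}
from xgboost import XGBClassifier

# Hyperparameters tuned as in the previous steps, but with
# scoring='neg_log_loss' (multiclass cross-entropy) in GridSearchCV.
clf = XGBClassifier()
clf.fit(X_train, y_train)              # y_train takes values in {0, 1, 2}

proba = clf.predict_proba(X_new)       # columns: non-RRL, RRab, RRc/RRd
score_ab, score_c = proba[:, 1], proba[:, 2]
rrlyrae = (score_ab + score_c) > 0.8   # example threshold on the combined score
\end{verbatim}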
\section{Verification and Analysis of the RR Lyrae Selection at High Galactic Latitudes}
The purity vs.~completeness curve obtained using the full classifier (the thick red line in \autoref{purity_completeness_curves}) shows that we can select samples of RR Lyrae stars that are 90\% complete and 90\% pure; we deem this a gratifying and impressive success. However, it is important to keep in mind that the purity and completeness shown in \autoref{purity_completeness_curves} are {\em integrated} over RRab and RRc stars, and over a range of distances (or magnitudes; roughly, 5 to 120 kpc, $14.5 < \langle r \rangle/{\rm mag} < 21$). Since we have reasons to expect variations in purity and completeness as a function of type and distance (e.g., because the classification becomes more uncertain as objects get fainter and light curves become noisier), and because the knowledge of such variations is important for studies of Galactic structure (e.g., when measuring the stellar number density profile), below we present a more detailed analysis of the selection of RR Lyrae stars at high galactic latitudes ($|b| > 15\arcdeg$).
\subsection{Purity and Completeness in Detail}\label{section_purity_completeness}
\begin{figure}
\plotone{purity_completeness_faint.pdf}
\caption{
The expected purity and completeness for faint RRab stars, shown as a function of the threshold on the RRab classification score, $score_{\rm 3,ab}$. The initial purity is $38\%$ due to the $score_{\rm 2}>0.13$ requirement (\autoref{third_classifier}). A threshold of 0.8 (i.e., $score_{\rm 3,ab} > 0.8$, vertical dotted line) returns an RRab sample that is $91\%$ pure and $77\%$ complete at $\sim80$ kpc ($\langle r \rangle\sim20$ mag).
\label{purity_completeness_faint}}
\end{figure}
The solid line in \autoref{purity_completeness_faint} shows the purity of the RRab selection at the faint end (at $\sim80$ kpc or $r\sim20$ mag), given a threshold on $score_{\rm 3, ab}$. To make this curve, we use 80 labeled objects from the SDSS Stripe 82 training set with $19.7 < \langle r \rangle< 20.3$ and $score_{\rm 2} > 0.13$, and calculate the fraction of true RRab stars in selected samples (given a threshold on $score_{\rm 3, ab}$).
\capstartfalse
\begin{deluxetable}{ccc}
\tablecolumns{3}
\tablecaption{Expected RRab selection purity and completeness at $\sim80$ kpc ($\langle r \rangle\sim20$ mag)\label{table_RRab_faint}}
\tablehead{
\colhead{Threshold on $score_{\rm 3,ab}$} & \colhead{Purity} & \colhead{Completeness}
}
\startdata
0.00 & 0.38 & 0.91 \\
0.05 & 0.37 & 0.90 \\
0.10 & 0.39 & 0.89 \\
0.15 & 0.44 & 0.89 \\
0.20 & 0.47 & 0.88 \\
0.25 & 0.50 & 0.88 \\
0.30 & 0.54 & 0.87 \\
0.35 & 0.55 & 0.87 \\
0.40 & 0.56 & 0.87 \\
0.45 & 0.61 & 0.87 \\
0.50 & 0.65 & 0.86 \\
0.55 & 0.72 & 0.86 \\
0.60 & 0.76 & 0.86 \\
0.65 & 0.78 & 0.84 \\
0.70 & 0.81 & 0.82 \\
0.75 & 0.90 & 0.80 \\
0.80 & 0.91 & 0.77 \\
0.85 & 0.98 & 0.74 \\
0.90 & 1.00 & 0.71 \\
0.95 & 1.00 & 0.55
\enddata
\tablecomments{A machine readable version of this table, with a 0.01 step in threshold, is available in the electronic edition of the Journal.}
\end{deluxetable}
\capstarttrue
To quantify the completeness of the RRab selection at the faint end, we use 242 RRab stars from the Draco dSph training set (see \autoref{training_set}) that have $score_{\rm 2} > 0.13$. The dashed line in \autoref{purity_completeness_faint} shows the completeness of the selection (i.e., the fraction of recovered RRab stars) as a function of the threshold on $score_{\rm 3, ab}$. This completeness includes all losses due to initial data quality cuts, and classification cuts (i.e., $score_{\rm 1} > 0.01$ and $score_{\rm 2} > 0.13$). For convenience, we tabulate the purity and completeness in \autoref{table_RRab_faint}.
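For reference, entries like those in \autoref{table_RRab_faint} follow from the labeled scores with a few lines of code. The sketch below is purely illustrative: random placeholder arrays stand in for the $score_{\rm 3,ab}$ values and labels of the Stripe 82 objects (purity) and the Draco RRab stars (completeness) described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Placeholders for the 80 labeled faint Stripe 82 objects (purity) and
# the 242 Draco RRab stars (completeness); the real scores come from
# the final classifier.
s82_scores   = rng.uniform(0, 1, 80)
s82_is_rrab  = rng.uniform(0, 1, 80) < 0.4
draco_scores = rng.uniform(0, 1, 242)

def purity(scores, is_rrab, t):
    sel = scores > t
    return is_rrab[sel].mean() if sel.any() else np.nan

def completeness(scores_of_true_rrab, t):
    return (scores_of_true_rrab > t).mean()

for t in np.arange(0.0, 1.0, 0.05):
    print(f"{t:.2f}  {purity(s82_scores, s82_is_rrab, t):.2f}"
          f"  {completeness(draco_scores, t):.2f}")
\end{verbatim}
With the actual training-set scores in place of the placeholders, sweeping the threshold in steps of 0.01 would reproduce the machine readable version of the table.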
Above, we have used the Stripe 82 sample to measure the purity, and the Draco sample to measure the completeness. We did so because the S82 sample covers a large area and thus contains a more representative sample of contaminants that we may expect to encounter elsewhere on the sky. The Draco sample was used because it contains more faint ($r\sim20$ mag) RR Lyrae stars than the Stripe 82 sample, and thus the estimate of the completeness has a lower Poisson noise.
Given the sparseness and the multi-band nature of PS1 data, it is remarkable that our selection method can deliver samples of RRab stars that are $\sim90\%$ pure and $\sim80\%$ complete (e.g., for $score_{\rm 3, ab} > 0.8$), even at distances as far as $\sim80$ kpc from the Sun. This raises hope for even better performance at the bright end.
\begin{figure}
\plotone{purity_completeness_bright_ab.pdf}
\caption{
The expected purity and completeness of selected samples of bright RRab stars, as a function of the threshold on the RRab classification score, $score_{\rm 3,ab}$. A threshold of 0.8 (vertical dotted line) returns an RRab sample that is $97\%$ pure and $92\%$ complete within $\sim40$ kpc ($\langle r \rangle\lesssim18.5$ mag).
\label{purity_completeness_bright_ab}}
\end{figure}
To measure the purity and completeness at the bright end, we select objects from the SDSS Stripe 82 training set with $score_{\rm 2}>0.13$ and $\langle r \rangle<18.5$ mag (i.e., within $\sim40$ kpc from the Sun). We use the $\langle r \rangle=18.5$ mag brightness cut because the vast majority of halo RR Lyrae stars are located within that magnitude range \citep{ses10}. The relevant curves are plotted in \autoref{purity_completeness_bright_ab} and tabulated in \autoref{table_RRab_bright}.
\capstartfalse
\begin{deluxetable}{ccc}
\tablecolumns{3}
\tablecaption{Expected RRab selection purity and completeness within $\sim40$ kpc ($\langle r \rangle<18.5$ mag)\label{table_RRab_bright}}
\tablehead{
\colhead{Threshold on $score_{\rm 3,ab}$} & \colhead{Purity} & \colhead{Completeness}
}
\startdata
0.00 & 0.66 & 1.00 \\
0.05 & 0.85 & 0.99 \\
0.10 & 0.89 & 0.98 \\
0.15 & 0.91 & 0.98 \\
0.20 & 0.92 & 0.98 \\
0.25 & 0.93 & 0.98 \\
0.30 & 0.93 & 0.98 \\
0.35 & 0.94 & 0.98 \\
0.40 & 0.94 & 0.98 \\
0.45 & 0.94 & 0.98 \\
0.50 & 0.95 & 0.97 \\
0.55 & 0.96 & 0.97 \\
0.60 & 0.96 & 0.96 \\
0.65 & 0.97 & 0.95 \\
0.70 & 0.97 & 0.95 \\
0.75 & 0.97 & 0.92 \\
0.80 & 0.97 & 0.92 \\
0.85 & 0.97 & 0.89 \\
0.90 & 0.97 & 0.88 \\
0.95 & 0.98 & 0.80
\enddata
\tablecomments{A machine readable version of this table, with a 0.01 step in threshold, is available in the electronic edition of the Journal.}
\end{deluxetable}
\capstarttrue
Finally, the purity and completeness curves characterizing the selection of bright RRc stars are shown in \autoref{purity_completeness_bright_c} and tabulated in \autoref{table_RRc_bright}. \autoref{purity_completeness_bright_c} shows that the selection of pure and complete samples of RRc stars is more challenging, both due to the lower amplitude of the pulsation and to the contamination by contact binaries with similar sinusoidal light curves. Nonetheless, it is still possible to produce samples that are over 80\% complete and pure within $\sim40$ kpc from the Sun. We do not discuss RRc stars further as they are less numerous than RRab stars (by a factor of three), and thus are of lesser importance for Galactic studies.
\begin{figure}
\plotone{purity_completeness_bright_c.pdf}
\caption{
The expected purity and completeness of selected samples of bright RRc stars, as a function of the threshold on the RRc classification score, $score_{\rm 3,c}$. In comparison with RRab stars (\autoref{purity_completeness_bright_ab}), the selection of pure and complete samples of RRc stars is more challenging.
\label{purity_completeness_bright_c}}
\end{figure}
\capstartfalse
\begin{deluxetable}{ccc}
\tablecolumns{3}
\tablecaption{Expected RRc selection purity and completeness within $\sim40$ kpc ($\langle r \rangle<18.5$ mag)\label{table_RRc_bright}}
\tablehead{
\colhead{Threshold on $score_{\rm 3,c}$} & \colhead{Purity} & \colhead{Completeness}
}
\startdata
0.00 & 0.48 & 0.96 \\
0.05 & 0.61 & 0.93 \\
0.10 & 0.74 & 0.92 \\
0.15 & 0.78 & 0.89 \\
0.20 & 0.81 & 0.89 \\
0.25 & 0.81 & 0.89 \\
0.30 & 0.83 & 0.88 \\
0.35 & 0.85 & 0.86 \\
0.40 & 0.86 & 0.86 \\
0.45 & 0.86 & 0.83 \\
0.50 & 0.88 & 0.82 \\
0.55 & 0.90 & 0.79 \\
0.60 & 0.90 & 0.78 \\
0.65 & 0.90 & 0.78 \\
0.70 & 0.91 & 0.77 \\
0.75 & 0.93 & 0.75 \\
0.80 & 0.92 & 0.70 \\
0.85 & 0.95 & 0.66 \\
0.90 & 0.94 & 0.57 \\
0.95 & 0.91 & 0.35
\enddata
\tablecomments{A machine readable version of this table, with a 0.01 step in threshold, is available in the electronic edition of the Journal.}
\end{deluxetable}
\capstarttrue
\subsection{RRab Selection Function}\label{selection_function}
Given a position on the sky and the flux-averaged $r$-band magnitude of an RR Lyrae star, what is the probability of selecting that star using the PS1 data at hand? Characterizing this selection function is of obvious importance for studies of the Galactic structure, especially when modeling the number density distribution of stars (e.g., \citealt{bov12, xue15}). In this Section we restrict ourselves to characterizing the selection function of RRab stars at high galactic latitudes ($|b| > 20\arcdeg$), because i) they are three times more numerous than RRc stars, and ii) at a given purity, they can be recovered at a much higher rate than RRc stars (compare \autoref{purity_completeness_bright_ab} vs.~\autoref{purity_completeness_bright_c}). Characterizing the selection function at low galactic latitudes would require an appropriate training set (Stripe 82 and Draco are both located at $|b| > 20\arcdeg$).
We assume that the selection function $S$ depends only on the flux-averaged $r$-band magnitude of an RRab star (not corrected for interstellar extinction), and not on its position (i.e., $S(r_{\rm F})$). This is a reasonable assumption given the uniformity of dust extinction away from the Galactic plane, and the uniformity of PS1 multi-epoch coverage. The selection function will also depend on the threshold imposed on the final classification score, $score_{\rm 3, ab}$. For the sake of simplicity we only consider the case when $score_{\rm 3, ab} > 0.8$, as this selection cut returns a sample that is appropriate for many studies (90\% purity and 80\% completeness, even at the faint end; see \autoref{section_purity_completeness}). By assuming spatial independence, we can now use the SDSS Stripe 82 and Draco training sets to determine the PS1 $3\pi$ selection function of RRab stars at high galactic latitudes. The result is illustrated in \autoref{completeness_rmag}.
\begin{figure}
\plotone{completeness_rmag.pdf}
\caption{
The RRab selection function, or the completeness of the selection of RRab stars at high galactic latitudes ($|b| >20\arcdeg$), as a function of the flux-averaged $r$-band magnitude (not corrected for interstellar matter extinction). The red curve shows the ratio of the number of selected ($n_{\rm sel}$, $score_{\rm 3,ab}>0.8$) and all RRab stars from the SDSS Stripe 82 training set ($n_{\rm all}$), in 0.5 mag wide bins. The shaded region shows the standard deviation of the bin height, $\sqrt{n_{\rm sel}(1 - n_{\rm sel}/n_{\rm all})}/n_{\rm all}$, computed based on the binomial distribution. For comparison, the star symbol shows the fraction of recovered RRab stars in the Draco dSph (see \autoref{purity_completeness_faint}). The thick yellow line shows the best-fit logistic curve (\autoref{logistic_curve}, $L = 0.91$, $k = 4.1$, $x_{\rm 0} = 20.57$ mag), and the thin blue lines illustrate the uncertainty of the fit (see \autoref{selection_function} for details).
\label{completeness_rmag}}
\end{figure}
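The red curve and the shaded band in \autoref{completeness_rmag} are simple binned statistics; a minimal sketch of that computation, with random placeholder inputs standing in for the training-set magnitudes and selection flags, reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# Placeholders: flux-averaged r magnitudes of the training-set RRab
# stars, and a flag for those passing the score_3,ab > 0.8 cut.
r_all    = rng.uniform(14.5, 21.5, 500)
selected = rng.uniform(0, 1, 500) < 0.85

bins = np.arange(14.5, 21.6, 0.5)              # 0.5 mag wide bins
n_all, _ = np.histogram(r_all, bins=bins)
n_sel, _ = np.histogram(r_all[selected], bins=bins)

frac  = n_sel / n_all                          # bin height (completeness)
# Binomial standard deviation of the bin height (shaded region).
sigma = np.sqrt(n_sel * (1.0 - n_sel / n_all)) / n_all
\end{verbatim}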
We find that the RRab selection function is approximately constant at $\sim90\%$ for $r_{\rm F}\lesssim20$ mag, after which it steeply drops to zero at $r_{\rm F}\sim21.5$ mag. To characterize the selection function, we construct a simple probabilistic model.
There are 577 RRab stars in our training set (in SDSS Stripe 82 and Draco), of which 483 pass the $score_{\rm 3,ab}>0.8$ selection cut. With each RRab star we associate a $(r_{\rm F, n}, s_{\rm n})$ pair of values, where $s_{\rm n} = 1$ if the star is selected, otherwise $s_{\rm n} = 0$ ($r_{\rm F,n}$ is the star's extincted flux-averaged $r$-band magnitude). We denote the full data set of 577 pairs of values as ${\bf d_{\rm n}} = \{ r_{\rm F,n}, s_{\rm n} \}$.
The likelihood of this data set is given by
\begin{equation}
p\left({\bf d_{\rm n}} | L, k, x_{\rm 0}\right) = \prod_{\rm n = 1}^{577}p\left(s_{\rm n} | S\left(r_{\rm F,n} | L, k, x_{\rm 0}\right)\right),
\end{equation}
where $p\left(s_{\rm n} | S\left(r_{\rm F,n} | L, k, x_{\rm 0}\right)\right)$ is the Bernoulli probability mass function with success probability given by the selection function, $S\left(r_{\rm F,n} | L, k, x_{\rm 0}\right)$.
To model the selection function, we use the logistic curve
\begin{equation}
S(r_{\rm F, n} | L, k, x_{\rm 0}) = \frac{L}{1 + \exp(k(r_{\rm F, n} - x_{\rm 0}))},\label{logistic_curve}
\end{equation}
where $L$ is the curve's maximum value, $k$ is the steepness of the curve, and $x_{\rm 0}$ is the magnitude at which the completeness drops to $50\%$.
The probability of this model given data ${\bf d_{\rm n}}$ is then
\begin{equation}
p(L, k, x_{\rm 0} | {\bf d_{\rm n}}) = p({\bf d_{\rm n}} | L, k, x_{\rm 0})p(L, k, x_{\rm 0}),
\end{equation}
where $p(L, k, x_{\rm 0})$ is the prior probability of model parameters. We impose uniform priors such that $0.4 \leq S(r_{\rm F} < 18.5) \leq 1.0$ (i.e., completeness within 40 kpc is between 40\% and 100\%) and $S(r_{\rm F} > 22) = 0$.
We explore the probability of various model parameters using the \citet{gw10} Affine Invariant Markov chain Monte Carlo (MCMC) ensemble sampler, as implemented in the \texttt{emcee} package\footnote{\url{http://dan.iel.fm/emcee/current/}} (v2.2.1, \citealt{fm13}). The most probable model of the selection function (yellow curve) is shown in \autoref{completeness_rmag}, with the best-fit logistic curve (\autoref{logistic_curve}) being $L = 0.91$, $k = 4.1$, $x_{\rm 0} = 20.57$ mag. To illustrate the uncertainty in the model, we also plot the curves associated with 200 randomly selected models from the posterior distribution (thin blue lines).
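A compact version of this fit is sketched below. The arrays \texttt{r\_F} and \texttt{s} are placeholders for the 577 $(r_{{\rm F},n}, s_{\rm n})$ pairs, the priors are implemented approximately as stated above, and the snippet is written against the current \texttt{emcee} interface rather than the v2.2.1 release used here.
\begin{verbatim}
import numpy as np
import emcee

rng = np.random.default_rng(42)
# Placeholder data standing in for the 577 (r_F, s) pairs from the
# Stripe 82 and Draco training sets (s = 1 if the star passes the
# score_3,ab > 0.8 cut, 0 otherwise).
r_F = rng.uniform(14.5, 21.5, 577)
s = (rng.uniform(0, 1, 577) <
     0.91 / (1 + np.exp(4.1 * (r_F - 20.57)))).astype(int)

def S(r, L, k, x0):        # logistic selection function of the text
    return L / (1.0 + np.exp(k * (r - x0)))

def log_posterior(theta, r, s):
    L, k, x0 = theta
    # Uniform priors: 0 < L <= 1, k > 0, plus the constraints
    # 0.4 <= S(18.5) <= 1 and S(22) ~ 0 (implemented here as < 0.01).
    if not (0.0 < L <= 1.0 and k > 0.0 and 18.0 < x0 < 22.0):
        return -np.inf
    if not (0.4 <= S(18.5, L, k, x0) <= 1.0) or S(22.0, L, k, x0) > 0.01:
        return -np.inf
    p = np.clip(S(r, L, k, x0), 1e-12, 1 - 1e-12)
    return np.sum(s * np.log(p) + (1 - s) * np.log(1 - p))   # Bernoulli

ndim, nwalkers = 3, 32
p0 = np.array([0.9, 4.0, 20.5]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(r_F, s))
sampler.run_mcmc(p0, 3000)
samples = sampler.get_chain(discard=1000, flat=True)
\end{verbatim}
Evaluating the logistic curve for randomly drawn posterior samples then yields curves analogous to the thin blue lines in \autoref{completeness_rmag}.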
\section{PS1 Catalog of RR Lyrae Stars}\label{catalog}
We have applied the above multi-step selection procedure to about 500 million PS1 objects that pass PS1 data quality cuts (\autoref{Sec:PS1_3pi}), and have calculated final RRab and RRc classification scores ($score_{\rm 3,ab}$, and $score_{\rm 3,c}$) for 240,000 objects. We report their positions, distances, PS1 photometry, and classification scores in \autoref{PS1_RRLyrae_table}. A total of $\sim400,000$ CPU hours of super-computing time was used to process all of the data and calculate the final classification scores. Below we illustrate some properties of this sample and leave a more detailed analysis of the distribution of RR Lyrae stars in the Galactic halo for future studies.
To illustrate the coverage of the PS1 catalog of RR Lyrae stars, we have selected a sample of $\sim 45,000$ highly probable RRab stars ($score_{3,ab} > 0.8$, expected purity of 90\% and completeness of $\sim80\%$ at 80 kpc), and have plotted their angular distribution in \autoref{mollweide_projection}.
\begin{figure*}
\plotone{RRLyrae_mollweide_3pi_finalcandidates.pdf}
\caption{
Distribution of $\sim45,000$ highly probable RRab stars ($score_{\rm 3,ab} > 0.8$, expected purity of 90\% and completeness of $\sim80\%$ at 80 kpc), shown in Mollweide projection of Galactic coordinates. A contour plot of the reddening-based $E(B-V)$ dust map \citep{sch14} is overlaid, along with the positions of four Milky Way dwarf satellite galaxies. The locations of the leading and trailing arms of the Sagittarius tidal stream are also indicated.
\label{mollweide_projection}}
\end{figure*}
The leading arm of the Sagittarius tidal stream \citep{iba01} and four Milky Way satellite galaxies are the most easily discernible features in \autoref{mollweide_projection}. However, another notable feature is an almost complete absence of {\em high probability} RRab stars (i.e., those that have $score_{\rm 3,ab} > 0.8$) in regions with high ISM extinction (e.g., $E(B-V) > 1$). Improperly dereddened photometry is the most likely reason for this lack of high probability RRab stars at low galactic latitudes.
Briefly, when dereddening photometry, we assume that all sources are located behind the dust layer. At low galactic latitudes this may not always be true, as sources may be embedded in the dust layer. After dereddening, the photometry of such sources will be overcorrected for extinction, and their optical PS1 colors will be shifted blueward from their dust-free values. In addition, improperly dereddened light curves will not be well-fit by multiband templates (\autoref{multi-band_light_curve_fitting}). As a result of these effects, true RR Lyrae stars may look like non-RR Lyrae objects, and may end up with low $score_{\rm 3, ab}$ values.
The lack of high probability RRab stars at low galactic latitudes also demonstrates the resilience of the classifier to contamination. Due to the increase in stellar number density, and the fact that some fraction of stars will be incorrectly tagged as RR Lyrae stars, one would naively expect the density of objects tagged as RR Lyrae stars to increase towards the Galactic plane. However, no such increase is observed in \autoref{mollweide_projection}. The features extracted during multiband template fitting are most likely responsible for this resilience, as even a significant increase in the number of contaminants is not sufficient to produce objects that match the multiband light curve characteristics of RR Lyrae stars.
And finally, to illustrate the efficiency of the final multiclass classifier (\autoref{third_classifier}) at separating RRab and RRc stars, we show their distribution in the period vs.~$r_{\rm P1}$-band amplitude diagram (\autoref{period_amplitude}).
\begin{figure}
\plotone{period_amplitude.pdf}
\caption{
The distribution of highly likely RRab (red; $score_{\rm 3, ab} > 0.8$) and RRc stars (blue; $score_{\rm 3, c} > 0.55$) in the period vs.~$r_{\rm P1}$-band amplitude diagram. Note the well-defined Oosterhoff I locus of RRab stars \citep{oos39,cat09}, and the less populated Oosterhoff II locus shifted to longer periods (along the lines of constant amplitude). The apparent clumps of RRc stars are likely caused by period aliasing (e.g., the 1-day beat frequency alias, see the top panel of \autoref{period_comparison}).
\label{period_amplitude}}
\end{figure}
\section{Discussion and Summary}
In this paper, we have explored with what fidelity RR Lyrae stars can be identified in multi-epoch, asynchronous multi-band photometric data. We have done this for the specific case of the PS1 $3\pi$ light curves which are very sparse; $\lesssim12$ epochs per band over 3000 typical RR Lyrae pulsation periods (i.e., 4 years). To identify RR Lyrae stars, we have employed, in particular, the fitting (and period phasing) of very specific empirical RR Lyrae light curves, and have utilized supervised machine-learning tools. While we have applied our selection only to this specific data set, many of the approaches described here will be applicable to other sparse, asynchronous multi-band data sets, such as those produced by the Dark Energy Survey (DES; \citealt{DES}) and the Large Synoptic Survey Telescope (LSST; \citealt{ive08}). For example, for its Galactic plane sub-survey, LSST is currently planning to obtain 12 observations over 4 years in each of its $ugrizy$ bandpasses (i.e., 30 observations per band over 10 years; \v{Z}.~Ivezi\'{c}, priv.~comm.), making its data set very similar to the one produced by the PS1 $3\pi$ survey. Compared to PS1, however, LSST will go deeper by at least 2 mag in $izy$ bands, allowing us to select RR Lyrae stars and study old stellar populations close to the Galactic plane, even at the far side of the Galaxy.
We demonstrated that we can precisely and accurately measure the periods (to within 2 seconds) for the vast majority of RR Lyrae stars within PS1 $3\pi$, extending to distance moduli of $\sim20$, or $\sim 100$~kpc. The high precision of the period determination may seem surprising at face value, but is owed to the long time-baseline: a 2s/period difference causes a cumulative light curve shift of 90 minutes across 3000 pulsation periods. Accurate periods are crucial for calculating the phase of spectroscopic observations, and for transforming the observed radial velocity to the center-of-mass velocity needed for kinematic studies \citep{sesar12}. The ephemerides (i.e., periods and phase offsets) provided by our catalog can thus be readily used to turn RR Lyrae stars observed by current (e.g., SDSS-IV/TDSS; \citealt{rua16}) and upcoming multi-object spectroscopic (MOS) surveys (e.g., Gaia, WEAVE, DESI; \citealt{per01,dal14,lev13}) into precise kinematic tracers of the halo structure and substructure (i.e., stellar streams). With a density of 1 deg$^{-2}$, PS1 RR Lyrae stars represent a unique ``piggyback'' project for MOS surveys, with a potentially high impact and certainly low cost ($\sim1$ target per MOS field).
Using these light curve fits as one (crucial) feature in a supervised classification of RR Lyrae, we showed that we can -- at least at high Galactic latitudes -- construct a sample of $\sim45,000$ RRab stars that has 90\% purity and 80\% completeness, even at 80 kpc from the Sun. In comparison with previous catalogs, our sample is deeper than the SDSS Stripe 82 sample of \citet{ses10}, while covering more of the sky than the CRTS sample of \citet{dra13}. The PS1 $3\pi$ data and the classification presented here even allow for a quite reliable separation of RRab and RRc types of RR Lyrae stars, as shown by the period-amplitude diagram (\autoref{period_amplitude}).
All this opens up many avenues in exploring the Galactic halo. With its second data release (DR2) expected in April 2018, the Gaia astrometric mission will provide unprecedented proper motions for PS1 RR Lyrae stars brighter than $V\sim20$ mag, but no competitive distance information (beyond a few kpc). Having precise distances is crucial for measuring tangential velocities\footnote{Radial velocities of RR Lyrae stars are straightforward to measure \citep{sesar12}.}, and thus the Galactic potential, as the uncertainty in tangential velocity increases proportionally with the uncertainty in distance. Therefore, it is particularly remarkable that, using PS1 data and a period-absolute magnitude relation, we can measure distances to RR Lyrae stars with a precision of $\sim3\%$, even for stars at 100 kpc from the Sun.
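As a simple consistency check of the quoted precision: combining the 0.06 mag random and 0.03 mag systematic distance modulus uncertainties of \autoref{PS1_RRLyrae_table} in quadrature gives $\sigma_{\rm DM}\approx0.07$ mag, and hence a fractional distance uncertainty of
\begin{equation*}
\frac{\sigma_{d}}{d}=\frac{\ln 10}{5}\,\sigma_{\rm DM}\approx0.46\times0.07\approx3\%,
\end{equation*}
independent of distance.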
An important avenue to explore with the resulting RR Lyrae catalog is the question of RR Lyrae at low-latitudes (covered by PS1). These objects open up the possibility to explore the oldest portion of the Galactic disk. At low latitudes, the selection function of the sample will, however, be considerably more complicated, warranting careful testing and characterization beyond the scope of this paper.
\capstartfalse
\begin{deluxetable*}{ccccccccccccc}
\tabletypesize{\footnotesize}
\tablecolumns{13}
\tablecaption{PS1 Catalog of RR Lyrae Stars\label{PS1_RRLyrae_table}}
\tablehead{
\colhead{R.A.} & \colhead{Decl.} & \colhead{$score_{\rm 3,ab}^a$} & \colhead{$score_{\rm 3,c}^a$} & \colhead{DM$^b$} & \colhead{Period} & \colhead{$\phi_0^c$} & \colhead{$A^{\prime}_{g}$ \dots $A_{z}^{\prime,d}$} & \colhead{$g^{\prime}$ \dots $z^{\prime,e}$} & \colhead{$T_g$ \dots $T_z^f$} & \colhead{$g_F$ \dots $z_F^g$} & \colhead{E(B-V)$^h$}
}
\startdata
180.39736 & -0.23480 & 0.57 & 0.02 & 15.44 & 0.671302 & -0.40301 & 0.22 \dots 0.11 & 15.96 \dots 15.77 & 100 \dots 100 & 16.09 \dots 15.82 & 0.020 \\
179.98457 & -0.00105 & 0.99 & 0.00 & 15.90 & 0.471807 & -0.18385 & 1.32 \dots 0.68 & 15.72 \dots 16.12 & 120 \dots 113 & 16.75 \dots 16.58 & 0.012
\enddata
\tablenotetext{a}{Final RRab and RRc classification scores.}
\tablenotetext{b}{Distance modulus calculated using the flux-averaged $i_{\rm P1}$-band magnitude and \autoref{abs_mag_i_band}. The uncertainty in distance modulus is $0.06(rnd)\pm0.03(sys)$ mag {\em for RRab stars}. This distance modulus may be biased and more uncertain for RRc stars.}
\tablenotetext{c}{Phase offset (see \autoref{phase}).}
\tablenotetext{d}{Best-fit amplitude (e.g., $A_g^\prime = FA_g$; see \autoref{multi-band_templates_equations}).}
\tablenotetext{e}{Best-fit magnitude at $\phi=0$, {\em corrected} for dust extinction using extinction coefficients of \citet{sf11} and the dust map of \citet{sch14} (e.g., $g^\prime = g_0 - r_0 + r^\prime$); see \autoref{multi-band_templates_equations}).}
\tablenotetext{f}{Best-fit template ID number (see Section 3.1 and Table 2 of \citealt{ses10}).}
\tablenotetext{g}{Flux-averaged magnitude, {\em corrected} for dust extinction using extinction coefficients of \citet{sf11} and the dust map of \citet{sch14}.}
\tablenotetext{h}{Reddening adopted from the \citet{sch14} dust map.}
\tablecomments{A machine readable version of this table will become available on Nov 1 2017 in the electronic edition of the Journal. A portion is shown here for guidance regarding its form and content. For collaborations on projects and earlier access to the PS1 catalog of RR Lyrae stars, please contact the first author.}
\end{deluxetable*}
\capstarttrue
\acknowledgments
B.S., N.H. and H.-W.R.~acknowledge funding from the European Research Council under the European Union’s Seventh Framework Programme (FP 7) ERC Grant Agreement n.~${\rm [321035]}$. H.-W.R. acknowledges support of the Miller Institute at UC Berkeley through a visiting professorship during the completion of this work. We thank the anonymous referee for the thorough review, positive comments, and constructive remarks on this manuscript. The Pan-STARRS1 Surveys (PS1) have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, and the National Aeronautics and Space Administration under Grant No.~NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No.~AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), and the Los Alamos National Laboratory.
\section{Introduction}\label{introduction}
Within the inflationary scenario, spacetime typically inflated to the
extent that comoving modes which are of cosmological size today originated
well beyond the Planck scale (and any other natural ultraviolet (UV)
cutoff scale such as the string scale). This observation has led to a
growing body of work exploring the tantalizing prospect that the
predictions of inflation could be observably affected by quantum gravity.
For early papers on this subject see, e.g.,
\cite{brandenbergermartin01-a,brandenbergermartin01-b,niemeyer01,kempf00,kempfniemeyer01}.
Technically, all possible Planck scale effects on the comoving modes, $k$,
at late times take the form of a nontrivial $k$-dependent selection of
solution to the ordinary mode equation, see, e.g., \cite{starobinsky01}.
This is because, independent of the details of Planck scale physics, each
mode's evolution equation reduces to the ordinary wave equation as soon as
its proper wave length is significantly larger than the Planck scale. It
has been possible, therefore, to determine on these general grounds that
quantum gravity effects should manifest themselves qualitatively in the
form of superimposed oscillations in the CMB spectra as well as in a
breaking of the consistency relation between the scalar$/$tensor ratio and
the tensor spectral index of single field inflation, see, e.g.,
\cite{greeneetal0104,huikinney01,greeneetal0110,greeneetal02,jerome-osc}.
Any quantitative predictions, however, are particular to each particular
theory of quantum gravity. Concretely, for wavelengths closer and closer
to the cutoff scale the framework of QFT should hold - with characteristic
increasing corrections - until it finally breaks down at the cutoff scale.
A theory of quantum gravity, be it, e.g., loop quantum gravity or $M$
theory, should eventually allow one to calculate those characteristic
corrections to the framework of QFT in this ``sub-Planckian" regime, i.e.,
in the regime of wavelengths larger than but close to the cutoff length.
In addition, the theory of quantum gravity should allow one to determine
the initial condition for the evolution of a comoving mode within the
framework of QFT, namely at the time when the mode's proper wavelength
first exceeds the cutoff length.
In this context, first-principle calculations are very difficult, of
course. Therefore, a number of authors have modelled the corrections to
the framework of QFT from quantum gravity simply as UV modifications to
the dispersion relations. The implied effects on the inflationary
predictions for the CMB have been investigated, for example, in
\cite{brandenbergermartin01-a,brandenbergermartin01-b,niemeyer01,niemeyerparentani01}.
Particular modifications to the dispersion relations for short wavelengths
have been motivated either \emph{ad hoc} or on the basis of analogies to
the propagation of waves in condensed matter systems. It was found that
the effects of Planck scale physics could be considerable, even though
there appear to be strong constraints due to a possibly strong
backreaction problem, see, e.g., \cite{tanaka00}.
A less \emph{ad hoc} approach \cite{kempf00} has been to model the
behaviour of quantum field theory in the sub-Planckian regime through the
implementation of corrections to the first quantization uncertainty
relations, see \cite{kempf98}. These corrections implement an ultraviolet
cutoff in the form of a finite minimum uncertainty in spatial distances,
which has been motivated by studies in quantum gravity and string theory,
see, e.g.,
\cite{Gross:1987ar,Amati:1988tn,Garay:1994en,Amelino-Camelia:1997uq,Witten:2001ib}.
This approach to modelling Planck scale physics in inflation was further
investigated in particular in
\cite{kempfniemeyer01,greeneetal0104,greeneetal0110,greeneetal02}. So far
it has been possible to solve only approximations of the mode equation
that arises in this approach. In fact, to solve the exact mode equation is
known to be highly nontrivial even numerically because of a particular
singular behaviour of the mode equation at the mode's starting time. Here,
we are following up on this series of papers.
In particular, we calculate here the exact solutions to the exact mode
equation in both the de Sitter and power-law backgrounds. This allows us
then to follow the mode solutions back towards the time when the proper
wavelength approaches the cutoff length. That in turn allows us to
calculate the exact behaviour of physical quantities such as the field
fluctuation amplitudes and the Hamiltonian towards the Planckian regime.
We obtain, therefore, a fully explicit model in which to explore possible
mechanisms that could impose initial conditions on comoving modes as they
enter into the regime of validity of quantum field theory.
\section{Short distance physics and inflation}\label{shortdistance}
\subsection{UV cutoff through minimum length uncertainty}
A rather general assumption about quantum field theory in the
sub-Planckian regime is that it should still possess for each coordinate a
linear operator \({\bf x}^{i}\) (inherited from first quantization) whose
formal expectation values (e.g., in the space of fields that are being
summed over in the path integral) are real. The \({\bf x}^{i}\) may or
may not commute. As shown in \cite{kempf98}, this implies that the short distance
structure of any such coordinate, considered separately,
can only be continuous, discrete, or ``unsharp'' in one of two particular ways.
Studies in quantum gravity and string theory point towards one of these
two unsharp cases, namely the case of coordinates \({\bf x}\) whose formal
uncertainty \(\Delta {\bf x}\) possesses a finite lower bound $\Delta {\bf
x}_{min}$ at the Planck or string scale. This short distance structure
arises from quantum gravity correction terms to the uncertainty relation
(see \cite{uncertainty}):
\begin{equation}
\Delta x\Delta p\geq \frac{1}{2}(1+\beta(\Delta p)^{2}+\dots)
\end{equation}
Here, \(\beta\) is a positive constant which, as is easily verified,
parametrizes the cutoff length through \(\Delta x_{min}=\sqrt{\beta}\).
The units are such that $\hbar=1$. As was first pointed out in
\cite{kempf93}, uncertainty relations of this form arise from corrections
to the canonical commutation relations which, for example in the case of
one space dimension, take this form:
\begin{equation}\label{corrcommutation}
[{\bf x},{\bf p}]=i(1+\beta {\bf p}^{2}+\dots)
\end{equation}
It was shown in \cite{ak-harmosc} that the multi-dimensional
generalization is unique to first order in $\beta$ if rotational and
translational symmetry is to be preserved.
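For completeness, we note how the minimum follows from the corrected relation to the order shown: dividing by \(\Delta p\) gives
\begin{equation*}
\Delta x\geq\frac{1}{2}\left(\frac{1}{\Delta p}+\beta\,\Delta p\right),
\end{equation*}
whose right hand side is minimized at \(\Delta p=1/\sqrt{\beta}\), yielding \(\Delta x_{min}=\sqrt{\beta}\); the higher order terms indicated by the dots may modify this value.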
Let us remark that, as was shown in
\cite{ak-sampling}, this type of natural UV cutoff implies
that physical fields possess a finite bandwidth in the information
theoretic sense: In the presence of this short distance structure one can
apply Shannon's sampling theorem to conclude that if the values of a
physical field are known on any set of points with average density above
twice the Planck density then the values of the field everywhere are
already determined and can be explicitly calculated from these
samples\footnote{The situation is mathematically identical, for example,
to that of HiFi music signals of bandwidth 20KHz, whose amplitude samples
are recorded on CDs at a rate of $4\cdot 10^4$ samples per second. Using
Shannon's theorem, CD players are able to \it precisely \rm reconstruct
the continuous music signal at \it all \rm times from these discrete
samples.}. Notice that no sampling lattice is preferred, as all sampling
lattices of sufficiently high average density can be used to reconstruct
the fields. Therefore, in a theory with this type of natural cutoff
spacetime can be viewed as continuous, in which case the conservation of
the spatial symmetries is displayed, while, fully equivalently, in the
same theory spacetime can also be viewed as discrete, in which case the
theory's UV finiteness is displayed. So far, this type of natural UV
cutoff has been considered mostly within models that break Lorentz
invariance, as we will here. The generalization to a generally covariant
setting has been started in \cite{ak-sampling-cov}.
\subsection{Mode generation in the presence of the UV cutoff}
In \cite{kempf00}, the short distance cutoff formulated in
(\ref{corrcommutation}) was applied to the theory of a minimally coupled
massless real scalar field \(\phi({\bf x},t)\) in an expanding
Friedmann-Robertson-Walker (FRW) background. This setup can be used to
describe the quantum dynamics of the tensor as well as the scalar
fluctuations in inflation. Concerning the mode evolution, the only
difference is that, in the latter case, the scale factor $a$ in the mode
equation is replaced by $z=\phi_0'a^2/a'$. Here, $\phi_0$ is the bulk zero
mode of the inflaton field with the prime denoting differentiation with
respect to \(\eta\). (It should be kept in mind, however, that the
Hamiltonian of the gravitational waves that are due to tensor modes
contributes to the energy momentum tensor only from second order.)
The approach of \cite{kempf00} was continued in \cite{kempfniemeyer01} as
well as in a series of papers by Easther \emph{et al.}
\cite{greeneetal0104,greeneetal0110,greeneetal02}. In each case, for
simplicity, the notation for the tensor modes was used (i.e. with the mode
equation containing $a$ instead of $z$), as we will also do here.
We assume the case of spatial flatness of the background spacetime and we
use the conformal time coordinate \(\eta\). Note that the UV cutoff
\(\Delta x_{min}=\sqrt{\beta}\)
is introduced in proper distances as opposed to comoving distances.
The action for the field \(\phi({x}, \eta)\) then reads, see
\cite{kempf00}:
\begin{eqnarray}
S&=&\int d\eta d^{3}x\,\frac{1}{2a}\bigg\{\left[\left(\partial_{\eta}+\frac{a'}{a}\sum_{i=1}^{3}\partial_{x^{i}}x^{i}-\frac{3a'}{a}\right)\phi\right]^{2}\nonumber\\
&&-a^{2}\sum_{i=1}^{3}\left(\partial_{x^{i}}\phi\right)^{2}\bigg\}\label{action}
\end{eqnarray}
Note that our function \(\phi_{\tilde{k}}(\eta)\) is related to
\(u_{\tilde{k}}\) in \cite{greeneetal0104} by
\(u_{\tilde{k}}=1/a^{2}\cdot \phi_{\tilde{k}}(\eta)\) and that our
function \(\nu(\eta,\tilde{k})\) is the function \(\nu(\eta,\rho)\) from
\cite{greeneetal0104} multiplied by \(a^{4}\).
Using the operators $${\bf x}^{i}: \phi(x,\eta)\rightarrow x_i\phi(x,\eta)$$
$${\bf p}^{i}: \phi(x,\eta)\rightarrow -i\partial/\partial x_i\phi(x,\eta) $$
the action can be expressed representation independently in terms of these
operators and the usual $L^2$ inner product of fields. While the fields'
expression for the action (\ref{action}) is held fixed the underlying
commutation relations are now corrected to introduce the desired cutoff
(\ref{corrcommutation}). The modification of the short distance behaviour
then manifests itself whenever one writes the action in any
representation, such as in comoving momentum space. For example, at
distances close to the cutoff length, comoving modes no longer decouple.
However, it is then still possible to find variables \((\eta,\tilde{k})\)
in which the \(\tilde{k}\) modes of the fields' Fourier transform do
decouple\footnote{Note that, as discussed in \cite{kempf00} and
\cite{kempfniemeyer01}, these variables \(\tilde{k}\) approximate the
comoving momentum variables \(k\) only at long wavelengths.}. The
decoupling modes are important as they describe the truly independent
degrees of freedom. They will be denoted as \(\phi_{\tilde{k}}(\eta)\).
The realness of the field \(\phi({\bf x},\eta)\) translates into
\(\phi_{\tilde{k}}^{*}=\phi_{-\tilde{k}}\). The action (\ref{action}) can
then be written in the form
\begin{equation}S=\int
d\eta\int_{\tilde{k}^{2}<a^{2}/e\beta}d^{3}\tilde{k}\,\mathscr{L}\label{af}\end{equation}
with the Lagrangian
\begin{equation}\label{lagrangian}
\mathscr{L}=\frac{1}{2}\nu\left\{\left|\left(\partial_{\eta}-3\frac{a'}{a}\right)\phi_{\tilde{k}}(\eta)\right|^{2}-\mu\left|\phi_{\tilde{k}}(\eta)\right|^{2}\right\},
\end{equation}
where the coefficient functions are given by
\begin{eqnarray}
\mu(\eta,\tilde{k})&:=&-\frac{a^{2}\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})}{\beta\left[1+\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})\right]^{2}},\label{mu}\\
\nu(\eta,\tilde{k})&:=&\frac{\exp\left[-\frac{3}{2}\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})\right]}{a^{4}\left[1+\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})\right]}.\label{nu}
\end{eqnarray}
The function plog used here, the ``product log'', is defined as the
inverse of the function \(x\rightarrow xe^{x}\) (also known as the Lambert
W-function). It will be important later that, at \(x=-1/e\), this
function possesses an essential singularity.
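Numerically, plog is available as the principal branch of the Lambert $W$ function. The following brief sketch (in illustrative units with \(\sqrt{\beta}=1\)) evaluates the coefficient functions (\ref{mu}) and (\ref{nu}) and exhibits the singular behaviour at the creation time.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

beta, ktilde = 1.0, 1.0        # illustrative units, sqrt(beta) = 1

def plog(x):
    """Principal branch of Lambert W (real for -1/e <= x < 0)."""
    return np.real(lambertw(x, k=0))

def mu_nu(a):
    """Coefficient functions mu and nu of Eqs. (mu) and (nu)."""
    p = plog(-beta * ktilde**2 / a**2)
    mu = -a**2 * p / (beta * (1.0 + p)**2)
    nu = np.exp(-1.5 * p) / (a**4 * (1.0 + p))
    return mu, nu

# The argument reaches -1/e exactly when a = ktilde * sqrt(e * beta),
# where plog = -1 and both coefficient functions diverge.
a_c = ktilde * np.sqrt(np.e * beta)
print(plog(-1.0 / np.e))       # -> -1.0
print(mu_nu(2.0 * a_c))        # finite values well after creation
\end{verbatim}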
Equally important will be the way in which the short distance
cutoff\footnote{For \(\beta\rightarrow 0\), (\ref{lagrangian}) reduces to
the standard action. Note that Fourier transforming and scaling only
commute up to a scaling factor of \(a^{3}\).} affects the integration
region of the action functional (\ref{af}): each mode \(\tilde{k}\) enters
the action once \(a(\eta)\) has grown enough so that the condition
\(\tilde{k}^{2}<a^{2}/e\beta\) holds, i.e., when:
\begin{equation*}
a(\eta_{c})=\tilde{k}\sqrt{e\beta}\approx\tilde{k}\Delta x_{min}.
\end{equation*} Correspondingly,
the mode equation that follows from the action (\ref{lagrangian}), see
\cite{kempf00}, {\small
\begin{equation}\label{equationofmotion}
\phi_{\tilde{k}}''+\frac{\nu'}{\nu}\phi_{\tilde{k}}'+\left[\mu-3\left(\frac{a'}{a}\right)'-9\left(\frac{a'}{a}\right)^{2}-3\frac{a'\nu'}{a\nu}\right]\phi_{\tilde{k}}=0
\end{equation}}
is an equation of motion which possesses a starting time. As had to be
expected, the implementation of a UV cutoff in an expanding spacetime has
led us to the problem of setting initial conditions on these newly
emerging comoving modes. Before we solve the mode equation to obtain the
solutions' early time behavior, let us briefly consider the mode
solutions' late time behaviour.
\subsection{Late time behaviour}
At late times, when a mode's proper wavelength far exceeds the cutoff
length, the mode becomes insensitive to the nontrivial short distance
structure of spacetime. The modified mode equation
(\ref{equationofmotion}) then reduces to the mode equation without cutoff,
which (taking into account the non-com\-mutativity of scaling and Fourier
transforming)
reads with our conventions\footnote{Recall that at large scales we also have \(\tilde{k}\rightarrow k\).}
\begin{equation}\label{philatetime}
\phi_{\tilde{k}}''-4\frac{a'}{a}\phi_{\tilde{k}}'+\left(6\left(\frac{a'}{a}\right)^{2}-3\frac{a''}{a}+\tilde{k}^{2}\right)\phi_{\tilde{k}}=0
\end{equation}
Let us derive the late time behaviour of (\ref{philatetime}), namely
\(\eta\rightarrow0\), in de Sitter space \(a(\eta)=-1/H\eta\). Since in
this case we have \(\frac{a'}{a}=-\frac{1}{\eta}\) and
\(\frac{a''}{a}=\frac{2}{\eta^{2}}\), while \(\tilde{k}^{2}\) is constant,
equation (\ref{philatetime}) on large scales becomes:
\begin{eqnarray*}
\phi_{\tilde{k}}''+4\frac{1}{\eta}\phi_{\tilde{k}}'&=&0
\end{eqnarray*}
The solution to this equation is given by
\begin{equation}
\phi_{\tilde{k}}=B_{1}+\frac{B_{2}}{\eta^{3}}.
\end{equation}
So for late times, \(\eta\rightarrow0\), the mode function
\(\phi_{\tilde{k}}(\eta)\) diverges \(\propto\frac{1}{\eta^{3}}\) in the
de Sitter case. We recall that
\(\phi_{\tilde{k}}(\eta)=a^{3}(\eta)\Phi_{\tilde{k}}(\eta)\). Thus, as
expected, the physical field's mode function \(\Phi_{\tilde{k}}(\eta)\)
approaches a constant for late times. Let us remark that the analogous
calculation for classical fluctuations indicates that classical scalar or
tensor fluctuations that existed before inflation would have their
wavelength stretched but their amplitude maintained, i.e., they would not
be ``ironed out'' by inflation. Inflation is usually thought to have lasted
long enough that, classically, the inflation-induced flattening of a
fixed proper spatial volume would hinge on the assumption that there were
no pre-existing spatial ``ripples'' down to wavelengths shorter than the
Planck length. Of course, the Planck scale is expected to rule out the
notion of pre-existing classical ripples that are shorter than the Planck
length.
\section{The Hamiltonian}\label{Hamiltoniananalysis}
Using the canonically conjugate field to $\phi_{\tilde{k}}$, see
\cite{kempf00},
\(\pi_{\tilde{k}}(\eta)=\nu\phi'_{-\tilde{k}}(\eta)-3\nu\frac{a'}{a}\phi_{-\tilde{k}}(\eta)\),
the Hamiltonian
corresponding to the action (\ref{action}) takes the form
\begin{equation}\label{Hamiltonian}
H=\int_{\tilde{k}^{2}<\frac{a^{2}}{e\beta}} d^{3}\tilde{k}
\left(\frac{1}{2\nu}\pi_{\tilde{k}}^{*}\pi_{\tilde{k}}+\frac{1}{2}
\nu\mu\,\phi_{\tilde{k}}^{*}\phi_{\tilde{k}}+3\frac{a'}{a}\pi_{\tilde{k}}
\phi_{\tilde{k}}\right).
\end{equation}
To quantize one promotes \(\hat{\phi}_{\tilde{k}}\) and
\(\hat{\pi}_{\tilde{k}}\) to operators satisfying the usual commutation
relation
\([\hat{\phi}_{\tilde{k}},\hat{\pi}_{\tilde{r}}]=i\delta^{3}(\tilde{k}-\tilde{r})\)
in momentum space. Note that the field commutators are of course modified
when expressed in position space.
In the Hamiltonian, the integration region in\-crea\-ses over time,
expressing that the field operators of comoving modes enter the
Hamiltonian only when the mode's wavelength exceeds the cutoff length.
This is significant because it means that the Hamiltonian contains
operators which act nontrivially on the Hil\-bert space dimensions of a
particular comoving mode \(\tilde{k}\) only from that time
\(\eta_{c}(\tilde{k})\) when this mode's proper wavelength starts to
exceed the cutoff length. In the literature, this time (which coincides
with the essential singularity of the plog function) has been called the
mode's ``creation time'' and we will stay with this terminology. Let us
keep in mind, however, that the dimensions on which the harmonic
oscillator of a given mode is represented are of course \it always \rm in
the Hilbert space. In the Heisenberg picture, all that happens at the
creation time of a mode is that the mode's field and conjugate field
operators enter the Hamiltonian for the first time, while conversely, in a
shrinking universe, a mode's field operators would drop from the
Hamiltonian when the mode's wavelength drops below the cutoff length.
We express the quantum field in terms of creation and annihilation
operators \(a_{\tilde{k}}^{\dagger}\) and \(a_{\tilde{k}}\) and in terms
of the mode functions \(\phi_{\tilde{k}}(\eta)\):
\begin{equation}
\hat{\phi}_{\tilde{k}}(\eta)=a_{\tilde{k}}\phi_{\tilde{k}}(\eta)+a_{-\tilde{k}}^{\dagger}\phi_{-\tilde{k}}^{*}(\eta),
\end{equation}
Note that the notation \(\phi_{\tilde{k}}(\eta)\) now no longer stands for
the classical field (which obeys $\phi^*_{\tilde{k}}=\phi_{-\tilde{k}}$) but
instead for a mode function. For the mode functions we can choose, as
usual, that \(\phi_{\tilde{k}}=\phi_{|\tilde{k}|}\).
The Wronskian condition
\begin{equation}\label{wronskian}
\nu(\eta,\tilde{k})\left[\phi_{\tilde{k}}(\eta)\phi'^{*}_{\tilde{k}}(\eta)-\phi^{*}_{\tilde{k}}(\eta)\phi'_{\tilde{k}}(\eta)\right]=i.
\end{equation}
ensures that the so-defined quantum fields obey the canonical commutation
relations.
We can then rearrange the quantized Hamiltonian in the following form that
we will later refer to: {\small
\begin{eqnarray}
\hat{H}&=&\int_{\tilde{k}^{2}<\frac{a^{2}}{e\beta}} d^{3}\tilde{k}
\left[\frac{\nu}{2}\left|\phi_{\tilde{k}}'\right|^{2}+
\frac{\nu}{2}\left(\mu-9\left(\frac{a'}{a}\right)^{2}\right)
\left|\phi_{\tilde{k}}\right|^{2}
\right]\nonumber\\
&&\times\left(a_{\tilde{k}}^{\dagger}a_{\tilde{k}}+
a_{-\tilde{k}}^{\dagger}a_{-\tilde{k}}\right)
\nonumber\\
&+&\left[\frac{\nu}{2}\phi_{-\tilde{k}}'\phi_{\tilde{k}}'+
\frac{\nu}{2}\left(\mu-9\left(\frac{a'}{a}\right)^{2}\right)
\phi_{-\tilde{k}}\phi_{\tilde{k}}\right]\nonumber\\
&&\times \left(a_{\tilde{k}}a_{-\tilde{k}}\right)\nonumber\\
&+&\left[\frac{\nu}{2}\left(\phi_{-\tilde{k}}'\phi_{\tilde{k}}'\right)^{*}+
\frac{\nu}{2}\left(\mu-9\left(\frac{a'}{a}\right)^{2}\right)
\left(\phi_{-\tilde{k}}\phi_{\tilde{k}}\right)^{*}\right]\nonumber\\
&&\times\left(a_{\tilde{k}}a_{-\tilde{k}}\right)^{\dagger}\nonumber\\
&+&\left[\frac{\nu}{2}\left|\phi_{\tilde{k}}'\right|^{2}+
\frac{\nu}{2}\left(\mu-9\left(\frac{a'}{a}\right)^{2}\right)
\left|\phi_{\tilde{k}}\right|^{2}\right]\delta^{3}(0).\label{quantizedHamiltonian}
\end{eqnarray}}
To see that this Hamiltonian gives the correct equation of motion (\ref{equationofmotion}), it is straightforward to apply the Heisenberg equations and use the definition of \(\hat{\pi}_{\tilde{k}}\). Note again the bound on the integral, which reflects the fact that each mode \(\tilde{k}\) only contributes to the Hamiltonian from its creation time \(\eta_{c}(\tilde{k})\) onwards; before that time, the operators \(a_{\tilde{k}}\) and \(a^{\dagger}_{\tilde{k}}\) are not contained in \(\hat{H}\).\\
Let us note that, curiously, in the case of a shrinking universe the
modes' creation and annihilation operators \(a_{\tilde{k}},
a^{\dagger}_{\tilde{k}}\) successively drop out of the Hamiltonian. Thus,
a growing number of states can no longer be ``reached'' by the Hamiltonian
and the field operators while the universe is contracting. All information
encoded in these states would no longer interact although it would become
relevant again during a subsequent expanding phase.
We will be interested, in particular, in the ground state energy term of
each comoving mode, given by the last line of
(\ref{quantizedHamiltonian}):
\begin{eqnarray}
\label{vacuumterm}
\rho_{vac,\tilde{k}}(\eta)&=&\frac{\nu(\eta)}{2}
\left|\phi_{\tilde{k}}'(\eta)\right|^{2}\\
&+&
\frac{\nu(\eta)}{2}\left|\phi_{\tilde{k}}(\eta)\right|^{2}\left(\mu(\eta)-9
\left(\frac{a'(\eta)}{a(\eta)}\right)^{2}\right)
\nonumber
\end{eqnarray}
Since the Hamiltonian is closely related to the \(00\) component of the
energy momentum tensor \(T^{\mu}{}_{\nu}\), (\ref{vacuumterm}) essentially
describes the energy contribution of each mode which arises when the mode
outgrows the cutoff scale. Two time dependencies determine the evolution
of the modes' ground state energy (\ref{vacuumterm}): first, the behaviour
of the coefficients \(\nu(\eta,\tilde{k})\) and \(\mu(\eta,\tilde{k})\),
and second the mode function \(\phi_{\tilde{k}}(\eta)\). While the former
are already known explicitly (see (\ref{mu}) and (\ref{nu})), as we will
discuss, the choice of solution to the equation of motion
(\ref{equationofmotion}) for the mode function will nontrivially affect
the ground state energy.
\section{Solving the mode equation}\label{eofmanalysis}
We will now derive the exact solutions to (\ref{equationofmotion}) which
obey the Wronskian condition (\ref{wronskian}). The difficulty here is due
to the plog-functions in \(\mu(\eta,\tilde{k})\) and
\(\nu(\eta,\tilde{k})\): the equation of motion for each
\(\tilde{k}\)-mode has an irregular singular point at the mode's creation
time \(\eta_{c}(\tilde{k})\). In previous work, \cite{greeneetal0104}, the
mode equation was replaced by an approximate mode equation which had a
regular singular point at the mode's creation time. The exact solutions to
this approximate mode equation were obtained both as power series and in
analytical form.
Our aim now is to obtain the exact solutions to the exact mode equation.
To this end, we employ a suitable variable transformation that turns the
irregular singular point into a regular singular point, without the need
for approximations. We then apply the Frobenius method to derive the exact
solutions, namely in terms of convergent power series and logarithms.
\subsection{Singularity transformation}
Technically, \(\eta_{c}(\tilde{k})\) is called an irregular singular point
of (\ref{equationofmotion}) because the term
\begin{equation*}
\left(\eta-\eta_{c}(\tilde{k})\right)^{2}
\left[\mu-3\left(\frac{a'}{a}\right)'-9\left(\frac{a'}{a}\right)^{2}-3\frac{a'\nu'}{a\nu}\right]
\end{equation*}
and the term
\begin{equation*}
\left(\eta-\eta_{c}(\tilde{k})\right)\cdot\frac{\nu'}{\nu}
\end{equation*}
do not have convergent Taylor expansions around this point. Let us now
define a new variable \(\tau\),
\begin{equation}\label{deftau}
\tau=\textnormal{plog}(-\beta\tilde{k}^{2}/a(\eta)^{2}),
\end{equation}
which plays the role of time, but is a function of both \(\eta\) and
\(\tilde{k}\).
The scale factor \(a(\eta)\) and the coefficient functions
\(\nu(\eta,\tilde{k})\) and \(\mu(\eta,\tilde{k})\) then
read\footnote{Note that to distinguish between functions in \(\eta\) as
opposed to functions in \(\tau\), we will write a tilde on top of
functions in \(\tau\), so \(a=a(\eta)\), but \(\tilde{a}=\tilde{a}(\tau)\)
etc. and especially \(\phi_{\tilde{k}}=\phi_{\tilde{k}}(\eta)\) but
\(\tilde{\phi}_{\tilde{k}}=\tilde{\phi}_{\tilde{k}}(\tau)\). This should
not be confused with the tilde on \(\tilde{k}\) which was introduced in
\cite{kempf00} to denote coordinates in which the Fourier modes decouple.}
\begin{eqnarray}
\tilde{a}(\tau)&:=&\tilde{k}\sqrt{\frac{\beta}{-\tau e^{\tau}}},\label{aoftau}\\
\tilde{\mu}(\tau,\tilde{k})&:=&\frac{\tilde{k}^{2}}{e^{\tau}(1+\tau)^{2}},\label{muoftau}\\
\tilde{\nu}(\tau,\tilde{k})&:=&\frac{\tau^{2}e^{\frac{\tau}{2}}}{\beta^{2}\tilde{k}^{4}(1+\tau)}.\label{nuoftau}
\end{eqnarray}
It is straightforward to translate the derivatives $\partial /\partial \eta$ into derivatives in
terms of derivatives \(\partial /\partial\tau\) and we then obtain that the equation of motion (\ref{equationofmotion}) takes the form:
\begin{eqnarray}
0 & = & \tilde{\phi}_{\tilde{k}}''+\left(\frac{\tilde{\nu}'}{\tilde{\nu}}-\frac{\eta''}{\eta'}\right)
\tilde{\phi}_{\tilde{k}}' \label{transformedeofm}\\
& & + \bigg[\tilde{\mu}\,\eta'^{2}-3\left(\frac{\tilde{a}''}{\tilde{a}}\right)- 6\left(\frac{\tilde{a}'}{\tilde{a}}\right)^{2} \nonumber \\
& & ~~~-
3\frac{\tilde{a}'\tilde{\nu}'}{\tilde{a}\tilde{\nu}}+3\frac{
\tilde{a}'}{\tilde{a}}\frac{\eta''}{\eta'}\bigg]\tilde{\phi}_{\tilde{k}}\nonumber
\end{eqnarray}
Note that we now let the prime denote differentiation with respect to \(\tau\) rather than \(\eta\).
In order to make those expressions explicit in an example, let us consider the case of de Sitter space
\(a(\eta)=-1/H\eta\). In this case, by inverting (\ref{deftau}), $\eta$ is expressed in terms of $\tau$ and $\tilde{k}$ through:
\begin{equation}\label{etaoftau}
\eta(\tau,\tilde{k})=-\frac{1}{H\tilde{k}}\frac{\sqrt{-\tau
e^{\tau}}}{\sqrt{\beta}}
\end{equation}
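This inversion is easy to check numerically; in the following sketch (with illustrative values of \(H\), \(\beta\) and \(\tilde{k}\)), evaluating plog of \(-\beta\tilde{k}^{2}/a(\eta)^{2}\) at the \(\eta\) given by (\ref{etaoftau}) returns the original \(\tau\).
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

H, beta, ktilde = 1.0, 1.0, 1.0     # illustrative values
tau = -0.3                          # any time after creation (tau_c = -1)

eta = -np.sqrt(-tau * np.exp(tau)) / (H * ktilde * np.sqrt(beta))
a = -1.0 / (H * eta)                # de Sitter scale factor
print(np.real(lambertw(-beta * ktilde**2 / a**2)))   # -> -0.3
\end{verbatim}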
Using (\ref{aoftau}), (\ref{muoftau}) and (\ref{nuoftau}), we then obtain for the mode equation in de Sitter space:
\begin{equation}\label{eofmexplicit}
\tilde{\phi}_{\tilde{k}}''+\frac{5+\tau}{2\tau(1+\tau)}
\tilde{\phi}_{\tilde{k}}'
-\left(\frac{1+15H^{2}\beta+9H^{2}\beta\tau}{4H^{2}\beta\tau}\right)
\tilde{\phi}_{\tilde{k}}=0
\end{equation}
The creation ``time'' (when a mode outgrows the minimum length) now
corresponds for all \(\tilde{k}\) to \(\tau_{c}=-1\). (This is particular
to the de Sitter case, where \(H\) is a constant.) The time \(\tau_{c}\) is still a
singular point of (\ref{eofmexplicit}). However, it is now a regular singular
point since the Taylor expansions of the term
\begin{equation*}
\left(\tau-\tau_{c}(\tilde{k})\right)^{2}\cdot\left(\frac{1+15H^{2}\beta+9H^{2}\beta\tau}{4H^{2}\beta\tau}\right)
\end{equation*}
and the term
\begin{equation*}
\left(\tau-\tau_{c}(\tilde{k})\right)\cdot\frac{5+\tau}{2\tau(1+\tau)}
\end{equation*}
indeed exist. For later convenience, we shift the \(\tau\)-variable once more so that
\(x:=\tau+1\); creation time for all modes is now \(x_{c}=0\):
\begin{equation}\label{eofminx}
\tilde{\phi}_{\tilde{k}}''+\frac{4+x}{2x(x-1)} \tilde{\phi}_{\tilde{k}}'
-\left(\frac{1+6H^{2}\beta+9H^{2}\beta x}{4H^{2}\beta(x-1)}\right)
\tilde{\phi}_{\tilde{k}}=0
\end{equation}
Having transformed the creation time into a regular singular point, we are now ready to solve the mode equation through the Frobenius method.
\subsection{The Frobenius solutions}\label{solutiondesitter}
Full details on the Frobenius method are given in Appendix \ref{frobenius}.
Using the Frobenius ansatz (see
(\ref{sol1}) in the Appendix) we obtain from (\ref{eofminx}):
\begin{eqnarray}
&&\sum_{n=0}^{\infty}a_{n}(n+r)(3-n-r)\cdot x^{n-1}\nonumber\\
&+&\sum_{n=0}^{\infty}a_{n}(n+r)(n+r-1/2)\cdot x^{n}\nonumber\\
&-&\sum_{n=0}^{\infty}a_{n}\left(\frac{1}{4H^{2}\beta}+3/2\right)\cdot x^{n+1}\nonumber\\
&-&9/4\cdot\sum_{n=0}^{\infty}a_{n}\cdot x^{n+2}=0\label{inserted}
\end{eqnarray}
The first line of (\ref{inserted}) yields the indicial equation
(\ref{characteristic}), which here has the solutions \(r_{1}=3\) and \(r_{2}=0\). The first coefficient \(a_{0}\) is an
arbitrary normalization constant and we set \(a_{0}=1\). Comparing the terms of equal powers in \(x\), we then find
\begin{eqnarray*}
a_{1}&=&\frac{r(2r-1)}{2(r+1)(r-2)},\\
a_{2}&=&\frac{1}{4(2+r)(1-r)}\\
&&\times\left(\frac{1}{H^{2}\beta}+6-\frac{r(2r-1)(2r+1)}{r-2}\right).
\end{eqnarray*}
From \(x^{2}\) on, all terms in (\ref{inserted}) contribute to the
coefficient, thus we can find a recursion formula for \(a_{n}\) for
\(n\geq 3\) as a function of \(a_{n-1}, a_{n-2}\) and \(a_{n-3}\) as well
as \(r\):
\begin{eqnarray}
a_{n}(r)&=&\frac{1}{(n+r)(n+r-3)}\label{recursion}\\
&&\times\bigg(-\frac{9}{4}a_{n-3}-\frac{1}{4}a_{n-2}\left[\frac{1}{H^{2}\beta}+6\right]\nonumber\\
&&+\frac{1}{2}a_{n-1}(n+r-1)(2n+2r-3)\bigg)\nonumber
\end{eqnarray}
The first solution to (\ref{eofminx}) is simply given by inserting \(r=r_{1}=3\) in the above expressions.\\
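The coefficients of this first solution are straightforward to generate numerically from the recursion (\ref{recursion}); the sketch below (with an illustrative value of \(H^{2}\beta\)) builds the truncated series \(\tilde{\phi}_{\tilde{k},1}(x)=\sum_{n}a_{n}x^{n+3}\). The \(r=r_{2}=0\) branch requires the logarithmic construction described next and is not reproduced here.
\begin{verbatim}
import numpy as np

def coeffs(r, H2beta, nmax):
    """Frobenius coefficients a_n(r) from the recursion in the text.

    Valid for r = 3; the r = 0 branch hits a vanishing denominator
    at n = 3 and needs the logarithmic second solution instead.
    """
    a = np.zeros(nmax + 1)
    a[0] = 1.0                                   # normalization a_0 = 1
    a[1] = r * (2*r - 1) / (2 * (r + 1) * (r - 2))
    a[2] = (1/H2beta + 6 - r*(2*r - 1)*(2*r + 1)/(r - 2)) \
           / (4 * (2 + r) * (1 - r))
    for n in range(3, nmax + 1):
        a[n] = (-2.25 * a[n-3]
                - 0.25 * a[n-2] * (1/H2beta + 6)
                + 0.5 * a[n-1] * (n + r - 1) * (2*n + 2*r - 3)) \
               / ((n + r) * (n + r - 3))
    return a

def phi1(x, H2beta, nmax=40):
    """First solution phi_1(x) = sum_n a_n x^(n+3), i.e. r = r_1 = 3."""
    a = coeffs(3.0, H2beta, nmax)
    n = np.arange(nmax + 1)
    return np.sum(a * x**(n + 3))

print(phi1(0.1, H2beta=0.01))   # mode function shortly after creation
\end{verbatim}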
Since \(r_{1}-r_{2}=N=3\) is a positive integer, the solution for
\(r_{2}=0\) is of the form given by (\ref{sol2}) in the Appendix.
The constant \(A\) can be calculated from \(a_{3}\) as
\begin{equation}
A=\lim_{r\rightarrow 0}r\cdot a_{3}(r)=\frac{1}{8H^{2}\beta}\cdot
a_{0}\label{A}.
\end{equation}
The coefficients \(c_{n}(r=0)\) for \(n=0,1,2,\dots\)
are related to the \(a_{n}\) in (\ref{recursion}) by
\begin{equation}
c_{n}(r_{2}=0)=\left[\frac{d}{dr}\left((r-r_{2})a_{n}(r)\right)\right]_{r=r_{2}=0}.
\end{equation}
The second solution to (\ref{eofminx}) is then obtained by setting
\(r=r_{2}=0\). (Equivalently, \(A\) and the coefficients \(c_{n}\) can be
calculated directly by inserting (\ref{sol2}) into (\ref{inserted}), see
Appendix \ref{frobenius}.) The two independent solutions can be combined
to form the exact general solution of (\ref{eofminx}),
\begin{equation}\label{general}
\tilde{\phi}_{\tilde{k}}(x)=C_{1}\tilde{\phi}_{\tilde{k},1}(x)+C_{2}\tilde{\phi}_{\tilde{k},2}(x).
\end{equation}
It is clear from Frobenius theory that the power series solutions
converge for all $x<1$, i.e. all \(\tau<0\). Here, \(C_{1}\) and \(C_{2}\) are complex constants whose
choice corresponds to the choice of vacuum.
Note that our solutions (\ref{general}) could be rewritten
in terms of the conformal time \(\eta\) and would then of course solve equation
(\ref{equationofmotion}).
\subsection{Wronskian condition}\label{wronskiancondition}
After the transformation
\(\eta\rightarrow\tau\) the Wronskian condition
(\ref{wronskian}) takes the form:
\begin{equation}\label{newwronskian}
\tilde{\nu}(\tau,\tilde{k})\frac{1}{\eta'(\tau,\tilde{k})}\left[\tilde{\phi}_{\tilde{k}}(\tau)\tilde{\phi}'^{*}_{\tilde{k}}(\tau)-
\tilde{\phi}^{*}_{\tilde{k}}(\tau)
\tilde{\phi}_{\tilde{k}}'(\tau)\right]=i
\end{equation}
Note that while the solution space to the mode equation has two complex and therefore four real dimensions, the Wronskian condition reduces the
dimensionality to three real dimensions (since only the imaginary part of (\ref{newwronskian}) is nontrivial).
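Indeed, since \(\tilde{\nu}\) and \(\eta'\) are real, the bracket in (\ref{newwronskian}) can be written as
\begin{equation*}
\tilde{\phi}_{\tilde{k}}\tilde{\phi}'^{*}_{\tilde{k}}-\tilde{\phi}^{*}_{\tilde{k}}\tilde{\phi}'_{\tilde{k}}
=-2i\,\textnormal{Im}\left(\tilde{\phi}^{*}_{\tilde{k}}\tilde{\phi}'_{\tilde{k}}\right),
\end{equation*}
which is purely imaginary, so that (\ref{newwronskian}) amounts to a single real condition on the four real parameters of the general solution.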
For example, in the case of de Sitter space and when splitting the constants \(C_{1}\) and \(C_{2}\) in (\ref{general}) into
their real and imaginary parts, \(C_{j}=R_{j}+iK_{j},\,j=1,2\), the Wronskian condition at the creation time \(\tau_{c}=-1\) can be expressed as:
\begin{equation}
R_{1}K_{2}-K_{1}R_{2}=\left(\frac{12H}{\beta^{3/2}\tilde{k}^{3}}\right)^{-1}.
\end{equation}
\subsection{The generic case}\label{generalizedapproach}
We have here used a suitable transformation of the time variable in order to turn the creation time into a regular singular point so that then the Frobenius method yielded the exact solutions to the mode equation in de Sitter space. In fact, this method, using in all cases the same transformation from conformal time to the $\tau$ variable, can also be applied to more general FRW spacetimes. In particular, we have applied the method to the case of general power-law backgrounds. The exact solutions are then only slightly more difficult to calculate, see Appendix \ref{powerlaw}. Namely, while in the de Sitter case we obtained a recursion formula for the
coefficients \(a_{n},c_{n}\) as a function of a limited number of
predecessors (namely three), in a general power-law background each
coefficient generally depends on all others that precede it.
Interestingly, the roots of the indicial equation in the general power-law case turn out to be the same
as for the de Sitter case, namely \(r_{1}=3, r_{2}=0\). Therefore the behaviour of a mode close to its creation time is qualitatively the same in both the de Sitter and the general power-law case.
Further, it is often useful to employ field redefinitions, for example in order to eliminate friction terms in the mode equation. Our method remains applicable under such field redefinitions. In particular, as shown in \cite{kempfniemeyer01}, the field redefinition \(\psi_{\tilde{k}}(\eta)=\sqrt{\nu}\phi_{\tilde{k}}(\eta)\) eliminates the friction term in (\ref{equationofmotion}), so that we arrive at an equation of the form \(\psi_{\tilde{k}}''+\omega_{\tilde{k}}^{2}(\eta)\psi_{\tilde{k}}=0\).
Similarly, it is possible to eliminate the friction term in our mode equation (\ref{eofminx}) in which the creation time is a regular singular point. Thereby, the creation time remains a regular singular point.
The corresponding field redefinition and exact solutions are presented in Appendix \ref{frictionless}.
Note that formulating the mode equation in frictionless form allows us not
only to obtain the exact solutions through the Frobenius method but also
to obtain a second set of solutions using the WKB method in the adiabatic
regime. This will provide us with two bases of the solution space. These
are of course related by a Bogolyubov transformation, which we will calculate in
Sec.\ref{criteria}.\\%A similar transformation is also possible after the transformation to the \(\tau\) variable, and the resulting equation can then also be solved by the Frobenius method (see Appendix \ref{frictionless}).\\
\section{Initial conditions}\label{poseinitial}
Having derived the space of exact solutions to the exact mode equation, it becomes crucial
to identify the physical solution in that space, since this then implicitly identifies the vacuum state.
For this purpose, given that the mode equation is of second order, it would appear to suffice to specify suitable initial conditions at the mode's creation time, such as the physical solution's value and derivative. Interestingly, the behaviour of the exact solutions towards the creation time reveals that the situation is more complex.
Indeed, the values of \(\phi_{\tilde{k}}(\eta)\) and of
its derivative with respect to conformal time at the creation time are given by:
\begin{eqnarray}
\phi_{\tilde{k}}(\eta_{c})&=&C_{2}\label{functioninetaatcreation}\\
\phi'_{\tilde{k}}(\eta_{c})&=&-\frac{1}{2\eta_{c}}~C_{2}\left(6+\frac{1}{H^{2}\beta}\right)\label{derivinetaatcreation}
\end{eqnarray}
This shows that prescribing $\phi_{\tilde{k}}(\eta_{c})$ and $\phi'_{\tilde{k}}(\eta_{c})$ will fix only $C_2$ but not $C_1$.
In fact, at the creation time, it is also not possible to specify a solution by specifying higher order derivatives of $\phi_{\tilde{k}}$ since, as one readily verifies, the behaviour of the exact solutions shows that all these higher derivatives are necessarily divergent. For example, \(\phi''_{\tilde{k}}(\eta=\eta_{c})\) diverges as \(\propto\frac{1}{\left(\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})+1\right)}\).
Roughly speaking, as we approach creation time, the general solution (\ref{general}) loses its dependence on \(C_{1}\).
While the missing $C_1$ has two real dimensions, one of these is fixed by
the Wronskian condition. Within the remaining one-parameter family of
solutions, a solution can be picked by choosing a phase condition, for
example, by imposing that the solution be real at a given time
\(\eta_{aux}\neq\eta_{c}\), i.e.,
\(Im\left(\phi_{\tilde{k}}(\eta_{aux})\right)=0\).
Thus, any mode function can be specified by giving the mode function's value at creation
time \(\eta_{c}\) (two real parameters) and demanding that the solution be
real at an auxiliary time \(\eta_{aux}\) (one parameter). The Wronskian
condition constrains the fourth parameter.
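To make this counting explicit, note that the Frobenius coefficients in (\ref{recursion}) are real, so that the basis solutions are real-valued between the creation time and \(\tau=0\). Writing \(C_{j}=R_{j}+iK_{j}\) as before, the value at creation time fixes \(R_{2}\) and \(K_{2}\), the Wronskian condition (evaluated at the creation time) fixes
\begin{equation*}
R_{1}K_{2}-K_{1}R_{2}=\frac{\beta^{3/2}\tilde{k}^{3}}{12H},
\end{equation*}
and the reality condition at \(\eta_{aux}\) reads \(K_{1}\phi_{\tilde{k},1}(\eta_{aux})+K_{2}\phi_{\tilde{k},2}(\eta_{aux})=0\), which together determine \(K_{1}\) and \(R_{1}\) for generic values of the parameters.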
\section{Comparison with the results of Easther \emph{et al.}}\label{criteria}
In earlier work, see \cite{greeneetal0104}, the irregular singular point of the mode equation
(\ref{equationofmotion}) was dealt with by truncating a series expansion of the coefficients of the mode equation. For the resulting approximate mode equation\footnote{In \cite{greeneetal0104} a different notation for this
equation was chosen; apart from replacing the field \(\phi_{k}\) by
\(u_{k}=\phi_{k}/a^{2}\), also the variable \(y\) was introduced by setting \(\eta=\eta_{c}(1-y)\), and all expressions were re-written in terms of \(y\).}
\begin{equation}\label{approximateeofm}
\phi_{\tilde{k}, approx}''-\frac{1}{2(\eta-\eta_{c})}\phi_{\tilde{k},
approx}'+\frac{\mathscr{A}}{(\eta-\eta_{c})}\phi_{\tilde{k}, approx}=0,
\end{equation}
\(\eta=\eta_{c}\) is a regular singular point. Here, \(\mathscr{A}=\frac{\tilde{k}\sqrt{e}}{4\sqrt{\beta} H}(1+6\beta
H^{2})\). The solution space of the approximate mode equation (\ref{approximateeofm}) was found to be spanned by
\begin{equation}\label{F}
F(\eta)=\left(\frac{\sqrt{\mathscr{A}}}{2}+i\mathscr{A}\sqrt{\eta-\eta_{c}}\right)\exp(-2i\sqrt{\mathscr{A}(\eta-\eta_{c})})
\end{equation}
and its complex conjugate, so that the general solution is given by the
linear combination:
\begin{equation}
\phi_{\tilde{k},approx}=\mathscr{C}_{1}F(\eta)+\mathscr{C}_{2}F^{*}(\eta)
\end{equation}
Since we have found in Sec.\ref{solutiondesitter} the exact solutions (\ref{general}) to the exact mode
equation (\ref{equationofmotion}), we can now compare them with the
solutions to the approximate mode equation that were obtained in \cite{greeneetal0104}.
To compare the
behaviour of the exact and the approximate solution close to the creation
time, we evaluate \(F(\eta)\) and its first \(\eta\)-derivative
at the creation time, which gives
\begin{eqnarray}
\phi_{\tilde{k}, approx}(\eta=\eta_{c})&=&\frac{\sqrt{\mathscr{A}}}{2}(\mathscr{C}_{1}+\mathscr{C}_{2}),\\
\phi'_{\tilde{k},
approx}(\eta=\eta_{c})&=&\mathscr{A}^{3/2}(\mathscr{C}_{1}+\mathscr{C}_{2}).
\end{eqnarray}
We encounter the same behaviour we observed in the case of the exact
solution \(\phi_{\tilde{k}}(\eta)\): the function and its first derivative
become proportional to one another for $\eta\rightarrow \eta_c$ and the proportionality factor is the same as in (\ref{functioninetaatcreation})
and (\ref{derivinetaatcreation}), namely
\(2\mathscr{A}=\frac{\tilde{k}\sqrt{e}}{2\sqrt{\beta} H}(1+6\beta
H^{2})\). Thus, also the solutions to the approximate mode equation show that posing initial conditions on $\phi$ and its derivative at $\eta_c$ does not suffice to specify a solution.
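This agreement can be checked explicitly. In de Sitter space \(a(\eta)=-1/(H\eta)\), and the creation time is characterized by the vanishing of \(\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})+1\) (cf. the divergences quoted above), i.e. \(\beta\tilde{k}^{2}H^{2}\eta_{c}^{2}=1/e\), so that \(-\eta_{c}=(\sqrt{e\beta}\,\tilde{k}H)^{-1}\). Hence
\begin{equation*}
-\frac{1}{2\eta_{c}}\left(6+\frac{1}{H^{2}\beta}\right)
=\frac{\sqrt{e\beta}\,\tilde{k}H}{2}\,\frac{1+6H^{2}\beta}{H^{2}\beta}
=\frac{\tilde{k}\sqrt{e}}{2\sqrt{\beta}H}\left(1+6\beta H^{2}\right)=2\mathscr{A}.
\end{equation*}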
Further, the second derivative of the solutions to the approximate mode equation
diverges for \(\eta\rightarrow\eta_{c}\), namely as
\(F''(\eta)\propto\frac{1}{\sqrt{\eta-\eta_{c}}}\). In comparison, the second
\(\eta\)-derivative of the exact solution \(\phi_{\tilde{k}}(\eta)\) at
creation time diverges as
\(\propto\left(\textnormal{plog}(-\beta\tilde{k}^{2}H^{2}\eta^{2})+1\right)^{-1}=\frac{\sqrt{-\eta_{c}}}{2}\frac{1}{\sqrt{\eta-\eta_{c}}}+\mathscr{O}(1)\)
(using the series expansion of the plog function). Thus, the
solutions to the approximate mode equation also properly reproduce the leading divergence for $\eta\rightarrow\eta_c$.
We note that the next to leading orders are different: the approximate
equation has a regular singular point at creation time which meant that
the Frobenius method could be used to derive two power series solutions.
The roots of the indicial equation are $(0, 3/2)$, unlike the roots
($0,3$) of the exact mode equation. This means that the solutions to the
approximate mode equation are of the form:
\(\phi_{\tilde{k},approx,1}=\sum
a_{n}(\eta-\eta_{c})^{n}\) and \(\phi_{\tilde{k},approx,2}=\sum
c_{n}(\eta-\eta_{c})^{n+3/2}\). The absence of the logarithm in these
functions implies that their higher derivatives exhibit a different
divergent behaviour than that of the solutions to the exact mode equation.
Related to this, it is clear that the approximate mode equation does not
conserve the Wronskian of the exact mode equation. Instead, it conserves
the expression \(|\mathscr{C}_{1}|^{2}-|\mathscr{C}_{2}|^{2}\). Indeed,
the Wronskian of the exact mode equation, (\ref{wronskian}), for the
solutions to the approximate mode equation is given by
\begin{equation}\label{w-approx}\mathscr{W}(\eta) =
2\,\nu(\eta,\tilde{k})\,\mathscr{A}^{5/2}\sqrt{\eta-\eta_{c}}
\left(|\mathscr{C}_{1}|^{2}-|\mathscr{C}_{2}|^{2}\right),
\end{equation}
where \(\nu(\eta,\tilde{k})\), given by (\ref{nu}), has a nontrivial time
dependence. Using again the series expansion around \(\eta_{c}\), we find
that: $$
\frac{1}{\textnormal{plog}(-\beta\tilde{k}^{2}H^{2}\eta^{2})+1}=\frac{\sqrt{-\eta_{c}}}{2}
\frac{1}{\sqrt{\eta-\eta_{c}}}+\mathscr{O}(1)
$$
Thus, at \(\eta_{c}\), it is possible to enforce the exact Wronskian
condition on the solutions to the approximate mode equation, though the
Wronskian will be conserved only to first order in $\sqrt{\eta-\eta_{c}}$.
This shows that if a solution to the approximate mode equation is to be
matched to the numerical evolution of the exact mode equation, as was done
in \cite{greeneetal0104}, then this match needs to be performed at a time
$\eta_{aux}$ which is as close as possible to the creation time. We note,
however, that as our exact solutions will show below, there is a limit to
how close $\eta_{aux}$ can be chosen to the creation time, because of
unavoidable numerical instabilities.
In \cite{greeneetal0104}, a particular choice of initial conditions was
suggested, based on a formal similarity between certain solutions to the
approximate equation and the solutions that correspond to the Bunch-Davies
vacuum in inflation without a cutoff. The argument is that, if the
``friction'' term in the approximate mode equation (\ref{approximateeofm})
could be ignored,
then it would be of the form \(\phi_{k}''+\omega_{k}^{2}(\eta)\phi_{k}=0\) with \(\omega_{k}=\sqrt{\frac{\mathscr{A}}{(\eta-\eta_{c})}}\). In the region where the adiabaticity condition \(\left|\frac{\omega'_{\tilde{k}}(\eta)}{\omega_{\tilde{k}}^{2}(\eta)}\right|\ll 1\) is satisfied, this equation has two approximate solutions of the WKB form
\begin{eqnarray}
\phi_{\tilde{k},approx}^{\mp}(\eta)
&=&\left(\frac{\eta-\eta_{c}}{4\mathscr{A}}\right)^{1/4}\nonumber\\
&&\times\exp\left(\pm2i\sqrt{\mathscr{A}(\eta-\eta_{c})}\right).\label{phifrictionignored}
\end{eqnarray}
Even though one cannot actually ignore the ``friction'' term in the approximate mode
equation because the period close to $\eta_c$ is not adiabatic, it is suggestive that these would-be WKB type functions
possess the same oscillatory behaviour as Easther \emph{et al.}'s solution to their approximate mode equation.
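The lack of adiabaticity near the creation time can be quantified directly: for \(\omega_{\tilde{k}}(\eta)=\sqrt{\mathscr{A}/(\eta-\eta_{c})}\) one finds
\begin{equation*}
\left|\frac{\omega'_{\tilde{k}}(\eta)}{\omega_{\tilde{k}}^{2}(\eta)}\right|=\frac{1}{2\sqrt{\mathscr{A}(\eta-\eta_{c})}},
\end{equation*}
so the adiabaticity condition fails for \(\eta-\eta_{c}\lesssim 1/(4\mathscr{A})\) and is satisfied only well after the creation time.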
In formal analogy to choosing the Bunch-Davies vacuum solution, Easther \emph{et al.} therefore choose \(\mathscr{C}_{2}=0\) so that their preferred physical solution to their approximate mode equation
reads\footnote{Note that even though this solution resembles that of an
adiabatic vacuum, it is not one, because the adiabaticity condition with
\(\omega_{k}=\sqrt{\frac{\mathscr{A}}{(\eta-\eta_{c})}}\) is not
satisfied.}
\begin{eqnarray*}
\phi_{\tilde{k},approx}&=&\mathscr{C}_{1}\left(\frac{\sqrt{\mathscr{A}}}{2}+i\mathscr{A}\sqrt{\eta-\eta_{c}}\right)\\
&&\times\exp(-2i\sqrt{\mathscr{A}(\eta-\eta_{c})}).
\end{eqnarray*}
The real and imaginary part of \(\mathscr{C}_{1}\) are related via the Wronskian condition. The remaining arbitrariness is merely the freedom to choose an overall phase, and can be fixed by demanding, for example, that $\phi_{\tilde{k},approx}$ is real at some auxiliary time \(\eta_{aux}\).
Let us now investigate which exact solution to the exact mode equation
Easther \emph{et al.}'s choice corresponds to. Normally, in the case of
second order wave equations, in order to match two solutions at any
arbitrary point it is sufficient to set these solutions and their first
derivatives equal at that point. We would obviously like to match Easther
\emph{et al.}'s function to one of our exact solutions to the exact
equation at the creation time. However, such a match-up at creation time
itself does not suffice here to use Easther \it et al.\rm 's choice of
mode function to uniquely pick out an exact solution to the exact mode
equation. This is because, as we saw earlier, at the creation time
\(\eta=\eta_{c}\), the value and the first derivative of the solutions are
not independent (but are instead a fixed multiple of one another).
We therefore studied matching-up the mode function chosen by Easther
\emph{et al.} with an exact solution by setting the functions and their
derivatives equal at times \(\eta_{m}>\eta_{c}\). We were indeed able to
reproduce the observation made in \cite{greeneetal0104} that at times
considerably later than $\eta_c$ the mode with \(\mathscr{C}_{2}=0\) shows
close to adiabatic evolution with small characteristic oscillations. This
match-up is very delicate, however, and we can now see the underlying
reasons.
First, the match cannot be improved by choosing earlier and earlier match-up times $\eta_m$ because the mode function's amplitude and derivative lose their independence for $\eta_m\rightarrow \eta_c$, leading to numerical instability of the match-up.
Second, the match also cannot be improved by choosing late matching times $\eta_m$ because the simplified mode equation obeyed by
\(\phi_{\tilde{k},approx}(\eta)\) is close to the precise mode equation only for early $\eta$. Indeed, recall that the function chosen by Easther \emph{et al.} conserves the Wronskian of the exact mode equation only very close to $\eta_c$.
Given that there is no ideal match-up procedure, one may alternatively follow a strategy already outlined in Sec.\ref{poseinitial}: the amplitudes of the two solutions are matched at creation time, which fixes $C_2$. The information about $C_1$ is then obtained by enforcing the Wronskian condition and by choosing the overall phase such that the mode function is, for example, real at some auxiliary time.
We observed again a notable dependence on the arbitrary auxiliary time
\(\eta_{aux}\): it must be chosen not too late after creation time because
the approximate mode equation that the function of Easther \emph{et al.}
obeys is close to the exact mode equation only in the vicinity of
\(\eta_{c}\), but neither must $\eta_{aux}$ be chosen too early because
close to \(\eta_{c}\) the exact solution is entirely dominated by one of
its dimensions in the solution space and information about the
contribution of the other dimension is numerically increasingly difficult
to extract.
\section{Comparison with the WKB solution}\label{WKB}
The starting point of our present analysis was the mode equation (\ref{equationofmotion}), which is analogous to that of a damped harmonic oscillator.
We expressed (\ref{equationofmotion}) in the \(\tau\)-variable to obtain (\ref{eofmexplicit}), which we were then able to solve by means of the Frobenius method. As we show in Appendix \ref{frictionless}, a suitable field redefinition brings (\ref{eofmexplicit}) into a frictionless form,
\begin{equation}
\tilde{\psi}_{\tilde{k}}''+\tilde{\omega}^{2}(\tau)\tilde{\psi}_{\tilde{k}}=0,
\end{equation}
see (\ref{trafowithouteofm}), where
\begin{eqnarray}
\tilde{\omega}^{2}(\tau)&=&-\frac{1}{16{H}^{2}\beta\,{\tau}^{2}
\left (\tau+1\right )^{2}}\times\bigg(36\,\beta\,{\tau}^{4}{H}^{2}\nonumber\\
&&+132\,\beta\,{\tau}^{3}{H}^
{2}+153\,\beta\,{\tau}^{2}{H}^{2}+30\,\beta\,\tau\,{H}^{2}\nonumber\\
&&+5\,\beta\,{H}^{2}+4\,{\tau}^{3}+8\,{\tau}^{2}+4\,\tau\bigg).\label{omegaadiab}
\end{eqnarray}
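It is instructive to note the behaviour of (\ref{omegaadiab}) near the creation time: the polynomial in the large bracket tends to \(32\beta H^{2}\) as \(\tau\rightarrow-1\), so that, to leading order,
\begin{equation*}
\tilde{\omega}^{2}(\tau)\simeq-\frac{2}{(1+\tau)^{2}},\qquad\tau\rightarrow-1.
\end{equation*}
The negative, divergent \(\tilde{\omega}^{2}\) makes explicit that the evolution immediately after mode creation is far from adiabatic.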
Also this mode equation can be solved with the Frobenius method. We write the general solution in the form
\begin{equation}
\label{ab}
\tilde{\psi}_{\tilde{k}}(\tau)=D_{1}\tilde{\psi}_{\tilde{k},1}(\tau)+D_{2}\tilde{\psi}_{\tilde{k},2}(\tau),
\end{equation}
see (\ref{generalwithout}) in Appendix \ref{frictionless}.
The advantage of the frictionless formulation is that
in the adiabatic range of \(\tau\), i.e., where
\(\left|\frac{\tilde{\omega}'(\tau)}{\tilde{\omega}^{2}(\tau)}\right|\ll
1\) is fulfilled, (\ref{trafowithouteofm}) will have a set of solutions of
the WKB form, namely
\begin{equation}
\tilde{\Psi}_{\tilde{k}}^{\pm}(\tau)=\frac{1}{\sqrt{2\tilde{\omega}(
\tau)}}\exp\left(\mp{}i\int_{\tau_{i}}^{\tau}\tilde{\omega}(\tau')d\tau'\right).\label{plus}
\end{equation}
A general solution to (\ref{trafowithouteofm}) in the adiabatic regime is
then given by
\(\tilde{\Psi}_{\tilde{k}}(\tau)=\lambda_{-}\tilde{\Psi}_{\tilde{k}}^{-}(\tau)+\lambda_{+}\tilde{\Psi}_{\tilde{k}}^{+}(\tau)\).
Note that the Wronskian condition, when evaluated at \(\tau_{c}=-1\), imposes:
\begin{equation}
|\lambda_{+}|^{2}-|\lambda_{-}|^{2}=\frac{\tilde{k}^{3}\beta^{3/2}}{2H}
\end{equation}
In the adiabatic regime, the exact solutions and the WKB solutions must be related through
a Bogolyubov transformation.
The WKB solutions, (\ref{plus}), suggest a special choice of the vacuum state, namely
the adiabatic vacuum characterized by \(\lambda_{-}=0\). The Bogolyubov
transformation will then tell us which combination of the exact solutions
\(\tilde{\psi}_{\tilde{k},1}(\tau)\) and
\(\tilde{\psi}_{\tilde{k},2}(\tau)\) this choice corresponds to, i.e., it will determine the coefficients
\(D_{1},D_{2}\) in (\ref{ab}).
For the purpose of plotting the properties of the exact solutions, their Frobenius power series expansion,
(\ref{generalwithout}), needs to be truncated at some finite order \(N\). We established that
choosing any \(N>30\) suffices to ensure that the plot of the truncated function is valid well into the adiabatic regime.
In order to find the Bogolyubov transformation between the two
sets of solutions, it is sufficient to match the value and derivative of the adiabatic solution to an exact solution at some arbitrary time \(\tau_{ad}\) in the adiabatic regime:
\begin{eqnarray}
&&D_{1}\tilde{\psi}_{\tilde{k},1}(\tau_{ad})+D_{2}\tilde{\psi}_{\tilde{k},2}(\tau_{ad})\nonumber\\
&&\qquad=\lambda_{-}\tilde{\Psi}_{\tilde{k}}^{-}(\tau_{ad})+\lambda_{+}\tilde{\Psi}_{\tilde{k}}^{+}(\tau_{ad})\label{match1}\\
&&D_{1}\tilde{\psi}_{\tilde{k},1}'(\tau_{ad})+D_{2}\tilde{\psi}_{\tilde{k},2}'(\tau_{ad})\nonumber\\
&&\qquad=\lambda_{-}\tilde{\Psi}_{\tilde{k}}'{}^{-}(\tau_{ad})+\lambda_{+}\tilde{\Psi}_{\tilde{k}}'{}^{+}(\tau_{ad})\label{match2}
\end{eqnarray}
We verified that the result of the match-up is indeed independent of the match-up time within the adiabatic regime.
\subsubsection*{Adiabatic vacuum}
We are particularly interested in that mode function
\(\tilde{\psi}_{\tilde{k}}(\tau)\) which corresponds to the adiabatic
vacuum (to zeroth order) in the adiabatic regime. This choice of vacuum is defined by \(\lambda_{-}=0\) since in the
adiabatic limit only the modes of positive frequency are present. From the
match-up of our two sets of solutions we then have
\begin{eqnarray*}
D_{1}&=&-\frac{1}{3}\left(\tilde{\Psi}_{\tilde{k}}^{+}\tilde{\psi}'_{\tilde{k},2}-\tilde{\psi}_{\tilde{k},2}\tilde{\Psi}'{}_{\tilde{k}}^{+}\right)\cdot\lambda_{+},\\
D_{2}&=&\tilde{\psi}'{}_{\tilde{k},2}^{-1}\cdot\left(\lambda_{+}\tilde{\Psi}'{}_{\tilde{k}}^{+}-D_{1}\tilde{\psi}'_{\tilde{k},1}\right).
\end{eqnarray*}
(Here, the fields are understood to be evaluated at the arbitrary time
\(\tau_{ad}\).) The resulting values of $D_1$ and $D_2$ are several orders
of magnitude apart and not obviously special, in the sense that they do
not suggest a particular mathematical criterion that would single out this
initial condition and therefore the vacuum in a way that would not rely on
the adiabatic expansion. We noticed, however, that numerically $D_1$ and
$D_2$ appear to be at an angle of close to 90 degrees to one another when viewed as vectors in the
complex plane. In the Outlook we will comment on the potential
significance for the choice of vacuum of the existence of an orthonormal
basis that is canonical in the solution space.
\subsubsection*{Deviations from the adiabatic vacuum}
We can now turn this line of argument around and use the knowledge of the exact solutions to bridge the gap between the initial behaviour right at the creation time and the behaviour of the modes in the adiabatic regime. This means that we can precisely link the initial behaviour to the one at horizon crossing which then determines the size and shape of potentially observable effects in the CMB.
Concretely, starting with a certain linear combination of the Frobenius solutions (that may be set by quantum gravity), how close to the adiabatic vacuum will this particular solution be during the adiabatic phase? To this end, we will derive an expression for \(\lambda_{-}\), which measures the deviation from the positive frequency adiabatic solution as a function of \(D_{1}\), \(D_{2}\).
From the match-up (\ref{match1}) we find
\begin{eqnarray}
\lambda_{-}&=&-i\bigg(D_{1}\left(\tilde{\psi}_{\tilde{k},1}'\tilde{\Psi}_{\tilde{k}}^{+}-\tilde{\Psi}_{\tilde{k}}^{+}{}'\tilde{\psi}_{\tilde{k},1}\right)\nonumber\\
&+&D_{2}\left(\tilde{\psi}_{\tilde{k},2}'\tilde{\Psi}_{\tilde{k}}^{+}-\tilde{\Psi}_{\tilde{k}}^{+}{}'\tilde{\psi}_{\tilde{k},2}\right)\bigg).\label{lambda}
\end{eqnarray}
So far, $\lambda_-$ depends on the real and imaginary parts of \(D_{1}\) and \(D_{2}\), i.e. on four real parameters:
\begin{eqnarray*}
D_{1}&=&S_{1}+iL_{1}\\
D_{2}&=&S_{2}+iL_{2}
\end{eqnarray*}
Only two of these parameters are independent, however, due to the Wronskian condition
\begin{equation}
S_{1}L_{2}-L_{1}S_{2}=\left(\frac{12H}{\beta^{3/2}\tilde{k}^{3}}\right)^{-1},
\end{equation}
see Appendix \ref{frictionless}, and due to the arbitrariness of the
overall phase, which allows us to choose for example $D_{1}$ real.
$|\lambda_-|^2$ as a function of these remaining two parameters measures
the non-adiabaticity of the vacuum in terms of the initial behaviour of
the physical mode function. Its minimum is zero and denotes the adiabatic
vacuum.
\section{Behaviour of physical quantities near creation time}\label{principles}
So far, we have parameterized the initial behaviour of modes and used our exact solutions to link it to the mode's behaviour at horizon crossing. In order to better understand what might determine the initial behaviour of modes let us now use our exact solutions to investigate the behaviour of physical quantities such as the mode's field uncertainties and its Hamiltonian close to the mode's creation time.
\subsection{Breakdown of the particle picture}
The Hamiltonian expressed through creation and annihilation operators was
given in (\ref{quantizedHamiltonian}).
We see that the nondiagonal terms of the Hamiltonian, i.e., the terms
\(a_{\tilde{k}}a_{-\tilde{k}}\) and their complex conjugate are
proportional to
\begin{equation}\label{diagcondition}
\mathscr{D}(\eta,\tilde{k})=\frac{\nu}{2}\phi_{-\tilde{k}}'\phi_{\tilde{k}}'+
\frac{\nu}{2}\left(\mu-9\left(\frac{a'}{a}\right)^{2}\right)
\phi_{-\tilde{k}}\phi_{\tilde{k}}.
\end{equation}
The condition \(\mathscr{D}(\eta,\tilde{k})=0\) can be
rewritten as
\begin{equation}\label{diag2}
\frac{\left(\phi'_{\tilde{k}}(\eta)\right)^{2}}{\left(\phi_{\tilde{k}}(\eta)\right)^{2}}
=9\left(\frac{a'(\eta)}{a(\eta)}\right)^{2}-\mu(\eta,\tilde{k}).
\end{equation}
It is clear that at any finite time $\eta>\eta_c$, this equation can be
fulfilled, i.e., the Hamiltonian can be made diagonal in the Fock basis,
namely by a suitable choice of mode function $\phi_{\tilde{k}}$. For the
case of the creation time $\eta_c$ itself, however, let us recall from
Sec.\ref{poseinitial} that all mode functions \(\phi_{\tilde{k}}\) are
proportional to their derivative with a fixed proportionality constant,
see (\ref{functioninetaatcreation}) and (\ref{derivinetaatcreation}).
Therefore, (\ref{diag2}) cannot be fulfilled at the creation time
\(\eta_{c}\). This means that whatever mechanism determines the initial
behaviour of modes cannot be described in terms of a Fock basis and a
corresponding particle picture.
\subsection{Initial field fluctuations}\label{danielssoncriterion}
It has been proposed, see \cite{danielsson02}, that Planck scale physics
might imply that when modes are created they appear in a state that
minimizes the product of the fluctuations of the mode's field
\(\hat{\phi}_{\tilde{k}}(\eta)\) and its conjugate momentum field
\(\hat{\pi}_{\tilde{k}}(\eta)\) so that the product of the uncertainties
reads:
\begin{equation}\label{uncertainty}
\Delta\hat{\phi}_{\tilde{k}}(\eta_c)\cdot
\Delta\hat{\pi}_{\tilde{k}}(\eta_c)=\frac{1}{2}
\end{equation}
In our case here, we find that at all times $\eta>\eta_c$:
\begin{eqnarray*}
\Delta\hat{\phi}_{\tilde{k}}(\eta)&=&\left|\phi_{\tilde{k}}(\eta)\right|\\
\Delta\hat{\pi}_{\tilde{k}}(\eta)&=&\nu(\eta,\tilde{k})~\left|\phi'_{\tilde{k}}(\eta)-3\frac{a'(\eta)}{a(\eta)}\phi_{\tilde{k}}(\eta)\right|
\end{eqnarray*}
Let us now recall, see Sec.\ref{poseinitial}, that while
\(\phi_{\tilde{k}}\) and its derivative assume finite values for
\(\eta\rightarrow\eta_{c}\), the term \(\nu(\eta,\tilde{k})\) diverges
\(\propto\left(\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})+1\right)^{-1}\).
On one hand, this means that the criterion of \cite{danielsson02} for
specifying the vacuum state is here not applicable. On the other hand, it
is perhaps not surprising that, while the modes are created with field
fluctuations \(\Delta\hat{\phi}_{\tilde{k}}\) of finite size, their
momentum field fluctuations \(\Delta\hat{\pi}_{\tilde{k}}\) are divergent
at the moment of mode creation itself.
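This can be made explicit using the de Sitter scale factor \(a(\eta)=-1/(H\eta)\) together with (\ref{functioninetaatcreation}) and (\ref{derivinetaatcreation}): at the creation time \(\Delta\hat{\phi}_{\tilde{k}}(\eta_{c})=|C_{2}|\) is finite, and so is the bracket in \(\Delta\hat{\pi}_{\tilde{k}}\),
\begin{equation*}
\phi'_{\tilde{k}}(\eta_{c})-3\frac{a'(\eta_{c})}{a(\eta_{c})}\phi_{\tilde{k}}(\eta_{c})
=C_{2}\left(-\frac{3}{\eta_{c}}-\frac{1}{2\eta_{c}H^{2}\beta}+\frac{3}{\eta_{c}}\right)
=-\frac{C_{2}}{2\eta_{c}H^{2}\beta},
\end{equation*}
so that the divergence of \(\Delta\hat{\pi}_{\tilde{k}}\) at \(\eta_{c}\) stems entirely from the prefactor \(\nu(\eta,\tilde{k})\).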
\section{Ground state energy}\label{vacuumenergy}
Since the Hamiltonian of a mode is quadratic in the field $\pi_{\tilde{k}}$, it is plausible that the divergence of the momentum field's fluctuations
\(\Delta\hat{\pi}_{\tilde{k}}\)
at $\eta_c$ should imply a divergence of the expectation value of the mode's Hamiltonian at $\eta_c$.
To this end, let us consider the total ground state energy at some time \(\eta\):
\begin{eqnarray}
E_{vac}(\eta)&=&\int_{\tilde{k}^{2}<\frac{a^{2}}{e\beta}} d^{3}\tilde{k}\quad \rho_{vac,\tilde{k}}(\eta)\nonumber\\
&=&\int_{\tilde{k}^{2}<\frac{a^{2}}{e\beta}} d^{3}\tilde{k}\,
\bigg[\frac{\nu}{2}\left|\phi_{\tilde{k}}'\right|^{2}\nonumber\\
&&+\frac{\nu}{2}\left(\mu-9\left(\frac{a'}{a}\right)^{2}\right)
\left|\phi_{\tilde{k}}\right|^{2}\bigg]\label{Evac}
\end{eqnarray}
Recall that the integration ranges over only those modes whose
wavelength at time \(\eta\) exceeds the cutoff length.
At a given mode's creation time its operators $a_{\tilde{k}}$ and $a^\dagger_{\tilde{k}}$ first enter the total Hamiltonian, which means that the mode then first contributes to both ground state and dynamical energy. In particular, in an expanding universe, the total Hamiltonian continually picks up new ground state energy as it picks up new modes.
After inserting \(\mu(\eta,\tilde{k})\) and \(\nu(\eta,\tilde{k})\) into
(\ref{vacuumterm}) and making use of the fact that we know the exact behaviour of the solutions \(\phi_{\tilde{k}}(\eta)\) to the mode equation, we now indeed find a divergence of each mode's contribution to the total ground state energy at the mode's creation time. Concretely, straightforward calculation shows that each mode's Hamiltonian contains three types of divergent terms:
\begin{eqnarray*}
&\propto&\ln\left(\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})+1\right)\\
&\propto&1/\left(\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})+1\right)\\
&\propto&1/\left(\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})+1\right)^{3}
\end{eqnarray*}
We obtain a simplified representation of the energetic behaviour of each mode by changing variables from the pair \((\eta,\tilde{k})\) to the new pair \((\tau,\tilde{k})\), and taking into account that
the integration measure in (\ref{Evac}) transforms as
\begin{equation*}
d^{3}\tilde{k}=4\pi\tilde{k}^{2}d\tilde{k}=2\pi\tilde{k}^{3}\frac{(1+\tau)}{\tau}d\tau.
\end{equation*}
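This follows by differentiating the defining relation of the new time variable at fixed \(\eta\): writing \(\tau=\textnormal{plog}(-\beta\tilde{k}^{2}/a^{2})\), i.e. \(\tau e^{\tau}=-\beta\tilde{k}^{2}/a^{2}\) (the form consistent with \(\tau_{c}=-1\) and the plog expressions used above), one finds
\begin{equation*}
(1+\tau)e^{\tau}\,d\tau=-\frac{2\beta\tilde{k}}{a^{2}}\,d\tilde{k}=\frac{2\tau e^{\tau}}{\tilde{k}}\,d\tilde{k}
\qquad\Longrightarrow\qquad
d\tilde{k}=\frac{\tilde{k}(1+\tau)}{2\tau}\,d\tau,
\end{equation*}
which reproduces the measure above.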
In these variables, each mode's ground state energy contains only one divergent term, which behaves as \(1/(\tau+1)^{2}\) for \(\tau\rightarrow-1\).
We can now consider the behaviour of the entity that is of physical significance, namely the integrated ground state energy of all modes.
Here, we notice that the accumulation of ground state energy through mode
creation is offset to some extent by the fact that each individual mode's
contribution to the total ground state energy
$\rho_{vac,\tilde{k}}(\eta)$, i.e., roughly speaking its
$\hbar\omega_{\tilde{k}}/2$, continually diminishes as its wavelength
expands, i.e., as its $\omega_{\tilde{k}}$ decreases.
Whether or not there is a net generating of ground state energy through the expansion depends crucially on the scale factor function $a(\eta)$ of the background spacetime. In the case of the background spacetime that we have considered so far, namely de Sitter space, we know that this calculation can only yield a constant result, since in its mode equation the dependence on $\tilde{k}$ can be eliminated, see (\ref{eofmexplicit}).
It should be very interesting, however, to carry out the integration of
the total ground state energy, at least numerically, for the case of a
more general background spacetime, such as the case of power-law
inflation. As already mentioned, we have calculated the exact solutions to
the exact mode equation in the power-law case, see Appendix
\ref{powerlaw}. The balance of the continual creation of ground state
energy and its continual dilution through expansion is nontrivial in any
model of quantum field theory in the sub-Planckian regime, including those
of generalized dispersion relations. This is of interest because if the
vacuum energy generation is not fully offset by its dilution then this
would imply a vacuum instability and therefore a potential starting
mechanism for inflation.
\section{Summary and outlook}
We considered quantum field theory in the sub-\-Planck\-ian regime, by
which we mean the regime of length scales larger than but close to the
Planck (or other cutoff-) length. In the case of an expanding background
spacetime the independent degrees of freedom of the QFT, i.e., the field
oscillators given by the comoving modes, are of particular interest. This
is because as the comoving modes' proper wavelength increases, new
comoving modes must continually enter the QFT description of the
sub-Planckian regime. A key problem is to identify in which state the
comoving modes first enter the QFT description of the sub-Planckian regime
and how they evolve through the sub-Planckian regime into the regime where
ordinary QFT holds.
To this end, we here followed up on a concrete model for quantum field
theory in the sub-Planckian regime. In this model a type of natural UV
cutoff is implemented that has been motivated from general quantum gravity
arguments, namely the presence of a finite minimum uncertainty in
positions. Previous work had found that in this model the initial phase of
a mode's evolution is described by a mode equation with an intriguing but
difficult to handle singular point at each mode's starting time.
Here, we improved on the existing numerical and semi-analytical solutions
by calculating the set of exact solutions to the precise mode equation for
the cases of de Sitter space and power-law inflation. In both cases the
initial singularity yielded for the roots of the indicial equation the
values $(0,3)$. This showed that the qualitative behaviour that can be
read off from our exact solutions around the initial singularity is not
dependent on the precise dynamics of the scale factor. This in turn meant
that we were able to focus most of our calculations on the simpler case of
de Sitter expansion.
Having found the exact solution space, we studied possibilities for
identifying the physical mode solution within that solution space and
therefore for identifying the initial vacuum. The vacuum determines
the mode's late time behaviour at horizon crossing and
therefore the type and magnitude of potentially observable effects in the
CMB.
Clearly, within any model for QFT in the sub-Planckian regime, it is
nontrivial to find a reliable method for identifying the physical mode
solution. In particular, given any natural UV cutoff it is of course no
longer possible to identify a mode's vacuum as having started out
essentially as the Minkowski vacuum on the basis that the mode would have
had \it arbitrarily \rm short wavelength in the distant enough past.
Within the model of QFT in the sub-Planckian regime that we studied here,
we found that indeed also any approach that is based on instantaneous
Hamiltonian diagonalization in a Fock basis must run into difficulties.
The reason is that, as we were able to establish using the exact
solutions, each mode's Hamiltonian enters the total Hamiltonian such that
it is at the mode's creation time not diagonalizable by means of any Fock
representation.
Similarly, the properties of the exact solutions to the mode equation
showed that the criterion for identifying the physical mode function by
minimizing the field uncertainty relation at mode creation time
cannot be applied here unchanged: we found that
the field modes themselves are created with finite fluctuations. The
canonically conjugate momentum field's fluctuations are, however,
divergent at creation time itself, which is plausible given that the
momentum field generates changes in the field.
By studying the behaviour of the exact solutions close to the modes'
creation time we traced the underlying mathematical reason for why it is
difficult to give physical criteria that could reliably single out the
physical mode solution. Namely, we found that even though the mode
equation is second order in time, at creation time a mode solution cannot
be specified as usual by giving its amplitude and derivative. This is
because, at the creation time, the amplitude and derivative of all
solutions are proportional with the same proportionality constant.
This shows that in any model for QFT in the sub-Planckian regime the
physical criteria for specifying the mode function and vacuum may need to
be adapted to unexpected mathematical behaviour. Here, for example, it is
possible to specify every choice of mode solution by specifying its
amplitude at creation time, by using the Wronskian condition and by also
using the freedom of overall phase to set the amplitude real at some
arbitrary time other than the mode's creation time.
Of course it should be most interesting to try to calculate directly from
candidate quantum gravity theories, such as loop quantum gravity and
string theory, the mode solution, i.e., to find the state in which these
theories predict new comoving modes to enter the sub-Planckian regime in
which the framework of QFT is valid. It would be most satisfying then if
the fixing of the initial state of the new modes could then be rephrased
within QFT in terms of a suitable boundary action along the lines of
\cite{schalmetal0401,schalmetal0411}. Within the model of QFT in the
sub-Planckian regime that we studied here, boundary terms have been
considered in \cite{amjad} and those results are likely to be useful for
this purpose.
Independent of which initial condition is to be chosen, our exact
solutions can be used to straightforwardly calculate from any initial
condition the behaviour of the corresponding mode function at horizon
crossing. In other words, the explicit solution space provides an explicit
bridge between the nontrivial initial conditions that may be set by Planck
scale physics and the predictions for their potential impact in the CMB.
We close with a gedanken experiment that indicates that Planck scale
physics may not necessarily set initial conditions for modes that enter
the sub-Planckian regime during expansion, as this may violate unitarity:
consider a hypothetical background spacetime that repeatedly contracts and
expands, so that comoving modes repeatedly enter and leave the
sub-Planckian regime of validity of QFT. In a contracting phase, when a
comoving mode's wavelength drops below the cutoff length, the field
operators of that mode drop out of the QFT's Hamiltonian. This means that
the further evolution of that mode is frozen, at least as far as the QFT
is concerned. Whatever excitation or particles that mode might have had
are then inaccessible within the framework of QFT because the quantum
field simply no longer contains operators that act nontrivially on the
dimensions of that mode's Hilbert space.
Thus, for all practical purposes any matter in such a comoving mode would
be disappearing as if behind a Planck ``horizon". This would be the case
until in a subsequent expansion phase the comoving mode's wavelength again
exceeds the cutoff length. Then also the Hamiltonian resumes a nontrivial
action on that mode's part of the Hilbert space. As far as the framework
of QFT is concerned, if the mode froze while in an excited state it will
re-emerge in the same excited state during expansion: the evolution is
unitary and no information was lost\footnote{It should be worth
investigating if a similar Hilbert space mechanism to the one discussed
here may freeze degrees of freedom that fall into a black hole, to thaw
them in the final evaporation of the black hole, thus preserving unitarity
without having the frozen degrees of freedom unduly gravitate.}. It is
possible, of course, that the full theory of quantum gravity will show
that there are operators which act on those frozen modes' dimensions in
the Hilbert space so that in a re-expanding phase, when the modes re-enter
the description by the framework of QFT, they do so with certain fixed
initial conditions. Clearly, in this case unitarity would be hard to
maintain in a cycle of expansions and contractions. If, however, Planck
scale physics were not to enforce a fixed initial condition on modes, then
the question remains unanswered in which state comoving modes are created
when they first emerge, during the very first expansion.
This leads to the question whether in the solution space to the mode
equation there exists any distinguished or canonical solution that might
therefore be the preferred physical solution for modes that emerge for the
first time, i.e., the question is if the mathematics singles out a
preferred vacuum state. This amounts to asking if there exists a canonical
splitting of the solution space that could improve on the useful but at
best approximate splitting of the solution space into positive and
negative frequency solutions during periods of adiabatic evolution. Here
we notice that, equivalently, the question is whether there is a canonical
split of the solution space into what in an adiabatic phase would be the
sine and cosine basis solutions to the mode equation (from which one would
then obtain positive and negative frequency solutions straightforwardly).
Because of the peculiar singularity at the mode creation time this is
indeed possible: recall that there exists a distinguished dimension of the
solution space, consisting of the real-valued mode solutions that vanish
at creation time. This induces a canonical ON basis in the two-dimensional
solution space that could be viewed as the generalizations of the sine and
cosine functions. It will be difficult to orthonormalize these functions
in practice because the exact solution's power series is needed in an
integration over all time while the power series is slow to converge at
late times. Nevertheless, it will be very interesting to investigate how
close the so-defined positive frequency mode solution would be to the
adiabatic vacuum in the adiabatic phase.
\subsection*{Acknowledgements}
This work was partially supported by PREA, CFI, OIT and the Canada
Research Chairs program of the National Science and Engineering Research
Council of Canada. LL acknowledges support by the International Council
for Canadian Studies during the early phase of this work and current
support by the DAAD.
\bibliographystyle{h-physrev}
In this paper, we consider the cubic nonlinear Schr\"odinger equation (NLS) in the periodic setting $x\in \mathbb{T}_\lambda^4$
\begin{equation}\label{eq:NLS}
(i\partial_t + \Delta) u = \mu u|u|^2,
\end{equation}
where $\mu = \pm 1$ ($+1$: the defocusing
case,
$-1$: the focusing case).
Here $u : \mathbb{R}\times \mathbb{T}_\lambda^4 \to \mathbb{C}$ is a complex-valued function of the time variable in $\mathbb{R}$ and the spatial variable in $\mathbb{T}^4_\lambda$, a general rectangular torus,
i.e.
\[
\mathbb{T}^4_\lambda := \mathbb{R}^4/(\prod_{i=1}^4 \lambda_i \mathbb{Z}),\qquad \lambda = (\lambda_1, \lambda_2, \lambda_3, \lambda_4),
\]
where $\lambda_i\in (0,\infty)$ for $i=1, 2, 3, 4$.
Specifically, if the ratio of any two of the $\lambda_i$'s in $\{\lambda_1, \lambda_2, \lambda_3, \lambda_4\}$ is an irrational number, then $\mathbb{T}^4_\lambda$ is called an irrational torus; otherwise $\mathbb{T}^4_\lambda$ is called a rational torus. Since our proof does not change regardless of whether the torus is rational or irrational, for convenience we write $\mathbb{T}^4 := \mathbb{T}^4_\lambda$ henceforth in the paper.
Solutions of (\ref{eq:NLS}) conserve both the mass of $u$:
\begin{equation}\label{eq:mass}
M(u)(t):= \int_{\mathbb{T}^4} |u(t)|^2\, dx
\end{equation}
and the energy of $u$:
\begin{equation}\label{eq:energy}
E(u)(t):= \frac{1}{2} \int_{\mathbb{T}^4} |\nabla u(t)|^2\, dx + \frac{1}{4}\mu
\int_{\mathbb{T}^4} |{u(t)}|^4\, dx.
\end{equation}
\subsection{The defocusing case ($\mu = +1$)}
In the defocusing case, our main theorem is global well-posedness of (\ref{eq:NLS}) with
$H^1(\mathbb{T}^4)$ initial data.
\begin{thm}[GWP of the defocusing NLS]\label{thm:main}
If $u_0\in H^1(\mathbb{T}^4)$, for any $T\in [0, \infty)$, there exists a unique global solution $u\in X^1([-T, T])$
of the initial value problem
\begin{equation}\label{eq:IVP}
(i\partial_t + \Delta) u = u|u|^2,\qquad u(0) = u_0.
\end{equation}
In addition, the mapping $u_0 \to u$ extends to a continuous mapping from
$H^1(\mathbb{T}^4)$ to $X^1([-T, T])$ and $M(u)$ and $E(u)$
defined in (\ref{eq:mass}) and (\ref{eq:energy}) are conserved along the flow.
\end{thm}
The space $X^1(I) \subset C(I: H^1(\mathbb{T}^4))$ is the adapted atomic space (see Definition \ref{def:Xs}).
On $\mathbb{R}^d$, the scaling symmetry plays an important role in the well-posedness (existence, uniqueness and continuous dependence of the data-to-solution map) problem of the initial value problem (IVP) for NLS:
\begin{equation}\label{eq:pNLS}
\begin{cases}
i\partial_t u + \Delta u = |{u}|^{p-1} u,\qquad p>1\\
u(0, x) = u_0(x) \in \dot{H}^s(\mathbb{R}^d).
\end{cases}
\end{equation}
The IVP (\ref{eq:pNLS}) is scale invariant in the Sobolev norm $\dot{H}^{s_c}$, where $s_c := \frac{d}{2} - \frac{2}{p-1}$ is called the scaling critical regularity.
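For instance, for the cubic nonlinearity in dimension four ($p=3$, $d=4$) one has $s_c=1$, so (\ref{eq:NLS}) is energy-critical. The scale invariance itself can be seen from the rescaling (we write $\sigma$ for the scaling parameter to avoid confusion with the torus periods $\lambda_i$)
\begin{equation*}
u_{\sigma}(t,x):=\sigma^{\frac{2}{p-1}}\,u(\sigma^{2}t,\sigma x),\qquad
\|u_{\sigma}(0)\|_{\dot{H}^{s}(\mathbb{R}^d)}=\sigma^{s-s_{c}}\,\|u(0)\|_{\dot{H}^{s}(\mathbb{R}^d)},
\end{equation*}
which maps solutions of (\ref{eq:pNLS}) to solutions and leaves the $\dot{H}^{s_c}$ norm unchanged.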
For $H^s$ data with $s > s_c$ (the sub-critical regime), the local well-posedness (LWP) of the IVP (\ref{eq:pNLS}) was proven by Cazenave-Weissler \cite{caz1}.
For $H^s$ data with $s = s_c$ (critical regime), Bourgain \cite{bourgain1999radial} first proved the large data global well-posedness (GWP) and scattering for the defocusing energy-critical ($s_c = 1$) NLS in $\mathbb{R}^3$ with the radially symmetric initial data in $\dot{H}^1$ by introducing an induction method on the size of energy and a refined Morawetz inequality. A different proof of the same result was given by Grillakis in \cite{grillakis2000radial}. Then a breakthrough was made by Colliander-Keel-Staffilani-Takaoka-Tao \cite{colliander2008global}. Their work extended the results of Bourgain \cite{bourgain1999radial} and Grillakis \cite{grillakis2000radial}. They proved global well-posedness and scattering of the energy-critical problem in $\mathbb{R}^3$ for general large data in $\dot{H}^1$.
Then similar results were proven by Ryckman-Vi{\c{s}}an \cite{visan2007global} and Vi{\c{s}}an
\cite{vicsan2011global} on the higher dimension $\mathbb{R}^d$ spaces. Furthermore, Dodson proved mass-critical ($s_c = 0$) global wellposedness results for $\mathbb{R}^d$ in his series of papers \cite{dodson2012global, dodson2016global1, dodson2016global2}.
For the corresponding problems on tori, the Strichartz estimates on rational tori $\mathbb{T}^d$ (see \cite{GV,Yaj,KTao} for the Strichartz estimates in the Euclidean spaces $\mathbb{R}^d$), which imply the local well-posedness of the periodic NLS, were initially developed by Bourgain \cite{bourgain1993fourier}. In \cite{bourgain1993fourier}, number-theoretic lattice counting arguments were used, hence this method worked better on rational tori than on irrational tori. Recently, Bourgain-Demeter's work \cite{bourgain2015proof} proved the optimal Strichartz estimates on both rational and irrational tori via a totally different approach which does not depend on lattice counting. There are also other important references \cite{BourgainStrichartz2007, DeSilvaNatasaStaffilani2007, CatoireWang2010, BourgainStrichartz2013, GuoOhWang2014, KillipVisanStrichartz2016, Demirbas2017, DengGermainGuth2017, FanBilinear2018} on the Strichartz estimates on tori and the global existence of solutions of the Cauchy problem in the sub-critical regime.
On general compact manifolds, Burq-Gerard-Tzvetkov derived Strichartz type estimates and applied these estimates to the global well-posedness of NLS on compact manifolds in a series of papers \cite{BurqGerardTzvetkov2004, BurqGerardTzvetkov2005Bilinear, BurqGerardTzvetkov2005multilinear, BurqGerardTzvetkov2007}. We also refer to \cite{Zhong2008, GerardPierfelice2010, ZaherBilinear2012, Zaher2012} and references therein for other results on the global existence of sub-critical NLS on compact manifolds.
In the critical regime, Herr-Tataru-Tzvetkov \cite{herr2011global} studied the global existence of the energy-critical NLS on $\mathbb{T}^3$ and first proved the global well-posedness with small initial data in $H^1$. They used a crucial trilinear Strichartz type estimate in the context of the critical atomic spaces $U^p$ and $V^p$, which were originally developed in unpublished work on wave maps by Tataru. These atomic spaces were systematically formalized by Hadac-Herr-Koch \cite{hadac2009well} (see also \cite{KochTataru2005}\cite{herr2014strichartz}), and now the atomic spaces $U^p$ and $V^p$ are widely used in the field of the critical well-posedness theory of nonlinear dispersive equations.
The large data global well-posedness result of the energy-critical NLS on rational $\mathbb{T}^3$ was proven by Ionescu-Pausader \cite{ionescu2012energy}, which is the first large data critical global well-posedness result of NLS on a compact manifold.
In a series of papers, Ionescu-Pausader \cite{ionescu2012energy}\cite{ionescu2012global2} and Ionescu-Pausader-Staffilani \cite{ionescu2012global1} developed a method to obtain energy-critical large data global well-posedness in more general manifolds ($\mathbb{T}^3$, $\mathbb{T}^3\times\mathbb{R}$, and $\mathbb{H}^3$) based on the corresponding results on the Euclidean spaces in the same dimension. So far, their method
has been successfully applied to other manifolds in several following papers \cite{pausader2014global, strunk2015global, zhao2017gT2R2, zhao2017TR3}. In particular, based on the recent developments on the large data global well-posedness theory in the product spaces $\mathbb{R}^m\times \mathbb{T}^n\ (m\geq 1, n\geq 1)$, many authors (\cite{TzvetkovVisciglia2012, HaniPausader2014, tzvetkov2014well, modifiedScattering2015, Grebert2016, cheng2017scattering, zhao2017gT2R2, zhao2017TR3, Liu2018}) studied the long time behaviors (scattering, modified scattering and etc) of the solutions of the NLS.
In this paper, we prove the large data global well-posedness result for the defocusing energy-critical NLS on both rational and irrational tori in dimension 4. Our proof is closely related to the strategy developed by Ionescu-Pausader \cite{ionescu2012energy}\cite{ionescu2012global2}. Compared to the $\mathbb{T}^3\times \mathbb{R}$ case in Ionescu-Pausader \cite{ionescu2012global2}, there is less dispersion of the Schr\"{o}dinger operator on the compact manifold $\mathbb{T}^4$. This means that it is more difficult to obtain sharp enough Strichartz type estimates such as Proposition 2.1 in \cite{ionescu2012global2}.
We use the sharp Strichartz type estimates (Proposition
\ref{prop:strichartz}) recently proven by Bourgain-Demeter
\cite{bourgain2015proof} in our proof.
Moreover, since Proposition \ref{prop:strichartz} works for both rational and irrational tori, we can prove the result on both rational and irrational tori.
The main parts in the proof of {Theorem \ref{thm:main}} will follow the concentration-compactness framework of Kenig-Merle
\cite{KenigMerle}, which is a deep and broad road map to deal with critical problems (see also in \cite{KenigMerle2008NLW}\cite{KenigMerle2010}).
Our first step is to obtain the critical local well-posedness theory
and the stability theory of (\ref{eq:NLS}) in $\mathbb{T}^4$. For that
purpose, we follow Herr-Tataru-Tzvetkov's idea \cite{herr2011global}\cite{herr2014strichartz} and introduce the adapted critical spaces $X^s$ and $Y^s$, which are frequency localized modifications of atomic spaces $U^p$ and $V^p$, as our solution spaces and nonlinear spaces.
Applying Proposition \ref{prop:strichartz} and the strip decomposition technique in \cite{herr2011global, herr2014strichartz} to the atomic spaces in time-space frequency space, we
obtain a crucial bilinear estimate and then the local well-posedness of (\ref{eq:NLS}).
Then we measure the solution in a weaker critical space-time norm $Z
$, which plays a similar role as the $L^{10}_{x,t}$ norm in
\cite{colliander2008global}. On the one hand, equipped with the $Z$-norm, we
obtain the refined bilinear estimate (Lemma \ref{lem:bilinear}) and
hence it is proven that the solution stays regular as long as the $Z$-norm stays finite (i.e. global well-posedness with an a priori $Z$-norm bound).
On the other hand, we show that concentration of a large amount of the $Z$-norm in finite time is self-defeating. The reason is as follows.
Concentration of a large amount of the $Z$-norm in finite
time can only happen around a space-time point, which can be
considered as a Euclidean-like solution.
To implement this, arguing by contradiction, we construct a
sequence of initial data which gives rise to a sequence of solutions and
drives the $Z$-norm towards infinity. Then, following the profile decomposition idea (introduced first by Gerard \cite{gerard1998} for the Sobolev embedding and by Merle-Vega \cite{merle1998} for the Schr\"odinger equation), we perform a linear profile decomposition
of the sequence of initial data with one Scale-1-profile and a
series of Euclidean profiles that concentrate at space-time points.
We get nonlinear profiles by running the linear profiles along the
nonlinear Schr\"{o}dinger flow as initial data. By the contradiction
condition, the scattering properties of nonlinear Euclidean profiles
and the defect of interaction between different profiles show that
there is actually at most one profile which is the Euclidean
profile. And the corresponding nonlinear Euclidean profile is just
the Euclidean-like solution we want.
A Euclidean-like solution can be interpreted in some sense as
a solution in the Euclidean space $\mathbb{R}^4$; however, this kind of
concentration as a Euclidean-like solution is prevented by the global well-posedness
theory on the Euclidean space $\mathbb{R}^4$ in Vi{\c{s}}an-Ryckman
\cite{visan2007global} and Vi{\c{s}}an \cite{vicsan2011global}'s
papers.
\subsection{The focusing case ($\mu = -1$)}
In the focusing case, we prove global well-posedness when the modified energy and the modified kinetic energy of the initial data are less than the energy and the kinetic energy of the ground state $W$ in $\mathbb{R}^4$, respectively. Here,
\begin{equation}\label{eqn:groundstate1}
W(x) = W(x, t) = \frac{1}{ 1 + \frac{|x|^2}{8}} \qquad \text{in } \dot{H}^1(\mathbb{R}^4)
\end{equation}
which is a stationary solution of the focusing case of (\ref{eq:NLS}) and also solves the elliptic equation in $\mathbb{R}^4$
\begin{equation}\label{eqn:groundstate2}
\Delta W +|W|^{2}W = 0.
\end{equation}
Then we define a constant $C_4$ by using the stationary solution $W$; $C_4$ is also the best constant in the Sobolev embedding (see Remark \ref{rmk:bestconstant}).
\begin{equation}\label{eq:WC}
\|W\|^2_{\dot{H}^1(\mathbb{R}^4)} = \|W\|^4_{L^4(\mathbb{R}^4)} := \frac{1}{C_4^4} \qquad\text{ and then } \qquad
E_{\mathbb{R}^4}(W) = \frac{1}{4C^4_4},
\end{equation}
where $E_{\mathbb{R}^4}(W)$ is the energy of $W$ in the Euclidean space $\mathbb{R}^4$:
\begin{equation}\label{def:energyinEuclidean}
E_{\mathbb{R}^4}(W):= \frac{1}{2} \int_{\mathbb{R}^4} |\nabla W(x)|^2\, dx - \frac{1}{4}
\int_{\mathbb{R}^4} |{W(x)}|^4\, dx.
\end{equation}
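The identities in (\ref{eq:WC}) can be seen, for instance, by multiplying (\ref{eqn:groundstate2}) by $W$ and integrating by parts over $\mathbb{R}^4$:
\begin{equation*}
\int_{\mathbb{R}^4}|\nabla W|^{2}\,dx=\int_{\mathbb{R}^4}|W|^{4}\,dx,
\qquad\text{hence}\qquad
E_{\mathbb{R}^4}(W)=\Big(\frac{1}{2}-\frac{1}{4}\Big)\|W\|^{2}_{\dot{H}^1(\mathbb{R}^4)}=\frac{1}{4C_4^{4}}.
\end{equation*}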
\begin{thm}[GWP of the focusing NLS]\label{thm:focusing}
Assume $u_0\in H^1(\mathbb{T}^4)$. Assume that $u$ is a maximal-lifespan solution $u: I\times\mathbb{T}^4\to \mathbb{C}$ satisfying
\begin{equation}\label{cond:focusing1}
\sup_{t\in I}\|u(t)\|_{\dot{H}^1(\mathbb{T}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)},
\end{equation}
then for any $T\in [0, +\infty)$, $u\in X^1([-T, T])$ is a solution
of the initial value problem
\begin{equation}\label{eq:IVP2}
(i\partial_t + \Delta) u = -u|u|^2,\qquad u(0) = u_0.
\end{equation}
\end{thm}
For technical reasons, in the focusing case we introduce two modified energies of $u$:
\begin{equation}\label{def:modifiedEnergy1}
E_*(u)(t) := \frac{1}{2}(\|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)}+c_* \|u(t)\|^2_{L^2(\mathbb{T}^4)}) - \frac{1}{4}\|u(t)\|_{L^4(\mathbb{T}^4)}^4,
\end{equation}
and
\begin{equation}\label{def:modifiedEnergy2}
E_{**}(u)(t) := \frac{1}{2}(\|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)}+c_* \|u(t)\|^2_{L^2(\mathbb{T}^4)})- \frac{1}{4}\|u(t)\|_{L^4(\mathbb{T}^4)}^4 + \frac{c_*^2 C^4_4}{4}\|u(t)\|^4_{L^2(\mathbb{T}^4)},
\end{equation}
where $c_*$ is a fixed constant determined by the Sobolev embedding on $\mathbb{T}^4$ (Lemma \ref{lem:sobolev}). By the definitions (\ref{def:modifiedEnergy1}) and (\ref{def:modifiedEnergy2}), $E_*(u)(t)$ and $E_{**}(u)(t)$ are conserved in time.
We also introduce $\|u\|_{H_*^1(\mathbb{T}^4)}$ as a modified inhomogeneous Sobolev norm:
\begin{equation}\label{def:modifiedNorm}
\|u\|^2_{H_*^1(\mathbb{T}^4)} = \|u\|^2_{\dot{H}^1(\mathbb{T}^4)} + c_* \|u\|^2_{L^2(\mathbb{T}^4)}
\end{equation}
Obviously, the $H^1_*(\mathbb{T}^4)$-norm and the $H^1(\mathbb{T}^4)$-norm are comparable ($\|u\|_{H^1_*(\mathbb{T}^4)} \simeq \|u\|_{H^1(\mathbb{T}^4)}$).
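Explicitly, with the convention $\|u\|^2_{H^1(\mathbb{T}^4)} = \|u\|^2_{\dot{H}^1(\mathbb{T}^4)}+\|u\|^2_{L^2(\mathbb{T}^4)}$, one has
\[
\min(1, c_*)\,\|u\|^2_{H^1(\mathbb{T}^4)} \;\leq\; \|u\|^2_{H_*^1(\mathbb{T}^4)} \;\leq\; \max(1, c_*)\,\|u\|^2_{H^1(\mathbb{T}^4)}.
\]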
\begin{cor}\label{cor:focusing}
Assume that $u_0\in H^1(\mathbb{T}^4)$ satisfies
\begin{equation}\label{cond:focusing2}
\|u_0\|_{H^1_*(\mathbb{T}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)},\quad E_*(u_0)<E_{\mathbb{R}^4}(W);
\end{equation}
OR
\begin{equation}\label{cond:focusing3}
\|u_0\|_{\dot{H}^1(\mathbb{T}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)},\quad E_{**}(u_0)<E_{\mathbb{R}^4}(W),
\end{equation}
where $E_*(u_0)$ and $E_{**}(u_0)$ are the two modified energies defined in (\ref{def:modifiedEnergy1}) and (\ref{def:modifiedEnergy2}), and $E_{\mathbb{R}^4}(W)$ is the energy in the Euclidean space defined in (\ref{def:energyinEuclidean}).
Then for any $T\in [0, \infty)$, there exists a unique global solution $u\in X^1([-T, T])$
of the initial value problem (\ref{eq:IVP2}).
In addition, the mapping $u_0 \to u$ extends to a continuous mapping from
$H^1(\mathbb{T}^4)$ to $X^1([-T, T])$ for any $T\in [0, \infty)$.
\end{cor}
\begin{remark}
By the energy trapping lemma (Theorem \ref{thm:EnergyTrapping}) in Section 2, either
(\ref{cond:focusing2}) or (\ref{cond:focusing3}) implies the condition (\ref{cond:focusing1}) in Theorem \ref{thm:focusing}.
\end{remark}
In the focusing case, the global well-posedness result usually does not
hold for arbitrary data. For the energy-critical focusing NLS on $\mathbb{R}^d$, Kenig-Merle \cite{KenigMerle} first proved global
well-posedness and scattering with initial data below a ground state
threshold ($E_{\mathbb{R}^d}(u_0)<E_{\mathbb{R}^d}(W)$ and $\|u_0\|_{\dot{H}^1}< \|W\|_{\dot{H}^1}$) in the
radial case ($d\geq 3$). The corresponding results without
the radial condition were then proven by Killip-Vi{\c{s}}an \cite{KillipVisanFocusing} ($d\geq 5$) and Dodson \cite{DodsonFocusing} ($d=4$). We also refer to \cite{DuyckaertsFocusing2008, HolmerRoudenko2008, Cazenave2011, DuyckaertsRoudenko2015, Masaki2015, DodsonMurphy2017, dodson2017new} for other results on the focusing NLS.
In this paper, we also prove a similar global well-posedness result for the energy-critical focusing NLS on $\mathbb{T}^4$ below the ground state threshold. As in the defocusing case, we follow the idea in Ionescu-Pausader \cite{ionescu2012energy}\cite{ionescu2012global2} and use the
focusing global well-posedness result \cite{DodsonFocusing} in $\mathbb{R}^4$ as a black box.
It is known that the conditions in $\mathbb{R}^d$ are $E_{\mathbb{R}^d}(u_0)<E_{\mathbb{R}^d}(W)$
and $\|u_0\|_{\dot{H}^1}< \|W\|_{\dot{H}^1}$, which are tightly related to the Sobolev embedding with the best constant in $\mathbb{R}^d$. However, the
sharp version of the Sobolev embedding on $\mathbb{T}^4$ (Lemma \ref{lem:sobolev}) is
quite different. So, compared to the conditions for initial data in the Euclidean space $\mathbb{R}^d$, the conditions in Corollary \ref{cor:focusing} have to be different. A similar situation occurs for the focusing
NLS on the hyperbolic space, which does not share the same sharp version
of the Sobolev embedding either. On the hyperbolic space, Fan-Kleinhenz
\cite{FanVariational} give a minimal ratio between energy and $L^2$
norm under the conditions $E_{\mathbb{H}^3}(u_0)<E_{\mathbb{R}^3}(W)$ and $\|u_0\|_{\dot{H}^1}< \|W\|_{\dot{H}^1}$ in the radial case, and in Banica-Duyckaerts \cite{BanicaDuyckaerts}'s paper on the focusing NLS on the
hyperbolic space, the Sobolev norm and the energy are modified by
subtracting a multiple of the $L^2$ norm.
On $\mathbb{T}^d$, based on the best constants of the Sobolev embedding (Lemma \ref{lem:sobolev}) on $\mathbb{T}^d$, we also modify the energy and the Sobolev norm by adding terms related to the $L^2$ norm, so that the modified
conditions together with the Sobolev embedding yield the energy
trapping property, which controls the Sobolev norm globally in time.
In Section 2, we will discuss the Sobolev embedding and the energy
trapping lemma in detail.
\subsection{Outline of the following paper}
The rest of the paper is organized as follows. In Section 2, we prove the energy trapping property for the focusing NLS. In Section 3, we introduce the adapted atomic spaces $X^s$, $Y^s$ and the $Z$ norm, and provide some corresponding embedding properties of these spaces. In Section 4, we use Herr-Tataru-Tzvetkov's method and Bourgain-Demeter's sharp Strichartz estimate to develop a large-data local well-posedness and stability theory for (\ref{eq:NLS}).
In Section 5, we study the behavior of Euclidean-like solutions to the linear and nonlinear equations concentrating at a point in space and time. In Section 6, we recall a profile decomposition, similar to that of Section 5 in \cite{ionescu2012energy}, to measure the defects of compactness in the Strichartz inequality. In Section 7, we prove the main theorems (Theorem \ref{thm:main} and Theorem \ref{thm:focusing}) except for one lemma. In Section 8, we prove the remaining lemma about approximate solutions.
\subsection*{Acknowledgements}
The author is greatly indebted to his advisor, Andrea R. Nahmod, for suggesting this problem and her patient guidance and warm encouragement over the past years. The author would like to thank Beno\^{i}t Pausader for prompting the author to study the focusing case. The author also would like to thank Chenjie Fan for his helpful discussions on the focusing case of this paper. The author acknowledges support from the National Science Foundation
through his advisor Andrea R. Nahmod's grants NSF-DMS 1201443 and NSF-DMS 1463714.
\section{Energy trapping for the focusing NLS}
Before proceeding to the proofs of the main theorems (Theorem \ref{thm:main} and Theorem \ref{thm:focusing}), we explain how Theorem \ref{thm:focusing} implies Corollary \ref{cor:focusing} in the focusing case by using the energy trapping argument. In this section, we prove the energy trapping argument on $\mathbb{T}^4$, which is different from the energy trapping argument (Theorem 3.9 in \cite{KenigMerle}) in $\mathbb{R}^4$.
\begin{lem}[Sobolev embedding with best constants by \cite{Aubin2}\cite{Hebey2}\cite{hebey1996}]\label{lem:sobolev}
Let $f\in H^1(\mathbb{T}^4)$, then there exists a positive constant $c_*$, such that
\begin{equation}\label{eq:Sobolev}
\|f\|^2_{L^4(\mathbb{T}^4)} \leq C^2_4 (\|f\|^2_{\dot{H}^1(\mathbb{T}^4)} + c_* \|f\|^2_{L^2(\mathbb{T}^4)}).
\end{equation}
where $C_4$ is the best constant of this inequality.
\end{lem}
\begin{remark}\label{rmk:bestconstant}
$C_4$ is the same constant as in (\ref{eq:WC}), because $C_4$ is also the best constant of the Sobolev embedding in $\mathbb{R}^4$,
$
\|f\|^2_{L^4(\mathbb{R}^4)} \leq C^2_4 \|f\|^2_{\dot{H}^1(\mathbb{R}^4)},
$
and the function $W(x)$ attains equality in this Sobolev embedding with the best constant $C_4$ in $\mathbb{R}^4$.
\end{remark}
\begin{remark}
Since $\|u\|^2_{H_*^1(\mathbb{T}^4)} = \|u\|^2_{\dot{H}^1(\mathbb{T}^4)} + c_* \|u\|^2_{L^2(\mathbb{T}^4)}$, the Sobolev embedding (Lemma \ref{lem:sobolev}) can be also written in the form:
\[
\|f\|^2_{L^4(\mathbb{T}^4)} \leq C^2_4 \|f\|^2_{{H}_*^1(\mathbb{T}^4)}.
\]
Set $c_{opt}:= \inf\{c_* : (\ref{eq:Sobolev}) \text{ holds with the constant } c_*\}$.
By taking $f=1$, it is easy to check that $c_{opt} \geq \frac{1}{C_4^2 Vol(\mathbb{T}^4)^{1/2}}$.
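Indeed, for the constant function $f\equiv 1$ we have $\|f\|^2_{L^4(\mathbb{T}^4)} = Vol(\mathbb{T}^4)^{1/2}$, $\|f\|^2_{\dot{H}^1(\mathbb{T}^4)} = 0$ and $\|f\|^2_{L^2(\mathbb{T}^4)} = Vol(\mathbb{T}^4)$, so (\ref{eq:Sobolev}) forces
\[
Vol(\mathbb{T}^4)^{1/2} \leq C_4^2\, c_*\, Vol(\mathbb{T}^4),
\qquad\text{i.e.}\qquad c_* \geq \frac{1}{C_4^2\, Vol(\mathbb{T}^4)^{1/2}}.
\]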
\end{remark}
\begin{lem}\label{lem:EnergyTrapping}
(i) Suppose $f\in H^1(\mathbb{T}^4)$ and $\delta_0>0$ satisfy
\begin{equation}\label{eq:conditionOfEnergytrapping}
\|f\|_{H^1_*(\mathbb{T}^4)}< \|W\|_{\dot{H}^1(\mathbb{R}^4)} \qquad\text{ and }\qquad E_*(f)<(1-\delta_0)E_{\mathbb{R}^4}(W),
\end{equation}
then there exists $\bar{\delta} = \bar{\delta}(\delta_0)>0$ such that
\begin{align}\label{eq:energyresult1}
\|f\|^2_{H^1_*(\mathbb{T}^4)}< (1-\bar{\delta}) \|W\|^2_{\dot{H}^1(\mathbb{R}^4)}\\\label{eq:energyresult2}
\|f\|^2_{H^1_*(\mathbb{T}^4)} - \|f\|^4_{L^4(\mathbb{T}^4)} \geq \bar{\delta}\|f\|^2_{H^1_*(\mathbb{T}^4)},
\end{align}
and in particular
\begin{equation}\label{eq:energyresult3}
E_*(f) \geq \frac{1}{4}(1+\bar{\delta})\|f\|^2_{H^1_*(\mathbb{T}^4)}.
\end{equation}
(ii) Suppose $f\in H^1(\mathbb{T}^4)$ and $\delta_0>0$ satisfy
\begin{equation}\label{eq:conditionOfEnergytrapping2}
\|f\|_{\dot{H}^1(\mathbb{T}^4)}< \|W\|_{\dot{H}^1(\mathbb{R}^4)} \qquad\text{ and }\qquad E_{**}(f)<(1-\delta_0)E_{\mathbb{R}^4}(W),
\end{equation}
then there exists $\bar{\delta} = \bar{\delta}(\delta_0)>0$ such that
\begin{align}\label{eq:energyresult12}
\|f\|^2_{\dot{H}^1(\mathbb{T}^4)}< (1-\bar{\delta}) \|W\|^2_{\dot{H}^1(\mathbb{R}^4)}\\\label{eq:energyresult22}
\|f\|^2_{\dot{H}^1(\mathbb{T}^4)} - \|f\|^4_{L^4(\mathbb{T}^4)} + 2c_* \|f\|^2_{L^2(\mathbb{T}^4)}+c_*^2 C^4_4 \|f\|_{L^2(\mathbb{T}^4)}^4\geq \bar{\delta}\|f\|^2_{\dot{H}^1(\mathbb{T}^4)},
\end{align}
and in particular
\begin{equation}\label{eq:energyresult32}
E_{**}(f) \geq \frac{1}{4}(1+\bar{\delta})\|f\|^2_{\dot{H}^1(\mathbb{T}^4)}.
\end{equation}
\end{lem}
\begin{proof}
In the proof of part (i), we follow almost verbatim the proof of Lemma 3.4 in Kenig-Merle's paper \cite{KenigMerle}, but use the ${H^1_*(\mathbb{T}^4)}$-norm instead of the ${\dot{H}^1(\mathbb{T}^4)}$-norm. Consider the quadratic function $g_1(y) = \frac{1}{2}y - \frac{C^4_4}{4}y^2$ and plug in $\|f\|^2_{H^1_*(\mathbb{T}^4)}$; by the Sobolev embedding (Lemma \ref{lem:sobolev}) and the assumption (\ref{eq:conditionOfEnergytrapping}), we have that
\begin{equation}\label{eq:energytrapping1}
\begin{split}
g_1(\|f\|^2_{H^1_*}) & = \frac{1}{2} \|f\|^2_{H^1_*} - \frac{C^4_4}{4} \|f\|^4_{H^1_*}\\
&\leq \frac{1}{2} \|f\|^2_{H^1_*} - \frac{1}{4}\|f\|_{L^4}^4 = E_*(f)\\
&< (1-\delta_0)E_{\mathbb{R}^4}(W)= (1-\delta_0)g_1(\|W\|^2_{\dot{H}^1(\mathbb{R}^4)}).
\end{split}
\end{equation}
From (\ref{eq:energytrapping1}) and the properties of the quadratic function $g_1$, it follows that $\|f\|^2_{H^1_*(\mathbb{T}^4)}<(1-\bar{\delta})\|W\|^2_{\dot{H}^1(\mathbb{R}^4)}$, where $\bar{\delta}\sim \delta_0^{\frac{1}{2}}$.
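For the reader's convenience, here is one way to make the dependence $\bar{\delta}\sim \delta_0^{1/2}$ explicit (a sketch of the standard quadratic-function argument, where we write $y_W := \|W\|^2_{\dot{H}^1(\mathbb{R}^4)} = \frac{1}{C_4^4}$ for the maximum point of $g_1$ and note $g_1(y_W) = E_{\mathbb{R}^4}(W)$): the exact identity
\[
g_1(y) = E_{\mathbb{R}^4}(W) - \frac{C_4^4}{4}\,(y - y_W)^2
\]
together with (\ref{eq:energytrapping1}) gives $\frac{C_4^4}{4}(y-y_W)^2 > \delta_0 E_{\mathbb{R}^4}(W) = \frac{\delta_0}{4C_4^4}$ for $y = \|f\|^2_{H^1_*(\mathbb{T}^4)}$, hence $|y - y_W| > \delta_0^{1/2}\, y_W$; combined with $y < y_W$ from (\ref{eq:conditionOfEnergytrapping}), this yields $y < (1-\delta_0^{1/2})\, y_W$, so one may take $\bar{\delta} = \delta_0^{1/2}$.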
Next, choose $g_2(y) = y - C^4_4 y^2$. Plugging in $\|f\|^2_{H^1_*(\mathbb{T}^4)}$ and using the Sobolev embedding (Lemma \ref{lem:sobolev}), we have that
\begin{equation}\label{eq:energytrapping2}
g_2(\|f\|^2_{H^1_*(\mathbb{T}^4)}) = \|f\|^2_{H^1_*(\mathbb{T}^4)}- C^4_4 \|f\|^4_{H^1_*(\mathbb{T}^4)} \leq \|f\|^2_{H^1_*(\mathbb{T}^4)} - \|f\|^4_{L^4(\mathbb{T}^4)}.
\end{equation}
Since $g_2(0) = 0$, $g''_2(y) = - 2C^4_4<0$ and
$ \|f\|^2_{H^1_*(\mathbb{T}^4)} < (1-\bar{\delta}) \|W\|^2_{\dot{H}^1(\mathbb{R}^4)}$, by the concavity of $g_2$ (Jensen's inequality) and (\ref{eq:WC}),
\begin{equation}\label{eq:energytrapping3}
g_2(\|f\|^2_{H^1_*(\mathbb{T}^4)})> g_2((1-\bar{\delta})\|W\|^2_{\dot{H}^1(\mathbb{R}^4)})\frac{\|f\|^2_{H^1_*(\mathbb{T}^4)}}{(1-\bar{\delta})\|W\|^2_{\dot{H}^1(\mathbb{R}^4)}}=\bar{\delta}\|f\|^2_{H^1_*(\mathbb{T}^4)}.
\end{equation}
Combining (\ref{eq:energytrapping2}) and (\ref{eq:energytrapping3}), we get (\ref{eq:energyresult2}).
From (\ref{eq:energyresult2}), we get (\ref{eq:energyresult3}):
\[
E_*(f) = \frac{1}{4}\|f\|^2_{H^1_*(\mathbb{T}^4)} + \frac{1}{4}(\|f\|^2_{H^1_*(\mathbb{T}^4)} -\|f\|^4_{L^4(\mathbb{T}^4)})\geq \frac{1}{4}(1+\bar{\delta})\|f\|^2_{H^1_*(\mathbb{T}^4)}.
\]
The proof of part (ii) is similar to that of part (i). Under the assumptions (\ref{eq:conditionOfEnergytrapping2}) of part (ii), by squaring the Sobolev embedding (Lemma \ref{lem:sobolev}), we have that
\begin{equation}\label{eq:squareSobolev}
C^4_4 \|f\|^4_{\dot{H}^1(\mathbb{T}^4)}\geq \|f\|^4_{L^4(\mathbb{T}^4)} -2c_* \|f\|^2_{L^2(\mathbb{T}^4)}-c_*^2C^4_4\|f\|^4_{L^2(\mathbb{T}^4)}.
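Indeed, squaring (\ref{eq:Sobolev}) gives
\begin{align*}
\|f\|^4_{L^4(\mathbb{T}^4)} &\leq C_4^4\big(\|f\|^2_{\dot{H}^1(\mathbb{T}^4)} + c_*\|f\|^2_{L^2(\mathbb{T}^4)}\big)^2\\
&= C_4^4\|f\|^4_{\dot{H}^1(\mathbb{T}^4)} + 2c_* C_4^4\|f\|^2_{\dot{H}^1(\mathbb{T}^4)}\|f\|^2_{L^2(\mathbb{T}^4)} + c_*^2C_4^4\|f\|^4_{L^2(\mathbb{T}^4)},
\end{align*}
and the cross term is bounded by $2c_*\|f\|^2_{L^2(\mathbb{T}^4)}$, since the first assumption in (\ref{eq:conditionOfEnergytrapping2}) gives $C_4^4\|f\|^2_{\dot{H}^1(\mathbb{T}^4)} < C_4^4\|W\|^2_{\dot{H}^1(\mathbb{R}^4)} = 1$ by (\ref{eq:WC}); rearranging yields (\ref{eq:squareSobolev}).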
Plugging $\|f\|^2_{\dot{H}^1(\mathbb{T}^4)}$ into $g_1$ and using (\ref{eq:squareSobolev}), we obtain that
\begin{equation}\label{eq:energytrapping12}
\begin{split}
g_1(\|f\|^2_{\dot{H}^1}) & = \frac{1}{2} \|f\|^2_{\dot{H}^1} - \frac{C^4_4}{4} \|f\|^4_{\dot{H}^1}\\
&\leq \frac{1}{2} \|f\|^2_{\dot{H}^1} - \frac{1}{4}\|f\|_{L^4}^4 + \frac{c_*}{2}\|f\|^2_{L^2(\mathbb{T}^4)}+\frac{c_*^2 C^4_4}{4}\|f\|^4_{L^2(\mathbb{T}^4)}= E_{**}(f)\\
&< (1-\delta_0)E_{\mathbb{R}^4}(W)= (1-\delta_0)g_1(\|W\|^2_{\dot{H}^1(\mathbb{R}^4)}).
\end{split}
\end{equation}
From (\ref{eq:energytrapping12}) and the properties of the quadratic function $g_1$, it follows that $\|f\|^2_{\dot{H}^1(\mathbb{T}^4)}<(1-\bar{\delta})\|W\|^2_{\dot{H}^1(\mathbb{R}^4)}$, where $\bar{\delta}\sim \delta_0^{\frac{1}{2}}$. Similarly, we also obtain (\ref{eq:energyresult22}) and (\ref{eq:energyresult32}) under the assumption (\ref{eq:conditionOfEnergytrapping2}).
\end{proof}
\begin{thm}[Energy trapping]\label{thm:EnergyTrapping}
(i)
Let $u$ be a solution of the IVP (\ref{eq:IVP2}) such that, for some $\delta_0>0$,
\begin{equation}\label{ass:i}
\|u_0\|_{H^1_*(\mathbb{T}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)},\quad E_*(u_0)<(1-\delta_0)E_{\mathbb{R}^4}(W);
\end{equation}
Let $I\ni 0$ be the maximal interval of existence, then there exists $\bar{\delta} = \bar{\delta}(\delta_0)>0$ such that for all $t\in I$
\begin{align}
\|u(t)\|^2_{H^1_*(\mathbb{T}^4)}< (1-\bar{\delta}) \|W\|^2_{\dot{H}^1(\mathbb{R}^4)},\\
\|u(t)\|^2_{H^1_*(\mathbb{T}^4)} - \|u(t)\|^4_{L^4(\mathbb{T}^4)} \geq \bar{\delta}\|u(t)\|^2_{H^1_*(\mathbb{T}^4)},
\end{align}
and in particular
\begin{equation}\label{eq:energyresult4}
E_*(u)(t) \geq \frac{1}{4}(1+\bar{\delta})\|u(t)\|^2_{H^1_*(\mathbb{T}^4)}.
\end{equation}
(ii)
Let $u$ be a solution of the IVP (\ref{eq:IVP2}) such that, for some $\delta_0>0$,
\begin{equation}\label{ass:ii}
\|u_0\|_{\dot{H}^1(\mathbb{T}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)},\quad E_{**}(u_0)<(1-\delta_0)E_{\mathbb{R}^4}(W);
\end{equation}
Let $I\ni 0$ be the maximal interval of existence, then there exists $\bar{\delta} = \bar{\delta}(\delta_0)>0$ such that for all $t\in I$
\begin{align}
\|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)}< (1-\bar{\delta}) \|W\|^2_{\dot{H}^1(\mathbb{R}^4)},\\
\|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)} - \|u(t)\|^4_{L^4(\mathbb{T}^4)} + 2c_* \|u(t)\|^2_{L^2(\mathbb{T}^4)} + c_*^2C^4_4\|u(t)\|^4_{L^2(\mathbb{T}^4)}\geq \bar{\delta}\|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)},
\end{align}
and in particular
\begin{equation}\label{eq:energyresult42}
E_{**}(u)(t) \geq \frac{1}{4}(1+\bar{\delta})\|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)}.
\end{equation}
\end{thm}
\begin{proof}
By the conservation of the energy and the mass, this theorem follows directly from Lemma \ref{lem:EnergyTrapping} by a continuity argument: the set of times at which the strict inequalities of Lemma \ref{lem:EnergyTrapping} hold is nonempty, open by the continuity of the flow, and closed thanks to the strict improvement $(1-\bar{\delta})$ provided by the lemma.
\end{proof}
\begin{remark}\label{rmk:EnergySimSobolev}
The energy trapping lemma (Theorem \ref{thm:EnergyTrapping}) shows that if the initial data satisfies the condition (\ref{cond:focusing2}) or (\ref{cond:focusing3}) then the solution $u(t)$ satisfies $\|u(t)\|_{\dot{H}^1(\mathbb{T}^4)}< \|W\|_{\dot{H}^1(\mathbb{R}^4)}$ for all $t$ in the lifespan of the solution.
So Theorem \ref{thm:focusing} implies Corollary \ref{cor:focusing}.
In particular, we also obtain that $E_*(u)(t)\simeq \|u(t)\|^2_{H^1_*(\mathbb{T}^4)}$ under the assumption (\ref{ass:i}) and $E_{**}(u)(t)\simeq \|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)}$ under the assumption (\ref{ass:ii}) by Theorem \ref{thm:EnergyTrapping}.
\end{remark}
\section{Adapted function spaces}
In this section, we introduce the $X^s$ and $Y^s$ spaces, which are based on the atomic spaces $U^p$ and $V^p$ originally developed in unpublished work on wave maps by Tataru and later applied to PDEs in \cite{hadac2009well}\cite{herr2011global}\cite{herr2014strichartz}; we will use the $X^s$ and $Y^s$ spaces in the proofs of the defocusing and focusing global well-posedness.
$\mathcal{H}$ is a separable Hilbert space on $\mathbb{C}$ and $\mathcal{Z}$ denotes the set of finite partitions $-\infty = t_0 < t_1 < ... <t_K = \infty$ of the real line, with the convention that $v(\infty) := 0$ for any function $v : \mathbb{R} \to \mathcal{H}$.
\begin{definition}[Definition 2.1 in \cite{herr2011global}]
Let $1\leq p < \infty$. For $\{t_k\}_{k=0}^K \in \mathcal{Z}$ and $\{\phi_k\}_{k=0}^{K-1} \subset \mathcal{H}$ with $\sum_{k=0}^{K-1} \|\phi_k\|_{\mathcal{H}}^p = 1$ and $\phi_0 = 0$, a $U^p$-atom is a piecewise defined function $a : \mathbb{R} \to \mathcal{H}$ of the form
$$
a = \sum_{k=1}^K \mathds{1}_{[t_{k-1}, t_k)} \phi_{k-1}.
$$
The atomic Banach space $U^p(\mathbb{R}, \mathcal{H})$ is then defined to be the set of all functions $u : \mathbb{R} \to \mathcal{H}$ such that
$$
u = \sum_{j=1}^{\infty} \lambda_{j} a_j,\qquad \text{for}\ U^p \text{-atoms}\ a_j,\qquad \{\lambda_j\}_j \in \ell^1,\quad \|u\|_{U^p}<\infty,
$$
where
$$
\|u\|_{U^p} : = \inf \{\sum_{j=1}^{\infty} |\lambda_j| : u = \sum_{j=1}^\infty \lambda_j a_j,\ \lambda_j \in \mathbb{C}\ \text{and}\ a_j\ \text{a}\ U^p\text{-atom}\}.
$$
Here $\mathds{1}_{I}$ denotes the indicator function over the time interval $I$.
\end{definition}
\begin{definition}[Definition 2.2 in \cite{herr2011global}]
Let $1\leq p < \infty$. The Banach space $V^p(\mathbb{R}, \mathcal{H})$ is defined to be the set of all functions $v: \mathbb{R} \to \mathcal{H}$, with the convention $v(\infty):=0$, for which the limit $v(-\infty) := \lim_{t\to -\infty} v(t)$ exists and such that
$$
\|v\|_{V^p} : = \sup_{\{t_k\}_{k=0}^K\in \mathcal{Z}} (\sum_{k=1}^K \|v(t_k)-v(t_{k-1})\|^p_\mathcal{H})^\frac{1}{p}\quad
\text{is}\ \text{finite}.
$$
Likewise, let $V_{-}^p$ denote the closed subspace of all $v \in V^p$ with $\lim_{t\to -\infty}v(t) = 0$, and let $V_{-, rc}^p$ denote the subspace of all right-continuous functions in $V_{-}^p$.
\end{definition}
\begin{remark}[Some embedding properties]\label{rmk:embedding}
Note that for $1\leq p \leq q < \infty$,
\begin{equation}
U^p(\mathbb{R}, \mathcal{H}) \hookrightarrow U^q(\mathbb{R}, \mathcal{H}) \hookrightarrow L^{\infty}(\mathbb{R},\mathcal{H}),
\end{equation}
and functions in $U^p(\mathbb{R}, \mathcal{H})$ are right continuous, and $\lim_{t\to -\infty} u(t) = 0$ for each $u \in U^p(\mathbb{R}, \mathcal{H})$. Also note that,
\begin{equation}
U^p(\mathbb{R}, \mathcal{H}) \hookrightarrow V^p_{-,rc} (\mathbb{R}, \mathcal{H}) \hookrightarrow U^q(\mathbb{R}, \mathcal{H}).
\end{equation}
\end{remark}
\begin{definition}[Definition 2.5 in \cite{herr2011global}]
For $s\in \mathbb{R}$, we let $U^p_{\Delta}H^s$, respectively $V^p_{\Delta}H^s$, be the space of all functions $u : \mathbb{R}\to H^s(\mathbb{T}^d)$ such that $t\mapsto e^{-it\Delta}u(t)$ is in $U^p(\mathbb{R}, H^s)$, respectively in $V^p(\mathbb{R}, H^s)$ with norm
$$
\|u\|_{U^p_{\Delta}H^s} := \|e^{-it\Delta}u(t)\|_{U^p(\mathbb{R},H^s)},
\quad \|u\|_{V^p_{\Delta}H^s} := \|e^{-it\Delta}u(t)\|_{V^p(\mathbb{R},H^s)}.
$$
\end{definition}
\begin{definition}[Definition 2.6 in \cite{herr2011global}]\label{def:Xs}
For $s\in \mathbb{R}$, we define $X^s$ as the space of all functions $u : \mathbb{R}\to H^s(\mathbb{T}^d)$ such that for every $n\in \mathbb{Z}^d$, the map $t\mapsto e^{it|n|^2}\widehat{u(t)}(n)$ is in $U^2(\mathbb{R}, \mathbb{C})$, with the norm
\begin{equation}
\|u\|_{X^s} : = (\sum_{n\in \mathbb{Z}^d}\langle n \rangle^{2s} \|e^{it|n|^2}\widehat{u(t)}(n)\|^2_{U_t^2})^{\frac{1}{2}}\quad \text{is finite}.
\end{equation}
\end{definition}
\begin{definition}[Definition 2.7 in \cite{herr2011global}]
For $s\in \mathbb{R}$, we define $Y^s$ as the space of all functions $u : \mathbb{R}\to H^s(\mathbb{T}^d)$ such that for every $n\in \mathbb{Z}^d$, the map $t\mapsto e^{it|n|^2}\widehat{u(t)}(n)$ is in $V_{rc}^2(\mathbb{R}, \mathbb{C})$, with the norm
\begin{equation}
\|u\|_{Y^s} : = (\sum_{n\in \mathbb{Z}^d}\langle n \rangle^{2s} \|e^{it|n|^2}\widehat{u(t)}(n)\|^2_{V_t^2})^{\frac{1}{2}}\quad \text{is finite}.
\end{equation}
\end{definition}
Note that
\begin{equation}\label{eq:Embedding}
U^2_{\Delta}H^s \hookrightarrow X^s \hookrightarrow Y^s \hookrightarrow V^2_{\Delta}H^s.
\end{equation}
\begin{prop}[Proposition 2.10 in \cite{hadac2009well}]\label{EstimateFreeSolution}
Suppose $u := e^{it\Delta}\phi$ is a linear (free) Schr\"{o}dinger solution. Then for any $T>0$ we have that
$$\|u\|_{X^s([0,T])} \leq \|\phi\|_{H^s}.$$
\end{prop}
\begin{proof}
Since $u = e^{it\Delta}\phi$, the map $t\mapsto e^{it|n|^2}\widehat{u(t)}(n) = \widehat{\phi}(n)$ is constant in time for every $n$, and the $U^2_t$ norm of (a suitable extension of) this constant function is at most $|\widehat{\phi}(n)|$. Hence $\|u\|_{X^s([0,T])} \leq (\sum_{n\in \mathbb{Z}^d}\langle n \rangle^{2s} |\widehat{\phi}(n)|^2)^{\frac{1}{2}} = \|\phi\|_{H^s}$.
\end{proof}
\begin{remark}
Compared with Bourgain's $X^{s,b}$ spaces, first introduced in \cite{bourgain1993fourier}, we have
\begin{align*}
\|v\|_{X^{s,b}} & = \|e^{-it\Delta}v\|_{H^b_tH^s_x},\\
\|v\|_{U^p_\Delta H^s} & = \|e^{-it\Delta}v\|_{U^p_tH^s_x},\\
\|v\|_{V^p_\Delta H^s} & = \|e^{-it\Delta}v\|_{V^p_tH^s_x}.\\
\end{align*}
Later we will also see that the atomic spaces enjoy duality and
transfer principle properties similar to those of $X^{s,b}$.
\end{remark}
\begin{remark}
Following the definitions, it is easy to check the following embedding property:
\begin{equation}
U^2_{\Delta}H^s \hookrightarrow X^s \hookrightarrow Y^s \hookrightarrow V^2_{\Delta}H^s
\hookrightarrow L^\infty(\mathbb{R}, H^s).
\end{equation}
\end{remark}
\begin{definition}[$X^s$ and $Y^s$ restricted to a time interval $I$]
For intervals $I\subset \mathbb{R}$, we define $X^s(I)$ and $Y^s(I)$ as following
\begin{equation*}
X^s(I) := \{ v\in C(I: H^s) : \|v\|_{X^s(I)} := \sup_{J\subset I ,\ |J|\leq 1}
\inf_{\tilde{v}|_J = v}
\|\tilde{v}\|_{X^s}<\infty\},
\end{equation*}
and
\begin{equation*}
Y^s(I) := \{ v\in C(I: H^s) : \|v\|_{Y^s(I)} := \sup_{J\subset I ,\ |J|\leq 1}
\inf_{\tilde{v}|_J = v}
\|\tilde{v}\|_{Y^s}<\infty\}.
\end{equation*}
\end{definition}
We will consider our solutions in the $X^1(I)$ spaces; let us now introduce the nonlinear
norm $N(I)$.
\begin{definition}[Nonlinear norm $N(I)$]
Let $I=[0, T]$, then
\begin{equation*}
\|f\|_{N(I)} := \left\| \int_0^t e^{i(t-t')\Delta} f(t') dt'\right\|_{X^1(I)}
\end{equation*}
\end{definition}
\begin{prop}[Proposition 2.11 in \cite{herr2014strichartz}]\label{prop:dual}
Let $s>0$. For $f\in L^1(I, H^1(\mathbb{T}^4))$
we have
\begin{equation}
\|f \|_{N(I)}\leq
\sup_{\|v\|_{Y^{-1}(I)}=1}
\left|\int_I\int_{\mathbb{T}^4} f(t, x)\overline{v(t,x)}dxdt\right|.
\end{equation}
\end{prop}
Now we will need a weaker norm $Z$, which plays a role similar to that of the
$L^{10}_{t,x}$ norm in \cite{colliander2008global}.
\begin{definition}
\[\|v\|_{Z(I)} := \sup_{J\subset I, \ |J|\leq 1}
\left( \sum_{N \in 2^\mathbb{Z}} N^2 \|P_N v\|^4_{L^4(\mathbb{T}^4\times J)}
\right)^{\frac{1}{4}}.\]
\end{definition}
\begin{remark}
$\|v\|_{Z(I)}$ actually can be considered as
\[
\sum_{p\in \{p_1, p_2, \cdots, p_k\}} \sup_{J\subset I, \ |J|\leq 1}
\left( \sum_{N \in 2^\mathbb{Z}} N^{6-p} \|P_N v\|^p_{L^p(\mathbb{T}^4\times J)} \right)^{\frac{1}{p}},
\]
where $\{p_1, p_2, \cdots, p_k\}$ is the set of exponents of the $L^p$ estimates that we need in
the proof of the nonlinear estimate.
In our case, we only need $\|P_N u\|_{L^4(\mathbb{T}^4\times I)} \lesssim \|P_N u\|_{Z(I)}$
in the proof of the nonlinear estimates, so we choose $\{p_1, p_2, \cdots, p_k\}= \{4\}$.
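A quick way to see this bound (assuming $N\geq 1$ and $|J|\leq 1$ for $J\subset I$): applying the definition of the $Z$-norm to $P_N u$, only the dyadic blocks comparable to $N$ contribute, so
\[
N^{\frac{1}{2}}\,\|P_N u\|_{L^4(\mathbb{T}^4\times I)} \lesssim \|P_N u\|_{Z(I)},
\]
and in particular $\|P_N u\|_{L^4(\mathbb{T}^4\times I)}\lesssim \|P_N u\|_{Z(I)}$.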
\end{remark}
The following property shows us that $Z(I)$ is a weaker norm than $X^1(I)$.
\begin{prop}\label{prop:ZinX}
\[\|v\|_{Z(I)} \lesssim \|v\|_{X^1(I)}.\]
\end{prop}
\begin{proof}
By the definition of $Z(I)$ and the Strichartz type estimates of
Corollary \ref{coro:strichartz} below, we obtain that
\begin{align*}
\sup_{J\subset I, \ |J|\leq 1}
\left( \sum_{N \text{ dyadic}} N^2 \|P_N v\|^4_{L^4(\mathbb{T}^4\times J)} \right)^{\frac{1}{4}}
&\lesssim \sup_{J\subset I, \ |J|\leq 1}
\left( \sum_{N \text{ dyadic}} N^4 \|P_N v\|^4_{X^0(J)} \right)^{\frac{1}{4}}\\
&\lesssim \|v\|_{X^1(I)}.
\end{align*}
\end{proof}
\begin{prop}[Proposition 2.19 in \cite{hadac2009well}]\label{prop:transfer}
Let $T_0 : L^2 \times \cdots \times L^2 \to L^{1}_{loc}$ be an $m$-linear operator.
Assume that for some $1\leq p,\ q \leq \infty$
\begin{equation}
\| T_0 (e^{it\Delta }\phi_1,\cdots, e^{it\Delta }\phi_m)\|_{L^p(\mathbb{R},
L_x^q(\mathbb{T}^d))} \lesssim \prod_{i=1}^m\|\phi_i\|_{L^2(\mathbb{T}^d)}.
\end{equation}
Then, there exists an extension $T : U^p_{\Delta}\times \cdots \times U^p_{\Delta}
\to L^p(\mathbb{R}, L^q(\mathbb{T}^d))$ satisfying
\begin{equation}
\|T(u_1, \cdots, u_m)\|_{L^p(\mathbb{R}, L^q(\mathbb{T}^d))} \lesssim \prod_{i=1}^m
\|u_i\|_{U^p_{\Delta}},
\end{equation}
such that $T(u_1, \cdots , u_m) (t, \cdot) = T_0 (u_1(t), \cdots , u_m(t))(\cdot)$,
a.e.
\end{prop}
\begin{lem}[Strichartz type estimates \cite{bourgain1993fourier}\cite{bourgain2015proof}]
\label{prop:strichartz}
If $p>3$, then
\begin{equation*}
\|P_N e^{it\Delta} f\|_{L^p_{t,x}([-1,1]\times\mathbb{T}^4)} \lesssim_p
N^{2-\frac{6}{p}} \|f\|_{L^2_x}
\end{equation*}
and
\begin{equation*}
\|P_C e^{it\Delta} f\|_{L^p_{t,x}([-1,1]\times\mathbb{T}^4)} \lesssim_p
N^{2-\frac{6}{p}} \|f\|_{L^2_x}
\end{equation*}
where $C$ is a cube of side length N and $f\in L^2(\mathbb{T}^4)$.
\end{lem}
By the transfer principle (Proposition \ref{prop:transfer}) and the Strichartz type estimates
(Lemma \ref{prop:strichartz}), we obtain the following corollary:
\begin{cor}\label{coro:strichartz}
If $p>3$, for any $v\in U^p_\Delta([-1, 1])$,
\[
\|P_N v\|_{L^p([-1,1]\times\mathbb{T}^4)} \lesssim_p N^{2-\frac{6}{p}}
\|v\|_{U^p_\Delta([-1, 1])},
\]
and
\[
\|P_C v\|_{L^p([-1,1]\times\mathbb{T}^4)} \lesssim_p N^{2-\frac{6}{p}}
\|v\|_{U^p_\Delta([-1, 1])},
\]
where $C$ is a cube of side length $N$.
\end{cor}
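As a sample of the numerology used below (and this is the only exponent we actually need): for $p=4$, Corollary \ref{coro:strichartz} gives
\[
\|P_N v\|_{L^4([-1,1]\times\mathbb{T}^4)} \lesssim N^{\frac{1}{2}}\,\|v\|_{U^4_\Delta([-1, 1])},
\]
and the gain $N^{\frac{1}{2}} = N^{2-\frac{6}{4}}$ matches exactly the weight $N^{2} = (N^{\frac{1}{2}})^{4}$ appearing in the definition of the $Z$-norm; this is the estimate underlying the proof of Proposition \ref{prop:ZinX} above.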
\section{Local well-posedness and Stability theory}
In this section, we present large-data local well-posedness and stability
results. Although Herr, Tataru and Tzvetkov's method \cite{herr2014strichartz}
together with Bourgain and Demeter's result \cite{bourgain2015proof} gives the local well-posedness
of (\ref{eq:IVP}), to obtain the stability results we need a refined
nonlinear estimate and a correspondingly refined local well-posedness result.
\begin{definition}[Definition of solutions]
Given an interval $I\subseteq \mathbb{R}$, we call $u\in C(I: H^1(\mathbb{T}^4))$ a strong
solution of (\ref{eq:IVP}) if $u\in X^1(I)$ and $u$ satisfies that for all
$t, s\in I$,
\[
u(t) = e^{i(t-s)\Delta} u(s) - i\mu
\int_s^t e^{i(t-t')\Delta} u(t')|u(t')|^2 dt'.
\]
\end{definition}
First, we need to introduce
\begin{equation}\label{def:Z'}
\|u\|_{Z'(I)} := \|u\|_{Z(I)}^{\frac{3}{4}}\|u\|^\frac{1}{4}_{X^1(I)}.
\end{equation}
\begin{lem}[Bilinear estimates in \cite{herr2014strichartz}]\label{lem:bilinear}
Assume $|I|\leq 1$ and $N_1\geq N_2$. Then we have that
\begin{equation}
\|P_{N_1} u_1 P_{N_2} u_2\|_{L^2_{x,t}(\mathbb{T}^4\times I)}
\lesssim (\frac{N_2}{N_1}+\frac{1}{N_2})^\kappa \|P_{N_1} u_1\|_{Y^0(I)}
\|P_{N_2}u_2\|_{Y^1(I)}
\end{equation}
for some $\kappa >0$.
\end{lem}
\begin{remark}
This bilinear estimate is Proposition 2.8 in \cite{herr2014strichartz}. The proof
of Lemma \ref{lem:bilinear} relies on the $L^p$ estimates in
Lemma \ref{prop:strichartz} (for some $p<4$). In the proof we need not only
the decoupling property in the spatial frequencies, but also
further partitions into strips in order to apply the decoupling property in the time
frequency.
\end{remark}
Let's introduce a refined nonlinear estimate.
\begin{prop}[Refined nonlinear estimate]\label{prop:nonlinear}
For $u_k\in X^1(I)$, $k=1, 2, 3$, and $|I|\leq 1$, we have the estimate
\begin{equation}\label{eq:nonlinear}
\|\prod_{k=1}^3 \widetilde{u_k}\|_{N(I)}\lesssim
\sum_{\{i, j, k\}=\{1, 2, 3\}} \|u_i\|_{X^1(I)}\|u_j\|_{Z'(I)}\|u_k\|_{Z'(I)}
\end{equation}
where $\widetilde{u_k} =u_k$ or $\widetilde{u_k} = \overline{u_k}$
for $k=1, 2, 3$.
In particular, if there exist constants $A, B>0$, such that $u_1 = P_{>A} u_1$,
$u_2 = P_{>A} u_2$ and $u_3 = P_{<B} u_3$, then we obtain that
\begin{equation}\label{eq:nonlinearP}
\|\prod_{k=1}^3 \widetilde{u_k}\|_{N(I)}\lesssim
\|u_1\|_{X^1(I)}\|u_2\|_{Z'(I)}\|u_3\|_{Z'(I)}
+ \|u_2\|_{X^1(I)}\|u_1\|_{Z'(I)}\|u_3\|_{Z'(I)}.
\end{equation}
\end{prop}
\begin{proof}
Suppose $N_0$, $N_1$, $N_2$, $N_3$ are dyadic and
WLOG we assume $N_1\geq N_2 \geq N_3$. By Proposition \ref{prop:dual}, we obtain that
\begin{align*}
&\|\prod_{k=1}^3 \widetilde{u_k}\|_{N(I)} \lesssim
\sup_{\|u_0\|_{Y^{-1}}=1} |\int_{\mathbb{T}^4\times I} \overline{u_0}
\prod_{k=1}^3 \widetilde{u_k} \,dxdt|\\
\leq & \sup_{\|u_0\|_{Y^{-1}}=1} \sum_{N_0, N_1\geq N_2\geq N_3}|
\int_{\mathbb{T}^4\times I} \overline{P_{N_0} u_0}
\prod_{k=1}^3 P_{N_k}\widetilde{u_k} \,dxdt|
\end{align*}
Then we know that $N_1 \sim \max (N_2, N_0)$ by the spatial frequency orthogonality.
There are two cases:
\begin{enumerate}
\item $N_0\sim N_1 \geq N_2 \geq N_3$;
\item $N_0\leq N_2 \sim N_1 \geq N_3$.
\end{enumerate}
\case{1}{$N_0\sim N_1 \geq N_2 \geq N_3$}
By the Cauchy-Schwarz inequality and Lemma \ref{lem:bilinear}, we have that
\begin{equation}\label{3.2}
\begin{split}
& |\int \overline{P_{N_0} u_0} P_{N_1} \widetilde{u_1} P_{N_2} \widetilde{u_2}
P_{N_3} \widetilde{u_3}\, dxdt|
\leq \|P_{N_0} u_0 P_{N_2} u_2\|_{L^2_{x,t}}
\|P_{N_1}u_1 P_{N_3} u_3\|_{L^2_{x,t}}\\
\lesssim& (\frac{N_3}{N_1} + \frac{1}{N_3})^\kappa
(\frac{N_2}{N_0} + \frac{1}{N_2})^\kappa \|P_{N_0} u_0\|_{Y^0(I)}
\|P_{N_1}u_1\|_{Y^0(I)}\|P_{N_2}u_2\|_{X^1(I)}\|P_{N_3}u_3\|_{X^1(I)}
\end{split}
\end{equation}
Assume $\{C_j\}$ is a partition into cubes of side length $N_2$, and $\{C_k\}$ is a
partition into cubes of side length $N_3$. Since the families $\{P_{C_j}P_{N_0} u_0\, P_{N_2} u_2\}_j$
and $\{P_{C_k}P_{N_1}u_1\, P_{N_3}u_3\}_k$ are both almost orthogonal, by
Corollary \ref{coro:strichartz} and the definition of the $Z$ norm we obtain that
\begin{equation}\label{3.3}
\begin{split}
& |\int \overline{P_{N_0} u_0} P_{N_1} \widetilde{u_1} P_{N_2} \widetilde{u_2}
P_{N_3} \widetilde{u_3}\, dxdt|
\leq \|P_{N_0} u_0 P_{N_2} u_2\|_{L^2_{x,t}}
\|P_{N_1}u_1 P_{N_3} u_3\|_{L^2_{x,t}}\\
\lesssim & (\sum_{C_j} \|P_{C_j}P_{N_0} u_0 P_{N_2} u_2\|^2_{L^2_{x,t}})^{\frac{1}{2}}
(\sum_{C_k} \|P_{C_k}P_{N_1}u_1 P_{N_3}u_3 \|^2_{L^2_{x,t}})^{\frac{1}{2}}\\
\lesssim & (\sum_{C_j} \|P_{C_j}P_{N_0} u_0\|^2_{L^4_{x,t}}
\|P_{N_2} u_2\|^2_{L^4_{x,t}})^{\frac{1}{2}}
(\sum_{C_k} \|P_{C_k}P_{N_1}u_1\|_{L^4_{x,t}}^2
\| P_{N_3}u_3 \|^2_{L^4_{x,t}})^{\frac{1}{2}}\\
\lesssim & (\sum_{C_j} \|P_{C_j}P_{N_0} u_0\|^2_{Y^0(I)}
(N_2^{\frac{1}{2}}\|P_{N_2} u_2\|_{L^4_{x,t}})^2)^{\frac{1}{2}}
(\sum_{C_k} \|P_{C_k}P_{N_1} u_1\|^2_{Y^0(I)}
(N_3^{\frac{1}{2}}\|P_{N_3} u_3\|_{L^4_{x,t}})^2)^{\frac{1}{2}}\\
\lesssim& \|P_{N_0}u_0\|_{Y^0(I)}\|P_{N_1}u_1\|_{Y^0(I)}\|P_{N_2}u_2\|_{Z(I)}
\|P_{N_3}u_3\|_{Z(I)}.
\end{split}
\end{equation}
Interpolating (\ref{3.2}) with (\ref{3.3}) we obtain that
\begin{equation}\label{3.4}
\begin{split}
&|\int \overline{P_{N_0} u_0} P_{N_1} \widetilde{u_1} P_{N_2} \widetilde{u_2}
P_{N_3} \widetilde{u_3}\, dxdt|\\
\lesssim&
(\frac{N_3}{N_1} + \frac{1}{N_3})^{\kappa_1}
(\frac{N_2}{N_0} + \frac{1}{N_2})^{\kappa_1}
\|P_{N_0}u_0\|_{Y^{-1}(I)}\|P_{N_1}u_1\|_{X^1(I)}\|P_{N_2}u_2\|_{Z'(I)}
\|P_{N_3}u_3\|_{Z'(I)}.
\end{split}
\end{equation}
Then we sum (\ref{3.4}) over all $N_0\sim N_1\geq N_2\geq N_3$,
\begin{align*}
&\sum_{N_0\sim N_1\geq N_2\geq N_3}
(\frac{N_3}{N_1} + \frac{1}{N_3})^{\kappa_1}
(\frac{N_2}{N_0} + \frac{1}{N_2})^{\kappa_1}
\|P_{N_0}u_0\|_{Y^{-1}(I)}\|P_{N_1}u_1\|_{X^1(I)}\|P_{N_2}u_2\|_{Z'(I)}
\|P_{N_3}u_3\|_{Z'(I)}\\
\lesssim & \|u_0\|_{Y^{-1}(I)}\|u_1\|_{X^1(I)}\|u_2\|_{Z'(I)}\|u_3\|_{Z'(I)}.
\end{align*}
\case{2}{$N_0\leq N_2\sim N_1\geq N_3$}
Similarly we have that
\begin{equation}\label{3.5}
\begin{split}
&|\int \overline{P_{N_0} u_0} P_{N_1} \widetilde{u_1} P_{N_2} \widetilde{u_2}
P_{N_3} \widetilde{u_3}\, dxdt|\\
\lesssim& (\frac{N_3}{N_1} + \frac{1}{N_3})^\kappa
(\frac{N_0}{N_2} + \frac{1}{N_0})^\kappa \|P_{N_0} u_0\|_{Y^0(I)}
\|P_{N_1}u_1\|_{Y^0(I)}\|P_{N_2}u_2\|_{X^1(I)}\|P_{N_3}u_3\|_{X^1(I)}.
\end{split}
\end{equation}
Similarly to (\ref{3.3}), we obtain that
\begin{equation}\label{3.6}
\begin{split}
&|\int \overline{P_{N_0} u_0} P_{N_1} \widetilde{u_1} P_{N_2} \widetilde{u_2}
P_{N_3} \widetilde{u_3}\, dxdt|\\
\lesssim& \|P_{N_0} u_0\|_{Y^0(I)}
\|P_{N_1}u_1\|_{Y^0(I)}\|P_{N_2}u_2\|_{Z(I)}\|P_{N_3}u_3\|_{Z(I)}.
\end{split}
\end{equation}
We interpolate (\ref{3.5}) with (\ref{3.6}) and sum over $N_0\leq N_2\sim N_1\geq N_3$. Then we have that
\begin{align*}
&\sum_{N_0\leq N_2\sim N_1\geq N_3}
|\int \overline{P_{N_0} u_0} P_{N_1} \widetilde{u_1} P_{N_2} \widetilde{u_2}
P_{N_3} \widetilde{u_3}\, dxdt|\\
\lesssim & \|P_{N_0} u_0\|_{Y^{-1}(I)}
\|P_{N_1}u_1\|_{X^1(I)}\|P_{N_2}u_2\|_{Z'(I)}\|P_{N_3}u_3\|_{Z'(I)}.
\end{align*}
Next, we summarize these two cases; treating the remaining orderings
$N_1\geq N_3\geq N_2$, $N_2\geq N_1\geq N_3$, $N_2\geq N_3\geq N_1$,
$N_3\geq N_1\geq N_2$, and $N_3\geq N_2\geq N_1$ similarly,
we get the desired estimate (\ref{eq:nonlinear}).
In particular, if there exist constants $A, B>0$ such that $u_1 = P_{>A} u_1$,
$u_2 = P_{>A} u_2$ and $u_3 = P_{<B} u_3$, then we only need to consider the sums over
$N_1\geq N_2\gtrsim N_3$ and
$N_2\geq N_1\gtrsim N_3$, so we get the estimate (\ref{eq:nonlinearP}).
\end{proof}
\begin{prop}[Local Wellposedness]\label{prop:lwp}
Assume that $E>0$ is fixed. There exists $\delta_0 = \delta_0(E)$ such that if
\[
\|e^{it\Delta} u_0\|_{Z'(I)} < \delta
\]
for some $\delta \leq \delta_0$, some interval $0\in I$ with $|I|\leq 1$ and
some function $u_0\in H^1(\mathbb{T}^4)$ satisfying $\|u_0\|_{H^1}\leq E$, then
there exists a unique strong solution to (\ref{eq:NLS}) $u\in X^1(I)$
such that $u(0) = u_0$. Besides we also have
\begin{equation}\label{ineq:smoothing}
\|u-e^{it\Delta}u_0\|_{X^1(I)}\leq \delta^{\frac{5}{3}}.
\end{equation}
\end{prop}
\begin{proof}
First, we consider the set
\[
S = \{ u\in X^1(I): \|u\|_{X^1(I)}\leq 2E,\qquad \|u\|_{Z'(I)}\leq a\},
\]
and the mapping
\[
\Phi(v) = e^{it\Delta} u_0 -i \mu\int_0^t e^{i(t-s)\Delta} v(s)|v(s)|^2\, ds.
\]
For $u$, $v\in S$, by Proposition \ref{prop:nonlinear} there exists
a constant $C>0$ such that
\begin{align*}
&\|\Phi(u) - \Phi(v)\|_{X^1(I)}\\
\leq& C \left( \|u\|_{X^1(I)}+\|v\|_{X^1(I)}\right)
\left( \|u\|_{Z'(I)}+\|v\|_{Z'(I)}\right) \|u-v\|_{X^1(I)}\\
\leq&C Ea\|u-v\|_{X^1(I)}
\end{align*}
Similarly, using Proposition \ref{EstimateFreeSolution} and nonlinear estimate
Proposition \ref{prop:nonlinear}, we also obtain that
\begin{align*}
\|\Phi(u)\|_{X^1(I)} & \leq \|\Phi(0)\|_{X^1(I)} + \|\Phi(u)-\Phi(0)\|_{X^1(I)}\\
&\leq \|u_0\|_{H^1} + C Ea^2
\end{align*}
and
\begin{align*}
\|\Phi(u)\|_{Z'(I)} & \leq \|\Phi(0)\|_{Z'(I)} + \|\Phi(u)-\Phi(0)\|_{Z'(I)}\\
&\leq \delta + C Ea^2.
\end{align*}
Now, we choose $a=2\delta$ and let $\delta_0 =\delta_0(E)$ be small enough.
Then $\Phi$ is a contraction on $S$, so it has a fixed point $u$.
It is then easy to check (\ref{ineq:smoothing}) and the uniqueness in $X^1(I)$.
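One admissible choice of $\delta_0$ (stated here only as an illustration, with $C$ the constant appearing in the estimates above) is any $\delta_0=\delta_0(E)$ small enough that
\[
2CE\delta_0 \leq \tfrac{1}{2}, \qquad 4C\delta_0^{2}\leq 1, \qquad 8CE\,\delta_0^{\frac{1}{3}}\leq 1:
\]
the first condition makes $\Phi$ a contraction on $S$ (since $CEa = 2CE\delta$) and gives $\|\Phi(u)\|_{Z'(I)}\leq \delta + CEa^{2} = \delta + 4CE\delta^{2}\leq 2\delta = a$; the second ensures $\|\Phi(u)\|_{X^1(I)}\leq \|u_0\|_{H^1} + CEa^{2}\leq E + 4CE\delta^{2}\leq 2E$; and the last one gives (\ref{ineq:smoothing}), because $\|u-e^{it\Delta}u_0\|_{X^1(I)} = \|\Phi(u)-\Phi(0)\|_{X^1(I)}\leq CEa^{2} = 4CE\delta^{2}\leq \delta^{\frac{5}{3}}$.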
\end{proof}
Following a similar idea of Herr-Tataru-Tzvetkov \cite{herr2011global}, we can easily prove the global well-posedness result with
small initial data by using Proposition \ref{prop:lwp}.
\begin{prop}[Small data global wellposedness]\label{prop:smallDataGWP}
If $\|\phi\|_{H^1(\mathbb{T}^4)} =\delta\leq \delta_0$, then the unique strong solution
with initial data $\phi$ is global and satisfies
\[\|u\|_{X^1([-1, 1])} \leq 2\delta\] and moreover
\[\|u-e^{it\Delta}\phi\|_{X^1([-1, 1])}\lesssim \delta^2.\]
\end{prop}
\begin{lem}[$Z$-norm controls the global existence]\label{lem:cgwp}
Assume that $I\subseteq \mathbb{R}$ is a bounded open interval.
\begin{enumerate}
\item Assume that $E$ is a nonnegative finite number, that $u$ is
a strong solution of (\ref{eq:NLS}) on $I$, and that
\[
\|u\|_{L^\infty_t(I, H^1)}\leq E.
\]
Then, if
\[
\|u\|_{Z(I)} <+\infty
\]
there exists an open interval $J$ with $\bar{I}\subset J$ such that $u$
can be extended to a strong solution of (\ref{eq:NLS}) on $J$, besides
\[
\|u\|_{X^1(I)}\leq C(E, \|u\|_{Z(I)}).
\]
\item (GWP with a priori bound) Assume that $C$ is some positive finite number and that we have the a priori bound $\|u\|_{Z(I)}<C$
for any solution $u$ of
(\ref{eq:NLS}) on the interval $I$. Then the IVP (\ref{eq:IVP}) is well-posed
on $I$.
(In particular, if $u$ blows up in finite time, then $u$ blows up in the
$Z$-norm.)
\end{enumerate}
\end{lem}
\begin{proof}
Consider the case $I = (0, T)$.
\begin{enumerate}
\item Since $\|u\|_{Z(I)}<+\infty$, we may choose $T_1\geq T-1$ such that
$\|u\|_{Z((T_1, T))}\leq \varepsilon$; the claim then follows from a continuity argument applied to
$h(s) = \|e^{i(t-T_1)\Delta}u(T_1)\|_{Z'((T_1, T_1+s))}$, together with Proposition \ref{prop:lwp}.
\item This follows by combining (1) with Proposition \ref{prop:lwp}.
\end{enumerate}
\end{proof}
\begin{remark}
This proof determines the ratio of the $X^1$ norm and the $Z$ norm in the definition
(\ref{def:Z'}) of the $Z'$ norm; the exponent of the $Z$ norm in the $Z'$ norm can be any number strictly between
$\frac{1}{2}$ and $1$.
\end{remark}
\begin{prop}[Stability]\label{prop:stability}
Assume $I$ is an open bounded interval, $\mu \in [-1, 1]$, and $\tilde{u}
\in X^1(I)$ satisfies the approximate Schr\"{o}dinger equation
\begin{equation}\label{eq:aNLS}
(i\partial_t + \Delta) \tilde{u} =\mu \tilde{u}|\tilde{u}|^2 + e,\qquad
\text{on }\mathbb{T}^4\times I.
\end{equation}
Assume in addition that
\begin{equation}
\|\tilde{u}\|_{Z(I)} + \|\tilde{u}\|_{L^\infty_t(I, H^1(\mathbb{T}^4))}\leq M,
\end{equation}
for some $M\in [1, \infty)$. Assume $t_0\in I$ and $u_0\in H^1(\mathbb{T}^4)$ is
such that the smallness condition
\begin{equation}\label{ineq:err}
\|u_0 - \tilde{u}(t_0)\|_{H^1(\mathbb{T}^4)} + \|e\|_{N(I)}\leq \varepsilon
\end{equation}
holds for some $0<\varepsilon<\varepsilon_1$, where $\varepsilon_1 =\varepsilon_1(M)\in(0,1]$
is a small constant.
Then there exists a strong solution $u\in X^1(I)$ of the NLS
\[
(i\partial_t + \Delta) u =\mu u|u|^2,
\]
such that $u(t_0) =u_0$ and
\begin{equation}\label{3.11}
\begin{split}
\|u\|_{X^1(I)} +\|\tilde{u}\|_{X^1(I)}&\leq C(M),\\
\|u-\tilde{u}\|_{X^1(I)}&\leq C(M)\varepsilon.
\end{split}
\end{equation}
\end{prop}
\begin{proof}
First, we prove a short-time stability result, following an argument similar to the proof
of Proposition \ref{prop:lwp}.
Then, using Lemma \ref{lem:cgwp}, we extend it to the entire time interval.
\end{proof}
\section{Euclidean profiles}
In this section, we introduce the Euclidean profiles, which are linear and nonlinear Schr\"{o}dinger solutions on $\mathbb{T}^4$ concentrated at a point. The Euclidean profiles behave like solutions in the Euclidean space $\mathbb{R}^4$, and hence they enjoy similar well-posedness and scattering properties,
obtained by using the theory for the NLS in the Euclidean space $\mathbb{R}^4$, proven by
Ryckman and Vi{\c{s}}an \cite{visan2007global}\cite{vicsan2011global} (the defocusing case) and Dodson \cite{DodsonFocusing} (the focusing case), as a black box. This is a four-dimensional analogue of Section 4 in \cite{ionescu2012energy}, and we follow the argument there closely.
We fix a spherically symmetric function $\eta \in C_0^{\infty}(\mathbb{R}^4)$ supported
in the ball of radius 2 and equal to 1 in the ball of radius 1.
Given $\phi\in \dot{H}^1(\mathbb{R}^4)$ and a real number $N\geq 1$ we define
\begin{equation}\label{def:euclideanProfile}
\begin{split}
Q_N \phi \in H^1(\mathbb{R}^4),&\qquad (Q_N\phi)(x) =\eta(x/N^{\frac{1}{2}})\phi(x),\\
\phi_N \in H^1(\mathbb{R}^4),&\qquad \phi_N (x) = N(Q_N \phi)(Nx),\\
f_N \in H^1(\mathbb{T}^4),&\qquad f_N(y) =\phi_N(\Psi^{-1}(y)),
\end{split}
\end{equation}
where $\Psi :\{x\in \mathbb{R}^4: |x|<1 \} \to O_0\subseteq \mathbb{T}^4, \quad \Psi(x)=x$.
The cutoff function $\eta(\frac{x}{N^{1/2}})$
is used to localize the profile to a neighborhood of a point, and the exponent
$1/2$ can actually be chosen to be any number between $1/2$ and $1$.
Thus $Q_N \phi$ is a compactly supported modification of the profile $\phi$,
$\phi_N$ is an $\dot{H}^1$-invariant rescaling of $Q_N \phi$, and $f_N$ is
the function obtained by transferring $\phi_N$ to a neighborhood of $0$ in $\mathbb{T}^4$.
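One standard consequence of these definitions, used implicitly below, is that $Q_N\phi \to \phi$ in $\dot{H}^1(\mathbb{R}^4)$ as $N\to\infty$. Here is a sketch, writing $\eta_N(x) := \eta(x/N^{1/2})$ for brevity:
\[
\|\nabla(Q_N\phi - \phi)\|_{L^2(\mathbb{R}^4)}
\leq \|(\eta_N - 1)\nabla\phi\|_{L^2(\mathbb{R}^4)} + N^{-\frac{1}{2}}\|(\nabla\eta)(x/N^{\frac{1}{2}})\,\phi\|_{L^2(\mathbb{R}^4)},
\]
where the first term tends to $0$ by dominated convergence, and the second term is $\lesssim \|\,|x|^{-1}\phi\,\|_{L^2(N^{1/2}\leq|x|\leq 2N^{1/2})}\to 0$ by Hardy's inequality and dominated convergence, since $(\nabla\eta)(x/N^{1/2})$ is supported in $\{N^{1/2}\leq |x|\leq 2N^{1/2}\}$.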
\begin{thm}[GWP of the defocusing cubic NLS in $\mathbb{R}^4$ \cite{visan2007global}\cite{vicsan2011global}]
\label{thm:GWPinR}
Assume $\phi \in \dot{H}^1(\mathbb{R}^4)$. Then there is a unique global solution
$v\in C(\mathbb{R}: \dot{H}^1(\mathbb{R}^4))$ of the initial-value problem
\begin{equation}
(i\partial_t +\Delta)v = v|v|^2, \qquad v(0) = \phi,
\end{equation}
and
\begin{equation}
\| \nabla_{\mathbb{R}^4} v\|_{(L_t^\infty L_x^2\cap L^2_tL_x^4)(\mathbb{R}\times\mathbb{R}^4)}
\leq C(E_{\mathbb{R}^4}(\phi)) < +\infty.
\end{equation}
Moreover, this solution scatters in the sense that there exists $\phi^{\pm\infty}
\in \dot{H}^1(\mathbb{R}^4)$, such that
\begin{equation}
\|v(t) - e^{it\Delta}\phi^{\pm\infty}\|_{\dot{H}^1(\mathbb{R}^4)}\to 0,\text{ as }
t\to \pm\infty.
\end{equation}
Besides, if $\phi\in H^5(\mathbb{R}^4)$ then $v\in C(\mathbb{R}: H^5(\mathbb{R}^4))$ and
\begin{equation}
\sup_{t\in\mathbb{R}} \|v(t)\|_{H^5(\mathbb{R}^4)} \lesssim_{\|\phi\|_{H^5(\mathbb{R}^4)}} 1.
\end{equation}
\end{thm}
\begin{thm}[GWP of the focusing cubic NLS in $\mathbb{R}^4$ \cite{DodsonFocusing}]\label{thm:GWPfocusing}
Assume $\phi \in \dot{H}^1(\mathbb{R}^4)$. Under the assumption that
\[
\sup_{t\in \text{lifespan of }v}\|v(t)\|_{\dot{H}^1(\mathbb{R}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)},
\]
there is a unique global solution
$v\in C(\mathbb{R}: \dot{H}^1(\mathbb{R}^4))$ of the initial-value problem
\begin{equation}\label{eq:ivpfocusing}
(i\partial_t +\Delta)v = -v|v|^2, \qquad v(0) = \phi,
\end{equation}
and
\begin{equation}
\| \nabla_{\mathbb{R}^4} v\|_{(L_t^\infty L_x^2\cap L^2_tL_x^4)(\mathbb{R}\times\mathbb{R}^4)}
\leq C(\|\phi\|_{\dot{H}^1(\mathbb{R}^4)}, E_{\mathbb{R}^4}(\phi))<+\infty.
\end{equation}
Moreover, this solution scatters in the sense that there exists $\phi^{\pm\infty}
\in \dot{H}^1(\mathbb{R}^4)$, such that
\begin{equation}
\|v(t) - e^{it\Delta}\phi^{\pm\infty}\|_{\dot{H}^1(\mathbb{R}^4)}\to 0,\text{ as }
t\to \pm\infty.
\end{equation}
Besides, if $\phi\in H^5(\mathbb{R}^4)$ then $v\in C(\mathbb{R}: H^5(\mathbb{R}^4))$ and
\begin{equation}
\sup_{t\in\mathbb{R}} \|v(t)\|_{H^5(\mathbb{R}^4)} \lesssim_{\|\phi\|_{H^5(\mathbb{R}^4)}} 1.
\end{equation}
\end{thm}
\begin{remark}[Persistence of regularity]
Consider $\phi \in H^5(\mathbb{R}^4)$, and $v\in C(\mathbb{R}: \dot{H}^1(\mathbb{R}^4))$ is the solution of (\ref{eq:NLS}) with $v(0) = \phi$ and satisfying
\[
\| \nabla_{\mathbb{R}^4}v \|_{(L_t^\infty L_x^2\cap L^2_tL_x^4)(\mathbb{R}\times\mathbb{R}^4)}
< +\infty.\]
Then we can find a finite partition $\{I_k\}_{k=1}^K$ of $\mathbb{R}$, with $I_k =
[t_{k-1},t_k)$ and $t_K =\infty$,
such that $\| \nabla_{\mathbb{R}^4} v\|_{L_t^4L_x^{8/3}(I_k)}< \frac{1}{2}$ for each $k$. Then, for each $k$,
\begin{align*}
&\|v(t)\|_{L_t^\infty(I_k: H^5(\mathbb{R}^4))}\leq \|e^{i(t-t_{k-1})\Delta}v(t_{k-1})\|_{H^5}
+ \|\langle \nabla \rangle^5 |v(t)|^2v(t)\|_{L^2_tL_x^{4/3}(I_k)}\\
\leq & \|v(t_{k-1})\|_{H^5_x}+\|\langle \nabla\rangle^5 v\|_{L^\infty_tL^2_x(I_k)}
\|v(t)\|^2_{L^4_tL_x^8(I_k)}\\
\leq & \|v(t_{k-1})\|_{H^5} + \frac{1}{4} \|v\|_{L^\infty_t(I_k: H^5(\mathbb{R}^4))}
\end{align*}
which implies $\|v(t)\|_{L^\infty_t(I_k: H^5(\mathbb{R}^4))} \leq \frac{4}{3}
\|v(t_{k-1})\|_{H^5}$ for each $1\leq k \leq K$, so
$\|v(t)\|_{L^\infty_t(\mathbb{R}: H^5_x(\mathbb{R}^4))}<\infty$.
\end{remark}
\begin{thm}\label{thm:4.2}
Assume $T_0\in (0,\infty)$, and
$\mu \in\{-1, 0, 1\}$ are given, and define $f_N$ as (\ref{def:euclideanProfile})
above.
Suppose that
\[\|\phi\|_{\dot{H}^1(\mathbb{R}^4)}< +\infty \qquad \text{when }\mu \in \{0, 1\},\]
or, when $\mu = -1$, assume in addition that any solution $v$ of (\ref{eq:ivpfocusing}) satisfies
\[\sup_{t\in \text{lifespan of }v}\|v(t)\|_{\dot{H}^1(\mathbb{R}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)}.\]
Then the following conclusions hold:
\begin{enumerate}
\item There is $N_0 = N_0(\phi, T_0)$ sufficiently large such that
for any $N\geq N_0$ there is a unique solution $U_N\in C((-T_0N^{-2},T_0N^{-2}):H^1(\mathbb{T}^4))$
of the initial value problem
\begin{equation}
(i\partial_t +\Delta)U_N = \mu U_N|U_N|^2, \qquad U_N(0)=f_N.
\end{equation}
Moreover, for any $N\geq N_0$,
\begin{equation}
\|U_N\|_{X^1(-T_0N^{-2},T_0N^{-2})}\lesssim_{E_{\mathbb{R}^4}(\phi),\,\|\phi\|_{\dot{H}^1(\mathbb{R}^4)}} 1.
\end{equation}
\item Assume $\varepsilon_1\in (0,1]$ is sufficiently small (depending only on $E_{\mathbb{R}^4}(\phi)$),
$\phi'\in H^5(\mathbb{R}^4)$, and $\|\phi -\phi'\|_{\dot{H}^1(\mathbb{R}^4)}\leq \varepsilon_1$.
Let $v'\in C(\mathbb{R}: H^5(\mathbb{R}^4))$ denote the solution of the initial value problem
\begin{equation}
(i\partial_t +\Delta)v' = \mu v'|v'|^2, \qquad v'(0)=\phi'.
\end{equation}
For $R$, $N\geq 1$, we define
\begin{equation}
\begin{split}
v_R'(x, t) =\eta(x/R) v'(x, t), &\qquad (x, t)\in \mathbb{R}^4\times(-T_0, T_0)\\
v'_{R,N} (x, t) =N v_R'(Nx, N^2t), &\qquad (x, t)\in \mathbb{R}^4\times(-T_0N^{-2}, T_0N^{-2})\\
V_{R, N}(y, t) = v'_{R,N}(\Psi^{-1}(y), t), &\qquad (y, t)\in
\mathbb{T}^4\times(-T_0N^{-2}, T_0N^{-2}).
\end{split}
\end{equation}
Then there is $R_0\geq 1$ (depending on $T_0$, $\phi'$ and $\varepsilon_1$) such that, for any
$R\geq R_0$, we obtain that
\begin{equation}
\limsup_{N\to\infty} \|U_N-V_{R,N}\|_{X^1(-T_0N^{-2}, T_0N^{-2})}
\lesssim_{E_{\mathbb{R}^4}(\phi),\,\|\phi\|_{\dot{H}^1(\mathbb{R}^4)}}\varepsilon_1.
\end{equation}
\end{enumerate}
\end{thm}
$V_{R,N}$ can be thought of as first solving the NLS and then cutting off and rescaling, while
$U_N$ corresponds to first cutting off and rescaling and then solving the NLS.
\begin{proof}
We prove Part (1) and Part (2) together, using Proposition \ref{prop:stability} (stability).
By Theorem \ref{thm:GWPinR} and Theorem \ref{thm:GWPfocusing}, we know that $v'$ exists globally and satisfies
\begin{equation*}
\| \nabla_{\mathbb{R}^4} v' \|_{(L_t^\infty L_x^2\cap L^2_tL_x^4)(\mathbb{R}\times\mathbb{R}^4)}
\lesssim 1,
\end{equation*}
and
\begin{equation}\label{eq:scatteringsmooth}
\sup_{t\in\mathbb{R}} \|v'(t)\|_{H^5(\mathbb{R}^4)}\lesssim_{\|\phi'\|_{H^5(\mathbb{R}^4)}} 1.
\end{equation}
Let's consider $v_R'(x, t) = \eta(x/R) v'(x,t)$.
\begin{align*}
&(i\partial_t + \Delta_{\mathbb{R}^4})v_R' =(i\partial_t +\Delta_{\mathbb{R}^4})
(\eta(x/R)v'(x,t))\\
=&\eta(x/R)(i\partial_t + \Delta_{\mathbb{R}^4})v'(x,t) + R^{-2} v'(x,t)
(\Delta_{\mathbb{R}^4}\eta)(x/R)+2R^{-1}\sum_{j=1}^4\partial_j v'(x,t)\partial_j\eta(x/R).
\end{align*}
which implies
\begin{equation*}
(i\partial_t + \Delta_{\mathbb{R}^4})v_R' = \mu |v_R'|^2v_R' + e_R(x,t),
\end{equation*}
where $e_R(x,t) = \mu (\eta(x/R)-\eta^3(x/R))v'|v'|^2 +
R^{-2} v'(x,t)(\Delta_{\mathbb{R}^4}\eta)(x/R)+2R^{-1}
\sum_{j=1}^4\partial_j v'(x,t)\partial_j\eta(x/R).$
After scaling, we get
\begin{equation*}
(i\partial_t + \Delta_{\mathbb{R}^4})v'_{R,N} = \mu |v_{R,N}'|^2v_{R,N}' +
e_{R,N}(x,t),
\end{equation*}
where $e_{R,N}(x,t) = N^3 e_R (Nx, N^2t)$.
With $V_{R,N}(y,t) = v'_{R,N}(\Psi^{-1}(y), t)$ and taking $N\geq 10R$,
we obtain that
\begin{equation}\label{4.3}
(i\partial_t + \Delta)V_{R,N}(y,t) = \mu |V_{R,N}|^2V_{R,N}
+E_{R,N}(y,t),
\end{equation}
where $E_{R,N}(y,t) = e_{R,N}(\Psi^{-1}(y),t).$
By Proposition \ref{prop:stability}, we need the following conditions:
\begin{enumerate}
\item $\|V_{R,N}\|_{L_t^\infty([-T_0N^{-2},T_0N^{-2}]: H^1(\mathbb{T}^4))}
+\|V_{R,N}\|_{Z([-T_0N^{-2},T_0N^{-2}])}\leq M$;
\item $\|f_N -V_{R,N}(0)\|_{H^1(\mathbb{T}^4)}\leq \varepsilon$;
\item $\|E_{R,N}\|_{N([-T_0N^{-2},T_0N^{-2}])}\leq \varepsilon$.
\end{enumerate}
We now verify all three conditions above.
\case{1}{$\|V_{R,N}\|_{L_t^\infty([-T_0N^{-2},T_0N^{-2}]: H^1(\mathbb{T}^4))}
+\|V_{R,N}\|_{Z([-T_0N^{-2},T_0N^{-2}])}\leq M$.}
Since $v'(x,t)$ globally exists, $V_{R,N}(y,t)$ also globally exists.
Given $T_0\in (0, \infty)$,
\begin{align*}
&\sup_{t\in[-T_0N^{-2},T_0N^{-2}]}\|V_{R,N}(t)\|_{H^1(\mathbb{T}^4)}
\leq \sup_{t\in[-T_0N^{-2},T_0N^{-2}]}\|v'_{R,N}(t)\|_{H^1(\mathbb{R}^4)}\\
=&\sup_{t\in[-T_0N^{-2},T_0N^{-2}]}\|Nv'_R(Nx, N^2t)\|_{H^1(\mathbb{R}^4)}\\
=&\sup_{t\in[-T_0N^{-2},T_0N^{-2}]}\frac{1}{N} \|v'_R(N^2t)\|_{L^2(\mathbb{R}^4)}
+\|v'_R(N^2t)\|_{\dot{H}^1(\mathbb{R}^4)}\\
\leq& \sup_{t\in[-T_0, T_0]} \|v'_R\|_{H^1(\mathbb{R}^4)} =\sup_{t\in[-T_0,T_0]}
\|\eta(x/R)v'(x,t)\|_{H^1(\mathbb{R}^4)}\\
\leq& \sup_{t\in[-T_0, T_0]}\|\eta(x/R)v'(x,t)\|_{L^2(\mathbb{R}^4)}+
\|\nabla\eta(x/R)v'(x,t)\|_{L^2(\mathbb{R}^4)}+\|\eta(x/R)\nabla v'(x,t)\|_{L^2(\mathbb{R}^4)}\\
\leq & 2\sup_{t\in[-T_0, T_0]}\|v'(t)\|_{H^1(\mathbb{R}^4)}\lesssim_{\|\phi'\|_{H^5(\mathbb{R}^4)}} 1, \qquad\text{by (\ref{eq:scatteringsmooth})}.
\end{align*}
By Littlewood-Paley theorem and Sobolev embedding, we obtain that
\begin{align*}
&\|V_{R,N}\|_{Z([-T_0N^{-2},T_0N^{-2}])}=\sup_{J\subset[-T_0N^{-2},T_0N^{-2}]}
(\sum_{M \text{ dyadic}} M^2\|P_M V_{R,N}\|^4_{L^4(J\times\mathbb{T}^4)})^{\frac{1}{4}}\\
\lesssim&\sup_{J\subset[-T_0N^{-2},T_0N^{-2}]}
\|(\sum_M |\langle 1-\Delta \rangle^{\frac{1}{4}}
P_M V_{R,N}|^4)^{\frac{1}{4}}\|_{L^4(J\times \mathbb{T}^4)}\\
\leq& \sup_{J\subset[-T_0N^{-2},T_0N^{-2}]}
\|(\sum_M |\langle 1-\Delta \rangle^{\frac{1}{4}}
P_M V_{R,N}|^2)^{\frac{1}{2}}\|_{L^4(J\times \mathbb{T}^4)}\\
\lesssim& \sup_{J\subset[-T_0N^{-2},T_0N^{-2}]}
\|\langle 1-\Delta \rangle^{\frac{1}{4}} V_{R,N}\|_{L^4(J\times\mathbb{T}^4)}\\
\leq& \sup_{J\subset[-T_0N^{-2},T_0N^{-2}]}
\|\langle 1-\Delta \rangle^{\frac{1}{2}} V_{R,N}
\|_{L^4_t(J)L^\frac{8}{3}_x(\mathbb{T}^4)}\\
\lesssim& \||v'_{R,N}| +|\nabla_{\mathbb{R}^4} v'_{R,N}|\|_{L^4_tL_x^{\frac{8}{3}}
([-T_0N^{-2},T_0N^{-2}]\times\mathbb{R}^4)}\\
\lesssim& \|v'_R\|_{L^4_tL_x^{\frac{8}{3}}([-T_0,T_0]\times\mathbb{R}^4)}+
\||\nabla_{\mathbb{R}^4}v'_{R}|\|_{L^4_tL_x^{\frac{8}{3}}([-T_0,T_0]\times\mathbb{R}^4)}.
\end{align*}
Since
$\|v'_R\|_{L^4_tL_x^{\frac{8}{3}}([-T_0,T_0]\times\mathbb{R}^4)}+
\||\nabla_{\mathbb{R}^4}v'_{R}|\|_{L^4_tL_x^{\frac{8}{3}}([-T_0,T_0]\times\mathbb{R}^4)}\lesssim \sup_{t} \|v'(t)\|_{H^5}$, by (\ref{eq:scatteringsmooth}) we obtain $\|V_{R,N}\|_{Z([-T_0N^{-2},T_0N^{-2}])}\lesssim_{\|\phi'\|_{H^5(\mathbb{R}^4)}} 1.$
\case{2}{$\|f_N -V_{R,N}(0)\|_{H^1(\mathbb{T}^4)}\leq \varepsilon$.}
By H\"{o}lder inequality, we obtain that
\begin{align*}
\|f_N - V_{R,N}(0)\|_{H^1(\mathbb{T}^4)}&\leq \|\phi_N(\Psi^{-1}(y))-
\phi'_{R,N}(\Psi^{-1}(y))\|_{\dot{H}^1(\mathbb{T}^4)}\\
&\leq \|\phi_N -\phi'_{R,N}\|_{\dot{H}^1(\mathbb{R}^4)}
=\|Q_N\phi -\phi'_R\|_{\dot{H}^1(\mathbb{R}^4)}\\
&= \|\eta(\frac{x}{N^{\frac{1}{2}}})\phi(x) -
\eta(\frac{x}{N^{\frac{1}{2}}})\phi'(x)\|_{\dot{H}^1(\mathbb{R}^4)}\\
&\leq \|\eta(\frac{x}{N^{\frac{1}{2}}})\phi(x)-\phi(x)\|_{\dot{H}^1(\mathbb{R}^4)}
+\|\phi -\phi'\|_{\dot{H}^1(\mathbb{R}^4)}
+\|\eta(\frac{x}{N^{\frac{1}{2}}})\phi'(x)-\phi'(x)\|_{\dot{H}^1(\mathbb{R}^4)}.
\end{align*}
With $N\geq 10 R$, and $R> R_0$, $R_0$ large enough, we have that
\[
\|f_N -V_{R,N}(0)\|_{H^1(\mathbb{T}^4)}\leq 2\varepsilon_1.
\]
\case{3}{$\|E_{R,N}\|_{N([-T_0N^{-2},T_0N^{-2}])}\leq \varepsilon$.}
Next, by Proposition \ref{prop:dual} and scaling invariance, we obtain that
\begin{align*}
\|E_{R,N}\|_{N([-T_0N^{-2},T_0N^{-2}])}
&= \|\int_{0}^t e^{i(t-s)\Delta} E_{R,N}(s)\, ds\|_{X^1([-T_0N^{-2},T_0N^{-2}])}\\
&\leq \sup_{\|u_0\|_{Y^{-1}}=1}
\left| \int_{\mathbb{T}^4\times[-T_0N^{-2},T_0N^{-2}]} \overline{u_0}\cdot E_{R,N}
\, dxdt\right|\\
&\leq \sup_{\|u_0\|_{Y^{-1}}=1} \||\nabla|^{-1}u_0\|_{L_t^\infty L_x^2}
\||\nabla| E_{R,N}\|_{L_t^1L_x^2}\\
&\leq \sup_{\|u_0\|_{Y^{-1}}=1}
\|u_0\|_{Y^{-1}} \||\nabla|E_{R,N}\|_{L^1_tL^2_x([-T_0N^{-2},T_0N^{-2}]
\times\mathbb{T}^4)}\\
&\leq \|\nabla_{\mathbb{R}^4}\,e_{R,N}\|_{L^1_tL^2_x([-T_0N^{-2},T_0N^{-2}]\times\mathbb{R}^4)}\\
&=\|\nabla_{\mathbb{R}^4}\, e_R\|_{L^1_tL^2_x([-T_0, T_0]\times\mathbb{R}^4)}.
\end{align*}
\begin{align*}
|\nabla_{\mathbb{R}^4}\, e_R(x,t)|&=\Big| \nabla_{\mathbb{R}^4}\Big(\mu (\eta(x/R)-\eta^3(x/R))
v'(x,t)|v'(x,t)|^2\\
&\qquad + R^{-2} v'(x,t)(\Delta_{\mathbb{R}^4} \eta)(\tfrac{x}{R}) + 2R^{-1}\sum_{j=1}^4
\partial_j v'(x,t)\,(\partial_j \eta)(x/R)\Big)\Big|\\
&\lesssim |\nabla_{\mathbb{R}^4}(\eta(\tfrac{x}{R})-\eta(\tfrac{x}{R})^3)v'(x,t)|v'(x,t)|^2|
+3|(\eta(\frac{x}{R})-\eta(\frac{x}{R})^3)\nabla_{\mathbb{R}^4}v'(x,t)|v'(x,t)|^2|\\
+& R^{-3}|v'(x,t)\nabla_{\mathbb{R}^4} \Delta_{\mathbb{R}^4} \eta(\frac{x}{R})|
+ R^{-2}|\nabla_{\mathbb{R}^4}v'(x,t)(\Delta_{\mathbb{R}^4}\eta)(\frac{x}{R})|
+ R^{-1}|\Delta_{\mathbb{R}^4}v'(x,t)\nabla_{\mathbb{R}^4}\eta(\frac{x}{R})|\\
&\lesssim_{\|\phi'\|_{H^5(\mathbb{R}^4)}} \mathds{1}_{[R, 2R]}(|x|)
\left( |v'(x,t)| +|\nabla_{\mathbb{R}^4} v'(x,t)| \right)+\frac{1}{R}
\left( |\langle \nabla_{\mathbb{R}^4}\rangle^2 v'(x,t)| \right).
\end{align*}
Since $\|\nabla_{\mathbb{R}^4}^2 v'(x,t)\|_{L^\infty_x}\lesssim_{\|\phi'\|_{H^5}} 1$,
$\|\nabla_{\mathbb{R}^4} v'(x,t)\|_{L^\infty_x}\lesssim_{\|\phi'\|_{H^5}} 1$, and
$\|v'(x,t)\|_{L^\infty_x}\lesssim_{\|\phi'\|_{H^5}} 1$ (by Sobolev embedding), we obtain that
\begin{align*}
&\|\nabla_{\mathbb{R}^4}\, e_R\|_{L^1_tL^2_x([-T_0,T_0]\times\mathbb{R}^4)}
=\int_{-T_0}^{T_0} \Big(\int_{\mathbb{R}^4} |\nabla_{\mathbb{R}^4} \, e_R|^2\, dx\Big)^{\frac{1}{2}}dt\\
&\lesssim_{\|\phi'\|_{H^5}}\int_{-T_0}^{T_0} \left(\int_{\mathbb{R}^4}\mathds{1}_{[R, 2R]}(|x|)
(|v'(x,t)|^2 + |\nabla_{\mathbb{R}^4} v'(x,t)|^2)\,dx +\frac{1}{R^2}\int_{\mathbb{R}^4}
|\langle \nabla_{\mathbb{R}^4}\rangle^2 v'(x,t)|^2\,dx
\right)^\frac{1}{2}dt\\
&\lesssim_{\|\phi'\|_{H^5}}
2T_0\, \sup_{t\in[-T_0,T_0]}\left( \int_{\mathbb{R}^4}\mathds{1}_{[R,2R]}(|x|)\,
|\langle \nabla_{\mathbb{R}^4}\rangle^2 v'(x,t)|^2\,dx \right)^{\frac{1}{2}}+\frac{2T_0}{R}
\to 0,\text{ as } R\to \infty.\\
\end{align*}
So we can obtain that \begin{equation*}
\|\nabla E_{R,N}\|_{L^1_tL^2_x([-T_0N^{-2}, T_0N^{-2}]\times\mathbb{T}^4)}<\varepsilon_1,
\end{equation*}
for $R>R_0$ with $R_0$ large enough.
By checking all three conditions above, we have the desired result.
\end{proof}
Next, we prove an extinction lemma, as Ionescu and Pausader \cite{ionescu2012energy}
did in their paper on the energy-critical NLS on $\mathbb{T}^3$. The extinction lemma
is an essential ingredient in the proof of the GWP result on $\mathbb{T}^4$.
\begin{lem}[Extinction Lemma]\label{lem:extinction}
Let $\phi\in\dot{H}^1(\mathbb{R}^4)$, and define $f_N$ as in (\ref{def:euclideanProfile}).
For any $\varepsilon >0$, there exist $T = T(\phi, \varepsilon)$ and $N_0(\phi, \varepsilon)$ such that
for all $N\geq N_0$,
there holds that
\[
\|e^{it\Delta} f_N\|_{Z([TN^{-2}, T^{-1}])}\lesssim \varepsilon.
\]
\end{lem}
\begin{proof}
For $M\geq 1$, we define
\[
K_M(x,t) = \sum_{\xi\in\mathbb{Z}^4} e^{-i[t|\xi|^2 +x\cdot\xi]}\,\eta(\xi/M)
=e^{it\Delta}P_{\leq M}\delta_0.
\]
We know from \cite[Lemma 3.18]{bourgain1993fourier} that
$K_M$ satisfies
\begin{equation}\label{4.9}
|K_M(x,t)|\lesssim \prod_{i=1}^4 \left(\frac{M}{\sqrt{q_i}(1+M|t/(\lambda_i)-a_i/q_i|^{\frac{1}{2}})}
\right),
\end{equation}
if $a_i$ and $q_i$ satisfy $\frac{t}{\lambda_i} =\frac{a_i}{q_i}+\beta_i$, where
$q_i\in \{1,\cdots, M\}$, $a_i\in \mathbb{Z}$, $(a_i, q_i)=1$ and $|\beta_i|\leq (Mq_i)^{-1}$ for all $i=1,2,3,4.$
From this, we conclude that for any $1\leq S\leq M$,
\begin{equation}\label{4.11}
\|K_M(x,t)\|_{L_{x,t}^\infty(\mathbb{T}^4\times[SM^{-2}, S^{-1}])} \lesssim
S^{-2}M^4.
\end{equation}
This follows directly from (\ref{4.9}) and Dirichlet's approximation lemma,
which states the following:
\textit{For any real number $\alpha$ and any positive integer $N$, there exist
integers $p$ and $q$ such that $1\leq q\leq N$ and $|q\alpha -p|<\frac{1}{N}$}.
Assume that $SM^{-2}\leq |t|\leq \frac{1}{S}$, and write $\frac{t}{\lambda_i} =\frac{a_i}{q_i}+\beta_i$ with
$|\beta_i|\leq \frac{1}{Mq_i}\leq \frac{1}{M}\leq \frac{1}{S}$. So we obtain that
\[\left|\frac{a_i}{q_i}\right|\leq \frac{2}{S}\quad \implies \quad q_i\geq \frac{S|a_i|}{2}.\]
Therefore either $q_i\geq \frac{1}{2}S$ (when $a_i\neq 0$) or $a_i=0$, for each $i$.
If {$q_i\geq \frac{1}{2}S\quad (a_i\geq 1)$}, then
\[\frac{M}{\sqrt{q_i}(1+M|t/(\lambda_i)-a_i/q_i|^{\frac{1}{2}})}
\lesssim \frac{M}{\sqrt{q_i}}\lesssim S^{-\frac{1}{2}}M.
\]
If {$a_i=0$}, then
\[\frac{M}{\sqrt{q_i}(1+M|t/(\lambda_i)-a_i/q_i|^{\frac{1}{2}})}
\lesssim \frac{M}{\sqrt{q_i}+M|t|^{1/2}}\lesssim |t|^{-\frac{1}{2}}
\leq S^{-\frac{1}{2}}M.
\]
So we have that $|K_M(x,t)|\lesssim S^{-2}M^4$.
By the definition (\ref{def:euclideanProfile}), to prove the extinction lemma
we may assume that $\phi\in C_0^{\infty}(\mathbb{R}^4)$. We claim that
\begin{equation}\label{4.12}
\begin{split}
\|f_N\|_{L^1(\mathbb{T}^4)} &\lesssim_{\phi} N^{-3}\\
\|P_K f_N\|_{L^2(\mathbb{T}^4)}&\lesssim_{\phi} \left( 1+ \frac{K}{N}\right)^{-10}N^{-1}.
\end{split}
\end{equation}
Let's consider the bound of $\|f_N\|_{L^1(\mathbb{T}^4)}$:
\begin{align*}
\|f_N\|_{L^1(\mathbb{T}^4)}& = \|\phi_N(\Psi^{-1}(y))\|_{L^1(\mathbb{T}^4)}
= \|\phi_N(x)\|_{L^1(\mathbb{R}^4)}\\
&=\int_{\mathbb{R}^4} |N(Q_N\phi)(Nx)|\,dx \\
&= \frac{1}{N^3}\int_{\mathbb{R}^4}|Q_N \phi|(x)\, dx =
\frac{1}{N^3} \int_{\mathbb{R}^4} |\eta(\frac{x}{N^{1/2}}) \phi(x)|\,dx\\
&\leq \frac{1}{N^3} \|\phi\|_{L^1(\mathbb{R}^4)}.
\end{align*}
Let's consider the bound of $\|P_K f_N\|_{L^2(\mathbb{T}^4)}$:
\begin{align*}
\|P_K f_N\|_{L^2(\mathbb{T}^4)} &=\|P_K \phi_N\|_{L^2(\mathbb{R}^4)}\\
&=\|P_K N(Q_N\phi)(Nx)\|_{L^2(\mathbb{R}^4)}\\
&=\|N(P_{\frac{K}{N}}Q_N\phi)(Nx)\|_{L^2(\mathbb{R}^4)}=
\frac{1}{N}\|P_{\frac{K}{N}}Q_N\phi\|_{L^2(\mathbb{R}^4)}\\
&=\frac{1}{N}\|P_{\frac{K}{N}}(\eta(\frac{x}{N^{\frac{1}{2}}})\phi(x))\|_{L^2(\mathbb{R}^4)}\\
&\leq \frac{1}{N}\left(1+\frac{K}{N} \right)^{-10}\|\eta(\frac{x}{N^{\frac{1}{2}}})
\phi(x)\|_{H^{10}(\mathbb{R}^4)}\\
&\leq \frac{1}{N}\left(1+\frac{K}{N} \right)^{-10} \|\phi\|_{H^{10}}.
\end{align*}
By Proposition \ref{prop:strichartz}, for $p>3$ we obtain that
\begin{equation}\label{4.121}
\|e^{it\Delta}P_K f_N\|_{L^p_{x,t}(\mathbb{T}^4\times[-1,1])}\lesssim K^{2-\frac{6}{p}}
\left(1+\frac{K}{N} \right)^{-10} N^{-1} \|\phi\|_{H^{10}}.
\end{equation}
Then let's estimate $\|e^{it\Delta}f_N\|_{Z([TN^{-2},T^{-1}])}$. We know that
\begin{equation*}
\|e^{it\Delta}f_N\|_{Z([TN^{-2},T^{-1}])} = \sup_{J\subset[TN^{-2},T^{-1}]}
\left( \sum_K K^2 \|P_K e^{it\Delta}f_N\|^4_{L^4(J\times\mathbb{T}^4)}\right)^\frac{1}{4}
\end{equation*}
To estimate it, we decompose the sum into three parts:
$$\left(\sum_{K\leq NT^{-\frac{1}{100}}} +
\sum_{K\geq NT^{\frac{1}{100}}}+
\sum_{NT^{-\frac{1}{100}}\leq K\leq NT^{\frac{1}{100}}}\right)
K^2 \|P_K e^{it\Delta}f_N\|^4_{L^4([TN^{-2}, T^{-1}]\times\mathbb{T}^4)}.$$
\case{1}{$K\leq NT^{-\frac{1}{100}}$:}
By (\ref{4.121}), we obtain that
\begin{align*}
&\sum_{K\leq NT^{-\frac{1}{100}}} K^2
\|P_K e^{it\Delta}f_N\|^4_{L^4([TN^{-2}, T^{-1}]\times\mathbb{T}^4)}\\
\lesssim & \sum_{K\leq NT^{-\frac{1}{100}}} K^4 \left(1+\frac{K}{N}\right)^{-40}
N^{-4}\|\phi\|^4_{H^{10}}\\
\lesssim_{\phi}& (NT^{-\frac{1}{100}})^4N^{-4} = T^{-\frac{1}{25}}.
\end{align*}
\case{2}{$K\geq NT^{\frac{1}{100}}$:}
By (\ref{4.121}), we obtain that
\begin{align*}
&\sum_{K\geq NT^{\frac{1}{100}}} K^2
\|P_K e^{it\Delta}f_N\|^4_{L^4([TN^{-2}, T^{-1}]\times\mathbb{T}^4)}\\
\lesssim & \sum_{K\geq NT^{\frac{1}{100}}} K^4 \left(1+\frac{K}{N}\right)^{-40}
N^{-4}\|\phi\|^4_{H^{10}}\\
\lesssim &\sum_{K\geq NT^{\frac{1}{100}}} K^{-36}N^{36}\|\phi\|^4_{H^{10}}\\
\lesssim_{\phi} & T^{-\frac{36}{100}}.
\end{align*}
\case{3}{$NT^{-\frac{1}{100}}\leq K\leq NT^{\frac{1}{100}}$:}
Let's consider $K\in [NT^{-\frac{1}{100}}, NT^{\frac{1}{100}}]$ and set $M\sim \max{(K, N)}$
and $S\sim T$.
\begin{equation}\label{4.13}
\begin{split}
\|e^{it\Delta} P_K f_N\|_{L^{\infty}_{x,t}(\mathbb{T}^4\times[TN^{-2}, T^{-1}])}
&\lesssim \|K_M * f_N\|_{L^{\infty}_{x,t}(\mathbb{T}^4\times[TN^{-2}, T^{-1}])}\\
&\leq \|K_M\|_{L^\infty_{x,t}(\mathbb{T}^4\times[TN^{-2}, T^{-1}])}\,
\|f_N\|_{L^1(\mathbb{T}^4)}\\
&\lesssim_{\phi} T^{-2}M^4N^{-3}\lesssim T^{-2+\frac{1}{25}}N,
\end{split}
\end{equation}
since $M\lesssim NT^{\frac{1}{100}}$ in this range.
\begin{equation}\label{4.14}
\begin{split}
\|e^{it\Delta} P_K f_N\|_{L_{x,t}^{p}(\mathbb{T}^4\times[TN^{-2}, T^{-1}])}&\lesssim_\phi K^{\varepsilon}\left(
1+\frac{K}{N}\right)^{-10}N^{-1}\\
&\leq N^{-1+\varepsilon}T^{\frac{\varepsilon}{100}},
\end{split}
\end{equation}
where $p=3+\delta$ for a small $\delta>0$ and $\varepsilon = 2-\frac{6}{p}$, by (\ref{4.121}).
Interpolating (\ref{4.13}) with (\ref{4.14}), with weights $\frac{p}{4}$ and $1-\frac{p}{4}$, we have that
\begin{equation}
\|e^{it\Delta}P_K f_N\|_{L^4_{x,t}(\mathbb{T}^4\times[TN^{-2},T^{-1}])}\lesssim_{\phi}
\left( N^{-1+\varepsilon} T^{\frac{\varepsilon}{100}} \right)^{\frac{p}{4}}
\left(T^{-2+\frac{1}{25}} N \right)^{1-\frac{p}{4}}
\lesssim
N^{-\frac{1}{2}}T^{-\frac{1}{100}}.
\end{equation}
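Let us record the exponent bookkeeping behind the last inequality; it uses only the two estimates displayed above. With $p=3+\delta$ and $\varepsilon = 2-\frac{6}{p}$ as in (\ref{4.121}), the power of $N$ in the interpolated product is
\[
\frac{p}{4}(-1+\varepsilon)+\Big(1-\frac{p}{4}\Big)
= \frac{p}{4}\Big(1-\frac{6}{p}\Big)+1-\frac{p}{4} = -\frac{1}{2},
\]
independently of $\delta$, while the power of $T$ is
$\frac{p}{4}\cdot\frac{\varepsilon}{100}+\big(1-\frac{p}{4}\big)\big(-2+\frac{1}{25}\big)\leq -\frac{1}{100}$
once $\delta$ is small enough.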
Summing $K^2 \|P_K e^{it\Delta}f_N\|^4_{L^4([TN^{-2}, T^{-1}]\times\mathbb{T}^4)}$ over $K$, we obtain that
\begin{align*}
\sum_{NT^{-\frac{1}{100}}\leq K\leq NT^{\frac{1}{100}}}
K^2 \|P_K e^{it\Delta}f_N\|^4_{L^4([TN^{-2}, T^{-1}]\times\mathbb{T}^4)}
& \lesssim_{\phi} \sum_{NT^{-\frac{1}{100}}\leq K\leq NT^{\frac{1}{100}}} K^2
(N^{-\frac{1}{2}}T^{-\frac{1}{100}})^4\\
&\leq \sum_{NT^{-\frac{1}{100}}\leq K\leq NT^{\frac{1}{100}}}
K^2N^{-2} T^{-\frac{1}{25}}\\
&\lesssim \log (T)\, T^{\frac{2}{100}-\frac{4}{100}}\lesssim T^{-\frac{1}{100}}.
\end{align*}
Combining the three cases and choosing $T = T(\phi,\varepsilon)$ large enough (and then $N_0$ accordingly), we obtain the desired estimate.
\end{proof}
Let us now consider $f\in L^2(\mathbb{T}^4)$, $t_0\in \mathbb{R}$ and $x_0\in \mathbb{T}^4$, and define
\begin{align*}
(\pi_{x_0} f)(x)&:= f(x-x_0),\\
(\Pi_{t_0,x_0} f)(x)&:= (\pi_{x_0}e^{-it_0\Delta} f)(x).
\end{align*}
As in (\ref{def:euclideanProfile}), given $\phi\in\dot{H}^1(\mathbb{R}^4)$
and $N\geq 1$, we define
\[
T_N \phi (x):= N\tilde{\phi} (N\Psi^{-1}(x)),\text{ where }
\tilde{\phi} (y):= \eta(y/N^{\frac{1}{2}})\phi(y)
\]
and claim that
$T_N : \dot{H}^1(\mathbb{R}^4)\to H^1(\mathbb{T}^4)$ is a linear operator with
$\|T_N \phi\|_{H^1(\mathbb{T}^4)}\lesssim \|\phi\|_{\dot{H}^1(\mathbb{R}^4)}$.
\begin{remark}
To show $\|T_N \phi\|_{H^1(\mathbb{T}^4)}\lesssim \|\phi\|_{\dot{H}^1(\mathbb{R}^4)}$, we may compute in the coordinates $y=\Psi^{-1}(x)$, in which $T_N\phi(x) = N\eta(N^{\frac{1}{2}}y)\phi(Ny)$. For the homogeneous part,
\begin{align*}
\|T_N \phi\|_{\dot{H}^1(\mathbb{T}^4)}
&= \|\nabla_y\big(N\eta(N^{\frac{1}{2}}y)\phi(Ny)\big)\|_{L^2(\mathbb{R}^4)}\\
&\leq \|N^{\frac{3}{2}} (\nabla \eta) (N^{\frac{1}{2}}y)\,\phi(Ny)\|_{L^2(\mathbb{R}^4)}
+\|N^2 \eta(N^{\frac{1}{2}}y)\, (\nabla \phi)(Ny)\|_{L^2(\mathbb{R}^4)}\\
&\leq \|N^{\frac{3}{2}} (\nabla \eta) (N^{\frac{1}{2}}y)\|_{L^4(\mathbb{R}^4)}\,
\|\phi(Ny)\|_{L^4(\mathbb{R}^4)}
+\|\phi\|_{\dot{H}^1(\mathbb{R}^4)}\\
&= N\|\nabla\eta\|_{L^4(\mathbb{R}^4)}\cdot N^{-1}\|\phi\|_{L^4(\mathbb{R}^4)}+\|\phi\|_{\dot{H}^1(\mathbb{R}^4)}\\
&\lesssim \|\phi\|_{\dot{H}^1(\mathbb{R}^4)},
\end{align*}
using H\"{o}lder's inequality, $|\eta|\leq 1$, and the Sobolev embedding $\dot{H}^1(\mathbb{R}^4)\hookrightarrow L^4(\mathbb{R}^4)$.
The inhomogeneous part is bounded in the same way:
$\|T_N\phi\|_{L^2(\mathbb{T}^4)} = N^{-1}\|\eta(\tfrac{\cdot}{N^{1/2}})\phi\|_{L^2(\mathbb{R}^4)}
\lesssim N^{-\frac{1}{2}}\|\phi\|_{L^4(\mathbb{R}^4)}\lesssim \|\phi\|_{\dot{H}^1(\mathbb{R}^4)}.$
\end{remark}
\begin{definition}
Let $\widetilde{\mathcal{F}_e}$ denote the set of renormalized Euclidean frames
\begin{align*}
\widetilde{\mathcal{F}_e}:=&
\{
(N_k, t_k, x_k)_{k\geq 1}: N_k \in [1,\infty),\ t_k\to 0,\ x_k\in\mathbb{T}^4,\
N_k\to\infty \\
&\text{ and either } t_k=0 \text{ for any } k\geq 1 \text{ or }
\lim_{k\to \infty} N_k^2|t_k| =\infty \}.
\end{align*}
\end{definition}
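For instance, both $(2^k, 0, 0)_{k\geq 1}$ and $(2^k, 2^{-k}, 0)_{k\geq 1}$ belong to $\widetilde{\mathcal{F}_e}$, since in the second case $N_k^2|t_k| = 2^k\to\infty$; on the other hand, $(2^k, 2^{-2k}, 0)_{k\geq 1}$ does not, because $t_k\neq 0$ while $N_k^2|t_k| = 1$ for every $k$.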
\begin{prop}[Euclidean profiles]\label{prop:4.4}
Assume that $\mathcal{O} = (N_k, t_k, x_k)_k \in \widetilde{\mathcal{F}_e}$, $\mu \in \{-1, 0, 1\}$, and
$\phi\in \dot{H}^1(\mathbb{R}^4)$.
When $\mu = -1$, assume in addition that if $v$ is the solution of (\ref{eq:ivpfocusing}) with $v(0) =\phi$, then $v$ satisfies
\[\sup_{t\in \text{lifespan of }v}\|v(t)\|_{\dot{H}^1(\mathbb{R}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)}.\] Then
\begin{enumerate}
\item there exists $\tau =\tau(\phi)$ such that for $k$ large enough
(depending only on $\phi$ and $\mathcal{O}$) there is a nonlinear solution
$U_k\in X^1(-\tau, \tau)$ of the initial value problem (\ref{eq:NLS}) with initial data $U_k(0) = \Pi_{t_k, 0} (T_{N_k} \phi)$ and
\begin{equation}\label{4.4.1}
\|U_k\|_{X^1(-\tau, \tau)}\lesssim_{E_{\mathbb{R}^4}(\phi),\, \|\phi\|_{\dot{H}^1(\mathbb{R}^4)}} \, 1;
\end{equation}
\item there exists an Euclidean solution $u\in C(\mathbb{R}: \dot{H}^1(\mathbb{R}^4))$
of
\[
(i\partial_t +\Delta_{\mathbb{R}^4})u = \mu u|u|^2
\]
with scattering data $\phi^{\pm \infty}$ defined as in Theorem \ref{thm:GWPinR} such
that the following holds, up to a subsequence: for any $\varepsilon >0$, there exists
$T(\phi, \varepsilon)$ such that for all $T\geq T(\phi, \varepsilon)$, there exists
$R(\phi, \varepsilon, T)$ such that for all $R\geq R(\phi, \varepsilon, T)$,
there holds that
\begin{equation}\label{4.4.2}
\|U_k -\widetilde{u_k}\|_{X^1(\{|t-t_k|\leq TN_k^{-2}\}\cap\{|t|<T^{-1}\})}
\leq \varepsilon,
\end{equation}
for $k$ large enough, where
\begin{equation}\label{eq:tildeu}
(\pi_{-x_k} \widetilde{u_k})(x, t) = N_k
\eta(N_k \Psi^{-1}(x)/R) u(N_k\Psi^{-1}(x), N_k^2(t-t_k)).
\end{equation}
In addition, up to a subsequence,
\begin{equation}\label{4.4.3}
\|U_k(t) -\Pi_{t_k-t,x_k}
T_{N_k}\phi^{\pm\infty}\|_{X^1(\{\pm (t-t_k)\geq \pm TN_k^{-2}\}\cap\{|t|<T^{-1}\})},
\leq \varepsilon,
\end{equation}
for $k$ large enough (depending on $\phi$, $\varepsilon$, $T$, and $R$).
\end{enumerate}
\end{prop}
\begin{proof}
By translation invariance, it suffices to prove the case $x_k = 0$.
\noindent Part (1):
First, for $k$ large enough, we can ensure that
\[
\|\phi -\eta(\tfrac{x}{N_k^{\frac{1}{2}}}) \phi\|_{\dot{H}^1(\mathbb{R}^4)}\leq \varepsilon_1.
\]
For each $N_k$, we choose $T_{0,N_k} = \tau N_k^2$ (here $T_{0, N_k}$ is the parameter
appearing in Theorem \ref{thm:4.2}).
For each $T_{0, N_k}$, we take $R_k$ large enough so that Theorem \ref{thm:4.2} applies.
(Note: in this case, $R_k$ is determined by $T_{0, N_k}$ as in the proof of
Theorem \ref{thm:4.2}.)
\noindent Part (2):
We first consider the case $t_k = 0$ for all $k$.
(\ref{4.4.1}) follows directly from Theorem \ref{thm:4.2}, by choosing $k$ and $R$ large enough for any
fixed large $T$.
To prove (\ref{4.4.2}), we need to choose $T(\phi, \delta)$ large enough, to make
sure
\[
\|\nabla_{\mathbb{R}^4} u\|_{L^3_{x,t}(\mathbb{R}^4\times\{|t|>T(\phi,\delta)\})}\leq \delta.
\]
By Theorem \ref{thm:GWPinR}, we obtain that
\[
\|u(\pm T(\phi, \delta))-e^{\pm iT(\phi,\delta)\Delta}\phi^{\pm\infty}\|_{\dot{H}^1(\mathbb{R}^4)}
\leq \delta,
\]
which implies
\begin{equation}
\|U_{N_k}(\pm TN_k^{-2}) -\Pi_{\mp TN_k^{-2}, x_k} T_{N_k} \phi^{\pm\infty}\|_{H^1(\mathbb{T}^4)}
\leq \delta.
\end{equation}
By Proposition \ref{prop:ZinX} and Proposition \ref{EstimateFreeSolution}, we have
\begin{equation}\label{4.4.5}
\|e^{it\Delta}\left( U_{N_k}(\pm TN_k^{-2}) -\Pi_{\mp TN_k^{-2}, x_k} T_{N_k}
\phi^{\pm\infty}\right)\|_{X^1(|t|<T^{-1})}\lesssim \delta.
\end{equation}
By Proposition \ref{prop:lwp}, we obtain that
\begin{equation}\label{4.4.6}
\|U_{N_k} -e^{i(t\mp TN_k^{-2})\Delta}U_{N_k}(\pm TN_k^{-2})\|_{X^1(\{\pm t\geq \pm TN_k^{-2}\}\cap\{|t|<T^{-1}\})}\leq \delta,
\end{equation}
and combining (\ref{4.4.5}) and (\ref{4.4.6}), we have
\[
\|U_{N_k} -\Pi_{-t, x_k} T_{N_k}\phi^{\pm\infty} \|_{X^1(\{\pm t\geq \pm TN_k^{-2}\}\cap\{|t|<T^{-1}\})}
\leq \varepsilon,
\]
provided we choose $\delta$ small enough.
We now consider the second case: $N_k^2|t_k|\to \infty$. We have
\begin{align*}
U_k(0) &= \Pi_{t_k, 0} (T_{N_k} \phi)\\
&= e^{-it_k\Delta} \left(N_k\,\widetilde{\phi}(N_k\Psi^{-1}(x))\right)\\
&= e^{-it_k\Delta} \left( N_k\, \eta(N_k^{\frac{1}{2}}\Psi^{-1}(x))
\phi(N_k\Psi^{-1}(x))\right).
\end{align*}
By the existence of wave operators for NLS on $\mathbb{R}^4$, there exists a solution $v$ satisfying:
\begin{equation}
\begin{cases}
(i\partial_t+\Delta_{\mathbb{R}^4}) v = \mu v|v|^2,\\
\lim_{t\to -\infty} \|v(t)- e^{it\Delta} \phi\|_{\dot{H}^1(\mathbb{R}^4)} = 0.
\end{cases}
\end{equation}
We set
\[\widetilde{v_k}(t) = N_k\, \eta(N_k\Psi^{-1}(x)/R)\,
v(N_k\Psi^{-1}(x), N_k^2 t),
\]
so that $\widetilde{v_k}(-t_k) = N_k\, \eta(N_k\Psi^{-1}(x)/R)\,
v(N_k\Psi^{-1}(x), -N_k^2 t_k).$
For $k$ and $R$ large enough,
\begin{align*}
&\|\widetilde{v_k}(-t_k) - e^{-it_k\Delta} N_k\,
\eta(N_k^{\frac{1}{2}}\Psi^{-1}(x))\phi(N_k\Psi^{-1}(x))\|_{\dot{H}^1(\mathbb{T}^4)}\\
\leq& \|\eta(\frac{x}{N_k^{\frac{1}{2}}})v(x, -N_k^2t_k)
-e^{it_kN_k^2\Delta}\eta(\frac{x}{N_k^{\frac{1}{2}}})\phi(x)
\|_{\dot{H}^1(\mathbb{R}^4)}\\
\leq &\varepsilon.
\end{align*}
Let $V_k(t)$ be the solution of the initial value problem (\ref{eq:IVP}) on $\mathbb{T}^4$ with
initial data $V_k(0) = \widetilde{v_k}(0)$; then $V_k(t)$ exists
on $[-\delta, \delta]$, and $\|V_k(t)-\widetilde{v_k}(t)\|_{X^1([-\delta, \delta])}\lesssim \varepsilon$.
By the stability property (Proposition \ref{prop:stability}),
$
\|U_k - V_k\|_{X^1([-\delta, \delta])}\to 0, \text{ as } k\to \infty.
$
\end{proof}
The following corollary (Corollary \ref{lem:decomposition1}) decomposes the nonlinear Euclidean profiles $U_k$ defined in Proposition \ref{prop:4.4}. This corollary follows closely a part of the proof of Lemma 6.2 in \cite{ionescu2012energy}. We state it as a separate corollary because the almost orthogonality of nonlinear profiles (Lemma \ref{prop:almostorth}) relies heavily on this decomposition.
\begin{cor}[Decomposition of the nonlinear Euclidean profiles $U_k$]\label{lem:decomposition1}
Let $U_k$ be the nonlinear Euclidean profiles associated with $\mathcal{O} = (N_k, t_k, x_k)_k \in \widetilde{\mathcal{F}_e}$ defined above. For any
$\theta>0$, there exists $T_{\theta}^0$ sufficiently large such that
for all $T_{\theta}\geq T_{\theta}^0$ there is $R_{\theta}$
sufficiently large such that for all $k$ large enough
(depending on $R_{\theta}$) we can decompose $U_k$ as follows:
\[ \mathds{1}_{(-T_{\theta}^{-1},T_{\theta}^{-1})}(t)
U_k = \omega_k^{\theta,-\infty}+\omega_k^{\theta,+\infty}
+\omega_k^{\theta}+\rho^{\theta}_k,\]
and $\omega_k^{\theta,\pm\infty}$, $\omega_k^{\theta}$,
and $\rho^{\theta}_k$ satisfy the following
conditions:
\begin{equation}\label{eq:decomposition1}
\begin{split}
\|\omega_k^{\theta,\pm\infty}\|_{Z'(-T_{\theta}^{-1},T_{\theta}^{-1})}
+\|\rho^{ \theta}_k\|_{X^1(-T_{\theta}^{-1},T_{\theta}^{-1})}\leq \theta,\\
\|\omega_k^{\theta,\pm\infty}\|_{X^1(-T_{\theta}^{-1},T_{\theta}^{-1})}+
\|\omega_k^{\theta}\|_{X^1(-T_{\theta}^{-1},T_{\theta}^{-1})}\lesssim 1,\\
\omega_k^{\theta,\pm\infty} = P_{\leq R_{\theta}N_{k}}
\omega_k^{ \theta,\pm\infty}\\
|\nabla_x^m \omega_k^{\theta}|+(N_{k})^{-2}\mathds{1}_{S_k^{ \theta}}
|\partial_t \nabla_x^m \omega_k^{ \theta}|\leq R_{\theta}
(N_{k})^{|m|+1}\mathds{1}_{S_k^{\theta}},\ 0\leq |m| \leq 10,
\end{split}
\end{equation}
where
\[
S_k^{\theta} :=\{ (x,t)\in \mathbb{T}^4\times (-T_{\theta}, T_{\theta})
: |t-t_{k}|< T_{\theta}(N_{k})^{-2},\ |x-x_{k}|\leq R_{\theta}(N_{k})^{-1}\}.
\]
\end{cor}
\begin{proof}
By Proposition \ref{prop:4.4}, there exists
$T(\phi, \frac{\theta}{4})$, such that for all
$T\geq T(\phi, \frac{\theta}{4})$, there exists $R(\phi,\frac{\theta}{4},T)$
such that for all $R\geq R(\phi,\frac{\theta}{4},T)$, there holds that
\begin{equation*}
\|U_k - \widetilde{u_k}\|_{X^1(\{
|t-t_k|\leq T(N_{k})^{-2}\}\cap\{
|t|<T^{-1}
\})}\leq \frac{\theta}{2},
\end{equation*}
for $k$ large enough, where
\begin{equation*}
\left(\pi_{-x_k} \widetilde{u_k}\right)(x,t)
=N_{k} \eta(N_{k} \Psi^{-1}(x)/R)
u(N_{k} \Psi^{-1}(x), N_{k}^2(t-t_k)),
\end{equation*}
where $u$ is a solution of (\ref{eq:NLS}) with scattering data $\phi^{\pm\infty}$.
In addition, up to subsequence,
\begin{equation*}
\|U_k - \Pi_{t_k-t, x_k} T_{N_{k}} \phi^{\pm\infty}
\|_{X^1(\{\pm(t-t_k)\geq T(N_k)^{-2}\}\cap\{
|t|\leq T^{-1}
\})}\leq \frac{\theta}{4},
\end{equation*}
for $k$ large enough (depending on $\phi$, $\theta$, $T$, and $R$).
Choose a sufficiently large $T_{\theta}> T(\phi,\frac{\theta}{4})$,
based on the extinction lemma (Lemma \ref{lem:extinction}), such that
\begin{equation*}
\|e^{it\Delta} \Pi_{t_k, x_k} T_{N_k} \phi^{\pm\infty}
\|_{Z(T_{\theta}(N_k)^{-2}, T_{\theta}^{-1})}\leq \frac{\theta}{4}
\end{equation*}
for $k$ large enough.
And then we choose $R_{\theta} = R(\phi, \frac{\theta}{2}, T_{\theta})$.
Denote:
\begin{enumerate}
\item $\omega_k^{\theta,\pm\infty} := \mathds{1}_{\{
\pm(t-t_k)\geq T_{\theta}(N_k)^{-2},|t|\leq T_{\theta}^{-1}\}}
\left(\Pi_{t_k-t, x_k} T_{N_k}\phi^{\theta,\pm\infty}
\right)$,
{where }
\[
\|\phi^{\theta,\pm\infty}\|_{\dot{H}^1(\mathbb{R}^4)}\lesssim 1,
\ \phi^{\theta,\pm\infty} = P_{\leq R_{\theta}} (\phi^{\theta,\pm\infty}),
\]
which implies $\omega_k^{ \theta,\pm\infty} = P_{\leq R_{\theta}N_{k}}
\omega_k^{ \theta,\pm\infty}$.
\item $\omega_k^{\theta} := \widetilde{u_k}\cdot \mathds{1}_{S_k^{ \theta}},$
where $
S_k^{\theta} :=\{ (x,t)\in \mathbb{T}^4\times (-T_{\theta}, T_{\theta})
: |t-t_k|< T_{\theta}(N_k)^{-2},\ |x-x_k|\leq R_{\theta}(N_k)^{-1}\}$.
\noindent By the stability property (Proposition \ref{prop:stability}) and Theorem \ref{thm:4.2}, we can adjust
$\omega_k^{\theta}$ and $\omega_k^{\theta,\pm\infty}$, with an acceptable
error, to make
\[|\nabla_x^m \omega_k^{\theta}|+(N_{k})^{-2}\mathds{1}_{S_k^{\theta}}
|\partial_t \nabla_x^m \omega_k^{ \theta}|\leq R_{\theta}
(N_{k})^{|m|+1}\mathds{1}_{S_k^{ \theta}},\ 0\leq |m| \leq 10.\]
\item $\rho_k^{\theta}:= \mathds{1}_{(-T_{\theta}^{-1},T_{\theta}^{-1})}(t)
U_k -\omega^{\theta}_k -\omega^{\theta,+\infty}_k-\omega^{\theta,-\infty}_k$.
\end{enumerate}
By (\ref{4.4.2}) and (\ref{4.4.3}), we obtain that
\[
\|\rho_k^{\theta}\|_{X^1(\{|t|<T_{\theta}^{-1}\})}\leq \frac{\theta}{2},
\]
and then we have
\begin{align*}
\|\omega_k^{\theta,\pm\infty}\|_{Z'(-T_{\theta}^{-1},T_{\theta}^{-1})}
+\|\rho^{\theta}_k\|_{X^1(-T_{\theta}^{-1},T_{\theta}^{-1})}\leq \theta,\\
\|\omega_k^{\theta,\pm\infty}\|_{X^1(-T_{\theta}^{-1},T_{\theta}^{-1})}+
\|\omega_k^{\theta}\|_{X^1(-T_{\theta}^{-1},T_{\theta}^{-1})}\lesssim 1.
\end{align*}
\end{proof}
\section{Profile decomposition}
In this section, we construct the profile decomposition on $\mathbb{T}^4$ for linear
Schr\"{o}dinger equations. The arguments and propositions in this section is almost identical to those in the Section 5 of \cite{ionescu2012global2}, except for one more lemma (Lemma \ref{prop:almostorth}) about almost orthogonality of nonlinear profiles which is useful in the focusing case.
As in the previous section, given $f\in L^2(\mathbb{R}^4)$, $t_0\in\mathbb{R}$, and
$x_0\in\mathbb{T}^4$, we define:
\begin{align*}
(\Pi_{t_0, x_0} )f(x)&:= (e^{-it_0\Delta} f)(x-x_0)\\
T_N \phi (x) &:= N\widetilde{\phi} (N\Psi^{-1}(x)),
\end{align*}
where $\widetilde{\phi}(y):= \eta(\frac{y}{N^{\frac{1}{2}}})\phi(y).$
Observe that $T_N : \dot{H}^1(\mathbb{R}^4) \to H^1(\mathbb{T}^4)$ is a linear operator with
$\|T_N \phi\|_{H^1(\mathbb{T}^4)}\lesssim \|\phi\|_{\dot{H}^1(\mathbb{R}^4)}$.
\begin{definition}[Euclidean frames]
\begin{enumerate}
\item We define a Euclidean frame to be a sequence $\mathcal{F}_e =(N_k, t_k, x_k)_k$
with $N_k\geq 1$, $N_k\to +\infty$, $t_k\in\mathbb{R}$, $t_k\to 0$, $x_k\in\mathbb{T}^4$.
We say that two frames, $(N_k, t_k, x_k)_k$ and $(M_k, s_k, y_k)_k$ are
orthogonal if
\[
\lim_{k\to+\infty} \left( \ln \left|\frac{N_k}{M_k}\right|+ N_k^2
\left|t_k-s_k\right| + N_k\left|x_k-y_k\right|\right) =\infty.
\]
Two frames that are not orthogonal are called equivalent.
\item If $\mathcal{O} =(N_k, t_k, x_k)_k$ is a Euclidean frame and if
$\phi\in\dot{H}^1(\mathbb{R}^4)$, we define the Euclidean profile associated to
$(\phi, \mathcal{O})$ as the sequence $\widetilde{\phi}_{\mathcal{O}_k}$:
\[
\widetilde{\phi}_{\mathcal{O}_k} := \Pi_{t_k, x_k} (T_{N_k}\phi).\]
\end{enumerate}
\end{definition}
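For instance, the frames $(2^k, 0, 0)_k$ and $(2^{2k}, 0, 0)_k$ are orthogonal, since $\ln\left|N_k/M_k\right| = k\ln 2\to\infty$, whereas $(2^k, 0, 0)_k$ and $(2^{k+1}, 0, 0)_k$ are equivalent, since all three quantities in the limit above remain bounded.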
\begin{prop}[Equivalence of frames \cite{ionescu2012global2}]\label{prop:equivalenceFrames}
\noindent (1) If $\mathcal{O}$ and $\mathcal{O}'$ are equivalent Euclidean frames, then there
exists an isometry $T: \dot{H}^1(\mathbb{R}^4) \to \dot{H}^1(\mathbb{R}^4)$ such that
for any profile $\widetilde{\phi}_{\mathcal{O}'_k}$, up to a subsequence there holds that
\[
\limsup_{k\to\infty} \|\widetilde{T\phi}_{\mathcal{O}_k} - \widetilde{\phi}_{\mathcal{O}'_k}\|_{H^1(\mathbb{T}^4)} = 0.
\]
(2) If $\mathcal{O}$ and $\mathcal{O}'$ are orthogonal Euclidean frames and $\widetilde{\phi}_{\mathcal{O}_k}$, $\widetilde{\phi}_{\mathcal{O}'_k}$ are corresponding profiles, then, up to a subsequence:
\begin{align}\label{eq:almostorth1}
\lim_{k\to\infty} \langle \widetilde{\phi}_{\mathcal{O}_k}, \widetilde{\phi}_{\mathcal{O}'_k}\rangle_{H^1\times H^1(\mathbb{T}^4)} = 0;\\\label{eq:almostorth2}
\lim_{k\to\infty} \langle |\widetilde{\phi}_{\mathcal{O}_k}|^2, |\widetilde{\phi}_{\mathcal{O}'_k}|^2\rangle_{L^2\times L^2(\mathbb{T}^4)} = 0.
\end{align}
\end{prop}
\begin{lem}[Refined Strichartz inequality]
Let $f\in H^1(\mathbb{T}^4)$ and $I\subset [0,1]$. Then
\[
\|e^{it\Delta} f\|_{Z(I)} \lesssim (\|f\|_{H_x^1(\mathbb{T}^4)})^{\frac{5}{6}} \sup_{N\in 2^{\mathbb{Z}}} (N^{-1} \|P_N e^{it\Delta}f\|_{L^\infty_{t,x}(I\times\mathbb{T}^4)})^{\frac{1}{6}}.
\]
\end{lem}
\begin{proof}
By the definition of $Z$-norm,
\[
\|e^{it\Delta} f\|_{Z(I)} = \left( \sum_N N^2 \|P_N e^{it\Delta} f\|^4_{L^4_{t,x}}\right)^{\frac{1}{4}}
= \left\|N^{\frac{1}{2}} \|P_Ne^{it\Delta}f\|_{L^4_{t,x}} \right\|_{l^4_N}.
\]
By H\"{o}lder inequality and Proposition \ref{prop:strichartz}, we have that
\begin{align*}
&\left\|N^{\frac{1}{2}} \|P_Ne^{it\Delta}f\|_{L^4_{t,x}} \right\|_{l^4_N}\\
\lesssim & \left\|
\left(N^{\frac{4}{5}}\|P_Ne^{it\Delta}f\|_{L^{\frac{10}{3}}_{t,x}}\right)^{\frac{5}{6}} \left(N^{-1}
\|P_N e^{it\Delta} f\|_{L^\infty_{t,x}}\right)^{\frac{1}{6}}\right\|_{l^4_N}\\
\lesssim & \sup_{N}\left(N^{-1}
\|P_N e^{it\Delta} f\|_{L^\infty_{t,x}}\right)^{\frac{1}{6}}
\left( \sum_N N^{\frac{8}{3}}\|P_N e^{it\Delta} f\|^{\frac{10}{3}}_{L^{\frac{10}{3}}_{t,x}}
\right)^{\frac{5}{6}}\\
\lesssim & \sup_{N}\left(N^{-1}
\|P_N e^{it\Delta} f\|_{L^\infty_{t,x}}\right)^{\frac{1}{6}} \left(\|f\|_{H^1_x(\mathbb{T}^4)}\right)^{\frac{5}{6}}.
\end{align*}
\end{proof}
\begin{remark}
The refined Strichartz estimate actually gives us the following fact:
\[
\sup_{N\in 2^{\mathbb{Z}}} (N^{-1} \|P_N e^{it\Delta}f\|_{L^\infty_{t,x}})\gtrsim
\frac{\|e^{it\Delta}f\|^6_{Z(I)}}{\|f\|^5_{H^1(\mathbb{T}^4)}},
\]
i.e., a linear solution with non-trivial space-time norm must concentrate on at least one
frequency annulus and around some point in space-time.
\end{remark}
\begin{prop}[Profile decompositions \cite{ionescu2012global2}]\label{prop:ProfileDecomposition}
Consider $\{ f_k\}_{k}$ a sequence of functions in $H^1(\mathbb{T}^4)$ and $0<A<\infty$ satisfying
\[
\limsup_{k\to +\infty} \|f_k\|_{H^1(\mathbb{T}^4)}\leq A
\]
and a sequence of intervals $I_k=(-T_k, T^k)$ such that $|I_k|\to 0$ as $k\to\infty$.
Up to passing to a subsequence, assume that $f_k\rightharpoonup g\in H^1(\mathbb{T}^4).$
There exists $J^*\in \{0, 1, ...\}\cup \{\infty\}$, and a sequence of profiles $\widetilde{\psi}^\alpha_{k}:= \widetilde{\psi}^\alpha_{\mathcal{O}_k^\alpha}$
associated to pairwise orthogonal Euclidean frames $\mathcal{O}^\alpha$ and $\psi^\alpha \in \dot{H}^1(\mathbb{R}^4)$ such that
extracting a subsequence, for every $0 \leq J\leq J^*$, we have
\begin{equation}\label{5.1}
f_k = g + \sum_{1\leq \alpha \leq J} \widetilde{\psi}^\alpha_{k}
+ R_k^J
\end{equation}
where $R^J_k$ is small in the sense that
\begin{equation}\label{5.2}
\limsup_{J\to J^*}\limsup_{k\to\infty}\|e^{it\Delta}R_k^J\|_{Z(I_k)} = 0.
\end{equation}
Besides, we also have the following orthogonality relations:
\begin{equation}\label{5.3}
\begin{split}
\|f_k\|_{L^2}^2 = \|g\|_{L^2}^2 + \|R_k^J\|^2_{L^2}+o_k(1).\\
\|\nabla f_k\|_{L^2}^2=\|\nabla g\|_{L^2}^2 +
\sum_{\alpha\leq J}\|\nabla_{\mathbb{R}^4} \psi^\alpha\|^2_{L^2(\mathbb{R}^4)} +\|\nabla R_k^J\|^2_{L^2}
+o_k(1).\\
\lim_{J\to J^*}\limsup_{k\to\infty}
\left| \|f_k\|^4_{L^4} - \|g\|_{L^4}^4 -\sum_{\alpha\leq J}
\|\widetilde{\psi^\alpha_k}\|_{L^4}^4 \right| = 0.
\end{split}
\end{equation}
\end{prop}
\begin{remark}
$g$ and $\widetilde{\psi}^\alpha_{k}$ for all $\alpha$ are
called profiles. In addition, we call $g$ the scale-one profile,
and the $\widetilde{\psi}^\alpha_{k}$ are called Euclidean profiles.
\end{remark}
\begin{remark}[Almost orthogonality of the energy]\label{rmk:EnergyDecoupling}
By (\ref{def:euclideanProfile}), we have that $\|\widetilde{\psi}^\alpha_{k}\|_{L^2(\mathbb{T}^4)} \leq \frac{1}{N_k^\alpha}\|\eta(\tfrac{\cdot}{(N_k^\alpha)^{1/2}})\psi^\alpha\|_{L^2(\mathbb{R}^4)}\lesssim (N_k^\alpha)^{-\frac{1}{2}}\|\psi^\alpha\|_{L^4(\mathbb{R}^4)}\to 0$ as $k\to \infty$, and $\|\widetilde{\psi}^\alpha_{k}\|_{\dot{H}^1(\mathbb{T}^4)} = \|\eta(\tfrac{\cdot}{(N_k^\alpha)^{1/2}})\psi^\alpha\|_{\dot{H}^1(\mathbb{R}^4)} \to \|\psi^\alpha\|_{\dot{H}^1(\mathbb{R}^4)}$ as $k\to\infty$. Combining this with (\ref{5.3}), we know that
\[
\lim_{J\to J^*}\lim_{k\to\infty} \left( \sum_{1\leq \alpha \leq J}
E(\widetilde{\psi}^\alpha_k)
+ E(R_k^J) + E(g)- E(f_k)\right) = 0.
\]
\end{remark}
\begin{lem}[Almost orthogonality of nonlinear profiles]\label{prop:almostorth}
Define $U_k^\alpha$, $U_k^\beta$ as the maximal-lifespan solutions of (\ref{eq:NLS}) with
initial data $U_k^\alpha(0) = \widetilde{\psi}^\alpha_{\mathcal{O}^\alpha_k}$, $U_k^\beta(0) = \widetilde{\psi}^\beta_{\mathcal{O}^\beta_k}$, where $\mathcal{O}^\alpha$ and $\mathcal{O}^\beta$ are orthogonal,
and define $G$ to be the maximal-lifespan solution of (\ref{eq:NLS}) on $I_0$ with initial
data $G(0) =g$. Let $I_k$ be intervals with $0\in I_k$ and $\lim_{k\to \infty} |I_k| = 0$.
Then
\begin{equation}
\lim_{k\to \infty} \sup_{t\in I_k}\, \langle U^\alpha_k(t), U^\beta_k(t) \rangle_{\dot{H}^1\times \dot{H}^1} = 0, \quad \lim_{k\to \infty} \sup_{t\in I_k\cap I_0} \langle U^\alpha_k(t), G(t) \rangle_{\dot{H}^1\times \dot{H}^1} = 0.
\end{equation}
\end{lem}
\begin{proof}
Set $U^0_k(0) = g$ and $U^0_k = G$ for all $k$, so that $U^0_k$ can be considered as a nonlinear profile with the trivial frame $\mathcal{O} =(1, 0, 0)_k$.
For any $\theta > 0$, by the decomposition of the nonlinear profiles $U^\alpha_k$ and $U^\beta_k$ (Corollary \ref{lem:decomposition1}), there exist $T_{\theta, \alpha}$, $R_{\theta, \alpha}$, $T_{\theta, \beta}$, $R_{\theta, \beta}$ sufficiently large such that
\begin{align*}
U_k^\alpha &= \omega_k^{\alpha, \theta,-\infty}+\omega_k^{\alpha, \theta,+\infty}
+\omega_k^{\alpha, \theta}+\rho^{\alpha, \theta}_k,\\
U_k^\beta &= \omega_k^{\beta, \theta,-\infty}+\omega_k^{\beta, \theta,+\infty}
+\omega_k^{\beta, \theta}+\rho^{\beta, \theta}_k.
\end{align*}
For $U^0_k$, set $U^0_k := \omega_k^{0,\theta,-\infty}+\omega_k^{0,\theta,+\infty}
+\omega_k^{0,\theta}+\rho^{0, \theta}_k$ where $\rho_{k}^{0, \theta} =
\omega_k^{0,\theta} = 0$ and $\omega_k^{0,\theta,+\infty}=\omega_k^{0,\theta,-\infty}=\frac{1}{2}G$.
By taking $T_{\theta,0}$ large, it is easy to ensure $\|G\|_{Z'(-T_{\theta,0}^{-1},T_{\theta,0}^{-1})}\leq \theta$. So
$\langle U^\alpha_k(t), G(t) \rangle_{\dot{H}^1\times \dot{H}^1}$ can be considered as a special case of $\langle U^\alpha_k(t), U^\beta_k(t) \rangle_{\dot{H}^1\times \dot{H}^1}$ with $\beta = 0$.
Since $\rho^{\alpha,\theta}_k$, $\rho^{\beta,\theta}_k$ are small terms with $X^1$-norm at most $\theta$, for any fixed $t\in I_k$ it suffices to consider the following three terms:
\begin{enumerate}
\item $\langle\omega_k^{\alpha, \theta,\pm\infty} , \omega_k^{\beta, \theta,\pm\infty} \rangle_{\dot{H}^1\times \dot{H}^1}$;
\item $\langle\omega_k^{\alpha, \theta,\pm\infty} , \omega_k^{\beta, \theta} \rangle_{\dot{H}^1\times \dot{H}^1}$;
\item $\langle\omega_k^{\alpha, \theta} , \omega_k^{\beta, \theta} \rangle_{\dot{H}^1\times \dot{H}^1}$.
\end{enumerate}
\case{(1)}{ $\langle\omega_k^{\alpha, \theta,\pm\infty} , \omega_k^{\beta, \theta,\pm\infty} \rangle_{\dot{H}^1\times \dot{H}^1}$.}
By the constructions of $\omega_k^{\alpha, \theta,\pm\infty}, \omega_k^{\beta, \theta,\pm\infty}$ in the proof of Corollary \ref{lem:decomposition1}, we obtain that
\begin{align}
\omega_k^{\alpha, \theta,\pm\infty} := \mathds{1}_{\{
\pm(t-t_k^{\alpha})\geq T_{\alpha, \theta}(N^{\alpha}_k)^{-2},|t|\leq T_{\alpha, \theta}^{-1}\}}
\left(\Pi_{t_k^\alpha-t, x_k^\alpha} T_{N^\alpha_k}\phi^{\alpha, \theta,\pm\infty}\right),\\
\omega_k^{\beta, \theta,\pm\infty} := \mathds{1}_{\{ \pm(t-t_k^{\beta})\geq T_{\beta, \theta}(N^{\beta}_k)^{-2},|t|\leq T_{\beta, \theta}^{-1}\}}
\left(\Pi_{t_k^\beta-t, x_k^\beta} T_{N^\beta_k}\phi^{\beta, \theta,\pm\infty}\right).
\end{align}
For any fixed $t\in I_k$, whenever both indicator functions are non-zero (the pairing vanishes otherwise), we obtain that \[\langle\omega_k^{\alpha, \theta,\pm\infty}(t) , \omega_k^{\beta, \theta,\pm\infty}(t) \rangle_{\dot{H}^1\times \dot{H}^1} = \langle \widetilde{\phi^{\alpha, \theta,\pm\infty}}_{\mathcal{O}^\alpha_k}, \widetilde{\phi^{\beta, \theta,\pm\infty}}_{\mathcal{O}^\beta_k} \rangle_{\dot{H}^1\times \dot{H}^1}.\]
By (\ref{eq:almostorth1}) of Proposition \ref{prop:equivalenceFrames}, we obtain that
\[\lim_{k\to\infty}\sup_t \langle\omega_k^{\alpha, \theta,\pm\infty}(t) , \omega_k^{\beta, \theta,\pm\infty}(t) \rangle_{\dot{H}^1\times \dot{H}^1} = 0.\]
\case{(2)}{$\langle\omega_k^{\alpha, \theta,\pm\infty} , \omega_k^{\beta, \theta} \rangle_{\dot{H}^1\times \dot{H}^1}$.}
By the construction of $\omega_k^{\beta, \theta}$ in the proof of Corollary \ref{lem:decomposition1}, we have that
\[
\omega_k^{\beta, \theta} := \widetilde{u_k}^\beta\cdot \mathds{1}_{S_k^{ \beta, \theta}},
\]
where $
S_k^{\beta, \theta} :=\{ (x,t)\in \mathbb{T}^4\times (-T_{\beta,\theta}, T_{\beta, \theta})
: |t-t_k^\beta|< T_{\beta, \theta}(N^\beta_k)^{-2},\ |x-x^\beta_k|\leq R_{\beta, \theta}(N^\beta_k)^{-1}\}$ and $\widetilde{u_k}^\beta$ is defined in (\ref{eq:tildeu}).
Following a similar proof of the \textbf{Case 4} in the proof of (\ref{eq:6.8}) in Lemma \ref{lem:6.2}, we have that $\lim_{k\to \infty}\sup_t\langle\omega_k^{\alpha, \theta,\pm\infty} , \omega_k^{\beta, \theta} \rangle_{\dot{H}^1\times \dot{H}^1} = 0$.
\case{(3)}{$\langle\omega_k^{\alpha, \theta} , \omega_k^{\beta, \theta} \rangle_{\dot{H}^1\times \dot{H}^1}$.}
Let $\varepsilon > 0$ be small.
If $N^\alpha_{k}/N^\beta_{k} + N^\beta_{k}/N^\alpha_{k} \leq \varepsilon^{-1000}$ and
$k$ is large enough, then $S^{\alpha, \theta}_{k}\cap S^{\beta, \theta}_{k} = \emptyset$. (By the definition of
orthogonality of frames, $N^\alpha_{k}/N^\beta_{k} + N^\beta_{k}/N^\alpha_{k} \leq \varepsilon^{-1000}$ implies
$(N^\alpha_{k})^2|t^\alpha_{k}-t^\beta_{k}|\to \infty$ or $N^\alpha_{k}|x^\alpha_{k}-x^\beta_{k}|\to\infty$, so
$S^{\alpha, \theta}_{k}\cap S^{\beta, \theta}_{k} = \emptyset$.)
In this case, $\omega^{\alpha,\theta}_{k}\, \omega^{\beta,\theta}_{k} \equiv 0$.
Otherwise, by symmetry we may assume $N^\alpha_{k}/N^\beta_{k} \geq \varepsilon^{-1000}/2$.
Denote that
\[
\omega^{\alpha,\theta}_{k} \omega^{\beta,\theta}_{k} = \omega^{\alpha,\theta}_{k} \widetilde{ \omega}^{\beta,\theta}_{k}:=
\omega^{\alpha,\theta}_{k} \cdot (\omega^{\beta,\theta}_{k} \mathds{1}_{(t^\alpha_{k}-T_{\alpha,\theta} (N^{\alpha}_{k})^{-2}, t^\alpha_{k}+T_{\alpha,\theta} (N^{\alpha}_{k})^{-2})}(t)).
\]
Since $\varepsilon^{10}N_k^{\alpha}\gg\varepsilon^{-10}N_k^{\beta}$, by \textbf{Claim $\dagger$} in the proof of Lemma \ref{lem:7.2} we obtain that
\begin{align*}
\langle\omega^{\alpha,\theta}_{k}, \omega^{\beta,\theta}_{k}\rangle_{\dot{H}^1\times\dot{H}^1}& \leq \langle P_{\leq \varepsilon^{10}N_k^{\alpha}}\omega^{\alpha,\theta}_{k},
\omega^{\beta,\theta}_{k}\rangle_{\dot{H}^1\times\dot{H}^1}
+\langle P_{> \varepsilon^{10}N_k^{\alpha}} \omega^{\alpha,\theta}_{k}, P_{>\varepsilon^{-10} N_k^{\beta}}\omega^{\beta,\theta}_{k} \rangle_{\dot{H}^1\times\dot{H}^1}\\
&+\langle P_{> \varepsilon^{10}N_k^{\alpha}} \omega^{\alpha,\theta}_{k}, P_{\leq\varepsilon^{-10} N_k^{\beta}}\omega^{\beta,\theta}_{k}\rangle_{\dot{H}^1\times\dot{H}^1} \\
&\lesssim \varepsilon.
\end{align*}
\end{proof}
\section{Proof of the main theorems}
It suffices to prove that the solutions remain bounded in $Z$-norm on intervals of
length at most 1. To obtain this, we run an induction on $E(u) + M(u)$ ($\mu = +1$) and on $\|u\|_{L^\infty_t\dot{H}^1}$ ($\mu = -1$).
\begin{definition}
Define
\[
\Lambda (L,\tau) =
\begin{cases}\sup_{\text{u is a solution}\atop \text{of } (\ref{eq:NLS})}\{ \|u\|_{Z(I)}: E(u) + M(u) \leq L, \ |I|\leq \tau\}\qquad \text{if } \mu = +1\\
\sup_{\text{u is a solution}\atop \text{of } (\ref{eq:NLS})}\{ \|u\|_{Z(I)}: \sup_{t\in I} \|u(t)\|^2_{\dot{H}^1(\mathbb{T}^4)}<L,\ |I|\leq \tau\} \qquad \text{if } \mu = -1
\end{cases}
\]
where the supremum is taken over all strong solutions $u$ of (\ref{eq:NLS}) on an interval $I$ of length $|I|\leq \tau$.
\end{definition}
It is easy to see that $\Lambda$ is an increasing function of both $L$ and $\tau$,
and moreover, by the definition
we have the sublinearity of $\Lambda$ in $\tau$:
$
\Lambda (L, \tau + \sigma) \leq \Lambda (L, \tau) + \Lambda (L, \sigma).
$
Hence we define
\[
\Lambda_0 (L) = \lim_{\tau\to 0} \Lambda (L,\tau),
\]
and for all $\tau$, we have that
$
\Lambda(L, \tau)<+\infty \Leftrightarrow \Lambda_0 (L)<+\infty.
$
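Both properties are elementary consequences of the definition: if $|I|\leq\tau+\sigma$, we may split $I$ into two consecutive intervals $I_1$, $I_2$ with $|I_1|\leq\tau$, $|I_2|\leq\sigma$ and use $\|u\|_{Z(I)}\leq\|u\|_{Z(I_1)}+\|u\|_{Z(I_2)}$, which gives the sublinearity; iterating, for any $0<\sigma\leq\tau$ we get
\[
\Lambda(L,\tau)\leq \Big\lceil \frac{\tau}{\sigma}\Big\rceil\,\Lambda(L,\sigma),
\]
so that $\Lambda_0(L)<+\infty$ implies $\Lambda(L,\tau)<+\infty$ for every $\tau$; the converse implication follows from the monotonicity of $\Lambda$ in $\tau$.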
Finally, we define
\[
E_{max} = \sup\{L: \Lambda_0 (L)<+\infty\}.
\]
\begin{thm}\label{thm:main2}
Consider $E_{max}$ defined above, if $\mu = +1$ (the defocusing case), then $E_{max} = +\infty$; if $\mu = -1$ (the focusing case), then $E_{max}\geq \|W\|^2_{\dot{H}^1(\mathbb{R}^4)}$.
\end{thm}
\begin{cor}
Suppose $u$ is a solution of (\ref{eq:NLS}) in some time interval with the initial data $u_0\in H^1(\mathbb{T}^4)$.
\begin{enumerate}
\item (the defocusing case) If $\mu = +1$ and $\|u_0\|_{H^1(\mathbb{T}^4)}<+\infty$, then $u$ is a global solution.
\item (the focusing case) If $\mu = -1$ and under the assumption that
\begin{equation*}
\sup_{t}\|u(t)\|_{\dot{H}^1(\mathbb{T}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)},
\end{equation*} then $u$ is a global solution.
\end{enumerate}
\end{cor}
\begin{proof}[Proof of Theorem \ref{thm:main2}]
Suppose for contradiction that $E_{max}<+\infty$ (if $\mu = +1$), or $E_{max} < \|W\|^2_{\dot{H}^1(\mathbb{R}^4)}$ (if $\mu = -1$). By the definition of $E_{max}$, there exists
a sequence of solutions $u_k$ such that
\begin{equation}\label{6.2}
\begin{cases}
E(u_k)+M(u_k),\ (\text{if } \mu = +1)\\
\sup_{t\in [-T_k, T^k]}\|u_k(t)\|^2_{\dot{H}^1(\mathbb{T}^4)},\ (\text{if } \mu = -1)
\end{cases}\to E_{max},
\quad \|u_k\|_{Z(-T_k, 0)},\ \|u_k\|_{Z(0,T^k)}\to +\infty.
\end{equation}
for some $T_k,\ T^k\to 0$ as $k\to +\infty$. For the simplicity of notations, set
\[
L(\phi) := \begin{cases}
E(\phi)+M(\phi),\ (\text{if } \mu = +1),\\
\sup_{t\in [-T_k, T^k]}\|u_{\phi}(t)\|^2_{\dot{H}^1(\mathbb{T}^4)},\ (\text{if } \mu = -1),
\end{cases}
\]
where $u_{\phi}(t)$ is the solution of (\ref{eq:NLS}) with initial data $u_{\phi}(0) =\phi$.
By Proposition \ref{prop:ProfileDecomposition}, after extracting a subsequence, (\ref{6.2}) gives a sequence of profiles $\widetilde{\psi}^\alpha_k$, where $\alpha, k = 1, 2,\cdots$, and a decomposition
\begin{equation*}
u_k(0) = g + \sum_{1\leq\alpha\leq J}
\widetilde{\psi}^\alpha_k + R^J_k.
\end{equation*}
satisfying
\begin{equation}\label{63}
\limsup_{J\to\infty}\limsup_{k\to\infty} \|e^{it\Delta}R_k^J\|_{Z(I_k)} =0.
\end{equation}
Moreover, by the almost orthogonality in Proposition \ref{prop:ProfileDecomposition} and the almost orthogonality of nonlinear profiles (Lemma \ref{prop:almostorth}), we obtain that
\begin{equation}\label{eq:EnergyDecoupling}
\begin{split}
L(\alpha) := \lim_{k\to+\infty} L(\widetilde{\psi}^\alpha_{\mathcal{O}^\alpha_k})
\in [0,E_{max}],\\
\lim_{J\to J^*} \left( \sum_{1\leq \alpha \leq J}
L(\alpha)
+\lim_{k\to\infty} L(R_k^J)\right) + L(g)= E_{max},
\end{split}
\end{equation}
\case{1}{$g\neq 0$ and there are no Euclidean profiles.}
In this case, by Remark \ref{rmk:EnergySimSobolev}, $\|g\|_{H^1(\mathbb{T}^4)}\lesssim L(g)\leq E_{max}$. Then, since $|I_k|\to 0$ as $k\to \infty$, there exists $\eta >0$
such that for $k$ large enough
\[
\|e^{it\Delta} u_k(0)\|_{Z(-T_k, T^k)} \leq \|e^{it\Delta} g\|_{Z(-\eta, \eta)}
+ \varepsilon \leq \delta_0,
\]
where $\delta_0$ is given by the local theory in Proposition \ref{prop:lwp}. In this case
we conclude that $\|u_k\|_{Z(-T_k, T^k)}\leq 2\delta_0$, which contradicts (\ref{6.2}).
\case{2}{$g=0$ and there is only one Euclidean profile $\widetilde{\psi}^1_k$, with $L(1) = E_{max}$.}
By Remark \ref{rmk:EnergyDecoupling} and (\ref{eq:EnergyDecoupling}), we obtain that $L(\widetilde{\psi}^1_k) \leq E_{max}$, which implies $\|\psi^1\|_{\dot{H}^1(\mathbb{R}^4)}<\infty$ (if $\mu = +1$) or $\sup_{t}\|u_{\psi^1}\|_{\dot{H}^1(\mathbb{R}^4)}<\|W\|_{\dot{H}^1(\mathbb{R}^4)}$ (if $\mu = -1$). Denote by $U^1_k$ the solution of (\ref{eq:NLS}) with $U^1_k(0)= \widetilde{\psi}^1_k$.
In this case, we use part (1) of Proposition \ref{prop:4.4} and Remark \ref{rmk:EnergySimSobolev}:
given some $\epsilon>0$, for $k$ large enough, we have that
\begin{equation}\label{eq:case2}
\|U^1_k\|_{X^1(-T_k, T^k)}\leq \|U^1_k\|_{X^1(-\delta, \delta)} \lesssim 1, \qquad \text{and}\qquad \|U^1_k(0) - u_k(0)\|_{H^1(\mathbb{T}^4)}\leq \epsilon.
\end{equation}
By (\ref{eq:case2}) and Proposition \ref{prop:stability}, we obtain that
\[
\|u_k\|_{Z(I_k)} \lesssim \|u_k\|_{X^1(I_k)} \lesssim 1,
\]
which contradicts (\ref{6.2}).
\case{3}{At least two profiles are nonzero.}
By (\ref{eq:EnergyDecoupling}), $L(g)<E_{max}$ and $L(\alpha)<E_{max}$ for every $\alpha = 1, 2,\cdots$.
By almost orthogonality and after relabeling the profiles, we can assume that for all $\alpha$,
\[L(\alpha)\leq L(1)< E_{max} -\eta, \ L(g)<E_{max} -\eta, \text{ for some }\eta>0.\]
Define $U_k^\alpha$ as the maximal life-span solution of (\ref{eq:NLS}) with
initial data $U_k^\alpha(0) = \widetilde{\psi}^\alpha_k$
and $G$ to be the maximal life-span solution of (\ref{eq:NLS}) with initial
data $G(0) =g$.
By the definition of $\Lambda$ and the hypothesis $E_{max}<\infty$ (if $\mu = +1$) and $E_{max}< E_{W}$ (if $\mu = -1$), we have
\[
\|G\|_{Z(-1,1)} + \lim_{k\to\infty} \|U_k^\alpha\|_{Z(-1,1)} \leq
2\Lambda(E_{max}-\eta/2, 2) \lesssim 1.
\]
By Proposition \ref{lem:cgwp}, it follows that for any $\alpha$ and any
$k>k_0(\alpha)$ sufficiently large,
\[
\|G\|_{X^1(-1,1)} +\|U^\alpha_k\|_{X^1(-1, 1)}\lesssim 1.
\]
For $J, k \geq 1$, we define
\[
U_{prof, k}^J:= G + \sum_{\alpha =1}^J U_k^\alpha = \sum_{\alpha = 0}^J U_k^\alpha.
\]
where we set that $U_k^0 := G$.
\noindent\textit{\textbf{Claim} that there is a constant $Q$ such that
\begin{equation}\label{6.6}
\|U^J_{prof, k}\|^2_{X^1(-1,1)} +\sum_{\alpha=0}^J \|U_k^\alpha\|^2_{X^1(-1,1)}
\leq Q^2,
\end{equation}
uniformly in $J$.
}
From (\ref{eq:EnergyDecoupling}) we know that there are only finitely many profiles with
$L(\alpha)\geq \frac{\delta_0}{2}$. We may assume that for all $\alpha\geq A$,
$L(\alpha) \leq\delta_0$. For $U_k^\alpha$ with $\alpha\geq A$, by the small
data GWP result (Proposition \ref{prop:smallDataGWP}), we have that
\begin{align*}
&\|U^J_{prof, k}\|_{X^1(-1,1)} = \|\sum_{0\leq\alpha\leq J}
U^\alpha_k\|_{X^1(-1,1)} \\
\leq &\sum_{0\leq \alpha\leq A} \|U^\alpha_k\|_{X^1(-1,1)}
+\|\sum_{A\leq \alpha\leq J} (U^\alpha_k -e^{it\Delta}U^\alpha_k(0))\|_{X^1(-1,1)}
+\|e^{it\Delta} \sum_{A\leq \alpha \leq J} U^\alpha_k(0)\|_{X^1(-1,1)}\\
\lesssim& (A+1)+\sum_{A\leq\alpha\leq J} \|U^\alpha_k(0)\|^2_{H^1} +
\|\sum_{A\leq\alpha\leq J} U_k^{\alpha}(0)\|_{H^1}\\
\lesssim& (A+1) + \sum_{A\leq\alpha\leq J} L(\alpha) + E_{max}^{\frac{1}{2}}\\
\lesssim& 1.
\end{align*}
And also similarly, we have that
\begin{align*}
\sum_{\alpha = 0}^{J} \|U_k^\alpha\|^2_{X^1(-1,1)} &=
\sum_{\alpha =0}^{A-1} \|U_k^\alpha\|^2_{X^1(-1,1)} +
\sum_{A\leq\alpha\leq J}\|U_k^\alpha\|^2_{X^1(-1,1)} \\
&\lesssim A + \sum_{A\leq\alpha\leq J} L(\alpha)\\
&\lesssim 1.
\end{align*}
We denote
\[
U^J_{app, k} := \sum_{0\leq \alpha\leq J} U^\alpha_k + e^{it\Delta}R_k^J,
\]
which solves the approximate equation (\ref{eq:aNLS}) with
the error term:
\begin{align*}
e &= (i\partial_t +\Delta) U_{app, k}^J - F(U^J_{app, k})\\
& = \sum_{0\leq \alpha\leq J} F(U_k^\alpha) - F(\sum_{0\leq \alpha\leq J} U^\alpha_k
+e^{it\Delta}R_k^J),
\end{align*}
where $F(u) = u|u|^2.$
From (\ref{6.6}) and the boundedness of $e^{it\Delta}R_k^J$ in $X^1$, we know that
$\|U_{app, k}^J\|_{X^1(-1,1)}\lesssim Q$.
By Lemma \ref{lem:6.2} (proven later), we obtain that
\[
\limsup_{k\to\infty} \|e\|_{N(I_k)} \leq \varepsilon/2, \text{ for } J\geq J_0(\varepsilon).
\]
We use the stability proposition (Proposition \ref{prop:stability}) to conclude that
$u_k$ satisfies
\[
\|u_k\|_{X^1(I_k)} \lesssim \|U_{app, k}^J\|_{X^1(I_k)}\leq
\|U^J_{prof, k}\|_{X^1(-1,1)} + \|e^{it\Delta}R_k^J\|_{X^1(-1,1)} \lesssim 1,
\]
which contradicts (\ref{6.2}).
which contradicts (\ref{6.2}).
\end{proof}
\begin{lem}\label{lem:6.2}
With the same notation, we obtain that
\begin{equation}
\limsup_{J\to\infty}\limsup_{k\to \infty}
\|\sum_{0\leq \alpha\leq J} F(U_k^\alpha) -
F(\sum_{0\leq \alpha\leq J} U^\alpha_k
+e^{it\Delta}R_k^J)\|_{N(I_k)} = 0.
\end{equation}
\end{lem}
\section{Proof of Lemma \ref{lem:6.2}}
Consider
\begin{align*}
&\|\sum_{0\leq \alpha\leq J} F(U_k^\alpha) -
F(U_{prof, k}^J
+e^{it\Delta}R_k^J)\|_{N(I_k)} \\
\leq&
\|F(U^J_{prof, k}+e^{it\Delta}R_k^J))-F(U^J_{prof, k})\|_{N(I_k)}+
\|F(U^J_{prof, k}) -\sum_{0\leq \alpha\leq J} F(U_k^\alpha)\|_{N(I_k)}.
\end{align*}
It will suffice to prove
\begin{equation}\label{eq:6.8}
\limsup_{J\to\infty}\limsup_{k\to \infty}
\|F(U^J_{prof, k}+e^{it\Delta}R_k^J))-F(U^J_{prof, k})\|_{N(I_k)} = 0,
\end{equation}
and
\begin{equation}\label{eq:6.9}
\limsup_{J\to\infty}\limsup_{k\to \infty}
\|F(U^J_{prof, k}) -\sum_{0\leq \alpha\leq J} F(U_k^\alpha)\|_{N(I_k)} = 0.
\end{equation}
Before proving (\ref{eq:6.8}) and (\ref{eq:6.9}), we need several lemmas.
\begin{lem}[Decomposition of $U_k^\alpha$]\label{lem:decomposition}
Let $U^\alpha_k$ be the nonlinear profiles defined above. For any
$\theta>0$, there exists $T_{\theta,\alpha}^0$ sufficiently large such that
for all $T_{\theta, \alpha}\geq T_{\theta, \alpha}^0$ there is $R_{\theta, \alpha}$
sufficiently large such that for all $k$ large enough
(depending on $R_{\theta,\alpha}$) we can decompose $U^\alpha_k$ as following:
\[ \mathds{1}_{(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}(t)
U^\alpha_k = \omega_k^{\alpha,\theta,-\infty}+\omega_k^{\alpha,\theta,+\infty}
+\omega_k^{\alpha,\theta}+\rho^{\alpha, \theta}_k,\]
and $\omega_k^{\alpha,\theta,\pm\infty}$, $\omega_k^{\alpha,\theta}$,
and $\rho^{\alpha, \theta}_k$ satisfy the following
conditions:
\begin{equation}\label{eq:decomposition}
\begin{split}
\|\omega_k^{\alpha,\theta,\pm\infty}\|_{Z'(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}
+\|\rho^{\alpha, \theta}_k\|_{X^1(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}\leq \theta,\\
\|\omega_k^{\alpha,\theta,\pm\infty}\|_{X^1(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}+
\|\omega_k^{\alpha,\theta}\|_{X^1(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}\lesssim 1,\\
\omega_k^{\alpha, \theta,\pm\infty} = P_{\leq R_{\theta,\alpha}N_{k,\alpha}}
\omega_k^{\alpha, \theta,\pm\infty}\\
|\nabla_x^m \omega_k^{\alpha, \theta}|+(N_{k, \alpha})^{-2}\mathds{1}_{S_k^{\alpha, \theta}}
|\partial_t \nabla_x^m \omega_k^{\alpha, \theta}|\leq R_{\theta, \alpha}
(N_{k,\alpha})^{|m|+1}\mathds{1}_{S_k^{\alpha, \theta}},\ 0\leq |m| \leq 10,
\end{split}
\end{equation}
where
\[
S_k^{\alpha,\theta} :=\{ (x,t)\in \mathbb{T}^4\times (-T_{\theta,\alpha}, T_{\theta,\alpha})
: |t-t_{k,\alpha}|< T_{\theta,\alpha}(N_{k,\alpha})^{-2},\ |x-x_{k,\alpha}|\leq R_{\theta, \alpha}(N_{k,\alpha})^{-1}\}.
\]
\end{lem}
\begin{proof}
First, consider $\alpha = 0$. Set $G = U^0_k := \omega_k^{0,\theta,-\infty}+\omega_k^{0,\theta,+\infty}
+\omega_k^{0,\theta}+\rho^{0, \theta}_k$ where $\rho_{k}^{0, \theta} =
\omega_k^{0,\theta} = 0$ and $\omega_k^{0,\theta,+\infty}=\omega_k^{0,\theta,-\infty}=\frac{1}{2}G$.
By taking $T_{\theta,0}$ large, it is easy to ensure $\|G\|_{Z'(-T_{\theta,0}^{-1},T_{\theta,0}^{-1})}\leq \theta$.
For a fixed $\alpha$ which is not $0$, by Proposition \ref{prop:4.4}, there exists
$T(\phi^\alpha, \frac{\theta}{4})$, such that for all
$T\geq T(\phi^\alpha, \frac{\theta}{4})$, there exists $R(\phi^\alpha,\frac{\theta}{4},T)$
such that for all $R\geq R(\phi^\alpha,\frac{\theta}{4},T)$, there holds that
\begin{equation*}
\|U^\alpha_k - \widetilde{u_k^\alpha}\|_{X^1(\{
|t-t_k^\alpha|\leq T(N_{k,\alpha})^{-2}\}\cap\{
|t|<T^{-1}
\})}\leq \frac{\theta}{2},
\end{equation*}
for $k$ large enough, where
\begin{equation*}
\left(\pi_{-x_k^\alpha} \widetilde{u_k^\alpha}\right)(x,t)
=N_{k,\alpha} \eta(N_{k,\alpha} \Psi^{-1}(x)/R)
u(N_{k,\alpha} \Psi^{-1}(x), N_{k,\alpha}^2(t-t_k^\alpha)),
\end{equation*}
where $u$ is a solution of (\ref{eq:NLS}) with scattering data $\phi^{\pm\infty}$.
In addition, up to subsequence,
\begin{equation*}
\|U_k^\alpha - \Pi_{t_k^\alpha-t, x_k^\alpha} T_{N_{k,\alpha}} \phi^{\pm\infty,\alpha}
\|_{X^1(\{\pm(t-t_k^\alpha)\geq T(N_k^\alpha)^{-2}\}\cap\{
|t|\leq T^{-1}
\})}\leq \frac{\theta}{4},
\end{equation*}
for $k$ large enough (depending on $\phi^\alpha$, $\theta$, $T$, and $R$).
Choose a sufficiently large $T_{\theta,\alpha}> T(\phi^\alpha,\frac{\theta}{4})$,
based on the extinction lemma (Lemma \ref{lem:extinction}), such that
\begin{equation*}
\|e^{it\Delta} \Pi_{t_k^\alpha, x_k^\alpha} T_{N_k^\alpha} \phi^{\pm\infty, \alpha}
\|_{Z(T_{\theta,\alpha}(N_k^\alpha)^{-2}, T_{\theta,\alpha}^{-1})}\leq \frac{\theta}{4}
\end{equation*}
for $k$ large enough.
And then we choose $R_{\theta, \alpha} = R(\phi^{\alpha}, \frac{\theta}{2}, T_{\theta, \alpha})$.
Denote:
\begin{enumerate}
\item $\omega_k^{\alpha,\theta,\pm\infty} := \mathds{1}_{\{
\pm(t-t_k^\alpha)\geq T_{\theta,\alpha}(N_k^\alpha)^{-2},|t|\leq T_{\theta,\alpha}^{-1}\}}
\left(\Pi_{t_k^\alpha-t, x_k^\alpha} T_{N^\alpha_k}\phi^{\alpha,\theta,\pm\infty}
\right)$,
{where }
\[
\|\phi^{\alpha,\theta,\pm\infty}\|_{\dot{H}^1(\mathbb{R}^4)}\lesssim 1,
\ \phi^{\alpha,\theta,\pm\infty} = P_{\leq R_{\theta,\alpha}} (\phi^{\alpha,\theta,\pm\infty}),
\]
which implies $\omega_k^{\alpha, \theta,\pm\infty} = P_{\leq R_{\theta,\alpha}N_{k,\alpha}}
\omega_k^{\alpha, \theta,\pm\infty}$.
\item $\omega_k^{\alpha,\theta} := \widetilde{u_k^\alpha}\cdot \mathds{1}_{S_k^{\alpha, \theta}},$
where $
S_k^{\alpha,\theta} :=\{ (x,t)\in \mathbb{T}^4\times (-T_{\theta,\alpha}, T_{\theta,\alpha})
: |t-t_k^\alpha|< T_{\theta,\alpha}(N_k^\alpha)^{-2},\ |x-x_k^\alpha|\leq R_{\theta, \alpha}(N_k^\alpha)^{-1}\}$.
\noindent By the stability property (Proposition \ref{prop:stability}) and Theorem \ref{thm:4.2}, we can adjust
$\omega_k^{\alpha,\theta}$ and $\omega_k^{\alpha,\theta,\pm\infty}$, with an acceptable
error, to make
\[|\nabla_x^m \omega_k^{\alpha, \theta}|+(N_k^\alpha)^{-2}\mathds{1}_{S_k^{\alpha, \theta}}
|\partial_t \nabla_x^m \omega_k^{\alpha, \theta}|\leq R_{\theta, \alpha}
(N_k^\alpha)^{|m|+1}\mathds{1}_{S_k^{\alpha, \theta}},\ 0\leq |m| \leq 10.\]
\item $\rho_k^{\alpha,\theta}:= \mathds{1}_{(-T_{\theta,\alpha}^{-1},T_{\theta,\alpha}^{-1})}(t)
U_k^{\alpha} -\omega^{\alpha,\theta}_k -\omega^{\alpha,\theta,+\infty}_k-\omega^{\alpha,\theta,-\infty}_k$.
\end{enumerate}
By (\ref{4.4.2}) and (\ref{4.4.3}), we obtain that
\[
\|\rho_k^{\alpha,\theta}\|_{X^1(\{|t|<T_{\theta,\alpha}^{-1}\})}\leq \frac{\theta}{2},
\]
and then we have
\begin{align*}
\|\omega_k^{\alpha,\theta,\pm\infty}\|_{Z'(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}
+\|\rho^{\alpha, \theta}_k\|_{X^1(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}\leq \theta,\\
\|\omega_k^{\alpha,\theta,\pm\infty}\|_{X^1(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}+
\|\omega_k^{\alpha,\theta}\|_{X^1(-T_{\theta, \alpha}^{-1},T_{\theta, \alpha}^{-1})}\lesssim 1.
\end{align*}
\end{proof}
We denote by $\mathfrak{D}_{p,q}(a,b)$ a $(p+q)$-linear
expression with $p$ factors consisting of either $\overline{a}$ or $a$
and $q$ factors consisting of either $\overline{b}$ or $b$.
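For example, $\mathfrak{D}_{2,1}(a,b)$ may stand for any of the expressions $a^2\overline{b}$, $|a|^2 b$ or $\overline{a}^2 b$; in particular, for $F(u)=u|u|^2$, every term in the expansion of $F(a+b)-F(a)-F(b)$ is of the form $\mathfrak{D}_{2,1}(a,b)$ or $\mathfrak{D}_{1,2}(a,b)$.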
\begin{lem}[a high-frequency linear solution does not interact significantly
with a low-frequency profile]\label{lem:7.1}
Assume that $B, N\geq 2$, and dyadic numbers, and assume that
$\omega :\mathbb{T}^4\times(-1, 1)\to \mathbb{C}$ is a function satisfying
\[
|\nabla^j \omega|\leq N^{j+1} \mathds{1}_{|x|\leq N^{-1}, |t|\leq N^{-2}},\ j=0, 1.
\]
Then we have that
\[
\|\mathfrak{D}_{2,1}(\omega, e^{it\Delta} P_{>BN} f)\|_{N(-1,1)}\lesssim
(B^{-1/{200}}+N^{-1/{200}}) \|f\|_{H^1(\mathbb{T}^4)}.
\]
\end{lem}
\begin{proof}
We may assume that $\|f\|_{H^1(\mathbb{T}^4)} = 1$ and $f = P_{>BN} f$.
By Proposition \ref{prop:dual}, we obtain that
\begin{align*}
&\|\mathfrak{D}_{2,1}(\omega, e^{it\Delta} P_{>BN} f)\|_{N(I)}\\
\leq & \|\mathfrak{D}_{2,1}(\omega, e^{it\Delta} P_{>BN} f)\|_{L^1((-1,1), H^1)}\\
\lesssim& \|\mathfrak{D}_{2,1}(\omega, \nabla e^{it\Delta}f)\|_{L^1((-1,1), L^2)}
+\|e^{it\Delta}f\|_{L^\infty_tL^2_x} \|\omega\|_{L^2_tL^\infty_x}\||\nabla \omega|+|\omega|\|_{L^2_tL^\infty_x}\\
\lesssim & \|\mathfrak{D}_{2,1}(\omega, \nabla e^{it\Delta}f)\|_{L^1((-1,1),L^2)}+ B^{-1}.
\end{align*}
(It is easy to check that $\|\omega\|_{L^2_tL_x^\infty} \leq \left( \int_{-N^{-2}}^{N^{-2}}
N^2\, dt\right)^{\frac{1}{2}} =\sqrt{2}$, $\|\nabla\omega\|_{L^2_tL_x^\infty}\leq \left(
\int_{-N^{-2}}^{N^{-2}}N^4\,dt\right)^\frac{1}{2} = \sqrt{2}\,N$, and $\|P_{>BN} f\|_{L^2}\leq \frac{1}{BN} \|f\|_{H^1}$.)
Now we let $W(x,t):= N^4 \eta_{\mathbb{R}^4}(N\Psi^{-1}(x))\eta_{\mathbb{R}}(N^2 t)$,
\begin{align*}
&\|\mathfrak{D}_{2,1} (\omega, \nabla e^{it\Delta}f)\|^2_{L^1((-1,1),L^2)}\\
=&\left(\int_{-1}^1 \|\mathfrak{D}_{2,1}(\omega, \nabla e^{it\Delta}f)\|_{L^2} \, dt\right)^2\\
\leq& \|\omega\|^4_{L^4_tL_x^\infty}\,\|\tfrac{1}{N^2}W^{\frac{1}{2}}\nabla e^{it\Delta} f\|^2_{L^2(\mathbb{T}^4\times[-1,1])}\\
\lesssim&N^{-2} \|W^{\frac{1}{2}} \nabla e^{it\Delta} f\|^2_{L^2(\mathbb{T}^4\times[-1,1])}\\
=& N^{-2}\sum_{j=1}^4 \int_{-1}^1\langle e^{it\Delta}\partial_j f, We^{it\Delta}\partial_j f\rangle_{L^2\times L^2}\, dt\\
=& N^{-2}\sum_{j=1}^4
\langle \partial_j f, \Big[\int_{-1}^1 e^{-it\Delta}We^{it\Delta}\,dt\Big]\partial_j f\rangle_{L^2\times L^2}.
\end{align*}
It remains to prove that
\[
\|K\|_{L^2(\mathbb{T}^4)\to L^2(\mathbb{T}^4)} \lesssim N^2(B^{-\frac{1}{100}}+N^{-\frac{1}{100}}),
\]
where $K = P_{>BN} \int_{\mathbb{R}} e^{-it\Delta}W e^{it\Delta} P_{>BN}\,dt$.
We compute the Fourier coefficients of $K$ as follows:
\begin{align*}
c_{p,q} &= \langle e^{ip\cdot x}, Ke^{iq\cdot x}\rangle\\
& =\int_{\mathbb{T}^4} \overline{P_{>BN}e^{ip\cdot x}}\, \Big(\int_\mathbb{R} e^{-it\Delta}W e^{it\Delta} P_{>BN}e^{iq\cdot x}\,dt\Big)\,
dx\\
&= (1-\eta_{\mathbb{R}^4})(p/{BN})\, (1-\eta_{\mathbb{R}^4})(q/{BN})
\int_{\mathbb{T}^4} \overline{e^{ip\cdot x}}\Big(\int_\mathbb{R} e^{-it\Delta}W e^{it\Delta} e^{iq\cdot x}\,dt\Big)\,
dx\\
&=(1-\eta_{\mathbb{R}^4})(p/{BN})\, (1-\eta_{\mathbb{R}^4})(q/{BN})
\int_{\mathbb{T}^4\times[-1,1]} \overline{e^{-it|p|^2 +ip\cdot x}}\,W(x,t)\, e^{-it|q|^2+iq\cdot x}\,dx\,dt\\
& = (1-\eta_{\mathbb{R}^4})(p/{BN})\, (1-\eta_{\mathbb{R}^4})(q/{BN})\, \big(\mathcal{F}_{x,t}W\big)(p-q, |q|^2-|p|^2).
\end{align*}
Hence, we obtain that
\[
|c_{p,q}|\lesssim N^{-2}\left( 1+ \frac{\left||p|^2-|q|^2\right|}{N^2}\right)^{-10}
\left( 1+\frac{|p-q|}{N} \right)^{-10}\mathds{1}_{\{|p|\geq BN\}}\mathds{1}_{\{|q|\geq BN\}}.
\]
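This bound can be read off from the definition of $W$: identifying, as above, the relevant neighborhood of $\mathbb{T}^4$ with $\mathbb{R}^4$ via $\Psi$, the space-time Fourier transform of $W$ is
\[
\big(\mathcal{F}_{x,t}W\big)(\xi,\tau)
= N^4\cdot N^{-4}\,\widehat{\eta_{\mathbb{R}^4}}\!\left(\frac{\xi}{N}\right)\cdot N^{-2}\,\widehat{\eta_{\mathbb{R}}}\!\left(\frac{\tau}{N^2}\right)
= N^{-2}\,\widehat{\eta_{\mathbb{R}^4}}\!\left(\frac{\xi}{N}\right)\widehat{\eta_{\mathbb{R}}}\!\left(\frac{\tau}{N^2}\right),
\]
and since $\eta_{\mathbb{R}^4}$ and $\eta_{\mathbb{R}}$ are smooth cutoff functions, their Fourier transforms decay faster than any polynomial; evaluating at $\xi = p-q$ and $\tau = |q|^2-|p|^2$ gives the stated estimate.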
By Schur's lemma,
\[
\|K\|_{L^2(\mathbb{T}^4)\to L^2(\mathbb{T}^4)}\lesssim \sup_{p\in\mathbb{Z}^4}
\sum_{q\in\mathbb{Z}^4}|c_{p,q}|+\sup_{q\in\mathbb{Z}^4}\sum_{p\in\mathbb{Z}^4}|c_{p,q}|.
\]
It suffices to prove that
\begin{equation}\label{eq:7.3}
N^{-4}\sup_{|p|\geq BN} \sum_{v\in\mathbb{Z}^4}
\left( 1+ \frac{\left||p|^2-|p+v|^2\right|}{N^2}\right)^{-10}
\left( 1+\frac{|v|}{N} \right)^{-10}\lesssim B^{-\frac{1}{100}} + N^{-\frac{1}{100}}
\end{equation}
Consider (\ref{eq:7.3}) in the following 3 cases.
\case{1}{$|v|\geq NB^{\frac{1}{100}}$:}
\begin{align*}
&N^{-4}\sum_{|v|\geq NB^{\frac{1}{100}}}
\left( 1+ \frac{\left||p|^2-|p+v|^2\right|}{N^2}\right)^{-10}
\left( 1+\frac{|v|}{N} \right)^{-10}\\
\lesssim& N^{-4}\sum_{|v|\geq NB^{\frac{1}{100}}}\left( 1+\frac{|v|}{N} \right)^{-10}\\
\lesssim& N^{-4}\int_{|v|\geq NB^{\frac{1}{100}},\, v\in\mathbb{R}^4} \left( 1+\frac{|v|}{N} \right)^{-10}\,dv\\
\lesssim& \left( 1+B^{\frac{1}{100}} \right)^{-6}\\
\lesssim& B^{-\frac{6}{100}}.
\end{align*}
\case{2}{$|v|\leq NB^{\frac{1}{100}}$ and $|v\cdot p|\geq N^2 B^{\frac{1}{10}}$:}
\begin{align*}
&N^{-4}\sum_{|v|\leq NB^{\frac{1}{100}}\atop |v\cdot p|\geq N^2 B^{\frac{1}{10}}}
\left( 1+ \frac{\left||p|^2-|p+v|^2\right|}{N^2}\right)^{-10}
\left( 1+\frac{|v|}{N} \right)^{-10}\\
\lesssim &N^{-4}\sum_{|v|\leq NB^{\frac{1}{100}}\atop |v\cdot p|\geq N^2 B^{\frac{1}{10}}}
\left(
1 + \frac{|v\cdot p|}{N^2}
\right)^{-10}\\
\lesssim & N^{-4}\,(NB^{\frac{1}{100}})^4\,( 1+ B^{\frac{1}{10}})^{-10} \lesssim B^{-\frac{6}{10}}.
\end{align*}
Here we used that $|v|^2\leq N^2B^{\frac{2}{100}}\leq |v\cdot p|$ in this range, so that $\big||p|^2-|p+v|^2\big| = |2p\cdot v+|v|^2|\gtrsim |v\cdot p|$.
\case{3}{$|v|\leq NB^{\frac{1}{100}}$ and $|v\cdot p|\leq N^2 B^{\frac{1}{10}}$:}
Denote $\hat{p} = \frac{p}{|p|}$. Then
\begin{align*}
&N^{-4}\sup_{|p|\geq BN} \sum_{|v|\leq NB^{\frac{1}{100}}\atop |p\cdot v|\leq N^2B^{\frac{1}{10}}}
\left( 1+ \frac{\left||p|^2-|p+v|^2\right|}{N^2}\right)^{-10}
\left( 1+\frac{|v|}{N} \right)^{-10}\\
\leq & N^{-4}\sup_{|p|\geq BN} \sum_{|v|\leq NB^{\frac{1}{100}}\atop |\hat{p}\cdot v|\leq NB^{-\frac{9}{10}}}
\left( 1+ \frac{\left||p|^2-|p+v|^2\right|}{N^2}\right)^{-10}
\left( 1+\frac{|v|}{N} \right)^{-10}\\
\leq & N^{-4}\sup_{|p|\geq BN} \sum_{|v|\leq NB^{\frac{1}{100}}\atop |\hat{p}\cdot v|\leq NB^{-\frac{9}{10}}} 1\\
\leq & N^{-4}\sup_{|p|\geq BN} \# \{v: |v|\leq NB^{\frac{1}{100}},\ |\hat{p}\cdot v|\leq NB^{-\frac{9}{10}}\}\\
\lesssim& N^{-4} (NB^{\frac{1}{100}})^3NB^{-\frac{9}{10}}\\
\leq & B^{-\frac{87}{100}}.
\end{align*}
\end{proof}
\begin{lem}\label{lem:7.2}
Assume that $\mathfrak{O}_\alpha = (N_{k,\alpha},t_{k,\alpha}, x_{k,\alpha})_k
\in \mathcal{F}_e, \ \alpha\in\{1, 2\}$, are two orthogonal frames,
$I\subseteq (-1, 1)$ is a fixed open interval with $0\in I$, and $T_1$, $T_2$,
$R\in [1,\infty)$ are fixed numbers with $R\geq T_1 + T_2$.
Assume that $(\omega_{k,1}, \omega_{k,2}, f_k)_k$ are three sequences of functions such that
$\|f_k\|_{X^1(I)}\leq 1$ and, for $\alpha \in\{1, 2\}$ and $k$ large enough,
\[
|\nabla_x^m \omega_{k,\alpha}|+(N_{k, \alpha})^{-2}\mathds{1}_{S_{k,\alpha}}
|\partial_t \nabla_x^m \omega_{k,\alpha}|\leq R
(N_{k,\alpha})^{|m|+1}\mathds{1}_{S_{k,\alpha}},\ 0\leq |m| \leq 10,
\]
where
\[
S_{k,\alpha} :=\{ (x,t)\in \mathbb{T}^4\times I
: |t-t_{k,\alpha}|< T_{\alpha}(N_{k,\alpha})^{-2},\ |x-x_{k,\alpha}|\leq R(N_{k,\alpha})^{-1}\}.
\]
Then
\[
\limsup_{k\to\infty} \|\omega_{k, 1}\, \omega_{k, 2}\, f_k\|_{N(I)} = 0.\]
\end{lem}
\begin{proof}
Let $\varepsilon > 0$ be small.
If $N_{k,1}/N_{k,2} + N_{k,2}/N_{k,1} \leq \varepsilon^{-1000}$ and
$k$ is large enough, then $S_{k,1}\cap S_{k,2} = \emptyset$. (By the definition of
orthogonality of frames, $N_{k,1}/N_{k,2} + N_{k,2}/N_{k,1} \leq \varepsilon^{-1000}$ implies
$N_{k,1}^2|t_{k,1}-t_{k,2}|\to \infty$ or $N_{k,1}|x_{k,1}-x_{k,2}|\to\infty$, so
$S_{k,1}\cap S_{k,2} = \emptyset$.)
In this case, $\omega_{k, 1}\, \omega_{k, 2}\, f_k\equiv 0$.
Otherwise, by symmetry we may assume $N_{k,1}/N_{k,2} \geq \varepsilon^{-1000}/2$.
Denote that
\[
\omega_{k,1}\omega_{k,2} = \omega_{k,1}\widetilde{\omega}_{k,2}:=
\omega_{k,1}\cdot (\omega_{k,2}\,\mathds{1}_{(t_{k,1}-T_1N_{k,1}^{-2},\ t_{k,1}+T_1N_{k,1}^{-2})}(t)).
\]
\noindent\textit{\textbf{Claim $\dagger$} For $k$ large enough,
\begin{enumerate}
\item $\|\widetilde{\omega}_{k,2}\|_{X^1(I)}\lesssim_R 1$;
\item $\|P_{>\varepsilon^{-10}N_{k,2}} \widetilde{\omega}_{k,2}\|_{X^1(I)}\lesssim_R \varepsilon$;
\item $\|\widetilde{\omega}_{k,2}\|_{Z(I)}\lesssim_R \varepsilon$;
\item $\|\omega_{k,1}\|_{X^1(I)}\lesssim_R 1$;
\item $\|P_{\leq \varepsilon^{10}N_{k,1}} \omega_{k,1}\|_{X^1(I)}\lesssim_R \varepsilon$.
\end{enumerate}
}
By this \textit{Claim $\dagger$}, Proposition \ref{prop:nonlinear}, and $\varepsilon^{10}N_{k,1}\gg\varepsilon^{-10}N_{k,2}$, we obtain that
\begin{align*}
\|\omega_{k,1}\omega_{k,2}f_k\|_{N(I)} \leq& \|(P_{\leq \varepsilon^{10}N_{k,1}} \omega_{k,1})
(\widetilde{\omega}_{k,2}) f_k\|_{N(I)}
+\|(P_{> \varepsilon^{10}N_{k,1}} \omega_{k,1})(P_{>\varepsilon^{-10} N_{k,2}}\widetilde{\omega}_{k,2})
f_k\|_{N(I)}\\
&+\|(P_{> \varepsilon^{10}N_{k,1}} \omega_{k,1})(P_{\leq\varepsilon^{-10} N_{k,2}}\widetilde{\omega}_{k,2})
f_k\|_{N(I)}\\
\lesssim_R& \varepsilon.
\end{align*}
We now give more details on \textit{Claim $\dagger$}.
\noindent (1) Consider $\widetilde{\omega}_{k,2} = \omega_{k,2}\,\mathds{1}_{(t_{k,1}-T_1N_{k,1}^{-2},\ t_{k,1}+T_1N_{k,1}^{-2})}(t)$.
\begin{align*}
\|\widetilde{\omega}_{k,2}\|_{X^1(I)}&\lesssim \|\widetilde{\omega}_{k,2}(0)\|_{H^1}
+\left(\sum_{N} \|P_N(i\partial_t +\Delta) \widetilde{\omega}_{k,2}\|^2_{L^1_t([0,1],H^1)}
\right)^{\frac{1}{2}}\\
&\lesssim\left( \int_{|x-x_{k,2}|\leq RN_{k,2}^{-1}} |\langle \nabla \rangle \widetilde{\omega}_{k,2}(0)
|^2\,dx \right)^{\frac{1}{2}}\\
&+ \left(\sum_{N} \left( \int_I dt
\|P_N (\partial_t \widetilde{\omega}_{k,2})\|_{H^1} + \|P_N \Delta \widetilde{\omega}_{k,2}\|_{H^1}
\right)^2 \right)^{\frac{1}{2}}\\
&\lesssim (R^2 N_{k,2}^4 R^4 N_{k,2}^{-4})^{\frac{1}{2}} + \int_I (\|\partial_t
\widetilde{\omega}_{k,2}\|_{H^1}+\|\Delta \widetilde{\omega}_{k,2}\|_{H^1})\, dt\\
&\lesssim_R 1.
\end{align*}
\noindent (2) Consider the high frequency part of $\widetilde{\omega}_{k,2}$.
\begin{align*}
&\|P_{>\varepsilon^{-10}N_{k,2}} \widetilde{\omega}_{k,2}\|_{X^1(I)}\\
\lesssim&
\|P_{>\varepsilon^{-10}N_{k,2}}\widetilde{\omega}_{k,2}(0)\|_{H^1}
+\left(\sum_{N>\varepsilon^{-10}N_{k,2}} \|P_N(i\partial_t +\Delta) \widetilde{\omega}_{k,2}\|^2_{L^1_t([0,1],H^1)}
\right)^{\frac{1}{2}}\\
\leq&\left( \int_{|x-x_{k,2}|\leq RN_{k,2}^{-1}} |P_{>\varepsilon^{-10} N_{k,2}}\langle \nabla \rangle \widetilde{\omega}_{k,2}(0)
|^2\,dx \right)^{\frac{1}{2}}+\int \|P_{>\varepsilon^{-10}N_{k,2}} (i\partial_t+\Delta) \widetilde{\omega}_{k,2}\|_{H^1}\, dt\\
\leq & \left( \int_{|x-x_{k,2}|\leq RN_{k,2}^{-1}}\left(\frac{\varepsilon^{10}}{N_{k,2}}\right)^2
|P_{>\varepsilon^{-10} N_{k,2}}\langle \nabla \rangle^2 \widetilde{\omega}_{k,2}(0)
|^2\,dx \right)^{\frac{1}{2}}+\int_{|t-t_{k,2}|<N^{-2}_{k,2} R} \frac{\varepsilon^{10}}{N_{k,2}}
\| (i\partial_t+\Delta) \widetilde{\omega}_{k,2}\|_{H^2}\, dt\\
\leq& \varepsilon^{10} R^3 + N_{k,2}^{-2}R\,\frac{\varepsilon^{10}}{N_{k,2}}(R^4 N_{k,2}^{-4}\, R^2 N_{k,2}^{10})^{\frac{1}{2}}\\
\lesssim& \varepsilon^{10} R^4.
\end{align*}
\noindent (3): Consider the $Z$-norm of $\widetilde{\omega}_{k,2}$.
\begin{align*}
\|\widetilde{\omega}_{k,2}\|_{Z(I)}&\leq
\left(\sum_N N^2 \|P_N \widetilde{\omega}_{k,2}\|^4_{L^4(\mathbb{T}^4\times
(t_{k,1}-RN_{k,1}^{-2}, t_{k,1}+RN_{k,1}^{-2}))}\right)^{1/4}\\
&\leq \left(
\sum_N \|\nabla^{\frac{1}{2}} P_N \widetilde{\omega}_{k,2}\|_{L^4_{t,x}}^4
\right)^{1/4}\\
&\lesssim \|\left(\sum_{N} |\nabla^{1/2} P_N \widetilde{\omega}_{k,2}|^4\right)^{1/4}\|_{L^4}\\
&\lesssim \|\left(\sum_{N} |\nabla^{1/2} P_N \widetilde{\omega}_{k,2}|^2\right)^{1/2}\|_{L^4}\\
&\lesssim \|\nabla^{1/2} \widetilde{\omega}_{k,2}\|_{L^4(\mathbb{T}^4\times
(t_{k,1}-RN_{k,1}^{-2}, t_{k,1}+RN_{k,1}^{-2}))}\\
&\lesssim R^{\frac{9}{4}} \left(\frac{N_{k,2}}{N_{k,1}} \right)^{\frac{1}{2}}\\
&\leq R^{\frac{9}{4}} \varepsilon^{500}.
\end{align*}
\noindent (4): Similar to (1).
\noindent (5):
\begin{equation}\label{eq:**}\begin{split}
&\|P_{\leq \varepsilon^{10}N_{k,1}} \omega_{k,1}\|_{X^1(I)}\\
\leq&\|P_{\leq \varepsilon^{10} N_{k,1}} \omega_{k,1}(0)\|_{H^1}+\int \|P_{\leq \varepsilon^{10}N_{k,1}}
(i\partial_t +\Delta) \omega_{k,1}\|_{H^1}\,dt\\
\lesssim& \varepsilon^{10} N_{k,1}\left( \|P_{\leq \varepsilon^{10} N_{k,1}} \omega_{k,1}(0)\|_{L^2}
+\int \|P_{\leq \varepsilon^{10}N_{k,1}}
(i\partial_t +\Delta) \omega_{k,1}\|_{L^2}\,dt
\right)\\
\lesssim& \varepsilon^{10} R^4.
\end{split}
\end{equation}
\end{proof}
\begin{proof}[Proof of (\ref{eq:6.8})]
\begin{align*}
&F(U_{prof,k}^J + e^{it\Delta}R_k^J) - F(U_{prof,k}^J)\\
=&\mathfrak{D}_{2,1}(U^J_{prof, k}, e^{it\Delta}R_k^J) +
\mathfrak{D}_{1,2}(U^J_{prof, k}, e^{it\Delta}R_k^J)+ |e^{it\Delta}R_k^J|^2 e^{it\Delta} R_k^J
\end{align*}
First, by the nonlinear estimate (Proposition \ref{prop:nonlinear}), we have
\begin{align*}
&\||e^{it\Delta}R_k^J|^2 e^{it\Delta} R_k^J\|_{N(I_k)}\\
\lesssim & \|e^{it\Delta}R_k^J\|^2_{Z'(I_k)}\|e^{it\Delta}R_k^J\|_{X^1(I_k)}
\end{align*}
Since $\|e^{it\Delta}R_k^J\|_{Z'(I_k)}\to 0$ as $J, k\to \infty$, and $\|e^{it\Delta}
R_k^J\|_{X^1(I_k)}\lesssim 1$,
\[
\limsup_{J\to\infty}\limsup_{k\to\infty} \||e^{it\Delta}R_k^J|^2 e^{it\Delta} R_k^J\|_{N(I_k)}
= 0.
\]
Second, also by the nonlinear estimate Proposition \ref{prop:nonlinear} and
Proposition \ref{prop:ZinX},
\begin{align*}
&\|\mathfrak{D}_{1,2}(U^J_{prof, k}, e^{it\Delta}R_k^J)\|_{N(I_k)}\\
\lesssim&
\|U_{prof,k}^J\|_{X^1(I_k)}\|e^{it\Delta}R_k^J\|_{X^1(I_k)}\|e^{it\Delta}R_k^J\|_{Z'(I_k)}
\to 0,
\end{align*}
as $k, J \to \infty.$
Third, consider
\[\|
\mathfrak{D}_{2,1}(U^J_{prof, k}, e^{it\Delta}R_k^J)
\|_{N(I_k)}.\]
Fix $\varepsilon>0$. There exists $A = A(\varepsilon)$ sufficiently large such that
for all $J\geq A$ and $k\geq k_0(J)$,
\[
\|U_{prof,k}^J - U_{prof,k}^A\|_{X^1(-1,1)}\leq \varepsilon.
\]
Then
\begin{align*}
&\|\mathfrak{D}_{2,1}(U^J_{prof, k}, e^{it\Delta}R_k^J)\|_{N(I_k)}\\
\leq& \|\mathfrak{D}_{2,1}(U^A_{prof, k}, e^{it\Delta}R_k^J)\|_{N(I_k)}+
\|\mathfrak{D}_{1,1,1}(U^A_{prof, k}, U^J_{prof, k}-U^A_{prof, k}, e^{it\Delta}R_k^J)\|_{N(I_k)}\\
&+ \|\mathfrak{D}_{2,1}(U^J_{prof, k}-U^A_{prof, k}, e^{it\Delta}R_k^J)\|_{N(I_k)}\\
\lesssim& \|\mathfrak{D}_{2,1}(U^A_{prof, k}, e^{it\Delta}R_k^J)\|_{N(I_k)} + \varepsilon,
\end{align*}
for all $J\geq A$ and $k\geq k_0(J)$.
It remains to prove that
\[
\limsup_{J\to\infty}\limsup_{k\to\infty}
\|\mathfrak{D}_{2,1}(U^A_{prof, k}, e^{it\Delta}R_k^J)\|_{N(I_k)}\lesssim \varepsilon.
\]
By the definition of $U^A_{prof,k}$, it suffices to prove that for any $\alpha_1,
\alpha_2\in \{0, 1, \cdots, A\}$,
\begin{equation}
\limsup_{J\to\infty}\limsup_{k\to\infty}
\|\mathfrak{D}_{1,1,1}(U^{\alpha_1}_k, U^{\alpha_2}_k, e^{it\Delta}R^J_k)\|_{N(I_k)}
\lesssim \varepsilon A^{-2}.
\end{equation}
Fix $\theta = \varepsilon A^{-2}/ 10$ and apply the decomposition in Lemma \ref{lem:decomposition}
to all nonlinear profiles $U^\alpha_k$, $\alpha = 1, 2, \cdots, A$. We may assume that
\[
T_{\theta,\alpha} = T_{\theta}\ \text{ and }\ R_{\theta,\alpha} = R_{\theta}
\]
for any $\alpha = 1, 2, \cdots, A$.
\case{1}{$\alpha_1 = 0$ or $\alpha_2 = 0$.}
Without loss of generality, suppose $\alpha_1 = 0$.
Since $\|U^0_{k}\|_{X^1(-1,1)} = \|G\|_{X^1(-1,1)} \lesssim 1$, for $k$ large
enough we have $\|G\|_{Z'(I_k)} \lesssim \varepsilon A^{-2}$ and $\|G\|_{X^1(I_k)} \lesssim 1$.
By the nonlinear estimate Proposition \ref{prop:nonlinear} and
Proposition \ref{prop:ZinX},
\begin{align*}
&\|\mathfrak{D}_{1,1,1}(G, U^{\alpha_2}_k, e^{it\Delta}R^J_k)\|_{N(I_k)}\\
\lesssim& \|G\|_{Z'(I_k)}\|U^{\alpha_2}_k\|_{Z'(I_k)}\|e^{it\Delta}R_k^J\|_{X^1(I_k)}
+ \|G\|_{Z'(I_k)}\|U^{\alpha_2}_k\|_{X^1(I_k)}\|e^{it\Delta}R_k^J\|_{Z'(I_k)}\\
&+ \|G\|_{X^1(I_k)}\|U^{\alpha_2}_k\|_{Z'(I_k)}\|e^{it\Delta}R_k^J\|_{Z'(I_k)}\\
\lesssim& \varepsilon A^{-2},
\end{align*}
for $k$ and $J$ large enough.
\case{2}{$\alpha_1 \neq 0$, $\alpha_2\neq 0$ and $\alpha_1 = \alpha_2$.}
Taking $k$ large enough, we have $I_k \subset (-T_{\theta}^{-1},T_{\theta}^{-1})$ and
\[
\mathds{1}_{I_k}(t)
U^\alpha_k = \omega_k^{\alpha,\theta,-\infty}+\omega_k^{\alpha,\theta,+\infty}
+\omega_k^{\alpha,\theta}+\rho^{\alpha, \theta}_k.\]
By the nonlinear estimate Proposition \ref{prop:nonlinear}, (\ref{eq:decomposition})
and Lemma \ref{lem:7.2} (since $\|e^{it\Delta}R_k^J\|_{X^1(I_k)}\lesssim 1$ uniformly
for both $k$ and $J$),
we obtain that
\begin{align*}
\|\mathfrak{D}_{1,1,1}(U_k^{\alpha_1}, U_k^{\alpha_2}, e^{it\Delta}R_k^J)\|_{N(I_k)}
&\lesssim \frac{1}{2} A^{-2}\varepsilon +
\|\mathfrak{D}_{1,1,1}(\omega_k^{\alpha_1, \theta,+\infty}, \omega_k^{\alpha_1, \theta,-\infty}, e^{it\Delta}R_k^J)\|_{N(I_k)}\\
&\lesssim A^{-2}\varepsilon,
\end{align*}
for $k$ large enough.
\case{3}{$\alpha_1 \neq 0$, $\alpha_2\neq 0$ and $\alpha_1 \neq \alpha_2$.}
Using Lemma \ref{lem:7.1} and taking $B$ and $k$ sufficiently
large, we obtain that
\begin{equation}\label{eq:7.14}
\begin{split}
\|\mathfrak{D}_{2,1}(\omega_k^{\alpha,\theta}, P_{>BN_{k,\alpha}} e^{it\Delta}R_k^J)\|_{N(I_k)}
&\lesssim (\frac{1}{B^{1/{200}}}+\frac{1}{N_{k,\alpha}^{1/{200}}})
\|R_k^J\|_{H^1}\\
&\lesssim \frac{\varepsilon}{4} A^{-2}.
\end{split}
\end{equation}
We may also assume that $B$ is sufficiently large such that, for $k$ large enough,
an estimate similar to (\ref{eq:**}) gives
\begin{equation}\label{eq:7.15}
\|P_{\leq B^{-1}N_{k,\alpha}} \omega_k^{\alpha,\theta}\|_{X^1(I_k)}\leq \frac{\varepsilon}{4}A^{-2}.
\end{equation}
Using the modified nonlinear estimate
(\ref{eq:nonlinearP}) of Proposition \ref{prop:nonlinear} and the bounds (\ref{eq:7.14}),
(\ref{eq:7.15}), it remains to prove that
\[
\limsup_{J\to\infty}\limsup_{k\to\infty} \|\mathfrak{D}_{2,1}
(P_{>B^{-1}N_{k,\alpha}}\omega^{\alpha,\theta}_k, P_{\leq BN_{k,\alpha}}e^{it\Delta}R_k^J)
\|_{N(I_k)} = 0.
\]
\end{proof}
\begin{proof}[Proof of (\ref{eq:6.9})]
\begin{equation*}
F(U^J_{prof, k}) -\sum_{0\leq \alpha\leq J} F(U_k^\alpha)
= \sum_{0\leq \alpha_1, \alpha_2, \alpha_3 \leq J\atop
\alpha_1\neq\alpha_2\text{ or }\alpha_1\neq\alpha_3\text{ or }
\alpha_2\neq\alpha_3
}\mathfrak{D}_{1,1,1}(U_k^{\alpha_1}, U_k^{\alpha_2}, U_k^{\alpha_3})
\end{equation*}
By (\ref{6.6}), we choose $A(\theta)$ large enough such that $\sum_{A\leq \alpha \leq J}
\|U_\alpha\|^2_{X^1(-1, 1)} \leq \theta$.
So we have
\begin{align*}
&\|\sum_{0\leq \alpha_1, \alpha_2, \alpha_3 \leq J\atop
\alpha_1\neq\alpha_2\text{ or }\alpha_1\neq\alpha_3\text{ or }
\alpha_2\neq\alpha_3
}\mathfrak{D}_{1,1,1}(U_k^{\alpha_1}, U_k^{\alpha_2}, U_k^{\alpha_3})\|_{N(I_k)}\\
\leq &\|\sum_{0\leq \alpha_1, \alpha_2, \alpha_3 \leq A\atop
\alpha_1\neq\alpha_2\text{ or }\alpha_1\neq\alpha_3\text{ or }
\alpha_2\neq\alpha_3
}\mathfrak{D}_{1,1,1}(U_k^{\alpha_1}, U_k^{\alpha_2}, U_k^{\alpha_3})\|_{N(I_k)}
+ \theta.
\end{align*}
Using Lemma \ref{lem:decomposition},
\begin{align*}
&\|\sum_{0\leq \alpha_1, \alpha_2, \alpha_3 \leq A\atop
\alpha_1\neq\alpha_2\text{ or }\alpha_1\neq\alpha_3\text{ or }
\alpha_2\neq\alpha_3
}\mathfrak{D}_{1,1,1}(U_k^{\alpha_1}, U_k^{\alpha_2}, U_k^{\alpha_3})\|_{N(I_k)}\\
\leq& \|\sum_{F}
\mathfrak{D}_{1,1,1}(W_k^{1}, W_k^{2}, W_k^{3})\|_{N(I_k)},
\end{align*}
where
\begin{align*}
F:=&
\{ (W_k^{1}, W_k^{2}, W_k^{3}) : W^i_k\in\{\omega_k^{\alpha,\theta, +\infty},
\omega_k^{\alpha,\theta, -\infty}, \omega_k^{\alpha,\theta}
,\rho_k^{\alpha, \theta}\}\text{ for some } 0\leq\alpha\leq A \text{ and each }i, \\
&\text{ with at least two different values of }\alpha \}
\end{align*}
and $\#F< A^3$.
\noindent Consider the following several cases:
\case{1}{the terms containing one error component $\rho^{\alpha,\theta}_k$.}
By the nonlinear estimate (Proposition \ref{prop:nonlinear}),
\[
\|\mathfrak{D}_{1,1,1}(W_k^1, W_k^2, \rho_k^{\alpha, \theta})\|_{N(I_k)}
\leq \|\rho_k^{\alpha, \theta}\|_{X^1(I_k)} \|W_k^1\|_{X^1(I_k)}\|W_k^2\|_{X^1(I_k)}
\lesssim \theta,\]
for $k$ large enough.
\case{2}{the terms containing two scattering components $\omega_k^{\alpha,\theta,\pm\infty}$
and $\omega_k^{\beta,\theta,\pm\infty}$ (possibly with $\alpha =\beta$).}
\begin{align*}
&\|\mathfrak{D}_{1,1,1}(\omega_k^{\alpha,\theta,\pm\infty}, \omega_k^{\beta,\theta,\pm\infty}, W_k^3)\|_{N(I_k)}\\
\leq&\|W^3_k\|_{X^1(I_k)}(\|\omega_k^{\alpha,\theta,\pm\infty}\|_{X^1(I_k)}+\|\omega_k^{\beta,\theta,\pm\infty}\|_{X^1(I_k)})
(\|\omega_k^{\alpha,\theta,\pm\infty}\|_{Z'(I_k)}+\|\omega_k^{\beta,\theta,\pm\infty}\|_{Z'(I_k)})\\
\lesssim& \theta
,\end{align*}
for $k$ large enough.
\case{3}{the terms containing two different cores $\omega^{\alpha,\theta}_k$ and
$\omega^{\beta,\theta}_k$ with $\alpha\neq\beta$.}
By Lemma \ref{lem:7.2}, for $k$ large enough, we obtain that
\[
\|\mathfrak{D}_{1,1,1}(\omega^{\alpha,\theta}_k, \omega^{\beta,\theta}_k, W_k^3)\|_{N(I_k)}
\lesssim \theta.
\]
\case{4}{the others: $\mathfrak{D}_{2,1}(\omega_k^{\alpha,\theta},
\omega_k^{\beta, \theta,\pm\infty})$ with $\alpha\neq \beta$.}
\case{4.1}{$\limsup_{k\to \infty}\frac{N_{k,\beta}}{N_{k,\alpha}} = +\infty$.}
By Lemma \ref{lem:7.1}, and choosing $B$ and $k$ large enough,
\begin{equation}\label{eq:***}
\|\mathfrak{D}_{2,1}(\omega^{\alpha, \theta}_k,
P_{>BN_{k,\alpha}}\omega^{\beta,\theta,\pm\infty}_k)\|_{N(I_k)}\lesssim
(B^{-1/{200}}+ N_{k, \alpha}^{-1/{200}})\lesssim \theta.
\end{equation}
For the other part,
\begin{align*}
\|P_{\leq BN_{k,\alpha}} \omega^{\beta, \theta, \pm\infty}_k\|_{X^1(I_k)}
&= \|P_{\leq BN_{k,\beta}\frac{N_{k,\alpha}}{N_{k,\beta}}}
\omega^{\beta, \theta, \pm\infty}_k\|_{X^1(I_k)}\\
&= \|P_{\leq BN_{k,\beta}\frac{N_{k,\alpha}}{N_{k,\beta}}}
\pi_{x^\beta_k}T_{N_{k,\beta}}(\phi^{\beta,\theta,\pm\infty})\|_{H^1(\mathbb{T}^4)}\\
& = \|P_{\leq B\frac{N_{k,\alpha}}{N_{k,\beta}}}
\phi^{\beta,\theta,\pm\infty}\|_{\dot{H}^1(\mathbb{R}^4)}\to 0, \text{ as } k \to\infty.
\end{align*}
So for $k$ large enough, we obtain that
\begin{equation*}
\|\mathfrak{D}_{2,1}(\omega_k^{\alpha, \theta},
P_{\leq BN_{k,\alpha}}\omega_k^{\beta,\theta,\pm\infty})\|_{N(I_k)}
\lesssim \|P_{\leq BN_{k,\alpha}} \omega_k^{\beta, \theta, \pm\infty}\|_{X^1(I_k)}
\|\omega_k^{\alpha, \theta}\|^2_{X^1(I_k)} \lesssim \theta.
\end{equation*}
\case{4.2}{$\limsup_{k\to \infty}\frac{N_{k,\alpha}}{N_{k,\beta}} = +\infty$.}
We assume that $B$ is sufficiently large such that, for $k$ large,
an estimate similar to (\ref{eq:***}) gives
\begin{equation*}
\|\mathfrak{D}_{2,1}(\omega_k^{\alpha, \theta},
P_{>BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty})\|_{N(I_k)}\lesssim
(B^{-1/{200}}+ N_{k, \beta}^{-1/{200}})\lesssim \theta.
\end{equation*}
By an estimate similar to (\ref{eq:**}), for $k$ large enough,
we obtain that
\begin{equation*}
\|P_{\leq N_{k,\beta}} \omega_k^{\alpha,\theta}\|_{X^1(I_k)}=
\|P_{\leq N_{k,\alpha}\frac{N_{k,\beta}}{N_{k,\alpha}}} \omega_k^{\alpha,\theta}\|_{X^1(I_k)}
\lesssim \theta.
\end{equation*}
Moreover, $\|P_{> N_{k,\beta}} \omega_k^{\alpha,\theta}\|_{X^1(I_k)} \lesssim 1.$
For the remaining part, by the nonlinear estimates (\ref{eq:nonlinear}) and
(\ref{eq:nonlinearP}),
\begin{align*}
&\|\mathfrak{D}_{2,1}(\omega_k^{\alpha, \theta},
P_{\leq BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty})\|_{N(I_k)}\\
\lesssim& \|\mathfrak{D}_{2,1}(P_{\leq N_{k,\beta}}\omega_k^{\alpha, \theta},
P_{\leq BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty})\|_{N(I_k)}
+
\|\mathfrak{D}_{2,1}(P_{> N_{k,\beta}}\omega_k^{\alpha, \theta},
P_{\leq BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty})\|_{N(I_k)} \\
&+
\|\mathfrak{D}_{1,1,1}(P_{> N_{k,\beta}}\omega_k^{\alpha, \theta},
P_{\leq N_{k,\beta}}\omega_k^{\alpha, \theta},
P_{\leq BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty})\|_{N(I_k)}\\
\lesssim&\|P_{\leq N_{k,\beta}}\omega_k^{\alpha, \theta}\|_{X^1(I_k)}+
\|\mathfrak{D}_{2,1}(P_{> N_{k,\beta}}\omega_k^{\alpha, \theta},
P_{\leq BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty})\|_{N(I_k)}\\
\lesssim &\theta + \|P_{\leq BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty}\|_{Z'(I_k)}\\
\lesssim& \theta,
\end{align*}
where $\limsup_{J\to\infty}\limsup_{k\to\infty}
\|P_{\leq BN_{k,\beta}}\omega_k^{\beta,\theta,\pm\infty}\|_{Z'(I_k)} = 0$ (by an estimate similar
to (\ref{eq:decomposition}), which follows from the extinction lemma).
\case{4.3}{$N_{k,\alpha} \approx N_{k,\beta}${ and }
$N_{k,\alpha}|x_k^\alpha - x_k^\beta|\to \infty$ as $k\to \infty$.}
From Proposition \ref{prop:equivalenceFrames}, we can use an equivalent frame of $\mathcal{O}^\alpha$ to adjust $N_{k,\alpha}$ and $t_k^\alpha$ such that
$N_{k,\alpha} = N_{k,\beta}$ and $t_k^\alpha = t_k^\beta$.
By the definition of $\omega_k^{\alpha,\theta}$ and $\omega_k^{\beta,\theta,\pm\infty}$,
for $k$ large enough, we obtain that
$\omega_k^{\alpha,\theta}\, \omega_k^{\beta,\theta,\pm\infty} \equiv 0.$
\case{4.4}{$N_{k,\alpha} \approx N_{k,\beta}${ and }
$N^{2}_{k,\alpha}|t_k^\alpha - t_k^\beta|\to \infty$ as $k\to \infty$.}
By Proposition \ref{prop:equivalenceFrames}, we can adjust $N_{k,\alpha}$ such that $N_{k,\alpha} = N_{k,\beta} := N_k$.
By the definition of $\omega_k^{\alpha,\theta}$ and $\omega_k^{\beta,\theta,\pm\infty}$,
taking $k$ large enough and $N_k^2|t_k^\alpha - t_k^\beta| > T_\theta$, we obtain that
\[
\omega_k^{\alpha,\theta}\omega_k^{\beta,\theta,\pm\infty} =
\mathds{1}_{[t_k^\alpha - \frac{T_\theta}{N_k^2},\,t_k^\alpha + \frac{T_\theta}{N_k^2}]}
\omega_k^{\alpha,\theta}\omega_k^{\beta,\theta,\pm\infty},
\]
and also $\omega_k^{\beta, \theta,\pm\infty} = P_{\leq R_{\theta}N_{k}}
\omega_k^{\beta, \theta,\pm\infty}$.
By (\ref{4.12}) and (\ref{4.13}), for any $T\leq N_k$, we obtain that
\begin{equation}\label{eq:L2}
\begin{split}
\|\omega_k^{\beta,\theta,\pm\infty}\|_{L^2(\mathbb{T}^4)}
&= \|P_{\leq R_{\theta}N_k}\omega_k^{\beta,\theta,\pm\infty}\|_{L^2(\mathbb{T}^4)}\\
&\lesssim (1 + {R_\theta})^{-10}\frac{1}{N_{k}},
\end{split}
\end{equation}
and
\begin{equation}\label{eq:Linfty}
\sup_{|t-t_{k}^\beta|\in [TN_k^{-2}, T^{-1}]}
\|\omega_k^{\beta,\theta,\pm\infty}\|_{L^\infty(\mathbb{T}^4)} \lesssim T^{-2}
R_\theta^4 N_k.
\end{equation}
Interpolating (\ref{eq:L2}) and (\ref{eq:Linfty}), we obtain that
\begin{equation}\label{eq:Lp}
\sup_{|t-t_{k}^\beta|\in [TN_k^{-2}, T^{-1}]}
\|\omega_k^{\beta,\theta,\pm\infty}\|_{L^p(\mathbb{T}^4)}
\lesssim_{R_\theta} T^{\frac{4}{p}-2} N_k^{1-\frac{4}{p}}.
\end{equation}
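For the reader's convenience, the interpolation behind (\ref{eq:Lp}) is the standard H\"older bound: for $2\leq p\leq \infty$,
\[
\|\omega_k^{\beta,\theta,\pm\infty}\|_{L^p(\mathbb{T}^4)}
\leq \|\omega_k^{\beta,\theta,\pm\infty}\|_{L^2(\mathbb{T}^4)}^{\frac{2}{p}}
\,\|\omega_k^{\beta,\theta,\pm\infty}\|_{L^\infty(\mathbb{T}^4)}^{1-\frac{2}{p}}
\lesssim_{R_\theta} N_k^{-\frac{2}{p}}\bigl(T^{-2}N_k\bigr)^{1-\frac{2}{p}}
= T^{\frac{4}{p}-2}N_k^{1-\frac{4}{p}}
\]
on the time region $|t-t_k^\beta|\in[TN_k^{-2}, T^{-1}]$ appearing in (\ref{eq:L2}) and (\ref{eq:Linfty}).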
By choosing $T_k = N_k |t_k^\alpha - t_k^\beta|^{\frac{1}{2}}\to \infty$ as $k\to \infty$
and using (\ref{eq:Lp}),
we obtain that
\begin{equation}\label{eq:Linfty2}
\sup_{t\in [t_k^\alpha - \frac{T_\theta}{N_k^2},t_k^\alpha + \frac{T_\theta}{N_k^2}]}
\|\omega_k^{\beta, \theta,\pm\infty}\|_{L^\infty(\mathbb{T}^4)}
\lesssim_{R_\theta} T_k^{-2} N_k,
\end{equation}
and
\begin{equation}\label{eq:L4}
\sup_{t\in [t_k^\alpha - \frac{T_\theta}{N_k^2},t_k^\alpha + \frac{T_\theta}{N_k^2}]}
\|\langle \nabla \rangle \omega_k^{\beta, \theta,\pm\infty}\|_{L^4(\mathbb{T}^4)}
\lesssim_{R_\theta} T_k^{-1}N_k.
\end{equation}
By the Leibniz rule, (\ref{eq:decomposition}), (\ref{eq:L4}) and (\ref{eq:Linfty2}), we obtain that
\begin{align*}
&\|\mathfrak{D}_{2,1}(\omega^{\alpha,\theta}_k, \omega^{\beta,\theta,\pm\infty}_k)
\|_{N( [t^\alpha_k - \frac{T_\theta}{N_k^2},t^\alpha_k + \frac{T_\theta}{N_k^2}])}\\
\lesssim& \|\mathfrak{D}_{2,1}(\omega^{\alpha,\theta}_k, \omega^{\beta,\theta,\pm\infty}_k)
\|_{L^1([t_k^\alpha - \frac{T_\theta}{N_k^2},t_k^\alpha + \frac{T_\theta}{N_k^2}], H^1(\mathbb{T}^4))}\\
\lesssim&\int_{t_k^\alpha - \frac{T_\theta}{N_k^2}}^{t_k^\alpha + \frac{T_\theta}{N_k^2}}
\left(\|\mathfrak{D}_2(\langle \nabla \rangle\omega_k^{\alpha,\theta})\|_{L^2(\mathbb{T}^4)}
\|\omega_k^{\beta,\theta,\pm\infty}\|_{L^\infty(\mathbb{T}^4)}
+ \|\mathfrak{D}_2(\omega_k^{\alpha,\theta})\|_{L^4(\mathbb{T}^4)}
\|\langle \nabla \rangle\omega_k^{\beta,\theta,\pm\infty}\|_{L^4(\mathbb{T}^4)}
\right)\, dt\\
\lesssim& \int_{t_k^\alpha - \frac{T_\theta}{N_k^2}}^{t_k^\alpha + \frac{T_\theta}{N_k^2}}
\left(
N_k^2 T_k^{-2} R_\theta^8+
N_k^2 T_k^{-1} R_\theta^3
\right)\, dt
\\
\lesssim&
T_k^{-1} T_\theta R_\theta^8 \to 0 \text{ as } k\to\infty.
\end{align*}
\end{proof}
\bibliographystyle{amsplain}
\def\sect#1{\setcounter{equation}{0}\section{#1}}
\renewcommand{\thesection.\arabic{equation}}}{\thesection.\arabic{equation}}
\newcommand{\l@qq}[2]{\addvspace{2em}
\hbox to\textwidth{\hspace{1em}\bf #1 \dotfill #2}}
\def\add#1{\addcontentsline{toc}{app}{\bf #1}}
\defAppendix{Appendix}
\newcounter{app}
\defAppendix \Alph{app}{Appendix \Alph{app}}
\def\setcounter{equation}{0{\setcounter{equation}{0}
\def\thesection.\arabic{equation}}{\Alph{app}.\arabic{equation}}\par
\addvspace{4ex}
\@afterindentfalse
\secdef\@app\@dapp}
\newcommand{\appmark}[1]{\markboth{
\uppercase{Appendix \Alph{app}\hspace{1em}#1}}{}}
\newcommand\@app{\@startsection {app}{1}{0ex}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\Large\bf}}
\def\@dapp#1{%
{\parindent \mbox{\Bbbb Z}@ \raggedright \bf #1}\par\nobreak}
\def\l@app#1#2{\ifnum \c@tocdepth >\mbox{\Bbbb Z}@
\addpenalty\@secpenalty
\addvspace{1.0em \@plus\p@}%
\setlength\@tempdima{8em}%
\begingroup
\parindent \mbox{\Bbbb Z}@ \rightskip \@pnumwidth
\parfillskip -\@pnumwidth
\leavevmode \bfseries
\advance\leftskip\@tempdima
\hskip -\leftskip
#1\nobreak\hfil \nobreak\hb@xt@\@pnumwidth{\hss #2}\par
\endgroup\fi}
\newcounter{sapp}[app]
\def\Alph{app}.\arabic{sapp}{\Alph{app}.\arabic{sapp}}
\def\def\theequation{\Alph{app}.\arabic{equation}{\def\thesection.\arabic{equation}}{\Alph{app}.\arabic{equation}}
\par
\@afterindentfalse
\secdef\@sapp\@dsapp}
\newcommand{\@sapp}{\@startsection{sapp}{2}{\mbox{\Bbbb Z}@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\large\bfseries}}
\newcommand{\sappmark}[1]{\markboth{
\uppercase{\Alph{app}.\arabic{sapp}\hspace{1em}#1}}{}}
\def\@dsapp#1{%
{\parindent \mbox{\Bbbb Z}@ \raggedright \bf #1
}\par\nobreak}
\newcommand{\l@sapp}{\@dottedtocline{2}{1.5em}{2.3em}}
\def\titlepage{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn
\else \newpage \fi \thispagestyle{empty}\c@page\mbox{\Bbbb Z}@
\def\arabic{footnote}{\fnsymbol{footnote}} }
\def\endtitlepage{\if@restonecol\twocolumn \else \fi
\def\arabic{footnote}{\arabic{footnote}}
\setcounter{footnote}{0}}
\relax
\hybrid
\parskip=0.4em
\makeatletter
\newdimen\normalarrayskip
\newdimen\minarrayskip
\normalarrayskip\baselineskip
\minarrayskip\jot
\newif\ifold \oldtrue \def\oldfalse{\oldfalse}
\def\arraymode{\ifold\relax\else\displaystyle\fi}
\def\eqnumphantom{\phantom{(\thesection.\arabic{equation}})}}
\def\@arrayskip{\ifold\baselineskip\mbox{\Bbbb Z}@\lineskip\mbox{\Bbbb Z}@
\else
\baselineskip\minarrayskip\lineskip1\baselineskip\fi}
\def\@arrayclassz{\ifcase \@lastchclass \@acolampacol \or
\@ampacol \or \or \or \@addamp \or
\@acolampacol \or \@firstampfalse \@acol \fi
\edef\@preamble{\@preamble
\ifcase \@chnum
\hfil$\relax\arraymode\@sharp$\hfil
\or $\relax\arraymode\@sharp$\hfil
\or \hfil$\relax\arraymode\@sharp$\fi}}
\def\@array[#1]#2{\setbox\@arstrutbox=\hbox{\vrule
height\arraystretch \ht\strutbox
depth\arraystretch \dp\strutbox
width\mbox{\Bbbb Z}@}\@mkpream{#2}\edef\@preamble{\halign \noexpand\@halignto
\bgroup \tabskip\mbox{\Bbbb Z}@ \@arstrut \@preamble \tabskip\mbox{\Bbbb Z}@ \cr}%
\let\@startpbox\@@startpbox \let\@endpbox\@@endpbox
\if #1t\vtop \else \if#1b\vbox \else \vcenter \fi\fi
\bgroup \let\par\relax
\let\@sharp##\let\protect\relax
\@arrayskip\@preamble}
\def\eqnarray{\stepcounter{equation}%
\let\@currentlabel=\thesection.\arabic{equation}}
\global\@eqnswtrue
\global\@eqcnt\mbox{\Bbbb Z}@
\tabskip\@centering
\let\\=\@eqncr
$$%
\halign to \displaywidth\bgroup
\eqnumphantom\@eqnsel\hskip\@centering
$\displaystyle \tabskip\mbox{\Bbbb Z}@ {##}$%
&\global\@eqcnt\@ne \hskip 2\arraycolsep
$\displaystyle\arraymode{##}$\hfil
&\global\@eqcnt\tw@ \hskip 2\arraycolsep
$\displaystyle\tabskip\mbox{\Bbbb Z}@{##}$\hfil
\tabskip\@centering
&{##}\tabskip\mbox{\Bbbb Z}@\cr}
\makeatother
\newtheorem{th}{Theorem}[section]
\newtheorem{de}{Definition}[section]
\newtheorem{prop}{Proposition}[section]
\newtheorem{cor}{Corollary}[section]
\newtheorem{lem}{Lemma}[section]
\newtheorem{ex}{Example}[section]
\newtheorem{note}{Note}[section]
\newtheorem{rem}{Remark}[section]
\newtheorem{ath}{Theorem}[app]
\newtheorem{aprop}{Proposition}[app]
\newtheorem{ade}{Definition}[app]
\newtheorem{acor}{Corollary}[app]
\newtheorem{alem}{Lemma}[app]
\newtheorem{arem}{Remark}[app]
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\nonumber{\nonumber}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\beq\new\begin{array}{c}{\begin{equation}\oldfalse\begin{array}{c}}
\def\end{array}\eeq{\end{array}\end{equation}}
\def\stackreb#1#2{\mathrel{\mathop{#2}\limits_{#1}}}
\def\square{\hfill{\vrule height6pt width6pt
depth1pt} \break \vspace{.01cm}}
\def\partial{\partial}
\def\der#1#2{\frac{\partial{#1}}{\partial{#2}}}
\def \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$}{ \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$}}
\def\raisebox{1pt}{$\,\blacktriangleright\,$}{\raisebox{1pt}{$\,\blacktriangleright\,$}}
\def\raisebox{1pt}{$\,\blacktriangleleft\,$}{\raisebox{1pt}{$\,\blacktriangleleft\,$}}
\def\tr{\triangleright}
\def\triangleleft{\triangleleft}
\def\tr\!\!\tl\,{\tr\!\!\triangleleft\,}
\def\raise1pt\hbox{$\scriptscriptstyle >\!$}\:\!\!\tl{\raise1pt\hbox{$\scriptscriptstyle >\!$}\:\!\!\triangleleft}
\def\raise1pt\hbox{$\scriptscriptstyle >\!$}\!\!\btl{\raise1pt\hbox{$\scriptscriptstyle >\!$}\!\!\raisebox{1pt}{$\,\blacktriangleleft\,$}}
\def\]{]\raise-2pt\hbox{$_\ast$}}
\def\st#1{#1\raise-2pt\hbox{$_\ast$}}
\def\mbox{ad}^\ast_{_{r^{(+)}(\xi)}}\eta{\mbox{ad}^\ast_{_{r^{(+)}(\xi)}}\eta}
\def\mbox{ad}^\ast_{_{r^{(-)}(\eta)}}\xi{\mbox{ad}^\ast_{_{r^{(-)}(\eta)}}\xi}
\def\op#1{\raise-6pt\hbox{$\stackrel{\displaystyle\oplus }{\scriptstyle
#1}$}\;}
\def\oop#1#2{\raise-6pt\hbox{$\stackrel{#2}{\stackrel{\displaystyle\oplus
}{\scriptstyle #1}}$\;}}
\def\,\overline{\!H\!}\:{\,\overline{\!H\!}\:}
\def\,\overline{\!H\!}\:{\,\overline{\!H\!}\:}
\def\,\overline{\hspace{-2pt}J}{\,\overline{\hspace{-2pt}J}}
\def\,\overline{\!T}{\,\overline{\!T}}
\def\;\,\overline{\!\!\cal M\!}\,{\;\,\overline{\!\!\cal M\!}\,}
\def\nL{\,\overline{\!\cal L}}
\def\,\overline{\!\mu}{\,\overline{\!\mu}}
\def\,\overline{\!q}{\,\overline{\!q}}
\def\,\overline{\!p}{\,\overline{\!p}}
\def\bar \delta_{\,_{\overline {\!q}}}\,{\bar \delta_{\,_{\overline {\!q}}}\,}
\def\bar \delta_{\,_{1/\,{\overline {\!q}}}}\,{\bar \delta_{\,_{1/\,{\overline {\!q}}}}\,}
\def\delta_{_q}{\delta_{_q}}
\def\Bf#1{\mbox{\boldmath $#1$}}
\def{\bfit\alpha}{{\bfit\alpha}}
\def{\bfit\beta}{{\bfit\beta}}
\def\bnu{{\bfit\nu}}
\def{\bfit\mu}{{\bfit\mu}}
\def{\bfit\phi}{{\bfit\phi}}
\def{\bfit\lambda}{{\bfit\lambda}}
\def{\bfit\rho}{{\bfit\rho}}
\def\hbox{\seveneufm p}{\hbox{\seveneufm p}}
\def\hbox{\seveneufm x}{\hbox{\seveneufm x}}
\def\alpha{\alpha}
\def\lambda{\lambda}
\def\varepsilon{\varepsilon}
\def\epsilon{\epsilon}
\def\<{\langle}
\def\>{\rangle}
\def\overline{\overline}
\def\widetilde{\widetilde}
\defg(\Bf{x},\Bf{y}){g(\Bf{x},\Bf{y})}
\def{\got g}{{\got g}}
\def{\got h}{{\got h}}
\def\got{gl}(\infty){\got{gl}(\infty)}
\def\got{sl}(\infty){\got{sl}(\infty)}
\def\widehat{\got{sl}}{\widehat{\got{sl}}}
\def\got{sl}_n{\got{sl}_n}
\def\hsln{\widehat{\got{sl}}_n}
\def\got{sl}_2{\got{sl}_2}
\def\widehat{\got{sl}}_2{\widehat{\got{sl}}_2}
\def{\cal A}{{\cal A}}
\def{\cal B}{{\cal B}}
\def{\cal M}{{\cal M}}
\def{\cal L}{{\cal L}}
\def{\cal P}{{\cal P}}
\def{\cal Q}{{\cal Q}}
\def{\cal U}{{\cal U}}
\def1\!\!1{1\!\!1}
\def\mbox{\footnotesize Tr}{\mbox{\footnotesize Tr}}
\def\sum_{a+b=p-1}{\sum_{a+b=p-1}}
\def\mbox{\bf L}{\mbox{\bf L}}
\def\mbox{\footnotesize\bf L}{\mbox{\footnotesize\bf L}}
\mathsurround=2pt
\begin{document}
\begin{titlepage}
\thispagestyle{empty}
\hfill ITEP-TH-78/97 \\
\phantom. \hfill hep-th/9810091 \\
\begin{center}
\vspace{0.1in}{\Large\bf Kadomtsev-Petviashvili Hierarchy and Generalized
Kontsevich Model}\\[.4in]
\bigskip {\large S.Kharchev}\\
\bigskip {\it Institute of Theoretical and Experimental
Physics, \\
Bol.Cheremushkinskaya st., 25, Moscow, 117 259},
\footnote{E-mail address: [email protected]}
\end{center}
\bigskip
\bigskip
\begin{abstract}
The review is devoted to the integrable properties of the Generalized
Kontsevich Model which is supposed to be a universal matrix model describing
the conformal field theories with $c<1$. A careful analysis of the model with
an arbitrary polynomial potential of order $p+1$ is presented. In the case of a
monomial potential the partition function is proved to be a $\tau$-function
of the $p$-reduced Kadomtsev-Petviashvili hierarchy satisfying $\mbox{\bf L}_{-p}$
Virasoro constraint. It is shown that the deformations of the "monomial"
phase to the "polynomial" one have a natural interpretation in the context of
the so-called equivalent hierarchies. The dynamical transition between equivalent
integrable systems is exactly along the flows of the dispersionless
Kadomtsev-Petviashvili hierarchy; the coefficients of the potential are shown to be
directly related to the flat (quasiclassical) times arising in the $N=2$
Landau-Ginzburg topological model. It is proved that the partition
function of a generic Generalized Kontsevich Model can be presented as a
product of a "quasiclassical" factor and a non-deformed partition function which
depends only on the sum of the transformed integrable flows and the flat times.
The Virasoro constraint for the solution with an arbitrary potential is shown to be
a standard $\mbox{\bf L}_{-p}$-constraint of the (equivalent) $p$-reduced hierarchy
with the times additively corrected by the flat coordinates. The rich structure
of the model requires invoking almost all aspects of classical
integrability. Therefore, the essential details of the fermionic approach to the
Kadomtsev-Petviashvili hierarchy, as well as the notions of equivalent
integrable systems and their quasiclassical analogues, are collected together
in parallel with a step-by-step investigation of the suggested universal matrix
model.
\end{abstract}
\end{titlepage}
\newpage
\footnotesize
\tableofcontents
\newpage
\normalsize
\section{Introduction}
During the last years, matrix models have played an important role in the theory
of $2$-dimensional gravity, topological models and statistical physics
(see \cite{Mor} and references therein).
This paper is devoted to the study of a particular 1-matrix model in an external
matrix field which is supposed to be "the universal" one.
The structure of the model is essentially defined by the matrix
integral of the typical form
\beq\new\begin{array}{c}\label{i1}
Z^V_{_N}[M]\,\sim\,\int dX\,e^{-\mbox{\footnotesize Tr}\,V(X)+\mbox{\footnotesize Tr}\,XV'(M)}
\end{array}\eeq
where $M$, $X$ are Hermitian $N\times N$ matrices and
$dX\sim\prod_{i,j=1}^NdX_{ij}$. In (\ref{i1})
$V(X)$ is an arbitrary potential (see exact formulation below).
The model with $V(X)=\frac{1}{3}X^3$ (the Kontsevich model) has been derived
in \cite{Kon} as a generating function of the intersection numbers on the
moduli spaces, i.e. for purely geometrical reasons, guided by Witten's
treatment of $2$-dimensional topological gravity \cite{Wit90}. Unfortunately,
a similar interpretation of the more complicated model with an arbitrary
polynomial potential is still lacking.
Actually, the same model
(though in a somewhat implicit form) appeared for the first time in
\cite{KS}, inspired by more "physical" arguments \cite{FKN1}, \cite{DVV}.
The advantage of the paper \cite{KS} consists of the fact that it starts
from the {\it integrable} properties of the model from the very beginning:
\cite{KS} gives a clear interpretation
of the Kontsevich partition function as a concrete solution of the $2$-reduced
Kadomtsev-Petviashvili (KP) hierarchy, that is, the Korteweg-de Vries one.
This allows one to generalize the original Kontsevich model immediately.
In \cite{GKM} the partition function with an arbitrary potential has been
suggested as "the universal" matrix model under the name of Generalized
Kontsevich Model (GKM) (independently, the integral (\ref{i1}) with the
monomial potential of finite order has been considered in the papers \cite{AM}--\cite{IZ1}).
The universality of GKM is based on the following facts
\cite{GKM},\cite{LGM}, \cite{versus}:
\begin{itemize}
\item[(i)] {For a monomial potential it properly describes the
(sophisticated) double scaling limit of any multi-matrix model.}
\item[(ii)] {The GKM partition function with a polynomial potential of order
$p+1$ is a $\tau$-function of the
Kadomtsev-Petviashvili hierarchy, properly reducible at the points, associated
with multi-matrix models, to a solution of $p$-reduced hierarchy. Moreover, it
satisfies an additional equation, which reduces
to the conventional Virasoro constraint (string equation) for multi-matrix models
when the potential degenerates to a monomial one.}
\item[(iii)] {It allows the deformations of the potential
associated with a given multi-matrix model to potentials corresponding
to other models.}
\item[(iv)]
{GKM with arbitrary polynomial potential is directly connected with $N=2$
supersymmetric Landau-Ginzburg theories.}
\item[(v)] {The partition function
(\ref{i1}) with potential $V(X)\sim X^2+ n\log X$ describes the standard
1-matrix model before double-scaling limit.}
\item[(vi)] {Adding the negative
powers of $X$, the model gives a particular solution of the Toda Lattice
(TL) hierarchy.}
\end{itemize}
Besides, the GKM is a non-trivial (and more or
less explicit) example of a solution of the Kadomtsev-Petviashvili hierarchy
corresponding to a Riemann surface of infinite genus. As an integrable
system with an infinite number of degrees of freedom, it possesses a very rich structure
encoding many features which are absent for finite-dimensional systems: the
Virasoro and, more generally, $W$-constraints do not exhaust the complexity
of the model. It turns out that GKM is properly designed to describe the
quasiclassical (dispersionless) solutions parametrized by the coefficients
of the potential $V(X)$, while being at the same time an exact solution of the
original hierarchy. This makes the study of GKM very promising from the
point of view of the physical applications as well as in the context of purely
mathematical aspects concerning the integrable structures.
In this paper we shall deal with the integrable properties of (\ref{i1})
only.
\bigskip\noindent
To investigate the GKM in detail, a long way has to be covered.
The point is that this model, being an excellent example of an explicit
solution of an integrable system (the KP or even the TL hierarchy), unifies many
aspects of the latter. Besides the well elaborated general strategy \cite{DJKM},
\cite{UT}, \cite{SP} for describing the above hierarchies, some more subtle
notions have to be implemented.
Originally the author was tempted to relegate all the details concerning
the standard material to appendices (including the fermionic approach to the
$\tau$-function). After some contemplation it became clear that in this
case the paper would contain only the Introduction as a main body with a lot
of appendices, so the structure would remain essentially the same. Therefore, for
pedagogical reasons, it was decided to arrange things in as
self-consistent a way as possible.
The paper is organized as follows. In the first three sections we follow
the approach developed in \cite{DJKM}; the material here (except for
some details) is quite standard.
In Sect. 2 we give the essentials
concerning the most important integrable system, namely, the
Kadomtsev-Petviashvili hierarchy. We discuss briefly the
pseudo-differential calculus and introduce the notions of the Baker-Akhiezer
functions as well as the central object, the $\tau$-function, as a solution
of the evolution equations.
In Sect. 3 the fermionic approach to the realization of $gl_\infty$ is given.
It is important for representing
the solutions of the KP hierarchy in the "explicit" form in terms
of the fermionic correlators. In Sect. 4 we represent the
$\tau$-function in the specific determinant form using the fermionic approach
introduced in the previous section. Such a representation is very natural from
the point of view of the Grassmannian approach to integrable systems \cite{SP} and, what is more
important for the present purposes, this gives a simple proof of the
integrability of the GKM partition function.
The Generalized Kontsevich Model is introduced in Sect. 5. First of all,
we simplify the GKM partition function by the standard integration over the
angular variables, thus obtaining
an integral over the eigenvalues $x_1,\,\ldots,\,x_{_N}$ of the matrix $X$.
After this, we are able to write the partition function in the
determinant form which is the starting point to investigate
its integrable properties. We prove that, in the case of
monomial potential $V(X)=\frac{\displaystyle X^{p+1}}{\displaystyle p+1}$,
the GKM integral is a solution of the $p$-reduced KP hierarchy. Moreover, it
satisfies the string equation. In turn, these two conditions
fix the solution of the KP hierarchy uniquely - this is exactly the GKM
partition function with a monomial potential. The case of an arbitrary polynomial
potential is more complicated (and richer). It requires the notion of the
equivalent hierarchies \cite{S} which is thoroughly discussed in Sect. 6.
We prove that the solution with a polynomial potential can be generated from the
corresponding solution of the $p$-reduced KP hierarchy by the action of the
Virasoro group and is represented as another $p$-reduced $\tau$-function
corrected by an exponential factor with some quadratic form. In order to
investigate in detail the nature of the transformations between the equivalent
hierarchies one more notion is required, namely, the notion of the
quasiclassical hierarchies which is described in Sect. 7 following the
approach developed in \cite{Kri1}-\cite{TT2}. We show that the quadratic form
is related to the quasiclassical $\tau$-function. Moreover, we demonstrate
that it is possible to describe the quasiclassical hierarchy directly in
terms of GKM.
The last section contains the complete description of the GKM partition
function with an arbitrary polynomial potential.
It is proved that after a redefinition of times the partition function of a
generic GKM can be presented as a product of a "quasiclassical" factor and a
non-deformed partition function, the latter being a solution of the
equivalent $p$-reduced KP hierarchy. We show how to extract the genuine
partition function which depends only on the sum of transformed integrable
flows and flat (quasiclassical) times and which satisfies
the standard $\mbox{\bf L}_{-p}$-constraint of the (equivalent) $p$-reduced hierarchy.
\sect{Kadomtsev-Petviashvili hierarchy}
\subsection{KP hierarchy: Lax equations}
Let $\{T\}=(T_1,T_2,\ldots,T_i,\ldots)$ be the infinite set of variables.
Consider the pseudo-differential operator (the Lax operator)
\beq\new\begin{array}{c}\label{l-op}
L\,=\,\partial+\sum_{i=1}^\infty u_{i+1}(T)\partial^{-i}\;;\hspace{1cm}\partial=\frac{\partial}{\partial T_1}
\end{array}\eeq
where $\partial^{-1}$ is a formal inverse to $\partial$, i.e.
$\partial^{-1} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$}\partial=\partial \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$}\partial^{-1}=1$;
for any function $f(T_1)$ and any $n\geq 1$
\beq\new\begin{array}{c}\label{k1}
\partial^{-n} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$} f\,=\,\sum_{i=0}^\infty (-1)^i\frac{(n+i-1)!}{i!(n-1)!}\,
\frac{\partial^if}{\partial T^i_1} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$}\partial^{-n-i}
\end{array}\eeq
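For instance, for $n=1$ the rule (\ref{k1}) reduces to the expansion
\[
\partial^{-1}\circ f\,=\,f\partial^{-1}\,-\,f'\partial^{-2}\,+\,f''\partial^{-3}\,-\,\ldots\;,
\]
which can be checked by applying $\partial$ from the left and using the ordinary Leibniz rule.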
Note that the Lax operator $L$ can be written as
\beq\new\begin{array}{c}\label{k2}
L\,=\,W \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$}\partial \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$} W^{-1}
\end{array}\eeq
where
\beq\new\begin{array}{c}
W\,\equiv\,1+\sum_{i=1}^\infty w_i(T)\partial^{-i}
\end{array}\eeq
and the inverse $W^{-1}$ can be calculated term by term using the Leibniz rule
(\ref{k1}): an easy exercise gives
\beq\new\begin{array}{c}
W^{-1}\,=\,1-w_1\partial^{-1}+(-w_2+w_1^2)\partial^{-2}+(-w_3+2w_1w_2-w_1w_1'-w_1^3)\partial^{-3}
+\ldots
\end{array}\eeq
where $'$ denotes the derivative w.r.t. $T_1$. Comparing (\ref{l-op}) and
(\ref{k2}) one can find the relation between functions $\{u_i\}$ and $\{w_i\}$:
\beq\new\begin{array}{c}
u_2\,=\,-w_1'\\
u_3\,=\,-w_2'+w_1w_1'\\
u_4\,=\,-w_3'+w_1w_2'+w_1'w_2-w_1^2w_1'-(w_1')^2
\end{array}\eeq
etc.\\
Let $L^k_+$ denote
the differential part of the pseudo-differential operator $L^k$; for
example,
\beq\new\begin{array}{c}
L_+\,=\,\partial\\
L^2_+\,=\,\partial^2+2u_2\\
L^3_+\,=\,\partial^3+3u_2\partial+3(u_3+u_2')
\end{array}\eeq
etc.
We use also the notation $L^k_-$ to denote the purely pseudo-differential
part of $L^k$; evidently, $L^k=L^k_++L^k_{-}$.
\bigskip\noindent
By definition, the dependence of functions $\{u_i\}$ on
{\it time} variables $(T_1, T_2,\ldots )$ is determined by the Lax equations
\beq\new\begin{array}{c}\label{lax}
\frac{\partial L}{\partial T_k}\,=\,[L^k_+,L]\;;\hspace{1cm}k\geq 1
\end{array}\eeq
It can be shown that this set of equations is equivalent to
the zero-curvature conditions
\beq\new\begin{array}{c}\label{zc}
\frac{\partial L^n_+}{\partial T_k}-\frac{\partial L^k_+}{\partial T_n}\,=\,[L^k_+,L^n_+]
\end{array}\eeq
The set of equations (\ref{lax}) (or, equivalently, (\ref{zc}))
is called the Kadomtsev-Petviashvili (KP) hierarchy. Let $k=2,\;n=3$ in
(\ref{zc}). Using the explicit expression for the differential polynomials
$L^2_+\,,\;L^3_+$ one can easily get the simplest equation of KP hierarchy -
the Kadomtsev-Petviashvili equation:
\beq\new\begin{array}{c}\label{k4}
\frac{\partial}{\partial T_1}\Big(4\frac{\partial u_2}{\partial T_3}-12u_2\frac{\partial u_2}{\partial T_1}-
\frac{\partial^3 u_2}{\partial T_1^3}\Big)\,-\,3\frac{\partial^2 u_2}{\partial T^2_2}\,=\,0
\end{array}\eeq
\subsection{Baker-Akhiezer functions}
The evolution equations of the KP hierarchy (\ref{lax}) or (\ref{zc}) are the
compatibility conditions of the following equations:
\beq\new\begin{array}{c}\label{k5}
L\Psi\,=\,z\Psi\\
\partial_{_{T_n}}\Psi\,=\,L^n_+\Psi
\end{array}\eeq
The function $\Psi(T,z)$ which satisfies this system is called the
Baker-Akhiezer function.
Introduce the conjugation $\partial^\ast=-\partial$ and put
\beq\new\begin{array}{c}\label{k6}
L^\ast\,=\,-\partial+(-\partial)^{-1} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$} u_2+(-\partial)^{-2} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$} u_3+\ldots\\
W^\ast\,=\,1+(-\partial)^{-1} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$} w_1+(-\partial)^{-2} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$} w_2+\ldots
\end{array}\eeq
such that $L^\ast=-(W^\ast)^{-1} \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$}\partial \raise1pt\hbox{$\mathsurround=0pt\,\scriptstyle \circ\,$} W^\ast$.
The adjoint Baker-Akhiezer function $\Psi^\ast(T,z)$ satisfies, by definition,
the set of equations
\beq\new\begin{array}{c}\label{k7}
L^\ast\Psi^\ast\,=\,z\Psi^\ast\\
\partial_{_{T_n}}\Psi^\ast\,=\,-\,(L^n_+)^\ast\Psi^\ast
\end{array}\eeq
It can be shown that the solutions of the systems (\ref{k5}), (\ref{k7}) are
represented in the form
\beq\new\begin{array}{c}\label{k8}
\Psi(T,z)\,=\,W(T,\partial)e^{\xi(T,z)}\,\equiv\,
e^{\xi(T,z)}\sum_{i=0}^\infty w_i(T)z^{-i}\\
\Psi^\ast(T,z)\,=\,W^\ast(T,\partial)^{-1}e^{-\xi(T,z)}
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{k9}
\xi(T,z)\,\equiv\,\sum_{k=1}^\infty T_kz^k
\end{array}\eeq
In \cite{DJKM} the following fundamental theorem has been proved:\\
Let $\Psi(T,z)$, $\Psi^\ast(T,z)$ be the Baker-Akhiezer functions of the KP
hierarchy. There exists the function $\tau(T)$ such that
\beq\new\begin{array}{c}\label{k10}
\Psi(T,z)\,=\,\frac{\displaystyle \tau(T_k-\frac{1}{kz^k})}
{\displaystyle\tau(T_k)}\;e^{\xi(T,z)}\\
\Psi^\ast(T,z)\,=\,\frac{\displaystyle \tau(T_k+\frac{1}{kz^k})}
{\displaystyle\tau(T_k)}\;e^{-\xi(T,z)}
\end{array}\eeq
It is not hard to see that all the functions $u_i(T)$, $i\geq 2$, can be
represented in terms of $\tau$. For example,
\beq\new\begin{array}{c}\label{k11}
u_2\,=\, \partial^2_{_{T_1}}\log\tau\\
u_3\,=\,\frac{1}{2}(\partial_{_{T_1}}\partial_{_{T_2}}-\partial^3_{_{T_1}})\log\tau\\
u_4\,=\,\frac{1}{6}(\partial^4_{_{T_1}}-3\partial^2_{_{T_1}}\partial_{_{T_2}}+2\partial_{_{T_1}}
\partial_{_{T_3}})\log\tau\,-\,(\partial^2_{_{T_1}}\log\tau)^2
\end{array}\eeq
etc. Substituting the first relation of (\ref{k11}) into (\ref{k4}) gives the
representation of the KP equation in the bilinear form
\beq\new\begin{array}{c}\label{k12}
\frac{1}{12}\tau\Big(\frac{\partial^4\tau}{\partial T_1^4}-4\frac{\partial^2\tau}{\partial T_1\partial T_3}+
3\frac{\partial^2\tau}{\partial T^2_2}\Big)-\frac{1}{3}\frac{\partial\tau}{\partial T_1}
\Big(\frac{\partial^3\tau}{\partial T^3_1}-\frac{\partial\tau}{\partial T_3}\Big)+\\ +
\frac{1}{4}\Big(\frac{\partial^2\tau}{\partial T_1^2}+\frac{\partial\tau}{\partial T_2}\Big)
\Big(\frac{\partial^2\tau}{\partial T_1^2}-\frac{\partial\tau}{\partial T_2}\Big)=0
\end{array}\eeq
As it turns out, it is possible to rewrite all non-linear equations of
KP hierarchy as an infinite set of {\it bilinear} equations for the
$\tau$-function \cite{DJKM} in more or less compact form using the Hirota
symbols. \\
One should note that it is possible to consider a more general integrable
system, namely the Toda lattice (TL) hierarchy \cite{UT}, which can be thought of
as a specific "gluing" of two KP hierarchies. In this case the solutions
depend on two infinite sets of times, $\{T_k\}$ and $\{\,\overline{\!T}_k\}$,
parametrizing the KP parts, as well as on the discrete time $n$ which mixes
the KP evolutions. The $\tau$-function of TL hierarchy $\tau_n(T,\,\overline{\!T})$ also
satisfies the infinite set of bilinear equations \cite{UT}; the simplest
evolution is described by the famous Toda equation
\beq\new\begin{array}{c}\label{toda}
\tau_n\frac{\partial^2\tau_n}{\partial T_1\partial\,\overline{\!T}_1}-
\frac{\partial\tau_n}{\partial T_1}\frac{\partial\tau_n}{\partial\,\overline{\!T}_1}=\,-\,\tau_{n+1}\tau_{n-1}
\end{array}\eeq
The main problem is to describe the generic solutions of these
hierarchies. It will be done in section 4.
\subsection{Reduction}
The KP hierarchy is called a $p$-reduced one if for some natural $p\geq 2$
the operator $L^p$ has only differential part, i.e.
\beq\new\begin{array}{c}\label{kr1}
(L^p)_{_-}\,=\,0
\end{array}\eeq
In this case $L^{np}_+ =L^{np}$ for any $n\geq 1$ and from (\ref{k5}),
(\ref{k10}) it follows that
\beq\new\begin{array}{c}
\frac{\partial}{\partial T_{np}}\,\frac{\displaystyle \tau(T_k-\frac{1}{kz^k})}
{\displaystyle\tau(T_k)}\,=\,0
\end{array}\eeq
From the last relation it is clear that on the level of $\tau$-function
the condition of $p$-reduction reads
\beq\new\begin{array}{c}\label{kr2}
\frac{\partial\tau(T)}{\partial T_{np}}\,=\,Const\cdot\tau(T)\hspace{1cm}n=1,2,\,\ldots
\end{array}\eeq
Equivalently, the relations (\ref{kr2}) can be taken themselves as a
definition of $p$-reduced hierarchy.
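For instance, for $p=2$ (a sketch of the standard reduction): condition (\ref{kr1}) gives
$L^2=L^2_+=\partial^2+2u_2$, so $u_2$ does not depend on $T_2, T_4,\ldots$, and the KP
equation (\ref{k4}) reduces, after one integration in $T_1$ (with the integration constant
set to zero), to the Korteweg-de Vries equation
\[
4\,\frac{\partial u_2}{\partial T_3}\,=\,\frac{\partial^3 u_2}{\partial T_1^3}\,+\,12\,u_2\frac{\partial u_2}{\partial T_1}\;.
\]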
\section{Free field realization of $gl_\infty$}
\subsection {Free fermions and vacuum states}
Let us consider the infinite set of the fermionic modes $\psi_i\,,
\;\psi^{\ast}_i\,,\;i\in\mbox{\Bbb Z}$ which satisfy the usual anticommutation
relations
\begin{equation}
\{\psi_{i},\psi^{\ast}_{j} \} =
\delta_{ij} \;\;,\hspace{0.3in} \{\psi_{i},\psi_{j} \}
=\{\psi^{\ast}_{i},\psi^{\ast}_{j} \} = 0\;\;\;\;\; i,j\in\mbox{\Bbb Z}
\end{equation}
The totally empty (true) vacuum $|+\infty \rangle$ is determined by the relations
\beq\new\begin{array}{c}
\psi_{i}\;|+\infty \rangle = 0 \;\; , \;\; i \in \mbox{\Bbb Z}
\label{eq:a}
\end{array}\eeq
Then the n-th "vacuum" state $|n\>$ is defined as follows:
\beq\new\begin{array}{c}
|n \> = \psi^{\ast}_{n} \psi^{\ast}_{n+1} \ldots |+\infty\>
\end{array}\eeq
thus satisfying the conditions (which themselves can be taken
as definition of such state):
\beq\new\begin{array}{c}\label{vac1}
\psi^{\ast}_{k}|n \> = 0 \;,\;\;\;\;k\geq n \;\;;\;\;\;
\;\;\;\;\;\; \psi_{k}|n \> = 0 \;\;\;\;\; k<n
\end{array}\eeq
Similarly, the left (dual) $n$-th vacuum $\<n|$ is defined by
conditions
\beq\new\begin{array}{c}\label{vac2}
\<n|\psi^{\ast}_{k} = 0 \;,\;\;\;\;k < n \;\;;\;\;\;
\;\;\;\;\;\; \<n|\psi_{k} = 0 \;,\;\;\;\; k\geq n
\end{array}\eeq
One can select a particular state, for example $|0\>$, and
consider the normal ordering of the fermions with respect to this
preferred vacuum.
In this case the annihilation operators are $\psi_i\;,\;\; i<0$ and
$\psi^{\ast}_i\;,\;\; i\geq0$ and, therefore, the normal ordering
is defined as follows:
\beq\new\begin{array}{c}\label{ord1}
\psi_i\psi^{\ast}_j = \;:\psi_i\psi^{\ast}_j: + \theta(-i-1)\delta_{ij}
\end{array}\eeq
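As a quick illustration of this convention, for $i=j=-1$ formula (\ref{ord1}) gives
\[
\psi_{-1}\psi^{\ast}_{-1}\,=\,-\,\psi^{\ast}_{-1}\psi_{-1}+1\,=\,:\psi_{-1}\psi^{\ast}_{-1}:+\,1,
\]
in accordance with the fact that $\psi_{-1}$ annihilates the vacuum $|0\>$ while
$\psi^{\ast}_{-1}$ does not.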
\subsection{Boson-fermion correspondence}
It is convenient to introduce the free fermionic fields
\begin{equation}
\psi(z) \equiv \sum_{i \in \mbox{\Bbbb Z}} \psi_{i}z^{i}\;\;\;,\;\;\;\psi^{\ast}(z)
\equiv \sum_{i \in \mbox{\Bbbb Z}} \psi^{\ast}_{i}z^{-i}
\end{equation}
which, in turn, can be expressed in terms of the free {\it bosonic}
field $\varphi(z)$
\beq\new\begin{array}{c}\label{eq:phi}
\varphi(z) = q - i p \log z + i \sum_{k \in \mbox{\Bbbb Z}} \frac{J_{k}}{k}z^{-k} \\
[q,p]= i\;\; ; \;\;\;\;\; [J_{m},J_{n}]=m\delta_{m+n,0}
\end{array}\eeq
according to well known formulae
\beq\new\begin{array}{c} \label{rep1}
\psi(z)= \; :e^{i \varphi(z)}: \;\equiv \\
\equiv e^{i q}\;e^{ p\log z}\;
\exp \left(\sum_{k=1}^{\infty}\frac{J_{-k}}{k}z^{k}\right) \times \exp
\left(-\sum_{k=1}^{\infty}\frac{J_{k}}{k}z^{-k}\right)\;\; ,
\label{eq:q}
\end{array}\eeq
\bigskip
\beq\new\begin{array}{c} \label{rep2}
\psi^{\ast}(z)= z\;:e^{- i \varphi(z)}:\; \equiv \\
\equiv ze^{-i q}\;e^{-p\log z}\;
\exp \left(-\sum_{k=1}^{\infty}\frac{J_{-k}}{k}z^{k}\right)
\times \exp\left(\sum_{k=1}^{\infty}\frac{J_{k}}{k}z^{-k}\right)
\end{array}\eeq
Note that under the formal Hermitian conjugation
\beq\new\begin{array}{c}\label{con}
(z)^\dagger = z^{-1}\;\,; \ \ \ \ (J_k)^\dagger = J_{-k}\\
(q)^\dagger = q\;\,; \ \ \ \ \;(p)^\dagger = p
\end{array}\eeq
we have the involution
\beq\new\begin{array}{c}
(\psi(z))^\dagger = \psi^{\ast}(z)\;\; ,
\;\;\;\;\;\;(\psi^{\ast}(z))^\dagger = \psi(z)
\end{array}\eeq
It can be shown that the vacua $|n\>$ are eigenstates of the operator
$p$:
\beq\new\begin{array}{c} \label{p}
p|n\> = n|n\>\;;\hspace{0.5cm}\<n|p=n\,\<n|
\end{array}\eeq
and the zero bosonic mode shifts the vacua, i.e. changes their charge
\beq\new\begin{array}{c}\label{q}
\left\{
\begin{array}{l}
e^{imq}|n\> = |n+m\>\;\;\\
\<n|e^{imq} = \<n-m|\;\;
\end{array}\right.
\;\;\;\;\;\; m\in\mbox{\Bbb Z}\;\; .
\end{array}\eeq
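As a consistency check of (\ref{p}) and (\ref{q}), note that $[q,p]=i$ implies
$[p,e^{imq}]=m\,e^{imq}$, so that
\[
p\,e^{imq}|n\>\,=\,e^{imq}(p+m)|n\>\,=\,(n+m)\,e^{imq}|n\>,
\]
i.e. $e^{imq}|n\>$ indeed carries the charge $n+m$.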
Using the definition (\ref{eq:phi}) one can show that
\beq\new\begin{array}{c}\label{ord2}
:e^{i \alpha \varphi(z)}:\;
:e^{i \beta\varphi(w)}:\; =
(z-w)^{\alpha \beta}\,:e^{i \alpha \varphi(z)+ i \beta \varphi(w)}:
\end{array}\eeq
and, therefore,
\beq\new\begin{array}{c}\label{ord}
\psi(z) \psi^{\ast}(w) = \frac{w}{z-w}\,:e^{i \varphi(z)-i\varphi(w)}:\; \equiv \\
\equiv \;:\psi(z) \psi^{\ast}(w): + \frac{w}{z-w}
\end{array}\eeq
Expanding the last expression near the point $w = z$
enables one to rewrite the bosonic field via the fermionic ones:
\beq\new\begin{array}{c}
i\partial_z\varphi(z) = \frac{1}{z}\;:\psi(z) \psi^{\ast}(z):\;=\;
\sum_{k\in\mbox{\Bbbb Z}}J_kz^{-k-1}
\end{array}\eeq
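In more detail, expanding the right-hand side of (\ref{ord}) near $w=z$ one finds
\[
\frac{w}{z-w}\,:e^{i\varphi(z)-i\varphi(w)}:\,=\,
\frac{w}{z-w}\Bigl(1+i(z-w)\partial_z\varphi(z)+O\bigl((z-w)^2\bigr)\Bigr)
\,=\,\frac{w}{z-w}+iz\,\partial_z\varphi(z)+O(z-w),
\]
so that, subtracting the singular part and letting $w\to z$, one recovers the formula above.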
or, equivalently, the bosonic currents can be represented as bilinear
combination of the fermionic modes:
\beq\new\begin{array}{c}\label{curr}
J_k = \sum_{i\in\mbox{\Bbbb Z}}:\psi_i \psi^{\ast}_{i+k}: \;, \;\;\;\; k\in\mbox{\Bbb Z}
\end{array}\eeq
Obviously, the normal ordering in (\ref{curr}) is essential
only for $J_0\equiv p$. Using (\ref{curr}) it is easy to see that
\beq\new\begin{array}{c} \label{curr2}\left\{
\begin{array}{l}
J_k|n\> \equiv 0\\
\<n|J_{-k} \equiv 0
\end{array}\right.
\;\;\;\;\;\;k > 0\;\; ,\;\;\;\;n\in\mbox{\Bbb Z}
\end{array}\eeq
One should mention that not only the bosonic currents can be expressed as
a bilinear combination of the free fermions. Actually, this is true for the
whole family of $gl_\infty$ generators (sometimes called the
$W_{1+\infty}$-generators); for example, one can derive
an analogous boson-fermion correspondence for the Virasoro generators:
\beq\new\begin{array}{c}\label{Vir}
\mbox{\bf L}_k\,\equiv\,\frac{1}{2}\sum_{i\in\mbox{\Bbbb Z}}:J_iJ_{k-i}:\,=\,
\sum_{i\in\mbox{\Bbbb Z}}\Bigl(i +\frac{k+1}{2}\Bigr):\psi_i\psi^{\ast}_{i+k}:
\end{array}\eeq
The bosonization formulae are a very useful tool for calculating
various correlators containing the fermionic operators.
\section{$\tau$-functions in free field representation}
In this section the solutions of KP (more generally, Toda) hierarchy are
represented in the form of the fermionic correlators parametrized by the
infinite set of continuous variables. The fermionic language is very
convenient for integrable systems since it enables one to represent an
arbitrary solution in a specific determinant form. This, in turn, allows one to
identify the GKM partition function with an appropriate solution of the
hierarchy.
\subsection{Fermionic correlators, Wick theorem and solution of KP (TL)
hierarchy}
Let us introduce the "Hamiltonians"
\beq\new\begin{array}{c}\label{hm}
H(T) \equiv \sum_{k=1}^{\infty} T_{k}J_{k}\,,\hspace{1cm}
\,\overline{\!H\!}\:(\,\overline{\!T}) \equiv \sum_{k=1}^{\infty} \,\overline{\!T}_{k}J_{-k}
\end{array}\eeq
where $\{T_k\}$ and $\{\,\overline{\!T}_k\}$ are the infinite sets of parameters
(sometimes called the sets of positive and negative times respectively).
We define the fermionic correlators ($\tau$-functions) with the following
parameterization by these times
\beq\new\begin{array}{c}\label{tauTL}
\tau_n(T,\,\overline{\!T}|g) = \< n|e^{H(T)}ge^{-\,\overline{\!H\!}\:(\,\overline{\!T})}|n \>\,\equiv\,\< n|g(T,\,\overline{\!T})|n \>
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{point}
g=\; :\exp \Bigl\{\sum_{i,j \in \mbox{\Bbbb Z}} A_{ij}\psi_{i} \psi^{\ast}_{j}\Bigr\}:
\end{array}\eeq
with $||A_{ij}|| \in gl_\infty$. In most cases we shall write
$\tau_n(T,\,\overline{\!T})$ for brevity.
We assume that the (infinite)
matrix $||A_{ij}||$ satisfies such requirements that the correlator
(\ref{tauTL}) is well defined. As an example, a matrix with almost all
zero entries is suitable. A wide class of suitable matrices is given by the
Jacobian ones: $A_{ij}=0$ for $|i-j|\gg 1$. More general conditions can
be found in \cite{SP}. One should mention that
the normal ordering in (\ref{point}) is taken with respect the zero vacuum
state $|0\>$ (see (\ref{ord1})); it is equivalent to (\ref{ord}).\\
Note also
that every element of the type (\ref{point}) rotates the fermionic modes:
\beq\new\begin{array}{c}\label{ro1}
g\psi_ig^{-1}\,=\,R_{ki}\psi_k\;;\hspace{0.6cm}
g\psi^\ast_ig^{-1}\,=\,R^{-1}_{ik}\psi^\ast_k
\end{array}\eeq
with some (infinite) matrix $||R||\in GL_\infty$. As an example, the
exponentials containing the Hamiltonians give the transformations
\beq\new\begin{array}{c}\label{ro2}
\begin{array}{ll}
e^{H(T)}\psi(z)e^{-H(T)}\,=\,e^{\xi(T,z)}\psi(z)\;\,;&\hspace{0.7cm}
e^{H(T)}\psi^\ast(z)e^{-H(T)}\,=\,e^{-\xi(T,z)}\psi^\ast(z)\\
e^{\,\overline{\!H\!}\:(\,\overline{\!T})}\psi(z)e^{-\,\overline{\!H\!}\:(\,\overline{\!T})}\,=\,e^{\xi(\,\overline{\!T},z^{-1})}\psi(z)\;\,;
&\hspace{0.7cm}
e^{\,\overline{\!H\!}\:(\,\overline{\!T})}\psi^\ast(z)e^{-\,\overline{\!H\!}\:(\,\overline{\!T})}\,=\,e^{-\xi(\,\overline{\!T},z^{-1})}\psi^\ast(z)
\end{array}
\end{array}\eeq
because of the commutator relations $[J_k,\psi(z)]=z^k\psi(z)\,,\;\,
[J_k,\psi^\ast(z)]=-z^{k}\psi^\ast(z)$ (the latter are a simple consequence
of the fermionic representation (\ref{curr})).
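For completeness, the first of these relations follows from the mode computation
(using (\ref{curr}) and the anticommutation relations)
\[
[J_k,\psi_i]\,=\,\sum_{j}\,[\psi_j\psi^{\ast}_{j+k},\psi_i]\,=\,\sum_j\delta_{i,j+k}\,\psi_j\,=\,\psi_{i-k},
\qquad\mbox{so that}\qquad
[J_k,\psi(z)]\,=\,\sum_i\psi_{i-k}z^i\,=\,z^k\psi(z),
\]
and similarly $[J_k,\psi^{\ast}_i]=-\psi^{\ast}_{i+k}$, which gives the relation for $\psi^{\ast}(z)$.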
\bigskip\noindent
The fermionic correlators introduced above have a very specific dependence
on the infinite sets of times $\{T_k\},\,\{\,\overline{\!T}_k\}$.
The main statement is that the correlators (\ref{tauTL}) solve the
Toda lattice hierarchy; in particular, as a function of the positive times
$\{T_k\}$ these correlators are solutions of the KP hierarchy: each
particular solution is parametrized by the given matrix $||A_{ij}||$. It can
be proved in full generality using the so-called bilinear identity
\cite{DJKM}. For our present purposes it is enough, however, to show that the
simplest equations of the mentioned hierarchies are satisfied.
It is possible to deduce them starting directly from the fermionic correlators.
Note that we shall deal only with the KP hierarchy in what follows.
Nevertheless, as an instructive example, let us derive 2-dimensional Toda
equation which is the first equation of the Toda hierarchy. The example
shows the natural appearance of the determinant representations in the
context of integrable systems; besides, a similar technique will be used
below quite extensively.
\bigskip\noindent
All the correlators similar to (\ref{tauTL})
are expressed in terms of the free fields, hence, the Wick theorem is
applicable; as an example
\beq\new\begin{array}{c}
\frac{\<n|\psi_{i_1}\ldots\psi_{i_k}g(T,\overline{T})
\psi^{\ast}_{j_1}\ldots\psi^{\ast}_{j_k}|n\>}{\<n|g(T,\overline{T})|n\>}\;=\;
\left.\det\frac{\<n|\psi_{i_a}g(T,\overline{T})\psi^{\ast}_{j_b}|n\>}
{\<n|g(T,\overline{T})|n\>}\right|_{a,b=1}^k
\end{array}\eeq
This key observation gives easy way to prove that the $\tau$-function
(\ref{tauTL}) satisfies the standard Toda equation. Indeed, using
the fermionic representation (\ref{curr}) of the currents $J_k$
together with the
definition of the vacuum states (\ref{vac1}), (\ref{vac2}) one gets
\beq\new\begin{array}{c}
\partial_{_{T_1}}\partial_{_{\,\overline{\!T}_1}}\tau_n = \,-\;\<n|J_1e^{H(T)}ge^{-\,\overline{\!H\!}\:(\,\overline{\!T})}J_{-1}|n \>
=\\ =\,-\;\<n|\psi_{n-1}\psi^{\ast}_ng(T,\overline{T})\psi_n\psi^{\ast}_{n-1}|n \>
\end{array}\eeq
Using the Wick theorem, this expression can be written in the form
\beq\new\begin{array}{c}\label{eqTL}
\partial_{_{T_1}}\partial_{_{\,\overline{\!T}_1}}\tau_n = \,-\;\frac{1}{\tau_n}
\Bigl\{\<n|\psi_{n-1}\psi^{\ast}_ng(T,\overline{T})|n \>
\<n|g(T,\overline{T})\psi_n\psi^{\ast}_{n-1}|n \> +\\
+ \<n|\psi_{n-1}g(T,\overline{T})\psi^{\ast}_{n-1}|n \>
\<n|\psi^{\ast}_ng(T,\overline{T})\psi_n|n \>\Bigr\}\;\; .
\end{array}\eeq
Recalling the definitions again one can rewrite every term in the last
formula in terms of the $\tau$-functions and their derivatives; namely,
\beq\new\begin{array}{c}
\begin{array}{ll}
\<n|\psi_{n-1}g(T,\overline{T})\psi^{\ast}_{n-1}|n \> = \tau_{n-1}\;\; ,\;\;\;\;\;\;
& \<n|\psi^{\ast}_ng(T,\overline{T})\psi_n|n \> = \tau_{n+1}\;\; ,\\
\<n|\psi_{n-1}\psi^{\ast}_ng(T,\overline{T})|n \> = \partial_{_{T_1}}\tau_n\;\; ,
\;\;\;\;\;\; &\<n|g(T,\overline{T})\psi_n\psi^{\ast}_{n-1}|n \> =
\,-\;\partial_{_{\,\overline{\!T}_1}}\tau_n
\end{array}
\end{array}\eeq
and, therefore, (\ref{eqTL}) reduces to Toda equation
\beq\new\begin{array}{c}\label{TTT}
\partial_{_{T_1}}\partial_{_{\overline{T}_1}}\log\tau_n\,=\,-\,\frac{\tau_{n+1}\tau_{n-1}}{\tau^2_n}
\end{array}\eeq
which is equivalent to (\ref{toda}).
The analogous (though more involved) calculations show that $\tau_n$
as a function of the positive times $T_1,T_2,T_3$ satisfies the
Kadomtsev-Petviashvili equation (\ref{k12}) for any fixed $n$. Let us stress
again that the complete list of bilinear equations for the $\tau$-functions
is presented in \cite{DJKM, UT}.
\subsection{Determinant representation of $\tau$-functions}
Here we represent an arbitrary solution of the KP hierarchy in the determinant
form which is crucial in what follows.
Let us calculate the fermionic correlator
$\<n+N|\psi(\mu_N)\ldots\psi(\mu_1)g|n\>$ in two different ways.
First of all, using the definition of the vacua and applying the Wick theorem,
the correlator can be written in the determinant form:
\beq\new\begin{array}{c}\label{start1}
\<n+N|\psi(\mu_N)\ldots\psi(\mu_1)g|n\>\,=\,
\<n|\psi_n^\ast\ldots\psi^\ast_{n+N-1}\psi(\mu_N)\ldots\psi(\mu_1)g|n\>\,=\\
=\,\<n|g|n\>\,\det\frac{\<n|\psi^\ast_{n+i-1}\psi(\mu_j)g|n\>}{\<n|g|n\>}
\end{array}\eeq
On the other hand, using the boson-fermion correspondence (\ref{eq:q}),
the normal ordering (\ref{ord2}), and the formulas (\ref{p}), (\ref{q}),
(\ref{curr2}) describing the action of the different operators on the
vacuum state $\<N|$, one can write
\beq\new\begin{array}{c}\label{start2}
\<n+N|\psi(\mu_N)\ldots\psi(\mu_1)g|n\>\equiv
\Delta(\mu)\<n+N|:\exp\Big\{i\sum_{j=1}^N\varphi(\mu_j)\Big\}:g|n\>\;=\\
=\,\Delta(\mu)\prod_{j=1}^N\mu^n_j\,\<n|\exp\Big\{\sum_{k=1}^\infty
T_kJ_k\Big\}g|n\>
\end{array}\eeq
where in the r.h.s. the $\tau$-function appears with the specific
parametrization of the positive times
\beq\new\begin{array}{c}\label{mi}
T_k\,\equiv\,-\,\frac{1}{k}\sum_{j=1}^N\mu_j^{-k}
\end{array}\eeq
The parametrization (\ref{mi}) has been introduced in \cite{Mi}.
We shall call such a
representation of the times the Miwa parametrization (respectively, the set
$\{\mu_i\}$ is called the Miwa variables). Note that for finite $N$ only the first
$N$ times $T_1,\ldots ,T_N$ are functionally independent. Equivalently, only the
first $N$ equations of the KP hierarchy have a non-trivial sense
(all higher equations are functionally dependent on the first $N$
ones). We shall deal with such a restricted hierarchy in what follows.
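As a simple illustration, for a single Miwa variable ($N=1$, so $T_k=-\mu^{-k}/k$)
one has, for $|z|<|\mu|$,
\[
\xi(T,z)\,=\,-\sum_{k=1}^{\infty}\frac{1}{k}\Bigl(\frac{z}{\mu}\Bigr)^{k}\,=\,\log\Bigl(1-\frac{z}{\mu}\Bigr),
\qquad\mbox{i.e.}\qquad
e^{\xi(T,z)}\,=\,1-\frac{z}{\mu}\,,
\]
so the exponential factor in (\ref{k8}) becomes a polynomial in $z$.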
Comparing the relations (\ref{start1}), (\ref{start2}) one arrives
at the following statement. For any finite $N$,
the $\tau$-functions of the KP hierarchy being written in the Miwa
variables (\ref{mi}) can be represented in the determinant form
\beq\new\begin{array}{c}\label{det}
\tau_n(T)\,=\,\<n|g|n\>\,\frac{\det\,\phi^{(can)}_i(\mu_j)|_{i,j=1}^N}
{\Delta(\mu)}
\end{array}\eeq
where {\it the canonical basis vectors}
\beq\new\begin{array}{c}\label{bv}
\phi^{(can)}_i(\mu)\,=\,\mu^{-n}\,
\frac{\<n|\psi^\ast_{n+i-1}\psi(\mu)g|n\>}{\<n|g|n\>}
\hspace{1cm}i=1,\,2,\,\ldots
\end{array}\eeq
have the following asymptotics
\beq\new\begin{array}{c}\label{as1}
\phi^{(can)}_i(\mu)\,=\,\mu^{i-1}+O\Big(\frac{1}{\mu}\Big)
\hspace{1cm}\mu\to\infty
\end{array}\eeq
Moreover, the converse statement is also true. Namely, any function
$\tau(\mu_1,\ldots,\mu_N)$ of the form
\beq\new\begin{array}{c}\label{det2}
\tau(T)\,=\,\frac{\det\,\phi_i(\mu_j)}{\Delta(\mu)}\;;
\hspace{1.5cm}T_k\,\equiv\,-\,\frac{1}{k}\sum_{j=1}^N\mu_j^{-k}
\end{array}\eeq
whose basis vectors $\phi_i(\mu)\,,\;i=1,\,2,\,\ldots$ have
the asymptotics
\beq\new\begin{array}{c}\label{as2}
\phi_i(\mu)\,=\,\mu^{i-1}\Big(1+O\Big(\frac{1}{\mu}\Big)\Big)
\hspace{1cm}\mu\to\infty
\end{array}\eeq
solves the KP hierarchy.
The set $\{\phi_i(\mu)\}$ satisfying the asymptotics (\ref{as2}) is naturally
identified with the projective coordinates of a point of the
Grassmannian \cite{SP}.
More precisely, the vectors $\{\phi_i(\mu)\}$
can be transformed to the canonical ones by taking appropriate linear
combinations (clearly, such a transformation does not change the determinant in
(\ref{det2})). Then, there exists an element (\ref{point})
of the Grassmannian such that the
transformed basis vectors can be written as fermionic correlators
(\ref{bv}) (for some fixed $n$) and, consequently, $\tau(\mu_1,\ldots,\mu_N)$
has the form (\ref{tauTL}) in the Miwa parametrization (\ref{mi}).
To summarize, any infinite set of vectors with the asymptotics (\ref{as2}) describes
a particular solution of the KP hierarchy via the determinant form (\ref{det2}).
\subsection{Time derivatives}\label{td}
Let us find the expression of the time derivatives $\partial\tau/\partial{T_k}$
for the $\tau$-function written in the determinant form (\ref{det}).
As in (\ref{mi}), we assume a finite number $N$ of
Miwa variables. Hence, only the first $N$ times $T_k$ are functionally
independent and all formulas below make sense for $\partial\tau/\partial{T_1},\ldots ,
\partial\tau/\partial{T_N}$ only. From (\ref{start2})
\beq\new\begin{array}{c}\label{1}
\frac{\partial\tau_n}{\partial T_k}\,=\,\frac{\prod\mu_i^{-n}}{\Delta(\mu)}\,
\<n+N|\psi(\mu_N)\ldots\psi(\mu_1)J_kg|n\>\,\equiv\\
\equiv\frac{\prod\mu_i^{-n}}{\Delta(\mu)}\Big\{
\<n+N|J_k\psi(\mu_N)\ldots\psi(\mu_1)g|n\>+
\sum_{i=1}^N\<n+N|\psi(\mu_N)\ldots[\psi(\mu_i),J_k]\ldots\psi(\mu_1)g|n\>
\Big\}
\end{array}\eeq
Since the currents $J_k=\sum_{j\in\mbox{\Bbbb Z}}\psi_j\psi_{j+k}^\ast$ satisfy the
commutation relations $[J_k,\psi(\mu)]=\mu^k\psi(\mu)$ the last expression
can be written in the form
\beq\new\begin{array}{c}\label{dk1}
\frac{\partial\tau_n}{\partial T_k}\,=\,
\frac{\prod\mu_i^{-n}}{\Delta(\mu)}\,
\<n|\psi_n^\ast\ldots\psi_{n+N-1}^\ast
\Big\{\sum_{j=n+N-k}^{n+N-1}\psi_j\psi_{j+k}^\ast\Big\}\psi(\mu_N)\ldots
\psi(\mu_1)g|n\>-\tau_n(T)\sum_{i=1}^N\mu_i^k
\end{array}\eeq
where, according to the definition of the vacua (\ref{vac2}), the
action of $J_k$ on the state $\<n+N|$ reduces to the action of a
finite number of fermionic modes with $n+N-k\le j\le n+N-1$. This fact
allows one to represent the expression (\ref{dk1}) in a compact determinant
form. Indeed, since $j\ge n+N-k$ and $k\le N$ (i.e. $j\ge n$) it is clear
that $\<n|\psi_j\psi^\ast_{j+k}=0$ and moving the operator
$\sum_{j=n+N-k}^{n+N-1}\psi_j\psi_{j+k}^\ast$ to the left state results
in appropriate shifts of the modes $\psi^\ast_n,\ldots , \psi^\ast_{n+N-1}$.
For example, for $k=1$ one gets the single correlator
$\<n|\psi^\ast_n\ldots\psi^\ast_{n+N-2}\psi^\ast_{n+N}\psi(\mu_N)
\ldots\psi(\mu_1)g|n\>$ and, therefore, the first term in (\ref{dk1}) has a
determinant form similar to (\ref{det}) (with the shifted last row
$\phi^{(can)}_N\,\to\,\phi^{(can)}_{N+1}$). It is evident that for arbitrary
$k\leq N$ the first term in (\ref{dk1}) can be represented
as the sum of the shifted determinants
\beq\new\begin{array}{c}\label{dder}
\frac{\<n|g|n\>}{\Delta(\mu)}\,\sum_{m=1}^N\,\left|
\begin{array}{ccc}
\phi^{(can)}_1(\mu_1) &\ldots & \phi^{(can)}_1(\mu_N)\\
\ldots &\ldots&\ldots \\
\phi^{(can)}_{m-1}(\mu_1) &\ldots&\phi^{(can)}_{m-1}(\mu_N)\\
\phi^{(can)}_{m+k}(\mu_1) &\ldots&\phi^{(can)}_{m+k}(\mu_N)\\
\phi^{(can)}_{m+1}(\mu_1) &\ldots&\phi^{(can)}_{m+1}(\mu_N)\\
\ldots &\ldots&\ldots \\
\phi^{(can)}_{N}(\mu_1) &\ldots&\phi^{(can)}_{N}(\mu_N)
\end{array}\right|
\end{array}\eeq
Hence, one arrives at the following formula:
\beq\new\begin{array}{c}\label{3der}\hspace{-0.5cm}
\frac{\partial}{\partial T_k}\!\left(\frac{\det \phi^{(can)}_i(\mu_j)}{\Delta(\mu)}\right)
\!=\!\frac{1}{\Delta(\mu)}\!\sum_{m=1}^N\left|
\begin{array}{ccc}
\phi^{(can)}_1(\mu_1) &\ldots & \phi^{(can)}_1(\mu_N)\\
\ldots &\ldots&\ldots \\
\phi^{(can)}_{m-1}(\mu_1) &\ldots&\phi^{(can)}_{m-1}(\mu_N)\\
\!\!\phi^{(can)}_{m+k}(\mu_1)\!-\!\mu^k_1\phi^{(can)}_{m}(\mu_1) &\ldots &
\!\!\phi^{(can)}_{m+k}(\mu_N)\!-\!\mu^k_N\phi^{(can)}_{m}(\mu_N)\\
\phi^{(can)}_{m+1}(\mu_1) &\ldots&\phi^{(can)}_{m+1}(\mu_N)\\
\ldots &\ldots&\ldots \\
\phi^{(can)}_{N}(\mu_1) &\ldots&\phi^{(can)}_{N}(\mu_N)
\end{array}\right|
\end{array}\eeq
Introducing the formal operator which shifts the indices
of the canonical basis vectors
\beq\new\begin{array}{c}\label{sh}
B(\mu)\phi^{(can)}_i(\mu)\,\equiv\, \phi^{(can)}_{i+1}(\mu)
\end{array}\eeq
one can write the final answer in a more compact notation:
\beq\new\begin{array}{c}\label{time}
\frac{\partial}{\partial T_k}\,\left(\frac{\det \phi^{(can)}_i(\mu_j)}{\Delta(\mu)}\right)
\,=\,\frac{1}{\Delta(\mu)}
\sum_{m=1}^N\Big(B^k(\mu_m)-\mu_m^k\Big)\,\det\,\phi^{(can)}_i(\mu_j)
\end{array}\eeq
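As a simple illustration, for $N=1$ and $k=1$ the van der Monde determinant equals unity and
(\ref{time}) reduces to
\beq\new\begin{array}{c}
\frac{\partial\tau}{\partial T_1}\,=\,\phi^{(can)}_2(\mu)\,-\,\mu\,\phi^{(can)}_1(\mu)
\end{array}\eeq
where $\tau=\phi^{(can)}_1(\mu)$ and $T_1=-\mu^{-1}$ is the only functionally independent time.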
This is the first important formula we need in what follows. As an
immediate application one can consider the translation of the notion
of $p$-reduced KP hierarchy to the language of the Grassmannian.
Suppose that for some natural $p>1$ the quantity $\mu^p\phi^{(can)}_m(\mu)$
can be expanded in the canonical basis vectors, i.e. for any $m\geq 1$
\beq\new\begin{array}{c}\label{p-r}
\mu^p\phi^{(can)}_m(\mu)\subset\mbox{Span}\,\{\phi^{(can)}(\mu)\}
\end{array}\eeq
Writing
\beq\new\begin{array}{c}
\phi^{(can)}_m(\mu)\,\equiv\,\mu^{m-1}+\sum_{j=1}^\infty\alpha_{mj}\mu^{-j}
\end{array}\eeq
it is easy to see that
for any $n\geq 1$ the following expansion holds:
\beq\new\begin{array}{c}
\mu^{np}\phi^{(can)}_m(\mu)\,=\,\phi^{(can)}_{m+np}(\mu)+
\sum_{j=1}^{np}\alpha_{mj}\phi^{(can)}_{np-j+1}
\end{array}\eeq
Due to the determinant structure in (\ref{3der}) every row containing the terms
$\phi^{(can)}_{m+np}-\mu^{np}\phi^{(can)}_m$ gives the non-trivial
contribution $-\alpha_{m,np-m+1}\phi^{(can)}_m\,;\;\;1\leq m\leq np$
(provided $np\leq N$);
hence,
\beq\new\begin{array}{c}\label{p-r2}
\frac{\partial\tau(T)}{\partial
T_{np}}\,=\,-\,\tau(T)\sum_{m=1}^{np}\alpha_{m,np-m+1}\;;\hspace{1cm}np\leq N
\end{array}\eeq
assuming that (\ref{p-r}) holds. In the limit $N\to\infty$ this is exactly
the case of $p$-reduced KP hierarchy. Hence, the conditions (\ref{p-r}) and
(\ref{p-r2}) are equivalent \cite{SP}.
\subsection{Action of the Virasoro generators}\label{virs}
Literally the same calculation can be performed for any $W$-generators.
Consider, for example, the Virasoro generators
\beq\new\begin{array}{c}\label{TVir}
\mbox{\bf L}_k(T)\,=\,\frac{1}{2}\sum_{a+b=-k}\!\!abT_aT_b+
\sum_{a-b=-k}\!\!\!aT_a\frac{\partial}{\partial T_b}+
\frac{1}{2}\sum_{a+b=k}\frac{\partial^2}{\partial T_a\partial T_b}
\end{array}\eeq
then, evidently,
\beq\new\begin{array}{c}
\mbox{\bf L}_k(T)\tau_n(T)\,=\,\<n|e^{H(T)}\,\mbox{\bf L}_k(J)\,g|n\>
\end{array}\eeq
where the fermionic Virasoro generators $\mbox{\bf L}_k(J)$ (\ref{Vir}) satisfy the
commutation relations
\beq\new\begin{array}{c}\label{vir2}
[\,\mbox{\bf L}_k(J),\psi(\mu)]\,=\,\Big(\mu^{k+1}\frac{\partial}{\partial\mu}+
\frac{k+1}{2}\mu^k\Big)\psi(\mu)\,\equiv\,A_k(\mu)\psi(\mu)
\end{array}\eeq
Consider the subset $\{\mbox{\bf L}_{-k}(J)\,,\;k>0\}$.
Taking into account the identity $\<n+N|\mbox{\bf L}_{-k}(J)=0\,,\;k>0$ one gets
instead of (\ref{1})
\beq\new\begin{array}{c}\label{Vir3}
\mbox{\bf L}_{-k}(T)\tau_n(T)\,=\,-\,
\frac{\prod\mu_i^{-n}}{\Delta(\mu)}\,
\sum_{m=1}^N\<n+N|\psi(\mu_N)\ldots[\psi(\mu_m),L_{-k}(J)]
\ldots\psi(\mu_1)g|n\>\,=\\=\,-\,
\frac{\prod\mu_i^{-n}}{\Delta(\mu)}\,
\sum_{m=1}^NA_{-k}(\mu_m)\,\<n+N|\psi(\mu_N)\ldots\psi(\mu_1)g|n\>\,=\\
=\,nkT_k\tau_n(T)\,-\,\frac{\<n|g|n\>}{\Delta(\mu)}\,\sum_{m=1}^N\,
A_{-k}(\mu_m)\,\det\,\phi^{(can)}_i(\mu_j)
\end{array}\eeq
In particular, the standard $\tau$-function of the KP hierarchy
$\tau_{n=0}(T)\equiv\tau(T)$ satisfies the relation
\beq\new\begin{array}{c}\label{Vir4}
\mbox{\bf L}_{-k}(T)\,\left(\frac{\det \phi^{(can)}_i(\mu_j)}{\Delta(\mu)}\right)
\,=\,-\,\frac{1}{\Delta(\mu)}\,\sum_{m=1}^N\,
A_{-k}(\mu_m)\,\det\,\phi^{(can)}_i(\mu_j)\\
A_{-k}(\mu)\,=\,\mu^{1-k}\frac{\partial}{\partial\mu}+
\frac{1-k}{2}\mu^{-k}
\end{array}\eeq
(we shall see below that the GKM partition function
corresponds exactly to the choice of $0$-vacuum state).
\bigskip\noindent
Similarly to (\ref{p-r}) consider the case when for some $q>1$
\beq\new\begin{array}{c}\label{vr1}
A_{-q}(\mu)\phi^{(can)}_i(\mu)\subset\mbox{Span}\,\{\phi^{(can)}(\mu)\}
\end{array}\eeq
From (\ref{Vir4}) it follows that the solution of the KP hierarchy is
invariant w.r.t. action of the corresponding Virasoro generator:
\beq\new\begin{array}{c}\label{vr2}
\mbox{\bf L}_{-q}(T)\tau(T)\,=\,0
\end{array}\eeq
Below it will be shown that the GKM partition function
satisfies conditions quite similar to (\ref{p-r2}) and (\ref{vr2}).
\bigskip\noindent
Relations (\ref{time}) and (\ref{Vir4}) are the simplest examples of
$W$-generators acting on $\tau$-functions in the Miwa parametrization.
Using the fermionic representation it is possible to write down
similar expressions for the higher generators.
\section{Generalized Kontsevich model: Preliminary investigation}
\subsection{GKM: the definition}
Recall that the standard Hermitian one-matrix model is defined as a multiple
integral over an $n\times n$ Hermitian matrix $X$
\beq\new\begin{array}{c}\label{1m}
Z_n[t]\,=\,\int e^{-\mbox{\footnotesize Tr}\,S(X,t)}dX
\end{array}\eeq
where the action $S(X,t)$ depends on infinitely many coupling constants
("the times")
\beq\new\begin{array}{c}
S(X,t)\,=\,\sum_{k=1}^\infty t_kX^k
\end{array}\eeq
and the measure
\beq\new\begin{array}{c}\label{me}
dX\,=\,\prod_{i=1}^{n}dX_{ii}\,
\prod_{i<j}2\,d(\mbox{Re}X_{ij})d(\mbox{Im}X_{ij})
\end{array}\eeq
is chosen in such a way that the following normalization condition is
fixed:
\beq\new\begin{array}{c}
\int e^{-\frac{1}{2}\mbox{\footnotesize Tr}\,X^2}dX\,=\,(2\pi)^{n^2/2}
\end{array}\eeq
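For $n=1$ this normalization condition is just the elementary Gaussian formula
\beq\new\begin{array}{c}
\int_{-\infty}^{\infty} e^{-\frac{1}{2}X^2}\,dX\,=\,(2\pi)^{1/2}
\end{array}\eeq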
After the integration over the angular variables \cite{Mehta}
the partition function (\ref{1m}) reduces to an $n$-fold integral over the
eigenvalues $x_1,\ldots ,x_n$ of the matrix $X$:
\beq\new\begin{array}{c}\label{2m}
Z_n[t]\,=\,\frac{(2\pi)^{\frac{n(n-1)}{2}}}{\prod_{k=1}^n k!}\,\int
\Delta^2(x)\,\prod_{i=1}^n e^{-S(x_i,t)}dx_i
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{vdm}
\Delta(x)\,\equiv\,\prod_{i>j}(x_i-x_j)
\end{array}\eeq
is the van der Monde determinant and
\beq\new\begin{array}{c}
U_n\,\equiv\,\frac{(2\pi)^{\frac{n(n-1)}{2}}}{\prod_{k=1}^n k!}
\end{array}\eeq
is the volume of the group $SU(n)$. The partition function (\ref{2m}) possesses
a remarkable integrability property: as a function of the times $\{t_k\}$ and
of the discrete variable $n$ (the size of the matrix) it is a
solution of the so-called Toda chain hierarchy. In particular, the function
\beq\new\begin{array}{c}
\tau_n(t)\,\equiv\,\frac{1}{n!U_n}\,Z_n[t]
\end{array}\eeq
satisfies the famous Toda equation
\beq\new\begin{array}{c}
\frac{\partial^2\log\tau_n}{\partial t_1^2}\,=\,\frac{\tau_{n+1}\tau_{n-1}}{\tau^2_n}
\end{array}\eeq
\bigskip\noindent
The main object we shall discuss below is a quite different one-matrix integral
depending on an external $N\times N$ Hermitian matrix $M$:
\beq\new\begin{array}{c}\label{def}
Z^V_N[M]\,=\,\frac{\int e^{-S(M,Y)}dY}{\int e^{-S_2(M,Y)}dY}
\end{array}\eeq
where the measure is the same as in (\ref{me}) (with $n$ substituted by $N$).
The explicit dependence on the matrix $M$ comes from the action $S(M,Y)$
and its quadratic part $S_2(M,Y)$; for any Taylor series
$V(Y)$ we set, by definition,
\beq\new\begin{array}{c}
S(M,Y)\,=\,\mbox{Tr}\,\big[V(Y+M)-V'(M)Y-V(M)\big]
\end{array}\eeq
so that this action contains neither constant nor linear terms in $Y$.
The denominator in (\ref{def}) is interpreted as a natural normalization factor
and is nothing but a Gaussian integral determined by the
quadratic part of the original action:
\beq\new\begin{array}{c}\label{qv}
S_2(M,Y)\,=\,\lim_{\epsilon\to 0}\,\frac{1}{\epsilon^2}\,S(M,\epsilon Y)
\end{array}\eeq
\bigskip\noindent
It is clear that the integral (\ref{def}) depends only on the eigenvalues
$\mu_{_1},\,\ldots\,,\mu_{_N}$ of the external matrix $M$. It is more
reasonable, however, to use another parametrization of the partition function
$Z^V_N$ treating it as a function of {\it the time variables} $T_k$ defined by
relations
\beq\new\begin{array}{c}\label{3m}
T_k\,=\,-\,\frac{1}{k}\,\sum_{i=1}^N\mu_i^{-k}
\end{array}\eeq
- these are appropriate analogues of the times entering the definition of the
standard matrix model (\ref{1m})
\footnote{Nevertheless, writing the explicit expression of the partition
function in the times $\{T_k\}$ requires some additional work.}.
The appearance of such variables is very natural by the reasons discussed
below.
\bigskip\noindent
The matrix model (\ref{def}) is called the Generalized Kontsevich Model.
The reason for this is that for the special choice of potential
\beq\new\begin{array}{c}
V(Y) = Y^3/3
\end{array}\eeq
the integral (\ref{def}) becomes the partition function of the original Kontsevich
model \cite{Kon}:
\beq\new\begin{array}{c}\label{4m}
Z^{(2)}_N[M]\,=\,\frac{\int dY\, e^{-\frac{1}{3}\mbox{\footnotesize Tr}\,Y^3 - \mbox{\footnotesize Tr}\,MY^2}}
{\int dY\, e^{-\mbox{\footnotesize Tr}\,MY^2}}
\end{array}\eeq
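Indeed, for $V(Y)=Y^3/3$ a short computation with the definitions above gives
\beq\new\begin{array}{c}
S(M,Y)\,=\,\mbox{Tr}\,\Big[\frac{(Y+M)^3}{3}-M^2Y-\frac{M^3}{3}\Big]\,=\,
\mbox{Tr}\,\Big[\frac{Y^3}{3}+MY^2\Big]\;,\hspace{1cm}
S_2(M,Y)\,=\,\mbox{Tr}\,MY^2
\end{array}\eeq
so that (\ref{def}) reproduces (\ref{4m}).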
Expression (\ref{4m}) has been derived in \cite{Kon} as a representation of
the generating
functional of intersection numbers of the stable cohomology classes on the
universal moduli space, i.e. it is defined to be a partition function of
Witten's $2d$ topological gravity \cite{Wit90}. In \cite{MMM91b} (see also
\cite{MS91,Wit91} for
alternative derivations) it was shown that as $N \rightarrow \infty$
the partition function $Z^{(2)}_\infty$, considered as a function of the time
variables (\ref{3m}), satisfies
the set of Virasoro constraints
\beq\new\begin{array}{c}\label{5m}
\mbox{\bf L}^{(2)}_nZ^{(2)}_\infty = 0, \ \ \ n \geq -1
\end{array}\eeq
\beq\new\begin{array}{c}
\mbox{\bf L}^{(2)}_n= {1\over 2} \sum _{k\ odd} kT_k\partial /\partial T_{k+2n}+
{1\over 4}
\sum _{{a+b=2n}\atop {a,b\ odd\ and>0}}\partial ^2/
\partial T_a\partial T_b+\nonumber \\
+ {1\over 4}
\sum _{{a+b=-2n}\atop {a,b\ odd\ and>0}}aT_abT_b+ {1\over 16}\delta _{n,0} -
\partial /\partial T_{3+2n}.
\end{array}\eeq
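For instance, the lowest of these constraints ($n=-1$) reads explicitly
\beq\new\begin{array}{c}
\mbox{\bf L}^{(2)}_{-1}Z^{(2)}_\infty\,=\,\Big\{{1\over 2}\sum_{k\ odd,\ k\geq 3}kT_k\partial/\partial T_{k-2}
+{1\over 4}\,T_1^2-\partial/\partial T_{1}\Big\}Z^{(2)}_\infty\,=\,0
\end{array}\eeq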
Constraints (\ref{5m}) are exactly the equations
\cite{FKN1},\cite{DVV} imposed on the square root of
the partition function (\ref{1m}) in the double-scaling limit.
\subsection{GKM in the determinant form}
After the shift of the integration variable
\beq\new\begin{array}{c}
X\,=\,Y+M
\end{array}\eeq
the numerator in (\ref{def}) can be written in the form
\beq\new\begin{array}{c}\label{d1}
\int e^{-S(Y,M)}dY\,=\,e^{\mbox{\footnotesize Tr}\,[V(M)-MV'(M)]}\,F [V'(M)]
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{d2}
F[\Lambda]\,=\,\int e^{-\mbox{\footnotesize Tr}\,V(X)+\mbox{\footnotesize Tr}\,\Lambda X}dX\;;
\hspace{1cm}\Lambda\,\equiv\,V'(M)
\end{array}\eeq
Using the integration over the angular variables of the matrix $X$ according
to \cite{IZ},\cite{CMM}, one gets
\beq\new\begin{array}{c}\label{d3}
F[\Lambda]\,=\,(2\pi)^{N(N-1)/2}\frac{1}{\Delta(\lambda)}
\int\,\Delta(x)\prod_{i=1}^N e^{-V(x_i)+\lambda_ix_i}dx_i
\end{array}\eeq
where $\{\lambda_i\}$ and $\{x_i\}$ are eigenvalues of the matrices $\Lambda$
and $X$ respectively. Therefore, the function $F[V'(M)]$ in (\ref{d2}) can
be represented as
\beq\new\begin{array}{c}\label{d4}
F[V'(M)]\,\sim\,
\left.\frac{1}{\Delta(V'(\mu))}\,\det\,\Big\{\int\,x^{j-1}
e^{-V(x)+V'(\mu_i)x}dx\Big\}\right|_{i,j=1}^N
\end{array}\eeq
where $\Delta(V'(\mu))\equiv\prod_{i>j}(V'(\mu_i)-V'(\mu_j))$ in accordance
with the definition (\ref{vdm}) and an inessential constant factor is
omitted.
\bigskip\noindent
Let us proceed now to the denominator of (\ref{def})
\beq\new\begin{array}{c}\label{d5}
D^V_N[M]\,\equiv\,\int dY\ e^{-S_2(M,Y)}
\end{array}\eeq
Making use of the $SU(N)$-invariance of the measure $dY$ one can easily diagonalize
$M$ in (\ref{d5}). Of course, this does not imply any integration over angular
variables and provides no factors like $\Delta (Y)$. Then for the evaluation of
(\ref{d5}) it remains to use the obvious rule of Gaussian integration,
\beq\new\begin{array}{c}
\int dY\ e^{-\sum ^N_{i,j} S_{ij}(M)Y_{ij}Y_{ji}} \sim \prod ^N_{i,j}
S^{-1/2}_{ij}(M)
\end{array}\eeq
(a constant factor is omitted again), and substitute the explicit
expression for $S_{ij}(M)$. If the potential is represented as a formal series,
\beq\new\begin{array}{c}\label{d6}
V(Y) =\sum^\infty _{k=1}\frac{v_k}{k}Y^k
\end{array}\eeq
(and thus is supposed to be analytic in $Y$ at $Y = 0)$, the definition
(\ref{qv}) implies that
\beq\new\begin{array}{c}
S_2(M,Y)\,=\,\frac{1}{2}\,\sum_{k=2}^\infty v_k
\left\{\sum_{a+b=k-2}\mbox{Tr}\,M^aYM^bY\right\}
\end{array}\eeq
and, consequently,
\beq\new\begin{array}{c}
S_{ij} =\sum ^\infty _{k=2}v_k\Big\{
\sum _{a+b=k-2}\mu ^a_i\mu ^b_j \Big\} =
\sum ^\infty _{k=2}v_k \frac{\mu^{k-1}_i-\mu^{k-1}_j}{\mu_i-\mu_j}=\\
= \frac{V'(\mu _i)-V'(\mu _j)}{\mu_i-\mu _j}
\end{array}\eeq
Hence,
\beq\new\begin{array}{c}\label{d7}
\int e^{-S_2(M,Y)}dY\,=\,\frac{\Delta(\mu)}{\Delta(V'(\mu))}\,
\prod_{i=1}^N[V''(\mu_i)]^{-1/2}
\end{array}\eeq
and substitution of (\ref{d1}), (\ref{d4}) and (\ref{d7}) to (\ref{def})
gives the following representation of the GKM partition function:
\beq\new\begin{array}{c}\label{g3}
Z^V_{_N}[M]\,=\,\frac{\Delta(V'(\mu))}{\Delta(\mu)}
\prod_{i=1}^N\Big\{[V''(\mu_i)]^{1/2}
e^{V(\mu_i)-\mu_iV'(\mu_i)}\Big\}\,
F[V'(M)]\,\equiv\\ \equiv\,
\frac{\det\,\Phi^V_i(\mu_j)|_{i,j=1}^N}{\Delta(\mu)}
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{v1}
\Phi^V_i(\mu)\,=\,[V''(\mu)]^{1/2}\,e^{V(\mu)-\mu V'(\mu)}\,
\int x^{i-1}e^{-V(x)+xV'(\mu)}dx
\end{array}\eeq
\subsection{Functional relations}
The vectors (\ref{v1}) form a linearly independent infinite set. In the
generic situation the basis vectors determining the $\tau$-function
are functionally independent since they are parametrized by an arbitrary
$gl_\infty$ matrix. On the contrary, in the GKM case the solution is
parametrized, loosely speaking, by a vector (the coefficients of the potential $V(x)$).
In this sense the solution (\ref{g3}) is degenerate; the degeneration results
in functional relations (the constraints) on the level of the basis vectors
which, in turn, can be considered as a definition of the GKM from the Grassmannian
point of view \cite{SP}.\\
Consider the model parametrized by an arbitrary polynomial potential of degree
$p+1$, $p\geq 2$:
\beq\new\begin{array}{c}
V(x)\,=\,\sum_{k=1}^{p+1}\frac{v_k}{k}x^k
\end{array}\eeq
First of all, after multiplying (\ref{v1}) by $V'(\mu)$,
integration by parts gives (assuming vanishing boundary conditions):
\beq\new\begin{array}{c}
V'(\mu)\Phi^V_i(\mu)\,=\,[V''(\mu)]^{1/2}\,e^{V(\mu)-\mu V'(\mu)}\,
\int x^{i-1}e^{-V(x)}\frac{\partial}{\partial x}e^{xV'(\mu)}dx\,=\\
=\,[V''(\mu)]^{1/2}\,e^{V(\mu)-\mu V'(\mu)}\,
\int \Big\{x^{i-1}V'(x)-(i-1)x^{i-2}\Big\}e^{-V(x)+xV'(\mu)}dx\
\end{array}\eeq
i.e.
\beq\new\begin{array}{c}\label{g7}
V'(\mu)\Phi^V_i(\mu)\,=\,\sum_{k=1}^{p+1}v_k\Phi^V_{i+k-1}(\mu)-
(i\!-\!1)\Phi^V_{i-1}(\mu)\;;\hspace{1cm}i=1, 2,\ldots
\end{array}\eeq
This relation generalizes the notion of the $p$-reduced KP hierarchy;
for the monomial potential one gets exactly the condition
(\ref{p-r}). We shall show below (Sect. \ref{eq-h}) that the general
constraint (\ref{g7}) has a natural interpretation in terms of equivalent
hierarchies.\\
There is another type of constraint which is a generalization of (\ref{vr1}).
Indeed,
\beq\new\begin{array}{c}\label{g4}
\Phi^V_i(\mu)\,=\,[V''(\mu)]^{1/2}\,e^{V(\mu)-\mu V'(\mu)}\,
\frac{1}{V''(\mu)}\frac{\partial}{\partial\mu}
\int x^{i-2}\,e^{-V(x)+xV'(\mu)}dx\,\equiv\,A^V(\mu)\Phi^V_{i-1}(\mu)
\end{array}\eeq
where $A^V(\mu)$ is the first-order differential operator of a special
form
\beq\new\begin{array}{c}\label{g5}
A^V(\mu)\,=\,\frac{e^{V(\mu)-\mu V'(\mu)}}{[V''(\mu)]^{1/2}}\,
\frac{\partial}{\partial\mu}\,\frac{e^{-V(\mu)+\mu V'(\mu)}}{[V''(\mu)]^{1/2}}\,=\\
=\,\frac{1}{V''(\mu)}\frac{\partial}{\partial\mu}\,+\,\mu\,-\,
\frac{V'''(\mu)}{2[V''(\mu)]^2}
\end{array}\eeq
Thus, we have the functional relation
\beq\new\begin{array}{c}\label{g6}
\Phi^V_{i+1}(\mu)\,=\,A^V(\mu)\Phi^V_i(\mu)
\end{array}\eeq
which leads to a kind of string equation similar to (\ref{vr2}). To obtain
the differential (w.r.t. the time variables) constraint on the GKM partition function
resulting from (\ref{g6}), the notion of quasiclassical hierarchies is
required (Sect. \ref{qua}).
\subsection{GKM as a solution of KP hierarchy}
We have proved that the GKM partition function (\ref{def}) can be represented in
the determinant form
\beq\new\begin{array}{c}\label{g3'}
Z_N^V[M]\,=\,\frac{\det\,\Phi^V_i(\mu_j)|_{i,j=1}^N}{\Delta(\mu)}
\end{array}\eeq
where the vectors $\Phi^V_i(\mu)$ are defined by (\ref{v1}).
Moreover, using the steepest descent method it is not hard
to find the following asymptotics of the GKM basis vectors:
\beq\new\begin{array}{c}\label{v2}
\Phi^V_i(\mu)\,=\,\mu^{i-1}\Big(1+O(\mu^{-p-1})\Big)
\hspace{1cm}\mu\to\infty
\end{array}\eeq
- compare with (\ref{det2}), (\ref{as2}).
From the above considerations it follows that, being written in the Miwa times
(\ref{mi}), the partition function (\ref{g3'}) solves the KP hierarchy,
i.e.
\beq\new\begin{array}{c}
Z[T]\,\sim\,\tau_n(T)
\end{array}\eeq
with some (yet unknown) value of the vacuum state $n$ (see definition
(\ref{tauTL})).
\bigskip\noindent
Before proceeding further, an important remark
concerning the dependence on $N$ in the formula (\ref{g3'}) deserves
mention. The entire set $\{\Phi^V_i(\mu )\}$ is certainly $N$-independent
and infinite. It is evident that the $\Phi^V_i$'s are linearly independent.
The r.h.s. of (\ref{g3'}) naturally represents the
$\tau$-function for an {\it infinitely large} matrix $M$. In order
to return to the case of finite $N$, it is enough to require that all
eigenvalues of $M$, except $\mu _1,\ldots,\mu _N$, tend to infinity. In this
sense the partition function $Z_N^V[M]$ is
independent of $N$; the entire dependence on $N$ comes from the argument
$M$: $N$ is the number of finite eigenvalues of $M$. As a simple consistency
check, let us additionally send $\mu _N$ to infinity in
(\ref{g3'}); then, according to (\ref{v2}),
\beq\new\begin{array}{c}
\mbox{det}_{_{\!N}}\Phi^V_i(\mu_j)=(\mu_{_N})^{N-1}\cdot \mbox{det}_{_{\!N-1}}
\Phi^V_i(\mu _j)\cdot(1 +O(1/\mu_{_N}))
\end{array}\eeq
and
\beq\new\begin{array}{c}
\Delta_{_{\!N}}(\mu)\sim(\mu_{_N})^{N-1}\Delta_{_{\!N-1}}(\mu)(1 +
O(1/\mu_{_N}))
\end{array}\eeq
Therefore,
\beq\new\begin{array}{c}
Z_N^V[M] \stackreb{\mu_{_N}\to\infty}{\sim} Z^V_{N-1}[M]\cdot(1+O(1/\mu_{_N}))
\end{array}\eeq
This is the exact statement about the $N$-dependence of the GKM partition
function. In this sense one can claim that the GKM partition function is
independent of $N$.
Therefore, we often omit the subscript $N$ in what follows.
\bigskip\noindent
As a solution of the KP hierarchy, the partition function
(\ref{g3'}) is parametrized by the coefficients
of the polynomial $V$. Since the latter depends only on a finite number
of parameters, the original matrix integral describes a very particular
$\tau$-function. Therefore, the question arises whether it is possible to write
down some kind of constraints which naturally select this specific solution
from the huge set of typical $\tau$-functions parametrized by a $gl_\infty$
matrix $||A_{ij}||$ (\ref{point}). It turns out that the GKM $\tau$-function
satisfies a subset of $W_{1+\infty}$ constraints;
indeed, one can find a number of differential (in the KP times $\{T_k\}$)
operators which annihilate the function (\ref{g3'}). This gives the
invariant description of the model in the spirit of \cite{FKN1}. The problem
is to describe the action of these operators on the $\tau$-function
which is essentially written in the Miwa variables. Of course, due to
\cite{KS}, \cite{FKN2} it is well known how to reformulate all the
constraints on the level of the basis vectors: the complete information
concerning the invariant properties of the $\tau$-functions can be deciphered
from relations similar to (\ref{g6}), (\ref{g7}) and vice versa;
this has been demonstrated explicitly in sections \ref{td} and \ref{virs}.
In the case of the monomial potential the invariant properties
of the basis vectors give, indeed, the complete information (see below).
It is important, however, that the relations mentioned above are not
enough to describe the non-trivial evolution of the GKM partition
function w.r.t. deformations of the potential $V$ (say, from the monomial
to an arbitrary polynomial of the same degree). Taking such
deformations into account results in a highly involved mixture of the standard KP flows and
the so-called quasiclassical (or dispersionless) ones.
In order to interpret the latter evolution one needs to know the action
of the operators which do not annihilate the $\tau$-function of the GKM. The
non-invariant actions cannot be reformulated in terms of the basis
vectors; explicit formulae on the level of $\tau$-functions are required.
\subsection{GKM with monomial potential. $p$-reduced KP hierarchy and
$\mbox{L}_{-p}$ constraint}
Consider the GKM partition function in the simplest case of monomial
potential $V(X)=\frac{\textstyle X^{p+1}}{\textstyle p+1}$:
\beq\new\begin{array}{c}\label{r0}
Z^{(p)}[M]\,=\,\frac{
\raise-3pt\hbox{$e^{-\frac{p}{p+1}\mbox{\footnotesize Tr} M^{p+1}}$}{\displaystyle\int}
\raise-3pt\hbox{$dX e^{\mbox{\footnotesize Tr}\,\big[- \frac{X^{p+1}}{p+1}+M^pX\big]}$}}
{\displaystyle\int
\raise-2pt\hbox{
$dX e^{-\frac{1}{2}\mbox{\footnotesize Tr}\,\big[\sum_{a+b=p-2}M^aXM^bX\big]}$}}
\,=\,\frac{\det \Phi^{(p)}_i(\mu_j)}{\Delta(\mu)}
\end{array}\eeq
The basis vectors
\beq\new\begin{array}{c}\label{r2}
\Phi^{(p)}_i(\mu)\,\equiv\,\sqrt{p\mu^{p-1}}\,e^{-\frac{\scriptstyle p}
{\scriptstyle p+1}{\mu^{p+1}}}
\int x^{i-1}\,e^{-\frac{\scriptstyle x^{p+1}}{\scriptstyle p+1}+x\mu^p}dx
\end{array}\eeq
satisfy the obvious relations
\beq\new\begin{array}{c}\label{r3}
\mu^p\,\Phi_i^{(p)}(\mu)\,=\,\Phi^{(p)}_{i+p}(\mu)\,-\,
(i-1)\Phi_{i-1}^{(p)}(\mu)
\end{array}\eeq
\beq\new\begin{array}{c}\label{r4}
A^{(p)}(\mu)\,\Phi_i^{(p)}(\mu)\,=\,\Phi_{i+1}^{(p)}(\mu)
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{r5}
A^{(p)}(\mu)\,\equiv\,\frac{1}{p\mu^{p-1}}\frac{\partial}{\partial\mu}\,-\,
\frac{p-1}{2p\,\mu^p}+\,\mu
\end{array}\eeq
is the Kac-Schwarz operator \cite{KS}. Note that up to the linear term $\mu$
it is proportional to the Virasoro operator $A_{-p}$ defined in (\ref{Vir4}).
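For instance, in the case $p=2$ of the original Kontsevich model (\ref{4m})
the Kac-Schwarz operator (\ref{r5}) reads
\beq\new\begin{array}{c}
A^{(2)}(\mu)\,=\,\frac{1}{2\mu}\frac{\partial}{\partial\mu}\,-\,\frac{1}{4\mu^2}\,+\,\mu
\end{array}\eeq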
\bigskip\noindent
We have already seen that the partition function (\ref{r0}) is a
$\tau$-function of the KP hierarchy. Now more concrete statements can be made.
First of all, the GKM $\tau$-function is a solution of the $p$-reduced KP
hierarchy. Moreover, $Z^{(p)}[T]$ is independent of the times $T_{np}$:
\beq\new\begin{array}{c}\label{r6}
\frac{\partial Z^{(p)}[T]}{\partial T_{np}}\,=\,0 \;,\hspace{1cm}n=1,2,\,\ldots
\end{array}\eeq
In addition, the partition function (\ref{r0}) satisfies the
$\mbox{\bf L}_{-p}$ constraint:
\beq\new\begin{array}{c}\label{r7}
\frac{1}{p}\,\mbox{\bf L}_{-p}Z^{(p)}[T]+\frac{\partial Z^{(p)}[T]}{\partial T_1}\,=\,0
\end{array}\eeq
Let us give some comments concerning the relation (\ref{r6}).
Due to (\ref{v2}) one should note that the first $p+1$
vectors $\Phi^{(p)}_1(\mu),\,\ldots\,,\Phi^{(p)}_{p+1}(\mu)$ have a canonical
structure (\ref{as1}). Therefore, for $k=p$ the formula (\ref{3der}) holds if
one substitutes $\Phi^{(p)}_i\,,\;i=1,\,\ldots\,N$ instead of the canonical
vectors $\Phi^{(can)}_i$ (see a more careful discussion of this point in
Sect. \ref{gen}). Moreover, due to (\ref{r3}) the combination
$\Phi^{(p)}_{i+p}-\mu^p\Phi^{(p)}_i$ does not contain the vector
$\Phi^{(p)}_i$. Hence, from (\ref{p-r2})
\beq\new\begin{array}{c}\label{r8}
\frac{\partial Z^{(p)}[T]}{\partial T_p}\,=\,0
\end{array}\eeq
From general KP theory one can deduce that the constraint (\ref{r8}) implies all
higher relations of the form $\partial_{T_{np}}Z^{(p)}[T]= Const\cdot\,
Z^{(p)}[T]$. Actually, this follows from the relations (\ref{r3}) due to the
discussion in Sect. \ref{td}. Thus, $Z^{(p)}[T]$ is, indeed, a
$\tau$-function of the $p$-reduced KP hierarchy, i.e. the corresponding Lax
operator satisfies the constraint
\beq\new\begin{array}{c}\label{lp}
L^p\,=\,(L^p)_{_+}
\end{array}\eeq
A {\it simple} proof of the stronger statement (\ref{r6}), namely,
the complete independence of the times $T_{np}\,,\;n\geq 1$, is unfortunately
absent (see \cite{GKM} and, especially, \cite{IZ1} for details).
\bigskip\noindent
To derive the constraint (\ref{r7}) one needs again the canonical
structure of the GKM vectors
$\Phi^{(p)}_1(\mu),\,\ldots\,,\Phi^{(p)}_{p+1}(\mu)$.
It is important that because of this fact the
relations (\ref{time}) (with $k=1$) and (\ref{Vir4}) (with $k=p$) can be
written in terms of $\{\Phi^{(p)}_i\}$. The Kac-Schwarz operator (\ref{r5})
coincides with the formal shift operator $B(\mu)$ (\ref{sh}) due to
(\ref{r4}) while $A_{-p}(\mu)$ in (\ref{Vir4}) is represented as
$p(A^{(p)}(\mu)-\mu)$. Hence, one arrives at the relations
\beq\new\begin{array}{c}\label{r10}
\frac{\partial Z^{(p)}}{\partial T_1}\,=\,\frac{1}{\Delta(\mu)}\sum_{m=1}^N
\left(A^{(p)}(\mu_m)-\mu_m\right)\det\Phi^{(p)}_i(\mu_j)
\end{array}\eeq
\beq\new\begin{array}{c}\label{r11}
\mbox{\bf L}_{-p}Z^{(p)}\,=\,-\,p\,\frac{1}{\Delta(\mu)}\,\sum_{m=1}^N\,
\left(A^{(p)}(\mu_m)-\mu_m\right)\,\det\Phi^{(p)}_i(\mu_j)
\end{array}\eeq
thus getting the constraint (\ref{r7}). Note that the latter can be written
in the form
\beq\new\begin{array}{c}\label{r12}
\frac{1}{2p}\sum_{k=1}^{p-1}k(p-k)T_kT_{p-k}\,+\,\frac{1}{p}
\sum_{k=1}^\infty(k+p)\Big(T_{k+p}+\frac{p}{p+1}\delta_{k,1}\Big)
\frac{\partial \log Z^{(p)}}{\partial T_k}\,=\,0
\end{array}\eeq
To conclude, the GKM $\tau$-function with monomial potential satisfies
the usual $\mbox{\bf L}_{-p}$-constraint (the integrated version of the string equation)
with the shifted times
\beq\new\begin{array}{c}\label{r13}
T_k\,\to\,T_k+\frac{p}{p+1}\delta_{k,p+1}
\end{array}\eeq
\subsection{General case: $V'$-reduction and transformation of times}
\label{gen}
In the general situation of an arbitrary polynomial potential of degree $p+1$
one gets the following matrix model:
\beq\new\begin{array}{c}\label{gr}
Z^V[T]\,=\,
\frac{e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[V(M)-MV'(M)]$}}}
{\int e^{-S_2(X,M)}dX}\,
\int e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[-V(X)+XV'(M)]$}}\;dX
\end{array}\eeq
The partition function (\ref{gr}) can be represented in the standard
determinant form:
\beq\new\begin{array}{c}\label{dv}
Z^V[T]\,=\,\frac{\det\Phi^V_i(\mu_j)}{\Delta(\mu)}
\end{array}\eeq
Therefore, $Z^V[T]$ is a $\tau$-function of KP hierarchy.
Its basis vectors
\beq\new\begin{array}{c}\label{gr0}
\Phi^V_i(\mu)\,=\,[V''(\mu)]^{1/2}\,e^{V(\mu)-\mu V'(\mu)}
\int x^{i-1}e^{-V(x)+xV'(\mu)}dx
\end{array}\eeq
satisfy the relations
\beq\new\begin{array}{c}\label{gr3}
V'(\mu)\Phi^V_i(\mu)\,=\,\sum_{k=1}^{p+1}v_k\Phi^V_{i+k-1}(\mu)-
(i\!-\!1)\Phi^V_{i-1}(\mu)\;;\hspace{1cm}i=1, 2,\ldots
\end{array}\eeq
\beq\new\begin{array}{c}\label{gr2}
\Phi^V_{i+1}(\mu)\,=\,A^V(\mu)\Phi^V_i(\mu)
\end{array}\eeq
where $A^V(\mu)$ is the first-order differential operator
\beq\new\begin{array}{c}\label{gr1}
A^V(\mu)\,=\,\frac{1}{V''(\mu)}\frac{\partial}{\partial\mu}\,-\,
\frac{V'''(\mu)}{2[V''(\mu)]^2}+\mu
\end{array}\eeq
As before, these relations impose severe restrictions on the hierarchy.
It can be shown \cite{GKM} that $Z^V[T]$ satisfies the generalized
Virasoro constraint
\beq\new\begin{array}{c}\label{vg'}
\mbox{\bf L}^V\,Z^V[T]\,=\,0
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{vg}
\mbox{\bf L}^V=\sum_{k\geq 1}\mbox{Tr}\,
\Big[\frac{1}{V''(M)M^{k+1}}\Big]\frac{\partial}{\partial T_k} -
{1\over 2}\sum _{i,j}\frac{1}{V''(\mu_i)V''(\mu_j)}
\frac{V''(\mu _i)\!-\!V''(\mu_j)}{\mu _i-\mu _j}+{\partial\over\partial T_1}
\end{array}\eeq
For the monomial potential this constraint reduces to (\ref{r12}) while, in
general, it is impossible to write a compact expression for (\ref{vg})
in the original times (\ref{mi}). Nevertheless, one can construct a set of new
times $\{{\widetilde T}_k\}$ as linear combinations of the ``old'' ones, $\{T_k\}$, in
such a way that the operator (\ref{vg}) is transformed to the standard
one when expressed in $\widetilde{T}_k$. The way to find the appropriate linear
combinations is as follows. From (\ref{gr3}) one sees that the GKM basis vectors
determine an invariant point of the Grassmannian such that
\beq\new\begin{array}{c}\label{gr4}
{\cal P}(\mu)\Phi^V_i(\mu)\subset\mbox{Span}\,\{\Phi^V(\mu)\}\,;
\hspace{1cm}{\cal P}(\mu)\equiv V'(\mu)
\end{array}\eeq
This condition is a natural generalization of the standard $p$-reduction
and is called the $V'$-reduction.
The general ideology \cite{SP} tells us that the pseudo-differential Lax
operator
\beq\new\begin{array}{c}\label{gr5}
L\,=\,\partial+u_2\partial^{-1}+u_3\partial^{-2}+\ldots\\
L\Psi\,=\,\mu\Psi
\end{array}\eeq
corresponding to this point obeys the property
\beq\new\begin{array}{c}\label{gr6}
[{\cal P}(L)]_{_-}\,=\,0
\end{array}\eeq
i.e. $V'(L)$ is a differential operator of order $p$. Therefore, there
exists the Lax operator of KP hierarchy
\beq\new\begin{array}{c}\label{gr7}
\widetilde L\,=\,\partial+\widetilde u_2\partial^{-1}+\widetilde u_3\partial^{-2}+\ldots\\
\widetilde L\Psi\,=\,\widetilde\mu\Psi
\end{array}\eeq
such that
\beq\new\begin{array}{c}\label{gr8}
\widetilde L^p\,=\,{\cal P}(L)
\end{array}\eeq
and, certainly, the relation between the spectral parameters of the
corresponding hierarchies is
\beq\new\begin{array}{c}\label{gr9}
\widetilde\mu\,=\,{\cal P}^{1/p}(\mu)
\end{array}\eeq
Now it is clear that the relevant spectral parameter is $\widetilde\mu$ rather
than $\mu$.
Therefore, the times appropriate for the description of the
$V'$-reduced KP hierarchy should be determined by the relations
\beq\new\begin{array}{c}\label{gr10}
\widetilde{T}_k\,=\,-\,\frac{1}{k}\sum_i\widetilde\mu^{-k}_i\,\equiv\,
-\,\frac{1}{k}\sum_i{\cal P}^{-k/p}(\mu_i)
\end{array}\eeq
In order to find the relation between $\{T_k\}$ and $\{\widetilde{T}_k\}$
one introduces the notion of the residue operation $\mbox{Res}$. For any
Laurent series $F(\lambda)=\sum_k F_k\lambda^k$
\beq\new\begin{array}{c}\label{res}
\mbox{Res}\,F(\lambda)d\lambda\,=\,F_{-1}
\end{array}\eeq
It is easy to see that this operation satisfies the properties
\beq\new\begin{array}{c}\label{res0}
\mbox{Res}\,\frac{dF(\lambda)}{d\lambda}\,d\lambda\,=\,0\\
\mbox{Res}\,Fd_\lambda G\,=\,-\,\mbox{Res}\,Gd_\lambda F\\
\mbox{Res}\,Fd_\lambda G\,=\,\mbox{Res}\,F_{_+}d_\lambda G_{_-}\,+\,
\mbox{Res}\,F_{_-}d_\lambda G_{_+}
\end{array}\eeq
for any two Laurent series $F(\lambda)\equiv F_{_+}(\lambda)+F_{_-}(\lambda)$ and
$G(\lambda)\equiv G_{_+}(\lambda)+G_{_-}(\lambda)$ where $F_{_+}\,(F_{_-})$ are the
parts of the corresponding Laurent series containing only non-negative
(negative) powers in $\lambda$.\\
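Note that the second of these properties follows immediately from the first one,
since $F\,d_\lambda G+G\,d_\lambda F=d_\lambda(FG)$ is a total derivative and its residue
vanishes.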
Using the properties of $\mbox{Res}$ one finds
relations:
\beq\new\begin{array}{c}\label{gr11}
\widetilde{T}_k\,=\,\frac{1}{k}\sum_{m=k}^\infty
mT_m\,\mbox{Res}\,\lambda^{m-1}{\cal P}^{-k/p}(\lambda)d\lambda
\end{array}\eeq
\beq\new\begin{array}{c}\label{gr12}
T_k\,=\,\sum_{m=k}^\infty
\widetilde{T}_m\,\mbox{Res}\,\lambda^{-k-1}{\cal P}^{m/p}(\lambda)d\lambda
\end{array}\eeq
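As a simple consistency check, for the monomial potential one has
${\cal P}(\lambda)=\lambda^p$ and ${\cal P}^{-k/p}(\lambda)=\lambda^{-k}$, so that
\beq\new\begin{array}{c}
\mbox{Res}\,\lambda^{m-1}\lambda^{-k}\,d\lambda\,=\,\delta_{mk}
\hspace{1cm}\Longrightarrow\hspace{1cm}
\widetilde{T}_k\,=\,T_k
\end{array}\eeq
i.e. in this case the new times coincide with the old ones, as they should.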
Let us prove now that for an arbitrary polynomial potential the GKM partition
function (\ref{gr}) is independent of the time $\widetilde{T}_p$:
\beq\new\begin{array}{c}\label{gr13}
\frac{\partial Z^V[T(\widetilde{T})]}{\partial \widetilde{T}_p}\,=\,0
\end{array}\eeq
i.e. the $V'$-reduced KP hierarchy resembles the standard $p$-reduction when
one considers the evolution along the new integrable flows $\widetilde T_k$.
\bigskip\noindent
Actually, we shall derive more general formulas for the derivatives of $Z^V$
w.r.t. the first $p$ times $\widetilde T_1,\,\ldots\,,\widetilde T_p$.
To do so, one needs to calculate the derivatives w.r.t. the old times.
Let us apply the formula
(\ref{3der}) to the GKM partition function. Immediately a problem appears.
Indeed, the GKM vectors (\ref{gr0}) have a nice integral representation, but
these are not the canonical ones because of the asymptotics (\ref{v2}). On the
other hand, the formula (\ref{3der}) is valid only for canonical basis
vectors. A compact integral representation for $\Phi^{(can)}_i(\mu)$ is
absent for the GKM. Therefore, it is impossible to find a matrix integral
representation for the derivatives $\partial_{_{T_k}}Z^V$
which is valid for {\it all} times $k\geq 1$.
Fortunately, this problem disappears when
considering the derivatives w.r.t. the first $p$ times $T_1,\,\ldots\,T_p$. The
key point is that the first $p+1$ basis vectors
$\Phi^V_1(\mu),\ldots,\Phi^V_{p+1}(\mu)$ already have a canonical form
(see (\ref{v2})).
As a corollary, one can directly use (\ref{3der}) with the simple
substitution $\Phi^{(can)}\,\to\,\Phi^V(\mu)$ (i.e. without any modification)
for the derivatives with respect to these times. The derivatives w.r.t. higher
times do not allow such a replacement. To illustrate this statement,
one can check the ``marginal'' derivative $\partial_{_{T_p}}Z^V$ which contains,
for example, the particular term (see (\ref{3der}) with $k=p$)
\beq\new\begin{array}{c}\label{2der}
\frac{1}{\Delta(\mu)}\,\left|
\begin{array}{ccc}
\Phi^{(can)}_1(\mu_1) &\ldots&
\Phi^{(can)}_1(\mu_N)\\
\ldots &\ldots&\ldots \\
\Phi^{(can)}_{N-1}(\mu_1) &\ldots&\Phi^{(can)}_{N-1}(\mu_N)\\
\Phi^{(can)}_{N+p}(\mu_1) &\ldots&\Phi^{(can)}_{N+p}(\mu_N)
\end{array}\right|
\end{array}\eeq
Obviously, the first $N-1$ rows in this expression can be written in
terms of the GKM vectors (\ref{gr0}). The only trouble can come from the last
row. But due to the asymptotics (\ref{v2}) the canonical vectors can
be represented in the GKM basis as
$\Phi^{(can)}_{N+p}=\Phi^V_{N+p}+(\alpha_{_{N+p}}\Phi^V_{N-1}+\mbox{lower terms})$
with some constant $\alpha_{_{N+p}}$, and the row with entries
$\alpha_{_{N+p}}\Phi^V_{N-1}(\mu_1),\,\ldots\,, \alpha_{_{N+p}}\Phi^V_{N-1}(\mu_N)$
(as well as the rows with lower terms) does not contribute to the determinant
(\ref{2der}). This conclusion is true for all other determinants contributing
to $\partial_{_{T_p}}Z^V$. Hence, the formula (\ref{3der}) with $k=p$ remains
unchanged if one simply substitutes $\Phi^V_i$ instead of $\Phi^{(can)}_i$.
The same reasoning applies, certainly, to all derivatives
$\partial_{_{T_k}}Z^V$ with $k\leq p$.
On the contrary, the derivative $\partial_{_{T_{p+1}}}Z$ contains a determinant
similar to (\ref{2der}) with
$\Phi^{(can)}_{N+p+1}(\mu_1),\,\ldots\,, \Phi^{(can)}_{N+p+1}(\mu_N)$
in the last row. In this case the transformation
$\Phi^{(can)}_{N+p+1}=\Phi^V_{N+p+1}+(\alpha_{_{N+p+1}}\Phi^V_{N}+\mbox{lower
terms})$
results in an additional term proportional to $Z^V$. Evidently, the higher
derivatives become more and more involved when expressed through the
non-canonical vectors (\ref{gr0}).\\
Due to the reasons described above only the first $p$ derivatives have
simple integral representations. In this case the formula (\ref{3der}) gives
for $1\leq k\leq p$
\beq\new\begin{array}{c}\label{t1}
\frac{\partial Z^V[T]}{\partial T_k}\,=\,
\frac{e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[V(M)-MV'(M)]$}}}
{\int e^{-S_2(X,M)}dX}\;\int \mbox{Tr}\,[X^k-M^k]
\,e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[-V(X)+XV'(M)]$}}\;dX
\end{array}\eeq
or, in compact notations,
\beq\new\begin{array}{c}\label{t2}
\frac{\partial}{\partial T_k}\log Z^V[T]\,=\,\<\mbox{Tr}\,X^k-\mbox{Tr}\,M^k\>\;;
\hspace{1cm}1\leq k\leq p
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{aver}
\<{\cal F}(X)\>\,\equiv\,\frac{\int
{\cal F}(X)\,e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[-V(X)+XV'(M)]$}}\;dX}
{\int \,e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[-V(X)+XV'(M)]$}}\;dX}
\end{array}\eeq
Indeed, the calculation of $\<\mbox{Tr}\,X^k\>$ does not differ
in technical details from the one
leading to (\ref{g3}) and gives exactly the r.h.s. of (\ref{3der})
with $\Phi^{(can)}$ substituted by $\Phi^V$
\footnote{
The fermionic approach together with the above reasoning allows one to write
more complicated derivatives quite explicitly.
Without proof we represent the formula
$$
\frac{\partial^2 \log Z^V_N}{\partial T_k\partial T_m}=
\left\<\Big(\mbox{\rm Tr}X^k-\mbox{\rm Tr}M^k\Big)
\Big(\mbox{\rm Tr}X^m-\mbox{\rm Tr}M^m\Big)\right\>\;;
\hspace{0.5cm}1\leq k+m\leq p $$
The generalization is evident.
}
.
Using the relation (\ref{gr12}) between old and new times it
is easy to find the formulas we need:
\beq\new\begin{array}{c}\label{t3}
\frac{\partial}{\partial \widetilde
T_k}\log Z^V[T(\widetilde T)]\,=\, \left\<\mbox{Tr}\,[{\cal P}^{k/p}(X)]_{_+}
-\mbox{Tr}\,[{\cal P}^{k/p}(M)]_{_+}\right\>\;; \hspace{1cm}1\leq k\leq p
\end{array}\eeq
Note that the r.h.s. of (\ref{t3}) is expressed through the eigenvalues
of the transformed matrix $\widetilde M$, i.e. $M$ should be substituted by the
solution of the equation
\beq\new\begin{array}{c}\label{gr14}
{\cal P}(M)\,=\,\widetilde M^p
\end{array}\eeq
The relation (\ref{gr13}) can be readily proved now. Indeed,
\beq\new\begin{array}{c} \frac{\partial}{\partial \widetilde T_p}\log
Z^V[T(\widetilde T)]\,=\, \<\mbox{Tr}\,V'(X)-\mbox{Tr}\,V'(M)\>
\end{array}\eeq
and the r.h.s. vanishes since the expression under the integral is a total
derivative. We have proved
that, being written in the times $\{\widetilde{T}_k\}$, the partition function
(\ref{gr}) has something to do with a solution of the $p$-reduced KP hierarchy.
Therefore, it is natural to expect that the complicated Virasoro constraint
(\ref{vg'}), (\ref{vg}) can also be simplified when represented as a
differential operator w.r.t. $\{\widetilde{T}_k\}$. This expectation is {\it almost}
true but slightly premature: the point is that the partition function
$Z^V[T(\widetilde T)]$ is not a $\tau$-function in general. Indeed, to express the
partition function (\ref{gr}) in the new times (\ref{gr10}) means to substitute
the spectral parameters $\{\mu_i\}$ entering (\ref{dv}) by the (formal)
solution of the equation (\ref{gr9}). Evidently, the transformation
\beq\new\begin{array}{c}
\mu\,=\,\widetilde{\mu}(1+O(\widetilde{\mu}^{-1}))
\end{array}\eeq
destroys the structure of the van der Monde determinant and, hence, the
function $Z^V[M(\widetilde M)]$ does not possess the standard form.
Nevertheless, the situation can be repaired: one can extract the genuine
$\tau$-function of the $p$-reduced KP hierarchy from $Z^V[M(\widetilde M)]$.
To describe the procedure we need to elaborate on the notion of equivalent
hierarchies.
\section{Equivalent hierarchies}\label{eq-h}
\subsection{Definition}
Consider the spectral problem $L\Psi=\mu\Psi$ where the operator $L$
defining the KP hierarchy has a standard form (\ref{l-op}). For any
given function $f$
\beq\new\begin{array}{c}\label{f}
f(\mu)=\sum_{i=-\infty}^0 f_i\mu^{i+1}\hspace{1cm}f_0=1
\end{array}\eeq
with time independent coefficients one can construct the new $L$-operator
\beq\new\begin{array}{c}\label{eq1}
\widetilde{L}\,=\,f(L)
\end{array}\eeq
which has the same structure as the original one. The spectral problem now
is
\beq\new\begin{array}{c}
\widetilde{L}\Psi\,=\,\widetilde{\mu}\Psi
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{tm}
\widetilde{\mu}\,\equiv\,f(\mu)
\end{array}\eeq
The new operator $\widetilde L$ (\ref{eq1}) determines a KP hierarchy which is
called equivalent to the original one \cite{S}. Introducing the
differential operators $\widetilde{B}_k\equiv (\widetilde{L}^k)_{\!{_+}}$, one can
construct the evolution equations
\beq\new\begin{array}{c}\label{eq2}
\frac{\partial\widetilde{L}}{\partial\widetilde{T}_k}\,=\,[\widetilde{B}_k,\widetilde{L}]
\end{array}\eeq
which can be considered as a definition of times $\{\widetilde{T}_i\}$. Obviously,
\beq\new\begin{array}{c}
\widetilde{B}_m\,=\,B_k\,\frac{\partial T_k}{\partial \widetilde{T}_m}
\end{array}\eeq
The question is what is the relation between the solutions of the equivalent
hierarchies determined by the operators $L$ and $\widetilde{L}$. First of all, one
needs to establish the explicit relationship between $\{T_i\}$ and
$\{\widetilde{T}_i\}$. The second step is to find the $\tau$-function of the
``deformed'' $\widetilde{L}$-hierarchy which corresponds to an arbitrary given function
$\tau(T)$ of the original $L$-hierarchy. This gives the precise mapping
between the equivalent hierarchies.
\subsection{Variation of the spectral parameter}
It is evident that relation (\ref{tm}) can be considered as a transformation
of the original spectral parameter $\mu$ under the action of the Virasoro
generators. Let
\beq\new\begin{array}{c}\label{Vsum} \sum_{k=1}^\infty
a_kA_{-k}(\mu)\,\equiv\,\frac{1}{W'(\mu)}\frac{\partial}{\partial\mu}+
\frac{1}{2}\,\Big(\frac{1}{W'(\mu)}\Big)'\;\equiv\;A(\mu)
\end{array}\eeq
where the differential operators $A_k(\mu)$ are determined in (\ref{vir2}).
The function $W(\mu)$ has the asymptotic behavior
\beq\new\begin{array}{c}\label{as3}
W'(\mu)=\frac{\mu^{s-1}}{a_s}\Big(1+O(\mu^{-1})\Big)\hspace{1cm}
\mu\,\to\,\infty
\end{array}\eeq
where $a_s$ is the first non-zero coefficient in the sum (\ref{Vsum}).
The exponential operator $\exp A(\mu)$ can be disentangled as
\beq\new\begin{array}{c}\label{dis1}
\exp\Big\{\frac{1}{W'(\mu)}\frac{\partial}{\partial\mu}+
\frac{1}{2}\,\Big(\frac{1}{W'(\mu)}\Big)'\Big\}\;=\;
\Big\{\partial_\mu\Big(W^{-1}(W(\mu)+1)\Big)\Big\}^{1/2}\,
\exp\Big(\frac{1}{W'(\mu)}\frac{\partial}{\partial\mu}\Big)
\end{array}\eeq
where $W^{-1}$ is the function inverse to $W$. It is convenient to introduce
the function
\beq\new\begin{array}{c}\label{fw}
f(\mu)\,=\,W^{-1}\Big(W(\mu)+1\Big)\,\equiv\,\widetilde{\mu}
\end{array}\eeq
which has the Laurent expansion
\beq\new\begin{array}{c}
f(\mu)=\mu\Big(1+O(\mu^{-1})\Big)\hspace{1cm}\mu\,\to\,\infty
\end{array}\eeq
due to (\ref{as3}). We describe the transformation of the spectral parameter
by the formula
\beq\new\begin{array}{c}\label{mu-tr}
e^{{\textstyle\frac{1}{W'(\mu)}\frac{\partial}{\partial\mu}}}
\,\mu\,=\,W^{-1}\Big(W(\mu)+1\Big)\,\equiv\,f(\mu)
\end{array}\eeq
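The simplest example is $W(\mu)=\mu$, i.e. $A(\mu)=A_{-1}(\mu)=\partial/\partial\mu$:
then $e^{A(\mu)}$ is the ordinary shift operator and (\ref{mu-tr}) gives
$f(\mu)=\mu+1$.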
We have seen that the action of the operator $e^{A(\mu)}$ is expressed
in terms of the function $f$ rather than $W$. Therefore,
below we shall denote the operator $A(\mu)$ entering the definition
(\ref{Vsum}) as $A_f(\mu)$, keeping in mind the relation (\ref{fw})
between the functions $f(\mu)$ and $W(\mu)$. Note that the relations between
the coefficients $a_k$ and $f_i$ are rather complicated.
\bigskip\noindent
Introduce two sets of times
\beq\new\begin{array}{c}
T_k\,=\,-\,\frac{1}{k}\sum_i\mu_i^{-k}\;,\hspace{0.5cm}
\widetilde{T}_k\,=\,-\,\frac{1}{k}\sum_i\widetilde{\mu}_i^{-k}
\end{array}\eeq
where $\widetilde{\mu}=f(\mu)$. It is easy to find the relations between
these times using the residue operation:
\beq\new\begin{array}{c}\label{eq3}
\widetilde{T}_k\,=\,\frac{1}{k}\sum_{m=k}^\infty
mT_m\,\mbox{Res}\,\lambda^{m-1}f^{-k}(\lambda)d\lambda
\end{array}\eeq
\beq\new\begin{array}{c}\label{eq4}
T_k\,=\,\sum_{m=k}^\infty
\widetilde{T}_m\,\mbox{Res}\,\lambda^{-k-1}f^m(\lambda)d\lambda
\end{array}\eeq
in complete analogy with (\ref{gr11}), (\ref{gr12}) where
$f(\lambda)={\cal P}^{1/p}(\lambda)$. Note that in the operator form
\beq\new\begin{array}{c}\label{TT1}
\widetilde{T}_k(\{\widetilde{\mu}\}) \,\equiv\,
\,-\,\frac{1}{k}\sum_{i}\widetilde{\mu}_i^{-k}\,=\,
\prod_ie^{{\textstyle\frac{1}{W'(\mu_i)}\frac{\partial}{\partial\mu_i}}}\,T_k(\{\mu\})
\end{array}\eeq
Since
\beq\new\begin{array}{c}
W'(\widetilde{\mu})d\widetilde{\mu}\,=\,W'(\mu)d\mu
\end{array}\eeq
the transformation (\ref{eq4}) can be written as
\beq\new\begin{array}{c}\label{TT2}
T_k(\{\mu\}) \,\equiv\,
\prod_ie^{{\textstyle -\frac{1}{W'(\tilde{\mu}_i)}
\frac{\partial}{\partial\tilde{\mu}_i}}}\,\widetilde{T}_k(\{\widetilde{\mu}\})\,=\,
-\,\frac{1}{k}\sum_{i}\Big(f^{-1}(\widetilde{\mu}_i)\Big)^{-k}
\end{array}\eeq
where $f^{-1}$ is an inverse function to $f$
\footnote{
Note that
$$
f^{-1}(\widetilde{\mu})\,=\,W^{-1}\Big(W(\widetilde{\mu})-1\Big)
$$
to compare with (\ref{fw}).
}.
\subsection{$\tau$-functions of the equivalent hierarchies}
Consider the correspondence between the $\tau\!$-functions of the equivalent
hierarchies. Let $\tau(T)$ be a solution of the original hierarchy. Using the
relations (\ref{eq4}) one can consider it as a function of times
$\{\widetilde{T}_k\}$, i.e. to deal with $\tau[T(\widetilde{T})]$. It is reasonable to
assume that the latter object has something to do with the equivalent
hierarchy determined by the operator $\widetilde{L}$ (\ref{eq1}). Actually,
$\tau[T(\widetilde{T})]$ is not a solution of the equivalent hierarchy.
Nevertheless, being corrected by the appropriate factor, this function
does determine the solution we need. Namely, one can show that the expression
\beq\new\begin{array}{c}\label{rel0}
e^{^{\frac{1}{2}\,A_{km}\widetilde{T}_k \widetilde{T}_m}}\tau[T(\widetilde{T})]\,\equiv\,
\widetilde\tau(\widetilde{T})
\end{array}\eeq
with some definite matrix $A_{km}$ is a $\tau\!$-function of the equivalent
hierarchy. The easiest way to prove this statement is to consider the
$\tau$-functions in the determinant form (\ref{det2}). It is clear that the
transformation of times $T\to \widetilde{T}$ corresponds to the transformation
of the Miwa variables $\mu_i=f^{-1}(\widetilde{\mu}_i)$. In terms of
$\widetilde{\mu}_i$ the original function $\tau[T(\widetilde{T})]$ is not the ratio of two
determinants (since
$\Delta(\mu)|_{\mu=f^{-1}(\widetilde{\mu})}\equiv\Delta(\mu(\widetilde\mu))$ is
not the Van der Monde determinant in terms of $\{\widetilde{\mu}_i\}$) and,
therefore, does not correspond to any $\tau$-function. Nevertheless,
the $\tau$-function of the equivalent hierarchy can be easily extracted.
Indeed, consider the identical transformation
\footnote{
By $f'(\mu(\widetilde\mu))$ we mean the function $\partial_\mu f(\mu)$
calculated at the point $\mu=f^{-1}(\widetilde\mu)$.}:
\beq\new\begin{array}{c}\label{rel1}
\tau[T(\widetilde{T})]\,\equiv\,\Big\{\frac{\Delta(\widetilde{\mu})}{\Delta(\mu(\widetilde\mu))}\,
\prod_i [f'(\mu(\widetilde{\mu}))]^{1/2}\Big\}\,\widetilde\tau(\widetilde T)
\end{array}\eeq
where $\widetilde\tau(\widetilde T)$ as a function of times (\ref{TT1}) has the determinant
form (\ref{det2}), i.e.
\beq\new\begin{array}{c}\label{det3}
\widetilde\tau(\widetilde T)\,=\,\frac{\det \widetilde\phi_i(\widetilde{\mu}_j)}{\Delta(\widetilde\mu)}
\end{array}\eeq
with the basis vectors
\beq\new\begin{array}{c}\label{ba1}
\widetilde\phi_i(\widetilde\mu)\,=\,[f'(\mu(\widetilde\mu))]^{-1/2}\,\phi_i(\mu(\widetilde\mu))
\end{array}\eeq
By the direct calculation one can show that the prefactor in the r.h.s.
of (\ref{rel1}) may be represented in the form
\beq\new\begin{array}{c}\label{id1}
\frac{\Delta(\widetilde{\mu})}{\Delta(\mu(\widetilde\mu))}\,\prod_i [f'(\mu(\widetilde{\mu}))]^{1/2}
\,=\,e^{-\frac{1}{2}\,A_{km}\widetilde{T}_k \widetilde{T}_m}
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{a}
A_{km}\,=\,\mbox{Res}\,f^k(\lambda)d_\lambda (f^m(\lambda))_{_+}
\end{array}\eeq
Thus, one arrives at the relation (\ref{rel0}).
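As an elementary check, for the trivial transformation $f(\lambda)=\lambda$ one finds
\beq\new\begin{array}{c}
A_{km}\,=\,m\,\mbox{Res}\,\lambda^{k+m-1}d\lambda\,=\,0\;,\hspace{1cm}k,m\geq 1
\end{array}\eeq
so that the prefactor in (\ref{rel0}) disappears and $\widetilde\tau(\widetilde T)$
coincides with $\tau(T)$, as it should.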
We omit the brute-force derivation of the identity (\ref{id1})
because the technical details are not instructive here and present below
the ``physical'' proof of (\ref{rel0}) on the level of the fermionic
correlators. Such an approach has two advantages: first of all, it explains
very clearly the meaning of the identical redefinition in
(\ref{rel1})-(\ref{ba1}) and, besides, it describes the explicit
transformation of the point of the Grassmannian while passing to the equivalent
hierarchy.
\subsection{Proof of equivalence formula}
Let us consider the identity
\beq\new\begin{array}{c}
\tau(T|g)\,\equiv\,\prod_ie^{{\textstyle -\frac{1}{W'(\tilde{\mu}_i)}
\frac{\partial}{\partial\tilde{\mu}_i}}}\, \,\tau(\widetilde{T}|g)
\end{array}\eeq
(see (\ref{TT2})). The last (rather trivial) relation can be reformulated
in terms of the fermionic correlators as follows.
Using the correspondence (\ref{start2}) between the fermionic correlators and
the $\tau$-functions one can write
\footnote{\label{fo} \label{1/2}
From (\ref{vir2}), (\ref{Vsum}), (\ref{dis1})-(\ref{mu-tr}) it is clear that
$$
e^{\mbox{\footnotesize\bf L}_f(J)}\psi(\mu)\,e^{-\mbox{\footnotesize\bf L}_f(J)}\,=\,(\partial_\mu f(\mu))^{1/2}\psi(f(\mu))
$$
i.e. the fermions are transformed as 1/2-differentials.
The inverse transformation is
$$
e^{-\mbox{\footnotesize\bf L}_f(J)}\psi(\widetilde\mu)\,e^{\mbox{\footnotesize\bf L}_f(J)}\,=\,(\partial_{\widetilde\mu}
f^{-1}(\widetilde\mu))^{1/2}\psi(f^{-1}(\widetilde\mu))\,\sim\,
e^{\textstyle -\frac{1}{W'(\tilde{\mu}_i)}\frac{\partial}{\partial\tilde{\mu}_i}}
\psi(\widetilde\mu)
$$
During the calculations in (\ref{qq}) we use the last of these two
formulas. In fact, the only thing we need is that
$\psi(\mu)\sim e^{-\mbox{\footnotesize\bf L}_f(J)}\psi(\widetilde\mu)\,e^{\mbox{\footnotesize\bf L}_f(J)}$ and $\<N|\mbox{\bf L}_f=0$.}
\beq\new\begin{array}{c}\label{qq}
\<0|e^{H(T)}g|0\>\,=\,
\frac{\<N|\psi(\mu_N)\ldots\psi(\mu_1)g|0\>}
{\<N|\psi(\mu_N)\ldots\psi(\mu_1)|0\>}
\,=\,
\frac{\prod_ie^{{\textstyle
-\frac{1}{W'(\tilde{\mu}_i)}\frac{\partial}{\partial\tilde{\mu}_i}}}\,
\<N|\psi(\widetilde{\mu}_N)\ldots\psi(\widetilde{\mu}_1)g|0\>}
{\prod_ie^{{\textstyle -
\frac{1}{W'(\tilde{\mu}_i)}\frac{\partial}{\partial\tilde{\mu}_i}}}\,
\<N|\psi(\widetilde{\mu}_N)\ldots\psi(\widetilde{\mu}_1)|0\>} \equiv\\
\equiv\;
\frac{\,\<N|\psi(\widetilde{\mu}_N)\ldots\psi(\widetilde{\mu}_1)
e^{\mbox{\footnotesize\bf L}_f(J)}g|0\>}
{\,\<N|\psi(\widetilde{\mu}_N)\ldots\psi(\widetilde{\mu}_1)
e^{\mbox{\footnotesize\bf L}_f(J)}|0\>}\;=\;
\frac{\,\<0|e^{H(\widetilde{T})}e^{\mbox{\footnotesize\bf L}_f(J)}g|0\>}
{\,\<0|e^{H(\widetilde{T})}e^{\mbox{\footnotesize\bf L}_f(J)}|0\>}
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{Vsum2}
\mbox{\bf L}_f(J)\,=\,\sum_{k=1}^\infty a_k\mbox{\bf L}_{-k}(J)
\end{array}\eeq
similarly to (\ref{Vsum}). Thus, we have proved that the transition to
the equivalent hierarchy results in the following identity between the
$\tau$-functions of the corresponding hierarchies:
\beq\new\begin{array}{c}\label{rel}
\tau(T|g)\,=\,\frac{\tau(\widetilde T|e^{\mbox{\footnotesize\bf L}_f(J)}g)}
{\tau(\widetilde T|e^{\mbox{\footnotesize\bf L}_f(J)})}
\end{array}\eeq
i.e. one needs to redefine the times together with the appropriate change
of the point of the Grassmannian
\beq\new\begin{array}{c}
g\;\to\;e^{\mbox{\footnotesize\bf L}_f(J)}g\;\equiv\;g_{_f}
\end{array}\eeq
and perform simultaneously the renormalization of the $\tau$-function.
The formula (\ref{rel}) coincides with (\ref{rel1}).
Indeed, the numerator is a $\tau$-function which can be written in the
determinant form (\ref{det3}) with the basis vectors
\beq\new\begin{array}{c}
\widetilde{\phi}_i(\widetilde{\mu})\,=\,
\frac{\<0|\psi^\ast_{i-1}\psi(\widetilde{\mu})e^{\mbox{\footnotesize\bf L}_f(J)}g|0\>}{\<0|g|0\>}
\hspace{1cm}i=1, 2, \ldots
\end{array}\eeq
These vectors coincide with the previously defined ones (\ref{ba1}). To
show this, let us move the Virasoro element to the left state $\<0|$. One
can discard the adjoint action of $e^{\mbox{\footnotesize\bf L}_f}$ on $\psi^\ast_i$, since
\beq\new\begin{array}{c}
[\mbox{\bf L}_{-k}(J),\psi^\ast_{i}]\,=\,\Big(\frac{k-1}{2} -i\Big)\psi^\ast_{i-k}
\end{array}\eeq
and, by definition, $\mbox{\bf L}_f$ contains only the Virasoro generators
$\mbox{\bf L}_{-k}$ with $k>0$. Therefore, for any $i\ge 1$
\beq\new\begin{array}{c}
e^{\mbox{\footnotesize\bf L}_f(J)}\psi^\ast_{i-1}e^{-\mbox{\footnotesize\bf L}_f(J)}\,=\,\psi^\ast_{i-1}+
\mbox{lower modes}
\end{array}\eeq
But the negative modes annihilate the left state $\<0|$ while the
positive lower modes generate the lower basis vectors which do not contribute
to the determinant $\det \widetilde\phi_i(\widetilde\mu_j)$. Therefore, without
loss of generality one may assume
\beq\new\begin{array}{c}
\widetilde{\phi}_i(\widetilde{\mu})\,=\,
\frac{\<0|\psi^\ast_{i-1}e^{-\mbox{\footnotesize\bf L}_f(J)}\psi(\widetilde{\mu})
e^{\mbox{\footnotesize\bf L}_f(J)}g|0\>}{\<0|g|0\>}
\,=\,\Big(\partial_{\widetilde\mu}f^{-1}(\widetilde\mu)\Big)^{1/2}
\;\frac{\<0|\psi^\ast_{i-1}
e^{{\textstyle -\frac{1}{W'(\tilde{\mu}_i)}\frac{\partial}{\partial\tilde{\mu}_i}}}\,
\psi(\widetilde{\mu})g|0\>}{\<0|g|0\>}\;\equiv\\
\equiv\;\Big(\partial_\mu f(\mu(\widetilde\mu))\Big)^{-1/2}
\phi_i(\mu(\widetilde{\mu}))
\end{array}\eeq
where in the last equality the original basis vectors $\phi_i(\mu)$
are expressed in terms of the deformed spectral parameter $\widetilde{\mu}$
via the substitution $\mu\,=\,f^{-1}(\widetilde{\mu})$. We get the
basis vectors (\ref{ba1}). Note that the appearance of the normalization
factor in the definition (\ref{ba1}) is very natural due to the fact that the
basis vectors are transformed as 1/2-differentials under the
action of the Virasoro group (see footnote \ref{fo} on page \pageref{qq}).
\bigskip\noindent
Thus, the only problem is to calculate the trivial $\tau$-function
\beq\new\begin{array}{c}
\tau(\widetilde T|e^{\mbox{\footnotesize\bf L}_f})\,=\,
\<0|e^{H(\widetilde{T})}e^{\mbox{\footnotesize\bf L}_f(J)}|0\>\;=\;e^{\mbox{\footnotesize\bf L}_f(T)}\cdot 1
\end{array}\eeq
which corresponds to the point of Grassmannian
\beq\new\begin{array}{c}\label{triv}
g_0\,=\,e^{\mbox{\footnotesize\bf L}_f(J)}
\end{array}\eeq
It is evident that this function is the exponential of quadratic
combinations of $\widetilde{T}$. To find the explicit expression let us consider its
derivative with respect to the arbitrary time $\widetilde{T}_k$:
\beq\new\begin{array}{c}\label{der}
\partial_{_{\widetilde{T}_k}}\tau(\widetilde T|e^{\mbox{\footnotesize\bf L}_f})\,=\,
\<0|e^{H(\widetilde{T})}J_ke^{\mbox{\footnotesize\bf L}_f(J)}|0\>
\end{array}\eeq
To calculate the r.h.s. of (\ref{der}) one can use the following trick.
Moving the current $J_k$ through $e^{\mbox{\footnotesize\bf L}_f(J)}$ to the right
state $|0\>$ gives rise to all lower modes $J_i$ with $i\leq k$, since
\beq\new\begin{array}{c}\label{LJ}
[\,\mbox{\bf L}_{-i},J_k]\,=\,-\,kJ_{k-i}
\end{array}\eeq
The positive
modes annihilate the right state and do not contribute to (\ref{der}). Then
move the negative modes through $e^{\mbox{\footnotesize\bf L}_f(J)}$ back to the left. In this
commutation no positive modes arise (due to (\ref{LJ}) again). The
commutation of the negative modes with $e^H$ gives a linear combination of
times due to the commutation relations $[J_m,J_n]=m\delta_{n+m,0}$. Finally, all
negative modes annihilate the left state $\<0|$, and the final result is
$\tau(\widetilde T|e^{\mbox{\footnotesize\bf L}_f})$ multiplied by a linear combination of times.
The explicit calculation is as follows. From (\ref{LJ})
\beq\new\begin{array}{c}
[\,\mbox{\bf L}_f,J(\mu)]\,=\,\partial_\mu\Big(\frac{1}{W'(\mu)}\,J(\mu)\Big)
\end{array}\eeq
and, therefore, the exponentiation gives the differential operator
$\exp\Big(\frac{1}{W'(\mu)}\partial_\mu+(\frac{1}{W'(\mu)})'\Big)$ which can be
disentangled similarly to (\ref{dis1}). Thus,
\beq\new\begin{array}{c}\label{j1}
e^{\mbox{\footnotesize\bf L}_f}J(\mu)e^{-\mbox{\footnotesize\bf L}_f}\,=\,\partial_\mu f(\mu)\,J(f(\mu))
\end{array}\eeq
The inverse transformation is
\beq\new\begin{array}{c}\label{j2}
e^{-\mbox{\footnotesize\bf L}_f}J(\mu)e^{\mbox{\footnotesize\bf L}_f}\,=\,\partial_\mu f^{-1}(\mu)\,J(f^{-1}(\mu))
\end{array}\eeq
Multiplying (\ref{j2}) by $\mu^k$ and taking the residue one gets
\beq\new\begin{array}{c}\label{j3}
J_ke^{\mbox{\footnotesize\bf L}_f}\,=\,e^{\mbox{\footnotesize\bf L}_f}\mbox{Res}\, f^k(\lambda)J(\lambda)d\lambda
\end{array}\eeq
and the action of this operator on the right state reduces to
\beq\new\begin{array}{c}\label{j4}
J_ke^{\mbox{\footnotesize\bf L}_f}|0\>\,=\,e^{\mbox{\footnotesize\bf L}_f}\mbox{Res}\, f^k(\lambda)J^{(-)}(\lambda)d\lambda|0\>
\end{array}\eeq
where $J^{(-)}(\lambda)$ denotes the linear combination of the negative current
modes in the expansion
\beq\new\begin{array}{c}
J(\lambda) \equiv \sum_{k=0}^\infty J_k\lambda^{-k-1}+\sum_{k=-\infty}^{-1}
J_k\lambda^{-k-1} \,\equiv\,J^{(+)}(\lambda)+J^{(-)}(\lambda)
\end{array}\eeq
Note that $J^{(-)}(\lambda)$ contains only non-negative degrees of the spectral
parameter $\lambda$. Recall that we denote
$(F(\lambda))_{_+}$ the part of the Laurent series $F(\lambda)$ containing
non-negative degrees of $\lambda$. From (\ref{j1})
\beq\new\begin{array}{c}\label{j5}
e^{\mbox{\footnotesize\bf L}_f}J^{(-)}(\lambda)\,=\,
\Big(\partial_\lambda f(\lambda)\,J^{(-)}(f(\lambda))\Big)_{\!+}e^{\mbox{\footnotesize\bf L}_f}
\end{array}\eeq
One should stress that the positive modes $J^{(+)}(f(\lambda))$ do not
contribute to the r.h.s. of the last formula since $\partial_\lambda
f(\lambda)=1+O(\lambda^{-1})$ and $J^{(+)}(f(\lambda))$ contains only negative degrees
of $\lambda$. Combining (\ref{j4}) and (\ref{j5}) one gets
\beq\new\begin{array}{c}\label{j6}
J_ke^{\mbox{\footnotesize\bf L}_f}|0\>\,=\,\mbox{Res}\, f^k(\lambda)
\Big(\partial_\lambda f(\lambda)\,J^{(-)}(f(\lambda))\Big)_{\!+}d\lambda\;e^{\mbox{\footnotesize\bf L}_f}|0\>\;\equiv\\
\equiv\;\sum_{m=1}^\infty\frac{1}{m}\,\mbox{Res}\,f^k(\lambda)
d_\lambda (f^m(\lambda))_{_+}
\,J_{-m}e^{\mbox{\footnotesize\bf L}_f}|0\>
\end{array}\eeq
After substituting (\ref{j6}) into (\ref{der}) and taking into account
the commutation relations
\beq\new\begin{array}{c}
e^{H(\widetilde{T})}J_{-m}e^{-H(\widetilde{T})}\,=\,J_{-m}+m\widetilde{T}_m
\end{array}\eeq
one arrives at the equation
\beq\new\begin{array}{c}\label{der2}
\partial_{_{\widetilde{T}_k}}\tau(\widetilde T|e^{\mbox{\footnotesize\bf L}_f})\,=\,
\tau(\widetilde T|e^{\mbox{\footnotesize\bf L}_f})\,\sum_{m=1}^\infty\,A_{km}\widetilde{T}_m
\end{array}\eeq
where
\beq\new\begin{array}{c}
A_{km}\,=\,\mbox{Res}\,f^k(\lambda)d_\lambda (f^m(\lambda))_{_+}
\end{array}\eeq
It is easy to show that $A_{km}=A_{mk}$. From (\ref{der2}) the final
answer is
\beq\new\begin{array}{c}
\<0|e^{H(\widetilde{T})}e^{\mbox{\footnotesize\bf L}_f}|0\>\,=\,\exp\Big\{\frac{1}{2}
\sum_{k,m=1}^\infty\,A_{km}\widetilde{T}_k \widetilde{T}_m\Big\}
\end{array}\eeq
and the relation (\ref{rel}) between the $\tau$-functions of the equivalent
hierarchies takes the form
\beq\new\begin{array}{c}\label{taueq}
\tau[T(\widetilde T)|g]\,=\,
e^{-\frac{1}{2}\,A_{km}\widetilde{T}_k \widetilde{T}_m}\,
\tau(\widetilde{T}|e^{\mbox{\footnotesize\bf L}_f}g)
\end{array}\eeq
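As a simple illustration of the structure of the matrix $A_{km}$ (this example is
worked out here for illustration only and is not used below), take the change of
the spectral parameter with $f(\lambda)=\lambda+a\lambda^{-1}$. Then
\beq\new\begin{array}{c}
(f(\lambda))_{_+}=\lambda\;,\hspace{0.7cm}(f^2(\lambda))_{_+}=\lambda^2+2a\;,
\hspace{0.7cm}(f^3(\lambda))_{_+}=\lambda^3+3a\lambda
\end{array}\eeq
and the definition of $A_{km}$ gives
\beq\new\begin{array}{c}
A_{11}=a\;,\hspace{0.7cm}A_{12}=A_{21}=0\;,\hspace{0.7cm}A_{22}=2a^2\;,
\hspace{0.7cm}A_{13}=A_{31}=3a^2
\end{array}\eeq
in agreement with the symmetry $A_{km}=A_{mk}$.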
\subsection{Equivalent GKM}
In the GKM context the equivalent hierarchies are naturally
described by the function
\beq\new\begin{array}{c}\label{s}
\widetilde\mu={\cal P}^{1/p}(\mu)\hspace{1cm}{\cal P}(\mu)\equiv V'(\mu)
\end{array}\eeq
Applying the general formula (\ref{rel1}) one can
represent the original partition function (written in new times $\widetilde{T}$)
as follows
\beq\new\begin{array}{c}\label{s0}
Z^V[T(\widetilde{T})]\,=\,\frac{\Delta(\widetilde\mu)}{\Delta(\mu)}\,
\prod_{i}\left(\frac{V''(\mu_i)}{p\,\widetilde\mu_i^{p-1}}\right)^{1/2}
\widetilde Z^V[\widetilde T]
\,=\,e^{-\frac{1}{2}\,A_{ij}\widetilde{T}_i \widetilde{T}_j}\,\widetilde Z^V[\widetilde T]
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{s1}
A_{ij}\,=\,\mbox{Res}\,{\cal P}^{i/p}(\lambda)d_\lambda ({\cal P}^{j/p}(\lambda))_{_+}
\end{array}\eeq
and the $\tau$-function of the equivalent hierarchy is described by the
matrix integral
\beq\new\begin{array}{c}\label{s2}
\widetilde Z^V[\widetilde T]\,=\,\frac{\Delta(\widetilde{\mu}^p)}{\Delta(\widetilde\mu)}
\prod_i(p\,\widetilde\mu^{p-1}_i)^{1/2}\;\;
e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[V(M)-M\widetilde{M}^p]$}}
\int\,e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[-V(X)+X\widetilde{M}^p\,]$}}\;dX
\end{array}\eeq
The r.h.s. of (\ref{s2}) should be expressed in terms of the
matrix $\widetilde M$. Using the relation (\ref{s}) it is easy to find that
\beq\new\begin{array}{c}\label{s3'}
\mu\,=\,\frac{1}{p}\sum_{k=-\infty}^{p+1}kt_k\,\widetilde{\mu}^{k-p}
\end{array}\eeq
\beq\new\begin{array}{c}\label{s3}
V(\mu)-\mu V'(\mu)\,=\,-\sum_{k=-\infty}^{p+1}t_k\,\widetilde\mu^{k}
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{s4}
t_k\,\equiv\,-\,\frac{p}{k(p\!-\!k)}\,
\mbox{Res}\,{\cal P}^{\frac{\scriptstyle p-k}{\scriptstyle p}}(\lambda)\,d\lambda
\end{array}\eeq
Note that the parameters $t_1,\,\ldots\, t_{p+1}$ are independent; they
are related to the coefficients of the potential $V$
and can be interpreted as the times generating some integrable
evolution. Indeed, (\ref{s4}) can be considered as a set of equations which
determine the coefficients of the potential as functions of these
additional times.
The equations (\ref{s4}) naturally arise in the dispersionless KP hierarchy
(see below). Note also that all higher positive times are zero due to
polynomiality of ${\cal P}$ while the negative "times" $\{t_k\,,\;k<0\}$
are complicated functions of the independent positive times.\\
Substitution of (\ref{s3}) into (\ref{s2}) results in the
matrix integral which depends on two sets of times:
\beq\new\begin{array}{c}\label{s5}
\widetilde Z^V[\widetilde T,t]\,=\,
e\raise9pt\hbox{${\scriptscriptstyle\sum_{k=1}^\infty} {\scriptstyle
kt_{-k}\widetilde{T}_k}$} \left\{ \frac{\Delta(\widetilde{\mu}^p)}{\Delta(\widetilde\mu)}
\prod_i(p\,\widetilde\mu^{p-1}_i)^{1/2}\;\;
e\raise9pt\hbox{${\scriptscriptstyle -\sum_{k=1}^{p+1}}
{\;\scriptstyle t_{k}\mbox{\footnotesize Tr}\,\widetilde{M}^k}$}\int\,
e^{\raise2pt\hbox{$\scriptstyle\!\mbox{\footnotesize Tr}\,[-V(X)+X\widetilde{M}^p\,]$}}\;dX\right\}
\end{array}\eeq
where the coefficients of $V(X)$ are functions of quasiclassical times
$t_1,\,\ldots\,t_{p+1}$ according to (\ref{s4}). From (\ref{s1}) it follows
that $A_{i,np}=A_{np,i}=0$. Moreover, $t_{-p}=0$ and, due to (\ref{gr13}),
\beq\new\begin{array}{c}\label{s6}
\frac{\partial \widetilde{Z}^V[\widetilde{T}]}{\partial \widetilde{T}_p}\,=\,0
\end{array}\eeq
\bigskip\noindent
Thus, we have extracted the $\tau$-function of the $p$-reduced KP hierarchy from the
general matrix integral (\ref{gr}). The last logical step to reveal the
genuine integrable object hidden in (\ref{gr}) is to consider the part
of the partition function (\ref{s5}) without the exponential prefactor with
linear $\widetilde{T}$-dependence, i.e. the matrix integral
\beq\new\begin{array}{c}\label{s7}
\tau^V[\widetilde T,t]\,=\,\frac{\Delta(\widetilde{\mu}^p)}{\Delta(\widetilde\mu)}
\prod_i(p\,\widetilde\mu^{p-1}_i)^{1/2}\;\;
e\raise9pt\hbox{${\scriptscriptstyle -\sum_{k=1}^{p+1}}
{\;\scriptstyle t_{k}\mbox{\footnotesize Tr}\,\widetilde{M}^k}$}
\int\,e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[-V(X)+X\widetilde{M}^p\,]$}}\;dX
\end{array}\eeq
This is exactly the object we need. First of all,
this $\tau$-function has the standard determinant form
\beq\new\begin{array}{c}\label{s8}
\tau^V[\widetilde T,t]\,=\,\frac{\det \phi_i^V(\widetilde{\mu}_j)}{\Delta(\widetilde{\mu})}
\end{array}\eeq
with the basis vectors
\beq\new\begin{array}{c}\label{s9}
\phi^V_i(\widetilde{\mu})\,=\,\sqrt{p\,\widetilde{\mu}^{p-1}}\;
e\raise7pt\hbox{${\scriptscriptstyle -\sum_{k=1}^{\infty}}
{\scriptstyle t_{k}\,\widetilde{\mu}^k}$}
\int x^{i-1}e^{-V(x)+\widetilde{\mu}^px}dx
\end{array}\eeq
satisfying the $p$-reduction condition
\beq\new\begin{array}{c}\label{s10}
\widetilde{\mu}^p\phi^V_i\,=\,\sum_{j=1}^{p+1}v_j \phi^V_{i+j-1}-(i-1)\phi^V_{i-1}
\end{array}\eeq
as well as the Virasoro-type constraint:
\beq\new\begin{array}{c}\label{s11}
A(\widetilde{\mu})\phi^V_i\,=\,\phi^V_{i+1}
\end{array}\eeq
where
\beq\new\begin{array}{c}\label{s12}
A(\widetilde{\mu})\equiv\frac{1}{p\widetilde{\mu}^{p-1}}
\frac{\partial}{\partial\widetilde{\mu}}-\frac{p\!-\!1}{2p\,\widetilde{\mu}^p}
+\frac{1}{p}\sum_{k=1}^{p+1}kt_k\,\widetilde{\mu}^{k-p}
\end{array}\eeq
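For illustration, in the simplest case $p=2$ with the cubic potential
$V(x)=\frac{x^3}{3}$ (the case usually referred to as the Kontsevich model) the
parametrization (\ref{s4}) leaves only $t_3=\frac{2}{3}$ nonzero, the basis
vectors (\ref{s9}) become Airy-type integrals
\beq\new\begin{array}{c}
\phi^V_i(\widetilde{\mu})\,=\,\sqrt{2\widetilde{\mu}}\;
e^{-\frac{2}{3}\widetilde{\mu}^3}\int x^{i-1}
e^{-\frac{x^3}{3}+\widetilde{\mu}^2x}dx
\end{array}\eeq
and the operator (\ref{s12}) reduces to
$A(\widetilde{\mu})=\frac{1}{2\widetilde{\mu}}
\frac{\partial}{\partial\widetilde{\mu}}-\frac{1}{4\widetilde{\mu}^2}+\widetilde{\mu}$.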
The partition function (\ref{s7}) possesses remarkable
properties. First of all, it is a solution of the $p$-reduced KP hierarchy,
i.e.
\beq\new\begin{array}{c}
\frac{\partial \tau^V[\widetilde{T},t]}{\partial \widetilde{T}_p}\,=\,0
\end{array}\eeq
- this is a corollary of (\ref{s5}).
Further, the relation (\ref{s11}) implies that (\ref{s7})
satisfies the standard $\mbox{\bf L}_{-p}$-constraint
\beq\new\begin{array}{c}\label{s15}
\mbox{\bf L}^V_{-p}\tau^V[\widetilde{T},t]\,=\,0
\end{array}\eeq
\beq\new\begin{array}{c}\label{s16}
\mbox{\bf L}^V_{-p}=\frac{1}{2p}
\sum_{k=1}^{p-1}k(p-k)(\widetilde{T}_k\!+\!t_k)(\widetilde{T}_{p-k}\!+\!t_{p-k})
+\frac{1}{p}\sum_{k=1}^\infty(k\!+\!p)(\widetilde{T}_{k+p}\!+\!t_{p+k})
\frac{\partial}{\partial \widetilde{T}_k}
\end{array}\eeq
where the KP times are naturally shifted by the corresponding quasiclassical
ones. We shall give the direct proof of this statement in Sect. \ref{pvir}.
\bigskip\noindent
Moreover, the form of the $\mbox{\bf L}_{-p}$-operator (\ref{s16}) gives a hint that there
should exist an object depending only on the sums $\widetilde{T}_k+t_k$. Indeed, this
is the case. Consider the product
\beq\new\begin{array}{c}\label{s13}
{\cal Z}^V[\widetilde{T},t]\,\equiv\,\tau^V[\widetilde{T},t]\,\tau_0(t)
\end{array}\eeq
where $\tau_0(t)$ is a $\tau$-function of the {\it quasiclassical $p$-reduced
KP hierarchy}. We show in Sect.~\ref{ptimes} that ${\cal Z}^V[\widetilde{T},t]$
depends only on the sum of KP and quasiclassical times:
\beq\new\begin{array}{c}\label{s14}
\left(\frac{\partial}{\partial \widetilde{T}_k}\,-\,
\frac{\partial}{\partial t_k}\right){\cal Z}^V[\widetilde{T},t]\,=\,0\;,\hspace{0.7cm}
k=1,\,\ldots\,p
\end{array}\eeq
Of course, ${\cal Z}^V$ also satisfies the constraint (\ref{s15}).\
\bigskip\noindent
To prove the above statements, some essentials concerning the quasiclassical
hierarchies are required. At this stage one already sees that the GKM
includes almost all fundamental notions of the theory of integrable systems,
thus boiling all these ingredients together.
\section{Quasiclassical KP hierarchy}\label{qua}
\subsection{Basic definitions}
The general treatment of the quasiclassical limit in the theory of
integrable systems can be found in \cite{Kri1}-\cite{Du} and references
therein. Here we outline the description of the so-called quasiclassical, or
dispersionless, KP hierarchy \cite{TT}, which is an
appropriate limit of the standard KP hierarchy (a careful investigation of
this limit was given in \cite{TT2}). Consider the quasiclassical version of
the $L$-operator \footnote{In what follows we shall use the term "operator"
in order to keep the resemblance with the usual KP terminology; one should
bear in mind, however, that we are dealing with functions, not with
genuine operators.}
\beq\new\begin{array}{c}
\label{cl} {\cal L} \,=\,\lambda+ \sum_{i=1}^\infty
u_{i+1}\lambda^{-i}
\end{array}\eeq
where the functions $u_i$ depend on the infinite set of the time
variables $(t_1, t_2, t_3, \ldots)=\{t\}$ and the evolution along
these times is determined by the Lax equations
\beq\new\begin{array}{c}\label{cLax}
\frac{\partial{\cal L}}{\partial t_i}\,=\,\{{\cal L}^i_+,{\cal L}\}\hspace{0.5cm}i=1, 2,\ldots
\end{array}\eeq
where the functions ${\cal L}^i_+(\{t\},\lambda)$ are polynomials in $\lambda$, and, in
complete analogy with the standard KP theory, are defined as the non-negative
parts of the corresponding powers of the ${\cal L}$-operator:
\beq\new\begin{array}{c}
{\cal L}^i_+\,\equiv {\cal L}^i-{\cal L}^i_-
\end{array}\eeq
In (\ref{cLax}) the Poisson bracket $\{.,.\}$ is the quasiclassical analog
of the commutator; for any functions $F(t_1,\lambda),\;G(t_1,\lambda)$
\beq\new\begin{array}{c}
\{F,G\}\,=\,\frac{\partial F}{\partial\lambda}\frac{\partial G}{\partial t_1}\,-\,
\frac{\partial F}{\partial t_1}\frac{\partial G}{\partial\lambda}
\end{array}\eeq
It is useful to introduce the additional operator
\beq\new\begin{array}{c}\label{cm}
{\cal M}\,=\,\sum_{n=1}^\infty nt_n{\cal L}^{n-1}+\sum_{i=1}^\infty h_{i+1}{\cal L}^{-i-1}
\,\equiv\,\sum_{i\in\mbox{\Bbb Z}} i\,t_i\,{\cal L}^{i-1}
\end{array}\eeq
which satisfies the equations
\footnote{One should stress that "the negative times" $t_{-i}\,,\;i>0$
are the functions of the independent set $\{t_i\,,\;i>0\}$ which are
determined by the evolution equations (\ref{lm1}). They have
nothing to do with the actual negative times of the Toda lattice hierarchy.}
\beq\new\begin{array}{c}\label{lm1}
\frac{\partial{\cal M}}{\partial t_i}\,=\,\{{\cal L}^i_+,{\cal M}\}\hspace{0.5cm}i=1, 2,\ldots
\end{array}\eeq
\beq\new\begin{array}{c}\label{lm2}
\{{\cal L},{\cal M}\}\,=\,1
\end{array}\eeq
Originally, the differential prototype of (\ref{cm}) for KP hierarchy
was introduced in \cite{Orlov} in order to describe the symmetries
of the evolution equations.\\
In \cite{Kri1}, \cite{TT} it was proved that there exists a function
$S(\{t\},\lambda)$ whose total derivative is given by
\beq\new\begin{array}{c}
dS\,=\,\sum_{i=1}^\infty {\cal L}^i_+dt_i\,+\,{\cal M} d_\lambda{\cal L}
\end{array}\eeq
and, consequently,
\beq\new\begin{array}{c}\label{ds}
\Big(\frac{\partial S}{\partial t_i}\Big)_{\cal L}\,=\,{\cal L}^i_+\;;\ \ \ \ \ \
d_\lambda S\,=\,{\cal M} d_\lambda{\cal L}
\end{array}\eeq
The function $S$ is a direct quasiclassical analog of the logarithm of the
Baker-Akhiezer function; the solution to (\ref{ds}) can be represented in the
form \cite{TT}
\beq\new\begin{array}{c}\label{S}
S\,=\,\sum_{n=1}^\infty t_n{\cal L}^n-\sum_{j=1}^\infty \frac{1}{j}h_{j+1}{\cal L}^{-j}
\,\equiv\,\sum_{j\in\mbox{\Bbb Z}} t_j\,{\cal L}^j
\end{array}\eeq
\subsection{Quasiclassical $\tau$-function and $p$-reduction}
The notion of the quasiclassical $\tau$-function can be introduced as
follows. In \cite{TT} it was proved that
\beq\new\begin{array}{c}\label{der3}
\frac{\partial h_{i+1}}{\partial t_j}\,=\,\frac{\partial h_{j+1}}{\partial t_i}\,=\,\mbox{Res}\,
{\cal L}^id_\lambda{\cal L}^j_+\;;\hspace{1cm}i,j\ge 1
\end{array}\eeq
where the residue operation is defined in (\ref{res}), (\ref{res0}).
Therefore, there exists some function whose derivatives w.r.t. $t_i$
coincide with $h_{i+1}$. The quasiclassical $\tau$-function
is then defined by the relations
\beq\new\begin{array}{c}\label{ctau}
h_{i+1}\,=\,\frac{\partial\log\tau}{\partial t_i}\;\,;\hspace{1cm}i\ge 1
\end{array}\eeq
In \cite{TT2} it was shown that the $\tau$-function defined above does satisfy
a dispersionless variant of the bilinear Hirota equations, so the
definition (\ref{ctau}) is reasonable.
\bigskip\noindent
Let us consider the $p$-reduced quasiclassical KP hierarchy;
this means that for some
natural $p$ the function ${\cal P}\,\equiv\,{\cal L}^p$ is a polynomial in $\lambda$, i.e.
\beq\new\begin{array}{c}\label{red1}
{\cal P}_{_-}=0
\end{array}\eeq
One can construct the "dual" function
\beq\new\begin{array}{c} \label{cq}
{\cal Q}\,=\,\frac{1}{p}{\cal M}{\cal L}^{1-p}\,\equiv\,
\frac{1}{p}\,\sum_{j\in\mbox{\Bbb Z}}jt_j{\cal L}^{j-p}
\end{array}\eeq
which satisfies the equation
\beq\new\begin{array}{c}\label{pq}
\{{\cal P},{\cal Q}\}=1
\end{array}\eeq
as a corollary of (\ref{lm2}). In \cite{Kri1}, \cite{TT} the particular
case of the $p$-reduced hierarchy has been discussed, namely, when
the function ${\cal Q}(\{t\},\lambda)$ is also a polynomial in $\lambda$:
\beq\new\begin{array}{c}\label{red2}
{\cal Q}_{_-}\,=\,0
\end{array}\eeq
This constraint restricts the possible solutions of the $p$-reduced
quasiclassical KP hierarchy to a very specific subset; as Krichever has
shown \cite{Kri2}, the $\tau$-function satisfies an infinite set of
quasiclassical $W$-constraints when (\ref{red2}) holds
\footnote{Equations (\ref{red1}), (\ref{red2}) and (\ref{pq}) are the
analogues of the Douglas equations \cite{Doug} which, in turn, are
equivalent \cite{FKN1} to the $W$-constraints in the KP hierarchy.}.
In particular, the $\tau$-function satisfies the $\mbox{\bf L}_{-1}$-constraint
\beq\new\begin{array}{c}\label{con1}
\frac{1}{2}\sum_{i=1}^{p-1}i(p-i)t_it_{p-i}+\sum_{i=1}^\infty (p+i)t_{p+i}
\frac{\partial\log\tau}{\partial t_i}\,=\,0
\end{array}\eeq
(see the proof below).
\subsection{Quasiclassical times and the structure of solutions}
When constraints (\ref{red1}), (\ref{red2}) are satisfied, one can
construct the solutions of the hierarchy as follows \cite{Kri2}.
Note that, evidently,
\beq\new\begin{array}{c}\label{dd}
\mbox{Res}\,{\cal L}^{i-1}d_\lambda {\cal L}\,=\,\delta_{i,0}
\end{array}\eeq
Multiplying (\ref{cq}) by ${\cal L}^{p-i-1}d_\lambda{\cal L}$ and taking the residue
with the help of (\ref{res}) it is easy to show that
\beq\new\begin{array}{c}\label{ct1}
t_i\,=\,-\,\frac{p}{i(p-i)}\,\mbox{\rm Res}\,{\cal L}^{p-i}d_\lambda{\cal Q}
\hspace{1cm}i\in\mbox{\Bbb Z}
\end{array}\eeq
From these equations for $i>0$ one can determine (at least, in principle)
the coefficients of ${\cal Q}=\sum_{i=0}^\infty q_i\lambda^i$ and ${\cal L}$ as functions
of the times $t_1, t_2, \ldots$, while the same equations for $i<0$ then give
the parametrization of the "negative times" $t_{-i}=-\frac{1}{i}h_{i+1}$:
\beq\new\begin{array}{c}\label{neg}
t_{-i}\,=\,-\,\frac{1}{i}\,\frac{\partial\log\tau}{\partial t_i}
\end{array}\eeq
in terms of $t_1, t_2, \ldots$. Consider the simplest situation when
${\cal Q}(\lambda)$ is a polynomial of the first order. From (\ref{cq}), (\ref{red2})
it is easily seen that such condition is equivalent to switching off
all the times with $i>p+1$: $t_{p+2}=t_{p+3}=\ldots =0$. In this case
\beq\new\begin{array}{c}
{\cal Q}(\{t\},\lambda)\,=\,\frac{p+1}{p}\,t_{p+1}\lambda\,+\,t_p
\end{array}\eeq
and equations (\ref{ct1}) are reduced to
\beq\new\begin{array}{c}\label{ct2}
t_{i}\,=\,-\,\frac{(p+1)t_{p+1}}{i(p-i)}\,\mbox{Res}\,
{\cal P}^{\frac{p-i}{p}}(\lambda)d\lambda\;;\hspace{1cm}i\leq p+1
\end{array}\eeq
Equations (\ref{ct2}) determine the coefficients of the
polynomial ${\cal P}$ as the functions of first $p+1$ times $t_1,t_2,\ldots ,
t_{p+1}$.
It is easy to see that the first time $t_1$ is contained (linearly) only
in the $\lambda$-independent term of ${\cal P}(t,\lambda)$. Therefore,
\beq\new\begin{array}{c}
\frac{\partial{\cal P}}{\partial t_1}\,=\,-\frac{p+1}{p}\,t_{p+1}\;;\hspace{1cm}
\frac{\partial{\cal L}^i_+}{\partial t_1}\,=\,0\;\ \ (i=1, \ldots , p)
\end{array}\eeq
The Lax equations (\ref{cLax}) are reduced now to the form
\beq\new\begin{array}{c}\label{cLax2}
\frac{\partial{\cal P}}{\partial t_i}\,=\,-\,\frac{\partial\,{\cal P}^{i/p}_{_+}}{\partial\lambda}
\cdot\frac{p}{(p+1)t_{p+1}}
\hspace{1cm}i=1, \ldots , p
\end{array}\eeq
The remaining equations (\ref{ct2}) determine the functions
$t_{-i}(t_1,\ldots,t_{p+1})\,,\;\,i\geq 1$. It is possible to find
the explicit time dependence straightforwardly. For example,
using the equation of motion (\ref{cLax2}), one gets
\beq\new\begin{array}{c}\label{ti}
-j\frac{\partial t_{-j}}{\partial t_i}\,=\,
\mbox{Res}\,{\cal P}^{j/p}d_\lambda{\cal P}^{i/p}_{_+}\,=\,
\mbox{Res}\,{\cal P}^{i/p}d_\lambda{\cal P}^{j/p}_{_+}
\end{array}\eeq
where one uses the properties (\ref{res0}) of the residue operation.
In particular, the differentiation of $t_{-1}$ leads to the
simple relation (since $d_\lambda{\cal P}^{1/p}_{_+}\equiv d\lambda$)
\beq\new\begin{array}{c}
\frac{\partial t_{-1}}{\partial t_i}\,=\,-\,\mbox{Res}\,{\cal P}^{i/p}d\lambda\,=\,
\frac{i(p-i)}{p+1}\,\frac{t_{p-i}}{t_{p+1}}
\end{array}\eeq
After integrating these equations one arrives at the relation
\beq\new\begin{array}{c}\label{t-1}
t_{-1}\,=\,-\,\frac{\partial\log\tau}{\partial t_1}\,=\,\frac{1}{2(p+1)t_{p+1}}\,
\sum_{i=1}^{p-1}i(p-i)t_it_{p-i}
\end{array}\eeq
which is equivalent to $\mbox{\bf L}_{-1}$-constraint (\ref{con1}) with
$t_{p+2}=\ldots =0$.
\bigskip\noindent
Note that without loss of generality one can choose
\beq\new\begin{array}{c}\label{tp}
t_{p+1}= \frac{\textstyle p}{\textstyle p+1}
\end{array}\eeq
by the rescaling of lower times
and, therefore, the main equations (\ref{ct2}), (\ref{cLax2}) acquire
the standard form
\footnote{
Note also that the function ${\cal P}$ does not contain a term proportional
to $\lambda^{p-1}$ due to the structure of the ${\cal L}$-operator (\ref{cl}),
hence, $\partial{\cal P}/\partial t_p=0$.}
\beq\new\begin{array}{c}\label{ct2'}
t_{i}\,=\,-\,\frac{p}{i(p-i)}\,\mbox{Res}\,
{\cal P}^{\frac{p-i}{p}}(\lambda)d\lambda\;;\hspace{1cm}i\leq p+1
\end{array}\eeq
\beq\new\begin{array}{c}\label{cLax3}
\frac{\partial{\cal P}}{\partial t_i}\,=\,-\,\frac{\partial{\cal P}^{i/p}_{_+}}{\partial\lambda}
\hspace{1cm}i=1,\ldots , p
\end{array}\eeq
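To illustrate these formulas on the simplest example (a check which is not used
below), take $p=2$ and ${\cal P}(\lambda)=\lambda^2+2u_2$. Then (\ref{ct2'}) gives
\beq\new\begin{array}{c}
t_1\,=\,-2\,\mbox{Res}\,(\lambda^2+2u_2)^{1/2}d\lambda\,=\,-2u_2\;,\hspace{0.7cm}
t_3\,=\,\frac{2}{3}\,\mbox{Res}\,(\lambda^2+2u_2)^{-1/2}d\lambda\,=\,\frac{2}{3}
\end{array}\eeq
in accordance with (\ref{tp}), i.e. $u_2=-\frac{1}{2}t_1$ and
$\partial{\cal P}/\partial t_1=-1$, as required by (\ref{cLax3}). The same
formula with $i=-1$ gives
$t_{-1}=\frac{2}{3}\,\mbox{Res}\,(\lambda^2+2u_2)^{3/2}d\lambda\,=\,u_2^2\,=\,
\frac{1}{4}\,t_1^2$, reproducing (\ref{t-1}).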
\subsection{Comparison with GKM}
The structure of the quasiclassical hierarchy has a nice interpretation
in the GKM framework. First of all, the prepotential
$V'(\mu)\equiv{\cal P}(\mu)$ of the GKM generates a solution of the
quasiclassical KP hierarchy subject to the constraints
\beq\new\begin{array}{c}
{\cal P}_-(\mu)=0\;;\hspace{0.5cm}{\cal Q}(\mu)\,\sim\,\mu
\end{array}\eeq
The easiest way to see this is to note that
the definitions (\ref{s4}) and (\ref{ct2'}) are the same. Moreover,
{\it all} the quasiclassical ingredients are naturally reproduced.
Consider the first basis vector from the set $\{\phi_i^V(\mu)\}$ defined
by (\ref{s9}). Neglecting the exponential prefactor one can easily see
that the object
\beq\new\begin{array}{c}\label{q1}
\Psi(t,\mu)\,=\,\sqrt{p\,\mu^{p-1}}\int e^{-V(x)+x\mu^p}dx
\end{array}\eeq
is a Baker-Akhiezer function of the $p$-reduced quasiclassical KP hierarchy
\footnote{The use of $\mu$ instead of $\widetilde{\mu}$ should not lead to
confusion.} (recall that the coefficients of $V$ are parametrized by the
quasiclassical times according to (\ref{s4})).
It is evident that $\Psi(t,\mu)$ has the usual asymptotics
\beq\new\begin{array}{c}\label{q2}
\Psi(t,\mu)\stackreb{\mu\rightarrow\infty}{\longrightarrow}
\exp\Big(\sum^{\infty}_{k=1}t_k\mu^k\Big)\left(1+O(\mu ^{-1})\right)
\end{array}\eeq
Using equations of motion for quasiclassical KP hierarchy
\beq\new\begin{array}{c}\label{q3}
\frac{\partial V}{\partial t_k}\,=\,-\,{\cal P}^{k/p}_+
\end{array}\eeq
(this is a consequence of equations (\ref{cLax3}) or, {\it equivalently},
a corollary of the parameterization (\ref{ct2'})) one can easily show that
the Baker-Akhiezer function (\ref{q1}) satisfies the usual equations of the
$p$-reduced KP hierarchy:
\beq\new\begin{array}{c}\label{q4}
\Big[{\cal P}(\partial_{t_1}) + t_1\Big]\,\Psi(t,\mu)\,=\,\mu^p\Psi(t,\mu)\\
{\partial\Psi(t,\mu)\over\partial t_k}\,=\,
{\cal P}^{k/p}_+(\partial_{t_1})\,\Psi(t,\mu)
\end{array}\eeq
where the polynomials ${\cal P}^{k/p}_+(\mu)$ are functions of the times
$t_1,\,\ldots\, t_p$. Hence, the function (\ref{q1}) gives an explicit example
of an exact solution. On the other hand, it
is important that ${\cal P}^{k/p}_+(\mu)$ does not depend on $t_1$ for
$k<p$ and, therefore, in the corresponding equations
(\ref{q4}) we can
treat $\partial /\partial t_1$ as a formal {\it parameter}, not an operator, i.e.
we are dealing with a quasiclassical system. Thus,
we see that "the quasiclassical limit" can be naturally treated
in the (pseudo-differential) context of the standard hierarchy,
and quasiclassical solutions are
{\it exact} solutions of the full $p$-reduced KP hierarchy restricted to the
"small phase space". The exact Baker-Akhiezer function (\ref{q1}) gives an
explicit solution of the quasiclassical evolution equations along the first $p$
flows, since the standard relation holds:
\beq\new\begin{array}{c}\label{q5}
\Psi(t,\mu)\,=\,\exp\Big(\sum^{p+1}_{k=1}t_k\mu^k\Big)\,
\frac{\tau\Big(t_k-\frac{\textstyle 1}{\textstyle k\mu^k}\Big)}{\tau(t_k)}
\end{array}\eeq
Evaluating the Baker-Akhiezer function (\ref{q1}) by the steepest descent
method, it is possible to find all the derivatives of the $\tau$-function
entering the r.h.s. of (\ref{q5}). To conclude, the quasiclassical
hierarchy is determined completely by GKM integrals.\\
As a consequence of the above reasoning one can see that
the upper $p\times p$ diagonal minor of the matrix
$A_{ij}$ (\ref{s1}) (which appears for the first time in the context of the
equivalent hierarchies) can be written as the second derivative of the
quasiclassical $\tau$-function. Indeed, in the $p$-reduced case the formulas
(\ref{der3}) and (\ref{ctau}) read
\footnote{We denote now the quasiclassical $\tau$-function as $\tau_0$ in
what follows.}
\beq\new\begin{array}{c}\label{q6}
\frac{\partial^2\log\tau_0(t)}{\partial t_i\partial t_j}\,=\,\mbox{Res}\,{\cal P}^{i/p}
d_\lambda{\cal P}^{j/p}_{_+}\;;\hspace{0.8cm}i,j=1\,,\ldots\,,p
\end{array}\eeq
and the r.h.s. is nothing but (\ref{s1}). Further, the "negative" times
entering the partition function (\ref{s5}) are also represented with the help
of $\tau_0$ due to (\ref{neg}):
\beq\new\begin{array}{c}\label{q7}
kt_{-k}\,=\,-\,\frac{\partial \log\tau_0(t)}{\partial t_k}\;;
\hspace{0.8cm}k=1\,,\ldots\,,p
\end{array}\eeq
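Continuing the $p=2$ illustration above (${\cal P}=\lambda^2+2u_2$,
$u_2=-\frac{1}{2}t_1$), one finds from (\ref{q7})
$\frac{\partial\log\tau_0}{\partial t_1}=-t_{-1}=-\frac{1}{4}t_1^2$, so that
\beq\new\begin{array}{c}
A_{11}\,=\,\frac{\partial^2\log\tau_0}{\partial t_1^2}\,=\,-\,\frac{1}{2}\,t_1\,=\,u_2\,=\,
\mbox{Res}\,{\cal P}^{1/2}d_\lambda ({\cal P}^{1/2})_{_+}
\end{array}\eeq
which checks the identification (\ref{q6}) against the definition (\ref{s1}).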
Before returning to the GKM $\tau$-function we need to prove a useful
statement concerning the homogeneity of the quasiclassical $\tau$-function.
\subsection{Homogeneity property}
\begin{lem}
The conditions (\ref{red1}), (\ref{red2}) imply
\beq\new\begin{array}{c}\label{S2}
S_{_-}\,=\,0
\end{array}\eeq
\end{lem}
{\bf Proof.}~~ Recall that $d_\lambda S={\cal M} d_\lambda{\cal L}$. Therefore,
\beq\new\begin{array}{c}
\mbox{Res}\,{\cal L}^i_+ d_\lambda S\,=\,\mbox{Res}\,{\cal L}^i_+{\cal M} d_\lambda{\cal L}\,\equiv\,
\,p\,\mbox{Res}\,{\cal L}^i_+\Big(\frac{1}{p}{\cal M}{\cal L}^{1-p}\Big){\cal L}^{p-1}
d_\lambda{\cal L}\,\equiv \mbox{Res}\,{\cal L}^i_+{\cal Q} d_\lambda {\cal P}
\end{array}\eeq
Since, by definition, ${\cal P}_{_-}=0$, one gets further
\beq\new\begin{array}{c}
\mbox{Res}\,{\cal L}^i_+{\cal Q} d_\lambda {\cal P}\,\,=\,\mbox{Res}\,({\cal L}^i_+{\cal Q})_{_-}d_\lambda
{\cal P}\,=\,\mbox{Res}\,({\cal L}^i_+{\cal Q}_{_-})_{_-}d_\lambda {\cal P}
\end{array}\eeq
On the other hand,
$\mbox{Res}\,{\cal L}^i_+d_\lambda S=\mbox{Res}\,{\cal L}^i_+ d_\lambda S_{_-}$.
Thus, finally,
\beq\new\begin{array}{c}
\mbox{Res}\,{\cal L}^i_+ d_\lambda S_{_-}\,=\,
\mbox{Res}\,({\cal L}^i_+{\cal Q}_{_-})_{_-}d_\lambda {\cal P}
\hspace{1cm}\forall\,i=1,2,\ldots
\end{array}\eeq
Therefore, if ${\cal Q}_{_-}=0$ then $\mbox{Res}\,{\cal L}^i_+(d_\lambda S)_{_-}=0$ for
any $i=1,2,\ldots $. The last equality is equivalent to
\beq\new\begin{array}{c}
\mbox{Res}\,\lambda^i d_\lambda S_{_-}=0
\end{array}\eeq
and, consequently, (\ref{S2}) holds.\square
\begin{lem}
The constraint (\ref{S2}) is equivalent to homogeneity condition
\beq\new\begin{array}{c}\label{hom}
\sum_{n=1}^\infty t_n\frac{\partial t_{-i}}{\partial t_n}\,=\,t_{-i}
\end{array}\eeq
\end{lem}
{\bf Proof.}~~ Using the explicit representation of $S$ one gets
\beq\new\begin{array}{c}
\mbox{Res}\,Sd_\lambda{\cal L}^i_+\,=\,\sum_{n=1}^\infty t_n\mbox{Res}\,
{\cal L}^nd_\lambda {\cal L}^i_+
-\sum_{j=1}^\infty\frac{1}{j}\,h_{j+1}\mbox{Res}\,{\cal L}^{-j}d_\lambda {\cal L}^i_+
\end{array}\eeq
But for $j>0$ $\mbox{Res}\,{\cal L}^{-j}d_\lambda {\cal L}^i_+\,\equiv\,
\mbox{Res}\,{\cal L}^{-j}d_\lambda ({\cal L}^i-{\cal L}^i_{_-})=
\mbox{Res}\,{\cal L}^{-j}d_\lambda {\cal L}^i
=i\mbox{Res}\,{\cal L}^{i-j-1}d_\lambda{\cal L}\,=\,i\delta_{ij}$ due to (\ref{dd})
and, using (\ref{der3}), we have
\beq\new\begin{array}{c}
\mbox{Res}\,Sd_\lambda{\cal L}^i_+\,=\,\sum_{n=1}^\infty t_n
\frac{\partial h_{i+1}}{\partial t_n}-h_{i+1}
\end{array}\eeq
or, equivalently
\beq\new\begin{array}{c}
\mbox{Res}\,{\cal L}^i_+d_\lambda S_{_-}\,=\,h_{i+1}-
\sum_{n=1}^\infty t_n\frac{\partial h_{i+1}}{\partial t_n}
\end{array}\eeq
and in the case $S_{_-}=0$ one arrives at (\ref{hom})
using the identification $h_{i+1}=-it_{-i}$.\square
Note that in terms of the $\tau$-function (\ref{ctau})
the homogeneity condition (\ref{hom}) has the form
\beq\new\begin{array}{c}\label{hom2}
\sum_{n=1}^\infty t_n\frac{\partial\log\tau_0}{\partial t_n}\,=\,2\,\log\tau_0
\end{array}\eeq
\section{Polynomial GKM: synthesis}
\subsection{$\mbox{\bf L}_{-p}$-constraint}\label{pvir}
Here we present the proof of the Virasoro constraint (\ref{s15}),
(\ref{s16}).
\bigskip\noindent
Let $\phi^{(can)}_i$ be the canonical basis vectors corresponding to
GKM ones, $\phi^V_i$, defined by (\ref{s9}). From the general formula
(\ref{3der}) one gets the expression for the derivative w.r.t. first time
$\widetilde{T}_1$:
\beq\new\begin{array}{c}\label{vi1}
\frac{\partial \tau^V}{\partial\widetilde{T}_1}\,=\,\frac{1}{\Delta(\widetilde\mu)}\,\left|
\begin{array}{ccc}
\phi^{(can)}_1(\widetilde{\mu}_1) &\ldots & \phi^{(can)}_1(\widetilde{\mu}_N)\\
\ldots &\ldots&\ldots \\
\phi^{(can)}_{N-1}(\widetilde{\mu}_1) &\ldots&\phi^{(can)}_{N-1}(\widetilde{\mu}_N)\\
\phi^{(can)}_{N+1}(\widetilde{\mu}_1) &\ldots&\phi^{(can)}_{N+1}(\widetilde{\mu}_N)
\end{array}\right|\,-\,
\tau^V
\sum_{m=1}^N\widetilde{\mu}_m
\end{array}\eeq
Now consider the action of the Virasoro generator $\mbox{\bf L}_{-p}$ in accordance
with (\ref{Vir4}). The operator $A_{-p}(\widetilde{\mu})$, being expressed via the operator
$A(\widetilde\mu)$ (\ref{s12}),
\beq\new\begin{array}{c}
A_{-p}(\widetilde\mu)\,=\,pA(\widetilde{\mu})-\sum_{k=1}^{p-1}kt_k\widetilde{\mu}^{k-p}-
pt_p-p\widetilde{\mu}
\end{array}\eeq
results in the relation
\beq\new\begin{array}{c}\label{vi2}
\frac{1}{p}L_{-p}(\widetilde{T})\tau^V\,=\,-\,\frac{1}{\Delta(\widetilde\mu)}\sum_{m=1}^N
A(\widetilde{\mu}_m)\det\phi^{(can)}_i(\widetilde{\mu}_j)-\sum_{k=1}^{p-1}k(p-k)\widetilde{T}_{p-k}
t_k+\tau^V\Big(Nt_p+\sum_{m=1}^N\widetilde{\mu}_m\Big)
\end{array}\eeq
where the KP times $\widetilde{T}_k$ are expressed through the Miwa variables
$\widetilde{\mu}_i$ due to (\ref{gr10}). In order to prove the analog of constraint
(\ref{r12}) one should calculate the action of the operator $A(\widetilde{\mu})$ on
the canonical basis vectors starting from (\ref{s11}). The vectors (\ref{s9})
are not of canonical form; let
\beq\new\begin{array}{c}\label{vi0}
\phi^V_i(\widetilde{\mu})\,=\,\widetilde{\mu}^{i-1}+\alpha_i\,\widetilde{\mu}^{i-2}+\ldots\;;
\hspace{0.7cm}i=1, 2,\,\ldots
\end{array}\eeq
(see the exact expression for $\alpha_i$ below). Now $\phi^{(can)}_i=\phi^V_i-
\alpha_i\phi^V_{i-1}+\ldots$ and, therefore,
\beq\new\begin{array}{c}\label{vi3}
A(\widetilde{\mu})\phi^{(can)}_i\,=\,\phi^{(can)}_{i+1}+(\alpha_{i+1}-
\alpha_i)\phi^{(can)}_i+\ldots\;;\hspace{1cm}\alpha_1=0
\end{array}\eeq
It is evident that
\beq\new\begin{array}{c}\label{vi4}
\frac{1}{\Delta(\widetilde\mu)}\sum_{m=1}^N
A(\widetilde{\mu}_m)\det\phi^{(can)}_i(\widetilde{\mu}_j)\,=\,
\frac{1}{\Delta(\widetilde\mu)}\,\left|
\begin{array}{ccc}
\phi^{(can)}_1(\widetilde{\mu}_1) &\ldots & \phi^{(can)}_1(\widetilde{\mu}_N)\\
\ldots &\ldots&\ldots \\
\phi^{(can)}_{N-1}(\widetilde{\mu}_1) &\ldots&\phi^{(can)}_{N-1}(\widetilde{\mu}_N)\\
\phi^{(can)}_{N+1}(\widetilde{\mu}_1) &\ldots&\phi^{(can)}_{N+1}(\widetilde{\mu}_N)
\end{array}\right|\,+\,\alpha_{N+1}\,\tau^V
\end{array}\eeq
and, after substitution of (\ref{vi4}), (\ref{vi1}) into (\ref{vi2}), one
arrives at the relation
\beq\new\begin{array}{c}\label{vi5}
\Big(\frac{1}{p}\mbox{\bf L}_{-p}+\frac{\partial}{\partial \widetilde{T}_1}\Big)\tau^V\,=\,-\,
\sum_{k=1}^{p-1}k(p-k)\widetilde{T}_{p-k}t_k\,+(Nt_p-\alpha_{_{N+1}})\tau^V
\end{array}\eeq
The last step is to calculate the coefficients $\alpha_i$ in the expansion
(\ref{vi0}). Recall that the vectors $\phi^V_i$ (\ref{s9}) are related to
the original ones (\ref{gr0}) as follows (taking into account the relation
(\ref{s3})):
\beq\new\begin{array}{c}\label{vi6}
\phi_i^V(\widetilde\mu)\,=\,\sqrt{\frac{\displaystyle p\widetilde{\mu}^{p-1}}{V''(\mu)}}\;
e^{\sum_{k=-\infty}^{-1}t_k\widetilde{\mu}^k}\,\Phi^V_i(\mu)\,=\\ =\,
\sqrt{\frac{\displaystyle p\widetilde{\mu}^{p-1}}{V''(\mu)}}\;
e^{t_{-1}\widetilde{\mu}^{-1}}\,\mu^{i-1}\Big(1+O(\mu^{-2})\Big)
\end{array}\eeq
- see the asymptotics (\ref{v2}). Using the residue technique it is easy to
find that
\beq\new\begin{array}{c}
\frac{p\widetilde{\mu}^{p-1}}{V''(\mu)}\,=\,\sum_{i=-\infty}^{1}\frac{i(p+i)}{p+1}\,
\frac{t_{p+i}}{t_{p+1}}\,\widetilde{\mu}^{i-1}\,=\,1+O(\widetilde{\mu}^{-2})
\end{array}\eeq
i.e. this term does not contribute to $\alpha_i$. Therefore, from (\ref{vi6})
and (\ref{s3'})
\beq\new\begin{array}{c}
\phi^V_i(\widetilde{\mu})\,=\,\widetilde{\mu}^{i-1}+\Big(t_{-1}+(i-1)t_p\Big)\widetilde{\mu}^{i-2}
+\ldots
\end{array}\eeq
Hence, $\alpha_i=(i-1)t_p+t_{-1}$ and, consequently,
\beq\new\begin{array}{c}
Nt_p-\alpha_{_{N+1}}\,=\,-t_{-1}
\end{array}\eeq
where $t_{-1}$ is just the corresponding derivative of the quasiclassical
$\tau$-function defined by (\ref{t-1}); using the convention (\ref{tp})
it reads now
\beq\new\begin{array}{c}\label{qVir}
t_{-1}\,=\,\frac{1}{2p}\sum_{k=1}^{p-1}k(p-k)t_kt_{p-k}
\end{array}\eeq
Hence, the equation (\ref{vi5}) acquires the form
\beq\new\begin{array}{c}\label{vi7}
\frac{1}{2p}\sum_{k=1}^{p-1}k(p-k)(\widetilde{T}_k\!+\!t_k)(\widetilde{T}_{p-k}\!+\!t_{p-k})
+\frac{1}{p}\sum_{k=1}^\infty(k\!+\!p)(\widetilde{T}_{k+p}\!+\!t_{p+k})
\frac{\partial\log\tau^V}{\partial \widetilde{T}_k}\,=\,0
\end{array}\eeq
and $\mbox{\bf L}_{-p}$-constraint (\ref{s15}), (\ref{s16}) is proved.
\subsection{Complete description of time dependence}\label{ptimes}
Let us bring all essential facts together.
\begin{itemize}
\item[(i)]{We transformed the original
matrix integral (\ref{gr}) in terms of the new times (\ref{gr12}) as follows:
\beq\new\begin{array}{c}\label{z1}
Z^V[T(\widetilde{T})]\,\equiv\,\frac{\Delta(\widetilde\mu)}{\Delta(\mu)}\,
\prod_{i}\left(\frac{V''(\mu_i)}{p\,\widetilde\mu_i^{p-1}}\right)^{1/2}\widetilde Z^V[\widetilde T]
\,=\,e^{-\frac{1}{2}\,A_{ij}\widetilde{T}_i \widetilde{T}_j}\,\widetilde Z^V[\widetilde T]
\end{array}\eeq
}
\item[(ii)]{ The matrix $||A_{ij}||$ depends on the quasiclassical times
$\{t_i\}$ related with the coefficients of the polynomial ${\cal P}(\lambda)
\equiv V'(\lambda)$ by the formula (\ref{ct2'}) and is
compactly written in the form
\beq\new\begin{array}{c}\label{z2}
A_{ij}(t)\,=\,\mbox{Res}\,{\cal P}^{i/p}(\lambda)d_\lambda{\cal P}^{j/p}_{_+} (\lambda)
\end{array}\eeq
Moreover, for $1\leq i,j\leq p$
\beq\new\begin{array}{c}\label{z'}
A_{ij}(t)\,=\,\frac{\partial^2\log\tau_0(t)}{\partial t_i\partial t_j}
\end{array}\eeq
and $\tau_0$ is a $\tau$-function of the $p$-reduced quasiclassical
KP hierarchy.
}
\item[(iii)]{The "preliminary" $p$-reduced $\tau$-function of GKM is a
solution of the equivalent hierarchy; it is represented as
\beq\new\begin{array}{c}\label{z5}
\widetilde Z^V[\widetilde T]\,=\,\frac{\Delta(\widetilde{\mu}^p)}{\Delta(\widetilde\mu)}
\prod_i(p\,\widetilde\mu^{p-1}_i)^{1/2}\;\;
e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[V(M)-MV'(M)]$}}
\int\,e^{\raise2pt\hbox{$\scriptstyle \!\mbox{\footnotesize Tr}\,[-V(X)+XV'(M)]$}}\;dX
\end{array}\eeq
where $\widetilde{\mu}^p\equiv V'(\mu)$.
Note that the r.h.s. of (\ref{z5}) is expressed in terms of the
matrix $\widetilde M$. In order to stress such dependence we denote the
corresponding terms as $M|_{_{\widetilde M}}$ etc. below.
}
\item[(iv)]{
The time derivatives of the partition function (\ref{z1}) can also be
written as matrix integrals
\beq\new\begin{array}{c}\label{z4}
\frac{\partial}{\partial \widetilde T_k}\log Z^V[T(\widetilde T)]\,=\,
\<\mbox{Tr}\,[{\cal P}^{k/p}(X)]_{_+}
-\mbox{Tr}\,[{\cal P}^{k/p}(M)]_{_+}\> \hspace{1cm}1\leq k\leq p
\end{array}\eeq
}
\end{itemize}
Let us differentiate $\widetilde Z^V$ w.r.t. quasiclassical times $t_k$ keeping
$V'(M)=\widetilde M^p$ fixed. It is clear that
\beq\new\begin{array}{c}
\frac{\partial}{\partial t_k}\mbox{Tr}\,V(X)\,=\,-\,\mbox{Tr}\,[{\cal P}^{k/p}(X)]_{_+}
\end{array}\eeq
due to quasiclassical evolution equations. To calculate the derivative
of $\{V(M)-MV'(M)\}|_{_{\widetilde M}}$ one should take into account that,
besides the coefficients of $V$, the elements of the matrix $M$ also
depend on the times $\{t_k\}$ as functions of $\widetilde M$. Therefore,
\beq\new\begin{array}{c}
\frac{\partial}{\partial t_k}\mbox{Tr}\,\left[\,V(M)|_{_{\widetilde M}}-M|_{_{\widetilde M}}
\,{\widetilde M}^p\,\right]\,\equiv\\ \equiv\,
\sum_{i=1}^{p+1}\frac{1}{i}\frac{\partial v_i(t)}{\partial t_k}\mbox{Tr}\,M^i\,+\,
\mbox{Tr}\,\left[\frac{\partial V(M)}{\partial M}\,\frac{\partial
M}{\partial t_k} \,-\,\frac{\partial M}{\partial t_k}\,\widetilde{M}^p\right]_{\widetilde M}
\end{array}\eeq
The second term in this expression vanishes identically; thus,
\beq\new\begin{array}{c}
\frac{\partial}{\partial t_k}\mbox{Tr}\,[V(M)-M{\widetilde M}^p]\,=\,-\,\mbox{Tr}\,\left[
{\cal P}_{_+}^{k/p}(M)|_{_{\widetilde M}}\right]
\end{array}\eeq
and, finally,
\beq\new\begin{array}{c}
\frac{\partial}{\partial t_k}\log\,\widetilde Z^V[\widetilde T]\,=\,
\<\mbox{Tr}\,[{\cal P}^{k/p}(X)]_{_+}
-\mbox{Tr}\,[{\cal P}^{k/p}(M)]_{_+}|_{_{\widetilde M}}\> \hspace{1cm}1\leq k\leq p
\end{array}\eeq
Therefore, comparing the last relation with (\ref{z4}) it follows that
\beq\new\begin{array}{c}\label{t4}
\frac{\partial}{\partial \widetilde{T}_k}\log Z^V[T(\widetilde T)]\,=\,\frac{\partial}{\partial t_k}\log\widetilde Z^V[\widetilde T]
\end{array}\eeq
From (\ref{z1}) and (\ref{t4}) one obtains
\beq\new\begin{array}{c}\label{t5}
\left(\frac{\partial}{\partial \widetilde{T}_k}-\frac{\partial}{\partial t_k}\right)\log Z^V[\widetilde{T},t]\,=\,
\sum_{i=1}^\infty A_{ki}\widetilde{T}_i\hspace{1cm}1\leq k\leq p
\end{array}\eeq
Introduce the $\tau$-function $\tau^V$ in agreement with (\ref{s5}),
(\ref{s7}):
\beq\new\begin{array}{c}\label{z6}
\widetilde Z^V[\widetilde T,t]\,=\,
e\raise9pt\hbox{${\scriptscriptstyle\sum_{i=1}^\infty} {\scriptstyle
it_{-i}\widetilde{T}_i}$}\,\tau^V[\widetilde{T},t]
\end{array}\eeq
where negative times $t_{-i}\,,\;\,i=1, 2, \ldots$ satisfy the equations
\beq\new\begin{array}{c}\label{z7}
-\,i\frac{\partial t_{-i}}{\partial t_k}\,=\,A_{ki}\;;
\hspace{0.8cm}k\,=\,1,\,\ldots\,,p
\end{array}\eeq
in accordance with (\ref{ti}).
Substitution of $\tau^V$ into (\ref{t5}) results in the equation
\beq\new\begin{array}{c}\label{z8}
\left(\frac{\partial}{\partial \widetilde{T}_k}-\frac{\partial}{\partial t_k}\right)\log\tau^V[\widetilde T,t]\,=\,
-\,kt_{-k}\,+\sum_{i=1}^\infty\Big(A_{ki}+i\frac{\partial t_{-i}}{\partial t_k}\Big)
\widetilde{T}_i\,=\\ =\,-\,kt_{-k}\,\equiv\,\frac{\partial\log\tau_0(t)}{\partial t_k}
\end{array}\eeq
due to relations (\ref{z7}). Thus, the partition function
\beq\new\begin{array}{c}\label{z9}
{\cal Z}^V[\widetilde{T},t]\,\equiv\,\tau^V[\widetilde{T},t]\,\tau_0(t)
\end{array}\eeq
depends only on the sum $\widetilde{T}_k+t_k$:
\beq\new\begin{array}{c}\label{z10}
\frac{\partial{\cal Z}^V}{\partial\widetilde{T}_k}\,=\,
\frac{\partial{\cal Z}^V}{\partial t_k}
\end{array}\eeq
Let us find the relation between the partition functions $Z^V$ and ${\cal Z}^V$.
Due to (\ref{z1}), (\ref{z6}) and (\ref{z9}) these functions differ by an
exponential factor with exponent
\beq\new\begin{array}{c}\label{z11}
-\frac{1}{2}\,A_{ij}\widetilde{T}_i \widetilde{T}_j+ it_{-i}\widetilde{T}_i-\log\tau_0(t)
\end{array}\eeq
Due to homogeneity relation (\ref{hom}) the expression (\ref{z11})
can be written as
\beq\new\begin{array}{c}
-\frac{1}{2}\sum_{ij}A_{ij}(t)\,(\widetilde{T}_i\!+\!t_i)(\widetilde{T}_j\!+\!t_j)
\end{array}\eeq
hence,
\beq\new\begin{array}{c}\label{z12}
Z^V[T(\widetilde{T})]\,=\,{\cal Z}^V(\widetilde{T}+t)\exp\left\{
-\frac{1}{2}\sum_{ij}A_{ij}(t)
(\widetilde{T}_i\!+\!t_i)(\widetilde{T}_j\!+\!t_j)\right\}
\end{array}\eeq
This formula gives the complete description of the polynomial GKM w.r.t.\ the
"quantum" $(\{\widetilde{T}_k\})$ and quasiclassical $(\{t_k\})$ times.
\section{Acknowledgments}
The author is deeply indebted to his coauthors
A.Marshakov, A.Mironov and A.Morozov for stimulating discussions.
This work is partly supported by grant RFBR 96-02-19085 and by
grant 96-15-96455 for Support of Scientific Schools.
\section{Introduction}
Ever since Hans Berger made the first recording of the human electroencephalogram
(EEG) in 1924 there has been a tremendous interest in understanding the physiological
basis of brain rhythms. This has included the development of mathematical models of
cortical tissue, which are often referred to as neural field models.
The formulation of these models has not changed much since the seminal work of Wilson
and Cowan, Nunez and Amari in the 1970s, as recently described in \cite{Coombes2014}.
Neural fields and neural mass models approximate neural activity assuming the
cortical tissue is a continuous medium. They are coarse-grained spatiotemporal models,
which lack important physiological mechanisms known to be fundamental in generating brain
rhythms, such as dendritic structure and cortical folding. Nonetheless their basic
structure has been shown to provide a mechanistic starting point for understanding
whole brain dynamics, as described by Nunez \cite{Nunez1995}, and especially that of
the EEG.
Modern biophysical theories assert that EEG signals from a single scalp
electrode arise from the coordinated activity of $\sim 10^8$ pyramidal cells in the
cortex. These are arranged with their dendrites in parallel and perpendicular to the
cortical surface. When activated by synapses at the proximal
dendrites, extracellular current flows parallel to the dendrites, with a net
membrane current at the synapse. For excitatory (inhibitory) synapses this creates a
sink (source) with a negative (positive) extracellular potential. Because there is
no accumulation of charge in the tissue the proximal synaptic current is compensated
by other currents flowing in the medium causing a distributed source in the case of a
sink and vice-versa for a synapse that acts as a source. Hence, at the population
level the potential field generated by a synchronously activated population of
cortical pyramidal cells behaves like that of a dipole layer. Although the important
contribution that single dendritic trees make to generating extracellular electric
field potentials has been known for some time, and can be calculated using
Maxwell's equations \cite{Pettersen08}, they are often not accounted for in neural
field models. However, with the advent of laminar electrodes to record from
different cortical layers it is now timely to build on early work by Crook and
coworkers \cite{crook1997role} and by Bressloff, reviewed in \cite{Bressloff97}, and develop neural field models that incorporate a
notion of dendritic depth. This will allow a significant and important departure
from present-day neural field models, and recognise the contribution of dendritic
processing to macroscopic large-scale brain signals. A simple way to generalise
standard neural field models is to consider the dendritic cable model of Rall as the
core component in a neural field, with source terms on the cable mediating
long-range synaptic interactions. These in turn can be described with the
introduction of an \textit{axo-dendritic} connectivity function.
Here we consider a neural field model which treats the voltage on a dendrite as the
primary variable of interest in a simple model of neural tissue. The model comprises
a continuum of somas (a \emph{somatic layer}, see schematic in
Figure~\ref{fig:sketch}(a)). Dendrites are modeled as unbranched fibres,
orthogonal to the somatic layer which, for simplicity, is one-dimensional and
rectilinear (see Figure~\ref{fig:sketch}(b)). At each point along the somatic layer
$x \in \mathbb{R}$ we envisage a fibre with coordinate $\xi \in \mathbb{R}$. The voltage dynamics
along the fibre is described by the cable equation, with a nonlocal input current
arising as
an integral
over the outputs from the somatic layer (where $\xi=0$). Denoting the voltage by
$V(x,\xi,t)$ we have an integro-differential equation for the
real-valued function $V: \mathbb{R}^2 \times \mathbb{R} \rightarrow \mathbb{R}$ of the form
\begin{multline} \label{1}
\partial_t V(x,\xi, t) = (-\gamma + \nu \partial_{\xi \xi}) V(x,\xi,t)
+ G(x,\xi,t)
\\
+ \int_{\mathbb{R}^2} W(x,\xi,y,\eta)
S(V(y,\eta,t))\mathop{}\!\mathrm{d} y \mathop{}\!\mathrm{d} \eta ,
\end{multline}
posed on $(x,\xi,t) \in \mathbb{R}^3$, for some typically sigmoidal or
Heaviside-type firing rate function $S$, and some external input function $G$. Here
$\nu$ is the diffusion coefficient and $1/\gamma$ the membrane time-constant of the
cable. As we shall see below, it will be crucial for our analysis that currents flow
exclusively along the fibres, that is, the diffusive term in \eqref{1} contains
derivatives only with respect to $\xi$.
The model is completed with a choice of the generalised axo-dendritic connectivity
function $W$. The nonlocal input current arises from the somatic layer, hence they
are transferred from sources in an $\varepsilon$-neighbourhood of $\xi = 0$, $0 < \varepsilon \ll 1$, to
contact points in an $\varepsilon$-neighbourhood of $\xi=\xi_0$ on the cable (see
Figure~\ref{fig:sketch}(b)). In addition, the strength of interaction depends
solely on the distance between the source and the contact point, measured along the
somatic layer, leading to the decomposition
\begin{equation}\label{eq:kernel}
W(x,\xi,y, \eta)= w(|x-y|)\delta_\varepsilon(\xi-\xi_0) \delta_\varepsilon(\eta),
\end{equation}
where $w$ describes the strength of interaction across the somatic space and is
chosen to be translationally invariant and $\delta_\varepsilon$ is a quickly-decaying
function.
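For concreteness, a minimal computational sketch of the decomposition
\eqref{eq:kernel} is given below; the particular choices of $w$ (a difference of
exponentials) and of $\delta_\varepsilon$ (a normalised Gaussian) are purely
illustrative and are not imposed by the model.
\begin{verbatim}
import numpy as np

# Illustrative components of the separable kernel W(x, xi, y, eta):
# a difference-of-exponentials somatic footprint w and a narrow,
# normalised Gaussian delta_eps standing in for the delta-like factor.
def w(d, A1=1.0, b1=1.0, A2=0.5, b2=2.0):
    return A1 * np.exp(-np.abs(d) / b1) - A2 * np.exp(-np.abs(d) / b2)

def delta_eps(z, eps=0.05):
    return np.exp(-z**2 / (2 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

def W(x, xi, y, eta, xi0=1.0):
    # the interaction strength depends on |x - y| along the somatic layer only
    return w(x - y) * delta_eps(xi - xi0) * delta_eps(eta)
\end{verbatim}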
\begin{figure}
\centering
\includegraphics{sketch}
\caption{Schematic of the neural field model. (a) Dendrites are represented as
unbranched fibres (red), orthogonal to a continuum of somas (somatic layer, in
grey). (b) Model
\eqref{1} is for a 1D somatic layer, with coordinate
$x \in \mathbb{R}$, and fiber coordinate $\xi \in \mathbb{R}$. Input currents are generated in a
small neighbourhood of the somatic layer, at $\xi=0$ and are delivered to a contact
point, in a small neighbourhood of $\xi = \xi_0$. The strength of interaction depends
on the distance between sources and contact points, measured along the somatic layer,
hence the inputs that are generated at $A$ and transmitted to $B$, $C$, and $D$ depend
on $|x_B - x_A|$, $|x_C-x_A|$, and $|x_D - x_A|$, respectively (see
\eqref{eq:kernel}).}
\label{fig:sketch}
\end{figure}
This work introduces a computational method for approximating
solutions to (\ref{1}), subject to suitable initial and boundary conditions,
and applies it to the numerical simulation of the model with kernel given by
(\ref{eq:kernel}). Numerical methods for neural fields in 2-dimensional media have
been introduced recently in flat
geometries~\cite{Rankin2014,hutt2014numerical,lima2015numerical} and on 2-manifolds
embedded in a 3-dimensional space~\cite{Bojak2010,Visser:2017hy}. In addition,
several available open-source codes, such as the Neural Field
Simulator~\cite{Nichols2015}, the Brain Dynamics Toolbox~\cite{Heitmann}, and
NFTsim~\cite{SanzLeon2018}, perform simulations of neural field equations. Numerical
schemes for models of type~\eqref{1} have not been introduced, analysed, or
implemented, and these are the main contributions of the present article.
In Section \ref{Sec:numerical} we describe, analyse, and discuss implementation
details of the numerical method. In Sections \ref{sec:TWTest}--\ref{sec:TuringTest}
we illustrate the performance of the method by means of some numerical experiments,
including problems whose exact solution has known properties. The numerical results
are discussed and their physical meaning is explained. We finish with some
conclusions and discussion in Section \ref{sec:conclusions}.
\section{Numerical Scheme}
\label{Sec:numerical}
Numerical simulations are performed on \eqref{1}, posed on a
bounded, cylindrical somato-dendritic domain
\[
\Omega = \mathbb{R}/2L_x\mathbb{Z} \times (-L_\xi,L_\xi),
\]
and subject to initial and boundary conditions,
\begin{equation}\label{eq:systemNum}
\begin{aligned}
& \partial_t V = (-\gamma + \nu \partial_{\xi \xi}) V + K(V) + G
& & \textrm{on $ \Omega \times (0,T]$}, \\
& V(\blank,\blank,0) = V_0
& & \textrm{on $ \Omega$}, \\
& \partial_\xi V(\blank,-L_\xi,\blank) = \partial_\xi V(\blank,L_\xi,\blank) = 0
& & \textrm{on $ (-L_x,L_x] \times [0,T]$}, \\
\end{aligned}
\end{equation}
for some positive constants $T$, $L_x$, $L_\xi$. This setup implies
$2L_x$-periodicity in the somatic
direction, and Neumann boundary conditions in the dendritic direction.
We denote by $K$ the integral operator defined by
\[
(K(V))(x,\xi,t) = \int_{\Omega} W_{\Omega}(x,\xi,y,\eta)
S(V(y,\eta,t)) \mathop{}\!\mathrm{d} y \mathop{}\!\mathrm{d} \eta,
\qquad (x,\xi) \in \Omega,
\]
where $W_{\Omega}$ is the restriction of $W$ on $\Omega$. This restriction implies
that the function $w$ in \eqref{eq:kernel} be substituted by
its periodic extension on $[-L_x,L_x)$. In the remainder of this paper we will omit the
subscript $\Omega$ from $W$, and assume $w$ to be $2L_x$-periodic.
To present our scheme we introduce
a spatiotemporal discretisation
using the evenly spaced grid
$\{(x_j,\xi_i,t_n)\}$ defined by
\[
\begin{aligned}
& x_j = -L_x + j h_x, & & j \in \mathbb{N}_{n_x}, && h_x = 2L_x/n_x, \\
& \xi_i = -L_\xi + (i-1) h_\xi, & & i \in \mathbb{N}_{n_\xi}, && h_\xi =
2L_\xi/(n_\xi-1), \\
& t_n = n \tau, & & n \in \mathbb{N}_{n_t}, && \tau = T/n_t, \\
\end{aligned}
\]
where we set $\mathbb{N}_k = \{1,2,\ldots,k\}$ for $k \in \mathbb{N}$. The scheme we propose uses
the method of lines for \eqref{eq:systemNum}, in conjunction with differentiation
matrices for the diffusive term and a quadrature scheme for the integral operator.
Collocating \eqref{eq:systemNum} at the somato-dendritic nodes we obtain
\begin{equation}\label{eq:collocation}
\begin{split}
\partial_t V(x_j,\xi_i,t) = (-\gamma + \nu \partial_{\xi\xi}) V(x_j,\xi_i,t)
& + K(V)(x_j, \xi_i,t) \\
& + G(x_j,\xi_i,t) \quad
(j,i) \in \mathbb{N}_{n_x} \times \mathbb{N}_{n_\xi},
\end{split}
\end{equation}
where, with a slight abuse of notation, we denote by $V$ an interpolant to the
function $V$ in \eqref{eq:systemNum} through $\{ (x_j,\xi_i) \}$.
To obtain a numerical solution of the problem we
must choose: (i) an approximation for the linear operator $(-\gamma + \nu
\partial_{\xi\xi})$ at the somato-dendritic nodes; (ii) an approximation for the
integral operator at the same nodes; (iii) a scheme to time step the derived set of
ODEs.
In the presentation of the scheme, we shall use two equivalent representations for
the voltage approximation: a matricial description
\begin{equation}\label{eq:VMatrix}
V(t) = \{ V_{ij}(t)
\colon (i,j) \in \mathbb{N}_{n_\xi} \times \mathbb{N}_{n_x} \}
\in \mathbb{R}^{n_\xi \times n_x}, \qquad V_{ij}(t) \approx
V(x_j,\xi_i,t),
\end{equation}
and a lexicographic vectorial representation, obtained by introducing the
index mapping $k(i,j) = n_\xi(j-1) + i$,
\begin{equation}\label{eq:VVector}
U(t) = \{ U_{k(i,j)}(t) \colon (i,j) \in \mathbb{N}_{n_\xi} \times \mathbb{N}_{n_x} \}
\in \mathbb{R}^{n_x n_\xi}.
\end{equation}
In the latter, we will sometimes suppress the dependence of $k$ on $(i,j)$, for
notational convenience.
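In computations the two representations are related by a column-wise reshape; a
minimal sketch (with arbitrary array sizes) is the following.
\begin{verbatim}
import numpy as np

n_xi, n_x = 4, 6
V = np.arange(n_xi * n_x, dtype=float).reshape(n_xi, n_x)  # V[i-1, j-1] ~ V(x_j, xi_i)

# Column-major (Fortran-order) stacking realises the map k = n_xi*(j-1) + i.
U = V.reshape(-1, order="F")
assert np.array_equal(U.reshape(n_xi, n_x, order="F"), V)
\end{verbatim}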
\subsection{Discretisation of the linear operator} A simple choice for discretising
the linear differential operator $(-\gamma + \nu \partial_{\xi \xi})$ is to adopt
differentiation matrices~\cite{trefethen2000}. If a differentiation matrix
$D_{\xi \xi} \in \mathbb{R}^{n_\xi \times n_\xi}$ is chosen to approximate the action of
the Laplacian operator $\partial_{\xi \xi}$ on twice differentiable, univariate
functions defined on $[-L_\xi,L_\xi]$, satisfying Neumann boundary conditions, and
sampled at nodes $\{ \xi_i \}$, then the action of the operator $-\gamma + \nu
\partial_{\xi\xi}$ on bivariate functions defined on $[-L_x,L_x) \times
[-L_\xi,L_\xi]$, twice differentiable in $\xi$ with Neumann boundary conditions,
sampled at the nodes $\{ (x_j,\xi_i) \}$ with lexicographical ordering $k(i,j)$ is
approximated by the following block-diagonal matrix
\begin{equation*}\label{eq:LinOp}
-\gamma I_{n_x n_\xi} + \nu I_{n_x} \otimes D_{\xi \xi} =
\begin{bmatrix}
-\gamma + \nu D_{\xi \xi} & & & \\
&-\gamma + \nu D_{\xi \xi} & & \\
& & \ddots & \\
& & &-\gamma + \nu D_{\xi \xi} \end{bmatrix}
,
\end{equation*}
where $I_n$, $n \in \mathbb{N}$, is the $n$-by-$n$ identity matrix, and $\otimes$ is the
Kronecker product between matrices. Since the model prescribes diffusion only along
the dendritic coordinate, the
corresponding matrix has a block-diagonal structure \emph{with identical blocks},
which can be exploited to improve performance in numerical computations. The sparsity pattern of a block is dictated by the
underlying scheme to approximate the univariate Laplacian: we have full blocks if
$D_{\xi\xi}$ is derived from spectral schemes, and sparse blocks
for finite-difference schemes.
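A possible realisation of this construction (a sketch only, using SciPy sparse
matrices and illustrative variable names) is the following.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags, identity, kron

def laplacian_neumann(n_xi, h_xi):
    """Second-order finite-difference Laplacian on [-L_xi, L_xi] with
    Neumann boundary conditions built into the first and last rows."""
    D = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n_xi, n_xi)).tolil()
    D[0, 1] = 2.0
    D[-1, -2] = 2.0
    return D.tocsr() / h_xi**2

def linear_operator(n_x, n_xi, h_xi, gamma, nu):
    """Block-diagonal operator -gamma*I + nu*(I_{n_x} kron D_xixi)."""
    D_xixi = laplacian_neumann(n_xi, h_xi)
    return (-gamma * identity(n_x * n_xi, format="csr")
            + nu * kron(identity(n_x), D_xixi, format="csr"))
\end{verbatim}
In practice the Kronecker product need not be assembled explicitly: since the
blocks are identical, the operator can be applied directly to the matrix
representation of the voltage, as exploited in the matrix ODE formulation below.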
\subsection{Discretisation of the nonlinear integral operator} The starting point to
discretise the integral operator is an $m$th order quadrature formula with $q_m$
nodes $\{ (y_l,\eta_l) \colon l \in \mathbb{N}_{q_m} \}$ and weights $\{ \sigma_l \colon l
\in \mathbb{N}_{q_m} \}$ for the integral of a bivariate function over $\Omega$,
\[
Q(v)=\int_{\Omega} v(y,\eta) \, \mathop{}\!\mathrm{d} y \mathop{}\!\mathrm{d} \eta \approx
\sum_{l \in \mathbb{N}_{q_m}} v(y_l,\eta_l) \sigma_l = Q_m(v).
\]
Using this formula we approximate the nonlinear operator in \eqref{eq:collocation} by
\[
Q_m(K(V))(x_j,\xi_i,t) = \sum_{l \in \mathbb{N}_{q_m}} W(x_j,\xi_i,y_l,\eta_l)
S(V(y_l,\eta_l,t)) \sigma_l .
\]
We stress that, in general, the quadrature nodes $\{ (y_l,\eta_l) \}$ and the
collocation nodes $\{ (x_j, \xi_i) \}$ are disjoint. The former are
chosen so as to approximate the integral term accurately, the latter to approximate
the differential operator. When the two grids are disjoint, an interpolation of $V$ at the
nodes $\{ (y_l,\eta_l) \}$ is necessary to derive a set of ODEs at the collocation
nodes. In the remainder of this paper we will assume that collocation and quadrature
nodes coincide, so that we can omit the interpolant, for simplicity.
\subsection{Matrix ODE formulation}
Combining the differentiation matrix, the quadrature rule, and the lexicographic
representation \eqref{eq:VVector}
we obtain a set of $n_x n_\xi$ ODEs
\begin{equation}\label{eq:ODEs}
\begin{aligned}
\dot U(t) & = (-\gamma I_{n_x n_\xi} + \nu I_{n_x} \otimes D_{\xi \xi} ) U(t) +
F(U(t),t), \\
U(0) & = U_0.
\end{aligned}
\end{equation}
The structure of the differentiation matrix in \eqref{eq:LinOp}, however,
suggests a rewriting of \eqref{eq:ODEs} in terms of the blocks of the linear
operator, which correspond to ``slices" at constant values of $x$: we recall the
matrix representation \eqref{eq:VMatrix} and obtain an equivalent
matrix ODE formulation
\begin{equation}\label{eq:MatrixODE}
\dot V(t) = (-\gamma I_{n_\xi} + \nu D_{\xi \xi}) V(t) + N(V(t)) + G(t),
\end{equation}
where $N$ is the matrix-valued function with components $N_{ij}(V) =
Q_m(K(V))(x_j,\xi_i)$ and $G$ is the matrix with components $G(x_j,\xi_i,t)$. In passing,
we note that the linear part of the equation involves
a multiplication between an $n_\xi$-by-$n_\xi$ matrix and the $n_\xi$-by-$n_x$ matrix $V$.
\subsection{Time-stepping scheme} The proposed time-stepping scheme for
\eqref{eq:systemNum} is obtained from \eqref{eq:MatrixODE} with the following
choices: (i) a first-order, implicit-explicit (IMEX) time-stepping
scheme~\cite{ascher1995}; (ii) a second-order, centred finite-difference
scheme for the differentiation matrix $D_{\xi \xi}$; (iii) a second-order trapezium
rule for the quadrature rule $Q_m$. As we shall see, these choices bring
a few computational advantages, which will be outlined below.
IMEX schemes treat the linear (diffusive) part of the ODE implicitly, and the
nonlinear part explicitly, so that the stiff diffusive term is integrated implicitly
to avoid excessively small time steps. The simplest IMEX method uses backward Euler
for the diffusive term, leading to
\begin{equation}\label{eq:IMEX}
\begin{aligned}
V^0 & = V_0, \\
A V^n & = V^{n-1} + \tau N(V^{n-1}) + \tau G^{n-1}, \qquad n \in \mathbb{N},
\end{aligned}
\end{equation}
where $V^n \approx V(t_n)$, $G^n = G(t_n)$, and $A$ is the matrix
\begin{equation}\label{eq:AMatr}
A = (1+\gamma \tau) I_{n_\xi} - \tau \nu D_{\xi \xi}.
\end{equation}
In concrete calculations we use second-order centred finite differences, leading to
\begin{equation}\label{eq:FinDiffLapl}
D_{\xi \xi} =
\Delta/h_\xi^{2}, \qquad \Delta =
\begin{bmatrix}
-2 & 2 & & & & \\
1 & -2 & 1 & & & \\
& \ddots & \ddots & \ddots & & \\
& & 1 & -2 & 1 & \\
& & & 2 & -2 &
\end{bmatrix},
\end{equation}
in which Neumann boundary conditions are included in the differentiation matrix.
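A possible Matlab assembly of \eqref{eq:FinDiffLapl} and \eqref{eq:AMatr} is
sketched below; we assume $n_\xi$ equispaced nodes on $[-L_\xi,L_\xi]$ including
the endpoints, and the variable names are illustrative:
\begin{verbatim}
% Centred second-order Laplacian with Neumann boundary conditions
h_xi = 2*L_xi/(n_xi-1);
e    = ones(n_xi,1);
Dxx  = spdiags([e, -2*e, e], -1:1, n_xi, n_xi);
Dxx(1,2) = 2;  Dxx(n_xi,n_xi-1) = 2;     % ghost-point Neumann rows
Dxx  = Dxx/h_xi^2;
% Matrix inverted at each IMEX step
A = (1+gamma*tau)*speye(n_xi) - tau*nu*Dxx;
\end{verbatim}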
Finally, we discuss the choice of the quadrature scheme. We use a composite trapezium
scheme with nodes $\{x_j\}$ and weights $\{\rho_j\}$ in $x$, and nodes $\{\xi_i\}$
and weights $\{ \sigma_i\}$ in $\xi$, respectively, hence quadrature and collocation
sets coincide,
\begin{equation} \label{eq:NQuad}
N_{ij}(V) = \sum_{j' \in \mathbb{N}_{n_x}} \sum_{i' \in \mathbb{N}_{n_\xi}}
W(x_j,\xi_i,x_{j'},\xi_{i'}) S(V_{i',j'}) \rho_{j'}\sigma_{i'},
\qquad (i,j) \in \mathbb{N}_{n_\xi} \times \mathbb{N}_{n_x}.
\end{equation}
\subsection{Properties of the IMEX scheme}
In this section we collect some analytical results on the IMEX scheme
\eqref{eq:IMEX}--\eqref{eq:NQuad}. We work with spaces of sufficiently regular
continuous functions, which provides the simplest setting for our results. We denote
by $C^k(D)$ the space of $k$-times continuously differentiable functions from $D$ to
$\mathbb{R}$, where $k$ is an integer, $D$ a domain in $\mathbb{R}^m$ for some $m\in\mathbb{N}$. We also indicate
by $C_b^k(D)$ the space of continuous functions from $D$ to $\mathbb{R}$ with bounded and
continuous partial derivatives up to order $k$.
Both spaces are endowed with the infinity norm $\Vert \blank
\Vert_\infty$. We will use the symbol $| \blank |_\infty$ for the standard
infinity-norm on matrices, induced by the corresponding vector norm. In addition, we
will denote by $\bar D$ the closure of $D$.
We begin with a generic assumption of boundedness on the functions in
\eqref{eq:systemNum}:
\begin{hypothesis}\label{hyp:boundedness}
There exist $C_W, C_S, C_G >0 $ such that
\[
|W| \leq C_W \; \textrm{in $\Omega \times \Omega$},
\qquad
|S| \leq C_S \; \textrm{in $\mathbb{R}$},
\qquad
|G| \leq C_G \; \textrm{in $\Omega \times \mathbb{R}$}.
\]
\end{hypothesis}
\begin{lemma}[Boundedness of IMEX solution]\label{lem:IMEXboundedness}
Assume Hypothesis \ref{hyp:boundedness}, then there exists a unique bounded
sequence $(V^n)_{n\in\mathbb{N}}$ satisfying
the IMEX scheme \eqref{eq:IMEX}--\eqref{eq:NQuad}. In
addition, the following bound holds
\[
\vert V^n \vert_\infty \leq \vert V^0 \vert_\infty +
n_x \frac{ \mu(\bar\Omega) C_W C_S + C_G}{\gamma},
\qquad n \in \mathbb{N}.
\]
\end{lemma}
\begin{proof}
The matrix $A$ in \eqref{eq:AMatr} has real, strictly positive eigenvalues given
by
\[
\lambda_k = 1 + \gamma \tau + \frac{4\nu \tau}{h_\xi^2}
\bigg[ \sin \bigg( \frac{\pi(k-1)}{2n_\xi} \bigg) \bigg]^2, \qquad k \in
\mathbb{N}_{n_\xi},
\]
where we have used the fact that the eigenvalues of $D_{\xi \xi}$ are known in closed form.
We conclude that $A$ is invertible, hence for any fixed $n \in \mathbb{N}$,
the matrix $V^n$ solving \eqref{eq:IMEX} is unique. In addition, $A$ is
strictly diagonally dominant, hence the following bound holds~\cite{varah75}
\begin{equation}\label{eq:invABound}
\vert A^{-1} \vert_\infty \leq \max_{i \in \mathbb{N}_{n_\xi}} \frac{1}{|A_{ii}| - \sum_{j
\neq i} |A_{ij}|} = \frac{1}{1+ \gamma \tau}.
\end{equation}
To prove boundedness of the sequence $(V^n)_{n \in \mathbb{N}}$ we first bound the
matrices $N(V^{n-1})$, $G^n$ appearing in \eqref{eq:IMEX}
\[
\begin{aligned}
\vert N(V^{n-1}) \vert_\infty
& = \max_{i \in \mathbb{N}_{n_\xi}} \sum_{j \in \mathbb{N}_{n_x}} |N_{ij}(V^{n-1})| \\
& \leq
\max_{i \in \mathbb{N}_{n_\xi}} \sum_{j \in \mathbb{N}_{n_x}}
\sum_{j' \in \mathbb{N}_{n_x}} \sum_{i' \in \mathbb{N}_{n_\xi}}
|W(x_j,\xi_i,x_{j'},\xi_{i'}) S(V_{i',j'}) \rho_{j'}\sigma_{i'}| \\
& \leq C_W C_S
\max_{i \in \mathbb{N}_{n_\xi}} \sum_{j \in \mathbb{N}_{n_x}}
\sum_{j' \in \mathbb{N}_{n_x}} \sum_{i' \in \mathbb{N}_{n_\xi}}
\rho_{j'}\sigma_{i'} \\
& \leq C_W C_S
\max_{i \in \mathbb{N}_{n_\xi}} \sum_{j \in \mathbb{N}_{n_x}} \mu(\bar\Omega)
= n_x \mu(\bar\Omega) C_W C_S,
\end{aligned}
\]
and similarly $\vert G^{n-1} \vert_\infty \leq n_x C_G$, and then combine
them with the bound for $\vert A^{-1}\vert_\infty$ to find
\[
\begin{aligned}
\vert V^n \vert_\infty
& \leq \vert A^{-1} \vert_\infty
\Big(
\vert V^{n-1} \vert_\infty
+ \tau \vert N(V^{n-1}) \vert_\infty
+ \tau \vert G^{n-1} \vert_\infty
\Big) \\
& \leq \frac{1}{1+\gamma \tau}
\Big(
\vert V^{n-1} \vert_\infty
+ \tau n_x \mu(\bar\Omega) C_W C_S
+ \tau n_x C_G
\Big).
\end{aligned}
\]
We set
\[
r = \frac{1}{1+\gamma \tau} <1, \qquad
q = \frac{\tau n_x}{1 + \gamma \tau}
( \mu(\bar\Omega) C_W C_S + C_G),
\]
and use induction and elementary properties of the geometric series to obtain
\[
\vert V^n \vert_\infty \leq r^n \vert V^0 \vert_\infty + q \sum_{j =0}^{n-1} r^j
\leq \vert V^0 \vert_\infty + \frac{q}{1-r},
\]
which proves the assertion.
\end{proof}
In addition to proving boundedness of the solution, we address the
convergence rate of the IMEX scheme. For this result, we assume the existence of a
sufficiently regular solution to \eqref{eq:systemNum}.
\begin{lemma}[Local convergence rate of the IMEX scheme]\label{lem:IMEXconvergence}
Assume Hypothesis \ref{hyp:boundedness}, $W \in C^2(\Omega \times \Omega)$, $S
\in C^2_b(\mathbb{R})$, and assume \eqref{eq:systemNum}
admits a strong solution $V_*$
whose partial derivatives $\partial_{tt}V_*$, $\partial_{xx}V_*$,
$\partial_{x\xi}V_*$, $\partial_{\xi\xi}V_*$, $\partial_{\xi\xi\xi\xi}V_*$
exist and are bounded on $\bar \Omega \times [0,T]$. Denote
by $V^n_*$ the matrix with elements $(V_*^n)_{ij} =
V_*(x_j,\xi_i,t_n)$, for $(i,j,n) \in \mathbb{N}_{n_\xi} \times \mathbb{N}_{n_x} \times \mathbb{N}_{n_t}$.
Further, let $(V^n)_{n \in \mathbb{N}}$ be the solution to the IMEX scheme
\eqref{eq:IMEX}--\eqref{eq:NQuad}, and let
\[
\zeta = n_x \mu(\bar\Omega) \Vert W \Vert_\infty \Vert S' \Vert_\infty,
\qquad
h = \max(h_\xi, h_x).
\]
There exist constants $C_\tau, C_h >0$ such that
\begin{align}
& |V^n - V_*^n|_\infty \leq \frac{1}{\gamma -\zeta}(C_\tau \tau + C_h h^2)
& \text{if $\zeta < \gamma$} \label{eq:bound1}, \\
& |V^n - V_*^n|_\infty \leq \frac{T}{1+ \gamma \tau} (C_\tau \tau + C_h h^2)
& \text{if $\zeta = \gamma$} \label{eq:bound2}, \\
& |V^n - V_*^n|_\infty \leq
\frac{C_\tau \tau + C_h h^2}{\zeta - \gamma} \exp
\frac{(\zeta - \gamma) T}{1+\gamma \tau}
& \text{if $\zeta > \gamma$} \label{eq:bound3}.
\end{align}
\end{lemma}
\begin{proof}
The regularity assumptions on $V_*$, and standard results on finite-difference
approximation and trapezium quadrature rule guarantee the existence of constants
$C_{tt},C_{xx},C_{\xi\xi},C_{\xi\xi\xi\xi} > 0$ such that for all $n
\in \mathbb{N}_{n_t}$
\begin{equation}\label{eq:IMEXExact}
AV_*^n = V_*^{n-1} + \tau \big( N(V_*^{n-1}) + G^{n-1} +
C_{tt} \tau + C_{\xi\xi\xi\xi}h^2_\xi + C_{xx}h_x^2 + C_{\xi\xi}h_\xi^2
\big),
\end{equation}
where the errors for the forward finite-difference in $t$, centred finite-difference
in $\xi$, and trapezium rule are listed progressively, with constants proportional
to the respective partial derivatives. We subtract \eqref{eq:IMEXExact} from
\[
AV^n = V^{n-1} + \tau N(V^{n-1}) + \tau G^{n-1},
\]
and obtain the error bound
\begin{equation}\label{eq:intermediateBound}
|V^n - V_*^n|_\infty \leq |A^{-1}|_\infty \big(|V^{n-1} - V_*^{n-1}|_\infty + \tau |N(V^{n-1}) -
N(V_*^{n-1})|_\infty + \tau \omega \big),
\end{equation}
where $\omega = C_\tau \tau + C_h h^2$, $C_\tau = C_{tt}$, $C_h =
\max(C_{xx},C_{\xi\xi},C_{\xi\xi\xi\xi})$, and
$h = \max(h_\xi,h_x)$.
Since the first derivative $S'$ of $S$ is bounded, we have the following estimate
for the nonlinear term
\[
\begin{aligned}
|N(V^{n}) - N(V_*^n)|_\infty
& \leq
\Vert W \Vert_\infty
\Vert S' \Vert_\infty
\max_{i \in \mathbb{N}_{n_\xi}} \sum_{j \in \mathbb{N}_{n_x}}
\sum_{j' \in \mathbb{N}_{n_x}} \sum_{i' \in \mathbb{N}_{n_\xi}}
|V^n_{i'j'}-(V_*^n)_{i'j'}|\rho_{j'}\sigma_{i'} \\
& \leq
n_x \mu(\bar \Omega)
\Vert W \Vert_\infty
\Vert S' \Vert_\infty
\vert V^n - V_*^n \vert_\infty \\
&
= \zeta \vert V^n - V_*^n \vert_\infty ,
\end{aligned}
\]
which, together with \eqref{eq:invABound} and \eqref{eq:intermediateBound} gives a
recursive bound for the $\infty$-norm matrix error $E^n = |V^n-V_*^n|_\infty$\footnote{The scalar values $r,q$ defined in this proof are
different from the ones defined in the proof of \cref{lem:IMEXboundedness}.},
\[
E^0 = 0, \qquad
E^n \leq \frac{1+\zeta \tau}{1+\gamma \tau}E^{n-1} + \frac{\tau \omega}{1 + \gamma
\tau} := r E^{n-1} + q,
\qquad n \in \mathbb{N}_{n_t}.
\]
Hence,
\begin{equation}\label{eq:intermediateBound2}
E^n \leq q \frac{r^n - 1}{r-1} \quad \text{if } r \neq 1,
\qquad E^n \leq n q \quad \text{if } r = 1,
\qquad
n \in \mathbb{N}_{n_t}.
\end{equation}
If $ \zeta < \gamma$, then $r < 1$, and we obtain \eqref{eq:bound1} as
\[
E^n \leq
\frac{q}{1-r} = \frac{\omega}{\gamma - \zeta} = \frac{1}{\gamma -\zeta}(C_\tau
\tau + C_h h^2),
\qquad n \in \mathbb{N}_{n_t}.
\]
If $\zeta = \gamma$, then $r = 1$ and \eqref{eq:bound2} is found as follows
\[
E^n \leq n q
\leq \frac{n_t\tau \omega}{1 + \gamma\tau}
= \frac{T}{1 + \gamma\tau}(C_\tau \tau + C_h h^2),
\qquad n \in \mathbb{N}_{n_t}.
\]
If $\zeta > \gamma$, then $r > 1$ and we can bound the $n$th term of the sequence
with an exponential, using the bound $(1+x/n)^n \leq \e^x$, valid for all $x \geq 0$, as
follows,
\[
r^n =
\bigg(
1 + \frac{(\zeta - \gamma) n \tau}{n(1+\gamma \tau)}
\bigg)^n
\leq
\exp \frac{(\zeta - \gamma) n \tau}{1+\gamma \tau}
\leq
\exp \frac{(\zeta - \gamma) T}{1+\gamma \tau},
\]
which combined with \eqref{eq:intermediateBound2} gives \eqref{eq:bound3}:
\[
E^n \leq \frac{\omega}{\zeta - \gamma} \exp \frac{(\zeta - \gamma) T}{1+\gamma \tau}
= \frac{C_\tau \tau + C_h h^2}{\zeta - \gamma} \exp \frac{(\zeta - \gamma)
T}{1+\gamma \tau}.
\]
\end{proof}
The preceding lemma shows that the IMEX scheme has first order convergence in time,
and second order convergence in space. As expected, this conclusion holds without
imposing any restriction on the size of $\tau$ in relation to $h$, as happens, for
example, in the case of explicit methods for parabolic equations. In passing we note
that if $\zeta < \gamma$ and $V_*(t)$ exists for all $t \in \mathbb{R}$, the error estimate
\eqref{eq:bound1} holds for $n \in \mathbb{N}$, that is, in an unbounded interval of time;
on the other hand, the error estimates do not hold on an unbounded time interval when
$\zeta \geq \gamma$, as the bounds depend on $T$.
\subsection{Implementational aspects and efficiency}
In this section we make a few considerations on the implementation of the proposed
IMEX scheme, with the view of comparing its efficiency to an ordinary IMEX scheme,
that is, to an IMEX scheme applied to \eqref{eq:ODEs}.
\subsubsection{Implementation}
IMEX schemes for planar semilinear problems require the inversion of a
discretised Laplacian, which is usually a square matrix with the same dimension as
the problem ($n_\xi n_x$ equations in our case). The particular structure of the
problem under consideration, however, implies that the matrix to be inverted is much
smaller (the square matrix $A$ has only $n_\xi$ rows and $n_\xi$ columns). At each
time step of
\eqref{eq:IMEX} we solve a problem of the type $AX=B$, where $A \in \mathbb{R}^{n_\xi \times
n_\xi}$, and $X, B \in \mathbb{R}^{n_\xi \times n_x}$. This can be achieved efficiently by
pre-computing a factorisation of $A$, and then back-substituting for all
columns of $B$. Since the matrix $A$ is sparse and with low bandwidth, efficient
implementations of the $LU$ decomposition and back-substitution can be used to solve
the $n_x$ linear problems corresponding to the columns of $X$ and $B$.
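In Matlab, for instance, this amounts to factorising once, before the time loop,
and reusing the factors at every step; a sketch with illustrative variable names:
\begin{verbatim}
dA = decomposition(A,'lu');        % sparse LU factors, computed once
% ... at each time step:
Vn = dA \ (V + tau*(N + G));       % back-substitution for all n_x columns of B
\end{verbatim}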
An important aspect of the numerical implementation is the evaluation of the
nonlinear term \eqref{eq:NQuad}: evaluating the right-hand side of
\eqref{eq:IMEX} requires in general $O(n^2_\xi n^2_x)$ operations, which is a
bottleneck for the time stepper, in particular for large domains. However, the
structure of the problem can be exploited once again to evaluate this term
efficiently. We make use of the following properties:
\begin{enumerate}
\item The kernel $W$ specified in \eqref{eq:kernel} has a product structure,
hence
\[
W(x_j,\xi_i,x_{j'},\xi_{i'}) = \alpha_i \alpha'_{i'}
w(|x_j-x_{j'}|),
\]
where $\alpha_i = \delta_\varepsilon(\xi_i-\xi_0)$, $\alpha'_{i'} = \delta_\varepsilon(\xi_{i'})$.
In addition, $w$ is periodic, therefore the matrix with entries $w(|x_j -
x_{j'}|)$ is circulant with (rotating) row vector $w
= \{w(|x_j|) \colon j \in \mathbb{N}_{n_x}\} \in \mathbb{R}^{1 \times n_x}$.
\item The function $x \mapsto V(x,\blank)$ is $2L_x$-periodic, hence the trapezium
rule has identical weights $\rho_j = h_x$, and the integration
in $x$ is a circular convolution, which can be performed efficiently in $O(n_x
\log n_x)$ operations, using the Discrete Fourier Transform (DFT).
\end{enumerate}
We have
\begin{equation}\label{eq:NSlow}
N_{ij}(V) = \alpha_i \sum_{j' \in \mathbb{N}_{n_x}} w_{j-j'} \rho_{j'}
\sum_{i' \in \mathbb{N}_{n_\xi}} \alpha'_{i'} \sigma_{i'} S(V_{i'j'})
\qquad
(i,j) \in \mathbb{N}_{n_\xi} \times \mathbb{N}_{n_x},
\end{equation}
and a DFT can be used to perform the outer
sums~\cite{coombes2012interface,Rankin2014}.
Introducing the direct, $\mathcal{F}_n$, and inverse, $\mathcal{F}_n^{-1}$, DFTs for $n$-vectors, we express compactly the nonzero elements of $N$ as
follows
\begin{equation}\label{eq:NFast}
N = \alpha h_x \mathcal{F}^{-1}_{n_x}
\big[
\mathcal{F}_{n_x}[w] \odot \mathcal{F}_{n_x}[ (\alpha' \odot \sigma)^T S(V)]
\big],
\end{equation}
where $\alpha, \alpha', \sigma \in \mathbb{R}^{n_\xi \times 1}$
are column vectors, and $\odot$ denotes the Hadamard product, that is, elementwise
vector multiplication. The formula above evaluates the
nonlinear term $N$ in just
$O(n_xn_\xi) + O(n_x \log n_x)$ operations.
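A direct Matlab transcription of \eqref{eq:NFast} might read as follows; this is
our own sketch (not the code of Appendix~\ref{sec:matlab}), with \texttt{w\_hat}
storing the precomputed DFT of the circulant row $w$:
\begin{verbatim}
% Pseudospectral evaluation of the nonlinear term N (sketch).
% alpha, alphap, sigma: n_xi-by-1;  w_hat = fft(w): 1-by-n_x;
% V: n_xi-by-n_x;  S: firing-rate function handle acting elementwise.
z = fft( (alphap.*sigma).' * S(V) );          % row vector of length n_x
N = h_x * alpha * real(ifft( w_hat .* z ));   % rank-one outer product
\end{verbatim}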
We summarise our implementation with the pseudocode provided in Algorithm
\ref{alg:smart}, and we will henceforth compare quantitatively its efficiency with
a standard IMEX implementation, which we also provide in Algorithm \ref{alg:naive}. The
matricial version, Algorithm \ref{alg:smart}, exposes row- and column-vectors, for
which a very compact Matlab implementation can be derived. We give an example of such
implementation in Appendix~\ref{sec:matlab}, and we refer the reader to
\cite{daniele_avitabile_2020_3731920} for a repository of codes used in this article.
\subsubsection{Efficiency estimates}
We now make a few considerations about the efficiency of our algorithm. We will provide
two main measures of efficiency: an estimate of the floating point operations (flops),
and an estimate of the storage space (in floating point numbers) required by the
algorithm, as a function of the input data which, in our case, are the number of
gridpoints in each direction, $n_x$ and $n_\xi$. We are interested
in how the estimates scale for large $n_x, n_\xi$.
To estimate the number of flops, we count the number of operations required by
Algorithms~\ref{alg:smart} and \ref{alg:naive} in the initialisation step (lines 2--6), and
in a single time step (lines 8--12). We base our estimates on the following
facts and hypotheses:
\begin{enumerate}
\item The cost of multiplying an $m$-by-$n$ matrix by an $n$-vector is $2mn - m$
flops.
\item If an $n$-by-$n$ matrix is tridiagonal, then the matrices $L$ and $U$ of its
$LU$-factorisation are bidiagonal, and $L$ has $1$ along its main diagonal.
This implies that storing the $LU$ factorisation requires only $3$
$n$-vectors. Calculating the $LU$ factorisation costs $2n + 1$ flops, while
solving the corresponding linear problem $LUx = b$, with $x,b \in \mathbb{R}^n$,
requires $2n-2$ and $3n-2$ flops for the forward- and backward-substitution,
respectively. Similar considerations apply if $A$ is not tridiagonal, but still
sparse, as it would be obtained using a different discretisation method for the
diffusive operator: estimates for the flops of the corresponding
$PLU$-factorisation depend, in general, on the sparsity pattern of $A$, as well
as on the permutation strategy, which is heuristic but can have an impact on the
sparsity of $L$ and $U$, thereby influencing the performance of the algorithm.
We present calculations only in the case of a tridiagonal matrix $A$, for which
explicit estimates are possible.
\item As stated above, it is well known that a single FFT of an $n$-vector costs
$O(n \log n)$ operations.
\item We assume that function evaluations of the functions $G$, $S$, $w$, $\delta$
cost one flop. This estimate is optimistic, as most function evaluations will
require more than one flop, but we make this simplifying assumption for both the
algorithms we are comparing.
\end{enumerate}
\begin{algorithm}
\caption{IMEX time stepper in matrix form \eqref{eq:MatrixODE}, nonlinear term
computed with pseudospectral evaluation \eqref{eq:NFast}}
\label{alg:smart}
\DontPrintSemicolon
\SetAlgoNoLine
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Initial condition $V^0 \in \mathbb{R}^{n_\xi \times n_x}$, time step $\tau$, number of steps $n_t$.}
\Output{An approximate solution $(V^n)_{n=1}^{n_t} \subset \mathbb{R}^{n_\xi \times n_x}$}
\Begin{
Compute grid vectors $\xi \in \mathbb{R}^{n_\xi \times 1}$, $x \in \mathbb{R}^{1 \times n_x}$. \;
Compute synaptic vectors $w, \hat w = \mathcal{F}_{n_x}[w] \in \mathbb{R}^{1 \times n_x}$.\;
Compute synaptic vectors $\alpha, \alpha' \in \mathbb{R}^{n_\xi \times 1}$. \;
Compute quadrature weights $\sigma \in \mathbb{R}^{n_\xi \times 1}$. \;
Compute sparse $LU$-factorisation of $A$,
\[
LU=A \in \mathbb{R}^{n_\xi \times n_\xi}.
\]\;
\For{$n = 1,\ldots,n_t$}{
Set $V = V^{n-1} \in \mathbb{R}^{n_\xi \times n_x} $. \;
Compute the external input at time $t_{n-1}$ and store it in $G \in \mathbb{R}^{n_\xi \times n_x}$.\;
Set $z = \mathcal{F}_{n_x}\big[(\alpha' \odot \sigma)^T S(V)\big] \in \mathbb{R}^{1 \times n_x} $.\;
Set $N = h_x \alpha \mathcal{F}^{-1}_{n_x} [ \hat w \odot z] \in \mathbb{R}^{n_\xi \times n_x}$.\;
Solve for $V^{n}$ the linear problem $(LU)V^n = V + \tau(N+G)$.
}
}
\end{algorithm}
\begin{algorithm}
\caption{IMEX time stepper in vector form \eqref{eq:ODEs}, nonlinear term evaluated with
quadrature formula \eqref{eq:NQuad}.}
\label{alg:naive}
\SetAlgoNoLine
\DontPrintSemicolon
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Initial condition $U^0 \in \mathbb{R}^{n_\xi n_x}$, time step $\tau$, number of steps $n_t$.}
\Output{An approximate solution $(U^n)_{n=1}^{n_t} \subset \mathbb{R}^{n_\xi n_x}$}
\Begin{
\BlankLine
Compute grid vectors $\xi \in \mathbb{R}^{n_\xi}$, $x \in \mathbb{R}^{n_x}$. \;
Compute synaptic vector $w \in \mathbb{R}^{1 \times n_x}$.\;
Compute synaptic vectors $\alpha, \alpha' \in \mathbb{R}^{n_\xi \times 1}$. \;
Compute quadrature weights $\rho \in \mathbb{R}^{n_x}$, $\sigma \in \mathbb{R}^{n_\xi}$. \;
Compute sparse $LU$-factorisation
\[
LU = \big(
(1+\tau \gamma) I_{n_x n_\xi} - \tau \nu I_{n_x} \otimes D_{\xi \xi}
\big)
\in \mathbb{R}^{n_\xi n_x \times n_\xi n_x }.
\]
\BlankLine
\For{$n = 1,\ldots,n_t$}{
Set $Z = U^{n-1} \in \mathbb{R}^{n_\xi n_x} $. \;
Compute the external input at time $t_{n-1}$ and store it in $G \in \mathbb{R}^{n_\xi n_x}$. \;
Compute the nonlinear term $N$ using \eqref{eq:NQuad}.\;
Solve for $U^{n}$ the linear problem $(LU)U^n = Z + \tau(N+G)$.
}
}
\end{algorithm}
In Table~\ref{tab:flops} we count flops required in each line of Algorithms
\ref{alg:smart} and \ref{alg:naive}. The data is grouped so as to distinguish between
the initialisation phase of the algorithms, and the iterations for the time steps.
Algorithm \ref{alg:smart} substantially outperforms Algorithm \ref{alg:naive} in
both phases. In the initialisation, the number of flops scales linearly for Algorithm
\ref{alg:smart}, and quadratically for Algorithm \ref{alg:naive}. This is mostly
owing to the $LU$-factorisation step, which involves the $n_\xi$-by-$n_\xi$ matrix
$A$ in the former, and an $n_\xi n_x$-by-$n_\xi n_x$ matrix in the latter.
The efficiency gain is more striking in the cost per time step: owing to the
pseudospectral evaluation of the nonlinearity, only $O(n_\xi n_x) + O(n_x \log n_x)$
flops are necessary in Algorithm \ref{alg:smart}, as opposed to $O(n^2_\xi n^2_x)$
flops in Algorithm \ref{alg:naive}.
\begin{table}
\centering
\caption{Flop count for the initialisation step (lines 2--6) and for one time
step (lines 8--12) in Algorithms 1 and 2.}
\label{tab:flops}
\begin{tabular}{cccc}
\toprule
\multicolumn{2}{c}{Algorithm 1} & \multicolumn{2}{c}{Algorithm 2} \\
Lines & Flops & Lines & Flops \\
\midrule[\heavyrulewidth]
2 & $ n_\xi + n_x $ & 2 & $ n_\xi + n_x $ \\
3 & $ 2n_x $ & 3 & $ n_x $ \\
4 & $ 2n_\xi $ & 4 & $ 2n_\xi $ \\
5 & $ n_\xi $ & 5 & $ n_\xi + n_x $ \\
6 & $ 2n_\xi-1 $ & 6 & $ 2n_\xi n_x -1 $ \\
\midrule
2--6 & $ O(n_\xi) + O(n_x) $ & 2--6 & $ O(n_\xi n_x) $ \\
\midrule
8 & $ n_\xi n_x $ & 8 & $ n_\xi n_x $ \\
9 & $ n_\xi n_x $ & 9 & $ n_\xi n_x $ \\
10 & $ 3 n_\xi n_x + O(n_x \log n_x) + n_\xi - n_x $ & 10 & $ 2n^2_\xi n_x^2 - n^2_\xi n_x $ \\
11 & $ n_\xi n_x + O(n_x \log n_x) + 2n_x $ & 11 & $ 5n_\xi n_x -4 $ \\
12 & $ 5 n_\xi n_x - 4 n_x $ & & $ $ \\
\midrule
8--12 & $O(n_\xi n_x) + O(n_x \log n_x)$ & 8--11 & $ O(n^2_\xi n^2_x) $ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Space requirements, measured in Floating Point Numbers, for
Algorithms 1 and 2. Arrays $d_1, \ldots, d_3$ store diagonals of the
$LU$-factorisation in the respective algorithms.}
\label{tab:memory}
\begin{tabular}{ccc}
\toprule
Floating Point Numbers & Algorithm 1 & Algorithm 2 \\
\midrule[\heavyrulewidth]
$n_\xi$ & $\xi,\alpha,\alpha',\sigma,d_1,d_2,d_3$ & $\xi, \rho, \alpha, \alpha'$ \\
$n_x$ & $x,w,z$ & $x, \sigma, w$ \\
$n_\xi n_x$ & $V,V^n,G,N$ & $U^n,Z,N,G,d_1,d_2,d_3$ \\
\midrule
Total & $4n_x n_\xi + 7n_\xi + 3n_x$ & $ 7 n_x n_\xi +2n_\xi + 2n_x $ \\
\bottomrule
\end{tabular}
\end{table}
An important point to note is that, in the case of a 2D somatic space with, say,
coordinates $(x,y,\xi)$ and $n_x=n_y=N$, $n_\xi$ grid points (see
Figure~\ref{fig:sketch}(a)), the size of the matrix $A$ in
Algorithm \ref{alg:smart} remains unaltered, while Algorithm \ref{alg:naive}
requires the factorisation and inversion of a much larger matrix, of size $n_\xi N^2$-by-$n_\xi
N^2$. Estimates for the efficiencies in this case can be obtained by replacing $n_x$ by
$N^2$ in the table, leading to much greater savings.
Finally, in Table~\ref{tab:memory} we collect the variables used by both algorithms,
and count the storage requirement of each of them, measured in floating point numbers.
The results show that Algorithm 1 requires asymptotically the same storage as
Algorithm 2, namely $O(n_\xi n_x)$. For fixed values of $n_\xi$ and $n_x$, however, the
latter uses almost twice as much storage space as the former.
\section{Travelling waves}\label{sec:TWTest}
\begin{figure}
\centering
\includegraphics{wave}
\caption{ Coherent structure observed in time
simulation of \eqref{eq:systemNum}, \eqref{eq:TWSigmoidal}, \eqref{eq:TWKernel}.
(a): Pseudocolor plot of $V(x,\xi,t)$ at several time points, showing two
counter-propagating waves. (b): Solution at $\xi = 0$, showing the wave
profile. Parameters: $\xi_0 = 1$, $\varepsilon = 0.005$, $\nu=0.4$, $\gamma=1$, $\beta=1000$,
$\theta =0.01$, $\kappa =3$, $L_x = 24 \pi$, $L_\xi =3$, $n_x =
2^{10}$, $n_\xi=2^{12}$, $\tau = 0.05$.}
\label{fig:wave}
\end{figure}
We tested the algorithm on an analytically tractable neural field problem, and we
report in this section our numerical experiments. For the test, we take a sigmoidal
firing rate function
\begin{equation}\label{eq:TWSigmoidal}
S(V) = \frac{1}{1 + \exp(-\beta(V-\theta))},
\end{equation}
and kernel specified by
\begin{equation}\label{eq:TWKernel}
w(x) = \frac{\kappa}{2} \exp \bigg( -\frac{|x|}{2} \bigg),
\qquad
\delta_\varepsilon(\xi) = \frac{1}{\varepsilon
\sqrt{\pi}} \exp\bigg(-\frac{\xi^2}{\varepsilon^2}\bigg),
\end{equation}
where $\beta, \theta, \kappa$ are positive constants. If $S(V) = H(V- \theta)$, $H$
being the Heaviside
function, $\delta_\varepsilon$ is replaced by the Dirac delta distribution, and the
evolution equation is posed on $\mathbb{R}^2$, then the model supports solutions for which
$V(x,0,t)$ is a travelling front
$
V(x,0,t) = \varphi(x-v_*t)$,
with
$\varphi(\pm \infty) = V_\pm$,
whose speed $v_*$ satisfies the implicit equation~\cite{Ross2019}
\begin{equation}\label{eq:analyticalSpeed}
\frac{\kappa \exp(-\psi(v_*,\nu) \xi_0 )}{2 \psi(v_*,\nu) \nu} -\theta = 0,
\qquad \psi(v_*,\nu) = \sqrt{\frac{\gamma + v_*}{\nu}}.
\end{equation}
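For a given parameter set the root of \eqref{eq:analyticalSpeed} can be located
with a standard scalar solver; a Matlab sketch (initial guess chosen by
inspection, variable names illustrative) is
\begin{verbatim}
% Analytical wave speed: solve the implicit equation for v_* (sketch)
psi   = @(v) sqrt((gamma + v)/nu);
F     = @(v) kappa*exp(-psi(v)*xi0)./(2*nu*psi(v)) - theta;
vstar = fzero(F, 1);
\end{verbatim}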
\begin{figure}
\centering
\includegraphics{convergenceTests}
\caption{(a) Travelling wave speed versus firing rate
threshold, computed analytically via \eqref{eq:analyticalSpeed}, and numerically
via the time-stepper. (b)--(d) Convergence of the computed speed to the
analytical speed at $\theta = 0.01$, as a function of the kernel support
parameter $\varepsilon$, the steepness of the sigmoid $\beta$, and the time-stepping
parameter $\tau$, respectively. Parameters as in Figure~\ref{fig:wave}.}
\label{fig:convergenceTests}
\end{figure}
To test our scheme we study solutions to \eqref{eq:systemNum},
\eqref{eq:TWSigmoidal}, \eqref{eq:TWKernel} with
$L_x, \beta \gg 1$, $L_\xi \gg \sqrt{\nu /\gamma}$ (the
characteristic electrotonic length), and $\varepsilon \ll 1$.
Since for this problem $[-L_x,L_x)
\cong \mathbb{S}$, we expect to observe at $\xi=\xi_0$
two counter-propagating waves with approximate speed $v_*$ and
\[
V(\pm L_x,\xi_0,t) \approx V_+, \qquad V(0,\xi_0,t) \approx V_-.
\]
We show an exemplary profile of this coherent structure in Figure~\ref{fig:wave},
where we observe two counter-propagating waves, as described above.
Since the wavespeed $v_*$ is available implicitly, we performed some tests
to validate the proposed algorithm. Firstly, we compute roots of
\eqref{eq:analyticalSpeed} in the variable $v_*$, as a function of the firing rate
threshold $\theta$. In Figure~\ref{fig:convergenceTests}(a) we observe good agreement with
the wavespeed measured in direct simulations. The latter has been
computed by post-processing data from numerical simulations: using first-order
interpolants we approximate a positive function $x_*(t)$ such that $V(x_*(t),0,t) =
\theta$, that is, the $\theta$-level set of $V(x,0,t)$ on $[0,L_x] \times [0,T]$;
after an initial transient, $\dot x_*(t)$ is approximately constant and provides an
estimate of $v_*$, which is derived via first-order finite differences. In
Figure~\ref{fig:convergenceTests}(a) we observe a small discrepancy, which should be expected as
we have several sources of error, namely: the time-stepping error, the spatial
discretisation error for the differential and integral operators, the error due to
the sigmoidal firing rate and to $\delta_\varepsilon$ (the theory is
valid for Heaviside firing rate and for a Dirac-delta distribution $\delta$). In
Figures~\ref{fig:convergenceTests}(b)--(d) we show convergence plots for these errors (except for
the second-order spatial discretisation error which is dominated in numerical
simulations by the first-order time-stepping error).
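The post-processing described above can be implemented with first-order
interpolation of the stored somatic profiles; the sketch below uses illustrative
names (\texttt{Vsoma(m,:)} holding $V(x,0,t_m)$ and \texttt{t} the output times)
and is not the code used to produce the figures:
\begin{verbatim}
% Estimate the wave speed from the theta-level set of V(x,0,t) (sketch)
xstar = zeros(size(t));
for m = 1:numel(t)
  s = sign(Vsoma(m,:) - theta);
  j = find(diff(s) ~= 0, 1);               % first crossing of the level set
  x1 = x(j);  x2 = x(j+1);
  V1 = Vsoma(m,j);  V2 = Vsoma(m,j+1);
  xstar(m) = x1 + (theta - V1)*(x2 - x1)/(V2 - V1);   % first-order interpolant
end
vest = abs(diff(xstar)./diff(t));          % approximately constant after a transient
\end{verbatim}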
\section{Turing instability}\label{sec:TuringTest}
The model defined by (\ref{1}), with an appropriate choice of somatic interaction,
can also support a Turing instability to spatially periodic patterns
\cite{Bressloff96}. These in turn may either be independent of time or periodic in
time. In the latter case this leads to periodic travelling waves. Whether emergent
patterns be \textit{static} or \textit{dynamic} they both provide another validation
test for the numerical scheme presented here, as the bifurcation point as determined
analytically from a Turing analysis should agree with the onset of patterning in a
direct numerical simulation. A relatively recent description of the method for
determining the Turing instability in a neural field with dendritic processing can be
found in \cite{Coombes2014}. Here we briefly summarise the necessary steps to arrive
at a formula for the continuous spectrum, from which the Turing instability can be
obtained.
In general a homogeneous steady state solution of (\ref{1}) will only exist if either
$S(0)=0$ or $\int_{\mathbb{R}^2} W(x,\xi,y,\eta)\mathop{}\!\mathrm{d} y \mathop{}\!\mathrm{d} \eta = \text{constant}$
for all $(x,\xi)$. The latter condition is not generic, and so for the purposes of
this validation exercise we shall work with the choice $S(0)=0$ for which $V=0$ is
the only homogeneous steady state. Linearising around $V=0$ and using
(\ref{eq:kernel}) gives an evolution equation for the perturbations $\delta
V(x,\xi,t)$ that can be written in the form
\begin{equation}
\delta V(x,\xi,t) = S'(0) \int_{-\infty}^t \Theta(\xi-\xi_0,t-s) \int_{\mathbb{R}}
w(|x-x'|) \delta V(x',0,s) \mathop{}\!\mathrm{d} x' \mathop{}\!\mathrm{d} s,
\label{deltaV}
\end{equation}
where
\[
\Theta(\xi, t)=\mathrm{e}^{-\gamma t} \frac{\mathrm{e}^{-\xi^{2} /(4 \nu
t)}}{\sqrt{4 \pi \nu t}} H(t).
\]
Focusing on a \textit{somatic} field $\delta V(x,0,t)$, we see from (\ref{deltaV})
(with $\xi=0$) that this has solutions of the form $\e^{\lambda t} \e^{i p x}$ for
$\lambda \in \mathbb{C}$ and $p \in \mathbb{R}$, where $\lambda=\lambda(p)$ is defined by the
implicit solution of $\mathcal{E}(\lambda,p)=0$, where
\begin{equation}
\mathcal{E}(\lambda,p) = 1 - S'(0) \frac{\exp(-\psi(\lambda,\nu) \xi_0 )}{2 \psi(\lambda,\nu) \nu} \widehat{w}(p) .
\end{equation}
Here the function $\psi$ is defined as in (\ref{eq:analyticalSpeed}) and
$\widehat{w}(p)$ is the Fourier transform of $w$:
\begin{equation}
\widehat{w}(p) = \int_{\mathbb{R}} w(|y|) \e^{-i p y} \mathop{}\!\mathrm{d} y.
\end{equation}
We note that since $w(x)=w(|x|)$ then $\widehat{w}(p) \in \mathbb{R}$.
\begin{figure}
\centering
\includegraphics{TuringStatic}
\caption{Numerical simulation of Turing bifurcation for the model with kernel and
firing rate function given in \eqref{eq:mexicanHatKernel}. (a): Plots of $\hat w(p) - w_*$
for $\beta = 28$ and $\beta=30$, from which we deduce
that a Turing bifurcation occurs for an intermediate value of $\beta$. We expect
perturbations of the trivial state to decay exponentially if $\beta=28$ and to increase
exponentially if $\beta = 30$, as confirmed in panels (b)--(d). (b): Maximum absolute
value of the voltage at $\xi=0$, as function of time, when the trivial steady state is
perturbed. (c),(d): Pseudocolor plot of $V$ when $\beta=28$ and $\beta=30$, respectively.
Parameters: $\xi_0 = 1$, $\varepsilon = 0.005$, $\nu = 6$, $c = 1$, $a_1 = 1$, $b_1 =
1$, $a_2 = 1/4$, $b_2 = 1/2$, $n_x = 2^9$, $L_x = 10\pi$, $n_\xi = 2^{11}$, $L_\xi =
2.5 \pi$, $\tau = 0.01$. }
\label{fig:TuringStatic}
\end{figure}
Note that if $\lambda \in \mathbb{R}$ with $\lambda > -\gamma$ then $\mathcal{E}(\lambda,p) \in
\mathbb{R}$.
The trivial steady state is stable to spatially-periodic perturbations if
\[
\hat w(p) < w_* = 2\psi(0,\nu) \nu \exp(\psi(0,\nu) \xi_0)/S'(0) \qquad
\text{for all $p \in \mathbb{R}$}.
\]
Hence a static instability occurs under a parameter variation when $\widehat{w}(p_*) = w_*$ for
some $p_* \in \mathbb{R}$, the critical wavenumber of the unstable pattern.
Hence if $\widehat{w}(p)$ has a positive peak away from the origin, at $p=p_*$, then a static
Turing instability can occur (see Figure \ref{fig:TuringStatic}(a)). This is possible
if $w(|x|)$ has a Mexican-hat shape, describing short range excitation and long range
inhibition.
We have validated this scenario numerically, by simulating a neural field with
\begin{equation}\label{eq:mexicanHatKernel}
w(x) = a_1 \exp(-b_1 |x|) - a_2 \exp(-b_2 |x|), \qquad S(V) = \frac{1}{1 +
\exp{(-\beta V)}} - \frac{1}{2},
\end{equation}
and reporting results in Figure~\ref{fig:TuringStatic}. In the Figure we pick the
steepness of the sigmoidal firing rate as the main parameter, deduce that
a Turing bifurcation occurs for a critical value $\beta_* \in [28,30]$, perturb the
trivial steady state by setting initial condition $V_0(x,\xi) = 0.01 \cos(p_* x)$ and
domain $\bar \Omega = [-4\pi/p_*, 4\pi/p_*] \times [-\pi/p_*,\pi/p_*]$, and observe
the perturbations decaying for $\beta = 28$, and growing for $\beta = 30$, respectively.
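The static bifurcation condition can be checked directly, since $\widehat{w}$ is
available in closed form for \eqref{eq:mexicanHatKernel},
$\widehat{w}(p) = 2a_1b_1/(b_1^2+p^2) - 2a_2b_2/(b_2^2+p^2)$, and $S'(0) = \beta/4$;
a Matlab sketch with illustrative variable names is
\begin{verbatim}
% Static Turing condition for the Mexican-hat kernel (sketch)
w_hat = @(p) 2*a1*b1./(b1^2+p.^2) - 2*a2*b2./(b2^2+p.^2);
w_c   = 2*sqrt(gamma/nu)*nu*exp(sqrt(gamma/nu)*xi0)/(beta/4);  % w_* with S'(0)=beta/4
p     = linspace(0, 5, 1000);
[gap, k] = max(w_hat(p) - w_c);      % instability iff gap > 0
p_star   = p(k);                     % approximate critical wavenumber near onset
\end{verbatim}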
Note that, if $\lambda \in \mathbb{C}$, then $\mathcal{E}(\lambda,p) \in \mathbb{C}$ and it is
possible that a dynamic Turing instability can occur, with an emergent frequency $\omega_c$. It is
known that this case is more likely for an inverted Mexican-hat shape, describing
short range inhibition and long range excitation \cite{Bressloff96}. We do not show
computations for this case, but we briefly discuss it below. The dynamic
bifurcation condition can be defined by tracking the continuous spectrum at the point
where it first crosses the imaginary axis away from the real line. This is
equivalent to solving $P_p H_\omega - H_p P_\omega = 0$ with $P(0,\omega_c,p) = 0 =
H(0,\omega_c,p)$, where the subscripts denote partial differentiation and
$P(\nu,\omega,p) = \Real \, \mathcal{E}(\nu + i \omega,p)$ and $H(\nu,\omega,p) =
\Imag \, \mathcal{E}(\nu + i \omega,p)$ \cite[Chapter 1]{Coombes2005}.
\section{Conclusions} \label{sec:conclusions}
In this paper we have presented an efficient scheme for the
numerical solution of neural fields that incorporate dendritic processing. The model
prescribes diffusivity along the dendritic coordinate, but not along the cortex; in
addition, the nonlinear coupling is nonlocal on the cortex, but essentially local on
the dendrites. This structure allows the formulation of a compact numerical scheme,
and provides efficiency savings both in terms of operation counts, and in terms of
the space required by the algorithm. Firstly, a small diffusivity differentiation
matrix is decomposed at the beginning of the computation, and then used repeatedly
to solve a set of linear problems in the cortical direction. This aspect of the
computation is appealing, especially for high-dimensional computations where a 2D
cortex is coupled to the 1D dendritic coordinate, as the decomposition is performed
once, and involves only a 1D differentiation matrix. Secondly, the largest
computational effort of the scheme, which is in the evaluation of the nonlinear term, can be
reduced considerably using DFTs. We
have also provided a basic numerical analysis of the scheme, under the assumption that a
solution to the infinite-dimensional problem exists. The existence of this solution
remains an open problem, which will be addressed in future publications.
The numerical implementation presented here does not exploit the fact that the
synaptic kernel is localised via the function $\delta_\varepsilon$. If one models the kernel
using a compactly supported function, for instance
\begin{equation}\label{eq:compactDelta}
\delta_\varepsilon(\xi) = \kappa
\exp\bigg(-\frac{\xi^2}{\varepsilon^2}\bigg) 1_{(-\varepsilon,\varepsilon)}(\xi),
\end{equation}
which is supported in a small interval of $O(\varepsilon)$ length, its
evaluation at the grid points is nonzero only on a small index set, namely
\[
\delta_\varepsilon(\xi_i-\xi_0) =
\begin{cases}
\alpha_i & \text{if $i \in \mathbb{I}$,} \\
0 & \text{otherwise,}
\end{cases}
\qquad
\delta_\varepsilon(\xi_{i'}) =
\begin{cases}
\alpha'_{i'} & \text{if $i' \in \mathbb{I}'$,} \\
0 & \text{otherwise,}
\end{cases}
\]
where $\mathbb{I},\mathbb{I}' \subseteq \mathbb{N}_{n_\xi}$ are index sets with $O(\varepsilon/h_\xi)$
elements $|\mathbb{I}|, |\mathbb{I}'| \ll n_\xi$, respectively. This implies
\[
N_{ij}(V) = \alpha_i \sum_{j' \in \mathbb{N}_{n_x}} w_{j-j'} \rho_{j'}
\sum_{i' \in \mathbb{I}'} \alpha'_{i'} \sigma_{i'} S(V_{i'j'})
\qquad
(i,j) \in \mathbb{I} \times \mathbb{N}_{n_x},
\]
and we note that only $|\mathbb{I}|$ rows of $N$ are nonzero, and the inner sum is only over
$|\mathbb{I}'|$ elements. The formula above evaluates the nonlinear term $N$ in just
$(2|\mathbb{I}| + |\mathbb{I}'|)n_x + O(n_x \log n_x) = O(n_x + n_x\log n_x)$ operations. Numerical
experiments and convergence tests have been performed also for this formula, although
the results are not presented here, because a synaptic kernel with the choice
\eqref{eq:compactDelta} is no longer in $C^2(\Omega \times \Omega)$, hence
Lemma~\ref{lem:IMEXconvergence} does not hold, and we plan to provide a convergence
result for this case elsewhere. We provide, however, a Matlab implementation of this
code in Appendix~\ref{sec:matlab}.
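A condensed sketch of this evaluation (not the Appendix code, with hypothetical
index-set variables and the names of the pseudospectral evaluation above) could read
\begin{verbatim}
% Nonlinear term with a compactly supported delta_eps (sketch)
I  = find(abs(xi - xi0) < eps_supp);   % rows where alpha  is nonzero
Ip = find(abs(xi)       < eps_supp);   % rows where alpha' is nonzero
z  = fft( (alphap(Ip).*sigma(Ip)).' * S(V(Ip,:)) );
N  = zeros(n_xi, n_x);
N(I,:) = h_x * alpha(I) * real(ifft( w_hat .* z ));
\end{verbatim}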
Possible extensions of the model include curved geometries \cite{Visser:2017hy,Bojak2010}, which should
benefit from a similar strategy used here for the dendritic coordinate, as well as
multiple population models. We expect that the latter will induce different coherent
structures to the ones reported here. The method outlined in this paper is valid also
in the context of numerical bifurcation analysis, which can be employed to study the
bifurcation structure of steady states and travelling waves.
The inclusion of synaptic processing into the present model is straightforward, by
coupling \eqref{eq:systemNum} to an equation of type $Q\Psi =K$ where $Q=
(1+\alpha^{-1} \partial_t)^2$ is a temporal differential operator. The resulting
model would not involve any additional spatial differential or integral
operator, therefore the proposed scheme can be applied by simply augmenting the
discretised state variables.
It is also important to address the role of axonal delays in the generation of large
scale brain rhythms. A recent paper \cite{Ross2019} has explored how this might be
done in a purely PDE setting, generalising the Nunez brain-wave equation to include
dendrites. A natural extension of the work in this paper is to consider a more
general numerical treatment of a model with both dendritic processing and
space-dependent axonal delays in an integro-differential framework.
\section*{Acknowledgments} P.M. Lima acknowledges support from Funda\c c\~ao para a
Ci\^encia e a Tecnologia (the Portuguese Foundation for Science and Technology)
through the grants POCI-01-0145-FEDER-031393 and UIDB/04621/2020.
\newpage
\chapter{An overview of arithmetic motivic integration}{Julia Gordon}
\maketitle
\section{Introduction}
The aim of these notes is to provide an elementary
introduction to some aspects of the theory of
arithmetic motivic integration, as well as a brief guide to the extensive
literature on the subject.
The idea of motivic integration was introduced by M. Kontsevich in 1995.
It was quickly developed by J. Denef and F. Loeser in a series of papers
\cite{DL}, \cite{DL.McKay}, \cite{DL.Igusa}, and {by} others.
This theory, whose applications are mostly in algebraic geometry over
algebraically closed fields, now is often referred to as
``geometric motivic integration'',
to distinguish it from the so-called arithmetic motivic integration that
specifies to integration over $p$-adic fields.
The theory of arithmetic motivic integration first appeared in the 1999 paper by J. Denef
and F. Loeser \cite{DL.arithm}.
The articles \cite{Tom.intro} and \cite{DL.congr} together provide an
excellent exposition of this work.
In 2004, R. Cluckers and F. Loeser developed a different and very effective
approach to motivic integration (both geometric and arithmetic) \cite{CL}.
Even though there is an expository version
\cite{CL.expo}, this theory does not yet seem to be widely known.
This note is intended in part to be a companion with examples
to \cite{CL}. The aim is not just to describe
what motivic integration achieves, but to give some clues as to how it works.
We have stayed very close to the work of Cluckers and Loeser in the
main part of this exposition. In fact, much of these notes is a direct
quotation, most frequently from
the articles \cite{CL}, \cite{CL.expo}, and also
\cite{DL.congr}, and \cite{DL.arithm}.
Even though we try to give precise references all the time, some quotes from these sources might not always be acknowledged since they are so ubiquitous.
Some ideas, especially in the appendices, are clearly borrowed from
\cite{Tom.intro}.
The secondary goal was to collect references to many sources on motivic integration, and to provide some information on the relationship and logical interconnections between them. This is done in Appendix 1 (Section~\ref{geo}).
Our ultimate hope is that the reader would be able to start using motivic integration instead of $p$-adic integration, if there is any advantage in doing
integration independently of $p$ at the cost of losing a finite number of primes.
{\bf Acknowledgment.}
The first author thanks T.C. Hales for introducing her to the subject;
Jonathan Korman -- for many hours of discussions, and
Raf Cluckers -- for explaining his work on several occasions.
We have learned a lot of what appears in these notes at the
joint University of Toronto-McMaster University seminar on motivic
integration in 2004-2005, and thank Elliot Lawes, Jonathan Korman and
Alfred Dolich for their lectures.
{The contributions of
the second author are limited to sections 1-5.}
Finally, the first author thanks the organizers and participants
of the mini-courses on motivic integration at the University of Utah and at
the Fields Institute Workshop at the University of Ottawa, where most of
this material was presented, and the editors of this volume for multiple
suggestions and corrections.
\section{$p$-adic integration}
Arithmetic motivic integration re-interprets the classical
measure on $p$-adic fields, and $p$-adic manifolds, in a geometric way.
The main benefit of such an interpretation is that it allows one to isolate the dependence on $p$, so that
one can perform integration in a field-independent way, and then ``plug in'' $p$ at the very end.
Even though this is not the only achievement of the theory, it will be our main focus in these notes.
Hence, we begin with a brief review of the properties of {the} field of
$p$-adic numbers, and integration on $p$-adic manifolds.
\subsection{The $p$-adic numbers}
Let $p$ be a fixed prime.
Throughout these notes our main example {of a local field} will be the field $\Qgordon_p$ of $p$-adic numbers, which is the completion of $\Qgordon$ with respect to the $p$-adic metric.
\subsubsection{Analytic definition of the field $\Qgordon_p$}
Every non-zero rational number $x\in\Qgordon$ can be written in the form
$x=\frac{a}{b}p^n$, where $n\in\ring{Z}$, and $a,b$ are integers relatively
prime to $p$. The power $n$ is called the {\bf valuation} of $x$ and
denoted ${\text{ord}}(x)$. Using the valuation map, we can define a
norm on $\Qgordon$: $|x|_p=p^{-{\text{ord}}(x)}$ if $x$ is non-zero and
{$|0|_p = 0$.} This norm induces a metric on $\Qgordon$, which satisfies a stronger triangle inequality than the standard metric:
$$|x+y|_p\le \max\{|x|_p, |y|_p\}.$$
This property of the metric is referred to as the ultrametric property.
The set $\Qgordon_p$, as a metric space, is the completion of $\Qgordon$ with respect to this metric. The operations of addition and multiplication extend by continuity
from $\Qgordon$ to $\Qgordon_p$ and make it a field.
The set $\{x\in \Qgordon_p\mid {\text{ord}}(x)\ge 0\}$ is denoted $\ring{Z}_p$ and called the ring of $p$-adic integers.
\subsubsection{Algebraic definition of the field $\Qgordon_p$}
There is a way to define $\Qgordon_p$ without invoking analysis.
Consider the rings $\ring{Z}/p^n\ring{Z}$. They form a projective system with natural maps
$$
\begin{aligned}
\ring{Z}/p^{n+1}\ring{Z}&\to\ring{Z}/p^n\ring{Z} \\
m&\mapsto m \pmod{p^n}.
\end{aligned}
$$
The projective limit is called $\ring{Z}_p$, the ring of $p$-adic integers.
The field $\Qgordon_p$ is then defined to be its field of fractions.
\subsubsection{Basic facts about $\Qgordon_p$}
\begin{itemize}
\item The two definitions of $\Qgordon_p$ agree, and $\Qgordon_p$ is a field extension of
$\Qgordon$.
\item
Topology on $\Qgordon_p$: if we use the analytic definition, then $\Qgordon_p$ comes equipped with a metric topology. It follows from the strong triangle inequality that $\Qgordon_p$ is totally disconnected in this topology.
It is easy to prove that the sets $p^n\ring{Z}_p$, as $n$ ranges over $\ring{Z}$, form a basis of neighbourhoods of $0$.
If one uses the algebraic definition of $\Qgordon_p$, then
the topology on $\Qgordon_p$ is \emph{defined} by declaring that these sets form a basis of neighbourhoods of $0$, and
the basis of neighbourhoods at any other point is obtained by translating them.
\item The set $\ring{Z}_p\subset \Qgordon_p$ is open and compact in this topology.
It follows that each $p^n\ring{Z}_p$ is also a compact set,
{ which, in turn, implies that} $\Qgordon_p$ is
locally compact.
Note that $\ring{Z}_p$ (in the analytic definition) has the description
$\ring{Z}_p=\{x\in \Qgordon_p\mid |x|_p\le 1\}$, so it is the {closed}
unit ball in our metric
space (somewhat counter-intuitively).
\item As a set, $\ring{Z}_p$ is in bijection with the set
$$\left\{\sum_{i=0}^{\infty}a_i p^i\mid a_i\in\{0,\dots, p-1\}\right\}.$$
Note, however, that addition in $\ring{Z}_p$ \emph{does not} agree with
coefficient-wise addition $\mod p$ of the power series
(because ``$p$ has to carry over'').\footnote{Passing to the
fields of fractions, we see that the field $\ring{F}_p((t))$ of formal Laurent series with
coefficients in $\ring{F}_p$, and $\Qgordon_p$ are naturally in bijection, but
not isomorphic;
we will see that these fields have a lot in common nevertheless.}
\end{itemize}
In these notes, we work with discretely valued fields, {\it i.e.},
fields equipped with a valuation map from the non-zero elements of the field to a group $\Gamma$ with a discrete topology; this valuation will always be denoted by ${\text{ord}}$.
From now on, we will always assume that $\Gamma=\ring{Z}$.
\begin{theorem}\label{thm:classif}
Any complete discretely valued field that is locally compact in the topology induced by the valuation is isomorphic either to a finite extension of $\Qgordon_p$ or to a field $\ring{F}_q((t))$ of formal Laurent series over a finite
field.
\end{theorem}
We refer to fields of this kind by the term ``local fields''; when we want to distinguish between finite extensions of $\Qgordon_p$ and the function fields $\ring{F}_q((t))$, we refer to them as ``characteristic zero fields'', and
``equal characteristic fields'', respectively.
\begin{remark} Note that if a field $k((t))$ of formal
Laurent series over a field $k$ is locally compact, then $k$ is finite.
\end{remark}
The above theorem and a discussion of related topics can be found,
for example, in \cite[Appendix to Chapter 2]{Robert}.
\subsection{Hensel's Lemma}
{The theory of integration on local fields that is the focus of these notes} would have been impossible without the property of the non-archimedean local fields known as Hensel's Lemma.
The next example is classical; we include it as a reminder.
\begin{example} $\ring{Z}_3$ does not contain $\sqrt{2}$, but $\ring{Z}_7$ does. Indeed,
let us try to solve the equation $x^2=2$. If we write
$x=\sum_{i=0}^{\infty} a_i 3^i$, {then} $x^2=a_0^2+3\cdot 2a_0a_1 +3^2(a_1^2+2a_0a_2)+\dots$, {hence} $x^2\pmod{3}=a_0^2 \pmod{3}$. Since $a_0^2$ cannot
be congruent to
$2 \pmod{3}$, there is no solution.
However, if we play the same game $\mod 7$, we {have solutions, for example} $a_0=3$. Next we need to find $a_1$ such that $(3+7a_1)^2\equiv 2 \pmod{49}$ (we find $a_1=1$), and so on.
{Clearly for every step $i\ge 1$ we can find a unique solution for $a_i$,} and this way we get a power series which converges
(in $\Qgordon_7$) to a solution of the equation $x^2=2$. Since $\Qgordon_7$ is complete, it must
contain the sum of the series, and this way $\sqrt{2}$ is in $\Qgordon_7$. Since the series has no negative powers, it is {in $\ring{Z}_7$, but this actually follows from $|\sqrt{2}|_7 = \sqrt{|2|_7}=\sqrt{1}=1$.}
\end{example}
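To make the example concrete, carrying the lifting two more steps gives the
digits $a_1=1$ and $a_2=2$: indeed,
$$(3 + 1\cdot 7 + 2\cdot 7^2)^2 = 108^2 = 11664 \equiv 2 \pmod{7^3},$$
so that $\sqrt{2}=3+1\cdot 7+2\cdot 7^2+\dots$ in $\ring{Z}_7$.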
\begin{theorem}(Hensel's Lemma)
Let $K$ be a non-archimedean local field, and let $f\in K[x]$ be a monic polynomial such that all its coefficients have non-negative valuation. Then if $x\in K$ has the property that ${\text{ord}}(f(x))>0$ and ${\text{ord}}(f'(x))=0$, then there exists
$y\in K$ such that $f(y)=0$ and ${\text{ord}}(y-x)>0$.
\end{theorem}
The root $y$ is constructed by using Newton's approximations (and taking $x$ as the first one). The completeness of the field is used to establish
convergence of the sequence of approximations to a root $y$.
The meaning of Hensel's Lemma ({\it{e.g.}}\ for $\Qgordon_p$) is that every solution
of $f(x)=0 \pmod{p}$ can be lifted to an actual root of $f(x)$ in $\ring{Z}_p$
(in particular, it justifies our example of $\sqrt{2}\in \ring{Z}_7$).
Fields satisfying Hensel's Lemma are called Henselian.
The argument sketched above shows that all complete discretely valued
fields are Henselian, in particular, the fields of formal Laurent series
$K((t))$ where $K$ is an arbitrary field, are Henselian.
See {\it{e.g.}} \cite{ribenboim} for a detailed discussion of the Henselian property.
\subsection{Haar measure}\label{sub:haar}
{If $K$ is a locally compact non-archimedean field then the additive group
of $K$ (as a locally compact abelian group) has a translation-invariant
measure, unique up to a constant multiple, called the Haar measure.}
The
$\sigma$-algebra of measurable sets is the usual Borel $\sigma$-algebra (generated by open sets in the topology on $K$ induced by the absolute value on $K$). This measure will be denoted by $\mu$.
It is easy to check that $\mu$ satisfies a natural
``Jacobian rule'': if $a\in K$, and $S\subset K$ is a measurable set, then
$\mu(aS)=|a|\mu(S)$.
\begin{example} Though it is very simple, this example is the source of intuition behind much of our general theory: if we normalize the Haar measure on $\Qgordon_p$ so that $\mu(\ring{Z}_p)=1$, then
$\mu(p^n\ring{Z}_p)=p^{-n}$. Indeed, for $n\ge 0$, $\ring{Z}_p$ is the disjoint union of $p^n$ cosets of $p^n\ring{Z}_p$, so translation invariance forces this value.
\end{example}
Using the product measure construction, we can get a translation-invariant
measure on the affine space $\ring{A}^n(K)$ from the Haar measure on $K$. It is also unique up to a constant multiple.
Since it plays an important role in the construction of motivic measure, we recall the Jacobian transformation rule for the $p$-adic measure, which is analogous
to the transformation rule for Lebesgue measure on $\ring{R}^n$.
\begin{theorem} Let $A$ be a measurable subset of $\ring{A}^n(\Qgordon_p)$, let $\phi$
be a
$C^1$-map $\phi:\ring{A}^n\to \ring{A}^n$ injective and with nonzero Jacobian on $A$,
and let
$f:\ring{A}^n(\Qgordon_p)\to \ring{R}$ be an integrable function.
Then
$$
\int_{\phi(A)} f\,{\text {d}} \mu=\int_{A}|{\text{Jac}}(\phi)|_p\,(f\circ\phi)\,{\text {d}} \mu.
$$
\end{theorem}
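As a simple check, taking $\phi(x)=px$ on $\ring{A}^1$ and $f\equiv 1$, the
theorem recovers the scaling property of the Haar measure mentioned above:
$$\mu(p\ring{Z}_p)=\int_{\phi(\ring{Z}_p)}{\text {d}} \mu=\int_{\ring{Z}_p}|p|_p\,{\text {d}} \mu=p^{-1}.$$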
\subsection{Canonical (Serre-Oesterl\'e) measure on $p$-adic
manifolds}\label{sub:Weil}
Much of this section is quoted from \cite{batyrev}; see also \cite{Weil}.
Let $K$ be a local Henselian field with valuation ${\text{ord}}$, uniformizer
$\varpi$, the ring of integers ${\mathcal O}_K=\{x\in K\mid {\text{ord}}(x)\ge 0\}$, and the residue field $\ring{F}_q$.
Let ${\mathcal X}$ be a smooth scheme over $\mathnormal{\mathrm{Spec\,}}{\mathcal O}_K$ of
dimension $d$.\footnote{Alternatively, one can think of ${\mathcal X}$ as
a smooth variety over $K$, such that its reduction $\mod \varpi$ is a smooth
variety over $\ring{F}} \newcommand{\hi}{\ring{H}_q$. The smooth subschemes $U_i$ below then should be
replaced with Zariski open subsets.}
Assume for now that there is a nowhere vanishing global differential form
$\omega$ of top degree on
${\mathcal X}$.
Since ${\mathcal X}$ is smooth, ${\mathcal X}({\mathcal O}_K)$ is a $p$-adic manifold, and we can use {the} differential form $\omega$
to define a measure on ${\mathcal X}({\mathcal O}_K)$, in the following way.
Let $x\in {\mathcal X}({\mathcal O}_K)$ be a point, and $t_1,\dots,
t_d$ be the local coordinates around this point.
They define a homeomorphism $\theta$
(in {the} $p$-adic analytic topology) from an open neighbourhood
$U\subset {\mathcal X}({\mathcal O}_K)$ to an open set $\theta(U)\subset \ring{A}^d({\mathcal O})$.
We write
$$
\omega=\theta^{\ast}(g(t_1,\dots, t_d){\text {d}} t_1\wedge \dots \wedge{\text {d}} t_d),
$$
where $g(t_1,\dots, t_d)$ is a $p$-adic analytic function on $\theta(U)$
with no zeroes.
Then we can define a measure on $U$ by ${\text {d}}\mu_{\omega}=|g(t)|{\text {d}} t$, where
$|{\text {d}} t|$ stands for the measure on $\ring{A}^d$ associated with the volume form
${\text {d}} t_1\wedge\dots \wedge {\text {d}} t_d$ ({\it i.e.}, the product measure defined in the previous subsection), normalized so that
$$\int_{\ring{A}^d({\mathcal O}_K)}|{\text {d}} t|=1.$$
Two different nonvanishing differential forms on ${\mathcal X}$ have to differ by a $p$-adic unit; therefore, they yield the same measure on
${\mathcal X}({\mathcal O}_K)$.
More generally, even if there is no non-vanishing form, we can cover
${\mathcal X}$ with finitely many smooth affine open subschemes $U_i$
such that on each one of
them there is a nonvanishing top degree form $\omega_i$.
The form $\omega_i$ allows us to transport the measure from $\ring{A}^d({\mathcal O}_K)$
to $U_i({\mathcal O}_K)\subset {\mathcal X}({\mathcal O}_K)$. Note that each of the forms $\omega_i$
is defined uniquely up to an element
$s_i\in\Gamma(U_i,{\mathcal O}_{\mathcal X}^{\ast})$.
Therefore, the measure we
define
on $U_i({\mathcal O}_K)$ does not depend on the choice of $\omega_i$, since
by definition, $|s_i(x)|=1$ for
$s_i\in\Gamma(U_i,{\mathcal O}_{\mathcal X}^{\ast})$, $x\in U_i$.
These measures on $U_i({\mathcal O}_K)$ glue together: to check this, we need
to check that our measures on $U_i({\mathcal O}_K)$ and $U_j({\mathcal O}_K)$ agree on
the overlap $U_i({\mathcal O}_K)\cap U_j({\mathcal O}_K)=(U_i\cap U_j)({\mathcal O}_K)$. This
follows from the transformation rule for a differential form under a change of local coordinates and the fact that
the measure on $\ring{A}^d({\mathcal O}_K)$ satisfies the Jacobian transformation
rule.
\begin{definition} The measure defined as above on
${\mathcal X}({\mathcal O})$ is
called {\bf the canonical $p$-adic measure}.
\end{definition}
The above approach (due to A. Weil)
to the definition of the measure has, through the
work of Batyrev, inspired the initial approach to motivic integration.
{Let us also sketch Serre's definition of the canonical measure on
subvarieties of the affine space, \cite{serregordon}, which was generalized
by Oesterl\'e to $p$-adic analytic sets \cite{Oesterle}, and by W. Veys -- to subanalytic sets,
\cite{Veys:measure},}
and which is now used as the classical definition of the $p$-adic measure.
Let $Y$ be a $d$-dimensional smooth subvariety of $\ring{A}^n$.
Instead of using local coordinates, one can use the coordinates of the
ambient affine space $x_1,\dots, x_n$.
For each subset of indices $I=\{i_1,\dots, i_d\}$ with $i_1<i_2<\dots<i_d$,
let $\omega_{Y,I}$ be the differential form on $Y$ induced by
$dx_{i_1}\wedge\dots\wedge dx_{i_d}$.
Let $\mu_{Y,I}$ be the measure on $Y$ associated with the form $\omega_{Y,I}$.
The canonical measure on $Y$ is defined by
$$\mu_Y=\sup_{I}\mu_{Y,I},
$$
where $I$ runs over all $d$-element subsets of $\{1,\dots, n\}$.
\subsection{Weil's theorem}
\begin{theorem}(A. Weil, \cite{Weil}) Let $K$ be a locally compact
non-archimedean field with the ring of integers ${\mathcal O}_K$ and residue
field $\F_q$, and let ${\mathcal X}$ be a smooth scheme over $\mathnormal{\mathrm{Spec\,}}
{\mathcal O}_K$ of dimension $d$. Then
$$\int_{{\mathcal X}({\mathcal O}_K)}{\text {d}} \mu=\frac{|{\mathcal X}(\F_q)|}{q^d}.
$$
\end{theorem}
\noindent{\bf Heuristic ``proof''.}
Consider the projection from ${\mathcal X}({\mathcal O})$ to
${\mathcal X}( \F_q)$ that is defined by
applying the reduction $\mod \varpi$ map to the local coordinates.
The smoothness of ${\mathcal X}$ implies that the fibres
of this map all look exactly like the fibres of this projection for the affine
space.
The definition of the measure on ${\mathcal X}({\mathcal O})$ implies that
the measure on ${\mathcal X}({\mathcal O})$ is transported from the measure on
the affine space, on each coordinate chart.
Finally, in the case of the affine space of dimension $d$,
the volume of each fibre of the projection is $q^{-d}$.
Hence, the total volume of ${\mathcal X}({\mathcal O})$ equals the cardinality of the
image of the projection times the volume of the fibre, which is,
$|{\mathcal X}(\F_q)|q^{-d}$.
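To illustrate the theorem with the simplest possible cases (included here only as a consistency check), take ${\mathcal X}=\ring{A}^1$: then $\int_{\ring{A}^1({\mathcal O}_K)}{\text {d}}\mu=\mu({\mathcal O}_K)=1=q/q$. Similarly, for ${\mathcal X}={\mathbb G}_m$ we get
$$
\int_{{\mathbb G}_m({\mathcal O}_K)}{\text {d}}\mu=\mu({\mathcal O}_K^{\ast})=1-\frac1q=\frac{|{\mathbb G}_m(\F_q)|}{q},
$$
in agreement with the heuristic argument above.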
Motivic integration initially started (in a lecture by M. Kontsevich)
as a generalization of these ideas behind $p$-adic integration.\footnote{Here we have (rather carelessly and briefly) considered only
the $p$-adic manifolds that arise by taking $\ring{Z}} \newcommand{\R}{\ring{R}_p$-points of smooth varieties.
It should at least be mentioned that there is a theory of
motivic integration on rigid analytic spaces \cite{Sebag}.}
\subsection{Motivation}
The goal of arithmetic motivic integration is to assign a geometric object
to a
$p$-adic measurable set, in such a way that the value of the measure can
be recovered by counting the number of points of this geometric object over
the residue field, in a way that is similar to Weil's Theorem,
where the volume of ${\mathcal X}({\mathcal O}_K)$ is recovered by counting
points on the closed fibre of ${\mathcal X}$, for a smooth projective
scheme ${\mathcal X}$.
There are two approaches to the development of arithmetic motivic integration.
The first one, that appears in \cite{DL.arithm}, uses arc spaces and
truncations.
The missing link between the residue fields of characteristic zero and residue fields of finite characteristic is provided by considering pseudofinite fields. There is a beautiful exposition of this
work in \cite{Tom.intro}. We sketch the main steps in Appendix 1, for
completeness of this overview.
The other approach uses cell decomposition instead of truncation.
This theory was developed in \cite{CL}, and it gives more than just a
measure -- it is a theory of integration complete with a large class of
integrable functions.
The main body of these notes is devoted to this theory.
In order to describe it, we need to introduce some techniques from logic and
some abstract formalism. This is done in the next section. For a while there will be no $p$-adic manifolds, and no measure. These familiar concepts will
reappear in Section~\ref{sec:back}.
\section{Constructible motivic Functions}\label{sec:3lang}
To start with, we need to develop a way to talk about sets without mentioning their elements, similar to the way a variety is defined independently of its set of points over any given field.
This is done by means of specifying a language of logic, and describing sets by
formulas in that language.
Given a formal language $L$, we say that an object $M$ is a
{\bf structure} for this language if formulas in $L$ can be interpreted
as statements about the elements of $M$. More precisely,
in this context one has an interpretation function from the set of
symbols of the alphabet of $L$ to $M$, a map from the symbols for
operations in $L$ to $M$-valued functions on the direct product of the corresponding number of copies of $M$, and a map that takes symbols for relations
in $L$ (such as ``='', for example) to subsets of $M^r$ with the
corresponding $r$ ({\it{e.g.}}, $r=2$ for binary relations such as ``='').
We refer for example to \cite{manin.logic} for a discussion of this and related topics.
Let $L$ be a language, and let $M$ be a structure for $L$.
A set $A\subset M^n$ is called {\bf $L$-definable} if there exists a formula in the language $L$ such that $A$ is the set of points in $M^n$ satisfying this
formula.
A function is called {\bf $L$-definable} if its graph is a definable set.
\subsection{The language of rings}\label{sub:ldp}
The first-order language of rings is the language {built from the
following set of symbols}:
\begin{itemize}
\item countably many symbols for variables $x_1, \dots, x_n,\dots$;
\item '$0$', '$1$';
\item '$\times$', '$+$', '$=$', and parentheses '$($', '$)$';
\item the existential quantifier '$\exists$';
\item logical operations: conjunction '$\wedge$', negation '$\neg$', disjunction '$\vee$'.
\end{itemize}
Any syntactically correct formula built from these symbols is a formula in
the first order language of rings.
Any ring is a structure for this language.
Note that quantifier-free
formulas in the language of rings define constructible sets
(recall that constructible sets, by definition, are the sets that belong to the smallest family ${\mathcal F}$ containing Zariski open sets and such that
a finite intersection of elements of ${\mathcal F}$ is in ${\mathcal F}$, and a complement of an element of ${\mathcal F}$ is in ${\mathcal F}$).
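For instance, the quantifier-free formula $(x_1x_2=1)\wedge\neg(x_1=1)$ defines in $\ring{A}^2$ the constructible set $\{(x_1,x_2)\mid x_1x_2=1,\ x_1\neq 1\}$, a closed curve with one point removed. By contrast, a formula with a quantifier may define a set that is not constructible: over $\R$, the formula $\exists y\,(y^2=x_1)$ defines the set of non-negative reals, which is neither finite nor cofinite in $\ring{A}^1(\R)$.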
\subsection{Presburger's language}
Presburger's language is a language with variables running over $\ring{Z}} \newcommand{\R}{\ring{R}$,
and symbols '$+$', '$\le$', '$0$', '$1$', and for each $d=2,3,4,\dots$, a symbol '$\equiv_d$' to denote the binary
relation $x\equiv y \pmod{d}$, together with all the symbols for quantifiers,
logical operations and parentheses, as above.
Note the absence of the symbol for multiplication.
Since multiplication is not allowed, definable functions have to be linear
combinations of piecewise-linear and periodic functions (where the period is a
vector in $\ring{Z}} \newcommand{\R}{\ring{R}^n$, and $n$ is the number of variables).
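As an illustration of what is and is not expressible: the set of even integers is defined by the formula $x\equiv_2 0$ (equivalently, by $\exists y\,(x=y+y)$), and the function $x\mapsto\lfloor x/2\rfloor$ is definable, since its graph is cut out by $(x=y+y)\vee(x=y+y+1)$. On the other hand, the graph of multiplication (and hence of the squaring function $x\mapsto x\cdot x$) is not definable in Presburger's language, reflecting the absence of the multiplication symbol.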
\subsection{The language of Denef-Pas}
The language of Denef-Pas is designed for valued fields. It is a \emph{three-sorted language}, meaning that
it has three sorts of variables. Variables of the first sort run over the valued field, variables of the second
sort run over the value group (for simplicity, we assume that the value group is $\ring{Z}} \newcommand{\R}{\ring{R}$), and variables of the
third sort run over the residue field.
The symbols for this language consist of the symbols of the language of rings for the residue field sort,
Presburger's language for the $\ring{Z}} \newcommand{\R}{\ring{R}$-sort, and the language of rings for the valued field sort, together
with two additional symbols:
${\text{ord}}(x)$ to denote a function from the valued field sort to the $\ring{Z}} \newcommand{\R}{\ring{R}$-sort, and $\overline{\text{ac}}(x)$ to denote a function from
the valued field sort to the residue field sort.
These functions are called the {\bf valuation map}, and the
{\bf angular component map}, respectively.
We also need to add the symbol {`}$\infty$' to the value sort, to denote the valuation of {`}$0$'
(so that {`}${\text{ord}}(0)=\infty$' has the {`}true' value in every structure).
A valued field $K$ \emph{together with the choice of the uniformizer of the valuation on $K$} is a
structure for Denef-Pas language.
In order to match the formulas in Pas's language with their interpretations in
its structure $K$, we need to give a meaning to the symbols {`}${\text{ord}}$' and
{`}$\overline{\text{ac}}$' in the language.
The function ${\text{ord}}(x)$ stands for the valuation of $x$.
In order to provide the interpretation for the symbol {`}$\overline{\text{ac}}(x)$',
we have to
fix a uniformizing parameter $\varpi$.
The valuation on $K$ is normalized
so that
${\text{ord}}(\varpi)=1$.
If $x\in{\mathcal O}_{K}^{\ast}$ is a unit, there is a natural definition
of $\overline{\text{ac}}(x)$ -- it is the reduction of $x$ modulo the ideal $(\varpi)$.
Define, for $x\neq 0$ in $K$, $\overline{\text{ac}}(x)=\overline{\text{ac}}(\varpi^{-{\text{ord}}(x)}x)$,
and $\overline{\text{ac}}(0)=0$.
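To have a concrete case in mind: for $K=\Qgordon_p$ with $\varpi=p$, if $x=\sum_{i\ge m}a_ip^i$ with digits $a_i\in\{0,\dots,p-1\}$ and $a_m\neq 0$, then ${\text{ord}}(x)=m$ and $\overline{\text{ac}}(x)=a_m \bmod p$; for example, $x=p^{-2}(3+5p+\dots)$ (with $p>5$, say) has ${\text{ord}}(x)=-2$ and $\overline{\text{ac}}(x)=3$.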
For convenience, a symbol for every rational number is added to the valued field sort, so that we could have formulas with coefficients in $\Qgordon$.
Sometimes, when the category of fields under consideration is restricted to
all fields containing a fixed ground field $k$, one can add a symbol for each
element of $k((t))$ to the valued field sort. This enlarges the class of definable sets.
In order to make distinctions between various settings, we will explicitly
talk of ``formulas with coefficients in $k((t))$ (or in $k[[t]]$)'' in such
cases. Note that in any case, for an arbitrary field $K$ containing $k$, coefficients from $K$, or $K((t))$, are not allowed (otherwise this would have been meaningless -- we want to
consider the sets of points satisfying a given formula for the varying
fields $K$). Given a local field $K$ containing $k$ with a uniformizer
$\varpi$, one can make a map from $k((t))$ to $K$ where $t\mapsto\varpi$
(this will be discussed in detail in Section~\ref{sec:back}). In this sense,
$t$ plays the role of the uniformizer of the valuation, to some extent.
We will talk in detail about interpreting formulas in different structures
in Section~\ref{sec:back}.
\subsection{Definable subassignments}
Here we introduce the terminology that conveniently puts
the set of points defined by an
interpretation of a logical formula over a given field on the same
footing with, say, the set of points of an affine variety. To do that, we use the
language of functors.
We fix a ground field $k$ of characteristic $0$.
For most applications, one can think that $k=\Qgordon$.
Denote by $\text{Field}_k$ the category of fields containing $k$.
Any variety $X$ over $k$ defines a functor -- its functor of points --
from $\text{Field}_k$ to
the category of sets, by sending every field $K$ containing $k$ to $X(K)$.
This functor will be denoted by $h_X$.
\begin{definition} We will denote by $h[m,n,r]$
(or $h_{\ring{A}^m_{k((t))}\times \ring{A}^n_k\times \ring{Z}^r}$)
\footnote{
Even though the objects whose volumes we would like to compute correspond to
subsets of affine spaces over the \emph{valued field}, it is
very useful to
have a formalism that allows us to deal with valued-field, residue-field, and
integer-valued variables at the same time.
One of the advantages of doing that
is being able to look at definable families with integer-valued or
residue-field valued parameters. This is the reason that this functor
plays a fundamental role.}
the functor from the category
$\text{Field}_k$ to the category of sets defined by
$$
h_{\ring{A}^m_{k((t))}\times \ring{A}^n_k\times \ring{Z}^r}(K)=K((t))^m\times K^n\times \ring{Z}^r.
$$
\end{definition}
For example, $h[1,0,0]$ is the functor of points of $\ring{A}^1_{k((t))}$, and
$h[0,0,0]$ is a functor that assigns to each field $K$
a one-point set. We will usually write $h_{\mathnormal{\mathrm{Spec\,}} k}$ for $h[0,0,0]$.
\begin{definition}
Let $F:{\mathcal C}\to \underline{\text{Sets}}$ be a functor from a
category ${\mathcal C}$ to the category of sets.
A {\bf subassignment} $h$ of $F$ is a collection of
subsets $h(C)\subset F(C)$, one for each
object $C$ of ${\mathcal C}$.
\end{definition}
Note that a subassignment does not have to be a subfunctor
(that is, we are making no requirement that a morphism
between two objects $C_1$ and $C_2$ in ${\mathcal C}$ has to correspond to
a map between the corresponding sets $h(C_1)$ and $h(C_2)$).
The subassignments will replace formulas in the same way that
functors can replace varieties. When we talk about formulas, we will mean
logical formulas built using {the} Denef-Pas language (so in particular, {we use} the
language of rings for the residue field, and Presburger language for $\ring{Z}} \newcommand{\R}{\ring{R}$).
\begin{definition}
A subassignment $h$
of $h[m,n,r]$ is called {\bf definable} if there exists a formula $\phi$
in the language of Denef-Pas, with coefficients in $k((t))$ in the valued field sort and in $k$ in the residue field sort, with $m$ free
variables of the valued field sort, $n$ free variables of the residue field sort, and $r$ free variables of the value sort, such
that for every $K$ in $\text{Field}_k$, $h(K)$ is the set of all points in
$K((t))^m\times K^n\times \ring{Z}^r$ satisfying $\phi$.
\end{definition}
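For example, the subassignment of $h[1,0,0]$ defined by the formula ${\text{ord}}(x)\ge 0$ assigns to every field $K$ in $\text{Field}_k$ the valuation ring $K[[t]]\subset K((t))$; similarly, the subassignment of $h[0,1,0]$ defined by $\exists y\,(y^2=\eta)$ assigns to $K$ the set of squares in $K$.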
\begin{definition} A {\bf morphism of definable subassignments} $h_1$ and $h_2$
is { a definable subassignment $F$ such that $F(C)$ is the graph of a
function from $h_1(C)$ to $h_2(C)$ for each
object $C$.}
The {\bf category of definable subassignments} of some $h[m,n,r]$
will be denoted {\bf ${\text{Def}}_k$}.
\end{definition}
\subsubsection{Relative situation}
If $S$ is
an object in ${\text{Def}}_k$, one can consider the category of definable
subassignments equipped with a morphism to $S$, denoted by ${\text{Def}}_S$
(the morphisms being the morphisms over $S$).
More precisely, we could say that the objects
are morphisms $[Y\to S]$ with $Y\in{\text{Def}}_k$, and morphisms are
commutative triangles
\begin{equation*}
\xymatrix{
&W \ar[d] \ar[r] & Y \ar[ld] \\
& S& \quad.
}
\end{equation*}
We denote by $S[m,n,r]$ the subassignment
$$
S[m,n,r]:=S\times h_{\ring{A}^m_{k((t))}\times \ring{A}^n_k\times \ring{Z}^r}.$$
This is an object of ${\text{Def}}_S$, the morphism to $S$ being the projection onto the
first factor.
Finally, for $S$ an object in ${\text{Def}}_k$, there is the category
of {\bf R-definable subassignments over $S$}, denoted by
{\bf ${\text{RDef}}_S$} ($R$ stands for ``residue'').
The objects of ${\text{RDef}}_S$ are definable subassignments
of $S[0,n,0]$ for some integer $n\ge 0$ (with a morphism to $S$ coming from the projection onto the first
factor), and morphisms are morphisms over $S$.
Note that this abbreviation says that the objects in ${\text{RDef}}_S$ can have extra variables of the residue field sort, but \emph{no extra variables of the valued field sort nor the value group sort}, compared to $S$ itself.
\begin{example}
{\bf The category ${\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k}$.}
By definition, the category ${\text{RDef}}_{h_{\mathnormal{\mathrm{Spec\,}} k}}$ consists of definable subassignments with variables ranging only over the residue field (and therefore definable in the language of rings).
Note that if the formulas defining the subassignments in ${\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k}$
had been
quantifier-free, then they would essentially define
{constructible sets}
over $k$.
Depending on $k$, since quantifiers are allowed, this category may be richer,
but in many cases
there is a map from it to a category of geometric objects over the residue
field, as discussed in Section~\ref{sec:back}.
\end{example}
The category ${\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k}$ (and more generally, ${\text{RDef}}_S$ where $S$
is a definable subassignment) is going to play a very important role
in the theory.
In the next section, we will associate with each definable
subassignment its motivic volume that will be, essentially, an element of
the {Grothendieck} ring (defined in the next section) of the category
${\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k}$.
\subsubsection{Points on subassignments, and functions}
By definition, a point on a definable subassignment $Y\in {\text{Def}}_k$ is a
pair $(y_0, K)$ where $K\in \text{Field}_k$, and $y_0\in Y(K)$.
Given any definable morphism
$\alpha: S\to Z$, where both $S$ and $Z$ are definable {subassignments},
there is a corresponding function from the set of
points of $S$ to the set of points of $Z$. The function and the morphism define each other uniquely, so we can identify them.
In the special case $Z=h[0,0,1]$, the resulting function is integer-valued, so we will say that such a morphism
is an integer-valued function on the subassignment $S$.
\subsection{Grothendieck rings}\label{sub:gr.rings}
There are several Grothendieck rings used in various {constructions} of motivic measure. The first one is the Grothendieck ring of the category of varieties
over $k$, $K_0({\text{Var}}_k)$. Its elements are formal linear combinations
with coefficients in $\ring{Z}$ of isomorphism classes of varieties (with formal
addition) modulo the
natural relation $[X\setminus Y]+[Y]=[X]$, where $Y$ is a closed subvariety of $X$;
the product operation comes from the product in the
category ${\text{Var}}_k$.
Another Grothendieck ring that is sometimes used is $K_0({\text{Mot}}_k)$ -- the Grothendieck ring of the category of Chow motives over $k$.
(We will not talk about Chow motives here, see \cite{Scholl} for an
introduction).
This is the ring constructed in the same way, but from the category of Chow motives rather than varieties over $k$.
These rings have an element (corresponding to the class of the affine line)
that plays a special role in the theory of motivic integration. It is always
denoted by $\ring{L}$. The notation comes from Chow motives, where $\ring{L}$ stands
for the so-called Lefschetz motive $\ring{L}=[{\mathbb P}^1]-[pt]$ (see
\cite{Scholl}).
In $K_0({\text{Var}}_k)$, $\ring{L}$ stands for $[\ring{A}^1]$.
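For example, decomposing projective space into affine cells gives $[{\mathbb P}^1]=[\ring{A}^1]+[pt]=\ring{L}+1$ and, more generally, $[{\mathbb P}^n]=1+\ring{L}+\dots+\ring{L}^n$; similarly, $[{\mathbb G}_m]=[\ring{A}^1]-[pt]=\ring{L}-1$.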
It is a difficult theorem (Gillet and Soul\'e, \cite{gillet-soule},
and Guill\'en and Navarro Aznar)
that there exists a natural map from $K_0({\text{Var}}_k)$ to $K_0({\text{Mot}}_k)$.
\footnote{The meaning of ``natural'' here is the following. Chow motives are,
formally, equivalence classes of triples $(X, p, n)$, where $X$ is a variety,
$p$ is an idempotent correspondence on $X$ (one can think of it as a projector from $X$ to itself), and $n$ is an integer. Every \emph{smooth projective} variety $X$
naturally corresponds to the Chow motive $(X, \text{id}, 0)$. The content of the theorem is to extend this map to the elements of $K_0({\text{Var}}_k)$ that are not necessarily linear combinations of isomorphism classes of smooth projective varieties.}
Under this map, the class of the affine line corresponds to $\ring{L}$
(see \cite{Scholl}), thus justifying the notation.
The image of this map will be denoted by $K_0^{mot}({\text{Var}}_k)$, and it will play an important role in Section~\ref{sec:back}.
One can also make Grothendieck rings of the categories of subassignments that we have considered above. Note that one can define set-theoretic operations on subassignments in a natural way, {\it{e.g.}},
$(h_1\cup h_2)(K):=h_1(K)\cup h_2(K)$, {\it etc}.
Let $S$ be a definable subassignment.
One can make the ring $K_0({\text{RDef}}_S)$: its elements are formal linear
combinations of isomorphism classes of objects of ${\text{RDef}}_S$, modulo the relations $[(Y\cap X)\to S]+[(Y\cup X)\to S]=[Y\to S]+[X\to S]$, and
$[\emptyset\to S]=0$. With the natural operation of addition, $K_0({\text{RDef}}_S)$ is an abelian group; cartesian product gives it a structure of a ring.
\begin{remark}
Note that when making a Grothendieck ring, we first replace the objects
of a category by equivalence classes of objects. By changing the notion of equivalence (for example, making it more crude), one can define the rings where various important invariants take values. We shall see in Section~\ref{sub:pseudo} that in order to get a version of motivic integration that specializes to $p$-adic
integration, we need to replace equivalence by \emph{equivalence on pseudofinite fields}.
\end{remark}
\subsubsection{Dimension}
Before we can talk about measure theory for objects of ${\text{Def}}_k$, we need a dimension theory.
Recall that each subassignment has valued-field, residue-field, and value-group variables. The notion of dimension takes into account only the valued-field
variables (this {fits well} with the measure theory we are about to describe
since the measure on $K^n\times \ring{Z}} \newcommand{\R}{\ring{R}^r$ is
going to be {essentially} the counting measure, as {we will} see below).
First, note that each subvariety $Z$ of $\ring{A}^m_{k((t))}$ naturally gives a
subassignment $h_Z$ of $h[m,0,0]$ by $h_Z(K):=Z(K((t)))$.
For $S$ a subassignment of $h[m,0,0]$, we define {the} {\bf Zariski closure of $S$} to be
the intersection $W$ of all subvarieties $Z$ of $\ring{A}^m_{k((t))}$ such that $h_Z$ contains $S$. Then {the} dimension of $S$ is defined to be the dimension of $W$.
In general, if $S$ is a subassignment of $h[m,n,r]$, the dimension of $S$ is defined to be the dimension of the projection of $S$ onto the first component
$h[m,0,0]$.
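For example, the subassignment of $h[1,0,0]$ defined by ${\text{ord}}(x)\ge 0$ (whose points over $K$ form the valuation ring $K[[t]]\subset K((t))$) is not contained in $h_Z$ for any proper subvariety $Z$ of $\ring{A}^1_{k((t))}$, so its Zariski closure is the whole affine line, and its dimension is $1$.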
\begin{proposition}\cite[Prop.~3.4]{CL.expo}
Two isomorphic objects of ${\text{Def}}_k$ have the same dimension.
\end{proposition}
Note that {definable} subassignments are closely related to analytic manifolds.
See \cite[\S~3.2]{CL} for a detailed discussion.
\subsection{Constructible motivic Functions}\label{sub:constr.f}
\subsubsection{The ring of values}\label{subsub:ring_of_values}
Let $\ring{L}$ be a formal symbol (later it will be associated with the class of the affine line in an appropriate Grothendieck ring). In Section~\ref{sec:back},
it will be matched with $q$ -- the cardinality of the residue
field -- but for now it is just a formal symbol.
{Consider} the ring
$$
A=\ring{Z}\left[\ring{L}, \ring{L}^{-1}, \left(\frac{1}{1-\ring{L}^{-i}}\right)_{i>0}\right].
$$
For all real $q>1$, there is a homomorphism of rings
$\nu_q:A\to \R$ defined by $\ring{L}\mapsto q$.
Note that if $q$ is transcendental, then $\nu_q$ is injective.
This family of homomorphisms gives a partial ordering on $A$: for $a, b\in A$,
set $a\ge b$ if for every real $q> 1$ we have $\nu_q(a)\ge \nu_q(b)$.
Note that with this ordering, $\ring{L}^i$, $\ring{L}^i-\ring{L}^j$ with $i>j$, and
$\frac{1}{1-\ring{L}^{-i}}$ with $i>0$ are all positive, but for example, $\ring{L}-2$ is not positive.
\subsubsection{Constructible motivic functions}
In the $p$-adic setting, the smallest class of functions that one would definitely like to be able to integrate is built from two kinds of functions: characteristic functions of measurable sets, and functions
of the form $q^{\alpha}$, where $q$ is the cardinality of the residue field,
and $\alpha$ is a $\ring{Z}$-valued measurable function (such functions arise as absolute values of definable functions: {\it{e.g.}}, $|f(x)|_p=q^{-{\text{ord}} f(x)}$).
Keeping this in mind, let us define constructible motivic functions.
Let $S\in {\text{Def}}_k$ be a definable subassignment.
The ring of constructible motivic functions {on $S$} is built from
two basic kinds of functions.
The first kind are definable functions with values in $\ring{Z}$, and functions
of the form $\ring{L}^{\alpha}$, where $\alpha$ is a definable function on $S$ with values in $\ring{Z}$
(these functions can be thought of as functions with values in $A$). In particular,
this collection of functions includes characteristic functions of
definable subsets of $S$. Let us denote the ring of $A$-valued
functions on $S$
generated by functions of these two kinds, by ${\mathcal P}(S)$.
The second kind
of definable functions on $S$ do not look like functions at all,
at first glance.
Formally, they are the elements of the Grothendieck ring $K_0({\text{RDef}}_S)$, as defined in Section~\ref{sub:gr.rings}.
However, if we think {of specialization} to $p$-adic integration, we see that
once we have fixed a local field $K$ with a (finite) residue field $\F_q$,
an element $[Y\to S]$ of ${\text{RDef}}_S$ gives an integer-valued function on $S$ by
assigning to each point
$x\in S(K)$ the cardinality of the {fibre} of $Y$ over $x$.
Note that the {fibre} of $Y$ over $x$ is a subset of $\F_q^n$ for some $n$;
in particular, it is finite.
The reason these functions need to be included from the very beginning is that the
motivic integral will take values in a ring containing $K_0({\text{RDef}}_S)$, and we need to be able to integrate a function of two variables with respect to one of the variables, and get a function of the
remaining variable that is again integrable.
To put together the two kinds of functions described above,
note that characteristic functions of definable subsets of $S$ naturally
correspond to elements of ${\text{RDef}}_S$: ${\bf 1}_Y$ corresponds to
$[Y\to S]\in {\text{RDef}}_S$.
Let ${\mathcal P}^0(S)$ be the subring of ${\mathcal P}(S)$ generated
by the constant
function $\ring{L}_S-1_S$ (where $\ring{L}_S=[S\times \ring{A}_{k((t))}^1\to S]$, and
$1_S=[S\times h_{\mathnormal{\mathrm{Spec\,}} k}\to S]$), and the functions of the
form
${\bf 1}_Y$, where $Y$ is a definable subassignment of $S$.
We can form the tensor product of the ring ${\mathcal P}(S)$ and the ring $K_0({\text{RDef}}_S)$:
$$
{\mathcal C}(S):={\mathcal P}(S)\otimes_{{\mathcal P}^0(S)}K_0({\text{RDef}}_S).
$$
This is the ring of {\bf constructible motivic functions on $S$}.
We refer to
\cite[\S~3.2]{CL.expo} for details.
Finally, one defines {\bf constructible motivic Functions on $S$} as equivalence classes of
elements of ${\mathcal C}(S)$ ``modulo support of smaller dimension''.
See \cite[\S~3.3]{CL.expo} for a precise definition and discussion why this needs to be done. We will think of constructible motivic Functions as
functions defined almost everywhere (which is quite reasonable in the context of any integration theory).
\subsection{Summary}
Let $k$ be the base field, {{\it{e.g.}}}, $k=\Qgordon$.
To summarize,
instead of measurable sets we have definable {subassignments}; instead of functions -- constructible motivic Functions; and instead of numbers as values of the measure -- elements of a suitable Grothendieck ring (either of varieties, or of
Chow motives, or of ${\text{RDef}}_k$, depending on the context).
The measure theory and its relation to $p$-adic measure
is summarized by the diagram.
\begin{picture}(400,160)(50,30)
\put(110,210){\makebox(0,0)}
\put(190,150){\vector(-2,-1){60}}
\put(150,144){\makebox(0,0)[r]{Section~\ref{sub:interpr}}}
\put(200,160){\makebox(0,0)
{$h\in {\text{Def}}_k$}}
\put(210,150){\vector(2,-1){60}}
\put(254,144)
{\makebox(0,0)[l]{Section~\ref{sec:CL}}}
\put(280,110)
{\makebox(0,0)[l]{$\mu(h)\in K_0({\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k})$}}
\put(120,95){\vector(0,-1){47}}
\put(110,110){\makebox(0,0){\text{Subset of } $\Qgordon_p^m$ or $\ring{F}} \newcommand{\hi}{\ring{H}_p((t))^m$}}
\put(280,95){\vector(0,-1){47}}
\put(120,42){\makebox(0,0){number in $\Qgordon$}}
\put(100,90){\makebox(0,0)[t] {$p$-adic}}\put(100,78){\makebox(0,0){volume}}
\put(320,42){\makebox(0,0){Virtual Chow motive}}
\put(260,39){\vector(-1,0){110}}
\put(200,50){\makebox(0,0)[t]{$\text{TrFrob}_p$}}
\put(305,80){\makebox(0,0)[t] {\ \ Section~\ref{sub:comp}}}
\end{picture}
We describe the arrow from subassignments to elements of $K_0({\text{RDef}}_k)$
in the next section (this is what motivic integration developed in \cite{CL} essentially amounts to). We explain the relationship with $p$-adic integration in
Section~\ref{sec:back}, as indicated on the diagram.
\begin{remark}
As we will see, for the sets that come from definable subassignments,
the value of the $p$-adic measure, that is claimed to be
in $\Qgordon$ (in the bottom left corner of this diagram) in fact lies in
$\ring{Z}\left[\frac1p, \left(\frac{1}{1-p^{-i}}\right)_{i>0}\right]$.
\end{remark}
In this diagram, one can make a choice for the collection of fields that
appears in the upper left-hand corner. One natural collection of local fields
would be the collection ${\mathcal A}_F$ of all finite degree field extensions of all non-archimedean
completions of a given global field $F$
(in that case, one adds to Denef-Pas
language constant symbols for all elements of $F$). Another natural collection
is the collection of all function fields $\F_q((t))$. One of the most
spectacular applications of motivic integration is the
\emph{Transfer Principle}
that allows one to transfer identities between these two collections of fields.
We talk more about this in Section~\ref{sec:applic}.
\section{Motivic integration as pushforward}\label{sec:CL}
The main difference between motivic integration developed by
Cluckers and Loeser \cite{CL} and the older theories is that in \cite{CL} integration, by definition, is \emph{pushforward}
of morphisms, in agreement with Grothendieck's philosophy.
Let $f:S\to W$ be a morphism of definable subassignments.
We have described the rings of constructible motivic functions
${\mathcal C}(S)$ and ${\mathcal C}(W)$ on $S$ and $W$, respectively. The goal
is to define a morphism of rings $f_{!}:{\mathcal C}(S)\to {\mathcal C}(W)$ that
corresponds
to integration along the fibres of $f$.\footnote{In reality, the situation is more complicated because, naturally, not all constructible functions are integrable. Accordingly, one needs to define a class of integrable functions. We say a few words about it in Section~\ref{sub:carpet}, but for now we will ignore this issue
for simplicity of exposition.}
To make the situation more manageable, the operation of pushforward is defined for various types of projections and injections, keeping in mind that a
general morphism can be represented as the composition of a projection and an
injection by considering its graph.
Naturally, pushforward for injections is extension by zero, and the
interesting
part is the projections.
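Explicitly, a morphism $f:Z\to Y$ of definable subassignments factors through its graph as
$$
Z\xrightarrow{\ z\mapsto (z,f(z))\ } Z\times Y \xrightarrow{\ (z,y)\mapsto y\ } Y,
$$
an injection followed by a projection, so it suffices to define the pushforward for morphisms of these two types.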
There are three kinds of projections: forgetting the valued-field, residue-field, or $\ring{Z}} \newcommand{\R}{\ring{R}$-valued variables.
It is a nontrivial proposition that the three kinds of variables are independent, in the sense that one can push forward along these projections in any order.
In order to understand the theory completely, one needs to read \cite{CL}.
Here we only aspire to sketch integration with respect to one valued-field variable. The idea is to break up the domain of integration into simpler sets (the cells), and define the integral on each cell. Then one can repeat this procedure inductively to integrate along all the variables and get the volume. The hardest part of the theory is a collection of statements of Fubini type that allow one to permute the order of integration with respect to the valued-field variables.
Throughout this section, we fix the ground field $k$ and let $S\in{\text{Def}}_k$ be a
definable subassignment (of some $h[m,n,r]$).
We start with the exposition of the cell decomposition theorem, which is the main
tool of the construction.
\subsection{The Cell Decomposition Theorem}
The cell decomposition theorem is a very powerful theorem with many striking
applications.
The article \cite{Denef} gives a beautiful exposition of
$p$-adic cell decomposition (with a slightly
more restrictive definition of cells) and its
applications to questions about rationality of Poincar\'e series.
Here we will focus, instead, on examples illustrating
the technical side of the cell decomposition
used in the construction of the motivic measure.
Before we state the theorem, let us consider a simple example
of a $p$-adic integral.
\subsubsection{A motivating example}\label{subsub:example}
Consider the integral, depending on a parameter $x\in \ring{Z}_p$:
$$\int_{\ring{Z}_p}|t^3-x||\,{\text d}t|.$$
Let us calculate this integral by brute force, as a computer could have
done it. We assume that $p>3$.
First, consider the easiest case $x=0$.
Then the domain breaks up into infinitely many ``annuli'' $A_i$
on which the function
$|t^3|$ is constant. (Even though each one of the sets $A_i$ lives on the
line, we call it an annulus because it is a difference of two $1$-dimensional balls of radii $p^{-i}$ and $p^{-(i+1)}$ respectively).
The volume of each annulus is:
\begin{equation*}
\begin{aligned}\mu(A_i)&=\mu(\{t\mid |t|=p^{-i}\})=\mu(\{t\mid |t|\le p^{-i}\})-
\mu(\{t\mid |t|\le p^{-(i+1)}\})\\
&=p^{-i}-p^{-(i+1)}=p^{-(i+1)}(p-1).
\end{aligned}
\end{equation*}
Then the value of the integral for $x=0$ is the sum of the geometric series:
$$
\int_{\ring{Z}_p}|t^3||\,{\text d}t|=\sum_{i=0}^{\infty}p^{-3i}p^{-(i+1)}(p-1)=
\frac{p-1}p\frac{1}{1-p^{-4}}.
$$
Now let us turn to the case of general $x$.
If ${\text{ord}}(x)$ is not divisible by $3$, then for any $t$, we have
$|t^3-x|=\max(|t^3|, |x|)$, and so
the domain of integration breaks up into two parts: the part where $|t^3|$
dominates, and the part where $|x|$ dominates.
The integral over each part is easily
reduced to the sum of a geometric series, and
we omit the details.
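For concreteness, here is the simplest instance of this easy case: if ${\text{ord}}(x)=1$, then on $\{{\text{ord}}(t)=0\}$ (of measure $1-p^{-1}$) we have $|t^3-x|=1$, while on $\{{\text{ord}}(t)\ge 1\}$ (of measure $p^{-1}$) we have $|t^3-x|=|x|=p^{-1}$, so
$$
\int_{\ring{Z}_p}|t^3-x||\,{\text d}t|=(1-p^{-1})+p^{-1}\cdot p^{-1}=1-p^{-1}+p^{-2}.
$$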
The most interesting case is the case where ${\text{ord}}(x)=3k$ for some integer $k$:
in this case, along with the two ``easy'' integrals similar to the previous case (which we omit) there is also the integral over the set
$B=\{t\mid |t^3|=|x|\}$.
This case breaks up further into three subcases:
\begin{enumerate}
\item $x$ is not a cube;
\item $x$ is a cube, and there is one cube root of $x$ in $\ring{Z}} \newcommand{\R}{\ring{R}_p$;
\item $x$ is a cube, and there are three cube roots.
\end{enumerate}
Case (1) is also easy to finish, because in this case the formula
$|t^3-x|=\max(|t^3|, |x|)$ still holds for all $t$.
We will focus on the cases (2) and (3), which are the most interesting.
If '$\exists y \mid x=y^3$' holds, then the number of solutions to this
equation in $\ring{Z}} \newcommand{\R}{\ring{R}_p$ depends on $p$: for example,
there is only one root in $\ring{Z}} \newcommand{\R}{\ring{R}_5$, and three roots in $\ring{Z}} \newcommand{\R}{\ring{R}_7$.
Let us consider the case with $3$ roots first.
We can write $t^3-x=(t-y_1)(t-y_2)(t-y_3)$.
Suppose $t\in B$.
First, consider the subset $B_0$ of $B$ that consists of the points $t$
such that $\overline{\text{ac}}(t)\neq \overline{\text{ac}}(y_i)$, $i=1,2,3$. On this set, $|t-y_i|=p^{-k}$,
and
\begin{equation*}
\int_{B_0}|t^3-x|=p^{-3k}\mu(B_0)=p^{-3k}\bigl((p^{-k}-p^{-(k+1)})-3p^{-(k+1)}\bigr)=
p^{-4k}-4p^{-(4k+1)}.
\end{equation*}
Finally, consider the sets $B_i=\{\overline{\text{ac}}(t)=\overline{\text{ac}}(y_i)\}$, $i=1,2,3$.
It is enough to understand the integral over one of them, say, $B_1$.
The set $B_1$ is defined by
$$
B_1=\{t\mid {\text{ord}}(t)=k={\text{ord}}(y_1) \wedge \overline{\text{ac}}(t)=\overline{\text{ac}}(y_1)\}=
\{t\mid {\text{ord}}(t-y_1)\ge (k+1)\}.$$
The integral over $B_1$ becomes an infinite sum (indexed by the degree of congruence between $t$ and $y_1$, which we denote by $m={\text{ord}}(t-y_1)$):
\begin{align}\label{eq:B_1}
\int_{B_1}|t^3-x| &=\sum_{m=k+1}^{\infty}p^{-m}p^{-2k}(p^{-m}-p^{-(m+1)})\\
&=(1-p^{-1})p^{-2k}p^{-2(k+1)}(1-p^{-2})^{-1}.
\end{align}
From here it is easy to get the final answer, and easy to do the case of one cube root of $1$ in the field. The main point here is that in each case,
the integral boils
down to a few geometric series with a power of $p$ as the ratio, and a few finite sums. As we will see, this is a very general pattern.
The interesting part of the final answer, for the case when the
parameter $x$ is a cube and ${\text{ord}}(x)=3k$, is:
$$
\int\limits_{\{t \,\mid\, 3{\text{ord}}(t)={\text{ord}}(x)\}}\hspace*{-2.5em}|t^3-x||\,{\text d}t|=
\begin{cases}
3(1-p^{-1})\displaystyle\frac{p^{-4k-2}}{1-p^{-2}}+p^{-4k}-4p^{-(4k+1)}, &
p\equiv 1 \pmod{3};\\
(1-p^{-1})\displaystyle\frac{p^{-4k-2}}{1-p^{-2}}+p^{-4k}-2p^{-(4k+1)}, &
p\equiv 2 \pmod{3}.
\end{cases}
$$
\subsubsection{The definition of cells}
The general idea behind cell decomposition is to present every definable
set as a fibration over some definable set of dimension one less
(called the basis) with fibres that are $1$-dimensional $p$-adic balls.
\begin{definition}
Let $S$ be a definable subassignment.
Let $C\subset S$ be a definable subassignment of $S$, and
let $c:C\to h[1,0,0]$, $\alpha:C\to \ring{Z}$, $\xi:C\to h_{{\mathbb G}_m, k}$
be definable morphisms.
Denote by {\bf $Z_{C, \alpha,\xi, c}$} a subassignment of $S[1,0,0]$
defined by $y\in C$, ${\text{ord}}(z-c(y))=\alpha(y)$, $\overline{\text{ac}}(z-c(y))=\xi(y)$.
The subassignment $Z_{C,\alpha, \xi, c}$ is a basic $1$-cell.
We will refer to the subassignment $C$ as its {\bf basis} and to the function
$c$ as its {\bf centre}.
\end{definition}
When doing cell decomposition, we will also need to be able to have some
{pieces of smaller dimension}. This is the {idea} behind the next
definition.
\begin{definition}
In the context of the previous definition,
denote by
{\bf $Z_{C,c}$} the subassignment of $S[1,0,0]$ defined by the
formula $y\in C, z=c(y)$. This is a {basic $0$-cell}.
This is a subassignment of the same dimension as $C$; essentially,
it is a copy of $C$ that sits in a space of dimension {one greater}.
\end{definition}
These basic cells are simple enough to work with, but not yet versatile enough
for cell decomposition to work.
We need to modify the definition of cells by allowing extra
residue field and integer-valued variables, and letting the points of the
cell live on different ``levels'' according to the values of these variables.
\begin{definition}\label{def:cells}
Let $S$ be a definable subassignment, let $s,r$ be some non-negative integers,
and let $\pi$ be the projection
$\pi:S[1,s,r]\to S[1,0,0]$ onto the first factor.
A definable subassignment $Z\subset S[1,0,0]$ is called a {\bf $1$-cell} if
there exists an isomorphism of definable subassignments
(called a {\bf presentation})
$$\lambda:Z\to Z_{C,\alpha, \xi, c}\subset S[1,s,r]$$
for some $s, r \ge 0$, some basis $C\subset S[0,s,r]$,
such that $\pi\circ \lambda$ is the identity on $Z$.
\[
\xymatrix{
Z_{C,\alpha,\xi,c} \ar[rr]^{\hookrightarrow}&& \ar[d]^{\pi} S[1,s,r]\\
& \ar[ul]^{\lambda} Z \ar[r]^{\hookrightarrow} & \ar[d] S[1,0,0]\\
C \ar[r]^{\hookrightarrow} & S[0,s,r] \ar[r] & S
}
\]
\end{definition}
A similar definition applies to $0$-cells, with the only change that
the isomorphism $\lambda$ is between $Z\subset S[1,0,0]$ and a $0$-cell
$Z_{C,c}\subset S[1,s,0]$ with some basis $C\subset S[0,s,0]$ (in particular, no extra $\ring{Z}} \newcommand{\R}{\ring{R}$-valued variables allowed).
\begin{example} Take $S=\mathnormal{\mathrm{Spec\,}} k$.
We can write the line $h[1,0,0]$ as the union of a
$0$-cell $h_{\mathnormal{\mathrm{Spec\,}} k}$ and a
$1$-cell $Z=\ring{A}^1_{k((t))}\setminus \{0\}$ (this is not a very precise
notation for a subassignment but this makes the meaning more clear).
Let us see precisely why $Z$ is indeed a $1$-cell. Let us define the subassignment $Z_{C,\alpha,\xi, c}$ and the presentation $\lambda$
required by the definition. We have the freedom to choose the number of extra
residue field
and $\ring{Z}} \newcommand{\R}{\ring{R}$-valued variables to introduce. Let us make
$Z_{C,\alpha,\xi, c}$ a subassignment of $h[1,1,1]$.
As the basis, we take the subassignment $C$ of $h[0,1,1]$ defined by
$\eta\neq 0$ (recall that $h[0,1,1]$ stands for $\ring{A}^1_k\times \ring{Z}} \newcommand{\R}{\ring{R}$).
We call the residue field variable $\eta$, and the $\ring{Z}} \newcommand{\R}{\ring{R}$-variable $r$. Let
$c(\eta,r)=0$ be the constant zero function from
$h[0,1,1]$ to $h[1,0,0]$ ({\it i.e.}, to $\ring{A}_{k((t))}^1$),
and let $\xi(\eta, r)=\eta$, $\alpha(\eta, r)=r$.
Now let $Z_{C,\alpha,\xi, c}$ be the subassignment of
$h[1,1,1]$ (denote the variables by $(z, \eta, r)$)
defined by ${\text{ord}}(z)=r$, $\overline{\text{ac}}(z)=\eta$.
The presentation $\lambda:Z\to Z_{C,\alpha,\xi, c}$ is given by
$\lambda(z)=(z,\overline{\text{ac}}(z),{\text{ord}}(z))$.
The projection $\pi$ is the projection onto the first factor from
$h[1,1,1]$ to $h[1,0,0]$ (that is, we forget the extra residue field and $\ring{Z}} \newcommand{\R}{\ring{R}$-variables). Clearly, $\pi\circ\lambda$ is the identity on $Z$.
One way to think about it is to imagine that we have placed different
points on the affine line (without $0$) over the valued field on
different ``layers''
indexed by their valuations and angular components.
\end{example}
\subsubsection{Cell Decomposition Theorem}
\begin{theorem}[\cite{CL}, Th.~7.2.1]
Let $X$ be a definable subassignment of $S[1,0,0]$ with $S$ in ${\text{Def}}_k$.
\begin{enumerate}
\item The subassignment $X$ is a finite disjoint union of cells.
\item For every constructible function $\varphi$ on $X$ there exists
a finite partition of $X$ into cells $Z_i$ with presentations
$(\lambda_i, Z_{C_i})$ such that $\varphi\mid_{Z_i}$ is the pullback by
$p_i\circ \lambda_i$ of a constructible function $\psi_i$ on $C_i$, where
$p_i$ is the projection $p_i:Z_{C_i}\to C_i$.
This is called the {\bf cell decomposition adapted to $\varphi$}.
\end{enumerate}
\begin{center}
\[
\xymatrix{
\ar[dd]^{p_i} Z_{C_i,\alpha_i,\xi_i,c_i} \ar[rr]^{\hookrightarrow}&& \ar[d] S[1,s_i,r_i]\\
& \ar[ul]^{\lambda_i} Z_i \ar[r]^{\hookrightarrow} & \ar[d] S[1,0,0]\\
C_i \ar[r]^{\hookrightarrow} & S[0,s_i,r_i] \ar[r] & S
}
\]
\end{center}
\end{theorem}
\begin{example}\label{ex:cells}
Let us consider the cell decomposition adapted to the function
$\varphi(x,t)=|t^3-x|$ with respect to the $t$-variable
(see our ``motivating example'',
Section~\ref{subsub:example}).
Note that $|t^3-x|_p=p^{-{\text{ord}}(t^3-x)}$, so it is natural to define the
corresponding
$A$-valued function (which we will also denote by $\varphi(x,t)$ in
this example) by $\varphi(x,t)=\ring{L}^{-{\text{ord}}(t^3-x)}$. (The details about the interpretation of constructible motivic functions will appear below in
Section \ref{sec:back}.)
As in Section~\ref{subsub:example}, it is convenient to consider the case
$x=0$ separately.
In our language, $\varphi(x,t)$ is a function on $h[2,0,0]$.
We split the domain into the two subassignments defined by $x\neq 0$ and $x=0$.
We only deal with the part $x\neq 0$ as it is more interesting.
First, consider the subassignments $h_1$ and $h_2$ defined by
$3{\text{ord}}(t)<{\text{ord}}(x)$ and $3{\text{ord}}(t)>{\text{ord}}(x)$, respectively.
On $h_2$ we have $\varphi(x,t)=|x|$. Since $\varphi(x,t)$ is independent of $t$, this is the easiest part:
$h_2$ is a single cell and the function $\psi$ is
$\ring{L}^{-{\text{ord}}(x)}$. The details of the presentation are
left to the reader.
The subassignment $h_1$ is a single cell as well. Indeed, on $h_1$, we have
$\varphi(x,t)=|t^3|$. To define the basis $C$, we add extra value sort variables
for ${\text{ord}}(x)$ and ${\text{ord}}(t)$, and an extra residue field variable for $\overline{\text{ac}}(t)$:
formally, let $C$ be the
subassignment of $h[1,1,2]$ defined by the formula
$$\phi(x, \eta, \gamma_1,\gamma_2)=\text{`} (x\neq 0)\wedge(\eta\neq 0)
\wedge (\gamma_1={\text{ord}}(x))\wedge (3\gamma_2<\gamma_1)
\text{'}.
$$
Let the centre $c: C\to h[1,0,0]$ be the zero function, let
$\alpha:C\to \ring{Z}$ be the function
$(x,\eta, \gamma_1,\gamma_2)\mapsto \gamma_2$,
and let $\xi(x,\eta,\gamma_1,\gamma_2)=\eta$ (so that $\xi$ is a function from
$C$ to ${\mathbb G}_m$).
Let $Z_{C,\alpha,\xi, c}$ be the subassignment of
$h[2,1,2]$ defined by
$$\phi_1(x,t,\eta,\gamma_1, \gamma_2)=\text{`}
\phi(x,\eta,\gamma_1,\gamma_2)\wedge({\text{ord}}(t)=\gamma_2) \wedge (\overline{\text{ac}}(t)=\eta)
\text{'},
$$
where $\phi$ is the formula defining the basis $C$.
The presentation $\lambda: h_1\to Z_{C,\alpha,\xi, c}$ is given by
$$\lambda(x,t)=(x,t,\overline{\text{ac}}(t),{\text{ord}}(x),{\text{ord}}(t)).
$$
Finally, let $\psi$ be the function on $C$ (with values in the ring $A$
of Section~\ref{subsub:ring_of_values}) defined by
$\psi(x,\eta,\gamma_1,\gamma_2)=\ring{L}^{-3\gamma_2}$ (recall that on $h_1$ we have $\varphi=|t^3|$, and $\gamma_2$ records ${\text{ord}}(t)$).
Then, clearly, on $h_1$
our function $\ring{L}^{-{\text{ord}}(t^3-x)}$ is the pullback of $\psi$ by
$p\circ\lambda$.
Now let us consider the remaining subassignment $h_0$ defined by
$3{\text{ord}}(t)={\text{ord}}(x)$.
It breaks up into two subassignments, which we will call
$h_c$ and $h_{nc}$ (the subscript $c$ stands for ``cube''):
$h_c$ is defined by $\exists y:(y^3=x \wedge \overline{\text{ac}}(y)=\overline{\text{ac}}(t))$, that is, $t$ is congruent to a cube root of $x$ (this is the counterpart of the sets $B_i$ from Section~\ref{subsub:example}; the smaller-dimensional locus where $t$ equals a cube root of $x$ can be ignored), and $h_{nc}$ is the complement of $h_c$ in $h_0$.
We omit $h_{nc}$, because on it $|t^3-x|=|x|$ and it is treated exactly like $h_2$, and focus on the most
interesting part $h_c$.
We will use three extra residue field variables: the variable $\eta_1$
will stand for $\overline{\text{ac}}(x)$, $\eta_2$ for
$\overline{\text{ac}}(t)$, and $\eta_3$ -- for the angular component of the difference between
$t$ and a given cube root of $x$ (the details will appear below, see equation
(\ref{eq:pres1})). We will also have one value sort variable $\gamma$ --
for the order of congruence between
$t$ and the chosen cube root of $x$.
Now let us do this formally. We can take as the basis the subassignment $C_1$
of $h[1,3,1]$ defined by the formula
\begin{multline}\label{eq:C_1}
\phi(x,\eta_1,\eta_2, \eta_3, \gamma)=\\
\text{`}(x\neq 0)\wedge(\exists y: y^3=x)
\wedge (\eta_1=\overline{\text{ac}}(x))\wedge (\eta_2^3=\overline{\text{ac}}(x))\wedge (\eta_3\neq 0)
\wedge (3\gamma\ge {\text{ord}}(x)+3)
\text{'}.
\end{multline}
Let the function $c_1:C_1\to \ring{A}_{k((t))}^1$ be defined by
$c_1(x,\eta_1,\eta_2,\eta_3, \gamma)=y$, where $y^3=x$ and $\overline{\text{ac}}(y)=\eta_2$.
Note that $c_1$ is a definable function, since its graph clearly is a
definable set.
Let the function $\alpha_1:C_1\to \ring{Z}$ be defined by
$\alpha_1(x,\eta_1,\eta_2, \eta_3,\gamma)=\gamma$, and let
$\xi_1(x, \eta_1,\eta_2, \eta_3,\gamma)=\eta_3$.
We make the set $Z_{C_1,c_1,\alpha_1,\xi_1}$ with these data according to
Definition \ref{def:cells}.
The presentation $\lambda:h_c\to Z_{C_1,c_1,\alpha_1,\xi_1}$
is given by the formula
\begin{equation}\label{eq:pres1}
\lambda(x,t):=(x,\overline{\text{ac}}(x),\overline{\text{ac}}(t), \overline{\text{ac}}(t-y),{\text{ord}}(t-y)),
\end{equation}
where $y^3=x$ and $\overline{\text{ac}}(y)=\overline{\text{ac}}(t)$. Finally, let $\psi_1:C_1\to A$ be the function
$$\psi_1(x,\eta_1,\eta_2, \eta_3, \gamma)=\ring{L}^{-\frac23{\text{ord}}(x)-\gamma}.
$$
Note that $\frac13{\text{ord}}(x)$ (the valuation of a cube root of $x$) is a definable $\ring{Z}$-valued function on $C_1$, since ${\text{ord}}(x)$ is divisible by $3$ there, and that on the cell ${\text{ord}}(t^3-x)=\frac23{\text{ord}}(x)+{\text{ord}}(t-y)$.
\end{example}
It is easy to see that all the conditions of the cell decomposition theorem are satisfied with these formal constructions. We will soon see how this prepares the ground for integration, and will help us recover the calculation of Section~\ref{subsub:example}.
\subsection{Motivic integration as pushforward} We are almost ready to
define integration with respect to one valued field variable. We just need
to discuss the (tautological)
integration with respect to extra residue field variables,
and summation over $\ring{Z}} \newcommand{\R}{\ring{R}$-variables, since as we have just seen,
we do pick up these variables in the process of
cell decomposition.
\subsubsection{Integration over the residue field variables}\label{subsub:ras.variables}
Everything in this subsection comes from \cite[Section 5.6]{CL}.
Let $f:S[0,n,0]\to S$ be the projection onto the first factor.
Recall that by definition, the ring of constructible functions on
$S[0,n,0]$ is spanned by
the elements of the form $a\otimes\varphi$, where $a$ is an element of
$K_0({\text{RDef}}_{S[0,n,0]})$, and $\varphi$ is a
function on $S[0,n,0]$ with values in the ring $A$.
Using quantifier elimination, one can prove \cite[Proposition 5.3.1]{CL}
that in fact it is enough to have just the elementary tensors of the form
$a\otimes \varphi$ where the
$\varphi$'s are pullbacks to $S[0,n,0]$ of functions on $S$, namely,
the natural map
\begin{equation}\label{eq:fns}
K_0({\text{RDef}}_{S[0,n,0]})
\otimes_{{\mathcal P}^0(S)} {\mathcal P}(S) \to {\mathcal C}(S[0,n,0])
\end{equation}
is an isomorphism.
Here is an example illustrating this fact.
\begin{example}
Let $\varphi={\bf 1}_Y$ be a characteristic function of a definable
subassignment $Y$ of $S[0,n,0]$. Then $Y$ is an element of ${\text{RDef}}_S$, so
clearly ${\bf 1}_Y$ is in the image of the map (\ref{eq:fns}).
\end{example}
Given this isomorphism of rings of constructible motivic functions,
pushforward for the projection $f$ is easy to define, and it is, essentially, tautological. An element $a$ of
$K_0({\text{RDef}}_{S[0,n,0]})$ can be viewed as an element of $K_0({\text{RDef}}_S)$ via
composition of the map to $S[0,n,0]$ with $f$. We denote it by
$f_{!}(a)$. Then let
$f_{!}(a\otimes\varphi):=f_{!}(a)\otimes \varphi$.
\subsubsection{Integration over $\ring{Z}$-variables, \cite[Section 4.5]{CL}}\label{subsub:z.variables}
Essentially, the measure on $\ring{Z}} \newcommand{\R}{\ring{R}^r$ is just the counting measure, and integration is summation. More precisely, we call a family $(a_i)$ of elements of
$A$ {\bf summable}, if $\sum_i\nu_q(a_i)$ converges for all $q>1$.
A function $\varphi(s,i)\in {\mathcal P}(S\times \ring{Z}} \newcommand{\R}{\ring{R}^r)$ is called
{\bf $S$-integrable} if, for every $s\in S$, the family $(\varphi(s,i))_{i\in \ring{Z}} \newcommand{\R}{\ring{R}^r}$
is summable (recall that our functions are $A$-valued).
\begin{thm}\cite[Theorem-Definition 4.5.1]{CL}
For each $S$-integrable function $\varphi$ on $S\times \ring{Z}} \newcommand{\R}{\ring{R}^r$, there
exists a unique constructible motivic function $\mu_S(\varphi)$ on $S$
such that for all $q>1$ and all $s$ in $S$,
$$
\nu_q(\mu_S(\varphi)(s))=\sum_{i\in \ring{Z}^r}\nu_q(\varphi(s,i)).
$$
\end{thm}
The proof of this theorem requires cell decomposition for Presburger functions; we will not discuss it here.
One of the consequences of the structure of Presburger functions is the fact
that the ring $A$ is the correct ring of values for constructible motivic functions. More precisely, it is the structure of Presburger functions that is ultimately responsible for the fact that it is enough to invert $\ring{L}$ and the
elements $1-\ring{L}^{-n}$ in order to do integration of summable functions.
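For instance, a typical sum that arises is a geometric series whose ratio is a negative power of $\ring{L}$, such as
$$
\sum_{i\ge 0}\ring{L}^{-2i-1}=\ring{L}^{-1}\,\frac{1}{1-\ring{L}^{-2}},
$$
which is exactly why $\ring{L}^{-1}$ and the elements $\frac{1}{1-\ring{L}^{-i}}$ have to be available in the ring of values $A$.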
\subsubsection{Integration over a $1$-cell}\label{subsub:val.variables}
Let $S$ be a definable subassignment as before, and let
$\pi:S[1,0,0]\to S$ be the projection onto the first factor.
Let $\varphi$ be a constructible motivic function on $S[1,0,0]$.
We want to produce a constructible motivic function $\pi_{!}(\varphi)$
on $S$
that is the result of integrating $\varphi$ along the fibers of $\pi$.
The idea of integration is very simple: take a cell decomposition
of $S[1,0,0]$ adapted to $\varphi$. We have $S[1,0,0]=\sqcup Z_i$,
where $Z_i$ are cells.
The function $\varphi$ breaks up into the sum of its restrictions to cells:
$\varphi=\sum\varphi{\bf 1}_{Z_i}$, and we define the function
$\pi_!(\varphi)$ cell by cell.
If we care only about functions defined almost everywhere, we can discard the restriction of $\varphi$ to the union of $0$-cells, since it is supported on a set of smaller dimension than the support of the restriction of $\varphi$ to the union of $1$-cells.
Now let us define the pushforward on $1$-cells.
Let $Z$ be a $1$-cell, and let $\varphi_Z$ be the restriction of
$\varphi$ to $Z$.
We have:
\[
\xymatrix{
& \ar[dd]^{p_1} Z_{C_1,\alpha,\xi,c} \ar[rrr]^{\hookrightarrow} &&& S[1,s,r]\ar[d]\\
& & \ar[ul]^{\lambda} Z \ar[rr]^{\hookrightarrow} && S[1,0,0]\ar[d]^{\pi} \\
A & \ar[l]^{\psi_1} C_1 \ar[r]_{j_1}^{\hookrightarrow} & S[0,s,r] \ar[r]_{\pi_1} & S[0,0,r] \ar[r] & S
}
\]
Note that by definition of the cell, $\varphi_Z$ is constant
on the fibres of $p_1\circ\lambda$. If we identify $Z$ with
$Z_{C_1,\alpha,\xi,c}$ by means of the presentation $\lambda$, we can pretend
that the function $\varphi_Z$ lives on $Z_{C_1,\alpha,\xi,c}$, and it is
constant on the fibres of the projection $p_1:Z_{C_1,\alpha,\xi,c}\to C_1$.
It is natural to define the volume of the fibre of the projection $p_1$
over a point $y\in C_1$ to be $\ring{L}^{-\alpha_1(y)-1}$ -- by analogy with
the $p$-adic situation.
Hence, the following definition is natural:
\begin{definition}
\begin{equation}\label{eq:integral}
\pi_!(\varphi_Z):=\mu_S({\pi_1}_!({j_1}_!(\ring{L}^{-\alpha_1-1} \psi_1))).
\end{equation}
\end{definition}
Note that this definition automatically introduces normalization of the
measure: by specifying the factor $\ring{L}^{-\alpha_1-1}$, we have
fixed the volumes of $1$-dimensional $p$-adic balls.
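Indeed, in $\Qgordon_p$ the fibre in question is a set of the form $\{z\mid {\text{ord}}(z-c)=\alpha,\ \overline{\text{ac}}(z-c)=\xi\}$, which is a coset of $p^{\alpha+1}\ring{Z}_p$ and therefore has Haar measure $p^{-\alpha-1}$; the factor $\ring{L}^{-\alpha_1-1}$ is the motivic counterpart of this volume.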
\begin{example}
Let us return to the example of section \ref{subsub:example}, and consider
the case when the parameter $x$ is a cube, and in this case, let us only do the integral over a subset of the set $\{t\mid 3{\text{ord}}(t)={\text{ord}}(x)\}$.
Recall the notation from section \ref{subsub:example}: we had the set $B_1$ of all $t$ that are close to the cube root (or one of the three cube roots) of $x$. In Example~\ref{ex:cells}, we defined the corresponding subassignment
$h_c$ and
showed that it is a $1$-cell in the cell decomposition adapted to the
constructible function $\varphi(x,t)=\ring{L}^{-{\text{ord}}(t^3-x)}$.
Let us now compute the motivic integral of $\varphi$ with respect to the
variable $t$ over the cell $Z=h_c$.
In the notation used in the above definition,
we have $S=\ring{A}^1_{k((t))}=h[1,0,0]$.
On $Z=h_c$ (see Example~\ref{ex:cells}), we have
$\varphi=\lambda^{\ast}p_1^{\ast}(\psi_1)$, where $\psi_1$ is a function
on the basis $C_1\subset h[1,3,1]$ defined by
$\psi_1(x, \eta_1,\eta_2,\eta_3,\gamma)=\ring{L}^{-\frac23{\text{ord}}(x)-\gamma}$.
The function $\alpha_1$ on $C_1$ is defined by
$\alpha_1(x,\eta_1,\eta_2,\eta_3,\gamma)=\gamma$, so we have
$\ring{L}^{-\alpha_1-1} \psi_1=\ring{L}^{-\frac23{\text{ord}}(x)-2\gamma-1}$.
Note that this function is in ${\mathcal P}(C_1)$.
By definition,
${\pi_1}_{!}{j_1}_{!}(\ring{L}^{-\alpha_1-1} \psi_1)=
[C_1]\otimes\ring{L}^{-\frac23{\text{ord}}(x)-2\gamma-1}$, where now $C_1$ is thought of
as an element of ${\text{RDef}}_{S[0,0,1]}$ via the map $\pi_1\circ j_1$, and
$[C_1]$ is its class in $K_0({\text{RDef}}_{S[0,0,1]})$.
Let us denote the projection (that forgets the $\ring{Z}} \newcommand{\R}{\ring{R}$-variable)
from $S[0,0,1]$ to $S$ by $p$.
Now, $\mu_S$ amounts to summation over $\gamma$, and we get
\begin{equation}\label{eq:pi!}
\pi_!(\varphi_Z):=\mu_S({\pi_1}_!({j_1}_!(\ring{L}^{-\alpha_1-1} \psi_1)))=
[p(C_1)]\otimes\ring{L}^{-\frac23{\text{ord}}(x)-1}\,\ring{L}^{-2(\frac13{\text{ord}}(x)+1)}\frac{1}{1-\ring{L}^{-2}}.
\end{equation}
Recall that $C_1$ is defined by the formula (\ref{eq:C_1}) of
Example~\ref{ex:cells}. Then $p(C_1)$ is a subassignment of $S[0,3,0]$
defined by: $x$ is a nonzero cube, $\eta_1=\overline{\text{ac}}(x)$, $\eta_2^3=\eta_1$, and $\eta_3\neq 0$ (we call the three residue field
variables $\eta_{1,2,3}$).
Note that magic happens as we fix a local field $K$ with a uniformizer
$\varpi_K$ and residue field $\F_q$, and interpret all the formulas in it.
As we discussed briefly in Section~\ref{sub:constr.f} and as we shall see in
detail in Section~\ref{sec:back}, to make $[p(C_1)]$ into a function on $S$,
we just need to count, for $x\in S(K)$, the number of points on the fibre of
$C_1$ over $x$. In our case, this yields three possible values of $\eta_2$ for each fixed $\eta_1=\overline{\text{ac}}(x)$ if there are $3$ cube roots of $1$ in the field,
or just one value of $\eta_2$ if there is only one cube root.
Since $\eta_3$ can take any value except $0$, we get $3(q-1)$ or $q-1$,
respectively. If we plug these numbers into (\ref{eq:pi!}), and replace all occurrences of $\ring{L}$ with $q$, we get an answer that agrees with equation
(\ref{eq:B_1}) of section~\ref{subsub:example}.
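Explicitly, combining these counts with (\ref{eq:pi!}): in the case when the residue field contains three cube roots of unity, the specialized value at such a point $x$ is
$$
3(q-1)\,q^{-2{\text{ord}}(x)-1}\,q^{-2({\text{ord}}(x)+1)}(1-q^{-2})
=3(q-1)(1-q^{-2})\,q^{-4{\text{ord}}(x)-3},
$$
and in the other case the factor $3(q-1)$ is replaced by $q-1$.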
\end{example}
\subsection{What was swept under the carpet}\label{sub:carpet}
Since our goal was just to give a very basic exposition of the main ideas
of the theory of motivic integration, we have left out, so far, some very
important issues, such as integrability and integration over manifolds.
\subsubsection{Integrability}
Naturally, there are many definable sets whose $p$-adic volume is not finite,
and there are many constructible motivic functions whose integral
should not converge. In the earlier versions of motivic integration this
issue was mainly dealt with by letting the valued field variables in all
formulas range only over the ring of integers, and not over the whole
valued field. That approach made the domain of integration compact, and
guaranteed finiteness of the volume.
One of the advantages of the theory developed in \cite{CL} is that the restriction to the ring of integers is dropped, and instead a natural class of
integrable functions is constructed. This is done by
starting out only with {\emph {summable}}
Presburger functions over $\ring{Z}^r$;
as the valued-field and residue-field variables are added, it is necessary to
consider Grothendieck {\emph {semirings}} of the so-called {\emph {positive}}
constructible motivic functions, instead of the full rings of constructible motivic functions.
Essentially, the term ``positive'' comes from the partial order that we have on the ring of values $A$.
The semiring of positive constructible motivic Functions on $S$ is
denoted by $C_+(S)$.
We refer to \cite[5.3]{CL} or
\cite[3.2]{CL.expo} for details.
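To give the flavour of the summability condition, here is a toy illustration (it is a special case, not the general definition): a function such as $\gamma\mapsto\ring{L}^{-\gamma}$, with $\gamma$ ranging over the nonnegative integers, is summable, and its sum is the element
$$
\sum_{\gamma\ge 0}\ring{L}^{-\gamma}=\frac{1}{1-\ring{L}^{-1}}
$$
of $A$ (a nonnegative one), since the corresponding series of real numbers converges for every real value $q>1$ substituted for $\ring{L}$; by contrast, the function $\gamma\mapsto\ring{L}^{\gamma}$ on the same set is not summable.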
The class of integrable positive Functions on $Z\in {\text{Def}}_S$
(denoted by $I_S C_+(Z)$) is defined inductively along with
the procedure of integration itself.
Let $S$ be in ${\text{Def}}_k$.
The main existence theorem for the motivic integral
(\cite[Theorem 10.1.1]{CL}) states that
there is a unique functor from the category
${\text{Def}}_S$ to the category of abelian semigroups, $Z\mapsto I_S C_{+}(Z)$, assigning to every morphism $f:Z\to Y$ in ${\text{Def}}_S$ a morphism
$f_{!}:I_SC_+(Z)\to I_SC_+(Y)$ that satisfies a list of axioms.
We have already discussed most of these axioms in some form: they
include additivity, natural behaviour with respect to inclusions and
projections, normalization according to (\ref{eq:integral}), and the
Jacobian transformation rule, which is discussed below in \ref{subsub:graphs}.
Note that pushforward is functorial,
in the sense that it respects compositions: $(f\circ g)_{!}=f_{!}\circ g_{!}$.
We refer to \cite[Theorem 10.1.1]{CL} or to \cite[Section 2.5]{CHL} for
the complete list.
\subsubsection{Integration over graphs}\label{subsub:graphs}
The idea of integration that we have sketched so far is sufficient for
integration of constructible functions over $d$-dimensional subsets of
$\ring{A}^d_{k((t))}$ for some $d$. It would be natural for the theory to
include integration over manifolds, and a Jacobian transformation rule.
Cell decomposition helps with this issue as well: $0$-cells are
basically graphs of functions, and so one can make sure that transformation
rule holds by defining integrals over $0$-cells appropriately.
For a definable subassignment $h$, let
${\mathcal A}(h)$ be the ring of definable functions from $h$ to
$\ring{A}_{k((t))}^1$. For every positive integer $i$, one can define
an ${\mathcal A}(h)$-module $\Omega^i(h)$ of definable $i$-forms on $h$.
As one would naturally hope, the module of top degree forms is free
of rank $1$, and there is a canonical volume form
$|\omega_0(h)|$, which is an analogue
of the canonical volume form in the $p$-adic case.
\begin{definition}\cite[8.4]{CL}
Let $f:X\to Y$ be a morphism between two definable subassignments
of $h[m,n,r]$ and $h[m',n',r']$, respectively. Assume that both $X$
and $Y$ are of dimension $d$, and the fibres of $f$ have dimension $0$.
Then the order of Jacobian
\footnote{In geometric motivic integration, the {\bf order of Jacobian}
is given a very geometric meaning: if $f:X\to Y$ is a morphism of varieties,
the order of Jacobian is the function on the arc space of $X$ that assigns
to each arc its order of tangency to the singular locus of the morphism $f$.
As we discuss in Appendix 1, motivic integration theory described here
specializes to
geometric motivic integration. It is worth pointing out that the two
notions of the order of Jacobian
agree, \cite[8.6]{CL}.
}
is defined naturally by the formula
\footnote{It is possible to show that a definable function on a definable subassignment $S$ is analytic outside a subassignment $S'$ with $\dim S'< \dim S$.
On the subassignment $S\setminus S'$ the usual determinant formula for Jacobian holds.
}
$$f^{\ast}|\omega_0|_Y=\ring{L}^{-\text{ordjac} f}|\omega_0|_X,$$
with $\text{ordjac} f$ a $\ring{Z}$-valued function on $X$ defined outside a
definable subassignment of dimension less than $d$.
\end{definition}
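As a quick illustration of the order of the Jacobian (this toy example is not taken from \cite{CL}), consider the squaring map $f:x\mapsto x^2$ on the subassignment $\{x\mid x\neq 0\}$ of $h[1,0,0]$, in residue characteristic different from $2$. Classically $d(x^2)=2x\,dx$ and $|2x|=\ring{L}^{-{\text{ord}}(x)}$, so
$$
f^{\ast}|\omega_0|=\ring{L}^{-{\text{ord}}(x)}\,|\omega_0|,
\qquad\text{that is,}\qquad
\text{ordjac} f(x)={\text{ord}}(2x)={\text{ord}}(x).
$$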
Now let $Z$ be a $0$-cell that is part of a cell decomposition adapted to a
constructible function $\varphi$, and let $\varphi_Z:=\varphi{\bf 1}_Z$
be the restriction of $\varphi$ to $Z$. Let us assume here that $Z$ has
dimension $d$, and this is the dimension of the support of $\varphi$.
Then we have:
\[
\xymatrix{
& Z_{C_0,c} \ar[dd]^{p_0} \ar[rr]^{\hookrightarrow} && S[1,s,0] \ar[d] \\
&& \ar[ul]^{\lambda} Z \ar[r]^{\hookrightarrow} & S[1,0,0]\ar[d]^{\pi} \\
A &\ar[l]^{\psi_0} C_0 \ar[r]_{j_0}^{\hookrightarrow} & S[0,s,0] \ar[r]_{\pi_0} & S
}
\]
Recall that by definition of $\psi_0$, we have
$\varphi_Z=\lambda^{\ast}p_0^{\ast}(\psi_0)$. As in the case of $1$-cells,
let us imagine that $Z$ is identified with $Z_{C_0,c}$ by means of the
isomorphism
$\lambda$, and the function $\varphi_Z$ is a function on $Z_{C_0,c}$.
\footnote{Of course, when the construction is finished, one needs to prove that it does not depend on $\lambda$. This turns out to be the case, see \cite[\S~9.1-9.2]{CL}.}
By definition of a $0$-cell, the fibres of the projection $p_0$ are
$0$-dimensional, so what we expect is that the functions $\varphi_Z$ and
$\psi_0$ would be related essentially by a factor that captures the order of the Jacobian
of the map between $Z$ and $C_0$. This is exactly the case.
By definition, $Z_{C_0,c}$ is the image of $C_0$ under the map $p_0^{-1}$.
It is natural to define ${p_0}_!(\varphi_Z)$ as $\ring{L}^{\gamma}\psi_0$, where
the function $\gamma:C_0\to \ring{Z}$ is defined by $y\mapsto (\text{ordjac}\, p_0)(p_0^{-1}(y))$.
Finally, we already know how to define ${\pi_0}_{!}$
(subsection \ref{subsub:ras.variables}),
and ${j_0}_!$ (extension by zero).
Putting all these pieces together, we get
\begin{definition}
$$
\pi_!(\varphi_Z):={\pi_0}_!({j_0}_!(\psi_0\ring{L}^{\gamma})).
$$
\end{definition}
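For orientation (this analogy is ours and is not part of the definition): the factor $\ring{L}^{\gamma}$ plays the same role as the Jacobian factor in the classical change-of-variables formula for $p$-adic integrals, where for an analytic bijection $u$ between open subsets of a local field $K$ one has
$$
\int\varphi(u(x))\,|u'(x)|\,|dx|=\int\varphi(y)\,|dy|,
\qquad |u'(x)|=q^{-{\text{ord}}(u'(x))}.
$$
The specialization results of Section~\ref{sec:back} are what ultimately guarantee that the motivic definition is compatible with this classical rule when the residue characteristic is large enough.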
The hardest part of the theory is proving that the final definition
of pushforward does not depend on the choice of cell decomposition, and that
integration with respect to several
valued field variables does not depend on the order
(statements of Fubini type).
\subsection{Motivic volume}
Let $\Lambda\in {\text{Def}}_k$ be a definable subassignment, and let
$S\in {\text{Def}}_{\Lambda}$ (in particular, $S$ comes equipped with a morphism
$f:S\to\Lambda$). Then we can define {the} relative motivic volume of $S$ as
$$\mu_{\Lambda}(S)=f_{!}([{\bf 1}_S]).$$
In particular, when $\Lambda=h_{\mathnormal{\mathrm{Spec\,}} k}$ is the final object of the
category ${\text{Def}}_k$, we get the motivic volume
for all definable subassignments $S$ such that the characteristic
function ${\bf 1}_S$ is integrable.
Let us call a subassignment $Z$ of some $h[m,n,0]$ bounded if there exists
a positive integer $s$ such that $Z$ is contained in the subassignment $W_s$ of
$h[m,n,0]$ defined by ${\text{ord}}(x_i)\ge -s$, $1\le i\le m$
(where the variables $x_i$ run over the valued field).
\begin{proposition}\cite[Proposition 12.2.2]{CL}
If $Z$ is a bounded definable subassignment of $h[m,n,0]$, then
$[{\bf 1}_Z]$ is integrable.
\end{proposition}
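As a sanity check of the normalization (a standard computation, included only for illustration), consider the bounded subassignment $Z=\{x\in h[1,0,0]\mid {\text{ord}}(x)\ge 0\}$. Under the normalization fixed in (\ref{eq:integral}), the ball $\{{\text{ord}}(x)\ge\gamma\}$ has volume $\ring{L}^{-\gamma}$, so the annulus $\{{\text{ord}}(x)=\gamma\}$ has volume $\ring{L}^{-\gamma}-\ring{L}^{-\gamma-1}=\ring{L}^{-\gamma}(1-\ring{L}^{-1})$, and
$$
\mu(Z)=\sum_{\gamma\ge 0}\ring{L}^{-\gamma}(1-\ring{L}^{-1})=1,
$$
in agreement with the usual $p$-adic normalization $\mathrm{vol}({\mathcal O}_K)=1$.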
By definition, motivic volume takes values in the ring of positive integrable
constructible
motivic Functions on $\mathnormal{\mathrm{Spec\,}} k$. This ring, by definition,
is
$$
SK_0({\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k})\otimes_{\ring{N}[\ring{L}-1]}A_{+},
$$
where $SK_0({\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k})$ is the Grothendieck semiring (as opposed to
the full Grothendieck ring) that is made by taking only formal
linear combinations of equivalence classes of objects of ${\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k}$ with
nonnegative coefficients, and $A_+$ is the set of nonnegative elements of
$A$.
\section{Back to $p$-adic integration}\label{sec:back}
Everywhere in this section, we fix the base field $k=\Qgordon$, for simplicity of the exposition.
Recall that in the definition of the language of Denef-Pas (see Section
\ref{sub:ldp}), there was some flexibility in the matter of choosing what to add to the language as allowed coefficients for formulas.
\emph{Everywhere in this section, we will consider one specific variant of Denef-Pas language: we allow coefficients in $\ring{Z}[[t]]$ for the valued field sort, and coefficients in $\ring{Z}$ for the residue field sort. This language will be denoted
${\mathcal L}_{\ring{Z}}$.}
There are two collections of fields over which we would like to do integration:
local fields of characteristic zero, and the function fields $\ring{F}_q((t))$.
Let ${\mathcal A}_{\ring{Z}}$ be the collection of all finite field
extensions of non-archimedean completions of
$\Qgordon$, and let ${\mathcal B}_{\ring{Z}}$ be the collection of all local fields of positive characteristic.
In the last section we sketched the construction of a motivic volume of a
subassignment $h\in {\text{Def}}_{\Qgordon}$, and more generally, of an integral of a constructible motivic function on $h$. In order to relate this motivic integration
with the classical $p$-adic integration of Section~\ref{sub:Weil}, we need
to do two things: first, we need to relate subassignments to the $p$-adic measurable sets, and second, we need to find a way to get from the values of the motivic volume to the rational numbers.
We start with the first task.
\subsection{Interpreting formulas in $p$-adic fields}\label{sub:interpr}
Observe that a definable subassignment $S$ of, say, $h[1,1,0]$ does not automatically give us a subset of $\Qgordon_p\times \ring{F}_p$: indeed, $S(\Qgordon_p)$ is by definition a
subset of
$\Qgordon_p((t))\times \Qgordon_p$ rather than of $\Qgordon_p\times \ring{F}_p$.
However, it is clear that we can interpret the formulas defining $S$ so that we would get a subset of $\Qgordon_p\times \ring{F}_p$ as desired.
Let us describe this procedure precisely (we are essentially quoting
\cite[\S~6.7]{CL.expo}).
Let $S$ be a definable subassignment of $h[m,n,r]$. As specified at the beginning of this section, by this we mean that $S$ is defined by a formula $\varphi$
in
Denef-Pas language with coefficients in $\ring{Z}[[t]]$.
Let $(K,\varpi_K)$ be a local field of characteristic $0$ from the collection
${\mathcal A}_{\ring{Z}}$, with the choice of a uniformizer.
The field $K$ can be considered as a $\ring{Z}[[t]]$-algebra via the morphism
$$\lambda_{\ring{Z}, K}:\ring{Z}[[t]]\to K: \quad \sum_{i\ge 0}a_i t^i\mapsto
\sum_{i\ge 0} a_i\varpi_K^i.$$
Note that the series $\sum_{i\ge 0} a_i\varpi_K^i$ converges in $K$, since
${\text{ord}}(a_i)\ge 0$ for any $a_i\in \ring{Z}$.
A similar morphism exists also for fields of finite characteristic from
the collection ${\mathcal B}_{\ring{Z}}$, even though in this case we prefer to
write it as
$$\lambda_{\ring{Z}, K}:\ring{Z}[[t]]\to K: \quad \sum_{i\ge 0}a_i t^i\mapsto
\sum_{i\ge 0} (a_i\pmod{p_K})\varpi_K^i,$$
where $p_K$ is the characteristic of the residue field of $K$.
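For instance, the series $1+t+t^2+\cdots\in\ring{Z}[[t]]$ is sent by $\lambda_{\ring{Z},K}$ to the convergent geometric series
$$
\sum_{i\ge 0}\varpi_K^i=\frac{1}{1-\varpi_K}\in K,
$$
in either of the two cases.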
Using these morphisms, any formula $\varphi$ with coefficients in $\ring{Z}[[t]]$ and $m$ free variables of the valued field sort and no other free variables
can be interpreted to define a subset of $\ring{A}^m(K)$ for any
$K\in {\mathcal A}_{\ring{Z}}\cup {\mathcal B}_{\ring{Z}}$.
Formulas in the language of rings with coefficients in $\ring{Z}$ can naturally be
interpreted in the residue field of $K$, via reduction modulo the residue characteristic.
There is no additional work needed for the variables running over $\ring{Z}$.
This way, any definable (with the above-mentioned restriction on coefficients)
subassignment $S$ of $h[m,n,r]$ gives a subset $S_{K,\phi}$ of
$K^m\times k_{K}^n\times \ring{Z}^r$, where
$K\in {\mathcal A}_{\ring{Z}}\cup{\mathcal B}_{\ring{Z}}$, and $k_K$ is the residue field of $K$, and where $\phi$ is the formula (or collection of formulas)
defining the subassignment $S$.
There is a very important issue here: the set $S_{K, \phi}$
depends on the choice of the formula $\phi$ that we used to define $S$,
as illustrated by a very simple example.
Consider the two formulas
$\phi_1(x)=\text{`}{x=0}\text {'}$ and $\phi_2(x)=\text{`}{3x=0}\text {'}$.
For each field $K$ of characteristic $0$, either formula defines a one-point
set $\{0\}$, so $\phi_1$ and $\phi_2$ define the same subassignment
(call it $S$) of
$h[1,0,0]$. On the other hand, for the fields $K$ of characteristic $3$,
$S_{K,\phi_1}\neq S_{K,\phi_2}$.
This example illustrates that
\emph{the correspondence between definable subassignments and definable $p$-adic sets is well defined only for sufficiently large $p$. Moreover, the choice of the primes to discard depends on the formula we are using to describe a given set, not on the set itself.}
The fact that only finitely many primes need to be discarded (which is of course crucial) is a nontrivial theorem.
Precisely, we have:
\begin{proposition}\cite[\S\S~6.7, 7.2]{CL.expo}
If two formulas $\psi$ and $\psi'$ define the same subassignment $S$,
then there exists an integer $N$ such that
$S_{K,\psi}=S_{K,\psi'}$ for every field
$K\in {\mathcal A}_{\ring{Z}}\cup {\mathcal B}_{\ring{Z}}$
with residue characteristic greater or equal to $N$.
However, this number $N$ can be arbitrarily large for different $\psi'$.
\end{proposition}
\subsubsection{Specialization of constructible motivic Functions}\label{sub:specialization}
We have just described how definable subassignments give measurable subsets of
$p$-adic fields. Let us now describe the specialization of constructible
motivic functions.
First, note that a morphism of definable subassignments $f:Z\to W$ specializes
to a function $f_K:Z_K\to W_K$ for all
$K\in {\mathcal A}_{\ring{Z}}\cup{\mathcal B}_{\ring{Z}}$ of sufficiently large
residue
characteristic (since the graph of $f$ is a definable subassignment, it
will specialize to a definable subset of $Z_K\times W_K$, and that gives the graph of $f_K$).
In particular, for $S\in {\text{Def}}_{\Qgordon}$, $\ring{Z}$-valued functions on $S$ specialize
to $\ring{Z}$-valued functions on $S_K$. The functions with values in the ring
$A$ specialize to $\Qgordon$-valued functions once we replace $\ring{L}$ with $q$,
where $q$ is the cardinality of the residue field of $K$.
Thus we can interpret elements of ${\mathcal P}(S)$.
Recall that a constructible motivic function on $S$ is an element of
${\mathcal P}(S)\otimes K_0({\text{RDef}}_S)$.
As mentioned in Section~\ref{sub:constr.f}, an
element $\pi:W\to S$ of $K_0({\text{RDef}}_S)$ gives an integer-valued function
on $S_K$ by $x\mapsto \#\pi_K^{-1}(x)$.
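For instance (a standard counting example, included only as an illustration): take $S=h[1,0,0]$ and let $W\subset S[0,1,0]$ be defined by the condition $\eta^2=\overline{\text{ac}}(x)$, with $\pi:W\to S$ the projection. For odd residue characteristic, the associated function on $S_K$ is
$$
x\mapsto \#\{\eta\in k_K\mid \eta^2=\overline{\text{ac}}(x)\},
$$
which, for $x\neq 0$, equals $2$ or $0$ according to whether the angular component $\overline{\text{ac}}(x)$ is a square in $k_K^{\times}$.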
The main point is that motivic integration specializes to $p$-adic
integration. Since now we also have the residue-field and integer-valued parameters, when we consider $p$-adic measure, we take the product of
Serre-Oesterl\'e measure on the Zariski closure of the set cut out by
the valued-field variables with the counting measure on
$k_K^n\times \ring{Z}^r$.
Let $\Lambda\in {\text{Def}}_{\Qgordon}$ be a definable subassignment.
Let $S\in {\text{Def}}_{\Lambda}$, with the morphism $f:S\to \Lambda$.
Let $\varphi$ be an integrable constructible motivic function on $S$, and let $K$ be a local field. Then we have $f_K:S_K\to \Lambda_K$, and the
interpretation $\varphi_K$, which is a function on $S_K$ (all these
are well defined when the residue characteristic of $K$ is large enough).
It is possible to prove that the restriction of $\varphi_K$ to the fibre of
$f_K$ at a point $\lambda\in \Lambda_K$ is integrable for almost all
$\lambda\in \Lambda_K$. We denote by $\mu_{\Lambda_K}(\varphi_K)$ the
function on $\Lambda_K$ that assigns to each point $\lambda\in \Lambda_K$
the integral of
$\varphi_K$ over the fibre of $f_K$ at $\lambda$.
\begin{theorem}\cite[9.1.5, Specialization Principle]{CLF}\label{thm:spec.pr}
Let $f:S\to \Lambda$ be an ${\mathcal L}_{\ring{Z}}$-definable morphism, and let
$\varphi$ be a constructible motivic function on $S$, relatively integrable
with respect to $f$.
Then there exists $N>0$ such that for all $K$ in
${\mathcal A}_{\ring{Z}}\cup{\mathcal B}_{\ring{Z}}$ with residue characteristic
greater than $N$, and
every choice of the uniformizer $\varpi$ of the valuation on $K$,
$$
(\mu_{\Lambda}(\varphi))_K=\mu_{\Lambda_K}(\varphi_K).
$$
\end{theorem}
This theorem is proved by comparing the construction of the motivic integral
with the understanding of the $p$-adic measure that one gets from $p$-adic
cell decomposition theorem \cite{Denef}.
\subsection{Pseudofinite fields}\label{sub:pseudo}
By now we have the motivic volume with values in
$SK_0({\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k})\otimes_{\ring{N}[\ring{L}-1]} A_+$, and it specializes to the classical $p$-adic volume for almost all $p$, as discussed above.
It turns out that if we just want to capture the $p$-adic volume, then our
motivic volume is a bit too refined and complicated an object: namely, one can
identify a lot of elements of $K_0({\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k})$, and specialization would
still hold. In order to define a new equivalence relation on formulas in the
language of rings, we need to define the category of
pseudofinite fields first.
\begin{definition}
A field $K$ of characteristic zero is called {\bf pseudofinite} if it is perfect, has exactly one field extension of
each finite degree, and if $V$ is a geometrically irreducible variety over $K$, then $V$ has a
$K$-rational point.
\end{definition}
One can get an example of a pseudofinite field by means of
constructing an {\emph {ultraproduct}} of finite fields, see {\it{e.g.}},
\cite[\S~20.10]{FJ}.
\begin{definition}\cite{DL.congr}.
Let {\bf $K_0({\text{PFF}}_k)$} be the group generated by symbols $[\phi]$, where
$\phi$ is any formula in the language of rings over $k$, subject to the
relations:
$[\phi_1 \vee \phi_2]=[\phi_1]+[\phi_2]-[\phi_1\wedge \phi_2]$ whenever
$\phi_1$ and $\phi_2$ have the same free variables, and the relations
$[\phi_1]=[\phi_2]$
if there exists a ring formula $\psi$ over $k$ such that the interpretation of $\psi$ in
any pseudofinite field $K$ containing $k$ gives a graph of a bijection between the tuples of elements of $K$ satisfying $\phi_1$ and those satisfying
$\phi_2$.
The multiplication on $K_0({\text{PFF}}_k)$ is induced by the conjunction of
formulas in disjoint sets of variables.
The additive group of $K_0({\text{PFF}}_k)$ is called the Grothendieck group of
pseudofinite fields.
\end{definition}
The reason the category of pseudofinite fields turns out to be so useful
for us is the following theorem.
A DVR-formula is a formula in the language of Denef-Pas with coefficients in
$\ring{Z}[[t]]$ in the valued field sort, and such that all its valued field
variables are restricted to the ring of integers (DVR stands for ``Discrete Valuation Rings'').
\begin{theorem} (Ax-Kochen-Ersov Principle)
Let $\sigma$ be a DVR-formula over $\ring{Z}$ with no free variables. Then the following statements are equivalent:
\begin{enumerate}
\item The interpretation of $\sigma$ in $\ring{Z}_p$ is true for all but finitely many primes.
\item The interpretation of $\sigma$ in $K[[t]]$ is true for all pseudofinite fields $K$.
\end{enumerate}
\end{theorem}
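The classical application of this circle of ideas, which we recall only as an illustration, is the Ax-Kochen theorem on forms: for every degree $d$, for all but finitely many primes $p$, every homogeneous form of degree $d$ over $\Qgordon_p$ in more than $d^2$ variables has a nontrivial zero. This is obtained by transfer from the corresponding statement over the fields $\ring{F}_p((t))$, where it holds by a theorem of Lang.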
\subsection{Comparison theorems}\label{sub:comp}
Let the base field be $k=\Qgordon$, as before.
Given a definable subassignment $X$ of $h[m,0,0]$, by now we have defined
the subsets of $K^m$ associated with it, for
$K\in {\mathcal A}_{\ring{Z}}\cup {\mathcal B}_{\ring{Z}}$, and we have defined the
motivic volume of $X$, $\mu(X)\in SK_0({\text{RDef}}_{\Qgordon})\otimes_{\ring{N}[\ring{L}-1]}A_+$.
There is a natural map from the Grothendieck ring $K_0({\text{RDef}}_k)$
to $K_0({\text{PFF}}_k)$: we just
identify the subassignments that coincide on the category of pseudofinite
fields containing $k$ to obtain a class in $K_0({\text{PFF}}_k)$.
Hence, to each subassignment $X$ we have also associated an element
of $K_0({\text{PFF}}_{\Qgordon})$, which we will also denote by $\mu(X)$.
By the Ax-Kochen-Ersov principle, two formulas
$\phi_1$ and $\phi_2$ define subsets of $K^m$ of the same volume for all
$K\in {\mathcal A}_{\ring{Z}}\cup {\mathcal B}_{\ring{Z}}$ with residue characteristic
bigger than $N$ for some $N$ if and only if
$\mu(h_{\phi_1})=\mu(h_{\phi_2})$, where $h_{\phi}$ denotes the
subassignment defined by the formula $\phi$.
It is a difficult theorem of Denef and Loeser \cite{DL} that
there exists a unique ring morphism
$$\chi_c:K_0({\text{PFF}}_k)\to K_0^{mot}({\text{Var}}_k)\otimes \Qgordon,$$
that satisfies two natural conditions. The first condition is that for any formula $\varphi$ which is a conjunction of polynomial equations over $k$, the element $\chi_c([\varphi])$ equals the class in $K_0^{mot}({\text{Var}}_k)\otimes \Qgordon$
of the variety defined by $\varphi$. The second condition is more complicated: it specifies how the map $\chi_c$ should behave with respect to cyclic covers. This relates to elimination of quantifiers in formulas of the form
$\varphi(x)=\text{`}\exists y: y^d=x\text{'}$.
It is this condition that makes Chow motives the right category for the values of the volume, as opposed to varieties, which would not have been
sufficient.
We refer to \cite[Th.~2.1]{DL.congr}
for the precise statement and a sketch of the proof, and to \cite{Tom.intro}
for an exposition.
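A standard example to keep in mind (included here only as an illustration): for the formula $\varphi(x)=\text{`}\exists y: y^2=x\wedge x\neq 0\text{'}$, which over $\ring{F}_q$ with $q$ odd defines the set of the $(q-1)/2$ nonzero squares, one has
$$
\chi_c([\varphi])=\tfrac{1}{2}(\ring{L}-1)\in K_0^{mot}({\text{Var}}_k)\otimes \Qgordon,
$$
half the class of the punctured affine line; this also shows why tensoring with $\Qgordon$ is needed.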
The existence of the map $\chi_c$ allows us to state the Comparison Theorem,
\cite[Th.~8.3.1, Th.~8.3.2]{DL.arithm}.
Here we quote a reformulation of this theorem as stated in \cite{CL.expo}.
\begin{theorem}
Let $\varphi$ be a formula in the language of Denef-Pas, with $m$ free
valued field variables and no other free variables. There exists a
virtual motive $M_{\varphi}$, canonically attached to $\varphi$, such that,
for almost all prime numbers $p$, the volume of $h_{\Qgordon_p,\varphi}$ is finite
if and only if the volume of $h_{\ring{F}_p[[t]],\varphi}$ is finite, and in
this case they are both equal to the number of points
of $M_{\varphi}$ in $\ring{F}_p$.\footnote{
In the original construction, the virtual Chow motive $M_{\varphi}$ lives in a certain completion of the ring $K_0^{mot}({\text{Var}}_k)$ (see {Appendix}~1, and
\cite{Tom.intro}). It follows from the Cluckers-Loeser theory
of motivic integration described in the previous section
that $M_{\varphi}$ lives in the ring obtained
from $K_0^{mot}({\text{Var}}_k)\otimes \Qgordon$ by inverting $\ring{L}$ and $1-\ring{L}^{-n}$ for
all positive integers $n$.
When we say ``the number of points on $M_{\varphi}$'' we mean by this the
extension of the function that counts the number of points over $\ring{F}_q$ from the category of varieties to the ring where $M_{\varphi}$ lives. This
extension is obtained as follows: first, one replaces the number of points by the
alternating sum of the trace of Frobenius on cohomology
(as in the Grothendieck-Lefschetz fixed point formula). This procedure is well-defined for Chow motives, and extends the notion of the number of rational
points of a variety. Then the trace of Frobenius function is extended
to the Grothendieck ring by additivity,
and then extended further to the tensor product with $\Qgordon$, in a
natural way (at this point it becomes $\Qgordon$-valued).
Finally, if we assign the value $q$ to $\ring{L}$, this function extends to the
localization by $\ring{L}$ and $1-\ring{L}^{-n}$.
}
\end{theorem}
\begin{remark}
Even though it is necessary to make a map from $K_0({\text{RDef}}_k)$
to $K_0(\text{PFF}_k)$ and further
to the ring of virtual Chow motives in order to state the comparison theorems
that give a geometric interpretation of the $p$-adic measure,
the motivic volume taking values in $SK_0({\text{RDef}}_{\mathnormal{\mathrm{Spec\,}} k})\otimes A_+$ is sufficient for
the transfer principle that we state in the next section. In fact, the Ax-Kochen-Ersov principle that we have referred to in order to justify the map to $K_0(\text{PFF}_k)$ follows from this general transfer principle.
The way to think about it is that the motivic volume in
$SK_0({\text{RDef}}_k)\otimes A_+$ is the finest invariant of a subassignment; depending on the context,
one can map it to coarser invariants.
For example, motivic integration specializes to integration with respect to
Euler characteristic, as explained in the Introduction to \cite{CL}; one can also get Hodge or Betti numbers from the motivic volume
(that was one of the first applications of motivic integration), and so on.
\end{remark}
\section{Some applications}\label{sec:applic}
There are two natural directions for application of arithmetic motivic
integration. One is to obtain various ``uniformity in $p$'' results.
A very spectacular application in this direction is given by
the results of Denef and Loeser on the rationality of Poincar\'e series.
There are excellent expositions
\cite{DL.congr} and \cite{Denef}, so we will not discuss it here.
The other direction is transfer of identities from function fields to fields of characteristic zero. This is made possible by the very general transfer principle, which follows immediately from the construction of the
motivic integral and the fact that it specializes to the $p$-adic integral.
\begin{theorem}\cite[Transfer principle for integrals with parameters.]{CLF}
Let $S\to \Lambda$ and $S'\to\Lambda$ be
${\mathcal L}_{\ring{Z}}$-definable morphisms.
Let $\varphi$ and $\varphi'$ be ${\mathcal L}_{\ring{Z}}$-constructible
motivic functions on $S$ and $S'$, respectively. There exists $N$ such that
for every $K_1$ in ${\mathcal A}_{\ring{Z}, N}$ and $K_2$ in
${\mathcal B}_{\ring{Z},N}$
with isomorphic residue fields,
$$\mu_{\Lambda_{K_1}}(\varphi_{K_1})=\mu_{\Lambda_{K_1}}(\varphi_{K_1}')
\quad\text{if and only if}\quad
\mu_{\Lambda_{K_2}}(\varphi_{K_2})=\mu_{\Lambda_{K_2}}(\varphi_{K_2}').
$$
\end{theorem}
Loosely speaking, this theorem says that an equality of
integrals of the specializations of
two constructible motivic functions holds over all local fields
of characteristic zero with sufficiently large residue characteristic
if and only if it holds over all function fields with sufficiently large
residue characteristic.
The most recent and important application of this transfer principle is
the transfer principle for the Fundamental Lemma that appeared in \cite{CHL}.
Here we cannot explain the Fundamental Lemma (which states that certain
$\kappa$-orbital integrals on two related groups are equal),
so we only include a
brief discussion of the relevance of motivic integration to computing
orbital integrals.
\subsection{Orbital integrals}
Recall the definition:
\begin{definition}
Let $G$ be a $p$-adic group, let ${\bf\mathfrak g}$ be its Lie algebra, and let
$X\in {\bf\mathfrak g}$.
An {\bf orbital integral at $X$} is a distribution on the space of Schwartz-Bruhat
functions on ${\bf\mathfrak g}$ defined by
$$\Phi_G(X,f):=\int_{G/C_G(X)} f(g^{-1}Xg){\operatorname d}^{\ast}g,
$$
where $C_G(X)$ is the centralizer of $X$ in $G$, and ${\operatorname d}^{\ast}g$ is
the invariant measure on $G/C_G(X)$.
\end{definition}
The natural question (posed by T.C. Hales, \cite{Tom.p-adic}) is,
can one use motivic integration to compute the orbital integrals in a
$p$-independent way?
Using all the terminology introduced above, we can rephrase this question:
\emph{Suppose we have fixed a definable test function $f$.
Is the orbital integral $\Phi_G(X,f)$ a constructible function of $X$?}
It looks like a constructible function, because we start with a definable
function $f(g^{-1}Xg)$ of two variables $X$ and $g$, and then integrate
with respect to one of the variables -- so by the main result of the
theory of motivic integration, we should get a constructible function of the
remaining variable. The difficulty, however, lies in the
fact that the space of integration and the measure $d^{\ast}g$ on it vary
with $X$.
The initial approach taken in \cite{Tom.orbital} and \cite{CH} was to
average the orbital integral over definable sets of elements $X$, and then use
local constancy results to make conclusions about
the individual ones.
In \cite{CHL}, the authors start with the definability of field extensions
(which leads to the definability of centralizers), and gradually prove that
all ingredients of the definitions of the so-called $\kappa$-orbital
integrals appearing in the Fundamental Lemma are definable.
Consequently,
\begin{theorem}(Cluckers-Hales-Loeser, \cite{CHL})\label{thm:transfer}
The transfer principle applies to the Fundamental Lemma.
\end{theorem}
It follows, in particular, from the main results of \cite{CHL}
that the answer to our question is affirmative: $\Phi_G(X,f)$ is a
constructible function of $X$ when $f$ is a
fixed definable function.
We also observe that the results of \cite{CH} give quite precise information
about the restriction of this constructible function to the set of
so-called {\bf good} elements. This direction is pursued further in
\cite{CGS} with the hope of developing an actual algorithm for
{computing} orbital integrals.
\subsection{Harish-Chandra characters}
Let $G$ be a $p$-adic group, and let $\pi$ be a representation of $G$.
The Harish-Chandra distribution character of $\pi$ is also defined as an
integral over $G$, so it is natural to ask if the character is motivic as well.
The main difficulty in answering this question is that the construction of
representations has many ingredients, and does not a priori appear to
be a definable construction. However, if one adds additive characters of the
field to the language (for example, by passing to the exponential functions
as discussed in Section~\ref{sub:fourier}), then it is very likely that the
construction of representations can be carried out within the language.
Some partial results stating that certain classes
of Harish-Chandra characters, when restricted to the neighbourhood of
the identity, are motivic, appear in \cite{G} for depth-zero representations of classical groups, and in \cite{CGS} for certain positive depth
representations.
To give a flavour of a motivic calculation that appears when dealing with characters (and orbital integrals),
we have included Appendix 2, where we compute
the motivic volume of a set that is relevant to the values of
characters of depth zero representations of
$G=\text{SL}(2,K)$, where $K$ is a $p$-adic field.
Many more calculations of this kind can be found in \cite{CG}.
\subsection{Motivic exponential Functions, and Fourier
transform}\label{sub:fourier}
In \cite{CLF}, R. Cluckers and F. Loeser developed a complete theory of Fourier transform for the motivic measure described above.
Here we sketch the main features of this theory, since it is used in the proof
of Theorem \ref{thm:transfer}, and is certain to find many
other applications.
\subsubsection{Additive characters}\label{subsub:characters}
We start by recalling the information about
additive characters of valued and finite fields.
First, for a prime field $\ring{F}_p$, we can identify the elements of the field
with the integers $\{0,1,\dots,p-1\}$. Then one character of the additive group
of $\ring{F}_p$ can be written explicitly as
$x\mapsto \exp\left(\frac{2\pi i}{p} x\right)$, and it generates the dual group $\hat\ring{F}_p$ of $\ring{F}_p$.
For a general finite field $\ring{F}_q$ with $q=p^r$, we can explicitly write down
one character by composing our generator of $\hat\ring{F}_p$ with the trace map:
\begin{equation}\label{eq:generator}
\psi_0:x\mapsto \exp\left(\frac{2\pi i}{p}\text{Tr}_{\ring{F}_q/\ring{F}_p}(x)\right).
\end{equation}
This way to write the character allows us to talk about characters as ``exponential functions'', and this will be used in the next subsection.
Given this character, we can identify the additive group of $\ring{F}_q$ with
its Pontryagin dual via the map $a\mapsto\psi_0(ax)$.
The additive group of a local field $K$ is self-dual in a similar way.
If $\psi:K\to\ring{C}^{\ast}$ is a nontrivial character, then
$a\mapsto \psi(ax)$ gives an isomorphism between $K$ and $\hat K$.
In particular, in agreement with our choice of the identification of $\ring{F}_p$ with $\{0,\dots, p-1\}$, and of $\psi_0$ made in
(\ref{eq:generator}), we will, for each local field $K$ with the residue field
$k_K$, consider the
collection ${\mathcal D}_K$ of additive characters $\psi:K\to \ring{C}^{\ast}$
satisfying
\begin{equation}\label{eq:D_K}
\psi(x)=\exp\left(\frac{2\pi i}{p}{\text{Tr}}_{k_K}(\bar x)\right)
\end{equation}
for $x\in {\mathcal O}_K$, where $p$ is the characteristic of $k_K$,
$\bar x\in k_K$ is the reduction of $x$ modulo the uniformizer $\varpi_K$, and
$\text{Tr}_{k_K}$ is the trace of $k_K$ over its prime subfield.
Any character from this collection can serve to produce an isomorphism between
$K$ and $\hat K$.
An example of a character from ${\mathcal D}_K$ is constructed in
\cite[2.2]{Tategordon}. {It} is also naturally an exponential function.
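For instance, for $K=\Qgordon_p$ with $\varpi_K=p$, one checks directly that the character
$$
\psi(x)=\exp\bigl(2\pi i\,\{x/p\}_p\bigr),
$$
where $\{\cdot\}_p$ denotes the $p$-adic fractional part, belongs to ${\mathcal D}_K$: for $x\in\ring{Z}_p$ with reduction $\bar x$, identified as above with an integer in $\{0,\dots,p-1\}$, we have $\{x/p\}_p=\bar x/p$, so (\ref{eq:D_K}) holds.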
\subsubsection{Constructible exponential Functions}
One starts by formally adding exponential functions to the definable world.
There are two kinds of exponentials one needs to add: the ones defined
over the valued field, and the ones defined over the residue field.
For $Z$ in ${\text{Def}}_k$, the category ${\text{RDef}}_Z^{\text{exp}}$ consists of triples
$(Y\to Z,\xi,g)$, where $Y$ is in ${\text{RDef}}_Z$, and $\xi$, $g$ are morphisms in ${\text{Def}}_k$, $\xi:Y\to h[0,1,0]$, and $g:Y\to h[1,0,0]$.
A morphism $(Y'\to Z,\xi',g') \to (Y\to Z,\xi,g)$ in ${\text{RDef}}_Z^{\text{exp}}$
is a morphism $f:Y'\to Y$ in ${\text{Def}}_Z$ such that $\xi'=\xi\circ f$, and
$g'=g\circ f$.
The idea is that $e^{\xi}$ will be an exponential function on $Z$
over the residue field, and $e^g$ -- over the valued field.
We will soon define the Grothendieck ring $K_0({\text{RDef}}_Z^{\text{exp}})$.
The class $[Y\to Z,\xi, g]$ will be suggestively denoted by
${\bf e}^{\xi}E(g)[Y\to Z]$.
Before we describe the relations that define the Grothendieck ring
$K_0({\text{RDef}}_Z^{\text{exp}})$, let us explain the intended specialization of
constructible exponential functions to the $p$-adic fields.
As in Section~\ref{sec:back}, we will only consider
${\mathcal L}_{\ring{Z}}$-definable functions here. Recall that to interpret motivic
functions, we just needed to fix a field $K$ in ${\mathcal A}_{\ring{Z}}$ or
in ${\mathcal B}_{\ring{Z}}$, and a
uniformizer $\varpi_K$ of the valuation on $K$.
To interpret exponential motivic functions, one needs in addition an element
$\psi_K:K\to \ring{C}^{\ast}$ of the set
${\mathcal D}_K$ of additive characters satisfying
(\ref{eq:D_K}), as in Subsection~\ref{subsub:characters}.
Now suppose we have a triple $\varphi=(W,\xi,g)\in {\text{RDef}}_Z^{\exp}$, where $W$
is an ${\mathcal L}_{\ring{Z}}$-definable subassignment equipped with an
${\mathcal L}_{\ring{Z}}$-definable morphism $\pi:W\to Z$,
and $\xi, g$ -- ${\mathcal L}_{\ring{Z}}$-definable morphisms
from $W$ to $h[0,1,0]$ and
$h[1,0,0]$, respectively.
For every $\psi_K$ in ${\mathcal D}_K$, we make a function
$\varphi_{K,\psi_K}:Z_K\to \ring{C}$. Recall that the morphisms $\xi$ and $g$ give
the functions $\xi_K:W_K\to k_K$ and $g_K:W_K\to K$ (all well defined when the residue characteristic of $K$ is large enough). We define the function
$\varphi_{K, \psi_K}:Z_K \to \ring{C}$ by:
\begin{equation}\label{eq:interp}
z\mapsto
\sum_{y\in \pi_K^{-1}(z)}\psi_K(g_K(y))
\exp\left(\frac{2\pi i}{p}\text{Tr}_{k_K}(\xi_K(y))\right).
\end{equation}
Now we are ready to define the Grothendieck ring $K_0({\text{RDef}}_Z^{\text{exp}})$ that will play the same role as the ring $K_0({\text{RDef}}_Z)$ played in the definition of constructible motivic functions in Section~\ref{sub:constr.f}.
The first relation is, as expected:
\begin{multline}\label{eq:rel1}
[(Y\cup Y')\to Z, \xi,g]+[(Y\cap Y')\to Z, \xi_{Y\cap Y'}, g_{Y\cap Y'}]\\
=[Y\to Z,\xi_Y,g_Y]+[Y'\to Z,\xi_{Y'},g_{Y'}].
\end{multline}
for $Y,Y'\in {\text{RDef}}_Z$, and $\xi, g$ defined on $Y\cup Y'$.
The next relation is needed to take care of
the restrictions of the exponential functions on the valued field to
the residue field.
For a function $h:Y\to k[[t]]$, denote by $\bar h$ its reduction
$\mod (t)$, so that $\bar h:Y\to \ring{A}^1_k$.
The second relation is:
\begin{equation}\label{eq:rel2}
[Y\to Z, \xi, g+h]=[Y\to Z, \xi+\bar h, g]
\end{equation}
for $h:Y\to h[1,0,0]$ a definable morphism with ${\text{ord}}(h(y))\ge 0$ for all
$y\in Y$.
Note that this condition becomes very natural in view of the interpretation (\ref{eq:interp}) and the condition (\ref{eq:D_K}) on the character $\psi_K$.
The third relation encompasses the fact that the integral of a character (of the residue field) over the field is zero. It postulates that
\begin{equation}\label{eq:rel3}
[Y[0,1,0]\to Z, \xi+p, g]=0
\end{equation}
when $p:Y[0,1,0]\to h[0,1,0]$ is the projection onto the second factor, and
the morphisms $Y[0,1,0]\to Z$, $\xi$, and $g$ factor through the projection
$Y[0,1,0]\to Y$. \footnote{Note that when $Y=Z=h_{\mathnormal{\mathrm{Spec\,}} k}$ is a point, this statement literally amounts to the sum of the values of the character over the
finite field being $0$. So, in general, this is {the} statement {that} the sum of the character over the {fibre} of $Y[0,1,0]$ over each point $y\in Y$ equals $0$.}
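For the prime field, the fact underlying relation (\ref{eq:rel3}) is simply that the $p$-th roots of unity sum to zero,
$$
\sum_{a=0}^{p-1}\exp\Bigl(\frac{2\pi i}{p}\,a\Bigr)=0,
$$
and for a general $\ring{F}_q$ the same follows because the trace map $\ring{F}_q\to\ring{F}_p$ is surjective with all fibres of the same size $q/p$.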
Finally, the additive group of the Grothendieck ring $K_0({\text{RDef}}_Z^{\exp})$
is defined as the group of formal linear combinations of equivalence classes of triples $[Y\to Z, \xi, g]$ as above, modulo the subgroup generated by the relations (\ref{eq:rel1}), (\ref{eq:rel2}), and (\ref{eq:rel3}).
It turns out that one can define multiplication on this set, so that the subgroup generated by (\ref{eq:rel1}), (\ref{eq:rel2}), and (\ref{eq:rel3}) is an
ideal \cite[Lem.~3.1.1]{CLF}, making $K_0({\text{RDef}}_Z^{\exp})$ into a ring.
This ring is used instead of $K_0({\text{RDef}}_Z)$ in the definition of
constructible \emph{exponential} functions.
In \cite{CLF}, integration of constructible exponential functions is defined
(along with the class of integrable functions), and that allows {one} to
define {the} Fourier transform (satisfying all the expected properties).
The specialization principle holds
for constructible {exponential} functions as well, \cite[Th.~9.1.5]{CLF}.
Namely, given a local field $K$,
if we start with a constructible exponential function, integrate
it motivically, and then specialize the result to $K$
(using a fixed character $\psi\in {\mathcal D}_K$)
according to the formula (\ref{eq:interp}), we would get the same result as if we had done the specialization (using the same character) first,
and then integrated it with respect to the classical $p$-adic measure,
when the residue characteristic $p$ is large enough.
\section{Appendix 1: the older theories}\label{geo}
Here we give very brief outlines of geometric motivic integration,
and arithmetic motivic integration according to \cite{DL.arithm}, in order to point out the relationship of \cite{CL} with these older theories,
and their relative features. In a sense, we are assuming some familiarity
with geometric motivic integration, though the basic idea is sketched below.
There are excellent expositions \cite{blickle}, \cite{veys}.
\subsection{Arc spaces and geometric motivic measure}\label{sub:geo}
In the original theory of motivic integration,
the motivic measures live on arc spaces of algebraic varieties and
take values in a certain completion of the Grothendieck ring of the
category of all algebraic varieties over $k$.
Let $X$ be a variety over $k$.
The arcs are ``germs of order $n$ maps from the unit interval into $X$''.
Formally the space of arcs of order $n$ is defined as the scheme $\mathfrak{L}_n(X)$
that represents the functor
defined on the category of $k$-algebras by
$$R\mapsto \mathnormal{\mathrm{Mor}_{k-{schemes}}}(\mathnormal{\mathrm{Spec\,}} R[t]/t^{n+1}R[t],
X).$$
The {\bf space of formal arcs on $X$}, denoted by
$\mathfrak{L}(X)$, is the inverse limit
$\lim\limits_{\longleftarrow}\mathfrak{L}_n(X)$
in the category of $k$-schemes of the schemes $\mathfrak{L}_n(X)$.
The set of $k$-rational points of
$\mathfrak{L}(X)$ can be identified with the set of points of $X$ over
$\ta{k}$, that is,
$$\mathnormal{\mathrm{Mor}_{k-{schemes}}}(\mathnormal{\mathrm{Spec\,}}\ta{k},X).$$
There are canonical morphisms
$\pi_n : \mathfrak{L}(X)\to \mathfrak{L}_n(X)$ -- on the set of points, they correspond to
truncation of arcs. In particular, when $n=0$,
we get the the natural projection $\pi_X:\mathfrak{L}(X)\to X$.
We use only the arc space of the $m$-dimensional affine space
in these notes, so all that we need about arc spaces is
essentially contained in the next example.
\begin{example} {(The arc space of the affine line $\mathfrak{L}(\ring{A}^1)$.)}
By definition, $\mathfrak{L}_n(\ring{A}^1)$ represents
the functor
\begin{align*}
R\mapsto {\mathrm{Mor}}(\mathnormal{\mathrm{Spec\,}} R[t]/t^{n+1}R[t],\ring{A}^1)=\mathrm{Mor}(k[x],R[t]/t^{n+1}R[t])\\
\cong
R[t]/t^{n+1}R[t]\cong R^{n+1}.
\end{align*}
Hence, $\mathfrak{L}_n(\ring{A}^1)\cong\ring{A}^{n+1}$, and the natural projection
$\mathfrak{L}_{n+1}(\ring{A}^1)\to\mathfrak{L}_n(\ring{A}^1)$ corresponds to the map
$R[t]/t^{n+2}R[t]\to R[t]/t^{n+1}R[t]$ that takes
$P\in R[t]/t^{n+2}R[t]$ to $(P\pmod{t^{n+1}})$, which, in turn,
corresponds to the map $(T_0,\dots,T_{n+1})\mapsto(T_0,\dots,T_n)$
from $\ring{A}^{n+2}$ to $\ring{A}^{n+1}$.
We conclude that the inverse limit of the system $\mathfrak{L}_n(\ring{A}^1)$ coincides with
the inverse limit of the spaces $\ring{A}^n$ with natural projections.
\end{example}
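More generally, the same computation applied coordinate by coordinate gives $\mathfrak{L}_n(\ring{A}^m)\cong\ring{A}^{m(n+1)}$, and the set of $k$-points of $\mathfrak{L}(\ring{A}^m)$ is identified with $(\ta{k})^m$; this is the only case that will be used in these notes.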
For simplicity, assume that the variety $X$ is smooth. When $X$ is not
smooth, the theory still works, but there is much more technical detail
(this constitutes the essence of \cite{DL}. See \cite{blickle} for
an exposition).
A set ${\mathcal C}\subset \mathfrak{L}(X)$ is
called {\bf cylindrical} if it
is of the form $\pi_n^{-1}(C)$ where $C$ is a constructible subset of
$\mathfrak{L}_n(X)$.
Let ${\mathcal C}$ be a cylinder
with constructible base
$C_n=\pi_n({\mathcal C})\subset \mathfrak{L}_n(X)$.
Then the motivic volume of ${\mathcal C}$ is by definition
$\ring{L}^{-n\dim(X)}[C_n]$, which is an element of $K_0({\text{Var}}_k)[\ring{L}^{-1}]$
(see Section~\ref{sub:gr.rings} for the definition of this ring).
The \emph{geometric motivic measure} was initially defined as
an additive function on an algebra
of subsets of the space $\mathfrak{L}(X)$ that had a good approximation by cylindrical sets, with values in
a completion of $K_0({\text{Var}}_k)[\ring{L}^{-1}]$.
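As a quick consistency check with these conventions, take $X=\ring{A}^1$ and let ${\mathcal C}$ be the set of arcs $x(t)$ with $x(0)=0$. For every level $n\ge 1$ we have ${\mathcal C}=\pi_n^{-1}(C_n)$ with $C_n=\{(T_0,\dots,T_n)\mid T_0=0\}\cong\ring{A}^n$, and the definition gives
$$
\ring{L}^{-n\cdot 1}[C_n]=\ring{L}^{-n}\cdot\ring{L}^{n}=1,
$$
independently of the level $n$ chosen.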
\subsection{Toward arithmetic motivic measure}
As we see from Section~\ref{sub:geo}, one could think of
$\ring{F}_q[[t]]$-points of a
variety as $\ring{F}_q$-points of its arc space. It is also possible to think of
$X(K)$ as $\mathfrak{L}(X)(\ring{F}_q)$ for a characteristic zero local field
$K$ with residue field $\ring{F}_q$. So the first idea would be to assign measures
to cylindrical sets as above. However, there are two major problems with
this approach. First, the tools of the theory of geometric motivic integration that deal with singularities do not work when the residue field has finite characteristic, and restricting ourselves just to cylindrical subsets with a smooth base leaves us with far too few measurable sets. Of even greater
importance is the issue that when the residue field is not algebraically
closed, the action of the Galois group becomes important, and this Galois
action varies with $p$. It turns out that Chow motives are ideally suited for keeping track of Galois action, and this is why arithmetic motivic measure takes values in a localization of $K_0({\text{Mot}}_k)$ as opposed to $K_0({\text{Var}}_k)$, which is
sufficient when the field $k$ is algebraically closed.
For these reasons, the original theory of arithmetic motivic integration developed in \cite{DL.arithm} is quite different from geometric motivic integration, and is based more on logic than on algebraic geometry.
Fix the base field $k$ of characteristic $0$ (for example, $k=\Qgordon$).
Let us first look at geometric motivic integration on $\ring{A}^m$ from the point
of view of definable subassignments rather than arc spaces of varieties.
The basic measurable sets for geometric motivic measure are stable cylinders in the arc space
$\mathfrak{L}(\ring{A}^m)$.
Recall that a point in $\mathfrak{L}(\ring{A}^m)$ can be thought of as an $m$-tuple of power series.
Let ${\mathcal C}=\pi_n^{-1}(C_n)$ be a cylinder as in the previous subsection.
Suppose for simplicity that the set $C_n=\pi_n({\mathcal C})$ is defined just by one polynomial equation $f(\underline{x}^{(n)})=0$, where
$f(\underline {x}^{(n)})=f(x_1^{(n)},\dots, x_m^{(n)})$ is a polynomial with coefficients in $k$, and
$\underline{x}^{(n)}$ is an $m$-tuple of truncated power series, with
each coordinate $x_i^{(n)}$, $i=1,\dots, m$ being a polynomial in $t$ of degree $n-1$.
\begin{exercise}
The cylinder ${\mathcal C}$ is given by the following formula in the language of Denef-Pas:
$$
\phi_{\mathcal C}(\underline{x})=\exists \underline{y}\colon {\text{ord}}(\underline{x}-\underline{y})\ge n\quad\wedge
\quad f(\underline{y})=0.
$$
Here $\underline{x}$ and $\underline{y}$ are $m$-tuples of variables
ranging over
$k[[t]]$.
\end{exercise}
Thus, geometric motivic volume of the cylinder ${\mathcal C}$ is obtained in the following way:
the polynomial $f(\underline{x})$, (where $\underline{x}$ is an $m$-tuple of variables of the valued field
sort) is replaced by the collection of ``truncated'' polynomials
$f(\underline{x}^{(n)})$,
where $\underline{x}^{(n)}$ is now an $m$-tuple of polynomials in $t$
of degree $n-1$ with coefficients in the residue field sort ({\it i.e.}, in $k$).
In the next step, the condition $f(\underline{x}^{(n)})=0$ is replaced by the collection of equations stating that
all the coefficients of the resulting polynomial in $t$ equal zero, which defines a constructible set over the
residue
field. Finally, the motivic volume of ${\mathcal C}$ is the class of the constructible set obtained above
multiplied by $\ring{L}^{-nm}$, where $n$ is the
level of truncation.
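For instance (a toy instance of this recipe), take $m=1$, $f(x)=x^2$ and $n=2$, and write $x^{(2)}=a_0+a_1t$ with residue-field variables $a_0,a_1$. Then
$$
f(x^{(2)})=a_0^2+2a_0a_1\,t+a_1^2\,t^2,
$$
and the condition $f(x^{(2)})=0$ becomes the system of ring equations $a_0^2=0$, $2a_0a_1=0$, $a_1^2=0$ over the residue field.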
If we state the basic idea of geometric motivic integration in this form, it becomes natural to
define the corresponding procedure for more general formulas in Denef-Pas
language than just the formulas defining cylinders.
The following two key steps allowed the above construction to work:
first, we were able to replace the formula that had a quantifier over the valued field
with a formula without quantifiers over the valued field; and then
the value of the motivic volume was obtained from a ring formula with
variables
in the residue field (the formula defining the constructible set
$\pi_n({\mathcal C})$).
In general, both of these steps rest on the process of quantifier elimination -- that is, replacing
a formula that has quantifiers with an equivalent formula with no quantifiers (see \cite{Tom.intro}
for a discussion of quantifier elimination in this context).
\subsection{Quantifier elimination}
The following theorem, which is a version of the theorem of Pas, allows one to eliminate all
quantifiers in DVR-formulas except the ones over the residue field.
\begin{theorem}
Suppose that $R$ is a ring of characteristic zero.
Then for any DVR-formula $\phi$ over $R$ there exists a DVR-formula $\psi$ over $R$ which contains no quantifiers running over the valuation ring and
no quantifiers running over $\ring{Z}$, such that:
\begin{enumerate}
\item $\psi\leftrightarrow\phi$ holds in $K[[t]]$ for all fields $K$ containing $R$,
\item $\psi\leftrightarrow\phi$ holds in $\ring{Z}_p$ for all $p\gg0$ when $R=\ring{Z}$.
\footnote{Even if the original formula $\phi$ had no quantifiers over the residue field, the formula
$\psi$ might have them.}
\footnote{Elimination of quantifiers over the value sort is
due to Presburger.
It is because we want this quantifier elimination result to hold that multiplication is not
permitted for variables of the value sort (it is the famous theorem of G\"{o}del that $\ring{N}$ with the
standard operations
does not admit quantifier elimination).
}
\end{enumerate}
\end{theorem}
\begin{theorem}(Ax)\cite[\S~8.2]{FJ}
Algebraically closed fields admit elimination of quantifiers in the language of rings.
\end{theorem}
In particular, this theorem implies the theorem, due to Chevalley,
stating that the image of a constructible set under a projection morphism is
a constructible set.
The situation is different for non-algebraically closed fields; in particular,
the quantifiers over the residue
field of a local field cannot be eliminated in general.\footnote{A theorem due to A. Macintyre \cite{Mac} states that there would be complete quantifier elimination
if we added to the language, for each $d$, the predicate ``$x$ is a $d$-th power in the field''.
So in some sense all quantifiers except in the formulas `$\exists y: y^d=x$' can be eliminated.
This fact is reflected in the theory of Galois stratifications, which is the main tool in the construction of the map that takes the motivic volume of a
definable set into the Grothendieck ring of the category of Chow motives,
discussed in Section~\ref{sub:comp}.}
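To illustrate the difference: over an algebraically closed field the formula $\exists y\colon y^2=x$ is equivalent to the quantifier-free formula $x=x$, since every element is a square. Over an infinite pseudofinite field, on the other hand, the set of squares is infinite and has infinite complement, while a quantifier-free ring formula in the single variable $x$ always defines a finite or cofinite set, so the quantifier cannot be eliminated.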
Ax's theorem is, in some sense, the reason why geometric motivic measure
is so much easier to construct than arithmetic motivic measure. In the easiest case of the stable
cylinder, for example, once we have the ring formula over the residue field, quantifier elimination
produces a quantifier-free formula over the residue field, that is, a constructible set.
\subsection{The original construction of arithmetic motivic measure}
The original construction of arithmetic motivic measure
\cite{DL.arithm} follows these steps.
\begin{enumerate}
\item[0.] We start with a DVR-formula $\phi$ or, equivalently, with a definable subassignment of the functor
$h_{\mathfrak{L}(\ring{A}^m)}$. When interpreted over a $p$-adic field, the formula $\phi$ gives a measurable
set (in the classical sense).
\item[1.] For every positive integer $n$, the definable subassignment $h$
defined by $\phi$ can be truncated at level $n$. By Pas's theorem on elimination of quantifiers,
the truncated subassignment $h_n$ is definable by a ring formula $\psi_n$ over the residue field
(note that the number of variables of $\psi_n$ depends
on $n$).\footnote{
By
analogy with the corresponding notion in the construction of Lebesgue measure, one can say that
the formulas $\psi_n$ define the ``outer'' approximations to the set defined by the formula
$\phi$ (see \cite{Tom.intro}).}
\item[2.] We consider the class of the formula $\psi_n$ in $K_0({\text{PFF}}_k)$. At this step, essentially,
the formulas that should have the same motivic volume are getting identified.
\item[3.] There is a map from $K_0({\text{PFF}}_k)$ to the Grothendieck
ring of the category of Chow motives. One takes the virtual Chow motive $M_n$
associated with $[\psi_n]$.
\item[4.] In the original construction of \cite{DL.arithm}, the ring of virtual Chow motives is completed
in a similar way to the completion of the ring of values of the geometric motivic measure.
Finally, the Chow motive associated with the definable subassignment $h$ is the limit, in this completed ring, of the sequence $M_n\ring{L}^{-nd}$,
where $d$ is the dimension of $h$.
\end{enumerate}
\begin{remark}
As we have seen from Section~\ref{sub:geo}, the construction of geometric motivic measure follows the same steps,
with the following simplifications: In Step 2, we need to consider
the equivalence relation on formulas that
comes not from comparing them on pseudofinite fields, but from comparing them on algebraically closed fields.
Instead of the complicated Step 3, one can apply quantifier elimination over algebraically closed fields to the formula $\psi_n$
to obtain the class of a constructible set, that is, an element of $K_0(\text{Var}_k)$.
\end{remark}
\subsection{Measurable sets in different theories}
It is worth pointing out that almost every variant of motivic
integration has a
slightly different algebra of measurable sets.
In the very first papers on motivic integration, {{\it{e.g.}}} \cite{DL.McKay},
the measurable sets
were the semi-algebraic sets, and then later $k[t]$-semi-algebraic sets.
In geometric motivic integration that developed later,
the basic measurable sets are
stable cylinders. In \cite[Appendix]{DL.McKay}, a measure theory and a
$\sigma$-algebra of measurable sets that includes stable cylinders
is worked out.
Looijenga \cite{looijenga} describes a slightly different version of
geometric motivic integration as well.
Here we make a few remarks about the relationships between the algebras of measurable sets in all these articles.
\subsubsection{Cylinders {\it vs.} semi-algebraic sets}
The algebra of sets definable in Denef-Pas language with coefficients in
$k[t]$ specializes to the algebra of the $k[t]$-semi-algebraic sets.
It follows from J. Pas's theorem on quantifier elimination that
if the set $A$ is semi-algebraic, then $\pi_n(A)$ is a constructible
subset of ${\mathcal L}_n(X)$.
This statement ultimately implies that the algebra of semi-algebraic subsets is contained in the
algebra of measurable sets of \cite[Prop.~1.7 (2)]{DL.McKay}.
The advantage of working with measurable sets that are well approximated by cylinders is that this algebra is more geometric, and bigger than the algebra of $k[t]$-semi-algebraic sets. The main disadvantage is that one needs to complete the ring $K_0({\text{Var}}_k)$ in order to define the measure on this algebra.
On the other hand, the algebra of $k[t]$-semi-algebraic sets possesses two
advantages:
first, it is this algebra that we can get by specializing the theory of Cluckers and Loeser to algebraically closed fields, and so it follows that it is
not necessary to complete
the ring $K_0({\text{Var}}_k)$ in order to define the restriction of the
motivic measure to this algebra -- inverting $\ring{L}$ and $1-\ring{L}^{-n}$, $n>0$
is sufficient.
Second, it is this algebra that (at the moment) appears in all the
generalizations
of the motivic measure theory ({\it i.e.}, in motivic integration on formal schemes \cite{Sebag}).
\subsubsection{Denef and Loeser {\it vs.} Looijenga}
Denef and Loeser work directly with subsets of the underlying topological space of the
$k$-scheme ${\mathcal L}(X)$, whereas Looijenga considers
subsets of the space of sections of its {structure}
morphism (as a scheme over $\mathnormal{\mathrm{Spec\,}} k[[t]]$) which is in bijection with the
set of \emph{closed} points
of ${\mathcal L}(X)$.
Thus, the algebra of measurable subsets constructed in \cite{looijenga}
is the restriction of the algebra of measurable sets of \cite{DL}
to the set of closed points
of ${\mathcal L}(X)$.
The advantage of the approach in \cite{looijenga}
is that it makes no difference between the schemes $X$ originally defined over $k$ {\it vs.} the schemes
$X$ defined over $R=k[[t]]$, which is sometimes very useful in applications.
Indeed, if $X$ is defined over $k$, it can be base changed to $k[[t]]$:
set ${\mathcal X}:=X\times_{\mathnormal{\mathrm{Spec\,}} k}\mathnormal{\mathrm{Spec\,}} k[[t]]$,
and then the set ${\mathcal X}_{\infty}$ of \cite{looijenga}, which is in bijection with the set of closed points of $\mathfrak{L}(X)$, is defined as the set of sections of the structure
morphism of the scheme ${\mathcal X}$.
\section{Appendix 2: an example}
Here we do in detail a calculation of the motivic volume of a set that is
relevant to character values of depth zero representations of
$G=\text{SL}(2,K)$ for a local field $K$.
Complete character tables for depth zero representations of $\text{SL}(2,K)$
restricted to the set of topologically unipotent elements appear in
\cite{CG}, and we refer to that article for the detailed explanation {as to} why these
sets appear.
Roughly, the calculation goes as follows. Depth zero representations are
obtained from representations of finite groups by inflation to
a maximal compact subgroup followed by compact induction.
There is a well-known Frobenius formula for the character of an induced
representation, and it applies in this situation as well. Let $H$ be a compact
set of topologically unipotent elements of $G$, let $f_H$ be the characteristic function of this set. Let $\pi$ be a depth zero representation that is induced from the maximal compact subgroup $G_x=\text{SL}(2, {\mathcal O}_K)$, and let
$\Theta_{\pi}$ be its Harish-Chandra character.
Then (see \cite{CG})
$$
\Theta_{\pi}(f_H)=\mu(G_x)\int_{G/G_x}\int_H\chi_{x,0}(g^{-1}hg)\,dh\,dg,
$$
where $\chi_{x,0}$ is the inflation to $G_x$ of the character of the
representation of $\text{SL}(2,\ring{F}_q)$ that $\pi$ restricts to.
It is, therefore, natural that the volume of the set of elements $g\in G$
such that the element $g^{-1}hg$ is in $G_x$ and projects under
reduction $\mod \varpi$
to a given unipotent conjugacy class of $\text{SL}(2,\ring{F}_q)$ is the key to
the value of the character at $h$.
The following calculation appears when we take $h$ of the form
$h=\left[\begin{smallmatrix} 0 & \varpi^n u\\
\epsilon\varpi^{n} u& 0\end{smallmatrix}\right]$, and the unit $u$ is a square.
Even though this calculation is included as an explicit example of a computation of a motivic volume, we are not using the technique of inductive application
of cell decomposition (since there are four valued field variables, it would have been too tedious a process). Instead, we do the calculation {\it ad hoc}, using the older approach through the outer motivic measures that is sketched in the previous appendix.
However, our motivic volume depends on a residue-field parameter, so we are using the language and the results of \cite{CL} as well.
Let us first introduce an abbreviation for the
``reduction $\mod \varpi$'' map: let
$$
\bar x=\begin{cases}
\overline{\text{ac}}(x), & {\text{ord}}(x)=0\\
0,& {\text{ord}}(x)>0.
\end{cases}
$$
\begin{example}
Let us consider the family of formulas depending on a parameter
$\eta$ that ranges over the set of non-squares in $\ring{F}_q$ (note that this
is a definable set):
\begin{equation}
\phi_{\eta}(a,b,c,d)\quad=\text{`}ad-bc=1 \quad\wedge\exists\, \xi
(\bar b^2-\bar d^2\eta=\xi^2)\text{'}.
\end{equation}
We claim that the motivic volume of the set defined by $\phi_{\eta}$ is independent of $\eta$ and equals $\frac12\ring{L}(\ring{L}-1)(\ring{L}+1)$.
\begin{proof}
Consider the formula
\begin{equation}
\Phi(a,b,c,d,\eta)
=\text{`}
ad-bc=1\ \wedge
\exists\, \xi \neq 0(\bar d^2-\bar b^2\eta=\xi^2)
\ \wedge
\nexists\, \beta (\eta=\beta^2)
\text{'}.
\end{equation}
In this formula, four variables $a,b,c,d$ range over the valued
field, the variable $\eta$ ranges over the residue field, and
all the quantifiers range over the residue field.
It defines a subassignment of $h[4,1,0]$, which corresponds to the
disjoint union of the subassignments of $h[4,0,0]$ defined by the formulas
$\phi_{\eta}$ over all non-squares $\eta$.
\subsubsection{Step 1. Reduction to the residue field}
The formula $\Phi$ can be broken up into two parts according to whether
$b$ is a unit: $\Phi=(\Phi\wedge ({\text{ord}}(b)=0))\vee (\Phi\wedge ({\text{ord}}(b)>0))$.
We start by showing that the subassignment defined by $\Phi\wedge
({\text{ord}}(b)=0)$ is
stable at level $0$, {\it i.e.}, that it is essentially ``inflated'' from
the finite field. In order to do this, we need to introduce an
abstract variety $V$ that plays the role of the ``projection'' of this
formula to the residue field.
Let $k$ be an arbitrary field of characteristic zero
(the theory of arithmetic motivic integration tells us that we should
think of $k$ as a pseudofinite field).
Consider the subvariety $V$ of $\ring{A}^4$ over $k$ cut out
by the equation $x_1^2-x_2^2x_3=x_4^2$. It has no singularities
outside the hyperplane $x_2=0$ (note that this statement is true in any
characteristic greater than $2$).
Recall the notation $[\phi]$ for the motivic volume of a formula
$\phi$ in the language of rings. Let
\begin{equation}\label{eqn:M1}
{\mathbb M}_1:=[\text{`}(x_1^2-x_2^2x_3=x_4^2)\wedge (x_2\neq 0)\wedge (x_3\neq 0)\wedge (x_4\neq
0)\wedge
(\nexists \beta ~ (x_3=\beta^2))\text{'}].
\end{equation}
Consider the formula
\begin{multline}
\Phi_1(b,d,\eta,\xi):=\\
\text{`}
(\bar d^2-\bar b^2\eta=\xi^2)\wedge (\xi\neq 0)\wedge
({\text{ord}}(b)=0)\wedge (\eta\neq 0)\wedge(\nexists\,\beta~(\eta=\beta^2)
)\text{'}
\end{multline}
Set $x_1=\bar d$, $x_2=\bar b$, $x_3=\eta$, $x_4=\xi$. This
``reduction'' takes the formula $\Phi_1$ exactly to the ring formula
that appears in the right-hand side of (\ref{eqn:M1}). Since it is
mutually exclusive with the formula $x_2=0$ that defines a set
containing the singular locus of $V$,
the subassignment of $h[2,2,0]$ defined by the formula $\Phi_1$
is stable at level $0$, and its motivic volume equals ${\mathbb M}_1$.
Let
$\Phi_2(b,d,\eta)$ be the formula
$$
\text{`}\exists\xi~
(\bar d^2-\bar b^2\eta=\xi^2)\wedge (\xi\neq 0)
\wedge ({\text{ord}}(b)=0)\wedge(\eta\neq 0)\wedge(\nexists \beta (\eta=\beta^2))
\text{'}.
$$
The formula $\Phi_1$ is a double cover of $\Phi_2$, so
$\mu(\Phi_2)=\frac12\mu(\Phi_1)=\frac12{\mathbb M}_1$.
Finally, consider
the projection $(d,b,c,a,\eta)\to(d,b,\eta)$.
The subassignment defined by $\Phi\wedge ({\text{ord}}(b)=0)$
projects to the subassignment
defined by $\Phi_2$, and the volume
of each fibre of this projection is $\ring{L}$: indeed, given that $b$ is
a unit, for every value of $a$, there is unique $c$ such that $ad-bc=1$.
Hence, we have $\mu(\Phi\wedge ({\text{ord}}(b)=0))=\frac 12\ring{L}{\mathbb M}_1$.
\subsubsection{Step 2. Independence of the parameter $\eta$}
This step consists in the observation that for each
$\eta_1, \eta_2\in K^{\ast}\setminus {K^{\ast}}^2$, there is a
definable bijection between the triples $(x_1,x_2,x_4)$ and
$(x_1',x_2',x_4')$
such that
$x_1^2-x_2^2\eta_1=x_4^2$ and ${x_1'}^2-{x_2'}^2\eta_2={x_4'}^2$: the {bijection}
is defined by the formula
\begin{multline*}
\Psi(x_1,x_2,x_4,x_1',x_2',x_4',\eta_1,\eta_2)\\
=\text{`}\exists\,
y (\eta_1=\eta_2y^2) \wedge (x_1'=x_1) \wedge (x_2'=x_2y)
\wedge (x_4'=x_4)
\wedge \nexists\, \beta (\eta_1=\beta^2)\text{'}.
\end{multline*}
Corollary \cite[14.2.2]{CL} together with
Remark \cite[14.2.3]{CL} implies
that if the motivic volume is constant on the fibres, then the total
volume is the volume of the fibre times the class of the base.
It follows that for each $\eta\in\ring{F}_q\setminus{\ring{F}_q}^2$,
we have
$$
\mu(\phi_{\eta}\wedge ({\text{ord}}(b)=
0))=\frac2{\ring{L}-1}\mu(\Phi\wedge({\text{ord}}(b)=0)).
$$
\subsubsection{Step 3. A residue-field calculation: finding ${\mathbb M}_1$}
Note that it is in this step that we see the conic promised in the
introduction.
We start by considering abstract varieties again. Recall the variety
$V$ defined in Step 1. Let us denote the coordinates on $\ring{A}^3$ by
$(t,s,e)$, and consider the subvariety $V_2$ of $\ring{A}^3$ defined by
the equation $t^2-s^2=e$.
Consider the birational map
$(x_1,x_2,x_3,x_4)\mapsto(x_2,\frac{x_1}{x_2},x_3,\frac{x_4}{x_2})$ from the
variety $V$ to the variety $V_2\times\ring{A}^1$. It is an
isomorphism between the open sets $x_2x_3\neq 0$ in $V$ and
$e\neq 0$ in $V_2\times \ring{A}^1$.
Then the class ${\mathbb M}_1$ equals $[\text{`}\nexists\beta\,
(t^2-s^2=\beta^2)\text{'}](\ring{L}-1)$. It remains to calculate the class
$[\nexists\beta\, (t^2-s^2=\beta^2)]$.
The class $\ring{L}^2$ of the $(t,s)$-plane breaks up into the sum of the three classes:
$$
\ring{L}^2=
[\nexists\beta (t^2-s^2=\beta^2)]+
[\exists \beta (t^2-s^2=\beta^2)\wedge \beta\neq 0]+[t^2-s^2=0].
$$
It is easy to see that $[t^2-s^2=0]=2(\ring{L}-1)+1$: the locus
$t^2-s^2=0$ is the union of the two lines $t=s$ and $t=-s$, which meet
in a single point.
We also have
$[\exists \beta (t^2-s^2=\beta^2)\wedge \beta\neq 0]=\frac12(\ring{L}-1)[x^2-y^2=1]
=\frac12(\ring{L}-1)(\ring{L}-1)$.
Therefore, $[\nexists\beta (t^2-s^2=\beta^2)]=\frac12\ring{L}^2-\ring{L}+\frac12$.
Hence, ${\mathbb M}_1=\frac12(\ring{L}-1)^3$.
Finally, we have:
\begin{equation}\label{eq: the hard volume}
\begin{aligned}
\mu(\phi_{\eta}\wedge ({\text{ord}}(b)=0))&=\frac2{\ring{L}-1}\mu(\Phi\wedge ({\text{ord}}(b)=0))\\
&=\frac2{\ring{L}-1}\frac14(\ring{L}-1)^3\ring{L}=\frac12\ring{L}(\ring{L}-1)^2.
\end{aligned}
\end{equation}
\subsubsection{Step 4. Completing the proof}
It is easy to calculate the motivic volume of the remaining part
$\phi_{\eta}\wedge ({\text{ord}}(b)>0)$.
If ${\text{ord}}(b)>0$, then the residue-field condition on $(\bar d,\bar b,\eta)$ becomes
$\text{`}\exists\, \beta\neq 0\, (\bar d^2=\beta^2)\text{'}$.
Clearly, its motivic volume is $(\ring{L}-1)$.
It remains to notice that if ${\text{ord}}(b)>0$,
then the variable $c$ contributes the factor $\ring{L}$,
both $(a,d)$ have to be units, and once $d$ is chosen, $a$ is
determined uniquely by the determinant condition.
Altogether, we get
$\mu(\phi_{\eta}\wedge ({\text{ord}}(b)>0))=\ring{L}(\ring{L}-1)$.
Finally, we get:
$$
\begin{aligned}
\mu(W_{U_{{\varepsilon}},n/2}^{(0)}(h))&=\mu(\phi_{\eta})=
\frac12\ring{L}(\ring{L}-1)^2+\ring{L}(\ring{L}-1)\\
&=\frac12\ring{L}(\ring{L}-1)(\ring{L}+1),
\end{aligned}
$$
which completes the proof.
\end{proof}
\end{example}
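Although the following plays no role in the argument, the comparison of formulas over finite fields suggests a simple plausibility check of the answer: for an odd prime $q$ and a fixed non-square $\eta\in\ring{F}_q$, the number of matrices $\left[\begin{smallmatrix} a& b\\ c& d\end{smallmatrix}\right]$ in $\text{SL}(2,\ring{F}_q)$ such that $b^2-d^2\eta$ is a square is $\frac12 q(q-1)(q+1)$, exactly half of $|\text{SL}(2,\ring{F}_q)|$, which is consistent with the point-counting specialization one expects from the motivic volume computed above. The brute-force script below (a sketch included purely for illustration; the language and the test primes are our choice and are not part of the construction) confirms this count for a few small primes.
\begin{verbatim}
def squares(q):
    return {(x * x) % q for x in range(q)}

def count_matrices(q):
    # count (a,b,c,d) in SL_2(F_q) with b^2 - eta*d^2 a square,
    # for a fixed non-square eta
    sq = squares(q)
    eta = next(x for x in range(1, q) if x not in sq)
    total = 0
    for b in range(q):
        for d in range(q):
            if (b * b - eta * d * d) % q not in sq:
                continue
            if (b, d) != (0, 0):
                total += q   # q choices of (a,c) solve a*d - b*c = 1
    return total

for q in (3, 5, 7, 11):
    print(q, count_matrices(q), q * (q - 1) * (q + 1) // 2)
\end{verbatim}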
\begin{bibdiv}
\begin{biblist}
\bibselect{bibliography}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
\label{Introduction}
Advances in the fabrication and measurement of microelectromechanical
systems (MEMS) and their evolution into nanoelectromechanical systems
(NEMS) have allowed researchers to measure astoundingly small forces,
masses, and displacements
\cite{Schwab:2005,ClelandBook:2002,Ekinci:2005,Blencowe:2004}.
Ultrasensitive mechanical sensors have been used in a variety of
experiments from the measurement of Casimir forces predicted by cavity
quantum electrodynamics \cite{Chan:2001}, to the testing of quantum
theories of gravity at nanometer length scales \cite{Smullin:2005}, to
the measurement of single electron spins \cite{Rugar:2004}. Recently,
micromechanical detectors have provided the clearest picture to date
of persistent currents in normal metallic rings
\cite{Bleszynski:2009}. At this stage, the measurement of forces at
the attonewton level has been reported by several teams of researchers
\cite{Stowe:1997,Mamin:2001,Rugar:2004,Naik:2006,Teufel:2009}. At the
same time, mechanical displacement sensitivities are approaching the
standard quantum limit, i.e.\ the fundamental limit to position
resolution set by back-action effects \cite{Teufel:2009}. NEMS
devices operated at very low temperatures are themselves approaching
the quantum regime, with thermal vibration energies only 100 quanta
above the zero-point motion of the resonators
\cite{Knobel:2003,LaHaye:2004,Naik:2006,Rocheleau:2009}. The
availability of devices with such exquisite force, mass, and
displacement sensitivities has not only allowed for the study of a
wide class of condensed matter physics problems, but it has also led
to new high resolution nano- and atomic-scale imaging techniques.
Magnetic resonance force microscopy, which is reviewed here, has made
important contributions to all of these emerging research areas.
Magnetic resonance force microscopy (MRFM) combines the physics of
magnetic resonance imaging (MRI) with the techniques of scanning probe
microscopy. In MRFM, a nanomechanical cantilever is used to sense the
tiny magnetic force arising between the electron or nuclear spins in
the sample and a nearby magnetic particle. The MRFM technique was
proposed in the early days of scanning probe microscopy as a method to
improve the resolution of MRI to molecular lengthscales
\cite{Sidles:1991,SidlesPRL:1992}. The visionary goal of this
proposal was to eventually image molecules atom-by-atom, so as to
directly map the three-dimensional atomic structure of macromolecules
\cite{SidlesRSI:1992}. Such a ``molecular structure microscope''
would have a dramatic impact on modern structural biology, and would
be an important tool for many future nanoscale technologies. While
this ultimate goal has not been achieved to date, the technique has
undergone a remarkable development into one of the most sensitive spin
detection methods available to researchers today. Among the important
experimental achievements are the detection of a single electronic
spin \cite{Rugar:2004} and the extension of the spatial
resolution of MRI from several micrometers to below ten nanometers
\cite{DegenTMV:2009}.
In this review we discuss the improvements made to the MRFM technique
over the last four years, and present an outlook of possible future
developments. Section \ref{Background} introduces the basics of MRFM
and provides a brief historical overview covering earlier work until
about 2006 (for a broader discussion of this work the reader is
referred to several reviews, for example Refs.
\cite{Sidles:1995,Nestle:2001,Suter:2004,Kuehn:2008,Berman:2006,Hammel:2007,Barbic:2009}).
Section \ref{nMRFMResearch} primarily focuses on work done by the
authors and collaborators while in the MRFM group at the IBM Almaden
Research Center. We discuss the recent experimental advances that
allowed the measurement sensitivity to reach below 100 nuclear spins.
We highlight some of the results enabled by this progress, in
particular the imaging of individual virus particles and organic
nanolayers, both with three-dimensional (3D) resolutions below 10 nm.
We also consider two physical phenomena that become important at these
small length-scales: the role of statistical fluctuations in spin
polarization and the appearance of fast spin relaxation by coupling to
mechanical modes. In Section \ref{Developments} we discuss promising
future directions aimed at improving the sensitivity of nuclear MRFM:
the development of improved magnetic tips, of novel nanomechanical
sensors, and of sensitive displacement transducers. We conclude with
a comparison to other nanoscale imaging and spin detection techniques
in Section \ref{Othertechniques}, and an outlook of future
applications in Section \ref{Outlook}.
\section{Background}
\label{Background}
\subsection{Principle}
\label{Principle}
MRI and its older brother, nuclear magnetic resonance (NMR)
spectroscopy, rely on measurements of the nuclear magnetic moments
present in a sample -- magnetic moments arising from atomic nuclei
with non-zero nuclear spin. In conventional magnetic resonance
detection, the sample is placed in a strong static magnetic field in
order to produce a Zeeman splitting between the nuclear spin states.
The sample is then exposed to a radio-frequency (rf) magnetic field of
a precisely defined frequency. If this frequency matches the Zeeman
splitting (which at a given static field is different for every
non-zero nuclear spin isotope), then the system absorbs energy from
the rf radiation resulting in transitions between the nuclear spin
states. From a classical point of view, the total nuclear magnetic
moment of the sample starts changing its orientation. Once the rf
field is turned off, any component of the total moment remaining
perpendicular to the static field is left to precess about this field.
The precession of this ensemble of nuclear magnetic moments produces a
time-varying magnetic signal that can be detected with a pick-up coil.
The electric current induced in the coil is then amplified and
converted into a signal that is proportional to the number of nuclear
moments (or spins) in the sample. In MRI this signal can be
reconstructed into a 3D image of the sample using spatially varying
magnetic fields and Fourier transform techniques. The magnetic fields
produced by nuclear moments are, however, extremely small: more than
one trillion ($10^{12}$) nuclear spins are typically needed to
generate a detectable signal.
The MRFM technique attempts to improve on the poor detection
sensitivity of inductive pick-up coils by mechanically detecting the
magnetic forces produced by nuclear moments. To grasp the basic idea
behind the method, imagine taking two refrigerator magnets and holding
them close together; depending on the magnets' orientation, they exert
either an attractive or repulsive force. In MRFM, a compliant
cantilever is used to sense the same magnetic forces arising between
the nuclear spins in a sample and a nearby nano-magnet. First, either
the sample (containing nuclear moments) or the nano-magnet must be
fixed to the cantilever. Then, using the techniques of conventional
NMR described above, the nuclear spins are made to periodically flip,
generating an oscillating magnetic force acting on the cantilever. In
order to resonantly excite the cantilever, the nuclear spins must be
inverted at the cantilever's mechanical resonance frequency. The
cantilever's mechanical oscillations are then measured by an optical
interferometer or beam deflection detector. The electronic signal
produced by the optical detector is proportional both to the
cantilever oscillation amplitude and the number of nuclear spins in
the imaging volume. Spatial resolution results from the fact that the
nano-magnet produces a magnetic field which is a strong function of
position. The magnetic resonance condition and therefore the region
where the spins periodically flip is confined to a thin, approximately
hemispherical ``resonant slice'' that extends outward from the
nano-magnet (see Figs.~\ref{fig1} and \ref{fig6}). By scanning the
sample in 3D through this resonant region, a spatial map of the
nuclear spin density can be made.
The advantage of force-detected over inductive techniques is that much
smaller devices can be made. In the latter case, the measurement can
only be sensitive if the nuclear spins significantly alter the
magnetic field within the pick-up coil, i.e.\ if the spins fill a
significant fraction of the coil volume. For spin ensembles with
volumes significantly smaller than (1 $\mu$m)$^3$, it is extremely
challenging to realize pick-up coils small enough to ensure an
adequate filling factor. As a result, even the best resolutions
achieved by inductively detected MRI require sample volumes of at
least (3 $\mu$m)$^3$ \cite{Ciobanu:2002}. Mechanical resonators, in
contrast, can now be fabricated with dimensions far below a micron,
such that the sample's mass (which is the equivalent to the filling
volume in a pick-up coil) is always significant compared to the bare
resonator mass. In addition, mechanical devices usually show resonant
quality factors that surpass those of inductive circuits by orders of
magnitude, resulting in a much lower baseline noise. For example,
state-of-the-art cantilever force transducers achieve quality factors
between $10^4$ and $10^7$, enabling the detection of forces at the
aN/Hz$^{1/2}$ level -- less than a billionth of the force needed to break a
single chemical bond. In addition, scanning probe microscopy offers
the stability to position and image samples with nanometer precision.
The combination of these features allows mechanically detected MRI to
image at resolutions that are far below one micrometer, and -- in
principle -- to aspire to atomic resolution.
\begin{figure}\includegraphics[width=3.2in]{fig1}
\caption{\footnotesize Schematics of an MRFM apparatus. (a)
corresponds to the ``tip-on-cantilever'' arrangement, such as used
in the single electron MRFM experiment of 2004 \cite{Rugar:2004}.
(b) corresponds to the ``sample-on-cantilever'' arrangement, like
the one used for the nanoscale virus imaging experiment in 2009
\cite{DegenTMV:2009}. In both cases the hemispherical region
around the magnetic tip is the region where the spin resonance
condition is met -- the so-called ``resonant slice''.}
\label{fig1}
\end{figure}
\subsection{Early MRFM}
\label{EarlyMRFM}
The use of force-detection techniques in NMR experiments dates back to
Evans in 1955 \cite{Evans:1955}, and was also used in torque
magnetometry measurements by Alzetta and coworkers in the sixties
\cite{Alzetta:1967}. In 1991 Sidles, independent of this very early
work, proposed that magnetic resonance detection and imaging with
atomic resolution could be achieved using microfabricated cantilevers
and nanoscale ferromagnets. The first micrometer-scale experimental
demonstration using cantilevers was realized by Rugar
\cite{Rugar:1992}, demonstrating mechanically-detected electron spin
resonance in a 30-ng sample of diphenylpicrylhydrazil (DPPH). The
original apparatus operated in vacuum and at room temperature with the
DPPH sample attached to the cantilever. A mm-sized coil produced an
rf magnetic field tuned to the electron spin resonance of the DPPH
(220 MHz) with a magnitude of 1 mT. By changing the strength of a
polarizing magnetic field (8 mT) in time, the electron spin
magnetization in the DPPH was modulated. In a magnetic field gradient
of 60 T/m, produced by a nearby NdFeB permanent magnet, the sample's
oscillating magnetization resulted in a time-varying force between the
sample and the magnet. This force modulation was converted into
mechanical vibration by the compliant cantilever. Displacement
oscillations were detected by a fiber-optic interferometer achieving a
thermally limited force sensitivity of 3 fN/$\sqrt{\text{Hz}}$.
During the years following this initial demonstration of
cantilever-based magnetic resonance detection, the technique has
undergone a series of developments towards higher sensitivity which, as
of today, is about $10^7$ times that of the 1992 experiment (see
Fig.~\ref{fig2}). In the following, we briefly review the important
steps that led to these advances while also touching on the
application of the technique to imaging and magnetic resonance
spectroscopy. Several review articles and book chapters have appeared
in the literature that discuss some of these earlier steps more
broadly and in richer detail
\cite{Sidles:1995,Nestle:2001,Suter:2004,Kuehn:2008,Berman:2006,Barbic:2009}.
\begin{figure}\includegraphics[width=3.2in]{fig2}
\caption{\footnotesize Advances in the sensitivity of force-detected
magnetic resonance over time. Remarkably, improvements have
closely followed a ``Moore's law'' for over a decade, with the
magnetic moment sensitivity doubling roughly every eight
months. Dots are experimental values
\cite{Rugar:1992,Rugar:1994,Wago:1996,Stipe:2001,Mamin:2003,Rugar:2004,DegenTMV:2009,Mamin:2009},
and dashed lines indicate sensitivities of one electron and one
proton magnetic moment ($\mu_{\rm P}$), respectively.}
\label{fig2}
\end{figure}
Following the initial demonstration of mechanically-detected electron
paramagnetic resonance (EPR) \cite{Rugar:1992}, the MRFM technique was
soon extended to NMR in 1994 \cite{Rugar:1994} and to ferromagnetic
resonance in 1996 \cite{Zhang:1996}. A major step towards higher
sensitivity was made by incorporating the MRFM instrument into a
cryogenic apparatus in order to reduce the thermal force noise of the
cantilever. A first experiment carried out in 1996 at a temperature of
14 K achieved a force sensitivity of 80 aN/$\sqrt{\text{Hz}}$
\cite{Wago:1996}, a roughly 50-fold improvement compared to 1992
mostly due to the higher cantilever mechanical quality factor and the
reduced thermal noise achieved at low temperatures. In 1998,
researchers introduced the ``tip-on-cantilever'' scheme
\cite{Wago:1998} (shown in Fig.~\ref{fig1}(a)), where the roles of
gradient magnet and sample were interchanged. Using this approach,
field gradients of up to $2.5 \times 10^5$ T/m were obtained by using
a magnetized sphere of 3.4-$\mu$m diameter \cite{Bruland:1998}. These
gradients are more than three orders of magnitude larger than those
achieved in the first MRFM experiment. In parallel, a series of spin
detection protocols were also invented. These protocols include the
detection of spin signals in the form of a shift in the cantilever
resonance frequency (rather than changes in its oscillation amplitude)
\cite{Stipe:2001Relax}, and a scheme that relies on detecting a
force-gradient, rather than the force itself \cite{Garner:2004}. In
2003, researchers approached the level of sensitivity necessary to
measure statistical fluctuations in small ensembles of electron spins,
a phenomenon that had previously only been observed with long
averaging times \cite{Mamin:2003}. Further refinements finally led to
the demonstration of single electron spin detection in 2004 by the IBM
group \cite{Rugar:2004}, which we discuss separately below.
While the bulk of MRFM experiments address the improvement of
detection sensitivity and methodology, effort has also been devoted to
demonstrate the 3D imaging capacity of the instrument. The first
one-dimensional MRFM image was made using EPR detection in 1993 and
soon after was extended to two and three dimensions
\cite{Zuger:1993,Zuger:1994,Zuger:1996}. These experiments reached
about 1-$\mu$m axial and 5-$\mu$m lateral spatial resolution, which is
roughly on par with the best conventional EPR microscopy experiments
today \cite{Blank:2003}. In 2003, sub-micrometer resolution (170 nm in
one dimension) was demonstrated with NMR on optically pumped GaAs
\cite{Thurber:2003}. In parallel, researchers started applying the
technique for the 3D imaging of biological samples, like the liposome,
at micrometer resolutions \cite{Tsuji:2004}. Shortly thereafter, a
80-nm voxel size was achieved in an EPR experiment that introduced an
iterative 3D image reconstruction technique \cite{Chao:2004}. The
one-dimensional imaging resolution of the single electron spin
experiment in 2004, finally, was about 25 nm \cite{Rugar:2004}.
The prospect of applying the MRFM technique to nanoscale spectroscopic
analysis has also led to efforts towards combination with pulsed NMR
and EPR techniques. MRFM is ill suited to high resolution
spectroscopy as broadening of resonance lines by the strong field
gradient of the magnetic tip completely dominates any intrinsic
spectral features. Nevertheless, a number of advances have been made.
In 1997, MRFM experiments carried out on phosphorus-doped silicon were
able to observe the hyperfine splitting in the EPR spectrum
\cite{Wago:1997}. Roughly at the same time, a series of basic pulsed
magnetic resonance schemes were demonstrated to work well with MRFM,
including spin nutation, spin echo, and $T_1$ and $T_{1\rho}$
measurements \cite{Wago:1998Echo,Schaff:1997}. In 2002, researchers
applied nutation spectroscopy to quadrupolar nuclei in order to extract
local information on the quadrupole interaction \cite{Verhagen:2002}.
This work was followed by a line of experiments that demonstrated
various forms of NMR spectroscopy and contrast, invoking dipolar
couplings \cite{Degen:2005}, cross polarization
\cite{Lin:2005,Eberhardt:2007}, chemical shifts \cite{Eberhardt:2008},
and multi-dimensional spectroscopy \cite{Eberhardt:2008}. Some
interesting variants of MRFM that operate in homogeneous magnetic
fields were also explored. These techniques include measurement of
torque rather than force \cite{Alzetta:1967,Ascoli:1996} and the
so-called ``Boomerang'' experiment \cite{Leskowitz:1998,Madsen:2004}.
Finally, while not within the scope of this review, it is worth
mentioning that MRFM has also been successfully applied to a number of
ferromagnetic resonance studies, in particular for probing the
resonance structure of micron-sized magnetic disks
\cite{Wigen:2006,Loubens:2007}.
\begin{figure}\includegraphics[width=3.2in]{fig3}
\caption{\footnotesize Image of an ultrasensitive mass-loaded Si
cantilever taken from an optical microscope. This type of
cantilever, which is about 100-nm-thick and has a spring constant
under 100 $\mu$N/m, has been used as a force transducer in
many of the latest MRFM experiments \cite{Chui:2003}.}
\label{fig3}
\end{figure}
\subsection{Single electron MRFM}
\label{singleEMRFM}
The measurement of a single electron spin by the IBM group in 2004
concluded a decade of development on the MRFM technique and stands out
as one of the first single-spin measurements in solid-state physics.
A variety of developments led to the exceptional measurement
sensitivity required for single-spin detection. These include the
operation of the apparatus at cryogenic temperatures and high vacuum,
the ion-beam-milling of magnetic tips in order to produce large
gradients, and the fabrication of mass-loaded attonewton-sensitive
cantilevers \cite{Chui:2003} (shown in Fig.~\ref{fig3}). The thermal
noise in higher order vibrational modes of mass-loaded cantilevers is
suppressed compared with the noise in the higher order modes of
conventional, ``flat'' cantilevers. Since high frequency vibrational
noise in combination with a magnetic field gradient can disturb the
electron spin, the mass-loaded levers proved to be a crucial advance
for single-electron MRFM. In addition, the IBM group developed a
sensitive interferometer employing only a few nanowatts of optical
power for the detection of cantilever displacement \cite{Mamin:2001}.
This low incident laser power is crucial for achieving low cantilever
temperatures and thus minimizing the effects of thermal force noise.
A low-background measurement protocol called OSCAR based on the NMR
technique of adiabatic rapid passage was also employed
\cite{Stipe:2001}. Finally, the experiment required the construction
of an extremely stable measurement system capable of continuously
measuring for several days in an experiment whose single-shot
signal-to-noise ratio was just 0.06 \cite{Rugar:2004}.
The path to this experimental milestone led through a variety of
interesting physics experiments. In 2003, for example, researchers
reported on the detection and manipulation of small ensembles of
electron spins -- ensembles so small that their statistical
fluctuations dominate the polarization signal \cite{Mamin:2003}. The
approach developed for measuring statistical polarizations provided a
potential solution to one of the fundamental challenges of performing
magnetic resonance experiments on small numbers of spins. In 2005,
Budakian and coworkers took these concepts one step further by
actively modifying the statistics of the naturally occurring
fluctuations of spin polarization \cite{Budakian:2005}. In one
experiment, the researchers polarized the spin system by selectively
capturing the transient spin order. In a second experiment, they
demonstrated that spin fluctuations can be rectified through the
application of real-time feedback to the entire spin ensemble.
\section{Recent strides in nuclear MRFM}
\label{nMRFMResearch}
In the following, we summarize the latest advances made to nuclear
spin detection by MRFM. The shift of focus from electron to nuclear
spins is driven by the prospect of applying the technique for
high-resolution magnetic resonance microscopy. MRI has had a
revolutionary impact on the field of non-invasive medical screening,
and is finding an increased number of applications in materials
science and biology. The realization of MRI with nanometer or
sub-nanometer resolution may have a similar impact, for example, in
the field of structural biology. Using such a technique, it may be
possible to image complex biological structures, even down to the
scale of individual molecules, revealing features not elucidated by
other methods.
The detection of a single nuclear spin, however, is far more
challenging than that of single electron spin. This is because the
magnetic moment of a nucleus is much smaller: a hydrogen nucleus
(proton), for example, possesses a magnetic moment that is only $\sim
1/650$ of an electron spin moment. Other important nuclei, like
$^{13}$C or a variety of isotopes present in semiconductors, have even
weaker magnetic moments. In order to observe single nuclear spins, it
is necessary to improve the state-of-the-art sensitivity by another
two to three orders of magnitude. While not out of the question, this
is a daunting task that requires significant advances to all aspects
of the MRFM technique. In the following, we discuss some steps made
in this direction since 2005. Our focus is on the work contributed by
the authors while at the IBM Almaden Research Center. There the
authors were part of a team led by Dan Rugar, who has pioneered many
of the important developments in MRFM since its experimental
beginnings in 1992.
\subsection{Improvements to microfabricated components}
\label{ExpImprovements}
Improvements in the sensitivity and resolution of mechanically
detected MRI hinge on a simple signal-to-noise ratio, which is given
by the ratio of the magnetic force power exerted on the cantilever
over the force noise power of the cantilever device. For small
volumes of spins, we measure statistical spin polarizations; therefore,
we are interested in force powers (or variances) rather than force
amplitudes:
\begin{equation}
\label{eq1}
\text{SNR} = N \frac{\left ( \mu_{N} G \right )^2}{S_{\rm F} B}.
\end{equation}
Here, $N$ is the number of spins in the detection volume, $\mu_{N}$ is
the magnetic moment of the nucleus of interest, $G$ is the magnetic
field gradient at the position of the sample, $S_{F}$ is the force
noise spectral density set by the fluctuations of the cantilever
sensor, and $B$ is the bandwidth of the measurement, determined by the
nuclear spin relaxation rate $1 / \tau_m$. This expression gives the
single-shot signal-to-noise ratio of a thermally-limited MRFM
apparatus. The larger this signal-to-noise ratio is, the better the
spin sensitivity will be.
From the four parameters appearing in (\ref{eq1}), only two can be
controlled and possibly improved. On the one hand, the magnetic field
gradient $G$ can be enhanced by using higher quality magnetic tips and
by bringing the sample closer to these tips. On the other hand, the
force noise spectral density $S_F$ can be reduced by going to lower
temperatures and by making intrinsically more sensitive mechanical
transducers.
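To give a feeling for the numbers involved, the short script below evaluates Eq.~(\ref{eq1}) for representative, order-of-magnitude inputs. The specific values (the proton magnetic moment, a gradient comparable to the tip gradients discussed in this section, a force noise of a few aN$/\sqrt{\rm Hz}$, and a 1-Hz bandwidth) are assumptions chosen for illustration, not the parameters of any particular experiment.
\begin{verbatim}
# Order-of-magnitude evaluation of Eq. (1); all inputs are representative
# assumptions rather than the parameters of a specific experiment.
mu_p    = 1.41e-26   # proton magnetic moment (J/T)
G       = 4e6        # tip field gradient (T/m)
sqrt_SF = 5e-18      # force noise, ~5 aN/sqrt(Hz) (assumed)
B       = 1.0        # measurement bandwidth (Hz), set by 1/tau_m (assumed)

force_per_spin = mu_p * G                         # force per proton (N)
SNR_per_spin   = force_per_spin**2 / (sqrt_SF**2 * B)
N_min          = 1.0 / SNR_per_spin               # spins for single-shot SNR = 1
print("force per proton: %.2e N" % force_per_spin)
print("spins needed for unit single-shot SNR: %.0f" % N_min)
\end{verbatim}
With these inputs the single-shot sensitivity corresponds to a few thousand proton moments; longer signal averaging, as used in the experiments described below, improves on this single-shot figure.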
\begin{figure}\includegraphics[width=3.2in]{fig4}
\caption{\footnotesize
An SEM image of a Cu ``microwire'' rf source with integrated FeCo
tip for MRFM \cite{PoggioAPL:2007}. The arrow-like structures at the
bottom of the image provide guidance for aligning the microwire with
the cantilever.}
\label{fig4}
\end{figure}
The latest improvements to MRFM sensitivity rely on advances made to
both of these critical parameters. In 2006, the IBM group introduced a
micromachined array of Si cones as a template and deposited a
multilayer Fe/CoFe/Ru film to fabricate nanoscale magnetic tips
\cite{Mamin:2007}. The micromachined tips produce magnetic field
gradients in excess of $10^6$ T/m owing to their sharpness (the tip
radius is less than 50 nm). Previously, maximum gradients of $2
\times 10^5$ T/m had been achieved by ion-beam-milling SmCo particles
down to 150 nm in size. The gradients from the new nanoscale tips
proved to be strong enough to push the resolution of MRI to below 100
nm, in an experiment that is further discussed in the next section.
In the two following years, the group made further improvements to
their measurement sensitivity through the development of a magnetic
tip integrated onto an efficient ``microwire'' rf source
\cite{PoggioAPL:2007}, illustrated in Fig.~\ref{fig4}. This change in
the apparatus solved a simple but significant problem: the typical
solenoid coils used to generate the strong rf pulses for spin
manipulation dissipate large amounts of power, which even for very
small microcoils with a diameter of 300 $\mu$m amounts to over 0.2 W.
This large amount of heat is far greater than the cooling power of
available dilution refrigerators. As a result, nuclear spin MRFM
experiments had to be performed at elevated temperatures (4 K or
higher), thereby degrading the SNR. In some cases the effects can be
mitigated through pulse protocols with reduced duty cycles
\cite{Garner:2004,Mamin:2007}, but it is desirable to avoid the
heating issue altogether.
Micro-striplines, on the other hand, can be made with sub-micrometer
dimensions using e-beam lithography techniques. Due to the small size,
the stripline confines the rf field to a much smaller volume and
causes minimal heat dissipation. Using e-beam lithography and
lift-off, the IBM group fabricated a Cu ``microwire'' device that was
0.2 $\mu$m thick, 2.6 $\mu$m long, and 1.0 $\mu$m wide. A
stencil-based process was then used to deposit a 200-nm-diameter FeCo
tip on top of the wire to provide a static magnetic field gradient.
Since the sample could be placed within 100 nm of the microwire and
magnetic tip, rf magnetic fields of over 4 mT could be generated at
115 MHz with less than 350 $\mu$W of dissipated power. As a result,
the cantilever temperature during continuous rf irradiation could be
stabilized below 1 K, limited by other experimental factors and not
the rf device. Simultaneously, the cylindrical geometry of the
magnetic tip optimized the lateral field gradient as compared to the
micromachined thin-film Si tips, resulting in values exceeding
$4\times10^6$ T/m. As an added benefit, the alignment of the apparatus
was simplified as the magnetic tip and the rf source were integrated
on a single chip. The cantilever carrying the sample simply needed to
be positioned directly above the microwire device. Previous
experiments had required an involved three part alignment of
magnetic-tipped cantilever, sample, and rf source.
\subsection{MRI with resolution better than 100 nm}
\label{100nm}
The above instrumental advances to the technique led to two
significant experiments that finally demonstrated MRFM imaging
resolutions in the low nano-scale. In a first experiment in 2007,
Mamin and coworkers used a ``sample-on-cantilever'' geometry with a
patterned 100-nm-thick CaF$_2$ film as their sample and a
micromachined Si tip array coated with a thin magnetic layer as their
magnetic tip \cite{Mamin:2007}. The CaF$_2$ films were thermally
evaporated onto the end of the cantilever and then patterned using a
focused ion beam, creating features with dimensions between 50 and 300
nm. The cantilevers used in these measurements were custom-made
single-crystal Si cantilevers with a 60 $\mu$N/m spring constant
\cite{Chui:2003}.
Fig.~\ref{fig5} shows the result of such an imaging experiment,
measuring the $^{19}$F nuclei in the CaF$_2$ sample. The resultant
image reproduced the morphology of the CaF$_2$ sample, which consisted
of several islands of material, roughly 200-nm-wide and 80-nm-thick,
at a lateral resolution of 90 nm. At a temperature of 600 mK and after
10 minutes of averaging, the achieved detection sensitivity (SNR of
one) corresponded to the magnetization of about 1200 $^{19}$F nuclear
moments.
\begin{figure}\includegraphics[width=3.2in]{fig5}
\caption{\footnotesize
(a) Two-dimensional MRFM image of $^{19}$F nuclear spins in a
patterned CaF$_2$ sample, and (b) corresponding SEM micrograph (side view) of the
cantilever end with the 80-nm-thick CaF$_2$ film at the top of the
image. Figure adapted from Ref. \cite{Mamin:2007}. }
\label{fig5}
\end{figure}
\subsection{Nanoscale MRI of virus particles}
\label{VirusImaging}
Following the introduction of the integrated microwire and tip device,
the IBM researchers were able to improve imaging resolutions to well
below 10 nm \cite{DegenTMV:2009}. These experiments, which used single
tobacco mosaic virus (TMV) particles as the sample, demonstrate both
the feasibility of MRI with nanometer resolution and the
applicability of MRFM to biologically relevant samples.
\begin{figure}\includegraphics[width=3.2in]{fig6}
\caption{\footnotesize Artistic view of the MRFM apparatus used for
MRI of individual tobacco mosaic virus particles. Pictured in the
center is the cantilever, coming from the left is the laser beam
used for position sensing, and in red is the Cu microwire rf
source. The inset shows a close-up representation of the
gold-coated end of the cantilever with attached virus particles.
On top of the microwire, in blue, is the magnetic FeCo tip with
the ``mushroom'' shaped resonant slice hovering above. }
\label{fig6}
\end{figure}
Fig.~\ref{fig6} is a representation of the MRFM apparatus used in
these experiments. The virus particles were transferred to the
cantilever end by dipping the tip of the cantilever into a droplet of
aqueous solution containing suspended TMV. As a result, some TMV were
attached to the gold layer previously deposited on the cantilever end.
The density of TMV on the gold layer was low enough that individual
particles could be isolated. Then the cantilever was mounted into the
low-temperature, ultra-high-vacuum measurement system and aligned over
the microwire.
After applying a static magnetic field of about 3 T, resonant rf
pulses were applied to the microwire source in order to flip the $^1$H
nuclear spins at the cantilever's mechanical resonance. Finally, the
end of the cantilever was mechanically scanned in three dimensions
over the magnetic tip. Given the extended geometry of the region in
which the resonant condition is met, i.e. the ``resonant slice'', a
spatial scan does not directly produce a map of the $^1$H distribution
in the sample. Instead, each data point in the scan contains force
signal from $^1$H spins at a variety of different positions. In order
to reconstruct the three-dimensional spin density (the MRI image), the
force map must be deconvolved by the point spread function defined by
the resonant slice. Fortunately, this point spread function can be
accurately determined using a magnetostatic model based on the
physical geometry of the magnetic tip and the tip magnetization.
Deconvolution of the force map into the three-dimensional $^1$H spin
density can be done in several different ways; for the results
presented in \cite{DegenTMV:2009} the authors applied the iterative
Landweber deconvolution procedure suggested in an earlier MRFM
experiment \cite{Chao:2004,Dobigeon:2009}. This iterative algorithm
starts with an initial estimate for the spin density of the object and
then improves the estimate successively by minimizing the difference
between the measured and predicted spin signal maps. The iterations
proceed until the residual error becomes comparable with the
measurement noise.
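To illustrate the principle of the reconstruction (and only the principle: the actual procedure is three-dimensional, uses the point spread function obtained from the magnetostatic model of the tip, and is described in detail in Refs.~\cite{Chao:2004,Dobigeon:2009}), a minimal one-dimensional Landweber iteration can be sketched as follows; the kernel and data in this sketch are synthetic placeholders.
\begin{verbatim}
import numpy as np

# Minimal 1D sketch of an iterative Landweber deconvolution.
# Forward model: measured map y = K @ x, where x is the spin density and
# K encodes a (synthetic) point spread function.

def landweber(K, y, n_iter=500, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(K, 2) ** 2   # small enough to converge
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = x + step * K.T @ (y - K @ x)         # gradient step on |y - K x|^2
        x = np.clip(x, 0.0, None)                # a spin density cannot be negative
    return x

# Synthetic example: two point-like objects blurred by a Gaussian PSF.
n = 200
psf = np.exp(-0.5 * (np.arange(-25, 26) / 6.0) ** 2)
K = np.array([np.convolve(row, psf, mode="same") for row in np.eye(n)]).T
x_true = np.zeros(n); x_true[60] = 1.0; x_true[90] = 0.6
y = K @ x_true + 0.01 * np.random.randn(n)       # "measured" map with noise
x_rec = landweber(K, y)
\end{verbatim}
The step size is kept below $2/\|K\|^2$ so that the iteration converges, and stopping once the residual becomes comparable with the measurement noise, as described above, plays the role of the regularization.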
The result of a representative experiment is shown in Fig.~\ref{fig7}.
Here, clear features of individual TMV particles, which are
cylindrical, roughly 300-nm-long, and 18 nm in diameter, are visible
and can be confirmed against a scanning electron micrograph (SEM) of
the same region (Fig.~\ref{fig8}). As is often the case, both whole
virus particles and particle fragments are observed. Note that the
origin of contrast in MRFM image and the SEM image is very different:
the MRFM reconstruction is elementally specific and shows the 3D
distribution of hydrogen in the sample; contrast in the SEM image is
mainly due to the virus blocking secondary electrons emitted from the
underlying gold-coated cantilever surface. In fact, the SEM image had
to be taken after the MRFM image as exposure to the electron beam
destroys the virus particles. The imaging resolution, while not fine
enough to discern any internal structure of the virus particles,
constitutes a 1000-fold improvement over conventional MRI, and a
corresponding improvement of volume sensitivity by about 100 million.
\begin{figure}[t]\includegraphics[width=3.2in]{fig7}
\caption{\footnotesize
Nanoscale MRI images of tobacco mosaic virus (TMV) particles
acquired by MRFM. (a) The series of images to the left depicts the
3D $^1$H spin density of virus particles deposited on the end of the
cantilever. Black represents very low or zero density of hydrogen,
while white is high hydrogen density. The right side shows a
representative xy-plane, with several viral fragments visible, and a
cross-section (xz-plane) of two virus particles that reveals an
underlying molecular layer of hydrocarbons covering the cantilever
surface. (b) 3D $^1$H spin density recorded on a different region
of the same cantilever as in (a), showing an intact and several
fragmented virus particles. The right side shows a representative
xy-plane.}
\label{fig7}
\end{figure}
\begin{figure}[t]\includegraphics[width=3.2in]{fig8}
\caption{\footnotesize SEM of the TMV particles and particle
fragments on the gold-coated cantilever end. The insets enlarge
the areas that were imaged by MRFM in Fig.~\ref{fig7}. }
\label{fig8}
\end{figure}
\subsection{Imaging organic nanolayers}
\label{NanoLayers}
In addition to ``seeing'' individual viruses, the researchers also
detected an underlying proton-rich layer. This signal originated from
a naturally occurring, sub-nanometer thick layer of adsorbed water
and/or hydrocarbon contamination.
The hydrogen-containing adsorbates picked up on a freshly cleaned gold
surface turn out to be enough to produce a distinguishable and
characteristic signal. From analysis of the signal magnitude and
magnetic field dependence, the scientists were able to determine that
the adsorbates form a uniform layer on the gold surface with a
thickness of roughly 5 to 10 \AA \cite{Mamin:2009}.
Using a similar approach, the researchers made a 3D image of a
multiwalled nanotube roughly 10 nm in diameter, depicted in
Fig.~\ref{fig9}. The nanotube, attached to the end of a 100-nm-thick
Si cantilever, protruded a few hundred nanometers from the end of the
cantilever. As had been previously observed with gold layers, the
nanotube was covered by a naturally occurring proton-containing
contamination layer. Though the magnitude of the signal was roughly
10 times less than that of the two-dimensional layer -- reflecting its
relatively small volume -- it was accompanied by a very low background
noise level that made it possible to produce a clear image of the
morphology of the nanotube. Using the same iterative deconvolution
scheme developed to reconstruct the image of the TMV particles, the
researchers produced an image of a cylindrical object, 10 nm in
diameter at the distal end. No evidence was found for the hollow
structure that might be expected in the image of such a layer. The
experiment did not show any evidence for an empty cylindrical region
within the nanotube. Given the small inner diameter (less than 10 nm),
however, it was not clear whether hydrogen-containing material was in
fact incorporated inside the nanotube, or if the resolution of the
image was simply not sufficient to resolve the feature.
\begin{figure}[b]\includegraphics[width=3.2in]{fig9}
\caption{\footnotesize
Scanning electron microscopy image of a 10-nm-diameter carbon
nanotube attached to the end of a Si cantilever (left), and MRFM
image of the proton density at the nanotube's distal end (right).
Figure adapted from Ref. \cite{Mamin:2009}.}
\label{fig9}
\end{figure}
\subsection{Observation and manipulation of statistical polarizations}
\label{StatPolarization}
While predominantly driven by the interest in high resolution MRI
microscopy, the exquisite spin sensitivity of MRFM also gives us a
window into the spin dynamics of small ensembles of spins. When
probing nuclear spins on the nanometer scale, for example, random
fluctuations of the spin polarization will typically exceed the mean
Boltzmann polarization if sample volumes are smaller than about (100
nm)$^3$, as shown in Fig.~\ref{fig10}. This statistical polarization
arises from the incomplete cancellation of randomly oriented spins.
For an ensemble of $N$ nuclei of spin 1/2 and in the limit of small
mean polarization, which is representative of most experiments, the
variance of the fluctuations is $\sigma^2_{\Delta N} \simeq N$. The
existence of statistical polarization was pointed out by Bloch in his
seminal paper on nuclear induction \cite{Bloch:1946}, and has been
observed experimentally by a number of techniques, including
superconducting quantum interference devices \cite{Sleator:1985},
conventional magnetic resonance detection
\cite{Mccoy:1989,Gueron:1989,Muller:2006}, optical techniques
\cite{Crooker:2004}, and MRFM \cite{Mamin:2005,Mamin:2007}.
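The crossover indicated in Fig.~\ref{fig10} follows from equating the mean Boltzmann polarization in the small-polarization limit, $N\mu B/k_{\rm B}T$, with the statistical polarization $\sqrt{N}$, which gives $N = (k_{\rm B}T/\mu B)^2$. A short numerical check for protons under the conditions of Fig.~\ref{fig10} is given below; the script is included only to make the order of magnitude explicit.
\begin{verbatim}
# Crossover between mean Boltzmann polarization N*mu*B/(k_B*T) and
# statistical polarization sqrt(N):  N_cross = (k_B*T/(mu*B))**2.
k_B  = 1.381e-23    # J/K
mu_p = 1.41e-26     # proton magnetic moment (J/T)
B, T = 3.0, 4.0     # field (T) and temperature (K), as in Fig. 10

N_cross = (k_B * T / (mu_p * B)) ** 2
print("crossover at N ~ %.1e spins" % N_cross)   # about 10^6 spins
\end{verbatim}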
In a result that was enabled by the latest advances in MRFM detection
sensitivity, the IBM scientists were able -- for the first time -- to
follow the fluctuations of a statistical polarization of nuclear spins
in real time. These experiments followed the dynamics of an ensemble
of roughly $2 \times 10^6$ $^{19}$F spins in CaF$_2$
\cite{Degen:2007}. The challenge of measuring statistical
fluctuations presents a major obstacle to nanoscale imaging
experiments. In particular, the statistical polarization has random
sign and a fluctuating magnitude, making it hard to average signals.
An efficient strategy for imaging spin fluctuations is therefore to
use polarization variance, rather than the polarization itself, as the
image signal. This has recently been demonstrated both by
force-detected \cite{Mamin:2007,Degen:2007,DegenTMV:2009,Mamin:2009}
and conventional \cite{Muller:2006} MRI. Furthermore, it was
demonstrated that for cases where spin lifetimes are long, rapid
randomization of the spins by rf pulses can considerably enhance the
signal-to-noise ratio of the image \cite{Degen:2007}. In the end, for
the purposes of imaging, it is not necessary to follow the sign of the
spin polarization; it is enough to simply determine from the measured
spin noise where and how many spins are present at a particular
location.
The nuclear spin lifetime itself, which is apparent as the correlation
time of the nuclear fluctuations $\tau_m$, was also shown to be an
important source of information. Using suitable rf pulses,
researchers demonstrated that Rabi nutations, rotating-frame
relaxation times, and nuclear cross-polarization can be encoded in
$\tau_m$ leading to new forms of image contrast
\cite{PoggioAPL:2007,Poggio:2009}.
\begin{figure}\includegraphics[width=3.2in]{fig10}
\caption{\footnotesize
Comparison of mean thermal (or Boltzmann) polarization $\bar{\Delta
N} = N\mu B/k_{\rm B}T$ versus statistical polarization $\Delta
N_{\rm rms}=\sqrt{N}$ as a function of the number $N$ of nuclear
spins in the ensemble. While statistical polarization fluctuations
are negligible for macroscopic samples where $N$ is large, they
dominate over thermal polarization for small $N$. Under the
conditions typical for MRFM, where $T = 4$ K and $B = 3$ T (shown
here), this crossover occurs around $N\approx10^6$, or for sample
volumes below about (100 nm)$^3$. Dots represent experimental
values for conventional MRI \cite{Ciobanu:2002} and MRFM
\cite{DegenTMV:2009,Mamin:2007,Rugar:1994}. }
\label{fig10}
\end{figure}
\subsection{Mechanically induced spin relaxation}
\label{MechRelax}
The high sensitivity of MRFM is enabled in part by the strong coupling
that can be achieved between spins and the cantilever. This coupling
is mediated by field gradients that can exceed $5 \times 10^6$ T/m.
The strong interaction between spins and sensor has been the subject
of a number of theoretical studies, and is predicted to lead to a host
of intriguing effects. These range from shortening of spin lifetimes
by ``back action'' \cite{Mozyrsky:2003,Berman:2003}, to spin alignment
by specific mechanical modes either at the Larmor frequency or in the
rotating frame \cite{Magusin:2000,Butler:2005}, to resonant
amplification of mechanical oscillations \cite{Bargatin:2003}, to
long-range mediation of spin couplings using charged resonator arrays
\cite{Rabl:2009}.
Recently the IBM group reported the first direct experimental evidence
for accelerated nuclear spin relaxation induced by a single,
low-frequency mechanical mode \cite{DegenPRL:2008}. In these
experiments the slight thermal vibration of the cantilever generated
enough magnetic noise to destabilize the spin. Enhanced relaxation
was found when one of the cantilever's upper modes (in particular the
third mode with a frequency of about 120 kHz) coincided with the Rabi
frequency of the spins. In this ``strong coupling'' regime, the
spins are more tightly coupled to a single mechanical resonator mode
than to the continuum of phonons that are normally responsible for
spin-lattice relaxation. Interestingly, these initial experiments
showed a scaling behavior of the spin relaxation rate with important
parameters, including magnetic field gradient and temperature, that is
substantially smaller than predicted by theory (see Fig.~\ref{fig11}).
\begin{figure}\includegraphics[width=3.2in]{fig11}
\caption{\footnotesize
Spin relaxation rate $\Gamma$ as a function of magnetic field
gradient $G$. In the weak coupling regime, nuclear spin relaxation
is dominated by interaction with lattice phonons
($\Gamma=\Gamma_0$). In the strong coupling regime, spins relax via
a specific low-frequency mechanical mode of the cantilever and
$\Gamma\propto G^{-1.23}$. Figure adapted from Ref.
\cite{DegenPRL:2008}.}
\label{fig11}
\end{figure}
\subsection{Force detected nuclear double resonance}
\label{DoubleResonance}
Most recently, the IBM group exploited couplings between different
spin species to enhance the 3D imaging capability of MRFM with the
chemical selectivity intrinsic to magnetic resonance. They developed
a method of nuclear double-resonance that allows the enhancement of
the polarization fluctuation rate of one spin species by applying an
rf field to the second spin species, resulting in suppression of the
MRFM signal \cite{Poggio:2009}. The physics behind this approach is
analogous to Hartmann-Hahn cross-polarization (CP) in NMR spectroscopy
\cite{Hartmann:1962}, but involves statistical rather than Boltzmann
polarization. The IBM group was inspired by previous work done with
Boltzmann polarizations at the ETH in Z\"{u}rich demonstrating CP as
an efficient chemical contrast mechanism for micrometer-scale
one-dimensional MRFM imaging
\cite{Lin:2006,Eberhardt:2007,Eberhardt:2008}. In the IBM experiment,
MRFM was used to measure the transfer between statistical
polarizations of $^1$H and $^{13}$C spins in $^{13}$C-enriched stearic
acid. The development of a cross-polarization technique for
statistical ensembles adds an important tool for generating chemical
contrast to the recently demonstrated technique of nanometer-scale
MRI.
\section{Future developments}
\label{Developments}
Since its invention and early experimental demonstration in the
nineties \cite{Sidles:1991,Rugar:1992,Rugar:1994}, the MRFM technique
has progressed in its magnetic sensitivity from the equivalent of
$10^9$ to presently about 100 proton magnetic moments (see
Fig.~\ref{fig2}). In order to eventually detect single nuclear spins
and to image molecules at atomic resolution, the signal-to-noise ratio
of the measurement must still improve by two orders of magnitude. It
is not clear if these advances can be achieved by incremental progress
to the key components of the instrument, i.e.\ cantilever force
transducers and nanoscale magnetic tips, or whether major shifts in
instrumentation and methodology will be necessary. In the following we
review some of the key issues and potential avenues for future
developments.
\subsection{Magnetic tips}
The magnetic force on the cantilever can be enhanced by increasing the
magnetic field gradient $G$. This can be achieved by making higher
quality magnetic tips with sharp features and high-moment materials,
and by simultaneously bringing the sample closer to these tips. To
date, the highest magnetic field gradients have been reported in
studies of magnetic disk drive heads, ranging between $20\times10^6$
and $40\times10^6$ T/m \cite{Tsang:2006}. The pole tips used in drive
heads are typically made of soft, high-moment materials like FeCo, and
have widths below 100 nm. The magnetic tips used in the latest MRFM
experiments, on the other hand, are more than 200 nm in diameter, and
generate field gradients of less than $5\times10^6$ T/m. Moreover,
calculations indicate that these tips do not achieve the ideal
gradients which one would calculate assuming that they were made of
pure magnetic material. This discrepancy may be due to a dead-layer
on the outside of the tips, to defects inside the tips, or to
contamination of the magnetic material. By improving the material
properties and shrinking the dimensions of present MRFM tips, $G$ could be
increased by up to a factor of ten. In practice, however, it will be
difficult to gain an order of magnitude in signal-to-noise purely by
improving the magnetic tips. To achieve higher gradients -- and
therefore higher signal-to-noise -- we must resort to decreasing the
tip-sample spacing.
\subsection{Force noise near surfaces}
Since the gradient strength falls off rapidly with distance, bringing
the sample closer to the magnetic tip would also increase the field
gradient. However, measurements at small tip-sample spacings are
hampered by strong tip-sample interactions which produce mechanical
noise and dissipation in the cantilever. At the moment, imaging
experiments are limited to spacings on the order of 25 nm. For some
experimental arrangements, surface dissipation can be observed at
separations well over 100 nm. This interaction has been studied in
similar systems \cite{StipeFriction:2001,Kuehn:2006} and several
mechanisms have been proposed to explain its origin depending on the
details of the configuration
\cite{Persson:1998,Zurita:2004,Volokitin:2003,Volokitin:2005,Labaziewicz:2008}.
Most explanations point to trapped charges or dielectric losses in
either the substrate or the cantilever. Experimentally, several
strategies could mitigate non-contact friction effects, including
chemical modification of the surface, narrow tip size, or
high-frequency operation. None of these approaches has yet emerged as
the clear path for future improvement.
\subsection{Mechanical transducers}
The second means of improving the signal-to-noise ratio is the
development of more sensitive mechanical transducers, i.e.\
transducers that exhibit a lower force noise spectral density $S_F$.
For a mechanical resonator, $S_F$ is given by:
\begin{equation}
\label{eq2}
S_{F} = \frac{4 k_B T m \omega_0}{Q},
\end{equation}
where $k_B$ is the Boltzmann constant, $T$ is the temperature, $m$ is
the effective motional mass of the cantilever, $\omega_0$ is the
angular resonance frequency of the cantilever's fundamental mode, and
$Q$ the mechanical quality factor. In practice, this requires going to
lower temperatures and making cantilevers which simultaneously have
low $m \omega_0$ and large $Q$.
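For orientation, the short sketch below evaluates Eq.~(\ref{eq2}) numerically; the cantilever parameters used are illustrative assumptions (loosely representative of an ultrasoft single-crystal Si cantilever at the 1~K operating temperature mentioned below), not values quoted in this review.
\begin{verbatim}
# Minimal sketch of the thermal force noise S_F = 4 k_B T m w0 / Q.
# All parameter values below are illustrative assumptions, not values
# quoted in the text.
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def force_noise_density(T, m, f0, Q):
    """Return S_F [N^2/Hz] for temperature T [K], effective mass m [kg],
    resonance frequency f0 [Hz], and quality factor Q."""
    omega0 = 2.0 * math.pi * f0
    return 4.0 * K_B * T * m * omega0 / Q

# Assumed parameters: ~1e-13 kg effective mass, 3 kHz fundamental mode,
# Q ~ 30,000, operated at about 1 K.
T, m, f0, Q = 1.0, 1e-13, 3e3, 3e4
S_F = force_noise_density(T, m, f0, Q)
print(f"force sensitivity: {math.sqrt(S_F):.2e} N/sqrt(Hz) "
      f"({math.sqrt(S_F)/1e-18:.2f} aN/sqrt(Hz))")
\end{verbatim}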
None of these steps is as straightforward as it first appears.
Temperatures below 100 mK can be achieved by cooling the apparatus in
a dilution refrigerator. In the latest experiments, however, the
cantilever temperature was limited to about 1 K by laser heating from
the interferometer displacement sensor, not by the base temperature of
the apparatus. Progress to sub-100 mK temperatures will therefore
require new developments in displacement sensing.
The best strategies for maximizing $Q$ are not well-understood
either. Apart from a loose trend that $Q$ often scales with thickness
\cite{Yasumura:2000}, and a few general rules of thumb, i.e.\
minimizing clamping losses by design and keeping the mechanical
resonator pristine and free of defects and impurities, no clear path
has emerged. Holding $Q$ constant, one finds from simple
Euler-Bernoulli beam theory that the product of $m$ and $\omega_0$ is
minimized for cantilevers that are long and thin.
On a more fundamental level, it is worth considering the use of
different materials and alternative geometries. Over the past few
years a variety of nanomechanical resonators have been developed which
rival the force sensitivities of the single crystal Si cantilevers
used in most MRFM experiments. Some examples are the SiN membranes
serving as sample stages in transmission electron microscopy
\cite{Thompson:2008}, vapor grown silicon nanowires
\cite{Nichol:2008}, and strained SiN or aluminum beams
\cite{Teufel:2009,Rocheleau:2010}. With some exceptions, the general
trend is towards smaller resonators that more closely match the atomic
lengthscales of spins and molecules. Therefore, it appears likely that
future transducers will emerge as ``bottom-up'' structures rather than
the ``top-down'' structures of the past. Instead of processing and
etching small mechanical devices out of larger bulk crystals,
future resonators will probably be chemically grown or self-assembled:
For example, they will be macroscale ``molecules'' such as nanowires,
nanotubes \cite{Sazonova:2004}, or single sheets of graphene
\cite{Bunch:2007}.
Although uncontrolled bottom-up approaches tend to be ``dirty'',
remarkable mechanical properties can be achieved if care is taken to
keep this self-assembly process ``clean''. Most recently, researchers
have demonstrated suspended carbon nanotubes with resonant frequencies
of 250 MHz, masses of $10^{-20}$ kg, and quality factors of $10^5$
\cite{Huttel:2009,Steele:2009}. If such a carbon nanotube force
transducer could be operated at the thermal limit, which would require
improved displacement detectors capable of measuring the nanotube's
thermal motion, the resulting force sensitivity would be 0.01
aN/$\sqrt{\text{Hz}}$, about 50 times better than any known mechanical
force sensor today.
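As a rough consistency check of the quoted figure, the following sketch evaluates Eq.~(\ref{eq2}) with the nanotube parameters cited above (250 MHz, $10^{-20}$ kg, $Q=10^5$); the operating temperature is an assumption on our part (a dilution-refrigerator value of order 10 mK) and is not stated explicitly in the text.
\begin{verbatim}
# Back-of-the-envelope check of the quoted ~0.01 aN/sqrt(Hz) figure for a
# carbon-nanotube transducer, using the thermal-noise formula above and the
# resonator parameters cited in the text. The operating temperature is an
# assumption (~10 mK); it is not stated explicitly in the text.
import math

K_B = 1.380649e-23            # Boltzmann constant [J/K]
T = 0.010                     # assumed temperature [K]
m = 1e-20                     # nanotube mass [kg] (quoted in text)
omega0 = 2 * math.pi * 250e6  # resonance frequency [rad/s] (quoted in text)
Q = 1e5                       # quality factor (quoted in text)

S_F = 4 * K_B * T * m * omega0 / Q   # force noise density [N^2/Hz]
sens_aN = math.sqrt(S_F) / 1e-18     # force sensitivity [aN/sqrt(Hz)]
print(f"thermally limited force sensitivity ~ {sens_aN:.3f} aN/sqrt(Hz)")
\end{verbatim}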
\subsection{Displacement sensors}
The mechanical deflection caused by the spin or thermal force is typically
a fraction of an Angstrom. In order to convert this deflection into an
experimentally accessible electronic signal, a very sensitive
displacement sensor must be employed. To the best of our knowledge,
all MRFM measurements have made use of optical detectors based on
either optical beam deflection or laser interferometry. While optical
methods provide an extremely sensitive means of measuring cantilever
displacement, they face limitations as cantilevers become smaller and
temperatures lower.
The first limitation comes about as the push for better spin
sensitivity necessitates smaller and smaller cantilevers. The
reflective areas of these levers will shrink to the order of, or even
below, the wavelength of light. As a result, optical sensors will
become less and less efficient as smaller and smaller fractions of the
incident light are reflected back from the resonators. Thus, for the
next generation of cantilevers -- made from nanowires and nanotubes --
interferometric displacement sensing may no longer be an option.
In principle, the inefficient reflection from small resonators can be
balanced by increased laser power. Indeed, in a recent experiment,
Nichol and coworkers have been able to sense the motion of Si
nanowires with diameters on the order of 20 nm at room temperature
using optical interferometry \cite{Nichol:2008}. The researchers used
a polarization resolved interferometer and a high incident laser power
in order to sense the cantilever's motion.
High optical powers are, however, not compatible with low temperature
operation. Especially at millikelvin temperatures, most materials
(except for metals) have very poor thermal conductivities, and even
very low incident laser powers can heat the cantilever. For example,
a laser power of only 20 nW from a 1550-nm laser is sufficient to
increase the temperature of a single crystal Si cantilever of the type
shown in Fig.~\ref{fig3} from less than 100 mK to 300 mK, even though
absorption is known to be minimal for this wavelength.
There are several potential displacement detectors which could achieve
better sensitivity than optical methods while causing less measurement
back-action, or heating. An idea pursued by one of the authors is to
make an off-board capacitively-coupled cantilever displacement
detector based on a quantum point contact (QPC) \cite{Poggio:2008}.
Preliminary measurements indicate that such a detector reaches at
least the sensitivity of optical methods for equivalent cantilevers,
with no indication of back-action from the electrons flowing in the
device. While more work needs to be done, these kinds of capacitively
coupled detectors are promising means of measuring mechanical
resonators much smaller than the wavelength of light. One might
imagine a future MRFM detection set-up where an arbitrarily small
cantilever could be used, with a capacitive displacement detector
integrated on chip together with a high-gradient magnetic tip and an rf
microwire source. Outstanding displacement sensitivities have also
been achieved with microwave interferometers
\cite{Teufel:2009,Rocheleau:2010}, superconducting single-electron
transistors, or high-finesse optical cavities made from micro-toroids
which are very sensitive to fluctuations of nearby objects
\cite{Anetsberger:2009}. All of these latter displacement sensors
will, however, need adjustments in order to be integrated in a
contemporary scanning MRFM instrument.
\section{Comparison to other techniques}
\label{Othertechniques}
The unique position of MRFM among high-resolution microscopies becomes
apparent when comparing it to other, more established nanoscale
imaging techniques. As a genuine scanning probe method, MRFM has the
potential to image matter at atomic resolution. While atomic-scale
imaging is routinely achieved in scanning tunneling microscopy and
atomic force microscopy, these techniques are confined to the top
layer of atoms and cannot penetrate below surfaces
\cite{Hansma:1987,Giessibl:2003}. Moreover, in standard scanning
probe microscopy (SPM), it is difficult and in many situations
impossible to identify the chemical species being imaged. Since MRFM
combines SPM and MRI, these restrictions are lifted. The
three-dimensional nature of MRI permits acquisition of sub-surface
images with high spatial resolution even if the probe is relatively
far away. As with other magnetic resonance techniques, MRFM comes
with intrinsic elemental contrast and can draw from established NMR
spectroscopy procedures to perform detailed chemical analysis. In
addition, MRI does not cause any radiation damage to samples, as do
electron and X-ray microscopies.
MRFM also distinguishes itself from super-resolution optical
microscopies that rely on fluorescence imaging \cite{Huang:2009}. On
the one hand, optical methods have the advantage of working \textit{in
vivo} and they have the ability to selectively target the desired
parts of a cell. Fluorescent labeling is now a mature technique which
is routinely used for cellular imaging. On the other hand, pushing
the resolution into the nanometer range is hampered by fundamental
limitations, in particular the high optical powers required and the
stability of the fluorophores. Moreover, fluorescent labeling is
inextricably linked with a modification of the target biomolecules,
which alters the biofunctionality and limits imaging resolution to the
physical size of the fluorophores.
MRFM occupies a unique position among other nanoscale spin detection
approaches. While single electron spin detection in solids has been
shown using several techniques, these mostly rely on the indirect
read-out via electronic charge \cite{Elzerman:2004,Xiao:2004} or
optical transitions \cite{Wrachtrup:1993,Jelezko:2002}. In another
approach, the magnetic orientation of single atoms has been measured
via the spin-polarized current of a magnetic STM tip or using magnetic
exchange force microscopy \cite{Heinze:2000,Durkan:2002,Kaiser:2007}.
These tools are very valuable for studying single surface atoms; however,
they are ill-suited to map out sub-surface spins such as paramagnetic
defects. In contrast, MRFM \textit{directly} measures the magnetic
moment of a spin, without resorting to other degrees of freedom,
making it a very general method. This direct measurement of magnetic
moment (or magnetic stray field) could also be envisioned using other
techniques, namely SQuID microscopy \cite{Kirtley:1995}, Hall
microscopy \cite{Chang:1992}, or recently introduced diamond
magnetometry based on single nitrogen-vacancy centers
\cite{Degen:2008,Maze:2008,Balasubramanian:2008}. So far, however,
none of these methods have reached the level of sensitivity needed to
detect single electron spins, or volumes of nuclear spins much less
than one micrometer \cite{DegenNNano:2008,Blank:2009}. It is
certainly possible that future improvements to these methods --
especially to diamond magnetometry -- may result in alternative
techniques for nanoscale MRI that surpass the capabilities of MRFM.
\section{Outlook}
\label{Outlook}
Despite the tremendous improvements made to MRFM over the last decade,
several important obstacles must be overcome in order to turn the
technique into a useful tool for biologists and materials scientists.
Most existing MRFM instruments are technically involved prototypes;
major hardware simplifications will be required for routine screening
of nanoscale samples. Suitable specimen preparation methods must be
developed that are compatible with the low temperature, high vacuum
environment required for the microscope to operate at its highest
sensitivity and resolution. While this is particularly challenging
for biological samples, protocols exist which could be adapted to
MRFM. In cryo-electron microscopy, for example, dispersed samples are
vitrified to preserve their native structure by plunge-freezing in
liquid nitrogen \cite{Taylor:1974}. As objects become smaller,
isolation of samples and suppression of unwanted background signals
from surrounding material will become increasingly important.
The conditions under which the latest MRFM imaging experiments were
carried out are remarkably similar to those prevailing in
cryo-electron microscopy, the highest resolution 3D imaging technique
commonly used by structural biologists today. Cryo-electron
microscopy, like MRFM, operates at low temperatures and in high
vacuum, requires long averaging times (on the order of days) to
achieve sufficient contrast, and routinely achieves resolutions of a
few nanometers \cite{Lucic:2005,Subramaniam:2005}. Unlike MRFM,
however, electron microscopy suffers from fundamental limitations that
severely restrict its applicability. Specimen damage by the
high-energy electron radiation limits resolution to 5-10 nm if only a
single copy of an object is available. Averaging over hundreds to
thousands of copies is needed to achieve resolutions approaching 10
\AA \cite{Glaeser:2008}. In addition, unstained images have
intrinsically low contrast, whereas staining comes at the expense of
modifying the native structure.
MRFM has the unique capability to image nanoscale objects in a
non-invasive manner and to do so with intrinsic chemical selectivity.
For this reason the technique has the potential to extend microscopy
to the large class of structures that show disorder and therefore
cannot be averaged over many copies. These structures include such
prominent examples as HIV, influenza virus, and amyloid fibrils.
Virtually all of these complexes are associated with important
biological functions ranging from a variety of diseases to the most
basic tasks within the cellular machinery. For such complexes, MRFM
has the potential not only to image the three-dimensional
macromolecular arrangement, but also to selectively image specific
domains in the interior through isotopic labeling.
While the most exciting prospect for MRFM remains its application to
structural imaging in molecular biology, its applications are not
limited to biological matter. For example, most semiconductors
contain non-zero nuclear magnetic moments. Therefore MRFM may prove
useful for sub-surface imaging of nanoscale electronic devices. MRFM
also appears to be the only technique capable of directly measuring
the dynamics of the small ensembles of nuclear spins that limit
electron spin coherence in single semiconductor quantum dots. Polymer
films and self-assembled monolayers -- important for future molecular
electronics -- are another exciting target for MRFM and its capability
to image chemical composition on the nanoscale. Finally, isotopically
engineered materials are becoming increasingly important for tuning a
variety of physical properties such as transport and spin.
Researchers currently lack a general method for non-invasively imaging
the isotopic composition of these materials
\cite{Shimizu:2006,Shlimak:2001,Kelly:2007}; MRFM techniques could
fill this void.
As force-detected magnetic resonance has traditionally been an
exploratory field, it is possible that applications other than
nanoscale imaging will emerge. Single electron spin detection, for
example, is an important prerequisite for future quantum information
applications \cite{Rugar:2004,Kane:2000}. At the same time, MRFM may
also become an important tool in the study of defects or dopants deep
in materials, or for mapping of spin labels in decorated biological
nanostructures \cite{Moore:2009}. The key components to the
instrument -- in particular the ultrasensitive micromechanical
cantilevers, nanomagnetic tips, and displacement transducers -- could
also find new applications outside the area of spin detection.
\section{Conclusion}
\label{Conclusion}
Over the last two decades, MRFM has led to exciting progress in the
field of ultrasensitive spin detection and high-resolution MRI
microscopy. Starting with early demonstrations in the 1990s of imaging
with resolutions of a few micrometers -- on par with conventional MRI
microscopy -- the technique has progressed to the point where it can
resolve single virus particles and molecular monolayers. Given the
fast pace at which modern nanofabrication technology is evolving, an
improvement of the method down to one-nanometer resolution seems
feasible without major changes to the instrument. This resolution,
which is comparable to what three-dimensional electron microscopy
reaches on biological specimens, would be sufficient to map out the
coarse structure of many macromolecular complexes. The extension of
MRFM to atomic resolution, where atoms in molecules could be directly
mapped out and located in 3D, remains an exciting if technically very
challenging prospect.
\begin{acknowledgments}
The work discussed in this review was only possible because of the
many fruitful experimental collaborations with H. J. Mamin and D.
Rugar of the IBM Almaden Research Center. The authors also thank
these colleagues for their many detailed comments and very helpful
discussions pertaining to this review.
\end{acknowledgments}
\section*{Appendix}
\subsection{Proof of Theorem \ref{thm:CE}}\label{Append:thm1}
\begin{proof}
To compute $\vartheta_k^i$, the resource manager has no knowledge of $u_{[0,k-1]}^i$, but incorporates $\theta_{[0,k]}^i$'s, $i\!\in\!\mathrm{N}$, via $\tilde{\mathcal{I}}_k$. The controller $\mathcal{C}_i$ knows about~$u_{[0,k-1]}^i$, $\vartheta_{[0,k]}^i$ and $\theta_{[0,k]}^i$ via $\mathcal{I}_k^i$, while $u_{[0,k-1]}^i$, $\vartheta_{[0,k-1]}^i$ and $\theta_{[0,k-1]}^i$ are known to the delay controller via $\bar{\mathcal{I}}_k^i$. From (\ref{eq:local_objective})-(\ref{eq:global_OP}), we re-state (10) as
\begin{align}\label{prob:global_OP_outer}
&\min_{\gamma^i,\xi^i,\pi} \!J\!=\!\frac{1}{N}\sum\nolimits_{i=1}^N \E\bigg[\min_{\gamma^i,\pi}J^i(u^i,\vartheta^i)\;-\\\nonumber
&\min_{\gamma^i,\xi^i}\E\!\left[\|x_T^{i}\|_{Q_2^i}^2\!+\!\sum\nolimits_{k=0}^{T-1} \!\|x_k^{i}\|_{Q_1^i}^2\!+\!\|u_k^{i}\|_{R^i}^2\!+\theta_k^{i^\top}\!\Lambda\right]\!\bigg].
\end{align}
where, for the first term of (\ref{prob:global_OP_outer}), we obtain the following due to the one-directional independence of $\vartheta_k^i$ from $u_k^i$
\begin{align}\nonumber
&J^i(u^i,\vartheta^i)\!=\!\E\left[\E\!\left[\sum\nolimits_{k=0}^{T-1} \!\vartheta_{\!k}^{i^\top}\!\!\!\Lambda\Big|\tilde{\mathcal{I}}_k\!\right]\right]+\\\nonumber
&\E\left[\E\!\left[\|x_T^{i}\|_{Q_2^i}^2\!+\!\!\sum\nolimits_{k=0}^{T-1} \!\|x_k^{i}\|_{Q_1^i}^2\!+\!\|u_k^{i}\|_{R^i}^2\!\Big|\mathcal{I}_k^i,\tilde{\mathcal{I}}_k\!\right]\right].
\end{align}
We define $V_k^i=\|x_T^{i}\|_{Q_2^i}^2\!+\!\sum_{t=k}^{T-1} \!\|x_t^{i}\|_{Q_1^i}^2\!+\!\|u_t^{i}\|_{R^i}^2$. Since $\gamma^i$ is a local policy and its decision outcome $u^i$ is independent of all sub-systems $j\!\neq \!i$, and moreover, $\pi$ is independent of all $\gamma_i$'s, the optimal cost-to-go can be expressed as
\begin{align}
\min_{\substack{\gamma^i_{[k,T-1]}\\\pi_{[k,T-1]}}}J^i(u^i,\vartheta^i)=\!&\min_{\pi_{[k,T-1]}}\E\!\bigg[\min_{\gamma^i_{[k,T-1]}} \E\left[V_k^i\big|\mathcal{I}_k^i\right]+\!\!\\\nonumber
&\min_{\pi_{[k,T-1]}}\E\!\Big[\sum\nolimits_{t=k}^{T-1} \!\vartheta_t^{i^\top}\!\Lambda\big|\tilde{\mathcal{I}}_k\Big]\Big|\tilde{\mathcal{I}}_k\bigg]
\end{align}
For $J^i(u^i,\theta^i)$, we know $\bar{\mathcal{I}}_k^i\!\subseteq \!\mathcal{I}_k^i$, $\forall k$, from (\ref{set:controller-information}) and (\ref{set:DC-information}). Moreover, $u_k^i$ and $\theta_k^i$ are measurable w.r.t. $\mathcal{I}_k^i$ and $\bar{\mathcal{I}}_k^i$, respectively. Therefore, employing the tower property\footnote{For a random variable $X$ defined on a probability space with sigma-algebra $\mathcal{F}$, if $\E[X]\!<\!\infty$, then for any two sub-sigma-algebras $\mathcal{F}_1\!\subseteq \!\mathcal{F}_2\!\subseteq\! \mathcal{F}$, $\E[\E[X|\mathcal{F}_2]|\mathcal{F}_1]\!=\!\E[X|\mathcal{F}_1]$ \textit{almost surely}.}, and also using the law of total expectation\footnote{If the random variable $X$ is $\mathcal{F}$-measurable, then $\E[\E[X|\mathcal{F}]]=\E[X]$.}, we re-write (\ref{eq:local_objective}) as
\begin{align*}
&J^i(u^i,\theta^i)\!=\\
&\E\!\left[\E\!\left[\E\!\left[\|x_T^{i}\|_{Q_2^i}^2\!+\!\sum\nolimits_{k=0}^{T-1} \!\|x_k^{i}\|_{Q_1^i}^2\!+\!\|u_k^{i}\|_{R^i}^2\!+\theta_k^{i^\top}\!\Lambda\Big|\mathcal{I}_k^i\right]\Big|\bar{\mathcal{I}}_k^i\!\right]\right].
\end{align*}
Hence, introducing $C_k^i(u^i,\theta^i)=V_k^i+\sum_{t=k}^{T-1}\theta_t^{i^\top}\!\Lambda$, we obtain
\begin{align*}
\min_{\substack{\gamma^i_{[k,T-1]}\\\xi^i_{[k,T-1]}}}\!\!J^i(u^i,\theta^i)\!=\!\E\!\bigg[\min_{\xi_{[k,T-1]}^i}\!\E\!\bigg[\!\min_{\gamma_{[k,T-1]}^i}\!\!\E\!\left[C_k^i(u^i,\theta^i)|\mathcal{I}_k^i\right]\bigg|\bar{\mathcal{I}}_k^i\bigg]\bigg]
\end{align*}
Finally, we can re-express (\ref{prob:global_OP_outer}) as
\begin{align}\label{eq:global_cost_separation}
\min_{\gamma^i,\xi^i,\pi} J&=\!\frac{1}{N}\sum\nolimits_{i=1}^N\E\Bigg\{\!\!\min_{\pi}\E\!\left[\min_{\gamma^i} \E\left[V_0^i\big|\mathcal{I}_0^i\right]\Big|\tilde{\mathcal{I}}_0\right]\\\nonumber
&+\min_{\pi}\E\!\bigg[\sum\nolimits_{k=0}^{T-1} \!\vartheta_k^{i^\top}\!\Lambda\Big|\tilde{\mathcal{I}}_0\bigg]\\\nonumber
&-\!\min_{\xi^i}\E\!\bigg[\min_{\gamma^i}\E\!\bigg[V_0^i\!+\!\sum\nolimits_{k=0}^{T-1} \!\theta_k^{i^\top}\!\Lambda\Big|\mathcal{I}_0^i\bigg]\Big|\bar{\mathcal{I}}_0^i\bigg]\!\Bigg\}.
\end{align}
The sole $\gamma^i$-dependent term in the above expression is $\E[V_0^i|\mathcal{I}_0^i]$, and since this term depends only on the control law $\gamma^i$, its minimization coincides with the standard LQG problem. Therefore, for all $k\in[0,T-1]$, the following control law solves the inner optimization problem $\min_{\gamma^i} \E\left[V_0^i|\mathcal{I}_0^i\right]$
\begin{align}\label{eq:cost-to-go}
u_{[k,T-1]}^{i,\ast}&=\gamma^{i,\ast}_{[k,T-1]}(\mathcal{I}_k^i)=\argmin_{\gamma^i_{[k,T-1]}} \E\left[V_k^{i}|\mathcal{I}_k^i\right]\\\nonumber
&=\argmin_{\gamma^i_{[k,T-1]}}\E\!\left[\|x_T^{i}\|_{Q_2^i}^2\!+\!\!\sum\nolimits_{t=k}^{T-1} \!\|x_t^{i}\|_{Q_1^i}^2\!+\!\|u_t^{i}\|_{R^i}^2\big|\mathcal{I}_k^i\right]\!.
\end{align}
As (\ref{eq:cost-to-go}) is a standard LQG problem, we drop the derivation of $\gamma^{i,\ast}$ for brevity. It is known, however, that the optimal law $\gamma_k^{i,\ast}$ and gain $L_k^{i,\ast}$ in (\ref{eq:CE-law}) and (\ref{eq:CE-gain}) solve the problem (\ref{eq:cost-to-go}). (The full derivation can be found in \cite{8405590}.)
\end{proof}
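For illustration, the following minimal sketch implements the standard backward Riccati recursion that underlies the certainty-equivalent gains referenced in the proof; the expressions (\ref{eq:CE-law}) and (\ref{eq:CE-gain}) in the main text remain the authoritative form, and the numerical example simply reuses the unstable plant parameters and cost weights from the simulation section.
\begin{verbatim}
# Minimal sketch of the backward Riccati recursion behind the standard
# finite-horizon LQG/LQR gains referenced in the proof of Theorem 1.
# See (14)-(15) in the main text for the authoritative expressions.
import numpy as np

def finite_horizon_lqr(A, B, Q1, Q2, R, T):
    """Return the feedback gains L_0..L_{T-1} for the cost
    ||x_T||_{Q2}^2 + sum_k ||x_k||_{Q1}^2 + ||u_k||_R^2."""
    P = Q2.copy()
    gains = []
    for _ in range(T):
        # L_k = (R + B' P_{k+1} B)^{-1} B' P_{k+1} A
        L = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # P_k = Q1 + A' P_{k+1} (A - B L_k)
        P = Q1 + A.T @ P @ (A - B @ L)
        gains.append(L)
    return gains[::-1]  # reorder so that gains[k] corresponds to time k

# Example: unstable plant and cost weights from the simulation section.
A = np.array([[1.01, 0.2], [0.2, 1.0]])
B = np.array([[0.1, 0.0], [0.0, 0.15]])
Q1 = Q2 = R = np.eye(2)
L = finite_horizon_lqr(A, B, Q1, Q2, R, T=20)
print("gain at k=0:\n", L[0])   # textbook convention: u_k = -L_k @ xhat_k
\end{verbatim}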
\subsection{Proof of Theorem \ref{thm:impassiveDC}}\label{Append:thm2}
\begin{proof}
The two assumptions on the independence of $\pi_k$ from $\gamma_k^i$'s, $i\!\in\!\mathrm{N}$, and $\bar{\mathcal{I}}_k^i\!\subseteq \!\mathcal{I}_k^i$ hold, so we begin from (\ref{eq:global_cost_separation}).
Recall that $\vartheta_{[0,k-1]}^i\notin\bar{\mathcal{I}}_k^i$; hence, to decide $\theta_k^i$, the delay controller presumes that the control signal is generated according to $\theta_{[0,k-1]}^i$ rather than $\vartheta_{[0,k-1]}^i$. We have already derived the optimal control policy that minimizes the sole $\gamma^i$-dependent term $V^i_0$ in (\ref{eq:global_cost_separation}); therefore, the optimal impassive delay control policy $\xi_k^{i,\ast}(\bar{\mathcal{I}}_k^i)$ is obtained simply by minimizing the local LQG cost function $J^i(u^{i,\ast},\theta^i)$, i.e., $\forall k\in[0,T-1]$
\begin{equation}\label{eq:proof-delay-control}
\!\theta_{[k,T-1]}^{i,\ast}\!=\argmin_{\xi_{[k,T-1]}^i}\E\!\left[V^{i,\ast}_k(\gamma^{i,\ast}\!,\xi^i)\!+\!\sum\nolimits_{t=k}^{T-1}\theta_t^{i^\top}\!\!\Lambda\big|\bar{\mathcal{I}}_k^i\right]\!.\!
\end{equation}
Recalling Remark 2, we compute $V^{i,\ast}_k(\gamma^{i,\ast},\xi^i)$ at the impassive delay controller side. From the estimator dynamics (\ref{eq:estimator_dynamics}) and system dynamics (\ref{eq:sys_model}), the estimation error $e_k^i$ evolves as
\begin{equation*}
e_k^i=\sum\nolimits_{l=1}^{\tau_k^i}\sum\nolimits_{j=l}^{\tau_k^i}\bar{b}_{j,k}^iA_i^{l-1}w_{k-l}^i,
\end{equation*}
where $b_{j,k}^i$ in (\ref{eq:estimator_dynamics}) is replaced by $\bar{b}_{j,k}^i$ because the delay controller has no knowledge about the variables $\{\vartheta_0^i,\ldots,\vartheta_{k-1}^i\}$ (the plant controller and the collocated estimator have this knowledge). Since $\bar{\mathcal{I}}_k^i\subseteq \mathcal{I}_k^i$, it is, moreover, straightforward to compute $\E[\E[e_k^ie_k^{i^\top}|\mathcal{I}_k^i]|\bar{\mathcal{I}}_k^i]=\E[e_k^ie_k^{i^\top}|\bar{\mathcal{I}}_k^i]$, as follows:
\begin{align*}
\E[e_k^ie_k^{i^\top}\big|\bar{\mathcal{I}}_k^i]&=\sum\nolimits_{l=1}^{\tau_k^i}\sum\nolimits_{j=l}^{\tau_k^i}\bar{b}_{j,k}^i\E[A_i^{l-1}w_{k-l}^i w_{k-l}^{i^\top}A_i^{{l-1}^\top}]\\
&=\sum\nolimits_{l=1}^{\tau_k^i}\sum\nolimits_{j=l}^{\tau_k^i}\bar{b}_{j,k}^iA_i^{l-1}\Sigma_{k-l}^i A_i^{{l-1}^\top},
\end{align*}
where, $\Sigma_{k-l}^i\!=\!\Sigma_{x_0}^i$, $k\!<\!l$, and $\Sigma_{k-l}^i=\Sigma_{w}^i$, $k\geq l$. Having this and noting that $\bar{\mathcal{I}}_0^i=\{A_i, B_i, Q_1^i, Q_2^i, R^i, \Sigma_w^i,\Sigma_{x_0}^i\}$, we can rewrite $\E[V^{i,\ast}_0(\gamma^{i,\ast},\xi^i)|\bar{\mathcal{I}}_0^i]$ as follows
\begin{align}\label{eq:proof-optimal-value-function}
\E&[V^{i,\ast}_0(\gamma^{i,\ast},\xi^i)|\bar{\mathcal{I}}_0^i]=\|\!\E\left[x_0^i\right]\!\|^2_{P_0^i}+\!\sum\nolimits_{t=1}^T \!\Tr (P_t^i \Sigma_w^i)\\\nonumber
&+\Tr(P_0^i\sum\nolimits_{l=1}^{\tau_0^i}\sum\nolimits_{j=l}^{\tau_0^i}\bar{b}_{j,0}^iA_i^{{l-1}^\top}\Sigma_{x_0}^i A_i^{l-1})\\\nonumber
&+\sum\nolimits_{t=0}^{T-1}\Tr(\tilde{P}_t^i \sum\nolimits_{l=1}^{\tau_t^i}\sum\nolimits_{j=l}^{\tau_t^i}\bar{b}_{j,t}^iA_i^{{l-1}^\top}\Sigma_{t-l}^i A_i^{l-1}).
\end{align}
As the only term in the expression above that depends on $\theta_{[0,T-1]}^i$ is the last one, the optimization problem (\ref{eq:proof-delay-control}) can equivalently be expressed, starting from time $k=0$, as
\begin{align*}
&\theta_{[0,T-1]}^{i,\ast}\!=\argmin_{\xi_{[0,T-1]}^i}\E\left[V^{i,\ast}_0(\gamma^{i,\ast},\xi^i)+\sum\nolimits_{t=0}^{T-1}\theta_t^{i^\top}\Lambda\big|\bar{\mathcal{I}}_0^i\right]=\\
&\argmin_{\xi_{[0,T-1]}^i}\sum_{t=0}^{T-1}\!\left[\Tr(\tilde{P}_t^i \sum_{l=1}^{\tau_t^i}\sum_{j=l}^{\tau_t^i}\bar{b}_{j,t}^iA_i^{{l-1}^\top}\!\Sigma_{t-l}^i A_i^{l-1})+\theta_t^{i^\top}\!\Lambda\right]
\end{align*}
The constraints of the problem (\ref{eq:opt-imp-del-cont}) are all linear and $\theta_k^i$ is binary-valued, hence the above problem is a MILP.
Moreover, it is independent of both the noise realizations and $\vartheta_{[0,T-1]}$, thus $\theta_{[0,T-1]}^\ast$ can be computed offline. The constraint $\sum_{l=0}^D\theta_t^i(l)\!=\!1$ ensures that only one delay link is selected per time-step, while the last two constraints ensure consistent indexing of $\bar{b}_{j,k}^i$ for $k\!\geq \!D$ and $k\!<\!D$ (see Corollary 1).
To find $\pi^{\ast}$, we use a procedure similar to that used for computing $\xi^{i,\ast}$, except that $\vartheta_k^i$ is now computed knowing the information $\{\theta_{[0,k]}^{i,\ast},\vartheta_{[0,k-1]}^{i,\ast}\}$, $\forall i$. We compute $\E[V^{i,\ast}_0(\gamma^{i,\ast},\pi)|\tilde{\mathcal{I}}_0]$, which results in an expression similar to the right-hand side of (\ref{eq:proof-optimal-value-function}), except that $\bar{b}_{j,t}^i$ is replaced by $b_{j,t}^i$. Hence, from (\ref{eq:global_cost_separation}), and considering the resource constraint $\sum_{i=1}^N \vartheta_t^i(d)\leq c_d, \forall d\in\mathcal{D}$, and the latency deviation constraint $-\alpha_i\!\leq\!(\vartheta_t^i-\theta_t^{i,\ast})^\top \Delta\!\leq \beta_i, i\in \mathrm{N}$, we derive the optimal resource allocation offline from the following MILP:
\begin{align*}
&\vartheta_{[k,T-1]}^{\ast}\!=\argmin_{\pi_{[k,T-1]}}\frac{1}{N}\sum_{i=1}^N \E\!\bigg[V^{i,\ast}_k(\gamma^{i,\ast}\!,\pi^i)\!+\!\!\sum_{t=k}^{T-1}\vartheta_t^{i^\top}\!\Lambda\big|\tilde{\mathcal{I}}_k\bigg]\!=\\
&\argmin_{\pi_{[k,T-1]}}\frac{1}{N}\!\sum_{i=1}^N\sum_{t=k}^{T-1}\!\bigg[\vartheta_t^{i^\top} \!\!\Lambda\!+\!\sum_{l=0}^{\tau_{t}^i} \sum_{j=l}^{\tau_{t}^i} b_{j,t}^i \Tr(\tilde{P}_{t}^i A_{i}^{{l-1}^\top} \Sigma_{w}^i A_{i}^{l-1})\bigg]\!
\end{align*}
Since $\theta^{i,\ast}_{[0,T-1]}$ is computed offline from (\ref{eq:opt-imp-del-cont}) independent of $\vartheta^{i}_{[0,T-1]}$, we can set $k=0$ above to complete the proof.
\end{proof}
\subsection{Proof of Corollary \ref{corol:performance-comparison}}\label{Append:corol3}
\begin{proof}
The control policy $\gamma^{i,\ast}$ follows (\ref{eq:CE-law}) for both impassive and reactive scenarios, so we only compare the optimal cost values of the joint policies $(\xi^{i,\ast},\pi^\ast)$ derived from Theorems \ref{thm:impassiveDC} and \ref{thm:reactiveDC}. Define $(\bar{\theta}^{i,\ast},\bar{\vartheta}^{i,\ast})$ and $(\tilde{\theta}^{i,\ast},\tilde{\vartheta}^{i,\ast})$, respectively, as the joint optimal impassive and reactive delay control and resource allocation variables over time horizon $[0,T]$. First assume $\bar{\theta}^{i,\ast}\!=\!\tilde{\theta}^{i,\ast}$; then $\bar{b}_{j,t}^i\!=\!\tilde{b}_{j,t}^i, \!\!\; \forall t$ must hold from (\ref{eq:opt-imp-del-cont}) and (\ref{eq:thm3-del-ctrl}), which leads to $\bar{\vartheta}^{i,\ast}\!=\!\tilde{\vartheta}^{i,\ast}$ from (\ref{eq:opt-imp-res-alloc}) and (\ref{eq:thm3-res-manager}).
Since the problems (\ref{eq:opt-imp-del-cont}) and (\ref{eq:thm3-del-ctrl}), and also (\ref{eq:opt-imp-res-alloc}) and (\ref{eq:thm3-res-manager}), coincide in this case, it follows that $J^{i,\ast}_{\text{Re}}\!=\! J^{i,\ast}_{\text{Im}}$ and $J^\ast_{\text{Re}}\!=\! J^\ast_{\text{Im}}$.
Now assume $\bar{\theta}^{i,\ast}\!\neq\!\tilde{\theta}^{i,\ast}$. Due to the fact that the information set $\bar{\mathcal{I}}^i_{[0,T-1]}$ associated with the impassive approach (given in (\ref{set:DC-information-impassive})) is a subset of its counterpart associated with the reactive approach (given in (\ref{set:DC-information})), any optimal solution of the problem (\ref{eq:opt-imp-del-cont}) can also be obtained from the problem (\ref{eq:thm3-del-ctrl}) if it is optimal for the latter.
Hence, if $\bar{\theta}^{i,\ast}\!\neq\!\tilde{\theta}^{i,\ast}$, then $\bar{\theta}^{i,\ast}$ is not the optimal solution of problem (\ref{eq:thm3-del-ctrl}), which implies $J^{i,\ast}_{\text{Re}}(u^{i,\ast}\!,\tilde{\theta}^{i,\ast})\!<\!J^{i,\ast}_{\text{Im}}(u^{i,\ast}\!,\bar{\theta}^{i,\ast})$.
For the resource allocation, assume that $\tilde{\vartheta}^{i,\ast}$ is the optimal solution of the problem (\ref{eq:thm3-res-manager}) such that $\tilde{\vartheta}^{i,\ast}\!\neq\!\bar{\vartheta}^{i,\ast}$ while $J^\ast_{\text{Re}}\!>\! J^\ast_{\text{Im}}$.
Recall that $\bar{\vartheta}^{i,\ast}$ is the optimal resource allocation in response to $\bar{\theta}^{i,\ast}$ computed from (\ref{eq:opt-imp-del-cont}), while we know if $\tilde{\vartheta}^{i,\ast}\!\neq\!\bar{\vartheta}^{i,\ast}$, then $\bar{\theta}^{i,\ast}\!\neq\!\tilde{\theta}^{i,\ast}$.
This, together with $J^\ast_{\text{Re}}\!>\! J^\ast_{\text{Im}}$, implies that the joint policy $(\bar{\theta}^{i,\ast}\!,\bar{\vartheta}^{i,\ast})$ outperforms $(\tilde{\theta}^{i,\ast}\!,\tilde{\vartheta}^{i,\ast})$, which requires $J^{i,\ast}_{\text{Re}}(u^{i,\ast}\!,\tilde{\theta}^{i,\ast})\!>\!J^{i,\ast}_{\text{Im}}(u^{i,\ast}\!,\bar{\theta}^{i,\ast})$ to hold.
This, however, contradicts the previous condition ensuring that if $\bar{\theta}^{i,\ast}\!\neq\!\tilde{\theta}^{i,\ast}$, then $J^{i,\ast}_{\text{Re}}(u^{i,\ast}\!,\tilde{\theta}^{i,\ast})\!<\!J^{i,\ast}_{\text{Im}}(u^{i,\ast}\!,\bar{\theta}^{i,\ast})$, and hence the condition $J^\ast_{\text{Re}}\!>\! J^\ast_{\text{Im}}$ cannot be realized if $\tilde{\vartheta}^{i,\ast}\!\neq\!\bar{\vartheta}^{i,\ast}$.
\end{proof}
\section{CONCLUSION}\label{conclusion}\vspace{-1mm}
In this article, we address the problem of jointly optimal control and networking for multi-loop NCS exchanging data over a shared communication network that offers a range of capacity-limited, latency-varying and cost-prone transmission services. We investigate different awareness scenarios between the cross-layer decision makers and study the effects of the resulting interactions on the structure of the optimal policies.
By formulating a system (social) optimization problem, we derive the jointly optimal policies under various cross-layer awareness models of constant parameters and dynamic variables. We show that higher awareness leads to better social performance but results in more complex optimization problems. In addition, we discuss that tighter sensitivity w.r.t. deviations from the desired local decision variables may lead to better local performance for certain systems; in a constrained setup where multiple systems compete for limited resources, however, it results in higher cost for other systems and eventually degrades the social performance.
The proposed design approach is implemented on a multi-loop NCS, and the simulation results validate our theoretical findings.
\bibliographystyle{ieeetr}
\section{MOTIVATION And INTRODUCTION}
The design and operation of networked control systems (NCSs), wherein multiple control loops exchange information between their sensors, controllers and actuators via a common communication network, requires a major rethinking to respond to the growing requirements from current and future applications.
The introduction of communication technologies that provide demand-driven serviceability with adjustable parameters and prices, together with novel approaches to virtually program network functions and adaptable network features, has created a significant potential to bring control and networking architectures to a whole new level \cite{MOLINA2018407,BORDEL2017156}. This generally means moving from the traditional throughput-oriented and latency-minimizing data transmission with asymptotic-type performance guarantees, to smart data coordination schemes that consider real-time requirements and limitations of both the service providers and service recipients.
In the context of NCSs, this calls for novel sampling, control and resource management architectures that incorporate the wide range of opportunities provided by the network infrastructure, such as computational capability, adaptive service allocation, virtual programmability, adjustable channel reliability and latency, to maximize quality-of-control (QoC), while minimizing the cost of network usage. Emerging NCS applications, such as networked cyber-physical systems (Net-CPS), Internet of things (IoT), autonomous driving and Industry 4.0, often involve a large number of networked entities, each with time-varying requirements to fulfill specific tasks. The concept of ``network'' in such systems has gone beyond a simple shared communication channel to a general representation of evolving inter-layer dependencies (physical, information, and communication layers) \cite{Baras2014ISCCSP}. This creates a large potential to develop novel interactive approaches for real-time distributed sampling, networking and control in a cross-layer fashion, such that the individual entities become aware of the networking architecture, opportunities, and coupling constraints and incorporate them in decision making, while the network is also aware of the demands and the task criticality of the entities and optimally allocates services and adjusts the inter-dependencies.
\subsection{Contributions}
In this article, we propose jointly optimal communication and control policies for a general NCS model consisting of multiple delay-sensitive heterogeneous stochastic control systems closing their sensor-to-controller loops via a shared communication network, under various inter-layer awareness assumptions. Each sub-system is controlled by two local decision makers: a delay-sensitive controller that determines how fast state information should be sent to the plant controller, and a plant controller that maximizes control performance, measured by a linear-quadratic-Gaussian (LQG) cost. Local controllers have access to partial information of their own loop and may have some knowledge of the network parameters but do not have any knowledge about the dynamics and objectives of other sub-systems. The communication network offers various transmission services, for fixed prices, through multiple capacity-limited channels each with a distinct and deterministic latency. Transmission requests from sub-systems are arbitrated by a resource manager to avoid exceeding the link capacities. Resource arbitration is optimally performed such that the average sum of local (sub-system) LQG cost functions undergoes the minimum deviation compared to the resource-unlimited case, over a finite time horizon. We study scenarios each entailing a specific class of inter-layer awareness (one-directional and bi-directional awareness of time-varying and constant parameters) among the three decision makers, and derive the resulting jointly optimal policies. We show that performance of the joint design is associated with the level of delay-sensitivity tolerances and the awareness structure. In general, higher awareness results in lower local and social costs, though the resulting optimization problem becomes more computationally complex.
We also observe that the extent of performance improvement is firmly tied to the particular model of awareness, that is, for specific scenarios the improvements are slight compared to the extra solution complexity, while for others, the improvements are considerable. Interestingly, stricter delay sensitivity (i.e., local sub-systems tolerate only minor deviations from their delay requirements) may result in lower local cost for some specific sub-systems, but higher social cost.
Our major contributions in this article are:
\begin{enumerate}
\item introducing a general model of NCS including heterogeneous control loops and a variety of network services, with evolving interactions between control and network layers leading to enhanced joint performance.
\item investigating various awareness models for control and network layers and studying the interaction effects on the structure and performance of the optimal co-design.
\item deriving jointly optimal policies from awareness-based social optimization problems including performance-complexity comparisons w.r.t. the awareness model.
\end{enumerate}
We addressed a similar problem for a \textit{single-loop} NCS in \cite{8405590}; however, the present problem is far more general. The setup in \cite{8405590} does not include resource management as no contention exists, and interactions between control and network layers, in the previous formulation, reduce to one-directional knowledge of the network service prices.
\subsection{Related works}
The problem of joint control and communication design in NCSs has been an active research topic for the last two decades in both control and communication communities \cite{Baillieul2007IEEEProc, Shakkottai2003Comm}. Two rather distinct perspectives in addressing it have evolved: from the communication perspective where maximizing quality-of-service (QoS) is the major objective, requirements of control systems are often abstracted in the form of transmission rate, delay, and packet loss, with less attention given to the application dynamics and their real-time necessities \cite{Bai2012ICCPS,s8021099}. Numerous design methodologies are proposed including protocols for QoS-enhancing medium access control (MAC) \cite{YIGITEL20111982,Rajandekar2015ITJ,Bi2013MNA}, resource allocation \cite{Letaief2006IWC,1561930}, link scheduling and routing \cite{4399978,7479131}, and queuing management \cite{6614116,6115208}. On the other hand, from the control perspective the aim is to maximize QoC, and the communication network is usually seen as one or more maximum-rate and delay-negligible single-hop channels with some resource management capabilities to resolve contention. Many design approaches for sampling, estimation and control over shared networks are proposed to enhance QoC while reducing the rate of transmission, including event-triggered schemes \cite{FORNI2014490,5510124,SEYBOTH2015392,maity2019optimal}, self-triggered schemes \cite{6425820,7348666}, and adaptive/predictive data transmission and control models \cite{7423697,7798733,6882817}. For more sophisticated models of communication networks with data loss, delay and resource constraints, attempts have been made mostly on co-design architectures that guarantee stability rather than optimality \cite{5409530,7039815,8039513}. Altogether, the efforts have often led to design frameworks that either consider no evolving cross-layer coupling or presume interactions in average form over time, with performance guarantees mostly valid in the asymptotic regime.
New NCS applications, however, include a multitude of heterogeneous systems that need to fulfill various real-time tasks while the network is responsible for coordinating the required type of communication and computation services \textit{per-time}. This urges the development of cross-layer architectures that consider active interactions between distributed components of control and communication layers to be aware of each other's conditions, capabilities, requirements, and limitations to achieve joint optimal quality-of-control-and-service, not only asymptotically but also over finite time horizons. To achieve this, a main issue to address is optimal timeliness, i.e., when is the best time to make a specific action such as sampling, transmission or actuation. This problem is addressed in the control community mainly for data sampling over single-service communication support, leading to optimal event-based techniques to restrict unnecessary transmissions \cite{7497849,6228518,Molin2014}, and prioritized MAC protocols to distribute resources based on urgency \cite{7963084,MAMDUHI2017209}. These approaches consider some measured or observed quantity of the control system, such as estimation error, as the triggering function. For multiple-loop non-scalar NCS, though, finding the optimal triggering law without major simplifications of the network layer is challenging. Moreover, resource allocation is often performed randomly or based on a priori given parameters but not based on dynamic awareness of interacting layers. In addition, the resulting performance of the proposed approaches is often analyzed asymptotically over an infinite horizon.
To the best of our knowledge, a systematic approach that proposes a cross-layer optimal design of control, sampling and resource management strategies to maximize QoC for multi-loop NCSs with a shared network of various service opportunities is not presented in the literature.
\subsection{Notations}
We denote expectation, conditional expectation, transpose, floor and trace operators by $\E[\cdot]$, $\E[\cdot|\cdot]$, $[\cdot]^\top$, $\lfloor\cdot\rfloor$ and $\Tr(\cdot)$, respectively. For $a\geq 0$, define the indicator $\mathbbm{1}(a)\!=\!0$ if $a\!=\!0$, and $\mathbbm{1}(a)\!=\!1$ if $a\!>\!0$. $X \!\sim \!\mathcal{N}(\mu, W)$ represents a multivariate Gaussian distributed random vector~$X$ with mean vector~$\mu$ and covariance $W \!\succ \!0$, where $A\!\succ \!B$ denotes $A\!-\!B$ is positive definite. The $Q$-weighted squared 2-norm of a column vector $X$ is denoted by $\|X\|^2_{Q}\!\triangleq\!X^\top Q X$. A time-varying column vector $X_t^i$ includes an array of variables belonging to sub-system $i$ at time~$t$, while we define
$X_{[t_1, t_2]}^i\!\triangleq\!\{X_{t_1}^i,X_{t_1+1}^i,...,X_{t_2-1}^i,X_{t_2}^i\}$, and
$X^i\!\triangleq\!\{X_{0}^i,X_{1}^i,...\;\}$.
\section{SIMULATION RESULTS}\label{num_res}
We consider an NCS consisting of 10 homogeneous stable and 10 homogeneous unstable sub-systems. The system and input matrices for the unstable and stable groups are $A^u\!=\!\begin{bmatrix} 1.01 & \!\!0.2\\ 0.2 & \!\!1 \end{bmatrix}$, $A^s\!=\!\begin{bmatrix} 0.5 & \!\!0.1\\ 0.6 & \!\!0.8 \end{bmatrix}$, and $B^u\!=\!B^s\!=\!\begin{bmatrix} 0.1 &\!\!0 \\ 0 &\!\! 0.15 \end{bmatrix}$, respectively. The disturbances are i.i.d. Gaussian, distributed as $\mathcal{N}(0,1.5I_2)$. The LQG cost parameters for all sub-systems are identically set as $Q^i_1\!=\!Q^i_2\!=\!R^i\!=\!I_2$, and $T\!=\!20$ is the total time horizon of the simulations.
The network supports the control loops via~$6$ transmission links with delays of $d\!\in\![0,1,2,3,4,5]$ time-steps associated with the cost $\Lambda\!=\![25,17,11,7,4,1]$.
We assume $c_d\!=\!6$, $\forall d$, and $\alpha_i\!=\!\beta_i\!=\!3$, $\forall i\!\in\!\{1,\ldots, 20\}$. Note that $c_d\!=\!6$ satisfies the individual and total capacity constraints (\ref{link-capacity-constraint})~and~(\ref{tot-transmission}), however, does not meet the sufficient feasibility condition~(\ref{feasibility}) for $d\!=\!\{0,5\}$\footnote{According to (\ref{feasibility}), $c_d\!\geq \!10$ for $d\!=\!\{0,5\}$ and $c_d\!\geq \!6$ for $d\!=\!\{1,2,3,4\}$.} and yet is a valid choice for this simulation setup, which shows (\ref{feasibility}) is not a necessary condition.
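As a quick sanity check of these parameters, the short script below (an illustrative sketch, not part of the simulation code used for the figures) confirms the stable/unstable labels via the spectral radii of $A^s$ and $A^u$.
\begin{verbatim}
# Quick check of the simulation parameters: the spectral radius of A^u
# exceeds one (unstable group) while that of A^s is below one (stable group).
import numpy as np

A_u = np.array([[1.01, 0.2], [0.2, 1.0]])
A_s = np.array([[0.5, 0.1], [0.6, 0.8]])
B = np.array([[0.1, 0.0], [0.0, 0.15]])
Sigma_w = 1.5 * np.eye(2)   # disturbance covariance

for name, A in (("A_u", A_u), ("A_s", A_s)):
    rho = max(abs(np.linalg.eigvals(A)))
    print(f"spectral radius of {name}: {rho:.3f}")
# spectral radius of A_u: 1.205  -> open-loop unstable
# spectral radius of A_s: 0.937  -> open-loop stable
\end{verbatim}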
\begin{figure}
\centering
\includegraphics[trim=10 10 20 20, clip,width=.91 \linewidth]{Comparison_color_impassive.eps}\vspace{-2mm}
\caption{Optimal costs for different approaches. MA: with model awareness (Sec.~\ref{sec:optimal-delay-model-aware}), W/o MA: without model awareness (Sec.~\ref{sec:no_model_awareness}), DI: delay-insensitive (Sec.~\ref{sec:weighted_cost}).} \label{fig:cost_comparison}
\vspace*{-4mm}
\end{figure}
We illustrate the optimal delay control and link allocation for each sub-system using the discussed approaches: 1) with model awareness, 2) without model awareness, and 3) delay-insensitive approach, as presented in sections \ref{sec:optimal-delay-model-aware}, \ref{sec:no_model_awareness}, and \ref{sec:weighted_cost}, respectively.
For the first two approaches, we employ both \textit{reactive} and \textit{impassive} methods to perform optimal co-design and compare their outcomes.
As discussed in Corollary \ref{corol:performance-comparison}, we demonstrate that the reactive method performs no worse than the impassive method and may often perform significantly better, due to the dynamic coupling between $\theta$ and $\vartheta$. Since such coupling does not exist in the delay-insensitive case, reactive and impassive methods yield identical results.
In Fig. \ref{fig:cost_comparison}, we illustrate the LQG control and communication costs for the above-mentioned approaches, where we see that the awareness of the constant model parameters leads to a significant performance improvement when compared with no model awareness scheme.
However, as also discussed in Section \ref{sec:no_model_awareness}, the superiority of the reactive approach over the impassive counterpart is far better for the case without model awareness.
In fact, one needs to contemplate whether to employ the reactive approach when the network manager has access to the constant model parameters, due to the insignificant overall performance augmentation at the expense of the extra computational complexity (see Remark~\ref{rem:complexity}).
Fig.~\ref{fig:utimulti_inhomo} shows the transmission link utilization profile (defined in \eqref{eq:link_utilization_numerics}), where we only provide the plot for the impassive and reactive scenarios when the network manager is not aware of the constant model parameters (Section \ref{sec:no_model_awareness}).
\begin{align}\label{eq:link_utilization_numerics}
\rho_d(t) =\frac{\#\text{ of utilizations of link } \ell_d \text{ up to time }t}{N(t+1)}.
\end{align}
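For reference, a small helper that evaluates \eqref{eq:link_utilization_numerics} from an allocation history is sketched below; the allocation array used here is a hypothetical placeholder rather than data from our experiments.
\begin{verbatim}
# Helper computing the link-utilization profile defined above.
# 'alloc[k][i]' holds the index d of the link assigned to sub-system i at
# time k; the history below is a hypothetical placeholder.
import numpy as np

def link_utilization(alloc, num_links):
    """alloc: (T, N) integer array of assigned link indices.
    Returns rho with rho[t, d] = utilization of link d up to time t."""
    T, N = alloc.shape
    counts = np.zeros((T, num_links))
    for t in range(T):
        for d in range(num_links):
            counts[t, d] = np.sum(alloc[: t + 1] == d)
    return counts / (N * np.arange(1, T + 1)[:, None])

rng = np.random.default_rng(0)
alloc = rng.integers(0, 6, size=(20, 20))   # placeholder allocation history
rho = link_utilization(alloc, num_links=6)
print(rho[-1])               # final utilization of the six links
print(rho.sum(axis=1))       # each row sums to one at every time step
\end{verbatim}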
\begin{figure}[t]
\centering
\includegraphics[trim=10 20 15 25, clip, width=1.00 \linewidth]{utilization_no_model_color.eps}
\vspace{-5mm}\caption{Link utilization over time under capacity constraints without model awareness based approach.
Top: reactive method, bottom: impassive method.}
\label{fig:utimulti_inhomo}
\vspace{-4mm}
\end{figure}
According to \eqref{eq:link_utilization_numerics}, $\sum_{d\in\mathcal{D}} \rho_d(t)=1$ at every time $t$, which is also reflected in Fig.~\ref{fig:utimulti_inhomo}. For the case without model awareness, the network manager only cares about the communication cost and hence the cheaper links are utilized, as can be seen in Fig.~\ref{fig:utimulti_inhomo}.
Notice that link 3 is used more than link 4 due to the coupling constraints between $\theta_t$ and $\vartheta_t$ in \eqref{prob:res_alloc_no_knowledge_imp} and \eqref{prob:res_alloc_no_knowledge_re}.
The sub-systems that requested link $\ell_0$ cannot be assigned to any link beyond $\ell_3$ since $\beta_i\!=\!3$.
Thus, the majority of the requests for link $\ell_0$ were assigned to $\ell_3$ and the rest were assigned to $\ell_2$ ($\ell_1$ is more expensive).
Similarly, the majority of the requests for $\ell_5$ were assigned to $\ell_5$ and the rest to $\ell_4$, etc.
We also studied this problem for the case with model awareness, and we noticed that the difference in link utilization between the impassive and reactive approaches is minor (as also corroborated by the cost difference in Fig.~\ref{fig:cost_comparison}).
In fact, the link utilization, in this case, changes only after time $t\!=\!15$.
This observation raises the question of whether it makes sense to adopt the computationally expensive reactive approach over the simple impassive approach for such a small improvement.
Based on this observation, one may be tempted to adopt the reactive approach in an intermittent fashion, i.e., instead of solving \eqref{eq:thm3-res-manager} for every $k$, do so at $k\!=\!t_1,t_2,\ldots,t_\ell$ where $0\!<\! t_1 \!<\!\ldots\!<\!t_\ell\!<\!T$.
An interesting yet challenging research question is how to determine $t_1,\ldots,t_\ell$. One may perhaps adopt an event-based strategy to determine these quantities; we, however, leave this as future research.
Next we study the average deviation between the requested $\theta^\ast$ and the allocated $\vartheta^\ast$, computed by the following formula
\begin{align} \label{E:avg_deviation}
\Delta_{\text{avg}}(t)=\frac{\sum_{i=1}^N\sum_{k=0}^t|(\vartheta^{i,\ast}_k-\theta^{i,\ast}_k)^\top \Delta|}{N(t+1)}.
\end{align}
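A corresponding helper for \eqref{E:avg_deviation} is sketched below, again on placeholder data; it simply accumulates the absolute link-index deviations and normalizes by $N(t+1)$.
\begin{verbatim}
# Helper evaluating the average link deviation defined above.
# theta_req[k][i] and theta_alloc[k][i] store the requested and assigned
# link indices (i.e., theta^T Delta and vartheta^T Delta); the arrays below
# are hypothetical placeholders.
import numpy as np

def avg_deviation(theta_req, theta_alloc):
    """Return the running average deviation for t = 0..T-1."""
    T, N = theta_req.shape
    dev = np.abs(theta_alloc - theta_req).astype(float)
    cum = np.cumsum(dev.sum(axis=1))          # sum over sub-systems, then time
    return cum / (N * np.arange(1, T + 1))

rng = np.random.default_rng(1)
theta_req = rng.integers(0, 6, size=(20, 20))
theta_alloc = np.clip(theta_req + rng.integers(-3, 4, size=(20, 20)), 0, 5)
print(avg_deviation(theta_req, theta_alloc)[-1])
\end{verbatim}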
We report the average deviation result for all three approaches in Fig. \ref{fig:group_avg_deviation_model}.
The figure also shows that the average deviation is generally higher for the delay-insensitive approach compared to both delay-sensitive scenarios (reactive and impassive), confirming the explanations in Section~\ref{sec:weighted_cost}.
\begin{figure}[t]
\centering
\includegraphics[trim=10 0 10 20, clip,width=.935 \linewidth]{Group_avg_theta_variation.eps}\vspace{-1mm}
\caption{Average deviation in the allocated links as computed by \eqref{E:avg_deviation}.
}
\label{fig:group_avg_deviation_model}
\vspace{-5.5mm}
\end{figure}
\section{NCS Model: Control \& Communication Layers}\label{prob_model}
\begin{figure}[t]
\centering
\psfrag{a}[c][c]{\tiny \text{LTI plant}}
\psfrag{b}[c][c]{\tiny \text{plant controller}}
\psfrag{e}[c][c]{\tiny \text{delay controller}}
\psfrag{t}[c][c]{\scriptsize \text{sub-system} $1$}
\psfrag{tt}[c][c]{\scriptsize \text{sub-system} $N$}
\psfrag{d}[c][c]{\tiny $u_{k}^1$}
\psfrag{dd}[c][c]{\tiny $u_{k}^N$}
\psfrag{g}[c][c]{\tiny $\theta_{k}^1$}
\psfrag{h}[c][c]{\tiny $\theta_{k}^N$}
\psfrag{gg}[c][c]{\tiny $\vartheta_{k}$}
\psfrag{x}[c][c]{\tiny $x_{k}^1$}
\psfrag{xx}[c][c]{\tiny $x_{k}^N$}
\psfrag{f}[c][c]{\scriptsize \text{centralized network manager}}
\psfrag{sss}[c][c]{\scriptsize \text{Ack signal}}
\psfrag{m}[c][c]{\tiny $\lambda_0$}
\psfrag{mm}[c][c]{\tiny $\lambda_1$}
\psfrag{n}[c][c]{\tiny $\lambda_2$}
\psfrag{nn}[c][c]{\scriptsize $\ldots$}
\psfrag{s}[c][c]{\tiny $\vartheta_{k}^1$}
\psfrag{ss}[c][c]{\tiny $\vartheta_{k}^N$}
\psfrag{ddd}[c][c]{ $\ldots$}
\psfrag{o}[c][c]{\tiny $\lambda_D$}
\psfrag{p}[c][c]{\tiny $Z^{-2}(x_{k}^N)$}
\psfrag{q}[c][c]{\tiny $Z^{-1}(x_{k}^1)$}
\psfrag{pp}[c][c]{\tiny $\vartheta_{k}^N, x_{k}^N$}
\psfrag{qq}[c][c]{\tiny $\vartheta_{k}^1, x_{k}^1$}
\includegraphics[width=8.6cm, height=5.6cm]{multi-loop.eps}
\caption{Multiple LTI control loops exchange information with their respective controllers over a shared resource-limited communication network that can offer an array of latency-varying transmission services for different prices. ($Z^{-d}$ is the delay operator).}
\label{fig:sys-model}
\vspace{-7mm}
\end{figure}
We consider an NCS consisting of $N$ synchronous stochastic linear time-invariant (LTI) controlled processes exchanging information over a common resource-limited communication network with resource management capabilities (see Fig. \ref{fig:sys-model}). Each process $i\!\in \mathrm{N}\!\triangleq\!\{1,\ldots,N\}$ comprises a physical plant $\mathcal{P}_i$, a delay-sensitivity controller $\mathcal{S}_i$, and a feedback control unit consisting of a state feedback controller $\mathcal{C}_i$ and an estimator $\mathcal{E}_i$. The dynamics of the plant $\mathcal{P}_i$, $i\in\mathrm{N}$, is described by the following stochastic difference equation:\vspace{-1mm}
\begin{equation}
x^i_{k+1}=A_ix^i_k+B_iu^i_k+w^i_k,
\label{eq:sys_model}\vspace{-1mm}
\end{equation}
where $x^i_k\!\in \!\mathbb{R}^{n^i}$ represents sub-system $i$'s state vector at time-step $k\!\in \!\mathbb{N}\cup \{0\}$, $u^i_k \!\in \!\mathbb{R}^{m^i}\!$ denotes the corresponding control signal, $w^i_k\!\in \!\mathbb{R}^{n^i}$ the stochastic exogenous disturbance, and $A_i\!\in \!\mathbb{R}^{n^i\times n^i}$ and $B_i\!\in \!\mathbb{R}^{n^i\times m^i}$ describe the system and input matrices, respectively.
To allow for heterogeneity, $A_i$ and $B_i$ matrices can be different across the NCS, i.e., $A_i\neq A_j$ and $B_i\!\neq\! B_j$, $i,j\!\in \!\mathrm{N}$. The disturbances are assumed to be random sequences with independent and identically distributed (i.i.d.) realizations $w^i_k\!\sim \!\mathcal{N}(0,\Sigma_w^i)$, $\forall k$ and $i\!\in\!\mathrm{N}$, and $\Sigma_w^i\!\succ\!0$. The initial states~$x^i_0$'s are also presumed to be randomly selected from any arbitrary finite-moment distributions with variance $\Sigma_{x_0}^i$. For simplicity, we assume that the sensor measurements are perfectly noiseless copies of the state values\footnote{The results of this article extend, with lengthy but straightforward mathematical efforts, to noisy measurements if noise is an i.i.d. process.}.
\subsection{Communication system model}
To support the information exchange between each plant and its corresponding control unit, a resource-limited communication network exists that provides cost-prone latency-varying transmission services.
More precisely, the communication network consists of a set of multiple distinct one-hop transmission links, represented by $\mathcal{L}\triangleq\{\ell_0,\ell_1,\ldots, \ell_D\}$, where $\ell_d$ denotes the transmission link with deterministic service latency of $d$ time-steps, and $|\mathcal{L}|\!=\!D\!+\!1$. Define the set $\mathcal{D}\!\triangleq\!\{0,1,\ldots,D\}$ and the vector $\Delta\!\triangleq\![0,1,\ldots,D]^\top$. Hence, if $x_k^i$ is sent to the controller $\mathcal{C}_i$ at time-step $k$ through the transmission link $\ell_d$ with $d$-step delay, $d\in\mathcal{D}$, then $x_k^i$ will be delivered to the controller at time-step $k+d$. Each transmission link $\ell_d\in \mathcal{L}$ is assigned a finite-valued service price $\lambda_d \in \mathbb{R}_{\geq 0}$ that is paid by the service recipient. Let $\Lambda\triangleq [\lambda_0,\lambda_1,\ldots,\lambda_D]^\top$ denote the prices assigned to the links in the transmission link set $\mathcal{L}$. The service prices are assigned such that shorter transmission delay induces higher price, i.e.,
$\lambda_0>\lambda_1 >\ldots>\lambda_D\geq 0$.
Denote by $c_d\!\in \!\mathbb{N}$ the transport capacity of a link $\ell_d\in \mathcal{L}$, meaning that the link $\ell_d$ can simultaneously transport at most $c_d$ data packets belonging to $c_d$ distinct sub-systems. The resource constraint can then be stated as
\begin{equation}\label{link-capacity-constraint}
c_d<N, \quad\forall \; d\in\mathcal{D}.
\end{equation}
Although not all sub-systems can transmit through one particular link, we assume that the total capacity of all distinct transmission links is sufficient to service all sub-systems, via multiple transmission links, at every time-step $k\in\{0,1,\ldots\}$, i.e.,
\begin{equation}\label{tot-transmission}
\sum\nolimits_{d\in\mathcal{D}} c_d\geq N.
\end{equation}
\subsection{Distributed policy-makers \& decision variables}
We now introduce the policy makers and their corresponding decision outcomes for the underlying NCS, schematically depicted in Fig.~\ref{fig:sys-model}. The structural properties of the optimal policies will be thoroughly discussed in the next section.
\subsubsection{Delay-sensitivity} At the beginning of each sample cycle $k$, a local controller called ``\textit{delay controller}'' decides on the delay-sensitivity of its corresponding sub-system by selecting one of the transmission links $\ell_d\!\in \!\mathcal{L}$.
We define the binary-valued vector $\theta^i_k\triangleq [\theta^i_k(0),\ldots,\theta^i_k(D)]^{\textsf{T}}$ as the delay controller's decision variable of sub-system $i$ at time-step $k$, where each element of $\theta^i_k$ is determined as follows:
\begin{equation}\label{eq:delay-selector-var}
\theta^i_k(d)=\begin{cases} 1, &\!\!\!\!\!\!\!\!\!\text{link $\ell_d$ is selected to transmit $x_k^i$ at time $k$,} \\ 0, \qquad & \!\!\!\!\!\!\!\!\! \text{link $\ell_d$ is not selected.} \end{cases}
\end{equation}
We assume that each local delay controller selects only one of the transmission links per time-step, therefore, we have
\begin{equation}\label{const1}
\sum\nolimits_{d=0}^D\theta^i_k(d)=1, \quad \forall \; k\in\{0,1,\ldots\},\; \forall \; i\in\mathrm{N}.
\end{equation}
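As a purely illustrative example (with values chosen here only for concreteness), if $D\!=\!3$, the decision $\theta_k^i=[0,1,0,0]^\top$ indicates that the delay controller of sub-system $i$ requests link $\ell_1$, so that $x_k^i$ is intended to reach $\mathcal{C}_i$ at time-step $k+1$ at price $\lambda_1$.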
\subsubsection{Control input} The control unit of each local sub-system includes a feedback controller $\mathcal{C}_i$ and an estimator~$\mathcal{E}_i$, which are assumed collocated. At every time $k$, the control command $u_k^i\!\in\!\mathbb{R}^{m^i}$ is the outcome of a causal and measurable law
$\gamma_k^i(\cdot)$, given the available information at $\mathcal{C}_i$. In the absence of the state information $x_k^i$, the collocated estimator~$\mathcal{E}_i$ may calculate the state estimate $\hat{x}_k^i$ if it is required for the computation of $u_k^i$.
\subsubsection{Resource allocation}
The constraint (\ref{link-capacity-constraint}) implies that if the number of requests to utilize a specific transmission link~$\ell_d$ exceeds the capacity $c_d$, not all requests can be accordingly serviced.
Assume that a centralized network manager coordinates the resource allocation among sub-systems. In case $\sum_{i=1}^N \theta^i_k(d)\!> \!c_d$ for a certain link $\ell_d$, it decides which sub-systems will be serviced via the link $\ell_d$ and which~ones are reassigned to new transmission links. According to~(\ref{tot-transmission}), no scheduled data packet is dropped due to capacity limitation, as there will be another transmission link with free capacity to be assigned. We define the binary-valued vector $\vartheta_k^i\!\triangleq\! [\vartheta_k^i(0),\ldots,\vartheta_k^i(D)]^{\top}$ as the decision outcome of the centralized resource allocation mechanism that determines implementable transmission links for sub-system $i$. The element $\vartheta_{k}^i(d)\!\in\!\{0,1\}$ is similarly defined as in (\ref{eq:delay-selector-var}), except that it is determined by the network manager after receiving the requests from all the sub-systems. If at a time $k$, $\sum\nolimits_{i=1}^N \theta_{k}^i(d)\!\leq \!c_d, \forall d\!\in\!\mathcal{D}$, then $\vartheta_{k}^i \!=\!\theta_{k}^i, \forall i\!\in\!\mathrm{N}$. Otherwise, if $m$ requests are received for a certain link $\ell_d$ such that $m=\sum\nolimits_{i=1}^N \theta_{k}^i(d)> c_d$, new transmission links will be assigned to $m-c_d$ of those requests.
This means that for every sub-system $j$ of those~$c_d$ sub-systems, $\vartheta_{k}^j \!=\!\theta_{k}^j$ holds, while for every sub-system $\bar{j}$ belonging to the remaining set of $m- c_d$ sub-systems, $\vartheta_{k}^{\bar{j}} \!\neq\!\theta_{k}^{\bar{j}}$.
Element-wise, if a sub-system $\bar{j}$ requested a certain link $\ell_{\bar{d}}$, but instead was serviced with a different link $\ell_{\tilde{d}}$, then $\vartheta_{k}^{\bar{j}}(\tilde{d}) \!\neq\!\theta_{k}^{\bar{j}}(\tilde{d})$ and $\vartheta_{k}^{\bar{j}}(\bar{d})\neq\theta_{k}^{\bar{j}}(\bar{d})$, while for the rest of the elements, we have $\vartheta_{k}^{\bar{j}}(d) =\theta_{k}^{\bar{j}}(d)$, $\forall d\in \mathcal{D} \setminus\{\tilde{d},\bar{d}\}$.
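As a purely illustrative example, let $N\!=\!3$, $D\!=\!1$, $c_0\!=\!1$ and $c_1\!=\!2$, and suppose that at some time $k$ all three delay controllers request the zero-delay link, i.e., $\theta_k^1\!=\!\theta_k^2\!=\!\theta_k^3\!=\![1,0]^\top$. Since $m\!=\!3\!>\!c_0$, the network manager can service only one request on $\ell_0$, say $\vartheta_k^1\!=\!\theta_k^1$, and reassigns the remaining two sub-systems to $\ell_1$, i.e., $\vartheta_k^2\!=\!\vartheta_k^3\!=\![0,1]^\top\!\neq\!\theta_k^2,\theta_k^3$; note that (\ref{tot-transmission}) holds here since $c_0+c_1\!=\!3\!\geq\!N$.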
Since the ultimate link assignment is made by the network manager, state information received at the controller at time $k$, denoted by $\mathcal{Y}^i_k$, is determined by $\vartheta^i$. Define $y_{k-d}^i(d)\!=\!x_{k-d}^i$ if $\vartheta^i_{k-d}(d)\!=\!1$, and $y_{k-d}^i(d)\!=\!\emptyset$ if $\vartheta^i_{k-d}(d)\!=\!0$, then
\begin{equation}\label{set:received-state}
\mathcal{Y}^i_k=\{y_{k}^i(0),y_{k-1}^i(1),\ldots, y_{k-D}^i(D)\},
\end{equation}
where, to avoid notational inconvenience, we define $\vartheta_{-1}^i(d)\!=\!\vartheta_{-2}^i(d)\!=\!\ldots\!=\!\vartheta_{-D}^i(d)\!=\!0$ for all $d\in \mathcal{D}$.
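For illustration, let $D\!=\!2$ and suppose, purely for concreteness, that $\vartheta_{k-2}^i\!=\![1,0,0]^\top$, $\vartheta_{k-1}^i\!=\![0,1,0]^\top$ and $\vartheta_{k}^i\!=\![0,0,1]^\top$. Then $y_k^i(0)\!=\!\emptyset$, $y_{k-1}^i(1)\!=\!x_{k-1}^i$ and $y_{k-2}^i(2)\!=\!\emptyset$, so $\mathcal{Y}_k^i\!=\!\{\emptyset,x_{k-1}^i,\emptyset\}$; only the one-step delayed measurement $x_{k-1}^i$ is delivered to $\mathcal{C}_i$ at time $k$, while $x_k^i$ is still in transit over $\ell_2$.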
Out-of-order delivery is a common phenomenon that may occur depending on the selected resource allocation policy. Assume state $x^i_0$ is sent with a delay of 5 time-steps and $x^i_1$ is sent with zero delay; then $x^i_1$ will arrive before $x^i_0$.
However, out-of-order packet arrival will be adequately handled while constructing the state estimate and computing the control input.
If a stale state measurement arrives at the controller while a fresher one is available, the optimal controller uses only the \textit{freshest} one in constructing the optimal control input.
Hence, without delving into the details, one can intuitively confirm that the optimal delay link profile should impose the least communication cost\footnote{Due to the constraint \eqref{const1} each sub-system is forced to pay a communication cost of at least $\lambda_D$ per time-step.} for \textit{outdated measurements}. This will naturally emerge as the solution of the optimization problems described later.
\section{Problem Formulation: Joint Optimization}\label{sec:prob-state}
In this section, we formulate a cross-layer joint optimization problem and discuss its structural characteristics w.r.t. the policy makers.
The three decision makers are 1) the local plant controllers that compute the control inputs $u_k^i,\;i\!\in\!\mathrm{N}$, at time-step $k$, 2) the local delay controllers whose decision outcome $\theta_k^i$ determines the link~$\ell_d\!\in \!\mathcal{L}$ through which~$x_k^i$ will be transmitted, and 3) the resource manager that computes $\vartheta_k^i$ and thereby determines whether $\theta_k^i$ can be accordingly serviced.
We assume that individual control systems have no knowledge of each other's parameters or decision variables. Let $\mathcal{I}_k^i$, $\mathcal{\bar{I}}_k^{i}$, and $\mathcal{\tilde{I}}_k$ denote the sets of accessible information for the plant controller, delay controller, and resource manager, respectively. (These sets are characterized in Section \ref{sec:optimal-co-design} where the information structure at each policy maker is discussed.) Then, at every time $k$, the plant control, delay control, and resource allocation policies are measurable functions of the $\sigma$-algebras generated by their corresponding information sets, i.e., $u_k^i \!=\! \gamma_k^i (\mathcal{I}_k^{i})$, $\theta_k^i \!=\! \xi_k^i (\mathcal{\bar{I}}_k^i)$, and $\vartheta_k\!=\!\pi_k(\mathcal{\tilde{I}}_k)$. Note that, $\gamma^i$ and $\xi^i$ represent local policies corresponding to a specific sub-system $i$, while $\pi$ is computed centrally and includes the resource allocation profile for all $i\!\in\!\mathrm{N}$. The local objective function of each sub-system $i\!\in\!\mathrm{N}$, denoted by~$J^i$, consists of its own LQG part plus the communication cost in average form over the finite horizon $[0,T]$, as follows:
\begin{align}\label{eq:local_objective}
\!\!\!J^i(u^i,\theta^i)\!=\!\E\!\Big[\|x_T^{i}\|_{Q_2^i}^2\!+\!\sum\nolimits_{k=0}^{T-1} \!\|x_k^{i}\|_{Q_1^i}^2\!+\!\|u_k^{i}\|_{R^i}^2\!+\!\theta_k^{i^\top}\!\!\Lambda\Big]\!
\end{align}
where, $Q_1^i\!\succeq \!0$, $Q_2^i\!\succeq \!0$, and $R^i\!\succ \!0$ represent constant weight matrices for the state and control inputs, respectively.
The overall objective for the underlying NCS is to maximize the average performance of all sub-systems under the resource constraint (\ref{link-capacity-constraint}). This cannot simply be obtained by taking the average of the sum of the local cost functions (\ref{eq:local_objective}) because the local decision variable $\theta_k^i$ might not be realized due to the resource limitations. More precisely, the time at which state information is received at a controller might not always be the time decided by its delay controller. In fact, the cost function (\ref{eq:local_objective}) is achievable for a certain sub-system $i$ only if $\vartheta_k^i=\theta_k^i$, $\forall k\!\in\![0,T]$. However, if the capacity of one or more transmission links is exceeded by the number of requests, the resource manager adjusts some of those requests, which eventually changes the realization of the control signal $u_k^i$ and consequently the value of the local cost $J^i(u^i,\theta^i)$.
We formulate the system (commonly called social) cost $J$ as the average difference between the sum of $J^i$'s from the resource manager (given $\vartheta_k^i$'s) and local sub-systems' (given $\theta_k^i$'s) perspectives, i.e., knowing $\vartheta_k\!=\!\pi_k(\tilde{\mathcal{I}}_k\!)$, we have
\begin{equation}
J=\frac{1}{N}\sum\nolimits_{i=1}^N \E\!\Big[J^i(u^i,\vartheta^i)-\min_{u^i,\theta^i}J^i(u^i,\theta^i)\Big],
\label{eq:global_OP}
\end{equation}
where $J^i\!$ is adjusted after resource allocation as
\begin{align}\label{eq:local_objective_var}
\!\!\!J^i(u^i,&\vartheta^i)\!=\!\E\!\Big[\|x_T^{i}\|_{Q_2^i}^2\!+\!\sum\nolimits_{k=0}^{T-1} \!\|x_k^{i}\|_{Q_1^i}^2\!+\!\|u_k^{i}\|_{R^i}^2\!+\!\vartheta_k^{i^\top}\!\!\Lambda\Big]\!
\end{align}
Note that, $J^i(u^i,\theta^i)$ is computed locally independent of the decisions for sub-systems $j\!\neq \!i$, while $J^i(u^i,\vartheta^i)$ is computed after central resource allocation is performed.
The resources are allocated such that, w.r.t. the sub-systems' preferences, the closest possible services are provided and $J$ is minimized.
In addition to the delay controllers that determine the \textit{per-time} sensitivity of the control loops w.r.t. transmission latency, we introduce a constant latency-tolerance bound for each sub-system such that the resource manager allocates a transmission link only within that given bound. To diversify this static sensitivity for each sub-system, we define $\alpha_i$ and $\beta_i$ ($\in \!\mathcal{D}$) representing the maximum allowable delay tolerances. This specifies that a sub-system $i$ can tolerate imposed deviations by the network manager from the selected link $\ell_d$ only within the set $\{d-\alpha_i,\ldots,d,\ldots,d+\beta_i\}$\footnote{To avoid notational inconvenience, the network manager only takes into account the feasible tolerances of this set that also belong to $\mathcal{D}$. Moreover, for a nontrivial set, we assume at least one non-zero $\alpha_i$ and $\beta_j$, $i,j\in\mathrm{N}$.}. The ultimate goal is then finding the optimal policies $\gamma_k^{i,\ast} (\mathcal{I}_k^{i})$, $\xi_k^{i,\ast} (\mathcal{\bar{I}}_k^{i})$ and $\pi_k^\ast (\mathcal{\tilde{I}}_k)$ that jointly minimize the social cost $J$:
\begin{subequations}
\begin{align}\label{prob:global_OP}
&\!\!\min_{\gamma^i,\xi^i,\pi} J\\
\text{\small s. t.}\;\;\;\; & u_k^i = \gamma_k^i (\mathcal{I}_k^{i}),\;\; \theta_k^i = \xi_k^i (\mathcal{\bar{I}}_k^i),\;\;\vartheta_k=\pi_k(\mathcal{\tilde{I}}_k),\\\label{sensit_constrains}
& \!-\alpha_i\leq(\vartheta_k^i-\theta_k^i)^\top \Delta\leq \beta_i, \; i\in \mathrm{N},\\\label{prob:global_OP_last_line}
& \!\!\; \sum\nolimits_{j=1}^N \vartheta_{k}^j(d)\leq c_d, \;d\in \mathcal{D}, \; k\!\in\![0,T-1].
\end{align}
\end{subequations}
Constraint (\ref{sensit_constrains}) specifies that if at time $k$, $\theta_k^i(d)\!=\!1$, then the network manager allocates an available resource only from the set of links $\{\ell_{\max\{0,d-\alpha_i\}}, \ldots,\ell_{\min\{d+\beta_i,D\}}\}$ to sub-system $i$.
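For instance, if $D\!=\!4$, $\alpha_i\!=\!1$, $\beta_i\!=\!2$ and the delay controller selects $\ell_2$, so that $\theta_k^{i\top}\Delta\!=\!2$, then (\ref{sensit_constrains}) restricts the network manager to links with $\vartheta_k^{i\top}\Delta\in\{1,2,3,4\}$, i.e., to $\{\ell_1,\ldots,\ell_4\}$; the numerical values here are chosen purely to illustrate the constraint.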
The ultimate links from the allowable ones are selected by the resource manager such that the social cost $J$ is minimized. Note that problem (\ref{prob:global_OP}) might not have a feasible solution for all $c_d$. We derive a sufficient feasibility condition in the form of a lower bound on $c_d$ in Section \ref{sec:optimal-co-design}.
Solving problem (\ref{prob:global_OP}) is challenging due to the couplings between the decision variables. In fact, $\theta_k^i$ is the best choice, from the perspective of sub-system $i$, to balance its LQG cost against its communication price. However, delay controller decisions may go through changes because of resource limitations. Note that, the control input $u_k^i$ is explicitly affected by $\theta_k^i$ in the absence of the resource limitations, but if $\vartheta_k^i\!\neq \!\theta_k^i$, then $u_k^i$ will have a different~realization. This means that the realization of $u_k^{i,\ast}$ computed~from the cost~(\ref{eq:local_objective}) might differ from the one computed from the cost~(\ref{eq:local_objective_var}) even if both are computed from the~same control law. Moreover, any decision of $\vartheta_k^i$ is clearly $\theta_k$-dependent. Further, as we discuss later, $\theta_{k+1}^i$ might also be a function of $\vartheta_{[0, k]}^i$. Altogether, problem (\ref{prob:global_OP}) is nontrivial due to inter-dependencies and cross-layer constraints, hence we need to identify relevant conditions under which it can be decomposed.
\section{Awareness Models \& Optimal Co-design}\label{sec:optimal-co-design}
Structural properties of the joint optimal policies are correlated with the cross-layer awareness model which characterizes the information sets $\mathcal{I}_k^i, \bar{\mathcal{I}}_k^i, \tilde{\mathcal{I}}_k$. We introduce different awareness models under which the couplings between $u_k^i$, $\theta_k^i$ and $\vartheta_k^i$ are examined. We discuss directed models of awareness for two different sets of information that can be exchanged between the decision makers of both layers: ``constant model parameters'' and ``dynamic variables''. In the rest of the article, awareness of the constant model parameters for the network layer, if assumed, entails the knowledge of $\{A_i, B_i, Q_1^i, Q_2^i, R^i, \Sigma_w^i,\Sigma_{x_0}^i\}, \forall i\!\in\!\mathrm{N}$. Note that, $\{\alpha_i,\beta_i\}$'s are known to the network layer. The local delay and plant controllers are also assumed to have the knowledge of their own model parameters $\{A_i, B_i, Q_1^i, Q_2^i, R^i, \Sigma_w^i,\Sigma_{x_0}^i,\alpha_i,\beta_i\}$ as well as the constant network parameters $\{\Lambda,\mathcal{L}\}$.
\begin{figure}[tb]
\centering
\psfrag{a}[c][c]{\scriptsize \text{Plant}}
\psfrag{aa}[c][c]{\scriptsize \text{controller}}
\psfrag{b}[c][c]{\scriptsize \text{Delay}}
\psfrag{bb}[c][c]{\scriptsize \text{controller}}
\psfrag{d}[c][c]{\scriptsize \text{Network}}
\psfrag{dd}[c][c]{\scriptsize \text{manager}}
\psfrag{ddd}[c][c]{\scriptsize \text{Application layer}}
\psfrag{c}[c][c]{\scriptsize \text{Network layer}}
\psfrag{g}[c][c]{\scriptsize $\vartheta_{k}^i$}
\psfrag{e}[c][c]{\scriptsize $u_{k}^i$}
\psfrag{f}[c][c]{\scriptsize $\theta_{k}^i$}
\psfrag{h}[c][c]{\tiny \text{Network model parameters}}
\psfrag{j}[c][c]{\tiny \text{System model parameters}}
\psfrag{t}[c][c]{\tiny \text{Here cycle $k$ begins}}
\includegraphics[width=6.5cm, height=4.5cm]{layer_decomp.eps}
\caption{Cross-layer interaction model: magenta arrows represent awareness of constant parameters. For network layer, awareness of system parameters, if assumed, includes $\{A_i, B_i, Q_1^i, Q_2^i, R^i, \Sigma_w^i,\Sigma_{x_0}^i,\alpha_i,\beta_i\}$, $\forall i\!\in\!\mathrm{N}$. For control loops, network parameters $\{\mathcal{L},\Lambda\}$ are known. If $\vartheta_k^i$ is available for the delay controller (violet arrow) we call the delay-control policy \textit{reactive}, otherwise, it is called \textit{impassive}.}
\label{fig:interaction_model1}
\vspace{-5mm}
\end{figure}
To discuss awareness of dynamic variables, it is essential to have a clear picture of the order of generating variables in one sample cycle, e.g., $k \!\rightarrow \!k\!+\!1$. At the beginning of a sample time $k$, the system state $x_k^i$ is updated according to the dynamics (\ref{eq:sys_model}), and then the delay controller generates $\theta_k^i$, based on the policy $\xi_k^i(\bar{\mathcal{I}}_k^i)$ to determine the transmission link through which $x_k^i$ is to be communicated. System state $x_k^i$ together with the service request $\theta_k^i$ is then forwarded to the network to be serviced. The resource manager receives this information from all sub-systems and checks whether the number of requests for each link is exceeding its capacity. It then computes $\vartheta_k^i$, according to the policy $\pi_k(\tilde{\mathcal{I}}_k)$, and $x_k^i$ is transmitted through the link determined by $\vartheta_k^i$. The control signal $u_k^i$ is computed from the control law $\gamma_k^i(\mathcal{I}_k^i)$\footnote{In case the information set $\mathcal{I}_k^i$ is not updated, i.e. if no new state information belonging to sub-system $i$ is scheduled to be delivered at time $k$, the control signal is updated based on a model-based estimation of $x_k^i$.}, $x_{k+1}^i$ is afterward updated and the pattern repeats over next samples.
At the controllers, the following awareness model of the dynamic variables is valid throughout the article. Knowledge of the model parameters of sub-system $i$ is assumed for $\mathcal{C}_i$. Recalling (\ref{set:received-state}), the information set $\mathcal{I}_k^i$ at time $k$ is given by
\begin{equation}\label{set:controller-information}
\mathcal{I}_k^i=\{\mathcal{Y}^i_0,... ,\mathcal{Y}^i_k,\theta_0^i,... ,\theta_k^i,\vartheta_0^i,... ,\vartheta_k^i,u_0^i,... ,u_{k-1}^i\}.
\end{equation}
As in Fig.~\ref{fig:interaction_model1}, the information set $\mathcal{I}_k^i$ in (\ref{set:controller-information}) specifies that the plant controllers are aware of the outcomes of the other two policies $\xi_{[0,k]}^i$ and $\pi_{[0,k]}^i$, from $t\!=\!0$ up to the current time $t\!=\!k$. To this end, we assume that a dedicated low-bandwidth and error-free acknowledgement channel exists to inform the controllers at every time $k$ about $\theta_{k}^i$ and $\vartheta^i_{k}$ (see Fig. \ref{fig:sys-model}).
To determine the awareness structure for the resource manager, we consider the following assumption:
\textit{Assumption 1:}
The resource allocation law $\pi_k$ is rendered independent of the local plant control policies $\gamma_{[0,k-1]}^i$, $i\!\in\!\mathrm{N}$.
Assumption 1
declares a one-directional dependence between the plant control and resource allocation policies (see Fig.~\ref{fig:interaction_model1}), i.e., $\gamma_k^i$'s are explicit functions of $\vartheta_k^i$, but $\pi_k$ does not incorporate $u_{[0,k-1]}^i$'s, $i\!\in\!\mathrm{N}$, in determining $\vartheta_k^i$. Although this results in the resource allocation being independent of local control laws, $\pi_k$ depends on $\theta_{[0,k]}^i$ which itself is affected by the control signals.
In other words, the local delay controllers generate $\theta_k^i$'s such that an averaged equilibrium is achieved between maximizing the control performance and minimizing the communication cost. Since $\pi_k$ is an explicit function of $\theta_{[0,k]}^i$'s, the effect of optimizing control performance is indirectly considered in resource allocation. Hence, the explicit dependence between the plant control and the resource manager policies, which would require full knowledge of $u_{[0,k-1]}^i$'s, $i\!\in\!\mathrm{N}$, at the resource manager, is avoided. This assumption, nonetheless, leads to a considerable complexity reduction in computing the optimal policies $\pi_k^\ast$ and $\gamma_k^{i,\ast}$ (Section \ref{subsec:CE}).
Having Assumption 1, we introduce the dynamic variables included in the resource manager's information set $\tilde{\mathcal{I}}_k$, as\vspace{-1mm}
\begin{equation}\label{set:RM-information}
\tilde{\mathcal{I}}_k=\{\theta_0,\ldots,\theta_k,\vartheta_0,\ldots,\vartheta_{k-1}\}.
\end{equation}
We also discuss the resource allocation with (Sec. \ref{sec:optimal-delay-model-aware}) and without (Sec. \ref{sec:no_model_awareness}) knowledge of the control systems' model parameters. For the purpose of comparison, we also discuss the scenario in which the network manager does not take into account the local delay sensitivities in computing $\vartheta_k^i$'s, i.e., it allocates resources among sub-systems knowing neither the constant $\{\alpha_i,\beta_i\}$'s nor $\theta^i_{[0,k]}$'s, $\forall i\in\mathrm{N}$ (see Sec. \ref{sec:weighted_cost}). This comparison is important as it shows how the local and social cost functions change w.r.t. the individual delay sensitivities.
For delay controllers, we introduce two design approaches, so called \textit{impassive} and \textit{reactive} delay control policies, each representing a distinct model of awareness of the dynamic variables (Fig.~\ref{fig:interaction_model1}). We derive the resulting joint optimal delay control and resource allocation policies in Sections \ref{sec:optimal-delay-model-aware} and \ref{sec:no_model_awareness}. Before that, to determine the structure of the optimal plant control policy~$\gamma_k^{i,\ast}$, $i\!\in\!\mathrm{N}$,
we need to introduce the maximum amount of information that can be available at the $i^{\textsf{th}}$ delay controller at a time~$k$\footnote{Later we discuss that (\ref{set:DC-information}) corresponds to the reactive delay control approach and introduce the information set for the impassive approach.}. The set $\bar{\mathcal{I}}_k^i$ contains, at most, information about the following dynamic variables:\vspace{-1mm}
\begin{equation}\label{set:DC-information}
\bar{\mathcal{I}}_k^i=\{\theta_0^i,... ,\theta_{k-1}^i,\vartheta_0^i,... ,\vartheta_{k-1}^i,u_0^i,... ,u_{k-1}^i\}.
\end{equation} \vspace{-8mm}
\subsection{Certainty equivalence and optimal plant controller}\label{subsec:CE}
Having the sets $\mathcal{I}_k^i$, $\tilde{\mathcal{I}}_k$ and $\bar{\mathcal{I}}_k^i$ introduced in (\ref{set:controller-information})-(\ref{set:DC-information}), and recalling Assumption 1, we state the following theorem:
\begin{theorem}\label{thm:CE}
Given $\mathcal{I}_k^i$, $\tilde{\mathcal{I}}_k$ and $\bar{\mathcal{I}}_k^i$ in (\ref{set:controller-information})-(\ref{set:DC-information}) and under Assumption 1, the optimal plant control law $\gamma_k^{i,\ast}$, $i\!\in\!\mathrm{N}$, w.r.t. (\ref{prob:global_OP}) is of certainty equivalence form, with the control inputs computed from the following linear state feedback law
\begin{align}\label{eq:CE-law}
u_k^{i,\ast}&=\gamma_k^{i,\ast}(\mathcal{I}_k^i)=-L_k^{i,\ast} \E[x_k^i|\mathcal{I}_k^i], \quad i\in\mathrm{N},\\\label{eq:CE-gain}
L_k^{i,\ast}&=\left(R^i+B_i^\top P_{k+1}^i B_i\right)^{-1}B_i^\top P_{k+1}^i A_i,
\end{align}
where, $P_T^i\!=\!Q_2^i$ and $P_k^i$ solves the following Riccati equation
\begin{align*}
P_k^i&\!=\!Q_1^i\!+\!A_i^\top \!\left[\!P_{\!k+1}^i\!-\!P_{\!k+1}^iB_i \!\left(R^i\!\!+\!B_i^\top \!P_{\!k+1}^i B_i\right)^{\!-1}\!B_i^\top \!P_{\!k+1}^i\!\right]\!A_i.
\end{align*}
\end{theorem}
\begin{proof}
See the Appendix \ref{Append:thm1}.
\end{proof}
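As a purely illustrative numerical example, consider a hypothetical scalar sub-system with $A_i\!=\!1.2$, $B_i\!=\!1$ and $Q_1^i\!=\!Q_2^i\!=\!R^i\!=\!1$. Then $P_T^i\!=\!1$, and the first backward step of the recursion yields
\begin{equation*}
L_{T-1}^{i,\ast}=\frac{B_iP_T^iA_i}{R^i+B_i^2P_T^i}=\frac{1.2}{2}=0.6,\qquad P_{T-1}^i=1+1.44\Big(1-\frac{1}{2}\Big)=1.72.
\end{equation*}
Note that the gains $L_k^{i,\ast}$ are independent of the realized transmission outcomes; the links selected by $\theta^i$ and $\vartheta^i$ enter the control input only through the estimate $\E[x_k^i|\mathcal{I}_k^i]$.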
\begin{remark}
In the absence of the constraint (\ref{link-capacity-constraint}), the resource allocation becomes redundant as $\vartheta_k^i\!=\!\theta_k^i$, $\forall i\!\in\!\mathrm{N}$ and $\forall k\!\in\![0,T]$. Hence, from (\ref{eq:global_cost_separation}), we have $\min_{\gamma^i,\xi^i,\pi} J=0$.
\end{remark}
\begin{corollary}\label{corl:optimal_value_function}
Under the optimal certainty equivalence control law (\ref{eq:CE-law})-(\ref{eq:CE-gain}), the optimal cost-to-go $V_k^{i,\ast}$ equals
\begin{align}
V_k^{i,\ast}&\!=\|\E\left[x_k^i|\mathcal{I}_k^i\right]\|^2_{P_k^i}\\\nonumber
&\!+\E\!\left[\|e_k^i\|^2_{P_k^i}+\!\sum\nolimits_{t=k}^{T-1}\!\|e_t^i\|^2_{\tilde{P}^i_t}\Big| \mathcal{I}_k^i\right]\!+\!\sum\nolimits_{t=k+1}^T \!\!\Tr (P_t^i \Sigma_w^i),
\end{align}
where, $e_k^i\triangleq x_k^i - \E\left[x_k^i|\mathcal{I}_k^i\right]$, and $\tilde{P}^i_t=Q_1^i+A_i^\top P_{t+1}^iA_i-P_t^i$.
Moreover, the estimator, at time-step $k$, is given as follows
\begin{align}\label{eq:estimator_dynamics}
\!\!\!\!\!\!\E\!\left[x_k^i|\mathcal{I}_k^i\right]&\!=\!\sum\nolimits_{j=0}^{\text{min} \{D,k+1\}}\!b_{j,k}^i \E\!\left[x_k^i|x_{k-j}^i,u_0^i,...,u_{k-1}^i\right]\!,\!\!\!
\end{align}
and, for all $j\in\mathcal{D}$, and $k\geq j$, we have
\begin{align}\label{coeff:b}
b_{j,k}^i&=\prod\nolimits_{d=0}^{j-1}\prod\nolimits_{l=0}^d [1-\vartheta_{k-d}^i(l)][\sum\nolimits_{d=0}^j \vartheta_{k-j}^i(d)].
\end{align}
For, $k<j$, the $b_{0,k}^i,...,b_{k,k}^i$'s are defined as in (\ref{coeff:b}), $b_{k+1,k}^i\!=\!\prod_{d=0}^{k}\prod_{l=0}^d [1\!-\!\vartheta_{k-d}^i(l)]$, and for notational convenience, we define $b_{k+2,k}^i\!=\!...\!=\!b_{D,k}^i\!=\!0$.
\end{corollary}
\vspace{1mm}
\begin{proof}
The proof is similar to the proofs of Theorem 1 and Proposition 1 in \cite{8405590} and hence omitted for brevity.
\end{proof}
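To illustrate the estimator (\ref{eq:estimator_dynamics}) and the coefficients (\ref{coeff:b}), suppose, purely for illustration, that $D\!=\!2$, $k\!\geq\!2$, and $\vartheta_{k-2}^i\!=\![1,0,0]^\top$, $\vartheta_{k-1}^i\!=\![0,1,0]^\top$, $\vartheta_{k}^i\!=\![0,0,1]^\top$. Then $b_{0,k}^i\!=\!\vartheta_k^i(0)\!=\!0$, $b_{1,k}^i\!=\![1-\vartheta_k^i(0)][\vartheta_{k-1}^i(0)+\vartheta_{k-1}^i(1)]\!=\!1$ and $b_{2,k}^i\!=\!0$, so the estimator reduces to $\E[x_k^i|\mathcal{I}_k^i]=\E[x_k^i|x_{k-1}^i,u_0^i,\ldots,u_{k-1}^i]$, i.e., it conditions on the freshest state measurement that has actually arrived by time $k$.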
\begin{remark}
Theorem \ref{thm:CE} shows that the optimal control law is of certainty equivalence form (\ref{eq:CE-law}), yet $u_k^{i,\ast}$, i.e., the control law's realization, is computed based on $\E[x_k^i|\mathcal{I}_k^i]$, which is a function of $\vartheta_{[k-D+1,k]}^i$, see \eqref{eq:estimator_dynamics}.
We discuss in the next section that, if the delay controller is impassive, $V_k^{i,\ast}$ is estimated according to $\theta_{[0,k-1]}^i$. Thus, if at a time $t\!\in\![k\!-\!D,k\!-\!1]$, $\vartheta_t^i\neq \theta_t^i$, the delay controller computes $\E[V_k^{i,\ast}]$ as if $\theta_t^i$ is realized. Hence, $\E[V_k^{i,\ast}(\gamma^{i,\ast},\xi^i)]\!\neq \!\E[V_k^{i,\ast}(\gamma^{i,\ast},\pi)]$, despite similar $\gamma^{i,\ast}$ laws.
\end{remark}
\subsection{Optimal delay control and resource allocation policies}\label{sec:optimal-delay-model-aware}
We now derive optimal delay control and resource~allocation policies $(\xi_k^{i,\ast}\!,\pi_k^\ast)$ under the following two awareness models of the \textit{dynamic variables}. In this section, we assume the constant model parameters of all sub-systems are accessible to the network manager. Resource allocation without knowledge of the constant parameters is studied in Section \ref{sec:no_model_awareness}.
\subsubsection{Impassive delay control}
We call the delay control policy an \textit{impassive process} if the decision on $\theta_k^i$'s is made independent of $\vartheta_{[0,k-1]}^i$, i.e., the delay controller is passive w.r.t. the resource manager's decisions. Hence, it decides on $\theta_k^i$'s knowing nothing about possible re-allocation by the resource manager. Therefore, the information set $\bar{\mathcal{I}}_k^i$ upon which $\theta_k^i=\xi_k^i(\bar{\mathcal{I}}_k^i)$ is computed impassively (see Fig. \ref{fig:info-topology}) becomes
\begin{equation}\label{set:DC-information-impassive}
\bar{\mathcal{I}}_k^i=\{\theta_0^i,... ,\theta_{k-1}^i,u_0^i,... ,u_{k-1}^i\}.
\end{equation}
Note that, although $\vartheta_{[0,k-1]}^i$ is not incorporated in computing $\theta_k^i$, the variable $\vartheta_k^i$ depends on $\{\theta_0, \ldots,\theta_k\}$. Moreover, the results of Theorem \ref{thm:CE} hold for $\bar{\mathcal{I}}_k^i$ in (\ref{set:DC-information-impassive}), as we have $\bar{\mathcal{I}}_k^i\!\subseteq\!\mathcal{I}_k^i$.
\begin{figure}[tb]
\centering
\psfrag{g}[c][c]{\tiny \text{LTI dynamics}}
\psfrag{a}[c][c]{\scriptsize $\bar{\mathcal{I}}_{k}^i$}
\psfrag{b}[c][c]{\scriptsize $\tilde{\mathcal{I}}_{k}$}
\psfrag{c}[c][c]{\scriptsize $\mathcal{I}_{k}^i$}
\psfrag{d}[c][c]{\scriptsize $\xi^i_k$}
\psfrag{f}[c][c]{\scriptsize $\pi_k$}
\psfrag{e}[c][c]{\scriptsize $\gamma_k^i$}
\psfrag{j}[c][c]{\scriptsize $u_{k}^i$}
\psfrag{k}[c][c]{\scriptsize $\vartheta_{k}^i$}
\psfrag{h}[c][c]{\scriptsize $x_{k+1}^i$}
\psfrag{m}[c][c]{\scriptsize $\theta_{k}^i$}
\psfrag{aa}[c][c]{\scriptsize $\bar{\mathcal{I}}_{k+1}^i$}
\psfrag{bb}[c][c]{\scriptsize $\tilde{\mathcal{I}}_{k+1}$}
\psfrag{cc}[c][c]{\scriptsize $\mathcal{I}_{k+1}^i$}
\psfrag{dd}[c][c]{\scriptsize $\xi^i_{k+1}$}
\psfrag{ff}[c][c]{\scriptsize $\pi_{k+1}$}
\psfrag{ee}[c][c]{\scriptsize $\gamma_{k+1}^i$}
\psfrag{jj}[c][c]{\scriptsize $u_{k+1}^i$}
\psfrag{kk}[c][c]{\scriptsize $\vartheta_{k+1}^i$}
\psfrag{hh}[c][c]{\scriptsize $x_{k+2}^i$}
\psfrag{mm}[c][c]{\scriptsize $\theta_{k+1}^i$}
\psfrag{CCC}[c][c]{\tiny \text{Centralized resource manager}}
\includegraphics[width=8cm, height=3.3cm]{info-topology1.eps}
\caption{Awareness model of the impassive delay control approach. Blue arrows represent policies' cross-awareness within one time-step. Red arrows show a policy maker's self-awareness. Green arrows depict state cross-awareness from one time-step to the next.}
\label{fig:info-topology}
\vspace{-5mm}
\end{figure}
\begin{theorem}\label{thm:impassiveDC}
Consider the problem (\ref{prob:global_OP}) and let $\gamma^{i,\ast}, i\!\in \!\mathrm{N}$ follow the certainty equivalence law (\ref{eq:CE-law})-(\ref{eq:CE-gain}). Given $\bar{\mathcal{I}}_k^i$ and $\tilde{\mathcal{I}}_k$ in (\ref{set:DC-information-impassive}) and (\ref{set:RM-information}), the jointly optimal impassive delay control and resource allocation policies are offline solutions of the following constrained mixed-integer linear programs (MILPs)
\begin{align}\label{eq:opt-imp-del-cont}
&\theta_{[0,T-1]}^{i,\ast}=\argmin_{\xi_{[0,T-1]}^i}J^i(\gamma^{i,\ast},\xi_{[0,T-1]}^i(\bar{\mathcal{I}}_{[0,T-1]}^i))=\\\nonumber
&\argmin_{\xi_{[0,T-1]}^i} \sum_{t=0}^{T-1}\!\bigg[\theta_t^{i^\top}\!\Lambda\!+\!\sum_{l=0}^{\tau_{t}^i} \sum_{j=l}^{\tau_{t}^i}\bar{b}_{j,t}^i \textsf{Tr}(\tilde{P}_{t}^i A_{i}^{{l-1}^{\textsf{T}}} \Sigma_{w}^i A_{i}^{l-1})\bigg]\\\nonumber
&\text{s. t.}\quad \;\;\bar{b}_{j,t}^i=\prod\nolimits_{d=0}^{j-1}\prod\nolimits_{l=0}^d [1-\theta_{t-d}^i(l)][\sum\nolimits_{d=0}^j \theta_{t-j}^i(d)],\\\nonumber
& \qquad\;\;\;\sum\nolimits_{l=0}^D \theta_t^i(l)\!=\!1,
\quad\!\!\!\!\sum\nolimits_{j=0}^{\tau_t^i} \bar{b}_{j,t}^i\!=\!1, \quad \!\!\!\!\sum\nolimits_{j=t+2}^{D} \bar{b}_{j,t}^i\!=\!0,
\end{align}
and, \vspace{-3mm}
\begin{align}\label{eq:opt-imp-res-alloc}
&\!\!\!\vartheta_{[0,T-1]}^{\ast}\!=\!\argmin_{\pi_{[0,T-1]}}\!\frac{1}{N}\!\sum_{i=1}^N J^{i}(\gamma^{i,\ast}\!,\pi_{[0,T-1]}(\tilde{\mathcal{I}}_{[0,T-1]}))\!=\!\\\nonumber
&\!\!\!\argmin_{\pi_{[0,T-1]}}\frac{1}{N}\!\sum_{i=1}^N \sum_{t=0}^{T-1}\!\left[\vartheta_t^{i^\top} \!\!\Lambda\!+\!\sum_{l=0}^{\tau_{t}^i} \sum_{j=l}^{\tau_{t}^i}b_{j,t}^i \textsf{Tr}(\tilde{P}_{t}^i A_{i}^{{l-1}^{\textsf{T}}} \Sigma_{w}^i A_{i}^{l-1})\!\right]\\\nonumber
&\!\!\!\text{s. t.} \quad\; -\alpha_i\leq(\vartheta_t^i-\theta_t^{i,\ast})^\top \Delta\leq \beta_i, \; b_{j,t}^i \;\text{as defined in (\ref{coeff:b})},\\\nonumber
&\!\!\!\qquad \quad\sum\nolimits_{i=1}^N \vartheta_{t}^i(d)\leq c_d,\; \forall d\in\mathcal{D}, \; t\in[0,T-1].
\end{align}
where, $\tau_t^i\!\triangleq \!\min\{D,t+1\}$.
\end{theorem}
\begin{proof}
See the Appendix \ref{Append:thm2}.
\end{proof}
Next, we propose (without a proof) a sufficient capacity condition for $c_d, d\!\in \!\mathcal{D}$, ensuring the allocated resources~are within $\{\ell_{\max\{0,d-\alpha_i\}}, \ldots, \ell_d,\ldots,\ell_{\min\{d+\beta_i,D\}}\}$, and the MILP (\ref{eq:opt-imp-res-alloc}) is feasible. Selected $c_d$'s should additionally satisfy (\ref{link-capacity-constraint}) and (\ref{tot-transmission}) to ensure the problem (10) is non-trivial, and avoid packet drop. We show in Section~\ref{num_res} that the condition is not necessary.
\begin{corollary}\label{corol:feasibility}
The MILP problem (\ref{eq:opt-imp-res-alloc}) is feasible if (\ref{tot-transmission}) is satisfied and $\forall d\!\in \!\mathcal{D}$, the following sufficient condition holds
\begin{equation}\label{feasibility}
c_d\geq \bigg\lfloor\!\frac{N}{1\!+\!\frac{1}{N}\left[h(\alpha,\beta)\right]}\!\bigg\rfloor,
\end{equation}
with, $h(\alpha,\beta)\!=\!\sum_{i\in N_1} \mathbbm{1}(d\alpha_i)\!+\!\sum_{j\in N_2} \mathbbm{1}((D-d)\beta_j)\!+\!\mathbbm{1}(d)\sum_{l\in N_3} \mathbbm{1}(d\alpha_l) \!+\!\mathbbm{1}(D-d)\sum_{l\in N_3} \mathbbm{1}((D-d)\beta_l)$, where, $\forall i\!\in\! N_1$, $j\!\in\! N_2$ and $l\!\in \!N_3$, we have $(\alpha_i\!\neq\!0, \beta_i\!=\!0)$, $(\alpha_j\!=\!0, \beta_j\!\neq\!0)$, $(\alpha_l,\beta_l\!\neq\!0)$ and $|N_1\cup N_2\cup N_3|=N$.
\end{corollary}
\subsubsection{Reactive delay control}
We call the delay control policy \textit{reactive} if the decisions on $\theta_k^i$'s are made at every time-step incorporating the knowledge of $\vartheta_{[0,k-1]}^i$. Thus, the information set $\bar{\mathcal{I}}_k^i$ upon which $\theta_k^i=\xi_k^i(\bar{\mathcal{I}}_k^i)$ is computed needs to contain $\vartheta_{[0,k-1]}^i$, hence $\bar{\mathcal{I}}_k^i$ coincides with (\ref{set:DC-information}).
\begin{theorem}\label{thm:reactiveDC}
Consider the optimization problem (\ref{prob:global_OP}). Let $\gamma^{i,\ast}, i\!\in \!\mathrm{N}$ follow the certainty equivalence law (\ref{eq:CE-law})-(\ref{eq:CE-gain}). Given the information sets $\bar{\mathcal{I}}_k^i$ and $\tilde{\mathcal{I}}_k$, respectively, in (\ref{set:DC-information}) and (\ref{set:RM-information}), the optimal reactive delay control law is computed online from the following constrained MILP
\begin{align}\label{eq:thm3-del-ctrl}
&\theta_{[k,T-1]}^{i,\ast}\!=\argmin_{\xi_{[k,T-1]}^i}J^i(\gamma^{i,\ast},\xi_{[k,T-1]}^i(\bar{\mathcal{I}}_{[k,T-1]}^i))=\\\nonumber
&\argmin_{\xi_{[k,T-1]}^i} \sum_{t=k}^{T-1}\!\left[\theta_t^{i^\top}\!\Lambda\!+\!\sum_{l=0}^{\tau_{t}^i} \sum_{j=l}^{\tau_{t}^i}\tilde{b}_{j,t}^i \textsf{Tr}(\tilde{P}_{t}^i A_{i}^{{l-1}^{\textsf{T}}} \Sigma_{w}^i A_{i}^{l-1})\right]\\\nonumber
&\text{s. t.}\quad \!\!\tilde{b}_{0,t}^i=\theta_t^i(0), \quad \tilde{b}_{j,t}^i \leq \sum\nolimits_{l=0}^j \vartheta_{t-j}^i(l),\; j\!\in\!\{1,\ldots,\tau_t^i\},\\\nonumber
&\qquad\sum\nolimits_{l=0}^D \!\theta_t^i(l)\!=\!1, \;\sum\nolimits_{j=0}^{\tau_t^i} \!\tilde{b}_{j,t}^i\!=\!1, \; \sum\nolimits_{j=t+2}^{D} \!\tilde{b}_{j,t}^i\!=\!0, t\!\geq \!k.
\end{align}
where, $\tau_t^i$ and $\tilde{P}_{t}^i$ are similarly defined as in Theorem \ref{thm:impassiveDC}, and
\begin{equation*}
\tilde{b}_{j,t}^i\!=\!\!\Big[[1\!-\!\theta_t^i(0)]\prod\nolimits_{d=1}^{j-1}\prod\nolimits_{l=0}^d [1\!-\!\vartheta_{t-d}^i(l)]\Big]\!\Big[\!\sum\nolimits_{d=0}^j \!\vartheta_{t-j}^i(d)\Big]\!,
\end{equation*}
with $\prod_{d=1}^{0}\prod_{l=0}^d [1-\vartheta_{t-d}^i(l)]\triangleq 1$, for notational convenience.
Moreover, the optimal resource allocation law is computed online from the following constrained MILP
\begin{align}\label{eq:thm3-res-manager}
&\vartheta_{[k,T-1]}^{\ast}=
\argmin_{\pi_{[k,T-1]}}\frac{1}{N}\!\sum\nolimits_{i=1}^N \sum\nolimits_{t=k}^{T-1}\bigg[\vartheta_t^{i^\top} \Lambda\\\nonumber
&\quad\qquad\;\;+\sum\nolimits_{l=0}^{\tau_{t}^i} \sum\nolimits_{j=l}^{\tau_{t}^i} b_{j,t}^i \textsf{Tr}(\tilde{P}_{t}^i A_{i}^{{l-1}^{\textsf{T}}} \Sigma_{w}^i A_{i}^{l-1})\bigg]\\\nonumber
&\text{s. t.} \quad\; -\alpha_i\leq(\vartheta_t^i-\theta_t^{i,\ast})^\top \Delta\leq \beta_i, \; b_{j,t}^i \;\text{as defined in (\ref{coeff:b})},\\\nonumber
&\qquad \quad\sum\nolimits_{i=1}^N \vartheta_{t}^i(d)\leq c_d,\; \forall d\in\mathcal{D}, \; t\in[k,T-1].
\end{align}
\end{theorem}
\vspace{1mm}
\begin{proof}
The derivation of the optimal policies in Theorem \ref{thm:reactiveDC} follows similarly to that of Theorem \ref{thm:impassiveDC} and is hence omitted. The major differences are summarized in Remark \ref{remark:3}.
\end{proof}
\begin{remark}\label{remark:3}
In Theorem \ref{thm:reactiveDC}, the reactive delay controller is aware of $\vartheta_{[0,k-1]}^{i,\ast}$ and incorporates them in deciding~$\theta_{[k,T-1]}^{i,\ast}$. Hence, unlike Theorem \ref{thm:impassiveDC}, here we solve a per-time-step MILP. Technically, the online nature of the MILP (\ref{eq:thm3-del-ctrl}) is reflected in the time-varying $\tilde{b}_{j,t}^i$ that results in a time-varying $\theta_{[k,T-1]}^{i,\ast}$. Comparing it with $\bar{b}_{j,t}^i$ in Theorem \ref{thm:impassiveDC}, we see that for each time $k$, $\theta_{[k,T-1]}^{i,\ast}$ depends on $\vartheta_{[k-D,k-1]}^{i,\ast}$, while in Theorem \ref{thm:impassiveDC} the same decision was dependent only on $\theta_{[k-D,k-1]}^{i,\ast}$. The MILP problem (\ref{eq:thm3-res-manager}) also becomes online as it needs to satisfy the time-varying constraint $-\alpha_i\leq(\vartheta_t^i-\theta_t^{i,\ast})^\top \Delta\leq \beta_i$.
\end{remark}
\begin{remark}\label{rem:complexity}
The optimal impassive delay control and resource allocation variables $(\theta_{[0,T-1]}^\ast,\vartheta_{[0,T-1]}^\ast)$ are shown in Theorem~\ref{thm:impassiveDC} to be offline solutions of the MILPs (\ref{eq:opt-imp-del-cont}) and (\ref{eq:opt-imp-res-alloc}), while the same variables of the reactive approach are solved online from the MILPs (\ref{eq:thm3-del-ctrl}) and (\ref{eq:thm3-res-manager}), as in Theorem~\ref{thm:reactiveDC}.
Based on their formulations, the impassive approach requires an MILP of complexity $\mathcal{O}(NdT)$ whereas the reactive approach requires an MILP of complexity $\mathcal{O}(NdT^2)$. This confirms that both approaches incur linear complexity growth w.r.t. the number of sub-systems and the number of transmission links. However, the complexity of the reactive approach grows quadratically with the time horizon length while the respective growth rate for the impassive approach is linear\footnote{A less complex scenario can be discussed when the control systems decide on a desired transmission link not for a single time-step but for a time interval. The joint optimal solution for such a scenario can be derived similarly to the results of this article yet the computational complexity is~reduced. The social cost, however, will be higher as constraints are per interval.}.
\end{remark}
\begin{remark}
According to (\ref{eq:estimator_dynamics}), the state estimation at the controller is performed using the freshest received state information, hence, if an outdated state arrives while a fresher one is available, the former will not be used. In addition, both local and social objective functions (\ref{eq:local_objective})-(\ref{eq:global_OP}) include communication costs. Therefore, to reduce the total cost, the delay controllers and the resource manager try to avoid transmission decisions that lead to out-of-order delivery of state information. This is reflected in the formulated MILPs in Theorems \ref{thm:impassiveDC} and~\ref{thm:reactiveDC}. Out-of-order delivery is, however, not always avoidable due to the constraint (\ref{const1}) that forces each sub-system to select one delay link $\ell_d\!\in\!\mathcal{L}$ while the maximum delay $D$ is finite. Intuitively, many of the transmissions with $D$-step delay would not have been executed if the sub-systems had the option to remain open-loop and select \textit{no transmission}. Hence, outdated information appearing at subsequent time-steps is discarded if fresher data exists.
\end{remark}
Corollary \ref{corol:performance-comparison} below shows that, although the reactive approach requires more computation, it outperforms the impassive approach in terms of both local and social performances.
\begin{corollary}\label{corol:performance-comparison}
Let the performance of the local policy co-design $(\gamma^{i,\ast},\xi^{i,\ast},\pi^\ast)$ for the impassive and reactive approaches be denoted, respectively, by $J^{i,\ast}_{\text{Im}}$ and $J^{i,\ast}_{\text{Re}}$, defined in (\ref{eq:local_objective}), and also denote the social performance of the overall joint design $(\gamma^{\ast},\xi^{\ast},\pi^\ast)$ by $J^\ast_{\text{Im}}$ and $J^\ast_{\text{Re}}$, defined in (\ref{eq:global_OP}). Let $\gamma^{i,\ast}$, $\xi^{i,\ast}$ and $\pi^\ast$ of the impassive approach be computed as (\ref{eq:CE-law}), (\ref{eq:opt-imp-del-cont}) and (\ref{eq:opt-imp-res-alloc}), and of the reactive approach as (\ref{eq:CE-law}), (\ref{eq:thm3-del-ctrl}) and (\ref{eq:thm3-res-manager}), respectively. Then, $J^{i,\ast}_{\text{Re}}\leq J^{i,\ast}_{\text{Im}}$ and $J^\ast_{\text{Re}}\leq J^\ast_{\text{Im}}$.
\end{corollary}
\begin{proof}
See the Appendix \ref{Append:corol3}.
\end{proof}
\subsection{Optimal resource allocation without model awareness} \label{sec:no_model_awareness}
In an NCS, the individual entities may not be willing to share the specifications of their dynamical model or their objective functions with the communication service provider. Within our problem formulation, this essentially means that the network manager does not have the knowledge of constant parameters $\{A_i, B_i, Q_1^i, Q_2^i, R^i, \Sigma_w^i,\Sigma_{x_0}^i\}$, $i\in \mathrm{N}$. Technically, having no knowledge of the constant parameters (except $\alpha_i,\beta_i$), the local cost functions $J^i$ are not computable for the network manager; hence the optimal resource allocation policy cannot be obtained from the problem (\ref{prob:global_OP}). More precisely, although the local policies $\gamma^{i,\ast}$'s and $\xi^{i,\ast}$'s can still be computed from (\ref{eq:CE-law}), (\ref{eq:opt-imp-del-cont}), and (\ref{eq:thm3-del-ctrl}), for the impassive and reactive approaches, respectively, $\pi^\ast$ cannot be obtained from either of the problems (\ref{eq:opt-imp-res-alloc}) and (\ref{eq:thm3-res-manager}). Let the information set $\tilde{\mathcal{I}}_k$ of the network manager be defined as in (\ref{set:RM-information}) but excluding the knowledge of the constant parameters of all sub-systems except the $\alpha_i,\beta_i$'s. Then the best the network manager can do is to allocate resources such that the average communication cost is minimized while, given the $\alpha_i, \beta_i$'s, the deviation between the delay control and resource allocation decisions remains within the tolerance bounds; this objective is the first term in the MILPs (\ref{eq:opt-imp-res-alloc}) and (\ref{eq:thm3-res-manager}).
Hence, the optimal resource allocation for the impassive approach will be obtained from
\begin{align}\label{prob:res_alloc_no_knowledge_imp}
&\vartheta_{[0,T-1]}^{\ast}=\argmin_{\pi_{[0,T-1]}}\frac{1}{N}\sum\nolimits_{i=1}^N \sum\nolimits_{t=0}^{T-1}\vartheta_t^{i^\top} \!\Lambda\\\nonumber
&\text{s. t.} \quad\; -\alpha_i\leq(\vartheta_t^i-\theta_t^{i,\ast})^\top \Delta\leq \beta_i, \; i\in\mathrm{N},\\\nonumber
&\qquad \quad\sum\nolimits_{i=1}^N \vartheta_{t}^i(d)\leq c_d,\; \forall d\in\mathcal{D}, \; t\in[0,T-1],
\end{align}
and for the reactive approach, is obtained from
\begin{align}\label{prob:res_alloc_no_knowledge_re}
&\vartheta_{[k,T-1]}^{\ast}=\argmin_{\pi_{[k,T-1]}}\frac{1}{N}\sum\nolimits_{i=1}^N \sum\nolimits_{t=k}^{T-1}\vartheta_t^{i^\top}\Lambda\\\nonumber
&\text{s. t.} \quad\; -\alpha_i\leq(\vartheta_t^i-\theta_t^{i,\ast})^\top \Delta\leq \beta_i,\; i\in\mathrm{N},\\\nonumber
&\qquad \quad \sum\nolimits_{i=1}^N \vartheta_t^i(d)\leq c_d, \; \forall d\in\mathcal{D}, \; t\in [k,T-1],
\end{align}
where, $\theta_t^{i,\ast}$ in (\ref{prob:res_alloc_no_knowledge_imp}) is the solution of the impassive approach (\ref{eq:opt-imp-del-cont}), while in (\ref{prob:res_alloc_no_knowledge_re}) it is the solution of the reactive approach (\ref{eq:thm3-del-ctrl}).
From (\ref{prob:res_alloc_no_knowledge_imp}) and (\ref{prob:res_alloc_no_knowledge_re}), in the absence of the constant model parameters the resource manager only optimizes the communication cost while ensuring that the allocated resources remain within the sensitivity constraint (\ref{sensit_constrains}). This results in a solution for $\vartheta$ that tends to select the transmission links that incur the least communication cost, ignoring that such selections may severely affect the control cost.
To counter that, in the reactive approach, where the delay controller can adjust its link selection profile in response to the resource allocation policy, each sub-system drastically changes its $\theta^{i,\ast}_k$ for future time-steps, requesting faster links in order to reduce the control cost.
Assume a sub-system asks for a fast link, e.g., with zero delay, due to its task criticality; however, since the network manager cannot estimate the control cost, it does not recognize the urgency and allocates a higher-latency transmission link (say $d\!=\!2$) that optimizes only the communication cost. The sub-system will then be forced to select a low-delay link again since its past request was not served accordingly. This approach thus leads to a higher total cost of control and communication compared to the scenario in which the resource manager knows the constant model parameters.
Furthermore, when the constant model parameters are assumed unknown, the reactive approach performs significantly better than its impassive counterpart: the sub-systems are generally unhappy with this agnostic resource allocation and hence respond with a $\theta_k^\ast$ significantly different from the prescribed $\vartheta_k^\ast$, which in turn leads to a $\vartheta^\ast_{k+1}$ very different from $\vartheta^\ast_k$.
\subsection{Delay-insensitive optimal resource allocation}\label{sec:weighted_cost}
For the purpose of benchmarking and comparing the two methods presented in the previous sections, we propose another ad-hoc approach by extending the work of \cite{8405590} to a multi-agent scenario.
More specifically, the approach presented in this section adopts a formulation that does not consider the delay sensitivity and is instead solely concerned with the capacity constraint.
This means that the resource manager ignores the knowledge of $\theta_{[0,k]}^i$ and $\{\alpha_i,\beta_i\}$'s, $i\!\in\!\mathrm{N}$, but knows the constant model parameters of all sub-systems.
We define constant weights $w_i \!>\!0$ such that $\sum_{i=1}^N w_i \!=\!1$. The network manager then prioritizes each sub-system based on $w_i$ and optimizes the MILP at every time-step $k$, i.e.,
\begin{align} \label{prob:res_alloc_weighted_re}
&\vartheta_{[k,T-1]}^{\ast}\!=\argmin_{\pi_{[k,T-1]}}\sum_{i=1}^N w_i\E\!\bigg[\!V^{i,\ast}_k(\gamma^{i,\ast}\!,\pi^i)\!+\!\!\sum_{t=k}^{T-1}\vartheta_t^{i^\top}\!\Lambda\big|\tilde{\mathcal{I}}_k\bigg]\!=\nonumber \\\nonumber
&\argmin_{\pi_{[k,T-1]}}\!\sum_{i=1}^N\sum_{t=k}^{T-1}\!w_i\bigg[\vartheta_t^{i^\top} \!\!\Lambda\!+\!\sum_{l=0}^{\tau_{t}^i} \sum_{j=l}^{\tau_{t}^i} b_{j,t}^i \textsf{Tr}(\tilde{P}_{t}^i A_{i}^{{l-1}^{\textsf{T}}} \Sigma_{w}^i A_{i}^{l-1})\bigg]\!\\
&\text{s. t.}
\quad \sum\nolimits_{i=1}^N \vartheta_{t}^i(d)\leq c_d,\; \forall d\in\mathcal{D}, \; t\in[k,T-1].
\end{align}
Notice that, since there is no coupling between $\vartheta_t$ and $\theta_t$, in contrast to the formulations in \eqref{eq:thm3-res-manager} and \eqref{prob:res_alloc_no_knowledge_re}, $\vartheta^{\ast}_{[k,T-1]}$ can be found from $\vartheta^{\ast}_{[0,T-1]}$ without solving \eqref{prob:res_alloc_weighted_re} for all $k$.
In fact if $\vartheta^{\ast}_{[0,T-1]}$ is the solution of \eqref{prob:res_alloc_weighted_re} for $k=0$, then the part $\vartheta^{\ast}_{[t,T-1]}$ of $\vartheta^{\ast}_{[0,T-1]}$ is the solution of \eqref{prob:res_alloc_weighted_re} for any $k=t$.
Furthermore, any feasible solution of \eqref{eq:thm3-res-manager} is a feasible solution for \eqref{prob:res_alloc_weighted_re}, and hence the delay-insensitive approach often results in a lower social cost than the delay-sensitive MILP in \eqref{eq:thm3-res-manager}.
However, the lower social cost in this approach is obtained at the expense of higher deviations between the desired links and the allocated ones since no constraint of the form $-\alpha_i\leq(\vartheta_t^i-\theta_t^{i,\ast})^\top \Delta\leq \beta_i$ exists to restrict the deviation between $\vartheta_t^i$ and $\theta_t^{i,\ast}$.
Hence, the social performance is expected to improve; however, certain individual sub-systems suffer as their link allocations are far from the ones requested. This trade-off needs to be attended to if the resource manager is to remain sufficiently responsive to the timeliness sensitivity of the local sub-systems.
|
1,108,101,563,684 | arxiv | \section{Abstract}
In this paper we examine the descriptive potential of a combinatorial data structure known as \textbf{Generating Set} in constructing the boundary maps of a simplicial complex. By refining the approach of \cite{Dumas} in generating these maps, we provide algorithms that allow for relations among simplices to be easily accounted for. In this way we explicitly generate faces of a complex only once, even if a face is shared among multiple simplices. The result is a useful interface for constructing complexes with many relations and for extending our algorithms to $\Delta$-complexes. Once we efficiently retrieve the representatives of "living" simplices i.e., of those that have not been related away, the construction of the boundary maps scales well with the number of relations and provides a simpler alternative to JPlex. We finish by noting that the generating data of a complex is equivalent in information to its incidence matrix and we provide efficient algorithms for converting from an incidence matrix to a Generating Set.\newline
\noindent Keywords: topological data analysis; boundary maps; incidence matrix; simplicial complex
\section{Introduction}
We use \cite{hatcher} as a background reference. Let $X = \bigcup_{k = 0}^N X_k$ be a simplicial complex where $X_k$ is the set of $k$-dimensional simplices and $N$ is the maximum dimension of a simplex in $X$. If we denote by $C_k$ the free Abelian group generated by the set $X_k$, the boundary map $\delta_k:C_k\rightarrow C_{k-1}$ is the linear map defined on a $k$-simplex $\alpha=\{v_0,v_1,\dots v_k\}$ by the linear combination
$$\delta_k(\alpha) = \sum_{i = 0}^k (-1)^{i}\alpha_i$$
where $1\le k\le N$ and $\alpha_i=\alpha - \{v_i\}$. For $k>N$, we define $\delta_k$ to be the zero map. We encode this mapping into a matrix $M_k$ by letting each column represent a $k$-simplex $\alpha$ in $X_k$ and each row a $(k-1)$-simplex $\alpha_i$ in $X_{k-1}$ such that the column entry for $(\alpha_i,\alpha)$ is the coefficient $(-1)^i$ of the linear combination $\delta_k(\alpha)$ above and zero otherwise. It is easy to show that $\delta_{k-1}(\delta_{k}(\alpha)) = 0$ for all $k$-simplices $\alpha$, i.e. the element $\delta_k(\alpha)\in C_{k-1}$ is a $(k-1)$-cycle. \newline
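For example, for a single $2$-simplex $\alpha=\{v_0,v_1,v_2\}$ we get $\delta_2(\alpha) = \{v_1,v_2\} - \{v_0,v_2\} + \{v_0,v_1\}$, so the column of $M_2$ corresponding to $\alpha$ has entries $+1$, $-1$, $+1$ in the rows indexed by $\{v_1,v_2\}$, $\{v_0,v_2\}$ and $\{v_0,v_1\}$, respectively. Applying $\delta_1$ to this chain gives $(v_2 - v_1) - (v_2 - v_0) + (v_1 - v_0) = 0$, illustrating that the boundary of a boundary vanishes.\newline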
Constructing a matrix $M_k$ for each dimension $k$ allows one to compute the homology groups of the chain complex $C=\{C_k\}_{k=0}^N$. The $k$-th homology group is defined as the following quotient: $$H_k(C)=\ker(M_k)/{\rm im}(M_{k+1})$$
where $\ker(M_k)$ is the kernel of $\delta_k$, i.e. all $k$-chains $\alpha$ such that $\delta_k(\alpha) = 0$, and ${\rm im}(M_{k+1})$ is the image of all $(k+1)$-chains under $\delta_{k+1}$. This quotient is valid because, as we remarked before, the boundary of a boundary is always zero, hence the boundary of any $(k+1)$-simplex must itself have a trivial boundary under $\delta_k$, and therefore ${\rm im}(\delta_{k+1})\subseteq \ker(\delta_k)$. \newline
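As a small example, consider the simplicial circle consisting of the three vertices $\{a\},\{b\},\{c\}$ and the three edges $\{a,b\},\{a,c\},\{b,c\}$, with no $2$-simplices. The matrix $M_1$ has rank $2$, so $\ker(\delta_1)$ is one-dimensional, while ${\rm im}(\delta_2)=0$; hence $H_1\cong\mathbb{Z}$ and the first Betti number is $1$, detecting the single loop.\newline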
Once the boundary matrix $M_k$ is constructed, the homology groups can then be computed by Gaussian row reduction; in particular, taking the codimension of the image ${\rm im}(\delta_{k+1})$ inside the kernel retrieves the $k$-th Betti number of the complex, i.e. the number of $k$-dimensional ``holes'' in $C$. As Carlsson points out in \cite{Carlsson} in his seminal paper on topological data analysis (TDA), the Gaussian method is in general too inefficient to be used on large scale complexes. However, if one only desires the Betti numbers of $C$, then one can alternatively use the Smith normal form of the boundary matrices, which allows for the computation of all Betti numbers at once and is much more efficient. Algorithms for computing Betti numbers have been studied in depth by others, and in fact much that can be done in the field of topological data analysis relies on efficient computation of Betti numbers over filtered complexes as in \cite{Dumas, Jager}. Once we have the Betti numbers, another tool known as ``barcodes'' can be developed as a nice way of representing topological features at multiple time steps or distance thresholds; these features are often referred to as ``persistent'' in our data, as they remain invariant over relatively large intervals of the parameter. See \cite{Ghrist}.
Here however we do not discuss such algorithms in detail, but instead we focus on the generation of the boundary matrix itself. In \cite{Dumas}, some of the difficulties in generating a boundary matrix efficiently are discussed along with a process for trying to generate the boundary map while avoiding the repeat work of the naive algorithm that stems from many simplices sharing the same sub-complex. We first provide a recursive algorithm that can be used to generate any set of independent complexes without repeats. We then expand upon this algorithm to generate any complex with work proportional to that of generating complexes of independent simplices. To do this as generally as possible we introduce the concept of generating sets, which are a collection of data that describes our complex in a way that is completely combinatorial. One benefit of such a description is the relative ease of describing a type of cellular complex known as the $\Delta$-complex. This is made possible by the construction of a relational map between simplices, which can be deduced from our combinatorial descriptors. We provide algorithms for constructing the relational map, and then in turn the boundary matrix by a method that does not require the explicit generation of any maximal simplex more than once. This gives an algorithmic complexity that improves with the number of relations we have, and also allows us to use a convenient high level description of our complex. We conclude by discussing algorithms for constructing a generating set from the incidence matrix of a complex.
\section{Constructing Boundaries from Generating Sets}
\subsection{Generating Set:}
We define the \textbf{Generating Set} of a complex $C$ to be a pair of sets $G = (S,R)$. The set $$S = \{(n_1,d_1),(n_2,d_2),\dots,(n_k,d_k)\}$$ with $d_i<d_j$ for $i<j$ is a set of tuples where for each tuple $(n_i,d_i)$ there are $n_i$ maximal simplices of dimension $d_i$ in $C$. By \textbf{maximal simplices} we mean simplices that are not the face of another simplex in $C$. The set $R= \{r_1,r_2,\dots r_m\}$ is a set of relations among faces of maximal simplices where $r_i = (\alpha_1,\dots \alpha_p)$ with all $\alpha_j\in r_i$ of the same dimension. Observe that the number of $k$-simplices generated by $S$ can be calculated by the following function:
$$ j(S,k) = \sum_{(n_i,d_i)\in S}n_i\binom{d_i+1}{k+1},$$ as a maximal simplex of dimension $d_i$ has $d_i+1$ vertices and hence contains $\binom{d_i+1}{k+1}$ faces of dimension $k$.
We may also want to calculate $j(S_m,k)$ by the same formula but taken over the set $S_m$ of all pairs $(n_i,d_i)$ with $d_i \leq m$ for a fixed $m$.
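For instance, if $S=\{(2,2)\}$ (two maximal $2$-simplices, as in the torus example below), then $j(S,2)=2$, $j(S,1)=2\binom{3}{2}=6$ and $j(S,0)=2\binom{3}{1}=6$, i.e., two triangles, six edges and six vertices are generated before any relations are applied.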
A Generating Set can provide a nice interface for constructing certain complexes known as $\Delta$-complexes. Take the complex of the torus as an example. One way to construct it inductively (as JPlex does) is by first adding all necessary vertices, then drawing edges between certain vertices, and finally filling up faces bounded by three edges. This method, while computationally efficient, is tedious, and also does not allow for much simplification of the complex. One valid generating set for the torus would be the following:
$$ S = \{(2,2)\}, R = \{\{(0,1),(4,5)\},\{(0,2),(3,5)\},\{(1,2),(3,4)\}\}$$
As long as we identify all the faces listed in $R$ (with all sub-simplices of related simplices identified as well), this is sufficient to construct the boundary maps of the $\Delta$-complex of the torus. \newline
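To see the bookkeeping in action: before any identifications, $S=\{(2,2)\}$ generates $j(S,0)=2\binom{3}{1}=6$ vertices, $j(S,1)=2\binom{3}{2}=6$ edges, and $j(S,2)=2\binom{3}{3}=2$ triangles. The three edge relations in $R$, together with the vertex identifications they induce, collapse these to one vertex, three edges, and two triangles, which is exactly the standard $\Delta$-complex structure on the torus. \newline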
To generate the maximal simplices to which we later glue faces to make our complex, we provide the following algorithm:\newline
\begin{algorithm}
$P(S)$:\newline
$N = j(S,0)$\newline
$S^* = \emptyset$\newline
$c = 0$\newline
\For{$s_i = (n_i,d_i)\in S$}{
\For{$1\to n_i$}{
add $\{c,\dots,c+d_i\}$ to $S^*$\newline
$c = c + d_i + 1$\newline
}
}
return $S^*$
\end{algorithm}
The algorithm $P(S)$ simply partitions out the vertices into tuples that form all the maximal simplices.
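For readers who prefer executable code, the following Python rendering of $P(S)$ (ours, with $0$-based vertex labels as in the torus example) makes the partitioning explicit.
\begin{verbatim}
# Partition fresh vertex labels into the vertex sets of the maximal
# simplices described by S, exactly as in the algorithm P(S) above.
def P(S):
    # S is a list of pairs (n_i, d_i): n_i maximal simplices of dimension d_i
    S_star = []
    c = 0
    for (n_i, d_i) in S:
        for _ in range(n_i):
            S_star.append(tuple(range(c, c + d_i + 1)))
            c += d_i + 1
    return S_star

# Example: the torus data S = [(2, 2)] yields [(0, 1, 2), (3, 4, 5)].
\end{verbatim}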
\subsection{From generating sets to boundary maps}
Let an ordered set of matrices be given by $D = \{D_0,D_1,\dots,D_N\}$, where $D_i$ is the boundary map from the $i$-chains to the $(i-1)$-chains in the complex $C$. We would like to construct $D$ using only our generating set, in a way that is not wasteful. Henceforth, let $G = (S,R)$ be the generating set for a complex $C$. Our goal is to generate a list of matrices $D_k$ representing the maps $\delta_k$. Doing this with no relations is straightforward, and once we have this, in theory we can add in our relations by simply multiplying each boundary matrix $D_k$ by a permutation matrix $M$ and removing the rows and columns of non-representative simplices. Henceforth we may refer to this method as the \textbf{naive} approach to dealing with relations. We may also sometimes refer to the \textbf{naive} method of generating the faces of a simplex, by which we mean computing all boundaries of all faces. Observe that the initial cost of generating the boundaries of all simplices is wasteful, given that we know that some (potentially many) of them will disappear after identifications. Besides generating dead simplices, generating the list of permutation matrices is also computationally expensive when done inductively. What we describe instead is a method of constructing boundary maps only for the simplices that are still ``alive'', while generating relational maps only for simplices that are ``dead''. We do this by exploiting the patterns predictable from our generating data.
\subsection{Constructing a relational map}
The first step in our process is to pick a representative for each equivalence class determined by our relation set, and to keep that representative valid through successive relations. We define the function $Lor:C\rightarrow C/R$ to be the map $\alpha\mapsto \hat\alpha$, where $\hat\alpha$ is the lexicographically lowest simplex to which $\alpha$ is related, i.e.\ $\hat\alpha=\min\{\beta \;|\; \alpha \equiv \beta\}$. This suggests one method of constructing the equivalence classes with a valid representative: simply iterate over all simplices, place them in their respective equivalence classes, find the lowest element of each class, and record the resulting mapping. However, this is potentially wasteful in the sense that, independent of the number of relations, we will have a runtime of $2^{\mathcal{O}(N)}$. In theory this is not necessary, as we have our relational data up front and should be able to construct the map by modifying only those simplices that appear in some relation. Below we describe an inductive approach to doing so. To make use of our previous notation, we will also consider the maps $Lor_r$, which assign to $\alpha$ its lowest related simplex given that the last relation accounted for was $r$. \newline
\begin{thm}
There exists an algorithm such that, given a map $Lor_r$, we can construct in time $n2^{\mathcal{O}(k)}$ the map $Lor_s$ for any remaining relation $s$, where $k$ is the dimension of the simplices in $s$ and $n$ is the number of simplices in $s$.
\end{thm}
\begin{proof}
Let $H$ be a dictionary or lookup table which stores the mapping of $Lor_r$. Define $RLor(\alpha)$ to be $Lor_r(Lor_r(\alpha))$. Let $s = \{\alpha_1,\dots,\alpha_n\}$ be a relation not yet accounted for in $H$. If no $\alpha_i$ in $s$ is accounted for in $H$, then we simply update $H$ with all $\alpha_i$ mapping to the minimum simplex in $s$. Now suppose that $s$ and $H$ are not disjoint. If the minimum of $s$, call it $\beta$, is in $s\cap H$, then the process is similar: we add all the elements of $s$ to $H$ by mapping each $\alpha_i\in s$ to $RLor(\beta)$. Suppose instead that the minimum $\beta$ of $s$ is not in $s\cap H$. Then for each simplex $\alpha$ in $s\cap H$ with $RLor(\alpha) = a$, we map $a$ to $\beta$ if $\beta < a$, and $\beta$ to $a$ otherwise. Given the recursive definition of $RLor$, we know that anything which had $a$ as its lowest-order relation in $H$ will now have the correct representative as well. $RLor$ in tandem with $H$ gives us $Lor_s$. We must then also add all subrelations among the $\alpha_i\in s$ induced by $s$. We do this by decomposing each $\alpha_i$ into its faces and matching the respective faces of a given dimension, giving us new relations. We update all such relations by the same process, without decomposing further. This gives us a final runtime of $n2^{\mathcal{O}(k)}$.
\end{proof}
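A concrete, if simplified, way to maintain such a map is to store $H$ as a dictionary and resolve representatives by following chains of stored relations, much like a union-find structure keyed by the lexicographic minimum. The Python sketch below is ours and makes one additional assumption, consistent with the torus example: related simplices are matched vertex-by-vertex in the order in which their vertices are listed.
\begin{verbatim}
# A sketch of maintaining Lor as a dictionary H; simplices are tuples of
# vertex labels, and find() plays the role of RLor in the proof above.
from itertools import combinations

def find(H, a):
    while a in H and H[a] != a:
        a = H[a]
    return a

def add_relation(H, s):
    # s: a tuple of related simplices of one common dimension; related
    # simplices are assumed to be matched positionally (our convention).
    k = len(s[0]) - 1
    groups = [s]
    for size in range(1, k + 1):           # all induced face relations
        for idx in combinations(range(k + 1), size):
            groups.append(tuple(tuple(a[i] for i in idx) for a in s))
    for g in groups:
        root = min(find(H, a) for a in g)  # lexicographically lowest rep
        for a in g:
            H[find(H, a)] = root
            H[a] = root

# Lor(alpha) is then find(H, alpha); it returns alpha itself whenever
# alpha has never appeared in a relation.
\end{verbatim}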
Observe that this is indeed an improvement over the naive approach: only in the worst-case scenario, in which every simplex in $C$ appears in some relation, does the procedure iterate over all simplices and therefore do the same amount of work as the naive algorithm previously discussed. We now present an algorithm for constructing our list of boundary maps $D$ for the complex $C$ using our generating data. First, we construct a partition of a set of natural numbers from our generating data, representing our linearly independent simplices without relations. This can be done by computing $P(S)$ from above, which gives us $\bar{S}$. In the following we also assume access to a function $pos:C\rightarrow\mathbb{N}$, where $pos(x)$ is equal to the relative order of $x$ among simplices of the same dimension in $C$. The arguments of our algorithm are a simplex $\alpha\in \bar{S}$, an index $p$ used in the recursion, and the matrix list $D$ which we are updating for the whole complex of $\alpha$. \newline
\begin{algorithm}
BCON($\alpha,p$,$D$):\newline
if $\alpha \neq Lor(\alpha)$: return\newline
$M = D_{\text{len}(\alpha) - 1}$\newline
\For{$0\leq i<\text{len}(\alpha)$}{
$\alpha^* = \alpha - \alpha[i]$\newline
$a = pos(\alpha),b = pos(Lor(\alpha^*))$\newline
if $i\not\equiv\text{len}(\alpha)\pmod{2}$:
$\{M[b,a] = M[b,a] + 1\}$\newline
else:
$\{M[b,a] = M[b,a] - 1\}$\newline
if $i > p$: BCON($Lor(\alpha^*)$,$i$,$D$)
}
\end{algorithm}
\begin{thm}
Given a map $Lor$ there is an algorithm to compute the minimal boundary matrix for $C$ in time $\mathcal{O}(|T|)$ where $T$ is the set of all representatives in $C$.
\end{thm}
\begin{proof}
We claim that BCON$(\alpha,0,D)$, iterated over all simplices $\alpha\in\bar{S}$, is such an algorithm. Observe that BCON, in its first step, terminates if the simplex $\alpha$ has already been related away by an explicit relation. Hence, we know that any representative simplex in $\bar{S}$ will continue into the loop. Because we use $Lor(\alpha^*)$ for every face of $\alpha$, we know each face gets mapped properly to the representative of its equivalence class. Lastly, the recursive call using $Lor(\alpha^*)$ guarantees that all the sub-simplices of $\alpha$ are updated using only the representatives of each equivalence class.
Observe that the only way for any simplex to be related away entirely is by an explicit relation or as a consequence of an explicit relation on a higher-order simplex. As we previously showed, all simplices not related away explicitly will pass into the loop. Hence we know that BCON not only ignores all non-representative simplices, but also correctly computes boundaries for all representatives.
\end{proof}
It is worth noting that part of what makes BCON efficient may not be immediately obvious: the parameter $p$ that BCON uses recursively prevents the generation of repeat faces of $\alpha$, which is precisely how it improves upon the naive approach to boundary generation over a complex. If one wants to pre-generate the simplices in $C$, as is done in \cite{Dumas} for example, this algorithm is straightforward to modify to do so: remove the matrix operations and recursively keep a list of all faces generated from $\alpha$.\newline
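The following Python sketch of BCON is ours; it represents each boundary matrix as a sparse dictionary-of-dictionaries, takes the maps $Lor$ and $pos$ as parameters, and, for simplicity, treats vertices as having empty boundary (so $D_0$ is the zero map).
\begin{verbatim}
def BCON(alpha, p, D, lor, pos):
    # alpha: a simplex as a tuple of vertex labels
    # p: index guard preventing regeneration of already-handled faces
    # D: list of sparse boundary matrices, D[k][row][col] = entry
    # lor, pos: representative map and ordering function assumed above
    if alpha != lor(alpha):
        return                                  # alpha was identified away
    if len(alpha) == 1:
        return                                  # vertices: empty boundary here
    M = D[len(alpha) - 1]
    for i in range(len(alpha)):
        face = alpha[:i] + alpha[i + 1:]        # drop the i-th vertex
        a, b = pos(alpha), pos(lor(face))
        sign = 1 if i % 2 != len(alpha) % 2 else -1
        M.setdefault(b, {})
        M[b][a] = M[b].get(a, 0) + sign
        if i > p:
            BCON(lor(face), i, D, lor, pos)

# The full list D is obtained by calling BCON(alpha, 0, D, lor, pos)
# for every maximal simplex alpha produced by P(S).
\end{verbatim}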
We have assumed the existence of a function $pos$ that gives us the relative order of a simplex among all other simplices in $C$ of the same dimension. In practice we can compute this by explicitly generating all sub-complexes of $C$, ordering them lexicographically, and using the position of each simplex to calculate its order. This method suggests an amortized algorithm which makes use of a lookup table storing the location of each simplex. However, we can also calculate the order of a simplex $\alpha$ solely from $G$, which does not require generating the entire complex, something we have tried to avoid up to this point. For completeness we now illustrate one method of calculating $pos(\alpha)$, with the understanding that there may be better ways to implement it. Notationally, we assume $\alpha$ is the set of vertices $\{\alpha_0,\alpha_1,\dots,\alpha_n\}$ with $d_\alpha = |\alpha| - 1$, and $G = (S,R)$.
\newpage
\begin{algorithm}
$Pos(\alpha)$:\newline
\textbf{if $\alpha_0 \leq j(S_{d_\alpha},0)$}:\newline
\hspace{12pt}\pyspace return $\frac{\alpha_0 - j(S_{d_\alpha - 1},0)}{d_\alpha}$ //$\alpha$ is a maximal simplex, so it is before all non-maximal $d_\alpha$ simps\newline
\textbf{else}\newline
find min $x$ where $j(S_x,0)\geq \alpha_0$ \newline
//finds dimension of maximal simplex $\beta\supset\alpha$ that contains $\alpha$\newline
$\delta = \lfloor \frac{\alpha_0 - j(S_{d_\alpha - 1},0)}{x}\rfloor$ //find pos of $\beta$ among x-simps containing $\alpha$\newline
$a = \delta\binom{x}{d_\alpha} + j(S_x,d_\alpha)$ //number of $d_\alpha$-simps before $\beta$ in $C$\newline
$b = \sum_{i\leq d_\alpha}\sum_{k}^{\alpha_i - \alpha_{i-1}} \binom{\alpha_i - \alpha_{i-1}}{k}\cdot\binom{|\alpha_i - (\delta+1)\cdot x\cdot j(S_{x-1},0)|}{d_\alpha - k}$\newline
//number of $d_\alpha$ simps in $\beta$ before $\alpha$\newline
return $a+b$
\end{algorithm}
\subsection{Simple examples}
We have already illustrated some of the simplicity that generating sets provide in describing complexes in the example of the torus above. From experimentation with constructing different complexes, it is reasonable to conjecture the existence of certain patterns that may simplify the generation of the relational data used in the generating set for certain topological features. Looking back at the $\Delta$-complex for the torus, we can see that the relations among boundaries are matched together in reverse lexicographical order (this is not a pattern that necessarily holds in higher-dimensional tori). We have yet to look at patterns that seem promising for constructing more general features. Here we conclude with a much simpler example, which at one time JPlex used as an exercise in its tutorial: constructing a 7-dimensional hole. In our test code we make use of a function $getBoundary$ that maps a simplex (a list of natural numbers) to the (ordered) set of simplices that make up its boundary. We also make use of $coupleSimps$, which takes two lists of simplices $A,B$ and returns a new list $L = \{(a_i,b_i)\;|\;a_i\in A,b_i\in B\}$, which in this instance is our relation set.\newline
\includegraphics[scale = .7]{GeneratingDataExample}
This is not the only way to construct this specific example, as we could also use the generating set $S = \{(1,8)\}$ with no relations, and remove the highest order boundary matrix from our list of boundary maps. However, the above example is a good illustration of how easily we can describe a complex through its generating set.
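Since the test code above appears only as a figure, we also include a speculative Python sketch of our own showing how the two helper functions described in the text might be implemented and combined into generating data for a 7-dimensional hole: two 7-simplices glued along their entire boundaries form a 7-sphere, whose only non-trivial reduced homology lies in dimension 7. This is meant purely as an illustration and need not coincide with the code shown in the figure.
\begin{verbatim}
def getBoundary(simplex):
    # the (ordered) codimension-1 faces of a simplex given as a tuple
    return [simplex[:i] + simplex[i + 1:] for i in range(len(simplex))]

def coupleSimps(A, B):
    # pair the i-th simplex of A with the i-th simplex of B
    return [(a, b) for a, b in zip(A, B)]

S = [(2, 7)]                                   # two maximal 7-simplices
alpha, beta = tuple(range(8)), tuple(range(8, 16))
R = coupleSimps(getBoundary(alpha), getBoundary(beta))
\end{verbatim}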
\section{Converting to Generating Sets}
Consider the representation of a complex $C$ encoded in a matrix $M$ in the following way. Similarly to the boundary matrix, each column of $M$ represents a simplex in $C$, except that all maximal simplices of dimension at least $1$ are given a column (as opposed to only simplices of some fixed dimension). The rows of $M$ correspond to the vertices of $C$, and each column in turn represents the sub-complex formed by the power set of the vertices whose rows are non-zero in that column. We may refer to $M$ as the \textbf{incidence matrix} of the complex $C$. This representation of $C$ is equivalent in information to the generating sets we have discussed thus far.
Note that this version of the incidence matrix is not the same as what is defined in \cite{Awasthi} and possibly elsewhere. The term incidence matrix is sometimes also used in this context to refer to a matrix that encodes the relations among simplices, similar to one of the naive approaches we discussed in Section 1 for representing relations. In other contexts it is used as being nearly equivalent to what we have defined to be the boundary matrix, which is how it is used in \cite{Awasthi}. We will give an algorithm for converting from $M$ to a generating set, with the observation that going from a generating set to a matrix $M$ as described is much more straightforward. \newline
\begin{thm}
Let $G = (S,R)$ be the generating data for the complex $C$ and let matrix $M_{n\times m}$ be its incidence matrix. Then $S$ can be constructed from $M$ in time $\mathcal{O}(mn)$.
\end{thm}
\begin{proof}
The algorithm is straightforward. For each column of $M$ we count the number of non-zero entries. If this count is $k$, we know the column encodes a maximal $(k-1)$-simplex. By keeping a running count for each dimension, we obtain the total number of maximal $(k-1)$-simplices for all $1\leq k \leq n$.
\end{proof}
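The counting argument in the proof translates directly into code; the following Python sketch (ours) takes the incidence matrix as a list of columns of $0$/$1$ entries and returns the simplicial data $S$.
\begin{verbatim}
from collections import Counter

def simplicial_data(M_columns):
    counts = Counter()
    for col in M_columns:
        k = sum(1 for entry in col if entry != 0)    # vertices in this column
        counts[k - 1] += 1                           # one maximal (k-1)-simplex
    return [(counts[d], d) for d in sorted(counts)]  # S, ordered by dimension

# Example: two columns with three non-zero rows each give [(2, 2)],
# the simplicial data of the torus example (without relations).
\end{verbatim}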
Encoding the relational data in a set $R$ is also straightforward in theory. Let a $k$-simplex $\alpha$ and an $\ell$-simplex $\beta$ be in a complex $C$, and suppose that $\alpha$ and $\beta$ share the same $x+1\leq k,\ell$ non-zero rows in their representation in the incidence matrix $M$. We then form a relation between $\alpha_{x}$, the $x$-dimensional face of $\alpha$ spanned by the shared vertices, and $\beta_{x}$, the corresponding $x$-dimensional face of $\beta$.
We now present an algorithm for obtaining the relational data of incidence matrix $M$ given that we have already calculated $S$, the simplicial data for complex $C$, from $M$. To do this we assume for the moment access to two functions. The first is $O:C_k\times \{0\dots \text{dim}(\alpha)\}\rightarrow \{1\dots V\}$ where $V = j(S,0)$ for which $O(\alpha,i)$ is equal to the label in $C$ of the $i$-th vertex in $\alpha$. The second is $O^*:C_k\times C_0/R \rightarrow \{0\dots \text{dim}(\alpha)\} $ where $O^*(\alpha,v)$ is equal to the position of vertex $v$ in $\alpha$ with relations added (we can obtain this from $M$ directly).
\begin{algorithm}
$Rel(M_{n\times m}):$\newline
Let $M_c$ be the set of columns in $M$\newline
Let $R = \emptyset$\newline
\ForEach{$s,r\in M_c$}{
Let $I = \{ (O^*(s,v),O^*(r,v))|v\in s\cap r\cap C_0 \}$\newline
$\alpha_1 = \{O(s,x[0]) |x\in I \}$\newline
$\alpha_2 = \{O(r,x[1]) |x\in I \}$\newline
add $(\alpha_1,\alpha_2)$ to $R$
}
return $R$
\end{algorithm}
We now provide definitions for the two functions $O$ and $O^*$. They are the following:
\begin{algorithm}
$O(\alpha,i):$\newline
$d = dim(\alpha)$\newline
$\lambda = loc(\alpha)$\newline
$\bar{S} = S_d \cup \{(\lambda,d)\}$\newline
return $j(\bar{S},0) + i$
\end{algorithm}
\begin{algorithm}
$O^*(\alpha,v):$\newline
$d = dim(\alpha)$\newline
$\lambda = loc(\alpha)$\newline
$\bar{S} = S_d \cup \{(\lambda,d)\}$\newline
$t = j(\bar{S},0), row = 0,c = 0$\newline
\While{$t + c\neq v$}{
if $ M_\alpha[row] == 1: c = c + 1$\newline
$row = row + 1$\newline
}
return $c$
\end{algorithm}\newpage
Here $\lambda = loc(\alpha)$ denotes the position of $\alpha$ among the columns of the incidence matrix: $\alpha$ is the $\lambda$-th simplex of dimension $|\alpha|-1$ appearing in $M_c$.\newline
\section{Conclusion}
We have introduced the notion of Generating Sets and have shown how they can be used to construct the boundary matrix of a simplicial complex as well as of its representation as a $\Delta$-complex. We have calculated the complexity of such a construction and have shown that it is at least an improvement on the naive algorithm of building and gluing inductively. In our approach we have illustrated how, given a set of relations, we can save work by not repeatedly generating a simplex as both a representative and a non-representative of its equivalence class. Finally, we have shown that the incidence matrix can be thought of as equivalent to a generating set, and we have presented algorithms for converting from an incidence matrix to a generating set.
\section{Introduction}
The tools of algorithmic randomness have been particularly useful in studying the power of random oracles in the context of Turing reducibility. It is well-known that access to a random oracle does not aid in the computation of any \emph{individual} sequence, as Sacks~\cite{Sac63} proved that any sequence that is computable from positive measure many oracles must be computable. However, if instead we attempt to compute some element of a collection of sequences by means of a random oracle, the situation is quite different.
For instance, in unpublished work, Martin proved that the collection of sequences of hyperimmune degree has Lebesgue measure $1$ (see Downey and Hirschfeldt~\cite[Theorem~8.21.1]{DowHir10}). A~careful examination of this proof yields, for any $\delta\in(0,1)$, an algorithm which with probability at least $1-\delta$ computes from a random oracle a function not dominated by any computable function (as~noted by G\'acs and reported by Rumyantsev and Shen~\cite{RumShe13}). Other types of sequences known to be computable from positive measure many sequences are the 1-generic sequences (as~shown by Kurtz~\cite{Kur81} and Kautz~\cite{Kau91}), the sequences of DNC degree (first established by Ku\v cera~\cite{Kuc85}), and sequences satisfying certain algebraic properties in the upper semi-lattice of the Turing degrees under Turing reducibility (studied by Barmpalias, Day, and Lewis-Pye \cite{BarDayLew14}).
Collections of sequences $\mathcal{C}\subseteq2^\omega$ with the property that only measure $0$ many sequences compute an element of $\mathcal{C}$ have been referred to as \emph{negligible} (for instance, in V'yugin \cite{Vyu82} and Levin \cite{Lev84}), and thus those collections $\mathcal{C}$ with the property that positive measure many sequences compute an element of $\mathcal{C}$ are called \emph{non-negligible}.
The focus of our study here is a Boolean algebra of non-negligible subsets of $2^\omega$ that are closed under Turing equivalence and where two such subsets are identified with each other if they differ only by a negligible set.
This Boolean algebra, first introduced by Levin and V'yugin \cite{LevVyu77} and systematically studied by V'yugin~\cite{Vyu82}, will be referred to as the \emph{Levin-V'yugin algebra}; its elements will be referred to as the \emph{Levin-V'yugin degrees}, or $\mathrm{LV}$\emph{-degrees} for short.
A significant portion of this article is a survey of previously established results about the Levin-V'yugin algebra, but we also establish
new facts about it. Much of our focus will furthermore be on explicating a technique developed by V'yugin~\cite{Vyu82} for building left-c.e.\ semi-measures,
which has applications outside of the study of the algebra, such as in the study of probabilistic computation. We first provide a general schematic account of this technique and then use it to establish the following result.
\begin{thm}[V'yugin \cite{Vyu12}]\label{thm-prob-alg}
For any $\delta\in(0,1)$, there is a probabilistic algorithm that produces with probability at least $1-\delta$ a non-computable sequence that does not compute any Martin-L\"of random sequence.
\end{thm}
We will then apply V'yugin's technique to prove the following generalization of Theorem~\ref{thm-prob-alg}.
\begin{thm}\label{thm-prob-alg2}
For any $\delta\in(0,1)$, there is a probabilistic algorithm that produces with probability at least~$1-\delta$ a non-computable sequence that is not of DNC degree.
\end{thm}
Theorems \ref{thm-prob-alg} and \ref{thm-prob-alg2} both follow from a result due to Kurtz \cite{Kur81}, namely that for every~$\delta\in(0,1)$, there is a probabilistic algorithm that produces a $1$-generic sequence with probability $1-\delta$. Since a $1$-generic sequence can compute neither a Martin-L\"of random sequence nor a sequence of DNC degree, the results follow. However, V'yugin's technique also has implications for the study of $\Pi^0_1$~classes, that is, effectively closed subsets of $2^\omega$: the probabilistic algorithms whose existence can be shown using V'yugin's technique are in fact Turing functionals on~$2^\omega$ with a closed range; and since such a functional is effective, its range is even $\Pi^0_1$. Thus, V'yugin's proof of Theorem \ref{thm-prob-alg} establishes the following stronger result.
\begin{corollary}\label{cor1}{\ }
For every $\delta\in(0,1)$, there is a Turing functional $\Phi$ such that
\begin{itemize}
\item[(i)] $\Phi$ maps no set of positive measure to any single sequence,
\item [(ii)] the domain of $\Phi$ has Lebesgue measure at least $1-\delta$,
\item [(iii)] the range of $\Phi$ is a $\Pi^0_1$ class, and
\item [(iv)] no sequence in the range of $\Phi$ computes a Martin-L\"of random sequence.
\end{itemize}
\end{corollary}
Similarly, the proof of Theorem \ref{thm-prob-alg2} that we provide here establishes the following result.
\begin{corollary}\label{cor2}{\ }
For every $\delta\in(0,1)$, there is a Turing functional $\Phi$ such that
\begin{itemize}
\item[(i)] $\Phi$ maps no set of positive measure to any single sequence,
\item [(ii)] the domain of $\Phi$ has Lebesgue measure at least $1-\delta$,
\item [(iii)] the range of $\Phi$ is a $\Pi^0_1$ class, and
\item [(iv)] no sequence in the range of $\Phi$ is of DNC degree.
\end{itemize}
\end{corollary}
The remainder of this article is structured as follows. In Section \ref{sec-background}, we review the necessary background. Section \ref{sec-neg-nonneg} introduces the notions of negligibility and non-negligibility and provides a number of examples from classical computability theory and algorithmic randomness. The Levin-V'yugin degrees, defined in terms of negligibility, are introduced in Section \ref{sec-lv-degrees}.
The general features of V'yugin's technique for constructing semi-measures are initially laid out in Section \ref{sec-building}, while specific examples of the technique are provided in Section \ref{sec-implementing}. Lastly, in Section~\ref{sec-conclusion} we conclude with a final observation about the connection between V'yugin's technique and $\Pi^0_1$ classes.
\section{Background}\label{sec-background}
\subsection{Some notation}
We fix the following notation and terminology. We denote the natural numbers by~$\omega$, and the set of infinite binary sequences, also known as \emph{Cantor space}, by $2^\omega$. We denote the set of finite binary strings by~$2^{<\omega}$ and the empty string by~$\varepsilon$. Let $\mathbb{Q}_2$ be the set of non-negative dyadic rationals, that is, rationals of the form $m/2^n$ for $m,n\in\omega$.
Given $X\in 2^\omega$ and an integer~$n$, $X {\upharpoonright} n$ is the string that consists of the first~$n$ bits of~$X$, and $X(n)$~is the $(n+1)^{\mathrm{st}}$ bit of $X$ (so that $X(0)$ is the first bit of $X$). If $\sigma$ and $\tau$ are strings, then $\sigma\preceq\tau$ means that $\sigma$ is an initial segment of $\tau$. Similarly, for $X\in2^\omega$, $\sigma\prec X$ means that $\sigma$ is an initial segment of~$X$.
Given a string~$\sigma$, the \emph{cylinder} $\llbracket\sigma\rrbracket$ is the collection of elements of $2^\omega$ having~$\sigma$ as an initial segment. Similarly, given $S\subseteq2^{<\omega}$, $\llbracket S\rrbracket$ is defined to be the collection $\bigcup_{\sigma\in S}\llbracket\sigma\rrbracket$. The cylinders form a basis for the usual product topology on Cantor space, and thus the open sets for this topology are those of the form $\llbracket S \rrbracket$ for some~$S$. An open set $\mathcal{U}$ is said to be \emph{effectively open}~(or~$\Sigma^0_1$) if $\mathcal{U}=\llbracket S \rrbracket$ for some computably enumerable (hereafter, c.e.) set~$S \subseteq 2^{<\omega}$. An \emph{effectively closed} (or~$\Pi^0_1$) set is the complement of an effectively open set. A sequence of open sets $(\mathcal{U}_n)_{n \in \omega}$ is said to be \emph{uniformly effectively open} if there exists a sequence $(S_n)_{n \in \omega}$ of uniformly c.e.\ sets of strings such that $\mathcal{U}_n=\llbracket S_n \rrbracket$ for all $n\in\omega$.
For $\mathcal{A}\subseteq2^\omega$, we write $(\mathcal{A})^{\equiv_\mathrm{T} }$ for the closure of $\mathcal{A}$ under Turing equivalence; that is, we let
\[(\mathcal{A})^{\equiv_\mathrm{T} }:=\{X\in2^\omega\colon (\exists Y\in\mathcal{A})\;X\equiv_\mathrm{T} Y\}.\]
\subsection{Turing functionals and computable measures} We assume that the reader is familiar with the basics of computability theory (for instance, the material covered in Soare~\cite[Chapters~I\nobreakdash-IV]{Soa16}, Nies~\cite[Chapter~1]{Nie09}, or Downey and Hirschfeldt~\cite[Chapter~2]{DowHir10}).
\begin{defn}\
\begin{itemize}
\item[(i)] A \emph{Turing functional} $\Phi\colon\subseteq2^\omega\rightarrow2^\omega$ is represented by a c.e.\ set $S_\Phi$ of pairs of strings $(\sigma,\tau)$ such that
if $(\sigma,\tau),(\sigma',\tau')\in S_\Phi$ and $\sigma\preceq\sigma'$, then $\tau\preceq\tau'$ or $\tau'\preceq\tau$.
\item[(ii)] For each $\sigma\in2^{<\omega}$, we define $\Phi^\sigma$ to be the maximal string (in the order given by~$\preceq$) in the set~$\{\tau\colon (\exists \sigma'\preceq\sigma)((\sigma',\tau)\in S_\Phi)\} \cup \{\varepsilon\}$.
Similarly, for each $s\in\omega$, $\Phi^\sigma_s$ is the maximal string in the set
${\{\tau\colon (\exists \sigma'\preceq\sigma)((\sigma',\tau)\in S_\Phi[s])\} \cup \{\varepsilon\}}$, where $S_\Phi[s]$ is the approximation of the c.e.~set~$S_\Phi$ at stage~$s$.
\item[(iii)] Let $\Phi^X$ be the minimal (in the order given by $\preceq$) $z\in2^{<\omega}\cup2^\omega$ such that $\Phi^{X {\upharpoonright} n} \preceq z$ for all~$n$.
\item[(iv)] We set $\mathrm{dom}(\Phi)=\{X\in2^\omega\colon\Phi^X\in2^\omega\}$.
\item[(v)] For $\tau \in 2^{<\omega}$, let $\Phi^{-1}(\tau)$ be
$\{ \sigma\in 2^{<\omega} \colon \exists \tau' \succeq \tau \colon (\sigma,\tau')\in S_\Phi\}$.
\item[(vi)] Lastly, for $\mathcal{A} \subseteq 2^\omega$, let $\Phi^{-1}(\mathcal{A})$ be
$\{
X \in 2^\omega\colon \Phi^X \in \mathcal{A}
\}$.
\end{itemize}
\end{defn}
When $\Phi^X\in2^\omega$, we will often write $\Phi^X$ as $\Phi(X)$ to emphasize that we view the functional~$\Phi$ as a (partial) map from~$2^\omega$ to~$2^\omega$.
A measure $\mu$ on $2^\omega$ is \emph{computable} if $\sigma \mapsto \mu(\llbracket\sigma\rrbracket)$ is computable as a real-valued function, that~is, if there is a computable function $\widetilde \mu\colon2^{<\omega}\times\omega\rightarrow\mathbb{Q}_2$ such that
\[|\mu(\llbracket\sigma\rrbracket)-\widetilde \mu(\sigma,i)|\leq 2^{-i}\]
for every $\sigma\in2^{<\omega}$ and $i\in\omega$. For all measures appearing in this article we assume that $\mu(2^\omega)\leq 1$ without explicit mention.
From now on, we will write $\mu(\llbracket\sigma\rrbracket)$ as $\mu(\sigma)$. By Carathéodory's Theorem, if the values $\mu(\sigma)$, for $\sigma\in2^{<\omega}$, of a measure $\mu$ on $2^\omega$ are fixed, then there is a unique extension of~$\mu$ to the Borel $\sigma$-algebra generated by the sets $\llbracket \sigma \rrbracket$, for $\sigma\in2^{<\omega}$. In this article, all measures will be defined in this way, which implies in particular that the same sets are measurable for each of these measures.
The \emph{uniform (or Lebesgue) measure}~$\lambda$ is the probability measure for which each bit of the sequence has value~$0$ with probability $1/2$, independently of the values of the other bits. It can be defined as the unique Borel measure such that $\lambda(\sigma)=2^{-|\sigma|}$ for all strings~$\sigma$. Clearly, $\lambda$~is a computable measure.
\subsection{Notions of algorithmic randomness}\label{subsec-notions}
The primary notion of algorithmic randomness that we will consider in this study is Martin-L\"of randomness.
\begin{defn}{\ }
\begin{itemize}
\item[(i)] A \emph{Martin-L\"of test} is a sequence $(\mathcal{U}_i)_{i\in\omega}$ of uniformly effectively open subsets of $2^\omega$ such that for each $i$,
$
\lambda(\mathcal{U}_i)\leq 2^{-i}
$.
\item[(ii)] $X\in2^\omega$ \emph{passes} the Martin-L\"of test $(\mathcal{U}_i)_{i\in\omega}$ if $X\notin\bigcap_{i \in \omega}\mathcal{U}_i$.
\item[(iii)] $X\in2^\omega$ is \emph{Martin-L\"of random}, denoted $X\in\mathrm{MLR}$, if $X$ passes every Martin-L\"of test.
\end{itemize}
\end{defn}
We will also consider relative versions of Martin-L\"of randomness, obtained by relativizing the above notion of a Martin-L\"of test to some oracle $A\in2^\omega$; such a class will be written as $\mathrm{MLR}^A$. For $A=\emptyset^{(n)}$, the resulting notion of randomness is known as $(n+1)$-randomness. Other randomness notions can be obtained as follows.
\begin{defn} Let $X\in2^\omega$.
\begin{itemize}
\item[(i)] $X$ is \emph{Schnorr random} if and only if $X$ passes every Martin-L\"of test $(\mathcal{U}_i)_{i\in\omega}$ such that $\lambda(\mathcal{U}_i)$ is computable uniformly in $i\in\omega$.
\item[(ii)] $X$ is \emph{Kurtz random} (or \emph{weakly 1-random}) if and only if $X$ is not contained in any $\Pi^0_1$ class of Lebesgue measure~$0$.
\item[(iii)] $X$ is \emph{weakly 2-random} if and only if $X$ is not contained in any $\Pi^0_2$ class of Lebesgue measure~$0$.
\item[(iv)] $X$ is \emph{difference random} if and only if it is Martin-L\"of random and not Turing complete.
\end{itemize}
\end{defn}
Let $\mathrm{SR}$ and $\mathrm{KR}$ denote the collections of Schnorr random and Kurtz random sequences, respectively.
Each of the above notions of tests and randomness can also be formulated for arbitrary computable measures $\mu$ on $2^\omega$ simply by replacing the Lebesgue measure $\lambda$ in the respective definitions by~$\mu$. Thus, for instance, for a fixed computable measure $\mu$, a sequence $X$ is $\mu$\nobreakdash-Martin-L\"of random, denoted $X\in\mathrm{MLR}_\mu$, if and only if $X$ passes every $\mu$\nobreakdash-Martin-L\"of test. Significantly, Martin-L\"of randomness with respect to some computable measure is Turing invariant in the following sense.
\begin{thm}[Levin, Zvonkin~\cite{LevZvo70}; Kautz~\cite{Kau91}]\label{thm-levin-kautz}
For every computable measure $\mu$ and for every non-computable $X\in\mathrm{MLR}_\mu$, there is some $Y\in\mathrm{MLR}$ such that $X\equiv_\mathrm{T} Y$.
\end{thm}
The requirement that $X$ be non-computable is necessary since every computable sequence~$X$ is random with respect to some computable measure on $2^\omega$, for example the measure $\delta_X$ defined for $\mathcal{A} \subseteq 2^\omega$ via
\[{\delta_X(\mathcal{A})=\begin{cases}1 & \text{if }X \in \mathcal{A}, \\ 0 & \text{else.} \end{cases}}\]
\section{Negligibility and Non-negligibility}\label{sec-neg-nonneg}
To define the notions of negligibility and non-negligibility, we need to review the definition of left-c.e.\ semi-measures, which were initially introduced by Solomonoff~\cite{Sol64a,Sol64b} and first systematically studied by Levin and Zvonkin \cite{LevZvo70}.
\subsection{Left-c.e.\ semi-measures}
\begin{defn}
A \emph{semi-measure} is a function $P\colon2^{<\omega}\rightarrow[0,1]$ that satisfies
\begin{itemize}
\item[(i)] $P(\varepsilon)\leq 1$,
\item[(ii)] $P(\sigma)\geq P(\sigma0)+P(\sigma1)$ for every $\sigma\in2^{<\omega}$.
\end{itemize}
In addition, $P$ is left-c.e.\ if $P(\sigma)$ is the limit of a computable, non-decreasing sequence of rationals, uniformly in $\sigma\in2^{<\omega}$.
\end{defn}
Functions satisfying conditions (i) and (ii) above are sometimes referred to in the algorithmic randomness literature as \emph{continuous semi-measures} to distinguish them from discrete semi-measures. As we do
not consider discrete semi-measures in this study, we will not make this distinction below.
In Section \ref{sec-implementing}, the support of a semi-measure will play an important role.
\begin{defn}
The \emph{support} of a semi-measure $P$, denoted $\supp{P}$, is the collection of sequences
\[
\{X\in2^\omega\colon\forall n\; P(X{\upharpoonright} n)>0\}.
\]
\end{defn}
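For example, for the uniform measure $\lambda$, viewed as a semi-measure, we have $\supp{\lambda}=2^\omega$, since $\lambda(\sigma)=2^{-|\sigma|}>0$ for every $\sigma\in2^{<\omega}$, whereas for the point measure $\delta_X$ defined above we have $\supp{\delta_X}=\{X\}$.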
It is not immediately clear how to extend semi-measures to Borel subsets of $2^\omega$. Levin and V'yugin~\cite{LevVyu77} proposed the following way of deriving measures from left-c.e.\ semi-measures.
\begin{defn}
Given a left-c.e.\ semi-measure $P$ and $\sigma\in2^{<\omega}$ we define
\[
\overline{P}(\sigma)=\inf_n\sum_{\sigma\preceq\tau \;\wedge\; |\tau|=n}P(\tau).
\]
\end{defn}
$\overline{P}$ can be extended to a measure on $2^\omega$, which we will also write as $\overline P$, by letting $\overline{P}(\llbracket \sigma \rrbracket) = \overline{P}(\sigma)$ and then applying Carathéodory's theorem.
One can show inductively that $\overline P$~is the maximal measure such that $\overline P(\sigma)\leq P(\sigma)$ for every $\sigma\in2^{<\omega}$ (see, for instance, Bienvenu~et~al.~\cite[Proposition 6.5]{BieHolPor14}). As a consequence, $\overline{P}$~is typically not a probability measure.
Inversely, given any computable measure~$\mu$ defined on $2^\omega$, we can identify it with the left-c.e.\ semi-measure~${\sigma \mapsto \mu(\llbracket \sigma \rrbracket)}$ defined on $2^{<\omega}$; then we have~$\overline\mu=\mu$.
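As a simple illustration of the gap between $P$ and $\overline P$, let $P(\sigma)=4^{-|\sigma|}$ for every $\sigma\in2^{<\omega}$. Then $P$ is a computable, and hence left-c.e., semi-measure, since $P(\sigma0)+P(\sigma1)=\tfrac{1}{2}\cdot 4^{-|\sigma|}\leq P(\sigma)$, yet for every $\sigma$ we have
\[
\sum_{\sigma\preceq\tau \;\wedge\; |\tau|=n}P(\tau)=2^{n-|\sigma|}\cdot 4^{-n}=2^{-n-|\sigma|}\longrightarrow 0,
\]
so $\overline P$ is the zero measure. In particular, $\overline P$ may vanish identically even though $\supp{P}=2^\omega$.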
\bigskip
An important property of left-c.e.\ continuous semi-measures is the following.
\begin{thm}[Levin, Zvonkin \cite{LevZvo70}]\label{thm-InduceSemiMeasures}{\ }
\begin{itemize}
\item[(i)] For every Turing functional $\Phi$, the function $\lambda_\Phi$ defined for every $\sigma \in 2^{<\omega}$ via
\[
\lambda_\Phi(\sigma)=\lambda(\llbracket\Phi^{-1}(\sigma)\rrbracket)=\lambda(\{X\in2^\omega\colon\Phi^X\succeq\sigma\}),
\]
where $\Phi^X\in2^\omega\cup2^{<\omega}$, is a left-c.e.\ semi-measure.
\item[(ii)] For every left-c.e.\ semi-measure $P$, there is a Turing functional $\Phi$ such that $P=\lambda_\Phi$.
\end{itemize}
\end{thm}
Using Theorem \ref{thm-InduceSemiMeasures} one can derive an alternative characterization of $\overline{P}$ for any left-c.e.\ semi-measure $P$.
\begin{prop}\label{prop-barchar}
Let $P$ be a left-c.e.\ semi-measure. Then
\[
\overline P(\sigma)=\lambda(\{X\in2^\omega\colon\Phi^X\in2^\omega\;\wedge\; \Phi^X\succeq\sigma\}),
\]
where $\Phi$ is as in Theorem~\ref{thm-InduceSemiMeasures} (ii).
Moreover, for measurable~$\mathcal{A} \subseteq 2^\omega$, Carathéodory's theorem implies that \[\overline P(\mathcal{A})=\lambda(\Phi^{-1}(\mathcal{A})).\]
\end{prop}
\noindent For a proof of the first part of the proposition, see Bienvenu et al.~\cite[Proposition 6.5]{BieHolPor14}.
\begin{thm}[Levin, Zvonkin \cite{LevZvo70}]\label{univ_semi_measure_sdfsdfer}
There is a universal left-c.e.\ semi-measure, that is, a left-c.e.\ semi-measure $M$ such that for every left-c.e.\ semi-measure $P$, there is some constant~$c$ such that
\[
P(\sigma)\leq c\cdot M(\sigma)
\]
for every $\sigma\in2^{<\omega}$.
\end{thm}
\begin{remark}\label{rmk1}{\ }
\begin{itemize}
\item[(i)] One way to define a universal semi-measure is via a universal functional. For instance, for an effective enumeration $(\Phi_e)_{e\in\omega}$ of all Turing functionals, we can define $\Phi\colon2^\omega\rightarrow2^\omega$ via $\Phi(1^e0X)=\Phi_e(X)$ for each $e\in\omega$ and $X\in2^\omega$. It is not hard to verify that $\lambda_\Phi$ is universal; a short verification is given after this remark.
\item[(ii)] For every left-c.e.\ semi-measure $P$, there is some $c$ such that
\[
\overline{P}(\sigma)\leq c\cdot \overline{M}(\sigma).
\]
To see this, observe that for the $c$ appearing in Theorem~\ref{univ_semi_measure_sdfsdfer} we have
\[\overline{P}(\sigma) = \inf_n\sum_{\sigma\preceq\tau \;\wedge\; |\tau|=n} P(\tau) \leq \inf_n\sum_{\sigma\preceq\tau \;\wedge\; |\tau|=n} c \cdot M(\tau) =
c \cdot \overline{M}(\sigma).\]
\item[(iii)] From (ii) and a straightforward argument using open covers of null sets, we can derive the conclusion that for every left-c.e.\ semi-measure $P$, $\overline P$ is absolutely continuous with respect to $\overline M$; that is, if $\overline M(\mathcal{B})=0$ then $\overline P(\mathcal{B})=0$ for every measurable set~$\mathcal{B}$.
\end{itemize}
\end{remark}
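To spell out the verification mentioned in item (i) of the remark: for every $e\in\omega$ and every $\sigma\in2^{<\omega}$,
\[
\lambda_\Phi(\sigma)=\lambda(\{Y\in2^\omega\colon\Phi^Y\succeq\sigma\})\geq\lambda(\{1^e0X\colon\Phi_e^X\succeq\sigma\})=2^{-(e+1)}\lambda_{\Phi_e}(\sigma),
\]
so $\lambda_{\Phi_e}\leq 2^{e+1}\lambda_\Phi$. Since every left-c.e.\ semi-measure is of the form $\lambda_{\Phi_e}$ for some $e$ by Theorem~\ref{thm-InduceSemiMeasures}~(ii), universality follows.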
Using a universal semi-measure we can provide an alternative characterization of $\mu$-Martin-L\"of randomness for each computable measure $\mu$.
\begin{thm}[Levin~\cite{Lev74}; Schnorr, see Chaitin~\cite{Cha75}]\label{thm-levin-schnorr}
Let $\mu$ be a computable measure. Then $X\in\mathrm{MLR}_\mu$ if and only if there is some $c$ such that $\mu(X{\upharpoonright} n)\geq c\cdot M(X{\upharpoonright} n)$ for every $n$.
\end{thm}
We can now define the notion of negligibility.
\begin{defn}
We say that $\mathcal{B}\subseteq2^\omega$ is \emph{negligible} if $\overline M(\mathcal{B})=0$.
\end{defn}
As a consequence of Remark \ref{rmk1}~(iii) we obtain the following corollary.
\begin{corollary}\label{dfgsdafkdfjhsdsdfg}
Let $P$ be a left-c.e.\ semi-measure and $\mathcal{B}\subseteq2^\omega$ a negligible collection of sequences. Then $\overline P(\mathcal{B})=0$. In particular, $\mu(\mathcal{B})=0$ for every computable measure $\mu$.
\end{corollary}
Negligibility of a collection can alternatively be characterized by stipulating that {\em no} Turing functional produce an element of that collection with positive probability, as the following proposition shows.
\begin{prop}\label{prop-neg}
Let $(\Phi_i)_{i\in\omega}$ be an effective enumeration of all Turing functionals. Then a measurable $\mathcal{B}\subseteq2^\omega$ is negligible if and only if
\[
\lambda\Biggl(\bigcup_{i\in\omega}\Phi_i^{-1}(\mathcal{B})\Biggr)=0.
\]
\end{prop}
\begin{proof}
($\Rightarrow$:) Suppose that $\lambda\Bigl(\bigcup_{i\in\omega}\Phi_i^{-1}(\mathcal{B})\Bigr)>0$. Then there is some $i$ such that $\lambda(\Phi_i^{-1}(\mathcal{B}))>0$. Setting $P(\sigma)=\lambda(\llbracket \Phi_i^{-1}(\sigma) \rrbracket)$ for $\sigma\in2^{<\omega}$,
it follows from Theorem \ref{thm-InduceSemiMeasures}~(i) that $P$ is a left-c.e.\ semi-measure.
Moreover, we have $\overline P(\mathcal{B})=\lambda(\Phi_i^{-1}(\mathcal{B}))$
by Proposition~\ref{prop-barchar} and thus $\overline P(\mathcal{B})>0$.
By Remark~\ref{rmk1}~(iii), $\overline M(\mathcal{B})>0$, so $\mathcal{B}$ is not negligible.\\
($\Leftarrow$:) Let $\Phi$ be a Turing functional such that $M=\lambda_\Phi$, which exists by Theorem \ref{thm-InduceSemiMeasures}~(ii). If $\mathcal{B}$~is not negligible, then we have $0<\overline M(\mathcal{B})=\lambda(\Phi^{-1}(\mathcal{B}))$ by Proposition~\ref{prop-barchar}, and hence \[\lambda\Bigl(\bigcup_{i\in\omega}\Phi_i^{-1}(\mathcal{B})\Bigr)>0.\qedhere \]
\end{proof}
Intuitively, a collection of sequences is negligible if none of its elements can be obtained with positive probability by any probabilistic algorithm. Indeed, we can see a probabilistic algorithm as consisting of two steps: First we generate infinitely many random bits, then we feed them to some Turing functional to produce the desired output.
More formally, we can think of a probabilistic algorithm as given by applying a Turing functional $\Phi$ to some random sequence. In this case, we can probabilistically compute an element of some fixed collection $\mathcal{B}$ with positive probability if there are positive measure many sequences $X$ such that $\Phi(X)\in\mathcal{B}$. Proposition \ref{prop-neg} tells us that the existence of such a probabilistic algorithm to compute elements of $\mathcal{B}$ with positive probability is equivalent to the non-negligibility of $\mathcal{B}$.
We conclude this subsection with a brief discussion of the atoms of a semi-measure.
\begin{defn}
Let $P$ be a semi-measure. $X\in2^\omega$ is an \emph{atom} of $P$ if there is some $\delta>0$ such that $P(X{\upharpoonright} n)>\delta$ for all $n$.
\end{defn}
\begin{lem}
Let $P$ be a semi-measure. $X\in2^\omega$ is an atom of $P$ if and only if $\overline{P}(\{X\})>0$.
\end{lem}
\begin{proof}
($\Rightarrow$:) If there is some $\delta>0$ such that $P(X{\upharpoonright} n)>\delta$ for all $n$, then for each $n$ and each~${m\geq n}$,
\[
\sum_{X{\upharpoonright} n\preceq\tau \;\wedge\; |\tau|=m}P(\tau)\geq P(X{\upharpoonright} m)>\delta.
\]
It follows from the definition of $\overline{P}$ that $\overline{P}(X{\upharpoonright} n)>\delta$ for all $n$.
\noindent ($\Leftarrow$:) ${\overline{P}(\{X\})>0}$ implies that there is a $\delta>0$ such that $\overline{P}(X{\upharpoonright} n)>\delta$ for all $n$. Then, for all~$n$,
\[P(X{\upharpoonright} n)\geq \overline{P}(X{\upharpoonright} n) > \delta.\qedhere\]
\end{proof}
\begin{prop}[Bienvenu et al.~\cite{BieHolPor14}]\label{prop_atoms_are_comp}
Let $P$ be a left-c.e.\ semi-measure. If $X$ is an atom of~$P$, then $X$ is computable.
\end{prop}
\subsection{Examples of negligible and non-negligible collections}
We now provide a number of examples of negligible and non-negligible collections of sequences, where the first set of examples is given by a classical theorem of Sacks.
\begin{thm}[Sacks~\cite{Sac63}]
For $X\in2^\omega$, $\lambda(\{Y\in2^\omega\colon Y\geq_\mathrm{T} X\})>0$ if and only if $X$ is computable. That is, $\{X\}$~is non-negligible if and only if $X$ is computable.
\end{thm}
Arbitrary subsets of $2^\omega$ of positive Lebesgue measure are further trivial examples of non-negligible collections. Thus, each of the notions of randomness defined above in Subsection~\ref{subsec-notions} forms a non-negligible collection.
We can find more interesting examples by considering naturally occurring collections of Turing degrees. We briefly review some of these collections. First, a sequence has \emph{PA degree} if it computes a consistent completion of Peano arithmetic. A sequence $X\in2^\omega$ is \emph{high} (or has \emph{high Turing degree}) if and only if $X'\geq_\mathrm{T}\emptyset''$. A sequence $X\in2^\omega$ is \emph{1-generic} if for every c.e.\ $S\subseteq2^{<\omega}$, there is some $\sigma\prec X$ such that either $\sigma\in S$ or for all $\tau\succeq\sigma$, $\tau\notin S$. Similarly, $X\in2^\omega$ is \emph{2-generic} if for every $\emptyset'$-c.e.\ $S\subseteq2^{<\omega}$, there is some $\sigma\prec X$ such that either $\sigma\in S$ or for all $\tau\succeq\sigma$, $\tau\notin S$. Next, $X\in2^\omega$ has \emph{hyperimmune-free degree} if and only if every $X$-computable function is dominated by some computable function. Accordingly, $X$ has \emph{hyperimmune degree} if and only if $X$ computes a function that is not dominated by any computable function.
$X\in2^\omega$ is of \emph{DNC degree} if and only if there is some $f\leq_\mathrm{T} X$ such that $f(e)\neq \phi_e(e)$ for all $e\in\omega$. Lastly, $X$ is \emph{generalized low} (or is in GL$_1$) if and only if $X'\equiv_\mathrm{T} X\oplus \emptyset'$.
To establish the negligibility or non-negligibility of the various collections given above, we will use the following heuristic principles, which are justified by Proposition \ref{prop-neg}.
\begin{itemize}
\item[($P_1$)] \emph{If every sufficiently random sequence computes an element of some measurable $\mathcal{B}\subseteq2^\omega$, then $\mathcal{B}$ is non-negligible.}
\item[($P_2$)] \emph{If no sufficiently random sequence computes an element of some measurable $\mathcal{B}\subseteq2^\omega$, then $\mathcal{B}$ is negligible.}
\end{itemize}
\begin{prop} \label{prop-non-negligible}
The following collections are non-negligible:
\begin{enumerate}
\item[(i)] the collection of sequences of DNC degree,
\item[(ii)] the collection of 1-generic sequences,
\item[(iii)] the collection of sequences of hyperimmune degree, and
\item[(iv)] the collection of generalized low sequences.
\end{enumerate}
\end{prop}
\begin{proof}
To show that each of the above collections is non-negligible, we apply ($P_1$) by identifying a notion of randomness such that every sequence that is random in the respective sense computes an element of the given collection.
For~(i), Ku\v cera \cite{Kuc85} proved that every Martin-L\"of random sequence is of DNC degree.
For~(ii), Kautz~\cite{Kau91} established that every 2-random sequence computes a 1-generic. Since every 1-generic sequence has hyperimmune degree, it further follows that every 2-random sequence computes a sequence of hyperimmune degree, yielding (iii). Lastly, for~(iv), Kautz \cite{Kau91} also proved that every 2-random sequence is generalized low.
\end{proof}
\begin{prop} \label{prop-negligible}
The following collections are negligible:
\begin{enumerate}
\item[(i)] the collection of sequences of PA degree,
\item[(ii)] the collection of sequences of high degree,
\item[(iii)] the collection of 2-generic sequences, and
\item[(iv)] the collection of non-computable sequences of hyperimmune-free degree.
\end{enumerate}
\end{prop}
\begin{proof} To show that each of the above collections is negligible, we apply ($P_2$) by identifying
a notion of randomness such that no sequence that is random in the respective sense computes an element of the given collection.
For (i), Franklin and Ng~\cite{FranklinNg} extended work of Stephan~\cite{franktechreport} to show that no difference random sequence computes a completion of PA.
For (ii), Kautz~\cite{Kau91} established that no 3-random has high degree. As the high degrees are closed upwards under Turing reducibility, this implies that no 3-random computes a sequence of high degree. For (iii), Nies, Stephan, and Terwijn~\cite{NieSteTer05} proved that every 2-random sequence forms a minimal pair in the Turing degrees with every 2-generic, and so no 2-random computes a 2-generic. Lastly, for (iv), Lewis, Day, and Barmpalias~\cite[Theorem 5.1]{BarDayLew14} showed that for every $2$\nobreakdash-random sequence $X$, every non-computable~$Y\leq_T X$ computes a 1-generic sequence and therefore in particular a sequence of hyperimmune degree. So if any $2$-random could compute a non-computable sequence of hyperimmune-free degree, then this sequence could in turn compute a sequence of hyperimmune degree, contradicting the fact that hyperimmune-freeness is closed downwards under Turing reducibility.
\end{proof}
\section{The Levin-V'yugin Degrees}\label{sec-lv-degrees}
Using the notion of negligibility, we can define a degree structure whose elements are given by Turing invariant subsets of~$2^\omega$. Recall that $\mathcal{A}\subseteq2^\omega$ is Turing invariant if $X\in\mathcal{A}$ and $Y\equiv_\mathrm{T} X$ imply~$Y\in \mathcal{A}$. Let $\mathcal{I}$ denote the set of measurable Turing invariant subsets of $2^\omega$. In what follows, all Turing invariant collections of sets that we consider are Borel and thus measurable. One can routinely verify that $(\mathcal{I}, \cap,\cup,^c)$ is a Boolean algebra.
We now define a reducibility~$\leq_\mathrm{LV}$ on~$\mathcal{I}$.
\begin{samepage}\begin{defn}
Let $\mathcal{A},\mathcal{B}\in \mathcal{I}$.
\begin{itemize}
\item[(i)] $\mathcal{A}\leq_\mathrm{LV} \mathcal{B}$ if and only if $\mathcal{A}\setminus\mathcal{B}$ is negligible.
\item[(ii)] $\mathcal{A}\equiv_\mathrm{LV} \mathcal{B}$ if and only if $\mathcal{A}\leq_\mathrm{LV} \mathcal{B}$ and $\mathcal{B}\leq_\mathrm{LV} \mathcal{A}$.
\end{itemize}
\end{defn}\end{samepage}
Given $\mathcal{A},\mathcal{B}\in \mathcal{I}$, $\mathcal{A}\leq_\mathrm{LV} \mathcal{B}$ says that, for any probabilistic algorithm, the probability that it produces an element of $\mathcal{A}$ that is not in $\mathcal{B}$ is~$0$. The stronger statement $\mathcal{A}<_\mathrm{LV} \mathcal{B}$ says in~addition that there is some probabilistic algorithm such that the probability that it produces an element of~$\mathcal{B}$ that is not in~$\mathcal{A}$ is strictly positive. In this sense, the larger a collection of sets is with regards to the given order, the easier it is to probabilistically produce an element of it.
It is well known that the quotient of a Boolean algebra by an ideal (here, the ideal of negligible sets in $\mathcal{I}$) is again a Boolean algebra. Thus, $\mathcal{D}_\mathrm{LV}=\mathcal{I}/{\equiv_\mathrm{LV}}$ is a Boolean algebra, which we refer to as the \emph{Levin-V'yugin algebra}. In fact, $\mathcal{D}_\mathrm{LV}$ is a measure algebra, since it is a Boolean algebra of measurable sets modulo $\overline M$-null sets. Individual elements of $\mathcal{D}_\mathrm{LV}$ will be referred to as $\mathrm{LV}$\emph{-degrees}. We will write $\mathrm{LV}$-degrees as $\mathbf{a},\mathbf{b},\dotsc$. For $\mathcal{A}\in\mathcal{I}$, $\mathbf{deg_\mathbf{LV}(\mathcal{A})}$ denotes the $\mathrm{LV}$-degree of $\mathcal{A}$. Given $\mathrm{LV}$-degrees $\mathbf{a}$ and $\mathbf{b}$ and any $\mathcal{A}\in\mathbf{a}$ and $\mathcal{B}\in\mathbf{b}$, we define
\begin{itemize}
\item[] $\mathbf{a}\wedge \mathbf{b}:=\mathbf{deg_\mathbf{LV}(\mathcal{A}\cap\mathcal{B})}$,
\item[] $\mathbf{a}\vee \mathbf{b}:=\mathbf{deg_\mathbf{LV}(\mathcal{A}\cup\mathcal{B})}$, and
\item[] $\mathbf{a}^c:=\mathbf{deg_\mathbf{LV}}(2^\omega\setminus\mathcal{A})$.
\end{itemize}
It is straightforward to verify that these are well-defined.
With slight abuse of notation, we let~$\leq_\mathrm{LV}$ denote the order on $\mathcal{D}_\mathrm{LV}$ that is induced by the order $\leq_\mathrm{LV}$ on $\mathcal{I}$ modulo the equivalence relation~$\equiv_\mathrm{LV}$; that is, we write $\mathbf{a}\leq_\mathrm{LV}\mathbf{b}$, for two $\mathrm{LV}$-degrees $\mathbf{a}$ and $\mathbf{b}$, if there exist~$\mathcal{A}\in\mathbf{a}$ and~$\mathcal{B}\in\mathbf{b}$ such that~$\mathcal{A}\leq_\mathrm{LV}\mathcal{B}$.
Then the following is immediate.
\begin{prop}\
\begin{itemize}
\item[(i)] The bottom element $\mathbf{0}$ of $\mathcal{D}_\mathrm{LV}$ consists of the Turing invariant negligible subsets of $2^\omega$.
\item[(ii)] The top element $\mathbf{1}$ of $\mathcal{D}_\mathrm{LV}$ consists of all Turing invariant $\mathcal{A}\subseteq2^\omega$ such that $2^\omega\setminus\mathcal{A}$ is negligible.
\end{itemize}
\end{prop}
\subsection{Elementary properties of the $\mathrm{LV}$-degrees}
Recall that $A$ is an atom
of a Boolean algebra~$\mathcal{B}$ if there are no elements $A_0$, $A_1\in\mathcal{B}\setminus\{0\}$ such that $A=A_0\vee A_1$ and $A_0\wedge A_1=0$. To avoid confusion with the atoms of a semi-measure, we will hereafter refer to atoms of $\mathcal{D}_\mathrm{LV}$ as \emph{$\mathcal{D}_\mathrm{LV}$-atoms}.
As reported by V'yugin~\cite{Vyu82} in results attributed to Levin, two $\mathcal{D}_\mathrm{LV}$-atoms are readily identifiable: the $\mathrm{LV}$-degree of the computable sequences, denoted~$\mathbf{c}$, and the $\mathrm{LV}$-degree of the Martin-L\"of random sequences, denoted $\mathbf{r}$. We provide the proofs of these results here.
For $\mathcal{A}\subseteq2^\omega$, let $\mathrm{Spec}_\mathrm{T} (\mathcal{A})=\{\deg_\mathrm{T} (X)\colon X\in\mathcal{A}\}$ be the \emph{Turing degree spectrum of $\mathcal{A}$}. The following basic fact will be useful.
\begin{samepage}
\begin{lem}\label{fact1}
Given $\mathbf{a_0},\mathbf{a_1}\in\mathcal{D}_\mathrm{LV}$ such that $\mathbf{a_0}\wedge\mathbf{a_1}=\mathbf{0}$, there are $\mathcal{A}_0,\mathcal{A}_1\in\mathcal{I}$ such that
\begin{itemize}
\item[(i)] $\mathrm{Spec}_\mathrm{T} (\mathcal{A}_0)\cap\mathrm{Spec}_\mathrm{T} (\mathcal{A}_1)=\emptyset$ and
\item[(ii)] $\mathbf{deg_\mathbf{LV}(}\mathcal{A}_0\mathbf{)}=\mathbf{a_0}$ and $\mathbf{deg_\mathbf{LV}(}\mathcal{A}_1\mathbf{)}=\mathbf{a_1}$.
\end{itemize}
Furthermore, for any given $\mathcal{A}\in\mathcal{I}$ satisfying $\mathbf{deg_\mathbf{LV}(}\mathcal{A}\mathbf{)}=\mathbf{a_0\vee a_1}$, we can w.l.o.g.\ assume that
\begin{itemize}
\item[(iii)] $\mathcal{A}_i\subseteq \mathcal{A}$ for $i=0,1$.
\end{itemize}
\end{lem}
\end{samepage}
\begin{proof}
The statement $\mathbf{a_0}\wedge\mathbf{a_1}=\mathbf{0}$ says that if we pick \textit{any} element $\mathcal{B}_0 \in \mathcal{I}$ of the equivalence class $\mathbf{a_0}$ and \textit{any} element $\mathcal{B}_1 \in \mathcal{I}$ of the equivalence class $\mathbf{a_1}$, then $\mathcal{B}_0 \cap \mathcal{B}_1$ is negligible. Then $\mathcal{A}_0 := \mathcal{B}_0 \setminus \mathcal{B}_1 \equiv_\mathrm{LV} \mathcal{B}_0$ is in the equivalence class $\mathbf{a_0}$, $\mathcal{A}_1 := \mathcal{B}_1 \setminus \mathcal{B}_0 \equiv_\mathrm{LV} \mathcal{B}_1$ is in $\mathbf{a_1}$, and since $\mathcal{B}_0$ and $\mathcal{B}_1$ are closed under Turing equivalence we also have ${\mathrm{Spec}_\mathrm{T} (\mathcal{A}_0)\cap\mathrm{Spec}_\mathrm{T} (\mathcal{A}_1)=\emptyset}$.
To verify (iii), suppose that $\mathbf{deg_\mathbf{LV}(}\mathcal{A}\mathbf{)}=\mathbf{a_0\vee a_1}$ for some $\mathcal{A}\in\mathcal{I}$ and
let $\mathcal{A}_0^\prime$ and $\mathcal{A}_1^\prime$ satisfy conditions (i) and (ii) above. Then $\mathbf{deg_\mathbf{LV}(}\mathcal{A}\mathbf{)}=\mathbf{deg_\mathbf{LV}(}\mathcal{A}_0^\prime\cup\mathcal{A}_1^\prime\mathbf{)}$, which implies that $\mathcal{A}\Delta(\mathcal{A}_0^\prime\cup\mathcal{A}_1^\prime)$ is negligible. As $\mathcal{A}_0^\prime$ and $\mathcal{A}_1^\prime$ are disjoint, this implies that $\mathcal{A}_i^\prime\setminus\mathcal{A}$ is negligible for $i=0,1$. For~${i=0,1}$, setting $\mathcal{A}_i= \mathcal{A}_i^\prime\cap\mathcal{A}$, we have
\[
\mathcal{A}_i^\prime= (\mathcal{A}_i^\prime \cap \mathcal{A}) \cup (\mathcal{A}_i^\prime\setminus \mathcal{A}) = \mathcal{A}_i \cup (\mathcal{A}_i^\prime\setminus \mathcal{A}).
\]
Thus, $\mathcal{A}_i^\prime$ and $\mathcal{A}_i$ differ only by a negligible set for $i=0,1$, and thus $\mathcal{A}_0$ and $\mathcal{A}_1$ satisfy~(ii). Moreover, since $\mathcal{A}_i\subseteq\mathcal{A}_i^\prime$ for $i=0,1$, $\mathcal{A}_0$ and $\mathcal{A}_1$ also satisfy~(i). Thus, (iii)~holds.
\end{proof}
\begin{prop}\label{prop-comp-atom}
$\mathbf{c}$ is a $\mathcal{D}_\mathrm{LV}$-atom.
\end{prop}
\begin{proof}
Suppose that $\mathbf{c}$ is not a $\mathcal{D}_\mathrm{LV}$-atom. Then there are $\mathrm{LV}$-degrees $\mathbf{a_0},\mathbf{a_1}>\mathbf{0}$ such that $\mathbf{a_0} \wedge\mathbf{a_1}=\mathbf{0}$ and $\mathbf{a_0}\vee\mathbf{a_1}=\mathbf{c}$. Then,
if we choose $\mathcal{A}$ in condition~(iii) of Lemma~\ref{fact1} to be the collection of all computable sequences, there are $\mathcal{A}_0,\mathcal{A}_1\in\mathcal{I}$ satisfying all three conditions of that lemma. But since $\mathbf{a_0},\mathbf{a_1}>\mathbf{0}$, both $\mathcal{A}_0$ and $\mathcal{A}_1$ are non-negligible and hence non-empty, so by condition~(iii) each of $\mathrm{Spec}_\mathrm{T}(\mathcal{A}_0)$ and $\mathrm{Spec}_\mathrm{T}(\mathcal{A}_1)$ contains the degree of the computable sequences, contradicting condition~(i).
\end{proof}
\begin{thm}\label{thm:rand-atom}
$\mathbf{r}$ is a $\mathcal{D}_\mathrm{LV}$-atom.
\end{thm}
To prove Theorem \ref{thm:rand-atom}, we will need to draw upon several classical results from measure theory, as well as several auxiliary lemmata. Here we follow V'yugin's general proof strategy while filling in more details, especially in isolating and proving Lemma \ref{lem:rand-atom-1} below.
As noted in Remark \ref{rmk1}~(iii), for any left-c.e.\ semi-measure $P$, $\overline{P}$ is absolutely continuous with respect to $\overline{M}$. It follows by the Radon-Nikodym Theorem that there is a measurable function~$\frac{d\overline{P}}{d\overline{M}}$ such that, for all measurable $\mathcal{X} \subseteq 2^\omega$,
\[
\overline{P}(\mathcal{X})=\int_\mathcal{X}{\frac{d\overline{P}}{d\overline{M}}}(X)d\overline{M}(X).
\]
The Radon-Nikodym Theorem further guarantees that for any measurable $f\!\colon2^\omega\rightarrow\mathbb{R}$ such that for all measurable $\mathcal{X} \subseteq 2^\omega$ the property
\[
\overline{P}(\mathcal{X})=\int_\mathcal{X} f(X)d\overline{M}(X)
\]
holds, we have $f(X)=\dfrac{d\overline{P}}{d\overline{M}}(X)$ for $\overline{M}$-almost every $X\in2^\omega$.
\begin{lem}\label{lem:rand-atom-1}
$\dfrac{d\overline{P}}{d\overline{M}}(X)=\lim_{n\rightarrow\infty}\dfrac{\overline{P}(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)}$ for $\overline{M}$-almost every $X\in2^\omega$.
\end{lem}
\begin{proof}
First, recall that for a measure $\mu$ on $2^\omega$, a $\mu$-martingale is a function $d\colon2^{<\omega}\rightarrow \mathbb{R}^{\geq 0}$ such that
\[
\mu(\sigma)d(\sigma)=\mu(\sigma0)d(\sigma0)+\mu(\sigma1)d(\sigma1)
\]
for every $\sigma\in2^{<\omega}$.\footnote{See, for instance, Nies~\cite[Chapter 7]{Nie09} or Downey and Hirschfeldt~\cite[Section 6.3]{DowHir10} for a discussion of the role of martingales in the theory of algorithmic randomness.}
Now, observe that $\dfrac{\overline{P}}{\,\overline{M}\,}$ is an $\overline{M}$-martingale. Indeed, for every $\sigma\in2^{<\omega}$,
\[
\overline{M}(\sigma)\dfrac{\overline{P}(\sigma)}{\overline{M}(\sigma)}=\overline{P}(\sigma)=\overline{P}(\sigma0)+\overline{P}(\sigma1)=
\overline{M}(\sigma0)\dfrac{\overline{P}(\sigma0)}{\overline{M}(\sigma0)}+\overline{M}(\sigma1)\dfrac{\overline{P}(\sigma1)}{\overline{M}(\sigma1)}.
\]
Thus $\lim_{n\rightarrow\infty}\dfrac{\overline{P}(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)}$ exists for $\overline{M}$-almost every $X\in2^\omega$ by the martingale convergence theorem.\footnote{It is well known that every martingale in the sense of algorithmic randomness (as given above) is a martingale in the classical sense, and thus the classical martingale convergence theorem is applicable. See Downey and Hirschfeldt~\cite[Theorem~7.1.3]{DowHir10} for a proof of an effective version of the martingale convergence theorem.} Hence, by the Radon-Nikodym Theorem, it suffices to show that
\begin{equation*}\tag{$\dagger$}
\overline{P}(\mathcal{A})=\int_\mathcal{A} \lim_{n\rightarrow\infty}\dfrac{\overline{P}(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)} d\overline{M}(X)
\end{equation*}
for every clopen $\mathcal{A}\subseteq2^\omega$ (which can then be extended to every measurable $\mathcal{A}\subseteq2^\omega$).
Since there is some $c$ such that $\overline{P}(\sigma)\leq c\cdot\overline{M}(\sigma)$ for every $\sigma\in2^{<\omega}$, we have for every $n$ that
\[
\dfrac{\overline{P}(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)}\leq c,
\]and hence by the dominated convergence theorem,
\begin{equation*}\tag{$\ddagger$}
\lim_{n\rightarrow\infty}\int_\mathcal{A} \dfrac{\overline{P}(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)} d\overline{M}(X)=
\int_\mathcal{A} \lim_{n\rightarrow\infty}\dfrac{\overline{P}(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)} d\overline{M}(X).
\end{equation*}
By ($\ddagger$), to establish ($\dagger$) it now suffices to show that $\overline P(\mathcal{A})$ is equal to the left-hand side of ($\ddagger$). For each sufficiently large $N$ we can write $\mathcal{A}=\bigcup_{i=1}^k\llbracket\sigma_i\rrbracket$ for distinct $\sigma_1,\dotsc,\sigma_k\in2^N$. Then
\allowdisplaybreaks
\begin{align}
\lim_{n\rightarrow\infty}\int_\mathcal{A} \dfrac{\overline{P}(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)} d\overline{M}(X)&=\int_\mathcal{A} \dfrac{\overline{P}(X{\upharpoonright} N)}{\overline{M}(X{\upharpoonright} N)} d\overline{M}(X)\\
&=\sum_{i=1}^k\int_{\llbracket\sigma_i\rrbracket}\dfrac{\overline{P}(X{\upharpoonright} N)}{\overline{M}(X{\upharpoonright} N)} d\overline{M}(X)\\
&=\sum_{i=1}^k\dfrac{\overline{P}(\llbracket\sigma_i\rrbracket)}{\overline{M}(\llbracket\sigma_i\rrbracket)}\overline{M}(\llbracket\sigma_i\rrbracket)\\
&=\sum_{i=1}^k\overline{P}(\llbracket\sigma_i\rrbracket)=\overline{P}(\mathcal{A}).\qedhere
\end{align}
\end{proof}
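The phenomenon described in Lemma~\ref{lem:rand-atom-1} is easy to observe numerically, at least in a simplified setting. The following Python sketch is ours and is not part of the formal development: since $\overline{M}$ is not computable, we substitute the uniform measure $\lambda$ for it, and we take for $P$ a computable mixture of $\lambda$ and a Bernoulli measure, so that the corresponding Radon-Nikodym derivative equals $\tfrac12$ almost everywhere; all names and parameter choices in the code are illustrative assumptions.
\begin{verbatim}
# Illustration only: we replace the (non-computable) measure \bar M by the
# uniform measure lam and take P = (1/2)*lam + (1/2)*Bernoulli(1/3).
# Then dP/dlam(X) = 1/2 for lam-almost every X, and the ratios
# P(X|n)/lam(X|n) converge to 1/2 along a lam-typical path.
import random

def lam(sigma):                      # uniform measure of the cylinder [sigma]
    return 0.5 ** len(sigma)

def bernoulli(p, sigma):             # Bernoulli(p) measure of [sigma]
    ones = sum(sigma)
    return p ** ones * (1 - p) ** (len(sigma) - ones)

def P(sigma):                        # a computable mixture measure
    return 0.5 * lam(sigma) + 0.5 * bernoulli(1 / 3, sigma)

random.seed(0)
X = [random.randint(0, 1) for _ in range(400)]   # a path chosen uniformly
for n in (10, 50, 100, 200, 400):
    print(n, P(X[:n]) / lam(X[:n]))              # tends to 1/2
\end{verbatim}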
\begin{lem}[V'yugin \cite{Vyu82}]\label{lem:rand-atom-2}
Let $P$ be a left-c.e.\ semi-measure and suppose that for $\mathcal{B}\subseteq2^\omega$, we have $\overline{M}(\mathcal{B}_0)=0$, where
\[
\mathcal{B}_0=\Biggl\{X\in\mathcal{B}\colon \frac{d\overline{P}}{d\overline{M}}(X)=0\Biggr\}.
\]
Then $\overline{P}(\mathcal{B})=0$ implies that $\overline{M}(\mathcal{B})=0$.
\end{lem}
\begin{proof}
By the hypothesis,
\[
0=\overline P(\mathcal{B}\setminus\mathcal{B}_0)=\int_{\mathcal{B}\setminus\mathcal{B}_0}\frac{d\overline{P}}{d\overline{M}}(X)d\overline{M}(X).
\]
Since $\dfrac{d\overline{P}}{d\overline{M}}(X)\neq 0$ for every $X\in\mathcal{B}\setminus\mathcal{B}_0$, it follows that $\overline{M}(\mathcal{B}\setminus\mathcal{B}_0)=0$. Together with the hypothesis that $\overline{M}(\mathcal{B}_0)=0$, this yields $\overline M(\mathcal{B})=0$.
\end{proof}
\begin{lem}[V'yugin \cite{Vyu82}]\label{lem:rand-atom-3}
Let $\mu$ be a computable measure, and let $\mathcal{B}\subseteq\mathrm{MLR}_\mu$ be such that~$\mu(\mathcal{B})=0$. Then $\mathcal{B}$ is negligible.
\end{lem}
\begin{proof}
Since $\mathcal{B}\subseteq\mathrm{MLR}_\mu$, by Theorem \ref{thm-levin-schnorr}, for every $X\in \mathcal{B}$, there is some $c$ such that
\[
\mu(X{\upharpoonright} n)\geq c\cdot M(X{\upharpoonright} n)
\]
for every $n$. It follows that for all $n$,
\[
\frac{\mu(X{\upharpoonright} n)}{\overline{M}(X{\upharpoonright} n)}\geq\frac{\mu(X{\upharpoonright} n)}{M(X{\upharpoonright} n)}\geq c.
\]
By Lemma \ref{lem:rand-atom-1}, $\dfrac{d\mu}{d\overline M}(X)\neq 0$ for $\overline{M}$-almost every $X\in\mathcal{B}$, and so by Lemma \ref{lem:rand-atom-2} and the fact that~${\mu(\mathcal{B})=0}$, it follows that $\mathcal{B}$ is negligible.
\end{proof}
Lastly, we need one further classical result. Recall that $\mathcal{A}\subseteq2^\omega$ is a tailset if for all $\sigma\in2^{<\omega}$ and all~$Y\in2^\omega$ with $\sigma Y\in\mathcal{A}$ we also have that $\tau Y\in\mathcal{A}$ for every $\tau \in 2^{|\sigma|}$. That is, for a tailset~$\mathcal{A}$, modifying a finite initial segment of an infinite binary sequence has no bearing on whether that sequence is an element of $\mathcal{A}$ or not. The following result will only be used in the context of Cantor space; for a proof specific to that setting see Downey and Hirschfeldt~\cite[Theorem~1.2.4]{DowHir10}.
\begin{thm}[Kolmogorov's 0-1 Law]\label{thm:tailset}
If $\mathcal{A}\subseteq2^\omega$ is a measurable tailset, then $\lambda(\mathcal{A})=0$ or~${\lambda(\mathcal{A})=1}$.
\end{thm}
We can now prove Theorem~\ref{thm:rand-atom}.
\begin{proof}[Proof of Theorem \ref{thm:rand-atom}]
Suppose that $\mathbf{r}=\mathbf{a_0}\vee\mathbf{a_1}$ and $\mathbf{a_0}\wedge\mathbf{a_1}=\mathbf{0}$ for some $\mathbf{a_0},\mathbf{a_1}>\mathbf{0}$. Let ${\mathcal{A}_0,\mathcal{A}_1\in\mathcal{I}}$ be collections of sequences as given by Lemma~\ref{fact1} where ${\mathbf{deg_\mathbf{LV}(}\mathcal{A}_i\mathbf{)}=\mathbf{a}_i}$ and $\mathcal{A}_i\subseteq(\mathrm{MLR})^{\equiv_\mathrm{T} }$ for~${i=0,1}$. Note that for $i=0,1$, for each $X\in\mathcal{A}_i$ there is some $Y\in\mathrm{MLR}\cap \mathcal{A}_i$ such that $X\equiv_\mathrm{T} Y$. Let us consider the subcollections of sequences $\mathcal{A}_i^*=\mathrm{MLR}\cap \mathcal{A}_i$ for $i=0,1$. Since each~$\mathcal{A}_i$ is non-negligible, it follows that
\[
\lambda\Biggl(\bigcup_e\Phi^{-1}_e(\mathcal{A}_i)\Biggr)>0
\]
for $i=0,1$. Since each $X\in\mathcal{A}_i$ is Turing equivalent to some $Y\in\mathcal{A}^*_i$, it follows for $i=0,1$ that
\[
\bigcup_e\Phi^{-1}_e(\mathcal{A}_i)=\bigcup_e\Phi^{-1}_e(\mathcal{A}^*_i)
\]
and hence
\[
\lambda\Biggl(\bigcup_e\Phi^{-1}_e(\mathcal{A}^*_i)\Biggr)>0.
\]
Then Proposition~\ref{prop-neg} and Lemma \ref{lem:rand-atom-3} imply that $\lambda(\mathcal{A}_i^*)>0$ for $i=0,1$. But each $\mathcal{A}^*_i$ is a measurable tailset, so by Theorem \ref{thm:tailset} it follows that
$\lambda(\mathcal{A}^*_i)=1$ for $i=0,1$, which is impossible as $\mathcal{A}^*_0$ and $\mathcal{A}^*_1$
are disjoint.
\end{proof}
\subsection{Additional results about the $\mathrm{LV}$-degrees}
It is reasonable to ask whether the degree $\mathbf{r}\vee\mathbf{c}$ is
the top degree in $\mathcal{D}_\mathrm{LV}$. V'yugin gave a negative answer to this question by proving that the complement of $\mathbf{r}\vee\mathbf{c}$ in $\mathcal{D}_\mathrm{LV}$ is non-negligible.
We will give the details of his proof in Section~\ref{sec-implementing}, where we will provide the first instance of the technique of building semi-measures that we mentioned in the introduction.
However, in this subsection, we provide a simpler proof of this result, and a number of new results about $\mathcal{D}_\mathrm{LV}$.
Given $\mathbf{a}\in\mathcal{D}_\mathrm{LV}$ and $\mathcal{A}\subseteq2^\omega$ such that $\mathbf{deg_\mathbf{LV}(}(\mathcal{A})^{\equiv_\mathrm{T} }\mathbf{)}=\mathbf{a}$, we say that $\mathcal{A}$ \emph{generates} $\mathbf{a}$ or that $\mathbf{a}$ is the $\mathrm{LV}$-degree \emph{generated by} $\mathcal{A}$. We will use the following lemma repeatedly.
\begin{samepage}
\begin{lem}\label{lem-negfacts} Let $\mathcal{A},\mathcal{B}\subseteq 2^\omega$ be measurable sets.
\begin{itemize}
\item[(i)] If $\mathcal{A}\setminus\mathcal{B}$ is negligible, then $(\mathcal{A})^{\equiv_\mathrm{T} }\setminus(\mathcal{B})^{\equiv_\mathrm{T} }$ is also negligible. In particular, we have~${(\mathcal{A})^{\equiv_\mathrm{T} }\leq_\mathrm{LV}(\mathcal{B})^{\equiv_\mathrm{T} }}$.
\item[(ii)] If $\mathcal{A}\subseteq \mathcal{B}$, then $(\mathcal{A})^{\equiv_\mathrm{T} }\leq_\mathrm{LV}(\mathcal{B})^{\equiv_\mathrm{T} }$.
\end{itemize}
\end{lem}
\end{samepage}
\begin{proof}
(i) First observe that $(\mathcal{A})^{\equiv_\mathrm{T} }\setminus(\mathcal{B})^{\equiv_\mathrm{T} }\subseteq (\mathcal{A}\setminus\mathcal{B})^{\equiv_\mathrm{T} }$. Indeed, given $X\in(\mathcal{A})^{\equiv_\mathrm{T} }\setminus(\mathcal{B})^{\equiv_\mathrm{T} }$, there is some $Y\equiv_\mathrm{T} X$ such that $Y\in\mathcal{A}$ and for all $Z\in\mathcal{B}$, we have $Z\not\equiv_\mathrm{T} X$. It follows that~${Y\notin \mathcal{B}}$, and hence $X\in (\mathcal{A}\setminus\mathcal{B})^{\equiv_\mathrm{T} }$.
Now suppose that $(\mathcal{A})^{\equiv_\mathrm{T} }\setminus(\mathcal{B})^{\equiv_\mathrm{T} }$ is non-negligible. By the above observation, $(\mathcal{A}\setminus\mathcal{B})^{\equiv_\mathrm{T} }$ is also non-negligible. For $i,j\in\omega$ define $\S_{i,j}=\{X\in2^\omega\colon (\exists Y\in \mathcal{A}\setminus \mathcal{B})\; (\Phi_i(Y)=X \;\wedge\;\Phi_j(X)=Y)\}$. Then we have
\[
(\mathcal{A}\setminus\mathcal{B})^{\equiv_\mathrm{T} }=\bigcup_{(i,j)\in\omega^2}\S_{i,j}.
\]
Since $(\mathcal{A}\setminus\mathcal{B})^{\equiv_\mathrm{T} }$ is non-negligible, there is some pair $(i,j)\in\omega^2$ such that $\S_{i,j}$ is non-negligible. Then by Proposition \ref{prop-neg}, there is some Turing functional $\Psi$ such that $\lambda(\Psi^{-1}(\S_{i,j}))>0$. By definition of $\S_{i,j}$, if $\Psi(Z)\in\S_{i,j}$, then $\Phi_j(\Psi(Z))\in\mathcal{A}\setminus\mathcal{B}$.
Thus $\Psi^{-1}(\S_{i,j})\subseteq (\Phi_j\circ\Psi)^{-1}(\mathcal{A}\setminus\mathcal{B})$, and so $\lambda((\Phi_j\circ\Psi)^{-1}(\mathcal{A}\setminus\mathcal{B}))>0$. Thus by Proposition \ref{prop-neg}, $\mathcal{A}\setminus\mathcal{B}$ is not negligible.
\smallskip
\noindent (ii) If $\mathcal{A}\subseteq \mathcal{B}$, then $\mathcal{A}\setminus\mathcal{B}=\emptyset$ is trivially negligible. Thus by~(i), $(\mathcal{A})^{\equiv_\mathrm{T} }\leq_\mathrm{LV}(\mathcal{B})^{\equiv_\mathrm{T} }$.
\end{proof}
It is natural to ask how the $\mathrm{LV}$-degree of the Martin-L\"of random Turing degrees compares to the $\mathrm{LV}$-degrees associated to other notions of algorithmic randomness. First we show
that the $\mathrm{LV}$-degree of the Schnorr random Turing degrees is also $\mathbf{r}$.
\begin{thm}\label{thm-srvd}
$\mathbf{deg_\mathbf{LV}(}(\mathrm{SR})^{\equiv_\mathrm{T} }\mathbf{)}=\mathbf{r}$.
\end{thm}
\begin{proof}
($\geq_\mathrm{LV}$:) $\mathrm{MLR}\subseteq\mathrm{SR}$, and thus
by Lemma \ref{lem-negfacts} (ii), $(\mathrm{MLR})^{\equiv_\mathrm{T} }\leq_\mathrm{LV}(\mathrm{SR})^{\equiv_\mathrm{T} }$. \\
($\leq_\mathrm{LV}$:) We show that $\mathrm{SR}\setminus\mathrm{MLR}$ is negligible, which by Lemma \ref{lem-negfacts} (i) implies $(\mathrm{SR})^{\equiv_\mathrm{T} }\leq_\mathrm{LV}(\mathrm{MLR})^{\equiv_\mathrm{T} }$.
As shown by Nies, Stephan, and Terwijn \cite{NieSteTer05}, every $X\in\mathrm{SR}\setminus\mathrm{MLR}$ has high degree. But by Proposition \ref{prop-negligible}, the collection of sequences of high degree is negligible.
\end{proof}
\begin{corollary}\label{cor-srvd}
Let $\mathrm{R}$ be any notion of algorithmic randomness such that $\mathrm{MLR}\subseteq\mathrm{R}\subseteq\mathrm{SR}$. Then \[\mathbf{deg_\mathbf{LV}(}(\mathrm{R})^{\equiv_\mathrm{T} }\mathbf{)}=\mathbf{r}.\]
\end{corollary}
\begin{proof}
By Lemma \ref{lem-negfacts} (ii) and Theorem \ref{thm-srvd}, we have
\[
\mathbf{r}=\mathbf{deg_\mathbf{LV}(}(\mathrm{MLR})^{\equiv_\mathrm{T} }\mathbf{)}\leq_\mathrm{LV} \mathbf{deg_\mathbf{LV}(}(\mathrm{R})^{\equiv_\mathrm{T} }\mathbf{)}\leq_\mathrm{LV}\mathbf{deg_\mathbf{LV}(}(\mathrm{SR})^{\equiv_\mathrm{T} }\mathbf{)}=\mathbf{r}.\qedhere
\]
\end{proof}
Thus, notions of randomness such as computable randomness, Kolmogorov-Loveland randomness, and the non-monotonic randomness notions studied in Bienvenu et al.~\cite{BieHolKra12} all are of $\mathrm{LV}$-degree $\mathbf{r}$. Similar results hold for notions of randomness stronger than Martin-L\"of randomness, as the following result shows.
\begin{thm}\label{thm-relrvd}
For every $Z\in2^\omega$, $\mathbf{deg_\mathbf{LV}(}(\mathrm{MLR}^Z)^{\equiv_\mathrm{T} }\mathbf{)}=\mathbf{deg_\mathbf{LV}(}(\mathrm{MLR})^{\equiv_\mathrm{T} }\mathbf{)}$.
\end{thm}
\begin{proof}
($\geq_\mathrm{LV}$:) $\mathrm{MLR}^Z\subseteq\mathrm{MLR}$, and so by Lemma \ref{lem-negfacts} (ii), $(\mathrm{MLR}^Z)^{\equiv_\mathrm{T} }\leq_\mathrm{LV}(\mathrm{MLR})^{\equiv_\mathrm{T} }$. \\
($\leq_\mathrm{LV}$:) We show that $\mathrm{MLR}\setminus\mathrm{MLR^Z}$ is negligible and apply Lemma \ref{lem-negfacts} (i).
Given any ${X\in\mathrm{MLR}\setminus\mathrm{MLR}^Z}$, by the \emph{XYZ} Theorem of Miller and Yu \cite{MilYu08}, if $X\leq_\mathrm{T} Y\in\mathrm{MLR}^Z$, then~$X\in\mathrm{MLR}^Z$. Thus no $Y\in\mathrm{MLR}^Z$ computes any $X\in\mathrm{MLR}\setminus\mathrm{MLR}^Z$. That is, no sufficiently random sequence computes an element of $\mathrm{MLR}\setminus\mathrm{MLR}^Z$, and so by our heuristic~($P_2$), this latter collection is negligible.
\end{proof}
An immediate consequence of Theorem \ref{thm-relrvd} is that for each $n\in\omega$, the $\mathrm{LV}$-degree of the collection of $n$-random sequences is $\mathbf{r}$. Another consequence is the following, the proof of which is analogous to that of Corollary \ref{cor-srvd}.
\begin{corollary}\label{cor-strong}
Let $\mathrm{R}$ be any notion of algorithmic randomness such that $\mathrm{MLR}^{\emptyset'}\subseteq\mathrm{R}\subseteq\mathrm{MLR}$. Then \[\mathbf{deg_\mathbf{LV}(}(\mathrm{R})^{\equiv_\mathrm{T} }\mathbf{)}=\mathbf{r}.\]
\end{corollary}
It follows that notions of randomness such as difference randomness, Demuth randomness, and weak 2-randomness all generate the $\mathrm{LV}$-degree $\mathbf{r}$.
We now show that $\mathbf{r}\vee\mathbf{c}$ is not the top $\mathrm{LV}$-degree by exhibiting an $\mathrm{LV}$-degree that is incomparable with it. Let $\mathbf{g}$ be the $\mathrm{LV}$-degree generated by the collection of 1-generic sequences.
By Proposition~\ref{prop-non-negligible} this collection is non-negligible.
\begin{prop}\label{prop-r-g}
{\ }
\begin{itemize}
\item[(i)] $\mathbf{r}\wedge\mathbf{g}=\mathbf{0}$, and hence $\mathbf{r},\mathbf{g}<_\mathrm{LV}\mathbf{r}\vee\mathbf{g}$.
\item[(ii)] $(\mathbf{r}\vee\mathbf{c})\wedge\mathbf{g}=\mathbf{0}$.
\item[(iii)] $\mathbf{r}\wedge(\mathbf{g}\vee\mathbf{c})=\mathbf{0}$.
\end{itemize}
\end{prop}
\begin{proof}
(i) As shown by Demuth and Ku\v cera \cite{DemKuc87}, no 1-generic can compute a Martin-L\"of random sequence. Thus the set of Turing degrees containing a Martin-L\"of random sequence is disjoint from the set of Turing degrees containing a 1-generic sequence, from which the first part of~(i) follows. The second part of~(i) immediately follows from the first part.
Statements~(ii) and~(iii) follow from~(i) and the fact that the collection of computable sequences is disjoint from the collection of 1-generic sequences and from the collection of Martin-L\"of random sequences.
\end{proof}
\begin{corollary}
Neither $\mathbf{r}\vee\mathbf{c}$ nor $\mathbf{g}\vee\mathbf{c}$ equals the top $\mathrm{LV}$-degree $\mathbf{1}$.
\end{corollary}
Let $\mathbf{h}$ be the $\mathrm{LV}$-degree of the collection of sequences of hyperimmune degree, which is non-negligible by Proposition \ref{prop-non-negligible}.
\begin{remark}\label{rmk-hyp-vd}
As shown by Kurtz, a Turing degree is hyperimmune if and only if it contains a weakly 1-generic sequence, where a sequence $X$ is weakly 1-generic if for every dense c.e.\ $S\subseteq 2^{<\omega}$, there is some~$\sigma\prec X$ such that $\sigma\in S$. Here $S\subseteq 2^{<\omega}$ is called {\em dense} if every element of $2^{<\omega}$ has an extension in~$S$. If we write the collection of weakly 1-generic sequences as $\mathrm{W1GEN}$, we have~$\mathbf{h}=\mathbf{deg_\mathbf{LV}(}(\mathrm{W1GEN})^{\equiv_\mathrm{T} }\mathbf{)}$.
\end{remark}
An additional characterization of $\mathbf{h}$ can be given in terms of the collection $\mathrm{KR}$ of Kurtz random sequences.
\begin{prop}
$\mathbf{h}=\mathbf{deg_\mathbf{LV}(}(\mathrm{KR})^{\equiv_\mathrm{T} }\mathbf{)}$.
\end{prop}
\begin{proof}
($\leq_\mathrm{LV}$:)
Since every weakly 1-generic sequence is Kurtz random, by Lemma~\ref{lem-negfacts}~(ii) we have
\[\mathbf{deg_\mathbf{LV}(}(\mathrm{W1GEN})^{\equiv_\mathrm{T} }\mathbf{)}\leq_\mathrm{LV}\mathbf{deg_\mathbf{LV}(}(\mathrm{KR})^{\equiv_\mathrm{T} }\mathbf{)}.\]
\noindent ($\geq_\mathrm{LV}$:) We need to show that the collection of Kurtz random sequences that do not have hyperimmune degree is negligible.
As shown by Yu in unpublished work (see Downey and Hirschfeldt~\cite[Theorem 8.11.12]{DowHir10}), every Kurtz random sequence of hyperimmune-free degree is weakly $2$-random. Since every 2-random
sequence has hyperimmune degree, such a sequence must be weakly $2$\nobreakdash-random and not $2$-random.
By Corollary \ref{cor-strong}, the collection of weakly $2$-random sequences that are not $2$-random is negligible, from which the conclusion
follows.
\end{proof}
Since the collection of Kurtz random sequences includes every Martin-L\"of random sequence and every 1-generic sequence, we obtain the following result.
\begin{prop}\label{prop-rgh}
$\mathbf{r}<_\mathrm{LV}\mathbf{h}$ and $\mathbf{g}<_\mathrm{LV}\mathbf{h}$.
\end{prop}
\begin{proof}
Since $\mathrm{MLR}\subseteq\mathrm{KR}$ and $\mathrm{1GEN}\subseteq\mathrm{KR}$, by Lemma \ref{lem-negfacts} (ii) we have $\mathbf{r}\leq_\mathrm{LV}\mathbf{h}$ and $\mathbf{g}\leq_\mathrm{LV}\mathbf{h}$.
Moreover, $\mathrm{1GEN}\subseteq\mathrm{KR}\setminus\mathrm{MLR}$, so this latter collection is non-negligible, which implies $\mathbf{r}<_\mathrm{LV}\mathbf{h}$. Similarly, ${\mathrm{MLR}\subseteq\mathrm{KR}\setminus\mathrm{1GEN}}$ implies
$\mathbf{g}<_\mathrm{LV}\mathbf{h}$.
\end{proof}
Note that $\mathbf{h}<_\mathrm{LV}\mathbf{h}\vee\mathbf{c}$, as the collection of computable sequences is disjoint from the collection of sequences of hyperimmune degree, so that in particular $\mathbf{h}\wedge\mathbf{c}=\mathbf{0}$.
In fact, $\mathbf{h}\vee\mathbf{c}$ can be identified as the top $\mathrm{LV}$-degree.
\begin{prop}\label{prop-hyp-top}
$\mathbf{h}\vee\mathbf{c}=\mathbf{1}$.
\end{prop}
\begin{proof}
By Proposition \ref{prop-negligible}~(iv) the collection of non-computable sequences of hyperimmune-free degree is negligible, from which the result immediately follows.
\end{proof}
The following corollary, pointed out to the authors by Frank Stephan, allows us to identify~$\mathbf{h}$ also as the $\mathrm{LV}$-degree generated by various immunity notions.
\begin{defn}{\ }
\begin{itemize}
\item[(i)] Let $\mathrm{IM}$~denote the collection of immune sequences, where a sequence is immune if it has no infinite computably enumerable subsets.
\item[(ii)] Let $\mathrm{BI}$~denote the collection of biimmune sequences, where a sequence is biimmune if it and its complement are immune.
\item[(iii)] Let $\mathrm{BHI}$~denote the collection of bihyperimmune sequences, where a sequence is bi\-hyperimmune if it and its complement are hyperimmune.
\end{itemize}
Then set $\mathbf{i}=\mathbf{deg_\mathbf{LV}(}(\mathrm{IM})^{\equiv_\mathrm{T} }\mathbf{)}$,
$\mathbf{b}=\mathbf{deg_\mathbf{LV}(}(\mathrm{BI})^{\equiv_\mathrm{T} }\mathbf{)}$, and
$\mathbf{bh}=\mathbf{deg_\mathbf{LV}(}(\mathrm{BHI})^{\equiv_\mathrm{T} }\mathbf{)}$.
\end{defn}
\begin{corollary}\label{biimmunedegree}
We have $\mathbf{i}=\mathbf{b}=\mathbf{h}=\mathbf{bh}=\mathbf{c}^c$.
\end{corollary}
\begin{proof}
Let $\mathrm{COMP}$ denote the computable and $\mathrm{HI}$ denote the hyperimmune sequences. Then
\[
(2^\omega \setminus \mathrm{COMP})^{\equiv_\mathrm{T} }=(\mathrm{IM})^{\equiv_\mathrm{T} }\supseteq (\mathrm{BI})^{\equiv_\mathrm{T} } \supseteq (\mathrm{BHI})^{\equiv_\mathrm{T} } = (\mathrm{HI})^{\equiv_\mathrm{T} }.
\]
Here the first equality is by Dekker and Myhill~\cite{MR0099292} (see, for example, Odifreddi~\cite[item~1~on~page~498]{MR982269}), the first inequality is by definition, and the final equality is by Kurtz~\cite[Corollary 2.1]{MR716638}.
Using the definition of hyperimmunity given in terms of strong c.e.\ arrays (see, for example, Odifreddi~\cite[Definition III.3.7]{MR982269}), it is easy to see that every hyperimmune set is immune, and by applying this to both a set and its complement, we see that every bihyperimmune set is biimmune, giving the second inequality.
Therefore, by Lemma~\ref{lem-negfacts}~(ii), we have
\[
\mathbf{h} =\mathbf{bh}\leq_\mathrm{LV} \mathbf{b} \leq_\mathrm{LV} \mathbf{i} = \mathbf{c}^c = \mathbf{h},
\]
where the last equality follows from Proposition~\ref{prop-hyp-top} together with the fact that $\mathbf{h}\wedge\mathbf{c}=\mathbf{0}$.
\end{proof}
We can also conclude that there is no intermediate $\mathrm{LV}$-degree between $\mathbf{h}$ and $\mathbf{1}$.
\begin{corollary}
There is no $\mathrm{LV}$-degree $\mathbf{e}$ such that $\mathbf{h}<_\mathrm{LV}\mathbf{e}<_\mathrm{LV}\mathbf{1}$.
\end{corollary}
\begin{proof}
By Proposition \ref{prop-comp-atom}, $\mathbf{c}$ is an atom of $\mathcal{D}_\mathrm{LV}$, and by Corollary \ref{biimmunedegree}, $\mathbf{c}^c=\mathbf{h}$. It is a general fact that in Boolean algebras the complement of an atom is a co-atom, that is, an element~$\mathbf{k}$ such that there is no $\mathbf{k}'$ such that $\mathbf{k}<\mathbf{k'}<\mathbf{1}$ (see, for instance, Blyth~\cite[item~(3) on page~79]{Bly05}).
\end{proof}
Let $\mathbf{d}$ denote the $\mathrm{LV}$-degree of the collection of sequences of DNC degree, which is non-negligible by Proposition~\ref{prop-non-negligible}. Given that every Martin-L\"of random sequence is of DNC degree, we have~${\mathbf{r}\leq_\mathrm{LV}\mathbf{d}}$;
Bienvenu and Patey~\cite{BiePat14} showed that this inequality is in fact strict.
\begin{thm}[Bienvenu, Patey \cite{BiePat14}]\label{thm-bienvenu-patey}
$\mathbf{r}<_\mathrm{LV}\mathbf{d}$.
\end{thm}
Since $\mathbf{c}\wedge\mathbf{d}=\mathbf{0}$, we have the following corollary.
\begin{corollary}
$\mathbf{r}\vee\mathbf{c}$ and $\mathbf{d}$ are incomparable.
\end{corollary}
We can also easily derive the following result and corollary.
\begin{prop}\label{prop-dg}
$\mathbf{d}\wedge\mathbf{g}=\mathbf{0}$.
\end{prop}
\begin{proof}
No 1-generic sequence is of DNC degree by a result of Demuth and Ku\v{c}era~\cite{DemKuc87}, and thus the result follows from the same reasoning used in the proof of Proposition \ref{prop-r-g}~(i).
\end{proof}
\begin{corollary}
$(\mathbf{r}\vee\mathbf{g})<_\mathrm{LV} (\mathbf{d}\vee\mathbf{g})$.
\end{corollary}
\begin{proof}
Using general properties of Boolean algebras (see, for example, Blyth~\cite{Bly05}), we have
\[(\mathbf{d}\vee\mathbf{g})\wedge(\mathbf{r}\vee\mathbf{g})^c=(\mathbf{d}\vee\mathbf{g})\wedge (\mathbf{r}^c\wedge\mathbf{g}^c)=((\mathbf{d}\vee\mathbf{g})\wedge \mathbf{g}^c)\wedge\mathbf{r}^c=(\mathbf{d}\wedge\mathbf{g}^c)\wedge\mathbf{r}^c=\mathbf{d}\wedge\mathbf{r}^c>_\mathrm{LV}\mathbf{0},\]
where the last equality is by Proposition \ref{prop-dg} and the final inequality is by Theorem \ref{thm-bienvenu-patey}.
In~particular, $(\mathbf{d}\vee\mathbf{g})>_\mathrm{LV} (\mathbf{r}\vee\mathbf{g})$.
\end{proof}
In Section \ref{sec-implementing}, we give a new application of V'yugin's technique for building semi-measures which shows that the collection of non-computable sequences that are not of DNC degree is non-negligible; this in turn implies that $\mathbf{d}\vee\mathbf{c}$ is not the top $\mathrm{LV}$-degree.
However, we can alternatively derive this latter fact as follows.
\begin{prop}\label{prop-d-h}
$\mathbf{d}<_\mathrm{LV}\mathbf{h}$.
\end{prop}\nopagebreak\begin{proof}
By Proposition \ref{prop-hyp-top}, $\mathbf{d}\leq_\mathrm{LV}\mathbf{h}\vee\mathbf{c}$, which implies that the collection of sequences of DNC~degree that are neither computable nor of hyperimmune degree is negligible. But clearly no sequence of DNC~degree is computable, and thus we have
$\mathbf{d}\leq_\mathrm{LV}\mathbf{h}$.
Since every $1$-generic sequence has hyperimmune degree and is not of DNC degree, we have $\mathbf{h}\nleq_\mathrm{LV}\mathbf{d}$, and thus~${\mathbf{d}<_\mathrm{LV}\mathbf{h}}$.
\end{proof}
The following results about joins in $\mathcal{D}_\mathrm{LV}$ are immediate.
\begin{samepage}
\begin{corollary}{\ }
\begin{itemize}
\item[(i)] $\mathbf{c}<_\mathrm{LV}\mathbf{r}\vee\mathbf{c}<_\mathrm{LV}\mathbf{d}\vee\mathbf{c}<_\mathrm{LV}\mathbf{1}$.
\item[(ii)] $\mathbf{c}<_\mathrm{LV}\mathbf{g}\vee\mathbf{c}<_\mathrm{LV}\mathbf{d}\vee\mathbf{g}\vee\mathbf{c}$.
\item[(iii)]$\mathbf{d}\vee\mathbf{c}$, $\mathbf{g}\vee\mathbf{c}$, and $\mathbf{d}\vee\mathbf{g}$ are pairwise incomparable $\mathrm{LV}$-degrees.
\end{itemize}
\end{corollary}
\end{samepage}
The results of this section are summarized in Figure~\ref{figure-LV-degrees}.
\subsection{Open questions} We conclude this section with the following open questions.
\begin{question}
Is $\mathbf{d} \vee\mathbf{g} = \mathbf{h}$? In particular, is $\mathbf{d} \vee\mathbf{g}\vee\mathbf{c} = \mathbf{1}$?
\end{question}
Given that $\mathbf{r}$ is a $\mathcal{D}_\mathrm{LV}$-atom, it is also reasonable to ask whether the same holds for $\mathbf{g}$.
\begin{question}
Is $\mathbf{g}$ a $\mathcal{D}_\mathrm{LV}$-atom?
\end{question}
\begin{figure}[h]
\begin{center}
\scalebox{0.9}{
\begin{tikzpicture}[scale=0.65,auto=left,every node/.style={black}]
\node (1) at (0,16) {$\mathbf{1}=\mathbf{h}\vee\mathbf{c}$};
\node (duguc) at (0,12) {$\mathbf{d} \vee\mathbf{g} \vee\mathbf{c}$};
\node (h) at (5,12) {$\mathbf{h}$};
\node (duc) at (-1.75, 9.5) {$\mathbf{d} \vee\mathbf{c}$};
\node (dug) at (1.75, 9.5) {$\mathbf{d} \vee\mathbf{g}$};
\node (guc) at (4.2, 9.25) {\scalebox{1.25}{$\mathbf{g} \vee\mathbf{c}$}};
\node (ruc) at (-3.5,7) {$\mathbf{r} \vee\mathbf{c}$};
\node (d) at (0,7) {$\mathbf{d}$};
\node (rug) at (3.5,7) {$\mathbf{r} \vee\mathbf{g}$};
\node[circle,inner sep=2.5pt] (c) at (-7,2) {$\mathbf{c}$};
\node (r) at (0,2) {$\mathbf{r}$};
\node[circle,inner sep=2.5pt] (g) at (7,2) {$\mathbf{g}$};
\node (0) at (0,0) {$\mathbf{0}$};
\foreach \from/\to in {
0/r,c/ruc,r/ruc,r/d,ruc/duc,d/duc,d/dug,r/rug,g/rug,rug/dug,duc/duguc,dug/duguc}
\draw [->,thick] (\from) -- (\to);
\foreach \from/\to in {
duguc/1,dug/h,0/g}
\draw [->,thick,dashed] (\from) -- (\to);
\draw [->,thick,looseness=1] (c) to [out=80,in=225] (1);
\draw [->,thick,looseness=1] (g) to [out=95,in=290] (h);
\draw [->,thick,looseness=1] (g) to [out=105,in=299] (guc);
\draw [looseness=1,draw=white,double=black,double distance=\pgflinewidth,line width=2.5pt] (guc) to [out=123,in=-15] (duguc);
\draw [->,thick,looseness=1] (guc) to [out=123,in=-15] (duguc);
\draw [looseness=1,draw=white,double=black,double distance=\pgflinewidth,line width=2.5pt] (c) to [out=15,in=230] (guc);
\draw [->,thick,looseness=1] (c) to [out=15,in=230] (guc);
\draw [->,thick,looseness=1] (h) to [out=125,in=-25] (1);
\draw [->,thick] (0) -- (c);
\end{tikzpicture}}
\end{center}
\caption{Solid arrows represent strict separations in the $\mathrm{LV}$-degrees. Dashed arrows represent the following open questions: (a)~Is $\mathbf{g}$ a $\mathcal{D}_\mathrm{LV}$-atom? (b)~Is $\mathbf{d} \vee\mathbf{g} = \mathbf{h}$, and thus is $\mathbf{d}\vee\mathbf{g} \vee\mathbf{c} = \mathbf{1}$?}
\label{figure-LV-degrees}
\end{figure}
For the definitions of the notions appearing in the following open questions, see a standard reference such as Downey and Hirschfeldt~\cite{DowHir10}.
\begin{samepage}
\begin{question}\quad
\begin{itemize}
\item What are the $\mathrm{LV}$-degrees of the collections of sequences that are Turing equivalent to some sequence of Hausdorff dimension $1$, of packing dimension $1$, of Hausdorff dimension $<1$, of packing dimension $<1$?
Given some $\alpha \in (0,1)$, what are the $\mathrm{LV}$-degrees of the collections of sequences that are Turing equivalent to some sequence of Hausdorff dimension $\alpha$ or of packing dimension $\alpha$?
\item What is the $\mathrm{LV}$-degree of the collection of sequences that {\em compute} some $1$-generic sequence?
\item What is the $\mathrm{LV}$-degree of the collection of generalized low$_{\,1}$ sequences, that is, sequences~$X$ with the property that $X' \equiv_\mathrm{T} X \oplus \emptyset'$?
\end{itemize}
\end{question}
\end{samepage}
\section{How to Build a Semi-measure}\label{sec-building}
In this section, we outline a template for building left-c.e.\ semi-measures that was developed~\cite{Vyu82} and applied~\cite{Vyu08,Vyu09,Vyu12} by V'yugin and which has several applications in the study of $\mathcal{D}_\mathrm{LV}$ as well as the study of $\Pi^0_1$ classes. The main idea of V'yugin's construction is that a semi-measure on $2^{<\omega}$ can be seen as a network flow on a directed graph $G$ such that
\begin{itemize}
\item[(i)] the nodes of $G$, $\mathcal{V}_G$, are the elements of $2^{<\omega}$, and
\item[(ii)] the edges of $G$, $\mathcal{E}_G$, are pairs $(\sigma,\tau)$ of nodes $\sigma,\tau\in2^{<\omega}$ such that $\sigma\prec \tau$.
\end{itemize}
For $\sigma,\tau\in 2^{<\omega}$ with $\sigma \preceq \tau$ we will say that $\sigma$ is {\em above} $\tau$ and that $\tau$ is {\em below} $\sigma$; that is, in this article the binary tree~$2^{<\omega}$ grows downward. Note that, while this goes against the usual convention in computability theory, it has the intuitive advantage that measure will flow from the root $\varepsilon$ downwards, as liquids naturally do.
Given $\sigma,\tau\in 2^{<\omega}$ with $\sigma \prec \tau$, the length of $(\sigma,\tau)$, written as $|(\sigma,\tau)|$, is defined to be $|\tau|-|\sigma|$. If $|(\sigma,\tau)|=1$ then we always have
$(\sigma,\tau) \in \mathcal{E}_G$; such edges of $G$ will be referred to as \emph{normal edges} and the set of normal edges will be denoted by $\mathcal{N}_G$.
If $|(\sigma,\tau)|>1$ then $(\sigma,\tau)$ may or may not be in $\mathcal{E}_G$; if it is, we call~$(\sigma,\tau)$ an \emph{extra edge} of $G$. The set of extra edges will be denoted by $\mathcal{X}_G$. We will omit the subscripts if $G$ is clear from context.
Directed graphs $G$ that satisfy $\mathcal{V}_G=2^{<\omega}$ as described above will be called \emph{$2^{<\omega}$-digraphs}. In the sequel, we will restrict our attention to computable $2^{<\omega}$-digraphs.
\begin{defn}
Given a $2^{<\omega}$-digraph $G$, a \emph{network} on $G$ is a function $q\colon \mathcal{E}_G\rightarrow\mathbb{Q}\cap [0,1]$ satisfying, for each $\sigma\in2^{<\omega}$,
\[
\sum_{(\sigma,\tau)\in \mathcal{E}_G}q(\sigma,\tau)\leq 1.
\]
\end{defn}
\noindent The idea here is that for a node $\sigma$, $q(\sigma,\tau)$ gives the proportion of the flow arriving in $\sigma$ that continues to flow into~$\tau$.
In the remainder of the article, we will always have $q(\sigma,\tau)>0$ for every extra edge $(\sigma,\tau)\in \mathcal{X}$. In fact, if $|(\sigma,\tau)|>1$, we will silently identify the two properties $q(\sigma,\tau)=0$ and $(\sigma,\tau)\notin \mathcal{E}$, since in either case the edge has no effect on the outcome of the construction. Note however that for normal edges $(\sigma,\tau)\in \mathcal{N}$ the case $q(\sigma,\tau)=0$ will occur quite often.
\begin{defn}
The \emph{amount of flow into a node $\tau$}, denoted $R(\tau)$, is defined inductively by
\begin{align*}
R(\varepsilon)&=1,\\
R(\tau)&=\sum_{(\sigma,\tau)\in \mathcal{E}_G}q(\sigma,\tau)R(\sigma).
\end{align*}
\end{defn}
\noindent
Hereafter we will refer to $R$ as the \emph{in-flow function associated to $q$}. Observe further that if $q$~is computable, then so is $R$.
\begin{remark}\label{rmk-R}
Note that $\sigma\prec\tau$ does not necessarily imply that $R(\sigma)\geq R(\tau)$. The reason is that not all of the flow that we observe below $\sigma$ must have flowed through $\sigma$ itself: there could be an extra edge that bypasses $\sigma$ and diverts flow directly to an extension of $\sigma$.
\end{remark}
To correct for this lack of monotonicity of $R$, we define the $q$-flow associated with a network~$q$. Given $\sigma\in2^{<\omega}$, let $T_\sigma$ \label{dkhfdghjkwqeuzidgfbnsdf} be the collection of finite prefix-free sets of strings $\tau$ such that $\sigma\preceq \tau$.
\begin{defn}\label{def:q-flow}
Let $q$ be a network on a $2^{<\omega}$-digraph $G$, and let $R$ be the in-flow function associated to $q$. Then the \emph{$q$-flow} $P$ is defined by
\[
P(\sigma)=\sup_{D\in T_\sigma}\sum_{\tau\in D}R(\tau).
\]
\end{defn}
\noindent
$P(\sigma)$ is thus the maximal amount of flow that can be observed passing through a set of extensions of the node~$\sigma$. The motivation for looking at prefix-free sets $D$ of nodes is to avoid counting the same quantity of flow more than once.
Note that since $\{\sigma\}\in T_\sigma$, we always have $P(\sigma) \geq R(\sigma)$, but equality need not hold due to the reason discussed in~Remark \ref{rmk-R}.
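To make these definitions concrete, consider the following small Python sketch, which is ours and purely illustrative: it truncates the tree at depth four, fixes a single extra edge that bypasses a node, and computes the in-flow $R$ together with a finite-depth approximation of the $q$-flow $P$. It exhibits both the failure of monotonicity of $R$ from Remark~\ref{rmk-R} and the inequality $P(\sigma)\geq R(\sigma)$.
\begin{verbatim}
# Illustration only: nodes are binary strings, the tree is cut at depth 4,
# and there is a single extra edge ("0", "010") carrying 1/4 of the flow.
DEPTH = 4
EXTRA = {("0", "010"): 0.25}

def q(sigma, tau):
    """The network: the two normal edges out of "0" are throttled to 3/8
    each, to make room for the extra edge; all other normal edges carry
    1/2.  Only meant to be evaluated on edges of the digraph."""
    if (sigma, tau) in EXTRA:
        return EXTRA[(sigma, tau)]
    if len(tau) == len(sigma) + 1 and tau.startswith(sigma):
        return 0.375 if sigma == "0" else 0.5
    return 0.0

def R(tau):
    """In-flow into tau: sum of q(s, tau)*R(s) over all edges (s, tau)."""
    if tau == "":
        return 1.0
    preds = [tau[:-1]] + [s for (s, t) in EXTRA if t == tau]
    return sum(q(s, tau) * R(s) for s in preds)

def P(sigma, depth=DEPTH):
    """Best total in-flow over a prefix-free set of extensions of sigma
    using only strings of length <= depth; a lower approximation of the
    q-flow P."""
    if len(sigma) == depth:
        return R(sigma)
    return max(R(sigma), P(sigma + "0", depth) + P(sigma + "1", depth))

print(R("01"), R("010"))   # 0.1875 < 0.21875: R is not monotone
print(P("01"), P("0"))     # 0.3125 and 0.5: P("01") >= R("010")
\end{verbatim}
Increasing the depth bound can only increase the value returned for $P$, in accordance with the supremum in Definition~\ref{def:q-flow}.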
We have the following important fact.
\begin{lem}\label{lem:semimeasure}
Let $q$ be a computable network on a $2^{<\omega}$-digraph $G$. Then the $q$-flow $P$ is a left-c.e.\ semi-measure.
\end{lem}
\begin{proof}
Clearly, $P(\varepsilon)= 1$. Let $s_0=\sup_{D\in T_{\sigma0}}\sum_{\tau\in D}R(\tau)$ and $s_1=\sup_{D\in T_{\sigma1}}\sum_{\tau\in D}R(\tau)$. Given $\delta>0$, there are $D_0\in T_{\sigma0}$ and $D_1\in T_{\sigma1}$ such that
\[
\sum_{\tau\in D_i}R(\tau)\geq s_i-\delta/2
\]
for $i=0,1$. Then $D_0\cup D_1\in T_\sigma$, and hence
\[
\sup_{D\in T_\sigma}\sum_{\tau\in D}R(\tau)\geq\sum_{\tau\in D_0\cup D_1}R(\tau)\geq s_0+s_1-\delta,
\]
for every $\delta>0$.
Thus $P(\sigma)\geq P(\sigma0)+P(\sigma1)$. Lastly, $P(\sigma)$ is left-c.e.\ uniformly in $\sigma$, as $G$, $q$, and~$R$ are all computable.
\end{proof}
\begin{defn}
A network $q$ is \emph{elementary} if $q(\sigma,\tau)=1/2$ for all but finitely many $(\sigma,\tau)\in \mathcal{N}$.
\end{defn}
By the definition of a network $q$, it follows that the set of extra edges $\mathcal{X}$ is finite if $q$~is elementary. Since by definition networks~$q$ only take rational values, every elementary network~$q$ is computable. Given a computable network $q$, we can write~$q$ as a limit of elementary networks~${(q_n)_{n\in\omega}}$ by requiring that
\begin{itemize}\label{elementary_approx}
\item[(i)] $q_n(\sigma,\tau)=q(\sigma,\tau)$ if $|\tau|\leq n$;
\item[(ii)] $q_n(\sigma,\tau)=1/2$ if $(\sigma,\tau)\in \mathcal{N}$ and $|\tau|>n$;
\item[(iii)] $q_n(\sigma,\tau)=0$ if $(\sigma,\tau)\in \mathcal{X}$ and $|\tau|>n$.
\end{itemize}
Note that these conditions imply that $q_{n-1}$ and $q_n$ agree on every edge $(\sigma,\tau)$ except possibly on edges $(\sigma,\tau)$ satisfying $|\tau|=n$.
We refer to such a sequence of elementary networks as the \emph{sequence of elementary restrictions} of $q$. Moreover, we will refer to each $q_n$ as the \emph{level $n$ elementary restriction} of~$q$.
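Phrased operationally, passing from $q$ to its level~$n$ elementary restriction is a purely local operation. The following minimal Python sketch (ours; $q$ is assumed to be given as a function on edges and to be evaluated only on edges of the digraph) implements conditions~(i)--(iii) directly.
\begin{verbatim}
def elementary_restriction(q, n):
    """The level-n elementary restriction q_n of a computable network q:
    it agrees with q on edges ending at level <= n, gives weight 1/2 to
    normal edges ending at levels greater than n, and drops all deeper
    extra edges."""
    def q_n(sigma, tau):
        if len(tau) <= n:
            return q(sigma, tau)       # condition (i)
        if len(tau) == len(sigma) + 1:
            return 0.5                 # condition (ii): a normal edge
        return 0.0                     # condition (iii): an extra edge
    return q_n
\end{verbatim}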
\subsection{The general template}\label{subsec-template}
The semi-measure $P$ that we construct will be one induced by a network flow $q$ as described in the previous paragraphs. Here, $q$ will be constructed through an infinite procedure which works in stages. At each stage $n$, an elementary network~$q_n$ together with its extra edge set $\mathcal{X}_n$ will be built. In the end we will then let $q=\lim_n q_n$ and $\mathcal{X}=\bigcup_n \mathcal{X}_n$. We first make some general informal remarks about the overall procedure, and then go on to describe in formal detail the individual stages.
\bigskip
\noindent The general construction template depends upon three parameters:
\begin{itemize}
\item[(1)] A computable function $\task\colon\omega\rightarrow\omega$, called the \emph{task function}, such that the values \[\task(0),\task(1),\task(2),\task(3),\dotsc\] follow the pattern
\[
0,1,0,1,2,0,1,2,3,0,1,2,3,4,\dotsc
\]
In particular, for each $i$, the set $\{n\colon\task(n)=i\}$ is infinite and $\task(n)\neq\task(n+1)$ for every~$n$. Every node will be assigned a task; namely, each $\sigma\in2^{<\omega}$ will be assigned the task~$\task(|\sigma|)$. For a given task $i$, the \emph{$i$-nodes} are the nodes $\sigma\in2^{<\omega}$ with~$\task(|\sigma|)=i$.
\item[(2)] A computable predicate $B(q',\sigma,\tau)$ which is defined for \textit{elementary} networks $q'$ on a $2^{<\omega}$-digraph $G$ and strings $\sigma$, $\tau$ such that both are $i$-nodes for the same $i\in\omega$.
\item[(3)] A computable, strictly increasing function $\mathfrak{c}\colon\omega\rightarrow\omega$.
\end{itemize}
The predicate $B$ will be determined by the requirements we are attempting to satisfy, while the function $\mathfrak{c}$ will be specifically used to provide the initial values for countdowns to expiration for certain nodes that are active in the construction, in a technical sense to be explained shortly.
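For concreteness, one possible choice of task function exhibiting the required pattern is sketched below in Python; the implementation is ours, and any computable function with the two properties stated in item~(1) would serve equally well.
\begin{verbatim}
def task(n):
    """The pattern 0,1,0,1,2,0,1,2,3,0,1,2,3,4,...: the blocks (0,1),
    (0,1,2), (0,1,2,3), ... concatenated.  Every value i occurs at
    infinitely many levels, and task(n) != task(n+1) for every n."""
    block = 2                # length of the current block: 2, 3, 4, ...
    while n >= block:
        n -= block
        block += 1
    return n
\end{verbatim}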
We take action towards fulfilling the task $i$ if we add an extra edge connecting two $i$-nodes; we will refer to such an edge as an \emph{$i$-edge} (or as an edge that is \textit{assigned to} task $i$). That is, an edge~$(\sigma,\tau)\in \mathcal{E}_G$ is an $i$-edge if $\task(|\sigma|)=\task(|\tau|)=i$. Let $\mathcal{X}[i]$ be the set of extra edges assigned to task $i$. Note that we never assign normal edges to any task $i$, since $\task(n)\neq\task(n+1)$ for every~$n$.
In the course of the construction, for $j<i$, we would ideally want to first perform all actions necessary for task $j$ before beginning to work on task $i$. That is,
for every extra edge~$(\sigma,\tau)$ between a pair of $i$-nodes $\sigma$ and $\tau$ and for any extra edge $(\sigma',\tau')$ assigned to some task~$j$ with~$j<i$, we would like to have $|\tau'|<|\sigma|$.
But, in fact, during the construction we will not be able to always ensure this property.
After having added $(\sigma,\tau)$ for task $\task(|\sigma|)=\task(|\tau|)=i$ it may turn out later in the construction that further edges for task~$j$ need to be added. Adding them will then invalidate our previous actions for task~$i$. The edge $(\sigma,\tau)$ stays in the digraph, but we will consider it a failure\label{sdfsdsdfsdfsdgdhfgh}, as it does not help us achieve the desired goal for task $i$. While the presence of $(\sigma,\tau)$ also causes no harm, we will, at some later stage, have to completely restart the construction for task $i$. The construction can therefore be thought of as a type of finite injury argument.
For a given task $i$, we will need to talk about the minimal length of an $i$-node to which an extra edge can be attached. We thus define the following auxiliary function~$w$: Let $q'$ be an elementary network on $G$, with the associated set of extra edges $\mathcal{X}'$ through which some of the flow passes. Then for each $i\in\omega$, we define
\[
w(i,q')=\min\{n\colon\task(n)=i \;\wedge\; (\forall j<i)(\forall(\sigma,\tau)\in \mathcal{X}'[j])\; |\tau|<n\}.
\]
That is, $w(i,q')$ is the least $n$ such that (i)~$\task(n)=i$ and (ii)~every edge in $G$ assigned to task~$j$ for some $j<i$ ends in a node of length less than $n$.
For an arbitrary (that is, not necessarily elementary) computable network~$q'$, $w(i,q')$ may be undefined in general. But in fact, for $q$'s built using the template described here,
$w(i,q')$ will always be well-defined by the above equation, and $w(i,q)=\lim_n w(i,q_n)$, where $q_n$ is the level~$n$ elementary restriction of $q$.
The lengths $n$ where $w(i,q_{n-1})\neq w(i,q_n)$ correspond to the \qt{failures} described in the previous paragraph.
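In the same illustrative vein as before, for a finite set of extra edges the value $w(i,q')$ can be computed directly from the above equation; the following Python sketch (ours) uses the fact that every extra edge joins two nodes assigned to the same task.
\begin{verbatim}
def w(i, extra_edges, task):
    """Least level n with task(n) == i such that every extra edge assigned
    to some task j < i ends at a level strictly smaller than n.  Here
    extra_edges is a finite set of pairs (sigma, tau) of strings with
    sigma a proper prefix of tau."""
    bound = max([len(tau) for (sigma, tau) in extra_edges
                 if task(len(sigma)) < i] + [-1])
    n = bound + 1
    while task(n) != i:
        n += 1
    return n
\end{verbatim}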
Another component of our construction is that at each stage, a number of nodes may be set as active, serving as candidates to which an extra edge may be attached.
Before activation, all flow into a node $\sigma$ will be equally divided to flow into $\sigma$'s direct successor nodes $\sigma0$ and~$\sigma1$ through the corresponding normal edges. To {\em activate} a node we reduce the flow from $\sigma$ into~$\sigma0$ and~$\sigma1$, resulting in a certain amount of flow into $\sigma$ being temporarily unused. We say that we have {\em delayed} part of the flow. In a later step we may then attach an extra edge to $\sigma$ and direct the delayed, leftover flow through this new edge.
More formally, for an elementary network $q'$, we have a function $d'$, called a \emph{flow-delay function}, which satisfies
\[
d'(\sigma)=1-q'(\sigma,\sigma 0)-q'(\sigma,\sigma1)
\]
for every $\sigma\in2^{<\omega}$. This is precisely the proportion of flow into $\sigma$ that is prevented from flowing into~$\sigma0$ and~$\sigma1$. The \emph{active} nodes consist of those nodes $\sigma$ such that $0<d'(\sigma)<1$; the construction will be such that if we block all of the flow through a node $\sigma$ by setting $d'(\sigma)=1$, then it, and all of its extensions, will never be activated from that point on. Moreover, for~$j<i$, to enforce the requirement that all $j$-edges end before any $i$-edges begin, whenever we attach an extra edge to a $j$-node $\tau$, all active $i$-nodes whose length is less than $|\tau|$ become unusable as, by the conditions in the construction, we will never attach edges to such nodes. We will call such nodes {\em deactivated}. Intuitively, we can then think of the flow that was delayed at such nodes as wasted.
Next, given a node $\sigma$ to which we would like to attach an extra edge, there is a function~$\beta(\sigma,q',n)$ that selects (somewhat arbitrarily) a candidate $\tau\succ \sigma$ of length $n$ for connecting an edge between $\sigma$ and $\tau$ in an elementary network $q'$ in such a way as to satisfy the predicate~$B$, if such~a~$\tau$ exists. Specifically,
\[
\beta(\sigma,q',n)=\min\{\tau\in2^n \colon \tau \succ \sigma \;\wedge\; \task(|\sigma|)=\task(|\tau|) \;\wedge\; B(q',\sigma,\tau)\},
\]
where the minimum refers to the lexicographic ordering of strings $\tau$ of length $n$.
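The selection function $\beta$ is nothing more than a brute-force search through the length-$n$ extensions of $\sigma$ in lexicographic order. Purely as an illustration, and with the predicate $B$ passed in as a parameter, a Python sketch (ours) could look as follows; returning \texttt{None} corresponds to $\beta(\sigma,q',n)$ being undefined.
\begin{verbatim}
def beta(sigma, qprime, n, task, B):
    """The lexicographically least tau of length n extending sigma such
    that sigma and tau are assigned to the same task and B(qprime, sigma,
    tau) holds; returns None if there is no such tau."""
    if n <= len(sigma) or task(len(sigma)) != task(n):
        return None
    k = n - len(sigma)
    for bits in range(2 ** k):
        tau = sigma + format(bits, "0{}b".format(k))
        if B(qprime, sigma, tau):
            return tau
    return None
\end{verbatim}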
\bigskip
After these informal remarks, we describe in detail how to construct the network $q$, with its set of extra edges $\mathcal{X}$ and its flow-delay function $d$: As mentioned at the start of this subsection, we first build a sequence $(q_n,\mathcal{X}_n,d_n)_{n\in\omega}$, where each $q_n$ is an elementary network with
associated set of extra edges $\mathcal{X}_n$. For each $n\in \omega$ we will let $d_n$ denote the flow-delay function associated with~$q_n$. In the end we will set $q= \lim_nq_n$,
$\mathcal{X}=\bigcup_n \mathcal{X}_n$ and $d=\lim_n d_n$.
\smallskip
The definition of the sequence $(q_n, \mathcal{X}_n, d_n)_{n\in\omega}$ proceeds in stages as follows: For $n=0$,
\[
q_0(\sigma,\tau)=\begin{cases}
1/2 & \text{if $\tau=\sigma0$ or $\tau=\sigma1$,}\\
0 & \text{otherwise}.
\end{cases}
\]
Clearly $d_0(\sigma)=0$ for all $\sigma\in2^{<\omega}$ and $\mathcal{X}_0=\emptyset$.
Suppose we have defined $(q_{n-1},\mathcal{X}_{n-1},d_{n-1})$, where for all $(\sigma,\tau)\in \mathcal{X}_{n-1}$, $|\tau|<n$. We will first define $\mathcal{X}_n, d_n$, and then $q_n$. The goal of this stage of the construction is to attach an extra edge connecting a $\task(n)$-node whose length is strictly less than $n-1$ with a $\task(n)$-node of length~$n$. We consider two cases.
\bigskip
\noindent
{\bf Case 1:} $w(\task(n),q_{n-1})=n$. This means that every extra edge in $\mathcal{X}_{n-1}$ assigned to some task~$j<\task(n)$ terminates in a node of length $\leq n-1$, and that $n$ is the least length assigned to task $\task(n)$ for which this holds. This further implies that there is no active $\task(n)$-node of length less than $n$ to which we can attach an extra edge. We thus take the following steps:
\begin{itemize}
\item[(i)] Set $\mathcal{X}_n=\mathcal{X}_{n-1}$.
\item[(ii)] Set $d_n(\sigma)=
\begin{cases}
1/\mathfrak{c}(n) & \text{if $|\sigma|=n$}\\
d_{n-1}(\sigma) & \text{otherwise.}\end{cases}$
\end{itemize}
Setting $d_n(\sigma)=1/\mathfrak{c}(n)$ for each $\sigma$ of length $n$ has the effect of activating these nodes, in anticipation of attaching a $\task(n)$-edge to them later in the construction. We call this the \emph{initial activation} of the nodes, since Case~1 is the case where we begin working towards task $\task(n)$ (or where we begin \textit{anew} to work towards it, in case a previous injury has occurred for task~$\task(n)$ that requires us to start over). In this case, $q_n$ is then obtained from $d_n$ and $\mathcal{X}_n=\mathcal{X}_{n-1}$ by the same formula as in Subcase~2.1 below.
Recall that $\mathfrak{c}$ provides the initial value for a countdown mechanism that we will use during the construction; once we implement this template for a specific application, we will have to choose~$\mathfrak{c}$ carefully to ensure that a positive amount of flow stays in the network in the limit.
\bigskip
\noindent
{\bf Case 2:} $w(\task(n),q_{n-1})<n$. Our hope in this case is that we can attach some extra edges from $\task(n)$-nodes of length $\geq w(\task(n), q_{n-1})$ to $\task(n)$-nodes of length $n$. Thus we search for $\sigma\in2^{<\omega}$ such that the following four conditions hold:
\begin{itemize}
\item[$(a)$] $w(\task(n),q_{n-1})\leq|\sigma|<n$;
\item[$(b)$] $0<d_{n-1}(\sigma)<1$;
\item[$(c)$] $\beta(\sigma,q_{n-1},n)$ is defined; and
\item[$(d)$] $\sigma\prec \rho$ implies that $(\sigma,\rho)\notin \mathcal{X}_{n-1}$.
\end{itemize}
Condition $(a)$ guarantees that the start of the new edge occurs beyond the end of any currently present
$j$-edge for $j<\task(n)$; in particular, this rules out attaching edges at deactivated nodes. Condition $(b)$ guarantees that $\sigma$ is active (henceforth, we will refer to a node $\sigma$ such that $0<d_n(\sigma)<1$ as \emph{active at stage $n$}). Condition $(c)$ guarantees that $\sigma$ is assigned to task $\task(n)$ and that there is a length~$n$ node that can serve as the endpoint of a new $\task(n)$-edge we want to attach at $\sigma$ (that is, the predicate $B$ is satisfied). Finally, condition $(d)$ guarantees that no extra edge has been previously attached starting at $\sigma$.
Let $\mathcal{C}_n$ be the set of $\sigma\in2^{<\omega}$ such that conditions $(a)$--$(d)$ are satisfied. Then we have two subcases to consider.
\bigskip
\noindent
{\bf Subcase 2.1:} $\mathcal{C}_n\neq\emptyset$. For every $\sigma \in \mathcal{C}_n$ and every $\tau \succ \sigma$ with $|\tau|=n$ we let
\[
d_n(\tau)=
\left\{
\begin{array}{lll}
0 & \mbox{if } \tau=\beta(\sigma,q_{n-1},n), \\
d_{n-1}(\sigma)/(1-d_{n-1}(\sigma)) & \mbox{else}.
\end{array}
\right.
\]
For all other $\tau$ we let $d_n(\tau) = d_{n-1}(\tau)$.
By condition~$(b)$ above, setting $d_n(\tau)=0$ makes $\tau$ inactive, meaning that we will not add any further $\task(n)$-edges to any extensions of $\tau$, with one possible exception:
It may be that at a later stage $n'>n$ with $\task(n') < \task(n)$, a new $\task(n')$-edge is added, which at some even later stage $n''>n'>n$ with $\task(n'') = \task(n)$ would lead to Case~1 above occurring again for task~${\task(n'') = \task(n)}$.
This would in turn lead to all strings of length $n''$ getting initially activated for task $\task(n'') = \task(n)$ at stage $n''$ where we begin anew to work on that task. In this case we say that task $\task(n'') = \task(n)$ has been {\em injured} by task $\task(n')$.
When a node is assigned a non-zero delay value by the second line above, we call this its {\em non-initial activation}. This is because that new delay value at node $\tau$ is a consequence of an earlier assignment of a non-zero delay value to the strictly shorter node $\sigma$.
Note that when initially activating a node $\sigma$, we assign a delay of the form $1/k$, where $\mathfrak{c}(|\sigma|)=k$ for some $k\in\omega$. Moreover, the mapping $d\mapsto d/(1-d)$ used for non-initial activations maps such a number to $1/(k-1)$, which is then mapped to $1/(k-2)$, and so on. Note further that the nodes where these new delay values are set are by construction $\task(n)$-nodes. The reciprocals of these assigned delay values are positive integers, and we can interpret them as a counter counting down by $1$ along a path every time a $\task(n)$-edge branches off it; see Figure~\ref{fig_adding_edges}.
Even on the same path different tasks are initially activated separately at different nodes of appropriate length. The countdown happens separately for all tasks, as a new delay value assigned to an $i$-node depends on the delay value of an $i$-node of shorter length, and not on the delay values of $j$-nodes with $i\neq j$. It therefore makes sense to talk about the \emph{$i$-counter} for task $i$ along a given path, and we will use this expression in the informal explanations in the sequel.
As we continue to add edges for task $\task(n)$ that branch off a path, the $\task(n)$-counter may eventually reach $1$ on some initial segment of that path. Such a node is then by definition inactive. (Formally, the counter reaching $1$ means that the delay value along the path has increased until all flow is blocked at a value of~$1$.) Once this happens, by construction, we stop attaching $\task(n)$-edges on any extension of that initial segment of the path.
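As a concrete arithmetic example, suppose that $\mathfrak{c}(n)=4$. A node initially activated in Case~1 at level $n$ then receives delay $\tfrac14$, that is, counter value $4$. Each time a $\task(n)$-edge branches off below it along a path, the delay along that path is updated via $d\mapsto d/(1-d)$:
\[
\tfrac{1}{4}\;\longmapsto\;\tfrac{1}{3}\;\longmapsto\;\tfrac{1}{2}\;\longmapsto\;1.
\]
After the third such edge the counter has reached $1$, all of the flow along that path is blocked, and no further $\task(n)$-edges are attached to its extensions.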
\begin{figure}
\centering
\def16cm{16cm}
\input{z1_addingedges_layer1.pdf_tex}
\caption{An edge for task $\task(n)$ is added. The root of the edge was initially activated with counter value $\mathfrak{c}(n)$. The node at the end of the new edge has delay value~$0$, thus is inactive. All other extensions of the root receive a positive delay value by non-initial activation, and thereby become active. The counter value on these nodes, which is the reciprocal of the value of $d$ on the respective node, has been reduced by $1$ compared to the counter value on the root.
Note how other, completely independent $\task(n)$-edges can occur off to the side.}
\label{fig_adding_edges}
\end{figure}
\smallskip
Next, we set
\[
\mathcal{X}_n=\mathcal{X}_{n-1}\cup\{(\sigma,\beta(\sigma,q_{n-1},n))\colon\sigma\in \mathcal{C}_n\}
\]
and
\[
q_n(\sigma,\tau)=
\left\{
\begin{array}{lll}
\frac{1}{2}(1-d_n(\sigma)) & \mbox{if } \tau=\sigma0\;\text{or}\;\tau=\sigma1,\\
d_{n}(\sigma) & \mbox{if } (\sigma,\tau)\in \mathcal{X}_n,\\
0 & \mbox{otherwise.}
\end{array}
\right.
\]
Note that for $j>\task(n)$, $w(j,q_k)>n$
for all $k \geq n$. In particular, we are now prevented from attaching an edge to any $j$-nodes that were active at the beginning of this stage; as mentioned above we call such nodes {\em deactivated}.
\bigskip
\noindent
{\bf Subcase 2.2:} $\mathcal{C}_n=\emptyset$. Then we set $d_n=d_{n-1}$, $\mathcal{X}_n=\mathcal{X}_{n-1}$, and $q_n=q_{n-1}$. No new nodes are activated, nor do any active nodes become deactivated.
\bigskip
\noindent To finalize the outline of the construction template we lastly set $q=\lim_n q_n$, $d=\lim_n d_n$, and $\mathcal{X}=\bigcup_n \mathcal{X}_n$.
It is not difficult to check that $q$ and $d$ are computable functions and that $\mathcal{X}$ is a computable set. It then follows from Lemma \ref{lem:semimeasure} that the resulting $q$-flow $P$, as in Definition~\ref{def:q-flow}, is a left-c.e.\ semi-measure.
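For reference, the passage from a flow-delay function and a set of extra edges to the associated elementary network, which is used in Case~1 and Subcase~2.1 alike, is the following simple recipe; as before, the Python rendering is ours and only meant to make the displayed formula for $q_n$ concrete.
\begin{verbatim}
def network_from(d, X):
    """The elementary network determined by a flow-delay function d (a
    dictionary defaulting to 0) and a finite extra-edge set X: the two
    normal edges out of sigma share the undelayed flow equally, an extra
    edge out of sigma carries the delayed flow d(sigma), and all other
    pairs carry no flow."""
    def q(sigma, tau):
        if (sigma, tau) in X:
            return d.get(sigma, 0.0)
        if len(tau) == len(sigma) + 1 and tau.startswith(sigma):
            return 0.5 * (1.0 - d.get(sigma, 0.0))
        return 0.0
    return q
\end{verbatim}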
\subsection{Verification of the general template}
We now work to establish the desired properties of the constructed objects $q$, $d$, $\mathcal{X}$, $R$, $P$, and so on.
For the sake of notational simplicity, during this verification, we will again use the letters $q_n$, $d_n$, and $\mathcal{X}_n$, for $n\in\omega$, to refer to the finite approximations of $q$, $d$, and $\mathcal{X}$ that we built in the previous subsection. In particular note that, for~$q$, these finite approximations~$q_n$, $n\in \omega$, coincide with the sequence of elementary restrictions discussed on page~\pageref{elementary_approx}.
\bigskip
Before we implement this template, we show that a number of features of the construction can be established independently of the concrete implementation. First,
the following two lemmata show that the construction prevents certain relative arrangements of extra edges.
\begin{lem}\label{djsdjhkdfgjhqweshfasjddbwe}
Assume that an extra edge $(\zeta,\xi)$ is added during the construction. Then no node~$\sigma$ with $\task(|\zeta|)=\task(|\sigma|)$, $\zeta \prec \sigma$, and $|\sigma|\leq |\xi|$ can ever become active during the construction.
\end{lem}
\begin{proof}
The fact that $(\zeta,\xi)$ is added implies that $\zeta$ was activated at stage $|\zeta|$ and cannot have been deactivated until after stage $|\xi|$.
This implies in particular that between stages~$|\zeta|$ and~$|\xi|$ no injury of task $\task(|\zeta|)$ occurred, so that~
$\sigma$ cannot have been initially activated at stage~$|\sigma|$.
Assume that $\sigma$ was activated non-initially.
Then there must exist a sequence of extra $\task(|\zeta|)$-edges
\[(\nu_1, \mu_1), (\nu_2,\mu_2), (\nu_3,\mu_3), \dots, (\nu_\ell, \mu_\ell)\]
such that $\zeta=\nu_1$, $|\mu_\ell| = |\sigma|$, $\mu_\ell \neq \sigma$, and, for all $2 \leq i \leq \ell$,
\[\nu_{i-1} \prec \nu_i,\quad |\mu_{i-1}|=|\nu_i| \quad \text{and} \quad \mu_{i-1}\neq\nu_i;\]
see Figures~\ref{fig_adding_edges} and~\ref{edges_along_a_path}. But, by condition~(d) in Case~2 of the construction, the presence of the edge~$(\zeta, \mu_1)$
would have precluded~$(\zeta,\xi)$ from having been added later, a contradiction.
\end{proof}
\begin{lem}\label{fact_constr}
There do not exist strings $\zeta \prec \sigma \prec \xi$ and $\zeta \prec \sigma \prec \tau$ such that $|\xi|\leq|\tau|$ and $(\zeta,\xi) \in \mathcal{X}$ and $(\sigma,\tau) \in \mathcal{X}$.
\end{lem}
Note that without the condition ``$|\xi|\leq|\tau|$'' the statement is false, as typically there are many pairs of extra edges
for which that condition does not hold but which satisfy the other conditions in the statement.
\begin{proof}
Assume for a contradiction that extra edges $(\zeta,\xi)$ and $(\sigma,\tau)$ as in the statement exist.
First assume that both are assigned to the same task; that is, $\task(|\zeta|)=\task(|\xi|)=\task(|\sigma|)=\task(|\tau|)$.
Then by Lemma~\ref{djsdjhkdfgjhqweshfasjddbwe}, the fact that $(\zeta,\xi)$ was added implies that
$\sigma$~is never activated, and thus
$(\sigma,\tau)$~cannot have been added,
which contradicts our initial assumption.
Thus, $(\zeta,\xi)$ and $(\sigma,\tau)$ must be assigned to two different tasks.
(In particular, in this case, the strict inequality $|\xi|<|\tau|$ must hold.) We distinguish two cases.
If $\task(|\zeta|)=\task(|\xi|) < \task(|\sigma|) =\task(|\tau|)$, then even if $\sigma$~was activated at stage $|\sigma|$, the addition of $(\zeta,\xi)$ would have constituted an injury of task $\task(|\sigma|)$ that would have led to $\sigma$ becoming deactivated, which means that $(\sigma,\tau)$ could not have been added later, a contradiction.
If, on the other hand, $\task(|\sigma|)=\task(|\tau|) < \task(|\zeta|)=\task(|\xi|)$, then the fact that $(\sigma,\tau)$ was added implies that $\sigma$ must have been activated at stage $|\sigma|$. This could be for two possible reasons:
Either $w(\task(|\sigma|),q_{|\sigma|-1})=|\sigma|$ held at stage $|\sigma|$ and thus $\sigma$ was initially activated as described in Case~1. The cause of that initial activation, namely the addition of some edge for some task $j < \task(|\sigma|)$, would have also deactivated $\zeta$, since
$j < \task(|\sigma|) < \task(|\zeta|)$, and that would have precluded the addition of $(\zeta,\xi)$ later, again a contradiction.
Or, if $\sigma$ was non-initially activated due to Subcase~2.1, then that must have been due to an extra edge $(\nu,\mu)$ for task $\task(|\sigma|)$ with $|\mu|=|\sigma|$ having been added at stage $|\sigma|$. Then the addition of that edge would have injured task $\task(|\zeta|)$, which again would have deactivated $\zeta$, yet another contradiction.
\medskip
In summary, there is no scenario that allows the presence of both $(\zeta,\xi)$ and $(\sigma,\tau)$ in the digraph at the same time.
\end{proof}
Next, we establish that the work towards every individual task terminates eventually.
\begin{lem}[Stability Lemma]\label{finite_i_edges}
For every $i\in\omega$, $\mathcal{X}[i]$ is finite and $w(i,q)<\infty$.
\end{lem}
\begin{proof}
First, observe that if $\mathcal{X}[j]$ is finite for every $j<i$, then $w(i,q)<\infty$. It is therefore sufficient to prove the first part of the statement.
So suppose that $i$ is minimal such that $\mathcal{X}[i]$ is infinite. Then by the previous observation we have~$w(i,q)<\infty$. For $\sigma$ with $|\sigma|\geq w(i,q)$, define $m_\sigma$ to be the maximal $m>w(i,q)$ such that there is an edge ${(\sigma{\upharpoonright} m,\tau) \in \mathcal{X}[i]}$ where $\tau$ is incomparable with $\sigma$.
If no such $m$ exists, set~${m_\sigma = w(i,q)}$.
Then define a function $u$ via
\[
u(\sigma)=\begin{cases}
1/d(\sigma{\upharpoonright} m_\sigma) & \text{if } d(\sigma{\upharpoonright} m_\sigma)>0, |\sigma|\geq w(i,q), \\
\mathfrak{c}(w(i,q)) & \text{if } |\sigma| < w(i,q).\\
\end{cases}
\]
Note that these two cases are exhaustive; to see this assume that $|\sigma|\geq w(i,q)$. If $m_\sigma=w(i,q)$, then by construction $d(\sigma {\upharpoonright} m_\sigma) = 1/\mathfrak{c}(m_\sigma) >0$. The only other possibility is that $m_\sigma$ is
the maximal $m>w(i,q)$ such that there is an edge ${(\sigma{\upharpoonright} m,\tau) \in \mathcal{X}[i]}$ where $\tau$ is incomparable with~$\sigma$. But then $d(\sigma {\upharpoonright} m_\sigma) > 0$ as well, as otherwise the edge $(\sigma{\upharpoonright} m_\sigma,\tau)$ would not have been added according to the conditions in the construction.
We claim that $u(\sigma)$ is an upper bound on the number of possible $i$-edges branching off below length $\max(w(i,q),|\sigma|)$ from any path going through $\sigma$.
First, consider $\sigma$ meeting the conditions of the first line of the definition, and such that an edge $(\sigma{\upharpoonright} m_\sigma,\tau)$ as in the definition of $m_\sigma$ exists. Since by the choice of $m_\sigma$ the edge~$(\sigma{\upharpoonright} m_\sigma,\tau)$ is the last edge branching off above $\sigma$, and by the discussion of the $i$-counter mechanism above, we know that then at most $\frac{1}{d(\sigma{\upharpoonright} m_\sigma)}-2$ further $i$-edges can branch off below~$\sigma$ from any path extending $\sigma$, and the claim in this case follows.
Secondly, consider $\sigma$ meeting the conditions of the first line of the definition, but where an edge of the form $(\sigma{\upharpoonright} m_\sigma,\tau)$ as in the definition of $m_\sigma$ does {\em not} exist. For those $\sigma$ we have that a parent~$\rho$ of~$\sigma$ with $|\rho|=w(i,q)$ has been initially activated, but that there is no extra $i$-edge that branches off between $\rho$ and~$\sigma$. Again by the discussion of the $i$-counter mechanism, we know that then at most $\mathfrak{c}(w(i,q)) - 1$ $i$-edges can branch off below $\sigma$ from any path extending~$\sigma$. Since
\[u(\sigma)=1/d(\sigma{\upharpoonright} m_\sigma)=1/d(\sigma{\upharpoonright} w(i,q))=\mathfrak{c}(w(i,q)),\] the claim in this case follows.
Lastly, consider $\sigma$ satisfying $|\sigma|<w(i,q)$. Let $\tau \succ \sigma$ be of length~$w(i,q)$. Then $\tau$~is initially activated with $d(\tau)=1/\mathfrak{c}(|\tau|)$.
By the definition of $u$, $u(\sigma)=u(\tau)$, and we can argue as in the previous paragraph to conclude that at most $\mathfrak{c}(w(i,q)) - 1$ $i$-edges can branch off below length~$w(i,q)$ from any path extending $\sigma$.
It should now be clear that $u$ is constant on all strings $\sigma$ with $|\sigma| \leq w(i,q)$; and that for arbitrary strings $\sigma$ and $\tau$ with $\sigma \preceq \tau$ we have $u(\sigma) \geq u(\tau)$. We then define the function~${\widehat u\colon2^\omega\rightarrow\omega}$ by letting, for every $A \in 2^\omega$,
\[
\widehat u(A) = \min\{n\colon u(A {\upharpoonright} n)= u(A {\upharpoonright} \ell) \textnormal{ for all } \ell \geq n\}.
\]
We claim that the function $\widehat u$ is continuous. This is because $(a)$ $u$ is non-increasing over longer and longer initial segments of a path $A$, $(b)$ $u$ only takes integer, positive values, and $(c)$ a decrease in~$u$ cannot happen arbitrarily late along $A$. This last point $(c)$ follows from the two facts that {\em (i)} at every node at most one edge starts (by construction) and that {\em (ii)} for an $i$-edge branching off $A$ at $A{\upharpoonright} \ell$ we must have that $\ell$ is either $w(i,q)$ or the length of the endpoint of the previous $i$-edge branching off $A$; otherwise $A{\upharpoonright} \ell$ would not be active; see Figure~\ref{edges_along_a_path}.
Therefore, for a long enough initial segment $A {\upharpoonright} k$ of $A$, $u(A {\upharpoonright} k)$ has stabilized; meaning that $A {\upharpoonright} k$ already determines~$\widehat u(A)$.
\begin{figure}
\centering
\def16cm{16cm}
\input{z1_edges_along_a_path_layer1.pdf_tex}
\caption{A sequence of extra edges branching off a given path $A$. Note how the length of the start point of every edge has to coincide with the length of the endpoint of the previous edge branching off the path.}
\label{edges_along_a_path}
\end{figure}
Because $2^\omega$ is compact, $\widehat u$ is bounded by some $N \in \omega$, meaning in particular that ${u(\sigma)=u(\sigma{\upharpoonright} N)}$ for all $\sigma$ with $|\sigma|\geq N$. But then no new $i$-edge $(\sigma,\tau)$ can be attached to any such~$\sigma$, as that would imply $u(\tau) < u(\sigma)$, contradicting the choice of $N$. Thus $\mathcal{X}[i]$~cannot be infinite.
\end{proof}
\begin{defn}
For a finite sequence $\sigma\in2^{<\omega}$ we call an infinite sequence $X\in2^\omega$ an \emph{$i$\nobreakdash-continuation} of $\sigma$ if $i=\task(|\sigma|)$, $\sigma \prec X$, and $B(q_{n-1},\sigma,X {\upharpoonright} n)$ holds for almost all $n$ with~${\task(n)=i}$.
\end{defn}
\begin{defn}
A sequence $X$ is called \emph{$i$-discarded} if $d(X {\upharpoonright} n)=1$ for some $n$ where $\task(n)=i$.
\end{defn}
Note that a sequence $X\in2^\omega$ becomes $i$-discarded if there exists an initial segment $X {\upharpoonright} k$ such that the counter for task $i$ has reached the final value $1$ on $X {\upharpoonright} k$. By the conditions stated in Case 2 of the construction, below such an $X {\upharpoonright} k$ no further extra edges for task $i$ will branch off of $X$, hence the name ``discarded.''
\begin{lem}[Edge Existence Lemma]\label{lem:extension}
Assume that for $X\in2^\omega$ and for all $k\in\omega$ such that $\task(k)=i$ it holds that $X {\upharpoonright} k$ has an $i$-continuation and that $X$ is not $i$-discarded. Then $X$ contains an $i$-edge~$(\sigma,\tau)$; that is, $\sigma\prec \tau\prec X$.
\end{lem}
\begin{proof}
Assume that $X$ is not $i$-discarded. Let $m$ be maximal with $\task(m)=i$ and~${d(X {\upharpoonright} m)>0}$.
We know by the following argument that $m$ is defined: First, an $m$ as described exists, since by \nameref{finite_i_edges}~\ref{finite_i_edges} we have that $w(i,q)$ is finite, and, by construction, $d(X {\upharpoonright} w(i,q))$~is set to a value strictly between $0$ and $1$. Secondly, by construction, any positive value $d(X {\upharpoonright} \ell)$ for some~$\ell > w(i,q)$ with~$\task(\ell)=i$ must be the result of a chain of $i$-edges branching off~$X$, as illustrated in Figure~\ref{edges_along_a_path}.
Again by \nameref{finite_i_edges}~\ref{finite_i_edges}, $\mathcal{X}[i]$ is finite, and therefore any such chain can only have finite length; hence only finitely many $\ell$ with $\task(\ell)=i$ can have~${d(X {\upharpoonright} \ell)>0}$. As a result, a maximal~$m$ as described must exist.
Since, by assumption, $X{\upharpoonright} m$ has an $i$-continuation and $d(X {\upharpoonright} m) < 1$, the conditions of Subcase~2.1 of the construction are met. Therefore, eventually an $i$-edge of the form $(X {\upharpoonright} m, \tau)$ is attached at~$X {\upharpoonright} m$. By construction $d(\tau)=0$ and $d(\rho)\not= 0$ for all $\rho$ such that $X{\upharpoonright} m \prec \rho$, $\tau\not=\rho$, and $|\rho|=|\tau|$. By the choice of $m$ we must therefore have $\tau \prec X$.
\end{proof}
\begin{lem}[Continuity Lemma]
\label{lemma_contin1}
The semi-measure $P$ has no atoms.
\end{lem}
\begin{proof}
Note that by definition of the function $w$, there are no extra edges $(\sigma,\tau) \in \mathcal{X}$
such that ${|\sigma|< w(i,q)\leq |\tau|}$ for any~$i$. That is, for any $i$, all flow that flows from nodes of length less than~$w(i,q)$ to nodes extending them flows through normal edges. Let $\sigma$ be a node of length~${w(i,q)-1}$. By construction $q(\sigma,\sigma0)=q(\sigma,\sigma1)\leq \nicefrac{1}{2}$, and hence, for $b\in\{0,1\}$,
\[
P(\sigma b)=R(\sigma b)=q(\sigma,\sigma b)\cdot R(\sigma) \leq \nicefrac{1}{2} \cdot P(\sigma).
\]
Since there are infinitely many numbers of the form $w(i,q)$, $i\in\omega$, we have $\lim_{n\rightarrow\infty}P(X{\upharpoonright} n)=0$ for every $X\in2^\omega$.
\end{proof}
\subsection{The roadmap}
Everything discussed thus far in this section forms the common part of the construction. In particular, we do not need to re-prove Lemma~\ref{fact_constr}, \nameref{finite_i_edges}~\ref{finite_i_edges}, \nameref{lem:extension}~\ref{lem:extension}, and \nameref{lemma_contin1}~\ref{lemma_contin1} for each application of the template. However, when applying the template to obtain different results, some parts of the construction need to be adapted to the statement that should be proved. There will still be a common structure with the following components.
\begin{description}
\item[Predicate $\emph{B}$] The predicate $B$ determines when edges are added to the digraph, and therefore the information that will be coded into the semi-measure constructed.
\item[Cut-off Lemma] Here we show that if any positive flow occurs beyond a node $\tau$, then at least some part of that flow must have passed through normal edges.
\item[Continuation Existence Lemma] To be able to apply the \nameref{lem:extension}~\ref{lem:extension} to all of the sequences in the support of the semi-measure we construct, we need to prove that the hypotheses of the lemma are satisfied by these sequences. That is, we need to prove that for every $i\in\omega$, every sequence $X$ in the support is an $i$-continuation of each of its own initial segments~$X {\upharpoonright} n$ with $\task(n)=i$.
\item[Measure Lemma] This shows that the support of the constructed semi-measure $P$ has positive $\overline P$-measure. Note that, together with \nameref{lemma_contin1}~\ref{lemma_contin1} and using Proposition~\ref{prop_atoms_are_comp}, this implies that the support of~$P$ does not exclusively contain computable elements.
\item[Verifying the desired properties] Finally we need to verify that the semi-measure we constructed has the desired properties needed for the statement that was to be shown.
\end{description}
\section{Implementing the Template}\label{sec-implementing}
\subsection{A first example} We begin by giving V'yugin's proof of Theorem \ref{thm-prob-alg}.
\theoremstyle{thm}
\newtheorem*{thm:thm-prob-alg}{Theorem \ref{thm-prob-alg}}
\begin{thm:thm-prob-alg}[V'yugin \cite{Vyu12}]
For any $\delta\in(0,1)$, there is a probabilistic algorithm that produces with probability at least $1-\delta$ a non-computable sequence that does not compute any Martin-L\"of random sequence.
\end{thm:thm-prob-alg}
To prove this, we will show the following more general statement.
\begin{samepage}
\begin{thm}[V'yugin \cite{Vyu12}]\label{thm:template-1}
For each $\delta\in(0,1)$, there is a left-c.e.\ semi-measure $P$ such that
\begin{itemize}
\item[(i)] $P$ has no atoms;
\item[(ii)] $\overline{P}(2^\omega)=\overline{P}(\supp{P})>1-\delta$; and
\item[(iii)] for each $X\in \supp{P}$ and each Turing functional $\Phi$, if $\Phi(X)$ is defined, then ${\Phi(X)\not\in\mathrm{MLR}}$.
\end{itemize}
\end{thm}
\end{samepage}
We obtain the desired probabilistic algorithm from Theorem \ref{thm:template-1} by applying Theorem \ref{thm-InduceSemiMeasures}~(ii): Since $P$ is a left-c.e.\ semi-measure, there is some Turing functional $\Psi$ such that $P=\lambda_\Psi$. The functional $\Psi$ equipped with a random oracle provides the probabilistic algorithm satisfying the conditions of Theorem \ref{thm-prob-alg}.
One additional consequence of Theorem \ref{thm:template-1} is that $\mathbf{r}\vee\mathbf{c}$ is not the top degree of $\mathcal{D}_\mathrm{LV}$, which we already showed via an alternative method in Section \ref{sec-lv-degrees}. Indeed, since $\supp{P}$ contains no atoms and every atom of a left-c.e.\ semi-measure is computable, it follows that \[\overline{P}(\supp{P}\setminus \{X\colon X\text{ computable}\})>0.\] By Corollary~\ref{dfgsdafkdfjhsdsdfg}, this implies that $\supp{P}\setminus \{X\colon X\text{ computable}\}$ is non-negligible. But, by construction, the Levin-V'yugin degree generated by $\supp{P}\setminus \{X\colon X\text{ computable}\}$ is disjoint from $\mathbf{r}\vee\mathbf{c}$.
V'yugin originally proved this result in~\cite{Vyu76} without use of the machinery laid out in the previous section, but in a later article~\cite{Vyu12} he gave the proof discussed here.
To prove Theorem~\ref{thm:template-1}, we first need to specify the predicate $B$ and the function $\mathfrak{c}$, as in the template outlined above. For an elementary network $q'$ and nodes $\sigma$, $\tau$ with $\task(|\sigma|)=\task(|\tau|)$, $B(q',\sigma,\tau)$ is defined to hold if and only if
\begin{itemize}
\item[(a)] $\sigma\preceq \tau$,
\item[(b)] $d'(\tau{\upharpoonright} k)<1$ for all $k$ such that $1\leq k\leq |\tau|$, where $d'$ is the flow-delay function of~$q'$, and
\item[(c)] $\bigl|\Phi_{j,|\tau|}^\tau\bigr|>\langle \#(\sigma),s\rangle$, where $\task(|\sigma|)=\langle j,s\rangle$.
Here $\#(\sigma)$ denotes the position of $\sigma$ in the canonical lexicographic ordering of $2^{<\omega}$ and $\langle\cdot,\cdot\rangle$ denotes a pairing function that satisfies $\langle m,n\rangle\geq m+n$ for all $m,n \in \omega$.
\end{itemize}
The idea of this choice of~$B$ is that for each $i\in\omega$ such that $i=\langle j,s\rangle$ for $j,s\in\omega$, we attach an $i$-edge between $i$-nodes $\sigma$ and $\tau$ only if
$\bigl|\Phi_{j,|\tau|}^\tau\bigr|>\langle\#(\sigma),s\rangle$; that is, $\Phi_{j,|\tau|}^\tau$ is sufficiently long. Moreover, we will ensure that for each $X\in2^\omega$, either there is some $n$ such that the flow out of $X{\upharpoonright} n$ is completely blocked, or, for each Turing functional $\Phi_j$ such that $\Phi_j(X)$ is defined, $\Phi_j(X)\notin\mathrm{MLR}$.
This latter condition will be accomplished by, for $\langle j,s\rangle$-edges $(\sigma,\tau)$ with $s\in\omega$, enumerating $\tau$ into a Martin-L\"of test.
As for the choice of $\mathfrak{c}$, given $\delta\in(0,1)$, we let ${\mathfrak{c}(n)=(n+n_0)^2}$, where $n_0$~is such that
\[
\sum_{n\in\omega}(n+n_0)^{-2}<\delta.
\]
This will be used to prove \nameref{lem:semimeasure-support}~\ref{lem:semimeasure-support} below.
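Note that such an $n_0$ always exists; for instance, comparing the sum with the integral $\int_{n_0-1}^{\infty}x^{-2}\,dx$ shows that any $n_0\geq\lceil 1/\delta\rceil+1$ suffices, since then
\[
\sum_{n\in\omega}(n+n_0)^{-2}=\sum_{k\geq n_0}k^{-2}<\frac{1}{n_0-1}\leq\delta.
\]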
Now let $P$ be the semi-measure produced by the template outlined in Section~\ref{subsec-template} when used with this specific choice of~$B$ and~$\mathfrak{c}$.
We establish that $P$ has the desired properties.
\begin{lem}[Measure Lemma]\label{lem:semimeasure-support}
$\overline{P}(\supp{P})>1-\delta$.
\end{lem}
For $X\in \supp{P}$ we already have that $P(X{\upharpoonright} n)>0$ for all $n$; that is, at any finite level $n$, not all measure has dissipated. We will show that
for all $n$, the amount of flow that flows into but not out of strings of length $n$ is bounded from above by
$(n+n_0)^{-2}$.
This implies that the total dissipation is $\sum_n(n+n_0)^{-2}<\delta$, thus establishing the result.
In the construction, when an $i$-counter runs out along a path, the delay value is set to $1$ at some node $\sigma$ that is an initial segment of that path to remove the path from the support of the constructed semi-measure. As this means that {\em all} flow arriving in $\sigma$ is blocked at $\sigma$, the amount of measure lost this way could be very large. This is why we start the countdown with larger and larger numbers in the construction, as this ensures that there are more and more chances to add edges, which preserves more and more measure.
On the other hand we {\em do} need that after finitely many attempts to add an edge we give up and block all flow along that path completely, as otherwise a single task might cause infinitely many of the failures described on page~\pageref{sdfsdsdfsdfsdgdhfgh}, which might prevent the construction from ever successfully handling the remaining tasks. Furthermore, if a currently investigated functional~$\Phi_j$ stops producing output somewhere, then we only lose the measure currently delayed there; all the remaining measure keeps flowing through normal edges. The measure lost this way is another quantity that we need to control.
The trade-offs needed to reconcile these necessities make the construction quite complex and are the reason why establishing a lower bound for the remaining measure requires the following involved argument.
\begin{proof}[Proof of Lemma \ref{lem:semimeasure-support}]
By the definition of $R$ and $d$,
\begin{equation}\label{eq:sem-supp1}
\begin{split}
\sum_{|\sigma|=n+1}R(\sigma)&=\sum_{|\tau|=n}q(\tau,\tau0)R(\tau)+\sum_{|\tau|=n}q(\tau,\tau1)R(\tau)+\sum_{(\rho,\xi)\in \mathcal{X}, |\xi|=n+1}q(\rho,\xi)R(\rho)\\
&=\sum_{|\tau|=n}(1-d(\tau))R(\tau)+\sum_{(\rho,\xi)\in \mathcal{X}, |\xi|=n+1}q(\rho,\xi)R(\rho).
\end{split}
\end{equation}
\noindent
We set
\begin{equation}\label{eq:sem-supp2}
S_n=\sum_{|\sigma|=n}R(\sigma)-\sum_{(\rho,\xi)\in \mathcal{X}, |\xi|=n}q(\rho,\xi)R(\rho),
\end{equation}
so that it follows from (\ref{eq:sem-supp1}) and (\ref{eq:sem-supp2}) that
\begin{equation}\label{eq:sem-supp3}
S_{n+1}=\sum_{|\tau|=n}(1-d(\tau))R(\tau).
\end{equation}
That is, $S_{n+1}$ is the amount of flow into nodes of length $n+1$ that comes directly from nodes of length $n$ (and not through extra edges whose end nodes have length $n+1$).
We claim that $S_{n+1}\geq S_n-(n+n_0)^{-2}$ for all $n$. For fixed $n$, we consider the possible values of $w(\task(n),q_{n-1})$. First, we consider Subcase~2.2 of the construction, where $w(\task(n),q_{n-1})<n$ but we added no extra edge $(\sigma,\tau)$ where $|\tau|=n$. In this case, for each $\rho$ such that $|\rho|=n$, $d(\rho)=d_n(\rho)=d_{n-1}(\rho)=0$. It then follows from (\ref{eq:sem-supp2}) and (\ref{eq:sem-supp3}) that $S_{n+1}= S_n$.
Next, suppose that we are in Subcase~2.1 of the construction, where $w(\task(n),q_{n-1})<n$ and we added at least one extra edge $(\sigma,\tau)$ with $|\tau|=n$. For $\sigma,\tau\in2^{<\omega}$, let
\[
\mathrm{Fan}(\sigma,\tau)=\{\rho\colon |\rho|=|\tau|\;\wedge\; \sigma\prec \rho \;\wedge\;\rho\neq \tau\}.
\]
In Figures~\ref{fig_adding_edges} and~\ref{edges_along_a_path} the fans of extra edges were represented by dotted cones.
\begin{sublem}\label{lem-flowR}
For every $(\sigma,\tau)\in \mathcal{X}$,
\begin{equation}\label{sdfsdgsdsdjfgsdfhgsdfgwwejafsdhfas}
\sum_{\rho\in \mathrm{Fan}(\sigma,\tau)} R(\rho)\leq(1-d(\sigma))R(\sigma).
\end{equation}
\end{sublem}
\begin{proof} The term on the left-hand side of the inequality is the total amount of flow that flows into all nodes in $\mathrm{Fan}(\sigma,\tau)$, while the term on the right-hand side is the total flow into $\sigma$ (the node at the base of the fan) minus the flow that is diverted into the extra edge $(\sigma,\tau)$. The only case where this inequality can fail to hold is if there is some flow through an extra edge
$(\zeta,\xi)\in \mathcal{X}$ such that $\zeta\prec\sigma\prec\xi\preceq\rho$ for some~$\rho\in\mathrm{Fan}(\sigma,\tau)$. However, since $(\sigma,\tau)\in\mathcal{X}$, the existence of such an extra edge $(\zeta,\xi)$ contradicts Lemma~\ref{fact_constr}.\footnote{For this, apply the lemma in such a way that the nodes in the current proof are identified with the nodes of equal name appearing in the statement of the lemma.} Thus the inequality must hold.
\renewcommand{\qedsymbol}{$\Diamond$}\end{proof}
The sum
$
\sum_{|\rho|=n}d(\rho)R(\rho)
$
can be understood as the total amount of measure that is delayed at level $n$. Indeed, since $R(\rho)$ is the absolute amount of flow into a node $\rho$ and $d(\rho)$ is the relative fraction of flow delayed at $\rho$, we have that $d(\rho)R(\rho)$ is the absolute quantity of flow delayed at~$\rho$.
Since we are in Subcase 2.1 (and therefore a non-trivial delay value at a node $\rho$ cannot be caused by an initial activation of $\rho$ but must be caused by an extra edge ending in a node $\tau$ with~${\rho\in \mathrm{Fan}(\sigma,\tau)}$), we have:
\begin{align*}
\sum_{|\rho|=n}d(\rho)R(\rho)&=\sum_{(\sigma,\tau)\in \mathcal{X},|\tau|=n}\;\sum_{\rho\in \mathrm{Fan}(\sigma,\tau)} d(\rho)R(\rho)\\
\intertext{By definition of $d$ on $\rho\in\mathrm{Fan}(\sigma,\tau)$: }
&=\sum_{(\sigma,\tau)\in \mathcal{X},|\tau|=n}\frac{d(\sigma)}{1-d(\sigma)}\sum_{\rho\in \mathrm{Fan}(\sigma,\tau)} R(\rho)\\
\intertext{By Sublemma \ref{lem-flowR}:}
&\leq \sum_{(\sigma,\tau)\in \mathcal{X},|\tau|=n}d(\sigma)R(\sigma)\\
&=\sum_{(\sigma,\tau)\in \mathcal{X},|\tau|=n}q(\sigma,\tau)R(\sigma).
\end{align*}
\smallskip
Then
\begin{equation}\label{eq-blah}
\begin{split}
S_{n+1}=\sum_{|\rho|=n}(1-d(\rho))R(\rho)&=\sum_{|\sigma|=n}R(\sigma)-\sum_{|\rho|=n}d(\rho)R(\rho)\\
&\geq\sum_{|\sigma|=n}R(\sigma)-\sum_{(\sigma,\tau)\in \mathcal{X},|\tau|=n}q(\sigma,\tau)R(\sigma)=S_n.
\end{split}
\end{equation}
Lastly, in Case~1 of the construction, we have $w(\task(n),q_{n-1})=n$, and hence
\[
\sum_{|\rho|=n}d(\rho)R(\rho)\leq1/\mathfrak{c}(n)=(n+n_0)^{-2}.
\]
Consequently,
\begin{equation}
\begin{split}
S_{n+1}=\sum_{|\rho|=n}(1-d(\rho))R(\rho)&=\sum_{|\sigma|=n}R(\sigma)-\sum_{|\rho|=n}d(\rho)R(\rho)\\
&\geq\sum_{|\sigma|=n}R(\sigma)-(n+n_0)^{-2}\\
&\geq\sum_{|\sigma|=n}R(\sigma)-\sum_{(\sigma,\tau)\in \mathcal{X},|\tau|=n}q(\sigma,\tau)R(\sigma)-(n+n_0)^{-2}\\
&=S_n-(n+n_0)^{-2}.
\end{split}
\end{equation}
Now since $S_{n+1}\geq S_n-(n+n_0)^{-2}$ for every $n$ and $S_0=1$, we have
\[
S_n\geq1-\sum_{i=1}^\infty(i+n_0)^{-2}>1-\delta.
\]
Lastly, by the definition of the support of a semi-measure, we have
\[
\overline{P}(\supp{P})=\inf_n\sum_{|\rho|=n} P(\rho)\geq \inf_n\sum_{|\rho|=n} R(\rho)\geq \inf_n S_n>1-\delta.\qedhere
\]
\end{proof}
\begin{lem}[Cut-off Lemma]\label{lem:blocked-flow}
For $\tau\in2^{<\omega}$, $P(\tau)=0$ if and only if there is some $\sigma\prec \rho\preceq \tau$ such that $\rho \in\{\sigma0, \sigma1\}$ and $q(\sigma,\rho)=0$.
\end{lem}
\begin{proof}
Assume that, for all $0\leq i<|\tau|$, $q\bigl(\tau{\upharpoonright} i, \tau{\upharpoonright} (i+1)\bigr)>0$ holds. Then by definition of $R$ we have
\[
R(\tau) \geq \prod_{i=0}^{|\tau|-1} q\bigl(\tau{\upharpoonright} i, \tau{\upharpoonright} (i+1)\bigr) > 0,
\]
which together with $P(\tau)\geq R(\tau)$ implies $P(\tau)>0$.
For the other direction, suppose there is some $n<|\tau|$ such that $q\bigl(\tau{\upharpoonright} n,\tau{\upharpoonright}(n+1)\bigr)=0$, but~${P(\tau)\neq 0}$.
Then there must be some extra edge $(\sigma,\rho)$ such that $\sigma\preceq \tau{\upharpoonright} n$ and $\tau{\upharpoonright} (n+1)\preceq \rho$.
We have that $q\bigl(\tau{\upharpoonright} n,\tau{\upharpoonright}(n+1)\bigr)=0$ implies $d(\tau{\upharpoonright} n)=1$.
But, by condition~(b) in the definition of $B$ above, $(\sigma,\rho)$~can only be added if $d(\rho{\upharpoonright} k)<1$ for all $k$ such that $1\leq k\leq |\rho|$, contradicting the fact that $d(\rho{\upharpoonright} n)=d(\tau{\upharpoonright} n)=1$.
\end{proof}
\begin{lem}[Continuation Existence Lemma]\label{lem:edge-guarantees}
For every Turing functional $\Phi_j$, every ${X\in \supp{P}}$ such that $\Phi_j(X)$ is defined, and every $i=\langle j,s\rangle$ for $s\in\omega$, $X$ is an $i$-continuation of~$X{\upharpoonright} m$ for every~$m\in\omega$ such that $\task(m)=i$.
\end{lem}
\begin{proof}
Fix $j,m,s\in\omega$, and let $i=\langle j,s\rangle$. Recall that $X$ is an $i$-continuation of $\sigma\in2^{<\omega}$ with~${\task(|\sigma|)=i}$ if $\sigma\prec X$ and $B(q_{n-1},\sigma,X{\upharpoonright} n)$ holds for almost all $n$ such that $\task(n)=i$.
Thus, to show that $X$ is an $i$-continuation of~$X{\upharpoonright} m$, it suffices to show that, for almost every $n$, the following two conditions from the definition of the predicate $B$ hold:
\begin{itemize}
\item[(b)] $d(X{\upharpoonright} k)<1$ for every $k$ such that $1\leq k\leq n$, and
\item[(c)] $\bigl|\Phi_{j,n}^{X{\upharpoonright} n}\bigr|>\langle \#(X{\upharpoonright} m),s\rangle$.
\end{itemize}
Since $X\in \supp{P}$, $P(X{\upharpoonright} n)>0$ for every $n$, and it follows from the \nameref{lem:blocked-flow}~\ref{lem:blocked-flow} that $d(X{\upharpoonright} n)<1$ for every $n$, and so (b) holds. Moreover, as $\Phi_j(X)$ is defined, for each $N\in\omega$, $|\Phi_{j,n}^{X{\upharpoonright} n}|\geq N$ for all sufficiently large $n$; thus, (c) holds.
\end{proof}
\begin{lem}\label{mlr_final_conclusion}
For any $X\in \supp{P}$ and any Turing functional $\Phi_j$ such that $\Phi_j(X)$ is defined, \[{\Phi_j(X)\notin\mathrm{MLR}}.\]
\end{lem}
\begin{proof}
For $s\in\omega$, let
\[
\mathcal{U}_s=
\bigcup_{n\colon\task(n)=\langle j,s\rangle}\;\bigcup_{\sigma\in \mathcal{C}_n}
\;\;\;\llbracket\Phi_{j,n}^{\beta(\sigma,q_{n-1},n)}\rrbracket
\]
where $\mathcal{C}_n$ is the set of the same name that was defined during the construction.
Fix $s \in \omega$. Since $X\in \supp{P}$ and $\Phi_j(X)$ is defined, by \nameref{lem:edge-guarantees}~\ref{lem:edge-guarantees}, $X$ is an $i$-continuation of $X{\upharpoonright} m$ for $i=\langle j,s\rangle$ and every $m\in\omega$ such that $\task(m)=i$.
Since~${X\in \supp{P}}$, $X$~cannot be $i$-discarded.
Then, by \nameref{lem:extension}~\ref{lem:extension}, there are $n,m\in\omega$ with $m<n$ such that there is an extra $i$-edge $(X{\upharpoonright} m,\beta(X{\upharpoonright} m,q_{n-1},n))$ such that $\beta(X{\upharpoonright} m,q_{n-1},n)=X{\upharpoonright} n$.
It follows that $\llbracket\Phi_j^{X{\upharpoonright} n}\rrbracket$ is enumerated into $\mathcal{U}_s$.
Since $\bigl|\Phi_{j,n}^{\beta(\sigma,q_{n-1},n)}\bigr|>\langle \#(\sigma),s\rangle$ for each $n \in \omega$ and $\sigma\in \mathcal{C}_n$,
\[
\lambda(\mathcal{U}_s)\leq\sum_{n\colon\task(n)=\langle j,s\rangle}\;\sum_{\sigma\in \mathcal{C}_n}2^{-\langle \#(\sigma),s\rangle}\leq 2^{-s}.
\]
Hence, $(\mathcal{U}_s)_{s \in \omega}$ is a Martin-L\"of test covering $\Phi_j(X)$, and thus ${\Phi_j(X)\notin\mathrm{MLR}}$.
\end{proof}
This completes the proof of Theorem~\ref{thm:template-1}, as \nameref{lemma_contin1}~\ref{lemma_contin1} establishes the Theorem's condition~(i), \nameref{lem:semimeasure-support}~\ref{lem:semimeasure-support} establishes condition~(ii), and Lemma~\ref{mlr_final_conclusion} establishes condition~(iii).
\medskip
In light of the second paragraph of the proof of Lemma \ref{mlr_final_conclusion} we can now formulate an intuitive understanding of \nameref{lem:extension}~\ref{lem:extension}: It states that every path (that meets the conditions in the statement of the lemma) will eventually either be removed from the support of the semi-measure during its construction, or, if not, will be treated using the predicate $B$ to make sure all paths that remain in the support have the desired properties. In either case, the construction succeeds.
\subsection{A new application of the technique}
We now turn to the proof of Theorem \ref{thm-prob-alg2}, an extension of V'yugin's Theorem \ref{thm-prob-alg}.
\theoremstyle{thm}
\newtheorem*{thm:thm-prob-alg2}{Theorem \ref{thm-prob-alg2}}
\begin{thm:thm-prob-alg2}
For any $\delta\in(0,1)$, there is a probabilistic algorithm that produces with probability at least~$1-\delta$ a non-computable sequence that is not of DNC degree.
\end{thm:thm-prob-alg2}
To prove Theorem \ref{thm-prob-alg2}, we prove a strengthening of Theorem \ref{thm:template-1} in terms of a family of weak notions of randomness; just as Theorem \ref{thm-prob-alg} follows from Theorem \ref{thm:template-1}, so too will Theorem \ref{thm-prob-alg2} follow from this strengthening. The following notion was explicitly defined by Higuchi~et~al.~\cite{HigHudSim14} and was further studied by Simpson and Stephan~\cite{SimSte15}.
\begin{defn}
Let $f\!\colon2^{<\omega}\rightarrow\omega$ be a total computable function.
\begin{itemize}
\item[(i)] An \emph{$f$\!-Martin-L\"of test} is a sequence of uniformly c.e.\ sets of strings $(U_i)_{i\in\omega}$ such that
\[
\sum_{\sigma\in U_i}2^{-f(\sigma)}\leq 2^{-i}
\]
for every $i\in\omega$.
\item[(ii)] A sequence $X\in2^\omega$ is \emph{$f$\!-random} if $X\notin\bigcap_{i\in\omega}\llbracket U_i\rrbracket$ for every $f$\!-Martin-L\"of test $(U_i)_{i\in\omega}$.
\end{itemize}
\end{defn}
We will focus our attention on notions of $f$\!-randomness for sequences $X$ and functions~$f$ where $f$~is \emph{unbounded along} $X$; that is, $\lim_{n\rightarrow\infty}f(X{\upharpoonright} n)=\infty$. We can now state our generalization of Theorem \ref{thm:template-1}.
\begin{samepage}
\begin{thm}\label{thm:template-2}
For each $\delta\in(0,1)$, there is a left-c.e.\ semi-measure $P$ such that
\begin{itemize}
\item[(i)] $P$ has no atoms;
\item[(ii)] $\overline{P}(2^\omega)=\overline{P}(\supp{P})>1-\delta$; and
\item[(iii)] for each $X\in \supp{P}$ and each Turing functional $\Phi$, if $\Phi(X)$ is defined, then $\Phi(X)$~is not $f$\!-random for any computable function $f$ that is unbounded along $\Phi(X)$.
\end{itemize}
\end{thm}
\end{samepage}
We call a function $f\!\colon2^{<\omega}\rightarrow\omega$ {\em monotone} if for any $\sigma,\tau\in2^{<\omega}$ with $\sigma\preceq\tau$ we have that~${f(\sigma)\leq f(\tau)}$.
For any~$f\!\colon2^{<\omega}\rightarrow\omega$, we define $f^*\!\colon2^{<\omega}\rightarrow\omega$ by letting, for each $\sigma \in 2^{<\omega}$,
\[f^*(\sigma)=\max\{f(\tau)\colon \tau\preceq \sigma\}.\]
Clearly, $f^*$ is monotone and we have $f(\sigma)\leq f^*(\sigma)$ for all $\sigma\in2^{<\omega}$. If $f$ is furthermore computable and unbounded along some $X\in2^\omega$, then the same holds for~$f^*$.
The proof of Theorem~\ref{thm:template-2} below will only ensure that~(iii)~holds for monotone~$f$. The following argument establishes that this is sufficient.
\begin{lem}\label{lem-monotone}
Let $f\!\colon2^{<\omega}\rightarrow\omega$ be a total computable function. Then $X\in2^\omega$ is $f$\!-random if and only if $X$ is $f^*$-random.
\end{lem}
\begin{proof}
($\Leftarrow$:) Suppose that $X\in2^\omega$ is not $f$\!-random. Then there is some $f$\!-Martin-L\"of test $(U_i)_{i\in\omega}$ such that $X\in\bigcap_{i\in\omega}\llbracket U_i\rrbracket$. We claim that $(U_i)_{i\in\omega}$ is an $f^*$-Martin-L\"of test. Indeed, since $f(\sigma)\leq f^*(\sigma)$ for all $\sigma\in2^{<\omega}$,
\[
\sum_{\sigma\in U_i}2^{-f^*(\sigma)}\leq \sum_{\sigma\in U_i}2^{-f(\sigma)}\leq 2^{-i}.
\]
It thus follows that $X$ is not $f^*$-random.
($\Rightarrow$:) Now suppose that $X$ is not $f^*$-random. Then there is some $f^*$-Martin-L\"of test $(U_i)_{i\in\omega}$ such that $X\in\bigcap_{i\in\omega}\llbracket U_i\rrbracket$. We modify $(U_i)_{i\in\omega}$ to produce an $f$\!-Martin-L\"of test covering $X$ as follows. First note that for every $\sigma\in2^{<\omega}$, if $f(\sigma)\neq f^*(\sigma)$, then there is some $\tau\prec\sigma$ such that $f(\tau)=f^*(\tau)=f^*(\sigma)$. In this case, let us set $\widehat\sigma=\tau$. In the case that $f(\sigma)=f^*(\sigma)$, set $\widehat\sigma=\sigma$; in either case, we have $\widehat\sigma\preceq\sigma$. Then for each $i\in\omega$ and $\sigma \in 2^{<\omega}$, we define $\widehat {U}_i$ so that $\widehat\sigma\in \widehat {U}_i$ if and only if $\sigma\in U_i$. It follows that $(\widehat{U}_i)_{i\in\omega}$ is an $f$\!-Martin-L\"of test, since
\[
\sum_{\widehat\sigma\in \widehat{U}_i}2^{-f(\widehat\sigma)}= \sum_{\sigma\in U_i}2^{-f^*(\sigma)}\leq 2^{-i}.
\]
Next, since for every $\sigma\in2^{<\omega}$ we have $\widehat\sigma\preceq\sigma$, it follows that $\llbracket U_i \rrbracket \subseteq \llbracket\widehat{U}_i \rrbracket$ for every $i\in\omega$, and hence $X\in\bigcap_{i\in\omega}\llbracket U_i\rrbracket\subseteq\bigcap_{i\in\omega}\llbracket \widehat{U}_i\rrbracket$. We thus conclude that $X$ is not $f$\!-random.
\end{proof}
The general strategy of the proof of Theorem \ref{thm:template-2} is much like that of the proof of Theorem~\ref{thm:template-1}, but with several modifications. First, since we want that elements of $\supp{P}$ cannot compute any $f$\!-random sequences for any monotone unbounded computable $f\!\colon2^{<\omega}\rightarrow\omega$, our construction will have to involve all total computable functions. Of course, there is no effective enumeration of such functions, so we have to work with an enumeration of all partial computable functions $(\phi_e)_{e\in\omega}$ (where each $\phi_e$ is viewed as a map from $2^{<\omega}$ to $\omega$). Moreover, we can assume that all functions~$\phi_e$ are monotone simply by replacing each~$\phi_e$ with the corresponding monotone~$\phi_e^*$.
Second, we have to modify the definition of the predicate $B$ from the proof of Theorem~\ref{thm:template-1} as follows:
For an elementary network $q'$ and nodes $\sigma$,$\tau$ with $\task(|\sigma|)=\task(|\tau|)$, $B(q',\sigma,\tau)$ is defined to hold if and only if
\begin{itemize}
\item[(a)] $\sigma\preceq \tau$,
\item[(b)] $d'(\tau{\upharpoonright} k)<1$ for all $k$ such that $1\leq k\leq |\tau|$, where $d'$ is the flow-delay function of~$q'$, and
\item[(c$^*$)] there is some $\rho\preceq\Phi_{j,|\tau|}^\tau$ such that $\phi_{e,|\tau|}(\rho){\downarrow}
>\langle \#(\sigma),s\rangle$, where $\task(|\sigma|)=\langle j,s,e\rangle$.\footnote{We cannot simply let $\rho=\Phi_{j,|\tau|}^\tau$ as there is no guarantee that running $\phi_e$ with input $\Phi_{j,|\tau|}^\tau$ terminates within $|\tau|$~steps.}
\end{itemize}
Observe that for non-total $\phi_e$ condition (c$^*$) may never become true and as a result we may never attach a $\task(|\sigma|)$-edge to~$\sigma$. This is not a problem, as condition (iii) in Theorem~\ref{thm:template-2} only makes a promise about total functions, so no action is required in this case. Tasks of the form~$\langle j,\cdot,e\rangle$ can therefore be safely ignored when verifying that the construction yields the desired semi-measure.
As in the previous subsection, that \nameref{lemma_contin1}~\ref{lemma_contin1} holds is an inherent feature of the construction technique, independently of the specific choice of the predicate $B$ and the countdown function~$\mathfrak{c}$.
\nameref{lem:semimeasure-support}~\ref{lem:semimeasure-support} also still holds since its truth does not depend on the specific choice of~$B$ while $\mathfrak{c}$~is unchanged. As for \nameref{lem:blocked-flow}~\ref{lem:blocked-flow}, an inspection of its proof shows that it only relies on condition~(b) of the predicate~$B$ which we haven't changed from the last subsection; so this lemma still holds as well. The Continuation Existence Lemma, however, needs to be modified.
\begin{lem}[Modified Continuation Existence Lemma]\label{lem:mod-edge-guarantees}
Suppose that we have a Turing functional~$\Phi_j$, some $X\in \supp{P}\cap\mathrm{dom}(\Phi_j)$, and a monotone total computable function $\phi_e$ such that $\phi_e(\Phi_j(X){\upharpoonright} n)$ is unbounded in $n$. Then for every $i$ of the form $\langle j,s,e\rangle$ for some~${s\in\omega}$, $X$ is an $i$-continuation of~$X{\upharpoonright} m$ for every~$m\in\omega$ such that $\task(m)=i$.
\end{lem}
\begin{proof}
That condition~(b) in the predicate~$B$ holds is shown as in the proof of Lemma~\ref{lem:edge-guarantees}.
For condition~(c), since $X\in\mathrm{dom}(\Phi_j)$ and $\phi_e(\Phi_j(X){\upharpoonright} n)$ is unbounded in $n$, there are~$n_1$ and~$n_2$ such that $\phi_{e,n_2}(\Phi_j^{X{\upharpoonright} n_1}) {\downarrow}> \langle \#(X{\upharpoonright} m),s\rangle$. Then for any $n \geq \max\{n_1,n_2\}$, $\Phi_j^{X{\upharpoonright} n}$ has the initial segment~$\rho=\Phi_j^{X{\upharpoonright} n_1}$ such that
$\phi_{e,n}(\rho) = \phi_{e,n_2}(\rho) > \langle \#(X{\upharpoonright} m),s\rangle$.
Therefore, condition~(c) holds for any large enough~$n$ with $\task(n)=i$.
\end{proof}
Lastly, we prove the following.
\begin{lem}\label{t-random_final_conclusion}
Let $f\!\colon2^{<\omega}\rightarrow\omega$ be a monotone total computable function. For any~${X\in \supp{P}}$ and any Turing functional $\Phi$ such that $X\in\mathrm{dom}(\Phi)$, if $f(\Phi(X){\upharpoonright} n)$ is unbounded in~$n$, then $\Phi(X)$ is not $f$\!-random.
\end{lem}
\begin{proof}
Let $e$ be the index of $f$ as a partial computable function and let $j$ be the index of~$\Phi$.
Then we define an $f$\!-Martin-L\"of test $(U_s)_{s\in\omega}$, where $U_s$ consists of all strings of the form~$\Phi_{j,n}^{\beta(\sigma,\,q_{n-1},\,n)}$ where~$n\in\omega$, $\sigma\in \mathcal{C}_n$, and $\task(n)=\langle j,s,e\rangle$.
For each $X\in\supp{P}\cap\mathrm{dom}(\Phi_j)$ such that $f(\Phi_j(X){\upharpoonright} n)$ is unbounded in $n$, by the Modified Continuation Existence Lemma \ref{lem:mod-edge-guarantees}, $X$ is an $i$-continuation of $X{\upharpoonright} m$ for every~$i$ such that~${i=\langle j,s,e\rangle}$ for some~$s\in\omega$ and every $m\in\omega$ such that $\task(m)=i$.
Furthermore, $X$ is not $i$\nobreakdash-discarded, and hence by \nameref{lem:extension}~\ref{lem:extension} there is an $i$-edge $\bigl(X{\upharpoonright} m,\beta(X{\upharpoonright} m,q_{n-1},n)\bigr)$ such that ${\beta(X{\upharpoonright} m,q_{n-1},n)=X{\upharpoonright} n}$ for some~$n,m\in\omega$ with~$m<n$. Since $\task(n)=\langle j,s,e\rangle$, it follows that $\Phi_{j,n}^{X{\upharpoonright} n}$~is enumerated into~$U_s$.
Choose any $\eta \in U_s$. Then, by definition of $U_s$, there is some $\tau$ such that $\Phi_{j,n}^\tau = \eta$ and some~$\sigma \in \mathcal{C}_{|\tau|}$ such that $(\sigma, \tau)\in \mathcal{X}$. By condition (c$^*$) of the predicate~$B$, the fact that this extra edge was added to the digraph implies that there is a witnessing initial segment $\rho \preceq \Phi_{j,n}^\tau$ such that
\[\phi_e(\eta)=\phi_e( \Phi_{j,n}^{\tau})\geq
\phi_{e}(\rho)
=\phi_{e,|\tau|}(\rho)
>\langle \#(\sigma),s\rangle;\]
here the first inequality follows from the monotonicity of $\phi_e$.
As this line of reasoning applies to every $\eta \in U_s$, we obtain
\[
\sum_{\eta\in U_s}2^{-\phi_e(\eta)}\leq\sum_{n\colon\task(n)=\langle j,s,e\rangle}\;\sum_{\sigma\in \mathcal{C}_n}2^{-\langle \#(\sigma),s\rangle}\leq 2^{-s}.
\]
Hence, $(U_s)_{s\in\omega}$ is an $f$\!-Martin-L\"of test covering $\Phi_j(X)$, and so $\Phi_j(X)$ is not $f$\!-random.
\end{proof}
\noindent This completes the proof of Theorem~\ref{thm:template-2}.
We can recast this result in terms of autocomplexity as well as in terms of being of DNC degree. Recall that the Kolmogorov complexity of a string~$\sigma\in2^{<\omega}$ is defined by
$
K(\sigma)=\min\{|\tau|\colon U(\tau){\downarrow}=\sigma\},
$
where $U$ is a universal, prefix-free Turing machine. Moreover, a function $f\!\colon\omega\rightarrow\omega$ is called an \emph{order} if $f$ is unbounded and non-decreasing.
\begin{defn}
$X\in2^\omega$ is \emph{autocomplex} if there is an $X$-computable order $f$ such that ${K(X{\upharpoonright} n)\geq f(n)}$ for every $n\in\omega$.
\end{defn}
The following two propositions provide alternative characterizations of $f$\!-randomness.
\begin{prop} [Higuchi, Hudelson, Simpson, Yokoyama~\cite{HigHudSim14}]
$X\in2^\omega$ is autocomplex if and only if there is some computable function $f\!\colon2^{<\omega} \rightarrow \omega$ such that $f$ is unbounded along $X$ and $X$~is $f$\!-random.
\end{prop}
\begin{prop}[Kjos-Hanssen, Merkle, Stephan~\cite{KjoMerSte11}]
$X\in2^\omega$ is autocomplex if and only if $X$ is of DNC degree.
\end{prop}
We can now recast Theorem \ref{thm:template-2} as follows.
\begin{corollary}\label{cor:template}
For each $\delta\in(0,1)$, there is a left-c.e.\ semi-measure $P$ such that
\begin{itemize}
\item[(i)] $P$ has no atoms;
\item[(ii)] $\overline{P}(2^\omega)=\overline{P}(\supp{P})>1-\delta$; and
\item[(iii)] for each $X\in \supp{P}$ and each Turing functional $\Phi$, if $\Phi(X)$ is defined, then $\Phi(X)$~is not autocomplex, or equivalently, $\Phi(X)$ is not of DNC degree. Equivalently, for each $X\in \supp{P}$, $X$~is not of DNC degree.
\end{itemize}
\end{corollary}
By the same argument as the one that immediately follows the statement of Theorem \ref{thm:template-1}, Corollary \ref{cor:template} yields an alternative proof of the fact that
$\mathbf{d}\vee\mathbf{c}$ is not the top $\mathrm{LV}$-degree.
\section{Applications to $\Pi^0_1$ Classes}\label{sec-conclusion}
As we have seen, V'yugin's construction as laid out in Sections \ref{sec-building} and \ref{sec-implementing} yields significant results in the study of the $\mathrm{LV}$-degrees. As noted in the introduction, the construction also has some interesting consequences for the study of $\Pi^0_1$ classes, that is, effectively closed subsets of $2^\omega$.
In particular, for each of the semi-measures $P$ defined via V'yugin's construction, for~${\sigma\in2^{<\omega}}$, the condition $P(\sigma)=0$ is computable, as $P(\sigma)=0$ if and only if $q(\sigma)$ is set to 0 at stage $|\sigma|$ in the construction of $P$. This implies that in each case, the support of~$P$ is a $\Pi^0_1$~class. We thus establish the two corollaries stated in the introduction.
\begin{samepage}
\newtheorem*{thm:cor1}{Corollary \ref{cor1}}
\begin{thm:cor1}{\ }
For every $\delta\in(0,1)$, there is a Turing functional $\Phi$ such that
\begin{itemize}
\item[(i)] $\Phi$ maps no set of positive measure to any single sequence,
\item [(ii)] the domain of $\Phi$ has Lebesgue measure at least $1-\delta$,
\item [(iii)] the range of $\Phi$ is a $\Pi^0_1$ class, and
\item [(iv)] no sequence in the range of $\Phi$ computes a Martin-L\"of random sequence.
\end{itemize}
\end{thm:cor1}
\end{samepage}
\begin{samepage}
\newtheorem*{thm:cor2}{Corollary \ref{cor2}}
\begin{thm:cor2}{\ }
For every $\delta\in(0,1)$, there is a Turing functional $\Phi$ such that
\begin{itemize}
\item[(i)] $\Phi$ maps no set of positive measure to any single sequence,
\item [(ii)] the domain of $\Phi$ has Lebesgue measure at least $1-\delta$,
\item [(iii)] the range of $\Phi$ is a $\Pi^0_1$ class, and
\item [(iv)] no sequence in the range of $\Phi$ is of DNC degree.
\end{itemize}
\end{thm:cor2}
\end{samepage}
\section*{Acknowledgments}
The authors would like to thank Laurent Bienvenu, Frank Stephan, Mushfeq Khan, and Paul Shafer for helpful discussions. In particular, Bienvenu contributed to the reconstruction of the proof of Theorem~\ref{thm:rand-atom}, Khan informed us of the proof of the negligibility of the collection of non-computable sequences of hyperimmune-free degree in Proposition \ref{prop-negligible}, and Stephan pointed out the relation with immunity notions discussed in Corollary~\ref{biimmunedegree}. The authors would also like to thank the referees for detailed and insightful comments.
\medskip
H\"olzl was supported by a Feodor Lynen postdoctoral research fellowship by the Alexander von Humboldt Foundation
and by the Ministry of Education of Singapore through grant {R146\nobreakdash-000\nobreakdash-184\nobreakdash-112} (MOE2013\nobreakdash-T2\nobreakdash-1\nobreakdash-062). Porter was supported by the National Science Foundation through grant OISE-1159158 as part of the International Research Fellowship Program and by the John Templeton Foundation as part of the project ``Structure and Randomness in the Theory of Computation,'' and by the National Security Agency Mathematical Sciences Program grant H98230-I6-I-D310 as part of the Young Investigator's Program.
Both authors received travel support from the Bayerisch-Franz\"osisches Hochschulzentrum/\allowbreak Centre de Coop\'eration Universitaire
Franco-Bavarois. In addition, H\"olzl received travel support from the above Templeton grant for a collaborative visit to work with Porter at the University of Florida.
The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
\bibliographystyle{alpha}
Narrow-linewidth lasers are in increasing demand in science and cutting-edge technologies, as they provide a competitive advantage in areas such as coherent communications \cite{fulop2018high}, high-precision spectroscopy \cite{Suh600,Yang:19}, optical clocks \cite{Papp:14,Newman:19}, ultrafast optical ranging \cite{Suh2018,Trocha2018,Riemensberger2020} and others. Self-injection locking (SIL) of a laser diode to a whispering gallery mode (WGM) microresonator is widely used for linewidth narrowing and stabilization in different ranges from the ultraviolet to the mid-IR \cite{VASSILIEV1998305, savchenkov2019self, Liang2015, Dale:16, Savchenkov:19a, shitikov2020microresonator, Coupling18}. This technology is robust and compact. A self-injection-locked laser provides a direct route to the generation of coherent frequency combs in the form of solitonic structures \cite{ Pavlov2018, raja2019electrically, shen2020integrated, kondratiev2020numerical, lobanov2020generation, VoloshinDynamics}. Optical frequency combs may also be generated in laser diodes by rapidly switching the gain above and below the lasing threshold \cite{anandarajah2011generation}. Gain switching looks very attractive due to the simplicity of the design, and it is used for spectroscopy \cite{jerez2016dual} and high-capacity communication \cite{pfeifle2015flexible}. Gain-switched (GS) lasers also make extensive use of the injection locking technique for frequency comb generation \cite{quirce2020nonlinear, zhu2016novel}, dual-comb generation \cite{quevedo2020gain}, active frequency stabilization \cite{liekhus2012injection}, Kerr frequency comb excitation \cite{Wengeaba2807} and quantum key distribution \cite{comandar2016quantum}. Injection locking can reduce the laser linewidth below 100 kHz \cite{zhou201140nm}, but the result is still limited by the master laser linewidth. However, self-injection locking to an external passive resonator with a high quality factor \cite{lin2017nonlinear}, which allows outstanding results in laser stabilization to be achieved \cite{jin2020hertz, lim2017microresonator}, has never been applied to GS lasers. The implementation of self-injection locking leads both to further miniaturization of the laser, by eliminating the need for a master laser, and to narrowing of the laser linewidth.
In this work, we developed a microresonator-stabilized gain-switched laser operating in the SIL regime. We experimentally demonstrated high-contrast electrically tuned optical frequency combs with line spacings from 10 kHz to 10 GHz. It was revealed that SIL leads to a frequency distillation of each comb tooth and consequently increases the comb contrast.
The adjustment of the modulation voltage can be used to control the width of the frequency comb in terms of the number of lines. The widths of the central line and of the comb teeth were measured to be as narrow as that of a single line in the non-switched SIL regime, equal to several kHz. These results will undoubtedly be useful for spectroscopy, multi-carrier communications, and multi-frequency pumping for the generation of solitonic structures in either anomalous or normal group velocity dispersion.
The unique combination of a gain-switched laser with the self-injection locking technique allows a wide, flat comb spectrum with ultra-narrow sub-kHz linewidths to be achieved. This opens up the prospect of constructing a compact high-capacity coherent communication device based on ordinary, readily available components.
Interestingly, qualitatively comparable results on comb line distillation in the presence of a high-Q microresonator were obtained with electro-optic combs using a WGM microresonator as a filter \cite{prayoonyong2021optical}. There, the noise reduction of the electro-optic comb lines was achieved only when the microresonator FSR matched the comb line spacing, which limits the range of possible modulation frequencies compared to self-injection-locked GS combs.
This work consists of three parts. First, we describe the experimental setup. Second, we show that gain-switched lasers can be self-injection locked and demonstrate the impact of self-injection locking. Significant narrowing of the comb teeth is shown for both MHz and GHz modulation frequencies. We also study the influence of the parameters defining the SIL efficiency, such as the coupling efficiency, the laser frequency detuning from the microresonator mode, and the backscattering phase, on the generated comb parameters. Third, we demonstrate the possibility of electrical tuning of the comb parameters by variation of the modulation frequency and modulation depth.
\section{Experiment}
\begin{figure}[hbtp!]
\centering
\includegraphics[width=\linewidth]{Group_1.png}
\caption{Experimental setup. WGMs are excited with a coupling prism by a 1550 nm DFB laser without an isolator. The supply current is modulated with an RF generator through a bias tee. The transmitted light is analyzed on an oscilloscope, an optical spectrum analyzer (OSA) and an electrical spectrum analyzer (ESA). }
\label{fig:Setup}
\end{figure}
For an experimental study of the considered effect, the usual SIL setup was used (see Fig. \ref{fig:Setup}). The WGMs of a 4-mm-diameter microresonator made of MgF$_{2}$ were excited with a prism. The peak contrast of the mode in the transmission spectrum exceeded 30\% at critical coupling. The internal quality factor was $4\cdot10^8$ and was monitored during the experiments by the technique proposed in \cite{shitikov2020microresonator}. A 1550 nm continuous-wave distributed feedback (DFB) laser without an isolator was used. The locking range at critical coupling exceeded 1 GHz. The laser power was as low as 1.5 mW, which ensured a linear SIL regime. The distances between the laser diode and the microresonator and between the coupler and the microresonator were precisely controlled by piezo elements. The transmitted light was split into two parts: the first part was sent to a free-space detector, and the second one was coupled into a single-mode optical fiber to observe the optical spectrum or the radio-frequency spectrum with the help of a heterodyne scheme.
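As a rough estimate, given only for orientation and not as a measured quantity, the quoted internal quality factor corresponds to an intrinsic resonance width of
\[
\Delta\nu_{0}\approx\frac{\nu}{Q}=\frac{c/\lambda}{Q}\approx\frac{1.93\times10^{14}~\text{Hz}}{4\times10^{8}}\approx0.5~\text{MHz}
\]
at 1550 nm; the loaded linewidth is larger and depends on the coupling.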
The light coupled into the single-mode optical fiber was further mixed with the signal from a narrow-linewidth fiber laser (Koheras Adjustik) and sent to a fiber-coupled detector with a 40 GHz bandwidth. The signal was analyzed with an oscilloscope (100 MHz bandwidth and a sample rate of 2.5 GS/s) and an ESA (bandwidth up to 25 GHz).
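For reference, in such a heterodyne scheme the photocurrent produced by mixing the transmitted field (power $P_{\rm s}$, optical frequency $f_{\rm s}$) with the reference laser (power $P_{\rm ref}$, frequency $f_{\rm ref}$) contains a beatnote at the difference frequency,
\[
I(t)\propto P_{\rm s}+P_{\rm ref}+2\sqrt{P_{\rm s}P_{\rm ref}}\,\cos\bigl(2\pi(f_{\rm s}-f_{\rm ref})t+\Delta\varphi\bigr),
\]
so every spectral component of the laser output appears in the RF spectrum shifted by the reference-laser frequency. The notation here is generic and is not tied to the specific device parameters.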
The laser diode current was modulated with an RF generator through an ordinary bias-tee circuit. In our experiments we studied the impact of the modulation of the laser supply current on the SIL. Note that, when the bias tee is connected, the laser frequency shifts up and a correction of the pump current is needed to maintain the SIL regime. This can be done easily, since the heterodyne laser provides a fixed reference frequency for real-time observation. We found that it is possible to stay in the SIL regime over a large range of modulation frequencies and modulation depths.
To verify that the laser operated in the GS regime, we measured the voltage at the diode pins for different modulation depths. For a modulation depth of 8 V the minimal voltage on the pins reached 0 V during the oscillations, and for a modulation depth of 1.4 V the minimal voltage on the pins was below the lasing threshold, while the maximal voltage was above the threshold in both cases. The same results were obtained for modulation frequencies from 1 MHz to 100 MHz.
We also tested different waveforms, such as ramp and square, applied to the bias tee. They did not change the spectra dramatically, so all the results presented in this work were obtained with sine modulation.
\section{Gain-switching impact on SIL}
We started by studying in detail how the gain switching influences the SIL.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=\linewidth]{Group_2.png}
\caption{(a)-(f) Comparison of the non-switched (left panels) and gain-switched (right panels) laser diode in the SIL regime. (a), (b) The transmission spectra of the non-switched (a) and gain-switched (b) laser in the SIL regime. (c)-(f) The spectrograms corresponding to (a) and (b), where (e) and (f) are enlarged areas of the locked state. (g) Evolution of the output spectrum of the SIL GS laser with the variation of the detuning of the GS laser central frequency from the microresonator eigenfrequency. The high-contrast lines near 1.8 GHz are the 5 MHz spaced comb (also presented in the insets); solid blue is the noisy part of the spectrum. }
\label{fig:CombsSIL}
\end{figure*}
A method of spectrograms was used for the analysis. A spectrogram is a visual representation of the frequency spectrum of a generated signal as it varies with time. Spectrograms make it possible to clarify the temporal dynamics of the self-injection locking. The transmitted signal from the microresonator was mixed with the signal from the reference laser, and the beatnote was digitized at the oscilloscope with a sample rate of up to 2.5 GS/s. We calculated the short-time Fourier transform with a Blackman window. The DFB laser frequency was slowly swept by the power supply, while the frequency of the reference laser was kept constant. The transmission signals in the non-switched and GS SIL cases are presented in panels (a) and (b) of Fig. \ref{fig:CombsSIL}. The resulting spectrograms are presented in panels (c)-(f), where (c) and (e) correspond to the non-switched SIL, and (d) and (f) to the gain-switched SIL, shown on different scales. The time scales in panels (a) and (c), as well as in (b) and (d), are synchronized. The modulation frequency was equal to 1 MHz and the voltage amplitude was 1.4 V. In panels (c) and (d), from 2 to 4 ms one may see the unlocked laser frequency evolution. Without the gain switching a single blurred line is observed, and with gain switching it is changed to a broad frequency range. In panels (e) and (f) the locking area is shown enlarged. In the spectrogram (f) separate comb lines can be seen. In panel (c) one may see the unlocked laser frequency changing from 2.5 s to 4 s; then the laser becomes self-injection locked to the WGM. The frequency change almost ceases in the SIL regime. The ratio of the slopes of the tuning curve in the locked and unlocked regimes determines the stabilization coefficient. There is no unlocked part after SIL in the panel because the laser frequency is detuned out of the sampling bandwidth. If there were no SIL, one would have observed a V-shaped beatnote frequency dependence. In panel (d) one may see the spectrum of the unlocked gain-switched laser from 1.5 to 3 s. Then it becomes partially locked by the WGM from 3.5 to 4 s. Examples of the spectra in that case are presented in the first three panels of (g). Such a regime corresponds to the locking of a generated sideband to the WGM. For higher modulation frequencies it leads to multiple excitations of the WGM with a maximum transmission dip near the center. Then the laser becomes fully self-injection locked in the interval from 4 to 5 s (fourth panel of (g)). This happens when the central frequency is locked to the WGM. In the interval from 5 to 5.5 s it transfers again into a partially locked state (last panel of (g)), and finally one may observe a fringe of the unlocked gain-switched spectrum from 5.5 to 7 s. It is worth noting that the inclination of the lines in the GS regime, which is determined by the stabilization coefficient, is close to that of the non-switched SIL. In panel (g) one can see the spectra obtained for different detunings of the GS laser central frequency from the WGM resonance. Gain switching completely alters the SIL dynamics. As the spectrum of the GS laser approaches the SIL range, high narrow comb lines appear in this range. To achieve an optimal regime with a fully distilled output spectrum, one should set the detuning of the laser from the resonance close enough to zero.
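For reproducibility, the short-time Fourier analysis described above can be carried out with standard numerical tools. The following minimal sketch (Python/SciPy) illustrates the procedure on a synthetic beatnote record; the sample rate is the one quoted above, while the array names, record length, window length and synthetic test signal are illustrative assumptions rather than the exact parameters of our processing.
\begin{verbatim}
import numpy as np
from scipy.signal import spectrogram

fs = 2.5e9                        # sample rate, S/s (as quoted above)
t = np.arange(0, 1e-3, 1/fs)      # 1 ms record (illustrative)

# Synthetic beatnote with a slowly drifting frequency, standing in
# for the digitized photodetector signal.
f_inst = 50e6 + 5e6*np.sin(2*np.pi*1e3*t)   # instantaneous frequency
phase = 2*np.pi*np.cumsum(f_inst)/fs        # integrated phase
beatnote = np.cos(phase)

# Short-time Fourier transform with a Blackman window.
f, tt, Sxx = spectrogram(beatnote, fs=fs, window='blackman',
                         nperseg=4096, noverlap=2048)
# Sxx[i, j] is the power spectral density at frequency f[i], time tt[j];
# plotting 10*log10(Sxx) versus (tt, f) yields the spectrogram.
\end{verbatim}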
\begin{figure}[hbtp!]
\centering
\includegraphics[width=0.9\linewidth]{Group_3.png}
\caption{Spectra of the 100-MHz gain-switched comb generated by the free-running (top panel) and self-injection-locked (bottom panel) laser. SIL leads to noise suppression and a higher power level. The lower spectrum is noticeably spectrally purified.
}
\label{fig:OnOff}
\end{figure}
As was shown in several works \cite{Kondratiev:17, Galiev20}, the value of the locking phase is an important parameter for SIL. We were able to tune it by varying the distance between the laser and the microresonator. We found that the optimal phases for the unmodulated and modulated cases may not coincide. First, to obtain the optimal phase in the modulated case, we adjusted the distance between the laser and the microresonator to achieve the maximum peak contrast on the free-space detector. The laser frequency was continuously scanned during the alignment process. Then the frequency scan was turned off and the laser detuning was adjusted for optimal GS-comb excitation.
The obtained results confirm that pump modulation does not lead to the suppression of the SIL regime. Interestingly, and even surprisingly, the locking persists even for modulation frequencies larger than the locking range width. This can be understood as follows: first, the self-injection locking narrows the laser linewidth; then the parametric process in the laser provides comb teeth, which are also narrow because of the narrow seed.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=0.9\linewidth]{Group_4.png}
\caption{The analysis of the tooth linewidths of the GS combs in the SIL regime for high (3 GHz) and low (100 MHz) modulation frequencies. We fit the beatnote signal with a Voigt profile. The central line and the sideband both have narrow kHz-scale linewidths. }
\label{fig:Ultimate2}
\end{figure*}
\section{SIL impact on the gain switching}
\begin{figure}[hbtp!]
\centering
\includegraphics[width=0.95\linewidth]{Group_5.png}
\caption{ RF spectrum evolution with variation of the coupling efficiency. }
\label{fig:Coupl}
\end{figure}
Self-injection locking of the GS laser diode to a high-Q WGM drastically changes its spectrum.
First, we compared the complete spectra of the locked and free-running gain-switched lasers.
In Fig. \ref{fig:OnOff} the spectra of the modulated GS laser without SIL (top panel) and with SIL (bottom panel) are presented. SIL was switched off by moving the microresonator away from the coupler. The modulation frequency was equal to 100 MHz and the voltage amplitude was 8 V. One may see that while the width of the spectrum does not change much (a broadening of about 30$\%$ can be observed in the SIL regime), a drastic enhancement of the comb contrast by $\approx$20 dB is clearly visible due to the significant narrowing of the comb teeth. The same effects were observed over the whole range of the studied modulation frequencies. Parasitic lines below 1 GHz, namely $\omega_{\rm gen}$, $2\omega_{\rm gen}$, $3\omega_{\rm gen}$ and $\omega_{\rm cal}$, can be identified in the bottom graph. Here $\omega_{\rm cal}$ is the Koheras calibration line appearing at 550 MHz and $\omega_{\rm gen}$ is the line corresponding to the modulation frequency.
Then, the spectral properties of the central line and sidebands for both low and high modulation frequencies were accurately measured (see Fig. \ref{fig:Ultimate2}). We analyzed the spectral properties of the GS SIL combs for significantly different cases, exploring beatnotes of the central line (Fig. \ref{fig:Ultimate2}a) and sideband (Fig. \ref{fig:Ultimate2}b, c) of the 3-GHz frequency comb (modulation voltage of 1.8 V), and of the central line and sideband of the 100-MHz comb (modulation voltage of 1.4 V, see Fig. \ref{fig:Ultimate2} d, e, f). To analyze the beatnote data we used an approximation with a Voigt profile (black curves in the panels). The Faddeeva error function was used to calculate the Voigt approximation \cite{ABRAROV20111894}. The Voigt profile allows one to estimate the contributions of white frequency noise (Lorentzian linewidth) and flicker frequency noise (Gaussian linewidth) \cite{galiev2018spectrum}. The former is less than 0.5 kHz, the latter is less than 6 kHz.
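A minimal sketch of such a fit is given below; it uses the Faddeeva function to evaluate the Voigt profile. The parametrization, initial guesses and array names are assumptions made for the example and do not reproduce the actual fitting routine used here.
\begin{verbatim}
# Sketch of a Voigt-profile fit of a beatnote spectrum.
import numpy as np
from scipy.special import wofz          # Faddeeva function w(z)
from scipy.optimize import curve_fit

def voigt(f, f0, amp, sigma, gamma, offset):
    # Voigt profile: Gaussian std sigma, Lorentzian HWHM gamma
    z = ((f - f0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return amp * np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi)) + offset

# freq, psd: measured beatnote spectrum (hypothetical arrays)
# p0 = [freq[np.argmax(psd)], psd.max(), 1e3, 1e2, psd.min()]
# popt, _ = curve_fit(voigt, freq, psd, p0=p0)
# gauss_fwhm   = 2.355 * popt[2]   # flicker-noise (Gaussian) linewidth
# lorentz_fwhm = 2.0   * popt[3]   # white-noise (Lorentzian) linewidth
\end{verbatim}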
The obtained results demonstrate the possibility of generating high-contrast, electrically tuned frequency combs with spectral components of kHz width. Self-injection locking provides spectral purification of the gain-switched combs for a wide range of interline frequencies.
We also varied the coupling of the microresonator to the optical pump from the undercoupled regime (coupling losses less than internal losses) through critical coupling (coupling losses equal to internal losses) to the overcoupled regime (coupling losses greater than internal losses) by changing the gap between the coupling prism and the microresonator over an approximately $0...200$ nm range (see Fig. \ref{fig:Coupl}). The modulation frequency was equal to 320 kHz and the voltage amplitude was 1.4 V. The width of the full spectrum is almost the same in all three cases. The frequency distance between the lines in the overcoupled regime is considerably less stable, see the insets. The noise floor in the overcoupled regime is higher and the lines are broader. Thus the loading of the microresonator determines the linewidths of the comb teeth and their stability. Note that in the undercoupled regime the separation of the remote lines is doubled.
\section{Electrical tuning of the gain-switched combs}
\begin{figure*}[hbtp!]
\centering
\includegraphics[width=\linewidth]{Group_6.png}
\caption{The frequency combs observed in the SIL regime with the gain-switched laser. Spectra were obtained by means of the heterodyne method. Comb teeth are marked with red circles and red arrows; green arrows correspond to the interference lines. (a) The 40 kHz spaced comb. The spacing between adjacent lines is presented in the inset. (b) The 10 MHz spaced comb with a width exceeding 1 GHz. (c) The 0.3 GHz spaced comb. (d) The 0.5 GHz spaced comb. (e) The 1 GHz spaced comb. (f) The 10 GHz spaced comb.
}
\label{fig:SIL}
\end{figure*}
\begin{figure}[hbtp!]
\centering
\includegraphics[width=\linewidth]{Group_7.png}
\caption{ The dependence of the GS comb full width at the $-45$ dBm level on the switching frequency in the SIL regime. The GS spectrum width without SIL is shown with yellow dots.
}
\label{fig:CombWidth}
\end{figure}
\begin{figure}[hbtp!]
\centering
\includegraphics[width=0.9\linewidth]{Group_8.png}
\caption{ The spectrum broadening with increasing modulation voltage in the SIL regime. The upper panel corresponds to a voltage of 1.4 V, the bottom panel to 8.0 V, at a 5 MHz modulation frequency. }
\label{fig:Electrically_tuned}
\end{figure}
We studied the characteristics of the generated combs in SIL regime for different modulation parameters. For detailed analysis we varied the modulation frequency and modulation voltage in the wide range and observed frequency combs of different types with the comb line spacing from 10 kHz to 10 GHz (see Fig. \ref{fig:SIL}).
First, we investigated the evolution of the generated combs with the growth of the modulation frequency.
In panel (a) of Fig. \ref{fig:SIL} a frequency comb with a 40 kHz interval between the lines, consisting of 33 resolvable lines, is presented. The modulation amplitude was equal to 1.4 V. The lines at such a low repetition rate are indistinguishable for the free-running GS laser and become pronounced due to the linewidth narrowing and frequency stabilization in the SIL regime. In panel (b) of Fig. \ref{fig:SIL} a frequency comb with a 10 MHz interval between the lines and a broad spectrum of 1.4 GHz is presented. The modulation amplitude at the generator output was equal to 8 V, which led to the spectrum broadening. In panels (c) and (d) the spectra of the combs with a modulation voltage of 1.6 V and line spacings of 0.3 and 0.5 GHz are presented. The spectra have a triangular shape and almost the same width. In panel (e) a frequency comb with a 1 GHz interval between the lines and a 4 GHz width is presented. The modulation amplitude was limited by the generator and equal to 1.6 V. Parasitic lines are marked green. They are the Koheras calibration line $\omega_{\rm cal}$ appearing at 550 MHz and the line corresponding to the modulation frequency $\omega_{\rm gen}$. In panel (f) a frequency comb with a 10 GHz interval between the lines is presented. Here the heterodyne is near the central comb line (shown with the red arrow $\omega_0$), so both of its wings are to the right of it (shown with the red arrows $\omega_{\pm 1}$). The modulation amplitude was equal to 1.6 V. The lines corresponding to $\omega_{\rm cal}$ and to its interference with the central line (shown with the green arrows $\omega_{\rm cal \pm 1}$), together with $\omega_{\rm gen}$, appear in that case.
The frequency interval between the lines of the comb can be seamlessly tuned by the GS frequency. However, one should keep in mind that the SIL of the GS laser diode is affected by the amplitude of the applied signal. First, it contributes to the biasing of the diode, which changes its unlocked frequency; second, overdrive may suppress the carrier so that a sideband becomes locked instead. In our case, the input impedance of the bias tee circuit was frequency dependent and unmatched at high frequencies, so by changing the GS frequency we inevitably changed its amplitude too. A frequency-dependent DC current correction and perfect impedance matching would lead to ultra-wideband continuous tuning of the SIL GS comb. We managed to keep the SIL regime while continuously changing the frequency over hundreds of MHz, adjusting the GS voltage within 25\%.
Thus, we demonstrated that the frequency interval between the lines can be continuously tuned in a vast range if the impedance of the RF circuit is taken into account. The tuning may be either continuous or discrete: we tested tuning steps from 1 MHz up to 100 MHz. On the other hand, the central frequency cannot be tuned in a range larger than several loaded microresonator linewidths ($\approx 1$ MHz in our case, see Fig. \ref{fig:CombsSIL} (e)) without special tricks like simultaneous tuning of the WGM microresonator temperature \cite{Liang2015compact} and laser diode supply current or deformation of the microresonator \cite{liu2020monolithic}.
In Fig. \ref{fig:CombWidth} the dependence of the comb full width at the $-45$ dBm level on the switching frequency is presented. We investigated this dependence for a wide range of switching frequencies in the SIL regime. We also used different voltage amplitudes (1.4 V, marked with red circles, and 8 V, marked with blue ones in Fig. \ref{fig:CombWidth}) and found the same behavior. The width of the comb increased monotonically up to 3 GHz at a modulation frequency of 10 MHz, due to the increase of both the interline distance and the line number. At this level the comb width reached a plateau and we observed a decrease of the line number with the growth of the modulation frequency. This plateau is determined by the laser properties and is independent of SIL, because we observed the same spectral width of the generated comb in the unlocked regime (see yellow points in Fig. \ref{fig:CombWidth}). At modulation frequencies of the GHz scale the comb full width again increases with the modulation frequency.
Second, we studied the evolution of the generated combs with the variation of the modulation depth and revealed that the amplitude of the modulation voltage is one of the key parameters of the scheme, allowing one to control the frequency comb characteristics. In Fig. \ref{fig:Electrically_tuned} the 5-MHz GS combs for a modulation voltage of 1.4 V (top panel) and 8 V (bottom panel) are presented. For the 8 V switching voltage the laser current monitored at the laser diode pins reached zero during the oscillation. In that case the width of the spectrum is almost 3 times larger for the same comb line spacing. The modulation voltage can be selected in such a manner as to convert all of the central line power into sidebands.
The stability of the SIL gain-switched comb was characterized by measurement of the beatnote phase noise (see Fig. \ref{fig:phaseNoise}). The single-sideband phase noise spectral density of the beatnote signal between the laser under consideration and the reference laser (Koheras Adjustik) was measured. We compared the phase noise levels of the unlocked laser, the SIL laser without modulation, and the central line, first and second sidebands of the 100 MHz gain-switched SIL comb. Up to a frequency offset of $10^3$ Hz, fluctuations of the temperature of the microresonator are the dominant destabilizing factor. Above this offset, the phase noise values in the locked cases are well below the 1 kHz asymptote. We compared the phase noise with that of two reference lasers and found that in the region from $10^3$ Hz to $10^5$ Hz the phase noise reached the level of the reference laser. At a frequency offset of $10^6$ Hz the phase noise reaches a plateau determined by the intensity noise. The sidebands reached the plateau at a higher level due to their lower peak contrast. This experiment demonstrates the outstanding stability of the SIL gain-switched frequency comb.
\begin{figure}[hbtp!]
\centering
\includegraphics[width=\linewidth]{Group_9.png}
\caption{Single-sideband spectral density of the phase noise of the beat signal between the reference laser and the laser under study in different regimes. The grey line is the unlocked DFB laser. The red line is the SIL laser without modulation applied. The rest are the central line and two sidebands on each side of the SIL GS laser. Black lines are the asymptotes corresponding to linewidths of 0.1, 1, 10, 100 kHz and 1 MHz. It can be seen that for frequency offsets above $10^3$ Hz, the values are below the 1 kHz asymptote. SIL itself stabilizes the laser at short averaging times, while below $10^3$ Hz the fluctuations due to temperature instability are dominant. The lower the line power, the higher the plateau at frequencies approaching 1 MHz. }
\label{fig:phaseNoise}
\end{figure}
At this point we have demonstrated conceptually the frequency distillation of the gain-switched comb in the self-injection locking regime. The frequency interval between comb lines can be varied over an extremely large range, from tens of kHz up to tens of GHz. The ordinary equipment (cables, connectors) used in the RF scheme was not designed for microwave measurements. Thus at GHz frequencies the losses in the cable traces became significant, and optimization of the RF path may lead to even better performance.
\section {Conclusion}
GS laser optical frequency combs were observed experimentally with an ordinary continuous-wave DFB laser locked to a high-Q WGM microresonator. We revealed an immense flexibility of the switching rate (from 10 kHz to 10 GHz in our case). We showed that SIL leads to a frequency distillation of the comb teeth down to the kHz scale and to a significant increase of the comb contrast (by more than 20 dB in our case). The linewidths of the central line and the comb teeth were measured to be as narrow as in the non-switched SIL case. We found that the loaded Q-factor of the microresonator defines the linewidth narrowing and its stabilization, as in plain SIL. The comb line spacing is determined by the modulation frequency of the generator and can be continuously tuned. On the other hand, the full width of the comb can be controlled by adjusting the amplitude of the modulation voltage. These results may be useful for coherent communications, spectroscopy, quantum key distribution and coherent LIDARs.
\section*{Acknowledgements}
The work was supported by the Russian Science Foundation (project 20-12-00344).
\newpage
\eject
\section{Global magnetospheric simulations}
\label{sec:sim}
The global magnetospheric simulations are produced with the OpenGGCM-CTIM-RCM code, an MHD-based model that simulates the interaction of the solar wind with the magnetosphere-ionosphere-thermosphere system. OpenGGCM-CTIM-RCM is available at the Community Coordinated Modeling Center at NASA/GSFC for model runs on demand.
A detailed description of the model and typical examples of OpenGGCM applications can be found in~\citet{raeder03t}, \citet{raeder01a}, \citet{Raeder2005PO}, \citet{Connor2016MO}, \citet{raeder00c}, \citet{Ge2011b}, \citet{raeder2010a}, \citet{ferdousi2016signal}, \citet{Dorelli2004AN}, \citet{Raeder2006FL}, \citet{berchem95l}, \citet{moretto2005}, \citet{vennerstroe2005}, \citet{Anderson2017CO}, \citet{Zhu2009IN}, \citet{Zhou2012DI}, \citet{Shi2014SO}, to name a few. Of particular relevance to this study, OpenGGCM-CTIM-RCM simulations have recently been used for a Domain of Influence Analysis, a technique rooted in Data Assimilation that can be used to identify the most promising locations for monitoring (i.e.\ spacecraft placement) in a complex system such as the magnetosphere~\citep{millas2020domain}.
\textcolor{black}{OpenGGCM-CTIM-RCM uses a stretched Cartesian grid~\citep{raeder03t}, which in this work has $325\times 150\times 150$ cells, sufficient for our large-scale classification purposes while running for a few hours on a modest number of cores. The point density increases in the Sunward direction and in correspondence with the magnetospheric plasma sheet, the ``interesting" region of the simulation for our current purposes. The simulation extends from $-3000~R_E$ to $18~R_E$ in the Earth-Sun direction and from $-36~R_E$ to $+36~R_E$ in the $y$ and $z$ directions. $R_E$ is the Earth's mean radius; the Geocentric Solar Equatorial (GSE) coordinate system is used in this study. \\ In this work, we do not classify points from the entire simulated domain. We focus on a subset of the points with coordinates $-41< x/R_E< 18$, i.e. the magnetosphere / solar wind interaction region and the near magnetotail.\\}
\textcolor{black}{The OpenGGCM-CTIM-RCM boundary conditions require the specification of the three components of the solar wind velocity and magnetic field, the plasma pressure and the plasma number density at 1 AU. Boundary conditions in the sunward direction vary with time. They are interpolated to the appropriate simulated time from ACE observations~\citep{stone1998advanced}, and applied identically to the entire Sunward boundary. At the other boundaries, open boundary conditions (i.e., zero normal derivatives) are applied, with appropriate corrections to satisfy the $\nabla \cdot \mathbf{B}=0$ condition.}
For this study we initialize our simulation with solar wind conditions observed starting from May 8$^{th}$, 2004, 09:00 UTC, denoted as $t_0$. After a transient, the magnetosphere is formed by the interaction between the solar wind and the terrestrial magnetic field.
We classify simulated data points from the time $t_0$ + 210 minutes, when the magnetosphere is fully formed. We later compare our results with earlier and later times, $t_0$ + 150 and 225 minutes. In Figure~\ref{fig:mag_py0}, we show significant magnetospheric variables, at $t_0$ + 210 minutes, in the $xz$ plane at $y/R_E=0$ (meridional plane): the three components of the magnetic field $B_x$, $B_y$, $B_z$, the three velocity components $v_x$, $v_y$, $v_z$, the plasma density $n$, pressure $pp$, temperature $T$, the Alfv\'{e}n speed $v_A$, the Alfv\'{e}nic Mach number $M_A$, the plasma $\beta$. Magnetic field lines (more precisely, lines tangential to the field direction within the plane) are drawn in black. Panels g to l are in logarithmic scale.
We expect the algorithm to identify well-known domains such as pristine solar wind, magnetosheath, lobes, inner magnetosphere, plasma sheet, and boundary layers, which can be clearly identified in these plots. The classification is done for points from the entire 3D volume, and not only for 2D cuts such as the one shown.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig1.pdf}
\caption
{\small Simulated magnetospheric variables, in the $y/R_E=0$, meridional, plane and at $t_0$+ 210 minutes. The different magnetospheric regions are evident. Magnetic field lines are depicted as black arrows. }
\label{fig:mag_py0}
\end{figure}
In Figure~\ref{fig:violin_feat__good}, we depict the violin plots for the variables in Figure~\ref{fig:mag_py0}.
Violin plots are useful tools to visualize at a glance the distribution of feature values, as they depict the probability density of the data at different values. In violin plots, the shape of the violin depicts the frequency of occurrence of each feature value: the ``thicker" regions of the violin are where most of the observations lie. The white dot, the thick black vertical lines and the thin vertical lines (the ``whiskers") separating the blue and orange distributions depict the median, the interquartile range, and the 95~\% confidence interval, respectively. 50~\% of the data lie in the region highlighted by the thick black vertical lines; 95~\% of the data lie within the range spanned by the whiskers.
The left and right sides of the violins depict distributions at different times. The left side, in blue, depicts data points from $t_0$ + 210 minutes. The right side, in orange, depicts a data set composed of points from multiple snapshots, $t_0$ + 125, 175, 200 minutes, and intends to give a visual assessment of the variability of the distribution of magnetospheric properties with time. The width of the violins are normalized to the number of points in each bin.
In the simulations, points closer to Earth correctly exhibit very high magnetic field values, up to several $\mu$T. In the violin plots, for the sake of visualization, the magnetic field components of points with $|B|>100\;nT$ have been clipped to $\sqrt{100^2/3}\;nT$, multiplied by their respective sign (hence the accumulation of points at $\pm \sqrt{100^2/3}\;nT$ in the magnetic field components).
The multi-peaked distributions of several of the violins reflect the variability of these parameters across different magnetospheric environments. Multi-peaked distributions bode well for classification, since they show that the underlying data can be inherently divided into different classes.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{Fig2.pdf}
\caption
{\small Violin plots of the data sets extracted from the magnetospheric simulations. The left sides of the violins, in blue, are data points at $t_0$ + 210 minutes, the right, in orange, are from $t_0$ + 125, 175, 200 minutes. In the $B_x$ plot, the red, green, blue arrows point at the median, first and third quartiles, whiskers, respectively. }
\label{fig:violin_feat__good}
\end{figure}
\section{Self Organizing Maps: a recap}
\label{sec:techniques}
To classify magnetospheric regions, we use Self Organizing Maps (SOMs), an unsupervised ML technique. Self Organizing Maps~\citep{kohonen1982self,villmann2006magnification, Kohonen2014, amaya2020visualizing}, also known as Kohonen maps or self organizing feature maps, are a clustering technique based on a neural network architecture.
\textcolor{black}{SOMs aim at producing an \textit{ordered} representation of data, which in most cases has lower dimensionality with respect to the data itself. ``Ordered" is a key word in SOMs. The topographical relations between the trained SOM nodes are expected to be similar to those of the source data: data points that map to ``nearby" SOM nodes are expected to have ``similar" features. Each SOM node then represents a local average of the input data distribution, and nodes are topographically ordered according to their similarity relations~\citep{Kohonen2014}. }\\
A SOM is composed of :
\begin{itemize}
\item a (usually) two-dimensional lattice of $Lr\times Lc=q$ nodes, with $Lr$ and $Lc$ the number of rows and columns. This lattice, also called a map, is a structured arrangement where all nodes are located in fixed positions, $\mathbf{p}_i \in \mathbb{R}^2$, and associated with a single \textit{code word}, $\mathbf{w}_i$. \textcolor{black}{As is often done with two-dimensional SOM lattices, the nodes are organized in a hexagonal grid~\citep{Kohonen2014}. }
\item a list of $q$ \textit{code words} $\mathbf{W}=\{\mathbf{w}_i\in\mathbb{R}^n\}_{i=0..q-1}$, where $n$ is the number of features associated to each data point (and hence to each code word). \textcolor{black}{$n$ is therefore the number of plasma variables that we select, among the available ones, for our classification experiment.} Each $\mathbf{w}_i$ is associated with a map node $\mathbf{p}_i$.
\end{itemize}
Each of the $m$ input data points is a data entry $\mathbf{x}_\tau \in \mathbb{R}^n$. \textcolor{black}{Notice that, in the rest of the manuscript, we will use terms such as ``data point", ``data entry", ``input point" interchangeably.} Given a data entry $\mathbf{x}_\tau$, the closest code word in the map, $\mathbf{w}_s$ (Eq.~\ref{eq:winner}), is called the \textit{winning element}, and the corresponding map node $\mathbf{p}_s$ is the Best Matching Unit (BMU):
\begin{equation}
\mathbf{w}_s = \underset{\mathbf{w}_i \in \mathbf{W}}{\arg \min}\left( \lVert \mathbf{x}_\tau - \mathbf{w}_i\rVert \right)
\label{eq:winner}
\end{equation}
\textcolor{black}{$||.||$ is the distance metric. In this work, we use the Euclidean norm.}\\
SOMs learn by moving the winning element \textit{and neighboring nodes} closer to the data entry, based on their relative distance and on an iteration-number-dependent learning rate $\epsilon(\tau)$, with $\tau$ the progression of samples being presented to the map for training\textcolor{black}{: the feature values of the winning element are altered so as to reduce the distance between the ``updated" winning element and the data entry.} The peculiarity of SOMs is that a single entry is used to update the position of several code words in feature space, the winning node and its nearest neighbors: code words move towards the input point at a rate determined by the distance of their corresponding lattice node from the \textit{winning node}.
\textcolor{black}{It is useful to compare the learning procedure in SOMs and in another, perhaps better known, unsupervised classification method, K-means~\citep{lloyd1982least}. Both SOMs and K-means classification identify and modify the best matching unit for each new input. In K-means, only the winner node is updated. In SOMs, the winner node \textit{and its neighbors} are updated. This is done to obtain an ordered distribution: nearby nodes, notwithstanding their initial weights, are modified during training so as to become more and more similar. }\\
At every iteration of the method, the code words of the SOM are shifted according to the following rule:
\begin{equation}
\Delta\mathbf{w}_i = \epsilon(\tau)h_\sigma(\tau,i,s)(\mathbf{x}_\tau-\mathbf{w}_i)
\end{equation}
with $h_\sigma(\tau,i,j)$ defined as the lattice neighbor function:
\begin{equation}
h_\sigma(\tau,i,j) = e^{-\frac{\lVert \mathbf{p}_i - \mathbf{p}_j \rVert^2}{2\sigma(\tau)^2}},
\end{equation}
where $\sigma(\tau)$ is the iteration-number dependent lattice neighbor width.
The training of the SOM is an iterative process. At each iteration a single data entry is presented to the SOM and code words are updated accordingly.
\textcolor{black}{The radius of the neighboring function $\sigma(\tau)$ determines how far from the winning node the update introduced by the new input will extend. The learning rate $\epsilon(\tau)$ gives a measure of the magnitude of the correction. Both are slowly decreased with the iteration number. At the beginning of the training, the update introduced by a new data input will extend to a large number of nodes (large $\sigma$), which are significantly modified (large $\epsilon$), since it is assumed that the map nodes do not yet represent the input data distribution well. At large iteration numbers, the nodes are assumed to have already become more similar to the input data distribution, and lower $\sigma$ and $\epsilon$ are used for ``fine tuning".\\
In this work, we choose to decrease $\sigma$ and $\epsilon$ with the iteration number. Another option, which we do not explore, is to divide the training into two stages, coarse ordering and final convergence, with different values of $\sigma$ and $\epsilon$.\\
However small, $\sigma$ has to be kept larger than 0, otherwise only the winning node is updated, and the SOM loses its ordering properties~\citep{Kohonen2014}. }
This learning procedure ensures that neighboring nodes in the lattice are mapped to neighboring nodes in the $n$-dimensional feature space. The 2D maps obtained can then be graphically displayed, allowing to visually recognize patterns in the input features and to group together points that have similar properties, see Figure~\ref{fig:cl7_values__good}.
The main metric for the evaluation of the SOM is the \textit{quantization error}, which measures the average distance between each of the $m$ entry data points and its BMU, and hence how closely the map reflects the training data distribution.
\begin{equation}
Q_E = \frac{1}{m} \sum_{i=1}^m \lVert \mathbf{x}_i - \mathbf{w}_{s|\mathbf{x}_i} \rVert
\label{eq:Qe}
\end{equation}
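The training loop and the quantization error can be written compactly in code. The following Python sketch is purely illustrative: it uses a rectangular (rather than hexagonal) lattice, exponential decay schedules and a fixed number of iterations, all of which are assumptions made for the example rather than the settings used in this work.
\begin{verbatim}
# Minimal numpy sketch of the SOM training loop and quantization error.
import numpy as np

def train_som(X, Lr=10, Lc=12, n_iter=50000, eps0=0.25, sigma0=1.0):
    rng = np.random.default_rng(0)
    W = rng.random((Lr * Lc, X.shape[1]))               # code words w_i
    P = np.array([(r, c) for r in range(Lr)
                  for c in range(Lc)], dtype=float)     # node positions p_i
    for tau in range(n_iter):
        x = X[rng.integers(len(X))]                     # data entry x_tau
        s = np.argmin(np.linalg.norm(W - x, axis=1))    # winning element
        eps = eps0 * np.exp(-tau / n_iter)              # learning rate
        sigma = max(sigma0 * np.exp(-tau / n_iter), 1e-2)  # kept > 0
        h = np.exp(-np.sum((P - P[s])**2, axis=1) / (2 * sigma**2))
        W += eps * h[:, None] * (x - W)                 # Delta w_i
    return W, P

def quantization_error(X, W):
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return d.min(axis=1).mean()                         # Q_E
\end{verbatim}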
\section{Methodology and results}
\label{sec:res}
For our unsupervised classification experiments, we initially focus on a single temporal snapshot of the OpenGGCM-CTIM-RCM simulation, $t_0 + 210$ minutes.
Although the simulation domain is much larger, we restrict our input data set to the points with $-41 <x/R_E< 18$ (see Figure~\ref{fig:mag_py0}), since we are particularly interested in the magnetospheric regions more directly shaped by interaction with the solar wind.
We select 1 $\%$ of the 5557500 data points at $x/R_E> -41$, $t= t_0+ 210$ minutes as the training data set.
\textcolor{black}{The selection of these points is randomized, and the seed of the random number generator is fixed to ensure that results can be reproduced. Tests with different seeds and with a higher number of training points did not give significantly different classification results. }
In Figure~\ref{fig:violin_comp__good}, panel a, we plot the correlation matrix of the training data set. This and all subsequent analyses, unless otherwise specified, are done with the feature list labeled as F1 in Table~\ref{tab:caseList}: the three components of the magnetic field and of the velocity, and the logarithms of the density, pressure and temperature. In Table~\ref{tab:caseList}, we describe the different sets of features used in the classification experiments described in Section~\ref{sec:SOMFeature}. Each set of features is assigned an identifier (``Case"); the magnetospheric variables used in each case are listed under ``Features". Differences with respect to F1 are marked in bold.
\begin{table}
\small
\caption{Combination of features (``Cases") used for the different classification experiments. We mark in \textcolor{black}{bold} differences with respect to F1, our reference feature set.}
\begin{tabular}{c c }
\hline
\textbf{Case} & \textbf{Features}\\
\hline
F1 & $B_x$, $B_y$, $B_z$ , $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T) \\
\hline
F1-NL & $B_x$, $B_y$, $B_z$, $v_x$, $v_y$, $v_z$, \textbf{n, pp, T} \\
\hline
F2 & $B_x$, $B_y$, $B_z$, $v_x$, $v_y$, $v_z$, \textbf{\st{log(n)}}, log(pp), log(T) \\
\hline
F3 & $B_x$, $B_y$, $B_z$, \textbf{\st{v$_x$}}, $v_y$, $v_z$, log(n), log(pp), log(T) \\
\hline
F4 & $B_x$, $B_y$, $B_z$, $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T), \textbf{log(v$_A$)}\\
\hline
F5 & $B_x$, $B_y$, $B_z$, $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T), \textbf{log(M$_A$)}\\
\hline
F6 & $B_x$, $B_y$, $B_z$ , $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T), \textbf{log($\beta$)}\\
\hline
F7& \textbf{\st{B$_x$, B$_y$, B$_z$}}, $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T) \\
\hline
F8& \textbf{B$_x$, B$_y$, B$_z$ (clipped)}, $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T) \\
\hline
F9 & \textbf{|B| (clipped)}, $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T)\\
\hline
F10 & \textbf{log|B| }, $v_x$, $v_y$, $v_z$, log(n), log(pp), log(T)\\
\hline
\end{tabular}
\label{tab:caseList}
\end{table}
The correlation matrix shows the correlation coefficients between each variable and all others, including itself (the correlation coefficient of a variable with itself is of course 1). We notice in the bottom right of the matrix that the correlation is high between the logarithm of the density, the logarithm of the pressure, the logarithm of the temperature and the velocity in the Earth-Sun direction. This suggests that a lower-dimensional feature set can be obtained that still expresses a high percentage of the original variance. Using a lower-dimensional training data set is desirable, since it reduces the training time of the map.
At this stage of our investigation, we use Principal Component Analysis (PCA)~\citep{shlens2014tutorial} as a dimensionality reducing tool. More advanced techniques, and in particular techniques that do not rely on linear correlation between the features, are left as future work.
First, the variables are scaled between two fixed numbers, here 0 and 1, to prevent those with larger ranges from dominating the classification. Then, we use PCA to extract linearly independent Principal Components, PCs, from the set of original variables.
We keep the first three PCs, which express 52 $\%$, 35 $\%$ and 5.4 $\%$ of the total variance, retaining $\approx 93\,\%$ of the initial variance. We plot in Figure~\ref{fig:violin_comp__good}, panels b to d, left blue half-violins, the violin plots of these scaled components.
For a visual assessment of temporal variability in the simulations we show in the right orange half-violins the first three PCs of the mixed-time data set, where data points are taken at $t_0$ + 125, 175, 200 minutes. We see difference, albeit small, between the two sets, which explain the different classification results with fixed and mixed time data sets that we discuss in Section~\ref{sec:SOMFeature}. Notice, by comparing the blue and orange half-violins in panel b, that PC0 is ``rotated" around the median value in the two data sets, which is possible for components reconstructed through linear PCA.
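A minimal sketch of this pre-processing step (feature scaling followed by PCA) is given below. The array and file names are hypothetical, and the scikit-learn based implementation is only one possible realization of the procedure described above; the number of retained components follows the text.
\begin{verbatim}
# Sketch of the pre-processing: scaling to [0, 1] and PCA reduction.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# X: (m, 9) array with the F1 features
# [Bx, By, Bz, vx, vy, vz, log(n), log(pp), log(T)]
X = np.load("f1_features.npy")          # hypothetical file name

X_scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)

pca = PCA(n_components=3)
X_pca = pca.fit_transform(X_scaled)

print(pca.explained_variance_ratio_)    # variance expressed by each PC
print(pca.components_)                  # eigenvectors (rows of the PC table)
\end{verbatim}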
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{Fig3.pdf}
\caption
{\small Correlation plot for the fixed-time training data set at time $t_0$ + 210 minutes (panel a), violin plots of the first three PCs after PCA for the fixed- (left, blue half-violins) and mixed- (right, orange half-violins) time data sets (panel b to d). }
\label{fig:violin_comp__good}
\end{figure}
To investigate which of the features contribute most to each PC, we print in Table~\ref{tab:LogFeat_3comps}, section ``F1 feature set", the eigenvectors associated with the first three PCs (rows). Each column corresponds to one feature. The three most relevant features for each PC are marked in bold fonts.
\begin{table}
\small
\caption{Eigenvectors associated with the first three PCs (rows), for each of the F1, F8 (B clipped) and F9 (B clipped) feature sets in Table~\ref{tab:caseList} (columns). The three most relevant features for each PC are marked in bold fonts. }
\begin{tabular}{c c c c c c c c c c c}
\hline
& $|B|$ & $B_x$ & $B_y$ & $B_z$ & $v_x$ & $v_y$ & $v_z$ & $log(n)$ & $log(pp)$ & $log(T)$ \\
\hline
\multicolumn{11}{c}{\normalsize{F1 feature set}}\\
\hline
PC0 & - & 1.73e-04 & 2.83e-04 & -6.50e-04 & \textbf{2.47e-01 } & 4.57e-02 & 5.89e-03 & \textbf{-8.55e-01} & \textbf{-4.52e-01} & -3.20e-02\\
\hline
PC1 & - & -7.67e-06 & -7.51e-05 & 1.59e-04 & \textbf{ 3.67e-01} & 5.77e-02 & 5.30e-02 & -1.87e-01 & \textbf{5.07e-01} & \textbf{7.53e-01}\\
\hline
PC2 & - & -2.12e-04 & 9.58e-06 & -3.81e-04 & -9.59e-03 &\textbf{-9.80e-01} & \textbf{-1.75e-01} & -6.78e-02 & \textbf{1.71e-02 }& 6.38e-02\\
\hline
\multicolumn{11}{c}{\normalsize{\textcolor{black}{F8 feature set - B clipped}}}\\
\hline
PC0 & - & 4.44e-02 & 6.16e-02 & -4.59e-02 & \textbf{2.43e-01} & 4.55e-02 & 5.64e-03 & \textbf{-8.51e-01} & \textbf{-4.54e-01} & -3.73e-02 \\
\hline
PC1 & - & -1.78e-02 & -1.42e-02 & 4.62e-02 & \textbf{3.7e-01} & 5.8e-02 & 5.29e-02 & -1.96e-01 & \textbf{5.01e-01} & \textbf{7.51e-01 } \\
\hline
PC2 & - & \textbf{-1.74e-01} & -2.61e-02 & -3.54e-02 & -2.1e-02 &\textbf{-9.59e-01} & \textbf{-1.94e-01} & -7.78e-02 & 1.39e-02 & 6.60e-02 \\
\multicolumn{11}{c}{\normalsize{F9 feature set - B clipped}}\\
\hline
PC0 & 1.12e-01 & -& -& - & \textbf{2.8e-01} & 4.94e-02 & 1.00e-02 & \textbf{-8.61e-01} & \textbf{-4.05e-01} & 2.98e-02 \\
\hline
PC1 & \textbf{4.72e-01} & -& -& - & 3.11e-01 & 4.09e-02 & 4.43e-02 & -4.64e-02 & \textbf{ 4.98e-01} & \textbf{6.53e-01} \\
\hline
PC2 & \textbf{8.57e-01} & -& -& - & -2.67e-02 & -6.25e-02 & -3.68e-02 & \textbf{1.93e-01} & -2.31e-01 & \textbf{-4.11e-01} \\
\hline
\end{tabular}
\label{tab:LogFeat_3comps}
\end{table}
The three most significant F1 features for PC0 are the logarithm of the density, of the pressure and the velocity in the $x$ direction; for PC1 the logarithm of the temperature, of the pressure and the velocity in the $x$ direction; for PC2 the velocity in the $y$ direction, in the z direction and the logarithm of the plasma pressure. We see that the three magnetic field components rank the lowest in importance for all the three PCs.
This last result is at first glance quite surprising, given the fundamental role of the magnetic field in magnetospheric dynamics. We can explain it by looking at the violin plots in Figure~\ref{fig:violin_feat__good}. There, we see that the magnetic field distributions are quite ``simple" when compared to the multi-peak distributions of more significant features such as density, pressure, temperature, $v_x$. Still, one may argue that the very high values of the magnetic field close to Earth distort the magnetic field component distributions, and reduce their weight in determining the PCs. In Table \ref{tab:LogFeat_3comps}, section ``\textcolor{black}{F8} feature set - B clipped", we repeat the analysis clipping the magnetic field values as in Figure~\ref{fig:violin_feat__good}. \textcolor{black}{The intention of the clipping procedure is to cap the magnetic field modulus at 100 nT, while retaining information on the sign of each magnetic field component.}
Also now, the magnetic field does not contribute significantly in determining the PCs.\\
\textcolor{black}{In Table \ref{tab:LogFeat_3comps}, section ``F9 feature set - B clipped", we list the eigenvectors associated with the first three PCs for the F9 feature set. We see that now the clipped magnetic field magnitude ranks higher than with ``F1 feature set" and ``\textcolor{black}{F8} feature set - B clipped" in determining the PCs, and becomes relevant especially for PC1 and PC2.
In the violin plots of the PCs for F9, not shown here, we see that PC0 is not significantly different in F1 and F9, while PC1 and PC2 are. In particular, PC2 for F9 exhibits more peaks than PC2 for F1.}
The first three PCs obtained from the F1 feature list (without magnetic field clipping) are used to train a SOM. Each of the data points is processed and classified separately, based solely on its local properties, at $t_0$ + 210 minutes. We consider this local approach one of the strengths of our analysis method, which makes it particularly appealing for spacecraft on-board data analysis purposes.
The procedure for the selection of the SOM hyper-parameters is described in Appendix~\ref{sec:SOMPars}. At the end of it, we choose the following hyper-parameters: $q=10 \times 12$ nodes, initial learning rate $\epsilon_0=0.25$ and initial lattice neighbor width $\sigma_0=1$.
After the SOM is generated, its nodes are further classified using K-means clustering into a predetermined number of classes. Data points are then assigned to the same cluster as their Best Matching Unit, BMU.
The overall classification procedure can then be summarized as follows (a minimal code sketch is given after the list):
\begin{enumerate}
\item data pre-processing: feature scaling, dimensionality reduction via PCA, scaling of the reduced values;
\item SOM training;
\item K-means clustering of the SOM nodes;
\item classification of the data points, based on the classification of their BMU.
\end{enumerate}
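The sketch below strings the four steps together. It reuses the hypothetical train_som function and the pre-processed array X_pca from the previous sketches, and the K-means step uses scikit-learn; the specific calls are illustrative rather than a description of the actual implementation.
\begin{verbatim}
# End-to-end sketch of the classification pipeline (steps 1-4).
import numpy as np
from sklearn.cluster import KMeans

# 1. pre-processing: X_pca as obtained from the scaling + PCA sketch
# 2. SOM training on the reduced data
W, P = train_som(X_pca, Lr=10, Lc=12, eps0=0.25, sigma0=1.0)

# 3. K-means clustering of the SOM code words
k = 7
node_labels = KMeans(n_clusters=k, random_state=0).fit_predict(W)

# 4. each data point inherits the class of its Best Matching Unit
bmu = np.argmin(np.linalg.norm(X_pca[:, None, :] - W[None, :, :],
                               axis=2), axis=1)
point_labels = node_labels[bmu]
\end{verbatim}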
\textcolor{black}{It is useful to remark that, even if the same data are used to train different SOMs, the trained networks will differ due e.g. to the stochastic nature of artificial neural networks and to their sensitivity to initial conditions. If the initial positions of the map nodes are randomly set (as in our case), maps will evolve differently, even if the same data are used for the training.\\
To verify that our results do not correspond to local minima, we have trained different maps seeding the initial random node distribution with different seed values. We have verified that the trained SOMs so generated give comparable classification results, even if the nodes that map to the same magnetospheric points are located at different coordinates in the map. The reason for these comparable classification results is that the `net' created by a well-converged SOM will always have a similar coverage, and nodes will always be located at similar distances with respect to their neighbours (if the training data do not change). Hence, while the final map might look different, the classes and their properties will produce very similar end results. We refer the reader to \citet{amaya2020visualizing} for an exploration of the sensitivity of the SOM method to the parameters and to the initial conditions, and for a study of the rate and speed of convergence of the SOM. }
\textcolor{black}{Our maps are initialized with random node distributions. It has been demonstrated that different initialization strategies, such as using as initial node values a regular sampling of the hyperplane spanned by the two principal components of the data distribution, significantly speed up learning~\citep{Kohonen2014}. }
\subsection{Classification results and analysis}
\label{sec:resAn}
We describe in this Section the results of a classification experiment with the feature set F1 from Table~\ref{tab:caseList}. After training the SOM, we proceed to node clustering.
The optimal number of K-means classes $k$ can be chosen by examining the variation with $k$ of the Within Cluster Sum of Squares (WCSS), i.e. the sum of the squared distances from each point to its cluster centroid. The WCSS decreases as $k$ increases; the optimal value can be obtained using the Kneedle (``knee" plus ``needle") class number determination method~\citep{Satopaa2011}, which identifies the knee where the perceived cost of altering a system parameter is no longer worth the expected performance benefit. Here, the Kneedle method, Figure~\ref{fig:cl7__good}, panel a, gives $k=7$ as the optimal cluster number, i.e. a representative and compact description of feature variability.
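As an illustration, the cluster-number selection can be sketched as follows; the third-party kneed package is assumed to be available, and the range of $k$ values scanned is an arbitrary choice for the example.
\begin{verbatim}
# Sketch of the WCSS curve and its knee (optimal k) for the SOM nodes W.
from sklearn.cluster import KMeans
from kneed import KneeLocator

ks = list(range(2, 15))
wcss = [KMeans(n_clusters=k, random_state=0).fit(W).inertia_ for k in ks]

knee = KneeLocator(ks, wcss, curve="convex", direction="decreasing")
print(knee.knee)   # optimal number of clusters (7 in our case)
\end{verbatim}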
The clustering classification results can be plotted in 2D space. Figure~\ref{fig:cl7__good} shows, \textcolor{black}{in panels d and e, points with $-1 < y/R_E<1 $ and with $-1 < z/R_E<1 $, that we identify, for simplicity, with the meridional and equatorial planes, respectively.} The projected field lines are depicted in black. $k=7$, as per the results of the Kneedle method. The points are depicted in colors, with each color representing a class of classified SOM nodes. The dot density changes in different areas of the simulation because the grid used in the simulation is stretched, with increasing points per unit volume in the Sunward direction and in the plasma sheet center. \textcolor{black}{The (T) in the label is used to remind the reader that these points are the ones used for the training of the map. Plots of validation datasets will be labeled as (V).}
The SOM map in panel c depicts the clustered SOM nodes. In panel b, the clusters are a posteriori mapped to different magnetospheric regions.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig4.pdf}
\caption
{\small panel a: Kneedle determination of the optimal number of K-means clusters for the SOM nodes. WCSS (left axis) is the Within Cluster Sum of Squares, the maximum of the normalized distance (right axis) identifies the optimal cluster number, here $k=7$. \textcolor{black}{panel b: a posteriori class identification. panel c: clustered SOM nodes. Panel d and e: classified points in the meridional and equatorial planes respectively. In panel b to e, $k=7$. The points depicted are the ones used for the training (T) of the map.} }
\label{fig:cl7__good}
\end{figure}
Comparing Figure~\ref{fig:cl7__good} and Figure~\ref{fig:mag_py0}, we see that cluster 0, purple, corresponds to unshocked solar wind plasma. Cluster 4 (brown) and 1 (blue) map to shocked magnetosheath plasma just downstream the bow shock. Cluster 5 (orange) groups both points in the downwind supersonic magnetosheath, further downstream from the bow shock, and a few points \textit{at} the bow shock.
A possible explanation for this is that the bow shock is not in fact a vanishingly thin boundary, but has a finite thickness. The points within this region of space would present characteristics intermediate between the unshocked solar wind and the shocked plasma just downstream the bow shock, which are serendipitously very similar to those of other regions.
Cluster 2, cyan, maps to boundary layer plasma. Cluster 3, green, corresponds to points in the inner magnetosphere.
The result of this unsupervised classification is actually quite remarkable, because it corresponds quite well to the ``human" identification of magnetospheric regions developed over decades on the basis of the analysis of satellite data and of the understanding of physical processes. Here, instead, this very plausible classification of magnetospheric regions is obtained without human intervention.
In Figure~\ref{fig:cl7_values__good} we plot the feature map associated with the classification in Figure~\ref{fig:cl7__good}. While a good correspondence between feature value at a SOM node and values at the associated data points can be expected for the features that contribute most to the first PCs, this cannot be expected with less relevant features, such as the three components of the magnetic field in this case (see Table~\ref{tab:LogFeat_3comps}, section ``F1 feature set", and accompanying discussion).
Keeping these considerations in mind while looking at the feature values across the map nodes in Figure~\ref{fig:cl7_values__good}, we see that they correspond quite well with what we expect from the terrestrial magnetosphere.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig5.pdf}
\caption
{\small Distribution of the feature values in the SOM map, with $q=10 \times 12$, $\sigma_0=1$, $\epsilon_0=0.25$. The cluster boundaries and numbers are for $k=7$, Figure~\ref{fig:cl7__good}: cluster 0 corresponds to pristine solar wind, cluster 1, 4 and 5 to magnetosheath plasma, cluster 2 to boundary layers, cluster 3 to the inner magnetosphere, cluster 6 to the lobes.}
\label{fig:cl7_values__good}
\end{figure}
In particular, we see that the pristine solar wind (cluster 0 with $k=7$ in Figure~\ref{fig:cl7__good}, in the bottom right corner of the map in Figure~\ref{fig:cl7_values__good}), is well separated in terms of properties from the neighboring regions, especially when considering $v_x$, plasma pressure and temperature. This is because the plasma upstream a shock is faster, lower-pressure and colder than the plasma downstream.
In a shock, we expect higher density downstream the shock. We see that, of the three magnetosheath clusters (clusters 1, 4 and 5 in Figure~\ref{fig:cl7__good}; the cluster surrounding the bottom right cluster in Figure~\ref{fig:cl7_values__good}), the two mapping to regions just downstream the bow shock (cluster 1 and 4) have higher density than the solar wind cluster. When moving from cluster 1 and 4 towards the lobes, i.e. into cluster 5, the density decreases.
Cluster 1 and 4 are associated with magnetosheath plasma immediately downstream the bow shock. We see that their nodes have very similar values in terms of density, pressure, temperature, $v_x$, the quantities mainly associated to the downstream of the bow shock. They differ mainly in terms of the sign of the $v_z$ velocity components: the regions identified in Figure~\ref{fig:cl7__good} as cluster 4 (1) have mainly positive (negative) $v_z$. This may be the reason why two nodes adjacent to cluster 4 but presenting $v_z<0$ are carved out as cluster 1 in the map. We notice that, as a general rule, nodes belonging to the same cluster are expected to be contiguous in the SOM map, barring higher-dimension geometries which cannot be drawn in a 2D plane.
Other clusters that draw immediate attention are cluster 2 and 3, top right of the map, which are the only ones whose nodes include positive $v_x$ values \textcolor{black}{(Sunward velocity)}. These clusters map to boundary layers and the inner magnetosphere: the $v_x >0$ nodes are associated with the Earthwards fronts we see in Figure~\ref{fig:mag_py0}, panel d.
Finally, we remark on a seemingly strange fact: lobe plasma is clustered in cluster 6, which maps in Figure~\ref{fig:cl7_values__good} to nodes associated with $B_x >0$ only. We can explain this with the negligible role that $B_x$ has in determining the PCs for the F1 feature set (see discussion above in Section~\ref{sec:res}), and hence the map structure. We can expect that the feature map for features that rank higher in determining the PCs (here, density, pressure, temperature, $v_x$) will be more accurate than for lower-ranking features.
\subsection {Model validation}
\label{sec:validation}
In this Section, we address the robustness of the classification method when confronted with data from different simulated times, Section~\ref{sec:differentTimes}, and we compare against a different unsupervised classification method, pure K-means classification, in Section~\ref{sec:K-meansPure}.
\subsubsection{Robustness to temporal variations}
\label{sec:differentTimes}
In Figure~\ref{fig:cl7__good} we plot classification results for the training set \textcolor{black}{(T)}. Now, \textcolor{black}{in Figure~\ref{fig:diff_times}}, we move to validation sets \textcolor{black}{(V)}, composed of different points from the same simulated time as the training set (panels \textcolor{black}{b and e}, $t_0$ + 210 minutes), and of points from different simulated times (panels \textcolor{black}{a and d, c and f,} $t_0$+ 150 and $t_0$ + 225 minutes respectively).
While panels \textcolor{black}{b and e} show a straightforward sanity check, panels a and c aim at assessing how robust the classification method is to temporal variation. We want to verify how well classifiers trained at a certain time perform at different times, and in particular under different orientations of the geoeffective component of the Interplanetary Magnetic Field (IMF), $B_z$, which has well-known and important consequences for the magnetic field structure.
$B_z$ points Southwards at time $t_0$ + 210 and 225 minutes, Northwards at $t_0$ + 150 minutes.
The points classified in Figure~\ref{fig:diff_times}, panel a to \textcolor{black}{f}, are pre-processed and classified not only with the same procedure, but also with the same scalers, SOM and classifiers described in Section~\ref{sec:res}, and trained on a subset of data from $t_0$ + 210 minutes. \textcolor{black}{Panel a to c depicts points in the meridional plane, panel d to f in the equatorial plane.\\}
While we can expect that the performance of classifiers trained at a single time will degrade when magnetospheric conditions change, it is useful information to understand how robust to temporal variation they are, and which are the regions of the magnetosphere which are more challenging to classify correctly.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig6.pdf}
\caption
{Classification of validation \textcolor{black}{(``V")} data sets \textcolor{black}{in the meridional (a to c) and equatorial (d to f)} planes at $t_0$ + 150, $t_0$ + 210, $t_0$ + 225 minutes. In panels a to f, the points are classified with the same classifiers as Figure~\ref{fig:cl7__good}; data points in panels b \textcolor{black}{and e} are from the same simulated time as the training set, in panels \textcolor{black}{a, c, d and f} from different times. $B_z$ points Northwards at $t_0$ + 150 minutes, Southwards at $t_0$ + 210, 225 minutes. In panels g to i, the classifiers are trained with a mixed-time data set composed of points from $t_0$+ 125, 175, 200 minutes\textcolor{black}{, feature set F1-TV.} }
\label{fig:diff_times}
\end{figure}
Examining Figure~\ref{fig:diff_times}, panel b \textcolor{black}{and e}, we see that the classification results for the validation set at $t_0$ + 210 minutes excellently match those obtained with the training set, Figure~\ref{fig:cl7__good}.
The classification outcomes at time $t_0$ + 150 minutes, panels a \textcolor{black}{and d}, are also well in line with those at time $t_0$ + 210. The biggest difference \textcolor{black}{of the plots at $t_0$ + 150 minutes} with respect to $t_0$ + 210 is in the Southern magnetosheath region just downstream the bow shock \textcolor{black}{in the meridional plane}: while this region is classified as cluster 1 at time $t_0$+ 210, it is classified \textcolor{black}{at time $t_0$ + 150 mins} as cluster \textcolor{black}{1 or} 4, the other magnetosheath cluster downstream the bow shock, mostly associated with the Northern magnetosheath at time $t_0$ + 210. In panel c, time $t_0$ + 225 minutes, all magnetosheath plasma downstream the bow shock is classified as cluster 1.
This result can be easily explained. Cluster 1 and 4 both map to shocked plasma downstream the bow shock, i.e. regions with virtually identical properties in terms of the quantities that weigh the most in determining the PCs and therefore, arguably, the SOM structure: plasma density, pressure, temperature, $v_x$. The features that could help distinguishing between the North and South sectors, \textcolor{black}{$B_x$} and $v_z$, rank very low in determining the first PCs, and hence the SOM structure. On the other hand, exactly distinguishing via automatic classification between cluster 1 and 4 is not of particular importance, since the same physical processes are expected to occur in the two. Furthermore, a quick glance at the spacecraft spatial coordinates can clarify in which sector it is.
Relatively more concerning is the fact that several points in the Sunwards inner magnetosphere at time $t_0$ + 225 minutes are identified as inner magnetosheath plasma in Figure~\ref{fig:diff_times}, panel c \textcolor{black}{and f}. While training the SOM with several feature combinations in Section~\ref{sec:SOMFeature}, we notice that this particular region is perhaps the most difficult to classify correctly, especially in cases, like this one, where the classifiers are trained at a different time with respect to the classified points. A possible explanation for this particular mis-classification comes from Figure~\ref{fig:mag_py0}. There, we notice that the plasma density and pressure in the Sunwards inner magnetospheric regions have values compatible with those of certain inner magnetosheath regions. This may have pulled nodes mapping to the two regions close in the SOM, and in fact we see that cluster 3 and 5 are neighbors in the feature maps of Figure~\ref{fig:cl7_values__good}.
In Figure~\ref{fig:diff_times}, panel \textcolor{black}{g to i}, we explore classification results in the case of a mixed-time (``time-variable", TV) training data set\textcolor{black}{, in the meridional plane only}.
\textcolor{black}{In our visualization procedure, the cluster number (and hence the cluster color) is arbitrarily assigned. Hence, clusters mapping to the same magnetospheric regions may have different colors in classification experiments with different feature sets. For easier reading, we match a posteriori the cluster colors in the different classification experiments to those in Figure~\ref{fig:cl7__good}}.
Contrary to what shown before, now we train our map with points from three different simulated times, $t_0$+ 125, 175, 200 minutes. The features used are the F1 feature set and the map hyper-parameters are $q= 10 \times 12$, $\epsilon_0= 0.25$, $\sigma_0=1$. The classified data points are a validation set from t=$t_0$+ 150, 210, 225 minutes.
We see that the classification results agree quite well with those shown in Figure~\ref{fig:diff_times}, panels a to c. One minor difference is the fact that the two magnetosheath clusters just downstream the bow shock do not change significantly with time in panels g to i, while they did so in panels a to c. Another difference can be observed in the inner magnetospheric region.
In panel c, inner magnetospheric plasma at $t_0$ + 225 was mis-classified as magnetosheath plasma. In panels \textcolor{black}{g}, inner magnetospheric plasma at $t_0$+ 150 minutes is mis-classified as boundary layer plasma, possibly a less severe mis-classification. \textcolor{black}{In panel i, the number of mis-classified points in the inner magnetosphere is negligible with respect to panel c.}
\subsubsection{Comparison with different unsupervised classification methods }
\label{sec:K-meansPure}
Another form of model validation consists in comparing classification results with those from another unsupervised classification model. Here, we compare with \textcolor{black}{pure} K-means classification.
In Figure~\ref{fig:kMeans_noSOM} we present in panels a, b and c the classification results for the $x/R_E=0$, $y/R_E=0$ and $z/R_E=0$ planes \textcolor{black}{at time $t_0$ + 210 minutes} (panels b \textcolor{black}{and c are} reproduced here from Figure~\ref{fig:cl7__good} for ease of reading). We contrast them in panels d, e, and f with K-means classification, also with $k=7$, using the same features and in the same planes.
Comparing panels a and d, b and e, c and f, we see that the two classification methods give quite similar results. We had remarked, in Figure~\ref{fig:cl7__good}, on the fact that a few points, possibly located inside the bow shock, are classified as inner magnetosheath plasma, orange. We notice that the same happens in panels d, e, f.
One difference between the two methods is visible in panels b and e: the magnetosheath cluster associated with the North sector extends \textcolor{black}{a few points} further Southwards in panel e than in panel b. This is a minimal difference that can be explained by the fact that the two clusters map to very similar plasma, as remarked previously.
A rather more significant difference can be seen comparing panel c and f. In panel f, some plasma regions rather close to Earth and deep into the inner magnetosphere are classified as magnetosheath plasma (brown) rather than as inner magnetospheric plasma (green). In panel c, they are classified as the latter (green), a classification that appears more appropriate for the region, given its position.
\textcolor{black}{To compare the two classification methods quantitatively, we calculate the fraction of points which are classified in the same cluster with SOM plus K-means vs pure K-means classification. 92.15 \% of the points are classified in the same cluster, 92.74 \% if the two magnetosheath clusters just downstream the bow shock are considered the same. These percentages are calculated on the entire training dataset at time $t_0$ + 210 minutes, of which cuts are depicted in the panels of Figure~\ref{fig:kMeans_noSOM}. }
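Such an agreement fraction can be computed along the following lines; since cluster numbers are assigned arbitrarily by each method, the labels are first matched one-to-one via the Hungarian algorithm. The label arrays are hypothetical placeholders for the two classifications.
\begin{verbatim}
# Sketch of the agreement fraction between two unsupervised labelings.
import numpy as np
from scipy.optimize import linear_sum_assignment

def agreement(labels_a, labels_b, k=7):
    C = np.zeros((k, k), dtype=int)        # contingency matrix
    for a, b in zip(labels_a, labels_b):
        C[a, b] += 1
    row, col = linear_sum_assignment(-C)   # best one-to-one label matching
    return C[row, col].sum() / len(labels_a)

# frac = agreement(labels_som_kmeans, labels_pure_kmeans)
\end{verbatim}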
We therefore conclude that the classification of SOM nodes and ``simple" K-means classification globally agree. An advantage of using SOM with respect to K-means is that the former reduces mis-classification of a section of inner magnetospheric plasma, the region most challenging to classify correctly. Furthermore, SOM feature maps give a better representation of feature variability within each cluster \textcolor{black}{than K-means centroids. This representation can} be used to assess feature variability within the cluster. In K-means, only the feature values at the centroid (meaning, one value per class) are available.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig7.pdf}
\caption
{\small Unsupervised \textcolor{black}{K-means} classification of \textcolor{black}{trained} SOM nodes, with $k=7$ (panels a, b, c), and \textcolor{black}{pure} K-means classification, with $k=7$ (panels d, e, f). The feature set is F1, \textcolor{black}{the time is $t_0$ + 210 minutes}. \textcolor{black}{The training (T) dataset is depicted}. }
\label{fig:kMeans_noSOM}
\end{figure}
\subsection{On the choice of training features}
\label{sec:SOMFeature}
Up to now, we have used as features for SOM training the three components of the magnetic field and of the velocity and the logarithms of the plasma density, pressure, temperature. We label this feature set F1 in Table~\ref{tab:caseList}, where we list several other feature sets we experiment with. In this Section, we show classification results for different feature sets, listed in Table~\ref{tab:caseList}, and we aim at obtaining some insights into what constitutes a ``good" set of features for our classification purposes.
The SOM hyper-parameters are the same in all cases, $q= 10 \times 12$, $\sigma_0=1$, $\epsilon_0= 0.25$. In all cases, $k=7$. \textcolor{black}{The data used for the training are from $t_0$ + 210 minutes. In Figures~\ref{fig:SubOptimal},~\ref{fig:Brole1} and~\ref{fig:Brole2}, validation (V) datasets are depicted.}
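For reference, a minimal sketch of the training-and-clustering chain used throughout this Section is given below, written with scikit-learn and the MiniSom package; the use of MiniSom, the random placeholder data and the number of training iterations are assumptions of the example, not the exact implementation adopted here.
\begin{verbatim}
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from minisom import MiniSom

# placeholder for the F1 features (B, v components, log n, log pp, log T)
X = np.random.rand(5000, 9)

Xs = StandardScaler().fit_transform(X)
pcs = PCA(n_components=3).fit_transform(Xs)   # first three PCs

som = MiniSom(10, 12, pcs.shape[1], sigma=1.0, learning_rate=0.25)
som.train_random(pcs, 10000)                  # iteration count illustrative

# K-means on the trained node weights, k = 7
nodes = som.get_weights().reshape(-1, pcs.shape[1])
km = KMeans(n_clusters=7).fit(nodes)

# each point inherits the cluster of its best-matching node
winners = np.array([som.winner(p) for p in pcs])
labels = km.labels_[winners[:, 0] * 12 + winners[:, 1]]
\end{verbatim}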
In Figure~\ref{fig:SubOptimal}, panels a to c, we show sub-standard classification results obtained with non-optimal feature sets. In panel a, F1-NL (``Not-Logarithm") uses density, pressure and temperature, rather than their logarithms. In panel b, F2, we eliminate from the feature list the logarithm of the plasma density, i.e. the most relevant feature for the calculation of the PCs for F1. In panel c, F3, we do not use the Sun-Earth velocity.
We see that F1-NL groups together magnetosheath and solar wind plasma (probably the biggest possible classification error), and inner magnetospheric regions are not as clearly separated as in F1. F2 mixes inner magnetospheric and boundary layer data points \textcolor{black}{(green and cyan)}, and magnetosheath regions just downstream the bow shock and internal magnetosheath regions \textcolor{black}{(orange)}. With F3, some inner magnetospheric plasma is classified as magnetosheath plasma, \textcolor{black}{already at $t_0$ + 210 minutes.}
\textcolor{black}{When analysing satellite data, variables such as the Alfv\'en speed $v_A$, the Alfv\'en Mach number $M_A$, the plasma beta $\beta$ provide precious information on the state of the plasma. In Figure~\ref{fig:SubOptimal}, panels d to f, we show training set results for the F4, F5 and F6 feature sets, where we add to our ``usual" feature list, F1, the logarithm of $v_A$, $M_A$, $\beta$ respectively. Using as features the logarithm of variables, such as $n$, $pp$, $T$, $v_A$, $M_A$, $\beta$, which vary across orders of magnitude (see Figure~\ref{fig:mag_py0}), is one of the ``lessons learnt" from Figure~\ref{fig:SubOptimal}, panel a.
Comparing Figure~\ref{fig:SubOptimal}, panels d to f, with Figure~\ref{fig:cl7__good}, we see that introducing $log(v_A)$ in the feature list slightly alters classification results. What we called the boundary layer cluster in Figure~\ref{fig:cl7__good} does not, in F4, include points at the boundary between the lobes and the magnetosheath. Perhaps more relevant is the fact that the boundary layer and inner magnetospheric clusters (green and cyan) appear to be less clearly separated than in F1.
The classification obtained with F5 substantially agrees with F1. With F6, the boundary layer cluster is slightly modified with respect to F1.}
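For completeness, the derived quantities added in F4, F5 and F6 follow from the MHD fields through the standard definitions; the sketch below (SI units, base-10 logarithms and a purely protonic plasma are assumed) is only illustrative.
\begin{verbatim}
import numpy as np

mu0 = 4.0e-7 * np.pi       # vacuum permeability [H/m]
m_p = 1.6726e-27           # proton mass [kg]

def derived_features(B, n, p, v):
    # B [T], n [m^-3], p [Pa], v [m/s]: magnitudes of the simulated fields
    rho  = n * m_p                       # mass density (protons only)
    v_A  = B / np.sqrt(mu0 * rho)        # Alfven speed
    M_A  = v / v_A                       # Alfvenic Mach number
    beta = p / (B**2 / (2.0 * mu0))      # plasma beta
    return np.log10(v_A), np.log10(M_A), np.log10(beta)
\end{verbatim}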
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig8.pdf}
\caption
{\small \textcolor{black}{Validation plots at $t_0$ + 210 min} in the $y/R_E=0$ plane, from SOMs trained with feature sets F1-NL, F2, F3, \textcolor{black}{F4, F5, F6}. F1-NL uses density, pressure, temperature, rather than their logarithms. F2 does not include $log(n)$, F3 does not include $v_x$. \textcolor{black}{F4, F5, F6} add to the feature set F1 the logarithms of Alfv\'en speed $v_A$, the Alfv\'enic Mach number $M_A$ and the plasma beta $\beta$, respectively. }
\label{fig:SubOptimal}
\end{figure}
\textcolor{black}{In Table~\ref{tab:quanti}, second column (``S"), we report the percentage of data points classified in the same cluster as F1 for each of the feature sets of Table~\ref{tab:caseList}, for the validation dataset at $t_0$ + 210 minutes. In the third column (``M"), we consider clusters 1 and 4 as a single cluster: in the previous analysis, we remarked that clusters 1 and 4 (the two magnetosheath clusters just downstream the bow shock) map to the same kind of plasma. We take this into account when comparing classification results with F1.
The metrics depicted in Table~\ref{tab:quanti} cannot be used to assess the quality of the classification per se, since we are not comparing against ground truth, but merely against other classification experiments. However, they give us a quantitative measure of how much different classification experiments agree.}
\textcolor{black}{Comparing Figure~\ref{fig:SubOptimal} with the results in Table~\ref{tab:quanti} we see, as one could expect, that sub-standard feature sets (F1-NL, F2, F3) agree less with the F1 classification than F4, F5, F6. This is the case with F1-NL in particular, which exhibits the lowest percentage of similarly classified points with respect to F1. We see that the agreement is particularly good with F5, as already noticed in Figure~\ref{fig:SubOptimal}.}
\begin{table}
\small
\caption{\textcolor{black}{Percentage of data points classified in the same cluster as F1, for the different feature sets (second column, header ``S"). In the third column, header ``M", the two magnetosheath clusters just downstream the bow shock, 1 and 4, are considered one. The dataset used is the validation dataset, at time $t_0$ + 210 minutes. }}
\begin{tabular}{c c c}
\hline
\textbf{Case} & S & M \\
\hline
F1-TV & 80.71 & 85.72 \\
\hline
F1-NL & 59.83 & 61.47 \\
\hline
F2 & 84.69 & 84.71 \\
\hline
F3 & 87.85 & 89.01 \\
\hline
F4 & 82.70 & 83.01 \\
\hline
F5 & 93.07 & 94.42 \\
\hline
F6 & 91.02 & 92.36 \\
\hline
F7 & 83.38 & 84.77 \\
\hline
F8 & 91.55 & 91.78 \\
\hline
F9 & 66.05 & 75.67\\
\hline
F10 & 92.49 & 93.90 \\
\hline
\end{tabular}
\label{tab:quanti}
\end{table}
When discussing Table~\ref{tab:LogFeat_3comps}, we remarked on the seemingly negligible role that the magnetic field components appear to have in determining the first three PCs, both when their values are not clipped (``F1 feature set") and when they are (``\textcolor{black}{F8} feature set - B clipped"). Here, we investigate whether this is reflected in the classification results.
In Figure~\ref{fig:Brole1} we show the classification of validation data sets at time $t_0$ + 150, 210, 225 minutes for the \textcolor{black}{F7} feature set (panels a to c), which does not include the magnetic field, and for \textcolor{black}{F8} (panels d to f), where the magnetic field components are present but clipped as described in Section~\ref{sec:sim}.
Comparing Figure~\ref{fig:Brole1}, panels a to f, with Figure~\ref{fig:diff_times} \textcolor{black}{(the validation plot for F1)}, we see that the identified clusters are indeed rather similar, including the variation with time of the outer magnetosheath clusters, see the discussion of Figure~\ref{fig:diff_times}. The main difference with Figure~\ref{fig:diff_times} is the fact that, in the \textcolor{black}{F7} case, the boundary layer cluster does not include most of the data points at the boundary between the lobe and magnetosheath plasma \textcolor{black}{(this is reflected in the percentage of similarly classified points in Table~\ref{tab:quanti})}. The boundary layer cluster for \textcolor{black}{F8} instead corresponds quite well with F1.
As already observed in Figure~\ref{fig:diff_times}, inner magnetospheric plasma is the most prone to mis-classification in the validation test. In the case without magnetic field, \textcolor{black}{F7}, as also in F1, the mis-classified inner magnetospheric plasma is assigned to the inner magnetosheath cluster. In \textcolor{black}{F8}, it is assigned either to boundary layer plasma, at time $t_0$ + 150 minutes, or to one of the magnetosheath clusters just downstream the bow shock, at $t_0$ + 225 minutes.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig9.pdf}
\caption
{\small Classification of validation data sets: $y/R_E=0$ plane at $t_0$+ 150, 210, 225 minutes, for maps trained with the \textcolor{black}{F7 and F8} feature sets. In \textcolor{black}{F7}, $B_x$, $B_y$, $B_z$ are not used for the map training. In \textcolor{black}{F8}, $B_x$, $B_y$, $B_z$ are clipped as described in Section~\ref{sec:res}. }
\label{fig:Brole1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig10.pdf}
\caption
{\small Classification of validation data sets: $y/R_E=0$ plane at $t_0$ + 150, 210, 225 minutes, for maps trained with the \textcolor{black}{F9 and F10} feature sets. In F9 \textcolor{black}{and F10}, the magnitude of the magnetic field, instead of its components, is used. In F9, $|B|$ is clipped as described in Section~\ref{sec:res}. \textcolor{black}{In F10, the logarithm of the magnitude of the magnetic field is used.} }
\label{fig:Brole2}
\end{figure}
In Figure~\ref{fig:Brole2}, panels \textcolor{black}{a to c}, we plot the classification results for the validation data sets for a map trained with feature set F9, where instead of the three components of the magnetic field we use only its magnitude, clipped as described above. We see that the \textcolor{black}{green and blue} clusters correspond to the regions of highest magnetic field, closer to the Earth. \textcolor{black}{The green and blue} regions map to high-$|B|$ regions of the inner magnetosphere and lobes respectively. We notice that with this choice of features the inner magnetospheric plasma is consistently classified as such at different simulated times, contrary to what happened with the feature sets previously discussed. The remaining inner magnetospheric plasma is classified together with the current sheet in the \textcolor{black}{cyan} cluster (while in F1 the inner magnetosphere and the current sheet were clearly separated), which does not include plasma at the boundary between magnetosheath and lobes. Magnetosheath plasma just downstream the bow shock is now classified in a single cluster. \textcolor{black}{This classification is consistent with our knowledge of the magnetosphere, and is very robust to temporal variation. However, it differs significantly from the F1 classification, hence the low percentage of similarly classified points in Table~\ref{tab:quanti}.}
\textcolor{black}{F10, depicted in panels d to f, looks remarkably similar to F1 (see also Table~\ref{tab:quanti}), especially in the internal regions, including the mis-classification of some inner magnetospheric points as magnetosheath plasma at $t_0 + 225$ minutes. The three magnetosheath clusters vary, at the three different times depicted, with respect to F1. This behavior, together with the pattern of classification of magnetosheath plasma in F9, shows that the magnetic field is a feature of relevance in classification especially for magnetosheath regions. The F10 classification results show that the blue cluster in F9 originates from the clipping procedure. This somewhat artificial procedure is however beneficial for inner magnetospheric points, which are not mis-classified in that case.}
From this analysis, we learn important lessons on possible different outcomes of the classification procedure and on how to choose features for SOM training.
First of all, we can divide our feature sets into acceptable and sub-standard. Sub-standard feature sets are those, such as F1-NL and F2, that fail to separate plasma regions characterized by highly different plasma parameters.
Examining the feature list in F1-NL, the reason for this is obvious: as one can see at a glance from Figure~\ref{fig:mag_py0}, panels g to l, only a logarithmic representation allows one to appreciate how features that span orders of magnitude vary across magnetospheric regions. The ``lesson learned" here is to use the data representation that most naturally highlights differences in the training data.
In F2, we excluded from the feature list $log(n)$, which expresses a large percentage of the variance of the training set, with poor classification results: a good rule of thumb is to always include this kind of variable in the training set.
With the exception of F1-NL and F2, all feature sets produce classification results which are first of all quite similar, and generally reflect well our knowledge of the magnetosphere. Differences arise with the inclusion of extra variables, such as in Figure~\ref{fig:SubOptimal}, panels d to f, where we add $log(v_A)$, $log(M_A)$ and $log(\beta)$ to F1. All these quantities are derived from base simulation quantities and, while quite useful to the human scientist, do not seem to improve classification results here. In fact, occasionally they appear to degrade them, at least in panel \textcolor{black}{d}. These preliminary results therefore point in the direction of not including somewhat duplicated information in the training set. One might even argue that the algorithm is ``smart" enough to ``see" such derived variables as long as the underlying variables are given.
\textcolor{black}{Some} feature sets, F1 (Figure~\ref{fig:cl7__good} and~\ref{fig:diff_times}) and F9 (Figure~\ref{fig:Brole2}), are of particular interest. \textcolor{black}{Other feature sets, such as F5, F6, F10, albeit interesting in their own right, essentially reproduce the results of F1. Both F1 and F9} produce classification results which, albeit somewhat different, separate well-known magnetospheric regions. In F9, less information than in F1 is made available to the SOM: we use the magnetic field magnitude rather than the magnetic field components for the training. This results in two clusters, \textcolor{black}{green and blue}, that clearly correspond to high-$|B|$ points\textcolor{black}{, whose values have been clipped as described above}. This classification appears more robust to temporal variation than F1, perhaps because all three PCs (and not just the first two, as for F1) present well-defined multi-peaked distributions. This confirms the well-known fact that multi-peaked distributions of the input data are a very relevant factor in determining classification results.
We remark that these insights have to be further tested against different classification problems, and may be somewhat dependent on the classification procedure we chose in our work.
\conclusions
\label{sec:concl}
The growing amount of data produced by magnetospheric missions is amenable to the application of ML classification methods, which could help cluster the hundreds of gigabits of data produced every day by missions such as MMS into a small number of clusters characterized by similar plasma properties.~\cite{argall2020}, for example, argue that ML models could be used to analyze magnetospheric satellite measurements in steps: first, region classifiers would separate between macro-regions, as the model we propose here does. Then, specialized event classifiers would target local, region-specific, processes.
Most of the classification works focusing on the magnetosphere consist of supervised classification methods. In this paper, instead, we present an \textit{unsupervised} classification procedure for large scale magnetospheric regions based on~\cite{amaya2020visualizing}, where 14 years of ACE solar wind measurements are classified with a technique based on SOMs. We choose an unsupervised classification method to avoid relying on a labelled training set, which risks introducing the bias of the labelling scientists into the classification procedure.
\textcolor{black}{As a first step towards the application of this methodology to spacecraft data, we verify its performance on simulated magnetospheric data points obtained with the MHD code OpenGGCM-CTIM-RCM. We choose to start with simulated data since they offer several distinct advantages. First of all, we can for the moment bypass issues, such as instrument noise and instrument limitations, that are unavoidable with spacecraft data. Data analysis, de-noising and pre-processing are a fundamental component of ML activities. With simulations, we have access to data from a controlled environment that need minimal pre-processing, and allow us to focus on the ML algorithm for the time being. Furthermore, the time/space ambiguity that characterizes spacecraft data is not present in simulations, and it is relatively easy to qualitatively verify classification performance by plotting the classified data in the simulated space. Performance validation can be an issue for magnetospheric unsupervised models working on spacecraft data. A model such as ours, trained and validated against simulated data points, could be part of an array of tests against which unsupervised classifications of magnetospheric data could be benchmarked.\\
The code we are using to produce the simulation is MHD. This means that kinetic processes are not included in our work, and that variables available in observations, such as parallel and perpendicular temperatures and pressures, or moments separated by species, are not available to us at this stage. This is certainly a limitation of our current analysis. This limitation is somewhat mitigated by the fact that we are focusing on the classification of large scale regions. Future work, on kinetic simulations and spacecraft data, will assess the impact of including ``kinetic" variables among the classification features. \\}
We obtain classification results, e.g. Figure~\ref{fig:cl7__good} and Figure~\ref{fig:Brole2}, that match surprisingly well our knowledge of the terrestrial magnetosphere, accumulated in decades of observations and scientific investigation. The analysis of the SOM feature maps, Figure~\ref{fig:cl7_values__good}, shows that the SOM node values associated with the different features represent well the feature variability across the magnetosphere, at least for the features that contribute most to determine the principal components used for the SOM training. Roaming across the feature map, we get hints of the processes characterizing the different clusters, see the discussion on plasma compression and heating across the bow shock.
Our validation analysis in Section~\ref{sec:validation} shows that the classification procedure is quite robust to temporal evolution in the magnetosphere. In particular, consistent results are produced with opposite orientations of the $B_z$ IMF component, which has profound consequences for the magnetospheric configuration.
Since this work is intended as a starting point rather than as a concluded analysis, we report in detail on our exploration activities in terms of SOM hyper-parameters (Section~\ref{sec:SOMPars}) and feature sets (Section~\ref{sec:SOMFeature}). We hope that this work will constitute a useful reference for colleagues working on similar issues in the future. In Section~\ref{sec:SOMFeature}, in particular, we highlight our ``lessons learned" when exploring classification with different feature sets. They can be summarized as follows: a) the most efficient features are characterized by multi-peaked distributions; b) when feature values are spread over orders of magnitude, a logarithmic representation is preferable; c) a preliminary analysis of the percent variance expressed by each potential feature can convey useful information on feature selection; d) derived variables do not necessarily improve classification results; e) different choices of features can produce different but equally significant classification results.
\textcolor{black}{In this work, we have focused on the classification of large scale simulated regions. However, this is only one of the classification activities one may want to be able to perform on simulated, or observed, data. Other activities of interest may be the classification of meso-scale structures, such as dipolarizing flux bundles or reconnection exhausts. This seems to be within the purview of the method, assuming that an appropriate number of clusters is used, and that the simulations used to produce the data are resolved enough. To increase the chances of meaningful classification of meso-scale structure, one may consider applying a second round of unsupervised classification on the points classified in the same, large-scale cluster. Another activity of interest could be the identification of points of transition between domains. Such an activity appears challenging in the absence, among the features used for the clustering, of spatial and temporal derivatives. We purposefully refrained from using them among our training features, since we are aiming for a local classification model, that does not rely on higher resolution sampling either in space or time.}
Several points are left as future work. It should be investigated whether our classification procedure, while satisfactory at this stage, could be improved.
Possible avenues of improvement could be the use of a dimensionality reduction technique that does not rely on linear correlation between the features, or the use of dynamic~\citep{rougier2011dynamic} rather than static SOMs.
\textcolor{black}{As an example, in \citet{amaya2020visualizing}, more advanced pre-processing techniques were experimented with, which will most probably prove useful when we move to the more challenging environment of spacecraft observations (as opposed to simulations). Furthermore, \citet{amaya2020visualizing} employed windows of time in the classification, which we have not used in this work in favor of an ``instantaneous" approach. In future work, we intend to verify which approach gives better results.\\}
It should also be verified whether similar modifications reduce the mis-classification of inner magnetospheric points observed with a number of feature sets including F1, and whether they reduce the importance of, or outright eliminate the need for, looking for optimal sets of training features.
The natural next step of our work is the classification of spacecraft data. There, many more variables not included in an MHD description will be available. They will probably constitute both a challenge and an opportunity for unsupervised classification methods, and will allow us to attempt classification aimed at smaller scale structures, where such variables are expected to be essential. Such procedures will be aimed not at competing in accuracy with supervised classifications, but they will hopefully be pivotal in highlighting new processes.
\section{Introduction}
In psychotherapy assessment, the quality of a session is generally evaluated through the process of behavioral coding in which experts manually identify and annotate behaviors of the participants \cite{bakeman2000behavioral}. However, this procedure is time-consuming, which makes it resource-heavy in terms of human capital and therefore often unfeasible in most treatment contexts. In recent years, researchers have developed automated behavioral coding algorithms using speech and language features for several clinical domains such as addiction counseling \cite{xiao2016behavioral}, post-traumatic stress disorder (PTSD) care \cite{shiner2012automated} and autism diagnosis \cite{bone2016use}. The work in \cite{hirsch2018s} even presents an automated evaluation Motivational Interviewing (MI) \cite{miller2012motivational} system which avoids both manual annotation and transcription. Some studies extend to multimodal approaches which also take non-lexical characteristics into consideration \cite{ardulov2018multimodal, singla2018using, chen2019improving}.
Cognitive Behavioral Therapy (CBT) is evidence-based psychotherapy predicated on the cognitive model which involves shifts in the patient’s thinking and behavioral patterns~\cite{beck2011cognitive}. In a CBT session, the therapist guides individuals to identify goals and develop new skills to overcome obstacles that interfere with goal attainment. As a common type of talk therapy, CBT has been developed for many decades and has become an effective treatment for a wide range of mental health conditions \cite{hofmann2012efficacy}. Extending upon this strong evidence base, recent research has explored whether combining CBT with other evidence-based psychotherapies might potentiate treatment outcome. For example, studies indicate that adding MI as an adjunct to CBT may benefit patients by increasing motivation for and commitment to the intervention \cite{westra2006preparing, randall2017motivational}.
One of the early computational behavioral coding efforts for CBT is
found in \cite{flemotomos2018language}, which employed an end-to-end evaluation pipeline that overcomes the need for manual transcription and coding. This work formulated
the CBT session quality evaluation as a classification task and compared the performance of various lexical features.
In this paper, we develop a new automated approach to assess CBT session quality.
Specifically, we utilize MI data to extract utterance-level features due to the similarities between MI and CBT and propose a novel fusion strategy.
We experiment on both manual transcripts and automatically derived ones to show the superiority of the new fusion approach and the robustness of our automated evaluation system.
\section{Datasets}
The CBT data, with accompanying audio-recorded sessions, used in this work come from the Beck Community Initiative
\cite{creed2016implementation}, a large-scale public-academic partnership to implement CBT in community mental health settings.
The CBT quality is evaluated by the session-level behavioral codes based on Cognitive Therapy Rating Scale (CTRS) \cite{young1980cognitive}. Each session receives 11 codes scored on a 7 point Likert scale ranging from 0 (poor) to 6 (excellent) for each evaluated dimension.
We also compute the total CTRS by summing up the scores as an overall measurement of the quality of a session. Raters were doctoral-level experts who were required to demonstrate calibration prior to the coding process to prevent rater drift, which resulted in high inter-rater reliability for the CTRS total score (ICC = 0.84).
In this paper, we use 225 coded CBT sessions for experiments which include the 92 sessions used in \cite{flemotomos2018language} with the highest and lowest total CTRS and 133 additional sessions to balance the distribution. Each session considered is a dyadic conversation between a therapist and a patient. We manually transcribed the sessions including information about talk turns, speaker roles, and punctuation. The sessions were recorded at a 16kHz sampling rate
and their lengths range from 10 to 90 minutes. We binarized the CTRS codes by assigning codes greater than or equal to 4 as ``high" and less than 4 as ``low", since 4 is the primary anchor
indicating the skill is fully present, but still with room for improvement \cite{young1980cognitive}. The threshold indicative of CBT competence on the total CTRS is 40 \cite{vallis1986cognitive}. The descriptions and the label distributions of the codes are shown in Table \ref{tab:labels}.
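As a minimal illustration of this labelling (the data structure is hypothetical):
\begin{verbatim}
def binarize_ctrs(scores):
    # scores: dict mapping each of the 11 CTRS codes to its 0-6 rating
    labels = {c: ('high' if s >= 4 else 'low') for c, s in scores.items()}
    labels['total'] = 'high' if sum(scores.values()) >= 40 else 'low'
    return labels
\end{verbatim}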
\begin{table}[htb]
\caption{CBT behavior codes defined by the CTRS manual}
\label{tab:labels}
\centering
\resizebox{0.37\textwidth}{!}{\begin{tabular}{lll}
\toprule
Abbr. & CTRS Code & low/high \\
\midrule
ag & agenda & 131/94 \\
at & \begin{tabular}[l]{@{}l@{}}application of cognitive-\\ behavioral techniques\end{tabular} & 150/75 \\
co & collaboration & 111/114 \\
fb & feedback & 150/75 \\
gd & guided discovery & 146/79 \\
hw & homework & 165/60 \\
ip & interpersonal effectiveness & 47/178 \\
cb & \begin{tabular}[l]{@{}l@{}}focusing on key cognitions\\ and behaviors\end{tabular} & 122/103 \\
pt & pacing and efficient use of time & 135/90 \\
sc & strategy for change & 126/99 \\
un & understanding & 123/102 \\
\midrule
total & total score & 134/91 \\
\bottomrule
\end{tabular}}
\end{table}
\section{Approach}
Our end-to-end evaluation approach includes two stages. In the first stage, we took the session recordings as inputs and used a speech processing pipeline to substitute manual transcription. In the second stage, we
extracted the linguistic features from the therapist's transcripts to predict the binarized label of each code. The classification tasks are performed by a linear Support Vector Machine (SVM) with sample weights inversely proportional to their class frequencies.
\subsection{Speech Processing Pipeline}\label{sec:3-1}
To automatically transcribe the recorded sessions we adopted the speech pipeline described in \cite{martinez2019identifying} consisting of
Voice Activity Detection (VAD), diarization, Automatic Speech Recognition (ASR) and role assignment presented by the yellow box in Fig.~\ref{fig:pipe}. The diarization error rate (including VAD errors) and ASR word error rate for the transcribed sessions are 21.47\% and 44.01\%, respectively.
The role assignment module in \cite{martinez2019identifying} is trained to distinguish the therapist and patient in a counseling session and the annotation accuracy for the transcribed CBT sessions is 100\% (225/225).
\begin{figure}[htb]
\centering
\includegraphics[width=7.0cm]{speech-pipe.png}
\caption{Session Decoding Pipeline}
\label{fig:pipe}
\end{figure}
As shown in Fig.~\ref{fig:pipe}, the output of the speech pipeline in the yellow box is at the turn level without any punctuation. There might be multiple utterances within a turn, something which potentially
affects the quality of utterance-level lexical features, and the subsequent behavioral coding. Thus, we implemented an utterance segmentation module at the end of the pipeline. We made use of the word boundaries to split the text
whenever the pause between consecutive words is more than 2 seconds, and then segmented the transcripts into utterances. The package we applied for utterance segmentation is an open source tool called ``DeepSegment" \cite{deepsegment}. It employs a bi-LSTM layer followed by a Conditional Random Field (CRF) \cite{lafferty2001conditional}. The model is trained using the Tatoeba corpus \cite{ho2016tatoeba} and it achieves an F1 score of 0.7364 on the transcribed sessions.
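A minimal sketch of the pause-based splitting that precedes the segmenter is given below; the word/time-stamp structure of the ASR output and the DeepSegment call are assumptions of the example, and the exact package interface may differ.
\begin{verbatim}
def split_on_pauses(words, max_pause=2.0):
    # words: list of (token, start_time, end_time) tuples for one turn;
    # a new chunk starts whenever the silence between consecutive
    # words exceeds max_pause seconds
    chunks, current = [], [words[0][0]]
    for prev, cur in zip(words, words[1:]):
        if cur[1] - prev[2] > max_pause:
            chunks.append(' '.join(current))
            current = []
        current.append(cur[0])
    chunks.append(' '.join(current))
    return chunks

# each chunk is then passed to the sentence segmenter, e.g. (API assumed):
# from deepsegment import DeepSegment
# utterances = [u for c in chunks for u in DeepSegment('en').segment(c)]
\end{verbatim}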
\subsection{Baseline Mid-level Features}\label{sec:3-2}
We extract a number of different mid-level features from the transcribed text. The first set includes the term frequency - inverse document frequency (tf-idf) \cite{dillon1983introduction} transform of n-grams, while the second focuses on estimated Dialog Acts (DA) \cite{stolcke2000dialogue}. The tf-idfs and DAs were reported to achieve the best overall performance among the interpretable features in \cite{flemotomos2018language}. While these aforementioned features are based on general spoken dialog characteristics, the third feature set considered here is inspired by utterance level codes drawn from MI. Under the hypothesis that there are shared characteristics between MI and CBT (based on prior work that has reported using MI techniques as a way to facilitate CBT treatments \cite{westra2006preparing, randall2017motivational}), we experimentally investigate the usefulness of MI based ``features" in contributing to the quality assessment of CBT.
We extract all these features for the therapist side of the conversation only, because, as reported in \cite{flemotomos2018language}, they perform robustly
for the task of behavioral coding, and further fusing features of the two roles (i.e., therapist and patient) does not lead to substantial improvements.
We compute the tf-idfs over unigrams.
We additionally tag each utterance in a CBT session by one DA from the 7-class scheme described in Table~\ref{tab:DA_MC}. We used a linear chain CRF model trained on the Switchboard-DAMSL dataset \cite{can2015dialog}, which achieves 84.78\% accuracy on the in-domain test set.
For the DA-based feature representation
we 1) count the utterances coded with each DA and normalize the counts with respect to the total number of utterances in each session; 2) count the words in the utterances tagged by each DA and normalize the count with respect to the total number of words in each session.
Concatenating the two sets of normalized features, we get a DA feature set of 7 $\times$ 2 = 14 dimensions that outperforms the individual use of either of those sets.
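The resulting 14-dimensional representation can be sketched as follows, assuming each therapist utterance is available as a (DA label, token list) pair; the names are illustrative.
\begin{verbatim}
import numpy as np

DA_CLASSES = ['Question', 'Statement', 'Agreement', 'Appreciation',
              'Incomplete', 'Backchannel', 'Other']

def da_features(utterances):
    # utterances: list of (da_label, tokens) for one session (therapist)
    utt_counts  = np.zeros(len(DA_CLASSES))
    word_counts = np.zeros(len(DA_CLASSES))
    for label, tokens in utterances:
        i = DA_CLASSES.index(label)
        utt_counts[i]  += 1
        word_counts[i] += len(tokens)
    utt_frac  = utt_counts  / max(utt_counts.sum(), 1)
    word_frac = word_counts / max(word_counts.sum(), 1)
    return np.concatenate([utt_frac, word_frac])   # 7 + 7 = 14 features
\end{verbatim}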
To capture MI-like approaches used within a CBT session, we use specific utterance-level representations that describe MI relevant behaviors in the conversation. In particular, we employ the set of Motivational Interviewing Skills \cite{miller2003manual} codes
described in \cite{xiao2016behavioral} and summarized in Table~\ref{tab:DA_MC}. We cluster `RES' and `REC' into one class `RE' since they are domain-specific in distribution \cite{chen2020label} and easily confused with each other.
We extract the MI relevant behavioral codes (MC, henceforth) the same way as in \cite{chen2019improving}, which uses a neural architecture stacking an embedding layer, a bi-LSTM with attention layer and a dense layer. We train the model on the MI corpus from \cite{atkins2014scaling, baer2009agency} with train/validation/test split equal to 3/1/1, and the classification accuracy on the MI test set is 81.1\%. The final MC-based feature representation is the same as the DA-based one described previously. As observed in Table~\ref{tab:DA_MC}, the main difference between the DAs and the MCs is that the former focus on the function of the dialog structure, while the latter emphasize the critical and causal elements deemed useful in psychotherapy.
The tf-idfs are computed with regard to the occurrence of words in the sessions, while the DAs and MCs are both annotations extracted at the utterance level.
On this basis, we group the basic features into word-level features (tf-idfs) and utterance-level features (DAs, MCs).
\begin{table}[htb]
\centering
\caption{Details of DA and MC}
\label{tab:DA_MC}
\resizebox{0.35\textwidth}{!}{\begin{tabular}{|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}Coding\\ Schemes\end{tabular} & Codes \\ \hline
DA & \begin{tabular}[c]{@{}c@{}}Question, Statement, Agreement, Other\\ Appreciation, Incomplete, Backchannel\end{tabular} \\ \hline
MC & \begin{tabular}[c]{@{}c@{}}Facility (FA), Giving Information (GI),\\ Reflection (RE), Closed Question (QUC),\\ Open Question (QUO), MI Adherent (MIA), \\ MI Non-Adherent (MIN)\end{tabular} \\ \hline
\end{tabular}}
\end{table}
\section{Feature Fusion Strategies}
In this section, we discuss two feature fusion methods for combining the word-level and the utterance-level features.
\subsection{Fusion by Concatenation}
The first fusion approach is straightforward, namely concatenation of the different feature sets. The hypothesis here is that the fused feature sets are complementary to each other so that they jointly carry richer information.
Herein we combine the word-level feature tf-idfs with each of the utterance-level features (DAs, MCs) and denote the fused feature sets as tf-idfs + DAs and tf-idfs + MCs, respectively.
\subsection{Augmenting Words with Utterance Tags}
When we compute word-level features like tf-idfs and bag of words, contextual information is ignored.
For example, the word ``homework” (an important element within CBT) in a question may denote that the therapist is checking if the patient has completed the given assignment, while in a reflection it might imply that the therapist is describing/confirming the assignment to/with the patient. The distribution of (just the) word ``homework" helps us evaluate how well a therapist incorporates the use of homework relevant to CBT. However, to incorporate the context in which they are used, we propose a fusion strategy of augmenting words with utterance level information.
\begin{figure}[htb]
\centering
\includegraphics[width=7.1cm]{augment-intro.png}
\caption{Word Augmentation}
\label{fig:segment}
\end{figure}
We show an example of the word augmentation we propose using MCs in Fig.~\ref{fig:segment}. We first tag the therapist's utterances by the model trained in \Cref{sec:3-2} and then pad the words with the label of the utterance they belong to. In Fig.~\ref{fig:segment} the augmented tokens ``homework$|$QUC" and ``homework$|$RE" are
viewed as different words for further analysis. Finally, we extract the tf-idfs based on the augmented words of the therapist to obtain the fused features.
Similar to the previously-mentioned feature concatenation method, we fuse the augmented tf-idfs with each of the DAs and MCs
and
denote the fused feature sets as DA-tf-idfs and MC-tf-idfs, respectively.
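A minimal sketch of this augmentation with scikit-learn follows; the simple whitespace tokenization and the variable names are assumptions of the example.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer

def augment(utterances):
    # utterances: list of (tag, text) for the therapist side of one session,
    # where tag is the predicted DA or MC label; returns one document
    # whose tokens are word|tag pairs
    return ' '.join(f'{w}|{tag}' for tag, text in utterances
                    for w in text.lower().split())

# docs = [augment(session) for session in sessions]
vectorizer = TfidfVectorizer(max_df=0.95, min_df=0.05,
                             token_pattern=r'\S+')  # keep word|tag tokens whole
# X = vectorizer.fit_transform(docs)
\end{verbatim}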
\section{Experiments and Results}
In this section, we describe experiments on both the manual and the automatically derived transcripts of the CBT sessions.
We compute the tf-idfs, DA-tf-idfs and MC-tf-idfs using the TfidfVectorizer from the scikit-learn Python module \cite{pedregosa2011scikit}. We set the parameters max\_df=0.95 and min\_df=0.05 to ignore terms that appear in more than 95\% or less than 5\% of the documents
and select the K best features based on cross-validation on the total CTRS using a univariate F-test. All the feature sets are z-normalized before being fed into the linear SVM classifier. A 5-fold cross-validation is conducted to report the F1 score of each CTRS code and the total CTRS. The F1 scores are computed according to the total number of
true and false positives over the folds \cite{forman2010apples}.
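For reference, a hedged sketch of the corresponding scikit-learn chain is given below; the number of selected features and the variable names are placeholders, and the features are assumed to be in dense form.
\begin{verbatim}
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

clf = make_pipeline(
    SelectKBest(f_classif, k=100),       # K tuned by CV on the total CTRS
    StandardScaler(),                    # z-normalization of the features
    LinearSVC(class_weight='balanced'))  # weights inverse to class frequency

# X: session-level feature matrix, y: binarized labels of one CTRS code
# y_pred = cross_val_predict(clf, X, y, cv=5)
# print(f1_score(y, y_pred, pos_label='high'))
\end{verbatim}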
\subsection{Results on Manual Transcripts}
The results
of the classification task on the manual transcripts are presented in Table \ref{tab:manual}.
From the reported results we find that the code `ip' (interpersonal effectiveness) -- which has the most imbalanced label distribution -- always has the lowest F1 score. Among the basic feature sets, the tf-idfs achieve a
substantially better performance compared to either DAs or MCs, which indicates that these utterance-level features cannot fully capture CBT-relevant information contained in the word-level features.
Next we look at the direct fusion results of word level and utterance level features. By comparing the results of the tf-idfs with tf-idfs + DAs and tf-idfs + MCs we conclude that directly concatenating the tf-idfs with utterance-level features does not lead to substantial improvements,
confirming a similar conclusion drawn in \cite{flemotomos2018language}. Finally, we consider the proposed alternative fusion strategy.
The performance of the DA-tf-idfs and MC-tf-idfs demonstrates that applying the new proposed fusion strategy to augment the words with the utterance tags, by either DAs or MCs, results in a better CBT relevant code prediction performance.
Especially the MC-tf-idfs -- which yield the best results among all the features sets -- significantly improve the F1 score of the total CTRS and averaged F1 score over tf-idfs (with $p$-value $<$ 0.05 based on the combined 5$\times$2cv F test \cite{alpaydm1999combined}).
It is interesting to point out that the MCs always lead to better performance compared to DAs, no matter whether we try to predict the CTRS codes by the basic feature set or after fusing with the tf-idfs.
This indicates that the behavioral codes defined in MI might exploit more useful therapy-relevant information, by encoding not only structural characteristics of the conversation, but also more psychotherapy-based cues. This also underscores the potential for transfer learning between MI and CBT (and perhaps other domains); some initial insights along these lines are provided in \cite{Gibson2019Multi-labelMulti-taskDeepLearning}.
\begin{table}[htb]
\caption{\label{tab:manual} F1 scores of the tasks on the manual transcripts.}
\centering
\resizebox{0.44\textwidth}{!}{\begin{tabular}{c|ccccccc}
\hline
& tf-idfs & DAs & MCs & \begin{tabular}[c]{@{}c@{}}tf-idfs\\ +DAs\end{tabular} & \begin{tabular}[c]{@{}c@{}}tf-idfs\\ +MCs\end{tabular} & \begin{tabular}[c]{@{}c@{}}DA-\\ tf-idfs\end{tabular} & \begin{tabular}[c]{@{}c@{}}MC-\\ tf-idfs\end{tabular} \\
\hline
ag & 0.76 & 0.60 & 0.65 & 0.75 & 0.76 & 0.77 & 0.80 \\
at & 0.70 & 0.60 & 0.61 & 0.70 & 0.69 & 0.71 & 0.73 \\
co & 0.75 & 0.63 & 0.64 & 0.74 & 0.75 & 0.75 & 0.80 \\
fb & 0.75 & 0.56 & 0.63 & 0.75 & 0.76 & 0.74 & 0.76 \\
gd & 0.74 & 0.58 & 0.61 & 0.72 & 0.77 & 0.76 & 0.73 \\
hw & 0.66 & 0.55 & 0.61 & 0.66 & 0.70 & 0.70 & 0.68 \\
ip & 0.54 & 0.51 & 0.55 & 0.57 & 0.53 & 0.57 & 0.58 \\
cb & 0.73 & 0.55 & 0.69 & 0.75 & 0.75 & 0.77 & 0.77 \\
pt & 0.69 & 0.54 & 0.66 & 0.69 & 0.73 & 0.74 & 0.75 \\
sc & 0.73 & 0.60 & 0.61 & 0.74 & 0.76 & 0.76 & 0.76 \\
un & 0.74 & 0.56 & 0.58 & 0.74 & 0.76 & 0.73 & 0.75 \\
\hline
avg & 0.71 & 0.57 & 0.62 & 0.71 & 0.72 & 0.73 & \textbf{0.74}
\\
tot & 0.78 & 0.62 & 0.67 & 0.77 & 0.78 & 0.81 & \textbf{0.83}
\\
\hline
\end{tabular}}
\end{table}
\begin{table}[htb]
\caption{\label{tab:automated} F1 scores of the tasks on the automatically derived transcripts from the speech pipeline.}
\centering
\resizebox{0.42\textwidth}{!}{
\begin{tabular}{c|c@{\hskip 0.8cm}c@{\hskip 1.0cm}c@{\hskip 0.7cm}cc}
\hline
& tf-idfs & DAs & MCs & DA-tf-idfs & MC-tf-idfs\\
\hline
ag & 0.75 & 0.62 & 0.64 & 0.77 & 0.78 \\
at & 0.69 & 0.59 & 0.61 & 0.73 & 0.73 \\
co & 0.73 & 0.61 & 0.66 & 0.74 & 0.75 \\
fb & 0.75 & 0.58 & 0.64 & 0.75 & 0.77 \\
gd & 0.68 & 0.57 & 0.60 & 0.72 & 0.70 \\
hw & 0.65 & 0.52 & 0.63 & 0.73 & 0.69 \\
ip & 0.55 & 0.53 & 0.53 & 0.52 & 0.53 \\
cb & 0.71 & 0.56 & 0.63 & 0.71 & 0.75 \\
pt & 0.66 & 0.46 & 0.64 & 0.68 & 0.72 \\
sc & 0.72 & 0.62 & 0.63 & 0.73 & 0.76 \\
un & 0.74 & 0.51 & 0.56 & 0.72 & 0.73 \\
\hline
avg & 0.69 & 0.56 & 0.62 & 0.71 & \textbf{0.72}
\\
tot & 0.76 & 0.60 & 0.66 & 0.77 & \textbf{0.80}
\\
\hline
\end{tabular}}
\end{table}
\subsection{Results of Automatically-derived Transcripts}
We next consider the end-to-end automated evaluation of CBT sessions using the transcripts generated by the speech processing pipeline described in \Cref{sec:3-1}. We perform the prediction tasks on the basic feature sets and the fused features after word augmentation.
The experimental results are given in Table \ref{tab:automated}. Comparing the results with the ones
in Table \ref{tab:manual}, we
observe that while the performance of the code prediction using the automatically derived transcripts is degraded compared to evaluating on manually-derived transcripts, the drop is relatively small.
This modest performance degradation underscores both the robustness of this end-to-end speech processing system, and the room for further improvements. Again the tf-idf features achieve significantly better F1 scores than the DAs and MCs ($p<0.01$) while DAs lead to the worst performance among the basic feature sets. The DA-tf-idfs and MC-tf-idfs both outperform the tf-idfs, which is consistent with the results of the manual transcripts (Table \ref{tab:manual}). The MC-tf-idfs achieve the best overall metrics and F1 scores for the majority of the CTRS codes.
One operational challenge for utterance level processing that we often face while dealing with rich spoken interactions such as seen during therapy is the presence of multiple utterances per talk turn. This led us to investigate the role of using turn level utterance segmentation. To
demonstrate the effect of incorporating an utterance segmentation
module, we experiment on the end-to-end evaluation tasks by removing this
component from the pipeline. The comparison between the overall performances with and without the utterance segmentation is presented in Fig.~\ref{fig:comparison}. Since whether we segment the turn into sentences or not does not affect the
output when we use the tf-idfs individually, we show the performance for the other four sets studied in Tables \ref{tab:manual} and \ref{tab:automated}. The results indicate that, for all the feature sets,
removing the segmentation module leads to worse prediction outcomes. This confirms our hypothesis that multi-utterance turns need to be appropriately handled when we are employing utterance-specific representations such as DAs and MCs, in this study.
\begin{figure}[htb]
\centering
\begin{minipage}[b]{0.4\linewidth}
\centering
\centerline{\includegraphics[width=4cm]{DAs.png}}
\centerline{\footnotesize (a) F1 scores for the DAs}\medskip
\end{minipage}
\hspace*{2em}
\begin{minipage}[b]{0.4\linewidth}
\centering
\centerline{\includegraphics[width=4cm]{MCs.png}}
\centerline{\footnotesize (b) F1 scores for the MCs}\medskip
\end{minipage}
\begin{minipage}[b]{0.4\linewidth}
\centering
\centerline{\includegraphics[width=4cm]{DA-tf-idfs.png}}
\centerline{\footnotesize (c) F1 scores for the DA-tf-idfs}\medskip
\end{minipage}
\hspace*{2em}
\begin{minipage}[b]{0.4\linewidth}
\centering
\centerline{\includegraphics[width=4cm]{MC-tf-idfs.png}}
\centerline{\footnotesize (d) F1 scores for the MC-tf-idfs}\medskip
\end{minipage}
\caption{Comparison of the tasks performed with and without the utterance segmentation for different feature sets.}
\label{fig:comparison}
\end{figure}
\section{Conclusions and Future Work}
In this paper, we employed an end-to-end approach to assess CBT psychotherapy sessions automatically without manual transcription and annotation.
The overall CBT session quality assessment was formulated as a binary classification task, for each of the 11 target behavioral codes, and was performed using word-level and utterance-level linguistic feature sets and their fused combinations.
In particular, inspired by the commonality in certain elements of the therapy process between MI and CBT, we
introduce utterance-level MI codes as one of the feature sets. A new feature fusion strategy was proposed where we augmented the words of an utterance with an utterance-level tag.
We then applied a tf-idf transformation on those augmented tokens. The experimental results showed that our end-to-end automated approach was robust
and the final performance was comparable to using manual transcripts.
The best performance was achieved by the fused features of the tf-idfs and MI codes obtained with the new fusion strategy. Additionally, we confirmed the importance of
including an utterance segmentation module into the pipeline.
In the future, we will explore the importance of each code in DAs and MCs for predicting the CBT session quality and explore in further detail transfer learning between MI and CBT domains.
\section{Acknowledgements}
This work was supported by the NIH.
\bibliographystyle{IEEEtran}
\section{Introduction}
Wide and deep multicolour surveys are useful tools to investigate in
detail the processes of galaxy formation and evolution, especially
beyond the spectroscopic capabilities
of current instrumentation. One of the main aims of the deep
multicolour surveys is to provide a clear picture of the processes
involved in the mass assembly and star formation of galaxies across
the cosmic time. Galaxy observables, e.g. luminosity and mass
functions, two point correlation function, are typically compared with
current renditions of semi-analytical models in hierarchical CDM
scenarios, in order to derive the key ingredients related to the
physics of galaxy formation
(\cite{croton06,menci06,bower06,nagamine,ff07,dlb07,somerville08,keres,dekel}).
The observed properties of the galaxy population, however, are mainly
affected by the limited statistics. Deep pencil beam surveys, such as
GOODS (\cite{goods}) or HUDF (\cite{udf}), have been carried out on
relatively small sky areas and are subject to the cosmic variance
effect. Larger surveys like COSMOS which extends over a 2 deg$^2$ area
are indeed shallower and limit the knowledge of the faint galaxy
population at intermediate and high redshifts.
To probe the statistical properties of the faint galaxy population
reducing the biases due to the cosmic variance, a major effort should
be performed over large areas with efficient multicolour imagers at 8m
class telescopes. This is especially true in the near-UV band (hereafter UV)
where instrumentation is in general less efficient but where it is possible
to extract information on the star formation activity and dust
absorption present in distant galaxies.
In this context we are exploiting the unique power of the Large
Binocular Camera (LBC) installed at the prime focus of the Large
Binocular Telescope (LBT,
\cite{pedik,speziali,ragazzoni06,hill,giallongo}) to reach faint
magnitude limits in the U band ($\lambda\sim 3600$\AA) over areas of
several hundreds of sq. arcmin.
Long LBC observations in the UV band are of comparable depth to those
of the HDFs (although, obviously, the image quality will be
poorer). Even a single pointing with LBC produces a field two
orders of magnitude larger than that of the combined HDF-N and
HDF-S. This is very important because the transverse extent of the
HDFs corresponds to about 1 Mpc at $z\sim 0.5-2$ where Dark Matter
clustering is still important.
The goal of this paper is to provide UV (360nm) galaxy number counts
down to the faintest magnitude limits available from ground based
observations. The comparison of normalization and shape of the
observed counts with those predicted by theoretical hierarchical models
can help to highlight critical issues in the description of galaxy
formation and evolution like dust extinction and the formation of
dwarf galaxies.
Throughout the paper we adopt the Vega magnitude system
($U_{Vega}=U_{AB}-0.86$) and we refer to differential number counts
simply as ``number counts'', unless otherwise stated.
\section{The Data}
The deep UV observations described here have been carried out with the
Large Binocular Camera (LBC, \cite{giallongo}). LBC is a double imager
installed at the prime foci stations of the 8.4m telescopes LBT (Large
Binocular Telescope, \cite{hill}). Each LBT telescope unit are
equipped with similar prime focus cameras. The blue channel (LBC-Blue)
is optimized for imaging in the UV-B bands and the red channel
(LBC-Red) for imaging in the VRIZY bands. The unvignetted FoV of each
camera is 27 arcminutes in diameter, and the detector area is
equivalent to a $23\times 23$ arcmin$^2$ field, covered by four 4K by
2K chips of pixel scale 0.225 arcsec. Because the mirrors of both
channels are mounted on the same pointing system, a given target can
be observed simultaneously over a wide wavelength range, improving the
operation efficiency. Extensive description of the twin LBC instrument
can be found in \cite{giallongo,ragazzoni06,pedik,speziali08}.
We have used a deep
U-BESSEL image of 3 hours, acquired in normal seeing conditions
(FWHM=1 arcsec) during the commissioning of the LBC-Blue camera, to derive
faint UV galaxy counts in a 478.2 $arcmin^2$ sky area down to
U(Vega)$=26.5$, in the Q0933+28 field (\cite{steidel03}).
The data have been reduced using the LBC pipeline described in
detail in \cite{giallongo}, applying the standard debias, flat-fielding
and stacking procedures to derive the coadded image. The flux calibration
of the U-BESSEL image has been
derived through observations of photometric standards from the fields
SA98 and SA113 (\cite{landolt92}) and the photometric fields
of \cite{galadi00}, as described in detail in \cite{giallongo}.
The precision of the zero point calibration in the U-BESSEL filter
is typically of the order of 0.03 mag at 68\% confidence level.
A correction to the photometric zero point of 0.11 mag due to the
Galactic extinction has been applied to the final coadd U-BESSEL image.
The Q0933+28 field was also imaged in the SDT-Uspec\footnote{this is an
interference filter with a wavelength range similar to that of the U-BESSEL filter, but
with a peak transmission $\sim 30\%$ higher, as
described in Fig.2 of \cite{giallongo}.} filter of LBC for
an additional hour in the first quarter of 2007, under normal seeing
conditions (1.1 arcsec). These images are reduced and coadded in the same
way as the U-BESSEL ones, except for the photometric calibration procedure,
that is carried out using a spectrophotometric standard star of \cite{oke}.
The precision of the zero point calibration in the SDT-Uspec filter
is typically of the order of 0.05 mag at 68\% c.l.
A Galactic extinction correction of 0.13 has been used for this image.
To push the galaxy number counts deeper in
the UV band, we sum the image obtained in the U-BESSEL filter (with
an exposure time of 3 hours) with this new one, after rescaling the two
images to the same zero point. We have verified that the effective
wavelengths of these two filters are the same, the only difference
being the higher transmission efficiency (1.5 times, after integrating its
efficiency curve from $\lambda$=3000 to 4000 \AA) of the SDT-Uspec
filter compared to the U-BESSEL one. The colour term between these two
filters is 0.01, thus we neglect it when summing up the two
images. The resulting image goes $\sim$0.5 mag deeper than the original 3
hours with U-BESSEL filter, given the higher image quality of the new
SDT-Uspec image due to the general improvement of the telescope-instrument
system, in particular the reduction of scattered light from the telescope and
dome environment after the first run of the LBC-Blue commissioning.
We use this final coadded image to improve the
magnitude limit in the UV band and extend the number counts in this
band to faint fluxes.
To decrease the effects of cosmic variance in the number counts at
$U\sim 20$ we have used 3 additional LBC fields with shallower magnitude limits
$U\le 25$ but much larger area (892 $arcmin^2$) in the Subaru XMM Deep
Survey (SXDS, \cite{sxds}) region. These images have been reduced and
calibrated as described above for the Q0933+28 field.
The FWHM of these images (SXDS1, SXDS2, SXDS3) is higher (1.2, 1.25, 1.4
arcsec) than the one in the Q0933+28 field and the exposure
time per LBC pointing is 1.0, 1.5, and 1.5 hours, respectively,
in the U-BESSEL filter.
These three LBC images, combined with the deep pointing in the Q0933+28 field,
are then used to derive wide and deep galaxy number counts in the U band,
from $U=19.5$ to 25.0 over a FoV of 1370 $arcmin^2$ (0.38 sq. deg.)
and to $U=27.0$ for a sub-sample of $\sim$ 480 $arcmin^2$.
\section{The number counts in the U band}
\subsection{Deep U band galaxy number counts}
We computed galaxy number counts using the SExtractor package
(\cite{sex}). For objects with area greater than that
corresponding to a circular aperture of radius equal to the FWHM, we
used the ``best'' photometry (Kron magnitude or corrected
isophotal magnitude if
the galaxy is severely blended with surrounding objects) provided by
SExtractor. For smaller sources we computed magnitudes in circular
apertures with diameter equal to 2 times the FWHM, and corrected them
with an aperture correction we derived using relatively bright stars
in the field. This allows us to avoid the well known underestimate of
the flux of faint galaxies provided by the isophotal method. To
isolate the few stars from the numerous faint galaxies in this field,
we relied on the class\_star classifier provided by SExtractor. It is
known that the morphological star/galaxy classifier of SExtractor is
not reliable for faint objects, but the contamination from stars at
faint U band fluxes and at high galactic latitude is very limited
(1\%), according to the model of \cite{bs1980}. At brighter magnitudes,
$U\sim 20$, the contamination from stars cannot be neglected, but at
high S/N ratio the morphological classifier of SExtractor is
robust. Moreover, the agreement between the LBC number counts at $U\le 22$
with those of SDSS and other large area surveys (e.g., VVDS) indicates
that the contamination from galactic stars is reduced also at bright U band
magnitudes.
We search for an optimal configuration of the detection parameters of
SExtractor in order to maximize the completeness at faint magnitude
limits and to reduce the number of spurious sources. The two
parameters regulating the depth, completeness and reliability of
the photometric catalog are the threshold (relative to the RMS of the
image) adopted for detection of objects and the minimum area of
connected pixels above this threshold. We used the negative image
technique, described in \cite{dickinson2004,yw04,bouwens07}, as an
estimate of the reliability of the catalog. Using SExtractor, we
produce the ``-OBJECTS'' image, i.e. the image with the detected
objects subtracted, then we compute the negative of this image and run
SExtractor with the same detection parameters used for the positive
image. The sources detected on the negative image give an estimate of
the contamination from spurious sources, provided that the noise
statistic is symmetric (towards positive and negative pixel values) in
the image.
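The negative-image test can be illustrated with the following minimal sketch, which uses the Python package sep as a stand-in for SExtractor; the file name is hypothetical, and the filtering and deblending details of the two codes differ, so the numbers are only indicative.
\begin{verbatim}
import numpy as np
import sep
from astropy.io import fits

data = fits.getdata('q0933_u.fits').astype(np.float32)   # hypothetical file name
bkg = sep.Background(data)
sub = data - bkg                                          # background-subtracted frame
rms = bkg.globalrms

pos = sep.extract( sub, 0.7, err=rms, minarea=8)          # detections on the image
neg = sep.extract(-sub, 0.7, err=rms, minarea=8)          # detections on its negative

print(f"spurious fraction estimate: {len(neg)/len(pos):.3f}")
\end{verbatim}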
After several tests with different parameter sets, the best-effort
combinations are thresh=0.7, area=8 pixels for the Q0933+28
field and thresh=0.8, area=9 for the SXDS fields (where the seeing is slightly
worse), which minimize the
number of spurious sources (measured in the negative image) while
maintaining the number counts and completeness of real sources
relatively high at faint magnitude limits ($U\sim 26.5$). In the
Q0933+28 field the contamination from spurious sources is $\sim 2.2\%$
at $U=26.5$, and remains less than $2.5\%$ till $U=27.0$
(corresponding to U(AB)=27.9).
The results on the galaxy number count analysis can be affected by
blending of galaxies when deep fields are investigated. In SExtractor
the two parameters affecting the deblending of sources are the {\sc
DEBLEND\_NTHRESH} and {\sc DEBLEND\_MINCONT} parameters. We produce
different catalogs using various combinations of these two parameters
for each LBC image, and we find that the raw number counts are
not sensitive to these parameters, since the number of galaxies per
magnitude bin and per square degree varies by $\Delta Log N=0.02$ at
$U=24-25$ and $\Delta Log N=0.04$ at $U=27$, our faintest bin in the
galaxy number counts. We adopt {\sc DEBLEND\_NTHRESH=32} and {\sc
DEBLEND\_MINCONT=0.002} for all the fields described here, based on
visual inspection of the reliability of deblended sources on the
Q0933+28 field.
Resulting raw counts are shown in Fig.\ref{fig:lognsu} where a clear
decrease is apparent for $U(Vega)>26.4$. The typical photometric
error at $U\sim 27$ is $\sigma=0.4$ magnitudes, and the total
integrated number counts reach $\sim$100 galaxies per $arcmin^2$. An
estimate of the completeness level should be performed in order to
evaluate the amount of correction to the raw counts at the faint
limits. This has been evaluated by adding to the real image 1000
simulated galaxies per 0.25 magnitude bin in the magnitude
interval $U(Vega)=24-28$, using the standard ``artdata'' package in
IRAF. We included disk galaxies with half-light radii of 0.2-0.4 arcsec,
convolved with the PSF of stellar objects in the field. We then
run SExtractor on this new image using the same detection parameters
described above, and we studied how effective SExtractor is in
recovering the simulated galaxies as a function of magnitude. The
resulting sizes of the simulated galaxies are typical of real galaxies
in the magnitude interval $U(Vega)=24-25$, so the completeness we have
computed at $U=26-27$ can be considered a robust estimate. We have
verified using different datasets, i.e. \cite{windhorst} and
GOODS-South \cite{music},
that, at $B\ge 25$, the half light radius of these galaxies is always
between 0.2 and 0.4 arcsec. We chose not to use real sources to
simulate the completeness of the images mainly for two reasons, both
related to the fact that the half-light radii of galaxies decrease with
magnitude, fainter galaxies being smaller. If we use bright ($U\le 24$), high
signal-to-noise objects, their size, once dimmed to $U\sim 26-27$,
can be overestimated with respect to
their actual size, artificially enhancing the completeness correction
at fainter magnitudes. If we use real faint sources as input to our
simulations, the signal-to-noise ratio of these galaxies is
very low, enhancing the completeness correction in this case as well. We
have adopted instead a robust and realistic assumption, namely to
simulate galaxies with half-light radii between 0.2 and 0.4 arcsec,
which is the range observed at the faint magnitude limits where
morphological analysis is still reliable (see e.g. \cite{conselice}).
The completeness at $U\le 25.5$ does not depend on the half light radius
of the simulated galaxies, while at fainter magnitudes it depends on
the size of the galaxies: in particular, at $U=26.0$ the scatter of
the completeness for various half light radii between 0.2 and 0.4
arcsec is 3\%, increasing to 6\% at $U=26.5$ and reaching 16\% at
$U=27.0$.
The resulting 50\% completeness level is measured
at $U(Vega)=26.6$, while at $U=27.0$ we have a formal completeness of
30\%.
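The injection/recovery logic used for the completeness estimate can be sketched as follows; round Gaussian profiles are used here as a crude stand-in for the PSF-convolved exponential disks generated with artdata, sep replaces SExtractor, and the zero point, seeing and matching radius are illustrative assumptions.
\begin{verbatim}
import numpy as np
import sep
from scipy.spatial import cKDTree

def completeness(image, zeropoint, mag, n_inj=1000, fwhm_pix=4.4,
                 thresh=0.7, minarea=8, match_rad=3.0, seed=0):
    """Fraction of injected sources of a given magnitude recovered by the detection."""
    rng = np.random.default_rng(seed)
    fake = image.astype(np.float32, copy=True)
    ny, nx = fake.shape
    sigma = fwhm_pix / 2.355
    flux = 10.0**(-0.4*(mag - zeropoint))

    yy, xx = np.mgrid[-10:11, -10:11]
    stamp = np.exp(-(xx**2 + yy**2) / (2*sigma**2))
    stamp *= flux / stamp.sum()

    xs = rng.integers(20, nx - 20, n_inj)
    ys = rng.integers(20, ny - 20, n_inj)
    for x, y in zip(xs, ys):
        fake[y-10:y+11, x-10:x+11] += stamp           # add the fake sources

    bkg = sep.Background(fake)
    cat = sep.extract(fake - bkg, thresh, err=bkg.globalrms, minarea=minarea)
    tree = cKDTree(np.column_stack([cat['x'], cat['y']]))
    dist, _ = tree.query(np.column_stack([xs, ys]))
    # matches to pre-existing real objects slightly bias the result upwards
    return np.mean(dist < match_rad)
\end{verbatim}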
The number counts corrected for incompleteness are shown again in
Fig.\ref{fig:lognsu}. Given the wide magnitude interval from
$U(Vega)=19.5$ to $U(Vega)=27.0$ available in the present survey, the
shape of the counts can be derived from a single survey in a
self-consistent way, possibly minimizing offsets due to systematics in
the photometric analysis of data from multiple surveys (zero point
calibration, field to field variation, etc). A clear bending is
apparent at $U(Vega)> 23.5$. To quantify the effect we fitted the
shape of the counts in the above magnitude interval with a double
power-law. The slope changes from $0.58\pm 0.03$ to $0.24\pm 0.05$ for
magnitudes fainter than $U_{break}=23.6$. The uncertainty in the
break magnitude is however large, $\sim 0.5$, since the transition
between the two regimes of the number counts is gradual.
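The double power-law fit can be reproduced with a few lines of Python; in the sketch below the two columns of Table \ref{tab:lognsU} are assumed to have been dumped to a plain-text file (a hypothetical name is used), and continuity of the two branches at the break is imposed explicitly.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# hypothetical two-column dump of Table 2: U(Vega), Log N (corrected counts)
U, logN = np.loadtxt('lbc_u_counts.txt', unpack=True)

def double_power_law(U, logN_b, U_b, s1, s2):
    # continuous broken power law in Log N versus magnitude
    return np.where(U < U_b, logN_b + s1*(U - U_b), logN_b + s2*(U - U_b))

popt, pcov = curve_fit(double_power_law, U, logN, p0=[4.6, 23.5, 0.6, 0.25])
logN_b, U_b, s1, s2 = popt
print(f"break at U = {U_b:.1f}; slopes {s1:.2f} -> {s2:.2f}")
\end{verbatim}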
In Fig.\ref{fig:lognsu} we compare our galaxy number counts with those
derived by shallow surveys of large area (SDSS EDR, \cite{sdss}) or
of similar area (GOYA by \cite{goya};
VVDS-F2 by \cite{radovich}), and with deep pencil beam surveys
(Hawaii HDFN by \cite{capak04}; WHT, HDFN, and HDFS by \cite{wht}). In
particular, the WHT galaxy counts (\cite{wht}) are based on a 34h
exposure time image reaching $U(Vega)=26.8$ but at the much lower
3$\sigma$ level in the photometric noise and in an area of $\sim$50
arcmin$^2$, while the GOYA survey at the INT telescope is complete at
50\% level at $U(Vega)=24.8$. These counts are shown together with the
two pencil beam surveys in the Hubble Deep Fields (\cite{wht}).
The agreement with the GOYA survey (900 sq. arcmin.) is remarkable,
and suggests that, once large areas of the sky are investigated, the
effects of cosmic variance are significantly reduced.
The present UV counts obtained during the commissioning of
LBC-Blue are thus a unique combination of deep imaging in the U band
and of large sky area, with the result of a considerable reduction of
the cosmic variance effects for $U\ge 21$. For brighter magnitude limits,
we refer to larger area surveys, shallower than our survey, as shown in
Fig.\ref{fig:lognsu}.
Table \ref{tab:lognsU} summarizes the galaxy number counts, corrected for
incompleteness, with their upper and lower 1 $\sigma$ confidence level
uncertainties, assuming Poisson noise and cosmic variance effect.
For the latter, we used the Cosmic Variance Calculator (v1.02)
developed by \cite{trenti} using as input values a linear size of 22 arcmin
(corresponding to our deeper area of 478.2 $arcmin^2$) and redshift from
0.0 to 3.0, with standard $\Lambda$-CDM cosmology. At U=27, for example,
the computed cosmic variance is 4.5\%.
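One plausible way of building such a confidence band, combining in quadrature the Poisson fluctuation of the raw counts in each bin with the fractional cosmic variance returned by the calculator, is sketched below; the actual prescription adopted for Table \ref{tab:lognsU} may differ in detail, in particular in the small-number Poisson regime at the bright end.
\begin{verbatim}
import numpy as np

def logN_band(n_raw, completeness, area_deg2, dmag=0.25, sigma_cv=0.045):
    """Log N per deg^2 per mag with a rough 1-sigma band (Poisson + cosmic variance)."""
    N = n_raw / completeness / area_deg2 / dmag
    frac = np.sqrt(1.0 / np.maximum(n_raw, 1) + sigma_cv**2)   # quadrature sum
    logN = np.log10(N)
    return logN, np.log10(N * (1 + frac)), np.log10(N * np.maximum(1 - frac, 0.05))
\end{verbatim}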
Using the Q0933+28 and the other three fields in the SXDS area, we study
the field to field variation of the number counts in the U band.
We find that the typical variation from one LBC field to another is 0.04
in LogN at $U\sim 20$, while it gradually decreases to 0.01 at $U=24.5$,
well below the Poissonian uncertainties described in Table \ref{tab:lognsU}.
This ensures that the zero-point calibration of the images is robust
and the area of this survey is sufficient to decrease the cosmic variance
effects below the statistical uncertainties of the galaxy number counts.
We complement these internal checks on the galaxy number counts in U with
an external consistency test, comparing our final number counts with shallower
number counts derived in different areas or with deep pencil-beam surveys in
Fig.\ref{fig:lognsu}. We find that the survey-to-survey maximal variations
are of the order of 0.1 in LogN at all magnitudes, with lower scatter
for the wider surveys considered here (GOYA, SDSS-EDR, VVDS-F2).
\begin{figure}
\includegraphics[width=9cm]{LOGNS_U9.ps}
\caption{
Number counts of galaxies in the U-BESSEL band for the Q0933+28 and SXDS
LBC fields. Black crosses represent the raw galaxy number counts, while
filled big circles show the number counts corrected for incompleteness.
Magnitudes are in the Vega system. We compare our counts with
shallow surveys of similar or larger area (SDSS EDR, GOYA, VVDS F2), and with
deeper pencil beam surveys (Hawaii HDFN, WHT, HDFN, HDFS).
}
\label{fig:lognsu}
\end{figure}
\begin{table}
\caption{LBC U galaxy number counts.}
\scriptsize
\begin{tabular}{ccccc}
\hline
\hline
\textbf{U(Vega)} & \textbf{LogN} & \textbf{Max LogN} & \textbf{Min LogN} &
\textbf{Completeness} \\
\hline
19.50 & 2.409 & 2.648 & 1.837 & 1.00 \\
19.75 & 2.484 & 2.707 & 2.001 & 1.00 \\
20.00 & 2.517 & 2.734 & 2.066 & 1.00 \\
20.25 & 2.891 & 3.043 & 2.654 & 1.00 \\
20.50 & 2.991 & 3.129 & 2.787 & 1.00 \\
20.75 & 3.103 & 3.227 & 2.930 & 1.00 \\
21.00 & 3.239 & 3.346 & 3.095 & 1.00 \\
21.25 & 3.350 & 3.446 & 3.227 & 1.00 \\
21.50 & 3.503 & 3.585 & 3.401 & 1.00 \\
21.75 & 3.655 & 3.725 & 3.572 & 1.00 \\
22.00 & 3.782 & 3.843 & 3.711 & 1.00 \\
22.25 & 3.914 & 3.989 & 3.824 & 1.00 \\
22.50 & 4.100 & 4.161 & 4.028 & 1.00 \\
22.75 & 4.191 & 4.247 & 4.127 & 1.00 \\
23.00 & 4.311 & 4.360 & 4.256 & 1.00 \\
23.25 & 4.452 & 4.494 & 4.405 & 1.00 \\
23.50 & 4.561 & 4.598 & 4.521 & 1.00 \\
23.75 & 4.633 & 4.667 & 4.595 & 1.00 \\
24.00 & 4.702 & 4.734 & 4.668 & 1.00 \\
24.25 & 4.799 & 4.827 & 4.768 & 0.99 \\
24.50 & 4.865 & 4.892 & 4.837 & 0.98 \\
24.75 & 4.945 & 4.970 & 4.919 & 0.96 \\
25.00 & 5.016 & 5.040 & 4.992 & 0.93 \\
25.25 & 5.081 & 5.103 & 5.058 & 0.90 \\
25.50 & 5.135 & 5.156 & 5.113 & 0.88 \\
25.75 & 5.199 & 5.219 & 5.178 & 0.83 \\
26.00 & 5.268 & 5.288 & 5.248 & 0.75 \\
26.25 & 5.341 & 5.361 & 5.321 & 0.66 \\
26.50 & 5.431 & 5.450 & 5.411 & 0.55 \\
26.75 & 5.469 & 5.510 & 5.425 & 0.43 \\
27.00 & 5.495 & 5.563 & 5.419 & 0.30 \\
\hline
\hline
\end{tabular}
LBC U galaxy number counts corrected for incompleteness; $N$ is
the number of galaxies per $deg^2$ and per magnitude bin, while the
minimum and maximum counts are the 1 $\sigma$ confidence level (Poisson noise
and cosmic variance effect).
\label{tab:lognsU}
\end{table}
\subsection{The UV extragalactic background light}
The slope of the galaxy number counts in the U band at $U\sim 23.5$
changes from 0.58 to 0.24, which implies that the
contribution of galaxies to the integrated EBL in the UV has a maximum
around this magnitude. The contribution of observed galaxies to the
optical extragalactic background light (EBL) in the UV band can be
computed directly by integrating the emitted flux multiplied by the
differential number counts down to the completeness limit of the
survey.
Following the work of \cite{mp2000}, we compute the EBL in the U band,
$I_{\nu}$ measured in $erg~s^{-1}~cm^{-2}~Hz^{-1}~Sr^{-1}$, using the
following method:
\begin{equation}
I_{\nu}= 10^{-0.4(U_{AB}+48.6)}N(U_{AB}) \ ,
\end{equation}
where we use the following correction from Vega to AB:
$U_{AB}=U_{Vega}+0.86$.
Fig.\ref{fig:ebl} shows the EBL obtained from the LBC deep number counts
and compares it with the results of \cite{mp2000}. Our data agree well
with previous estimates of the EBL in the U band, and indeed they allow
us to derive precisely the peak of the differential EBL, at
$U=23.55\pm0.25$, and to put strong constraints on the contribution of
faint galaxies down to $U=27.0$ to the integrated EBL in this band.
At magnitudes $U\ge 24$ our estimate of the EBL is significantly
larger than the one of \cite{mp2000}. Checking in detail
Fig.\ref{fig:lognsu}, it is clear that the HDFS is slightly underdense
with respect to the HDFN or other deep pencil beam surveys, probably
due to cosmic variance effects, and this is reflected in the smaller
EBL of \cite{mp2000} compared to the LBC estimate.
\begin{figure}
\includegraphics[width=9cm]{EBL_U8.ps}
\caption{
The extragalactic background light per magnitude bin $u_{\nu}$ as a function
of U band magnitudes. We compare the results of this work with the EBL derived
by \cite{mp2000} in the HDFS field. The small error bars of EBL at $U\le 18$
are due to the small uncertainties on the galaxy number counts derived
using SDSS data.
}
\label{fig:ebl}
\end{figure}
The integrated galaxy contribution to the EBL in the UV band amounts to
$\nu I(\nu)=3.428\pm 0.068 nW/m^2/Sr$ at the effective wavelength of
the LBC U-BESSEL filter, 3590\AA, taking into account the magnitude
range U=17-27. This estimate is 20\% higher than the one estimated by
\cite{mp2000} ($2.87^{+0.58}_{-0.42}nW/m^2/Sr$) although their value
is still consistent with our measure due to their large
uncertainties. Moreover, our estimate reduces the uncertainties by an
order of magnitude, which is of fundamental importance
to put strong constraints on galaxy
evolution models. Extrapolating the observed number counts in the U
band down to flux=0 we derive an integrated EBL of
$\nu I(\nu)=3.727\pm 0.084nW/m^2/Sr$,
so in our survey down to $U=27$ we are resolving $92\%$ of the EBL produced by
galaxies.
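As a simple numerical cross-check, the quoted value can be approximately recovered by directly summing the counts of Table \ref{tab:lognsU}; the sketch below assumes that the tabulated $Log N$ is per deg$^2$ per unit magnitude and uses the Vega-to-AB correction and effective wavelength given above.
\begin{verbatim}
import numpy as np

U_vega = np.arange(19.50, 27.01, 0.25)          # bin centres of Table 2
logN = np.array([2.409, 2.484, 2.517, 2.891, 2.991, 3.103, 3.239, 3.350,
                 3.503, 3.655, 3.782, 3.914, 4.100, 4.191, 4.311, 4.452,
                 4.561, 4.633, 4.702, 4.799, 4.865, 4.945, 5.016, 5.081,
                 5.135, 5.199, 5.268, 5.341, 5.431, 5.469, 5.495])

f_nu = 10**(-0.4*((U_vega + 0.86) + 48.6))      # erg s^-1 cm^-2 Hz^-1 per galaxy (AB)
I_nu = (f_nu * 10**logN).sum() * 0.25           # per deg^2, summing the 0.25 mag bins
I_nu *= (180.0/np.pi)**2                        # deg^-2 -> sr^-1

nu = 2.998e10 / 3590e-8                         # Hz, at 3590 A
print(nu * I_nu * 1e6, "nW m^-2 sr^-1")         # erg s^-1 cm^-2 -> nW m^-2 is a factor 1e6
# gives ~3.1 nW m^-2 sr^-1 for 19.5 < U < 27; the remaining ~0.3 of the quoted
# 3.43 comes from the brighter 17 < U < 19.5 range, which is not listed in the table
\end{verbatim}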
The integrated flux from resolved galaxies, however, should be considered
as a lower limit to the total EBL in the universe, since our
estimate could be affected by different systematics:
\begin{itemize}
\item
Incompleteness in the number counts at faint magnitudes due to the presence
of very extended/diffuse sources that escape detection in our LBC deep images
and in the HDFs surveys. The adopted detection algorithms usually select
galaxies down to a threshold in their surface brightness limits, and so
are prone to selection effects against low surface brightness galaxies.
We have computed the completeness of galaxies assuming a conservative
half light radius of 0.3 arcsec, and we cannot exclude the presence
of a population of diffuse and extended galaxies in the UV bands, even if
this hypothesis is quite unlikely, due to the well-known relation between
observed optical magnitudes and half light radius of galaxies
(\cite{totani01}).
\item
The surface brightness dimming of sources at high redshifts. This is
not a serious problem in the U band, since all the light at these
wavelengths comes from $z\le 3$ galaxies (non U-dropout galaxies) and,
as discussed in the next paragraph, the galaxies
contributing to the peak of the EBL are at $z\le 1$.
\end{itemize}
Independent estimates of the total EBL in the U band are given in
\cite{bernstein02,bernstein07}, and are significantly larger
than our estimate from the galaxy number counts
($\nu I(\nu)=14.4\pm 9 nW/m^2/Sr$ in \cite{bernstein02}
and $\nu I(\nu)=(21.6\pm 14.4) nW/m^2/Sr$ in \cite{bernstein07},
after a different treatment for the contaminating foregrounds).
With the present observations, we reject the hypothesis of
\cite{bernstein02} that a huge contribution to the sky background
in the U band could come from the overlapping wings of extended
galaxies. With the present observations we reach a surface brightness limit
of $U=28.2 mag/arcsec^2$ at 3$\sigma$ and thus exclude the presence of
faint and extended wings in the galaxy UV light distributions.
The discrepancy between the resolved and total EBL could be explained
by a population of ultra faint and numerous galaxies at $U\ge 27$,
beyond the current limits of the present surveys and with a LogN-LogS
slope steeper than 0.5, or by an improper subtraction of the
foreground components, as stated in \cite{mattila03} and \cite{bernstein07}.
Recently, \cite{fornax} proposed a new method to derive stringent
limits to the diffuse light in the UV bands, using the inverse Compton
emission in $\gamma$-rays of the EBL by the high energy electrons in
the radio lobes of Fornax A. This technique will allow constraints to be derived
on the EBL at shorter wavelengths than the limits obtained
from the TeV blazar emission (\cite{stanev,aharonian,albert}). At the
present stage, \cite{albert} give an upper limit of $\nu I(\nu)\le 5
nW/m^2/Sr$ to the total EBL in the UV. This limit strengthens the
hypothesis that the discrepancy between our estimate of the EBL in U
band and that of \cite{bernstein07} is due to a foreground local
component.
Our derivation of the EBL by integrated galaxy counts is marginally consistent
with the lower limit derived by \cite{kd08} of $\nu I(\nu)\ge 3.97
nW/m^2/Sr$ at $\lambda=3600$\AA.
\section{Comparison with theoretical models}
The availability of galaxy number counts down to faint magnitude
limits and over large sky areas can be used to test the predictions of
different theoretical models without being strongly
affected by the cosmic
variance. Among the various models (numerical and
semi-analytical) developed in the framework of the
standard hierarchical CDM scenario, we have selected the models
developed by \cite{menci06} (hereafter M06),
the \cite{kw07} model based on the Millennium Simulation (hereafter
K07) and the model developed by \cite{morgana} (hereafter {\sc MORGANA}).
All these models simulate
the formation and evolution of galaxies, starting from a statistical
description of the evolution of the DM halo population (merger trees),
and using a set of approximated, though physically motivated, ``recipes''
to treat the physical processes (gas cooling, star formation, AGN and SF
feedback, stellar population synthesis) acting on the baryonic component.
An appropriate treatment of
dust attenuation is also crucial in comparing model predictions to the
UV observations, given the efficient scattering of radiation at
these wavelengths by dust grains, as shown by GALEX number counts of
\cite{xu2005}.
The comparison of the observed counts with those predicted by the three
selected models is used to highlight different critical issues
concerning the physical description of the galaxy formation and
evolution.
In particular, the M06 model is characterized by a specific
implementation of the AGN feedback on the star formation activity in
high redshift galaxies. The description is based on the expanding
blast waves as a mechanism to propagate outwards the AGN energy
injected into the interstellar medium at the center of galaxies (see
Menci et al. 2008); such a feedback is only active during the active
AGN phase (``QSO mode''), and it is effective in suppressing the star
formation in massive galaxies already by $z\approx 2$, thus yielding a
fraction of massive and extremely-red objects in approximate agreement
with observations (Menci et al. 2006).
The M06 model uses three different prescriptions
for the dust absorption, namely the Small Magellanic Cloud model (SMC),
the Milky Way (MW) law and the \cite{calzetti} extinction curve (C00).
In Fig.\ref{fig:models} the two short-dashed curves enclose the minimum
and maximum number counts predicted by the M06 model taking into account the
three different extinction curves (SMC, MW, and C00).
The model of \cite{kw07} is based on the Millennium Simulation
(\cite{springel}), on a concordance Lambda CDM cosmology. In the
model an important critical issue is represented by the particle mass
resolution of the code, which affects the physical and statistical
properties of faint low-luminosity galaxies. The adopted CDM mass
resolution is sufficient to resolve the halos hosting galaxies as faint
as 0.1 $L^*$ with at least 100 particles of $8.6 \cdot 10^{8} h^{-1} M_\odot$.
The model also adopts a different AGN feedback called
"radio mode" in which cooling is suppressed by the continuous
accretion of hot gas onto Supermassive Black Holes at the center of
groups or cluster of galaxies.
The only difference of K07 from \cite{croton06} and \cite{dlb07}
is in the
dust model used in their simulations. For the local galaxies, they adopt
a simple relationship between face-on optical depth and intrinsic luminosity
$\tau\propto \tau_0 (L/L_*)^\beta$ with $\beta=0.5$. At high redshifts
they take into account the different dust and gas contents, the varying
metallicities, and the shorter rest frame emitted wavelengths. In particular
their average extinction increases strongly at high redshifts due to
the smaller disc sizes of the galaxies. We refer to \cite{kw07} for
details about their dust model.
Finally, the {\sc MORGANA} model is characterized by a different
treatment of the processes of gas cooling and infall (following Viola
et al., 2008), star formation and feedback (using the multi-phase
model of Monaco 2004). Black hole accretion is described in detail in
Fontanot et al. (2006): the accretion rate in Eddington units
determines the nature of the feedback from the AGN.
Fontanot et al. (2007) show that {\sc MORGANA} is able to
reproduce both sub-mm ($850 \mu$m) number counts and the redshift
distribution observed in $K$-limited samples, with conservative choice
for the stellar IMF. Galaxy SEDs, magnitudes, and colours for a
variety of passbands are obtained using the {\sc GRASIL}
spectrophotometric code (Silva et al. 1998), which explicitly solves
the equations for the radiative transfer in a dusty medium taking into
account the composite geometry (bulge+disc) of each model galaxy. The
dust properties (dust-to-gas mass fractions, composition, and size
distribution) are kept fixed to values providing a good agreement with
local observations (see \cite{ff07}, for more details on the coupling
between {\sc MORGANA} and {\sc GRASIL}). In this work we take
advantage of the recent update of the model, adapted for WMAP3
cosmology and Chabrier IMF and presented in Lo Faro et al. (2009).
\begin{figure}
\includegraphics[width=9cm]{model6.ps}
\caption{
Number counts of galaxies in the U-BESSEL band for the Q0933+28 LBC
field, complemented at brighter magnitudes ($U\le 20$) by the number
counts of VVDS-F2 and SDSS-EDR, whose error bars are always smaller
than the size of the points.
We compare our counts with theoretical models (M06, K07, {\sc MORGANA}).
}
\label{fig:models}
\end{figure}
It is
worth noticing that, for high-redshift galaxies, photoelectric absorption by the interstellar
medium is redshifted into the observed UV
band. For this reason only galaxies with $z<3$ contribute to the UV
number counts. A common characteristic of the models is that
only galaxies in the $1.5<z<2.5$ redshift interval give the
main contribution to the counts at $U>26$ (see \cite{barro2009}).
This implies that the shape of the
faint counts is provided by the average faint-end shape of the galaxy
luminosity function in the same redshift interval.
The comparison of the observed galaxy number counts in the U
(360 nm) band with the predictions of
theoretical models shown in Fig.\ref{fig:models} indicates that:
\begin{itemize}
\item
the M06 and K07 models reasonably reproduce the UV number counts from
$U=17$ to $U=27$, while the {\sc MORGANA} model shows the largest deviation
from the observed data.
\item
The three models show different shapes for the UV galaxy number counts
at faint fluxes, implying discrepancies which increase for increasing
magnitudes. This can be evaluated in more detail dividing the counts
in redshift bins (see \cite{barro2009}).
\item
With the current observations of the UV number counts down to $U=27$
both M06 and K07 models show a similar behaviour, although the K07
model appears to bend over for $U>27$ in contrast with the steeper
slope of the M06 model.
\end{itemize}
The discrepancies between predicted and observed faint UV galaxy
number counts could be due in principle to several effects, e.g. the
evolution in number density, star formation activity or
dust extinction of the faint galaxy population. Among these, an
appropriate treatment of the dust extinction plays an important role
in this comparison based on the U band, since it can affect both the
normalization and shape of the observed UV counts. The effects due to
the different extinction laws (Calzetti 2000 and Small Magellanic
Cloud) adopted in the M06 model, for example, increase at fainter
magnitudes since we are observing galaxies at increasingly higher ($z<3$)
redshifts. In this respect, the bending observed in the UV counts
could be indicative of a gradual redshift evolution of the dust
properties.
Another physical quantity relevant for the correct reproduction of the
faint counts is the amount of star formation activity in faint low-mass
galaxies. This effect is particularly important in the {\sc MORGANA} model
where it is responsible for the flat shape of the predicted counts.
\cite{fontanot09} show that the underprediction of UV number counts of
faint galaxies can be explained by a rapid decline in their star
formation activity and consequently in their associated UV emission at
$z\lesssim 2$. These galaxies are predicted to be too passive and to host
too old stellar populations at later times with respect to observations.
The reduced SFRs can easily explain the flat shape of the predicted
counts in {\sc MORGANA}.
As an attempt to disentangle dust/SFR effects from
number density evolution, we have extended the model comparison to the
NIR K band number counts which are much less sensitive to dust
absorption and short episodes of star formation.
Indeed, the observed flux in the K band is more related
to the star formation history in the galaxy quantified by its
assembled stellar mass (see e.g. Fontana et al. 2006).
Fig.\ref{fig:lognsk} shows the comparison of the observed number
counts in the K band collected in the literature and provided by
different instruments/telescopes and surveys (SSDF: \cite{ssdf}, HDFS:
\cite{hdfs}, KDS: \cite{kds}, UDS: \cite{uds}, WHTDF: \cite{whtdf},
HWDF: \cite{hwdf}, GOODS: \cite{music}) with the predictions of M06,
K07, and {\sc MORGANA}. The observed K band galaxy counts show a clear
bending at $K\simeq 17$ where the average slope changes from 0.69 to a
flatter value 0.33. Here the bending is clearer than what is
observed in the UV, since the break magnitude is brighter with respect
to the faintest survey limits.
\begin{figure}
\includegraphics[width=9cm]{LOGNS_K5.ps}
\caption{
Number counts of galaxies in the K band for different deep surveys
in the literature.
The model predictions come from the same models described in the previous
figure.
}
\label{fig:lognsk}
\end{figure}
In the K band the models tend to overestimate the number counts both
at the bright end $K<15$ and at the faint magnitudes $K>20$. The
excess in the predicted K band number counts resembles that at the
faint end of the theoretical luminosity functions in the NIR (see for
example \cite{poli} and \cite{fontanot09}). This can be indicative of
an excess in the production of low-luminosity galaxies at high
redshift. If the NIR number counts are extrapolated to fainter
limits, the amount of this excess differs among the models, being
smaller in the K07 model at $K\ge 26$. The latter, however, is affected
by the threshold in the mass resolution used
by the numerical code of the Millennium simulation adopted in the K07
model, which could cause an artificial removal of faint galaxies.
The simultaneous comparison of the UV and NIR K band number
counts at faint fluxes seems to indicate that the underprediction of
the modelled UV number counts could be due to dust extinction or
reduced SF activity rather than to an intrinsic evolution in the
galaxy number density.
Finally we identify a third issue when comparing the bright end of the
galaxy counts. The treatment of the feedback on SF processes due to
AGN activity can play an important role in this respect. The M06 model
for example shows the effect of a different treatment of AGN feedback
based on the so called QSO mode at variance with the Radio mode
feedback used by the K07 model, while {\sc MORGANA} is characterized
both by Radio and by QSO modes. The different treatment of AGN feedback is
plausibly responsible for the differences in the predictions of
NIR galaxy number counts at $K\le 15$. The excess of the UV number
counts predicted by the models is related to the quenching of the star formation
activity. {\sc MORGANA} tends to
overpredict star formation in massive central galaxies at low-z. This is
related to the less efficient, or delayed, quenching of the cooling
flows in massive halos via Radio-mode feedback. In fact, in this model
gas accretion onto the central black hole is related to star formation
activity in the spheroidal component: this implies that AGN heating
switches on only after some cooled gas has already started forming stars
in the host galaxy (see Kimm et al., 2008 for a complete discussion and
a comparison of different Radio-mode implementations in semi-analytical
models).
As a last comment, all three models give a contribution to the UV
EBL which is broadly consistent with the upper limits described in section
3.2. Using the same method described there for the observed
data, we derived an EBL of 0.71-1.18 $nW/m^2/Sr$ for the M06 model in
the UV band, while the {\sc MORGANA} and K07 ones give 2.66 and 3.26
$nW/m^2/Sr$, respectively.
\section{Summary}
To summarize the main results of the paper:
\begin{itemize}
\item
We have derived in a relatively wide field of 0.4 deg$^2$ the deepest
counts in the 360 nm UV band. This allowed us to evaluate the shape of the
galaxy number counts in a wide magnitude interval U=19-27
with the advantage of mitigating the cosmic variance effects.
The agreement with the number counts of shallower surveys confirms
the low impact of systematic errors on the LBC galaxy statistics.
\item
The shape of the counts in UV can be described by a double power-law
with a steep slope 0.58 followed beyond $U\simeq 23.5$ by a flatter
shape 0.24. Our counts are consistent at the bright end with surveys
of comparable or greater areas. At the faint end our counts are more
consistent with those found in the HDF-N.
\item
The faint-end slope of the counts is below 0.4 and this ensures
the convergence of the
contribution by star forming galaxies to the EBL in the UV band. The
total value in the UV band obtained extrapolating the slope of our
counts to flux=0 is indeed $3.727\pm 0.084 nW/m^2/Sr$.
It is consistent with recent upper limits coming from TeV
observations of \cite{albert}, $\nu I(\nu)\le 5 nW/m^2/Sr$,
showing that the UV EBL is resolved at the $\ge$74\% level.
\item
We have compared our counts in the UV and K bands with a few selected
hierarchical CDM models which are representative of specific critical
issues in the physical description of the galaxy formation and
evolution.
\item
The mass resolution of numerical models is critical for reproducing the
faint end of the UV galaxy number counts.
\item
The discrepancies between predicted and observed UV galaxy number counts
at faint magnitudes could be mainly due to the treatment of dust extinction and
the star formation activity in low mass galaxies at $z\le 2$.
\item
The AGN feedback (Radio vs QSO mode) may affect galaxy counts at the bright
end of the LogN-LogS in the K band.
\end{itemize}
A correct physical description of the AGN feedback, dust properties
and star formation activities in the models is fundamental to ensure a
reasonable agreement of the model predictions at the faint end of the
galaxy counts.
Adding colour information for galaxies with UV emission as faint as
$U=27-28$ implies very deep observations in the red bands which are
feasible with several hours of integration at 8m class telescopes.
Very deep multicolour information on areas of the order of the square
degree can help in extracting physical information on the star
formation history of the dwarf population at intermediate and high
redshifts.
\begin{acknowledgements}
Observations have been carried out using the Large Binocular Telescope
at Mt. Graham, Arizona, under the Commissioning phase of the Large Binocular
Blue Camera. The LBT is an international collaboration among
institutions in the United States, Italy and Germany. LBT Corporation
partners are: The University of Arizona on behalf of the Arizona
university system; Istituto Nazionale di Astrofisica, Italy; LBT
Beteiligungsgesellschaft, Germany, representing the Max-Planck
Society, the Astrophysical Institute Potsdam, and Heidelberg
University; The Ohio State University, and The Research Corporation,
on behalf of The University of Notre Dame, University of Minnesota and
University of Virginia. The Millennium Simulation databases used in
this paper and the web application providing online access to them
were constructed as part of the activities of the German Astrophysical
Virtual Observatory.
Some of the calculations were carried out on the PIA cluster of the
Max-Planck-Institut f\"ur Astronomie at the Rechenzentrum Garching.
We thank the anonymous referee for useful comments which helped in improving
the quality of the present paper.
AG warmly thanks Kalevi Mattila and Martin Raue for useful comments on the
EBL limits.
\end{acknowledgements}
However, the procedure of extension does not make any assumption on $L$ other than that it is a regular function on some cotangent bundle $T^*M$. Until now, we always considered $L$ as a natural Hamiltonian, in such a way that the extended Hamiltonian is itself natural. In this work, we apply the extension procedure to functions $L$ which are no longer quadratic in the momenta and, consequently, the extended Hamiltonian is not a natural one. The construction of an extended Hamiltonian requires the determination of a certain function $G$ well defined on all $T^*M$, up to some lower-dimensional subset of singular points. The extended Hamiltonian is a polynomial in $p_u$, $L$, while its characteristic first integral is a polynomial in $p_u$, $L$, $G$ and $X_LG$, the derivative of $G$ with respect to the Hamiltonian vector field of $L$, so that its global definition depends ultimately on $G$ and $X_LG$. Therefore, our analysis is focused on the determination of the function $G$ in the different cases, and on the study of its global behaviour on $T^*M$.
Since this work is intended as a preliminary study of the possible applications of the extension procedure to non-natural Hamiltonians, we do not aim here at complete and general results, but focus on some meaningful examples only.
In Sec. 2 we recall the fundamentals of the theory of extended Hamiltonians. In Sec. 3 we consider extensions of functions quartic in the momenta and we find examples of extended Hamiltonians in analogy with the quadratic Hamiltonian case. The analysis becomes more subtle in Sec. 4, when we try to extend functions which are not polynomial in the momenta, as in the case of the two point-vortices Hamiltonian. Here the global definition of the extended Hamiltonian and its characteristic first integral becomes an issue in some cases, so that the extension is possible only for some values of the constant of motion $L$, while in other cases the extension is always possible.
We conclude in Sec. 5 with some examples where we are unable to find any properly globally defined extended Hamiltonian.
\section{Extensions of Hamiltonian systems}\label{ex}
Let $L(q^i,p_i)$ be a Hamiltonian with $N$ degrees of freedom, that is defined on the cotangent bundle $T^*M$ of an $N$-dimensional manifold $M$.
We say that $L$ {\em admits extensions} if there exists $(c,c_0)\in \mathbb R^2
- \{(0,0)\}$ such that there exists a non-null solution $G(q^i,p_i)$ of
\begin{equation}\label{e1}
X_L^2(G)=-2(cL+c_0)G,
\end{equation}
where $X_L$ is the Hamiltonian vector field of $L$.
If $L$ admits extensions, then, for any $\gamma(u)$ solution of the ODE
\begin{equation}\label{eqgam}
\gamma'+c\gamma^2+C=0,
\end{equation}
depending on the arbitrary constant parameter
$C$,
we say that any Hamiltonian
$H(u,q^i,p_u,p_i)$ with $N+1$ degrees of
freedom of the form
\begin{equation}\label{Hest}
H=\frac{1}{2} p_u^2-k^2\gamma'L+ k^2c_0\gamma^2
+\frac{\Omega}{\gamma^2},
\qquad k=\frac mn,\, m,n\in \mathbb{N}-\{0\}, \ \Omega\in\mathbb{R}
\end{equation}
is an {\em extension of $L$}.
Extensions of Hamiltonians were introduced
in \cite{CDRfi} and studied because they admit first integrals, polynomial in the momenta, generated via a recursive algorithm.
Moreover, the degree of the first integrals is related with the choice of $m,n$.
Indeed, for any $m,n\in \mathbb N-\{0\}$,
let us consider the operator
\begin{equation}\label{Umn}
U_{m,n}=p_u+\frac m{n^2}\gamma X_L.
\end{equation}
\begin{prop}\cite{CDRraz}
For $\Omega=0$,
the Hamiltonian (\ref{Hest})
is in involution with the function
\begin{equation}\label{mn_int}
K_{m,n}=U_{m,n}^m(G_n)=\left(p_u+\frac{m}{n^2} \gamma(u) X_L\right)^m(G_n),
\end{equation}
where $G_n$ is the $n$-th term of the recursion
\begin{equation}\label{rec}
G_1=G, \qquad G_{n+1}=X_L(G)\,G_n+\frac{1}{n}G\,X_L(G_n),
\end{equation}
starting from any solution $G$ of (\ref{e1}).
\end{prop}
For $\Omega\neq 0$, the recursive construction of a first integral is more complicated: we construct the following function, depending on two strictly positive integers
$s,r$
\begin{equation} \label{ee2}
\bar K_{2s,r}=\left(U_{2s,r}^2+2\Omega \gamma^{-2}\right)^s(G_r),
\end{equation}
where
the operator $U^2_{2s,r}$ is defined {according to
(\ref{Umn})} as
$$
U^2_{2s,r}=\left( p_u+\frac{2s}{r^2} \gamma(u) X_L
\right)^2,
$$
and
$G_r$ is, as in (\ref{mn_int}), the $r$-th term of the recursion (\ref{rec}),
with $G_1=G$ solution of (\ref{e1}).
For $\Omega=0$ the functions (\ref{ee2})
reduce to (\ref{mn_int}) and thus can be computed
also when the first of the indices is odd.
\begin{teo}\cite{TTWcdr} \label{t2}
For any $\Omega\in \mathbb{R}$,
the Hamiltonian (\ref{Hest})
satisfies,
for $m=2s$,
\begin{equation}\label{cp}
\{H,\bar K_{m,n}\}=0,
\end{equation}
for $m=2s+1$,
\begin{equation}
\label{cd}
\{H ,\bar K_{2m,2n}\}=0.
\end{equation}
\end{teo}
We call $K$ and $\bar{K}$, of the form (\ref{mn_int}) and
(\ref{ee2}) respectively, \emph{ characteristic first integrals} of the corresponding extensions.
It is proved in \cite{CDRfi,TTWcdr} that the characteristic first integrals $K$ or $\bar K$ are functionally independent from $H$, $L$, and from any
first integral $I(p_i,q^i)$ of $L$.
This means that the extensions of (maximally) superintegrable Hamiltonians are
(maximally) superintegrable Hamiltonians with one additional degree of freedom (see also \cite{CDRsuext}).
In particular, any extension of a one-dimensional Hamiltonian is maximally superintegrable.
The explicit expression of the characteristic first integrals is given as follows \cite{CDRraz,TTWcdr}.
For $r\leq m$,
we have
\begin{equation}\label{EEcal}
U_{m,n}^r(G_n)= P_{m,n,r}G_n+D_{m,n,r}X_{L}(G_n),
\end{equation}
with
$$
P_{m,n,r}=\sum_{ j=0}^{[r/2]}\binom{r}{2 j}\, \left(\frac mn \gamma \right)^{2 j}p_u^{r-2 j}(-2)^j(cL+c_0)^ j,
$$
$$
D_{m,n,r}=\frac 1{n}\sum_{ j=0}^{[(r-1)/2]}\binom{r}{2 j+1}\, \left(\frac mn \gamma \right)^{2 j+1}p_u^{r-2 j-1}(-2)^j(cL+c_0)^ j, \quad m>1,
$$
where $[\cdot]$ denotes the integer part and $D_{1,n,1}=\frac 1{n^2} \gamma$.
The expansion of the first integral (\ref{ee2}) is
\begin{equation*}
\bar K_{2m,n}=\sum_{j=0} ^{m}\binom{m}{j}\left(\frac {2\Omega}{\gamma^2}\right)^jU_{2m,n}^{2(m-j)}(G_n),
\end{equation*}
with $U^0_{2m,n}(G_n)=G_n$, and
\begin{equation}\label{Gn_alt}
G_n=\sum_{k=0}^{\left[\frac{n-1}{2}\right]}
\binom{n}{2k+1}(-2(cL+c_0))^k G^{2k+1}(X_L G)^{n-2k-1}.
\end{equation}
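The equivalence between the recursion (\ref{rec}) and the closed form (\ref{Gn_alt}) can be verified symbolically for small $n$. In the minimal sketch below, $G$ and $X_L(G)$ are treated as independent symbols and $X_L$ as the derivation determined by (\ref{e1}), with $L$, and hence $cL+c_0$, constant along $X_L$.
\begin{verbatim}
import sympy as sp

g, h, Lam = sp.symbols('g h Lambda')      # g = G, h = X_L(G), Lambda = -2(cL + c_0)

def XL(expr):
    # derivation defined by X_L(G) = h and X_L(h) = X_L^2(G) = Lambda*G
    return sp.expand(sp.diff(expr, g)*h + sp.diff(expr, h)*Lam*g)

def G_recursive(n):
    Gn = g                                # G_1 = G
    for m in range(1, n):
        Gn = sp.expand(h*Gn + sp.Rational(1, m)*g*XL(Gn))   # X_L(G) = h
    return Gn

def G_closed(n):
    return sp.expand(sum(sp.binomial(n, 2*k + 1)*Lam**k*g**(2*k + 1)*h**(n - 2*k - 1)
                         for k in range((n - 1)//2 + 1)))

for n in range(1, 7):
    assert sp.simplify(G_recursive(n) - G_closed(n)) == 0
print("recursion and closed form agree for n = 1,...,6")
\end{verbatim}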
\begin{rmk}
\rm
In \cite{CDRgen} it is proven that the ODE (\ref{eqgam}) defining $\gamma$ is a necessary condition in order to get a characteristic first integral of the form (\ref{mn_int}) or
(\ref{ee2}).
According to the value of $c$ and $C$, the explicit form of $\gamma(u)$ is given (up to constant translations of $u$) by
\begin{equation}\label{fgam}
\gamma= \left\{
\begin{array}{lc}
-C u & c=0 \\
\frac{1}{T_\kappa (cu)}=\frac{C_\kappa (cu) }{S_\kappa (cu)}
& c\neq 0
\end{array}
\right.
\end{equation}
where $\kappa=C/c$ is the ratio of the constant parameters appearing in (\ref{eqgam}) and $T_\kappa$, $S_\kappa$ and $C_\kappa$ are the trigonometric tagged functions
(see also \cite{CDRraz} for a summary of their properties)
$$
S_\kappa(x)=\left\{\begin{array}{ll}
\frac{\sin\sqrt{\kappa}x}{\sqrt{\kappa}} & \kappa>0 \\
x & \kappa=0 \\
\frac{\sinh\sqrt{|\kappa|}x}{\sqrt{|\kappa|}} & \kappa<0
\end{array}\right.
\qquad
C_\kappa(x)=\left\{\begin{array}{ll}
\cos\sqrt{\kappa}x & \kappa>0 \\
1 & \kappa=0 \\
\cosh\sqrt{|\kappa|}x & \kappa<0
\end{array}\right.
$$
$$
T_\kappa(x)=\frac {S_\kappa(x)}{C_\kappa(x)}.
$$
Therefore, we have
\begin{equation}\label{fgamp}
\gamma'= \left\{
\begin{array}{lc}
-C & c=0 \\
\frac{-c}{S_\kappa ^2 (cu)}
& c\neq 0 .
\end{array}
\right.
\end{equation}
\end{rmk}
\begin{rmk} \rm The global definition of the characteristic first integral of the extended Hamiltonian is ultimately determined by the definition of $G$ and its derivative $X_L G$. When these objects are globally defined, the characteristic first integral is globally defined as well.
\end{rmk}
From the brief exposition given above it is clear that the extensions of a function $L$ on $T^*M$ are completely determined once a solution $G$ of (\ref{e1}) is known, provided it is regular and well defined on $T^*M$. The fundamental step for the application of the extension procedure is therefore the determination of $G$. In all existing examples of extended Hamiltonians the function $L$ is always a quadratic polynomial in the momenta. The examples include the anisotropic harmonic oscillators, the Tremblay-Turbiner-Winternitz and the Post-Winternitz systems. For several of these systems there exists a quantization theory, based on the Kuru-Negro factorization in shift and ladder operators, adapted to Hamiltonians which are extended Hamiltonians \cite{CRsl}.
In order to generalise the extension procedure to non-natural Hamiltonians, we focus our research here on the determination of functions $G$ solving (\ref{e1}), leaving to other works a deeper analysis of the resulting extended systems. We consider below some examples of non-natural Hamiltonians, or natural Hamiltonians in a non-canonical symplectic or Poisson structure.
Since the forms of the extended Hamiltonian and of the characteristic first integral are completely determined once $L$ and $G$ are known, we are solely concerned with the determination and analysis of $G$ and $X_LG$.
\section{Extensions of Quartic Hamiltonians}
Hamiltonians of degree four in the momenta are considered in \cite{FC}. These Hamiltonians are written in Andoyer projective variables and allow a unified representation of several mechanical systems, such as harmonic oscillator, Kepler system and rigid body dynamics, corresponding to different choices of parameters. We consider here some toy model of Hamiltonians of degree up to four in the momenta.
\begin{enumerate}
\item Let us assume
\begin{equation}
L=p^4+f_1(q)p^3+f_2(q)p^2+f_3(q)p+V(q).
\end{equation}
The extension of $L$ is possible if global solutions $G$ of
\begin{equation}\label{LE1}
X_L^2 G=-2(cL+c_0)G,
\end{equation}
are known. If we assume $G(q)$, then the function
$$
G=C_1q+C_2,
$$
is a solution of (\ref{LE1}) if $L$ is in the form
\begin{eqnarray}\label{sq}
L=\frac {\left(16C_1 p^2+8C_1fp+2cC_1q^2+4cC_2q+C_1f^2+8C_1C_3\right)^2}{256\,C_1^2}-\frac {c_0}c,
\end{eqnarray}
where $C_i$ are real constants and $f(q)$ is an arbitrary function.
Hence, we have that this system admits the most general extension, with $c$ positive or negative.
\item We consider now
\begin{equation}
L=p^4+f(q)p^2+V(q),
\end{equation}
If we assume $G=g(q)p$ and set the coefficients of the monomials in $p$ in (\ref{e1}) equal to zero, we obtain two solutions, one
\begin{eqnarray}
V&=& \frac 1{4}(\frac 1{16}q^2c+\frac 1{8C_1}cC_2q-\frac 12\frac{C_3}{C_1(C_1q+C_2)^2}+C_4)^2-\frac {c_0}c,\cr
g&=& C_1q+C_2, \cr
f&=& \frac 1{16}q^2c+\frac 1{8C_1}cC_2q-\frac 12\frac{C_3}{C_1(C_1q+C_2)^2}+C_4,
\end{eqnarray}
that, substituted in $L$, gives, by assuming $c\neq 0$,
\begin{eqnarray}\label{psq}
L&=&\frac 1{1024C_1^2(C_1q+C_2)^4}\left(32p^2(C_1^3q^2+2C_1^2C_2q +C_1C_2^2)+C_1^3cq^4\right.\cr &+&\left.4C_1^2C_2cq^3+16C_1^3C_4q^2+5C_1C_2^2cq^2+(32C_1^2C_2C_4+2C_2^3c)q\right.\cr
&+&\left.16C_1C_2^2C_4-8C_3\right)^2-\frac {c_0}c,
\end{eqnarray}
and the other, holding for $c\neq 0$ also
\begin{eqnarray}
V &=& \frac 1{(C_1q+C_2)^4}\left(C_4-\frac 1{1024cC_1^3}q(-C_1^7c^3q^7-8C_1^6C_2c^3q^6\right.\cr
&-& 28C_1^5C_2^2c^3q^5-56C_1^4C_2^3c^3q^4+(-70C_1^3C_2^4c^3+1024c_0C_1^7\cr
&-& 32C_1^5C_3c^2)q^3+(4096c_0C_1^6C_2-128C_1^4C_2C_3c^2-56C_1^2C_2^5c^3)q^2\cr
&+&(-28C_1C_2^6c^3+6144c_0C_1^5C_2^2-192C_1^3C_2^2C_3c^2)q-8C_2^7c^3\cr
&+&\left.4096c_0C_1^4C_2^3-128C_1^2C_2^3C_3c^2)\right),\cr
g&=& C_1q+C_2,\cr
f&=& \frac 1{16C_1^2}(C_1q+C_2)^2c+\frac {C_3}{(C_1q+C_2)^2},
\end{eqnarray}
where the $C_i$ are constants. It is interesting to remark that in the last case $L$ is not in general a perfect square plus a constant, as in (\ref{sq}) and (\ref{psq}).
\item We assume now
\begin{equation}
L=\left(p_1^2+\frac 1{(q^1)^2}p_2^2+V(q^1,q^2) \right)^2,
\end{equation}
that is, the square of a natural Hamiltonian on $\mathbb E^2$, and we search for functions $V$ allowing the existence of non-trivial solutions $G$ of (\ref{e1}). Again, assuming $G(q^1,q^2)$ and collecting the terms in $(p_1,p_2)$ in (\ref{e1}), the requirement that the coefficients of the momenta vanish identically, after setting $c_0=0$, gives the following solution
\begin{eqnarray}
G&=& \left(\sin(q^2)C_2+\cos(q^2)C_3\right)q^1+C_1,\cr
V&=&-\frac c8 \frac{(C_3\sin(q^2)-C_2\cos(q^2))^2(2\tan(q^2)C_2C_3-C_2^2+C_3^2)}{C_3^2(\tan(q^2)C_3-C_2)^2} (q^1)^2\cr
&+&\frac c4\frac{(C_3\sin(q^2)-C_2\cos(q^2))C_1}{C_3(\tan(q^2)C_3-C_2)}q^1\cr
&+&F\left((\sin(q^2)C_3-\cos(q^2)C_2)q^1\right),
\end{eqnarray}
where $F$ is an arbitrary function.
\end{enumerate}
In all the examples above, all the elements of the extension procedure are polynomial in the momenta; therefore, the extended Hamiltonian and its characteristic first integral are globally defined in the same way as in the natural Hamiltonian case.
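As an illustration, the first of the examples above, with $L$ given by (\ref{sq}) and $G=C_1q+C_2$, can be verified symbolically in a few lines. The sketch below assumes the convention $X_L=\partial_pL\,\partial_q-\partial_qL\,\partial_p$ for the Hamiltonian vector field; equation (\ref{e1}) does not depend on the overall sign of $X_L$.
\begin{verbatim}
import sympy as sp

q, p = sp.symbols('q p')
c, c0, C1, C2, C3 = sp.symbols('c c_0 C_1 C_2 C_3', nonzero=True)
f = sp.Function('f')(q)                    # arbitrary function of q

G = C1*q + C2
L = (16*C1*p**2 + 8*C1*f*p + 2*c*C1*q**2 + 4*c*C2*q
     + C1*f**2 + 8*C1*C3)**2/(256*C1**2) - c0/c

XL = lambda F: sp.diff(L, p)*sp.diff(F, q) - sp.diff(L, q)*sp.diff(F, p)

assert sp.simplify(XL(XL(G)) + 2*(c*L + c0)*G) == 0
print("G = C1*q + C2 solves X_L^2(G) = -2(c L + c_0) G for the quartic L above")
\end{verbatim}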
\section{Extensions of the Two Point-Vortices Hamiltonian}
The dynamics of two point-vortices $z_j=x_j+iy_j$ of intensity $k_j$, $j=1, 2$, in a plane $(x,y)$ is described, in canonical coordinates $(Y_i=k_iy_i,X_i=x_i)$, by the Hamiltonian
$$
L=-\alpha k_1k_2\ln \left((X_1-X_2)^2+\left(\frac{Y_1}{k_1}-\frac{Y_2}{k_2}\right)^2\right),
$$
where $\alpha=\frac 1{8\pi}$ and $k_i$ are real numbers \cite{TT}.
If $k_2\neq -k_1$, the functions $( k_1z_1+k_2z_2, L)$ are independent first integrals of the system (three real functions). If $k_2=-k_1$, the functions above give only two real independent first integrals.
The coordinate transformation
$$
\tilde X_1=(X_1-X_2)/2,\quad \tilde X_2=(X_1+X_2)/2,\quad \tilde Y_1=Y_1-Y_2, \quad \tilde Y_2=Y_1+Y_2,
$$
is canonical and transforms $L$ into
$$
L=-\alpha k_1k_2\ln \left(4\tilde X_1^2+\left(\frac {\tilde Y_1 +\tilde Y_2}{2k_1} +\frac {\tilde Y_1 -\tilde Y_2}{2k_2} \right)^2\right).
$$
The extension of $L$ is possible if global solutions $G$ of
(\ref{e1}) are known. We consider below two cases
\begin{enumerate}
\item If $k_1=k_2=k>0$ the Hamiltonian becomes
$$
L=-\alpha k^2 \ln \left(4\tilde X_1^2+\frac {\tilde Y_1^2}{k^2}\right).
$$
For $c=0$, $G$ in this case can be computed by using Maple, obtaining
\begin{eqnarray}
G= \left(\frac {\tilde Y_1+2ik\tilde X_1}{\sqrt{Q_1}}\right)^{\frac{Q_1\sqrt{2c_0}}{4\alpha k^3} } F_1 + \left(\frac {\tilde Y_1+2ik\tilde X_1}{\sqrt{Q_1}}\right)^{-\frac{Q_1\sqrt{2c_0}}{4\alpha k^3} } F_2,
\end{eqnarray}
where $Q_1=k^2e^{-\frac{L}{\alpha k^2}}=4\tilde X_1^2+\frac {\tilde Y_1^2}{k^2}$ and $F_i$ are arbitrary functions of $L$.
The function $G$ is not single-valued in general, but it is, for example, when $\frac{Q_1\sqrt{2c_0}}{4\alpha k^3}$ is an integer.
Let us consider now $X_LG$: since $Q_1$ depends on the canonical coordinates through $L$ only, $Q_1$ and the exponents in $G$ behave as constants under the differential operator $X_L$; hence, the exponents remain integers if they are integers in $G$, and $X_LG$ is well defined on $T^*M$. Therefore, both $H$ and its characteristic first integral are globally well defined for integer values of $\frac{Q_1\sqrt{2c_0}}{4\alpha k^3}$.
We have in this case an example in which the possibility of finding an extension depends on the parameters of the system and, in particular, on the values of the constant of motion $L$.
\item If $k_2=-k_1=-k$, $k>0$ the Hamiltonian is
$$
L=\alpha k^2 \ln \left(4\tilde X_1^2+ \frac{\tilde Y_2^2}{k^2}\right).
$$
For $c=0$, the solution $G$ is obtained by Maple as
\begin{eqnarray}
G= F_1\sin \left(\frac{\sqrt{2c_0}Q_2 \tilde X_2}{2\alpha k^2 \tilde Y_2}\right) + F_2 \cos \left(\frac{\sqrt{2c_0}Q_2 \tilde X_2}{2\alpha k^2 \tilde Y_2}\right),
\end{eqnarray}
where $Q_2=k^2e^{\frac{L}{\alpha k^2}}$ and $F_i$ are arbitrary first integrals of $L$.
It is evident that the function $G$ above, real or complex, is always globally defined, as well as $X_LG$, up to lower-dimensional sets, and this makes possible the effective extension of the Hamiltonian $L$.
We observe that in this case the extended Hamiltonian has four independent constants of motion.
\end{enumerate}
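The second case ($k_2=-k_1$) can be checked symbolically along the same lines. In the sketch below the canonical pairs are $(\tilde X_1,\tilde Y_1)$ and $(\tilde X_2,\tilde Y_2)$, $Q_2$ is written out explicitly as $4k^2\tilde X_1^2+\tilde Y_2^2$, and the particular solution with $F_1=1$, $F_2=0$ is taken for simplicity.
\begin{verbatim}
import sympy as sp

X1, Y1, X2, Y2 = sp.symbols('X1 Y1 X2 Y2', real=True)
alpha, k, c0 = sp.symbols('alpha k c_0', positive=True)

L = alpha*k**2*sp.log(4*X1**2 + Y2**2/k**2)
Q2 = 4*k**2*X1**2 + Y2**2                  # Q2 = k^2 exp(L/(alpha k^2)), written out
B = sp.sqrt(2*c0)*Q2*X2/(2*alpha*k**2*Y2)
G = sp.sin(B)                              # the solution with F_1 = 1, F_2 = 0

XL = lambda F: (sp.diff(L, Y1)*sp.diff(F, X1) - sp.diff(L, X1)*sp.diff(F, Y1)
                + sp.diff(L, Y2)*sp.diff(F, X2) - sp.diff(L, X2)*sp.diff(F, Y2))

assert sp.simplify(XL(XL(G)) + 2*c0*G) == 0     # the c = 0 case of X_L^2 G = -2(cL+c_0)G
print("the k_2 = -k_1 solution G satisfies X_L^2(G) = -2 c_0 G")
\end{verbatim}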
\section{Hamiltonians with no known extension}
The procedure of extension can be applied in any Poisson manifold, not only in symplectic manifolds with canonical symplectic structure, as in the examples above.
Indeed, if $\pi$ is the symplectic form or the Poisson bivector determining the Hamiltonian structure of the system of Hamiltonian $L$ in coordinates $(x^1, \ldots,x^n)$, then the symplectic or Poisson structure of the extended manifold in coordinates $(u,p_u,x^1, \ldots,x^n)$ is given by
$$
\Pi=\left(\begin{array}{cc|c}
0 & 1 & 0 \\
-1 & 0 & \\
\hline
0 & & \pi
\end{array}
\right).
$$
We recall that the Hamiltonian vector field of $L$ on Poisson manifolds with Poisson bivector $\pi$ is $\pi dL$.
We consider below two cases of Hamiltonian systems for which we are unable to find extended Hamiltonians. The obstruction to the extension lies in both cases in the non-global definition of the known solutions of (\ref{e1}).
\subsection{The Lotka-Volterra system}
It is well known that the Lotka-Volterra prey-predator system
\begin{eqnarray}
\dot x=a x-b x y\\
\dot y=d xy -gy,
\end{eqnarray}
where $a,b,d,g$ are real constants, can be put in Hamiltonian form (see \cite{Nu}), for example with Poisson bivector
\begin{equation}
\pi=\left ( \begin{matrix} 0 & A \cr -A & 0 \end{matrix} \right),\quad A=-x^{1+g}y^{1+a}e^{-by-dx},
\end{equation}
and Hamiltonian
\begin{equation}
L=x^{-g}y^{-a}e^{dx+by}.
\end{equation}
Since the manifold is symplectic, there is only one degree of freedom, and the existence of the Hamiltonian itself makes the system superintegrable.
The equation (\ref{e1}) with $c=0$ admits solution $G$ of the form
\begin{equation}
G=F_1(L) e^{-B} +F_2(L) e^B,
\end{equation}
where
$$
B=-\frac{\sqrt{-2c_0}}a\int{\left[t\left(W \left( -\frac ba t^{-\frac ga}x^{\frac ga}ye^{\frac{d(t-x)-by}a}\right)+1\right)\right]^{-1}dt},
$$
and $W$ is the Lambert W function, defined by
$$
z=W(z)e^{W(z)}, \quad z\in \mathbb C.
$$
If we put $F_1=\frac 12 \left(\alpha+\frac \beta i\right)$, $F_2=\frac 12 \left(\alpha-\frac \beta i\right)$, where $\alpha$ and $\beta $ are real constants, then
$$
G=\alpha \cos B - \beta \sin B.
$$
However, the Lambert W function is multi-valued in $\mathbb C-\{0\}$, even if its variable is real (in this case, it is defined only for $z\geq-1/e $ and double-valued for $-1/e<z<0$). Therefore, such a $G$ cannot provide a globally defined first integral and does not determine an extension of $L$.
By comparison, the Hamiltonian of the one-dimensional harmonic oscillator admits iterated extensions with $c=0$ and $c_0$ always equal to the elastic parameter of the first oscillator. One obtains in this way the $n$-dimensional, $n\in \mathbb N$, anisotropic oscillator with parameters having rational ratios, and therefore always superintegrable \cite{CDRraz}. This is not the case for the Lotka-Volterra system, where the periods of the closed trajectories in $x,y$ are not all equal, contrary to what happens for the harmonic oscillators.
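The lack of isochronism can be seen directly by integrating the equations of motion for orbits of increasing amplitude, as in the following sketch; the parameter values are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, b, d, g = 1.0, 1.0, 1.0, 1.0            # illustrative values; fixed point at (g/d, a/b)

def lv(t, s):
    x, y = s
    return [a*x - b*x*y, d*x*y - g*y]

def section(t, s):                          # Poincare section y = a/b, crossed upwards
    return s[1] - a/b
section.direction = 1

for x0 in (1.2, 2.0, 4.0):                  # orbits of increasing amplitude
    sol = solve_ivp(lv, (0.0, 200.0), [x0, a/b], events=section,
                    rtol=1e-10, atol=1e-12)
    T = np.diff(sol.t_events[0]).mean()     # time between successive crossings
    print(f"x0 = {x0:3.1f}   period = {T:.4f}")
# small orbits have period close to 2*pi/sqrt(a*g); larger orbits are slower,
# so the closed trajectories are not isochronous, unlike the harmonic oscillator
\end{verbatim}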
\subsection{The Euler system}
It is well known that the Euler rigid-body system is described by the Hamiltonian
$$
L=\frac 12\left(\frac {m_1^2}{I_1}+\frac {m_2^2}{I_2}+\frac {m_3^2}{I_3}\right),
$$
on the Poisson manifold of coordinates $(m_1,m_2,m_3)$ and Poisson bivector
$$
\pi=\left(\begin{matrix}0 & -m_3 & m_2 \cr m_3 & 0 &-m_1 \cr -m_2 & m_1 & 0 \end{matrix} \right).
$$
The $(m_i)$ are the components of the angular momentum in the moving frame and they are conjugate momenta of the three components of the principal axes along one fixed direction.
A Casimir of $\pi$ is
$$
M=m_1^2+m_2^2+m_3^2.
$$
The system has two functionally independent constants of the motion: $L$ and one of the components of the angular momentum in the fixed frame.
A solution of equation (\ref{e1}) can be found by using the Kuru-Negro \cite{CRsl} ansatz
\begin{equation}\label{KN}
X_L G=\pm \sqrt{-2(cL+c_0)}G,
\end{equation}
whose solutions are also solutions of equation (\ref{e1}).
A solution of (\ref{KN}) is
\begin{eqnarray}
G=f e^{\left[\mp \frac{I_1I_2I_3}{\sqrt{I_2(I_1-I_3)}}\sqrt{\frac{-2(cL+c_0)}{X_2}}F\left(m_1\sqrt{\frac{I_2(I_1-I_3)}{X_1}},\sqrt{\frac{I_3(I_1-I_2)X_1}{I_2(I_1-I_3)X_2} }\right)\right]},
\end{eqnarray}
where $f$ is an arbitrary function of the first integrals $L$ and $M$,
\begin{eqnarray}
X_1=I_1I_2(M-2I_3L),\\
X_2=I_1I_3(2I_2L-M),
\end{eqnarray}
and $F(\phi,k)$ is the incomplete elliptic integral of first kind
$$
F(\phi,k)=\int_0^\phi \frac {d\theta}{\sqrt{1-k^2\sin ^2 \theta}},
$$
which is a multiple valued function, being the inverse of Jacobi's sinus amplitudinis $sn$ function.
The function $G$, possibly complex valued and with singular sets of lower dimension, depends essentially on $m_1$, since all other arguments in it are either constants or first integrals.
Since our function $G$ is not single-valued, we cannot in this case build an extended Hamiltonian from $L$.
\section{Conclusions}
From the examples discussed in this article we see that extended Hamiltonians can be obtained also from non-natural Hamiltonians $L$, and not only from natural ones. The case when $L$ is quartic in the momenta is very similar to the quadratic cases studied elsewhere, and the extension procedure does not encounter new problems.
For the two point-vortices Hamiltonian, we find that global solutions of (\ref{e1}) can be obtained for particular choices of the parameters in $L$.
In the remaining examples, we are unable to obtain globally defined solutions of (\ref{e1}) and we cannot build extensions in these cases.
In future works, the extended Hamiltonians obtained here could be studied in more detail, while the search for global solutions for the cases of Lotka-Volterra and of the rigid body, or for the reasons of their non-existence, could be undertaken.
\
\
{\bf Conflict of Interest}: the authors
declare that they have no conflicts of interest.
\newcommand{AE$\bar{\hbox{g}}$IS}{AE$\bar{\hbox{g}}$IS}
\newcommand{$\bar{\hbox{H}}$}{$\bar{\hbox{H}}$}
\newcommand{$\bar{\hbox{p}}$}{$\bar{\hbox{p}}$}
\newcommand{$\bar{\hbox{g}}$}{$\bar{\hbox{g}}$}
\newcommand{Moir\'{e}-deflectometer}{Moir\'{e}-deflectometer}
\newcommand{\note}[1]{{\emph{\textcolor{red}{#1}}} \normalsize}
\newcommand{\new}[1]{\textcolor{blue}{#1}}
\section{Introduction}
Positronium (Ps), the bound state of an electron and a positron, is a purely leptonic matter-antimatter system with many applications in fundamental research \cite{Mil01, Ito2005:Ps_porosimetry, Cass07:Ps2, cassidy_review:18, Mills2019:PsBEC, Nagashima2020, Zimmer2021:PsCooling}. In particular, positronium can be used to induce the charge exchange reaction with cold trapped antiprotons in order to efficiently create antihydrogen. This was recently achieved by AE$\bar{\hbox{g}}$IS{} (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) \cite{AEGIS2021:HBAR} and is also planned to be used by GBAR (Gravitational Behaviour of Antihydrogen at Rest) \cite{GBAR:Ps_Hbar}. Both experiments are located at the Antiproton Decelerator (AD) facility at CERN. The AE$\bar{\hbox{g}}$IS{} collaboration produces ground state positronium by implanting positron bunches into a nanochanneled silicon target \cite{mariazzi_prl:10, AEGIS2021:Morpho}. A substantial fraction of the long-lived positronium atoms in a triplet state (ortho-positronium with total spin $S=1$) subsequently cools by inelastic collisions with the channel walls and escapes from the nanoporous target at almost environmental temperature into vacuum. Such cold Ps is then excited to Rydberg states by laser irradiation \cite{aegis_neq3:16} before reaching the trapped antiproton plasma and forming antihydrogen. Such experiments require ultra-high vacuum, a homogeneous magnetic field in the order of few Tesla and cryogenic temperatures, making it difficult to set up an efficient detector to monitor and track positronium creation and decay.
The commonly used technique for Ps detection, namely Single Shot Positron Annihilation Lifetime Spectroscopy (SSPALS) with scintillation detectors \cite{CassidySSPALS2007}, is severely limited in narrow-spaced environments at liquid-helium temperatures with high magnetic fields and small accessible solid angles, such as those encountered at AE$\bar{\hbox{g}}$IS{} \cite{aegis_SSPALS:2019,aegis_MCP:2020}.
Therefore, positronium formation is monitored in a destructive way by laser-induced photoionization and by imaging the ionized positrons, which are axially confined by a \SI{1}{\tesla} magnetic field, with a microchannel plate and a phosphor screen \cite{aegis_MCP:2019,AEGIS:2020_RydbergPs}. Antihydrogen formation and annihilation, on the other hand, can be monitored by means of the Fast Annihilation Cryogenic Tracking (FACT) scintillating fiber detector surrounding the production region \cite{aegis_fact:13,aegis_FACT:2020}. The FACT detector has been designed to detect pions produced by the annihilation of antihydrogen atoms. Nevertheless, the scintillating fibers could be used as well to non-destructively monitor the production and the excitation of ortho-positronium by the detection of $3\gamma$ annihilation following the in-flight decay of o-Ps in vacuum.
In order to assess this possibility, we first ran Monte Carlo simulations of the response of such a fiber detector to gamma radiation. We then performed experimental tests on a scintillating fiber of the same type as those installed in the FACT detector and studied its response to positron bursts from the AE$\bar{\hbox{g}}$IS{} positron system.
\section{A ``digital calorimeter'' for o-Ps: Principle and simulations}
The FACT detector of AE$\bar{\hbox{g}}$IS{} is made of 794 scintillating fibers of \SI{200}{\milli\meter} length, distributed in four layers with cylindrical symmetry. The fiber type is \emph{Kuraray SCSF-78M}, multi-cladded, with a diameter of \SI{1}{\milli\meter}. There are two layers of scintillating fibers at radial distances of \SI{70}{\milli\meter} and \SI{98}{\milli\meter} from the central beam axis. Each of these layers in turn consists of two sub-layers, one shifted with respect to the other in order to avoid blind spots in the detection area. Within a single sub-layer, the fiber centers are horizontally separated by \SI{1.2}{\milli\meter}, and the radial distance of the shifted sub-layer is increased by \SI{0.8}{\milli\meter} (see Fig. \ref{fig:FACTgeo}).
\begin{figure}[hptb]
\centering
\includegraphics[width=0.85\linewidth]{Fig1_FACT.png}
\caption{Schematic of the FACT cross section. In total 794 scintillating fibers in four layers in cylindrical geometry (inner diameter is \SI{70}{\milli\meter}, outer diameter is \SI{98}{\milli\meter}) are used to detect annihilation products. The Ps converter is positioned in the center, but radially displaced by \SI{20}{\milli\meter} to be aligned with AE$\bar{\hbox{g}}$IS{} positron transfer line.}
\label{fig:FACTgeo}
\end{figure}
Each scintillating fiber is mechanically and optically coupled to a clear fiber, which transports scintillation light to a Multi-Pixel Photon Counter (MPPC, type \emph{Hamamatsu S10362-11-100C}), which produces a small electrical signal at its output. A hit is registered only when this signal exceeds a given threshold. The MPPC output is connected to a fast monolithic amplifier (\emph{Mini-Circuits MAR-6+}), whose signal is fed into a discriminator which then returns the time over threshold (ToT), digitized by an FPGA acquisition board \cite{aegis_fact:13,aegis_fact:20}.
The positron/Ps converter is positioned at the center of FACT, but radially displaced from the central axis by approximately \SI{20}{\milli\meter} so that it is aligned with the positron transfer line. When positrons hit the converter, ortho-positronium is created, which travels for several millimeters before it annihilates predominantly into three gamma quanta with energies ranging between 0 and \SI{511}{\kilo\electronvolt}.
This continuous o-Ps annihilation spectrum is described by the so-called Ore-Powell formula \cite{Ore49}, which was used for the following Monte Carlo simulation.
With the 2014 version of the PENELOPE code \cite{PENELOPE} we simulated the response of an array of scintillating fibers in the same geometry as FACT to a discrete Ore-Powell energy spectrum of $\gamma$-radiation. Fig. \ref{fig:PENELOPE} shows the simulated detector response to a delta pulse of \SI{2e6}{} gamma quanta originating from the target region for three different energy thresholds. The vertical axis represents the number of gamma quanta leaving a signal above the threshold in each fiber. Exploiting the symmetry of the entire setup, only 1/8 of all available fibers needed to be simulated for this proof-of-principle. They were numbered from 0 to 99, with the central fiber of the entire array being placed at the zero position of the abscissa in the simulation.
The three different curves show the number of gamma quanta leaving more than \SI{50}{\kilo\electronvolt ee} (black squares), \SI{100}{\kilo\electronvolt ee} (white squares) or \SI{200}{\kilo\electronvolt ee} (white circles) of energy in each fiber. The reduction of the number of counts reflects the effect of the solid angle, which is decreasing by approximately \SI{70}{\percent} when going from the central fiber (\#0) to the outermost fiber (\#99). This basic response signal is valid for one layer of fibers with constant radius.
\begin{figure}[hptb]
\centering
\includegraphics[width=1\linewidth]{Fig2_PenelopeSimu.png}
\caption{Simulated response of an array of scintillating fibers to a pulse of \SI{2e6}{} o-Ps annihilation gamma quanta with applied energy thresholds of 50, 100 and 200\,keVee, respectively.}
\label{fig:PENELOPE}
\end{figure}
Considering its solid angle $\Omega \approx \SI{0.09}{sr}$, the central fiber would be irradiated by \SI{1.4e4}{} photons originating from the position of the Ps converter. However, the simulation result with an energy threshold of \SI{50}{\kilo\electronvolt ee} as plotted in Fig. \ref{fig:PENELOPE} shows that only 115 gamma quanta leave a sizeable signal inside the fiber. This is mainly due to the intrinsic detection efficiency of the fiber, i.e. its ability to be ``switched on'' by a single gamma quantum, but partly also due to the applied threshold.
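For orientation, the interplay between the quoted solid angle, the number of simulated gamma quanta and the 115 registered hits can be cross-checked with a short back-of-the-envelope script (a rough illustration on our part, not a substitute for the PENELOPE simulation):
\begin{verbatim}
# Rough cross-check of the numbers quoted above.
import math

n_gamma = 2e6          # simulated annihilation gamma quanta (delta pulse)
omega   = 0.09         # solid angle of the central fiber in sr

hitting = n_gamma * omega / (4 * math.pi)
print(f"photons geometrically hitting the central fiber: {hitting:.2e}")  # ~1.4e4

registered = 115       # hits above the 50 keVee threshold (simulation result)
print(f"implied efficiency at 50 keVee: {registered / hitting:.2%}")
\end{verbatim}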
In order to validate the simulation, we estimated this intrinsic efficiency of one fiber at a given energy threshold by taking into account the solid angle of the central fiber, and compared this to measurements existing in literature for similar systems. The intrinsic efficiency of one fiber at an energy threshold of \SI{150}{\kilo\electronvolt ee} is \SI{0.3}{\percent} as yielded by the PENELOPE simulation. This compares well with the efficiency of \SI{0.23}{\percent} measured in Ref. \cite{Machaj2011} on a \SI{1}{\milli\meter} thick plastic scintillator for gamma radiation of the same energy regime and detection threshold.
In order to be able to simulate the detector response to a decaying cloud of Ps atoms, we first have to consider a typical ortho-positronium SSPALS spectrum as is shown in Fig. \ref{fig:SSPALS}. This spectrum was obtained by implanting a pulse of \SI{3e6}{} positrons with a time spread of about \SI{10}{\nano\second} full-width-half-maximum (FWHM) and with a kinetic energy of roughly \SI{3}{\kilo\electronvolt} into a positron/Ps converter target.
The annihilation signal was recorded with a PbWO$_4$ scintillation detector positioned about \SI{4}{\centi\meter} away from the Ps converter target inside a dedicated test chamber. The used positron system and the Ps converter are described in detail elsewhere \cite{aegis_nimb:15,AEGIS2021:Morpho}.
As shown in Fig. \ref{fig:SSPALS}, the positrons hit the Ps converter target at time $t = 0$, leading to an almost instantaneous peak of positron annihilation into \SI{511}{\kilo\electronvolt} gamma quanta. In the absence of o-Ps formation, the signal quickly approaches the noise level. Conversely, in the presence of o-Ps being emitted into vacuum, the signal at times greater than \SI{50}{\nano\second} is proportional to the number of gamma quanta emitted by the decaying cloud of o-Ps as a function of time with the characteristic ground state vacuum lifetime of \SI{142}{\nano\second}. Due to the exponential nature of the decay, one can calculate the number of annihilation products occurring in any time interval of a chosen width as a function of the initial number of ortho-Ps atoms.
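For illustration, the expected number of decays (and hence of emitted gamma quanta) in a given time bin follows directly from the exponential law. The sketch below assumes that all emitted o-Ps decays in vacuum via the $3\gamma$ channel and, for definiteness, takes an o-Ps yield of \SI{20}{\percent} (cf. the conversion efficiency discussed in the next section):
\begin{verbatim}
# Expected o-Ps decays and emitted gamma quanta per time bin,
# assuming a pure exponential decay with the 142 ns vacuum lifetime.
import math

TAU_PS = 142.0   # o-Ps vacuum lifetime in ns

def decays_in_bin(n_ops, t1, t2, tau=TAU_PS):
    """o-Ps atoms decaying between t1 and t2 (in ns) out of n_ops at t = 0."""
    return n_ops * (math.exp(-t1 / tau) - math.exp(-t2 / tau))

n_ops = 3e6 * 0.2    # implanted positrons times an assumed 20% o-Ps yield
for t in range(0, 400, 20):
    n_dec = decays_in_bin(n_ops, t, t + 20)
    print(f"{t:3d}-{t+20:3d} ns: {n_dec:9.0f} decays, ~{3*n_dec:10.0f} gammas")
\end{verbatim}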
\begin{figure}[hptb]
\centering
\includegraphics[width=1\linewidth]{Fig3_SSPALS2020.png}
\caption{Examples for SSPALS spectra measured with a PbWO$_4$ scintillation detector for an aluminum target (no Ps formation, grey curve) and for a positron/Ps converter (Ps formation, black curve). The SSPALS spectrum in presence of the converter shows an exponentially decreasing Ps tail with the typical ground state vacuum lifetime of about \SI{142}{\nano\second}. The best exponential fit is reported as the dashed line. The slight deviation from the theoretical Ps lifetime value comes from systematic effects such as the non-linear detector response function at high signal amplitudes.}
\label{fig:SSPALS}
\end{figure}
We then take into account the \SI{200}{\mega\hertz} clock as used in the FACT acquisition system, which sets a lower limit for the sampling time of \SI{5}{\nano\second} \cite{aegis_fact:13}. We measured the average recovery time $\tau_{\text{MPPC}}$ of one Hamamatsu MPPC and its amplifier with an oscilloscope, amounting to less than \SI{5}{\nano\second} following the detection of a single photon. We thus chose a time integration interval of \SI{20}{\nano\second} for the simulation of the o-Ps decay, which is comfortably longer than the recovery time of the readout electronics after the detection of single photons. In case too many photons arrive at the fiber within this chosen time window, one would adjust the threshold on the corresponding MPPC until the recovery time is again within an acceptable range.
If we now count the number of fibers firing above that threshold inside the chosen time interval and track this activity over time, while taking into account the position of each fiber and thus the resulting solid angles, we will finally reproduce the exponentially decaying number of o-Ps atoms.
In other words, the response to the o-Ps decay hinges on the statistical nature of fibers being switched \emph{on} or \emph{off} under the exponentially decreasing bombardment with annihilation gamma quanta. Note that the choice of an optimal threshold for each MPPC is crucial: too high a threshold cuts away the important long tail of the o-Ps decay, while too low a threshold would saturate the MPPCs for too long --- up to several hundred nanoseconds --- due to the intense prompt positron $2\gamma$-annihilation pulse.
If there were no formation of Ps in the target, all implanted positrons would rapidly annihilate into $2\gamma$ quanta, producing a spectrum that shows a large number of fibers firing at $t=0$ and then rapidly approaching the noise level within the first 50--\SI{100}{\nano\second}.
This noise level in turn can be estimated using the dark noise level of the real FACT detector at low energy thresholds, which amounts to \SI{100}{\kilo\hertz} per channel \cite{aegis_fact:13}. This means that each of the 794 fibers fires on average $0.002$ times within a time interval of \SI{20}{\nano\second}. The way this affects the counting when taking into account all fibers is described by a Poisson distribution, with an expectation value $\lambda=794\cdot0.002=1.6$ counts within a \SI{20}{\nano\second} time interval as was used for the simulation. Thus if the recorded counts exceed the noise level several tens of nanoseconds after positron implantation, it must be due to o-Ps annihilation.
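This estimate is straightforward to reproduce with elementary Poisson statistics (illustrative only; the dark-noise rate and sampling window are the values quoted above):
\begin{verbatim}
# Dark-noise expectation for the full fiber array in one 20 ns window.
import math

n_fibers  = 794
dark_rate = 100e3      # dark-noise rate per channel in Hz
window    = 20e-9      # sampling window in s

lam = n_fibers * dark_rate * window
print(f"expected noise counts per window: {lam:.2f}")   # ~1.6

def p_at_least(k, lam):
    """Poisson probability of observing k or more counts."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

for k in (3, 5, 8):
    print(f"P(N >= {k}) = {p_at_least(k, lam):.4f}")
\end{verbatim}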
In summary, the method proposed here consists of counting the number of fibers arranged in a cylindrical array that are firing within a fixed time interval, e.g. \SI{20}{\nano\second} as explained above, and tracking such a number as a function of the time. Taking into account the geometry of the detector array, one can find the exponentially decreasing number of gamma quanta originating from o-Ps annihilation in flight and even infer the approximate amount of originally formed positronium atoms.
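A deliberately oversimplified toy model may help to visualize this counting scheme (our own illustration, not the PENELOPE-based simulation described above; in particular, the effective per-gamma hit probability \texttt{P\_HIT} is a placeholder that lumps together solid angle and intrinsic efficiency):
\begin{verbatim}
# Toy model of the "digital calorimeter": count fibers firing in a 20 ns
# window while a cloud of o-Ps decays exponentially.  Purely illustrative.
import math, random

TAU_PS   = 142.0     # o-Ps vacuum lifetime in ns
N_FIBERS = 794
P_HIT    = 5e-5      # placeholder: solid-angle fraction times intrinsic
                     # efficiency for a single gamma and a single fiber

def gammas_in_bin(n_ops, t1, t2):
    return 3 * n_ops * (math.exp(-t1 / TAU_PS) - math.exp(-t2 / TAU_PS))

def fibers_firing(n_gamma):
    """A fiber fires if at least one gamma leaves a signal above threshold."""
    p_fire = 1.0 - (1.0 - P_HIT) ** n_gamma
    return sum(random.random() < p_fire for _ in range(N_FIBERS))

n_ops = 3e6 * 0.2    # implanted positrons times an assumed o-Ps yield
for t in range(0, 1001, 100):
    n_gamma = gammas_in_bin(n_ops, t, t + 20)
    print(f"{t:5d} ns: {fibers_firing(n_gamma):3d} fibers above threshold")
\end{verbatim}
The onset and length of the initial plateau in such a toy model depend strongly on the assumed hit probability and on the chosen threshold, which is precisely why the threshold has to be tuned in practice.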
\section{Experiments with a single fiber}
We coupled one Kuraray scintillating fiber to a fast photomultiplier tube (PMT \emph{Hamamatsu R1450}). The characteristics of such a fiber-PMT assembly are shown in Table \ref{tab:stats} on the left side. This should be compared to the characteristics of the MPPC on the right side as used in the FACT detector of AE$\bar{\hbox{g}}$IS{}. Both systems are very similar, therefore the results we obtain with our fiber-PMT assembly are indicative for the combination of fiber-MPPC as used in FACT.
\begin{table}[hptb]
\centering
\begin{tabular}{l|c|c}
& Fiber-PMT & Fiber-MPPC\\
\hline
Spectral response & $300-\SI{600}{\nano\meter}$ &$320-\SI{900}{\nano\meter}$\\
$\lambda$ of max. response & \SI{420}{\nano\meter} & \SI{440}{\nano\meter} \\
Nominal voltage & \SI{-1500}{\volt} & \SI{70}{\volt} \\
Gain & \SI{1.7e6}{} & \SI{2.4e6}{} \\
Rise time & \SI{1.8}{\nano\second} & \SI{1.2}{\nano\second} \\
Decay time & \SI{6}{\nano\second} & $<$\SI{5}{\nano\second} \\
\hline
\end{tabular}
\caption{Characteristics of the two systems in which a PMT and an MPPC, respectively, are coupled to a scintillating fiber. Both systems are very similar; thus results obtained with the fiber-PMT assembly can be directly compared to the fiber-MPPC combination used in the FACT detector.}
\label{tab:stats}
\end{table}
The fiber was shielded from ambient light by a black plastic coverage. One of the polished ends was coupled to the PMT via a black plastic holder by an air-gap without use of optical grease or bonding. This fiber-PMT assembly was tested by using a $^{133}$Ba source whose emission spectrum shows a dominant peak at \SI{356}{\kilo\electronvolt} energy \cite{133Ba}. The X-ray peaks at around \SI{31}{\kilo\electronvolt} energy were strongly suppressed, since the source was sealed in a brass casing of about \SI{5}{\milli\meter} thickness. As a consequence, the dominant peak of the Ba-spectrum can be used as a proxy for the mean energy of the continuous annihilation spectrum of o-Ps. By recording the number of counts with a Tektronix TDS5054B oscilloscope, its trigger threshold being set to \SI{5}{\milli\volt}, we confirmed the ability of a \SI{1}{\milli\meter} thick Kuraray scintillating fiber to detect gamma rays in the energy range between $300-$\SI{400}{\kilo\electronvolt}. When the Ba source was located about \SI{10}{\milli\meter} away from the fiber, the background count rate of \SI{150}{\hertz} increased to several \SI{}{\kilo\hertz}.
In order to experimentally test the response of a Kuraray fiber to a burst of positrons, the aforementioned positron test chamber of AE$\bar{\hbox{g}}$IS{} (sketched in Fig. \ref{fig:BB}) was used. Pulses with about \SI{3e6}{} positrons were produced using the AE$\bar{\hbox{g}}$IS{} positron system, which is located at the Antiproton Decelerator (AD) facility at CERN. The loose end of the fiber was rolled up into three loops of about \SI{5}{\centi\meter} in diameter for a total length of about \SI{47}{\centi\meter}. The 3-loop end of the fiber was inserted into the detector ``pit'' of the positron test chamber of AE$\bar{\hbox{g}}$IS{} with the loop axis pointing towards an aluminium target. The positron pulses were steered onto the target, which led on average to a peak amplitude of about \SI{100}{\milli\volt} on the oscilloscope. This corresponds to the average response of our fiber-PMT assembly to the full $2\gamma$ annihilation signal consisting of about \SI{6e6}{} gamma quanta with \SI{511}{\kilo\electronvolt} energy.
We assumed all positrons to annihilate on the surface of the aluminum target into two gamma quanta with no production of Ps or positron back-reflection. The background signal was acquired by switching off the magnets in the transfer line so that no positrons were transferred from the source region to the test chamber.
\begin{figure}[hptb]
\centering
\includegraphics[width=0.65\linewidth]{Fig4_BB.png}
\caption{Test chamber, where the Kuraray scintillating fiber is tested with a pulse of \SI{3e6}{} positrons.}
\label{fig:BB}
\end{figure}
The 3-loop fiber was positioned in the detector pit at distances of 4, 14 and \SI{18}{\centi\meter} from the aluminium target, and between 50 and 100 measurements of the signal amplitude were taken for each position. The resulting histograms are shown in Fig. \ref{fig:distance}. One can see that, as the distance from the target increases, the distributions shift towards smaller signal amplitudes and become narrower. Moreover, the output of the fiber at greater distances was increasingly often not distinguishable from the noise level, which then entered the histogram as a zero signal. This was due to the decreasing solid angle covered by the 3-loop fiber in each position. The solid angles corresponding to the tested distances are \SI{0.161}{sr},
\SI{0.013}{sr} and \SI{0.008}{sr}, respectively.
\begin{figure}[hptb]
\centering
\includegraphics[width=1\linewidth]{Fig5_Distance.png}
\caption{Number of occurrences as a function of pulse amplitudes for different distances, i.e. solid angles. Error bars are left out, as this is a qualitative indication only. Solid lines are meant as a guide to the eye.}
\label{fig:distance}
\end{figure}
We can now derive the probability for a single fiber to register a signal above a certain energy threshold by calculating, for each distance, the ratio between the number of counts higher than a given threshold and the sum of all occurrences. In Fig. \ref{fig:threshold} we report the percent-probability that a fiber activates, i.e. that a photon burst of constant intensity deposits enough energy in the fiber to exceed the threshold. The corresponding thresholds are indicated on the abscissa.
\begin{figure}[hptb]
\centering
\includegraphics[width=1\linewidth]{Fig6_Threshold.png}
\caption{Probability of detecting a constant flash of \SI{6e6}{} photons from different distances as a function of the detection threshold. Equivalently, this can be interpreted as the probability to detect bursts of photons with varying intensities, but from a constant arbitrarily chosen solid angle (here \SI{0.09}{sr}). Again the error bars were left out for this qualitative result.}
\label{fig:threshold}
\end{figure}
\section{Simulated response of the ``digital calorimeter''}
The variation of the distance and therefore of the solid angle can be thought to be equivalent to a change in the number of annihilating positrons. In other words, the measurements at different distances simulate the responses of one particular fiber with a fixed solid angle to flashes of gamma radiation with \emph{different} intensities --- exactly as it occurs during the exponential decay of o-Ps in free flight. Referring to the central fiber of a detector with the geometry of FACT, i.e. with a solid angle of \SI{0.09}{sr}, one can directly use these sets of measurements at different distances in order to obtain the probability of this fiber to respond to nearly instantaneous flashes of gamma quanta with varying intensities.
The curves in Fig. \ref{fig:threshold} can therefore also be interpreted as the percent-probability that a fiber covering a solid angle of \SI{0.09}{sr} activates due to a flash of \SI{1.1e7}{} (circles), \SI{8.8e5}{} (rectangles) or \SI{5.3e5}{} (triangles) annihilation photons, respectively.
We can now construct the response of the central fiber to a decaying cloud of o-Ps atoms at different times. The positronium cloud is generated by a pulse of $N_0$ positrons implanted at time $t=0$. It then exponentially decays with its typical vacuum annihilation lifetime of \SI{142}{\nano\second}.
The initial o-Ps conversion ratio of the positron/Ps converter targets at AE$\bar{\hbox{g}}$IS{} is 0.3 \cite{AEGIS2021:Morpho}. Then taking only the fraction surviving magnetic quenching inside the present \SI{1}{\tesla} field (\SI{66}{\percent}), one finally finds a conversion efficiency of \SI{20}{\percent}.
From this, we obtained the number of gamma quanta emitted within intervals of \SI{20}{\nano\second} at any given time.
As we have shown above, when a flash of gamma rays irradiates a fiber, the fiber is activated with a certain intrinsic efficiency that depends on the chosen energy threshold. Combining this effect with the simulated geometrical response of the entire fiber array, given for three thresholds in Fig. \ref{fig:PENELOPE}, we obtained the statistical number of scintillating fibers being activated at the same time, $T_{firing}$.
The outcome is given in Fig. \ref{fig:firing}, which shows the number of fibers firing during the production and the decay of o-Ps for two different intensities of the initial positron pulse at $t=0$. With the threshold set to \SI{25}{\milli\volt} and a positron pulse containing \SI{2e8}{} particles, the number of active fibers saturates for about \SI{100}{\nano\second} and then decreases. When using only half the positrons, we observe no saturation but only the decreasing tail. The threshold of \SI{25}{\milli\volt} has been chosen for this specific example and was used to verify the correct behaviour of the simulation. In reality, it depends on the intrinsic response of the single fiber-amplifier assembly, the number of implanted positrons and the desired geometry, and thus needs to be adjusted when using a system other than the one presented here. Notably, after taking into account the convolution with the detector geometry, the resulting decrease of active fibers resembles the decay of o-Ps in free flight as usually tracked by the aforementioned SSPALS method (black curve in Fig. \ref{fig:SSPALS}).
An estimate of the errors has been done for these particular tests by taking into account the statistical fluctuation on the number of gamma rays that leave a sizeable amount of energy, i.e. above the chosen \SI{25}{\milli\volt} threshold, in the central fiber. This yielded an error ranging from $3-$\SI{4}{\percent} at short times where almost all the fibers are hit and respond, to about $10-$\SI{12}{\percent} at the longest times where a reduced number of fibers respond.
\begin{figure}[hptb]
\centering
\includegraphics[width=1\linewidth]{Fig7_FibFiring.png}
\caption{Simulated response of a fiber array detector such as FACT to \SI{1e8}{} positrons (circles) and \SI{2e8}{} positrons (squares) as a function of time after positron implantation and conversion to Ps. A threshold of \SI{25}{\milli\volt} was used for this test.}
\label{fig:firing}
\end{figure}
\section{CONCLUSIONS}
A novel method to use an array of scintillating fibers coupled to an amplification and detection system in order to track ortho-Ps formation and decay in cryogenic and magnetic environments over time has been tested. A single fiber coupled to a fast PMT was irradiated by flashes of about \SI{6e6}{} gamma quanta produced by a burst of positrons from the AE$\bar{\hbox{g}}$IS{} positron system.
The response of the fiber obtained at different distances from the radiation source demonstrated the possibility to setup a fiber array in the geometry of the FACT detector of AE$\bar{\hbox{g}}$IS{} and use it to monitor the annihilation signal of decaying o-Ps in free flight.
By introducing the precise geometry and efficiencies of a cylindrical fiber array as is used in FACT into a Monte Carlo simulation, we provided a proof-of-principle on the capability of this scintillating fiber detector to monitor the o-Ps formation and decay with a time resolution of at least \SI{20}{\nano\second}.
Hence, this method could provide a non-destructive measurement of ground-state o-Ps annihilation with a dynamic range of about 9.5 bits inside a narrow-spaced, cryogenic, evacuated and magnetic environment. A natural further development of this method consists of tracking over time the decay of laser-excited Rydberg-Ps atoms, which have lifetimes of the order of microseconds.
\section*{ACKNOWLEDGMENTS}
The authors wish to thank the AE$\bar{\hbox{g}}$IS{} collaboration at CERN for permitting the use of the positron apparatus.
This work was supported by: Istituto Nazionale di Fisica Nucleare (INFN Italy); the CERN Fellowship program and the CERN Doctoral student program; the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 754496, FELLINI.
\bibliographystyle{apsrev}
\section{Introduction}
The classic phenomenology scenario derived from string theory is that of $\mr{E}_8\times \mr{E}_8$ heterotic strings compactified on Calabi-Yau manifolds \cite{Candelas1}, but these models suffered from a plethora of problems, including the difficulty of breaking supersymmetry, of obtaining a vanishing cosmological constant, and the presence of too many massless moduli. In recent years some of these problems have been addressed in the context of type II strings by introducing sources and turning on background fluxes on Calabi-Yau manifolds \cite{Grana}. However, in the heterotic setting there is a dearth of sources and fluxes, and an alternative to turning on fluxes is to replace the compactification Calabi-Yau manifold by manifolds with SU(3) structure known as \emph{half-flat mirror manifolds}, which bear close resemblance to the original Calabi-Yau manifolds \cite{Gurrieri0, Gurrieri1, Gurrieri2}.\\
In this paper we consider some simple models, known as STZ models in the supergravity literature, that arise from the compactification of heterotic strings on half-flat and generalized half-flat manifolds with one complex structure modulus and one K\"ahler structure modulus.\footnote{The meaning of ``moduli'' in the half-flat and generalized half-flat cases is a bit subtle, and it is explained in the appendix of this paper.} These manifolds have SU(3) structure and are the correct generalizations of Calabi-Yau manifolds which give the desired minimal supersymmetry in four dimensions. Because they encode tunable fluxes in their geometry, these manifolds also allow for ways of generating potentials for the moduli fields in the effective low energy theory. In particular, we compute the low-energy potentials for the STZ fields that are derived from the Gukov-Vafa-Witten type superpotentials of \cite{Gurrieri2,deCarlos}. These models are consistent with $\mathcal{N}=1$ supergravity.\\
Here we are interested in investigating whether these models yield cosmological power-law solutions, also known as scaling solutions. These are cosmological attractor solutions. For example, an exponential potential has a scale factor that satisfies power-law behaviour. Similar analyses have been done in \cite{Blaback:2013sda} for maximal ($\mathcal{N}=8$) and minimal ($\mathcal{N}=1)$ supergravity theories, based on earlier works \cite{Rosseel, Townsend}. Although our main motivation was to look for late-time accelerating solutions we also look for ekpyrotic power-law cosmologies. The reason for looking for these two classes of solutions is that the mathematical techniques for doing so are very similar. In our calculations looking for accelerating solutions we also include first order $\alpha'$ corrections to the effective action.\\
To be more specific, we search for accelerating and ekpyrotic power-law cosmologies in some simple heterotic supergravity models. For these supergravity theories the dynamics of the scalar sector is dictated by a scalar potential which has the form
\begin{equation}\label{eq:multiple_exponentials}
V (\vec \phi)=\sum_{a} \Lambda_a\ e^{\vec \alpha_a \cdot \vec \phi} ,
\end{equation}
where the $\Lambda_a$ are real constants\footnote{$\Lambda_a$ can in general depend on all the scalar fields in the theory.} and the $\vec{\alpha}_a$ are constant real vectors, while $\vec{\phi}$ is a vector consisting of the scalar fields.
\begin{equation} \label{eq:single_exponential}
V = e^{c \, \psi} \, U \, ,
\end{equation}
where $\psi$ is the running scalar field and $U$ is a function of the remaining scalar fields which need to be stabilized. The coefficient $c$ in the exponent of the above expression is related to the power-law behaviour $P=\nicefrac{1}{c^2}$, where the scale factor goes as $a (t) \sim t^P$. There are several types of scaling solutions. The scaling solution will be accelerating when $P > 1$ (or $c^2<1$) and $U$ is stabilized at a positive value \cite{Collinucci:2004iw, Hartong:2006rt}. However, the scaling solution will be ekpyrotic when $P < \nicefrac{1}{3}$ (or $c^2>3$) and $U$ is stabilized at a negative value \cite{Khoury:2004xi}. In this paper we concentrate on these two types of scaling solutions since other popular cosmological models like inflation and de Sitter turn out to be extremely difficult to realize in string theory. Also, we study heterotic string theory which is phenomenologically very interesting but has been much less studied in the context of cosmology.\\
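In these conventions the two regimes of interest can be summarized in a trivial helper function (an illustration on our part; the numerical examples anticipate values found later in the paper):
\begin{verbatim}
# Classify a scaling solution: a(t) ~ t^P with P = 1/c^2 in our conventions.
def classify(c, U_min):
    P = 1.0 / c**2
    if P > 1.0 and U_min > 0:
        return P, "accelerating"
    if P < 1.0 / 3.0 and U_min < 0:
        return P, "ekpyrotic"
    return P, "generic scaling"

print(classify(2 ** 0.5, +1.0))        # c^2 = 2    -> P = 1/2,  generic
print(classify((10 / 3) ** 0.5, -1.0)) # c^2 = 10/3 -> P = 3/10, ekpyrotic
\end{verbatim}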
Inflation suggests that the very early universe went through an enormous amount of acceleration. Within a fraction of a second, it grew from a subatomic scale to a macroscopic scale. Apart from conceptual issues with inflation, it would appear that for the first time some inflationary models seem to be disfavoured \cite {Ijjas:2013vea, Ijjas:2014nta} by the recent data from WMAP, ACT and Planck2013. Ekpyrotic or cyclic cosmology \cite{Steinhardt:2001st, Khoury:2001bz, Khoury:2003rt} is the most popular alternative to the theory of cosmic inflation.\footnote{This situation seems to have changed dramatically due to the BICEP2 results. See our ``note added" at the end of this introduction for more details.} It proposes that our universe is going through an infinite number of cycles. Each of these cycles begins with a big bang phase followed by a slowly accelerating expansion phase, which in turn is followed by a slow contraction phase, finally ending with a big crunch. The scaling behaviour occurs during the slow contraction phase of the evolution of the ekpyrotic universe. This corresponds to a steep and negative minimum potential, since at the end of this contraction phase the universe bounces back to a positive value. This scaling phase of the ekpyrotic solutions can be realized in supergravity theories \cite{Blaback:2013sda}. In this paper, we search for this part of ekpyrotic universes for some simple heterotic supergravity theories.\\
On the other hand, accelerating power-law solutions are the most popular alternative to de Sitter solutions as models for late time acceleration. Most of the research on the construction of de Sitter \cite{Silverstein:2007ac, Haque:2008jz,Caviezel:2008tf, Flauger:2008ad, Danielsson:2009ff, Danielsson:2010bc, Danielsson:2011au} and power-law solutions \cite{Blaback:2013sda} is done in the context of type II supergravity. Moreover, all known de Sitter critical points \cite{deRoo:2002jf, deRoo:2003rm, deRoo:2006ms, Gibbons:2001wy, Cvetic:2004km} develop some instability in the scalar spectrum. One can get de Sitter solutions without instabilities in type II theories only by including either non-geometric fluxes \cite{Damian:2013dwa, Blaback:2013ht,Danielsson:2012by} or by considering non-perturbative corrections \cite{Blaback:2013qza,Danielsson:2013rza}. Consequently, in this context it is natural to study the best alternative to de Sitter which are the power-law solutions for other string theories such as the heterotic string theory.\\
Another motivation for looking for cosmological solutions other than de Sitter spaces is the no-go theorem of \cite{Green} excluding the existence of de Sitter vacua in compactification of heterotic strings. Note that this no-go theorem takes into account the leading order $\alpha'$ corrected heterotic string action and so is very different from the usual supergravity no-go theorem \cite{Gibbons,Maldacena} that is invoked to exclude de Sitter spaces and which only works at the level of supergravity. However, \cite{Green} only considers a time independent dilaton and is agnostic about how time-dependent, model-dependent scalar fields influence the cosmology. Although we do not look for de Sitter spaces we do assume time dependence of our scalar field, and so our work is somewhat tangentially related to that of \cite{Green}.\\
Heterotic string theory is considered to be the most attractive string theory from the perspective of particle physics phenomenology. Yet from the cosmological perspective, it still remains largely unexplored. Part of the reason is the absence of the RR fluxes and sources, which makes it less flexible from the model building angle. Thus it is natural to consider manifolds which are generalizations of Calabi-Yau manifolds as they allow one to play with more ingredients. For example, in this paper we consider manifolds with SU(3) structure which have non-vanishing intrinsic torsion. In particular we consider half-flat mirror manifolds where the intrinsic torsion measures the deviation of these manifolds from Calabi-Yau manifolds. These torsion classes come with geometric fluxes, which play an important role in moduli stabilization. For our purposes they become additional parameters that one can then play with. We also consider leading $\alpha'$ correction for these mirror half-flat manifolds which potentially provides us with more room to play with.\\
Finally, we explore an even more general class of manifolds with non-integrable $\mr{SU}(3)\times \mr{SU}(3)$ structure, described in \cite{ D'Auria:2004tr}, which in \cite {deCarlos} are referred to as the generalized half-flat manifolds. These generalized half-flat manifolds do not seem to have an obvious geometric interpretation but their relevant properties can be encoded in certain simple equations that enter our calculations, as we describe in our appendix. For related work in moduli stabilization in the heterotic setting see \cite{Gray,Klaput,Lukas}.
\\
The paper is organized as follows. In section 2 we discuss the STZ models and the K\"ahler potential and superpotential for these models for compactification on half-flat and generalized half-flat manifolds. In section 3 we search for ekpyrotic and accelerating power-law solutions for the STZ model derived from compactification on half-flat manifolds. In doing so we also briefly explain our methodology. We conclude this section by considering the $\alpha'$ correction for the half-flat case and investigate the power-law solutions. In section 4 we examine generalized half-flat manifolds and we find two ekpyrotic solutions. We do not examine the full $\a'$ corrected potential for the generalized half-flat manifolds as the full expression for the potential is extremely lengthy and unwieldy.
We conclude with a summary of the paper and with some open problems. In the appendix we briefly explain mirror half-flat and generalized half-flat manifolds. We also describe the superpotential and the K\"ahler potential used in our calculations.
\subsection*{Note Added}
Since the first version of this paper appeared on the archive, the results of BICEP2 experiment have been announced which seem to have made the case for inflation much stronger than ekpyrosis. However, until the results of BICEP2 are verified by further independent experiments it is still worthwhile to consider alternative models to inflation such as ekpyrosis.
\section{STZ Model: K\"ahler Potential and Superpotential}
Although our ultimate goal is to study the heterotic string cosmology that comes from compactification on half-flat and generalized half-flat manifolds, we restrict our attention in this paper to the minimal truncation of the four-dimensional effective theories that arise from these types of compactifications. These models are known as STZ models in the supergravity literature and they arise from restricting to one modulus each from both the complex structure and K\"ahler structure moduli spaces -- the corresponding moduli are Z and T, respectively. On the other hand, the complex scalar field S is the axion-dilaton field, which is universal in all of these models: its components correspond to the four dimensional $H$ field (which becomes a scalar upon dualization) and the four dimensional component of the dilaton. In this section we collect our master formulae, partly to establish our conventions.
\subsection{The K\"ahler Potential} \label{kahler}
The expression for the K\"ahler potential and the superpotential, including the $\alpha'$ corrections, are taken from \cite{Gurrieri2}. The details for the K\"ahler potential are given in the appendix \ref{sec:kahler}, and here we quote the results. The K\"ahler potential for the STZ model at the zeroth order in $\alpha'$ is given by
\begin{equation}
K = K_{\mathrm{cs}} + K_{\mathrm{K}} + K_{S}\ ,
\end{equation}
with
\begin{align}
K_{\mathrm{cs}} &= - 3 \ln \{ i (\bar{Z}-Z)\} \\
K_S &= - \ln \{i (\bar{S}-S)\} \\
K_{\mathrm{K}} &= - 3 \ln \{i(T-\bar{T})\}.
\end{align}
Here $K_{\mr{cs}}$, $K_{S}$ and $K_{\mr{K}}$ are the K\"ahler potentials arising from the complex structure modulus, the axion-dilaton, and the K\"ahler structure modulus, respectively. On the other hand, the first order in $\a'$ correction to the K\"ahler potential is given by
\begin{align}
K_{\alpha'}&= -3\alpha'\left[\frac{C\bar{C}}{(Z-\bar{Z})^2} + \frac{4D\bar{D}}{(T-\bar{T})^2}+ \frac{6 (CD+ \bar{C}\bar{D})}{(T-\bar{T})(Z-\bar{Z})}\right],
\end{align}
where $C$ and $D$ denote matter fields in the K\"ahler and the complex structure sectors, respectively, which belong to the $\boldsymbol{27}$ (and $\overline{\boldsymbol{27}}$) representation of $\mr{E}_6$. The $\mr{E}_6$ GUT symmetry arises as a generalization \cite{Gurrieri2, Ali} of the standard embedding of the usual Calabi-Yau scenario \cite{Candelas1}. The $\mr{E}_6$ indices have been largely suppressed in this paper. Also, throughout this paper we shall assume that these fields take on {\sc vev}s that are at the GUT symmetry breaking scale or some other suitable high energy scale. The precise details of GUT symmetry breaking are beyond the scope of this paper. \\
The leading-order coefficients of the kinetic terms, obtained from the zeroth order K\"ahler potential, are given by a K\"ahler metric with the following components,
\begin{align}
K_{S\bar{S}} &= -\frac{1}{(S-\bar{S})^2}\\
K_{T\bar{T}} & = -\frac{3}{(T-\bar{T})^2}\\
K_{Z\bar{Z}} & = -\frac{3}{(Z-\bar{Z})^2},
\end{align}
with all other components vanishing. \\
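These components follow from $K_{I\bar{J}}=\partial_I \partial_{\bar{J}} K$ and can be verified symbolically, for instance with the short sympy sketch below (the barred fields are treated as independent symbols, as usual):
\begin{verbatim}
# Symbolic check of the tree-level Kahler metric K_{I Jbar}.
import sympy as sp

S, Sb, T, Tb, Z, Zb = sp.symbols('S Sb T Tb Z Zb')
K = (-sp.log(sp.I*(Sb - S)) - 3*sp.log(sp.I*(T - Tb))
     - 3*sp.log(sp.I*(Zb - Z)))

print(sp.simplify(sp.diff(K, S, Sb)))   # -1/(S - Sb)**2
print(sp.simplify(sp.diff(K, T, Tb)))   # -3/(T - Tb)**2
print(sp.simplify(sp.diff(K, Z, Zb)))   # -3/(Z - Zb)**2
\end{verbatim}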
However, there are corrections to some of these kinetic terms, as well as to the off-diagonal terms (which are zero at the zeroth order), at order $\alpha'$. But we ignore these corrections when looking for accelerating scaling solutions for the following reason. When looking at accelerating solutions we imagine that we are in a slow-roll regime in which the zeroth order kinetic terms can be ignored. The $\a'$ corrections to the kinetic terms are further suppressed by a factor of $\a'$. These two factors combine to justify ignoring the $\a'$ corrections to the K\"ahler potential. \\
But in the case of ekpyrosis we cannot assume that the kinetic term for the ekpyrotic scalar is small, as it has to be of the same order of magnitude as the potential term. Thus, if we include $\a'$ corrections to the potential term we also need to include the $\a'$ corrections to the kinetic terms, and then redefine the scalars such that the new scalars are canonically normalized. Here we run into a difficulty: we find that, even for the simplest STZ model, the $\a'$ corrections to the T and Z fields are non-linear, so expressing the old fields in terms of the new variables becomes extremely messy without making additional assumptions regarding the values of the fields. Thus, in this paper we restrict ourselves to looking at \emph{both} the kinetic and potential terms only to the zeroth order in $\a'$ when looking for ekpyrotic solutions. For completeness, the full metric to first order in $\a'$ of the K\"ahler manifold is given in appendix \ref{sec:kahler}.
\subsection{The Superpotential}
In this subsection we present the superpotential computed in \cite{Gurrieri2} and then specialize to the case where we keep only one K\"ahler modulus and one complex structure modulus as is appropriate to the STZ models.\\
According to \cite{Gurrieri2} the Gukov-Vafa-Witten superpotential arising from compactification on generalized half-flat manifolds is given by
\begin{align}
W= \tilde{\epsilon}_A Z^A - \tilde{\mu}^A \mathscr{G}_A, \label{eq:zero-order-superpotential}
\end{align}
with
\begin{align}
\tilde{\epsilon}_A &= \epsilon_A - T^i p_{Ai}\\
\tilde{\mu}^A &= \mu^A - T^i q^A_i.
\end{align}
Let us describe the different components of these formulae. The $Z^A$ are the complex structure moduli given in projective coordinates on the moduli space. This means that one of the $Z^A$ is redundant. Consequently, in some of the following formulae we set $Z^0=1$ and denote the rest of the complex structure moduli by $Z^a$. The $\mathscr{G}_A =\frac{\partial \mathscr{G}}{\partial Z^A}$ are the first derivatives of the pre-potential $\mathscr{G}$ whose explicit form is given in the appendix \ref{sec:superpotential}.\\
As explained in \cite{deCarlos, Gurrieri2}, the fact that certain forms fail to close on half-flat and generalized half-flat manifolds, as compared to Calabi-Yau manifolds, allows us to turn on additional components of the $H$ flux. The parameters $\epsilon_A$ and $\mu^A$ in the formulae above, defined more precisely in appendix \ref{sec:generalized half-flat}, are flux parameters that come from turning on these $H$ fluxes. On the other hand $p_{Ai}$ and $q^A_i$ are torsion parameters associated with the half-flat and generalized half-flat structures of the internal manifold. The $A$ index in these parameters is associated with the complex structure moduli whereas the $i$ index is associated with the K\"ahler structure.\\
When both $q^A_i$ and $p_{Ai}$ are non-zero, the internal manifold is a generalized half-flat manifold. When $q^{A}_i=0$ and $p_{ai}=0$ it reduces to a half-flat manifold. In other words, the half-flat case corresponds to only the $p_{0i}\ne 0$. See appendices \ref{sec:half-flat} and \ref{sec:generalized half-flat} for more details. When all of these parameters vanish we recover the Calabi-Yau case.\\
Putting in the explicit expressions for $\mathscr{G}_A$ and setting $Z^0=1$, we arrive at:
\begin{align}
W= (\epsilon_0 - p_{0i}T^i)+ (\epsilon_a - p_{ai} T^i) Z^a + \frac{1}{2} \tilde{d}_{abc} (\mu^a - q^a_i T^i) Z^b Z^c - \frac{1}{6} \tilde{d}_{abc}(\mu^0 - q^0_i T^i) Z^a Z^b Z^c ,
\end{align}
where $\tilde{d}_{abc}$ are related to the Yukawa couplings of the original Calabi-Yau manifold and have the interpretation of being the intersection numbers of the mirror dual of the Calabi-Yau manifold in the large complex structure limit. However, in the case of the STZ model $\tilde{d}_{abc}$ has only one component and its value is simply 1 for the regime we are in. In this limit the superpotential becomes:
\begin{align}
W= (\epsilon_0 - p_{01}T)+ (\epsilon_1 - p_{11} T) Z + \frac{1}{2} (\mu^1 - q^1_1 T) Z^2 - \frac{1}{6} (\mu^0 - q^0_1 T) Z^3.
\end{align}\\
The first order in $\a'$ correction to the superpotential has been computed in \cite{Gurrieri2}. For the STZ model arising from the compactification on a generalized half-flat manifold it is given by
\begin{align}
W_{\alpha'} &= 2 \alpha' \left(p_{11} -\frac{1}{2} q^0_1 Z^2 + q^1_1 Z\right) C_P D^P - \frac{\alpha'}{3} \left\{j_{\bar{P}\bar{R}\bar{S}} C^{\bar{P}}C^{\bar{R}}C^{\bar{S}} + j_{PRS} D^P D^R D^S\right\},
\end{align}
where $j_{PRS}$ ($j_{\bar{P}\bar{R}\bar{S}}$) is the singlet piece in the $\boldsymbol{27}\times\boldsymbol{27}\times\boldsymbol{27}$ ($\boldsymbol{\overline{27}}\times\boldsymbol{\overline{27}}\times\boldsymbol{\overline{27}}$) of $\mr{E}_6$.\\
Given the K\"ahler potential and the superpotential, the scalar potential is given by the formula:
\begin{equation}\label{potential}
V=e^{K} \left ( \sum_{\Phi} K^{I\bar J} D_{I} W D_{\bar J} \bar W -3 |W| ^2 \right) .
\end{equation}
Here $\Phi^I$ is a generic complex scalar field ($\bar{\Phi}^{\bar{I}}$, its complex conjugate), $K^{I \bar J}$ is the inverse K\"ahler metric to $K_{I\bar J}=\frac{\partial^2 K}{ \partial \Phi^I \partial \bar{\Phi}^{J} }$ and $D_{\Phi} W=\frac{\partial W }{\partial \Phi }+\frac{\partial K}{\partial \Phi } W$. The general expression for $V$ for generalized half-flat manifolds with $\alpha'$ correction can be easily computed using Mathematica (as we have done). However, the expression is long, unwieldy and not very instructive and so we have decided not to reproduce it here.\\
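To indicate how such a computation is organized in practice (a schematic sketch only; for definiteness we use the simple mirror half-flat STZ superpotential of the next section, treat barred fields as independent symbols, and leave all signs and normalizations to be checked against our conventions), one may proceed as follows:
\begin{verbatim}
# Schematic evaluation of V = e^K ( K^{I Jbar} D_I W D_Jbar Wbar - 3 |W|^2 )
# for the STZ model with W = -e T + eps Z + mu Z^2 / 2.
import sympy as sp

S, Sb, T, Tb, Z, Zb = sp.symbols('S Sb T Tb Z Zb')
e, eps, mu = sp.symbols('e epsilon mu', real=True)

K  = (-sp.log(sp.I*(Sb - S)) - 3*sp.log(sp.I*(T - Tb))
      - 3*sp.log(sp.I*(Zb - Z)))
W  = -e*T + eps*Z + sp.Rational(1, 2)*mu*Z**2
Wb = W.subs({T: Tb, Z: Zb})              # W is holomorphic in T and Z

fields, bars = [S, T, Z], [Sb, Tb, Zb]
Kmet = sp.Matrix(3, 3, lambda i, j: sp.diff(K, fields[i], bars[j]))
Kinv = Kmet.inv()
DW   = [sp.diff(W, f) + sp.diff(K, f)*W for f in fields]
DWb  = [sp.diff(Wb, b) + sp.diff(K, b)*Wb for b in bars]

V = sp.exp(K)*(sum(Kinv[i, j]*DW[i]*DWb[j]
                   for i in range(3) for j in range(3)) - 3*W*Wb)
print(sp.together(V))  # to be compared, after the parametrization below,
                       # with the scalar potential quoted in the next section
\end{verbatim}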
As usual, the kinetic terms of the complex scalar fields are given by
\begin{equation}
\mathcal{L}_{kin} =K_{I \bar J} \partial_\mu {\Phi^I} \partial^\mu {\bar \Phi}^{\bar{J}},
\end{equation}
keeping in mind that for the kinetic terms we ignore the $\alpha'$ corrections as discussed above.
\section{STZ on Half-flat Manifold}
In this section we examine the scalar potential for the STZ model that arises from compactification on half-flat manifolds. For a brief review of half-flat manifolds see appendix \ref{sec:half-flat}. For now we only consider the zeroth order (in $\alpha'$) potential. The $\alpha'$ corrected potential is examined in the next subsection. The superpotential in this case is given by
\begin{eqnarray}
W&=& - e T +\epsilon Z +\frac{\mu}{2} Z^2.
\end{eqnarray}
Here $\epsilon$ and $\mu$ are flux parameters, while $e$ is a torsion parameter; they have been relabelled with respect to the notation of the previous section. We define the complex scalars in terms of real scalars as follows:\\
\begin{eqnarray}
S&=& s -i e^{\sqrt{2} \sigma} \cr
T&=& t -i e^{\sqrt{\frac{2}{3}}\tau} \cr
Z&=&-z - i e^{\sqrt{\frac{2}{3}}\zeta}.
\end{eqnarray}
We, then, get the following scalar potential:
\begin{eqnarray} \label{pot}
V_{scalar}&=&\left (\frac{e^2}{96} \right )\ e^{-\sqrt{6} \zeta - \sqrt{2} \sigma - \sqrt{\frac{2}{3}} \tau} +\left(\frac{\mu^2}{384}\right) e^{\sqrt{\frac{2}{3}} \zeta - \sqrt{2} \sigma - \sqrt{6} \tau} +\left\{\frac{(\epsilon - z \mu)^2}{96}\right\} e^{-\sqrt{\frac{2}{3}} \zeta - \sqrt{2} \sigma - \sqrt{6} \tau} \cr
&+& \left\{\frac{\left (2 e t + z (2 \epsilon - z \mu)\right)^2}{128} \right \} e^{-\sqrt{6} \zeta- \sqrt{2} \sigma - \sqrt{6} \tau} .
\end{eqnarray}
The kinetic part of the Lagrangian becomes:
\begin{equation}
\mathcal{L}_{kin}=\frac{1}{2} \left\{ (\partial \zeta)^2 + (\partial \sigma)^2+ (\partial \tau)^2 + \frac{3}{2} e^{-2 \sqrt{\frac{2}{3}}\zeta} (\partial z)^2 + \frac{1}{2} e^{-2 \sqrt{2} \sigma} (\partial s)^2 + \frac{3}{2} e^{-2 \sqrt{\frac{2}{3}}\tau} (\partial t)^2 \right\}.
\end{equation}
The $\{\zeta, \sigma, \tau\}$ submanifold is flat and the fields are canonically normalized.
\subsection{Power-law Solutions}
Before we analyze the potential for possible scaling solutions we will briefly discuss the methodology. The details of this can be found in \cite{Collinucci:2004iw,Hartong:2006rt,Blaback:2013sda}. \\
The potential (\ref{pot}) can be expressed as,
\begin{equation}
V (\phi)=\sum_{a=1}^4 \Lambda_a e^{\sum_{i=1}^3 \alpha_{ai} \phi_i},
\end{equation}
where $\vec{\phi}=\{ \zeta, \sigma, \tau\}$ is a vector consisting of the scalar fields and $\vec{\alpha}_a$ is the vector of coefficients of the scalar fields in the argument of the exponential of the $a$-th term in the potential. $\alpha_{ai}$ is the $i$-th component of the vector $\vec{\alpha}_a$. In general, for potentials of this form we denote by $M$ the number of exponential terms and by $R$ the number of scalar fields that appear in the exponential. For the potential (\ref{pot}) we have $M=4$ and $R=3$. \\
The motivation for writing the potential in this form is that it helps us to rewrite the potential in the form (\ref{eq:single_exponential}), which allows us to identify the $P$ dependence of the scale factor on cosmological time, \emph{i.e.}, $a\sim t^P$. However, since $R<M$ the $\vec{\alpha}_a$ vectors entering (\ref{pot}) are not all linearly independent, and a scaling solution is only possible if the $\vec{\alpha}_a$ vectors are mutually \emph{affine}, a notion which we define below.\\
Since $R<M$ we can always choose $R$ of the $\vec{\alpha}_a$ to be linearly independent and express the rest as a linear combination of this set:
\begin{align}
\vec{\alpha}_b = \sum_{a=1}^{R} c_{ba} \vec{\alpha}_a.
\end{align}
A set of $\vec{\alpha}_a$ vectors are called \emph{affine} if the coefficients $c_{ba}$ above can be chosen such that
\begin{align}
\sum_a c_{ba}= 1 \ \ \forall \ \ b=R+1, \dots, M.
\end{align}
Once we have found out the largest set of mutually affine $\vec{\alpha}_a$ vectors we can then calculate what the $P$ values are associated with that set. For the $R<M$ case, one needs to consider $\alpha_{ai}$ as an $M\times R$ matrix and then define an $R\times R$ matrix as follows
\begin{equation}
B_{ij}=\sum_{a=1}^M\alpha_{ai} \alpha_{aj},
\end{equation}
where $a=1,...,M $ and $i=1, ..., R$. Then we get the $P$ value from,
\begin{equation}
P=\sum_i \left(\sum_{ja} |B^{-1}|_{ij} \alpha_{aj} \right).
\end{equation}
For an accelerating scaling solution we need $P>1$ and the potential needs to be stabilized at a positive minimum, and for an ekpyrotic solution we need $P<\nicefrac{1}{3}$ with the potential now minimized at a negative value. \\\\
\noindent $\bullet$ \ \underline{{\bf Analysis:}}\\\\
As explained in the previous section first we will collect the $\vec \alpha$-vectors. From the scalar potential (\ref{pot}) we get the following $\vec \alpha$ vectors:
\begin{eqnarray} \vec \alpha_1&=&\{-\sqrt{6},-\sqrt{2},-\sqrt{\nicefrac{2}{3}}\},\
\vec \alpha_2=\{\sqrt{\nicefrac{2}{3}},-\sqrt{2},-\sqrt{6}\}, \cr
\vec \alpha_3&=&\{-\sqrt{\nicefrac{2}{3}},-\sqrt{2},-\sqrt{6}\}, \
\vec \alpha_4=\{-\sqrt{6},-\sqrt{2},-\sqrt{6}\}.\end{eqnarray}
As outlined above and also in \cite{Blaback:2013sda} we need to find the $P$ values for the largest common set of $\vec{\alpha}$ vectors that are mutually affine.
From these $\vec \alpha$-vectors the only $P$ we get is $\nicefrac{1}{2}$. Thus the scalar potential will not give us any accelerating scaling solution, since there is no combination of vectors that can give us $P>1$. \\
However, there might still be an ekpyrotic solution. After turning off some of the flux parameters, it may be possible to decrease the value of $P$ from $\nicefrac{1}{2}$, and stabilize the rest of the potential at a negative value. If so, we would achieve an ekpyrotic solution. Unfortunately, for the simple STZ on mirror half-flat manifold we did not find a solution that can be stabilized at a negative minimum.\\
The intuitive picture of the process is as follows. We consider the full scalar potential and then calculate $P$, as outlined at the beginning of this section. A straightforward calculation gives $P=\nicefrac{1}{2}$. This can be seen very easily from the set of $\vec \alpha$-vectors, as we can factor out the $-\sqrt{2} \sigma$ term in the exponential, which immediately determines the scaling behaviour. Thus we conclude that the STZ models coming from half-flat manifolds will not give us any accelerating cosmology. Suppose we somehow get rid of one or more terms from the scalar potential by an appropriate choice of fluxes. As all the terms in the scalar potential have the same $\sigma$ behaviour, we can still factor out the $-\sqrt{2} \sigma$ term and get the same $P$ value. Since the $P$ value does not change we do not get any accelerating solution. But there still remains the possibility of getting ekpyrotic solutions, because if we can factor out one or more common scalars along with the $-\sqrt{2}\sigma$ term it may be possible to lower the value of $P$. For this we need to see if there is any set of orthogonal field redefinitions \cite{Collinucci:2004iw,Hartong:2006rt} that keep the kinetic terms invariant but let us factor out a scalar field from the potential. We did not find any such field redefinition that minimizes the potential at a negative value.
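These statements are elementary to verify numerically; the short sketch below (illustrative only) checks that the four $\vec\alpha$-vectors all share the common $\sigma$-coefficient $-\sqrt{2}$, so that $e^{-\sqrt{2}\sigma}$ factors out of the potential, and that they are mutually affine:
\begin{verbatim}
# Affinity check for the alpha-vectors of the half-flat STZ potential.
import numpy as np

s = np.sqrt
alphas = np.array([[-s(6.),   -s(2.), -s(2/3)],
                   [ s(2/3),  -s(2.), -s(6.)],
                   [-s(2/3),  -s(2.), -s(6.)],
                   [-s(6.),   -s(2.), -s(6.)]])

print(np.allclose(alphas[:, 1], -s(2.)))   # common sigma-coefficient -sqrt(2)

basis = alphas[:3].T                       # alpha_1..alpha_3 as columns
c = np.linalg.solve(basis, alphas[3])      # alpha_4 = sum_a c_a alpha_a
print(c, np.isclose(c.sum(), 1.0))         # coefficients sum to 1 -> affine
\end{verbatim}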
\subsection{STZ with $\alpha'$ Correction on Half-flat Manifold}
When we add linear order $\alpha'$ corrections to the K\"ahler potential and the superpotential, there is the possibility that the results given above may change qualitatively. As argued in section (\ref{kahler}), we ignore the corrections to the kinetic terms as we are looking for solutions with small kinetic energy. Using the $\alpha'$ corrections to the superpotential and the K\"ahler potential derived in \cite{Gurrieri2} and quoted above, it is a straightforward exercise to compute the $\alpha'$ correction to the scalar potential for half-flat manifolds. It is given by
\begin{align}
V_{\alpha'} &= \alpha' \left\{ - \frac{e}{96} \mathrm{Im}(J)\, e^{-\sqrt{6} \zeta - \sqrt{2} \sigma - 2 \sqrt{\frac{2}{3}}\tau} - \frac{z\mu}{96}\mathrm{Im}(J)\, e^{-2\sqrt{\frac{2}{3}} \zeta-\sqrt{2}\sigma - \sqrt{6} \tau}\right. \nonumber\\
&\left. +\left[ \frac{et}{48} \mathrm{Re}(J)- \frac{\mu z^2}{96} \mathrm{Re}(J)\right] e^{-\sqrt{6}\zeta-\sqrt{2}\sigma - \sqrt{6}\tau} \right\} ,
\end{align}
where $J\equiv \left\{j_{\bar{P}\bar{R}\bar{S}} C^{\bar{P}}C^{\bar{R}}C^{\bar{S}} + j_{PRS} D^P D^R D^S\right\}$, and we have only included terms that are linear in the flux and torsion parameters in $V_{\alpha'}$.
\subsection* {Accelerating Power-law Solutions}
The $\alpha'$ correction adds three new terms, with two new $\vec \alpha$ vectors, to the potential, but we still get the same scale-factor exponent as before, $P=\nicefrac{1}{2}$. So, in principle, we can only have a generic scaling solution.
\section{STZ on Generalized Half-flat}
Generalized half-flat manifolds were first proposed in \cite{D'Auria:2004tr}. These manifolds generalize half-flat manifolds by incorporating magnetic flux as well as electric flux in their curvature. For our purposes they are defined by the failure of certain forms to be closed, as in the half-flat case. More detailed discussion about these spaces can be found either in the appendix or the references \cite{D'Auria:2004tr,Gurrieri2}.\\
The K\"ahler form for the generalized half-flat compactification is the same. As mentioned before, for the STZ case we find that the superpotential is given by:
\begin{align}
W= (\xi - e T)+ (\epsilon - p T) Z + \frac{1}{2} (\mu - q T) Z^2 - \frac{1}{6} (\rho - r T) Z^3,
\end{align}
where the flux parameters $\xi, \epsilon, \mu, \rho$ and the torsion parameters $e, p, q, r$ are subject to two sets of constraints. However, the first set of these constraints (\ref{constraint1}) is trivially satisfied for the STZ model, while the second set (\ref{constraint2}) reduces to
\begin{equation} \label{cons}
-\xi \ r - \epsilon \ q + \mu \ p + \rho \ e = 0 .
\end{equation}
The full potential for the generalized half-flat manifold is significantly more complicated than in the mirror half-flat case and we do not reproduce it here. Using (\ref{potential}) we can compute this scalar potential; it has nine terms involving the above-mentioned eight flux and torsion parameters.
\subsection{Power-law Solutions}
As in the mirror half-flat case, to illustrate the scaling behaviour we write down the nine $\vec \alpha$-vectors. They are as follows:\\\\
\noindent $\bullet$ \ \underline{{\bf {$\vec\alpha$}-Vectors:}} \footnote{ $\vec \alpha_3,\vec \alpha_5,\vec \alpha_6,\vec \alpha_7$ are the $\vec \alpha$-vectors we have found for the mirror half-flat case.}
\begin{eqnarray}
\vec \alpha_1&=&\{0,-\sqrt{2},-2\sqrt{\nicefrac{2}{3}}\}, \ \ \ \
\vec \alpha_2=\{-\sqrt{\nicefrac{2}{3}},-\sqrt{2},-\sqrt{\nicefrac{2}{3}}\}, \
\vec \alpha_3=\{-\sqrt{\nicefrac{2}{3}},-\sqrt{2},-\sqrt{6}\}, \cr
\vec \alpha_4&=&\{\sqrt{\nicefrac{2}{3}},-\sqrt{2},-\sqrt{\nicefrac{2}{3}}\}, \
\vec \alpha_5=\{\sqrt{\nicefrac{2}{3}},-\sqrt{2},-\sqrt{6}\}, \ \ \ \ \
\vec \alpha_6=\{-\sqrt{6},-\sqrt{2},-\sqrt{\nicefrac{2}{3}}\}, \cr
\vec \alpha_7&=&\{-\sqrt{6},-\sqrt{2},-\sqrt{6}\}, \ \ \
\vec \alpha_8=\{\sqrt{6},-\sqrt{2},-\sqrt{\nicefrac{2}{3}}\}, \ \ \ \ \
\vec \alpha_9=\{\sqrt{6},-\sqrt{2},-\sqrt{6}\}.
\end{eqnarray}
These vectors are mutually affine. The largest $P$ value is once again $\nicefrac{1}{2}$, which rules out the possibility of an accelerating scaling solution. However, we might still obtain ekpyrotic solutions by turning off some of the flux/torsion parameters.
\subsubsection{Ekpyrotic Solutions }
For ekpyrotic solutions we need the scale factor $P<\nicefrac{1}{3}$ and the potential is stabilized at a negative minimum. Since for the full potential $P=\nicefrac{1}{2}$ we need to turn off fluxes to lower the $P$ value.\\
\begin{itemize}
\item \textbf{Solution 1}\\
Let us make the following choice of flux and torsion parameters:
\begin{equation}
\epsilon=0, \ \xi=0, \ r=0, \ e=0,\ \rho=0, \ \mu=0, \ p=0.
\end{equation}
They satisfy the constraint (\ref{cons}).
The full potential reduces to only five terms as follows
\begin{eqnarray}
V&=&-\frac{ q^2 }{384} \ e^{\sqrt{\frac{2}{3}} \ \zeta - \sqrt{2} \sigma - \sqrt{\frac{2}{3}}\ \tau}+ \frac{ q^2 t^2}{384}\ e^{\sqrt{\frac{2}{3}}\ \zeta - \sqrt{2} \sigma - \sqrt{6} \tau} + \frac{q^2 t^2 z^2}{96} \ e^{-\sqrt{\frac{2}{3}} \ \zeta - \sqrt{2} \sigma - \sqrt{6} \tau} \cr
&+& \frac{q^2 z^4}{384} \ e^{-\sqrt{6} \zeta - \sqrt{2} \sigma - \sqrt{\frac{2}{3}} \ \tau} + \frac {q^2 t^2 z^4}{128} \ e^{-\sqrt{6} \zeta - \sqrt{2} \sigma - \sqrt{6} \tau} .
\end{eqnarray}
The fields $t$ and $z$ are stabilized at zero. The potential becomes
\begin{equation}
V=-\frac{1}{384} \ e^{\sqrt{\frac{2}{3}} \ \zeta - \sqrt{2} \sigma - \sqrt{\frac{2}{3}} \ \tau} q^2 =e^{\sqrt{\frac{10}{3}}\ \phi_1} \left \{-\frac{q^2}{384}\right \},
\end{equation}
with $P=\frac{3}{10}<\frac{1}{3}$ and the potential is minimized at a negative value. This represents an {\bf ekpyrotic solution} in the following direction,
\begin{equation}
\phi_1=\sqrt{\frac{1}{5}} \ \zeta - \sqrt{\frac{3}{5}}\ \sigma - \sqrt{\frac{1}{5}}\ \tau.
\end{equation} \\
\item \textbf{Solution 2}\\
We can also get a second solution for the following choice of flux and torsion parameters, which satisfy the constraint (\ref{cons}) as well,
\begin{equation}
\epsilon=0, \ \xi=0, \ r=0, \ e=0,\ \rho=0, \ \mu=0, \ q=0.
\end{equation}
The potential then reduces to
\begin{eqnarray}
V&=&-\frac{ p^2 }{96} \ e^{-\sqrt{\frac{2}{3}} \ \zeta - \sqrt{2} \sigma - \sqrt{\frac{2}{3}}\ \tau}+ \frac{ p^2 t^2}{96}\ e^{-\sqrt{\frac{2}{3}}\ \zeta - \sqrt{2} \sigma - \sqrt{6} \tau} + \frac{p^2 t^2 z^2}{32} \ e^{-\sqrt{\frac{2}{3}} \ \zeta - \sqrt{2} \sigma - \sqrt{6} \tau} \cr
&+& \frac{p^2 z^2}{96} \ e^{-\sqrt{6} \zeta - \sqrt{2} \sigma - \sqrt{\frac{2}{3}} \ \tau} .
\end{eqnarray}
The fields $t$ and $z$ once again are stabilized at zero and the potential becomes
\begin{equation}
V=-\left \{\frac{ p^2 }{96} \right \} \ e^{-\sqrt{\frac{2}{3}} \ \zeta - \sqrt{2} \sigma - \sqrt{\frac{2}{3}}\ \tau}= e^{\sqrt{\frac{10}{3}} \phi_1} \left \{-\frac{ p^2 }{96} \right \}
\end{equation}
Clearly we get $P=\frac{3}{10}<\frac{1}{3}$ and the potential is minimized at a negative value. The scaling direction for this {\bf ekpyrotic solution} is
\begin{equation}
\phi_1=-\sqrt{\frac{1}{5}} \ \zeta - \sqrt{\frac{3}{5}}\ \sigma - \sqrt{\frac{1}{5}}\ \tau.
\end{equation}
\end{itemize}
\noindent In both of the cases above we denote the scalar that has the ekpyrotic scaling behaviour as $\phi_1$. There are two other flat orthogonal directions, $\phi_2$ and $\phi_3$, whose explicit expressions we leave out here. If we look closely, we find that the choices of flux and torsion parameters that give rise to these solutions are $p \neq 0 $ or $q\neq 0$. These non-zero torsion parameters $p$ and $q$ come from $\mathrm{TZ}$ and $\mathrm{TZ}^2$ terms of the superpotential respectively. These are the terms we get due to the generalization from mirror half-flat to generalized half-flat manifolds. \\
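\noindent As a quick consistency check of the two solutions above (assuming, as in the text, that $\zeta$, $\sigma$ and $\tau$ are canonically normalised at this order), one can verify directly that the exponent appearing in the reduced potential of Solution 1 is $\sqrt{\nicefrac{10}{3}}$ times the field $\phi_1$:
\begin{equation*}
\sqrt{\tfrac{10}{3}}\ \phi_1=\sqrt{\tfrac{10}{3}}\left(\sqrt{\tfrac{1}{5}}\ \zeta - \sqrt{\tfrac{3}{5}}\ \sigma - \sqrt{\tfrac{1}{5}}\ \tau\right)=\sqrt{\tfrac{2}{3}}\ \zeta - \sqrt{2}\ \sigma - \sqrt{\tfrac{2}{3}}\ \tau,
\end{equation*}
while the squared coefficients of $\phi_1$ add up to $\frac{1}{5}+\frac{3}{5}+\frac{1}{5}=1$, so $\phi_1$ is a unit direction in field space. The same check goes through for Solution 2 with the sign of the $\zeta$ component reversed.\\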
As explained before, we did not look at the full $\a'$-corrected potential for generalized half-flat manifolds in this paper, due to the fact that it is difficult, even for the simplest STZ models, to diagonalize the kinetic terms when $\a'$ corrections are taken into account.
\section{Discussion}
In this paper, we have investigated cosmological power-law solutions from the $\mr{E}_8\times \mr{E}_8$ heterotic superstring theory. Our goal was to search for power-law solutions -- both accelerating and ekpyrotic -- as alternatives to de Sitter vacua and inflation, respectively. With this aim we explored the STZ models, which are the simplest models that can arise from heterotic strings with NS-NS fluxes compactified on mirror half-flat and generalized half-flat manifolds with only one complex structure modulus and one K\"ahler structure modulus. \\
For the STZ model on mirror half-flat manifolds we found that there are four exponential terms in the full scalar potential. We searched for accelerating solutions, but the only $P$-value we found was $\nicefrac{1}{2}$. Due to the fact that the mirror half-flat manifold has nonzero intrinsic torsion, the theory has geometric fluxes. These fluxes can be turned off, and we tried to reduce the $P$-value by turning off fluxes and searching for ekpyrotic solutions, but we did not find any such solution. We also calculated the $\alpha'$ correction to the potential, which adds three extra terms to the scalar potential. However, including the $\alpha'$ correction did not change the situation. \\
We then considered a much more generic scenario which arises from compactification on generalized half-flat manifolds. We restricted ourselves to manifolds which give rise to STZ models and hence we still have the minimal six real scalar fields. But importantly, for the generalized half-flat manifolds we have more flux and torsion parameters to play with, and consequently the full potential we found was much more complicated. To be precise, the full scalar potential has nine exponential terms with eight flux and torsion parameters at zeroth order in $\a'$. Using the same method as before we searched for accelerating solutions but did not find any. However, by turning off most of the fluxes we were able to get $P<\nicefrac{1}{3}$ and produced two ekpyrotic solutions.
We believe that these are the first ekpyrotic solutions from heterotic string theory. Previously ekpyrotic solutions were found from type IIA supergravity \cite{Blaback:2013sda}. It appears that the realization of ekpyrotic solutions is quite natural in string theory. Due to the complicated nature of the kinetic terms when $\a'$ corrections are included we have not explored how these solutions are changed once these corrections are taken into account.\\
Before we close this paper let us make a few brief comments about possible extensions. We have not explored the full $\a'$-corrected potential for the generalized half-flat manifold due to its extremely complicated nature; it would be a natural next step to explore the full potential to see if we can obtain accelerating solutions and more ekpyrotic solutions. Another obvious extension of this work would be to consider more realistic half-flat and generalized half-flat manifolds which have more than one modulus in the complex structure sector, the K\"ahler structure sector, or both. The structure of the potential would be much more complicated and it might give rise to interesting dynamics that may include different cosmological phases. Another ingredient that one may consider is instanton corrections to the potential coming from world-sheet instantons. Finally, since we want to make a connection between cosmology and particle physics, as should be natural in the heterotic setting, one should consider the dynamics of the $\mr{E}_6$ matter fields as well.
\section*{Acknowledgement}
We would like to thank J. Bl{\aa}b\"ack and S. Nampuri for reading the draft and giving us some useful feedback. We also thank Ilies Messamah for discussions. The work of SSH is supported by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation. SSH also would like to thank Perimeter Institute for their hospitality where part of the work was done. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation.
\section{Introduction}
The Khovanov-Kuperberg algebras $K^\epsilon$ were introduced in 2012 by Mackaay -- Pan -- Tubbenhauer \cite{2012arXiv1206.2118M} and the author \cite{LHR1} in order to give an algebraic definition of $\ensuremath{\mathfrak{sl}}_3$-homology for tangles.
In \cite{2012arXiv1206.2118M}, the split Grothendieck group of the category $K^\epsilon$-$\mathsf{pmod}$ is computed. This shows, as in the $\ensuremath{\mathfrak{sl}}_2$ case, that we have a categorification of $\hom_{U_q(\ensuremath{\mathfrak{sl}}_3)}(\CC, V^{\otimes \epsilon})$ (here $\epsilon$ is an admissible sequence of signs).\marginpar{order: $\CC$ before or after?}
The proof of this result is far from easy\footnote{In the $\ensuremath{\mathfrak{sl}}_2$ case this is quite direct thanks to a Schur lemma argument.}; furthermore, the non-elliptic webs, the natural candidates to correspond
to the indecomposable modules, fail to play this role, and the story becomes dramatically more complicated than in the $\ensuremath{\mathfrak{sl}}_2$ case.
This paper has a down-to-earth approach and gives a fully combinatorial characterisation of indecomposable web-modules.
\subsubsection*{Acknowledgments} The author wishes to thank Christian Blanchet for suggesting the subject.
\label{sec:acknoledgment}
\section{The Khovanov--Kuperberg algebras}
\label{sec:khov-kuperb-algebr}
\subsection{The 2-category of web-tangles}
\label{sec:webs-foams}
\subsubsection{Webs}
In the following $\epsilon=(\epsilon^1,\dots,\epsilon^n)$ (or $\epsilon_0$, $\epsilon_1$ etc.) will always be a finite sequence of signs, its \emph{length} $n$ will be denoted by $l(\epsilon)$, such an $\epsilon$ will be \emph{admissible} if $\sum_{i=1}^{l(\epsilon)}\epsilon^i$ is divisible by 3.
\label{sec:webs}
\label{sec:two-category-web}
\begin{dfn}[Kuperberg, \cite{MR1403861}]\label{dfn:closed-web}
A \emph{closed web} is a plane trivalent oriented finite graph (with possibly some vertexless loops and multiple edges) such that every vertex is either a sink or a source.
\end{dfn}
\begin{figure}[h]
\centering
\begin{tikzpicture}[yscale= 0.8, xscale= 0.8]
\input{./sw_ex_closed_web}
\end{tikzpicture}
\caption{Example of a closed web.}
\label{fig:example-closed-web}
\end{figure}
\begin{req}\label{req:basic-on-web}
The orientation condition is equivalent to saying that the graph is bipartite (by sinks and sources).
\begin{prop}\label{prop:closed2elliptic}
A closed web contains at least a square, a digon or a vertexless circle.
\end{prop}
\begin{proof}
It is enough to consider $w$ a connected web. A connected web is always 2-connected (because of the flow), hence it makes sense to use the Euler characteristic. Suppose that $w$ is not a circle. We have:
\[\#F - \#E + \#V = 2, \] but we have $3\#V=2\#E$. And if we denote by $F_i$ the set of faces with $i$ sides we have:
\[\sum_{i>0}i\,\#F_i = 2\#E.\]
All together, this gives:
\[
\sum_{i>0}\left(1 - \frac{i}{6}\right)\#F_i =2,
\]
and this proves that some faces have strictly less than 6 sides.
\end{proof}
\end{req}
\begin{prop}[Kuperberg\cite{MR1403861}] \label{prop:Kup}
There exists one and only one map $\kup{\cdot}$ from closed webs to Laurent polynomials in $q$ which is invariant by isotopy, multiplicative with respect to disjoint union and which satisfies the following local relations~:
\begin{align*}
\kup{\websquare[0.4]} &= \kup{\webtwovert[0.4]} + \kup{\webtwohori[0.4]}, \\
\kup{\webbigon[0.4]\,} &= [2] \cdot \kup{\webvert[0.4]\,},\\
\kup{\webcircle[0.4]} &= \kup{\webcirclereverse[0.4]} = [3],
\end{align*}
where $[n]\eqdef\frac{q^n-q^{-n}}{q-q^{-1}}.$ We call this polynomial the \emph{Kuperberg bracket}. It is easy to check that the Kuperberg bracket of a web is symmetric in $q$ and $q^{-1}$.
\end{prop}
\begin{proof}
Uniqueness comes from remark \ref{req:basic-on-web}. The existence follows from the representation theoretic point of view developed in \cite{MR1403861}. Note that a non-quantified version of this result is in \cite{MR1172374}.
\end{proof}
\begin{dfn}
The \emph{degree} of a symmetric Laurent polynomial $P(q)=\sum_{i\in \ZZ}a_iq^{i}$ is $\max_{i\in \ZZ}\{i \textrm{ such that }a_i\neq 0\}$.
\end{dfn}
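For instance, for the theta web (the closed web consisting of two vertices joined by three edges), the digon relation followed by the circle relation shows that its Kuperberg bracket is:
\[
[2]\cdot [3] = (q+q^{-1})(q^{2}+1+q^{-2}) = q^{3}+2q+2q^{-1}+q^{-3},
\]
a symmetric Laurent polynomial of degree $3$.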
\begin{dfn}\label{dfn:webtangle}
A \emph{$(\epsilon_0,\epsilon_1)$-web-tangle} $w$ is an intersection of a closed web $w'$ with $[0,1]\times [0,1]$ such that :
\begin{itemize}
\item there exists $\eta_0 \in ]0,1]$ such that: \\ \(w\cap [0,1] \times [0,\eta_0] = \{\frac{1}{2l(\epsilon_0)}, \frac{1}{2l(\epsilon_0)} + \frac{1}{l(\epsilon_0)}, \frac{1}{2l(\epsilon_0)} + \frac{2}{l(\epsilon_0)}, \dots, \frac{1}{2l(\epsilon_0)} + \frac{l(\epsilon_0)-1}{l(\epsilon_0)} \}\times [0,\eta_0],\)
\item there exists $\eta_1 \in [0,1[$ such that: \\ \(w\cap [0,1] \times [\eta_1,1] = \{\frac{1}{2l(\epsilon_1)}, \frac{1}{2l(\epsilon_1)} + \frac{1}{l(\epsilon_1)}, \frac{1}{2l(\epsilon_1)} + \frac{2}{l(\epsilon_1)}, \dots, \frac{1}{2l(\epsilon_1)} + \frac{l(\epsilon_1)-1}{l(\epsilon_1)} \}\times [\eta_1,1],\)
\item the orientations of the edges of $w$, match $-\epsilon_0$ and $+\epsilon_1$ (see figure \ref{fig:exampl_webtangle} for conventions).
\end{itemize}
When $\epsilon_1$ is the empty sequence, then we'll speak of \emph{$\epsilon_0$-webs}. And if $w$ is an $\epsilon$-web we will say that $\epsilon$ is the \emph{boundary} of $w$.
\end{dfn}
If $w_1$ is a $(\epsilon_0,\epsilon_1)$-web-tangle and $w_2$ is a $(\epsilon_1, \epsilon_2)$-web-tangle we define $w_1w_2$ to be the $(\epsilon_0,\epsilon_2)$-web-tangle obtained by gluing $w_1$ and $w_2$ along $\epsilon_1$ and resizing. Note that this can be thought of as a composition if we think of a $(\epsilon,\epsilon')$-web-tangle as a morphism from $\epsilon'$ to $\epsilon$ (i.~e.~ the web-tangles should be read as morphisms from top to bottom). The \emph{mirror image} of a $(\epsilon_0,\epsilon_1)$-web-tangle $w$ is the mirror image of $w$ with respect to $\RR\times \{\frac12\}$ with all orientations reversed. This is a $(\epsilon_1,\epsilon_0)$-web-tangle and we denote it by $\bar{w}$. If $w$ is a $(\epsilon,\epsilon)$-web-tangle the \emph{closure} of $w$ is the closed web obtained by connecting the top and the bottom by simple arcs (this is like a braid closure). We denote it by $\mathrm{tr}(w)$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[yscale= 0.5, xscale= 0.5]
\input{./sw_ex_webtangle}
\end{tikzpicture}
\caption{Two examples of $(\epsilon_0,\epsilon_1)$-web-tangles with $\epsilon_0 = (-,-,-)$ and $\epsilon_1=(-,-,+,+)$ and the mirror image of the second one.}
\label{fig:exampl_webtangle}
\end{figure}
\begin{dfn}
A web-tangle with no circle, no digon and no square is said to be \emph{non-elliptic}. The non-elliptic web-tangles are the minimal ones in the sense that they cannot be reduced by the relations of proposition \ref{prop:Kup}.
\end{dfn}
\begin{prop}[Kuperberg, \cite{MR1403861}] \label{prop:NEFinite}
For any given couple $(\epsilon_0,\epsilon_1)$ of sequences of signs there are finitely many non-elliptic $(\epsilon_0,\epsilon_1)$-web-tangles.
\end{prop}
\begin{req}
From the combinatorial flow modulo 3, we obtain that there exist $(\epsilon_0,\epsilon_1)$-web-tangles if and only if the sequence $-\epsilon_0$ concatenated with $\epsilon_1$ is admissible.
\end{req}
\subsubsection{Foams} All material here comes from \cite{MR2100691}.
\label{sec:foams}
\begin{dfn}
A \emph{pre-foam} is a smooth oriented compact surface $\Sigma$ (its connected components are called \emph{facets}) together with the following data~:
\begin{itemize}
\item A partition of the connected components of the boundary into cyclically ordered 3-sets and for each 3-set $(C_1,C_2,C_3)$, three orientation preserving diffeomorphisms $\phi_1:C_2\to C_3$, $\phi_2:C_3\to C_1$ and $\phi_3:C_1\to C_2$ such that $\phi_3 \circ \phi_2 \circ \phi_1 = \mathrm{id}_{C_2}$.
\item A function from the set of facets to the set of non-negative integers (this gives the number of \emph{dots} on each facet).
\end{itemize}
The \emph{CW-complex associated with a pre-foam} is the 2-dimensional CW-complex $\Sigma$ quotiented by the diffeomorphisms so that the three circles of one 3-set are identified and become just one called a \emph{singular circle}.
The \emph{degree} of a pre-foam $f$ is equal to $-2\chi(\Sigma')$ where $\chi$ is the Euler characteristic, $\Sigma'$ is the CW-complex associated with $f$ with the dots punctured out (i.~e.~ a dot increases the degree by 2).
\end{dfn}
\begin{req}
The CW-complex has two local models depending on whether we are on a singular circle or not. If a point $x$ is not on a singular circle, then it has a neighborhood diffeomorphic to a 2-dimensional disk, else it has a neighborhood diffeomorphic to a Y shape times an interval (see figure \ref{fig:yshape}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\input{./sw_yshape}
\end{tikzpicture}
\caption{Singularities of a pre-foam}
\label{fig:yshape}
\end{figure}
\end{req}
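As an illustration of the degree formula, consider the pre-foam made of three disks whose boundary circles form a single 3-set (the theta pre-foam which will appear below). Its associated CW-complex consists of one singular circle with three 2-cells attached, hence has Euler characteristic $3$, so that
\[
-2\,\chi(\Sigma') = -2\,(3-\#\textrm{dots}) = -6 + 2\,\#\textrm{dots}.
\]
In particular the undotted theta pre-foam has degree $-6$, and a theta pre-foam carrying three dots in total has degree $0$.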
\begin{dfn}
A \emph{closed foam} is the image of an embedding of the CW-complex associated with a pre-foam such that the cyclic orders of the pre-foam are compatible with the left-hand rule in $\RR^3$ with respect to the orientation of the singular circles\footnote{We mean here that if, next to a singular circle, with the forefinger of the left hand we go from face 1 to face 2 to face 3 the thumb points to indicate the orientation of the singular circle (induced by orientations of facets). This is not quite canonical, physicists use more the right-hand rule, however this is the convention used in \cite{MR2100691}.}. The \emph{degree} of a closed foam is the degree of the underlying pre-foam.
\end{dfn}
\begin{dfn}\label{dfn:wwfoam}
If $w_b$ and $w_t$ are $(\epsilon_0,\epsilon_1)$-web-tangles, a \emph{$(w_b,w_t)$-foam} $f$ is the intersection of a foam $f'$ with $\RR\times [0,1]\times[0,1]$ such that
\begin{itemize}
\item there exists $\eta_0 \in ]0,1]$ such that $f\cap \RR \times [0,\eta_0]\times [0,1] = \{\frac{1}{2l(\epsilon_0)}, \frac{1}{2l(\epsilon_0)} + \frac{1}{l(\epsilon_0)}, \frac{1}{2l(\epsilon_0)} + \frac{2}{l(\epsilon_0)}, \dots, \frac{1}{2l(\epsilon_0)} + \frac{l(\epsilon_0)-1}{l(\epsilon_0)} \}\times [0,\eta_0]\times [0,1]$,
\item there exists $\eta_1 \in [0,1[$ such that $f\cap \RR \times [\eta_1,1]\times [0,1] = \{\frac{1}{2l(\epsilon_1)}, \frac{1}{2l(\epsilon_1)} + \frac{1}{l(\epsilon_1)}, \frac{1}{2l(\epsilon_1)} + \frac{2}{l(\epsilon_1)}, \dots, \frac{1}{2l(\epsilon_1)} + \frac{l(\epsilon_1)-1}{l(\epsilon_1)}\}\times [\eta_1,1]\times [0,1]$,
\item there exists $\eta_b \in ]0,1]$ such that $f\cap \RR \times [0,1 ]\times [0, \eta_b] = w_b \times [0, \eta_b]$,
\item there exists $\eta_t \in [0,1[$ such that $f\cap \RR \times [0,1 ]\times [\eta_t, 1] = w_t \times [\eta_t,1]$,
\end{itemize}
with compatibility of orientations of the facets of $f$ with the orientation of $w_t$ and the reversed orientation of $w_b$.
The \emph{degree} of a $(w_b,w_t)$-foam $f$ is equal to $\chi(w_b)+\chi(w_t)-2\chi(\Sigma)$ where $\Sigma$ is the underlying CW-complex associated with $f$ with the dots punctured out.
\end{dfn}
If $f_b$ is a $(w_b,w_m)$-foam and $f_t$ is a $(w_m, w_t)$-foam we define $f_bf_t$ to be the $(w_b,w_t)$-foam obtained by gluing $f_b$ and $f_t$ along $w_m$ and resizing. This operation may be thought of as a composition if we think of a $(w_1,w_2)$-foam as a morphism from $w_2$ to $w_1$, i.~e.~ from the top to the bottom. This composition map is degree preserving. As for webs, we define the \emph{mirror image} of a $(w_1,w_2)$-foam $f$ to be the $(w_2,w_1)$-foam which is the mirror image of $f$ with respect to $\RR\times\RR\times \{\frac12\}$ with all orientations reversed. We denote it by $\bar{f}$.
\begin{dfn}
If $\epsilon_0=\epsilon_1=\emptyset$ and $w$ is a closed web, then a $(\emptyset,w)$-foam is simply called \emph{foam} or \emph{$w$-foam} when one wants to focus on the boundary of the foam.
\end{dfn}
All these data together lead to the definition of a monoidal 2-category.
\begin{dfn}
The 2-category $\mathcal{WT}$ is the monoidal\footnote{Here we choose a rather strict point of view and hence the monoidal structure is strict (we consider everything up to isotopy), but it is possible to define the notion in a non-strict context, and the same data gives us a monoidal bicategory.} 2-category given by the following data~:
\begin{itemize}
\item The objects are finite sequences of signs,
\item The 1-morphisms from $\epsilon_1$ to $\epsilon_0$ are isotopy classes (with fixed boundary) of $(\epsilon_0,\epsilon_1)$-web-tangles,
\item The 2-morphisms from $\widehat{w_t}$ to $\widehat{w_b}$ are $\mathbb Q$-linear combinations of isotopy classes of $(w_b,w_t)$-foams, where\ \ $\widehat{\cdot}$\ \ stands for the ``isotopy class of''. The 2-morphisms come with a grading, and the composition respects the degree.
\end{itemize}
The monoidal structure is given by concatenation of sequences at the $0$-level, and disjoint union of vertical strands or disks (with corners) at the $1$ and $2$ levels.
\end{dfn}
\subsection{Khovanov's TQFT for web-tangles}
\label{sec:khovanov-tqft-web}
In \cite{MR2100691}, Khovanov defines a numerical invariant for pre-foams and this allows him to construct a TQFT $\mathcal{F}$ from the category $\hom_{\mathcal{WT}}(\emptyset,\emptyset)$ to the category of graded $\mathbb Q$-modules (via a universal construction à la BHMV
\cite{MR1362791}). This TQFT is graded (this comes from the fact that pre-foams with non-zero degree are evaluated to zero), and satisfies the following local relations (brackets indicate grading shifts)~:
\begin{align*}
\mathcal{F}\left(\websquare[0.4]\,\right) &=
\mathcal{F}\left({\webtwovert[0.4]}\,\right) \oplus
\mathcal{F}\left({\webtwohori[0.4]}\,\right), \\
\mathcal{F}\left({\webbigon[0.4]}\,\right) &=
\mathcal{F}\left({\webvert[0.4]}\,\right)\{-1\}\oplus \mathcal{F}\left({\webvert[0.4]}\right)\{1\},\\
\mathcal{F}\left({\webcircle[0.4]\,}\right) &=
\mathcal{F}\left({\webcirclereverse[0.4]}\,\right) = \mathbb Q\{-2\} \oplus \mathbb Q \oplus\mathbb Q\{2\}.
\end{align*}
These relations show that $\mathcal{F}$ is a categorified counterpart of the Kuperberg bracket. We sketch the construction below.
\begin{dfn}
We denote by $\mathcal{A}$ the Frobenius algebra $\ZZ[X]/(X^3)$ with trace $\tau$ given by:
\[\tau(X^2)=-1, \quad \tau(X)=0, \quad \tau(1)=0.\]
We equip $\mathcal{A}$ with a graduation by setting $\deg(1)=-2$, $\deg(X)=0$ and $\deg(X^2)=2$. With these settings, the multiplication has degree 2 and the trace has degree -2. The co-multiplication is determined by the multiplication and the trace and we have :
\begin{align*}
&\Delta(1) = -1\otimes X^2 - X\otimes X - X^2\otimes 1 \\
&\Delta(X) = -X\otimes X^2 - X^2\otimes X \\
&\Delta(X^2) = -X^2\otimes X^2
\end{align*}
\end{dfn}
This Frobenius algebra gives us a 1+1 TQFT (this is well-known, see~\cite{MR2037238} for details), we denote it by $\mathcal{F}$~: the circle is sent to $\mathcal{A}$, a cup to the unity, a cap to the trace, and a pair of pants either to multiplication or co-multiplication. A dot on a surface will represent multiplication by $X$ so that $\mathcal{F}$ extends to the category of oriented dotted (1+1)-cobordisms.
We then have a surgery formula given by figure~\ref{fig:surg}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=0.5]
\input{./sw_surg}
\end{tikzpicture}
\caption{The surgery formula for the TQFT $\mathcal{F}$.}
\label{fig:surg}
\end{figure}
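For instance, a sphere with $k$ dots is evaluated by composing a cup, $k$ dots and a cap, hence is sent to $\tau(X^k)$:
\[
\mathcal{F}(\textrm{undotted sphere})=\mathcal{F}(\textrm{once-dotted sphere})=0, \qquad \mathcal{F}(\textrm{twice-dotted sphere})=\tau(X^2)=-1.
\]
Note also that the graded dimension of $\mathcal{A}$ is $q^{-2}+1+q^{2}=[3]$, in agreement with the value assigned to the circle in the local relations above.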
This TQFT gives of course a numerical invariant for closed dotted oriented surfaces. If one defines numerical values for the differently dotted theta pre-foams (the theta pre-foam consists of 3 disks with trivial diffeomorphisms between their boundaries, see figure \ref{fig:thetapre}), then by applying the surgery formula one is able to compute a numerical value for pre-foams.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\input{./sw_thetaprefoam}
\end{tikzpicture}
\caption{The dotless theta pre-foam.}
\label{fig:thetapre}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale = 0.7]
\input{./sw_thetaeval}
\end{tikzpicture}
\caption{The evaluations of dotted theta foams; the evaluation is unchanged when one cyclically permutes the faces. All the configurations which cannot be obtained from these by cyclic permutation are sent to $0$ by $\mathcal{F}$.}
\label{fig:thetaeval}
\end{figure}
In \cite{MR2100691}, Khovanov shows that setting the evaluations of the dotted theta foams as shown on figure \ref{fig:thetaeval}, leads to a well defined numerical invariant $\mathcal{F}$ for pre-foams. This numerical invariant gives the opportunity to build a (closed web, $(\cdot,\cdot)$-foams)-TQFT~: for a given web $w$, consider the $\mathbb Q$-module generated by all the $(w,\emptyset)$-foams, and mod this space out by the kernel of the bilinear map $(f,g)\mapsto \mathcal{F}(\bar{f}g)$. Note that $\bar{f}g$ is a closed foam. Khovanov showed that the obtained graded vector spaces are finite dimensional with graded dimensions given by the Kuperberg formulae, and he showed that we have the local relations described on figure~\ref{fig:localrel}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.25]
\input{./sw_bubblesrel}
\end{tikzpicture}
\vspace{0.7cm}
\begin{tikzpicture}[scale=0.24]
\input{./sw_bamboorel}
\end{tikzpicture}
\vspace{0.7cm}
\begin{tikzpicture}[scale=0.4]
\input{./sw_digonrel}
\end{tikzpicture}
\vspace{0.7cm}
\begin{tikzpicture}[scale=0.32]
\input{./sw_squarerel}
\end{tikzpicture}
\vspace{0.7cm}
\begin{tikzpicture}[scale=0.5]
\input{./sw_dotsmigrel}
\end{tikzpicture}
\caption{Local relations for 2-morphisms in $\mathbb{WT}$. The first 3 lines are called the bubble relations, the next 2 are called the bamboo relations, the one after is the digon relation, then we have the square relation, and the 3 last ones are the dots migration relations.}
\label{fig:localrel}
\end{figure}
This method allows us to define a new graded 2-category $\mathbb{WT}$. Its objects and its 1-morphisms are the ones of the 2-category $\mathcal{WT}$ while its 2-morphisms-spaces are the ones of $\mathcal{WT}$ mod out like in the last paragraph. One should notice that a $(w_b,w_t)$-foam can always be deformed into a $(\mathrm{tr}(\bar{w_b}w_t),\emptyset)$-foam and vice-versa. Khovanov's results restated in this language give that if $w_b$ and $w_t$ are $(\epsilon_0,\epsilon_1)$-web-tangles, the graded dimension of $\hom_{\mathbb{WT}}(w_t,w_b)$ is given by $\kup{\mathrm{tr}(\bar{w_b}w_t)}\cdot q^{l(\epsilon_0)+l(\epsilon_1)}$. Note that when $\epsilon_1=\emptyset$, there is no need to take the closure, because $w_b\bar{w_t}$ is already a closed web. The shift by $l(\epsilon_0)+l(\epsilon_1)$ comes from the fact that $\chi(\mathrm{tr}(\bar{w_b}w_t)) = \chi(w_t)+\chi(w_b) - (l(\epsilon_0)+l(\epsilon_1))$.
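As a minimal example of this dimension count, take $\epsilon_0=\epsilon_1$ consisting of a single sign and let $w_b=w_t$ be a single vertical strand. Then $\mathrm{tr}(\bar{w_b}w_t)$ is a circle, so
\[
\mathop{\mathrm{dim}}\nolimits_q \hom_{\mathbb{WT}}(w_t,w_b) = [3]\cdot q^{2} = 1+q^{2}+q^{4},
\]
which matches the degrees $0$, $2$ and $4$ of the foam $w_b\times[0,1]$ decorated with zero, one or two dots.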
\begin{prop}\label{prop:relFR}
We consider the set \FR{} of local relations which consists of:
\begin{itemize}
\item the surgery relation,
\item the evaluations of the dotted spheres and of the dotted theta-foams,
\item the square relations and the digon relations (see figure~\ref{fig:localrel}).
\end{itemize}
We call them the \emph{foam relations} or relations \FR. Then, for any closed web $w$, $\mathcal{F}(w)$ is isomorphic to $\mathcal{G}(w)$ modded out by \FR.
\end{prop}
\subsection{The Kuperberg-Khovanov algebra $K^\epsilon$}
\label{sec:algebra-k_s}
We want to extend the Khovanov TQFT to the 0-dimensional objects i.~e.~ to build a 2-functor from the 2-category $\mathcal{WT}$ to the 2-category of algebras. We follow the methodology of \cite{MR1928174} and we start by defining the image of the $0$-objects~: they are the algebras $K^\epsilon$. This can be compared with \cite{2012arXiv1206.2118M}.
\begin{dfn}
Let $\epsilon$ be an admissible finite sequence of signs. We define $\tilde{K}^\epsilon$ to be the full sub-category of $\hom_{\mathbb{WT}}(\emptyset,\epsilon)$ whose objects are non-elliptic $\epsilon$-webs. This is a graded $\mathbb Q$-algebroid. We recall that a $k$-algebroid is a $k$-linear category. This can be seen as an algebra by setting~:
\[K^\epsilon = \bigoplus_{(w_b,w_t)\in (\mathrm{ob}(\tilde{K}^\epsilon))^2} \hom_{\mathbb{WT}}(w_b,w_t)\]
and the multiplication on $K^\epsilon$ is given by the composition of morphisms in $\tilde{K}^\epsilon$ whenever it is possible and by zero when it is not. We will denote ${}_{w_1}K^{\epsilon}_{w_2}\eqdef \hom_{\mathbb{WT}}(w_2,w_1)$. This is a unitary algebra because of proposition \ref{prop:NEFinite}. The unit element is $\sum_{w\in \mathrm{ob}(\tilde{K}^\epsilon)} 1_w$. Suppose $\epsilon$ is fixed; for $w$ a non-elliptic $\epsilon$-web, we define $P_w$ to be the left $K^\epsilon$-module~:
\[
P_w=\bigoplus_{w'\in\mathrm{ob}(\tilde{K}^\epsilon)}\hom_{\mathbb{WT}}(w,w') = \bigoplus_{w'\in\mathrm{ob}(\tilde{K}^\epsilon)} {}_{w'}K^{\epsilon}_{w}.
\]The structure of module is given by composition on the left.
\end{dfn}
For a given $\epsilon$, the modules $P_w$ are all projective and we have the following decomposition in the category of left $K^\epsilon$-modules~:
\[
K^\epsilon=\bigoplus_{w\in \mathrm{ob}(\tilde{K}^\epsilon)} P_w.
\]
\begin{prop}
Let $\epsilon$ be an admissible sequence of signs, and $w_1$ and $w_2$ two non-elliptic $\epsilon$-webs, then the graded dimension of $\hom_{K^\epsilon}(P_{w_1}, P_{w_2})$ is given by $\kup{(\bar{w_1}w_2)}\cdot q^{l(\epsilon)}$.
\end{prop}
\begin{proof}
An element of $\hom_{K^\epsilon}(P_{w_1}, P_{w_2})$ is completely determined by the image of $1_{w_1}$, and this element can be sent to any element of $\hom_{\mathbb{WT}}(w_2, w_1)$, and $\mathop{\mathrm{dim}}\nolimits_q(\hom_{\mathbb{WT}}(w_2, w_1))=\kup{(\bar{w_1}w_2)}\cdot q^{l(\epsilon)}$.
\end{proof}
In what follows we will prove that some modules are indecomposable. They all have finite dimension over $\mathbb Q$, hence it is enough to show that their rings of endomorphisms contain no non-trivial idempotents. An idempotent must have degree zero, so we have the following lemma~:
\begin{lem}\label{lem:monic2indec}
If $w$ is a non-elliptic $\epsilon$-web such that $\kup{\bar{w}w}$ is monic of degree $l(\epsilon)$, then the graded $K^\epsilon$-module $P_w$ is indecomposable.
\end{lem}
\begin{proof}
This follows from the previous discussion~: if $\hom_{K^\epsilon}(P_w,P_w)$ contained a non-trivial idempotent, there would be at least two linearly independent elements of degree 0; but $\mathop{\mathrm{dim}}\nolimits\left(\hom_{K^\epsilon}(P_{w}, P_{w})_0\right) = a_{-l(\epsilon)}$ if we write $\kup{\bar{w}w}=\sum_{i\in \ZZ}a_iq^i$, and as $\kup{\bar{w}w}$ is symmetric (in $q$ and $q^{-1}$), monic and of degree $l(\epsilon)$, the coefficient $a_{-l(\epsilon)}$ is equal to 1, a contradiction.
\end{proof}
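For instance, take for $\epsilon$ a constant sequence of three identical signs and let $w$ be the $\epsilon$-web consisting of a single trivalent vertex whose three edges reach the boundary. Then $\bar{w}w$ is the theta web, so
\[
\kup{\bar{w}w}=[2]\cdot[3]=q^{3}+2q+2q^{-1}+q^{-3},
\]
which is monic of degree $3=l(\epsilon)$; lemma~\ref{lem:monic2indec} therefore shows that $P_w$ is indecomposable.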
We have a similar lemma to prove that two modules are not isomorphic.
\begin{lem}\label{lem:0monic2noiso}
If $w_1$ and $w_2$ are two non-elliptic $\epsilon$-webs such that $\kup{\bar{w_1}w_2}$ has degree strictly smaller than $l(\epsilon)$, then the graded $K^\epsilon$-modules $P_{w_1}$ and $P_{w_2}$ are not isomorphic.
\end{lem}
\begin{proof}
If they were isomorphic, there would exist two morphisms $f$ and $g$ such that $f\circ g=1_{P_{w_1}}$, and therefore $f\circ g$ would have degree zero. The hypothesis implies that $f$ and $g$ (because $\kup{\bar{w_1}w_2} = \kup{\bar{w_2}w_1}$) have positive degree, so that the degree of their composition is positive as well.
\end{proof}
\begin{req}
The way we constructed the algebra $K^\epsilon$ is very similar to the construction of $H^n$ in \cite{MR1928174}. Using the same method we can finish the construction of a $0+1+1$ TQFT~:
\begin{itemize}
\item If $\epsilon$ is an admissible sequence of signs, then $\mathcal{F}(\epsilon) = K^\epsilon$.
\item If $w$ is a $(\epsilon_1,\epsilon_2)$-web-tangle with $\epsilon_1$ and $\epsilon_2$ admissible, then
\[\mathcal{F}(w) = \bigoplus_{\substack{u\in \mathrm{ob}(\tilde{K}^{\epsilon_1}) \\ v\in \mathrm{ob}(\tilde{K}^{\epsilon_2})}} \mathcal{F}(\bar{u}wv),
\] and it has a structure of graded $(K^{\epsilon_1},K^{\epsilon_2})$-bimodule. Note that if $w$ is a non-elliptic $\epsilon$-web, then $\mathcal{F}(w)=P_w$.
\item If $w$ and $w'$ are two $(\epsilon_1,\epsilon_2)$-web-tangles, and $f$ is a $(w,w')$-foam, then we set
\[
\mathcal{F}(f) = \sum_{\substack{u\in \mathrm{ob}(\tilde{K}^{\epsilon_1}) \\ v\in \mathrm{ob}(\tilde{K}^{\epsilon_2})}} \mathcal{F}({}_{\bar{u}}f_{v}),
\] where ${}_{\bar{u}}f_{v}$ is the foam $f$ with $\bar{u}\times[0,1]$ and $v\times [0,1]$ glued on its sides. This is a map of graded $(K^{\epsilon_1},K^{\epsilon_2})$-bimodules.
\end{itemize}
We encourage the reader to have a look at this beautiful construction for the $\ensuremath{\mathfrak{sl}}_2$ case in \cite{MR1928174}.
\end{req}
In the $\ensuremath{\mathfrak{sl}}_2$ case the classification of projective indecomposable modules is fairly easy, and an analogous result would state, in our context, that the projective indecomposable modules are exactly the modules associated with non-elliptic webs. However we have:
\begin{prop}[\cite{MR2457839}, see\cite{LHRThese} for details]\label{prop:Pwdec}
Let $\epsilon$ be the sequence of signs: $(+,-,-,+,+,-,-,+,+,-,-,+)$, and let $w$ and $w_0$ be the two $\epsilon$-webs given by figure~\ref{fig:thewebw}. Then the web-module $P_w$ is decomposable and admits $P_{w_0}$ as a direct factor.
\end{prop}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale= 0.7]
\input{./sw_thewebw}
\end{tikzpicture}
\caption{The $\epsilon$-webs $w$ (on the left) and $w_0$ (on the right); to fit in the formal context of the 2-category one should stretch the outside edges to a horizontal line below the whole picture, we draw it this way to enjoy more symmetry. To simplify, we did not draw the arrows.}
\label{fig:thewebw}
\end{figure}
\input{rgchapter4}
\bibliographystyle{alpha}
\chapter[Characterisation of indecomposable web-modules]{A characterisation of indecomposable web-modules}
\label{cha:red-graphs}
In \cite{LHR1}, a sufficient condition for a web-module to be indecomposable is given. The whole argument relies on the computation of the dimension of the space of degree 0 endomorphisms of web-modules: in fact, when, for a web $w$, this space has dimension 1, the web-module $P_w$ is indecomposable. Translated in terms of the Kuperberg bracket, it says (see as well lemma~\ref{lem:monic2indec}):
\begin{quotation}
\noindent\emph{If $w$ is an $\epsilon$-web such that $\kup{\overline{w}w}$ is monic of degree $l(\epsilon)$, then the $K^\epsilon$-module $P_w$ is indecomposable.}
\end{quotation}
The aim of this chapter is to prove the converse. This will give the following characterisation of indecomposable web-modules:
\begin{thm}
Let $w$ be an $\epsilon$-web. The $K^\epsilon$-module $P_w$ is indecomposable if and only if $\kup{\overline{w}w}$ is monic of degree $l(\epsilon)$. Furthermore if the $K^\epsilon$-module $P_w$ is decomposable it contains another web-module as a direct factor.
\end{thm}
The proof relies on some combinatorial tools called red graphs. In the first part we give an explicit construction (in terms of foams) of a non-trivial idempotent associated with a red graph. In the second part we show that when an $\epsilon$-web $w$ is such that $\kup{\overline{w}w}$ is not monic of degree $l(\epsilon)$, then it contains a red graph.
\section{Red graphs}
\label{sec:redgraph}
\subsection{Definitions}
\label{sec:defRG}
The red graphs are sub-graphs of the dual graphs of webs; we recall here the definition of a dual graph. For an introduction to graph theory we refer to~\cite{MR0256911} and \cite{MR2368647}.\marginpar{find a good reference for topological graph theory}
\begin{dfn}
Let $G$ be a plane graph (with possibly some vertex-less loops), we define \emph{the dual graph $D(G)$ of $G$} to be the abstract graph given as follows:
\begin{itemize}
\item The set of vertices $V(D(G))$ of $D(G)$ is in one-one correspondence with the set of connected components of $\RR^2\setminus G$ (including the unbounded connected component). Such connected components are called \emph{faces}.
\item The set of edges of $D(G)$ is in one-one correspondence with the set of edges of $G$ (in this construction, vertex-less loops are not seen as edges). If an edge $e$ of $G$ is adjacent to the faces $f$ and $g$ (note that $f$ may be equal to $g$ if $e$ is a bridge), then the corresponding edge $e'$ in $D(G)$ joins $f'$ and $g'$, the vertices of $D(G)$ corresponding to $f$ and $g$.
\end{itemize}
\end{dfn}
Note that in general the faces need not be diffeomorphic to disks. It is easy to see that the dual graph of a plane graph is planar: we place one vertex inside each face, and we draw an edge $e'$ corresponding to $e$ so that it crosses $e$ exactly once and it crosses no other edges of $G$. Such an embedding of $D(G)$ is a \emph{plane dual} of the graph $G$ (see figure~\ref{fig:exdualgrpah}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./th_exdualgraph}
\end{tikzpicture}
\caption{In black an $\epsilon$-web $w$ and in blue the dual graph of $w$. The dotted edges are all meant to belong to $D(w)$ and to reach the vertex $u$ corresponding to the unbounded component of $\RR^2\setminus w$.}
\label{fig:exdualgrpah}
\end{figure}
\begin{dfn}\label{dfn:red-graph}
Let $w$ be an $\epsilon$-web, a \emph{red graph} for $w$ is a non-empty subgraph $G$ of $D(w)$ such that:
\begin{enumerate}[(i)]
\item\label{item:dfnRG1} All faces belonging to $V(G)$ are diffeomorphic to disks. In particular, the unbounded face is not in $V(G)$.
\item\label{item:dfnRG2} If $f_1$, $f_2$ and $f_3$ are three faces of $w$ which share together a vertex, then at least one of the three does not belong to $V(G)$.
\item\label{item:dfnRG3} If $f_1$ and $f_2$ belong to $V(G)$ then every edge of $D(w)$ between $f_1$ and $f_2$ belongs to $E(G)$, i.~e.~ $G$ is an induced subgraph of $D(w)$.
\end{enumerate}
If $f$ is a vertex of $G$ we define $\ensuremath{\mathrm{ed}}(f)$, \emph{the external degree of $f$}, by the formula: \[ \ensuremath{\mathrm{ed}}(f) = \deg_{D(w)}(f) - 2\deg_G(f).\]
\end{dfn}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1]
\input{./th_exRG}
\end{tikzpicture}
\caption{Example of a red graph.}
\label{fig:exRG}
\end{figure}
\begin{req}\label{rk:evenextdeg}
Note that the external degree of a face $f$ is always an even number because, $w$ being bipartite, all cycles are of even length and hence $\deg_{D(w)}(f)$ is even.
\end{req}
Let $G$ be a red graph for $w$; if on the web we colour the faces which belong to $V(G)$, then the external degree of a face $f$ in $V(G)$ is the number of half-edges of $w$ which touch the face $f$ and lie in the uncoloured region. These half-edges are called the \emph{grey half-edges} of $f$ in $G$, or of $G$ when we consider the set of all grey half-edges of all vertices of $G$. See figure~\ref{fig:halfgrey}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.6]
\input{./th_halfgrayedge}
\end{tikzpicture}
\caption{Interpretation of the external degree in terms of grey half-edges. On the left, a portion of a web $w$ with a red graph; on the right, the same portion of $w$ with the vertices of $G$ orange-coloured. The external degree of $f$ is the number of half-edges touching $f$ which are not orange. In our case $\ensuremath{\mathrm{ed}}(f)=2$.}
\label{fig:halfgrey}
\end{figure}
An oriented red graph is a red graph together with an orientation. \emph{A priori} there is no restriction on the orientations, but as we shall see only a few of them will be relevant to consider.
\begin{dfn}\label{dfn:index-red-graph}
Let $w$ be an $\epsilon$-web, $G$ be a red graph for $w$ and $o$ an orientation for $G$, we define the level $i_o(f)$ (or $i(f)$ when this is not ambiguous) of a vertex $f$ of $G$ by the formula:
\begin{align*}
i_o(f) &\eqdef 2 - \frac{1}{2}\ensuremath{\mathrm{ed}}(f) - \#\{\textrm{edges of $G$ pointing to $f$}\} \\
&= 2- \frac{\deg_{D(w)}(f)}{2} + \#\{\textrm{edges of $G$ pointing away from $f$}\}
\end{align*}
and the level $I(G)$ of $G$ is the sum of levels of all vertices of $G$.
\end{dfn}
\begin{req}\label{req:formindex}
The level is an integer because of remark~\ref{rk:evenextdeg}.
Note that the level of $G$ does not depend on the orientation of $G$: summing the first expression of $i_o(f)$ over the vertices of $G$ and noting that each edge of $G$ points to exactly one vertex, we obtain the formula:\[ I(G) = 2\#V(G) - \#E(G) - \frac{1}{2}\sum _{f\in V(G)}\ensuremath{\mathrm{ed}}(f).\]
\end{req}
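For instance, if $f$ is a vertex of $G$ with $\deg_{D(w)}(f)=6$ and $\deg_G(f)=2$, and if both edges of $G$ adjacent to $f$ point away from $f$, then $\ensuremath{\mathrm{ed}}(f)=6-2\cdot 2=2$ and
\[
i_o(f)=2-\frac{1}{2}\cdot 2 - 0 = 2 - \frac{6}{2}+2 = 1 .
\]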
\begin{dfn}\label{dfn:RG-admissible}
A red graph is \emph{admissible} if one can choose an orientation such that for each vertex $f$ of $G$ we have: $i(f)\geq 0$. Such an orientation is called \emph{a fitting orientation}. An admissible red graph $G$ for $w$ is \emph{exact} if $I(G)=0$.
\marginpar{For a web $w$ we define $M(w) = \max I(G)$ where the maximum is taken over all admissible red graph $G$ for $w$.???}
\end{dfn}
\begin{dfn}\label{dfn:paired-RG}\marginpar{should we put the orientation ?}
Let $w$ be an $\epsilon$-web and $G$ be a red graph for $w$. A \emph{pairing} of $G$ is a partition of the grey half-edges of $G$ into subsets of 2 elements such that for any subset the two half-edges touch the same face $f$, and one points to $f$ and the other one points away from $f$. A red graph together with a pairing is called a \emph{paired red graph.}
\end{dfn}
\begin{dfn}
A red graph $G$ in an $\epsilon$-web $w$ is \emph{fair} (resp.~{} \emph{nice}) if for every vertex $f$ of $G$ we have $\ensuremath{\mathrm{ed}}(f)\leq 4$ (resp.~{} $\ensuremath{\mathrm{ed}}(f)\leq 2$).
\end{dfn}
\begin{lem}\label{lem:RGadm2fair}
If $G$ is an admissible red graph in an $\epsilon$-web $w$, then $G$ is fair.
\end{lem}
\begin{proof}
It follows directly from the definition of the level.
\end{proof}
\begin{cor}\label{cor:RGinNEhaveVs}
Let $w$ be a non-elliptic $\epsilon$-web. If $G$ is an admissible red graph for $w$, then it has at least two vertices.
\end{cor}
\begin{proof}
If $G$ contained just one vertex $f$, this vertex would have external degree greater than or equal to 6, contradicting lemma~\ref{lem:RGadm2fair}. We can actually show that such a red graph contains at least 6 vertices (see corollary~\ref{cor:no-tree} and proposition~\ref{prop:largecycle}).
\end{proof}
\begin{req}\label{req:number-of-pairing}
If a red graph $G$ is nice, there is only one possible pairing. If it is fair, the number of pairings is $2^n$ where $n$ denotes the number of vertices with external degree equal to 4.
If on a picture one draws together a web $w$ and a red graph $G$ for $w$, one can encode a pairing of $G$ on the picture by joining\footnote{We impose that $w$ intersects the dashed lines only at their ends.} with dashed lines the paired half-edges. Note that if $G$ is fair it is always possible to draw disjoint dashed lines (see figure~\ref{fig:pairings} for an example).
\end{req}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\input{./th_pairing}
\end{tikzpicture}
\caption{A web $w$, a red graph $G$ and the two possible pairings for $G$.}
\label{fig:pairings}
\end{figure}
The rest of the chapter (respectively in section~\ref{sec:RG2idem} and \ref{sec:kup2RG}) will be devoted to show the following two theorems:
\begin{thm}\label{thm:RG2idempotent}
To every exact paired red graph of $w$ we can associate a non-trivial idempotent of $\hom_{K^\epsilon}(P_w,P_w)$. Furthermore, the direct factor associated with the idempotent is a web-module.
\end{thm}
\begin{thm}\label{thm:on-monic-2-RG}
Let $w$ be a non-elliptic $\epsilon$-web. If $\kup{\overline{w}w}$ is non-monic or has degree bigger than $l(\epsilon)$, then there exists an exact red graph for $w$, and therefore the $K^\epsilon$-module $P_w$ is decomposable.
\end{thm}
\subsection{Combinatorics on red graphs}
\label{sec:comb-red-graphs}
On the one hand, the admissibility of a red graph relies on the local non-negativity of the level for some orientation; on the other hand, the global level $I$ does not depend on the orientation. However, it turns out that the existence of an admissible red graph $G$ for an $\epsilon$-web $w$ can be understood, in some sense, thanks to $I$:
\begin{prop}\label{prop:2admissible}
Let $w$ be an $\epsilon$-web and suppose that there exists a red graph $G$ for $w$ such that $I(G)\geq 0$. Then there exists an admissible red graph $\widetilde{G}$ for $w$ such that $I(\widetilde{G})\geq I(G)$.\marginpar{put non-empty in the definition of red graph}
\end{prop}
\begin{proof} If $G$ is already admissible, there is nothing to show, hence we suppose that $G$ is not admissible. Among all the orientations for $G$, we choose one such that $\sum_{f\in V(G)} |i(f)|$ is minimal; we denote it by $o$. From now on $G$ is endowed with this orientation. As $G$ is not admissible there exist some vertices with negative level and some with positive level.
We first show that there is no oriented path from a vertex $f_p$ with $i_o(f_p)> 0$ to a vertex $f_n$ with $i_o(f_n)<0$. Suppose there exists such a path $\gamma$. Let us inspect the orientation $o'$ which is the same as $o$ except along the path $\gamma$, where it is reversed. For all vertices $f$ of $G$ but $f_p$ and $f_n$, we have $i_o(f)=i_{o'}(f)$ (for all the vertices inside the path, the position of the edges pointing to them is changed, but not their number), and we have:
\[
i_{o'}(f_p) = i_o(f_p)-1 \quad i_{o'}(f_n) = i_o(f_n)+1.
\] But then $\sum_{f\in V(G)} |i_{o'}(f)|$ would be strictly smaller than $\sum_{f\in V(G)} |i_o(f)|$ and this contradicts that $o$ is minimal.
We consider $(\widetilde{G}, \tilde{o})$, the induced oriented sub-graph of $(G,o)$ whose set of vertices $V(\widetilde{G})$ consists of the vertices of $G$ which can be reached from a vertex with positive level by an oriented path. This set is not empty since it contains the vertices with positive level. It contains no vertex with negative level. For all vertices of $\widetilde{G}$, we have:
\begin{align*} i_{\tilde{o}}(f) &= 2 - \frac{\deg_{D(w)}(f)}2 + \#\{\textrm{edges of $G$ pointing away from $f$ in $\widetilde{G}$}\} \\ &=
2 - \frac{\deg_{D(w)}(f)}2 + \#\{\textrm{edges of $G$ pointing away from $f$ in $G$}\} \\
&= i_o(f).\end{align*}
The second equality holds because, if $f$ is in $V(\widetilde{G})$, all the edges in $E(G)\setminus E(\widetilde{G})$ adjacent to $f$ point to $f$, by definition of $\widetilde{G}$. This shows that $\widetilde{G}$ is admissible and $I(\widetilde{G})>I(G)$.
\end{proof}
\begin{lem}\label{lem:RGinNE2niceRG}
Let $w$ be a non-elliptic web, suppose that it contains a red graph of level $k$, then it contains an admissible nice red graph of level at least $k$.
\end{lem}
\begin{proof}
We consider a red graph $G$ of $w$ of level $k$. Thanks to proposition~\ref{prop:2admissible} we can suppose that it is admissible. We can take $G$ minimal for the property of being admissible of level at least $k$. The graph $G$ is endowed with a fitting orientation. Now suppose that it is not nice: this means that there exists a vertex $v$ of $G$ which has external degree equal to 4. But $G$ being admissible, all the edges of $G$ adjacent to $v$ point out of $v$, so that we can remove $v$, i.~e.~ we can consider the induced sub-graph $G'$ with all the vertices of $G$ but $v$, with the induced orientation. It is admissible, with the same level, hence $G$ is not minimal, a contradiction.
\end{proof}
For a non-elliptic $\epsilon$-web, the existence of an exact red graph may appear as an exceptional situation, between the case where there is no admissible red graph and the case where all admissible red graphs are non-exact. The aim of the rest of this section is to show proposition~\ref{prop:exist-exact}, which indicates that this is not the case. On the way we state some small results which are not directly useful for the proof but may shed light on what red graphs look like.
\begin{prop}\label{prop:exist-exact}
Let $w$ be a non-elliptic $\epsilon$-web. If there exists an admissible red graph for $w$ then there exists an exact red graph for $w$.
\end{prop}
\begin{dfn}\label{dfn:subred}
Let $w$ be an $\epsilon$-web, and $G$ and $G'$ two admissible red graphs for $w$. We say that $G'$ is a \emph{red sub-graph} of $G$ if $V(G')\subset V(G)$. We denote by $\ensuremath{\mathcal{G}}(G)$ the set of all admissible red sub-graphs of $G$. It is endowed with the order given by the inclusion of the sets of vertices. We say that $G$ is \emph{minimal} if $\ensuremath{\mathcal{G}}(G) = \{G\}$.
\end{dfn}
Note that a red sub-graph is an induced sub-graph and that a minimal red-graph is connected.
\begin{lem}\label{lem:no_cut}
Let $w$ be an $\epsilon$-web and $G$ a minimal admissible red graph endowed with a fitting orientation. There is no non-trivial partition of $V(G)$ into two sets $V_1$ and $V_2$ such that for each vertex $v_1$ in $V_1$ and each vertex $v_2$ in $V_2$ every edge between $v_1$ and $v_2$ is oriented from $v_1$ to $v_2$.
\end{lem}
\begin{proof}
If there were such a partition, we could consider the red sub-graph $G'$ with $V(G') =V_2$. For every vertex in $V_2$ the level is the same in $G$ and in $G'$, hence $G'$ would be admissible and $G$ would not be minimal.
\end{proof}
\begin{cor}
\label{cor:noleaf4minimal} Let $w$ be an $\epsilon$-web and $G$ a minimal admissible red graph for $w$; then the graph $G$ has no leaf\footnote{We mean a vertex of degree 1.}. Therefore if it has 2 or more vertices, then it is not a tree.
\end{cor}
\begin{proof}
Indeed, if $v$ were a leaf of $G$, the vertex $v$ would be either a sink or a source, hence $V(G)\setminus\{v\}$ and $\{v\}$ would partition $V(G)$ in a way forbidden by lemma~\ref{lem:no_cut}.
\end{proof}
\begin{cor}\label{cor:no-tree}
If $G$ is an admissible red graph for a non-elliptic $\epsilon$-web $w$, then $G$ is not a tree.
\end{cor}
\begin{proof}
Consider a minimal red sub-graph of $G$. Thanks to corollaries~\ref{cor:RGinNEhaveVs} and \ref{cor:noleaf4minimal}, it is not a tree, hence $G$ is not a tree.
\end{proof}
\begin{lem}
Let $w$ be an $\epsilon$-web and $G$ a minimal red graph for $w$. If $G$ has more than 2 vertices, then it is nice.
\end{lem}
\begin{proof}
Suppose that we have a vertex $v$ of $G$ with external degree equal to 4. Consider a fitting orientation for $G$. All edges of $G$ adjacent to $v$ must point away from $v$, otherwise the level of $v$ would be negative. So $v$ would be a source and, thanks to lemma~\ref{lem:no_cut}, this is not possible.
\end{proof}
\begin{lem}\label{lem:strong-connected}
Let $w$ be a non-elliptic $\epsilon$-web and $G$ a minimal admissible red graph. If the red graph $G$ is endowed with a fitting orientation, then it is strongly connected.
\end{lem}
The terms \emph{weakly connected} and \emph{strongly connected} are classical in graph theory: the first means that the underlying unoriented graph is connected in the usual sense; the second that for any pair of vertices $v_1$ and $v_2$, there exists an oriented path from $v_1$ to $v_2$ and an oriented path from $v_2$ to $v_1$.
\begin{proof}
Let $v$ be a vertex of $G$, and consider the subset $V_v$ of $V(G)$ which contains the vertices of $G$ reachable from $v$ by an oriented path. The sets $V_v$ and $V(G)\setminus V_v$ form a partition of $V(G)$ which must be trivial because of lemma~\ref{lem:no_cut}; but $v$ is in $V_v$, therefore $V_v=V(G)$. This is true for any vertex $v$, and this shows that $G$ is strongly connected.
\end{proof}
\begin{prop}\label{prop:largecycle}
If $G$ is a red graph for a non-elliptic $\epsilon$-web $w$, then any (non-oriented) simple cycle in $G$ has at least 6 vertices.
\end{prop}
\begin{proof}Take a non-trivial simple cycle $C$ in $G$. \marginpar{We can suppose that $C$ is minimal. A red graph contains no triangle; this shows that some faces of $w$ must be nested in the cycle $C$.} We consider the collection of faces of $w$ nested by $C$ (this is non-empty thanks to condition~(\ref{item:dfnRG3}) of the definition of red graphs). This defines a plane graph $H$. We define $H'$ to be the graph $H$ with the bivalent vertices smoothed (we mean here that if locally $H$ looks like $\vcenter{\hbox{\tikz{\draw (0,0)-- (1,0); \filldraw (0.5,0) circle (1pt);}}}$, then $H'$ looks like $\vcenter{\hbox{\tikz{\draw (0,0) -- (1,0);}}}$). An example of this construction is depicted in figure~\ref{fig:exlargecycle}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.95]
\input{./rg_exlargecycle}
\end{tikzpicture}
\caption{On the left the $\epsilon$-web $w$ and the red graph $G$, in the middle the graph $H$, and on the right, the graph $H'$.}
\label{fig:exlargecycle}
\end{figure}
The $\epsilon$-web $w$ being non-elliptic, each face of $H$ has at least $6$ sides. We compute the Euler characteristic of $H'$:
\[\chi(H')= \#F(H')-\#E(H')+\#V(H') =2. \]
As in proposition~\ref{prop:closed2elliptic}, this gives us $\sum_{i\in \NN} F_i(H')(1-\frac i6) =2$, where $F_i(H')$ is the number of faces of $H'$ with $i$ sides. Restricting the sum to $i\leq 5$ and to the bounded faces, whose number with $i$ sides we denote by $F'_i(H')$, and noting that the unbounded face contributes at most 1 while each face with at least 6 sides contributes non-positively, we have:
\[\sum_{i=0}^5 F'_i(H')(6 - i) \geq 6.\]
But the bounded faces of $H'$ with less than 6 sides come from bounded faces of $H$, which have at least 6 sides. The number $n$ of bivalent vertices in $H$ is therefore greater than or equal to $\sum_{i=0}^5 F'_i(H')(6 - i)$, i.~e.~ greater than or equal to 6. But $n$ is also the length of the cycle $C$. \marginpar{make an example?}
\end{proof}
Note that a cycle in a red graph can have an odd length (as in the example of figure~\ref{fig:exlargecycle}).
\begin{lem}
Let $G$ be a minimal admissible red graph for a non-elliptic $\epsilon$-web $w$. Then $G$ has at least one vertex with degree 2.
\end{lem}
\begin{proof}
Suppose that all vertices of $G$ have degree greater than or equal to 3; then the graph $G$ would contain a face with strictly less than 6 sides (this is the same argument as in proposition~\ref{prop:closed2elliptic}, which tells us that a closed web contains a circle, a digon or a square). But this contradicts proposition~\ref{prop:largecycle}. \marginpar{terminology: oriented graph or digraph ?}
\end{proof}
The proposition~\ref{prop:exist-exact} is a direct consequence of the following lemma:
\begin{lem}\label{lem:minimal2exact}
Let $w$ be a non-elliptic $\epsilon$-web and $G$ a minimal admissible red graph for $w$. Then $G$ is exact.
\end{lem}
\begin{proof}
We endow $G$ with a fitting orientation $o$. Suppose $G$ is not exact; then we can find a vertex $f$ with $i_o(f)>0$.
We first consider the case where $\deg(f)=2$. The $\epsilon$-web $w$ being non-elliptic, $\ensuremath{\mathrm{ed}}(f)\geq 2$. This shows that the two edges adjacent to $f$ point away from $f$; hence $f$ is a source and this contradicts lemma~\ref{lem:no_cut}.
Now, let us consider the general case. Let $f'$ be a vertex of degree 2. Lemma \ref{lem:strong-connected} implies that there exists an oriented path $\gamma$ from $f$ to $f'$. Let us reverse the orientations of the edges of $\gamma$; we denote by $o'$ this new orientation. Then we have $i_{o'}(f)=i_{o}(f)-1\geq 0$ and $i_{o'}(f')=i_o(f')+1\geq 1$. The levels of all other vertices are not changed, hence $o'$ is a fitting orientation, and we are back in the first situation (where $f'$ plays the role of $f$).
\end{proof}
\section{Idempotents from red graphs}
\label{sec:RG2idem}
\input{rgchapter4bis}
\subsection{On the identity foam}
\label{sec:identity-foam}
\begin{dfn}
Let $w$ be an $\epsilon$-web, and $f$ a $(w,w)$-foam, we say that $f$ is \emph{reduced} if every facet of $f$ is diffeomorphic to a disk and if $f$ contains no singular circle (i.~e.~ only singular arcs). In particular this implies that every facet of $f$ meets $w\times \{0\}$ or $w\times\{1\}$.
\end{dfn}
The aim of this section is to prove the following proposition:
\begin{prop}\label{prop:idfoam}
Let $w$ be a non-elliptic $\epsilon$-web. If $f$ is a reduced $(w,w)$-foam which is equivalent (under the foam relations \FR{}) to a non-zero multiple of $w\times [0,1]$, then the underlying pre-foam is diffeomorphic to $w\times [0,1]$ and contains no dot.
\end{prop}
For this purpose we begin with a few technical lemmas:
\begin{lem} \label{lem:foamwithdots}
Let $w$ be a closed web and $e$ an edge of $w$. Then there exists $f$ a $(\emptyset, w)$-foam which is not equivalent to 0 such that the facet of $f$ touching the edge $e$ contains at least one dot.
\end{lem}
\begin{proof}
We prove the lemma by induction on the number of edges of the web $w$.
It is enough to consider the case $w$ connected because the functor $\mathcal{F}$ is monoidal. If the web $w$ is a circle this is clear, since a cap with one dot on it is not equivalent to 0. If $w$ is the theta web, then this is clear as well, since the half theta foam with one dot on the facet meeting $e$ is not equivalent to 0.
Else, there exists a square or digon in $w$ somewhere far from $e$. Let us denote $w'$ the web similar to $w$ but with the digon replaced by a single strand or the square smoothed in one way or the other. By induction we can find an $(\emptyset, w')$-foam $f'$ non-equivalent to 0 with one dot on the facet touching $e$.
Next to the strand or the smoothed square, we consider a digon move or a square move (move upside down the pictures of figure~\ref{fig:smdmc}). Seen as a $(w',w)$-foam it induces an injective map. Therefore, the composition of $f'$ with this $(w',w)$-foam is not equivalent to 0 and has one dot on the facet touching $e$.
\end{proof}
\begin{notation}
Let $w$ be an $\epsilon$-web, and $e$ be an edge of $w$. We denote by $f(w,e)$ the $(\emptyset,\overline{w}w)$-foam which is diffeomorphic to $w\times [0,1]$ with one dot on the facet $e\times [0,1]$. We denote by $f(w,\emptyset)$ the $(\emptyset,\overline{w} w)$-foam which is diffeomorphic to $w\times [0,1]$ with no dot on it.
\end{notation}
\begin{cor}
Let $w$ be an $\epsilon$-web, and $e$ an edge of $w$, then $f(w,e)$ is non-equivalent to 0.
\end{cor}
\begin{proof}
From lemma~\ref{lem:foamwithdots} we know that, for any $w$, there exists a $(w,w)$-foam which is not equivalent to 0 and which is the composition of $f(w,e)$ with another $(w,w)$-foam. This proves that the foam $f(w,e)$ is not equivalent to 0.
\end{proof}
\begin{dfn}
Let $w$ be an $\epsilon$-web. We say that $w$ contains a $\lambda$ (resp.\ a $\cap$, resp.\ an $H$) if, next to the border, $w$ looks like one of the pictures of figure~\ref{fig:dfnUHY}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./sw_dfnUHY}[scale=0.6]
\end{tikzpicture}
\caption{From left to right: a $\lambda$, a $\cap$ and a $H$.}
\label{fig:dfnUHY}
\end{figure}
\end{dfn}
\begin{lem}\label{lem:UHYinNE}
Every non-elliptic $\epsilon$-web contains at least a $\lambda$, a $\cap$ or an $H$.
\end{lem}
\begin{proof}
The closed web $\overline{w}w$ contains a circle, a digon or a square, and this happens only if $w$ contains a $\cap$, a $\lambda$ or an $H$.
\end{proof}
\begin{req}
In fact, one can ``build'' every non-elliptic web from these three elementary configurations. This is done via the ``growth algorithm'' (see~\cite{MR1684195}).
\end{req}
\begin{lem}\label{lem:NEwebwithdots}
Let $w$ be a non-elliptic $\epsilon$-web. Then the elements of $\left( f(w,e)\right)_{e\in E(w)}$ are pairwise non-equivalent (but they may be linearly dependent).
\end{lem}
\begin{proof}[Sketch of the proof]
We proceed by induction on the number of edges of $w$. The base case is straightforward: if $w$ has only one edge there is nothing to prove. Let $e_1$ and $e_2$ be two distinct edges of $w$; we show that $f(w,e_1)$ and $f(w,e_2)$ are not equivalent. We can distinguish several cases thanks to lemma~\ref{lem:UHYinNE}:
If $w$ contains a $\cap$, we denote by $e$ the edge of this $\cap$, and by $w'$ the $\epsilon'$-web similar to $w$ but with the cap removed. Suppose that $e_1=e$. Then $e_2\neq e$, and the $(\emptyset,\overline{w}w)$-foams $f(w,e_1)$ and $f(w,e_2)$ are different: if we cap the cup (that is, the facet $e\times I$) with a cap carrying one dot, on the one hand we obtain a $(\emptyset,\overline{w'}w')$-foam equivalent to $0$, and on the other hand a $(\emptyset,\overline{w'}w')$-foam equivalent to $f(w',\emptyset)$. Thanks to lemma~\ref{lem:foamwithdots}, we know that this last $(\emptyset,\overline{w'}w')$-foam is not equivalent to 0. If $e_1$ and $e_2$ are both different from $e$, the claim is clear as well, because $f(w, e_1)$ and $f(w,e_2)$ can be seen as compositions of $f(w', e_1)$ and $f(w',e_2)$ with a birth (seen as a $(\overline{w'}w', \overline{w}w)$-foam), which is known to correspond to an injective map.
The two other cases are handled by the same kind of argument, using the digon relations and the square relations instead of the sphere relations.
\end{proof}
\begin{lem}\label{lem:red1touch2wI}
Let $w$ be an $\epsilon$-web and $f$ a reduced $(w,w)$-foam. Suppose that every facet of $f$ touches $w\times\{0\}$ on at most one edge and touches $w\times\{1\}$ on at most one edge; then $f$ is isotopic to $w\times [0,1]$.
\end{lem}
\begin{proof}
We proceed by induction on the number of vertices of $w$. If $w$ is a collection of arcs, the foam $f$ has no singular arc. As $f$ is supposed to be reduced, it has no singular circle either. Therefore it is a collection of disks which correspond to the arcs of $w$, and this proves the result in this case.
We suppose now that $w$ has at least one vertex. Let us pick a vertex $v$ which is a neighbour (via an edge that we call $e$) of the boundary $\epsilon$ of $w$. We claim that the singular arc $\alpha$ starting at $v\times\{0\}$ must end at $v\times\{1\}$.
Indeed, the arc $\alpha$ cannot end on $w\times\{0\}$, for otherwise the facet of $f$ touching $e$ would touch another edge of $w$. Therefore the arc $\alpha$ ends on $w\times\{1\}$.
For exactly the same reason, it has to end at $v\times\{1\}$, so that the facet which touches $e\times\{0\}$ is isotopic to $e\times I$. We can now remove a neighbourhood of this facet, and we are back in the same situation with an $\epsilon'$-web with fewer vertices; this concludes the proof.
\end{proof}
\begin{proof}[Proof of proposition~\ref{prop:idfoam}]
We consider a non-elliptic $\epsilon$-web $w$. Let $f$ be a reduced $(w,w)$-foam which is equivalent to a non-zero multiple of $w\times [0,1]$. Because of lemma~\ref{lem:NEwebwithdots}, the foam $f$ satisfies the hypotheses of lemma~\ref{lem:red1touch2wI}, so that $f$ is isotopic to $w\times [0,1]$.
\end{proof}
We conjecture that proposition~\ref{prop:idfoam} still holds without the non-ellipticity hypothesis. However, the proof would have to be changed, since lemma~\ref{lem:NEwebwithdots} cannot be extended to elliptic webs (consider the facets around a digon).
\begin{cor}
If $w$ is a non-elliptic $\epsilon$-web and $w'$ is an $\epsilon$-web with strictly fewer vertices than $w$, then for any $(w,w')$-foam $f$ and any $(w',w)$-foam $g$, the $(w,w)$-foam $fg$ cannot be equal to a non-zero scalar times the identity.
\end{cor}
\begin{landscape}
\begin{figure}
\begin{tikzpicture} [scale = 0.7]
\begin{scope}
\input{./th_exdecofmodules}
\end{scope}
\node at (2.9, 0) {$\simeq \, P \, \oplus$};
\begin{scope}[xshift = 5.8cm]
\input{./th_exdecofmodules1}
\end{scope}
\begin{scope}[xshift = 11.9cm]
\input{./th_exdecofmodules1}
\end{scope}
\node at (8.7, 0) {$\{-1\}\,\,\,\,\oplus$};
\node at (14.5,0) {$\{+1\}$};
\begin{scope}[yshift =-4.5cm, xshift = -2cm]
\node at (2, 0) {$\oplus$};
\begin{scope}[xshift = 4.4cm]
\input{./th_exdecofmodules2}
\end{scope}
\node at (6.6, 0) {$\oplus$};
\begin{scope}[xshift = 8.8cm, yscale= -1]
\input{./th_exdecofmodules2}
\end{scope}
\node at (11, 0) {$\oplus$};
\begin{scope}[xshift = 13.2cm, yscale= -1]
\input{./th_exdecofmodules3}
\end{scope}
\node at (15.4, 0) {$\oplus$};
\begin{scope}[xshift = 17.6cm, yscale= 1]
\input{./th_exdecofmodules4}
\end{scope}
\node at (19.8, 0) {$\oplus$};
\begin{scope}[xshift = 22.0cm, yscale= -1]
\input{./th_exdecofmodules4}
\end{scope}
\node at (24.2, 0) {$\oplus$};
\begin{scope}[xshift = 26.4cm, yscale= 1]
\input{./th_exdecofmodules5}
\end{scope}
\end{scope}
\begin{scope}[yshift =-9cm, xshift = -2cm]
\node at (2, 0) {$\oplus$};
\begin{scope}[xshift = 4.4cm, xscale = -1]
\input{./th_exdecofmodules2}
\end{scope}
\node at (6.6, 0) {$\oplus$};
\begin{scope}[xshift = 8.8cm, yscale= -1, xscale = -1]
\input{./th_exdecofmodules2}
\end{scope}
\node at (11, 0) {$\oplus$};
\begin{scope}[xshift = 13.2cm, yscale= -1, xscale = -1]
\input{./th_exdecofmodules3}
\end{scope}
\node at (15.4, 0) {$\oplus$};
\begin{scope}[xshift = 17.6cm, yscale= 1, xscale = -1]
\input{./th_exdecofmodules4}
\end{scope}
\node at (19.8, 0) {$\oplus$};
\begin{scope}[xshift = 22.0cm, yscale= -1, xscale = -1]
\input{./th_exdecofmodules4}
\end{scope}
\node at (24.2, 0) {$\oplus$};
\begin{scope}[xshift = 26.4cm, yscale= 1, xscale = -1]
\input{./th_exdecofmodules5}
\end{scope}
\end{scope}
\end{tikzpicture}
\hspace{1cm}
\caption{Example of a decomposition of a web-module into indecomposable modules. All direct factors which are web-modules are obtained through idempotents associated with red graphs. The module $P$ is not a web-module but is a projective indecomposable module.}
\end{figure}
\end{landscape}
\section{Characterisation of indecomposable web-modules}
\label{sec:kup2RG}
\subsection{General View}
\label{sec:general-view}
Lemma~\ref{lem:monic2indec} states that the indecomposability of a web-module $P_w$ can be deduced from the Laurent polynomial $\kup{\overline{w}w}$. In this section we will show a converse statement. We first need a definition:
\begin{dfn}
Let $\epsilon$ be an admissible sequence of signs of length $n$. An $\epsilon$-web $w$ is said to be \emph{virtually indecomposable} if $\kup{\overline{w}w}$ is a monic symmetric Laurent polynomial of degree $n$.
An $\epsilon$-web which is not virtually indecomposable is \emph{virtually decomposable}. If $w$ is a virtually decomposable $\epsilon$-web, we define the \emph{level of $w$} to be the integer $\frac12 (\deg \kup{\overline{w}w} -n)$.
\end{dfn}
Despite its seemingly fractional definition, the level is an integer.
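For instance (the numbers here are purely illustrative), if $\epsilon$ has length $n=6$ and $\deg \kup{\overline{w}w}=10$, then the level of $w$ is
\[
\frac12\left(\deg \kup{\overline{w}w} - n\right) = \frac12(10-6) = 2 .
\]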
With this definition, lemma~\ref{lem:monic2indec} can be restated as follows:
\begin{lem}
\label{lem:VI2indec}
If $w$ is a virtually indecomposable $\epsilon$-web, then $M(w)$ is an indecomposable $K^\epsilon$-module.
\end{lem}
The purpose of this section is to prove a converse statement, in order to obtain:
\begin{thm}\label{thm:Charac}
Let $\epsilon$ be an admissible sequence of signs of length $n$, and $w$ an $\epsilon$-web. Then the $K^\epsilon$-module $P_w$ is indecomposable if and only if $w$ is virtually indecomposable.
\end{thm}
\begin{req}
Note that we do not suppose that $w$ is non-elliptic, but as a matter of fact, if $w$ is elliptic then $\kup{\overline{w}w}$ is not monic of degree $n$ and the module $P_w$ is decomposable.
\end{req}
To prove the remaining direction of theorem~\ref{thm:Charac} we use the red graphs developed in the previous section, and we will show the following more precise version of the theorem:
\begin{thm}\label{thm:thmwithRG}
If $w$ is a non-elliptic virtually decomposable $\epsilon$-web of level $k$, then $w$ contains an admissible red graph of level $k$, hence $\mathrm{End}_{K^\epsilon}(P_w)$ contains a non-trivial idempotent and $P_w$ is decomposable.
\end{thm}
\begin{proof}[Proof of theorem \ref{thm:Charac} assuming theorem \ref{thm:thmwithRG}] Let $w$ be a virtually decomposable $\epsilon$-web and let us denote by $k$ its level. Thanks to the remark above, we may assume that $w$ is non-elliptic. From theorem~\ref{thm:thmwithRG} we know that there exists a red graph $G''$ of level $k$. Then, thanks to proposition~\ref{prop:2admissible}, there exists a sub red graph $G'$ of $G''$ which is admissible. Finally, proposition~\ref{prop:exist-exact} shows the existence of an exact red graph $G$ in $w$. We can apply theorem~\ref{thm:RG2idempotent} to $G$, and this tells us that $P_w$ is decomposable.
\end{proof}
The proof of theorem \ref{thm:thmwithRG} is by induction on the number of edges of the web $w$. But for the induction to work, we need to handle elliptic webs as well. We will actually show the following:
\begin{prop}\label{prop:techRG}
\begin{enumerate}
\item\label{it:ptRG1} If $w$ is a $\partial$-connected $\epsilon$-web which is virtually decomposable of level $k\geq 1$, then there exists a stack $S$ of nice red graphs for $w$ of level at least $k$ such that $w_S$ is $\partial$-connected.
\item\label{it:ptRG2} If $w$ is a $\partial$-connected $\epsilon$-web which is virtually decomposable of level $k\geq 1$, contains no digon and contains exactly one square, which is supposed to be adjacent to the unbounded face, then there exists a nice red graph $G$ in $w$ of level at least $k$ such that $w_G$ is $\partial$-connected.
\item\label{it:ptRG3} If $w$ is a non-elliptic $\epsilon$-web which is virtually decomposable of level $k\geq 0$, then there exists a nice red graph $G$ in $w$ of level at least $k$ such that $w_G$ is $\partial$-connected.
\end{enumerate}
\end{prop}
Before proving the proposition we need to introduce \emph{stacks of red graphs} (see below) and the notion of $\partial$-connectedness (see section~\ref{sec:part-conn}).
We will then prove proposition~\ref{prop:techRG} thanks to a technical lemma (lemma~\ref{lem:tech}) which will be proven in section~\ref{sec:proof-lemmatech}, after an alternative point of view on red graphs is presented (section~\ref{sec:new-approach-to-red-graph}).
\begin{req}
It is easy to see that a non-elliptic superficial $\epsilon$-web contains no red graph of non-negative level, hence this result is strictly stronger than the result of~\cite{LHR1}.
\end{req}
\begin{dfn}
Let $w$ be an $\epsilon$-web. \emph{A stack of red graphs $S=(G_1, G_2,\dots, G_l)$ for $w$} is a finite sequence of paired red graphs such that $G_1$ is a red graph of $w_1\eqdef w$, $G_2$ is a red graph of $w_2\eqdef w_{G_1}$, $G_3$ is a red graph of $w_3\eqdef (w_{G_1})_{G_2} = (w_2)_{G_2}$, etc. We denote $(\cdots((w_{G_1})_{G_2})\cdots)_{G_l}$ by $w_S$, we denote $l$ by $l(S)$ and we call it \emph{the length of $S$}. We define the level of a stack to be the sum of the levels of the red graphs of the stack.
\end{dfn}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./th_stack}
\end{tikzpicture}
\caption{A stack of red graphs of length 2.}
\label{fig:stack}
\end{figure}
\begin{dfn}
A stack of red graphs is \emph{nice} if all its red graphs are nice. Note that in this case the pairing condition on red graphs is empty.
\end{dfn}
\subsection{The $\partial$-connectedness}
\label{sec:part-conn}
\begin{dfn}
An $\epsilon$-web $w$ is \emph{$\partial$-connected} if every connected component of $w$ touches the border.
\end{dfn}
A direct consequence is that a $\partial$-connected $\epsilon$-web contains no circle.
\begin{lem}
A non-elliptic $\epsilon$-web is $\partial$-connected.
\end{lem}
\begin{proof}
An $\epsilon$-web which is not $\partial$-connected has a closed connected component; this connected component contains at least a circle, a digon or a square, and hence the web is elliptic.
\end{proof}
\begin{lem}
Let $w$ be a $\partial$-connected $\epsilon$-web containing a digon. The $\epsilon$-web $w'$ obtained from $w$ by reducing the digon (see figure~\ref{fig:reddig}) is still $\partial$-connected. In other words, $\partial$-connectedness is preserved by digon-reduction.
\end{lem}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./th_reddig}
\end{tikzpicture}
\caption{On the left $w$, on the right $w'$.}
\label{fig:reddig}
\end{figure}
\begin{proof}
This is clear because every path in $w$ can be projected onto a path in $w'$.
\end{proof}
Note that $\partial$-connectedness is not preserved by square reduction, see for example figure~\ref{fig:sq2partial}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.2]
\input{./th_sq2partial}
\end{tikzpicture}
\caption{The $\partial$-connectedness is not preserved by square reduction.}
\label{fig:sq2partial}
\end{figure}
However we have the following lemma:
\begin{lem} If $w$ is a $\partial$-connected $\epsilon$-web which contains a square $S$, then one of the two $\epsilon$-webs obtained from $w$ by a reduction of $S$ (see figure~\ref{fig:tworeduction}) is $\partial$-connected.
\end{lem}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.7]
\input{./th_tworeduction}
\end{tikzpicture}
\caption{On the left, the $\epsilon$-web $w$ with the square $S$; in the middle and on the right, the two reductions of the square $S$.}
\label{fig:tworeduction}
\end{figure}
\begin{proof} Consider the oriented graph $\tilde{w}$ obtained from $w$ by removing the square $S$ and the 4 half-edges adjacent to it (see figure~\ref{fig:sqremoved}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale= 0.7]
\input{./th_square-removed}
\end{tikzpicture}
\caption{On the left $w$, on the right $\tilde{w}$.}
\label{fig:sqremoved}
\end{figure}
We obtain a graph with 4 fewer cubic vertices than $w$ and 4 more vertices of degree 1 than $w$. We call $E_S$ the cyclically ordered set of the 4 vertices of $\tilde{w}$ of degree 1 next to the removed square $S$. The orientations of the vertices in $E_S$ are $(+,-,+,-)$. Note that in $\tilde{w}$ the flow modulo 3 is preserved everywhere, so that the sum of the orientations of the degree-1 vertices of any connected component must be equal to 0 modulo 3. Suppose now that there is a connected component $t$ of $\tilde{w}$ which has all its vertices of degree 1 in $E_S$; the flow condition implies that either all vertices of $E_S$ are vertices of $t$, or exactly two consecutive vertices of $E_S$ are vertices of $t$, or $t$ has no vertex of degree 1. The first situation cannot happen, because by adding the square to $t$ we would construct a free connected component of $w$, which is supposed to be $\partial$-connected; the last situation is excluded for the same reason. So only the second situation can happen. If there were two different connected components $t_1$ and $t_2$ of $\tilde{w}$ such that $t_1$ and $t_2$ have all their vertices of degree 1 in $E_S$, then adding the square to $t_1\cup t_2$ would lead to a free connected component of $w$. So there is at most one connected component of $\tilde{w}$ with all its vertices of degree 1 in $E_S$; call these vertices $e_+$ and $e_-$, and call $e'_+$ and $e'_-$ the two other vertices of $E_S$ (the indices give the orientations). If we choose $w'$ to be the $\epsilon$-web corresponding to the smoothing which connects $e_+$ with $e'_-$ and $e_-$ with $e'_+$, then $w'$ is $\partial$-connected.
\end{proof}
\begin{dfn}
Let $w$ be a $\partial$-connected $\epsilon$-web and $S$ a square in $w$. The square $S$ is \emph{a $\partial$-square} if the two $\epsilon$-webs $w_{=}$ and $w_{||}$ obtained from $w$ by the two reductions of the square $S$ are $\partial$-connected.
\end{dfn}
\begin{lem}\label{lem:pc2ps}
If $w$ is a $\partial$-connected web, then either it is non-elliptic, or it contains either a digon or a $\partial$-square.
\end{lem}
\begin{proof}
Suppose that $w$ is not non-elliptic. As $w$ is $\partial$-connected, it contains no circle. It must then contain at least a digon or a square; if it contains a digon we are done, so suppose that $w$ contains no digon. We must show that at least one square is a $\partial$-square. Suppose that there is no $\partial$-square; this means that for every square $S$ there is a reduction such that the resulting $\epsilon$-web $w_{s(S)}$, obtained by replacing $w$ by the reduction, has a free connected component $t_S$. Let us consider a square $S_0$ such that $t_{S_0}$ is as small as possible (in terms of number of vertices, for example). The web $t_{S_0}$ is closed and connected, so that either it is a circle, or it contains a digon or at least two squares. If $t_{S_0}$ is a circle, then $w$ contains a digon just next to the square $S_0$, and we excluded this case (see figure \ref{fig:circle2digon}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale= 0.7]
\input{./th_circle2digon}
\end{tikzpicture}
\caption{On the left $w_{S_0}$, on the right $w$. If $t_{S_0}$ is a circle, then $w$ contains a digon.}
\label{fig:circle2digon}
\end{figure}
If it contains a digon, the digon must be next to where $S_0$ was smoothed, otherwise the digon would already be in $w$. Hence the digon comes from a square $S_1$ in $w$ ($S_1$ is adjacent to $S_0$), and $t_{S_1}$ has two fewer vertices than $t_{S_0}$, which is excluded (see figure~\ref{fig:digon2square}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\input{./th_digon2square}
\end{tikzpicture}
\caption{On the left $w_{S_0}$, on the right $w$. If $t_{S_0}$ contains a digon then $w$ contains a square adjacent to $S_0$.}
\label{fig:digon2square}
\end{figure}
The closed web $t_{S_0}$ contains at least two squares, so that we can pick one, denoted by $S'$, which is far from $S_0$ and hence comes from a square in $w$. Now at least one of the two smoothings of the square $S'$ must disconnect $t_{S_0}$, otherwise the square $S'$ would be a $\partial$-square in $w$. But as it disconnects $t_{S_0}$, the web $t_{S'}$ is a strict subgraph of $t_{S_0}$, and this contradicts the minimality of $S_0$. This shows that $w$ must contain a $\partial$-square.
\end{proof}
\subsection{Proof of proposition~\ref{prop:techRG}}
\label{sec:proof-propkey}
In this section we prove proposition~\ref{prop:techRG}, assuming the following technical lemma:
\begin{lem}\label{lem:tech}
Let $w$ be a $\partial$-connected $\epsilon$-web which contains no digon and one square touching the unbounded face. Let $G$ be a nice red graph of $w$ and $G'$ a nice red graph of $w_{G}$ such that $w_G$ and $(w_{G})_{G'}$ are $\partial$-connected. Then there exists a red graph $G''$ of $w$ such that $(w_{G})_{G'} = w_{G''}$ and the level of $G''$ is greater than or equal to the level of $G$ plus the level of $G'$.
\end{lem}
This lemma says that under certain conditions one can ``flatten'' two red graphs into one.
\begin{proof}[Proof of proposition~\ref{prop:techRG}]
As announced, we proceed by induction on the number of edges of $w$.
We suppose that \ref{it:ptRG1}, \ref{it:ptRG2} and \ref{it:ptRG3} hold for all $\epsilon$-webs with strictly fewer than $n$ edges, and we
consider an $\epsilon$-web with $n$ edges. Note that whenever $w$ is non-elliptic the statement \ref{it:ptRG3} is stronger than the statement \ref{it:ptRG1}, so that we won't prove \ref{it:ptRG1} in this case.
We first prove \ref{it:ptRG1}:
If $w$ contains a digon, then we apply the result to $w'$, the $\epsilon$-web similar to $w$ but with the digon reduced (i.e.\ replaced by a single strand).
The red graph $G$ which consists of a single vertex (the digon) and no edge is nice and has level equal to 1 (see figure~\ref{fig:rganddig}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale = 0.7]
\input{./sw_fig8}
\end{tikzpicture}
\caption{On the left $w'$, on the right $w$ with the red graph $G$.}
\label{fig:rganddig}
\end{figure}
If $w'$ is not virtually decomposable, or is virtually decomposable of level 0, then $w$ is virtually decomposable of level 1. In this case, the stack consisting of the single red graph $G$ suits our purpose and we are done.
Otherwise we know that $w'$ is of level $k-1$ and that there exists a nice stack of red graphs $S'$ of level $k-1$ in $w'$; we consider the stack $S$ equal to the concatenation of $G$ with $S'$. It is a nice stack of red graphs of level $k$ and we are done.
Suppose now that the $\epsilon$-web $w$ contains no digon but contains a square; then it contains a $\partial$-square (see lemma \ref{lem:pc2ps}). Suppose that the level of $w$ is $k\geq 1$ (else there is nothing to show); then at least one of the two reductions is virtually decomposable of level $k$ (this is a Cauchy-Schwarz inequality, see~\cite[Section 1.1]{LHRThese} for details). We then consider $w'$, the $\epsilon$-web obtained by the reduction of the square for which the level is $k$. From the induction hypothesis we know that there exists a stack of red graphs $S'$ in $w'$ of level $k$. If all the red graphs of $S'$ are far from the location of the square, then we can transform the stack $S'$ into a stack of $w$ with the same level. Otherwise, we consider $G'$, the first red graph of $S'$ which is close to the square location, and according to the situation we define $G$ by the moves given on figure~\ref{fig:S2Sp}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.6]
\begin{scope}[xshift = 0cm, yshift= 8cm]
\input{./sw_fig1}
\end{scope}
\begin{scope}[xshift = 0cm, yshift= 4cm]
\input{./sw_fig2}
\end{scope}
\begin{scope}[xshift = 6cm, yshift= 0cm]
\input{./sw_fig3}
\end{scope}
\begin{scope}[xshift = 12cm, yshift= 8cm]
\input{./sw_fig4}
\end{scope}
\begin{scope}[xshift = 12cm, yshift= 4cm]
\input{./sw_fig5}
\end{scope}
\end{tikzpicture}
\caption{Transformations of $G'$ to obtain $G$.}
\label{fig:S2Sp}
\end{figure}
Replacing $G'$ by $G$, we can transform the stack $S'$ into a stack for the $\epsilon$-web $w$.
We now prove \ref{it:ptRG2}.
From what we just did, we know that $w$ contains a nice stack of red graphs of level $k$. Among all the nice stacks of red graphs of $w$ with level greater than or equal to $k$, we choose one with minimal length and call it $S$. If its length were greater than or equal to $2$, then lemma \ref{lem:tech} would tell us that we could take the first two red graphs and replace them by a single red graph with level greater than or equal to the sum of their two levels, so that $S$ would not be minimal. This proves that $S$ has length $1$; therefore $w$ contains a nice red graph of level at least $k$.
We now prove \ref{it:ptRG3}.
The border of $w$ contains at least a $\cap$, a $\lambda$, or an $H$ (see figure~\ref{fig:dfnUHY}). In the first two cases, we can consider $w'$, the $\epsilon$-web with the $\cap$ removed or the $\lambda$ replaced by a single strand; then $w'$ is non-elliptic and virtually decomposable of level $k$, and there exists a nice red graph in $w'$ of level at least $k$. This red graph can be seen as a red graph of $w$, and we are done. If the border of $w$ contains no $\lambda$ and no $\cap$, then it must contain an $H$. There are two ways to reduce the $H$ (see figure~\ref{fig:Hreduced}). At least one of the two following situations happens:
$w_{||}$ is virtually decomposable of level $k$, or $w_{=}$ is virtually decomposable of level $k+1$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./sw_Hreduced}
\end{tikzpicture}
\caption{The $H$ of $w$ (on the left) and its two reductions: $w_{||}$ (in the middle) and $w_=$ (on the right).}
\label{fig:Hreduced}
\end{figure}
In the first situation, one can use the same reasoning as before:
$w_{||}$ being non-elliptic, the induction hypothesis gives us a nice red graph of level at least $k$ in $w_{||}$; this red graph can be seen as a red graph of $w$ and we are done.
In the second situation, we consider $w_{=}$ and apply the induction hypothesis to it (we are either in case \ref{it:ptRG2} or in case \ref{it:ptRG3}), so we can find a nice red graph of level at least $k+1$. Coming back to $w$, this gives us a red graph of level at least $k$ (but maybe not a nice one), and we can conclude via lemma~\ref{lem:RGinNE2niceRG}.
\end{proof}
\input{rgchapter4ter}
\subsection{Foam diagrams}
\label{sec:foam-diagrams}
\begin{dfn}
Let $w$ be an $\epsilon$-web, a \emph{foam diagram} $\kappa$ for $w$ consists of the following data:
\begin{itemize}
\item the $\epsilon$-web $w$,
\item a fair paired red graph $G$,
\item a function $\delta$ (called a \emph{dot function for $w$}) from the set $E(w)$ of edges of $w$ to the set $\NN$ of non-negative integers. This function will be represented by the appropriate number of dots on each edge of $w$.
\end{itemize}
With a foam diagram $\kappa$ we associate the $(w_G,w_G)$-foam $f(\kappa)$ given by $p_G\circ s_w(\delta) \circ i_G$, where $s_w(\delta)$ is the identity foam $\ensuremath{\mathrm{id}}_w=w\times [0,1]$ of $w$ with exactly $\delta(e)$ dots on every facet $e\times [0,1]$ (for $e\in E(w)$). In other words, the $(w_G,w_G)$-foam $f(\kappa)$ is equal to $p_G\circ i_G$, with dots encoded by $\delta$. A foam diagram will be represented by the $\epsilon$-web drawn together with the red graph, and with some dots added on the edges of the $\epsilon$-web in order to encode $\delta$.
\end{dfn}
We will often identify $\kappa=(w,G,\delta)$ with $f(\kappa)$, and it will be seen as an element of $\hom_{K^\epsilon}(P_{w_G},P_{w_G})$. We can rewrite
some of the relations depicted on figure~\ref{fig:localrel} in terms of foam diagrams:
\begin{prop}\label{prop:relfoamdiag} The following relations on foams associated with foam diagrams hold:
\begin{itemize}
\item The 3-dots relation:
\[
\begin{tikzpicture}
\input{./fd_3dots}
\end{tikzpicture}
\]
\item The sphere relations:
\[
\begin{tikzpicture}
\input{./fd_spheres}
\end{tikzpicture}
\]
\item The digon relations:
\[
\begin{tikzpicture}
\input{./fd_digon}
\end{tikzpicture}
\]
\[
\begin{tikzpicture}
\input{./fd_digon2}
\end{tikzpicture}
\]
\item The square relations:
\[
\begin{tikzpicture}
\input{./fd_square}
\end{tikzpicture}
\]
\item The E-relation:
\[
\begin{tikzpicture}
\input{./fd_Erel}
\end{tikzpicture}
\]
\end{itemize}
The dashed lines indicate the pairing, and when the orientation of the $\epsilon$-web is not depicted the relation holds for any orientation.
\end{prop}
\begin{proof}
This is equivalent to some of the relations depicted on figure~\ref{fig:localrel}.
\end{proof}
\begin{lem}\label{lem:fd2idwdots}
Let $w$ be an $\epsilon$-web and $\kappa = (w,G,\delta)$ a foam diagram, with $G$ a fair paired red graph. Then $f(\kappa)$ is equivalent to a $\ZZ$-linear combination of $s_{w_G}(\delta_i)= f((w_G,\emptyset, \delta_i))$ for some dot functions $\delta_i$ for $w_G$.
\end{lem}
\begin{proof}
Thanks to the E-relation of proposition~\ref{prop:relfoamdiag}, one can express $f(\kappa)$ as a $\ZZ$-linear combination of foams $f((w_j,G_j,\delta_j))$ where the $G_j$'s are red graphs without any edge. Thanks to the sphere, digon and square relations of proposition~\ref{prop:relfoamdiag}, each $f((w_j,G_j,\delta_j))$ is equivalent either to 0 or to $\pm f((w_G,\emptyset, \delta'_j))$. This proves the lemma.
\end{proof}
\begin{lem}\label{lem:fde2id}
Let $w$ be an $\epsilon$-web and $\kappa = (w,G,\delta)$ a foam diagram with $G$ exact. Then $f(\kappa)$ is equivalent to a multiple of $w_G\times [0,1]$.
\end{lem}
\begin{proof}
From the previous lemma we know that $f(\kappa)$ is equivalent to a $\ZZ$-linear combination of $w_G\times [0,1]$ with some dots on it. We will see that the foam $f(\kappa)$ has the same degree as the foam $w_G\times [0,1]$. This will prove the lemma because adding a dot on a foam increases its degree by 2.
To compute the degree of $f(\kappa)$ we see it as a composition of elementary foams thanks to its definition:
\begin{align*}
\deg f(\kappa) &= \deg(w\times [0,1]) + 2\left( 2\#V(G) - \left(\#E(G)+ \frac{\#\{\textrm{grey half-edges of $G$}\}}{2} \right) \right) \\
&= |\partial w| + 2\cdot 0 \\
&= \deg (w_G \times [0,1]).
\end{align*}
The first equality is due to the decomposition pointed out in remark~\ref{req:ssandcap} and to the fact that an unzip (or a zip) has degree $-1$ and a cap (or a cup) has degree $+2$. The factor 2 is due to the fact that $f(\kappa)$ is the composition of $i_G$ and $p_G$. The second equality follows from the exactness of $G$.
\end{proof}
To prove the proposition~\ref{prop:pciisid}, we just need to show that in the situation of the last lemma, the multiple is not equal to zero. In order to evaluate this multiple, we extend foam diagrams to (partially) oriented paired red graphs by the local relation indicated on figure~\ref{fig:fd2oRG}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./fd_orRG}
\end{tikzpicture}
\caption{Extension of foam diagrams to oriented red graphs. }
\label{fig:fd2oRG}
\end{figure}
By ``partially oriented'' we mean that some edges may be oriented and some may not. If $G$ is partially oriented and $\kappa$ is a foam diagram with red graph $G$, we say that $\kappa'$ is the \emph{classical foam diagram} associated with $\kappa$ if it is obtained from $\kappa$ by applying the relation of figure~\ref{fig:fd2oRG} to every oriented edge. Note that $\kappa$ and $\kappa'$ represent the same foam.
\begin{dfn}
If $w$ is an $\epsilon$-web, $G$ a red graph for $w$ and $o$ a partial orientation of $G$, we define $\gamma(o)$ to be equal to $\#\{\textrm{negative edges of $G$}\}$. A \emph{negative} (or \emph{positive}) edge is an oriented edge of the red graph, and its negativity (or positivity) is given by figure~\ref{fig:nepe}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./fd_posnegedge}
\end{tikzpicture}
\caption{On the left, a positive edge. On the right, a negative edge.}
\label{fig:nepe}
\end{figure}
\end{dfn}
\begin{lem}\label{lem:sumoforientation}
Let $w$ be an $\epsilon$-web and $G$ a partially oriented red graph, and let $e$ be a non-oriented edge of $G$. Then we have the following equality of foams:
\[ \begin{tikzpicture}
\input{./fd_orientedge}
\end{tikzpicture} \]
If $G$ is an un-oriented red graph for $w$ and $\delta$ a dots function for $w$, then:
\[f(w,G,\delta) =\sum_{o} (-1)^{\gamma(o)}f(w, G_o, \delta),\]
where $G_o$ stands for $G$ endowed with the orientation $o$, and $o$ runs through all the $2^{\#E(G)}$ complete orientations of $G$.
\end{lem}
\begin{proof}
The first equality is the translation of the E-relation (see proposition~\ref{prop:relfoamdiag}) in terms of foam diagrams of partially oriented red graphs. The second formula is obtained by expanding the first one over all the edges of $G$.
\end{proof}
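To illustrate the signs, suppose (purely for illustration) that $G$ has exactly two edges $e_1$ and $e_2$, and assume, as figure~\ref{fig:nepe} suggests, that reversing the orientation of an edge exchanges positive and negative. Writing $o_{st}$ (with $s,t\in\{+,-\}$) for the complete orientation making $e_1$ positive or negative according to $s$ and $e_2$ according to $t$, the expansion reads:
\[
f(w,G,\delta) = f(w, G_{o_{++}}, \delta) - f(w, G_{o_{+-}}, \delta) - f(w, G_{o_{-+}}, \delta) + f(w, G_{o_{--}}, \delta),
\]
since $\gamma(o_{st})$ counts the number of minus signs among $s$ and $t$.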
\begin{lem}\label{lem:nonfitting20}
If $w$ is an $\epsilon$-web, $G$ an exact paired red graph for $w$, $o$ a non-fitting orientation for $G$ and $\delta$ the null dot function on $w$, then the $(w_G,w_G)$-foam $f(w, G_o, \delta)$ is equivalent to 0.
\end{lem}
\begin{proof}
The orientation $o$ is a non-fitting orientation. Hence, there is at least one vertex $v$ of $G$ such that $i_o(v)>0$. There are two different situations: either $i_o(v)=1$ or $i_o(v)=2$. Using the definition of foam diagrams for oriented red graphs (figure~\ref{fig:fd2oRG}), we deduce that the classical foam diagram $\kappa'$ associated with $f(w, G_o, \delta)$ looks, around $v$, like one of the three following situations:
\[
\begin{tikzpicture}
\input{./fd_0lowdegree}
\end{tikzpicture}
\]
Using the sphere relations and the digon relations provided by proposition~\ref{prop:relfoamdiag}, we see that the foam $f(w, G_o, \delta)$ is equivalent to 0.
\end{proof}
\begin{lem}\label{lem:fitting2id}
If $w$ is an $\epsilon$-web, $G$ an exact paired red graph for $w$, $o$ a fitting orientation for $G$ and $\delta$ the null dot function on $w$, then the $(w_G,w_G)$-foam $f(w, G_o, \delta)$ is equivalent to $(-1)^{\mu(o)}\, w_G\times [0,1]$, where $\mu(o)=\#V(G) + \#\{\textrm{positive digons of $G_o$}\}$ (see the definition on figure~\ref{fig:5localsituation}).
\end{lem}
\begin{proof}
Let $\kappa'=(w',G',\delta')$ be the classical foam diagram associated with $(w, G_o,\delta)$.
The red graph $G'$ has no edge. Locally, the foam diagram $\kappa'$ corresponds to one of the 5 situations depicted on figure~\ref{fig:5localsituation}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./fd_5situations}
\end{tikzpicture}
\caption{The 5 different local situations of a foam diagram $\kappa'$ next to a vertex of $G'$. On the second line, the digon on the left is \emph{positive} and the digon on the right is \emph{negative}.}
\label{fig:5localsituation}
\end{figure}
Now, using some relations of proposition~\ref{prop:relfoamdiag}, we can remove all the vertices of $G'$; we see that $f(w,G_o, \delta)$ is equivalent to $(-1)^{\#V(G')-\#\{\textrm{positive digons}\}}\, w_G\times [0,1]$, because the positive digon is the only local situation with no minus sign in the relations of proposition~\ref{prop:relfoamdiag}. This proves the result because $\#V(G)=\#V(G')$.
\end{proof}
\begin{lem}
If $w$ is an $\epsilon$-web, $G$ an exact paired red graph for $w$ and $o_1$ and $o_2$ two fitting orientations for $G$, then $\mu(o_1) + \gamma(o_1) = \mu(o_2) + \gamma(o_2)$.
\end{lem}
\begin{proof}
We consider $\kappa'_1=(w',G',\delta_1)$ and $\kappa'_2=(w',G',\delta_2)$ the two classical foam diagrams corresponding to $(w,G_{o_1},\delta)$ and $(w,G_{o_2},\delta)$, with $\delta$ the null dot function for $w$.
The red graph $G'$ has no edge, and the local situations are depicted on figure~\ref{fig:5localsituation}. Consider a vertex $v$ of $G'$; a side of the face of $w$ corresponding to $v$ is either clockwise or counterclockwise oriented (with respect to this face). From the definition of $\gamma$ we obtain that, for $i=1,2$, $\gamma(o_i)$ is equal to the number of dots in $\kappa'_i$ lying on clockwise oriented edges of $w'$. The dot functions $\delta_1$ and $\delta_2$ differ only next to the digons, so that $\gamma(o_1) - \gamma(o_2)$ is equal to the number of negative digons in $\kappa'_1$ minus the number of negative digons in $\kappa'_2$. Hence we have:
\begin{align*}
\gamma(o_1) - \gamma(o_2) &= \mu(o_2) -\mu(o_1) \\
\gamma(o_1)+ \mu(o_1) &= \gamma(o_2)+\mu(o_2).
\end{align*}
\end{proof}
\begin{proof}[Proof of proposition~\ref{prop:pciisid}.]
The foam $p_G\circ i_G$ is equal to $f(w,G,\delta)$ with $\delta$ the null dot function on $w$. From the lemmas~\ref{lem:sumoforientation}, \ref{lem:nonfitting20} and \ref{lem:fitting2id} we have that:
\begin{align*}
f(w,G,\delta) &= \sum_{\textrm{$o$ fitting orientation of $G$}} (-1)^{\gamma(o)} f(w, G_o, \delta) \\
&= \sum_{\textrm{$o$ fitting orientation of $G$}} (-1)^{\gamma(o) + \mu(o)}\, w_G\times [0,1]\\
&= \pm\#\{\textrm{fitting orientations of $G$}\}\, w_G\times [0,1].
\end{align*}
The red graph $G$ is supposed to be exact. This means in particular that the set of fitting orientations is not empty, so that $p_G\circ i_G$ is a non-trivial multiple of $\ensuremath{\mathrm{id}}_{w_G}=w_G\times [0,1]$.
\end{proof}
\begin{proof}[Proof of theorem \ref{thm:RG2idempotent}]
From proposition~\ref{prop:pciisid}, we know that there exists a non-zero integer $\lambda_G$ such that $p_G\circ i_G= \lambda_G\, \ensuremath{\mathrm{id}}_{w_G}$. Hence, $\frac{1}{\lambda_G}i_G\circ p_G$ is an idempotent. It is clearly non-zero. It is quite intuitive that it is not equivalent to the identity foam; for a proper proof, see proposition~\ref{prop:idfoam}.
\end{proof}
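For completeness, the idempotency of $\frac{1}{\lambda_G}\, i_G\circ p_G$ (with $\lambda_G$ as in the proof above) is the following one-line computation:
\[
\left(\frac{1}{\lambda_G}\, i_G\circ p_G\right)\circ\left(\frac{1}{\lambda_G}\, i_G\circ p_G\right)
= \frac{1}{\lambda_G^2}\, i_G\circ( p_G\circ i_G)\circ p_G
= \frac{1}{\lambda_G^2}\, i_G\circ \lambda_G\,\ensuremath{\mathrm{id}}_{w_G}\circ p_G
= \frac{1}{\lambda_G}\, i_G\circ p_G .
\]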
\subsection{A new approach to red graphs.}
\label{sec:new-approach-to-red-graph}
In this section we give an alternative approach to red graphs: instead of starting with a web and simplifying it with a red graph we construct a red graph from a web and a simplification of this web. For this we need a property of webs that we did not use so far.
\begin{prop}
Let $w$ be a closed web, then it admits a (canonical) face-3-colouring with the unbounded face coloured $c\in \ZZ/3\ZZ$. We call this colouring \emph{the face-colouring of base $c$} of $w$. When $c$ is not mentioned it is meant to be $0$.
\end{prop}
\begin{proof}
We will colour connected components of $\RR^2\setminus w$ with elements of $\ZZ/3\ZZ$.
We consider the unique unbounded component $U$ of $\RR^2\setminus w$ and colour it by $c$. Then, for each other connected component $f$, we consider an oriented path $p$ from a point inside $U$ to a point inside $f$ which crosses $w$ transversely, and we define the colour of $f$ to be $c$ plus the sum (modulo 3) of the signs of the intersections of the path $p$ with $w$ (see figure~\ref{fig:signcol} for the sign convention). This does not depend on the path because in $w$ the flow is always preserved modulo 3. And, by definition, two adjacent faces are separated by an edge, so that they do not have the same colour.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./th_signcol}
\end{tikzpicture}
\caption{On the left a positive crossing, on the right a negative one. The path is dashed and the web is solid.}
\label{fig:signcol}
\end{figure}
\end{proof}
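As a purely illustrative instance of this computation: if a path from $U$ to a component $f$ crosses $w$ three times positively and once negatively, the colour of $f$ is
\[
c + 3 - 1 = c + 2 \quad \textrm{in } \ZZ/3\ZZ .
\]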
\begin{cor}
Let $w$ be an $\epsilon$-web; then the connected components of $\RR\times \RR_+ \setminus w$ admit a (canonical) 3-colouring with the unbounded connected component coloured by $c$. We call this colouring \emph{the face-colouring of base $c$} of $w$.
\end{cor}
\begin{proof}
We complete $w$ with $\overline{w}$ to form the closed web $\overline{w}w$ and use the previous proposition to obtain a colouring of its faces. This gives us a canonical colouring for $\RR\times \RR_+ \setminus w$.
\end{proof}
Note that in this corollary it is important to consider the connected components of $\RR\times \RR_+ \setminus w$ instead of the connected components of $\RR^2 \setminus w$. Let us formalise this in a definition.
\begin{dfn}
If $w$ is an $\epsilon$-web, the \emph{regions} of $w$ are the connected components of $\RR\times \RR_+ \setminus w$. The \emph{faces} of $w$ are the regions which do not intersect $\RR \times \{0\}$.
\end{dfn}
\begin{dfn}
Let $w$ be an $\epsilon$-web, an $\epsilon$-web $w'$ is a
\emph{simplification of $w$} if
\begin{itemize}
\item the set of vertices of $w'$ is
included in the set of vertices of $w$,
\item every edge $e$ of $w'$ is
divided into an odd number of intervals
$([a_i,a_{i+1}])_{i\in[0,2k]}$ such that for every $i$ in $[0, k]$,
$[a_{2i},a_{2i+1}]$ is an edge of $w$ (with matching orientations)
and for every $i$ in $[0, k-1]$, $[a_{2i+1},a_{2i+2}]$ lies in the
face of $w$ opposite to $[a_{2i}, a_{2i+1}]$ with respect to $a_{2i+1}$ (see figure~\ref{fig:simplificationedge}).
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\input{./th_simplificationedge}
\end{tikzpicture}
\caption{Local picture around $a_k$. The edge of $w'$ is orange and thick, while the $\epsilon$-web $w$ is black and thin.}
\label{fig:simplificationedge}
\end{figure}
\end{dfn}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale= 0.8]
\input{./sw_thewebwsimplification}
\end{tikzpicture}
\caption{The $\epsilon$-web $w$ (in black) and $w_0$ (in orange) of proposition~\ref{prop:Pwdec} seen in terms of simplification.}
\label{fig:exsimplification}
\end{figure}
\begin{lem}\label{lem:coherent-col}
Let $w$ be an $\epsilon$-web and $w'$ a $\partial$-connected simplification of $w$. If $e$ is an edge of $w$ which is as well (a part of) an edge of $w'$, then in the face-colourings of base $c$ of $w$ and $w'$, the regions adjacent to $e$ in $w$ and in $w'$ are coloured in the same way.
\end{lem}
\begin{proof}
This is an easy induction on the distance from $e$ to the border.
\end{proof}
Note that in this definition the embedding of $w'$ with respect to $w$ is very important.
\begin{dfn}
Let $w$ be an $\epsilon$-web and $w'$ a simplification of $w$. We consider the face-colourings of $w$ and $w'$. A face $f$ of $w$ lies in one or several regions of $w'$. The face $f$ is \emph{essential with respect to $w'$} if none of the regions of $w'$ it intersects has the same colour as $f$.
\end{dfn}
\begin{req}
We could have written this definition with regions of $w$ instead of faces, but it is easy to see that a region of $w$ which is not a face is never essential.
\end{req}
\begin{lem}\label{lem:intersect2essential}
Let $w$ be a $\partial$-connected $\epsilon$-web and $w'$ a $\partial$-connected simplification of $w$. If a face $f$ of $w$ is not essential with respect to $w'$ then it intersects only one region of $w'$.
\end{lem}
\begin{proof}
Consider a face $f$ of $w$ which intersects more than one region of $w'$. We will prove that it is essential with respect to $w'$. Consider an edge $e'$ of $w'$ which intersects $f$ (there is at least one by hypothesis); when we look at the border of $f$ next to $e'$, we find a vertex $v$ of $w$ (see figure~\ref{fig:localfacefp}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\input{./th_localfacefp}
\end{tikzpicture}
\caption{A part of the face $f'$ next to an edge $e'$ of $w'$. Above $v$ the colours of $w$ and $w'$ are coherent thanks to lemma~\ref{lem:coherent-col}.}
\label{fig:localfacefp}
\end{figure}
We want to prove that none of the faces of $w'$ which are adjacent to $e'$ has the same colour as the face $f$. This follows from the lemma \ref{lem:coherent-col}, and from the fact that the part of $e'$ above $v$ is an edge of $w$ (see figure~\ref{fig:localfacefp}).
\end{proof}
\begin{cor}\label{cor:how-are-essential-faces}
Let $w$ be a $\partial$-connected $\epsilon$-web and $w'$ a $\partial$-connected simplification of $w$. If a face $f$ of $w$ intersects a region of $w'$ which has the same colour, it is not essential.
\end{cor}
\begin{prop}
Let $w$ be a $\partial$-connected $\epsilon$-web (this implies that every face of $w$ is diffeomorphic to a disk) and $w'$ a $\partial$-connected simplification of $w$. Then there exists a (canonical) paired red graph $G$ such that $w'$ is equal to $w_G$. We denote it by $G_{w\to w'}$.
\end{prop}
\begin{proof}
We consider the canonical colourings of the faces of $w$ and $w'$. The red graph $G$ is the induced sub-graph of $w^\star$ (the dual graph of $w$) whose vertices are essential faces of $w$ with respect to $w'$. The pairing is given by the edges of $w'$. We need to prove first that this is indeed a red graph, and in a second step that $w_G = w'$.
We consider a vertex $v$ of $w$ and the 3 regions next to it. We want to prove that at least one of the 3 regions is not essential with respect to $w'$. If the vertex $v$ is a vertex of $w'$ then lemma~\ref{lem:coherent-col} and corollary \ref{cor:how-are-essential-faces} give that none of the three regions is essential. Else, $v$ either lies inside an edge of $w'$ or it lies in a face of $w'$ (see figure~\ref{fig:3possvertex}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\input{./th_3possvertex}
\end{tikzpicture}
\caption{The three configurations for the vertex $v$ of $w$: it is a vertex of $w'$ (on the left), it lies inside an edge of $w'$ (on the middle), it lies inside a region of $w'$ (on the right).}
\label{fig:3possvertex}
\end{figure}
Consider the case where $v$ lies inside an edge of $w'$: one of the 3 regions intersects two different regions of $w'$, hence it is essential thanks to lemma~\ref{lem:intersect2essential}, while the two others are not, thanks to corollary~\ref{cor:how-are-essential-faces}.
In the last case, where $v$ lies inside a region of $w'$, the 3 regions have different colours, so that one of them has the same colour as the region of $w'$ in which $v$ lies; this region is therefore not essential (corollary \ref{cor:how-are-essential-faces}). This shows that $G$ is a red graph (we have said nothing yet about admissibility).
Let us now show that $w'=w_G$. We consider a collection $(N_{f})_{f\in V(G)}$ of regular neighbourhoods of the essential faces of $w$ with respect to $w'$. Let us first show that for every essential face $f$ of $w$, the restrictions of $w_G$ and $w'$ to the regular neighbourhood $N_f$ match. As $f$ is essential, it is a vertex of $G$. Then the restriction of $w_G$ to $N_f$ is just a collection of strands joining the border to the border, just as that of $w'$.
In $\RR\times \RR_+\setminus \left(\bigcup_{f\in V(G)}N_f\right)$ the $\epsilon$-webs $w'$ and $w_G$ are both equal to $w$. This completes the proof.
\end{proof}
Note that $G_{w\to w'}$ depends on how $w'$ is embedded to see it as a simplification of $w$.
\begin{dfn}
Let $w$ be an $\epsilon$-web and $w'$ a simplification of $w$. The simplification is \emph{nice} if, for every region $r$ of $w$, $r\cap w'$ is either empty or connected.
\end{dfn}
We have the natural lemma:
\begin{lem}
Let $w$ be a $\partial$-connected $\epsilon$-web and $w'$ a $\partial$-connected simplification of $w$. The simplification is nice if and only if the red graph $G_{w\to w'}$ is nice.
\end{lem}
\begin{proof}
Thanks to lemma \ref{lem:intersect2essential}, only essential faces of $w$ with respect to $w'$ can have a non-trivial intersection with $w'$, and for an essential face $f$, twice the number of connected components of $f\cap w'$ is equal to the exterior degree of the vertex of $G_{w\to w'}$ corresponding to $f$.
\end{proof}
\begin{lem}\label{lem:essfacesmatters}
If $w$ is a $\partial$-connected $\epsilon$-web and $w'$ is a $\partial$-connected simplification of $w$, then the level of $G_{w\to w'}$ is given by the following formula:
\[
i(G_{w\to w'}) = 2\#\{\textrm{essential faces of $w$ wrt. $w'$}\} - \frac{\#V(w) - \# V(w')}{2}.
\]
\end{lem}
This shows that the embedding of $w'$ influences the level of $G_{w\to w'}$ only through the number of essential faces of $w$ with respect to $w'$.
\begin{proof}
The level of a red graph $G$ is given by:
\[
i(G) = 2\#V(G) - \#E(G) - \sum_{f\in V(G)}\frac{\ensuremath{\mathrm{ed}}(f)}{2}.
\]
By definition of $G_{w\to w'}$, we have:
\[ \{\textrm{essential faces of $w$ wrt. $w'$}\} = V(G_{w\to w'}).\]
The only thing to realise is that
we have:
\[
2\left(\#E(G_{w\to w'}) + \sum_{f\in V(G_{w\to w'})}\frac{\ensuremath{\mathrm{ed}}(f)}{2}\right) = \#V(w) - \# V(w'),
\] and this follows from the definition of $w_{G_{w\to w'}}=w'$.
\end{proof}
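As a purely illustrative numerical example (the figures below are not tied to any particular web): if $w$ has $10$ vertices, $w'$ has none, and there are $3$ essential faces of $w$ with respect to $w'$, the formula gives
\[
i(G_{w\to w'}) = 2\cdot 3 - \frac{10-0}{2} = 1 .
\]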
\begin{dfn}\label{dfn:avoidfill}
If $f$ is a face of $w$, $w'$ a simplification of $w$ and $r$ a region of $w'$, we say that $f$ \emph{avoids} $r$ if $f\cap r=\emptyset$ or if, in each connected component of $f\cap r$, the boundary of $r$ joins two consecutive vertices of $f$ (see figure \ref{fig:avoiding}). In the first case we say that $f$ avoids $r$ trivially.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./th_nontrivavoid}
\end{tikzpicture}
\caption{The local picture of a face $f$ (in white) of $w$ (in black) non-trivially avoiding a region $r$ (in yellow) of $w'$ (in orange).}
\label{fig:avoiding}
\end{figure}
If $f$ is an essential face of $w$ with respect to $w'$ and $r$ is a region of $w'$, we say that \emph{$f$ fills $r$} if $f$ does not avoid $r$. If $F'$ is a set of regions of $w'$, we say that \emph{$f$ fills (resp.~avoids) $F'$} if it fills at least one region of $F'$ (resp.~avoids all the regions of $F'$). We define:
\[
n(f,F') \eqdef \#\{r\in F' \,\textrm{such that $f$ fills $r$}\}.
\]
If $G'$ is a red graph of $w'$, we write $n(f,G')$ for $n(f,V(G'))$.
\end{dfn}
With the same notations, and with $F$ a set of faces of $w$, we have the following equality:
\begin{align}
\#F = \#\{\textrm{faces $f$ of $F$ avoiding $F'$}\} + \sum_{f'\in F'}\sum_{\substack{f \in F \\ \textrm{$f$ fills $f'$}}} \frac1{n(f,F')}. \label{eq:sumofcontrib}
\end{align}
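Indeed, a face $f$ of $F$ which avoids $F'$ is counted once by the first term, while a face $f$ which fills at least one region of $F'$ contributes
\[
\sum_{\substack{f'\in F' \\ \textrm{$f$ fills $f'$}}} \frac1{n(f,F')} = n(f,F')\cdot\frac1{n(f,F')}=1
\]
to the double sum, by the very definition of $n(f,F')$.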
\begin{lem}
Let $w$ be a $\partial$-connected $\epsilon$-web and $w'$ a nice $\partial$-connected simplification of $w$. Let $F'$ be a collection of faces of $w'$, then for every face $f$ of $w$, we have: $n(f, F') \leq 2$.
\end{lem}
\begin{proof}
This is clear since $f\cap w'$ consists of at most one strand, so that it intersects at most 2 faces of $F'$.
\end{proof}
\begin{req}\label{rqe:hex2bign}
Let $w$ be an $\epsilon$-web, $w'$ a nice $\partial$-connected simplification of $w$ and $f$ an essential face of $w$ with respect to $w'$. Suppose that $f$ has at least 6 sides in $w$. Suppose furthermore that it intersects two regions $r_1$ and $r_2$ of $w'$; then either it (non-trivially) avoids one of them, or it fills both of them. If $f$ avoids $r_2$, then at least two neighbours (in $G_{w\to w'}$) of $f$ fill $r_1$ (see figure~\ref{fig:fillhex}).
If, on the contrary, $f$ has just one neighbour which fills $r_1$, then $f$ fills $r_2$. Under this condition, for any collection $F'$ of regions of $w'$ with $\{r_1, r_2\} \subseteq F'$, we have $n(f, F')=2$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\input{./th_fillhex}
\end{tikzpicture}
\caption{On the left $f$ avoids $r_2$, on the right it fills $r_1$ and $r_2$.}
\label{fig:fillhex}
\end{figure}
\end{req}
\begin{dfn}\label{dfn:sigma}
We set $\displaystyle{\sigma(f', F\to F') \eqdef \sum_{\substack{f \in F \\ \textrm{$f$ fills $f'$}}} \frac1{n(f,F')} }$. If $G$ is a red graph for $w$ and $G'$ a red graph for $w'$ we write $\sigma(f', G\to G' )$ for $\sigma(f', V(G)\to V(G'))$.
\end{dfn}
\subsection{Proof of lemma~\ref{lem:tech}}
\label{sec:proof-lemmatech}
In this section we use the point of view developed in section~\ref{sec:new-approach-to-red-graph} to prove the lemma~\ref{lem:tech}. We restate it with this new vocabulary:
\begin{lem}\label{lem:techNV}
Let $w$ be a $\partial$-connected $\epsilon$-web which contains no digon and exactly one square. We suppose furthermore that this square touches the unbounded face. Let $G$ be a nice red graph of $w$ and $G'$ a nice red graph of $w'=w_{G}$. Then there exists a nice simplification $\widetilde{w}$ of $w$ such that:
\begin{enumerate}[A)]
\item\label{it:condA} the $\epsilon$-webs $(w_{G})_{G'}$ and $\widetilde{w}$ are isotopic,
\item\label{it:condB} the following equality holds:
\[
\#V( \widetilde{G}) \geq \# V(G) + \# V(G'),
\]
\end{enumerate}
where $\widetilde{G}$ denote the red graph $G_{w\to \widetilde{w}}$.
\end{lem}
\begin{proof}
Because of condition~\ref{it:condA}, we already know the isotopy class of the web $\widetilde{w}$. To describe it completely, we only need to specify how $\widetilde{w}$ is embedded. For each face $f'$ of $w'$ which is a vertex of $G'$, let us denote by $N_{f'}$ a regular neighbourhood of $f'$. We consider $U$, the complement of $\bigcup_{f'} N_{f'}$. Provided this is done in a coherent fashion, it is enough to specify how $\widetilde{w}$ looks in $U$ and in $N_{f'}$ for each face $f'$ of $w'$.
If $f'$ is a face of $w'$ which is in $G'$, we consider two different cases:
\begin{enumerate}
\item\label{it:face0} the face $f'$ corresponds to a vertex of $G'$ with exterior degree equal to 0,
\item\label{it:face2}the face $f'$ corresponds to a vertex of $G'$ with exterior degree equal to 2.
\end{enumerate}
These are the only cases to consider since $G'$ is nice.
Let us denote by $w''$ the $\epsilon$-web $(w')_{G'}$. We want $\widetilde{w}$ and $w''$ to be isotopic. So let us look at $w''\cap U$ and at $w''\cap N_{f'}$ in the two cases.
In $U$, the $\epsilon$-web $w'$ does not ``see'' the red graph $G'$, so that $U\cap w''= U\cap w'_{G'} = U\cap w'$.
If the face $f'$ has exterior degree equal to 0 (case~\ref{it:face0}), then we have: $N_{f'}\cap w'' = N_{f'}\cap w'_{G'}= \emptyset$.
If the face $f'$ has exterior degree equal to 2 (case~\ref{it:face2}), then $N_{f'}\cap w'' = N_{f'}\cap w'_{G'}$ is a single strand cutting $N_{f'}$ into two parts.
We embed $\widetilde{w}$ such that $U\cap \widetilde{w}$ and $U\cap w''$ are equal and for each face $f'$ corresponding to a vertex of $G'$,
$N_{f'}\cap \widetilde{w}$ and $N_{f'}\cap w''$ are isotopic (relatively to the boundary).
We claim that if $f'$ is a vertex with exterior degree equal to 0, then:
\begin{align}\begin{cases}\sigma (f', \widetilde{G}\to G') \geq \sigma(f', G \to G') +\frac12 & \textrm {if $S \subseteq N_{f'}$,} \\
\sigma (f', \widetilde{G}\to G') \geq \sigma(f', G \to G') +1 & \textrm {if $S\nsubseteq N_{f'}$,}
\end{cases}\label{eq:sigmaface0}
\end{align}
where $S$ is the square of $w$.
The restriction\footnote{We only consider the vertices of $G$ which fill $f'$.} $G_{f'}$ of $G$ to $f'$ is a graph which satisfies the following conditions:
\begin{itemize}
\item it is bi-coloured (because the vertices of $G$ are essential faces of $w$ with respect to $w'$),
\item it is naturally embedded in a disk because $N_{f'}$ is diffeomorphic to a disk,
\item the vertices inside the disk have degree at least three (because the only possible square of $w$ touches the border) and the vertices on the border (these are the ones which intersect another region of $w'$) have degree at least 1.
\end{itemize}
The regions of $G_{f'}$ and the vertices of one of the two colours of $G_{f'}$ become vertices of $\widetilde{G}$ (see the example depicted on figure~\ref{fig:exampleed0}). To prove the inequality (\ref{eq:sigmaface0}), one should carefully count the regions of $G_{f'}$. There are two different cases in (\ref{eq:sigmaface0}) because remark~\ref{rqe:hex2bign} does not apply to the square.
Hence we can apply the lemma~\ref{lem:techtech0} which proves (\ref{eq:sigmaface0}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale= 1.4]
\input{./rg_exampleed0}
\end{tikzpicture}
\caption{Example of the procedure to define $\widetilde{G}$ when the exterior degree of $f'$ is equal to 0.}
\label{fig:exampleed0}
\end{figure}
The only thing remaining to specify is how $\widetilde{w}$ looks in $N_{f'}$ when $f'$ is a vertex of $G'$ of exterior degree equal to 2. Note that we need to embed $\widetilde{w}$ so that it is a nice simplification of $w$. We claim that it is always possible to find such an embedding so that the inequality (\ref{eq:sigmaface0}) is satisfied. In this case the graph $G_{f'}$ is in the same situation as before, but it is important to notice that the faces of $G_{f'}$ have at least 6 sides (this is a consequence of proposition~\ref{prop:largecycle}).
The vertices of $\widetilde{G}$ are the regions of $G_{f'}$, together with the vertices of $G_{f'}$ of one of the two colours on one side of the strand and the vertices of $G_{f'}$ of the other colour on the other side of the strand (see figure~\ref{fig:exampleed2} for an example). Hence, in order to show that the inequality (\ref{eq:sigmaface0}) holds, one should carefully count the regions and the vertices of $G_{f'}$; this is done by lemma~\ref{sec:case-with-exterior2}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale =1.4]
\input{./rg_exampleed2}
\end{tikzpicture}
\caption{Example of the procedure to define $\widetilde{G}$ when the exterior degree of $f'$ is equal to 2.}
\label{fig:exampleed2}
\end{figure}
So now we have a simplification $\widetilde{w}$ of $w$ such that the graph $\widetilde{G}=G_{w\to \widetilde{w}}$ satisfies (\ref{eq:sigmaface0}) for each vertex $f'$ of $G'$. The square $S$ of $w$ is in at most one $N_{f'}$, so that if we sum (\ref{eq:sigmaface0}) over all the vertices $f'$ of $G'$, we obtain:
\[
\sum_{f'\in F'} \sigma(f', \widetilde{G}\to G') \geq \sum_{f'\in F'} \sigma(f', G \to G') + \#V(G') - \frac12,
\]
and using (\ref{eq:sumofcontrib}), we have:
\[
\#V(\widetilde{G}) \geq \#V(G') + \#V(G) - \frac12,
\]
but $\#V(\widetilde{G})$ being an integer, we have $\#V(\widetilde{G}) \geq \#V(G') + \#V(G)$.
\end{proof}
\subsection{Proof of combinatorial lemmas}
\label{sec:proof-techn-lemm}
This section is dedicated to the two technical lemmas used in the last section. We first introduce the ad hoc objects and then state and prove the lemmas.
\begin{dfn}
A \emph{$D$-graph} is a graph $G$ embedded in the disk $D^2$. The set of vertices $V(G)$ is partitioned into two sets: $V^\partial(G)$ contains the vertices lying on $\partial D^2$, while $V^\textrm{in}$ contains the others. The set $F(G)$ of connected components of $D^2\setminus G$ is partitioned into two sets: $F^\textrm{in}$ contains the connected components included in $\mathring{D^2}$, while $F^\partial$ contains the others.
A $D$-graph is said to be \emph{non-elliptic} if:
\begin{itemize}
\item every vertex $v$ of $V^\textrm{in}$ has degree greater than or equal to 3,
\item every vertex $v$ of $V^\partial$ has degree greater than or equal to 1,
\item the faces of $F^\mathrm{in}$ are of size at least 6.
\end{itemize}
A \emph{coloured $D$-graph} is a $D$-graph $G$ together with:
\begin{itemize}
\item a vertex-2-colouring (by $\mathrm{\color{darkgreen}green}$ and $\mathrm{\color{darkblue}blue}$) of the vertices of $G$ (this implies that $G$ is bipartite),
\item a subdivision of $\partial D^2$ into two intervals (we allow one interval to be the empty set and the other one to be the full circle; in this case we say that $G$ is \emph{circled-coloured}): a $\mathrm{\color{darkgreen}green}$ one and a $\mathrm{\color{darkblue}blue}$ one (denoted by $I_\mathrm{\color{darkgreen}green}$ and $I_\mathrm{\color{darkblue}blue}$). When they are genuine intervals, we define $x$ and $y$ to be the two intersection points of $I_\mathrm{\color{darkgreen}green}$ and $I_\mathrm{\color{darkblue}blue}$, with the convention that when one scans $\partial D^2$ clockwise, one sees $x$, then $I_\mathrm{\color{darkgreen}green}$, then $y$ and finally $I_\mathrm{\color{darkblue}blue}$.
\end{itemize}
The vertices of $V^\partial$ are assumed to lie neither at $x$ nor at $y$. The colour of a vertex need not fit the colour of the interval it lies on. We denote by $V_{\mathrm{\color{darkgreen}green}}$ (resp.~{} $V_\mathrm{\color{darkblue}blue}$) the set of $\mathrm{\color{darkgreen}green}$ (resp.~{} $\mathrm{\color{darkblue}blue}$) vertices, and define
$V_{\mathrm{\color{darkgreen}green}}^\partial$, $V_{\mathrm{\color{darkgreen}green}}^\mathrm{in}$, $V_{\mathrm{\color{darkblue}blue}}^\partial$ and $V_{\mathrm{\color{darkblue}blue}}^\mathrm{in}$ in the obvious way.
If $G$ is a coloured $D$-graph and $v$ is a vertex of $V^\partial$, we set:
\[
n(v) = \begin{cases} 2 & \textrm{if $v$ has degree 1 and the colour of $v$ fits the colour of the interval,} \\ 1 & \textrm{else.}
\end{cases}
\]
If $v$ is a vertex of $V^\mathrm{in}$, we set $n(v)=1$. Note that this definition of $n$ is a translation of the $n$ of the previous section (see remark~\ref{rqe:hex2bign}).
\end{dfn}
\subsubsection{Case with exterior degree equal to 0}
\label{sec:case-with-exterior0}
\begin{lem} \label{lem:techtech0}
Let $G$ be a non-elliptic circled-coloured $D$-graph (with the circle coloured by a colour $c$), then:
\[\#F \geq 1 + \sum_{v\in V_c} \frac{1}{n(v)}.
\]
\end{lem}
\begin{proof}By symmetry, we may suppose that $c=\mathrm{\color{darkgreen}green}$.
To show this, we consider the graph $H$ obtained by gluing two copies of $G$ along the boundary of $D^2$; it is naturally embedded in the sphere. We write the Euler characteristic:
\begin{align}
\#F(H) - \#E(H) + \#V(H) = 1 + \#C(H), \label{eq:eulerchar}
\end{align}
where $C(H)$ is the set of connected components of $H$. We have the following equalities:
\begin{align*}
\#F(H) &= 2\#F^{\textrm{in}}(G) + \#F^{\partial}(G),\\
\#F^{\partial}(G) &= {\#V^{\partial}(G)} + 1 -\#C(H), \\
\#E(H) &= 2 \#E(G) = \sum_{v\in V(G)} \deg(v) = 2\sum_{v\in V_{\mathrm{\color{darkgreen}green}}(G)} \deg(v),\\
\#V(H) &= 2\#V^{\textrm{in}}(G) + \#V^{\partial}(G)
\end{align*}
Hence we can rewrite (\ref{eq:eulerchar}) as:
\begin{align}
2\#F^{\textrm{in}}(G) + 2\#F^{\partial}(G) +2\#V^{\textrm{in}}(G) = 2 + 2 \#E(G). \label{eq:EX2}
\end{align}
Now we use what we know about the degrees of the vertices:
\begin{align*}
\#E(G)&\geq \frac{3}2 \# V^{\textrm{in}}(G)+ \frac12 \#V^{\partial,1}(G) +1 \#V^{\partial,>1}(G), \\
\#E(G)&\geq 3 \# V_{\mathrm{\color{darkgreen}green}}^{\textrm{in}}(G)+ \#V_{\mathrm{\color{darkgreen}green}}^{\partial,1}(G) +2 \#V_{\mathrm{\color{darkgreen}green}}^{\partial,>1}(G).
\end{align*}
Here $V^{\partial,1}$ (resp.~{} $V^{\partial,>1}$) denotes the subset of $V^\partial$ of vertices of degree equal to $1$ (resp.~{} of degree strictly greater than 1).
If we sum $\frac23$ of the first inequality and $\frac13$ of the second one, and substitute this into~(\ref{eq:EX2}), we obtain:
\begin{align*}
\#F(G) + \#V^{\textrm{in}}(G) &\geq 1+ \#E(G) \\
\#F(G) + \#V^{\textrm{in}}(G) &\geq 1+ \#V^{\textrm{in}}(G)+ \frac13 \#V^{\partial,1}(G) +\frac23 \#V^{\partial,>1}(G) \\
& \quad + \# V_{\mathrm{\color{darkgreen}green}}^{\textrm{in}}(G) + \frac13\#V_{\mathrm{\color{darkgreen}green}}^{\partial,1}(G) + \frac{2}3 \#V_{\mathrm{\color{darkgreen}green}}^{\partial,>1}(G) \\
\#F(G) &\geq 1+ \# V_{\mathrm{\color{darkgreen}green}}^{\textrm{in}}(G)+ \frac23 \#V_{\mathrm{\color{darkgreen}green}}^{\partial,1}(G) +\frac43 \#V_{\mathrm{\color{darkgreen}green}}^{\partial,>1}(G) \\
&\quad + \frac13\#V_{\mathrm{\color{darkblue}blue}}^{\partial,1}(G) + \frac{2}3 \#V_{\mathrm{\color{darkblue}blue}}^{\partial,>1}(G) \\
&\geq 1+ \# V_{\mathrm{\color{darkgreen}green}}^{\textrm{in}}(G)+ \frac12 \#V_{\mathrm{\color{darkgreen}green}}^{\partial,1}(G) + \#V_{\mathrm{\color{darkgreen}green}}^{\partial,>1}(G) \\
& \geq 1+ \sum_{v\in V_\mathrm{\color{darkgreen}green}} \frac{1}{n(v)}.
\end{align*}
\end{proof}
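For instance, lemma~\ref{lem:techtech0} can be checked directly on the simplest non-empty example: the circled-coloured $D$-graph, with the circle coloured $\mathrm{\color{darkgreen}green}$, consisting of a single edge with one $\mathrm{\color{darkgreen}green}$ and one $\mathrm{\color{darkblue}blue}$ endpoint, both of degree 1 and lying on $\partial D^2$. This $D$-graph is non-elliptic, it satisfies $\#F=2$, and its only $\mathrm{\color{darkgreen}green}$ vertex $v$ has $n(v)=2$ since its colour fits the colour of the circle, so that the inequality reads $2\geq 1+\frac12$.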
\subsubsection{Case with exterior degree equal to 2}
\label{sec:case-with-exterior2}
\begin{lem}\label{lem:bg1UYH}
If $G$ is a non-elliptic $D$-graph, then all the faces in $F$ are diffeomorphic to disks and, if $G$ is non-empty, at least one of the following situations occurs:
\begin{enumerate}[(1)]
\item\label{it:bg1} the set $V^{\partial,>1}$ is non empty,
\item\label{it:cap} there exist two $\cap$'s (see figure~\ref{fig:UDgraphUHY}) (if $G$ consists of only one edge, the two $\cap$'s are actually the same one counted twice, because it can be seen as a $\cap$ on each of its two sides),
\item\label{it:3YH} there exist three $\lambda$'s or $H$'s (see figure~\ref{fig:UDgraphUHY}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_DgraphUHY}
\end{tikzpicture}
\caption{From left to right: a $\cap$, a $\lambda$ and an $H$. The circle $\partial D^2$ is thick and grey, the $D$-graph is thin and black. Note that the vertices inside $D^2$ may have degree bigger than 3.}
\label{fig:UDgraphUHY}
\end{figure}
\end{enumerate}
\end{lem}
\begin{proof}
This follows from the same Euler-characteristic argument as the one used in lemma~\ref{lem:UHYinNE}.
\end{proof}
\begin{dfn}
A \emph{cut} in a (not circled-) coloured $D$-graph is a simple oriented path $\gamma:[0,1] \to D^2$ such that:
\begin{itemize}
\item we have $\gamma(0)=x$ and $\gamma(1)=y$, therefore $I_\mathrm{\color{darkgreen}green}$ is on the left and $I_\mathrm{\color{darkblue}blue}$ is on the right\footnote{We use the convention that the left and right side are determined when one scans $\gamma$ from $x$ to $y$.}. (see figure \ref{fig:cut}),
\item for every face $f$ of $G$, $f\cap \gamma$ is connected,
\item the path $\gamma$ crosses $G$ either transversely at edges joining a $\mathrm{\color{darkgreen}green}$ vertex on the left and a $\mathrm{\color{darkblue}blue}$ vertex on the right, or at vertices of $V^\partial$ whose colours do not fit with the intervals they lie on.
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_cut.tex}
\end{tikzpicture}
\caption{A cut in a coloured $D$-graph (note that $G$ is elliptic).}
\label{fig:cut}
\end{figure}
If $\gamma$ is a cut, we denote by ${}^{l(\gamma)}V(G)$ and ${}^{r(\gamma)}V(G)$ the sets of vertices located on the left (resp.~{} on the right) of $\gamma$. (The vertices located on $\gamma$ are considered to be both on the left and on the right.)
\end{dfn}
\begin{lem}\label{lem:techtech2}
Let $G$ be a non-elliptic (not circled-) coloured $D$-graph. Then there exists a cut $\gamma$ such that:
\[\#F(G) \geq 1+ \sum_{v\in {}^{l(\gamma)}V_{\mathrm{\color{darkgreen}green}}} \frac1{n(v)} + \sum_{v\in {}^{r(\gamma)}V_{\mathrm{\color{darkblue}blue}}}\frac1{n(v)}. \]
\end{lem}
\begin{proof}
The proof is done by induction on $s(G)\eqdef 3\# E(G) +4\# V^{\partial,>1}(G)$. If this quantity is equal to zero, then the $D$-graph is empty; in this case we choose $\gamma$ to be any simple arc joining $x$ to $y$, and the lemma reads $1\geq 1$, which is true.
We set:
\[
C(G,\gamma)\eqdef\sum_{v\in {}^{l(\gamma)}V_{\mathrm{\color{darkgreen}green}}} \frac1{n(v)} + \sum_{v\in {}^{r(\gamma)}V_{\mathrm{\color{darkblue}blue}}}\frac1{n(v)}.
\]
It is enough to check the situations (\ref{it:bg1}), (\ref{it:cap}) and (\ref{it:3YH}) described in lemma \ref{lem:bg1UYH}.
\paragraph*{Situation (\ref{it:bg1})}
Let us denote by $v$ a vertex of $V^{\partial, >1}$. There are two cases: either the colour of $v$ fits the colour of the interval it lies on, or it does not.
If the colours fit, say both are $\mathrm{\color{darkgreen}green}$, we consider $G'$, the same coloured $D$-graph as $G$ but with $v$ split into $v_1$, $v_2$, \dots, $v_{\deg(v)}$, all in $V^{\partial,1}(G')$ (see figure~\ref{fig:GGpsit1}). We have $s(G')= s(G)-4< s(G)$ and $G'$ is non-elliptic, therefore we can apply the induction hypothesis: we can find a cut $\gamma'$ with $\#F(G')\geq 1+ C(G',\gamma')$. Note that $\gamma'$ does not cross any of the $v_k$, so that we can lift $\gamma'$ in the $D$-graph $G$. This gives us $\gamma$. We have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma') + \frac1{n(v)} - \sum_{k=1}^{\deg(v)}\frac1{n(v_k)} \\
&= C(G',\gamma') + 1 - \frac{\deg(v)}2 \\
&\leq C(G',\gamma').
\end{align*}
On the other hand $\#F(G)= \#F(G')$ so that we have $\#F(G)\geq 1+ C(G,\gamma)$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_GGpsit1}
\end{tikzpicture}
\caption{Local picture of $G$ (on the left) and $G'$ (on the right) around $v$, when $v$ is $\mathrm{\color{darkgreen}green}$ and lies on $I_\mathrm{\color{darkgreen}green}$.}
\label{fig:GGpsit1}
\end{figure}
If the colours do not fit (say $v$ is $\mathrm{\color{darkblue}blue}$), we construct $G'$, a coloured $D$-graph which is similar to $G$ everywhere except near $v$. The vertex $v$ is pushed into the interior of $D^2$ (we denote it by $v'$) and we add a new vertex $v''$ on $\partial D^2$ and an edge $e$ joining $v'$ and $v''$. The coloured $D$-graph $G'$ is non-elliptic and $s(G')= s(G)-4+3<s(G)$, so that we can apply the induction hypothesis and find a cut $\gamma'$ with $\#F(G')\geq 1+ C(G',\gamma')$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_GGpsit1b}
\end{tikzpicture}
\caption{Local picture of $G$ (on the left) and $G'$ (on the right) around $v$, when $v$ is $\mathrm{\color{darkblue}blue}$ and lies on $I_\mathrm{\color{darkgreen}green}$.}
\label{fig:GGpsit1b}
\end{figure}
If $\gamma'$ does not cross $e$ we can lift $\gamma'$ in $G$ (this gives us $\gamma$). We have
\begin{align*}C(G,\gamma)= C(G',\gamma')+ \frac{1}{n(v)} - \frac{1}{n(v')} =C(G',\gamma')+ 1- 1 = C(G',\gamma').
\end{align*}
On the other hand, we have $F(G)= F(G')$, so that $\#F(G)\geq 1+ C(G,\gamma)$.
Consider now the case where $\gamma'$ crosses $e$. Then we consider the cut $\gamma$ of $G$ which coincides with $\gamma'$ far from $v$, and which, near $v$, crosses $G$ at $v$ (see figure~\ref{fig:GGpsitc}).
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v)} - \frac{1}{n(v')} - \frac{1}{n(v'')}\\
&= C(G',\gamma') +1-1-\frac12\\
&\leq C(G',\gamma').
\end{align*}
But $\#F(G)= \#F(G')$, so that we have $\#F(G)\geq 1+ C(G,\gamma)$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_GGpsit1c}
\end{tikzpicture}
\caption{How to transform $\gamma'$ into $\gamma$.}
\label{fig:GGpsitc}
\end{figure}
\paragraph*{Situation (\ref{it:cap})}
We now suppose that $G$ contains two $\cap$'s. Let us denote by $v_g$ (resp.~{} $v_b$) the $\mathrm{\color{darkgreen}green}$ (resp.~{} $\mathrm{\color{darkblue}blue}$) vertex of the $\cap$ and by $e$ the edge of the $\cap$. There are different possible configurations depending on where $x$ and $y$ lie. As there are at least two $\cap$'s, we may suppose that $y$ is far from the $\cap$.
There are 3 different configurations (see figure~\ref{fig:3configsit2}):
\begin{itemize}
\item the point $x$ is far from the $\cap$,
\item the point $x$ is in the $\cap$ and $v_g\in I_\mathrm{\color{darkgreen}green}$ and $v_b \in I_\mathrm{\color{darkblue}blue}$,
\item the point $x$ is in the $\cap$ and $v_g\in I_\mathrm{\color{darkblue}blue}$ and $v_b \in I_\mathrm{\color{darkgreen}green}$,
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_3configsit2}
\end{tikzpicture}
\caption{The three possible configurations.}
\label{fig:3configsit2}
\end{figure}
We consider $G'$ the coloured $D$-graph similar to $G$ except that the $\cap$ is removed. The coloured $D$-graph $G'$ is non-elliptic and $s(G')= s(G)- 3 <s(G)$ so that we can apply the induction hypothesis and find a cut $\gamma'$ with $\#F(G')\geq 1+ C(G',\gamma')$.
Let us suppose first that $x$ is far from the $\cap$, then $v_b$ and $v_g$ both lie either on $I_\mathrm{\color{darkgreen}green}$ or on $I_\mathrm{\color{darkblue}blue}$. By symmetry we may consider that they both lie on $I_\mathrm{\color{darkgreen}green}$. We can lift $\gamma'$ in $G$ (this gives $\gamma$) so that it does not meet the $\cap$. We have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v_g)} \\
&= C(G',\gamma')+ \frac12.
\end{align*}
But $\#F(G) = \#F(G')+1$, hence $\#F(G)\geq 1+ C(G,\gamma)$.
Suppose now that the point $x$ is in the $\cap$ and $v_g\in I_\mathrm{\color{darkgreen}green}$ and $v_b \in I_\mathrm{\color{darkblue}blue}$. We can lift $\gamma'$ in $G$ so that it crosses $e$ (see figure~\ref{fig:ggpsit2b}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_ggpsit2b}
\end{tikzpicture}
\caption{How to transform $\gamma'$ into $\gamma$.}
\label{fig:ggpsit2b}
\end{figure}
We have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v_g)} +\frac{1}{n(v_b)} \\
&= C(G',\gamma')+ \frac12+\frac12 \\
&= C(G',\gamma')+1.
\end{align*}
But $\#F(G) = \#F(G')+1$, hence $\#F(G)\geq 1+ C(G,\gamma)$.
Suppose now that the point $x$ is in the $\cap$ and $v_g\in I_\mathrm{\color{darkblue}blue}$ and $v_b \in I_\mathrm{\color{darkgreen}green}$. We can lift $\gamma'$ in $G$ so that it crosses\footnote{We could have chosen to cross $v_b$.} $v_g$ (see figure~\ref{fig:ggpsit2c}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_ggpsit2c}
\end{tikzpicture}
\caption{How to transform $\gamma'$ into $\gamma$.}
\label{fig:ggpsit2c}
\end{figure}
We have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v_g)} \\
&= C(G',\gamma')+1.
\end{align*}
But $\#F(G) = \#F(G') +1 $, hence $\#F(G)\geq 1+ C(G,\gamma)$.
\paragraph{Situation (\ref{it:3YH})}
\label{sec:situation-3}
We suppose now that there are three $\lambda$'s or $H$'s. One can suppose that a $\lambda$ or an $H$ is far from $x$ and from $y$.
Suppose first that there is a $\lambda$ far from $x$ and $y$. Let us denote by $v_1$ and $v_2$ the two vertices of the $\lambda$ which belong to $V^\partial(G)$, by $v$ the vertex of the $\lambda$ which is in $V^\textrm{in}(G)$ and by $e_1$ (resp.~{} $e_2$) the edge joining $v$ to $v_1$ (resp.~{} $v_2$). We consider $G'$ the $D$-graph where the $\lambda$ is replaced by a single strand: the edges $e_1$ and $e_2$ and the vertices $v_1$ and $v_2$ are removed. The vertex $v$ is moved to $\partial D^2$ (and renamed $v'$). This is depicted in figure~\ref{fig:2confY}. The coloured $D$-graph $G'$ is non-elliptic and $s(G')<s(G)$ so that we can apply the induction hypothesis and find a cut $\gamma'$ with $\#F(G')\geq 1+ C(G',\gamma')$.
The vertices $v_1$ and $v_2$ have the same colour; by symmetry we may suppose that they are both $\mathrm{\color{darkgreen}green}$. This implies that $v$ and $v'$ are both $\mathrm{\color{darkblue}blue}$.
There are two different configurations:
\begin{itemize}
\item the vertices $v_1$ and $v_2$ lie on $I_\mathrm{\color{darkgreen}green}$,
\item the vertices $v_1$ and $v_2$ lie on $I_\mathrm{\color{darkblue}blue}$.
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1]
\input{./rg_2confY}
\end{tikzpicture}
\caption{In the center, the two possible configurations for a $\lambda$; on the sides, the $D$-graphs $G'$ obtained from $G$.}
\label{fig:2confY}
\end{figure}
Let us first suppose that the vertices $v_1$ and $v_2$ lie on $I_\mathrm{\color{darkgreen}green}$. If the cut $\gamma'$ does not cross $v'$ then we can canonically lift it in $G$. This gives us $\gamma$. We have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v_1)} + \frac{1}{n(v_2)} \\
&= C(G',\gamma')+\frac12 + \frac12.
\end{align*}
But $\#F(G) =\# F(G')+1$, hence $\#F(G)\geq 1+ C(G,\gamma)$.
If the cut $\gamma'$ crosses $v'$, we lift $\gamma'$ in $G$ so that it crosses $e_1$ and $e_2$ (see figure~\ref{fig:ggpYsit2}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.1]
\input{./th_ggpYsit2}
\end{tikzpicture}
\caption{How to transform $\gamma'$ into $\gamma$.}
\label{fig:ggpYsit2}
\end{figure}
In this case we have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v)} - \frac1{n(v')} + \frac{1}{n(v_1)} + \frac{1}{n(v_2)} \\
&= C(G',\gamma')+1 -1+ \frac12 + \frac12.
\end{align*}
Hence, $\#F(G)\geq 1+ C(G,\gamma)$.
Now suppose that the vertices $v_1$ and $v_2$ lie on $I_\mathrm{\color{darkblue}blue}$. This implies that $\gamma'$ does not meet $v'$, so that we can lift $\gamma'$ canonically in $G$; this gives us $\gamma$, and we have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v)} - \frac{1}{n(v')} \\
&= C(G',\gamma')+1-1.
\end{align*}
Hence $\#F(G)\geq 1+ C(G,\gamma)$.
We finally consider an $H$ far from $x$ and $y$. We use the notation of figure~\ref{fig:GGsitH} for the vertices and edges of the $H$, and we consider $G'$ the $D$-graph where the $H$ is simplified (see figure~\ref{fig:GGsitH} for details and notation). The coloured $D$-graph $G'$ is non-elliptic and $s(G')= s(G) -3\times 3 + 2\times 4<s(G)$, so that we can apply the induction hypothesis and find a cut $\gamma'$ with $\#F(G')\geq 1+ C(G',\gamma')$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.4]
\input{./th_GGsitH}
\end{tikzpicture}
\caption{How to transform $G$ into $G'$.}
\label{fig:GGsitH}
\end{figure}
Up to symmetry there is only one configuration, therefore we may suppose that $v_1$ is $\mathrm{\color{darkgreen}green}$ and lies on $I_\mathrm{\color{darkgreen}green}$. This implies that $v_2$ and $v_3$ are $\mathrm{\color{darkblue}blue}$ and that $v_4$ is $\mathrm{\color{darkgreen}green}$. Because of the colour condition, the cut $\gamma'$ does not cross $v'_4$ and may cross $v'_3$. If it does not cross $v'_3$, one can canonically lift $\gamma'$ in $G$ and we have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v_1)} + \frac{1}{n(v_4)} - \frac{1}{n(v'_4)} \\
&\geq C(G',\gamma')+\frac12 +1 -1. \\
&\geq C(G',\gamma') +\frac12.
\end{align*}
But $\#F(G) =\# F(G')+1$, hence $\#F(G)\geq 1+ C(G,\gamma)$.
If the cut $\gamma'$ crosses $v'_3$, we lift it in $G$ so that it crosses $e_1$ and $e_2$ (see figure~\ref{fig:GGsitHb}).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.4]
\input{./th_GGsitHb}
\end{tikzpicture}
\caption{How to transform $\gamma'$ into $\gamma$.}
\label{fig:GGsitHb}
\end{figure}
So that we have:
\begin{align*}
C(G,\gamma)&= C(G',\gamma')+ \frac{1}{n(v_1)} + \frac{1}{n(v_3)} - \frac{1}{n(v'_3)} + \frac{1}{n(v_4)} - \frac{1}{n(v'_4)} \\
&\geq C(G',\gamma')+\frac12 +1 -1 + 1 -\frac12. \\
&\geq C(G',\gamma') +1.
\end{align*}
But $\#F(G) =\# F(G')+1$, hence $\#F(G)\geq 1+ C(G,\gamma)$.
\paragraph{Conclusion}
\label{sec:conclusion}
In all situations, using the induction hypothesis, we can construct a cut $\gamma$ such that $\#F(G)\geq 1+ C(G,\gamma)$. This proves the lemma.
\end{proof}
The nonequilibrium dynamics of integrable many-body systems has received a large amount of attention recently, especially in view of experimental realizations in cold atomic gases \cite{giamarchi,review_giamarchi,Batch13}. It is known that in situations with slow, large-scale variations in space and time, the principles of hydrodynamics hold \cite{NP66,ResiboisDeLeener,S91}. The recently developed generalized hydrodynamics (GHD) \cite{cdy,BCDF16} applies these principles to the presence of infinitely-many conservation laws afforded by integrability. The original works \cite{cdy,BCDF16} strongly suggest that GHD, in the quasi-particle formulation, has wide applicability within quantum systems, including quantum chains and quantum field theory (QFT), requiring only a restricted set of dynamical and kinematical data. These data arise from the thermodynamic Bethe ansatz (TBA) \cite{yy,tba,moscaux}. In the quantum context (and omitting the simple cases of free particles), GHD has been explicitly worked out in general integrable QFT with diagonal scattering (such as the sinh-Gordon and Lieb-Liniger models) \cite{cdy,dynote,dsdrude}, in the XXZ quantum chain \cite{BCDF16,LNCBF17}, and in the Hubbard model, which displays ``nested Bethe ansatz" \cite{IdNnested}, and is expected to apply to all known integrable QFT and Bethe-ansatz integrable models. The structure of GHD, however, transcends its origin from the Bethe ansatz, and GHD can be shown to apply to an even larger variety of models, including classical integrable field theory \cite{Bastia17} and classical gases such as the hard rod model \cite{S82,dobrods,BS97,dsdrude} and soliton gases \cite{solgas1,solgas2,solgas3,solgas4,dyc,Bu17}. The theory has been quite successful, see for instance \cite{spin1,spin2,spin3,dshardrods,K17,bvkm2,ddky,dsy17}. GHD, as developed until now, is valid at the Euler scale, but viscous and other corrections have been considered, see \cite{BS97,zlp17,dshardrods,mkp17i,Spohn17,mkp17ii,F17}. In the present paper, we restrict to the Euler scale.
An important problem is that of evaluating dynamical correlations. For definiteness, let an initial state $\langle \cdots\rangle_{\rm ini}$ be of the form
\begin{equation}\label{inis}
\langle O\rangle_{\rm ini} = \frc{{\rm Tr}\lt(e^{-\int_{\mathbb{R}} {\rm d} x\,\sum_i\beta_i(x) \frak{q}_i(x)}\,O\rt)}{
{\rm Tr} \lt( e^{-\int_{\mathbb{R}} {\rm d} x\,\sum_i\beta_i(x) \frak{q}_i(x)}\rt)}
\end{equation}
(for any observable $O$). Here $\frak{q}_i(x),\;i\in{\mathbb{N}}$ form a basis of local and quasi-local densities \cite{quasiloc} of homogeneous, extensive conserved quantities $Q_i = \int {\rm d} x\,\frak{q}_i(x)$ in involution, and $\beta_i(x)$ are parameters, which can be interpreted as generalized local temperatures or local chemical potentials of the integrable hierarchy. We use a continuous space notation $x$, and the trace notation ${\rm Tr}$. This is for convenience, and the problem is posed in its most general setting, for classical (where the trace means a summation over classical configurations) or quantum models, on a one-dimensional infinite space that can be continuous or discrete.
The state \eqref{inis} is an inhomogeneous version of a generalized Gibbs ensemble \cite{Eisrev,EFreview,VRreview}. Let the evolution of a local observable ${\cal O}(x)$ be generated by some homogeneous dynamics that is integrable, for instance with Hamiltonian $H$,
\begin{equation}\label{tevol}
{\cal O}(x,t) = e^{{\rm i} Ht}{\cal O}(x)e^{-{\rm i} Ht}.
\end{equation}
Then one would like to evaluate the set of dynamical connected correlation functions\footnote{Here and below, the superscript $^{\rm c}$ means ``connected".}
\begin{equation}\label{gen}
\langle{\cal O}_1(x_1,t_1)\cdots {\cal O}_{n}(x_n,t_n)\rangle_{\rm ini}^{\rm c}
\end{equation}
for local observables ${\cal O}_k(x_k,t_k)$.
The problem can be divided into two classes. First, if all $\beta_i(x)=\beta_i$ are independent of position, then the initial state is (homogeneous) GGE. The evaluation of exact correlations functions within GGEs is a difficult problem, and in classical models has been little studied. One-point functions of conserved densities ${\frak q}_i$ are directly accessible from the TBA, and those of conserved currents ${\frak j}_i$ (with $\partial_t \frak{q}_i + \partial_x \frak{j}_i = 0$) were obtained as part of the development of GHD \cite{cdy,BCDF16}. There is also the Leclair-Mussardo formula for GGE one-point functions of generic local fields in integrable QFT \cite{LM99,M13}, based on form factors \cite{ff1,ff2,ff3}, and formulae for certain one-point functions in the Lieb-Liniger model \cite{KCI11,Po11} and the sinh-Gordon model \cite{NS13,N14,BP16}. For GGE two-point functions, various types of spectral expansions exist \cite{D1,Ta1,Ta2,D2,EK09}, including new results of the Leclair-Mussardo type \cite{PS17}, as well as exact results in free-particle models based on integrable partial differential equations \cite{ItsIK90,ItsIKS90,ItsIKV92,KBI93,DG08} (mostly Gibbs states are considered, but the techniques are extendable to GGEs). In integrable quantum spin chains, expressions for correlation functions in Gibbs states \cite{GKS04,Sa07} and in GGEs \cite{PGGE13,PVW17,FG13} have been obtained, but large space-time asymptotics are still to be fully addressed. Stronger results exist in the hydrodynamic regime: Lieb-Liniger particle density correlations from form factors \cite{dNP16}, and more generally a set of efficient formulae for two-point functions of all local densities and currents in any integrable model \cite{dsdrude,IdNnested}, obtained by combining GHD with hydrodynamic projection methods \cite{Sp15,MS14}.
Second, more interestingly, let $\beta_i(x)$ depend on the position $x$ in a weak enough fashion. This may arise, in good approximation, as initial ground states or finite-temperature states of quantum or classical systems in weakly varying potentials, or after a (short) local-relaxation time in the partitioning protocol of non-equilibrium steady states \cite{BDreview}. In this case, much less is known. GHD gives direct access to local GGEs describing the mesoscopic fluid cells, hence to all space-time dependent one-point functions of observables whose GGE averages can already be evaluated. However, for two- and higher-point functions, results only exist in the context of free field theory. Importantly, this includes Luttinger Liquids, and gives access, using the local density approximation and related hydrodynamic ideas, to the low-temperature limit of inhomogeneous integrable models, such as the Lieb-Liniger model in inhomogeneous potentials or the Heisenberg chain with inhomogeneous interaction coupling, see for instance \cite{GS03,Ghosh04,CPOPC08,DSVC17,BD17,DSC17,EB17}. Inhomogeneous two- and higher-point functions have never been studied in more general interacting integrable systems.
In this paper, we provide both a first step in the study of correlation functions in inhomogeneous situations, and further develop the theory of correlation functions in (homogeneous) GGEs. We evaluate Euler-scaled dynamical connected correlation functions in inhomogeneous, non-stationary states, in the generality of GHD (without inhomogeneous force fields). The results apply not only to conserved densities and currents, but also to more general local fields where correlation function formulae are obtained purely from the knowledge of GGE one-point functions. The latter are new also when specialized to GGEs.
More precisely, the objects we study are as follows. Consider the scaled initial state
\begin{equation}
\langle O\rangle_{{\rm ini},\lambda} = \frc{{\rm Tr}\lt(e^{-\int_{\mathbb{R}} {\rm d} x\,\sum_i\beta_i(\lambda^{-1} x) \frak{q}_i(x)}\,O\rt)}{
{\rm Tr} \lt( e^{-\int_{\mathbb{R}} {\rm d} x\,\sum_i\beta_i(\lambda^{-1} x) \frak{q}_i(x)}\rt)},
\end{equation}
for smooth functions $\beta_i(x)$. The scaling in $\lambda$ guarantees that the Lagrange parameters of the initial state depend weakly on the position. Let us denote by ${\cal N}_\lambda(x,t)$ a mesoscopic fluid cell: this can be taken as a space-time region whose extent scales as $\lambda^\nu$ for some $\nu_0<\nu<1$, around the scaled point $\lambda x$, say ${\cal N}_\lambda(x,t) = \{(y,s): \sqrt{(y-\lambda x)^2 + (s-\lambda t)^2}<\lambda^{\nu}\}$. The value of $\nu_0$ depends on the subleading corrections to Euler hydrodynamics; if they are diffusive, then we would expect $\nu_0=1/2$. Let us also denote by $|{\cal N}_\lambda|=\int_{{\cal N}_\lambda(x,t)} {\rm d} y{\rm d} s$ its volume. The ``Eulerian scaling limit" for correlation functions is defined as the limit
\begin{eqnarray}\label{cf}
\lefteqn{\langle {\cal O}_1(x_1,t_1) \cdots {\cal O}_N(x_N,t_N)\rangle_{[n_0]}^{\rm Eul}}\\ && = \lim_{\lambda\to\infty} \lambda^{N-1}\,
\int_{{\cal N}_\lambda(x_1,t_1)} \frc{{\rm d} y_1{\rm d} s_1}{|{\cal N}_\lambda|}\cdots \int_{{\cal N}_\lambda(x_N,t_N)} \frc{{\rm d} y_N{\rm d} s_N}{|{\cal N}_\lambda|}\,
\langle{\cal O}_1(y_1,s_1)\cdots {\cal O}_{N}(y_N,s_N)\rangle_{\rm ini,\lambda}^{\rm c}\nonumber
\end{eqnarray}
for fixed $x_k$'s and $t_k$'s. Here the superscript ${\rm c}$ means that we take connected correlation functions, and $n_0$ represents the initial GHD occupation function, which characterizes the initial state at the Euler scale (the GGEs of the initial fluid cells). Fluid-cell averaging, $\int_{{\cal N}_\lambda(x_k,t_k)} \frc{{\rm d} y_k {\rm d} s_k}{|{\cal N}_\lambda|}\cdots$, is necessary in order to avoid non-Eulerian oscillations, and averaging can be performed in various ways (see \cite{Bastia17} for a discussion of fluid-cell averaging and oscillations). For one-point functions, numerical observations and exact calculations in free models suggest that fluid-cell averaging is not necessary, and one has $\langle {\cal O}(x,t)\rangle^{\rm Eul}_{[n_0]} = \lim_{\lambda\to\infty} \langle {\cal O}(\lambda x,\lambda t)\rangle_{{\rm ini},\lambda}$.
We propose a generating function method in order to evaluate \eqref{cf}, based on combining an Euler-scale fluctuation-dissipation principle with the ``nonlinear method of characteristics" introduced in \cite{dsy17}. We expect the generating function method to be valid whenever equal-time correlations vanish fast enough in space. It is expected to work in all quantum and classical systems that have been shown to be accessible by GHD, and applies to conserved densities $\frak{q}_i$ and currents $\frak{j}_i$. In the cases of two-point functions, we show that the method provides explicit nonlinear integral equations which can in principle be solved numerically, and from which various special cases can be extracted. The results on two-point functions agree with the GHD projection operators derived in \cite{dsdrude}, and in homogeneous states, reproduce the formulae found in \cite{dsdrude,IdNnested}.
Further, using hydrodynamic projections, we find formulae for Euler-scale two-point functions of arbitrary local fields, expressed purely in terms of their homogeneous GGE averages. To every local field we associate a hydrodynamic spectral function obtained from its GGE averages, which enters the two-point function formula. Combining with the Leclair-Mussardo expansion in integrable QFT (or its counterpart in classical field theory \cite{dLM16}), we obtain form factor series for Euler-scale dynamical two-point functions for any local field. Using the Bertini-Piroli-Calabrese simplification of the Negro-Smirnov formula \cite{NS13,N14,BP16} we also obtain explicit results for two-point functions of exponential fields in the sinh-Gordon model, and using Pozsgay's formula \cite{B11}, of powers of the density operator in the Lieb-Liniger model. These constitute the first such exact results not only in inhomogeneous, non-stationary states, but also in homogeneous GGEs.
Finally, we obtain all Euler-scale $n$-point functions in free models, study two-point functions of conserved densities in the partitioning protocol, obtaining a number of new results for its solution by characteristics, and study the large-time asymptotics of two-point functions from arbitrary inhomogeneous initial conditions.
The paper is organized as follows. In Section \ref{secrev}, we review the basics of GHD, with emphasis both on the general framework accounting for all known examples, and on aspects which are important for the study of dynamical correlation functions. In Section \ref{seccorr}, we present the main results about correlation functions, including the generating function method, the two-point functions of conserved densities and currents, the hydrodynamic projection interpretation, and the extension to generic local observables. In Section \ref{sectex}, we give examples of the main formulae, in the sinh-Gordon and Lieb-Liniger models, and in free-particle models. In Section \ref{sectdis} we provide some discussion and analysis of the results, including a study of two-point functions in the partitioning protocol, and a precise analysis of the large-time asymptotics of two-point functions for a large class of initial states. Finally, we conclude in Section \ref{sectcon}. The details of the computations are reported in appendices.
\section{Review of GHD} \label{secrev}
Making full sense of the state \eqref{inis} is not a trivial matter. If the infinite sum in the exponential truncates, then -- at least in classical and quantum chains -- there is a well developed mathematical theory \cite{BR1,Simon,Sak93}. In the case of homogeneous states, $\beta_i(x) = \beta_i$, there are many studies that discuss the precise terms that must be included within the infinite series $\sum_i \beta_i Q_i$ in various situations, and its convergence in terms of averages of local observables, see the review \cite{EFreview}. A mathematically rigorous framework has been given \cite{D17} showing that the infinite sum can be interpreted as a decomposition in a basis of the Hilbert space of pseudolocal charges; in particular, the infinite series itself is a pseudolocal conserved charge. Later, it was understood how GGEs connect to the quasi-particle description of TBA \cite{cauxilievski}, and an in-depth analysis of finite-series truncations and convergence of local averages was given \cite{PVW17}.
Here we concentrate on the quasi-particle description of GHD as originally developed \cite{cdy,BCDF16}. The generality of GHD has been claimed in various works and the same basic ingredients extracted, see e.g. \cite{dsy17,dsdrude,IdNnested}. In order to establish the notation, which follows \cite{cdy}, we recall these ingredients. We further provide general notions concerning correlation functions, and we make a full account of situations with non-symmetric differential scattering phase (or TBA kernel), making apparent the invariance under quasi-particle reparametrization. It has been noted that this general framework needs small adjustments in order to deal with spin-carrying quantities in the massive regime of the XXZ Heisenberg chain, see \cite{LNCBF17}; we will not consider this subtlety here.
\subsection{GGEs in the quasi-particle formulation}
We denote by ${\cal S}$ the spectral space of the model. The space ${\cal S}$ can be seen roughly as the space of all quasi-particle characteristics admitted in the thermodynamics of the model; it is the space of excitations emerging after diagonalizing the scattering in the thermodynamic limit. In general, ${\cal S}$ is decomposed into disconnected components: each component represents a quasi-particle type, and is a continuum representing the allowed momenta for this quasi-particle type. The spectral space, therefore, has the form of a disjoint union ${\cal S}= \cup_{a\in {\cal A}} I_a$, where ${\cal A}$ is the set of quasi-particle types, and $I_a$ are continuous subsets of copies of ${\mathbb{R}}$ representing the continua of momenta for each particle type. We will parametrise each continuum by a variable $\theta\in I_a$, which we will refer to as the rapidity\footnote{Note however that this is not necessarily any of the rapidities that may appear in natural ways in Bethe ansatz solutions, it is simply some faithful parametrisation of the continua of momenta.}. One may write a spectral parameter as $\bm{\theta} = (\theta,a)$ with $\theta\in I_a$ and $a\in {\cal A}$. We will use the notation
\begin{equation}
\int_{\cal S} {\rm d}\bm{\theta} = \sum_{a\in {\cal A}} \int_{I_a} {\rm d} \theta.
\end{equation}
Besides the set ${\cal S}$, the model is specified by giving the momentum and energy functions $p(\bm{\theta})$ and $E(\bm{\theta})$ respectively, and the differential scattering phase (or more generally the TBA kernel occurring after diagonalization of the scattering) $\varphi(\bm{\theta},\bm{\alpha})$, a function of two spectral parameters. The momentum function $p(\bm{\theta})$ defines physical space and specifies the parametrisation used. Without loss of generality, by faithfulness of the parametrisation we assume that it satisfies
\[
p'(\bm{\theta})>0
\]
where $p'(\bm{\theta}) = {\rm d} p(\bm{\theta})/{\rm d}\theta$ (here and below the prime $'$ denotes a rapidity derivative). The energy function, on the other hand, defines physical time, and equals the ``one-particle eigenvalue" (or the equivalent in classical systems) of the conserved charge that generates time translations (the Hamiltonian), see for instance \cite{dynote}. The differential scattering phase, of course, specifies the interaction.
All equations below are independent of the momentum parametrisation $\theta$ used. This invariance involves certain transformation properties of the objects introduced, which are either scalar fields or vector fields. Under rapidity reparametrisations, the differential scattering phase $\varphi(\bm{\theta},\bm{\alpha})$ transforms as a vector field (i.e. as $\partial/\partial\theta$) in $\theta$, and a scalar field in $\alpha$, that is
\begin{equation}
\varphi(\bm{\theta},\bm{\alpha}){\rm d}\theta \qquad\mbox{is invariant under reparametrisation $\theta\mapsto f(\theta),\;\alpha\mapsto f(\alpha)$.}
\end{equation}
For instance, the differential scattering phase is defined, in diagonal scattering models, as $\varphi(\bm{\theta},\bm{\alpha}) = -{\rm i}\,{\rm d} S(\bm{\theta},\bm{\alpha})/{\rm d}\theta$ where $S(\bm{\theta},\bm{\alpha})$ is the two-body scattering matrix. The momentum and energy functions are scalar fields, while their derivatives, $p'(\bm{\theta})$ and $E'(\bm{\theta})$, are vector fields.
Also given is a set of one-particle eigenvalues, scalar fields $h_i(\bm{\theta})$ for $i\in{\mathbb{N}}$ associated to the conserved charges $Q_i$. The space spanned by these functions is assumed to be in bijection with a dense subspace of the Hilbert space of pseudolocal conserved charges (this Hilbert space is induced by the inner product defined via integrated correlations, see \cite{D17} and the Remark in Subsection \ref{ssectgenfluid}).
The important dynamical quantities, which specify the GGE in the TBA quasi-particle formulation, are an occupation function $n(\bm{\theta})$, a pseudo-energy $\epsilon(\bm{\theta})$, a particle density $\rho_{\rm p}(\bm{\theta})$ and a state density $\rho_{\rm s}(\bm{\theta})$, which are all related to each other \cite{tba,moscaux}. The former two are scalar fields, the latter vector fields. Associated to these is the dressing map $h\mapsto h^{\rm dr}_{[n]}$, which is a functional of $n(\bm{\theta})$ and a linear operator on (an appropriate space of) spectral functions $h$. We define it, in general, differently for its action on vector fields and on scalar fields: it is defined by solving the linear integral equations
\begin{equation}\label{dressing}\begin{aligned}
h^{\rm dr}_{[n]}(\bm{\theta}) = h(\bm{\theta}) + \int_{\cal S} \frc{{\rm d} \bm{\alpha}}{2\pi}\,\varphi(\bm{\theta},\bm{\alpha})n(\bm{\alpha})h^{\rm dr}_{[n]}(\bm{\alpha}) & \mbox{
\qquad (if $h(\bm{\theta})$ is a vector field)} \\
h^{\rm dr}_{[n]}(\bm{\theta}) = h(\bm{\theta}) + \int_{\cal S} \frc{{\rm d} \bm{\alpha}}{2\pi}\,\varphi(\bm{\alpha},\bm{\theta})n(\bm{\alpha})h^{\rm dr}_{[n]}(\bm{\alpha}) & \mbox{
\qquad (if $h(\bm{\theta})$ is a scalar field).}
\end{aligned}
\end{equation}
The dressing operation preserves the transformation property under rapidity reparametrization.
For lightness of notation in this paper, omitting the index $[n]$ means dressing with respect to the occupation function denoted $n(\bm{\theta})$, that is $h^{\rm dr} = h^{\rm dr}_{[n]}$.
It will be convenient to employ an integral-operator notation. We introduce the scattering operator $T$, with kernel $T(\bm{\theta},\bm{\alpha}) = \varphi(\bm{\theta},\bm{\alpha})/(2\pi)$, acting on spectral functions $h$ as
\begin{equation}
(Th)(\bm{\theta}) = \int_{\cal S} \frc{{\rm d} \bm{\alpha}}{2\pi} \varphi(\bm{\theta},\bm{\alpha}) h(\bm{\alpha}),
\end{equation}
as well as its transposed $T^{\rm T}$ with kernel $T^{\rm T}(\bm{\theta},\bm{\alpha}) = \varphi(\bm{\alpha},\bm{\theta})/(2\pi)$. By a slight abuse of notation, we also sometimes use $n$ for the diagonal operator acting as multiplication by $n(\bm{\theta})$. In these terms,
\begin{equation}\begin{aligned}
h^{\rm dr} = (1-Tn)^{-1} h & \mbox{
\qquad (if $h(\bm{\theta})$ is a vector field)}\\
h^{\rm dr} = (1-T^{\rm T}n)^{-1} h & \mbox{
\qquad (if $h(\bm{\theta})$ is a scalar field)}
\end{aligned}
\end{equation}
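In practice, the dressing operation can be evaluated numerically by discretising the rapidity line and solving the resulting linear system. The following minimal sketch (in Python) assumes a single particle type and uses a placeholder kernel $\varphi(\theta,\alpha)=1/\cosh(\theta-\alpha)$ together with an arbitrarily chosen occupation function, purely for illustration; a concrete model requires its own kernel, parametrisation and state.
\begin{verbatim}
import numpy as np

# Rapidity grid for a single particle type (truncation of the real line)
theta = np.linspace(-8.0, 8.0, 400)
dtheta = theta[1] - theta[0]

# Placeholder kernel phi(theta, alpha)/(2 pi); not the kernel of a specific model
T = (1.0 / np.cosh(theta[:, None] - theta[None, :])) / (2.0 * np.pi)

def dress(h, n, scalar=False):
    # Solve h_dr = h + T n h_dr (vector field), or with T^T (scalar field)
    K = (T.T if scalar else T) * (n * dtheta)[None, :]
    return np.linalg.solve(np.eye(len(theta)) - K, h)

n = 0.3 * np.exp(-theta**2)             # occupation function, illustration only
p_prime_dr = dress(np.cosh(theta), n)   # (p')^dr for p'(theta) = cosh(theta)
\end{verbatim}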
Both the occupation function and the particle density may be taken as characterising a thermodynamic state (a GGE). Other state quantities are related to them:
\begin{equation}\label{tba}
2\pi \rho_{\rm s} = (p')^{\rm dr},\qquad \rho_{\rm p} = n\rho_{\rm s}
\end{equation}
where $p'(\bm{\theta})$ is a vector field\footnote{Note that if the state density is given by some other means -- for instance via its fundamental geometric interpretation \cite{dsy17} -- then the first equation in \eqref{tba} can be seen as a definition of the momentum function for the chosen spectral parametrisation.}.
The relation between the pseudo-energy $\epsilon(\bm{\theta})$ and the occupation function $n(\bm{\theta})$ depends on the type of excitation mode considered: it is different for quantum fermionic or bosonic degrees of freedom (as discussed in \cite{tba}), for classical particle-like modes such as solitons (as discussed in \cite{CJF86,TSPCB86}) or hard rods (as discussed in \cite{dshardrods,dsdrude}), and for classical radiative modes occurring for instance in classical field theory (the GHD of classical field theory is developed in \cite{Bastia17} based on \cite{CJF86,TSPCB86}). We have $n(\theta,a) = \partial \mathsf{F}_a(\epsilon)/\partial \epsilon\,|_{\epsilon=\epsilon(\theta,a)}$ where the free energy function $\mathsf{F}_a$ is given by
\begin{equation}\label{Faw}
\mathsf{F}_a(\epsilon) = \lt\{\ba{ll} -\log(1+e^{-\epsilon}) & \\[1mm] \displaystyle
\log(1-e^{-\epsilon}) & \\[1mm] \displaystyle
-e^{-\epsilon} & \\[1mm] \displaystyle
\log \epsilon &
\end{array}\rt. \quad \Rightarrow\quad
n(\bm{\theta}) = \lt\{\ba{ll} 1/\big(e^{\epsilon(\bm{\theta})}+1\big) &\quad \mbox{($a$ is a fermion)} \\[1mm] \displaystyle
1/\big(e^{\epsilon(\bm{\theta})}-1) & \quad \mbox{($a$ is a boson)} \\[1mm] \displaystyle
e^{-\epsilon(\bm{\theta})} &\quad \mbox{($a$ is a classical particle)} \\[1mm] \displaystyle
1/\epsilon(\bm{\theta}) &\quad \mbox{($a$ is a radiative mode)}
\end{array}\rt.
\end{equation}
(recall that the mode type is encoded within the particle type $a$ of the spectral parameter $\bm{\theta} = (\theta,a)$). Note that the free energy function determines the ``generalized free energy" of the GGE, given by $\int {\rm d}\bm{\theta}\,p'(\bm{\theta})\,\mathsf{F}_a(\epsilon(\bm{\theta}))$.
Averages in GGEs will be denoted by $\langle O\rangle_{[n]}$, functionals of the state variable $n(\bm{\theta})$. Averages of conserved densities and currents are found to be \cite{cdy,BCDF16}
\begin{eqnarray}\label{onepointq}
\langle \frak{q}_i\rangle_{[n]} &=& \int_{\cal S} {\rm d}\bm{\theta}\,\rho_{\rm p}(\bm{\theta})\,h_i(\bm{\theta})\ =\ \int_{\cal S} \frc{{\rm d} p(\bm{\theta})}{2\pi} n(\bm{\theta}) h_i^{\rm dr}(\bm{\theta}) \\
\langle \frak{j}_i\rangle_{[n]} &=& \int_{\cal S} {\rm d}\bm{\theta}\,v^{\rm eff}(\bm{\theta})\rho_{\rm p}(\bm{\theta})\,h_i(\bm{\theta})\ =\ \int_{\cal S} \frc{{\rm d} E(\bm{\theta})}{2\pi} n(\bm{\theta}) h_i^{\rm dr}(\bm{\theta}).\label{onepointj}
\end{eqnarray}
The effective velocity is \cite{bonnes,cdy,BCDF16}
\begin{equation}\label{veff}
v^{\rm eff}(\bm{\theta}) = \frc{(E')^{\rm dr}(\bm{\theta})}{
(p')^{\rm dr}(\bm{\theta})}.
\end{equation}
Here we recall that $h_i(\bm{\theta})$ are scalar fields and $E'(\bm{\theta})$ and $p'(\bm{\theta})$ are vector fields.
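These quantities are straightforward to combine numerically. Continuing the sketch above (same rapidity grid, placeholder kernel, occupation function $n(\theta)$ and dressing routine), and choosing for illustration $p'(\theta)=\cosh\theta$, $E'(\theta)=\sinh\theta$ and the one-particle eigenvalue $h_0(\theta)=1$, the densities \eqref{tba}, the effective velocity \eqref{veff} and the averages \eqref{onepointq}--\eqref{onepointj} read:
\begin{verbatim}
# Continuation: theta, dtheta, T, dress and n as in the previous sketch
p_prime, E_prime = np.cosh(theta), np.sinh(theta)   # vector fields p', E'
p_dr, E_dr = dress(p_prime, n), dress(E_prime, n)
rho_s = p_dr / (2.0 * np.pi)       # state density, 2*pi*rho_s = (p')^dr
rho_p = n * rho_s                  # particle density
v_eff = E_dr / p_dr                # effective velocity

h0_dr = dress(np.ones_like(theta), n, scalar=True)  # dressed h_0(theta) = 1
q_avg = np.sum(p_prime * n * h0_dr) * dtheta / (2.0 * np.pi)   # <q_0>
j_avg = np.sum(E_prime * n * h0_dr) * dtheta / (2.0 * np.pi)   # <j_0>
\end{verbatim}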
The Lagrange parameters $\{\beta_i\}$ of a GGE fix the state, formally, via the trace expression
\begin{equation}\label{GGEn}
\langle O\rangle_{[n]} = \frc{{\rm Tr}\lt( e^{-\sum_i \beta_i Q_i}\,O\rt)}{{\rm Tr} \lt(e^{-\sum_i \beta_i Q_i}\rt)}.
\end{equation}
One can recover the occupation function $n(\bm{\theta})$ from the set $\{\beta_i:i\in{\mathbb{N}}\}$, and vice versa, via a set of nonlinear integral equations: one defines the GGE driving term $w(\bm{\theta}) = \sum_i \beta_i h_i(\bm{\theta})$, which involves the one-particle eigenvalues $h_i(\bm{\theta})$ associated to the conserved charges $Q_i$, and one solves $\epsilon(\bm{\theta}) = w(\bm{\theta}) + \int ({\rm d}\bm{\gamma}/2\pi)\,\varphi(\bm{\gamma},\bm{\theta}) \mathsf{F}_b(\epsilon(\bm{\gamma}))$ (where $\bm{\gamma} = (\gamma,b)$). For our purposes, we mainly need the derivative of $n(\bm{\theta})$ with respect to $\beta_i$. Again the result depends on the type of excitation mode considered, and may be written as
\begin{equation}\label{dbn}
\frc{\partial}{\partial\beta_i} n(\bm{\theta}) = -h_i^{\rm dr}(\bm{\theta})\, n(\bm{\theta})\,f(\bm{\theta})
\end{equation}
where the statistical factor of the mode is $f(\theta,a) = -\partial^2_\epsilon \mathsf{F}_a(\epsilon)/\partial_\epsilon \mathsf{F}_a(\epsilon)\,|_{\epsilon=\epsilon(\theta,a)}$, giving
\begin{equation}\label{f}
f(\bm{\theta})= \lt\{\ba{ll}
1-n(\bm{\theta}) & \mbox{(fermions)}\\
1+n(\bm{\theta}) & \mbox{(bosons)}\\
1 & \mbox{(classical particles)}\\
n(\bm{\theta}) & \mbox{(radiative modes).}
\end{array}\rt.
\end{equation}
The quantities $\epsilon(\bm{\theta})$, $\rho_{\rm s}(\bm{\theta})$, $\rho_{\rm p}(\bm{\theta})$ $v^{\rm eff}(\bm{\theta})$ and $f(\bm{\theta})$ are all functionals of an occupation function; below we use these symbols for the quantities associated to the occupation function denoted $n(\bm{\theta})$.
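The pseudo-energy itself can be obtained from a given driving term by iterating the nonlinear integral equation recalled above. A minimal sketch, for a single classical particle-like mode (so that $n=e^{-\epsilon}$ and $f=1$), with the same grid and placeholder kernel as in the sketches above and a driving term chosen purely for illustration (in general some damping of the iteration may be needed):
\begin{verbatim}
# Same rapidity grid and placeholder kernel T as in the sketches above
w = 1.0 + 0.5 * theta**2            # driving term w(theta), illustrative choice
eps = w.copy()                      # initial guess for the pseudo-energy

for _ in range(500):                # fixed-point iteration of the TBA equation
    F = -np.exp(-eps)               # free energy function of a classical particle mode
    eps_new = w + (T.T @ F) * dtheta   # eps = w + int dgamma/(2pi) phi(gamma,theta) F(eps)
    if np.max(np.abs(eps_new - eps)) < 1e-12:
        eps = eps_new
        break
    eps = eps_new

n_cl = np.exp(-eps)                 # occupation function of the classical mode
f_cl = np.ones_like(n_cl)           # statistical factor f = 1 for classical particles
\end{verbatim}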
\subsection{Generalized fluids in space-time} \label{ssectgenfluid}
Recall that the Eulerian scaling limit \eqref{cf} is defined as a large-scale limit, with fluid cell averaging, of connected correlation functions. This exactly extracts the information about the correlations that is present in the physics of Euler fluids. In order to describe it, we need to construct fluid configurations where at every Euler-scale space-time position $(x,t)\in{\mathbb{R}}\times {\mathbb{R}}$ lies a GGE. We thus need a family of state functions, which we denote equivalently as
\[
n_{x,t}(\bm{\theta}) \equiv n_t(x;\bm{\theta}),
\]
with $\bm{\theta}\in{\cal S}$ the spectral parameter. The function $n_{x,t}(\bm{\theta})$, as a function of $\bm{\theta}$ for $x,t$ fixed, is the occupation function describing the GGE in the fluid cell at $(x,t)$. Below we will use the index $[n_{x,t}]$ for averages in the GGE at the space-time point $(x,t)$, which are functionals of this function of $\bm{\theta}$. On the other hand, $n_t(x;\bm{\theta})$ seen as a function of the doublet $(x,\bm{\theta})$ for $t$ fixed, is the fluid state on the time slice $t$. We will use the index $[n_t]$ for functionals that depend on this function of $(x;\bm{\theta})$. For instance, the Eulerian scaling limit \eqref{cf} is a functional of the initial state $n_0$, while by definition, evolving for a (Euler-scale) time $t$ gives
\begin{equation}\label{evol}
\Big\langle \prod_k {\cal O}_k(x_k,t_k+t)\Big\rangle^{\rm Eul}_{[n_0]} =
\Big\langle \prod_k {\cal O}_k(x_k,t_k)\Big\rangle^{\rm Eul}_{[n_t]}.
\end{equation}
Recall that the dressing operation \eqref{dressing} as well as the various TBA quantities are all functionals of an occupation function. For readability, we will use the notation $h^{\rm dr}(x,t;\bm{\theta})=h^{\rm dr}_{[n_{x,t}]}(\bm{\theta})$, as well as $\rho_{\rm s}(x,t;\bm{\theta})$, $\rho_{\rm p}(x,t;\bm{\theta})$, $v^{\rm eff}(x,t;\bm{\theta})$ and $f(x,t;\bm{\theta})$ for the quantities associated to the occupation function $n_{x,t}(\bm{\alpha})$ (as a function of $\bm{\alpha}$ for $(x,t)$ fixed).
The fluid state on any time slice $t$ takes a factorized form, where on each fluid cell lies a GGE. That is, at large scales correlation functions factorize as
\begin{equation}\label{facto}
\lim_{\lambda\to\infty}
\Big\langle \prod_{k=1}^N {\cal O}_k(\lambda x_k,\lambda t)\Big\rangle_{{\rm ini},\lambda} =
\prod_{k=1}^N\langle {\cal O}_k(x_k)\rangle_{[n_{x_k,t}]}\qquad
(x_j\neq x_k\mbox{ for }j\neq k).
\end{equation}
Here $\langle {\cal O}_k(x_k)\rangle_{[n_{x_k,t}]}$ is the average of the local (Schr\"odinger-picture) operator ${\cal O}_k(x_k)$, in the GGE $n_{x_k,t}$ which lies at Euler-scale space-time position $(x_k,t)$. In order for the results below to be valid, we in fact require that equal-time, space-separated connected correlation functions vanish fast enough\footnote{In non-equilibrium steady states emerging from the partitioning protocol, this requirement is broken by certain fields, see e.g. \cite[Eq.33]{DLSB15}.},
\begin{equation}\label{facto2}
\lim_{\lambda\to\infty}
\lambda^{N-1}\Big\langle \prod_{k=1}^N {\cal O}_k(\lambda x_k,\lambda t)\Big\rangle_{{\rm ini},\lambda}^{\rm c} = 0\qquad
(x_j\neq x_k\mbox{ for }j\neq k).
\end{equation}
Thus the Eulerian scaling limit \eqref{cf} is zero whenever all times are the same and no two positions coincide. Relation \eqref{facto2} is expected to hold for all conserved densities and currents, and for most other local observables, in a large family of states; for instance, it holds in any homogeneous, nonzero-temperature Kubo-Martin-Schwinger state of local quantum chains.
The initial fluid state $n_0(x;\bm{\theta})$ is the Euler scale version of the state \eqref{inis}. According to \eqref{facto}, it factorizes into local GGEs. The local GGE at space-time position $(x,0)$ is determined by the parameters $\{\beta_i(x):i\in{\mathbb{N}}\}$ which appear in \eqref{inis} as per \eqref{GGEn}:
\begin{equation}\label{inistate}
\quad \langle O\rangle_{[n_{x,0}]} = \frc{{\rm Tr}\lt(e^{-\sum_i\beta_i(x)Q_i}\,O\rt)}{
{\rm Tr} \lt( e^{-\sum_i\beta_i( x) Q_i}\rt)}.
\end{equation}
In particular, according to \eqref{dbn}, it satisfies the functional derivative equation
\begin{equation}\label{dern}
\frc{\delta}{\delta\beta_i(y)} n_0(x;\bm{\theta}) = -\delta(x-y)\,h_i^{\rm dr}(x,0;\bm{\theta})\, n_0(x;\bm{\theta})\,f(x,0;\bm{\theta}).
\end{equation}
In accordance with the factorized form \eqref{facto} and especially \eqref{facto2}, equal-time scaled connected correlation functions have support only at coinciding points. In fact, taking the Eulerian scaling limit, they can be written in the form
\begin{equation}\label{delta}
\Big\langle \prod_{k=1}^N {\cal O}_k(x_k)\Big\rangle_{[n_t]}^{\rm Eul} =
C_{[n_{x_1,t}]}^{{\cal O}_1,\ldots,{\cal O}_N}\prod_{j=2}^N \delta(x_1-x_j).
\end{equation}
By integration, one can identify the pre-factor as the full integral of the connected correlation function in the homogeneous local state at $x_1$,
\begin{equation}\label{Cn0}
C_{[n_{x_1,t}]}^{{\cal O}_1,\ldots,{\cal O}_N} = \int_{{\mathbb{R}}^{N-1}} {\rm d} x_2\cdots {\rm d} x_N\,
\Big\langle\prod_{k=1}^N{\cal O}_k(x_k)\Big\rangle_{[n_{x_1,t}]}^{\rm c}.
\end{equation}
Note that the scaling factor $\lambda^{N-1}$ exactly cancels that coming from the re-scaling of the integration variables, and that thanks to the space integration, it is not necessary anymore to average over fluid cells.
\medskip
\noindent{\bf Remark.} For every GGE $n(\bm{\theta})$, there is a Hilbert space formed by the completion, under the natural topology, of the space of local observables with respect to the state-dependent ``hydrodynamic inner product''
\begin{equation}\label{hils}
\langle {\cal O}_1|{\cal O}_2\rangle_{[n]} = C_{[n]}^{{\cal O}_1^\dag,{\cal O}_2} = \int_{\mathbb{R}} {\rm d} x\,
\langle{\cal O}_1^\dag(x){\cal O}_2(0)\rangle_{[n]}^{\rm c}.
\end{equation}
There is a sub-Hilbert space formed by the set of conserved densities ${\cal O}_1,{\cal O}_2\in\{\frak{q}_i:i\in{\mathbb{N}}\}$ within this Hilbert space. The space spanned by $h_i(\bm{\theta}),\,i\in{\mathbb{N}}$ is required to be dense within this sub-Hilbert space, and this, for all $n=n_t(x)$. This, generically, imposes the inclusion of quasi-local conserved densities. See the review \cite{quasiloc} for quasi-local densities, and \cite{D17} for a rigorous description of these Hilbert spaces and the way they are involved in generalized thermalization.
\subsection{Time evolution}
Consider a generalized fluid in space-time that is obtained, after the Eulerian scaling limit, by evolving an initial state \eqref{inis} using a homogeneous dynamics as in \eqref{tevol}, \eqref{gen}. This satisfies an Eulerian fluid equation \cite{cdy,BCDF16}. This is the main equation of GHD, which can be written as the convective evolution equation
\begin{equation}\label{ghd}
\partial_t n_t(x;\bm{\theta}) + v^{\rm eff}(x,t;\bm{\theta}) \partial_x n_t(x;\bm{\theta}) = 0.
\end{equation}
Its ``solution by characteristics" was discovered in \cite{dsy17}. Given the initial condition $n_0(x;\bm{\theta})$, one introduces the characteristics, a function $u(x,t;\bm{\theta})$, which one evaluates along with the evolved state $n_t(x;\bm{\theta})$ by solving the following set of equations:
\begin{equation}\label{sol}\begin{aligned}
n_t(x;\bm{\theta}) &= n_0(u(x,t;\bm{\theta});\bm{\theta})\\
\int_{x_0}^{x} {\rm d} y\,\rho_{\rm s}(y,t;\bm{\theta})
&= \int_{x_0}^{u(x,t;\bm{\theta})} {\rm d} y\,\rho_{\rm s}(y,0;\bm{\theta})
+ v^{\rm eff}(x_0,0;\bm{\theta})\rho_{\rm s}(x_0,0;\bm{\theta}) \, t.
\end{aligned}
\end{equation}
In these equations, $x_0$ is an ``asymptotically stationary point": it must be chosen far enough on the left in such a way that $ n_s(x;\bm{\theta}) = n_0(x;\bm{\theta})$ for all $x<x_0$ and $s\in[0,t]$ (typically, one should think of it as $x_0=-\infty$). This provides the evolution from the initial condition $n_0$ for a time $t$.
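The system \eqref{sol} also suggests a direct numerical scheme: discretise position and rapidity, evaluate the state densities by dressing, and invert the (monotone, by positivity of $\rho_{\rm s}$) integrated densities to update the characteristic function $u$, iterating until convergence. The sketch below only exhibits this structure; it assumes a single particle type, the placeholder kernel and parametrisation of the previous sketches, an arbitrarily chosen initial occupation function, and takes $x_0$ as the left end of the position grid.
\begin{verbatim}
import numpy as np

theta = np.linspace(-6.0, 6.0, 120); dth = theta[1] - theta[0]
x = np.linspace(-30.0, 30.0, 241);   dx = x[1] - x[0]
t = 2.0                                           # Euler-scale time

T = (1.0 / np.cosh(theta[:, None] - theta[None, :])) / (2.0 * np.pi)  # placeholder
p_prime, E_prime = np.cosh(theta), np.sinh(theta)
Id = np.eye(len(theta))

def dress(h, n):
    return np.linalg.solve(Id - T * (n * dth)[None, :], h)

def rho_s_of(n):                                  # 2*pi*rho_s = (p')^dr
    return dress(p_prime, n) / (2.0 * np.pi)

def n0(xx):                                       # initial occupation n_0(x; theta)
    return 0.2 * np.exp(-theta**2) * (1.0 + 0.5 * np.exp(-xx**2 / 10.0))

n_init = np.array([n0(xx) for xx in x])           # shape (Nx, Ntheta)
rho0 = np.array([rho_s_of(nn) for nn in n_init])
A0 = np.cumsum(rho0, axis=0) * dx                 # int_{x0}^{x} rho_s(y,0;theta) dy
v0 = dress(E_prime, n_init[0]) / dress(p_prime, n_init[0])   # v_eff(x0, 0; theta)
r0 = rho0[0]                                      # rho_s(x0, 0; theta)

u = np.tile(x[:, None], (1, len(theta)))          # initial guess u(x,t;theta) = x
for _ in range(40):                               # fixed-point iteration of eqs. (sol)
    n_t = np.empty_like(n_init)                   # n_t(x;theta) = n_0(u(x,t;theta);theta)
    for j in range(len(theta)):
        n_t[:, j] = np.interp(u[:, j], x, n_init[:, j])
    rho_t = np.array([rho_s_of(nn) for nn in n_t])
    B = np.cumsum(rho_t, axis=0) * dx             # int_{x0}^{x} rho_s(y,t;theta) dy
    u_new = np.empty_like(u)
    for j in range(len(theta)):                   # invert A0(u) = B(x) - v0*r0*t, per rapidity
        u_new[:, j] = np.interp(B[:, j] - v0[j] * r0[j] * t, A0[:, j], x)
    if np.max(np.abs(u_new - u)) < 1e-8:
        u = u_new
        break
    u = u_new

for j in range(len(theta)):                       # evolved state at Euler time t
    n_t[:, j] = np.interp(u[:, j], x, n_init[:, j])
\end{verbatim}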
It is worth noting that the function $u(x,t;\bm{\theta})$ has a simple interpretation: it is the position, at time $0$, from which a quasi-particle trajectory of spectral parameter $\bm{\theta}$ would reach the position $x$ at time $t$. Indeed, it solves
\begin{equation}\label{equ}
\partial_t u(x,t;\bm{\theta}) + v^{\rm eff}(x,t;\bm{\theta}) \partial_x u(x,t;\bm{\theta}) = 0,\qquad
u(x,0;\bm{\theta}) = x.
\end{equation}
Thus, defining the trajectory $x(t)$, starting at $x(0)=y$, via
\begin{equation}
u(x(t),t;\bm{\theta}) = y,
\end{equation}
we find
\begin{equation}
\frc{{\rm d} x(t)}{{\rm d} t} \partial_x u(x,t;\bm{\theta})|_{x=x(t)}
+ \partial_t u(x,t;\bm{\theta})|_{x=x(t)} = 0\ \Rightarrow \
\frc{{\rm d} x(t)}{{\rm d} t} = v^{\rm eff}(x(t),t;\bm{\theta}).
\end{equation}
Below we assume the following: (i) the state density $\rho_{\rm s}(\bm{\theta})$ is positive for all $\bm{\theta}$, and (ii) the equations \eqref{sol} have a unique solution. Thanks to these assumptions, differentiating the second equation in \eqref{sol} with respect to $x$, we obtain the inequality
\begin{equation}\label{invertibility}
\partial_x u(x,t;\bm{\theta})>0,
\end{equation}
which implies that the function $u(x,t;\bm{\theta})$ is invertible with respect to the position.
\medskip
\noindent {\bf Remark.} Note that if we assume that the effective velocity $v^{\rm eff}(\bm{\theta})$ is a monotonically increasing function of the rapidity $\theta$, then (Appendix \ref{appinv})
\begin{equation}\label{invertibility2}
u'(x,t;\bm{\theta})<0
\end{equation}
so that the function $u(x,t;\bm{\theta})$ is invertible with respect to the rapidity. The latter condition is satisfied for instance in Galilean or relativistic field theories. This condition slightly simplifies some of the considerations, and in particular it guarantees that $(v^{\rm eff})'(\bm{\theta})\neq 0$. In fact, if the latter inequality is not satisfied, then some of the asymptotic results below do not apply. Yet, we will not make use of the monotonicity assumption, but we will implicitly assume that $(v^{\rm eff})'(\bm{\theta})\neq 0$ when it appears in denominators, keeping the discussion of how a vanishing derivative of the effective velocity may change some results for the conclusion.
\section{Correlation functions}\label{seccorr}
Despite the factorization properties \eqref{facto} and \eqref{facto2} on equal-time slices, scaled connected correlation functions \eqref{cf} are nontrivial when fields do not all lie on the same time slice. That is, a connected dynamical $N$-point function vanishes, at the Euler scale, as $\lambda^{1-N}$ with a generically nonzero coefficient, which is extracted (after fluid-cell average) by taking the Eulerian scaling limit \eqref{cf}.
In this section, we develop a recursive procedure that generates all scaled dynamical correlation functions \eqref{cf}. The procedure is based on linear responses and an extension of the fluctuation-dissipation theorem to Euler scale correlations. We identify the {\em propagator}, propagating from time 0 to time $t$, as (simply related to) the linear response of $n_t$ to variations of the initial condition $n_0$. We explain how, in the cases of two-point functions involving conserved densities $\frak{q}_i(x,t)$ and currents $\frak{j}_i(x,t)$, one can obtain from this procedure explicit integral equations. We also explain how one can extend these formulae, combining hydrodynamic projection principles with the Leclair-Mussardo formula, to two-point functions involving other local fields. We finally state the general results for scaled $n$-point functions in free models.
It is worth noting that in general, correlation functions depend on much more than the information present in the Euler hydrodynamics. For instance, although the knowledge of the GGE equations of states is sufficient to determine the full thermodynamics and Euler hydrodynamics, it cannot be sufficient to determine correlation functions of the type $\langle \frak{q}_i(x,t) \frak{q}_j(0)\rangle_{\rm ini}^{\rm c}$. Indeed, GGE equations of state give information about conserved charges $Q_i = \int {\rm d} x\,\frak{q}_i(x)$, but conserved densities $\frak{q}_i(x)$ are defined from these only up to total spatial derivatives of local fields. Thus any result from GHD for two-point correlation function $\langle \frak{q}_i(x)\frak{q}_j(0)\rangle_{\rm ini}^{\rm c}$ cannot depend on the precise definition of $\frak{q}_i(x)$. The Eulerian scaling limit \eqref{cf} only probes large wavelengths, and derivative corrections to $\frak{q}_i(x)$ are expected to give vanishing contributions. This is why it is possible to obtain exact results purely from GHD for this scaling limit. Any correction to the Eulerian scaling limit necessitates additional information, hence cannot lie entirely within the present GHD framework.
Euler-scaled dynamical correlations can be seen as being produced by ``waves" of conserved quantities ballistically propagating in the fluid between the fields involved in the correlation function. The problem can thus be seen as that of propagating Euler-scale waves from the initial delta-function correlation \eqref{delta}, essentially using the evolution equation \eqref{ghd}. This form of the problem is made more explicit in the case of two-point functions in Subsection \ref{sshydroproj} using hydrodynamic projection theory.
\subsection{Generating higher-point correlation functions}\label{ssecgen}
The main idea of the method is to use responses to local (in the Euler sense) disturbance in order to generate dynamical correlations. Indeed, consider the state \eqref{inis}. The response to a small change of the local potential $\beta_i(x)$ at the point $x$ should provide information about the correlation between the observable $O$ (which can be a product of local observables) and the local conserved density $\frak{q}_i(x)$. At the Euler scale \eqref{cf}, the functional differentiation with respect to $\beta_i(x)$ brings down the density $\frak{q}_i(x)$, and does nothing else. This is clear in classical models as it follows from differentiation of the exponential function. In quantum models, terms coming from nontrivial commutators between local conserved densities are negligible at the Euler scale: they only give rise to derivatives of local operators, see \cite[eqs. 91-93]{dynote}, which can be neglected in Eulerian correlation functions\footnote{Note that at the Euler scale, $\mathfrak{q}_i(x,t)$ is completely characterised by the corresponding conserved charge $Q_i$, hence only defined up to a total derivative.}. Therefore,
\begin{equation}\label{qderb}
\Big\langle \frak{q}_i(x,0) \prod_k {\cal O}_k(x_k,t_k)\Big\rangle_{[n_0]}^{\rm Eul} = -\frc{\delta}{\delta\beta_i(x)}\Big\langle \prod_k {\cal O}_k(x_k,t_k)\Big\rangle_{[n_0]}^{\rm Eul}.
\end{equation}
We see that Eulerian dynamical correlation functions are related to response functions. This constitutes a generalisation, both out of equilibrium and to the presence of the higher conserved charges of integrable models, of the fluctuation-dissipation theorem.
Let us consider Euler scale correlation functions \eqref{cf} involving charge densities and currents. The one-point functions are given by \eqref{onepointq}, \eqref{onepointj}. Evolving in time and taking the Eulerian scaling limit is simple,
\begin{eqnarray}\label{qi}
\langle \frak{q}_i(x,t)\rangle_{[n_0]}^{\rm Eul} = \langle \frak{q}_i(x)\rangle_{[n_t]}^{\rm Eul} &=& \int_{\cal S} \frc{{\rm d} \bm{\theta}}{2\pi}\,
p'(\bm{\theta}) \,n_t(x;\bm{\theta})\, h_i^{\rm dr}(x,t;\bm{\theta})\\
\label{ji}
\langle \frak{j}_i(x,t)\rangle_{[n_0]}^{\rm Eul} = \langle \frak{j}_i(x)\rangle_{[n_t]}^{\rm Eul} &=& \int_{\cal S} \frc{{\rm d} \bm{\theta}}{2\pi}\,
E'(\bm{\theta}) \,n_t(x;\bm{\theta})\, h_i^{\rm dr}(x,t;\bm{\theta}).
\end{eqnarray}
Higher-point functions with many insertions of conserved densities are obtained recursively as follows. Let $\prod_{k=1}^N {\cal O}_k(x_k,t_k)$ be a product of local observables at various space-time positions. It is convenient to assume that $t_N=0$, without loss of generality as we can always evolve in time using \eqref{sol}. Assume that $\langle \prod_{k=1}^N {\cal O}_k(x_k,t_k)\rangle_{[n_0]}^{\rm Eul}$ is known as a functional of $n_0(x;\bm{\theta})$. This is the case for $N=1$ with ${\cal O}_1$ being a conserved density or current (see below for other one-point functions). From this, we may obtain correlation functions $\langle \prod_{k=1}^{N} {\cal O}_k(x_k,t_k+t)\,{\cal O}_{N+1}(x_{N+1},0)\rangle_{[n_0]}^{\rm Eul}$ with ${\cal O}_{N+1}= \frak{q}_j$ for any $j$. The result has the same structure as the correlation considered at order $N$: it contains $N+1$ observables, where all $N$ previous local observables have been evolved for a time $t$, and a new conserved density has been inserted at time $t_{N+1}=0$, so that the recursion can be continued. We obtain:
\begin{eqnarray} \label{dU}
\Big\langle \prod_{k=1}^N {\cal O}_k(x_k,t_k+t)\;
\frak{q}_j(y,0) \Big\rangle_{[n_0]}^{\rm Eul} &=& -
\frc{\delta}{\delta\beta_j(y)} \Big\langle \prod_{k=1}^N {\cal O}_k(x_k,t_k)\Big\rangle_{[n_t]}^{\rm Eul}\\ &=& \int_{\cal S} {\rm d}\bm{\alpha}\int_{\cal S}{\rm d}\bm{\theta}\int_{\mathbb{R}}{\rm d} z\,n_0(y;\bm{\alpha})\,f(y,0;\bm{\alpha})\,h_j^{\rm dr}(y,0;\bm{\alpha})\;\times\nonumber\\ && \qquad \times\;\frc{\delta n_t(z;\bm{\theta})}{\delta n_0(y;\bm{\alpha})} \frc{\delta}{\delta \t n(z;\bm{\theta})}\Big\langle \prod_{k=1}^N {\cal O}_k(x_k,t_k)\Big\rangle_{[\t n]}^{\rm Eul}\Bigg|_{\t n = n_t}.\nonumber
\end{eqnarray}
We have used \eqref{qderb}, \eqref{evol} and \eqref{dern}. In this expression, $\delta n_t(z;\bm{\theta})/\delta n_0(y;\bm{\alpha})$ is the functional derivative of the time-evolved occupation function $n_t(z;\bm{\theta})$ with respect to variations of the initial condition $n_0(y;\bm{\alpha})$ from which it is evolved.
Density-density two-point functions take a particularly simple form thanks to the general formula
\begin{equation}\label{dmu}
\partial_\mu \int_{\cal S} \frc{{\rm d}\bm{\theta}}{2\pi} \,g(\bm{\theta})\, n(\bm{\theta})\,h^{\rm dr}(\bm{\theta})
= \int_{\cal S} \frc{{\rm d}\bm{\theta}}{2\pi} \,g^{\rm dr}(\bm{\theta})\, \partial_\mu n(\bm{\theta})\,h^{\rm dr}(\bm{\theta})
\end{equation}
obtained in \cite{dsdrude}, where $\mu$ is any parameter on which a GGE state $n(\bm{\theta})$ may depend, and $g(\bm{\theta})$, $h(\bm{\theta})$ are any spectral functions (either $g$ is a vector field and $h$ is a scalar field, or vice versa). The functional derivative on the right-hand side in \eqref{dU} may be evaluated using this along with \eqref{qi} (specialized to $t=0$), giving
\begin{eqnarray}
\lefteqn{\langle \frak{q}_i(x,t)\frak{q}_j(y,0) \rangle_{[n_0]}^{\rm Eul}} \nonumber\\ && =
\int_{\cal S} {\rm d}\bm{\alpha}\int_{\cal S}{\rm d}\bm{\theta}\,n_0(y;\bm{\alpha})\,f(y,0;\bm{\alpha})\,h_j^{\rm dr}(y,0;\bm{\alpha})\,\frc{\delta n_t(x;\bm{\theta})}{\delta n_0(y;\bm{\alpha})}
\rho_{\rm s}(x,t;\bm{\theta})\, h_i^{\rm dr}(x,t;\bm{\theta}).
\end{eqnarray}
Recall that $\rho_{\rm s}(x,t;\bm{\theta})$ is the state density \eqref{tba} evaluated with respect to the occupation function at space-time position $(x,t)$. The density-current two-point function can be obtained similarly. Higher-point functions are obtained using \eqref{dU} by further functional differentiation, using similar techniques.
The crucial objects in these formulae are the functional derivatives of the time-evolved occupation function $n_t(x;\bm{\theta})$ with respect to its initial condition $n_0(y;\bm{\alpha})$. These describe the dynamical responses of the fluid at time $t$ to a change of initial condition. The two-point function only involves the first derivative, while higher-point functions will involve higher derivatives.
Below it will be convenient to define the {\em propagator} as a simple conjugation of the first derivative of the evolution operator:
\begin{equation}\label{propdef}
\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha}) =
\big(n_t(x;\bm{\theta})\,f(x,t;\bm{\theta})\big)^{-1}
\,\frc{\delta n_t(x;\bm{\theta})}{\delta n_0(y;\bm{\alpha})}\,
n_0(y;\bm{\alpha})\,f(y,0;\bm{\alpha}).
\end{equation}
In terms of the propagator, the density-density two-point function takes the form
\begin{eqnarray}
\lefteqn{\langle \frak{q}_i(x,t) \frak{q}_j(y,0) \rangle_{[n_0]}^{\rm Eul}} \nonumber\\ && =
\int_{\cal S}{\rm d}\bm{\theta}\int_{\cal S} {\rm d}\bm{\alpha}\,
\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(x,t;\bm{\theta})\,f(x,t;\bm{\theta})\, h_i^{\rm dr}(x,t;\bm{\theta})
\,h_j^{\rm dr}(y,0;\bm{\alpha}).
\label{qq1}
\end{eqnarray}
Note that the propagator is a vector field as a function of its first argument, and a scalar field as a function of its second.
In the following, we concentrate on two-point functions: we explain how to evaluate the propagator via integral equations, and how to go beyond correlation functions involving conserved densities. It turns out that the propagator, as defined in \eqref{propdef}, satisfies a linear integral equation whose source term and kernel stay well defined even at points where the occupation function vanishes. We leave for future studies the development of expressions for higher-point functions and the evaluation of higher derivatives of the time-evolved occupation function.
\subsection{Exact two-point functions of densities and currents}\label{ssectexact}
The derivation of the following formulae, based on the techniques introduced above, is presented in Appendix \ref{sappMain}. Here we describe the main results.
In order to express the results, it is convenient to introduce the ``star-dressing" operation, which for a vector field $g(\bm{\theta})$ and a GGE occupation function $n(\bm{\theta})$ is defined by
\begin{equation}\label{stardressing}
g^{*{\rm dr}}(\bm{\theta}) = \big(Tn\, g\big)^{\rm dr}(\bm{\theta}) = g^{\rm dr}(\bm{\theta})-g(\bm{\theta}).
\end{equation}
Note that without interaction, we have $g^{*{\rm dr}}=0$. We will also need the effective acceleration $a^{\rm eff}_{[n_0]}(x;\bm{\theta})$ introduced in \cite{dynote}. This is a functional of $n_0(x;\bm{\theta})$ (seen as a function of $(x,\bm{\theta})$). It is defined as $a^{\rm eff}_{[n_0]}(x;\bm{\theta}) = - (\partial_x w(x))^{\rm dr}(x,0;\bm{\theta})/ (p')^{\rm dr}(x,0;\bm{\theta})$ where $w(x;\bm{\theta}) = \sum_i \beta_i(x) h_i(\bm{\theta})$ is a scalar field, the TBA driving term of the GGE $n_0(x;\bm{\theta})$ (see \eqref{inistate}). For our purpose, we may write it in the equivalent forms
\begin{equation}\label{aeff}
a^{\rm eff}_{[n_0]}(x;\bm{\theta})=\frc{\partial_x n_0(x;\bm{\theta})}{2\pi \rho_{{\rm p}}(x,0;\bm{\theta}) f(x,0;\bm{\theta})} =
-\frc{\partial_x \epsilon(x,0;\bm{\theta})}{2\pi \rho_{\rm s}(x,0;\bm{\theta})}.
\end{equation}
The effective acceleration encodes the inhomogeneity of the fluid state $n_0(x;\bm{\theta})$.
It will be convenient to see the propagator $\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})$ as the kernel of a linear integral operator acting on scalar fields via contraction on the spectral parameter $\bm{\alpha}$:
\begin{equation}\label{propg}
\big(\mathsf{\Gamma}_{(y,0)\to (x,t)}g\big)(\bm{\theta}) =
\int_{\cal S} {\rm d}\bm{\alpha}\,\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})g(\bm{\alpha}).
\end{equation}
This can be interpreted as bringing the spectral function $g$ from the point $(y,0)$ to the point $(x,t)$ starting in the initial state $n_0$. We show in Appendix \ref{sappMain} that the propagator satisfies, for $x,y>x_0$ (recall \eqref{sol} for the quantity $x_0$), the following integral equation:
\begin{equation}\begin{aligned}
& \big(\mathsf{\Gamma}_{(y,0)\to(x,t)}g\big)(\bm{\theta})
- 2\pi a^{\rm eff}_{[n_0]}(u;\bm{\theta})
\int_{x_0}^x {\rm d} z\,\Big(\rho_{\rm s}(z,t) f(z,t)\,
\mathsf{\Gamma}_{(y,0)\to(z,t)}g
\Big)^{*{\rm dr}}(z,t;\bm{\theta})
\\
&\qquad =\quad \delta(y-u)\,g(\bm{\theta})
- 2\pi a^{\rm eff}_{[n_0]}(u;\bm{\theta})\,
\Theta(u-y)\,\Big(\rho_{\rm s}(y,0) f(y,0)\,g\Big)^{*{\rm dr}}(y,0;\bm{\theta})\\
&\mbox{with}\quad u = u(x,t;\bm{\theta})
\end{aligned}\label{Gamma}
\end{equation}
where $\Theta(\ldots)$ is Heaviside's Theta-function. This defines $\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})$. In this and other equations below, functions such as $\rho_{\rm s}(x,t;\bm{\theta})$ and $f(x,t;\bm{\theta})$ with {\em omitted} spectral argument $\bm{\theta}$ are to be seen as diagonal integral operators, acting simply by multiplication by the associated quantity.
Remark that if the initial state is homogeneous, in which case the evolution is trivial $n_t(x;\bm{\theta}) = n(\bm{\theta})$, then we have $a^{\rm eff}_{[n_0]}(u;\bm{\theta})=0$ and $u = x-v^{\rm eff}(\bm{\theta}) t$, and we find
\begin{equation}\label{homo}
\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha}) = \delta(x-y-v^{\rm eff}(\bm{\theta}) t)\,\delta_{\cal S}(\bm{\theta}-\bm{\alpha})
\qquad\mbox{(homogeneous states).}
\end{equation}
Here and below, $\delta_{\cal S}(\bm{\theta}-\bm{\alpha}) = \delta(\theta-\alpha)\delta_{b,a}$ for $\bm{\alpha} = (\alpha,a)$ and $\bm{\theta}=(\theta,b)$. In the absence of interaction, we have $u = x-v^{\rm gr}(\bm{\theta})t$ where $v^{\rm gr}(\bm{\theta}) = E'(\bm{\theta})/p'(\bm{\theta})$ is the group velocity, and
\begin{equation}\label{free}
\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha}) = \delta(x-y-v^{\rm gr}(\bm{\theta}) t)\,\delta_{\cal S}(\bm{\theta}-\bm{\alpha})
\qquad\mbox{(without interactions).}
\end{equation}
Further, at $t=0$, one obtains
\begin{equation}\label{gammainit}
\mathsf{\Gamma}_{(y,0)\to(x,0)}(\bm{\theta},\bm{\alpha}) = \delta(x-y)\,\delta_{\cal S}(\bm{\theta}-\bm{\alpha})
\qquad\mbox{(vanishing time difference).}
\end{equation}
The propagator \eqref{Gamma} allows one to evaluate two-point functions of conserved densities in inhomogeneous states as per \eqref{qq1}. For two-point functions involving currents, the results are simple modifications of the above, where the effective velocity multiplies the dressed one-particle eigenvalues. The results are
\begin{eqnarray}\label{qq}
\langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_0]}^{\rm Eul} &=&
\int_{\cal S}{\rm d}\bm{\theta}\int_{\cal S} {\rm d}\bm{\alpha}\,
\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(x,t;\bm{\theta})\,
f(x,t;\bm{\theta})
\;\times\\ && \qquad\qquad\qquad \times\;
h_i^{\rm dr}(x,t;\bm{\theta})\,
h_j^{\rm dr}(y,0;\bm{\alpha})\nonumber\\\label{jq}
\langle \frak{j}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_0]}^{\rm Eul} &=&
\int_{\cal S}{\rm d}\bm{\theta}\int_{\cal S} {\rm d}\bm{\alpha}\,
\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(x,t;\bm{\theta})\,
f(x,t;\bm{\theta})
\;\times\\ && \qquad\qquad\qquad \times\;
v^{\rm eff}(x,t;\bm{\theta})h_i^{\rm dr}(x,t;\bm{\theta})\,
h_j^{\rm dr}(y,0;\bm{\alpha})\nonumber\\
\label{qj}
\langle \frak{q}_i(x,t)\frak{j}_j(y,0)\rangle_{[n_0]}^{\rm Eul} &=& \int_{\cal S}{\rm d}\bm{\theta}\int_{\cal S} {\rm d}\bm{\alpha}\,
\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(x,t;\bm{\theta})\,
f(x,t;\bm{\theta})
\;\times\\ && \qquad\qquad\qquad \times\;
h_i^{\rm dr}(x,t;\bm{\theta})v^{\rm eff}(y,0;\bm{\alpha})h_j^{\rm dr}(y,0;\bm{\alpha})\nonumber\\
\label{jj}
\langle \frak{j}_i(x,t)\frak{j}_j(y,0)\rangle_{[n_0]}^{\rm Eul} &=& \int_{\cal S}{\rm d}\bm{\theta}\int_{\cal S}{\rm d}\bm{\alpha}\,
\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(x,t;\bm{\theta})\,
f(x,t;\bm{\theta})\;\times\\ && \qquad\qquad\qquad \times\;v^{\rm eff}(x,t;\bm{\theta})h_i^{\rm dr}(x,t;\bm{\theta})\,
v^{\rm eff}(y,0;\bm{\alpha})h_j^{\rm dr}(y,0;\bm{\alpha}).\nonumber
\end{eqnarray}
These expressions are similar to those obtained in \cite{dsdrude}, except for the nontrivial propagator $\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})$. In the homogeneous case, using \eqref{homo}, we indeed recover the result of \cite{dsdrude}. Formulae \eqref{qq}, \eqref{jq}, \eqref{qj} and \eqref{jj}, with \eqref{Gamma}, are the main results of this paper.
It is natural to separate the propagator into two terms,
\begin{equation}\label{GammaDelta}
\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})
=
\delta(y-u(x,t;\bm{\theta}))\delta_{\cal S}(\bm{\theta}-\bm{\alpha}) +
\mathsf{\Delta}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha}).
\end{equation}
We will refer to the first term as the {\em direct propagator}, and the second as the {\em indirect propagator}. As explained in Appendix \ref{sappDelta}, the indirect propagator satisfies the following linear integral equation:
\begin{equation}\label{Delta}
\frc{\big(\mathsf{\Delta}_{(y,0)\to(x,t)}g\big)(\bm{\theta})}{2\pi a^{\rm eff}_{[n_0]}(u(x,t;\bm{\theta});\bm{\theta})} = \big(\mathsf{W}_{(y,0)\to(x,t)}g\big)(\bm{\theta})
+ \int_{x_0}^x {\rm d} z\,\Big(\rho_{\rm s}(z,t)f(z,t)
\mathsf{\Delta}_{(y,0)\to(z,t)}g\Big)^{*{\rm dr}}(z,t;\bm{\theta})
\end{equation}
where the source term is
\begin{eqnarray}\label{W}
\big(\mathsf{W}_{(y,0)\to(x,t)}g\big)(\bm{\theta}) &=& \int_{x_0}^x{\rm d} z\,
\sum_{\bm{\gamma}\in \bm{\theta}_\star(z,t;y)}\frc{\rho_{\rm s}(z,t;\bm{\gamma})n_0(y;\bm{\gamma})f(y,0;\bm{\gamma})}{
|u'(z,t;\bm{\gamma})|}T^{\rm dr}(z,t;\bm{\theta},\bm{\gamma}) g(\bm{\gamma}) \nonumber\\
&& -\;\Theta(u(x,t;\bm{\theta})-y)\big(\rho_{\rm s}(y,0)f(y,0)g\big)^{*{\rm dr}}(y,0;\bm{\theta}).
\end{eqnarray}
Here the dressed scattering operator is
\begin{equation}\label{Tdr}
T^{\rm dr} = (1-Tn)^{-1}T,
\end{equation}
and $T^{\rm dr}(z,t;\bm{\theta},\bm{\gamma})$ is, as a function of $\bm{\theta},\bm{\gamma}$, the kernel of $T^{\rm dr}(z,t)$ (with dressing with respect to the state $[n_{z,t}]$). $T^{\rm dr}(z,t;\bm{\theta},\bm{\gamma})$ is a vector field as a function of $\bm{\theta}$, and a scalar field as a function of $\bm{\gamma}$. The root set is
\begin{equation}\label{um1}
\bm{\theta}_\star(x,t;y)=\{\bm{\theta}:u(x,t;\bm{\theta})=y\}.
\end{equation}
If the effective velocity is monotonic with respect to the rapidity, by virtue of \eqref{invertibility2}, the function $u(x,t;\bm{\theta})$ is locally invertible in the rapidity $\theta$, so that the set $\bm{\theta}_\star(x,t;y)$ contains at most one element $\bm{\theta} = (\theta_a,a)$ per particle type $a\in {\cal A}$. In general, however, the set may contain more than one solution per particle type. In terms of \eqref{GammaDelta}, we have for instance
\begin{eqnarray}\label{qq2}
\lefteqn{\langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_0]}^{\rm Eul}}\qquad && \\
&=& \sum_{\bm{\gamma}\in \bm{\theta}_\star(x,t;y)}\frc{\rho_{\rm s}(x,t;\bm{\gamma})\,
n_0(y;\bm{\gamma})f(y,0;\bm{\gamma})}{|u'(x,t;\bm{\gamma})|}\,h_i^{\rm dr}(x,t;\bm{\gamma})\,h_j^{\rm dr}(y,0;\bm{\gamma})
\; + \nonumber\\
&& \qquad + \;
\int_{\cal S} {\rm d}\bm{\theta}\int_{\cal S}{\rm d}\bm{\alpha}\,
\mathsf{\Delta}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(x,t;\bm{\theta})\,
f(x,t;\bm{\theta})\,h_i^{\rm dr}(x,t;\bm{\theta})\,h_j^{\rm dr}(y,0;\bm{\alpha})\nonumber
\end{eqnarray}
(and remark that $n_0(y;\bm{\gamma})f(y,0;\bm{\gamma}) = n_t(x;\bm{\gamma})f(x,t;\bm{\gamma})$ in the first term on the right-hand side).
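For instance, in a homogeneous state one has $u(x,t;\bm{\theta}) = x - v^{\rm eff}(\bm{\theta})t$, so that $|u'(x,t;\bm{\gamma})| = |(v^{\rm eff})'(\bm{\gamma})|\,t$ and $\bm{\theta}_\star(x,t;y)=\{\bm{\theta}: v^{\rm eff}(\bm{\theta}) = (x-y)/t\}$; the indirect propagator vanishes, and the first term of \eqref{qq2} reduces to
\begin{equation}
\langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n]}^{\rm Eul} = t^{-1}\sum_{\bm{\gamma}\,:\, v^{\rm eff}(\bm{\gamma}) = (x-y)/t}
\frc{\rho_{\rm p}(\bm{\gamma})\,f(\bm{\gamma})}{|(v^{\rm eff})'(\bm{\gamma})|}\, h_i^{\rm dr}(\bm{\gamma})\,h_j^{\rm dr}(\bm{\gamma}),
\end{equation}
recovering the homogeneous result of \cite{dsdrude}.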
\subsection{Connection with hydrodynamic projections}\label{sshydroproj}
The main ideas of hydrodynamic projections, in the cases of two-point functions at the Euler scale, can be gathered within two statements. First, correlations are transported solely by (ballistically propagating) conserved densities. Second, the overlap between a local observable and such a propagating conserved density, in the fluid cell containing the local observable, is obtained by the hydrodynamic inner product \eqref{hils} within this cell. Using this, the expressions \eqref{jq}-\eqref{jj}, involving currents, are in fact consequences of \eqref{qq} using hydrodynamic projection theory. Here we first re-write the expressions obtained above in the hydrodynamic-projection form. We then show how taking this form implies the Euler-scale fluctuation-dissipation principle we have used to derive the expressions \eqref{jq}-\eqref{jj}.
\subsubsection{Re-writing in hydrodynamic-projection form}
First, the integral operator $S_{(y,0)\to(x,t)}$, acting on scalar fields and giving vector fields, that generates the charge two-point function as
\begin{equation}\label{qqop}
\langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_0]}^{\rm Eul} = \int_{\cal S} {\rm d} \bm{\theta}\,h_i(\bm{\theta}) \big(S_{(y,0)\to(x,t)}h_j\big)(\bm{\theta})
\end{equation}
is given by
\begin{equation}\label{Cop}
S_{(y,0)\to(x,t)} = (1-n_{x,t}T)^{-1}\, \rho_{\rm p}(x,t)\, f(x,t)\,\mathsf{\Gamma}_{(y,0)\to(x,t)}\,
(1-T^{\rm T}n_{y,0})^{-1}.
\end{equation}
Relation \eqref{Cop} is simply a re-writing of \eqref{qq}. Note that by symmetry of the correlation functions, $S_{(y,0)\to(x,t)}^{\rm T} = S_{(x,t)\to (y,0)}$ where ${\rm T}$ denotes transpose. From hydrodynamic projection, it is known that correlation functions involving currents are obtained by using the linearized Euler operator, which, in a GGE state $n(\theta)$, is given by \cite{dsdrude}
\begin{equation}
A_{[n]} = (1-nT)^{-1} v^{\rm eff} (1-nT);
\end{equation}
it acts on vector fields and gives vector fields.
Re-writing \eqref{jq}-\eqref{jj}, current correlations are obtained as:
\begin{equation}\begin{aligned}
A_{[n_{x,t}]}S_{(y,0)\to(x,t)} &\mbox{\quad (for the current on the left)}\\
S_{(y,0)\to(x,t)}A_{[n_{y,0}]}^{\rm T} &\mbox{\quad (for the current on the right) }\\
A_{[n_{x,t}]}S_{(y,0)\to(x,t)} A_{[n_{y,0}]}^{\rm T} & \mbox{\quad (for both observables being currents).}
\end{aligned}
\end{equation}
These are indeed the expressions expected from hydrodynamic projection principles \cite{dsdrude}. In the homogeneous case we have that $n_t=n_0$ and, using \eqref{homo}, that $S_{(y,0)\to(x,t)} =S_{(x,0)\to (y,-t)} = S_{(x,t)\to (y,0)}$. In this case one usually denotes the operator as $S(x-y,t)$, and we recover $S(x-y,t)^{\rm T} = S(x-y,t)$.
Further, according to hydrodynamic projection principles, one would expect $S_{(y,0)\to(x,t)}$ to solve the evolution equation
\begin{equation}\label{prj1}
\partial_t S_{(y,0)\to(x,t)} + \partial_x \big(A_{[n_{x,t}]} S_{(y,0)\to(x,t)}\big) = 0
\end{equation}
with the initial condition $S_{(y,0)\to(x,0)} = \delta(x-y)C_{[n_{x,0}]}$. Here $C_{[n]}$, for a GGE state $n(\theta)$, is the correlation operator. Its matrix elements, in the space of conserved densities, are the connected integrated two-point functions $C_{[n]}^{\frak{q}_i\frak{q}_j}$ (see \eqref{Cn0} and \eqref{hils}), and as an operator it is \cite{dsdrude}
\begin{equation}\label{Cn}
C_{[n]} = (1-nT)^{-1} \,\rho_{\rm p} f\, (1-T^{\rm T}n)^{-1}.
\end{equation}
Eq.~\eqref{prj1} is the generalisation to space-time dependent states of the equation that was solved in \cite{dsdrude} in order to obtain Euler-scale correlations in homogeneous states. It is an explicit expression of the problem of propagating Euler-scale waves from the initial delta-function correlation \eqref{delta}, in the case of two-point correlations. It is simple to verify that indeed \eqref{prj1} follows from the results obtained: the initial condition holds by using \eqref{gammainit}, and \eqref{prj1} follows from \eqref{qq} and \eqref{jq} and the conservation law for local densities.
\subsubsection{From hydrodynamic projections to Euler-scale fluctuation-dissipation principle}
Above, we saw the hydrodynamic projection evolution problem emerging as a consequence of defining space-time correlation functions using an Euler-scale fluctuation-dissipation principle. Let us now reverse the logic: let us take \eqref{qqop} with \eqref{prj1}, along with the initial condition stated after \eqref{prj1}, as a definition of the scaled dynamical two-point functions of conserved densities. From this, let us show that the Euler-scale fluctuation-dissipation principle \eqref{qderb} holds for two-point functions of conserved densities. The relation $\partial_t \langle\frak{q}_i(x,t)\rangle_{[n_0]}^{\rm Eul} + \partial_x \langle\frak{j}_i(x,t)\rangle_{[n_0]}^{\rm Eul} =0$ follows from the basic GHD results. Taking functional derivatives with respect to $\beta_j(y)$, it implies
\[
\partial_t \frc{\delta\langle\frak{q}_i(x,t)\rangle_{[n_0]}^{\rm Eul}}{\delta \beta_j(y)}
+ \partial_x \frc{\delta\langle\frak{j}_i(x,t)\rangle_{[n_0]}^{\rm Eul}}{\delta\beta_j(y)}
=0.
\]
The results for these functional derivatives are the expressions on the right-hand sides of \eqref{qq} and \eqref{jq}. Using these in the above equation, we indeed find that \eqref{Cop} satisfies \eqref{prj1} with the correct initial condition. Hence, we conclude that $\delta\langle\frak{q}_i(x,t)\rangle_{[n_0]}^{\rm Eul}/\delta \beta_j(y) = \langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_0]}^{\rm Eul}$, as it should.
\subsection{Two-point correlations of generic local observables} \label{ssectgeneric}
Let $n(\bm{\theta})$ be a GGE. We consider the inner product $\langle {\cal O}|\frak{q}_i\rangle_{[n]}$, and it is assumed that spatial correlations within the state $n(\bm{\theta})$ decay faster than the inverse distance. Without loss of generality we assume ${\cal O}$ to be hermitian. If the average of ${\cal O}$ is known in a generic GGE $n(\bm{\theta})$, then we can use
\begin{equation}\label{Oqder}
\langle {\cal O}|\frak{q}_i\rangle_{[n]} = -\frc{\partial}{\partial\beta_i} \langle{\cal O}\rangle_{[n]}.
\end{equation}
In integral operator form, by linearity of the result in $h_i$, there exists a scalar field $V^{\cal O}(\bm{\theta})$ (a hydrodynamic spectral function associated to the local field ${\cal O}$), which is also a functional of $n(\bm{\theta})$, such that
\begin{equation}\label{Oq}
\langle {\cal O}|\frak{q}_i\rangle_{[n]} = \int_{\cal S} {\rm d}\bm{\theta}\, \rho_{\rm p}(\bm{\theta}) f(\bm{\theta})\,V^{\cal O}(\bm{\theta})\, h_i^{\rm dr}(\bm{\theta}).
\end{equation}
Here for later convenience we introduced the factors $\rho_{\rm p}(\bm{\theta}) f(\bm{\theta})$ and used $h_i^{\rm dr}(\bm{\theta})$ instead of $h_i(\bm{\theta})$. For instance, according to results of \cite{dsdrude}, we have
\begin{equation}\label{Vjq}
V^{\frak{q}_i} = h_i^{\rm dr},\qquad V^{\frak{j}_i} = v^{\rm eff}\,h_i^{\rm dr}.
\end{equation}
We denote by $(C_{[n]})_{ij} = C_{[n]}^{\frak{q}_i\frak{q}_j} = \langle \frak{q}_i|\frak{q}_j\rangle_{[n]}$ the overlap, within a fluid cell of GGE $n(\bm{\theta})$, between conserved densities, as per \eqref{hils} -- this is the correlation matrix of conserved densities, the case $N=2$ of \eqref{Cn0}. In integral operator form, this is the correlation operator \eqref{Cn} introduced above. Let us now consider a generalized fluid, with space-time state described by $n_{x,t}(\bm{\theta})\equiv n_t(x;\bm{\theta})$. According to hydrodynamic projection principles, Euler-scale correlation functions can be written as
\begin{eqnarray}
\lefteqn{\langle {\cal O}(x,t){\cal O}'(y,0)\rangle_{[n_0]}^{\rm Eul}}\qquad && \nonumber\\
&=& \sum_{i,j,k,l}
\langle {\cal O}|\frak{q}_i\rangle_{[n_{x,t}]}
(C_{[n_{x,t}]})^{-1}_{ij}\langle \frak{q}_j(x,t)\frak{q}_k(y,0)\rangle_{[n_0]}
(C_{[n_{y,0}]})^{-1}_{k l} \langle \frak{q}_l|{\cal O}\rangle_{[n_{y,0}]}\nonumber\\
&=&
\rho_{\rm p}(x,t)f(x,t) V^{\cal O}(x,t)(1-T^{\rm T}n_{x,t})^{-1}\,\times\nonumber\\
&&\;\times\,
C_{[n_{x,t}]}^{-1}\,S_{(y,0)\to(x,t)}C_{[n_{y,0}]}^{-1}\,
\times\nonumber\\
&&\;\times\,
(1-n_{y,0}T)^{-1}\rho_{\rm p}(y,0)f(y,0) V^{\cal O}(y,0)\nonumber\\
&=&
\int_{\cal S} {\rm d}\bm{\theta}\,\rho_{\rm p}(x,t;\bm{\theta}) f(x,t;\bm{\theta})\, V^{\cal O}(x,t;\bm{\theta}) \,\Big(\mathsf{\Gamma}_{(y,0)\to(x,t)}
V^{{\cal O}'}(y,0)\Big)(\bm{\theta}).\label{tpfctgen}
\end{eqnarray}
The first equality is explained as follows. Reading from the right to the left, we first overlap the observable ${\cal O}'$ with a complete set of conserved quantities $\frak{q}_l$, with respect to the inner product \eqref{hils} for the state $n_{y,0}$ (at the space-time point $(y,0)$, where the observable ${\cal O}'$ lies). Because the conserved quantities $\frak{q}_l$ do not necessarily form an orthonormal set, we introduced the inverse of the correlation matrix $C_{[n_{y,0}]}$ at the space-time point $(y,0)$. These two factors represent the amplitude for ${\cal O}'$ to produce Euler propagating waves of conserved quantities. We then ``transport" these waves from $(y,0)$ to $(x,t)$ by using the dynamical two-point function $\langle\frak{q}_j(x,t)\frak{q}_k(y,0)\rangle_{[n_0]}$ between conserved densities. Finally, we represent the amplitude for the transported wave to correlate with ${\cal O}$ by overlapping with ${\cal O}$ with respect to the inner product at $(x,t)$, introducing the inverse of the correlation matrix $C_{[n_{x,t}]}$ for orthonormality. The second equality is a re-writing in terms of integral operators, using \eqref{Oq} and \eqref{qqop}. Finally, the last equality is obtained by substituting the expressions \eqref{Cop} and \eqref{Cn}. As a check, note that using \eqref{Vjq}, the above indeed reproduces the formulae \eqref{qq}-\eqref{jj}. In particular, in homogeneous states, we use \eqref{homo} and find
\begin{eqnarray}
\langle {\cal O}(x,t){\cal O}'(0,0)\rangle_{[n]}^{\rm Eul} &=&
\int_{\cal S} {\rm d}\bm{\theta}\, \delta(x-v^{\rm eff}(\bm{\theta})t)\,
\rho_{\rm p}(\bm{\theta}) f(\bm{\theta})\, V^{\cal O}(\bm{\theta})
V^{{\cal O}'}(\bm{\theta})\nonumber\\
&=&
t^{-1}\,\sum_{\bm{\theta}\in\bm{\theta}_\star(\xi)}
\frc{\rho_{\rm p}(\bm{\theta}) f(\bm{\theta})}{|(v^{\rm eff})'(\bm{\theta})|}
\, V^{\cal O}(\bm{\theta})
V^{{\cal O}'}(\bm{\theta})\label{tpfctgenhomo}
\end{eqnarray}
where $\xi=x/t$ and $\bm{\theta}_\star(\xi)$ is the set of solutions to $v^{\rm eff}(\bm{\theta}) = \xi$.
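In practice, \eqref{tpfctgenhomo} is straightforward to evaluate once $v^{\rm eff}$, $\rho_{\rm p}$, $f$ and the spectral functions are known on a rapidity grid. The following minimal Python sketch (with purely illustrative arrays playing the role of these quantities) locates the roots of $v^{\rm eff}(\theta) = x/t$ by linear interpolation and performs the sum:
\begin{verbatim}
import numpy as np

# Illustrative single-particle GGE data on a rapidity grid (placeholders only)
th = np.linspace(-6.0, 6.0, 2001)
v_eff = np.tanh(th)                              # effective velocity
rho_p = np.exp(-np.cosh(th))*np.cosh(th)/(2*np.pi)
f = 1.0 - 2*np.pi*rho_p/np.cosh(th)              # Fermionic factor 1 - n, with p' = cosh(th)
V_O = np.cosh(th)                                # spectral functions of the two local fields
V_Op = np.cosh(th)

def euler_two_point(x, t):
    # eq. (tpfctgenhomo): (1/t) * sum over roots of v_eff(th) = x/t
    g = v_eff - x/t
    total = 0.0
    roots = np.where(g[:-1]*g[1:] < 0)[0]        # sign changes bracket the roots
    for i in roots:
        w = -g[i]/(g[i+1] - g[i])                # linear interpolation weight
        val = lambda a: (1 - w)*a[i] + w*a[i+1]
        dv = (v_eff[i+1] - v_eff[i])/(th[i+1] - th[i])   # (v_eff)'(th) at the root
        total += val(rho_p)*val(f)*val(V_O)*val(V_Op)/abs(dv)
    return total/t

print(euler_two_point(x=1.0, t=2.0))
\end{verbatim}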
It turns out that, in integrable QFT, there exists a formula for the averages $\langle{\cal O}\rangle_{[n]}$ in GGEs, for any local field ${\cal O}$ \cite{LM99,M13}. This formula, called the Leclair-Mussardo formula, involves an infinite summation over multiple integrals of form factors of the field ${\cal O}$. Nevertheless, its truncations can be used to numerically approximate expectation values. The Leclair-Mussardo formula was proven in \cite{B11}, and it was used in \cite{cdy} in order to provide further evidence for the proposed one-point averages of currents.
The formula has the structure of a sum over all numbers of particles $k$ of ``connected" diagonal matrix elements $F^{\cal O}_k(\bm{\theta}_1,\ldots,\bm{\theta}_k) = \langle \bm{\theta}_1,\ldots,\bm{\theta}_k|{\cal O}|\bm{\theta}_1,\ldots,\bm{\theta}_k\rangle_{\rm conn.}$ of the field ${\cal O}$ (this is defined in \cite{LM99}). Consider a GGE $n(\bm{\theta})$. Then the formula is
\begin{equation}
\langle{\cal O}\rangle_{[n]} = \sum_{k=0}^{\infty} \frc1{k!}
\int_{{\cal S}^{\times k}} \prod_{j=1}^k \lt(\frc{{\rm d}\bm{\theta}_j}{2\pi}\,n(\bm{\theta}_j)\,\rt)
F^{\cal O}_k(\bm{\theta}_1,\ldots,\bm{\theta}_k).
\end{equation}
Recall that in our simplified notation, $\bm{\theta}$ represents the combination of a rapidity and any particle type the model may admit.
Importantly, in this formula, the information about the state is fully contained within the integration measure ${\rm d}\bm{\theta}\, n(\bm{\theta})$. Using \eqref{Oqder} and \eqref{dbn}, we therefore find
\begin{equation}
\langle {\cal O}|\frak{q}_i\rangle_{[n]} =
\sum_{k=1}^{\infty} \frc1{k!}
\int_{{\cal S}^{\times k}} \prod_{j=1}^k \lt(\frc{{\rm d}\bm{\theta}_j}{2\pi}\,n(\bm{\theta}_j)\,\rt)\,
\sum_{j=1}^k h_i^{\rm dr}(\bm{\theta}_j) \, f(\bm{\theta}_j)\,
F^{\cal O}_k(\bm{\theta}_1,\ldots,\bm{\theta}_k).
\end{equation}
The function $F_k^{\cal O}$ is symmetric in all its arguments, and we may identify
\begin{equation}\label{VOr}
V^{\cal O}(\bm{\theta}) =
\sum_{k=0}^{\infty} \frc1{k!}
\int_{{\cal S}^{\times k}} \prod_{j=1}^{k} \lt(\frc{{\rm d}\bm{\theta}_j}{2\pi}\,n(\bm{\theta}_j)\,\rt)\,
(2\pi\rho_{\rm s}(\bm{\theta}))^{-1}\,
F^{\cal O}_{k+1}(\bm{\theta}_1,\ldots,\bm{\theta}_{k},\bm{\theta})
\end{equation}
where $\rho_{\rm s}(\bm{\theta})$ is the state density, given in \eqref{tba}. The state dependence is within the integration measure and the state density; the connected diagonal matrix element $F^{\cal O}_{k+1}(\bm{\theta}_1,\ldots,\bm{\theta}_{k},\bm{\theta})$ is purely a property of the field ${\cal O}$. It is interesting to re-specialize to ${\cal O}$ being a conserved density or current in order to verify that one indeed recovers \eqref{Vjq} from \eqref{VOr}. This is done in Appendix \ref{appLMspectral}.
Using \eqref{Cop} and reverting to the explicit notation $\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})$ for the propagator via \eqref{propg}, we thus obtain
\begin{eqnarray}\label{OO}
\lefteqn{\langle {\cal O}(x,t){\cal O}'(y,0)\rangle_{[n_0]}^{\rm Eul}} &&\\
&& =
\int_{\cal S}{\rm d}\bm{\theta}\int_{\cal S}{\rm d}\bm{\alpha}\,\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(x,t;\bm{\theta}) f(x,t;\bm{\theta}) \;\times\nonumber\\ && \qquad \times\;
\sum_{k,k'=0}^{\infty} \frc1{k!(k')!}
\int_{{\cal S}^{\times k}} \prod_{j=1}^{k} \lt(\frc{{\rm d}\bm{\theta}_j}{2\pi}\,n_t(x;\bm{\theta}_j)\,\rt)\,
\int_{{\cal S}^{\times k'}} \prod_{j=1}^{k'} \lt(\frc{{\rm d}\bm{\theta}_{j}'}{2\pi}\,n_0(y;\bm{\theta}_{j}')\,\rt)
\;\times\nonumber\\
&&\qquad \times\;
\big(2\pi\rho_{\rm s}(x,t;\bm{\theta})\big)^{-1}
F^{\cal O}_{k+1}(\bm{\theta}_1,\ldots,\bm{\theta}_{k},\bm{\theta})\;
\big(2\pi\rho_{\rm s}(y,0;\bm{\alpha})\big)^{-1}
F^{{\cal O}'}_{k+1}(\bm{\theta}_1',\ldots,\bm{\theta}_{k'}',\bm{\alpha}).\nonumber
\end{eqnarray}
It is remarkable that such a complete formula exists in integrable field theory, for very general dynamical Euler-scale two-point correlation functions of local fields in inhomogeneous, non-stationary states. This formula is new both in the inhomogeneous case, and in the case of a homogeneous GGE; in the latter case, recall that the propagator simplifies to \eqref{homo}.
\medskip
\noindent {\bf Remark.} It is very likely that the form factor series \eqref{OO}, in homogeneous GGEs, can be obtained directly using an appropriate spectral expansion of the two-point function. Indeed, the structure of this series is extremely suggestive of the techniques introduced in \cite{D1,D2}, based on the ideas of the Gelfand-Naimark-Segal construction. From these techniques, the trace expression representing the GGE average of a product of local fields is expressed as an expansion in ``GGE form factors" very similar to the form factor expansion of vacuum two-point functions, in which each GGE form factor is itself a GGE trace of a single local field with additional particle creation / annihilation operator inserted. The leading term at the Euler scale is that with one particle and its hole at the same rapidity, so that, pictorially,
\begin{eqnarray}
\lefteqn{\frc{{\rm Tr} \lt(e^{-\sum_i \beta_i Q_i} {\cal O}(x,t) {\cal O}'(y,0)\rt)}{{\rm Tr}\lt(e^{-\sum_i \beta_i Q_i}\rt)}} && \nonumber\\
&\sim& \int {\rm d}\bm{\theta}\frc{{\rm Tr} \lt(e^{-\sum_i \beta_i Q_i} {\cal O}(x,t) A(\bm{\theta})A^\dag(\bm{\theta})\rt)}{{\rm Tr}\lt(e^{-\sum_i \beta_i Q_i}\rt)}\;
\frc{{\rm Tr} \lt(e^{-\sum_i \beta_i Q_i} {\cal O}'(y,0) A(\bm{\theta})A^\dag(\bm{\theta})\rt)}{{\rm Tr}\lt(e^{-\sum_i \beta_i Q_i}\rt)}.
\end{eqnarray}
Each single-field trace can be evaluated using the Leclair-Mussardo formula, giving a right-hand side similar to that of \eqref{OO}. We hope to come back to this problem in a future work.
\section{Examples}\label{sectex}
\subsection{Sinh-Gordon and Lieb-Liniger models}
\subsubsection{Sinh-Gordon model}
The sinh-Gordon model is an integrable relativistic QFT with Lagrangian density
\begin{equation}
\mathcal{L}= \frac{1}{2}\partial_\mu\Phi\partial^\mu\Phi-\frac{M^2}{g^2}(\cosh(g\Phi)-1) \label{ShGaction}
\end{equation}
for a real scalar field $\Phi$, where $g$ is a coupling parameter and $M$ is a mass scale. Its TBA description contains a single particle of Fermionic type, so that ${\cal S} = {\mathbb{R}}$, $\bm{\theta}=\theta$ and $f(\theta) = 1-n(\theta)$. We may choose $\theta$ as the rapidity, with $p(\theta) = m\sinh\theta$ and $E(\theta) = m\cosh\theta$, and the physical mass and differential scattering phase are given by
\begin{equation}
m^2 = \frc{\sin \pi a}{\pi a} M^2,\qquad \varphi(\theta,\alpha)=\frac{2\sin\pi a}{\sinh^2(\theta-\alpha)+\sin^2\pi a},\qquad \mbox{with}\quad a=\frac{g^2}{8\pi+g^2}.
\end{equation}
As a set of natural local fields, one may consider the local conserved densities and currents of the model. They correspond to the spectral functions
\begin{equation}
h_s(\theta) = e^{s\theta},\qquad s=\pm 1,\pm 3,\pm 5,\ldots
\end{equation}
This includes the density of momentum ($h_1-h_{-1}$) and the density of energy ($h_1+h_{-1}$), as well as higher-spin local conserved densities. The formulae derived in subsections \ref{ssecgen} and \ref{ssectexact} immediately give correlation functions for these densities in inhomogeneous, non-stationary situations (generalizing the homogeneous, stationary two-point function formulae found in \cite{dsdrude}).
One may also obtain two-point correlation function formulae for other local fields that are not local conserved densities and currents, using the results of subsection \ref{ssectgeneric}. There exist explicit results for one-point functions of certain exponential fields in GGEs, avoiding the complicated Leclair-Mussardo series, which thus can be used to extract $V^{\cal O}(\theta)$ as defined in \eqref{Oqder}, \eqref{Oq}. It was found in \cite{NS13,N14,BP16} that
\begin{equation}
\frac{\langle e^{(k+1)g\Phi}\rangle_{[n]}}{\langle e^{kg\Phi}\rangle_{[n]}}=1+4\sin(\pi a(2k+1))\int\frc{{\rm d} \theta}{2\pi}\, e^\theta n(\theta) [e^{-1}]_k^{\rm dr}
(\theta)\label{BPeq}\, ,
\end{equation}
where
\begin{equation}
[e^{-1}]_k^{\rm dr}
(\theta)=e^{-\theta}+\int \frc{{\rm d}\alpha}{2\pi}\, \chi_k(\theta,\alpha)n(\alpha) [e^{-1}]_k^{\rm dr}(\alpha)
\end{equation}
is (in our interpretation) the $k$-dressing of the function $e^{-1}(\theta) = e^{-\theta}$ seen as a vector field: the dressing with respect to a different, $k$-dependent scattering kernel given by
\begin{equation}
\chi_k(\theta,\alpha)=2\,{\rm Im}\left(\frac{e^{2k{\rm i}\pi a}}{\sinh(\theta-\alpha-{\rm i}\pi a)}\right).
\end{equation}
Defining
\begin{equation}
H_k = 1+4\sin(\pi a(2k+1))\int\frc{{\rm d} \theta}{2\pi}\, e^\theta n(\theta) [e^{-1}]_k^{\rm dr}(\theta),
\end{equation}
and using $\langle 1\rangle_{[n]}=1$, the one-point function can be obtained for all $k\in{\mathbb{N}}$ as $\langle e^{kg\Phi}\rangle_{[n]}= \prod_{j=0}^{k-1} H_j$. Differentiating $H_k$ with respect to $\beta_i$ can be done using \eqref{dbn} and \eqref{dmu}, giving
\begin{equation}
-\frc{\partial}{\partial\beta_i}H_k =
4\sin(\pi a(2k+1))\int\frc{{\rm d} \theta}{2\pi}\, e^{\rm dr}_k(\theta) n(\theta) f(\theta) h_i^{\rm dr}(\theta) [e^{-1}]_k^{\rm dr}
(\theta)
\end{equation}
where
\begin{equation}
e_k^{\rm dr}
(\theta)=e^{\theta}+\int \frc{{\rm d}\alpha}{2\pi}\, \chi_k(\alpha,\theta)n(\alpha) e_k^{\rm dr}(\alpha)
\end{equation}
is the $k$-dressing of the function $e(\theta) = e^{\theta}$ seen as a scalar field. Using \eqref{Oq}, we then find, for all $k\in{\mathbb{N}}$,
\begin{equation}
V^k(\theta) = \frc2{\pi\rho_{\rm s}(\theta)}\sum_{j=0}^{k-1}
\sin(\pi a(2j+1))\,e^{\rm dr}_j(\theta)\, [e^{-1}]_j^{\rm dr} (\theta)\,
\prod_{l=0\atop l\neq j}^{k-1} H_l.
\end{equation}
By the ${\mathbb{Z}}_2$ symmetry\footnote{We exclude states which are not ${\mathbb{Z}}_2$ symmetric, as they require an extension of the present formalism.} $\Phi\mapsto-\Phi$ we have $V^{-k}(\theta) = V^k(\theta)$. Further \cite{BP16}, there is a symmetry $\langle e^{kg\Phi}\rangle_{[n]} = \langle e^{(k+a^{-1})g\Phi}\rangle_{[n]}$, and thus $V^k(\theta) = V^{k+a^{-1}}(\theta)$, which, for irrational couplings $a$, allows us to reach arbitrary values of $k\in{\mathbb{R}}$. The resulting $V^k(\theta)$ can be inserted into \eqref{tpfctgen} and \eqref{tpfctgenhomo} in order to get Euler-scale two-point correlation functions of fields $e^{kg\Phi}$ and $e^{k'g\Phi}$ for any $k,k'$.
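As an illustration of how these quantities can be evaluated in practice, the following Python sketch computes $H_k$ and $V^k(\theta)$ on a rapidity grid for a given occupation function, by solving the $k$-dressing equations and the TBA dressing via matrix inversion; the coupling, mass and occupation function below are purely illustrative choices.
\begin{verbatim}
import numpy as np

# Illustrative sinh-Gordon data
a, m = 0.3, 1.0
th = np.linspace(-6.0, 6.0, 241)
dth = th[1] - th[0]
D = th[:, None] - th[None, :]
phi = 2*np.sin(np.pi*a)/(np.sinh(D)**2 + np.sin(np.pi*a)**2)   # TBA kernel
n = 1.0/(1.0 + np.exp(m*np.cosh(th)))      # illustrative occupation function

def solve_lin(kernel, source):
    # solve g(th) = source(th) + int dal/(2 pi) kernel(th,al) n(al) g(al)
    A = np.eye(len(th)) - kernel*n[None, :]*dth/(2*np.pi)
    return np.linalg.solve(A, source)

rho_s = solve_lin(phi, m*np.cosh(th))/(2*np.pi)    # state density (p')^dr/(2 pi)

def chi(k):
    return 2*np.imag(np.exp(2*np.pi*a*k*1j)/np.sinh(D - 1j*np.pi*a))

def dressed_pair(j):
    # [e^{-1}]_j^dr (kernel chi_j(th,al)) and e_j^dr (kernel chi_j(al,th))
    ck = chi(j)
    return solve_lin(ck, np.exp(-th)), solve_lin(ck.T, np.exp(th))

def V_k(kmax):
    # hydrodynamic spectral function of exp(kmax*g*Phi)
    H, terms = [], []
    for j in range(kmax):
        em, ep = dressed_pair(j)
        H.append(1 + 4*np.sin(np.pi*a*(2*j+1))*np.sum(np.exp(th)*n*em)*dth/(2*np.pi))
        terms.append(np.sin(np.pi*a*(2*j+1))*ep*em)
    out = np.zeros_like(th)
    for j in range(kmax):
        out += terms[j]*np.prod([H[l] for l in range(kmax) if l != j])
    return 2.0*out/(np.pi*rho_s)

print(V_k(2)[len(th)//2])    # value at theta = 0
\end{verbatim}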
The generalized hydrodynamics of the classical limit of the sinh-Gordon model was investigated in \cite{Bastia17}, where the classical limit of $V^k(\theta)$ was derived. Euler-scale two-point functions obtained by \eqref{tpfctgenhomo} were verified to agree with direct numerical simulations.
\subsubsection{Lieb-Liniger model}
The repulsive Lieb-Liniger model is defined (for mass equal to $1/2$) by the second-quantized Hamiltonian
\begin{equation}
H = \int {\rm d} x\,\big(\partial_x \Psi^\dag \partial_x\Psi(x) + c\Psi^\dag\Psi^\dag\Psi\Psi\big)
\end{equation}
for a single complex bosonic field $\Psi$, where $c>0$ is a coupling parameter. It is Galilean invariant, and its TBA description contains a single quasi-particle type, so we take ${\cal S} = {\mathbb{R}}$ and write $\bm{\theta} = \theta$. One may choose the parametrization given by the momentum, $\theta = p\in{\mathbb{R}}$ (so that $p'(\theta)=1$ and $\rho_{\rm s} = 1^{\rm dr}/(2\pi)$). There are various TBA descriptions possible, but in one convenient description, the quasi-particle is of Fermionic type, hence $f(p) = 1-n(p)$. In this description, the differential scattering phase is given by
\begin{equation}
\varphi(p) = \frc{2c}{p^2+c^2}.
\end{equation}
Again, as a set of natural local fields, one may consider the local conserved densities and currents of the model; they correspond to the spectral functions
\begin{equation}
h_r(p) = p^{r-1},\qquad r=1,2,3,\ldots
\end{equation}
This includes the density of particles ($r=1$), the density of momentum ($r=2$) and the density of energy ($r=3$). Again, the formulae derived in subsections \ref{ssecgen} and \ref{ssectexact} give correlation functions for these densities in inhomogeneous, non-stationary situations.
One may also obtain two-point correlation formulae for other local fields that are not local conserved densities and currents, using the results of subsection \ref{ssectgeneric}. Consider the $K^{\rm th}$ power of the particle density,
\begin{equation}
{\cal O}_K = \frc1{(K!)^2}(\Psi^\dag)^K (\Psi)^K.
\end{equation}
It was shown in \cite{Po11} that in a homogeneous state characterized by the occupation function $n(p)$, its average takes the form
\begin{equation}
\langle {\cal O}_K\rangle_{[n]} = \int_{{\mathbb{R}}^{K}}\Bigg(\prod_{r=1}^{K} \frc{{\rm d} p_r}{2\pi}\, n(p_r)h_{r}^{\rm dr}(p_r)\Bigg) \prod_{j> l} \frc{p_j-p_l}{(p_j-p_l)^2 + c^2}.
\end{equation}
Taking the $\beta_i$-derivative is simple, by using \eqref{dbn} and the general formula \eqref{dmu}:
\begin{equation}
-\frc{\partial}{\partial\beta_i} \langle {\cal O}_K\rangle_{[n]} =
\int_{{\mathbb{R}}^{K}}\Bigg(\prod_{r=1}^{K} \frc{{\rm d} p_r}{2\pi}\, n(p_r)h_{r}^{\rm dr}(p_r)\Bigg)
\sum_{s=1}^{K}
g_s^{\rm dr}(p_s) f(p_s)h_i^{\rm dr}(p_s)
\end{equation}
where
\begin{equation}
g_s(p_s) =
\prod_{j> l} \frc{p_j-p_l}{(p_j-p_l)^2 + c^2}
\end{equation}
is defined as a function of $p_s$, with the $p_{r\neq s}$ held as fixed parameters. From this we identify
\begin{equation}
V^{{\cal O}_K}(p) =
\sum_{s=1}^{K}
\int_{{\mathbb{R}}^{K-1}}\Bigg(\prod_{r=1\atop r\neq s}^{K} \frc{{\rm d} p_r}{2\pi}\, n(p_r)h_{r}^{\rm dr}(p_r)\Bigg)
\frc{h_s^{\rm dr}(p)\,g_s^{\rm dr}(p)}{1^{\rm dr}(p)}
\end{equation}
where $1^{\rm dr}(p)$ is the dressed constant function $1$. This gives two-point functions by insertion in \eqref{tpfctgen} and, in the homogeneous case, in \eqref{tpfctgenhomo}.
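For instance, for $K=1$, where ${\cal O}_1 = \Psi^\dag\Psi$ is the particle density, the product over pairs is empty: $g_1=1$, hence $g_1^{\rm dr}=1^{\rm dr}$, and $V^{{\cal O}_1}(p) = h_1^{\rm dr}(p)\,1^{\rm dr}(p)/1^{\rm dr}(p) = h_1^{\rm dr}(p)$, in agreement with \eqref{Vjq}; likewise $\langle{\cal O}_1\rangle_{[n]} = \int {\rm d} p\,\rho_{\rm p}(p)$ is the particle density, as it should be.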
We note finally that in the very recent paper \cite{BPC18} new expressions for expectation values of the fields ${\cal O}_K$ are obtained using the non-relativistic limit of the sinh-Gordon model and the results of \cite{NS13,N14,BP16} recalled above. These appear to be more efficient. By using the methods shown here, this can in turn be used to obtain different expressions for $V^{{\cal O}_K}(\theta)$. This will be worked out in a future work \cite{BPprepa}.
\subsection{Free particle models}
In free particle models, formulae \eqref{qq} - \eqref{jj} simplify. Using \eqref{free}, the fact that the dressing operator is trivial, and $n_t(x;\bm{\theta}) = n_0(x-v^{\rm gr}(\bm{\theta})t,\bm{\theta})$, one obtains the simple expression
\begin{eqnarray}
\langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_0]}^{\rm Eul}
&=& \int_{\cal S} {\rm d} \bm{\theta}\,\rho_{\rm p}(x,t;\bm{\theta})\,
f(x,t;\bm{\theta})\,h_i(\bm{\theta})\,h_j(\bm{\theta})\,\delta(x-y-v^{\rm gr}(\bm{\theta})t) \nonumber\\
&=& \sum_{\bm{\theta}\; \in\; (v^{\rm gr})^{-1}(\frc{x-y}t)} \frc{\rho_{\rm p}(y,0;\bm{\theta})f(y,0;\bm{\theta})}{|(v^{\rm gr})'(\bm{\theta})|\,t}\,h_i(\bm{\theta})\,h_j(\bm{\theta}).
\label{qqfree}
\end{eqnarray}
Here $(v^{\rm gr})^{-1}(\xi) = \{\bm{\theta}:v^{\rm gr}(\bm{\theta}) = \xi\}$. The integral form has the clear physical interpretation of a correlation coming from the ballistically propagating particles on the ray connecting the two fields. It is evaluated by summing over all solutions to $v^{\rm gr}(\bm{\theta}) = (x-y)/t$, of which there is at most one for every particle type. One similarly obtains current correlations by multiplying by factors of $v^{\rm gr}(\bm{\theta})$.
For instance, in the quantum Ising model (a free Majorana fermion), where there is a single particle type, one has $p(\theta) = m\sinh\theta$, $E(\theta) = m\cosh\theta$, $v^{\rm gr}(\theta) = \tanh\theta$ and $f(y,0;\theta) = 1-2\pi \rho_{\rm p}(y,0;\theta)/(m\cosh\theta)$. If the initial state is locally thermal with local inverse temperature $\beta(y)$, then
\begin{equation}
2\pi \rho_{\rm p}(y,0;\theta) = \frc{m \cosh\theta}{1+\exp\lt[\beta(y)
m\cosh\theta\rt]}.
\end{equation}
In this case, the energy density dynamical two-point function (writing $\frak{q}_1 = T^{00}$, the time-time component of the stress-energy tensor) is zero outside the lightcone, and otherwise is
\begin{eqnarray}
\lefteqn{\langle T^{00}(x,t) T^{00}(y,0)\rangle^{\rm Eul}_{[n_0]}} && \\
&&=
\frc{m^3 \cosh^5\theta}{2\pi t\,(1+\exp\lt[-\beta(y)
m\cosh\theta\rt])(1+\exp\lt[\beta(y)
m\cosh\theta\rt])}\Big|_{\theta = {\rm arctanh}\;((x-y)/t)} \nonumber\\
&&=
\frc{m^3 t^4}{8\pi s^5\cosh^2\lt(\frc{\beta(y)mt}{2s}\rt)}\qquad\qquad
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mbox{(Ising model)}\nonumber
\end{eqnarray}
where $s = \sqrt{t^2-(x-y)^2}$ is the relativistic time-like distance between the fields.
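The agreement between the general expression \eqref{qqfree} and this closed form can be checked directly; a minimal Python sketch (the values of the mass and of the local inverse temperature below are arbitrary illustrative choices) reads:
\begin{verbatim}
import numpy as np

def ising_T00T00_root_sum(x, t, beta_y, m=1.0):
    # eq. (qqfree) for the Ising model: single root theta of v_gr = tanh(theta) = x/t;
    # here x stands for x - y; returns 0 outside the lightcone
    if abs(x) >= t:
        return 0.0
    theta = np.arctanh(x/t)
    E = m*np.cosh(theta)
    n = 1.0/(1.0 + np.exp(beta_y*E))          # thermal occupation at the emission point
    rho_p = n*m*np.cosh(theta)/(2*np.pi)      # rho_p = n p'/(2 pi), with p' = m cosh(theta)
    dv = 1.0/np.cosh(theta)**2                # (v_gr)'(theta)
    return rho_p*(1.0 - n)*E**2/(dv*t)

def ising_T00T00_closed_form(x, t, beta_y, m=1.0):
    s = np.sqrt(t**2 - x**2)
    return m**3*t**4/(8*np.pi*s**5*np.cosh(beta_y*m*t/(2*s))**2)

# the two evaluations agree:
print(ising_T00T00_root_sum(0.5, 2.0, 1.3), ising_T00T00_closed_form(0.5, 2.0, 1.3))
\end{verbatim}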
Similarly, consider the correlation function of particle densities (writing the density as $\frak{q}_0 = \frak{n}$) in the free nonrelativistic spinless fermion gas, evolved from a state with space-dependent inverse temperature $\beta(y)$ and chemical potential $\mu(y)$. For instance, this describes the Tonks-Girardeau limit of the Lieb-Liniger model. We find
\begin{equation}
\langle \frak{n}(x,t)\frak{n}(y,0)\rangle^{\rm Eul}_{[n_0]}=
\frc{m}{8\pi t \cosh^2\lt(\frc{\beta(y)}2
\lt(\frc{m(x-y)^2}{2t^2}-\mu(y)\rt)\rt)}
\qquad\qquad\mbox{(Tonks-Girardeau)}.
\end{equation}
In free particle models, it is possible to develop the full program outlined in Subsection \ref{ssecgen}, and to obtain explicit expressions for all $N$-point correlation functions, at least for conserved densities. The procedure is quite straightforward, and the results can be expressed as follows. Let $w(x;\bm{\theta}) = \sum_{j=0}^{\infty} \beta_j(x)h_j(\bm{\theta})$ be the TBA driving term of the GGEs \eqref{inistate}. Then, for $N=2,3,4,\ldots$ we have
\begin{eqnarray}\label{qqqfree}
\lefteqn{\Big\langle \prod_{k=1}^N \frak{q}_{i_k}(x_k,t_k)\Big\rangle^{\rm Eul}_{[n_0]} }\\ && = \sum_{\bm{\theta}\; \in\; (v^{\rm gr})^{-1}(\frc{x_2-x_1}{t_2-t_1})}
\frc{p'(\bm{\theta})\,(t_2-t_1)^{-1}}{2\pi |(v^{\rm gr})'(\bm{\theta})|}
\,g_N\lt(\frc{x_1t_2 - x_2t_1}{t_2-t_1};\bm{\theta}\rt)\;\times\nonumber\\ && \qquad\qquad\times\;
\prod_{k=3}^N \delta\lt(\frc{(x_k-x_1)(t_2-t_1)-(x_2-x_1)(t_k-t_1)}{t_2-t_1}\rt)\,
\prod_{k=1}^N h_{i_k}(\bm{\theta}). \nonumber
\end{eqnarray}
Here the functions $g_N(y;\bm{\theta})$ are defined via generating functions as
\begin{equation}
\mathsf{F}_a(w(y;\bm{\theta})) - \mathsf{F}_a(w(y;\bm{\theta})-z) =
\sum_{N=1}^{\infty} \frc{z^N}{N!}\, g_N(y;\bm{\theta})
\end{equation}
where $a$ is the quasi-particle type associated to $\bm{\theta} = (\theta,a)$, and where the free energy function $\mathsf{F}_a(w)$ is given in \eqref{Faw}. In \eqref{qqqfree}, the delta-functions on the right-hand side constrain the equalities $(x_k-x_1)/(t_k-t_1) = (x_2-x_1)/(t_2-t_1)$ (for all $k$), and thus $(x_k-x_j)/(t_k-t_j) = (x_2-x_1)/(t_2-t_1)$ (for all $j\neq k$), which equal $v^{\rm gr}(\bm{\theta})$. Therefore, we may replace the argument $(x_1t_2 - x_2t_1)/(t_2-t_1)$ of the function $g_N(\cdot;\bm{\theta})$ by $(x_jt_k - x_kt_j)/(t_k-t_j)$ or by $x_k - v^{\rm gr}(\bm{\theta})t_k$ for any $j\neq k$.
In fact, all Euler-scale correlation functions, for $N=1,2,3,4,\ldots$, can be obtained formally by using generating functionals over the generating parameters $\varepsilon_k(x)$ via
\begin{eqnarray}
\lefteqn{\Big\langle \exp\lt[\sum_k \int_{{\mathbb{R}}} {\rm d} x\,\varepsilon_k(x)\frak{q}_{i_k}(x,t_k)\rt]-1\Big\rangle_{[n_0]}^{\rm Eul}} \\ && =
\int_{\cal S} \frc{{\rm d}\bm{\theta}}{2\pi}\, p'(\bm{\theta}) \int_{{\mathbb{R}}} {\rm d} u\,\lt(\mathsf{F}_a(w(u;\bm{\theta})) -
\mathsf{F}_a\Big(w(u;\bm{\theta}) - \sum_k \varepsilon_k(u+v^{\rm gr}(\bm{\theta})t_k)h_{i_k}(\bm{\theta})\Big) \rt)
\nonumber
\end{eqnarray}
where on the right-hand side $\bm{\theta} = (\theta,a)$.
Conjecturally, correlation functions involving currents are obtained by replacing factors $h_{i_k}(\bm{\theta})$ by $v^{\rm gr}(\bm{\theta}) h_{i_k}(\bm{\theta})$.
We note that \eqref{qqfree} and \eqref{qqqfree} give general, explicit expressions for Euler-scale $N$-point correlation functions of conserved densities in free theories. There is no integral over spectral parameters: for every particle type, there is a single velocity that contributes to the connected Euler-scale correlation, which is the velocity of the particle propagating from the initial to the final point. For similar reasons, correlation functions for $N\geq 3$ have a delta-function structure which imposes colinearity of all space-time positions. Connected Euler-scale correlations can only arise from single quasi-particles travelling through each of the space-time points. Due to this, all correlation functions depend on the initial state only through the local state at a single position: the position, at time 0, crossed by the single ray passing through all space-time points (this is $y$ in \eqref{qqfree} and more generally $x_k - v^{\rm gr}(\bm{\theta})t_k$ in \eqref{qqqfree}). Therefore, the only effect of the weak inhomogeneity is to give a dependence on the state via this single position. All these properties are expected to be broken in inhomogeneous states of interacting models. The dependence is not solely on the state at a single position, as the knowledge of the state at other positions is necessary in order to evaluate the effect of a disturbance on the quasi-particle trajectories (hence to evaluate the response function). Similarly, we do not expect a delta-function structure for higher-point functions in interacting models.
\section{Discussion and analysis}\label{sectdis}
\subsection{Interpretation of the general formulae}
Formulae \eqref{qq}-\eqref{jj} can be given a relatively clear interpretation. A correlation function is expressed as an integral over all spectral parameters of the product of the quantity of charge of the first observable, $h_i^{\rm dr}(x,t;\bm{\theta})$, carried by the spectral parameter $\bm{\theta}$ and dressed with respect to the local bath $n_t(x)$, times the propagation to the point $(x,t)$ of the quantity of charge of the second observable, $h_j^{\rm dr}(y,0;\bm{\alpha})$, dressed by the local bath $n_0(y)$. The propagation factor is $\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})\,\rho_{\rm p}(x,t;\bm{\theta}) f(x,t;\bm{\theta})$. It includes the propagator itself, $\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})$, representing the effect of quasi-particles ballistically propagating from $(y,0)$ to $(x,t)$, as well as the density $\rho_{\rm p}(x,t;\bm{\theta})$, which weighs this effect with the quantity of quasi-particles actually propagating. It also includes the factor $f(x,t;\bm{\theta})$, which modulates the weight according to the quasi-particle statistics; for instance, for fermions, this factor forbids the entry of a new quasi-particle if the local occupation function is saturated to $n_t(x;\bm{\theta})=1$, making the correlation effect of this quasi-particle vanish. The same structure occurs for more general local observables in \eqref{OO}, where the only complication is in the dressing by the local bath, which involves a sum over all form factors. The nontrivial physics of the propagation of correlations -- the response of the operator at $(x,t)$ to a disturbance by the observable at $(y,0)$ -- is fully encoded within the propagator.
There is an apparent asymmetry between the initial position $(y,0)$ and the final position $(x,t)$, as the factors $\rho_{\rm p}(x,t;\bm{\theta})\,f(x,t;\bm{\theta})$ are only present for the latter position. However, we note that the points $(y,0)$ and $(x,t)$ are not independent; they are related by the evolution equation \eqref{ghd}, and thus every quasi-particle at $(x,t)$ has an antecedent at $(y,0)$. In nontrivial (inhomogeneous, interacting) cases, asymmetry is also explicit in the equation defining the propagator \eqref{Gamma}, where quantities pertaining to the initial state density appear naturally. In these cases, the propagator is not an intrinsic, state-independent property of quasi-particle propagation: it is affected by the initial state in nontrivial ways. It is possible to write the two-point functions in more symmetric ways, such as in \eqref{qqop}, but the choice \eqref{propdef} for the propagator has the advantage that (i) it specializes to delta-functions in simple cases \eqref{homo}, \eqref{free}, \eqref{gammainit}, and (ii) its defining integral equation \eqref{Gamma} only involves quantities that are explicitly non-divergent in any GGE.
The propagator is composed of two elements, as per \eqref{GammaDelta}. The first, the direct propagator, comes from the direct propagation of the disturbance of the initial state due to the observable at $(y,0)$. At the Euler scale, this travels with quasi-particles along their characteristics described by the function $u(x,t;\bm{\theta})$. Only particles with just the right spectral parameter will travel from $(y,0)$ to $(x,t)$, and thus this element should indeed give a delta-function contribution to the propagator.
The second element, the indirect propagator $\mathsf{\Delta}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})$, is more subtle. It comes from the {\em change of trajectories} of quasi-particles due to the disturbance at $(y,0)$. In the explicit calculation in Appendix \ref{sappMain}, it is seen as the change of the characteristics $u(x,t;\bm{\theta})$ upon differentiation with respect to the Lagrange parameter $\beta_j(y)$. The indirect propagator $\mathsf{\Delta}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})$ is still applied to the local dressed quantity $h_j^{\rm dr}(y,0;\bm{\alpha})$, as it is this quantity that is to travel on the slightly modified trajectory in order to create correlations. However, all spectral parameters $\bm{\alpha}$ may generically participate, instead of a single one, because all are involved in determining the trajectory.
As has been noted above, the indirect propagator vanishes in homogeneous states and in free-particle models (see \eqref{homo} and \eqref{free}). The above interpretation makes these facts clear: in homogeneous states, it does not matter if the quasi-particle trajectories are modified, as the state is everywhere the same; and in free models, the trajectories do not depend on the local states, thus are not affected by the disturbance at $(y,0)$.
One can see that the indirect propagator is largely controlled by the effective acceleration $a^{\rm eff}_{[n_0]}(u(x,t;\bm{\theta});\bm{\theta})$, and in particular that it vanishes if the latter does. Recall that the effective acceleration was initially introduced in order to describe force terms due to external, space-dependent fields (that is, weakly inhomogeneous evolution Hamiltonians) \cite{dynote}. Here, it instead encodes the (weak) spatial inhomogeneity of the initial state. The space-dependent GGEs in the fluid cells of the initial fluid state are associated with an inhomogeneous ``Hamiltonian" $\sum_i \beta_i(x) Q_i$, and it would be an evolution with respect to this that would generate force terms controlled by the acceleration field $a^{\rm eff}_{[n_0]}(z;\bm{\theta})$. Here we see that the effective acceleration instead determines, in part, the way in which characteristics are modified due to disturbances.
It is in principle possible to numerically evaluate the expressions \eqref{qq}-\eqref{jj}. Recall that the exact solution \eqref{sol} can be evaluated very efficiently by iteration, as explained in \cite{dsy17}. Therefore, we may assume $n_0(z;\bm{\theta})$, $n_t(z;\bm{\theta})$ and $u(z,t;\bm{\theta})$ to be readily numerically available for all $z,\bm{\theta}$. It is then straightforward to evaluate dressed quantities, which can be done by solving \eqref{dressing} either by iteration, or by discretizing the linear integral equation and inverting the resulting matrix $1-Tn$. Therefore, the only ingredient in \eqref{qq}-\eqref{jj} that is not readily numerically available from previous works is the propagator $\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})$. For this, we write it in the form \eqref{GammaDelta}. We can then evaluate the indirect propagator by solving \eqref{Delta}, where the function $g(\bm{\theta})$ is chosen as either $h_j^{\rm dr}(y,0;\bm{\theta})$ or $v^{\rm eff}(y,0;\bm{\theta})h_j^{\rm dr}(y,0;\bm{\theta})$ depending on the correlator sought. The source term can be evaluated from the quantities already numerically available, and \eqref{Delta} is a linear integral equation which can be solved, for instance, by iteration. One difficulty might lie in the evaluation of derivatives, for instance $u'(x,t;\bm{\gamma})$. One might find it more efficient to differentiate the integral equation \eqref{sol} and solve for $u'(x,t;\bm{\gamma})$ instead of directly taking the derivative numerically.
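As a concrete illustration of the dressing step mentioned above, the following minimal sketch solves a dressing-type equation by discretizing the rapidity variable and inverting the matrix $1-Tn$. The grid, the scattering kernel, the occupation function and the bare charge below are arbitrary choices for a single quasi-particle species, made purely for illustration and not tied to any specific model considered here:
\begin{jllisting}
using LinearAlgebra
# discretized rapidity grid (illustrative choice; single quasi-particle species)
grid = range(-10, 10; length=400); dth = step(grid)
T(th, al) = 1 / (pi * (1 + (th - al)^2))   # example scattering kernel
n(th) = 1 / (1 + exp(th^2 - 1))            # example occupation function
h(th) = th^2                               # example bare one-particle charge
Tmat = [T(th, al) * dth for th in grid, al in grid]
# dressed quantity on the grid: solve (1 - T*diag(n)) h_dr = h
h_dr = (I - Tmat * Diagonal(n.(grid))) \ h.(grid)
\end{jllisting}
The indirect propagator equation \eqref{Delta} has the same linear structure and can be treated by the same discretize-and-solve (or iterative) approach.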
\subsection{Partitioning protocol (domain wall initial condition)}\label{ssectpartprot}
Consider the evolution from an initial density operator
\[
\exp\lt[-\int_{-\infty}^0 {\rm d} x\, \sum_i \beta_i^L \frak{q}_i(x) - \int_{0}^\infty {\rm d} x\, \sum_i \beta_i^R \frak{q}_i(x)\rt],
\]
where the system is initially separated into two different homogeneous states on the left and on the right. This is referred to as the partitioning or cut-and-glue protocol, or as the evolution with domain wall initial condition, and has been studied extensively, see the review \cite{BDreview}. Even though the initial condition is not smooth, as the initial generalized temperatures display an abrupt jump at the origin, profiles quickly smooth out and the fluid approximation is very accurate after a small relaxation time. The GHD solution \cite{cdy,BCDF16}, obtained with an initial fluid state of the form
\begin{equation}\label{inidm}
n_0(x;\bm{\theta}) = n_{\rm L}(\bm{\theta})\Theta(-x) + n_{\rm R}(\bm{\theta})\Theta(x),
\end{equation}
gives extremely accurate predictions, as verified in the XXZ model \cite{BCDF16} and in the hard rod gas \cite{dshardrods}. The solution is a set of ray dependent states
\begin{equation}\label{soldm}
n_t(x;\bm{\theta}) = n(\xi;\bm{\theta}) = n_{\rm L}(\bm{\theta})\Theta(v^{\rm eff}(\xi;\bm{\theta})-\xi)
+ n_{\rm R}(\bm{\theta})\Theta(\xi-v^{\rm eff}(\xi;\bm{\theta}))
\end{equation}
where $\xi=x/t$. Below we will denote by
\[
\langle {\cal O}(x,t){\cal O}'(y,0)\rangle_{[n_{\rm L},n_{\rm R}]}^{\rm Eul}
\]
the scaled correlation functions in this protocol.
Let us analyze certain correlation functions in this setup. Naively, one might think that scaled two-point correlation functions \eqref{qq}-\eqref{jj} for two fields lying on the same ray should equal those in the homogeneous state of this ray, as obtained using \eqref{homo}: correlations should be carried by particles traveling along this ray alone. This is however incorrect. In order to see this, we consider two different situations. For simplicity we concentrate on charge-charge correlations \eqref{qq}, but a similar analysis holds in other cases. See Appendix \ref{apppart} for a study of the characteristics in the partitioning protocol.
\subsubsection{Correlations on a ray away from connection time} Consider the initial domain-wall state to be at time $-t_0<0$ (so $n_0$ is the fluid state after the evolution by $t_0$ from the domain wall), and let $(x,t)$ and $(y,0)$ lie on the same ray emanating from the point $x=0$ at time $-t_0$, that is, $\xi = x/(t+t_0)=y/t_0$. Then the fluid state is the same at $(x,t)$ and at $(y,0)$. Thus we have
\begin{equation}
\langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_{\rm L},n_{\rm R}]}^{\rm Eul}
= \int_{\cal S}{\rm d}\bm{\theta}\int_{\cal S} {\rm d}\bm{\alpha}\,
\mathsf{\Gamma}_{(y,0)\to (x,t)}(\bm{\theta},\bm{\alpha})\,
\rho_{\rm p}(\xi;\bm{\theta})\,
f(\xi;\bm{\theta})\,h_i^{\rm dr}(\xi;\bm{\theta})\,
h_j^{\rm dr}(\xi;\bm{\alpha}).
\end{equation}
First let us look at the contribution from the direct propagator $\delta(y-u(x,t;\bm{\theta}))\delta_{\cal S}(\bm{\alpha}-\bm{\theta})$ (see \eqref{GammaDelta}). For $(x,t)$ and $(y,0)$ on the same ray, the only solutions $\bm{\theta}$ to $y=u(x,t;\bm{\theta})$ are the solutions to $y = x-v^{\rm eff}(\bm{\theta})t$. However, we cannot replace $\delta(y-u(x,t;\bm{\theta}))$ by $\delta(x-y-v^{\rm eff}(\xi;\bm{\theta})t)$, as would be required to reproduce the homogeneous correlators according to \eqref{homo}. Indeed, the variation, with respect to $\theta$, of $u(x,t;\bm{\theta})$ is not the same as that of $v^{\rm eff}(\xi;\bm{\theta})t$, due to the second equation in \eqref{derutilde}. Instead, the direct-propagator contribution to the two-point function is
\begin{equation}\label{dircontr}
\int_{\cal S}{\rm d}\bm{\theta}\,
\frc{\delta(x-y-v^{\rm eff}(\xi;\bm{\theta})t)}{V(\bm{\theta})} \,
\rho_{\rm p}(\xi;\bm{\theta})\,
f(\xi;\bm{\theta})\,h_i^{\rm dr}(\xi;\bm{\theta})\,
h_j^{\rm dr}(\xi;\bm{\theta}).
\end{equation}
where $V(\bm{\theta})$ is defined in \eqref{Vdef}.
The contribution from the indirect propagator $\mathsf{\Delta}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})$ gives an additional correction. This contribution is generically nonzero; in particular, the state at $t=0$ is not homogeneous and thus the effective acceleration $a^{\rm eff}_{[n_0]}(x;\bm{\theta})$ is nonzero.
Therefore, as compared to the homogeneous correlator obtained using \eqref{homo} in \eqref{qq}, there are two corrections: the factor $1/V(\bm{\theta})$ in the direct-propagator contribution \eqref{dircontr}, and the indirect-propagator contribution. We have not shown that these two corrections don't cancel each other, but this seems unlikely. Both corrections are due to the fact that the insertion of an observable in a correlation function perturbs the state as seen by other observables, and that due to the nonlinearity of GHD, this perturbation generically affects the trajectories of quasi-particles. Thus other rays are explored, and the two-point function is not that in the homogeneous state of a single ray.
\subsubsection{Correlations with one observable at connection time} Second, consider the initial domain wall to be at $t=0$. In this case, the state is locally homogeneous at $(y,0)$ for any $y\in{\mathbb{R}}\setminus\{0\}$, therefore $a^{\rm eff}_{[n_0]}(y;\bm{\theta})=0$. As a consequence only the direct propagator remains,
\begin{equation}
\mathsf{\Gamma}_{(y,0)\to(x,t)}(\bm{\theta},\bm{\alpha})
=
\delta(y-u(x,t;\bm{\theta}))\delta_{\cal S}(\bm{\alpha}-\bm{\theta}).
\end{equation}
The expression for the scaled two-point function simplifies to a finite sum, as per \eqref{qq2}. Then we have, for any $x,t,y$ and with $\xi=x/t$,
\begin{equation}\label{partprotasympt}
\langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_{\rm L},n_{\rm R}]}^{\rm Eul}
= \frc1t \sum_{\bm{\gamma}\in \bm{\theta}_\star(x,t;y)}\frc{\rho_{\rm p}(\xi;\bm{\gamma})\,
f(\xi;\bm{\gamma})}{|\partial_{\gamma} \t u(\xi;\bm{\gamma})|}\,h_i^{\rm dr}(\xi;\bm{\gamma})\,h_j^{\rm dr}({\rm sgn}(y)\,\infty;\bm{\gamma})
\end{equation}
where $\t u(\xi;\bm{\theta}) = u(x,t;\bm{\theta})/t$ (see Appendix \ref{apppart}).
Taking $y\to0^\pm$, we can use again \eqref{derutilde} and the argument above to obtain
\begin{equation}\label{qqpart}
\lim_{y\to 0^\pm} \langle \frak{q}_i(x,t)\frak{q}_j(y,0)\rangle_{[n_{\rm L},n_{\rm R}]}^{\rm Eul}
=
\int_{\cal S}{\rm d}\bm{\theta}\,\frc{\delta(x-v^{\rm eff}(\xi;\bm{\theta})t)}{V(\bm{\theta})}\,
\rho_{\rm p}(\xi;\bm{\theta})\,
f(\xi;\bm{\theta})\,h_i^{\rm dr}(\xi;\bm{\theta})\,
h_j^{\rm dr}(\pm\infty;\bm{\theta}).
\end{equation}
This again looks very similar to the two-point function in a homogeneous state, except for two differences: the factor $V(\bm{\theta})$, and the fact that the state at $(y=0^\pm,0)$ is not equal to that on the ray $\xi$ that emanates from the origin: it is instead the initial condition, equal to the state at $\xi=\pm\infty$. Thus, again, the two-point function on a ray is not that in the homogeneous state of that ray.
The question of the two-point function with $y=0$, that is, with one observable within the original discontinuity, is more subtle and answered below.
\subsection{Long-time asymptotics}\label{ssectlongtime}
Consider an initial state $n_0(x;\bm{\theta})$. Suppose it has well-defined asymptotic behavior at large distances, where it becomes homogeneous:
\begin{equation}\label{asympnx}
\lim_{x\to\pm\infty}n_0(x;\bm{\theta}) = n_0^\pm(\bm{\theta}).
\end{equation}
In particular, we suppose that $x_0$ can be set to $-\infty$ in \eqref{sol}. Suppose also that the approach to these asymptotics is uniform enough, so that the following integrals converge absolutely:
\begin{equation}\label{uniform}
\int_{x}^{\infty} {\rm d} z\,\big(\rho_{\rm s}(z,t;\bm{\theta}) - \rho_{\rm s}^+(\bm{\theta})\big) <\infty,\quad
\int_{-\infty}^{x} {\rm d} z\,\big(\rho_{\rm s}(z,t;\bm{\theta}) - \rho_{\rm s}^-(\bm{\theta})\big) <\infty \qquad \forall\;x\in{\mathbb{R}}
\end{equation}
(here and below we denote by $\rho_{\rm s}^\pm(\bm{\theta})=\lim_{x\to\pm\infty} \rho_{\rm s}(x,0;\bm{\theta})$ the asymptotic forms of the initial state density). For instance, the initial state could be a state that varies nontrivially only on some finite region. Consider the long-time limit $t\to\infty$ of scaled two-point functions \eqref{qq}-\eqref{jj}, along rays $x=\xi t$ with $y,\xi$ fixed. By a simple scaling argument, they should decay like $1/t$. We provide a derivation of the coefficient of this decay:
\begin{equation}\label{longtimeres}
\langle \frak{q}_i(\xi t,t) \frak{q}_j(y,0)\rangle_{[n_0]}^{\rm Eul} \sim \frc{A_{ij}(\xi;y)}t\qquad (t\to\infty).
\end{equation}
Again we concentrate on the charge-charge two-point function, as the derivation and result are easy to generalize to the currents.
In order to derive this result, we further assume that in the limit $t\to\infty$ along any ray $x=\xi t$, one obtains the state $n(\xi;\bm{\theta})$, given by \eqref{soldm}, of the partitioning protocol with initial condition specified by $n_0^\pm(\bm{\theta})$:
\begin{equation}\label{limn}
\lim_{t\to\infty} n_t(\xi t;\bm{\theta}) = n(\xi;\bm{\theta}),\qquad
\mbox{$n(\xi;\bm{\theta})$ from the partitioning protocol with $n_{\rm R,L}(\bm{\theta}) = n_0^\pm(\bm{\theta})$}.
\end{equation}
We provide in Appendix \ref{applongtime} a proof under certain more basic assumptions of uniform convergence. Here and below, for lightness of notation, we take the convention that GHD functions explicitly evaluated on a ray, say $\xi$, instead of a space-time doublet $(x,t)$, are understood as the functions obtained in this limit, for instance $\rho_{\rm s}(\xi;\bm{\theta}) = \lim_{t\to\infty}\rho_{\rm s}(\xi t,t;\bm{\theta})$. These are set by the solution to the partitioning protocol \eqref{soldm}, see also Appendix \ref{apppart}. From this viewpoint, we note that the exact initial condition $n_0(x;\bm{\theta})$ provides a regularization of the initial discontinuity at $x=0$ of the partitioning protocol.
An important observation of the result below is that $A_{ij}(\xi;y)$ is {\em not} determined solely by the partitioning protocol; in particular it is not the coefficient obtained in either of the two situations studied in Subsection \ref{ssectpartprot}, and it depends on the point $y$ and on the details of $n_0(x;\bm{\theta})$. What this means is that, from the viewpoint of the partitioning protocol, correlation functions on a single ray, with one space-time point being at the initial time $t=0$ and lying on the initial discontinuity of the protocol, explicitly depend on the regularization $n_0(x;\bm{\theta})$ of this initial discontinuity, and on the exact position $y$, within the regularized region, of the observable at initial time.
Consider the following quantity, which encodes the difference between the regularized initial condition $n_0(x;\bm{\theta})$ and the discontinuous one determined by $n_{\rm R,L}(\bm{\theta})$. Given a point $y\in{\mathbb{R}}$, a ray $\xi$ and a time $t$, we look for the spectral parameters $\bm{\theta}$ of quasi-particles starting at $y$ that reach the position $\xi t$ at time $t$, under the full initial condition $n_0(x;\bm{\theta})$. If the effective velocity is monotonic with respect to the rapidity, then thanks to \eqref{invertibility2}, this is unique once the quasi-particle type is determined. In general, we simply consider the set of such $\bm{\theta}$. We then look for the position $r$ of a quasi-particle $\bm{\theta}$ that would reach the same point $(\xi t,t)$, but in the partitioning protocol. See Fig.~\ref{fig}. Finally, we take the limit $t\to\infty$ of this position. In this limit, $\bm{\theta}\in \bm{\theta}_\star(\xi)$ (that is, $v^{\rm eff}(\xi;\bm{\theta})=\xi$). Given this value of $\bm{\theta}$, the ray $\xi$ is known uniquely (see Appendix \ref{apppart}), and thus this value fully encodes the ray $\xi$. The result is $r(y;\bm{\theta})$, which depends both on $y$ and on this limiting value of $\bm{\theta}$. This is defined for all values of $\bm{\theta}$ (there is always a solution to $v^{\rm eff}(\xi;\bm{\theta})=\xi$). In formulae, this is expressed as follows in terms of the function $\t u(\xi;\bm{\theta})$ of the partitioning protocol, which has the explicit form \eqref{ftj}. We define $\bm{\theta}_t$ (whose dependence on $\xi,y$ we keep implicit) as $u(\xi t,t;\bm{\theta}_t)=y$, and then $r(y;\bm{\theta}_\infty) = \lim_{t\to\infty} t\t u(\xi;\bm{\theta}_t)$ with $\bm{\theta}_\infty = \lim_{t\to\infty} \bm{\theta}_t\in\bm{\theta}_\star(\xi)$.
\begin{figure}
\begin{center}\includegraphics[width=5cm]{trajectories.pdf}\end{center}
\caption{A pictorial representation of how to evaluate the quantity $r$, given $\xi,t,y$. Start at the point $y$, and find the quasi-particle's rapidity which is such that its trajectory joins $y$ with $(\xi t,t)$, in the full problem with initial condition $n_0(x;\bm{\theta})$. Then, using the same quasi-particle type and rapidity, evaluate the backward trajectory from $(\xi t,t)$ in the partitioning protocol. The value of $r$ is the position obtained at time 0. In this picture, the shade indicates the space-time region where the fluid states in the full problem and in the partitioning protocol are substantially different, thus affecting the trajectories.}
\label{fig}
\end{figure}
The above defines $r(y;\bm{\theta})$ in a rather delicate way that involves the full time evolution: one needs to evaluate the finite difference between the end-points of two trajectories that meet at a late time and stay near each other for a long time. In order to go further, we need to make certain assumptions about $r(y;\bm{\theta})$, which appear to be natural but which we do not know how to verify explicitly. The main assumption is simply that $r(y;\bm{\theta})$ is finite. This seems natural if the space-time region where the effect of the regularized partitioning is felt is of finite extent, as pictorially suggested in Fig.~\ref{fig}. Other more subtle assumptions relate to the exchange of the $y$-derivative and the large-time limit, see Appendix \ref{sappr}.
Under these assumptions, we show in Appendix \ref{sappr} that the following integral equation holds:
\begin{equation}\label{hgu}
\begin{aligned}
&\int_{-\infty}^{\xi_\star(\bm{\theta})} {\rm d} \eta\,\sum_{\bm{\gamma}\in\bm{\theta}_\star(\eta)}\,\frc{\rho_{\rm s}(\eta;\bm{\gamma})T^{\rm dr}(\eta;\bm{\theta},\bm{\gamma})}{
V(\bm{\gamma}) |(v^{\rm eff})'(\eta;\bm{\gamma})|}
\int_{\mathbb{R}} {\rm d} z \lt|\frc{\partial r(z,\bm{\gamma})}{\partial z}\rt|
\big(n_0(z;\bm{\gamma}) - n_0^{{\rm sgn}(r(z;\bm{\gamma}))}\big)\\
= &\quad
\int_{-\infty}^{r(y;\bm{\theta})} {\rm d} z\,\big(\rho_{\rm s}(z,0;\bm{\theta}) -
\rho_{\rm s}^{{\rm sgn}(z)}(\bm{\theta})\big)
+\int_{r(y;\bm{\theta})}^y {\rm d} z\,\rho_{\rm s}(z,0;\bm{\theta}).
\end{aligned}
\end{equation}
Recall the dressed scattering operator \eqref{Tdr}. Equation \eqref{hgu} is a powerful result, as it determines $r(y;\bm{\theta})$ entirely in terms of initial data, without the need for time evolution. Even more powerful is the fact that, although the left-hand side depends on $\bm{\theta}$, it is independent of $y$. Thus, by uniform convergence \eqref{uniform}, we must have
\begin{equation}\label{rylarge}
r(y;\bm{\theta})\sim y\qquad (|y|\to\infty).
\end{equation}
This is simply saying that for $y$ far from the regularization region, there is no difference with the partitioning protocol. Further, differentiating with respect to $y$, we obtain
\begin{equation}\label{dry}
\frc{\partial r(y;\bm{\theta})}{\partial y} = \frc{\rho_{\rm s}(y,0;\bm{\theta})}{\rho_{\rm s}^{{\rm sgn}(r(y;\bm{\theta}))}(\bm{\theta})} >0,
\end{equation}
thus $r(y;\bm{\theta})$ is monotonic in $y$. This means that $r(y;\bm{\theta})$ has a unique zero $y_\star(\bm{\theta})$, which determines its sign:
\begin{equation}
r(y_\star(\bm{\theta});\bm{\theta})=0,\qquad
r(y;\bm{\theta}) \gtrless 0 \quad \mbox{if} \quad y \gtrless y_\star(\bm{\theta}).
\end{equation}
It is this zero that plays a fundamental role for the long-time asymptotics of correlation functions. Consider the sign function
\begin{equation}
\sigma(y;\bm{\theta}) = {\rm sgn}\big(y-y_\star(\bm{\theta})\big) =
{\rm sgn}(r(y;\bm{\theta})).
\end{equation}
An equation determining this zero is inferred from \eqref{hgu}:
\begin{equation}\label{hguzero}
\begin{aligned}
&\int_{-\infty}^{\xi_\star(\bm{\theta})} {\rm d} \eta\,\sum_{\bm{\gamma}\in\bm{\theta}_\star(\eta)}\,\frc{\rho_{\rm s}(\eta;\bm{\gamma})\varphi^{\rm dr}(\eta;\bm{\theta},\bm{\gamma})}{
V(\bm{\gamma}) |(v^{\rm eff})'(\eta;\bm{\gamma})|}
\int_{\mathbb{R}} {\rm d} z
\frc{\rho_{\rm s}(z,0;\bm{\gamma})}{\rho_{\rm s}^{\sigma(z;\bm{\gamma})}(\bm{\gamma})}
\big(n_0(z;\bm{\gamma}) - n_0^{\sigma(z;\bm{\gamma})}\big)\\
= &\quad
\int_{-\infty}^{0} {\rm d} z\,\big(\rho_{\rm s}(z,0;\bm{\theta}) -
\rho_{\rm s}^{-}(\bm{\theta})\big)
+\int_{0}^{y_\star(\bm{\theta})} {\rm d} z\,\rho_{\rm s}(z,0;\bm{\theta}).
\end{aligned}
\end{equation}
The function $r(y;\bm{\theta})$ then takes the simple form
\begin{equation}\label{simpler}
r(y;\bm{\theta})
=
\frc1{\rho_{\rm s}^{\pm}(\bm{\theta})}
\int_{y_\star(\bm{\theta})}^y {\rm d} z\,\rho_{\rm s}(z,0;\bm{\theta})
\qquad \mbox{for $y \gtrless y_\star(\bm{\theta})$.}
\end{equation}
We show in Appendix \ref{sapplongtimeder} that:
\begin{equation}\begin{aligned}
A_{ij}(\xi;y) &=
\sum_{\bm{\gamma}\in \bm{\theta}_\star(\xi)}\frc{\rho_{\rm s}(\xi;\bm{\gamma})
h_i^{\rm dr}(\xi;\bm{\gamma})}{\rho_{\rm s}^{\sigma(y;\bm{\gamma})}(\bm{\gamma})\,
V(\bm{\gamma})\,|(v^{\rm eff})'(\xi;\bm{\gamma})|}\,
\Bigg[
\rho_{\rm p}(y,0;\bm{\gamma})f(y,0;\bm{\gamma})\,
h_j^{\rm dr}(y,0;\bm{\gamma}) \\ &
+ \Big(n_0(y;\bm{\gamma}) - n_0^{\sigma(y;\bm{\gamma})}(\bm{\gamma})\Big)\,
\big(\rho_{\rm s}(y,0)f(y,0)h_j^{\rm dr}(y,0)\big)^{*{\rm dr}}
(y,0;\bm{\gamma})
\Bigg].
\end{aligned}\label{coefficient}
\end{equation}
This provides the long-time asymptotic coefficient explicitly in terms of initial data. In the special case where both sides have the same asymptotics,
\begin{equation}
n_0^+(\bm{\theta}) = n_0^-(\bm{\theta})=n(\bm{\theta}),
\end{equation}
the partitioning protocol is homogeneous, and the formula simplifies to:
\begin{equation}\begin{aligned}
A_{ij}(\xi;y) &=
\sum_{\bm{\gamma}\in \bm{\theta}_\star(\xi)}\frc{
h_i^{\rm dr}(\bm{\gamma})}{
|(v^{\rm eff})'(\bm{\gamma})|}\,
\Bigg[
\rho_{\rm p}(y,0;\bm{\gamma})f(y,0;\bm{\gamma})\,
h_j^{\rm dr}(y,0;\bm{\gamma}) \\ &
+ \Big(n_0(y;\bm{\gamma}) - n(\bm{\gamma})\Big)\,
\big(\rho_{\rm s}(y,0)f(y,0)h_j^{\rm dr}(y,0)\big)^{*{\rm dr}}
(y,0;\bm{\gamma})
\Bigg]
\end{aligned}\label{coefficientsame}
\end{equation}
where GHD quantities depending only on the spectral variable are to be evaluated in the asymptotic GGE state $n(\bm{\theta})$. Here we have used the fact that $V(\bm{\theta})=1$ in the homogeneous case, as $v^{\rm eff}(\bm{\theta})=\xi_\star(\bm{\theta})$ (see \eqref{Vdef}).
As a check, we can verify that the limit $|y|\to\infty$ of \eqref{coefficientsame} gives the two-point correlation function in the homogeneous case \eqref{qq} with \eqref{homo} (whose full dependence on time is in the factor $t^{-1}$):
\begin{equation}\label{yinfinity1}
\lim_{|y|\to\infty} \frc{A_{ij}(\xi;y)}{t} =
\langle \frak{q}_i(\xi t,t)\frak{q}_j(0,0)\rangle_{[n]}^{\rm Eul}
\qquad\qquad\qquad\qquad\mbox{(same left and right asymptotics).}
\end{equation}
Indeed, this follows from (using \eqref{asympnx}):
\begin{equation}\label{thelimits}
\lim_{|y|\to\infty} n_0(y;\bm{\theta}),\, \rho_{\rm s}(y,0;\bm{\theta}),\,h_j^{\rm dr}(y,0;\bm{\theta}) = n(\bm{\theta}),\,\rho_{\rm s}(\bm{\theta}),\,h_j^{\rm dr}(\bm{\theta}).
\end{equation}
We see that the homogeneous correlation function is at the point $y=0$: this is natural, as on the left-hand side, the limit $t\to\infty,\,x=\xi t$ is taken before $|y|\to\infty$.
We can similarly check that the limit $y\to\pm\infty$ of \eqref{coefficient} gives the two-point correlation function
\begin{equation}\label{yinfinity2}
\lim_{y\to\pm\infty} \frc{A_{ij}(\xi;y)}{t} =
\lim_{y\to0^\pm}
\langle \frak{q}_i(\xi t,t)\frak{q}_j(y,0)\rangle_{[n_0^+,n_0^-]}^{\rm Eul}
\qquad\mbox{(different left and right asymptotics)}
\end{equation}
in the partitioning protocol, see \eqref{qqpart}. We therefore find the natural result that the limit $|y|\to\infty$ of the regularized partitioning protocol, where $y$ starts within the inhomogeneous region that regularizes the discontinuity and goes away from it, exactly agrees with the limit $y\to0^\pm$ of the exact partitioning protocol, where $y$ starts within the homogeneous region and goes towards the discontinuity.
\medskip
\noindent {\bf Remark.} We note a somewhat surprising result that is derived in Appendix \ref{sapplongtimeder}, and that leads to the particular form of the results expressed above. It can be expressed equivalently as a ``sum rule"
\begin{equation}\label{sumrule}
\int_{-\infty}^\infty {\rm d} z\,\frc{\rho_{\rm p}(z,0;\bm{\theta})f(z,0;\bm{\theta})}{\rho_{\rm s}^{\sigma(z;\bm{\theta})}(\bm{\theta})}
a^{\rm eff}_{[n_0]}(z;\bm{\theta}) = 0,
\end{equation}
or as an ``occupation equipartition" relation,
\begin{equation}\label{nystar}
n_0(y_\star(\bm{\theta});\bm{\theta})
= \frc{n_0^-(\bm{\theta})\rho_{\rm s}^+(\bm{\theta})-
n_0^+(\bm{\theta})\rho_{\rm s}^-(\bm{\theta})}{
\rho_{\rm s}^+(\bm{\theta})-
\rho_{\rm s}^-(\bm{\theta})}.
\end{equation}
The latter relation is extremely nontrivial, as it relates the zero $y_\star(\bm{\theta})$ to state properties at the asymptotics and at the point $y_\star(\bm{\theta})$ only, while \eqref{hguzero} defines the zero in terms of states at other points as well. Also, it is not {\em a priori} obvious that \eqref{nystar} has a solution at all. The derivation we provide in Appendices \ref{sappr} and \ref{sapplongtimeder} implies that there is at least one solution. We believe this is deeply related to the requirements of finiteness of $r(y;\bm{\theta})$ and the possibility of exchanging the $y$-derivative and the large-$t$ limit.
\section{Conclusion}\label{sectcon}
In this paper we have obtained exact expressions for dynamical connected correlation functions at the Euler scale in non-equilibrium integrable models. These represent correlations obtained under unitary time evolution from inhomogeneous density matrices in quantum models, or deterministic evolution from random initial configurations in classical models. The time evolution is taken to be the homogeneous evolution of the integrable model (and thus this excludes the cases of evolutions in external potentials). The results are expressed solely in terms of quantities that are available within the thermodynamic Bethe ansatz framework. They are valid at the Euler scale, where variations of averages of local fields occur on very large scales. Interestingly, this shows that hydrodynamic ideas provide, in principle, all large-scale correlation functions. The range of applicability of our results is the same as that of GHD, and thus includes a wide variety of integrable models. Our derivation is based on a natural Euler-scale fluctuation-dissipation principle combined with the exact GHD general solution found in \cite{dsy17}. We showed that our results agree with the general principles of the hydrodynamic projection theory, with in particular the hydrodynamic operators found in \cite{dsdrude}.
We also showed how two-point functions of arbitrary observables can be obtained from the knowledge of their one-point functions using the hydrodynamic projection theory. From the Leclair-Mussardo formula, valid for one-point functions, we therefore obtained Euler-scale two-point functions as infinite form factor series. This formula is new both in the inhomogeneous case and in homogeneous GGEs. We also remark that recently, an exact recursion relation was obtained for expectation values of vertex operators of the form $e^{a\phi}$ in the sinh-Gordon model \cite{NS13,N14,BP16}. This can be used to extract some of the spectral functions $V^{e^{a\phi}}$, and to deduce their Euler-scale two-point functions using \eqref{tpfctgen}.
The general Euler-scale hydrodynamic argument presented in this paper supports the assumption that $N$-point correlation functions vanish, under scaling by $\lambda$, as $\lambda^{1-N}$, as per the formula \eqref{cf}. In particular, two-point functions vanish as $1/t$ at large times. This is clear in various formulae established, for instance in the homogeneous case \eqref{tpfctgenhomo}, in free models \eqref{qqfree}, in the partitioning protocol \eqref{partprotasympt}, and in the long-time asymptotics \eqref{longtimeres}. However, in these formulae, the quantity $|\partial_\theta v^{\rm eff}(\bm{\theta})|$ appears in denominators, evaluated in particular states and at particular values of $\bm{\theta}$ (for instance, $|\partial_\theta v^{\rm eff}(\xi,\bm{\theta})|$ in the state at ray $\xi$ of the partitioning protocol, evaluated at $\bm{\theta}$ such that $v^{\rm eff}(\xi;\bm{\theta})=\xi$). This quantity may vanish if the effective velocity is not strictly monotonic with respect to the rapidity, and may thus lead to singularities (except, in some cases, if there is zero density of quasi-particles at this rapidity, or, in fermionic systems, of quasi-holes). In such situations, the asymptotic formulae we show do not apply, and we expect a modification of the large-time limit, naively as $1/\sqrt{t}$. The physical intuition is that, if there is a finite quasi-particle density at a rapidity for which $\partial_\theta v^{\rm eff}(\bm{\theta})=0$, then, as the effective velocity is stationary in $\theta$, there is an accumulation of quasi-particles around this effective velocity. If, for instance, the observation ray in the partitioning protocol is along this velocity, then this accumulation may increase the correlation. This might happen at the boundary of the ``light cone" emanating from the connection point, if there is a maximal velocity. Similar effects might appear in fully inhomogeneous situations if the rapidity derivative of the characteristic function $u(x,t;\bm{\theta})$ vanishes, as again singularities may occur for instance in \eqref{W} and \eqref{qq2}. It would be interesting to further investigate this aspect.
Comparing exact hydrodynamic predictions for two-point functions with numerics is a very important problem. Steps forward have been made in this direction in \cite{Bastia17}, where the classical sinh-Gordon model is studied, both for one-point functions in the partitioning protocol and for correlation functions in GGEs. In particular, the classical spectral functions for an infinite family of vertex operators are evaluated, and numerical comparisons are made. Comparisons with quantum field theories are, however, more challenging.
It would be interesting to investigate if hydrodynamic ideas provide more than the Euler-scale part of correlation functions, the least decaying part found along ballistic rays. For instance, the recent works \cite{DSVC17,BD17,DSC17,EB17} suggest that it is possible to combine hydrodynamics with a more detailed knowledge of local observables in order to go further.
Other ways of deriving Euler-scale correlation functions in homogeneous cases are based on form factors. This was done in \cite{dNP16} based on form factors obtained in \cite{dNP15}. The form factor techniques of \cite{D1,D2} might also be applicable as explained in the Remark in Subsection \ref{ssectgeneric}. We note that the general results of \cite{PS17} for space-like two-point functions in arbitrary homogeneous GGEs may be combined with the spectral function method of Subsection \ref{ssectgeneric} in order to get various configurations of dynamical higher-point functions. It would be interesting to see if form factors can be used to derive results in the inhomogeneous situations considered here. It would also be interesting to obtain correlations in situations with evolution in weakly varying external potentials or temperature fields. The GHD theory for such situations was developed in \cite{dynote}; however, the equivalent of the solution by characteristics \eqref{sol} has not yet been written. We leave this for future work.
{\bf Acknowledgments.} I am grateful to Alvise Bastianello, Olalla Castro Alvaredo, Jacopo De Nardis, J\'er\^ome Dubail, Bal\'azs Pozsgay, Herbert Spohn, Gerard Watts and Takato Yoshimura for discussions, comments and encouragement, and for collaborations on related aspects. I am grateful to the Institut d'\'Etude Scientifique de Carg\`ese, France and the Perimeter Institute, Waterloo, Canada for hospitality during completion of this work. I thank the Centre for Non-Equilibrium Science (CNES).
\section*{Abstract}
{\bf
Machine learning has achieved dramatic success over the past decade, with applications ranging from face recognition to natural language processing. Meanwhile, rapid progress has been made in the field of quantum computation, including the development of both powerful quantum algorithms and advanced quantum devices. The interplay between machine learning and quantum physics holds intriguing potential for bringing practical applications to modern society. Here, we focus on quantum neural networks in the form of parameterized quantum circuits. We will mainly discuss different structures and encoding strategies of quantum neural networks for supervised learning tasks, and benchmark their performance utilizing Yao.jl, a quantum simulation package written in Julia Language. The code is efficient and aims to provide convenience for beginners in scientific work, such as developing powerful variational quantum learning models and assisting the corresponding experimental demonstrations.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
The interplay between machine learning and quantum physics may revolutionize many aspects of our modern society
\cite{Biamonte2017Quantum,Sarma2019Machine,Dunjko2018Machine}.
With the recent rise in deep learning, intriguing commercial applications have spread all over the world \cite{LeCun2015Deep,Goodfellow2016Deep}.
Remarkably, machine learning methods have cracked a number of problems that are notoriously challenging, such as playing the game of Go with AlphaGo program \cite{Silver2016Mastering,Silver2017Mastering} and predicting protein structures with the AlphaFold system \cite{Senior2020Improved}.
As a powerful method, machine learning may be utilized to solve complex problems in quantum physics.
In parallel, over the recent years, quantum-enhanced machine learning models have emerged in various contexts. Notable examples include quantum support vector machines \cite{Anguita2003Quantum,Rebentrost2014Quantum}, quantum generative models \cite{Gao2018Quantum,Lloyd2018Quantum,Hu2019Quantum}, quantum neural networks \cite{Li2021Recent,Cong2019Quantum,Grant2018Hierarchical,Li2020VSQL}, etc., with some of them holding the potential of providing exponential speedups.
In Ref~\cite{Liu2021Rigorous}, a rigorous quantum speedup is proven in supervised learning tasks assuming some complexity conjectures.
With the rapid development of quantum devices, these advanced quantum learning algorithms may also be experimentally demonstrated in the near future.
Artificial neural networks, which can be seen as a highly abstract model of human brains, lie at the heart of modern artificial intelligence \cite{LeCun2015Deep,Goodfellow2016Deep}. Noteworthy ones include feedforward \cite{Bebis1994Feedforward,Svozil1997Introduction}, convolutional \cite{Lawrence1997Face}, recurrent \cite{Mikolov2011Extensions,Zaremba2015Recurrent}, and capsule neural networks \cite{Hinton2011Transforming,Hinton2018Matrix,Sabour2017Dynamic,Xi2017Capsule,Xinyi2018Capsule}, each of which bears its own special structures and capabilities. More recently, the attention mechanism has been playing an important role in the field of both computer vision \cite{Guo2021Attention,Dosovitskiy2020Image} and natural language processing \cite{Young2018Recent,Devlin2018BERT}, which ignites tremendous interest in exploring powerful and practical deep learning models.
Motivated by the success of classical deep learning as well as advances in quantum computing, quantum neural networks (QNNs), which share similarities with classical neural networks and contain variational parameters, have drawn a wide range of attention \cite{Cerezo2021Variational}.
Early works include quantum convolutional neural networks \cite{Cong2019Quantum},
continuous-variable quantum neural networks \cite{Killoran2019Continuousvariable},
tree tensor network classifiers, multi-scale entanglement renormalization ansatz classifiers \cite{Grant2018Hierarchical}, etc.
Along this line, various QNN models have been proposed over the past three years \cite{Blance2021Quantum,Kerenidis2019Quantum,Liu2021Hybrid,Wei2022Quantum,Chen2021Federated,Yano2020Efficient,Chen2022Quantum,MacCormack2022Branching}, together with theoretical works analyzing QNNs' expressive power \cite{Abbas2021Power,Meyer2021Fisher,Wang2021Understanding,Schuld2021Effect,Caro2021Encoding,Wu2021Expressivity,Funcke2021Dimensional,Banchi2021Generalization,Du2022Efficient,Du2020Learnability,Haug2021Capacity}.
Furthermore, QNNs are also candidates that might be commercially available in the noisy intermediate-scale quantum (NISQ) era \cite{Preskill2018Quantum,Bharti2022Noisy}.
When designing new QNN models or testing a QNN's performance on a given dataset, an important step is to efficiently simulate the QNN's training dynamics.
At the current stage, a number of quantum simulation platforms have been built by institutes and companies worldwide \cite{Luo2020Yao,Bezanson2017Julia,Broughton2020TensorFlow,Bergholm2020PennyLanea,Killoran2019Strawberry,Aleksandrowicz2019Qiskit,Svore2018Enabling,Zhang2019Alibaba,Huang2019Alibaba,Nguyen2021HiQProjectQ,Green2013Quipper,JavadiAbhari2015ScaffCC,Khammassi2017QX,Johansson2012QuTiP}. In this paper and the accompanying open-source code, we will utilize Yao.jl \cite{Luo2020Yao}, a quantum simulation framework written in Julia Language \cite{Bezanson2017Julia}, to construct the models introduced and benchmarked in the following sections. For the training process, the automatic differentiation engines ensure the fast calculation of gradients for efficient optimization.
To benchmark the performance, we utilize the data from several datasets, e.g., the Fashion MNIST dataset \cite{FashionMNIST}, the MNIST handwritten digit dataset \cite{LeCun1998Mnist}, and the symmetry-protected topological (SPT) state dataset.
The quantum neural network models that we will utilize in the following include amplitude-encoding based QNNs and block-encoding based QNNs.
More specifically, amplitude-encoding based QNNs handle data that we have direct access to. For example, if we have a quantum random access memory \cite{Giovannetti2008Quantum} to fetch the data or the data comes directly from a quantum process, we can assume the data is already prepared at the beginning and use a variational circuit to do the training task.
Differently, block-encoding based QNNs handle data that we need to encode classically. The classical encoding strategy may seem inefficient, yet it is more practical for experimental demonstrations on the NISQ devices compared with amplitude-encoding based QNNs.
One feature of our work is that we mainly focus on the classification of relatively high-dimensional data, especially for block-encoding strategies, while the present numerical and experimental works mainly focus on relatively simple and lower-dimensional datasets.
Our benchmarks and open-source repository may provide helpful guidance for future explorations when the datasets scale up.
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{framework.pdf}
\caption{A schematic illustration of quantum neural networks (QNNs) with two data encoding ansatze: amplitude-encoding based QNNs and block-encoding based QNNs. The output states' expectation values of some observables often serve in the cost function to measure the distance between the current and the target predictions, while a classical optimizer is utilized to optimize the QNNs' parameters to minimize the distance.
}
\label{framework}
\end{figure*}
The sections below are organized as follows:
In Section $2$, we will review some basic concepts of QNNs, including quantum circuit structures, training strategies, and data encoding ansatze.
In Section $3$, we will introduce amplitude-encoding based QNNs, which are suitable for situations where we can directly access the quantum data or where we have a quantum random access memory \cite{Giovannetti2008Quantum} to fetch the data.
In Section $4$, we will introduce block-encoding based QNNs, which are suitable for situations where we need to encode the classical data into the QNNs.
In both Section $3$ and Section $4$, we will provide codes and performance benchmarks for the models as well as address several caveats.
In the last section, we will make a summary and give an outlook for future research.
We mention that, due to the explosive growth of quantum neural network models, we are not able to cover all the recent progress.
Instead, we choose to focus only on some of the representative models, which depends on the authors' interest and might be highly biased.
\section{Basic concepts}
\label{sec:Concept}
\subsection{The variational circuit structure of QNNs}
Quantum neural networks are often presented in the form of parameterized quantum circuits, where the variational parameters can be encoded into the rotation angles of some quantum gates.
The basic framework is illustrated in Fig.~\ref{framework}, which mainly consists of the quantum circuit ansatz, the cost function ansatz, and the classical optimization strategy.
For the basic building blocks, commonly used choices include parameterized single-qubit rotation gates ($R_x(\theta)$, $R_y(\theta)$, and $R_z(\theta)$), Controlled-NOT gates, and Controlled-Z gates, which are illustrated below:
$$
\begin{quantikz}
& \gate{R_x(\theta)} & \qw
\end{quantikz}
= e^{-i \frac{\theta}{2} X},
\begin{quantikz}
& \gate{R_y(\theta)} & \qw
\end{quantikz}
= e^{-i \frac{\theta}{2} Y},
\begin{quantikz}
& \gate{R_z(\theta)} & \qw
\end{quantikz}
= e^{-i \frac{\theta}{2} Z},
$$
$$
\begin{quantikz}
& \ctrl{1} & \qw \\
& \targ{} & \qw
\end{quantikz}
=\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
\end{pmatrix},
\begin{quantikz}
& \ctrl{1} & \qw \\
& \gate{Z} & \qw
\end{quantikz}
=\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1\\
\end{pmatrix}.
$$
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{amplitude_encoding.pdf}
\caption{A schematic diagram of amplitude-encoding based QNNs, where the input quantum state can be either from a quantum process (e.g., a quantum system's evolution, quantum state preparation) or from a directly available quantum memory. The subsequent measurements provide expectation values of some observables, which serve as the classification criterion for making predictions.}
\label{amplitude_encoding}
\end{figure*}
Besides, there are also a number of blocks that might be convenient for our use, e.g., Controlled-SWAP gates might be experimentally-friendly for demonstrations on superconducting quantum processors. In our numerical simulations, we mainly choose parameterized single-qubit rotation gates ($R_x(\theta)$, $R_y(\theta)$, and $R_z(\theta)$) and Controlled-NOT gates since replacing some of them will not significantly influence the performance.
In the following part of this subsection, we mainly introduce two data encoding ansatze: amplitude-encoding ansatz and block-encoding ansatz.
\subsubsection{Amplitude-encoding ansatz}
As the name suggests, amplitude encoding means that the data vector is encoded into the amplitudes of a quantum state, which is then fed to the quantum neural network.
In this way, a $2^n$-dimensional vector may be encoded into an $n$-qubit state.
The basic structure is illustrated in Fig.~\ref{amplitude_encoding}, where the output can be the expectation values of some measurements and $V_i(\theta_i)$ denotes a variational quantum block with parameter $\theta_i$.
It is worthwhile to note that, as indicated in \cite{Huang2021Power,Schuld2021Supervised}, amplitude-encoding based QNNs can be regarded as kernel methods, stressing the importance of the way to encode the data.
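Below is a minimal sketch of amplitude encoding with Yao.jl: a classical vector of length $2^n$ is normalized and loaded as the amplitudes of an $n$-qubit register (the random vector here is only a stand-in for, e.g., a flattened and padded image):
\begin{jllisting}
using Yao, LinearAlgebra
num_qubit = 10
# assumed preprocessed data vector of length 2^num_qubit
raw = rand(Float64, 2^num_qubit)
# normalize and load it as the amplitudes of a 10-qubit register
reg = ArrayReg(ComplexF64.(raw ./ norm(raw)))
# the register can then be fed through a variational circuit and measured,
# e.g. expect(put(num_qubit, 1=>Z), copy(reg) |> circuit)
\end{jllisting}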
\subsubsection{Block-encoding ansatz}
For near-term experimental demonstrations of quantum neural networks, especially for supervised learning tasks, the block-encoding ansatz might be more feasible as there is no need for a quantum random access memory and the data can be directly encoded into the circuit parameters.
The basic structure for this ansatz is illustrated in Fig.~\ref{block_encoding}, where the block-encoding strategy can be very flexible with variational quantum blocks $U_i(x^i_k)$ and
$V_i(\theta_j)$ encoding the input data and variational parameters, respectively.
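As a simple illustration of this ansatz, the sketch below writes each feature of a preprocessed input vector into the rotation angle of a single-qubit gate and interleaves such encoding layers with the variational blocks used later in this paper; the interleaving pattern and the choice of $R_y$ gates are illustrative assumptions, not the only possibility:
\begin{jllisting}
using Yao, YaoPlots
using Quantum_Neural_Network_Classifiers: ent_cx, params_layer
num_qubit = 10
# one encoding layer: feature x[i] enters the angle of an Ry gate on qubit i
encode_layer(x) = chain(put(num_qubit, i=>Ry(x[i])) for i in 1:num_qubit)
# interleave encoding layers with variational blocks
block_encoding_circuit(x, depth) = chain(
    chain(num_qubit, encode_layer(x), params_layer(num_qubit), ent_cx(num_qubit))
    for _ in 1:depth)
x = rand(num_qubit) .* 2pi   # assumed preprocessed feature vector
YaoPlots.plot(block_encoding_circuit(x, 2))
# note: the data angles and the trainable angles are both circuit parameters,
# so during training they must be dispatched separately
\end{jllisting}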
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{block_encoding.pdf}
\caption{A schematic diagram of block-encoding based QNNs, where the initial quantum state is fixed and the input data needs to be encoded into the QNN circuit classically, in a similar way to the variational parameters. The subsequent measurements provide expectation values of some observables for making the classification decisions.}
\label{block_encoding}
\end{figure*}
\subsection{Optimization strategies during the training process}
When we have a quantum neural network model, we wish to train it and apply it to classification tasks. In most cases, we need to first formalize the task to be an optimization problem.
How to adapt classification tasks into optimization-based QNN models can be illustrated using the following simple example:
After the data preparation and data encoding procedure, the output state after the QNN is followed by a final measurement $M$. For classification tasks, for simplicity, we can consider a binary classification between two kinds of vectorized digits labeled ``cat'' and ``dog'' and assume that the final measurement is applied on the computational basis of a certain qubit. If the input is a ``cat'', our goal is to maximize the probability of measuring $\hat{\sigma}_{z}$ with output ``$1$'', i.e., maximize $P(\ket{0})$. Instead, if the input is a ``dog'', our goal is to maximize the probability of measuring $\hat{\sigma}_{z}$ with output ``$-1$'', i.e., maximize $P(\ket{1})$.
In the prediction phase, we simply compare the probability of different measurement outcomes and assign the label corresponding to the highest probability to the input.
To achieve this goal, we need to optimize the trainable parameters to obtain desirable predictions. To begin with, we generally need to define a cost function to measure the distance between the current output and the target output. Widely used cost functions include the mean square error (MSE) and cross entropy (CE):
\begin{eqnarray}
L_{MSE}\left(h\left(\vec{x} ; \Theta\right), \mathbf{a}\right)=\sum_{k} (a_{k} - g_{k})^2,
\\
L_{CE}\left(h\left(\vec{x} ; \Theta\right), \mathbf{a}\right)=-\sum_{k} a_{k} \log g_{k},
\end{eqnarray}
where $\mathbf{a}\equiv (a_1, \cdots, a_m)$ is the label of the input data $\vec{x}$ in the form of one-hot encoding \cite{Lu2020Quantum}, $h$ denotes the hypothesis function determined by the quantum neural network (with parameters collectively denoted by $ \Theta$), and $\mathbf{g}\equiv (g_1,\cdots, g_m)=\text{diag} (\rho_{\text{out}})$ represents the probabilities of the corresponding output categories, with $\rho_{\text{out}}$ denoting the output state.
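For concreteness, a minimal sketch of evaluating these cost functions for the binary ``cat''/``dog'' example above is given below; here \texttt{reg} denotes an amplitude-encoded input register, while \texttt{circuit}, \texttt{num\_qubit} and the measured qubit index \texttt{pos\_} are assumed to be defined as in the code of the following sections, and the one-hot label is an arbitrary example:
\begin{jllisting}
using Yao
# output state after the QNN, for an amplitude-encoded input register reg
out = copy(reg) |> circuit
# probabilities of measuring qubit pos_ in |0> and |1>
p0 = real(expect(put(num_qubit, pos_=>ConstGate.P0), out))
g = [p0, 1 - p0]               # hypothesis vector for the two classes
a = [1.0, 0.0]                 # assumed one-hot label, e.g. "cat"
mse = sum((a .- g).^2)         # mean square error
ce  = -sum(a .* log.(g))       # cross entropy
prediction = argmax(g)         # class with the highest probability
\end{jllisting}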
With a cost function defined, the task of training the quantum neural network to output the correct predictions can be transformed into the task of minimizing the cost function, where gradient-based strategies can be well utilized.
In general, after calculating the gradient of the loss function $L$ with respect to the parameters denoted collectively as $\Theta$ at step $t$, the update of parameters can be expressed as
\begin{eqnarray}
\Theta_{t+1}=\Theta_{t}-\gamma \nabla L\left(\Theta_{t}\right),
\end{eqnarray}
where $\gamma$ is the learning rate. In practice, gradient methods are often combined with optimizers such as Adam \cite{Kingma2014Adam} to gain higher performance.
Now, we have obtained an overall idea of optimization, while the remaining problem is how to efficiently calculate the gradients.
For example, if we take cross entropy as the cost function, the gradient with respect to a particular parameter $\theta$ can be written as
\begin{eqnarray}
\frac{\partial L\left(h\left(\vec{x} ; \Theta\right), \mathbf{a}\right)}{\partial \theta} = -\sum_{k} \frac{a_{k}}{g_{k}} \frac{\partial g_{k}}{\partial \theta},
\end{eqnarray}
where $g_k$ can be seen as an expectation value of an observable,
and calculating the gradient of the cost function reduces to calculating the gradients of these expectation values.
To address this issue, a number of strategies have been proposed such as the ``parameter shift rule'' \cite{Romero2018Strategies,Li2017Hybrid,Mitarai2018Quantum,Mari2021Estimating}, quantum natural gradient \cite{Stokes2020Quantum,Koczor2019Quantum}, etc. For a wider-range literature review, readers can refer to review articles in Ref.~\cite{Cerezo2021Variational,Li2021Recent}.
In this paper, we choose and explain the basic ideas of parameter shift rules as follows:
For amplitude-encoding ansatz, given an input state $\ket{\psi_{x}}$, a QNN circuit $U_{\Theta}$, and an observable $O_k$, the hypothesis function can be written as
\begin{eqnarray}
g_k = h_k\left(|\psi_{x}\rangle ; \Theta\right) = \bra{x} U^{\dagger}_{\Theta} O_k U_{\Theta} \ket{x}.
\end{eqnarray}
For block-encoding ansatz, without loss of generality, we assume that the initial state is $\ket{0}$, and the QNN circuit $U_{\Theta,\vec{x}}$ contains both the input data and the trainable parameters. The hypothesis function in this setting can be written as
\begin{eqnarray}
g_k = h_k\left(|0\rangle, \vec{x} ; \Theta\right) = \bra{0} U^{\dagger}_{\Theta,\vec{x}} O_k U_{\Theta,\vec{x}} \ket{0}.
\end{eqnarray}
For both the two ansatze, the parameter shift rule tells us that if the parameter $\theta$ is encoded in the form $\mathcal{G}(\theta)=e^{-i \frac{\theta}{2} P_n}$ where $P_n$ belongs to the Pauli group, the derivative of the expectation value with respect to a parameter $\theta$ can be expressed as
\begin{eqnarray}
\frac{\partial g_k}{\partial \theta}=\frac{\partial h_k}{\partial \theta}=\frac{h_k^{+}-h_k^{-}}{2},
\end{eqnarray}
where $h_k^{\pm}$ denotes the expectation value of $O_k$ evaluated with the parameter $\theta$ shifted to $\theta \pm \frac{\pi}{2}$.
In the QNNs presented in this paper, the parameters are encoded in single-qubit Pauli-rotation gates, thus fitting the requirements discussed above.
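A minimal sketch of the parameter shift rule in Yao.jl is given below; \texttt{op}, \texttt{reg} and \texttt{circuit} denote the measured observable, the input register and the QNN circuit, and are assumed to be defined as in the surrounding sections:
\begin{jllisting}
using Yao
# derivative of <op> with respect to the idx-th circuit parameter
function parameter_shift_gradient(op, reg, circuit, idx)
    theta = parameters(circuit)
    function shifted(delta)
        c = deepcopy(circuit)
        th = copy(theta); th[idx] += delta
        dispatch!(c, th)
        return c
    end
    h_plus  = real(expect(op, copy(reg) |> shifted(+pi/2)))
    h_minus = real(expect(op, copy(reg) |> shifted(-pi/2)))
    return (h_plus - h_minus) / 2
end
\end{jllisting}
In numerical simulations one can also obtain all gradients at once through automatic differentiation (e.g., Yao's built-in \texttt{expect'}), while the parameter shift rule is the route relevant for experiments.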
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{ent_layer.pdf}
\caption{A schematic illustration of the basic building blocks of quantum neural network classifiers. (a) An entangling layer consisting of a single layer of Controlled-Z gates. (b) An entangling layer consisting of a single layer of Controlled-NOT gates. (c) An entangling layer consisting of two layers of Controlled-NOT gates. (d) An entangling layer implemented by a many-body Hamiltonian's time evolution. (e) A composite block consisting of three layers of variational single-qubit rotation gates and an entangling layer.}
\label{ent_layer}
\end{figure*}
\section{Amplitude-encoding based QNNs}
\label{sec:Amplitude}
\subsection{The list of variables and caveats}
In the above sections, we have discussed the structures and features of amplitude-encoding based QNNs.
Here, we start to numerically explore the performances of amplitude-encoding based QNNs under different hyper-parameter settings.
To make the benchmarks more organized, we first list the hyper-parameters in this subsection as well as present some basic codes to numerically build the framework while leaving the benchmarks of these QNNs' performances in the next subsection.
\textbf{Entangling layers.}
When designing a QNN structure, it is convenient to encode the variational parameters into single-qubit gates.
At the same time, we also need entangling layers to connect different qubits.
For this purpose, we design two sets of entangling layers.
First, we consider entangling layers in a digital circuit setting,
e.g., these entangling layers are composed of Controlled-NOT gates or Controlled-Z gates (see Fig.~\ref{ent_layer}(a-c) for illustrations):
\begin{jllisting}
using Yao, YaoPlots
using Quantum_Neural_Network_Classifiers: ent_cx, ent_cz, params_layer
# number of qubits
num_qubit = 10
# function that creates the entangling layers with Controlled-NOT gates
ent_layer = ent_cx(num_qubit)
# display the entangling layer's structure
YaoPlots.plot(ent_layer)
# function that creates the entangling layers with Controlled-Z gates
ent_layer = ent_cz(num_qubit)
# display the entangling layer's structure
YaoPlots.plot(ent_layer)
# double the entangling layer of Controlled-NOT gates
ent_layer = chain(num_qubit, ent_cx(num_qubit), ent_cx(num_qubit))
# display the entangling layer's structure
YaoPlots.plot(ent_layer)
\end{jllisting}
Second, we consider entangling layers in an analog circuit setting,
where the time evolution of a given Hamiltonian is utilized as an entangling layer (see Fig.~\ref{ent_layer}(d)):
\begin{jllisting}
# create entangling layers through Hamiltonian evolutions
t = 1.0 # evolution time
# h denotes the matrix formulation of a predefined Hamiltonian
# the code creating the Hamiltonian may take plenty of space
# thus we leave the Hamiltonian's implementation in the Github repository
@const_gate ent_layer = exp(-im * h * t)
# display the entangling layer's structure
YaoPlots.plot(ent_layer)
\end{jllisting}
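Note that the Hamiltonian used in our simulations is the one implemented in the repository. Purely as an illustration of an alternative route, one may also build a Hamiltonian directly as a Yao block and use the built-in \texttt{time\_evolve} block, which applies the evolution without explicitly constructing the dense matrix exponential; the transverse-field-Ising-type Hamiltonian below is an arbitrary example, not the one used for the benchmarks:
\begin{jllisting}
# illustrative Hamiltonian: nearest-neighbor ZZ couplings plus a transverse field
h_block = sum(kron(num_qubit, j=>Z, j+1=>Z) + put(num_qubit, j=>X)
              for j in 1:num_qubit-1)
ent_layer = time_evolve(h_block, 1.0)   # evolution for time t = 1.0
# apply it to a register to check that it runs
apply!(zero_state(num_qubit), ent_layer)
\end{jllisting}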
With these entangling layers, we further define the composite blocks that consist of parameterized single-qubit gates and entangling layers as the building blocks of QNNs from a higher level (see Fig.~\ref{ent_layer}(e)):
\begin{jllisting}
# given the entangling layer ent_layer
# we define a parameterized layer consisting of 3 layers of variational
# single-qubit rotation gates acting on every qubit
parameterized_layer = params_layer(num_qubit)
# display the parameterized layer's structure
YaoPlots.plot(parameterized_layer)
# build a composite block
composite_block(nbit::Int64) = chain(nbit,params_layer(nbit),ent_cx(nbit))
# display the composite block's structure
YaoPlots.plot(composite_block(num_qubit))
\end{jllisting}
\textbf{Circuit depth.}
Intuitively, if a QNN circuit gets deeper, the expressive power should also be higher (before getting saturated).
To verify this, we create a set of QNN classifiers with various depths.
We say that a QNN classifier has depth $N$ if it contains $N$ composite blocks.
To implement this in numerical simulations,
it is convenient to repeat the composite blocks exhibited above as follows:
\begin{jllisting}
# set the QNN circuit's depth
depth = 4
# repeat the composite block to reach the target depth
circuit = chain(composite_block(num_qubit) for _ in 1:depth)
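# initialize all variational parameters of the circuit randomly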
dispatch!(circuit,:random)
# display the QNN circuit's structure
YaoPlots.plot(circuit)
\end{jllisting}
\textbf{A caveat on the global phase.}
For amplitude-encoding based QNNs, we would like to mention a caveat concerning the data encoded into the states.
It is well-known that if two quantum states only differ by a global phase, they represent the same quantum system.
For concreteness, the expectation values of the two states on a given observable will be the same,
thus ruling out the possibility of our QNN classifiers distinguishing between them.
In practice, the indistinguishability of the ``global phase'' may also appear in our tasks.
For image datasets such as images of handwritten digits or animals, we can convert the images into vectors and normalize them to form quantum datasets, and the QNN will in general handle them well.
Before introducing the caveat, we would like to point out that, for a dataset described by different features (e.g., a mobile phone's length, weight, resolution of the camera, etc.), it is convenient to bring all the features to the same scale before further processing.
To achieve this goal, data standardization is often applied such that each rescaled feature value has mean value $0$ and standard deviation $1$.
However, for simple datasets with two labels, the standardization operation may mainly create a global phase $\pi$ between the two classes of data.
For example, the data in the original first class concentrates to $(1,1)$ and the data in the original second class concentrates to $(3,3)$.
After standardization, they become $(-1,-1)$ and $(1,1)$, respectively, which become indistinguishable by our QNN classifiers in the amplitude encoding setting.
To overcome this problem, a possible way is to transform the ``global phase separation'' into ``local phase separation'':
Assume we already have a dataset with two classes of data concentrating on $(-1,-1)$ and $(1,1)$.
We add multiple ``1''s to each data vector such that the data becomes $(-1,-1,1,1)$ or $(1,1,1,1)$, thus making the data learnable again by the QNN classifier.
We met this problem in practice when handling the Wisconsin Diagnostic Breast Cancer dataset \cite{H.Wolberg1992Uci} which consists of ten features such as radius, texture, perimeter, etc.
The classification is far from ideal when we simply standardize the dataset, and after transforming the ``global phase'' to ``local phase'', the training becomes much easier and an accuracy of over $95\%$ is achieved.
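The caveat can be made concrete with a small numerical illustration (the vectors below are toy stand-ins for the standardized two-class data discussed above): two amplitude-encoded states that differ only by a global phase have unit overlap in modulus and are therefore indistinguishable by any measurement, while padding with ones renders them perfectly distinguishable:
\begin{jllisting}
using LinearAlgebra
v1 = normalize([1.0, 1.0]);  v2 = normalize([-1.0, -1.0])
abs(dot(v1, v2))   # = 1.0: same state up to a global phase
w1 = normalize([1.0, 1.0, 1.0, 1.0]);  w2 = normalize([-1.0, -1.0, 1.0, 1.0])
abs(dot(w1, w2))   # = 0.0: the padded states are now orthogonal
\end{jllisting}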
\subsection{Benchmarks of the performance}
\begin{figure}
\begin{algorithm}[H]
\caption{Amplitude-encoding based quantum neural network classifier}
\label{Algo_AEQNN}
\begin{algorithmic}[1]
\REQUIRE The untrained model $h$ with variational parameters $\Theta$, the loss function $L$, the training set $\{(\ket{\mathbf{x}_m}, \mathbf{a}_m)\}_{m=1}^n$ with size $n$, the batch size $n_b$, the number of iterations $T$, the learning rate $\epsilon$, and the Adam optimizer $f_{\text{Adam}}$
\ENSURE The trained model
\STATE Initialization: generate random initial parameters for $\Theta$
\FOR{ $i \in [T]$}
\STATE Randomly choose $n_b$ samples $\{\ket{\mathbf{x}_{\text{(i,1)}}},\ket{\mathbf{x}_{\text{(i,2)}}},...,\ket{\mathbf{x}_{\mathrm{(i,n_b)}}}\}$ from the training set
\STATE Calculate the gradients of $L$ with respect to the parameters $\Theta$, and take the average value over the training batch $\mathbf{G}\leftarrow\frac{1}{n_b}\Sigma_{k=1}^{n_b}\nabla \mathcal{L}(h(\ket{\mathbf{x}_{\text{(i,k)}}};{\Theta}),\mathbf{a}_{\text{(i,k)}})$
\STATE Updates: ${\Theta} \leftarrow f_{\text{Adam}}({\Theta},\epsilon,\mathbf{G})$
\ENDFOR
\STATE Output the trained model
\end{algorithmic}
\end{algorithm}
\end{figure}
Here, we provide numerical benchmarks of the amplitude-encoding based QNNs' performances with different hyper-parameters (depths, entangling layers) and four datasets: the FashionMNIST dataset \cite{FashionMNIST}, the MNIST handwritten digit dataset \cite{LeCun1998Mnist}, an $8$-qubit symmetry-protected topological (SPT) state dataset, and a $10$-qubit SPT state dataset.
The FashionMNIST dataset and the MNIST handwritten digit dataset are well-known datasets that have been widely utilized in both commercial fields and scientific research.
For the FashionMNIST dataset, we take the samples labeled ``ankle boot'' and ``T-shirt'' as our training and test data.
For the MNIST handwritten digit dataset, we take the samples labeled ``$1$'' and ``$9$'' as our training and test data.
For the SPT state dataset, we consider a one-dimensional cluster-Ising model with periodic boundary conditions with the following Hamiltonian \cite{Smacchia2011Statistical}:
\begin{eqnarray}
H(\lambda)=-\sum_{j=1}^{N} \hat{\sigma}_{x}^{(j-1)} \hat{\sigma}_{z}^{(j)} \hat{\sigma}_{x}^{(j+1)}+\lambda \sum_{j=1}^{N} \hat{\sigma}_{y}^{(j)} \hat{\sigma}_{y}^{(j+1)},
\end{eqnarray}
where $\hat{\sigma}_{\alpha}^{(i)}$, $\alpha = x, y, z,$ are Pauli matrices and $\lambda$ determines the relative strength of the nearest-neighbor interaction compared to the next-to-nearest-neighbor interaction. The model undergoes a continuous quantum phase transition at $\lambda = 1$, which separates a cluster phase with a nonlocal hidden order for $\lambda < 1$ from an antiferromagnetic phase with long-range order and a nonvanishing staggered magnetization for $\lambda > 1$. We uniformly vary $\lambda$ from $0$ to $2$ at intervals of $0.001$ and obtain the corresponding ground states, which serve as our training set and test set.
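As a minimal illustration (our own sketch, not necessarily the data-generation code of the repository), the Hamiltonian above can be assembled from Yao blocks and its ground state obtained by exact diagonalization for a given $\lambda$:
\begin{jllisting}
# sketch: build the cluster-Ising Hamiltonian (periodic boundaries) and
# extract its ground state for a given coupling lambda
using Yao, LinearAlgebra
function cluster_ising(N, lambda)
    idx(j) = mod1(j, N)   # periodic boundary conditions
    cluster = sum((-1) * chain(N, put(idx(j-1)=>X), put(idx(j)=>Z),
                               put(idx(j+1)=>X)) for j in 1:N)
    ising = sum(lambda * chain(N, put(idx(j)=>Y), put(idx(j+1)=>Y)) for j in 1:N)
    return cluster + ising
end
H = cluster_ising(8, 0.5)                         # 8-qubit example, lambda = 0.5
vals, vecs = eigen(Hermitian(Matrix(mat(H))))
ground_state = ArrayReg(ComplexF64.(vecs[:, 1]))  # ground state as a Yao register
\end{jllisting}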
The basic pseudocode for the learning procedure is shown in Algorithm~\ref{Algo_AEQNN}.
In the rest of this subsection, we will present detailed benchmarks considering the variables listed above, where the trained model's accuracy is the main criterion that will be exhibited in the following data tables. Moreover, the code for calculating the accuracy and loss over the training set, the test set, or a batch set is shown below:
\begin{jllisting}
using Quantum_Neural_Network_Classifiers: acc_loss_evaluation
# calculate the accuracy & loss for the training & test set
# with the method acc_loss_evaluation(circuit::ChainBlock,reg::ArrayReg,
# y_batch::Matrix{Float64},batch_size::Int64,pos_::Int64)
# pos_ denotes the index of the qubit to be measured
train_acc, train_loss = acc_loss_evaluation(circuit, x_train, y_train,
num_train, pos_)
test_acc, test_loss = acc_loss_evaluation(circuit, x_test, y_test,
num_test, pos_)
\end{jllisting}
\begin{table}[htp]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc|ccc}
\multicolumn{1}{c}{\multirow{2}{*}{\bf Datasets}}& \multicolumn{3}{c}{\bf FashionMNIST} &\multicolumn{3}{c}{\bf MNIST} &\multicolumn{3}{c}{\bf SPT(8 qubits)}&\multicolumn{3}{c}{\bf SPT(10 qubits)}\\
\multicolumn{1}{c}{}&\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$} &\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$}&\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$}&\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$}
\\ \hline \\
{\bf Block Depth} & & & & & & & & & & & & \\
1 &0.652 &0.907 &0.669 &0.517 &0.666 &0.813 &0.500 &0.522 &0.500 &0.529 &0.525 &0.557\\
2 &0.994 &0.989 &0.980 &0.546 &0.831 &0.848 &0.506 &0.865 &0.887 &0.528 &0.892 &0.678\\
3 &0.997 &0.991 &0.981 &0.915 &0.935 &0.949 &0.775 &0.951 &0.986 &0.767 &0.915 &0.865\\
4 &0.999 &0.993 &0.995 &0.937 &0.974 &0.962 &0.871 &0.981 &0.988 &0.865 &0.886 &0.906\\
5 &0.998 &0.992 &0.995 &0.943 &0.984 &0.976 &0.964 &0.982 &0.985 &0.857 &0.902 &0.904\\
6 &0.999 &0.993 &0.997 &0.964 &0.984 &0.976 &0.973 &0.984 &0.987 &0.897 &0.899 &0.909\\
7 &0.999 &0.993 &0.998 &0.959 &0.984 &0.981 &0.974 &0.986 &0.987 &0.893 &0.903 &0.921\\
8 &0.998 &0.997 &0.997 &0.973 &0.982 &0.981 &0.978 &0.988 &0.988 &0.902 &0.907 &0.926\\
9 &0.999 &0.996 &0.998 &0.980 &0.983 &0.984 &0.981 &0.989 &0.989 &0.900 &0.912 &0.928\\
10 &0.999 &0.997 &0.998 &0.985 &0.984 &0.984 &0.982 &0.988 &0.988 &0.905 &0.929 &0.932\\
11 &0.999 &0.998 &0.997 &0.986 &0.986 &0.985 &0.983 &0.989 &0.990 &0.908 &0.932 &0.935\\
12 &0.999 &0.999 &0.998 &0.987 &0.985 &0.985 &0.984 &0.989 &0.988 &0.908 &0.935 &0.936\\
\end{tabular}}
\end{center}
\caption{Accuracy of the trained amplitude-encoding based QNN model with $12$ different depths and $3$ digital entangling
layers shown in Fig.~\ref{ent_layer}(a-c). For each hyper-parameter setting, the codes have been repeatedly run $100$ times and the average test accuracy is exhibited.}
\label{table:AE_Ent_Depth}
\end{table}
\textbf{a) Different depths with digital entangling layers.}
In Table~\ref{table:AE_Ent_Depth}, we provide the numerical performances of the QNN classifiers introduced above with $12$ different depths, $4$ real-life and quantum datasets, and $3$ different digital entangling layers shown in Fig.~\ref{ent_layer}(a-c).
We use the test accuracy as a measure of the trained model's performance.
It should be noted that different initial parameters may lead to very different training behaviors (very high accuracy at the beginning, getting stuck at local minima, volatile training curves, etc.).
To avoid being unduly affected by such occasional situations, we reinitialize the model $100$ times and take the average test accuracy as the result.
According to this table, we can see that the accuracy generally increases with the QNN classifiers' depth.
We also find that when the QNN classifiers' depths are relatively low (from $1$ to $5$),
different digital entangling layers may lead to noticeably different performances.
When the depths are over $10$,
the accuracy saturates to a decent value and the QNN classifiers with different entangling layers have similar performances,
except for the case of learning $10$-qubit SPT data.
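For reference, this averaging protocol can be sketched as follows, where the training routine itself is replaced by a hypothetical helper standing for Algorithm~\ref{Algo_AEQNN} followed by an evaluation on the test set:
\begin{jllisting}
# sketch of the averaging protocol used for the tables
num_runs = 100
accs = zeros(num_runs)
for run in 1:num_runs
    dispatch!(circuit, :random)     # fresh random initial parameters
    # train_and_evaluate is a hypothetical helper: it runs the full training
    # loop on `circuit` and returns the final test accuracy
    accs[run] = train_and_evaluate(circuit)
end
avg_test_acc = sum(accs) / num_runs
\end{jllisting}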
\begin{table}[ht]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc|ccc}
\multicolumn{1}{c}{\multirow{2}{*}{\bf Datasets}}& \multicolumn{3}{c}{\bf FashionMNIST} &\multicolumn{3}{c}{\bf MNIST} &\multicolumn{3}{c}{\bf SPT(8 qubits)}&\multicolumn{3}{c}{\bf SPT(10 qubits)}\\
\multicolumn{1}{c}{}&\multicolumn{1}{c}{\bf Dep$_1$} & \multicolumn{1}{c}{\bf Dep$_2$} & \multicolumn{1}{c}{\bf Dep$_3$} &\multicolumn{1}{c}{\bf Dep$_1$} & \multicolumn{1}{c}{\bf Dep$_2$} & \multicolumn{1}{c}{\bf Dep$_3$}&\multicolumn{1}{c}{\bf Dep$_1$} & \multicolumn{1}{c}{\bf Dep$_2$} & \multicolumn{1}{c}{\bf Dep$_3$}&\multicolumn{1}{c}{\bf Dep$_1$} & \multicolumn{1}{c}{\bf Dep$_2$} & \multicolumn{1}{c}{\bf Dep$_3$}
\\ \hline \\
{\bf Time} & & & & & & & & & & & & \\
0.1 &0.977 &0.994 &0.996 &0.577 &0.707 &0.918 &0.579 &0.810 &0.883 &0.631 &0.671 &0.758\\
0.3 &0.995 &0.996 &0.998 &0.777 &0.960 &0.980 &0.720 &0.961 &0.978 &0.678 &0.864 &0.906\\
0.5 &0.999 &0.996 &0.998 &0.906 &0.979 &0.988 &0.862 &0.979 &0.984 &0.728 &0.894 &0.909\\
0.7 &0.967 &0.996 &0.996 &0.901 &0.982 &0.985 &0.931 &0.985 &0.986 &0.804 &0.907 &0.907\\
1.0 &0.969 &0.991 &0.996 &0.857 &0.985 &0.983 &0.961 &0.983 &0.987 &0.865 &0.905 &0.918\\
2.0 &0.910 &0.992 &0.996 &0.954 &0.986 &0.984 &0.948 &0.986 &0.987 &0.903 &0.913 &0.920\\
3.0 &0.901 &0.991 &0.996 &0.944 &0.985 &0.985 &0.909 &0.981 &0.987 &0.861 &0.921 &0.908\\
5.0 &0.993 &0.997 &0.998 &0.947 &0.983 &0.984 &0.933 &0.975 &0.985 &0.914 &0.915 &0.923\\
7.0 &0.931 &0.992 &0.996 &0.966 &0.981 &0.984 &0.968 &0.982 &0.988 &0.889 &0.916 &0.919\\
10.0 &0.994 &0.996 &0.997 &0.962 &0.982 &0.985 &0.979 &0.986 &0.988 &0.911 &0.908 &0.923\\
\end{tabular}}
\end{center}
\caption{Accuracy of the trained amplitude-encoding based QNN model with $10$ different Hamiltonian evolution times and $3$ depths: $\mathrm{Dep}_1 = 1$, $\mathrm{Dep}_2 = 3$, and $\mathrm{Dep}_3 = 5$. For each hyper-parameter setting, the codes have been repeatedly run $100$ times and the average test accuracy is exhibited.}
\label{table:AE_Analog_Time}
\end{table}
\textbf{b) Analog layers’ Hamiltonian evolution time.}
In addition to the digital entangling layers, we also consider entangling layers that are implemented by a many-body Hamiltonian's time evolution as shown in Fig.~\ref{ent_layer}(d).
Once given a Hamiltonian, it is important to explore the relationship between the QNNs' performances and the Hamiltonian's evolution time.
Here, we first fix the depth to be $\mathrm{Dep}_1 = 1$, $\mathrm{Dep}_2 = 3$, $\mathrm{Dep}_3 = 5$ (if the QNN circuit is too deep, the accuracy will be very high regardless of the entangling layers) and set $10$ discrete evolution times
with the Aubry-Andr\'e Hamiltonian \cite{Aubry1980Analyticity}:
\begin{eqnarray}\label{AAH}
H/\hbar = -\frac{g}{2}\sum_{k}(\hat{\sigma}_{x}^{(k)}\hat{\sigma}_{x}^{(k+1)}+\hat{\sigma}_{y}^{(k)}\hat{\sigma}_{y}^{(k+1)}) - \sum_{k}\frac{V_k}{2}\hat{\sigma}_{z}^{(k)}.
\end{eqnarray}
Here, $g$ is the coupling strength,
$V_k=V\cos(2\pi\alpha k+\phi)$ denotes the incommensurate potential where $V$ is the disorder magnitude,
$\alpha=(\sqrt{5}-1)/2$, and $\phi$ is uniformly distributed on $[0,2\pi)$.
For simplicity, we set $g=1$ and $V=0$ before running each setting $100$ times.
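As a minimal sketch (our own construction; the repository may implement it differently), such an analog entangling layer can be built in Yao as the time evolution of the Hamiltonian in Eq.~(\ref{AAH}) with open boundaries:
\begin{jllisting}
# sketch of an analog entangling layer: time evolution under the
# Aubry-Andre Hamiltonian with open boundaries, g = 1 and V = 0
using Yao
function aubry_andre(n; g = 1.0, V = 0.0, phi = 2pi * rand())
    alpha = (sqrt(5) - 1) / 2
    hop = sum((-g/2) * chain(n, put(k=>X), put(k+1=>X)) +
              (-g/2) * chain(n, put(k=>Y), put(k+1=>Y)) for k in 1:n-1)
    pot = sum((-V * cos(2pi*alpha*k + phi) / 2) * put(n, k=>Z) for k in 1:n)
    return hop + pot
end
# analog entangling layer with Hamiltonian evolution time t
analog_ent(n, t) = time_evolve(aubry_andre(n), t)
\end{jllisting}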
According to the results in Table~\ref{table:AE_Analog_Time}, the performance is relatively poor for very short evolution times. This can be explained as follows: the shorter the evolution time, the closer the entangling layer is to the identity.
Thus, such layers cannot provide enough connections between different qubits.
In addition, at depth $1$ the accuracy does not necessarily increase with the evolution time.
For experimentalists, there might be additional considerations when setting the evolution time, for which the open-source codes may provide help.
\begin{table}[ht]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc|ccc}
\multicolumn{1}{c}{\multirow{2}{*}{\bf Datasets}}& \multicolumn{3}{c}{\bf FashionMNIST} &\multicolumn{3}{c}{\bf MNIST} &\multicolumn{3}{c}{\bf SPT(8 qubits)}&\multicolumn{3}{c}{\bf SPT(10 qubits)}\\
\multicolumn{1}{c}{}&\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$} &\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$}&\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$}&\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$}
\\ \hline \\
{\bf Block Depth} & & & & & & & & & & & & \\
1 &0.989 &0.991 &0.900 &0.927 &0.915 &0.894 &0.914 &0.968 &0.949 &0.904 &0.901 &0.864\\
2 &0.992 &0.989 &0.993 &0.960 &0.944 &0.967 &0.958 &0.974 &0.972 &0.909 &0.908 &0.865\\
3 &0.995 &0.993 &0.993 &0.975 &0.974 &0.975 &0.960 &0.982 &0.979 &0.906 &0.911 &0.861\\
4 &0.996 &0.994 &0.995 &0.979 &0.979 &0.977 &0.976 &0.984 &0.983 &0.906 &0.907 &0.894\\
5 &0.997 &0.995 &0.995 &0.982 &0.981 &0.979 &0.980 &0.985 &0.985 &0.918 &0.917 &0.898\\
6 &0.998 &0.996 &0.996 &0.981 &0.982 &0.980 &0.983 &0.986 &0.986 &0.924 &0.923 &0.914\\
7 &0.996 &0.996 &0.997 &0.982 &0.983 &0.982 &0.985 &0.987 &0.986 &0.923 &0.927 &0.925\\
8 &0.997 &0.996 &0.997 &0.983 &0.982 &0.982 &0.988 &0.987 &0.987 &0.928 &0.930 &0.923\\
9 &0.997 &0.996 &0.997 &0.983 &0.984 &0.984 &0.987 &0.987 &0.987 &0.931 &0.931 &0.926\\
10 &0.998 &0.997 &0.997 &0.985 &0.984 &0.985 &0.988 &0.988 &0.988 &0.930 &0.931 &0.932\\
11 &0.997 &0.997 &0.997 &0.985 &0.985 &0.985 &0.987 &0.988 &0.988 &0.934 &0.934 &0.931\\
12 &0.997 &0.998 &0.997 &0.987 &0.986 &0.986 &0.988 &0.989 &0.989 &0.932 &0.934 &0.933\\
\end{tabular}}
\end{center}
\caption{Accuracy of the trained amplitude-encoding based QNN model with $12$ different depths and $3$ analog entangling
layers. For each hyper-parameter setting, the codes have been repeatedly run $100$ times and the average test accuracy is exhibited.}
\label{table:AE_Analog_Depth}
\end{table}
\textbf{c) Different depths with analog entangling layers.}
Similar to \textbf{a)}, here we provide the numerical performances of the QNN classifiers with $12$ different depths, $4$ real-life and quantum datasets, and $3$ different analog entangling
layers (time evolution under the Aubry-Andr\'e Hamiltonian with evolution times $5.0$, $10.0$, and $15.0$).
This is also complementary to the information provided in Table~\ref{table:AE_Analog_Time}.
As shown in Table~\ref{table:AE_Analog_Depth}, we find that the accuracy is relatively low at depth $1$ for several cases, which might be caused by the fact that the information of the input states cannot be fully captured by measuring a single qubit (some qubits are outside the ``light cone'' of the measured qubit).
When the depth increases, the accuracy quickly converges to a high value, which is consistent with the results of the amplitude-encoding based QNNs utilizing digital entangling layers shown in Table~\ref{table:AE_Ent_Depth}.
\section{Block-encoding based QNNs}
\label{sec:Block}
\subsection{The list of variables and caveats}
In this section, we explore the performances of block-encoding based QNNs under different hyper-parameter settings.
Similar to the section for amplitude-encoding based QNNs, here we list the hyper-parameters that will be varied in order to test the corresponding performance. The benchmarks of the block-encoding based QNNs' performance are presented in the next subsection.
\textbf{Entangling layers.}
The entangling layers are the same as in Section~\ref{sec:Amplitude}, and we refer to that section for more details.
\textbf{A caveat on the scaling of encoded elements.}
For block-encoding based QNNs, we need to encode the input data into the blocks, and the rotation angles of single-qubit gates are popular choices.
However, given a data point vectorized as $\vec{x} = (x_1,x_2,\dots,x_N)$, we need to decide the scaling factor $c$ such that each element $x_i$ will be encoded as $c x_i$ as a rotation angle.
We find that the performance during the training process is very sensitive to this scaling factor, which should be considered seriously. We will numerically verify this property in the next subsection to provide some guidance for future works.
Here, we provide a framework for the data preparation and QNN circuit's design:
\begin{jllisting}
using Yao, YaoPlots, MAT
using Quantum_Neural_Network_Classifiers: ent_cx, params_layer
# import the FashionMNIST data
vars = matread("../dataset/FashionMNIST_1_2_wk.mat")
num_qubit = 10
# set the size of the training set and the test set
num_train = 500
num_test = 100
# set the scaling factor for data encoding c = 2
c = 2
x_train = real(vars["x_train"][:,1:num_train])*c
y_train = vars["y_train"][1:num_train,:]
x_test = real(vars["x_test"][:,1:num_test])*c
y_test = vars["y_test"][1:num_test,:];
# define the QNN circuit, some functions have been defined before
depth = 9
circuit = chain(chain(num_qubit, params_layer(num_qubit),
ent_cx(num_qubit)) for _ in 1:depth)
# assign random initial parameters to the circuit
dispatch!(circuit, :random)
# record the initial parameters
ini_params = parameters(circuit);
YaoPlots.plot(circuit)
\end{jllisting}
Furthermore, to exhibit the idea of block-encoding based QNNs, here we select and present an interleaved block-encoding QNN's encoding strategy in advance, whose performance will be tested in the next subsection with complete codes at \href{https://github.com/LWKJJONAK/Quantum_Neural_Network_Classifiers/blob/main/Project.toml}{\color{blue}Github QNN}:
\begin{jllisting}
# we illustrate the idea of block-encoding based QNNs through a simple example
# the FashionMNIST dataset has been resized to be 256-dimensional
# we expand them to 270-dimensional by adding zeros at the end of the vectors
dim = 270
x_train_ = zeros(Float64,(dim,num_train))
x_train_[1:256,:] = x_train
x_train = x_train_
x_test_ = zeros(Float64,(dim,num_test))
x_test_[1:256,:] = x_test
x_test = x_test_
# the input data and the variational parameters are interleaved
# this strategy has been applied to [Ren et al, Experimental quantum adversarial
# learning with programmable superconducting qubits, arXiv:2204.01738]
# later we will numerically test the expressive power of this encoding strategy
train_cir = [chain(chain(num_qubit, params_layer(num_qubit), ent_cx(num_qubit))
for _ in 1:depth) for _ in 1:num_train]
test_cir = [chain(chain(num_qubit, params_layer(num_qubit), ent_cx(num_qubit))
for _ in 1:depth) for _ in 1:num_test];
for i in 1:num_train
dispatch!(train_cir[i], x_train[:,i]+ini_params)
end
for i in 1:num_test
dispatch!(test_cir[i], x_test[:,i]+ini_params)
end
\end{jllisting}
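As a simplified, hypothetical sketch of how gradients can be obtained in this interleaved setting with Yao's built-in adjoint engine, one may differentiate the expectation value of a readout observable with respect to the shared variational parameters; since the data enter only as a constant additive offset, this gradient coincides with the circuit-parameter gradient. The actual loss function and update rule (Algorithm~\ref{Algo_BEQNN}) are not reproduced here, and the index of the measured qubit is an assumption:
\begin{jllisting}
# hypothetical sketch: gradient of a readout expectation in the interleaved
# block-encoding setting; pos_ (the measured qubit) is an assumption here
using Yao
pos_ = num_qubit                     # assume the last qubit is measured
op = put(num_qubit, pos_=>Z)         # readout observable
theta = copy(ini_params)             # shared variational parameters
# dispatched parameters are data + theta, so the gradient with respect to
# theta equals the circuit-parameter gradient returned by expect'
dispatch!(train_cir[1], x_train[:,1] + theta)
_, theta_grad = expect'(op, zero_state(num_qubit) => train_cir[1])
\end{jllisting}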
\begin{figure}
\begin{algorithm}[H]
\caption{Block-encoding based quantum neural network classifier}
\label{Algo_BEQNN}
\begin{algorithmic}[1]
\REQUIRE The untrained model $h$ with variational
parameters $\Theta$, the loss function $L$, the training set $\{(\mathbf{x}_m, \mathbf{a}_m)\}_{m=1}^n$ with size $n$, the batch size $n_b$, the number of iterations $T$, the learning rate $\epsilon$, and the Adam optimizer $f_{\text{Adam}}$
\ENSURE The trained model
\STATE Initialization: generate random initial parameters for $\Theta$
\FOR{ $i \in [T]$}
\STATE Randomly choose $n_b$ samples $\{\mathbf{x}_{\text{(i,1)}},\mathbf{x}_{\text{(i,2)}},...,\mathbf{x}_{\mathrm{(i,n_b)}}\}$ from the training set
\STATE Calculate the gradients of $L$ with respect to the parameters $\Theta$, and take the average value over the training batch $\mathbf{G}\leftarrow\frac{1}{n_b}\Sigma_{k=1}^{n_b}\nabla \mathcal{L}(h(\mathbf{x}_{\text{(i,k)}};{\Theta}),\mathbf{a}_{\text{(i,k)}})$
\STATE Updates: ${\Theta} \leftarrow f_{\text{Adam}}({\Theta},\epsilon,\mathbf{G})$
\ENDFOR
\STATE Output the trained model
\end{algorithmic}
\end{algorithm}
\end{figure}
\subsection{Benchmarks of the performance}
Here, we provide numerical benchmarks of the block-encoding based QNNs' performances with different hyper-parameters (scaling factors for data encoding, entangling layers) and two datasets: the FashionMNIST dataset \cite{FashionMNIST} and the MNIST handwritten digit dataset \cite{LeCun1998Mnist}.
The basic pseudocode for the block-encoding based QNNs' learning procedure is shown in Algorithm~\ref{Algo_BEQNN}.
In the rest of this subsection, we will benchmark the QNNs' performances with respect to the variables listed above, where the achieved accuracy will be exhibited in the following data tables. The code for calculating the accuracy and loss over the training set, the test set, or a batch set is shown below:
\begin{jllisting}
using Quantum_Neural_Network_Classifiers: acc_loss_evaluation
# calculate the accuracy & loss for the training & test set
# with the method acc_loss_evaluation(nbit::Int64,circuit::Vector,
# y_batch::Matrix{Float64},batch_size::Int64,pos_::Int64)
# pos_ denotes the index of the qubit to be measured
train_acc, train_loss = acc_loss_evaluation(num_qubit, train_cir, y_train,
num_train, pos_)
test_acc, test_loss = acc_loss_evaluation(num_qubit, test_cir, y_test,
num_test, pos_)
\end{jllisting}
\begin{table}[ht]
\begin{center}
\resizebox{0.60\textwidth}{!}{
\begin{tabular}{l|ccc|ccc}
\multicolumn{1}{c}{\multirow{2}{*}{\bf Datasets}}& \multicolumn{3}{c}{\bf FashionMNIST} &\multicolumn{3}{c}{\bf MNIST} \\
\multicolumn{1}{c}{}&\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$} &\multicolumn{1}{c}{\bf Ent$_1$} & \multicolumn{1}{c}{\bf Ent$_2$} & \multicolumn{1}{c}{\bf Ent$_3$}
\\ \hline \\
{\bf Scaling Factor} & & & & & & \\
0.1 &0.526 &0.531 &0.537 &0.579 &0.574 &0.583\\
0.4 &0.776 &0.786 &0.765 &0.731 &0.768 &0.736\\
0.7 &0.959 &0.964 &0.972 &0.880 &0.863 &0.874\\
1.0 &0.988 &0.988 &0.988 &0.940 &0.948 &0.945\\
1.3 &0.991 &0.992 &0.991 &0.965 &0.967 &0.965\\
1.6 &0.994 &0.993 &0.992 &0.972 &0.975 &0.973\\
2.0 &0.994 &0.994 &0.994 &0.976 &0.976 &0.975\\
2.4 &0.995 &0.994 &0.994 &0.978 &0.978 &0.977\\
2.8 &0.995 &0.994 &0.994 &0.977 &0.976 &0.979\\
3.2 &0.995 &0.995 &0.994 &0.976 &0.975 &0.977\\
4.0 &0.993 &0.992 &0.994 &0.967 &0.965 &0.966\\
5.0 &0.987 &0.987 &0.988 &0.938 &0.938 &0.931\\
6.0 &0.979 &0.978 &0.980 &0.878 &0.874 &0.882\\
8.0 &0.947 &0.949 &0.947 &0.740 &0.736 &0.747\\
10.0 &0.895 &0.896 &0.889 &0.644 &0.641 &0.637\\
\end{tabular}}
\end{center}
\caption{Accuracy of the trained block-encoding based QNN model with $15$ different scaling factors for data encoding and $3$ digital entangling layers shown in Fig.~\ref{ent_layer}(a-c). For each hyper-parameter setting, the codes have been repeatedly run $100$ times and the average test accuracy is exhibited.}
\label{table:BE_Ent_Scaling}
\end{table}
\begin{table}[hbt]
\begin{center}
\resizebox{0.55\textwidth}{!}{
\begin{tabular}{l|ccc|ccc}
\multicolumn{1}{c}{\multirow{2}{*}{\bf Datasets}}& \multicolumn{3}{c}{\bf FashionMNIST} &\multicolumn{3}{c}{\bf MNIST} \\
\multicolumn{1}{c}{}&\multicolumn{1}{c}{\bf Enc$_1$} & \multicolumn{1}{c}{\bf Enc$_2$} & \multicolumn{1}{c}{\bf Enc$_3$} &\multicolumn{1}{c}{\bf Enc$_1$} & \multicolumn{1}{c}{\bf Enc$_2$} & \multicolumn{1}{c}{\bf Enc$_3$}
\\ \hline \\
{\bf Time} & & & & & & \\
0.1 &0.992 &0.994 &0.995 &0.972 &0.974 &0.979\\
0.3 &0.992 &0.994 &0.994 &0.970 &0.978 &0.977\\
0.5 &0.992 &0.994 &0.995 &0.969 &0.978 &0.978\\
0.7 &0.991 &0.994 &0.994 &0.969 &0.979 &0.978\\
1.0 &0.992 &0.995 &0.995 &0.972 &0.976 &0.977\\
2.0 &0.993 &0.994 &0.995 &0.970 &0.977 &0.979\\
3.0 &0.993 &0.994 &0.995 &0.974 &0.977 &0.976\\
5.0 &0.993 &0.993 &0.994 &0.973 &0.977 &0.979\\
7.0 &0.991 &0.994 &0.995 &0.970 &0.977 &0.979\\
10.0 &0.992 &0.994 &0.995 &0.971 &0.979 &0.979\\
\end{tabular}}
\end{center}
\caption{Accuracy of the trained block-encoding based QNN model with $10$ different Hamiltonian evolution times and $3$ scaling factors for data encoding: $\mathrm{Enc}_1 = 1.5$, $\mathrm{Enc}_2 = 2.0$, and $\mathrm{Enc}_3 = 2.5$.
For each hyper-parameter setting, the codes have been repeatedly run $100$ times and the average test accuracy is exhibited.}
\label{table:BE_Analog_Time}
\end{table}
\textbf{a) Different scaling factors for data encoding with digital entangling layers.}
In Table~\ref{table:BE_Ent_Scaling}, we provide the numerical performances of the block-encoding based QNN classifiers with $15$ different scaling factors for data encoding, $2$ real-life datasets, and $3$ different digital entangling layers shown in Fig.~\ref{ent_layer}(a-c).
Here, we mention that, before applying the scaling factor for data encoding, the original data vectors are already normalized.
Similar to the amplitude-encoding based QNNs' setting, we reinitialize the model $100$ times and take the average test accuracy as the result.
The numerical results in Table~\ref{table:BE_Ent_Scaling} reveal that the optimal scaling factors for data encoding lie in a certain interval, while larger or smaller values lead to comparatively poor performance.
In our simulations for handling $256$-dimensional data with a $10$-qubit QNN circuit,
choosing scaling factors between $1.5$ and $3.5$ results in a decent performance.
It is worthwhile to mention that, in Ref.~\cite{Ren2022Experimental}, we have experimentally demonstrated quantum adversarial machine learning for a $256$-dimensional medical dataset with block-encoding based QNNs.
The QNN structure utilized on the superconducting platform is similar to the ones in this paper, and we also carefully choose a scaling factor for better experimental performance.
\textbf{b) Analog layers’ Hamiltonian evolution time.}
Here, to explore the effect of analog layers’ Hamiltonian evolution time on the block-encoding based QNNs' performance, we first fix the scaling factor for data encoding to be $\mathrm{Enc}_1 = 1.5$, $\mathrm{Enc}_2 = 2.0$, and $\mathrm{Enc}_3 = 2.5$.
Then, we set $10$ discrete evolution time with the Aubry-Andr\'e Hamiltonian.
The model for each setting has been reinitialized and run $100$ times to provide an average test accuracy.
The numerical results are shown in Table~\ref{table:BE_Analog_Time}, where we see that the evolution time turns out to have a minor influence on the average performance.
In Table~\ref{table:AE_Analog_Time}, we find that a low-depth QNN circuit may perform poorly with a short evolution time.
This does not happen in the present block-encoding based QNNs' setting since the QNNs' depth in this setting is considerable and multiple entangling layers provide enough connections between different qubits.
\section{Conclusion and outlooks}
\label{sec:Conclusion}
In this paper, we have briefly reviewed the recent advances in quantum neural networks and utilized Yao.jl's framework to construct our code repository which can efficiently simulate various popular quantum neural network structures.
Moreover, we have carried out extensive numerical simulations to benchmark these QNNs' performances, which may provide helpful guidance for both developing powerful QNNs and experimentally implementing large-scale demonstrations of them.
Meanwhile, we mention that our open-source project is not complete, since there are many interesting QNN structures and training strategies left unexplored here, e.g. quantum convolutional neural networks, quantum recurrent neural networks, etc. We would be very happy to communicate with members from both the classical and quantum machine learning communities to discuss and enrich this project.
For the future works, we would like to mention several points that may trigger further explorations:
\begin{itemize}
\item At the current stage, the non-linearity of quantum neural networks cannot be designed flexibly, since the QNN circuit itself can be seen as a linear unitary transformation. This seems like a restriction compared with classical neural networks, where the latter can flexibly design and utilize non-linear transformations such as activation functions, pooling layers, convolutional layers, etc. A possible future direction is to explore how to efficiently implement flexible non-linear transformations to enhance QNNs' expressive power, or to design hybrid quantum-classical models such that the quantum and classical parts of the model can demonstrate their respective advantages.
\item As mentioned in \cite{Schuld2021Supervised,Caro2021Encoding}, the data-encoding strategies are important for the performance. In our numerical simulations, we mainly adjust the hyper-parameters to empirically improve QNNs' performance. More theoretical guidance will be helpful for future works, especially for near-term and large-scale experimental demonstrations.
\item In this work, the variational parameters are mainly encoded into the single-qubit rotation gates. One may also explore the possibility of encoding the parameters into some global operations. For example, considering the time evolution of the Aubry-Andr\'e Hamiltonian, the evolution time and coupling strength can be utilized as variational parameters to be optimized. For numerical simulations, we can use Yao.jl to define such customized gates and utilize the built-in automatic differentiation (AD) engine or a general AD engine such as Zygote.jl \cite{Innes2019Differentiable} to efficiently simulate the optimization procedure.
\item Interpretable quantum machine learning: in recent years, the interpretability of machine learning models has been considered crucially important, especially in sensitive applications such as medical diagnosis or self-driving cars \cite{Samek2017Explainable}.
Along this line, popular methods used in interpreting deep learning models include saliency maps and occlusion maps, which explore the importance of one pixel in an image to the final evaluation function.
To our knowledge, there are few works focusing on this direction \cite{Baughman2022Study}, and further explorations might be needed.
\end{itemize}
\section*{Acknowledgements}
We would like to thank Wenjie Jiang, Weiyuan Gong, Xiuzhe Luo, Peixin Shen, Zidu Liu, Sirui Lu for helpful discussions, and Jinguo Liu in particular for very helpful communications about the organization of the code repository. We would also like to thank the \href{https://github.com/QuantumBFS/Yao.jl/graphs/contributors}{\color{blue}developers} of Yao.jl for providing an efficient quantum simulation framework and Github for the online resources to help open source the code repository.
\paragraph{Funding information}
This work is supported by the start-up fund from Tsinghua University (Grant. No. 53330300322), the National Natural Science Foundation of China (Grant. No. 12075128), and the Shanghai Qi Zhi Institute.
\begin{appendix}
\section{Preparation before running the code}
The quantum neural networks in this work are built under the framework of Yao.jl in Julia Programming Language.
Detailed installation instructions of Julia and Yao.jl can be found at \href{https://julialang.org/}{\color{blue}Julia} and \href{https://yaoquantum.org/}{\color{blue}Yao.jl}, respectively.
The environments for the codes provided in jupyter-notebook formats can be built with the following commands:
\begin{jllisting}
$ git clone https://github.com/LWKJJONAK/Quantum_Neural_Network_Classifiers
$ cd Quantum_Neural_Network_Classifiers
$ julia --project=amplitude_encode -e "using Pkg; Pkg.instantiate()"
$ julia --project=block_encode -e "using Pkg; Pkg.instantiate()"
\end{jllisting}
In addition, for better compatibility, using version 1.7 or higher of Julia is suggested.
\section{Complete example codes}
The codes for numerical simulations to benchmark QNNs' performances can be found at:
\url{https://github.com/LWKJJONAK/Quantum_Neural_Network_Classifiers}, \\
where we provide:
\begin{itemize}
\item Detailed tutorial codes for building amplitude-encoding based and block-encoding based QNNs with annotations.
\item All the data generated for the tables in this paper from Table~\ref{table:AE_Ent_Depth} to Table~\ref{table:BE_Analog_Time}, which includes more than $55000$ files in ``.mat'' format.
In addition to the average accuracy provided in the paper, in the complete data files, we also record the learning rate, the batch size, the number of iterations, the size of the training and test sets, and the accuracy/loss curves during the training process.
\end{itemize}
\end{appendix}
\section{Introduction}
This paper aims at deriving a homotopy classification for super vector bundles. Its importance lies in finding a proper generalization of Chern classes in supergeometry. Indeed, Chern classes are cohomology elements associated to isomorphism classes of complex vector bundles in ordinary geometry. In the category of vector bundles, it is shown that the canonical vector bundles $\gamma^n_k$ on Grassmannians $Gr(n,k)$ are universal. Equivalently, associated to each vector bundle $\cal{E}$ on $M$, up to homotopy, there exists a unique map $f:M\to Gr(n,k)$, for sufficiently large $n$, such that $\cal{E}$ is isomorphic to the bundle induced from $\gamma^n_k$ by $f$. In addition, the Chern classes of a vector bundle may be described as the pullback of the Chern classes of the universal bundle.
\\
To have an appropriate generalization of homotopy classification theorem, one should have a proper generalization of grassmannian. Supergrassmannian, introduced in \cite{Manin} and Grassmannian, in some sense, are homotopy equivalent (cf. subsection 2.2). Therefore, cohomology group associated to supergrassmannian is equal to that of grassmannian. In other words, the former group contains no information about superstructure. Hence, from classifying space viewpoint, supergrassmannians are not good generalization of Grassmannians.
\\
In this paper, first, following \cite{Bahadory}, we introduce $\nu$-grassmannians, denoted by $_{\nu}Gr(m|n)$, as a new supergeneralization of Grassmannians. In addition, we show the existence of $\Gamma$, a canonical super vector bundle over $_{\nu}Gr(m|n)$. After introducing Gauss supermaps for super vector bundles, the universal property of $\Gamma$ is studied. At the end, we extend one of the main theorems on the homotopy classification of vector bundles to supergeometry.
\\
There are different approaches to generalizing Chern classes in supergeometry, such as the homotopy and the analytic approaches. In this paper our approach is homotopic. Although there are not many articles taking the homotopy approach, one may refer to \cite{V-manin} as a good example of such papers. Nevertheless, much more effort has been made to generalize Chern classes in supergeometry by the analytic approach; one may refer to \cite{Bartocci-B}, \cite{Bartocci-B-H}, \cite{B-H}, \cite{Landi}, \cite{Manin-P}, \cite{V-Manin-P}. However, in all these works the classes obtained are nothing but the Chern classes of the reduced vector bundle(s), and they carry no information about the superstructure.
\section{Preliminaries}
In this section, first, we recall some basic definitions of supergeometry. Then, we introduce a supergeneralization of Grassmannian called $\nu$-grassmannian.
\subsection{Supermanifolds}
A \textit{super ringed space} is a pair $(X, \mathcal{O})$ where $X$ is a topological space and $\mathcal{O}$ is a sheaf of commutative $\mathbb{Z}_{2}$-graded rings with units on $X$. Let $\mathcal{O}(U)= \mathcal{O}^{ev}(U) \oplus \mathcal{O}^{odd}(U)$ for any open subset $U$ of $X$. An element $a$ of $\mathcal{O}(U)$ is called a homogeneous element of parity $p(a)=0$ if $a \in \mathcal{O}^{ev}(U)$ and it is a homogeneous element of parity $p(a)=1$ if $a \in \mathcal{O}^{odd}(U)$. A morphism between two super ringed spaces $(X, \mathcal{O}_{X})$ and $(Y, \mathcal{O}_{Y})$ is a pair $(\widetilde{\psi}, \psi^{*})$ such that $\widetilde{\psi}: X \longrightarrow Y$ is a continuous map and $\psi^{*}: \mathcal{O}_{Y} \longrightarrow \widetilde{\psi}_{*}(\mathcal{O}_{X})$ is a homomorphism between the sheaves of $\mathbb{Z}_{2}$-graded rings.
\\
A \textit{superdomain} is a super ringed space $\mathbb{R}^{p|q}:=(\mathbb{R}^{p}, \mathcal{O})$ where
\begin{equation*}
\mathcal{O}= \mathbf{C}^{\infty}_{\mathbb{R}^{p}} \otimes_{\mathbb{R}} \wedge \mathbb{R}^{q}, \qquad p, q \in \mathbb{N}.
\end{equation*}
By $\mathbf{C}^{\infty}_{\mathbb{R}^{p}}$ we mean the sheaf of smooth functions on $\mathbb{R}^{p}$. A super ringed space which is locally isomorphic to $\mathbb{R}^{p|q}$ is called a \textit{supermanifold} of dimension $p|q$. Note that a morphism $(\widetilde{\psi}, \psi^{*})$ between two supermanifolds $(X, \mathcal{O}_{X})$ and $(Y, \mathcal{O}_{Y})$ is just a morphism between the super ringed spaces such that for any $x \in X$, $\psi^*: \mathcal{O}_{Y, \widetilde{\psi}(x)} \longrightarrow \widetilde{\psi}_{*}(\mathcal{O}_{X, x})$ is local, i.e., $\psi^{*}(m_{\widetilde{\psi}(x)}) \subseteq m_{x}$, where $m_x$ is the unique maximal ideal in $\mathcal{O}_{X, x}$.
\subsection{$\nu$-grassmannian}\label{grassmannian}
Supergrassmannians are not a good generalization of Grassmannians. Indeed, these two are, in some sense, homotopy equivalent. This equivalence can be shown easily in the case of projective superspaces.
\\
To this end, let $\mathbb{P}^{m|n} = (\mathbb{RP}^{m}, \mathcal{O}_{m|n})$ be the real projective superspace.
By a \textit{deformation retraction} from $\mathcal{O}_{m|n}$ to $\mathcal{O}_{m}$, the sheaf of rings of all real-valued functions on the real projective space $\mathbb{P}^{m}$, we mean that there is a sheaf of $\mathbb{Z}_{2}$-graded rings, say $\mathcal{A}$, on $\mathbb{RP}^{m} \times \mathbb{R}$ such that $\mathcal{A}_{\mathbb{RP}^{m} \times \{0\}} = \mathcal{O}_{m|n}$ and $\mathcal{A}_{\mathbb{RP}^{m} \times \{1\}} = \mathcal{O}_{m|n}$. In addition, there are morphisms $r: \mathcal{O}_{m} \longrightarrow \mathcal{O}_{m|n}$, $j: \mathcal{O}_{m|n} \longrightarrow \mathcal{O}_{m}$ and $H: \mathcal{O}_{m|n} \longrightarrow \mathcal{A}$ along $\mathbb{RP}^{m} \times \mathbb{R} \longrightarrow \mathbb{RP}^{m}$, $(z, t) \longmapsto z$, such that $j_{0} \circ H=id$ and $j_{1} \circ H= r \circ j$, where $j_{0}: \mathcal{A} \longrightarrow \mathcal{O}_{m|n}$ and $j_{1}: \mathcal{A} \longrightarrow \mathcal{O}_{m|n}$ are morphisms of sheaves satisfying, respectively, the following conditions:
\begin{equation*}
x_{i} \otimes 1 \longmapsto x_{i}, \qquad e_{j} \otimes 1 \longmapsto e_{j}, \qquad 1 \otimes t \longmapsto 0,
\end{equation*}
\begin{equation*}
x_{i} \otimes 1 \longmapsto x_{i}, \qquad e_{j} \otimes 1 \longmapsto e_{j}, \qquad 1 \otimes t \longmapsto 1.
\end{equation*}
\begin{proposition}
There is a deformation retraction from $\mathcal{O}_{m|n}$ to $\mathcal{O}_{m}$.
\end{proposition}
\textit{Proof}:
Set $\mathcal{A} = \mathcal{O}_{m|n} \otimes_{\mathbb{R}} \mathbf{C}^{\infty}_{\mathbb{R}}$ and $U_i=\mathbb{R}^m$, $1 \leq i \leq m+1$. Now, consider $H: \mathcal{O}_{m|n} \longrightarrow \mathcal{A}$ which is locally defined along $U_{i} \times \mathbb{R} \longrightarrow U_{i}$ as follows:
\begin{equation*}
x_{k} \longmapsto x_{k}, \qquad e_{l} \longmapsto (1-t)e_{l}.
\end{equation*}
One may easily show that $j_{0} \circ H=id$ and $j_{1} \circ H= r \circ j$ where $r: \mathcal{O}_{m} \longrightarrow \mathcal{O}_{m|n}$ is a morphism with $x_{k} \longmapsto x_{k}$ and $j: \mathcal{O}_{m|n} \longrightarrow \mathcal{O}_{m}$ is a morphism with $x_{k} \longmapsto x_{k}$ and $e_{l} \longmapsto 0$.
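For instance, on the odd generators one checks
\begin{equation*}
(j_{0} \circ H)(e_{l}) = j_{0}\big((1-t)e_{l}\big) = e_{l}, \qquad (j_{1} \circ H)(e_{l}) = j_{1}\big((1-t)e_{l}\big) = 0 = (r \circ j)(e_{l}),
\end{equation*}
while all the maps involved fix the even coordinates $x_{k}$.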
\begin{flushright}
$\square$
\end{flushright}
This proposition shows that in the construction of projective superspaces, the odd variables do not play a principal role. Solving this problem
is our motivation for defining $\nu$-projective spaces or, more generally, $\nu$-grassmannians. Before that, it is necessary to recall some basic concepts.
\\
A $\nu$-\textit{domain} of dimension $p|q$ is a super ringed space $\mathbb{R}^{p|q}$ which carries an odd involution $\nu$, i.e.,
\begin{equation*}
\nu: \mathcal{O} \longrightarrow \mathcal{O}, \qquad \nu(\mathcal{O}^{ev}) \subseteq \mathcal{O}^{odd}, \qquad \nu(\mathcal{O}^{odd}) \subseteq \mathcal{O}^{ev}, \qquad \nu^{2}=id.
\end{equation*}
In addition, $\nu$ is a homomorphism of $\mathbf{C}^{\infty}$-modules.
\\
Let $k$, $l$, $m$ and $n$ be non-negative integers with $k<m$ and $l<n$. For convenience from now on, we set $p=k(m-k)+l(n-l)$ and $q=k(n-l)+l(m-k)$.
\\
A \textit{real} $\nu$-\textit{grassmannian}, $_{\nu}Gr_{k|l}(m|n)$, or shortly $_{\nu}Gr= (Gr_{k}^{m} \times Gr_{l}^{n}, \mathcal{G})$, is a real superspace obtained by gluing $\nu$-domains $(\mathbb{R}^{p}, \mathcal{O})$ of dimension $p|q$.
\\
Here, we need to set some notations that are useful later.
\\
Let $I$ be a $k|l$ multi-index, i.e., an increasing finite sequence of $\{1, \cdots, m+n\}$ with $k+l$ elements. So one may set
\begin{equation*}
I:= \{i_{a}\}_{a=1}^{k+l}.
\end{equation*}
A standard $(k|l) \times (m|n)$ supermatrix, say $T$, may be decomposed into four blocks as follows:
$$\left[ \! \! \!
\begin{tabular}{ccc}
$A_{k\times m}$ & $\vline$ & $B_{k \times n}$\\
--- --- --- & $\vline$ & --- --- --- \\
$C_{l \times m}$ & $\vline$ & $D_{l \times n}$
\end{tabular}
\! \! \! \right]$$
The upper left and lower right blocks are filled by even elements. These two blocks together form the even part of $T$. The upper right and lower left blocks are filled by odd elements and form the odd part of $T$.
\\
The columns with indices in $I$ together form a minor denoted by $M_I(T)$.
\\
A pseudo-unit matrix $id_{I}$ corresponding to a $k|l$ multi-index $I$ is a $(k|l) \times (k|l)$ matrix all of whose entries are zero except those on its main diagonal, which are $1$ or $1\nu$, where $1\nu$ is a formal expression used as an odd unit. For each open subset $U$ of $\mathbb{R}^p$ and each $z \in \mathcal{O}(U)$, we also need the following rules:
\begin{equation*}
z.(1\nu):= \nu(z), \qquad (1\nu)(1\nu)=1.
\end{equation*}
So for each $I$, one has
\begin{equation*}
id_{I}.id_{I}= id.
\end{equation*}
As a result, for each $I$ and each $(k|l) \times(k|l)$ supermatrix $T$, we can see that
\begin{equation*}
T=(T.id_{I}).id_{I}.
\end{equation*}
The following steps may be taken in order to construct the structure sheaf of $_{\nu}Gr$:
\\
Step1: For each $k|l$ multi-index $I$, consider the $\nu$-domain $(V_{I}, \mathcal{O}_{I})$.
\\
Step2: Corresponding to each $I$, consider a $(k|l) \times (m|n)$ supermatrix $A^{I}$ whose columns with indices in $I$ together form $id_{I}$. The formal expression $1\nu$ appears when a diagonal entry of $id_{I}$ is placed in the odd part of $A^{I}$.
\\
In addition, the other columns of $A^{I}$, from left to right, and each one from top to bottom, are filled with the even and odd coordinates of the $\nu$-domain $\mathcal{O}_{I}$, i.e.,
$x_1, ..., x_k, e_1, ..., e_l, ..., x_{(m- k-1)k+1}, ..., x_{(m- k)k},$ $e_{(m- k-1)l+1}, ..., e_{(m- k)l}, e_{(m- k)l+1} ...e_{(m- k)l+ k}, x_{(m- k)k+1}, ..., x_{(m- k)k+ l}, ...,$
respectively. Afterwards, each entry, say $a$, that lies in a block of the opposite parity is replaced by $\nu(a)$.
\\
As an example, consider $_{\nu}Gr_{2|2}(3|3)$ with $I=\{ 1, 2, 3, 6\}$. Then one has
$$\left[ \! \! \! \!
\begin{tabular}{ccccccc}
1 & 0 & 0 & ; & $\nu x_{1}$ & $e_{3}$ & 0\\
0 & 1 & 0 & ; & $\nu x_{2}$ & $e_{4}$ & 0\\
-- &-- & -- & --& -- -- & -- -- & --\\
0 & 0 & $1\nu$ & ; & $\nu e_{1}$ & $x_{3}$ & 0
\\
0 & 0 & 0 & ; & $\nu e_{2}$ & $x_{4}$ & 1
\end{tabular}
\! \! \! \! \right]$$
The columns of $A^{I}$ with indices in $I$ together form the following supermatrix:
$$M_{I}(A^{I}):= id_{I}=\left[ \! \! \! \!
\begin{tabular}{ccccccc}
1 & 0 & 0 & ; & 0\\
0 & 1 & 0 & ; & 0\\
-- &-- & -- & --& --\\
0 & 0 & $1\nu$ & ; & 0
\\
0 & 0 & 0 & ; & 1
\end{tabular}
\! \! \! \! \right]$$
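Using the rules above, one can check directly in this example that
\begin{equation*}
id_{I}.id_{I} = \mathrm{diag}\big(1,\, 1,\, (1\nu)(1\nu),\, 1\big) = \mathrm{diag}(1,\, 1,\, 1,\, 1) = id,
\end{equation*}
since the only nontrivial product on the diagonal is $(1\nu)(1\nu)=1$.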
For each pair of multi-indices $I$ and $J$, define the set $V_{IJ}$ to be the largest subset of $V_{I}$ on which $M_{J}(A^{I}).id_{J}$ is invertible.
\\
Step3: On $V_{IJ}$, the equality
\begin{equation*}
\big(M_{J}(A^{I}).id_{J}\big)^{-1}.A^{I} = A^{J}
\end{equation*}
defines a correspondence between the even and odd coordinates of $V_{J}$ and the rational expressions in $\mathcal{O}_{I}$ that appear as the corresponding entries of the matrices on the two sides of the equality. By (\cite{Varadarajan}, Th 4.3.1), one has a unique homomorphism
\begin{equation*}
\varphi_{IJ}^{*}: \mathcal{O}_{J|_{V_{JI}}} \longrightarrow \mathcal{O}_{I|_{V_{IJ}}}
\end{equation*}
Step4: The homomorphisms $\varphi_{IJ}^{*}$ satisfy the gluing conditions, i.e., for each $I$, $J$ and $K$, we have
\begin{itemize}
\item[1)]
$\varphi_{II}^{*}= id$
\item[2)]
$\varphi_{IJ}^{*} \circ \varphi_{JI}^{*} = id$
\item[3)]
$\varphi_{IK}^{*} \circ \varphi_{KJ}^{*} \circ \varphi_{JI}^{*}= id$
\end{itemize}
In the first condition, $\varphi_{II}^{*}$ is defined by the following equality:
\begin{align*}
\big(M_{I}(A^{I}).id_{I}\big)^{-1}.A^{I} = \big(id_{I}.id_{I}\big)^{-1}.A^{I}
\end{align*}
Since $\big(id_{I}.id_{I}\big)^{-1}= id$, we have $\varphi_{II}^{*}= id$.
\\
For the last condition, note that $\varphi_{KJ}^{*} \circ \varphi_{JI}^{*}$ is obtained from the equality
\begin{equation*}
\Big( M_{I}\Big( \big(M_{J}(A^{K}) .id_{J}\big)^{-1}.A^{K}\Big).id_{I}\Big)^{-1}\Big( \big(M_{J}(A^{K}) .id_{J}\big)^{-1}.A^{K}\Big)= A^{I}.
\end{equation*}
For the left hand side of this equality, one has
\begin{align*}
&\Big(\big(M_{J}(A^{K}) .id_{J}\big)^{-1}. M_{I}(A^{K}).id_{I}\Big)^{-1}\Big( \big(M_{J}(A^{K}) .id_{J}\big)^{-1}.A^{K}\Big)
\\
=&\big(M_{I}(A^{K}).id_{I}\big)^{-1} \big(M_{J}(A^{K}) .id_{J}\big)\big(M_{J}(A^{K}) .id_{J}\big)^{-1}. A^{K}
\\
=&\big(M_{I}(A^{K}).id_{I}\big)^{-1}.A^{K}= A^{I}
\end{align*}
Thus the third condition is established.
\\
The second condition results from the other conditions as follows:
\begin{align*}
&\varphi_{IJ}^{*} \circ \varphi_{JI}^{*}=\varphi_{II}^{*}
\\
&\varphi_{II}^{*}=id.
\end{align*}
\section{Super vector bundles}
Here, we recall the definition of super vector bundles and their homomorphisms. Then, we introduce a canonical super vector bundle over $\nu$-grassmannian.
\\
By a \textit{super vector bundle} $\mathcal{E}$ of rank $k|l$ over a supermanifold $(M, \mathcal{O})$, we mean a sheaf of $\mathbb{Z}_{2}$-graded $\mathcal{O}$-modules on $M$ which is locally a free module of rank $k|l$.
\\
In other words, there exists an open cover $\{U_{\alpha}\}_{\alpha}$ of $M$ such that
\begin{equation*}
\mathcal{E}(U_{\alpha}) \simeq \big(\mathcal{O}(U_{\alpha}) \big)^{k} \oplus \big(\pi\mathcal{O}(U_{\alpha}) \big)^{l}
\end{equation*}
or equivalently,
\begin{equation*}
\mathcal{E}(U_{\alpha}) \simeq \mathcal{O}(U_{\alpha}) \otimes_{\mathbb{R}} \big(\mathbb{R}^{k} \oplus \pi(\mathbb{R}^{l}) \big).
\end{equation*}
For example, let $\mathcal{O}_{M}^{k|l} :=(\oplus_{i=1}^{k} \mathcal{O}) \oplus (\oplus_{j=1}^{l} \pi \mathcal{O})$ where $\pi \mathcal{O}$ is an $\mathcal{O}$-module which satisfies
\begin{equation*}
(\pi \mathcal{O})^{ev} = \mathcal{O}^{odd}, \qquad (\pi \mathcal{O})^{odd} = \mathcal{O}^{ev}.
\end{equation*}
The right multiplication is the same as in $\mathcal{O}$ and the left multiplication is as follows:
\begin{equation*}
z(\pi w):= (-1)^{p(z)}\pi(zw)
\end{equation*}
where $\pi w$ denotes the element of $\pi\mathcal{O}$ corresponding to $w \in \mathcal{O}$.
\\
Hence, $\mathcal{O}_{M}^{k|l}$ is a super vector bundle over the supermanifold $M$.
\\
Let $\mathcal{E}$ and $\mathcal{E}^{\prime}$ be two super vector bundles over a supermanifold $(M, \mathcal{O})$. By a homomorphism from $\mathcal{E}$ to $\mathcal{E}^{\prime}$, we mean an even sheaf homomorphism $\tau: \mathcal{E} \longrightarrow \mathcal{E}^{\prime}$.
\\
Each super vector bundle over $M$ isomorphic to $\mathcal{O}_{M}^{k|l}$, is called a \textit{trivial super vector bundle} of rank $k|l$.
\subsection{Canonical super vector bundle over $\nu$-grassmannian}
Let $I$ be a $k|l$ multi-index and let $(V_{I}, \mathcal{O}_{I})$ be a $\nu$-domain. Consider the trivial super vector bundle
\begin{equation*}
\Gamma_{I}:= \mathcal{O}_{I} \otimes_{\mathbb{R}} \big(\mathbb{R}^{k} \oplus \pi(\mathbb{R}^{l}) \big) = \mathcal{O}_{I} \otimes_{\mathbb{R}} \mathbb{R}^{k|l}
\end{equation*}
By gluing these super vector bundles through suitable homomorphisms, one may construct a super vector bundle $\Gamma$ over $\nu$-grassmannian $_{\nu}Gr$. For this, consider a basis $\{e_{1}, \cdots, e_{k}, f_{1}, \cdots, f_{l}\}$ for $\mathbb{R}^{k|l}$ and set
\begin{equation*}
m:= \big(M_{J}(A^{I}) .id_{J}\big)^{-1}
\end{equation*}
where $\big(M_{J}(A^{I}) .id_{J}\big)^{-1}$ is introduced in subsection 2.2. Gluing morphisms are defined as follows:
\begin{equation*}
\psi_{IJ}^{*}: \Gamma_{J|_{V_{JI}}} \longrightarrow \Gamma_{I|_{V_{IJ}}}
\end{equation*}
\begin{equation*}
a \otimes e_{i} \longmapsto \varphi_{IJ|_{V_{JI}}}^{*}(a) \Big( \sum_{t \leq k} m_{it} \otimes e_{t} + \sum_{t > k}m_{it}\otimes f_{t}\Big),
\end{equation*}
where the elements $m_{it}$ are the entries of the $i$-th column of the supermatrix $m$. The morphisms $\psi_{IJ}^{*}$ satisfy the gluing conditions. So the $\Gamma_{I}$'s may be glued together to form a super vector bundle denoted by $\Gamma$.
\section{Gauss supermaps}
In ordinary geometry, a Gauss map is defined as a map from the total space of a vector bundle, say $\xi$, to a Euclidean space such that its restriction to any fiber is a monomorphism. Equivalently, one may consider a $1$-$1$ strong bundle map from $\xi$ to a trivial vector bundle. The Gauss map induces a homomorphism between the vector bundle and the canonical vector bundle on a grassmannian $Gr_{k}(n)$ for a sufficiently large value of $n$. A simple method for constructing such a map is to use a coordinate representation of $\xi$. In this section, for constructing a Gauss supermap of a super vector bundle, one may use the same method.
\\
We say a super vector bundle $\mathcal{E}$ over a supermanifold $(M, \mathcal{O})$ is of finite type whenever there is a finite open cover $\{U_{\alpha}\}_{\alpha=1}^{t}$ of $M$ such that the restriction of $\mathcal{E}$ to each $U_{\alpha}$ is trivial, i.e., there exist isomorphisms
\begin{equation*}
\psi_{\alpha}^*: \mathcal{E}|_{U_{\alpha}} \overset{\simeq}{\longrightarrow} \mathcal{O}|_{U_{\alpha}} \otimes _{\mathbb{R}} \mathbb{R}^{k|l}.
\end{equation*}
A Gauss supermap over $\mathcal{E}$ is a homomorphism from $\mathcal{E}$ to the trivial super vector bundle over $(M, \mathcal{O})$ so that its kernel is trivial.
\\
Let $\{e_{1}, \cdots, e_{k}, f_{1}, \cdots, f_{l}\}$ be a basis for $\mathbb{R}^{k|l}$ so that $\{e_{i}\}$ and $\{f_{j}\}$ are respectively bases for $\mathbb{R}^{k}$ and $\pi(\mathbb{R}^{l})$, then $B:=\{1 \otimes e_{1}, \cdots, 1 \otimes e_{k}, 1 \otimes f_{1}, \cdots, 1 \otimes f_{l}\}$ is a generator for the $\mathcal{O}(U_{\alpha})$-module, $\mathcal{O}(U_{\alpha}) \otimes_{\mathbb{R}} \mathbb{R}^{k|l}$.
\\
Set
\begin{equation}\label{salpha}
s_{i}^{\alpha}:= \psi_{\alpha}^{*^{-1}} (1 \otimes e_{i}), \qquad t_{j}^{\alpha}:= \psi_{\alpha}^{*^{-1}} (1 \otimes f_{j}).
\end{equation}
So $(\psi_{\alpha}^{*})^{-1} (B)$ is a generator for $\mathcal{E}(U_{\alpha})$ as an $\mathcal{O}(U_{\alpha})$-module.
\\
Choose a partition of unity $\{\rho_{\alpha}\}_{\alpha=1}^{t}$ subordinate to the covering $\{U_{\alpha}\}_{\alpha=1}^{t}$. Considering $s$ as a global section of $\mathcal{E}(M)$, we can write
\begin{equation*}
s=\sum_{\alpha=1}^{t} \rho_{\alpha}r_{\alpha}(s)
\end{equation*}
where $r_{\alpha}$ is the restriction morphism. In addition, one has
\begin{equation*}
r_{\alpha}(s)=\sum_{i=1}^{k} \lambda_{i}^{\alpha}s_{i}^{\alpha} + \sum_{j=1}^{l} \delta_{j}^{\alpha}t_{j}^{\alpha}, \qquad \lambda_{i}^{\alpha}, \delta_{j}^{\alpha} \in \mathcal{O}(U_{\alpha}).
\end{equation*}
By the last two equalities, we have
\begin{equation*}
s=\sum_{\alpha=1}^{t} \rho_{\alpha} \Big( \sum_{i=1}^{k} \lambda_{i}^{\alpha}s_{i}^{\alpha} + \sum_{j=1}^{l} \delta_{j}^{\alpha}t_{j}^{\alpha}\Big) = \sum_{\alpha=1}^{t}\sum_{i=1}^{k} \sqrt{\rho_{\alpha}}\lambda_{i}^{\alpha}.
\sqrt{\rho_{\alpha}}s_{i}^{\alpha}+ \sum_{\alpha=1}^{t}\sum_{j=1}^{l} \sqrt{\rho_{\alpha}}\delta_{j}^{\alpha}.\sqrt{\rho_{\alpha}}t_{j}^{\alpha},
\end{equation*}
where $\sqrt{\rho_{\alpha}}s_{i}^{\alpha}$ and $\sqrt{\rho_{\alpha}}t_{j}^{\alpha}$ are even and odd sections of $\mathcal{E}(M)$ respectively, and $\sqrt{\rho_{\alpha}}\lambda_{i}^{\alpha}$ and $\sqrt{\rho_{\alpha}}\delta_{j}^{\alpha}$
are sections of $\mathcal{O}(M)$. So $A:= \{\sqrt{\rho_{\alpha}}s_{i}^{\alpha}\}_{\alpha, i} \cup \{\sqrt{\rho_{\alpha}}t_{j}^{\alpha}\}_{\alpha, j}$ is a generating set of $\mathcal{E}(M)$.
\\
Now, for each $\alpha$, consider the following monomorphism between $\mathcal{O}(U_{\alpha})$-modules:
\begin{equation*}
i_{\alpha}: \mathcal{O}(U_{\alpha}) \otimes_{\mathbb{R}} \mathbb{R}^{k|l} \longrightarrow \mathcal{O}(U_{\alpha}) \otimes_{\mathbb{R}} \mathbb{R}^{tk|tl}
\end{equation*}
\begin{equation*}
1 \otimes e_{i} \longmapsto 1 \otimes e_{(\alpha-1)k+i}
\end{equation*}
\begin{equation*}
1 \otimes f_{j} \longmapsto 1 \otimes f_{(\alpha-1)l+j}.
\end{equation*}
Set
\begin{equation}
g(s):= \sum_{\alpha=1}^{t} \rho_{\alpha}. i_{\alpha} \circ \psi_{\alpha}^* \circ r_{\alpha}(s).
\end{equation}
It is easy to see that $g$ is a Gauss supermap of $\mathcal{E}(M)$.
\subsection{Gauss supermatrix}
Now, we are going to obtain the matrix of the Gauss supermap $g$.
\\
By a Gauss supermatrix associated to the super vector bundle $\mathcal{E}$, we mean a supermatrix, say $G$, which is obtained as follows with respect to the generating set $A$:
\begin{equation*}
g\big(\sqrt{\rho_{\beta}}s_{j}^{\beta}\big) = \sum_{\alpha=1}^{t} \rho_{\alpha}. i_{\alpha} \circ \psi_{\alpha}^* \circ r_{\alpha}\big( \sqrt{\rho_{\beta}}s_{j}^{\beta}\big)
\end{equation*}
where $g$ is a Gauss supermap over $\mathcal{E}$.
\\
By \eqref{salpha}, we have
\begin{align}\label{grho}
g\big(\sqrt{\rho_{\beta}}s_{j}^{\beta}\big) &= \sum_{\alpha=1}^{t} \rho_{\alpha}. i_{\alpha} \circ \psi_{\alpha}^* \circ \psi_{\beta}^{*^{-1}}\big( \sqrt{\rho_{\beta}}e_{j}\big) \nonumber
\\
&=\sum_{i=1}^{k}\sum_{\alpha=1}^{t} \rho_{\alpha} \sqrt{\rho_{\beta}} a_{ij}^{\alpha\beta}e_{(\alpha-1)k+i}+\sum_{i=1}^{l}\sum_{\alpha=1}^{t}\rho_{\alpha} \sqrt{\rho_{\beta}}a_{(k+i)j}^{\alpha\beta}f_{(\alpha-1)l+i}
\end{align}
where $[a_{ij}^{\alpha\beta}]$ is the matrix of $\psi_{\alpha}^* \circ \psi_{\beta}^{*^{-1}}$ relative to the basis $B$. The natural ordering on $\{i_{\alpha}e_{i}\}_{\alpha, i}$ and $\{i_{\alpha}f_{s}\}_{\alpha, s}$ induces an ordering on their coefficients in \eqref{grho}. Let $G$ be a $(tk|tl) \times (tk|tl)$ standard supermatrix. Fill the even and odd top blocks of $G$ with these coefficients according to their parity, from left to right, along the $\big((\beta-1)k+j\big)$-th row, $1 \leq j \leq k$, $1 \leq \beta \leq t$. Similarly, by the coefficients in the decomposition of $g\big(\sqrt{\rho_{\beta}}t_{r}^{\beta}\big)$, one may fill the odd and even bottom blocks of $G$ along the $\big((\beta -1)k+r\big)$-th row, $1 \leq r \leq l$, $1 \leq \beta \leq t$.
\begin{example}
For $k|l=2|1$ and a suitable covering with two elements, i.e., $t=2$, we have
\begin{equation*}
g\big(\sqrt{\rho_{2}}s_{2}^{2}\big)= \rho_{1} \sqrt{\rho_{2}} a_{12}^{12}e_{1} +\rho_{1} \sqrt{\rho_{2}} a_{22}^{12}e_{2}+\rho_{2} \sqrt{\rho_{2}} a_{12}^{22}e_{3}+\rho_{2} \sqrt{\rho_{2}} a_{22}^{22}e_{4}+\rho_{1} \sqrt{\rho_{2}} a_{32}^{12}f_{1}+\rho_{2} \sqrt{\rho_{2}} a_{32}^{22}f_{2}
\end{equation*}
Then the $4$-th row of the associated supermatrix $G$ is as below:
$$\left[ \! \! \! \!
\begin{tabular}{ccccccc}
& & & \vdots
& ; & &
\\
$\rho_{1} \sqrt{\rho_{2}} a_{12}^{12}$ & $\rho_{1} \sqrt{\rho_{2}} a_{22}^{12}$ & $\rho_{2} \sqrt{\rho_{2}} a_{12}^{22}$ & $\rho_{2} \sqrt{\rho_{2}} a_{22}^{22}$ & ; &$\rho_{1} \sqrt{\rho_{2}} a_{32}^{12}$ & $\rho_{2} \sqrt{\rho_{2}} a_{32}^{22}$\\
-- -- -- -- -- -- & -- -- -- -- -- -- & -- -- -- -- -- -- & -- -- -- -- -- -- & -- -- -- -- -- -- &-- -- -- -- -- -- & -- -- -- -- -- -- \\
& & & \vdots
& ; & &
\\
\end{tabular}
\! \! \! \! \right].$$
\end{example}
On the other hand, one may consider a covering $\{U_{\alpha}\}_{\alpha}$ so that for each $\alpha$, we have an isomorphism
\begin{equation}\label{isomorphism}
\mathcal{O}(U_{\alpha}) \overset{\simeq}{\longrightarrow} \mathbf{C}^{\infty}(\mathbb{R}^{m}) \otimes _{\mathbb{R}} \wedge \mathbb{R}^{n}
\end{equation}
Let $\nu$ be an odd involution on $\mathbf{C}^{\infty}\big(\mathbb{R}^{(k^{2}+l^{2})(t-1)}\big) \otimes _{\mathbb{R}} \wedge \mathbb{R}^{2kl(t-1)}$ preserving $\mathbf{C}^{\infty}(\mathbb{R}^{m}) \otimes _{\mathbb{R}} \wedge \mathbb{R}^{n}$ as a subalgebra. Thus, it induces an odd involution on $\mathcal{O}(U_{\alpha})$ through the isomorphism \eqref{isomorphism} which is denoted by the same notation $\nu$.
\begin{theorem}
Let $\cal{E}$ be a super vector bundle over a supermanifold $(M, \cal{O})$ and let $G$ be a Gauss supermatrix associated to $\cal{E}$. Then the Gauss supermatrix induces a homomorphism from $\mathcal{G}$, the structure sheaf of $_{\nu}Gr$, to $\mathcal{O}$.
\end{theorem}
\textit{Proof}.
Let $h$ be a global section of $\mathcal{G}$, i.e., an element of $\mathcal{G}(Gr_{k}^{tk} \times Gr_{l}^{tl})$, and let $\{\rho_{I}^{\prime}\}$ be a partition of unity subordinate to the covering $\{V_{I}\}_{I \subseteq \{1, \cdots, tk\}}$; then one has
\begin{equation}
h= \sum_{\substack{I \subseteq \{1, \cdots, tk\},\\ |I|=k+l}} \rho_{I}^{\prime}.h|_{V_{I}}
\end{equation}
Consider the rows of $G$ with indices in $I$ as a $(k+l) \times t(k+l)$ supermatrix and name it $G(I)$. Multiplying it by $id_{I}$ from the left, i.e., forming $id_{I}. G(I)$, and deleting the columns with indices in $I$, we get
$$\left[ \! \! \! \!
\begin{tabular}{ccccccc}
$y_{11}^{I}$ $\qquad$ & $\cdots$ $\qquad$ &$y_{1\big((t-1)(k+l)\big)}^{I}$\\
$\vdots$ $\qquad$ & $\ddots$ $\qquad$ &$\vdots$ \\
$y_{(k+l)1}^{I}$ $\qquad$ & $\cdots$ $\qquad$ &$y_{(k+l)\big((t-1)(k+l)\big)}^{I}$
\end{tabular}
\! \! \! \! \right].$$
Note that all entries of this supermatrix are sections of $\mathcal{O}(M)$.
\\
Let $A^{I}$ be the matrix introduced in subsection 2.2 and let $x_{ij}^{I}$ denote its entries outside $M_{I}(A^{I})=id_{I}$. Then the correspondence $x_{ij}^{I} \longmapsto y_{ij}^{I}$ defines a homomorphism
\begin{equation*}
\varphi_{I}^{*}: \mathcal{O}_{I}(V_{I}) \longrightarrow \mathcal{O}(M).
\end{equation*}
Now, for each global section $h$ of $\mathcal{G}$, one may define
\begin{equation*}
\widetilde{h} = \sum_{\substack{I \subseteq \{1, \cdots, tk\},\\ |I|=k+l}} \varphi_{I}^{*}( \rho_{I}^{\prime}.h|_{V_{I}})
\end{equation*}
Then the correspondence
\begin{equation}\label{sigma}
\sigma^*: h \longmapsto \widetilde{h}
\end{equation}
is a well-defined homomorphism from $\mathcal{G}(Gr_{k}^{tk} \times Gr_{l}^{tl})$ to $\mathcal{O}(M)$ and so induces a smooth map $\widetilde{\sigma}$, from $M$ to $Gr_{k}^{tk} \times Gr_{l}^{tl}$ \cite{Roshandel}.
\begin{flushright}
$\square$
\end{flushright}
The pair $\sigma=(\widetilde{\sigma},\sigma^*)$ is called the morphism associated with the Gauss supermap $g$.
\section{Pullback of the canonical super vector bundle}
\begin{theorem}
The super vector bundle $\mathcal{E}$ and the pullback of $\Gamma$, the canonical super vector bundle over $_{\nu}Gr$, under $\sigma$ are isomorphic.
\end{theorem}
\textit{Proof}.
First note that one may define a $\mathcal{G}$-module structure on $\mathcal{O}(M)$ by applying $\sigma$ in \eqref{sigma} as follows:
\begin{equation*}
a * b:= \sigma(a).b, \qquad a \in \mathcal{G}(Gr_{k}^{tk} \times Gr_{l}^{tl}), \qquad b \in \mathcal{O}(M).
\end{equation*}
The pullback of $\Gamma$ along $\sigma$ is defined as $\mathcal{O} \otimes _{\mathcal{G}}^{\sigma} \Gamma$.
\\
We show that this module is isomorphic to $\mathcal{E}$. Let $s^{\prime}$ be a global section of $\Gamma$. One has
\begin{equation*}
s^{\prime}= \sum_{\substack{I \subseteq \{1, \cdots, tk\},\\ |I|=k+l}} \rho_{I}^{\prime}. r_{I}^{\prime}(s^{\prime})
\end{equation*}
where $\{\rho_{I}^{\prime}\}$ is the partition of unity of $_{\nu}Gr$ subordinate to the open cover $\{V_{I}\}$, and $r_{I}^{\prime}$ is the restriction morphism giving sections over $V_{I}$. On the other hand, one may write each section $r_{I}^{\prime}(s^{\prime})$ as below:
\begin{equation*}
r_{I}^{\prime}(s^{\prime}) = \sum_{j=1}^{k+l} h_{j}^{I} s_{j}^{\prime^{I}}
\end{equation*}
where $s_{j}^{\prime^{I}}$ are generators of $\Gamma(V_{I})$ and the coefficients $h_{j}^{I}$ are the sections of $\mathcal{O}_{I}$. Therefore, we can write
\begin{equation*}
s^{\prime}= \sum_{\substack{I \subseteq \{1, \cdots, tk\},\\ |I|=k+l}} \sum_{j=1}^{k+l} (\rho_{I}^{\prime}h_{j}^{I})s_{j}^{\prime^{I}}
\end{equation*}
Note that each row of $G$ is in correspondence with a section in the generator set $A$. So there is a morphism from the pullback of $\Gamma$ to $\mathcal{E}$ as
\begin{align}\label{sigmatensor}
&\mathcal{O} \otimes_{\mathcal{G}}^{\sigma} \Gamma \longrightarrow \mathcal{E} \nonumber
\\
&u \otimes s^{\prime} \longmapsto u.\delta(s^{\prime})
\end{align}
where $\delta(s^{\prime})$ is
\begin{equation*}
\sum_{\substack{I \subseteq \{1, \cdots, tk\},\\ |I|=k+l}} \sum_{\substack{j=1}}^{k+l} \sigma(\rho_{I}^{\prime}h_{j}^{I}). s_{j}^{I}
\end{equation*}
and $s_{j}^{I}$ is the section corresponding to the $j$-th row of $G(I)$ (cf. subsection 4.1).
\\
One may show that the morphism in \eqref{sigmatensor} is an isomorphism. To this end, first note that every local isomorphism between two sheaves of $\mathcal{O}$-modules of the same rank is a global isomorphism. Also, for the super vector bundle $\Gamma$ of rank $k|l$ over $\mathcal{G}$, one can write a local isomorphism
\begin{equation*}
\mathcal{O} \otimes_{\mathcal{G}}^{\sigma} \Gamma \overset{\simeq}{\longrightarrow} \mathcal{O} \otimes_{\mathbb{R}}\mathbb{R}^{k|l},
\end{equation*}
because for each sufficiently small open set $V$ in $Gr_{k}^{tk} \times Gr_{l}^{tl}$ one can write
\begin{equation*}
\Gamma(V) \simeq \mathcal{G}(V) \otimes_{\mathbb{R}}\mathbb{R}^{k|l}
\end{equation*}
and then
\begin{equation*}
\mathcal{O} \big(\widetilde{\sigma}^{-1}(V)\big) \otimes_{\mathcal{G}}^{\sigma} \Gamma(V) \simeq \mathcal{O}\big(\widetilde{\sigma}^{-1}(V)\big) \otimes_{\mathcal{G}}^{\sigma} \mathcal{G}(V) \otimes_{\mathbb{R}} \mathbb{R}^{k|l}
\end{equation*}
This shows that the morphism in \eqref{sigmatensor} may be represented locally by the following isomorphism:
\begin{equation}
\mathcal{O} \otimes \mathcal{G}\otimes \mathbb{R}^{k|l} \to \mathcal{O}\otimes \mathbb{R}^{k|l}.
\end{equation}
Thus \eqref{sigmatensor} defines a global isomorphism.
\begin{flushright}
$\square$
\end{flushright}
\section{Homotopy properties of Gauss supermaps and their associated morphisms}
Let $\mathcal{O}_{M}^{m|n}$ and $\mathcal{O}_{M}^{m^{\prime}|n^{\prime}}$ be two trivial super vector bundles where $m^{\prime}=2m-k$, $n^{\prime}=2n-l$; then, one can write the inclusion homomorphisms
\begin{equation*}
J^{e}, J^{o}, J: \mathcal{O}(M) \otimes_{\mathbb{R}} \mathbb{R}^{m|n} \longrightarrow \mathcal{O}(M) \otimes_{\mathbb{R}} \mathbb{R}^{2m|2n}
\end{equation*}
by the conditions
\begin{align*}
J^{e}: &1 \otimes e_{i} \longmapsto 1 \otimes e_{2i} &J^{o}: 1 \otimes e_{i} \longmapsto 1 \otimes e_{2i-1} &\qquad \quad J: 1 \otimes e_{i} \longmapsto 1 \otimes e_{i} \\
&1 \otimes f_{j} \longmapsto 1 \otimes f_{2j} & 1 \otimes f_{j} \longmapsto 1 \otimes f_{2j-1} &\qquad \qquad \quad 1 \otimes f_{j} \longmapsto 1 \otimes f_{j}
\end{align*}
Now, let $(V_I, \mathcal{O}_I)$ be the $\nu$-domains introduced in subsection 2.2. In addition, assume that $(W_J, \mathcal{O}_J)$ are $\nu$-domains of dimension $2p|2q$. To each $k|l$ multi-index $I=\{i_{1}, \cdots, i_{k+l}\} \subset \{1, ..., m+n\}$, one can associate the following multi-indices
\begin{equation*}
I^{e}:=\{2i_{1}, \cdots, 2i_{k+l}\},
\end{equation*}
\begin{equation*}
I^{o}:=\{2i_{1}-1, \cdots, 2i_{k+l}-1\},
\end{equation*}
\begin{equation*}
\bar{I}:=\{i_{1}, \cdots, i_{b}, i_{b+1}+m-k, \cdots, i_{k+l}+m-k\},
\end{equation*}
where $i_a \in I$, $1 \leq a \leq k+l$, and $i_b$ is the element of $I$ for which $i_b \leq m \leq i_{b+1}$.
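For instance, applying these definitions literally with $k|l=1|1$ and $m|n=2|2$: for $I=\{2,3\}$ we have $i_{1}=2 \leq m=2 \leq i_{2}=3$, so $b=1$ and
\begin{equation*}
I^{e}=\{4,6\}, \qquad I^{o}=\{3,5\}, \qquad \bar{I}=\{2,\, 3+m-k\}=\{2,4\}.
\end{equation*}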
\\
So the maps $J^{e}$, $J^{o}$ and $J$ induce the homomorphisms
\begin{equation*}
\bar{J}^{e}, \bar{J}^{o}, \bar{J}: _{\nu}Gr(m|n) \longrightarrow _{\nu}Gr(m^{\prime}|n^{\prime}).
\end{equation*}
In fact, $(\bar{J}^{e})^*|_{W_{I^{e}}}$ is obtained by
\begin{align*}
&\mathcal{O}_{I^{e}}(W_{I^{e}}) \longrightarrow \mathcal{O}_{I}(V_{I})
\\
&\left\{
\begin{array}{rl}
y_{i(2j-1)} \longmapsto x_{ij}, \quad i=1, \cdots, k+l, \quad j=1, \cdots, m+n-k-l
\\
\text{other generators} \longmapsto 0 \hspace{6cm}
\end{array}\right.
\end{align*}
\begin{theorem}
Let $f, f_{1}:(M, \mathcal{O}) \longrightarrow _{\nu}Gr(m|n)$ be induced by the Gauss supermaps $g$ and $g_{1}$. Then, $\bar{J}f$ and $\bar{J}f_{1}$, induced by $Jg$ and $Jg_{1}$, are homotopic.
\end{theorem}
\textit{Proof}.
Consider the homomorphisms $J^{e}g$ and $J^{o}g_{1}$, with the induced homomorphisms $\bar{J}^{e}f$ and $\bar{J}^{o}f_{1}$. One can define a family of homomorphisms
\begin{align*}
&F_{t}: \mathcal{E}(M) \longrightarrow \mathcal{O} \otimes_{\mathbb{R}} \mathbb{R}^{2m|2n}
\\
&\varphi \longmapsto (1-t).(J^{e}g)(\varphi)+t.(J^{o}g_{1})(\varphi)
\end{align*}
where $F_{0}=J^{e}g$ and $F_{1}=J^{o}g_{1}$. By Section 4, a family of morphisms $\bar{F}_{t}$ from $(M,\mathcal{O})$ to $_{\nu}Gr(m^{\prime}|n^{\prime})$ is induced. Obviously $\bar{F}_{0}=\bar{J}^{e}f$ and $\bar{F}_{1}=\bar{J}^{o}f_{1}$; thus $\bar{F}_{t}$ is a homotopy from $\bar{J}f$ to $\bar{J}f_1$.
\section{Introduction}
\label{sec:intro}
Cloud radio access network (C-RAN) \cite{CRANwhitepaper}, which centralizes the baseband functions at the baseband units (BBUs), can efficiently reduce the complexity of the remote radio units (RRUs), and thus the operation and deployment costs. Centralized baseband processing also enables efficient cooperative signal processing to increase the network capacity. In C-RAN, the fronthaul network transports the baseband signals between the BBUs and the RRUs. However, for fully centralized C-RAN, i.e., when all baseband functions are centralized at the BBUs, the fronthaul rate requirement is high, which poses a major design challenge for C-RAN. For example, in a single 20MHz LTE antenna-carrier system, a 1Gbps fronthaul rate is required with the standard CPRI interface \cite{CPRI}. To support massive MIMO and other emerging technologies, the required fronthaul rate becomes prohibitively high.
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{functionalSplit.pdf}
\caption{Illustration of baseband functions with multiple candidate split modes.}
\label{fig:functionalsplit}
\end{figure}
Different from fully centralized C-RAN, by placing some baseband and network functions at RRUs, functional split is a promising technique to reduce the fronthaul rate requirement \cite{SplittingBS, impact}. There are multiple candidate functional split modes corresponding to different split points on the chain of baseband functions, as illustrated in Fig. \ref{fig:functionalsplit}. For each mode, the functions placed at the right side of the corresponding vertical dashed line are placed at the RRU, while the others are centralized at the BBU. The fronthaul rate requirement and processing complexity requirement at the RRUs vary, under different functional split modes. In general, with more baseband functions at the RRUs, the required fronthaul rate is lower, but the processing complexity is higher \cite{LTEmodel, centralize}, which also means more energy consumption at the RRUs. With certain functional split modes, for example, split between the physical layer and the MAC layer, the required fronthaul rate depends on the traffic load, and thus exploiting the fronthaul statistical multiplexing gain can further reduce the fronthaul rate requirement \cite{redesign, multiplexing}. With the development of software defined network (SDN) and network function virtualization (NFV), baseband functions can be virtualized and implemented on the general purpose computation platforms \cite{SDN, SDRandFS}. As a result, the functions placed at the RRUs and the BBUs can be reconfigured according to the network state \cite{Flex5G, Flex5GSurvey}.
By harvesting renewable energy from the environment, the RRUs are able to consume less or no energy from the power grid \cite{EHNodes, grid, greendelivery}. Another benefit is that the RRUs can be flexibly deployed at the places where the grid does not reach.
However, reliable communication is challenging due to the randomness of renewable energy arrivals and the limitation of batteries, and thus the operation of RRUs should be well managed \cite{EHSurvey}. In terms of power control, different from conventional ``water-filling", the throughput-optimal ``directional water-filling" power control policy is found in a fading energy harvesting channel \cite{waterfilling}, where the ``water", i.e., the energy, can only flow from the past to the future. If the processing energy consumption is considered, the throughput-optimal transmission policy becomes bursty; a ``glue pouring" power control policy is proved to be optimal when there is only one energy arrival and no transmission deadline \cite{glue}. The bursty transmission is due to the fact that more processing energy is consumed with a longer transmission time.
For energy harvesting system with processing cost and multiple energy arrivals, a ``directional backward glue-pouring" algorithm is proposed in \cite{procost}.
{There are some recent works on the flexible functional split mode selection in energy harvesting C-RAN systems. The grid power consumption and system outage rate are jointly studied by optimizing the offline placement of baseband functions, where the small base station is powered by renewable energy and the macro base station is powered by the grid \cite{split1}. Reinforcement learning based online placement of functional split options is studied in \cite{split2} for efficient utilization of the harvested energy, where the small cell is powered by renewable power with flexible functional split modes.
To improve energy efficiency and throughput, RRU active/sleep mode and functional split mode selection in the energy harvesting C-RAN are determined according to the renewable energy levels and the number of users in the covering area of the RRU \cite{sleepfs}.
However, to the best of our knowledge, the joint optimization of power control and flexible functional split mode selection has not been considered yet.
}
If the functional split mode is fixed in the energy harvesting communication system, the processing power is a constant, and thus the ``directional backward glue-pouring" algorithm \cite{procost} can be used to find the optimal power control policy. However, it is expensive and sometimes difficult to deploy fibers between the RRUs and the BBUs, and thus wireless fronthaul may be used as a low cost solution \cite{wirelessBH}. In particular, RRUs powered by energy harvesting in general have no wired connection, neither for power supply nor for fronthaul. In this case, flexible functional split is necessary, due not only to the fronthauling overhead brought by the wireless fronthaul, but also to the unstable renewable energy supply. There is then more than one candidate functional split mode, with different processing costs, and thus existing schemes like ``directional backward glue-pouring" no longer apply. Functional split can trade off between the baseband processing complexity of RRUs and the fronthaul data rate requirement. In general, with more baseband functions at the RRU, the baseband processing complexity is higher, but the required fronthaul data rate is lower; conversely, with fewer baseband functions at the RRU, the baseband processing power is lower, but the required fronthaul data rate is higher. This calls for new mechanisms that can determine the optimal functional split with the joint consideration of fronthaul properties and renewable energy arrivals.
In this paper, we study the selection of the functional split modes and power control policy for an energy harvesting RRU in C-RAN. We first consider the offline case, where the energy arrivals and the channel fading are non-causally known in advance. The functional split is jointly determined with the corresponding user data transmission duration and transmission power,
and the objective is to maximize the throughput, while satisfying the energy and the average fronthaul rate constraints. For the optimal offline policy, we find that in each interval between successive energy arrivals, at most two modes are selected, and the transmission powers of the selected modes are the same within each channel fading block. We further analyze the scenario with only one instance of energy arrival and two alternative functional split modes, and derive the closed-form expressions of the transmission power and transmission duration for each split mode, given the average fronthaul rate constraint. Based on the analysis, we propose a heuristic online policy, where the future energy arrivals and the channel fading are unknown in advance. Numerical results show that the heuristic online policy has similar performance to the optimal online policy developed by solving the Markov decision process (MDP) formulation.
The main contributions of this paper are summarized as follows.
\begin{itemize}
\item We jointly optimize the functional split mode selection and power control for an RRU powered with renewable energy, to maximize the throughput under the average fronthaul rate constraint and random energy arrival.
\item For the offline problem where the energy arrivals and the channel fading are non-causally known, the throughput maximization problem is formulated and analyzed. We find the structure of the optimal solution that at most two functional split modes are selected between two successive energy arrivals. The online problem where the channel fading are causally known, is solved by its corresponding MDP formulation through value iteration.
\item To deal with the curse of dimensionality in solving an MDP, the closed-form expression of the transmission power and transmission duration in the special case with one energy arrival is derived, based on which a low-complexity heuristic online policy is proposed, and is shown to have near-optimal performance via extensive simulations.
\end{itemize}
The paper is organized as follows. The system model is described in Section \ref{sec:sysmodel}. The offline optimization problem is formulated and analyzed in Section \ref{sec:maximize}, and the online problem is introduced and solved by an MDP formulation in Section \ref{sec:mdp}. The expression of optimal power control policy with one energy arrival, two functional split modes is derived in Section \ref{sec:single}. A heuristic online policy is proposed in Section \ref{sec:online}. The numerical results are presented in Section \ref{sec:num}. The paper is concluded in Section \ref{sec:conclusion}.
\section{System Model} \label{sec:sysmodel}
{
Consider a two-tier network, where a macro base station (MBS) covers a large area, while an RRU has small coverage areas within the coverage area of
the MBS. The MBS has stable power supply, while the RRU is powered by renewable energy.
The RRU transmits as much data as possible to the users with the harvested energy, while the remaining data is transmitted via the MBS.
We thus aim to maximize the throughput of the RRU to reduce the traffic load of the MBS.
We consider the downlink transmission from a particular RRU to its users, as described in Fig. \ref{fig:mdpmodel}(a).
Assume that the BBU has sufficient data to transmit to users.
}
The system is slotted with normalized slot length.
Assume that the wireless channel of the users is block fading, where the channel gain varies every block but remains constant within one block. Each block has $L$ slots. For each slot, the RRU serves the user with the best channel state, i.e., the user with the largest channel gain.
We assume that the energy arrives over a larger time granularity than that of the wireless channel fading \cite{Huang14, Gong18}. The energy arrival rate stays constant over $N$ blocks, which are referred to as an epoch. We assume that the energy only arrives at the beginning of each epoch. This approximation is adopted to analyze the effect of the different time scales of energy arrival and channel fading on the power control policy.
As illustrated in Fig. \ref{fig:mdpmodel}(b),
$E_m$ units of energy arrives at the beginning of the $m$-th epoch. The arrived energy is stored in a battery with capacity $B_{\text{max}}$ before it is used. Without loss of generality, we assume that $E_m \leq B_{\text{max}}$, i.e., the amount of arrived energy is at most $B_{\text{max}}$. There is no initial energy in the battery, i.e., the battery is empty before the first epoch.
For the $n$-th block of epoch $m$, the maximum channel gain of the users is denoted as $\gamma_{m,n}$, which corresponds to the modulation and coding scheme (MCS) with the highest transmission rate that the channel gain can support. Note that $\gamma_{m,n}$ is measured when the reference transmit power is 1W.
{
For the scenario with multiple carriers, we assume that all carriers are used for transmission at the same time and have the same transmission power.
If there are $C$ carriers, the channel gain of carrier $c$ in the $n$-th block of epoch $m$ is denoted by $\gamma_{m,n,c}$. The spectrum efficiency is
\begin{align}
\frac{1}{C}\sum_{c=1}^{C}{\log\left(1+\gamma_{m,n,c}p\right)} & = \frac{1}{C}{\log\left(\prod_{c=1}^{C}(1+\gamma_{m,n,c}p)\right)} \\
& ={\log\left(\sqrt[C]{\prod_{c=1}^{C}(1+\gamma_{m,n,c}p)}\right)}
\approx \log\left(1+\sqrt[C]{\prod_{c=1}^{C}\gamma_{m,n,c}}p\right).
\end{align}
where $p$ is the transmission power.
In the optimal power control policy, blocks with good channel states are used for transmission, and the transmission power should be large enough due to the baseband processing power. The values of $\gamma_{m,n,c}p$ are therefore large, and thus the approximation is accurate. We now obtain the approximated channel gain in each block, i.e., $\gamma_{m,n} = \sqrt[C]{\prod_{c=1}^{C}\gamma_{m,n,c}}$, and the problem with multiple carriers can be approximated by a single-carrier one. We thus focus on the single-carrier scenario in the remainder of this paper.
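As a rough numerical illustration of the accuracy of this approximation, take $C=2$ with $\gamma_{m,n,1}p=10$ and $\gamma_{m,n,2}p=40$: the exact spectrum efficiency is $\log\sqrt{(1+10)(1+40)}=\log\sqrt{451}\approx\log 21.2$, while the approximation gives $\log\big(1+\sqrt{10\cdot 40}\big)=\log 21$, so the arguments of the logarithm differ by only about $1\%$.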
}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{mdpmodel.pdf}
\caption{Illustration of C-RAN with renewable energy powered RRU. (a) C-RAN system with renewable energy powered RRU. (b) Energy arrival and channel fading with different time scale.}
\label{fig:mdpmodel}
\end{figure}
The RRU can be configured with $X$ candidate functional split modes.
In each block, one or more functional split modes can be selected, but at most one functional split mode can be selected at any slot.
In the $n$-th block of epoch $m$, the number of slots that functional split mode $x$ is selected is denoted by $\theta_{m, n, x}$. Note that $\theta_{m,n,x}=0$ means that mode $x$ is not selected in the $n$-th block of epoch $m$.
During one block, the total number of slots used for transmission of the $X$ modes should satisfy
$\sum_{x=1}^{X}\theta_{m,n,x} \leq L$,
where $L$ is number of slots in each block.
The transmission power with mode $x$ in each block should be constant, denoted by $p_{m,n,x}$. The maximum transmission power is $P_{\text{max}}$, i.e., $0 \leq p_{m,n,x} \leq P_{\text{max}}$.
{
The processing power of mode $x$ is denoted by $\varepsilon_x$, and the fronthaul rate requirement is denoted by $R_x$. The processing power $\varepsilon_x$ and fronthaul rate requirement $R_x$ are related to the MCS \cite{centralize}, and thus related to the transmission power, which makes the problem difficult to analyze. To simplify the problem, we assume that for each functional split mode $x$, the processing power $\varepsilon_x$ and fronthaul rate requirement $R_x$ are constant, which correspond to the MCS with the maximum transmission power $P_{\text{max}}$.
}
To limit the fronthaul overhead, the average fronthaul rate is constrained to be no more than a given threshold $D$.
As the downlink scenario is considered, the energy consumption of the fronthaul happens at the BBU.
The RRU only consumes energy when it is transmitting data to the users.
In this case, $\theta_{m,n,x}\log (1+ \gamma_{m,n} p_{m,n,x})$ bits of data are transmitted to the users with energy consumption $\theta_{m,n,x}(p_{m,n,x}+\varepsilon_x)$ in the $n$-th block of epoch $m$ with mode $x$.
{
For scenarios with multiple RRUs, if RRUs are self-powered and there is no cooperative transmission, the functional split selection and power control can be done separately at each RRU while treating the signals of other RRUs on the same frequency as noise. As for the scenario with cooperative transmission, we need to further optimize the precoding, and each RRU has its own energy constraints. However, due to the wireless fronthaul implementation and the much more complex fronthaul topology, fronthaul sharing and multiplexing gain should be further considered. Scenarios with cooperative transmission and fronthaul resource management are left as future work.
}
\section{Maximizing the Throughput} \label{sec:maximize}
We consider the offline throughput maximization problem over a finite time of $M$ epochs.
Due to the causality constraints, energy that has not yet arrived cannot be used, and thus we have
\begin{align}
\sum_{m=1}^{\hat{m}}\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}(p_{m,n,x}+\varepsilon_x) \leq \sum_{m=1}^{\hat{m}}E_m, \quad \hat{m}=1, 2, ..., M,
\end{align}
where $\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}(p_{m,n,x}+\varepsilon_x)$ is the energy consumed in epoch $m$.
There may be energy waste due to the limited battery size when the maximum transmit power is limited, which makes the energy constraints difficult to express. We thus ignore the maximum transmit power constraint when establishing the offline throughput maximization problem, and then approximate the transmit power that is larger than $P_{\text{max}}$ as $P_{\text{max}}$ in the optimal solution of the problem.
As the energy in the battery cannot exceed the battery capacity at any time, and the battery holds the most energy at the beginning of each epoch, right after the energy arrival, there should be
\begin{align}
\sum_{m=1}^{\hat{m}+1}E_m-\sum_{m=1}^{\hat{m}}\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}(p_{m,n,x}+ \varepsilon_x) \leq B_{\text{max}}, \quad \hat{m}=1, 2, ..., M-1.
\end{align}
Denoting by $\alpha_{m,n,x}=\theta_{m,n,x}p_{m,n,x}$ the energy consumed by radio transmission in the $n$-th block of epoch $m$ with mode $x$, the optimization problem can be formulated as
\begin{align}
\max_{\theta_{m, n, x}, \alpha_{m,n,x}} \quad & \sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m, n, x} \log(1+\gamma_{m,n} \frac{\alpha_{m,n,x}}{\theta_{m,n,x}}) \label{eq:obj}\\
\text{s.t.} \quad &\frac{1}{MNL}\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m, n,x}R_x \leq D, \label{eq:FHConstraint}\\
&\sum_{m=1}^{\hat{m}}\sum_{n=1}^{N}\sum_{x=1}^{X}(\alpha_{m,n,x}+\varepsilon_x\theta_{m,n,x}) \leq \sum_{m=1}^{\hat{m}}E_m, \quad 1 \leq \hat{m} \leq M\\
&\sum_{m=1}^{\hat{m}+1}E_m-\sum_{m=1}^{\hat{m}}\sum_{n=1}^{N}\sum_{x=1}^{X}(\alpha_{m,n,x}+\varepsilon_x\theta_{m,n,x}) \leq B_{\text{max}}, \quad 1 \leq \hat{m} \leq M-1\\
&\sum_{x=1}^{X}\theta_{m, n,x} \leq L, \quad \forall m,n \label{eq:blockLength}\\
&\alpha_{m,n,x} \geq 0, \theta_{m,n,x} \geq 0, \quad \forall m, n, x
\end{align}
where Eq. (\ref{eq:FHConstraint}) is the constraint of the average fronthaul rate, and Eq. (\ref{eq:blockLength}) is the constraint of the block length.
Note that the functional split mode is included in the optimization of $\alpha_{m,n,x}$, i.e., $\alpha_{m,n,x}>0$ means that mode $x$ is selected in the $n$-th block of epoch $m$, otherwise mode $x$ is not selected.
Note that we first treat the number of slots $\theta_{m,n,x}$ as a continuous variable, which greatly reduces the complexity of solving the optimization problem and allows some intuitive results; when $L$ is large, rounding $\theta_{m,n,x}$ to an integer afterwards has only a small effect on the throughput.
As the optimization objective in Eq. (\ref{eq:obj}) is concave (each term $\theta_{m,n,x}\log(1+\gamma_{m,n}\alpha_{m,n,x}/\theta_{m,n,x})$ is the perspective of a concave function) and the constraints are linear, this is a convex problem. With the Lagrangian multiplier method, we are able to get the following structure of the optimal solution.
\begin{prop} \label{prop:1}
{
In the $n$-th block of epoch $m$, during which the channel gain stays constant, the optimal transmission powers $p_{m, n, x}$ of the selected modes are the same for every selected mode $x$ in the optimal solution.
}
\end{prop}
\begin{proof}
The Lagrangian with $\phi \geq 0$, $\mu_{\hat{m}} \geq 0$, $\nu_{\hat{m}} \geq 0$, $\tau_{m,n}\geq0$, $\eta_{m,n,x} \geq 0$ and $ \xi_{m,n,x} \geq 0$ can be written as
\begin{align}
\mathcal{L}=&\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x} \log(1+\gamma_{m,n} \frac{\alpha_{m,n,x}}{\theta_{m,n,x}}) -\phi\left(\frac{1}{MNL}\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m, n,x}R_x-D \right) \nonumber\\
& -\sum_{\hat{m}=1}^{M} \mu_{\hat{m}} \left[\sum_{m=1}^{\hat{m}}\sum_{n=1}^{N}\sum_{x=1}^{X}(\alpha_{m,n,x}+\varepsilon_x\theta_{m,n,x}) - \sum_{m=1}^{\hat{m}}E_m\right] \nonumber\\
& -\sum_{\hat{m}=1}^{M-1} \nu_{\hat{m}} \left[\sum_{m=1}^{\hat{m}+1}E_m-\sum_{m=1}^{\hat{m}}\sum_{n=1}^{N}\sum_{x=1}^{X}(\alpha_{m,n,x}+\varepsilon_x\theta_{m,n,x}) - B_{\text{max}} \right] \nonumber \\
& -\sum_{m=1}^{M} \sum_{n=1}^{N} \tau_{m,n} \left(\sum_{x=1}^X\theta_{m, n,x} - L\right) \\
& + \sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{x=1}^{X} \eta_{m,n,x} \alpha_{m,n,x} + \sum_{m=1}^{M}\sum_{n=1}^{N} \sum_{x=1}^{X}\xi_{m,n,x}\theta_{m,n,x}
\end{align}
Taking derivatives with respect to $\alpha_{m,n,x}$ and $\theta_{m,n,x}$, we obtain
\begin{align}
\frac{\partial{\mathcal{L}}}{\partial{\alpha_{m,n,x}}}=&\frac{\gamma_{m,n}\theta_{m,n,x}}{\theta_{m,n,x}+\gamma_{m,n}\alpha_{m,n,x}}-\sum_{\hat{m}=m}^{M}\mu_{\hat{m}}+\sum_{\hat{m}=m}^{M-1}\nu_{\hat{m}}
+\eta_{m,n,x}, \label{eq:alpha} \\
\frac{\partial{\mathcal{L}}}{\partial{\theta_{m,n,x}}}=&\log(1+\gamma_{m,n} \frac{\alpha_{m,n,x}}{\theta_{m,n,x}})-\frac{\gamma_{m,n}\alpha_{m,n,x}}{\theta_{m,n,x}+\gamma_{m,n}\alpha_{m,n,x}}-\frac{\phi}{MNL} R_x -\sum_{\hat{m}=m}^{M}\mu_{\hat{m}}\varepsilon_x\nonumber \\
&+\sum_{\hat{m}=m}^{M-1}\nu_{\hat{m}}\varepsilon_x -\tau_{m,n} + \xi_{m,n,x} \label{eq:theta}
\end{align}
If mode $x$ is selected in the $n$-th block of epoch $m$, we have $\alpha_{m,n,x}>0$; then, by the complementary slackness condition
$\eta_{m,n,x} \alpha_{m,n,x}=0$,
we have $\eta_{m,n,x}=0$. According to (\ref{eq:alpha}), letting $\frac{\partial{\mathcal{L}}}{\partial{\alpha_{m,n,x}}}=0$ yields
\begin{equation}
\frac{\gamma_{m,n}\theta_{m,n,x}}{\theta_{m,n,x}+\gamma_{m,n}\alpha_{m,n,x}}=\sum_{\hat{m}=m}^{M}\mu_{\hat{m}}-\sum_{\hat{m}=m}^{M-1}\nu_{\hat{m}},
\end{equation}
i.e., for all $n$ and all selected $x$, the transmit power $p_{m,n,x}$ can be expressed as
\begin{equation} \label{eq:sumeq}
p_{m,n,x}=\frac{1}{\sum_{\hat{m}=m}^{M}\mu_{\hat{m}}-\sum_{\hat{m}=m}^{M-1}\nu_{\hat{m}}} - \frac{1}{\gamma_{m,n}}.
\end{equation}
The values of $p_{m,n,x}$ are the same for any selected mode $x$ in the $n$-th block of epoch $m$.
\end{proof}
Proposition \ref{prop:1} reveals that in one block, the transmission powers with different functional split modes are the same. Furthermore, according to Eq. (\ref{eq:sumeq}), the sum of the transmit power $p_{m,n,x}$ and the reciprocal of the channel gain $\frac{1}{\gamma_{m,n}}$ is the same for any selected mode $x$ and any block $n$ in epoch $m$.
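For instance, if the common level $\big(\sum_{\hat{m}=m}^{M}\mu_{\hat{m}}-\sum_{\hat{m}=m}^{M-1}\nu_{\hat{m}}\big)^{-1}$ in epoch $m$ equals $2$, then two blocks with channel gains $\gamma_{m,1}=1$ and $\gamma_{m,2}=4$ would be served with transmission powers $2-1=1$ and $2-0.25=1.75$, respectively, regardless of which functional split mode is used in each slot.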
\begin{prop} \label{prop:2}
{
In each epoch, i.e., the duration between successive energy arrivals, the optimal functional split mode selection policy satisfies that at most two functional split modes are selected.
}
\end{prop}
\begin{proof}
Denote by $p_{m,n}^*$ the optimal transmission power in the $n$-th block of epoch $m$, and by $\theta_{m,n,x}^*$ the corresponding transmission duration with functional split mode $x$.
The baseband data amount transmitted via fronthaul in epoch $m$ is defined as $F_m^*$, which can be expressed as
$F_m^*=\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}^* R_x$.
The number of slots used for transmission in block $n$ is defined as $\theta_{m,n}^{\text{block}}$, i.e.,
$\theta_{m,n}^{\text{block}}=\sum_{x=1}^{X}\theta_{m,n,x}^*$.
Given $\theta_{m,n}^{\text{block}}$ and $p_{m,n}^*$, the throughput and the energy consumed by radio transmissions are fixed. The transmission duration $\theta_{m,n,x}^*$ should be an optimal solution of the following subproblem:
\begin{align}
\min_{\theta_{m,n,x}} \quad &\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}\epsilon_{x} \nonumber \\
\text{s.t.} \quad &\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}R_{x} = F_{m}^*, \nonumber \\
&\sum_{x=1}^{X}\theta_{m,n,x}=\theta_{m,n}^{\text{block}}, \quad \forall n \nonumber \\
& \theta_{m,n,x} \geq 0,
\end{align}
where the optimization objective $\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}\epsilon_{x}$ is the energy consumed by baseband processing,
which means that with transmission duration $\theta_{m,n,x}^*$, the least energy is consumed by baseband processing, i.e., we aim to minimize the energy consumption while guaranteeing the transmission time and average fronthaul rate constraint.
The number of slots used to transmit in epoch $m$ is defined as $\theta_m^{\text{epoch}}$, where
$\theta_m^{\text{epoch}}=\sum_{n=1}^{N}\theta_{m,n}^{\text{block}}$.
By considering the constraint on the total transmission duration in each epoch, instead of the constraint on the total transmission duration in each block, the subproblem can be relaxed as:
\begin{align}
\min_{\theta_{m,n,x}} \quad &\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}\epsilon_{x} \nonumber \\
\text{s.t.} \quad &\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}R_{x} = F_{m}^*, \nonumber \\
&\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}=\theta_m^{\text{epoch}}, \nonumber \\
& \theta_{m,n,x} \geq 0.
\end{align}
In epoch $m$, the energy consumed by baseband processing and the amount of data transmitted via fronthaul are only related to the total transmission duration of each mode, i.e., $\hat{\theta}_x^{\text{mode}}=\sum_{n=1}^{N}\theta_{m,n,x}$. For any optimal solution of the relaxed subproblem, we can find an equivalent solution of the subproblem, and thus the optimal solution of the subproblem is also the optimal solution of the relaxed subproblem.
The Lagrangian of the relaxed subproblem is
\begin{align}
\mathcal{Z}=&\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}\epsilon_{x}-\rho \left(\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}R_{x} - F_{m}^*\right) \nonumber \\
&-\upsilon \left(\sum_{n=1}^{N}\sum_{x=1}^{X}\theta_{m,n,x}-\theta_{m}^{\text{epoch}}\right)+\sum_{n=1}^{N}\sum_{x=1}^{X}\psi_{n,x} \theta_{m,n,x}
\end{align}
Taking derivatives with respect to $\theta_{m,n,x}$, we have
$\frac{\partial{\mathcal{Z}}}{\partial{\theta_{m,n,x}}}=\epsilon_{x}-\rho R_{x} - \upsilon + \psi_{n,x}$.
If mode $x$ is selected in the $n$-th block of epoch $m$, we have $\theta_{m,n,x}>0$.
According to the complementary slackness condition $\psi_{n,x} \theta_{m,n,x}=0$, we have $\psi_{n,x}=0$.
Letting $\frac{\partial{\mathcal{Z}}}{\partial{\theta_{m,n,x}}}=0$, there should be
$\epsilon_{x}-\rho R_{x} - \upsilon=0$.
Now assume that more than two functional split modes are selected; let $Z$ be the number of selected modes, and let the selected modes be $x_z$ for $1 \leq z \leq Z$. The following equations should then have a solution:
\begin{equation} \label{eq:formulation}
\begin{cases}
\epsilon_{x_1}-\rho R_{x_1} & - \upsilon =0 \\
\epsilon_{x_2}-\rho R_{x_2} & - \upsilon =0 \\
&\vdots\\
\epsilon_{x_Z}-\rho R_{x_Z} & - \upsilon =0 \\
\end{cases}
\end{equation}
Note that the formulation (\ref{eq:formulation}) has a solution only when $Z \leq 2$, or when $R_x$ and $\epsilon_x$ satisfy
\begin{equation}
\frac{\epsilon_{x_2}-\epsilon_{x_1}}{R_{x_1}-R_{x_2}}=\frac{\epsilon_{x_3}-\epsilon_{x_1}}{R_{x_1}-R_{x_3}},
\end{equation}
for any 3 selected modes, which is a trivial scenario that can be ignored, and thus at most two functional split modes can be selected at each epoch.
\end{proof}
{ The solution obtained with continuous transmission duration is denoted by \emph{`upper bound'}. We now introduce how to round the `upper bound' into integer transmission duration.
Slots with good channel states are used for transmission.
The number of slots used for transmission with functional split mode $x$ in block $n$ is denoted by $\hat{\theta}_{n,x}^{\text{block}}$, with the corresponding transmission power $p_{m,n,x}$, the energy used for transmission is $E_x^{\text{T}} = \sum_{n=1}^N \hat{\theta}_{n,x}^{\text{block}} p_{m,n,x}$. The energy used for baseband processing is $E_x^{\text{B}} = \sum_{n=1}^N \hat{\theta}_{n,x}^{\text{block}} \epsilon_x$, where $\epsilon_x$ is the baseband processing power.
The number of slots used for transmission with each selected functional split mode is rounded to an integer, denoted by $\tilde{\theta}_{n,x}^{\text{block}}$.
After rounding, the baseband processing energy becomes $\tilde{E}_x^{\text{B}} = \sum_{n=1}^N \tilde{\theta}_{n,x}^{\text{block}} \epsilon_x$, and the energy available for transmission is $\tilde{E}_x^{\text{T}} = E_x^{\text{B}} + E_x^{\text{T}} - \tilde{E}_x^{\text{B}}$.
The transmission power of each slot after rounding is then calculated according to Proposition \ref{prop:1}, with the constraints of the total transmission energy $\tilde{E}_x^{\text{T}}$.
}
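The power recalculation in the last step can be carried out numerically; the following Python sketch illustrates one possible implementation for a single epoch, assuming the rounded slot counts and the block channel gains are given, and ignoring the maximum transmit power constraint. It searches for the common level $w$ of Proposition \ref{prop:1} such that the transmission powers $p_{n}=\max\{0,\, w-1/\gamma_{m,n}\}$ spend exactly the energy $\tilde{E}_x^{\text{T}}$.
\begin{verbatim}
def reallocate_powers(theta_round, gammas, energy_tx, tol=1e-9):
    """Bisection on the common level w of Proposition 1 so that the rounded
    slot counts theta_round[n] (slots used in block n) with channel gains
    gammas[n] spend exactly energy_tx; returns the per-block powers."""
    def spent(w):
        return sum(t * max(0.0, w - 1.0 / g)
                   for t, g in zip(theta_round, gammas))
    total_slots = max(sum(theta_round), 1e-12)
    lo = 0.0
    hi = max(1.0 / g for g in gammas) + energy_tx / total_slots + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if spent(mid) < energy_tx:
            lo = mid
        else:
            hi = mid
    w = 0.5 * (lo + hi)
    return [max(0.0, w - 1.0 / g) for g in gammas]
\end{verbatim}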
{
According to Proposition \ref{prop:2}, we conclude that at most two functional split modes are selected in one epoch, which means that the functional split mode selection can be determined at the time scale of energy arrival, rather than at the time scale of channel fading.
In this sense, the switching of the functional split mode can be done on a large time scale. When the RRUs and the BBU are built with container technologies, the switching can be implemented by activating and deactivating functions in the RRUs and the BBU; the introduced delay (less than a
millisecond \cite{sleepfs}) and energy overhead can be neglected.
}
\section{Optimal Online Policy} \label{sec:mdp}
For the online policy, only the causal (past and present) energy states and channel states are known at the RRU.
To find the optimal online policy, we formulate the online problem as an MDP.
The channel gain varies at the beginning of each block, and each block has $L$ slots. The beginning of the $(n+1)$-th block is the $(nL+1)$-th slot.
The channel gain is modeled as a Markov chain with $G$ states, and the channel gain of state $g$ is $\Gamma_g$.
The transition probability from state $g_1$ to state $g_2$ at the beginning of the $(n+1)$-th block is denoted as $p_{g_1,g_2}=\text{Pr}\{\gamma_{nL+1}=\Gamma_{g_2}|\gamma_{nL}=\Gamma_{g_1}\}$.
The energy arrives once an epoch. We assume that the energy arrives at the beginning of each epoch. An epoch has $N$ blocks, i.e., $NL$ slots.
The energy arrival is modeled as a finite state Markov chain with $E_{\text{max}}$ states, and the arrived energy amount with state $e$ is $A_e$.
The transition probability from state $e_1$ to state $e_2$ at the beginning of the $(m+1)$-th epoch is $q_{e_1,e_2}=\text{Pr}\{E_{mNL+1}=A_{e_2}|E_{(m-1)NL+1}=A_{e_1}\}$.
The arrived energy is stored in a battery with capacity $B_{\text{max}}$ before it is used.
The transmission power in slot $k$ is denoted as $P_k$. Denoting by $x_k$ the functional split mode selected in slot $k$, the corresponding baseband processing power is $\epsilon_{x_k}$.
The energy is consumed only when the RRU transmits data to the users, i.e., when $P_k>0$, the state of energy in the battery $B_k$ is updated as
\begin{subequations}
\begin{numcases} {B_{k+1}=}
\min\{B_k+E_k-\epsilon_{x_k}-P_k, B_{\text{max}}\}, &$P_k>0$ \nonumber \\
\min\{B_k+E_k, B_{\text{max}}\}, &$P_k=0$ \nonumber
\end{numcases}
\end{subequations}
To simplify the expression, we introduce a new variable, defined as
\begin{subequations}
\begin{numcases} {\delta_k=}
1, &$P_k>0$\\
0, &$P_k=0$
\end{numcases}
\end{subequations}
then the battery state is updated as
\begin{align}
B_{k+1}=\min\{B_k+E_k-\delta_k(\epsilon_{x_k}+P_k), B_{\text{max}}\}.
\end{align}
The system state is
\begin{align}
s_k=(B_k, E_k, Y_k, \gamma_k, n_k, l_k),
\end{align}
where $B_k$ is the state of energy available in the battery, $E_k$ is the energy arrived in stage $k$, $Y_k$ records the energy arrival rate of the current epoch, $\gamma_k$ is the channel gain, $n_k$ indicates how many blocks the current epoch has lasted, $l_k$ indicates how many slots the current block has lasted.
The state transition probability is
\begin{align}
\text{Pr}\{s_{k+1}|s_k,P_k,x_k\}= & \text{Pr}\{B_{k+1}|B_k, P_k, E_k,x_k\}\text{Pr}\{E_{k+1}|Y_k,E_k,n_k,l_k\} \times \nonumber\\
&\text{Pr}\{Y_{k+1}|Y_k,E_k,n_k,l_k\}\text{Pr}\{n_{k+1}|n_k,l_k\}\text{Pr}\{\gamma_{k+1}|\gamma_k,l_k\}\text{Pr}\{l_{k+1}|l_k\}
\end{align}
{
The value of $n_k$ varies at the beginning of each block, and the state transition is described in Fig. \ref{fig:trans}(a). } The transition probability of $n_k$ is expressed as
\begin{subequations}
\begin{numcases} {\text{Pr}\{n_{k+1}|n_k,l_k\}=}
1, &$\text{if} \quad n_{k+1}=\text{mod}(n_k, N)+1, l_k = L$ \nonumber\\
1, &$\text{if} \quad n_{k+1}=n_k, l_k < L$ \nonumber\\
0, &$\text{else}$ \nonumber
\end{numcases}
\end{subequations}
where $\text{mod}(n_k, N)$ is modulus operation which returns the remainder after division of $n_k$ by $N$.
The value of $l_k$ varies at the beginning of each slot. The transition probability of $l_k$ is expressed as
\begin{subequations}
\begin{numcases} {\text{Pr}\{l_{k+1}|l_k\}=}
1, &$\text{if} \quad l_{k+1}=\text{mod}(l_k, L)+1$ \nonumber\\
0, &$\text{else}$ \nonumber
\end{numcases}
\end{subequations}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{flowchart_v3.pdf}
\caption{{Illustration of state transitions: (a) state transition of ($n_k$, $l_k$) when $N$=2 and $L$=2;
(b) state transition of ($\gamma_k$, $l_k$) when G=2 and $L$=2;
(c) state transition of ($E_k$, $Y_k$, $n_k$, $l_k$) when $E_{\text{max}}$=2, $N$=2 and $L$=2.}} \label{fig:trans}
\end{figure}
{
The state transition of channel state is described in Fig. \ref{fig:trans}(b), and
the transition probability of $\gamma_k$ is
}
\begin{subequations}
\begin{numcases} {\text{Pr}\{\gamma_{k+1}|\gamma_k,l_k\}=}
p_{g_1,g_2}, &$\text{if} \quad \gamma_{k+1}=\Gamma_{g_2}, \gamma_k=\Gamma_{g_1}, l_k=L$ \nonumber \\
1, &$\text{if} \quad \gamma_{k+1}=\gamma_k, l_k<L$ \nonumber \\
0, &$\text{else}$ \nonumber
\end{numcases}
\end{subequations}
The transition probability of battery state is
\begin{subequations}
\begin{numcases}{\text{Pr}\{B_{k+1}|B_k, P_k, E_k,x_k\}=}
1, &$\text{if} \quad B_{k+1}=\min\{B_k+E_k-\delta_k(P_k+\epsilon_{x_k}), B_{\text{max}}\}$ \nonumber \\
0, &$\text{else}$ \nonumber
\end{numcases}
\end{subequations}
As energy arrives every $N$ blocks, the energy only arrives at the beginning of each epoch.
{
The state transition of energy arrival state is described in Fig. \ref{fig:trans}(c), and
the transition probability of $E_k$ is expressed as
}
\begin{subequations}
\begin{numcases} {\text{Pr}\{E_{k+1}|Y_k,n_k,l_k\}=}
1, &$\text{if} \quad E_{k+1}=0, n_k<N$ \nonumber \\
1, &$\text{if} \quad E_{k+1}=0, l_k<L$ \nonumber \\
q_{e_1,e_2}, &$\text{if} \quad E_{k+1}=A_{e_2}, Y_k=A_{e_1}, n_k=N, l_k=L$ \nonumber \\
0, &$\text{else}$ \nonumber
\end{numcases}
\end{subequations}
The value of $Y_k$ only changes after a new instance of energy arrival. The transition probability of $Y_k$ is expressed as
\begin{subequations}
\begin{numcases} {\text{Pr}\{Y_{k+1}|Y_k,E_k,n_k,l_k\}=}
1, &$\text{if} \quad n_k>1, Y_{k+1}=Y_k$ \nonumber\\
1, &$\text{if} \quad l_k>1, Y_{k+1}=Y_k$ \nonumber\\
1, &$\text{else if} \quad n_k=1, l_k=1, Y_{k+1}=E_k$ \nonumber \\
0, &$\text{else}$ \nonumber
\end{numcases}
\end{subequations}
Due to the constraints of the energy in the battery and the maximum transmit power, the transmit power should be constrained as
\begin{align}
0 \leq P_k \leq \min\{ B_k-\epsilon_{x_k}, P_{\text{max}}\},
\end{align}
where $P_{\text{max}}$ is the maximum allowed transmission power.
According to Shannon's formula, the transmission rate is denoted by
\begin{align}
r(P_k,\gamma_k)=\log(1+\gamma_k P_k).
\end{align}
{
The objective function is set as
\begin{align}
\lim_{K \to \infty} \max_{P_k, x_k} \frac{1}{K}E\left[\sum_{k=1}^{K}r(P_k,\gamma_k)-\eta\sum_{k=1}^{K}\delta_kR_{x_k}\right],
\end{align}
where $r(P_k,\gamma_k)$ is the transmission rate of stage $k$ given the transmission power $P_k$ and the channel gain $\gamma_k$, which corresponds to the throughput in stage $k$; the expectation is taken over the channel gain and the energy arrival rate; $\delta_kR_{x_k}$ is the amount of baseband signals transmitted via fronthaul in slot $k$, i.e., the fronthaul overhead, which corresponds to the average fronthaul rate.
The optimization variables are the transmission power and the functional split mode selection, and $\eta$ is a weighting factor. We can trade off between the throughput and the fronthaul overhead by adjusting $\eta$. With a large $\eta$, we have a stringent constraint on the average fronthaul rate. To satisfy a given constraint on the average fronthaul rate, we can iterate the weighting factor $\eta$ with algorithms such as the gradient descent method \cite{iter}.
}
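As a small illustration of this outer iteration, the following Python sketch uses a simple bisection on $\eta$ instead of gradient descent; the helper \texttt{avg\_fh\_rate(eta)} is a hypothetical routine that solves the MDP for the given $\eta$ and returns the average fronthaul rate achieved by the resulting policy, which is assumed to be non-increasing in $\eta$.
\begin{verbatim}
def tune_eta(avg_fh_rate, D, eta_lo=0.0, eta_hi=10.0, iters=30):
    """Bisection on the weighting factor eta so that the average fronthaul
    rate of the resulting policy approaches the target D."""
    for _ in range(iters):
        eta = 0.5 * (eta_lo + eta_hi)
        if avg_fh_rate(eta) > D:   # constraint violated: penalize fronthaul more
            eta_lo = eta
        else:
            eta_hi = eta
    return 0.5 * (eta_lo + eta_hi)
\end{verbatim}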
The average throughput maximization problem is formulated as an MDP, and the value iteration algorithm can be used to find the optimal policy \cite{DP}. Every slot is treated as a stage. Denote by $a_k=\{P_k, x_k\}$ the action taken in stage $k$. The reward function in stage $k$ is denoted by
\begin{equation}
g(s_k,a_k)=\log(1+\gamma_k P_k)-\eta \delta_kR_{x_k},
\end{equation}
The objective is to maximize the average per-stage reward of the infinite horizon problem, which is denoted by
\begin{equation}
J^*(s_0)=\lim_{K \to \infty}\max_{\pi} \frac{1}{K}E\left[\sum_{k=0}^{K-1}g(s_k,a_k)\right], \label{eq:mdp}
\end{equation}
where $s_0$ is the initial state, and $\pi=\{a_0, a_1, ..., a_{K-1}\}$ is a policy.
Problem (\ref{eq:mdp}) can be solved with the value iteration algorithm. Denoting by $\lambda$ the average per-stage reward and by $h(i)$ the relative reward when starting at state $i$, the Bellman equation is expressed as
\begin{equation}
\lambda+h(i)=\max_{a}\{ g(i, a)+ \sum_{j \in \mathcal{S}}\text{Pr}\{j|i,a\}h(j)\},
\end{equation}
where $\mathcal{S}$ is the set of all possible states.
Initialize $h^{(0)}(i)=0$. Given any state $s$, for the $(b+1)$-th iteration, we have
\begin{align}
h^{(b+1)}(i)= \max_{a}\{ g(i, a)+ \sum_{j \in \mathcal{S}}\text{Pr}\{j|i,a\}h^{(b)}(j)\}-\max_{a}\{ g(s, a)+ \sum_{j \in \mathcal{S}}\text{Pr}\{j|s,a\}h^{(b)}(j)\},
\end{align}
note that $\max_{a}\{ g(s, a)+ \sum_{j \in \mathcal{S}}\text{Pr}\{j|s,a\}h^{(b)}(j)\}$ converges to $\lambda$.
A more general iteration formulation is
\begin{align}
h^{(b+1)}(i)= &(1-\tau) h^{(b)}(i)+\max_{a}\{ g(i, a)+ \tau \sum_{j \in \mathcal{S}}\text{Pr}\{j|i,a\}h^{(b)}(j)\} \nonumber\\
&-\max_{a}\{ g(s, a)+ \tau \sum_{j \in \mathcal{S}}\text{Pr}\{j|s,a\}h^{(b)}(j)\},
\end{align}
where $0 < \tau < 1$. Denote the gap between $h^{(b+1)}(i)$ and $h^{(b)}(i)$ as $d^{(b)}(i)$, i.e.,
\begin{align}
d^{(b)}(i)=h^{(b+1)}(i)-h^{(b)}(i).
\end{align}
The iteration is considered to have converged when
\begin{align}
\max_{i}d^{(b)}(i)-\min_{i}d^{(b)}(i)<\omega,
\end{align}
where $\omega$ is a threshold that controls the trade-off between convergence speed and accuracy. The detailed value iteration algorithm is described in Algorithm \ref{alg:mdp}.
\begin{algorithm}
\caption{Value Iteration Algorithm}
\label{alg:mdp}
\begin{algorithmic}
\STATE {Initialize $b=0$, $h^{(0)}(i)=0$ for $\forall i \in \mathcal{S}$, $\lambda^{(0)}=0$, select a fixed state $s_0$ }
\REPEAT
\STATE{1. Update the average per-stage reward $\lambda$}
\STATE{\begin{equation}
\lambda^{(b+1)}=\max_{a}\{ g(s_0, a) + \tau \sum_{j \in \mathcal{S}}\text{Pr}\{j|s_0,a\}h^{(b)}(j)\}\}. \nonumber
\end{equation}
}
\STATE{2. Update $h$}
\STATE{
\begin{align}
h^{(b+1)}(i)= (1-\tau) h^{(b)}(i) + \max_{a}\{ g(i, a)+ \tau \sum_{j \in \mathcal{S}}\text{Pr}\{j|i,a\}h^{(b)}(j)\}-\lambda^{(b+1)} \nonumber
\end{align}
}
\STATE{3. Update $b=b+1$}
\UNTIL{$\max_{i}d^{(b)}(i)-\min_{i}d^{(b)}(i)<\omega$}
\end{algorithmic}
\end{algorithm}
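For concreteness, a minimal Python sketch of this relative value iteration is given below for a generic tabular average-reward MDP; it assumes that the transition probabilities and per-stage rewards have already been enumerated into arrays, which is precisely the step that becomes expensive when the state space is large.
\begin{verbatim}
import numpy as np

def relative_value_iteration(P, g, tau=0.9, omega=1e-6, max_iter=100000):
    """P[i, a, j] = Pr{j | i, a}, g[i, a] = per-stage reward.
    Returns the average-reward estimate and a greedy stationary policy."""
    S, A = g.shape
    h = np.zeros(S)
    s0 = 0                                  # fixed reference state
    lam = 0.0
    for _ in range(max_iter):
        Q = g + tau * (P @ h)               # Q[i,a] = g(i,a) + tau*sum_j Pr{j|i,a} h(j)
        lam = Q[s0].max()                   # step 1: average per-stage reward
        h_new = (1.0 - tau) * h + Q.max(axis=1) - lam   # step 2: relative rewards
        d = h_new - h
        h = h_new
        if d.max() - d.min() < omega:       # span-based stopping rule
            break
    policy = (g + tau * (P @ h)).argmax(axis=1)
    return lam, policy
\end{verbatim}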
For the optimal online problem, the number of states of the MDP model is $(B_{\text{max}}+1)\times E_{\text{max}} \times E_{\text{max}} \times G \times N \times L$, and the number of actions is $(P_{\text{max}}+1)\times X$. The state space can be very large if any of these factors is large, and the value iteration algorithm may then suffer from the curse of dimensionality. In this case, a lower-complexity algorithm is needed. In the next section, we will first analyze the power control policy with one instance of energy arrival, based on which a heuristic online algorithm is proposed.
\section{Single Energy Arrival, Constant Channel Gain} \label{sec:single}
According to Proposition \ref{prop:2}, at most two functional split modes are selected in each epoch in the optimal offline problem.
To gain some insights, we will give some intuitive results when there is only one instance of energy arrival, the channel gain is constant, and there are two candidate functional split modes, i.e., $M=1$, $N=1$ and $X=2$. Note that if the channel gain is averaged over an epoch, one epoch can be approximated to have only one block, where the approximated channel gain is the average channel gain over the epoch.
For brevity, we will use $\theta_1$, $\theta_2$, $p_1$, $p_2$ instead of $\theta_{1,1,1}$, $\theta_{1,1,2}$, $p_{1,1,1}$ and $p_{1,1,2}$ in this section, i.e., $\theta_1$ and $\theta_2$ are the transmission durations with the two functional split modes, and $p_1$ and $p_2$ are the corresponding transmission powers; the amount of available energy in this epoch is denoted by $E$, and the epoch length is denoted by $L$. If there are more than two candidate functional split modes, i.e., $X>2$, we can first calculate the throughput when any two of the functional split modes are selected (there are $\frac{X(X-1)}{2}$ possible combinations in total), and obtain the optimal power control policy by comparing the throughput of all the possible scenarios.
If only one mode is selected, denoted by mode $x$, the optimal power control policy can be obtained with ``glue pouring" \cite{glue}. Given the processing power $\varepsilon_x$ and channel gain $\gamma$, and without the maximum transmission duration constraint, the throughput maximization problem can be simplified to
\begin{equation}
\max_{p_x} \frac{E}{p_x+\varepsilon_x}\log(1+\gamma p_x),
\end{equation}
where $p_x$ is the transmission power, and denote $v_x^*$ as the optimal transmission power obtained by solving the optimization problem.
The optimal transmission power $v_x^*$ satisfies:
\begin{align}
(1+\gamma v_x^*)\log(1+\gamma v_x^*)-\gamma v_x^*=\gamma \varepsilon_x. \label{eq:glue}
\end{align}
Note that the expression on the left side of the equality is an increasing function of $v_x^*$; hence the equation has a unique solution, and $v_x^*$ increases with $\varepsilon_x$. Due to the constraints of epoch length and average fronthaul rate, the transmission duration is limited. Denote by $\theta_x^{\text{max}}=\min\{\frac{DL}{R_x}, L\}$ the maximum transmission duration when only mode $x$ is selected. When $E<\theta_x^{\text{max}}(v_x^*+\varepsilon_x)$, the optimal power control policy is
\begin{equation}
p_x=v_x^*,\quad \theta_x=\frac{E}{v_x^*+\varepsilon_x}.
\end{equation}
When $E \geq \theta_x^{\text{max}}(v_x^*+\varepsilon_x)$, the optimal power control policy is
\begin{equation}
p_x=\frac{E}{\theta_x^{\text{max}}}-\varepsilon_x, \quad \theta_x=\theta_x^{\text{max}}.
\end{equation}
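Since the left-hand side of (\ref{eq:glue}) is increasing in $v_x^*$, the power $v_x^*$ can be computed numerically; a minimal Python sketch based on bisection (with the natural logarithm assumed) is given below.
\begin{verbatim}
import math

def glue_pouring_power(gamma, eps, tol=1e-9):
    """Solve (1 + gamma*v)*log(1 + gamma*v) - gamma*v = gamma*eps for v >= 0."""
    def residual(v):
        x = gamma * v
        return (1.0 + x) * math.log(1.0 + x) - x - gamma * eps
    lo, hi = 0.0, 1.0
    while residual(hi) < 0.0:   # enlarge the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}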
The average fronthaul rate constraint $D$ also affects the power control policy. We will derive the optimal power control policy under different values of $D$ in the remainder of this section. We assume that the two modes are mode 1 and mode 2, where mode 1 has fewer baseband functions at the RRU, and we thus have $R_1 > R_2$ and $\varepsilon_1 < \varepsilon_2$.
\subsection{$D \geq R_1$}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{case1.pdf}
\caption{The optimal power control policy when $D \geq R_1$, where $\theta_1$ and $p_1$ are represented by the width and height of the black shadowed block with up diagonal respectively: (a) $E<(v_1^*+\varepsilon_1)L$; (b) $E\geq(v_1^*+\varepsilon_1)L$. }
\label{fig:case1}
\end{figure}
When $D \geq R_1$, the average fronthaul rate constraint can always be satisfied, and thus only mode $1$, which has smaller processing power, is selected.
When $E<(v_1^*+\varepsilon_1)L$, the optimal power control policy is
\begin{align}
\theta_1=\frac{E}{v_1^*+\varepsilon_1}, \quad p_1=v_1^*, \quad \theta_2=0, \quad p_2=0,
\end{align}
as described in Fig. \ref{fig:case1}(a).
When $E\geq(v_1^*+\varepsilon_1)L$, the optimal power control policy is
\begin{align}
\theta_1=L, \quad p_1=\frac{E}{L}-\varepsilon_1, \quad \theta_2=0, \quad p_2=0,
\end{align}
as described in Fig. \ref{fig:case1}(b).
\subsection{$R_2 < D < R_1$} \label{sec:sub2}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{case2.pdf}
\caption{The optimal power control policy when $R_2 < D < R_1$, $\theta_1$ and $p_1$ are represented by the width and height of the black shadowed block with up diagonal, respectively, $\theta_2$ and $p_2$ are represented by the width and height of the red shadowed block with down diagonal, respectively: (a) $E \leq \frac{DL(v_1^*+\varepsilon_1)}{R_1}$; (b) $\frac{DL(v_1^*+\varepsilon_1)}{R_1} < E \leq \frac{DL(v_3^*+\varepsilon_1)}{R_1}$; (c) $\frac{DL(v_3^*+\varepsilon_1)}{R_1} < E \leq Lv_3^*+\frac{DL(\varepsilon_1-\varepsilon_2)+(R_1\varepsilon_2-R_2\varepsilon_1)L}{R_1-R_2}$; (d) $E>Lv_3^*+\frac{DL(\varepsilon_1-\varepsilon_2)+(R_1\varepsilon_2-R_2\varepsilon_1)L}{R_1-R_2}$.}
\label{fig:case2}
\end{figure}
When $E \leq \frac{DL(v_1^*+\varepsilon_1)}{R_1}$, if only functional split mode 1 is selected with transmission power $v_1^*$, the transmission time is $\frac{E}{v_1^*+\varepsilon_1}$, where the average fronthaul rate constraint can be satisfied.
Thus the optimal power control policy is
\begin{align}
\theta_1=\frac{E}{v_1^*+\varepsilon_1}, \quad p_1=v_1^*, \quad \theta_2=0, \quad p_2=0,
\end{align}
as described in Fig. \ref{fig:case2}(a).
If $\theta_1R_1+\theta_2R_2<DL$, i.e., the amount of data transmitted via fronthaul is less than the allowed amount $DL$, functional split mode 2, which has larger processing power, should not be selected.
When $D<R_1$, and $E \geq \frac{DL(v_1^*+\varepsilon_1)}{R_1}$, if only functional split mode 1 is selected, $\theta_1=\frac{DL}{R_1}$, we have $\theta_1R_1+\theta_2R_2=DL$. We can draw the conclusion that when $E \geq \frac{DL(v_1^*+\varepsilon_1)}{R_1}$, there should be
$\theta_1R_1+\theta_2R_2=DL$.
According to Proposition \ref{prop:1}, the transmission power of the two modes are the same, denoted by $p$, and thus we have
$\theta_1(p+\varepsilon_1)+\theta_2(p+\varepsilon_2)=E$,
the transmission duration can be expressed as
\begin{align}
\theta_1=\frac{(p+\varepsilon_2)DL-R_2E}{R_1(p+\varepsilon_2)-R_2(p+\varepsilon_1)}, \quad \theta_2=\frac{R_1E-(p+\varepsilon_1)DL}{R_1(p+\varepsilon_2)-R_2(p+\varepsilon_1)}.
\end{align}
The throughput is
\begin{align}
H=\frac{(R_1-R_2)E+(\varepsilon_2-\varepsilon_1)DL}{R_1(p+\varepsilon_2)-R_2(p+\varepsilon_1)}\log(1+\gamma p).
\end{align}
Taking the derivative of $H$ with respect to $p$, we have
\begin{align}
\frac{\partial H}{\partial p}=&\frac{(R_1-R_2)\left[(R_1-R_2)E+(\varepsilon_2-\varepsilon_1)DL\right]}{\left[(R_1-R_2)p+R_1\varepsilon_2-R_2\varepsilon_1 \right]^2} \nonumber \\
& \times \left[\frac{\gamma(p+\varepsilon_2+\frac{R_2(\varepsilon_2-\varepsilon_1)}{R_1-R_2})}{1+\gamma p}-\log(1+\gamma p)\right]
\end{align}
Let $\frac{\partial H}{\partial p}=0$, we have
\begin{align} \label{eq:v3}
\frac{\gamma(v_3^*+\varepsilon_2+\frac{R_2(\varepsilon_2-\varepsilon_1)}{R_1-R_2})}{1+\gamma v_3^*}-\log(1+\gamma v_3^*)=0,
\end{align}
this equation has the same form as (\ref{eq:glue}), which determines the optimal transmission power in glue pouring; denote its solution by $v_3^*$. Since $\frac{R_2(\varepsilon_2-\varepsilon_1)}{R_1-R_2}>0$ and $\varepsilon_2>\varepsilon_1$, we have $v_3^*>v_1^*$.
When $p<v_3^*$, we have $\frac{\partial H}{\partial p}>0$, the throughput increases with $p$. The transmission power $p$ should be as large as possible, while satisfying that $\theta_1 \geq 0$ and $\theta_2 \geq 0$.
When $\frac{DL(v_1^*+\varepsilon_1)}{R_1} < E \leq \frac{DL(v_3^*+\varepsilon_1)}{R_1}$, the maximum transmission power $p=\frac{ER_1}{DL}-\varepsilon_1$ is achieved when $\theta_1=\frac{DL}{R_1}$ and $\theta_2=0$,
i.e., the optimal power control policy is
\begin{align}
\theta_1=\frac{DL}{R_1}, \quad p_1=\frac{ER_1}{DL}-\varepsilon_1, \quad \theta_2=0, \quad p_2=0,
\end{align}
i.e., only functional split mode 1 is selected, the transmission power increases with $E$, while the transmission duration remains unchanged, as described in Fig. \ref{fig:case2}(b).
When $p>v_3^*$, we have $\frac{\partial H}{\partial p}<0$, the throughput decreases with $p$.
If $\frac{DL(v_3^*+\varepsilon_1)}{R_1} < E \leq Lv_3^*+\frac{DL(\varepsilon_1-\varepsilon_2)+(R_1\varepsilon_2-R_2\varepsilon_1)L}{R_1-R_2}$, the transmission power can be $v_3^*$, and the transmission duration can be obtained by solving the following equations:
\begin{align}
\theta_1R_1+\theta_2R_2=DL, \quad \theta_1(v_3^*+\varepsilon_1)+\theta_2(v_3^*+\varepsilon_2)=E.
\end{align}
The optimal power control policy is
\begin{align}
\theta_1=\frac{(v_3^*+\varepsilon_2)DL-R_2E}{R_1(v_3^*+\varepsilon_2)-R_2(v_3^*+\varepsilon_1)}, \quad
\theta_2=\frac{R_1E-(v_3^*+\varepsilon_1)DL}{R_1(v_3^*+\varepsilon_2)-R_2(v_3^*+\varepsilon_1)}, \quad p=v_3^*, \label{eq:case23}
\end{align}
as described in Fig. \ref{fig:case2}(c). With the increasing of $E$, the transmission power remains unchanged, the transmission duration of functional split mode 1 decreases while the transmission duration of functional split mode 2 increases.
Note that when $E=Lv_3^*+\frac{DL(\varepsilon_1-\varepsilon_2)+(R_1\varepsilon_2-R_2\varepsilon_1)L}{R_1-R_2}$, the total transmission duration is equal to the epoch length, i.e., $\theta_1+\theta_2=L$.
When $E>Lv_3^*+\frac{DL(\varepsilon_1-\varepsilon_2)+(R_1\varepsilon_2-R_2\varepsilon_1)L}{R_1-R_2}$, due to the epoch length constraint, we have $p>v_3^*$, and the transmission durations of the two functional split modes should satisfy
\begin{align}
\theta_1+\theta_2=L, \quad \theta_1R_1+\theta_2R_2=DL,
\end{align}
i.e., $\theta_1=\frac{DL-R_2L}{R_1-R_2}$, $\theta_2=\frac{R_1L-DL}{R_1-R_2}$. As there is no energy waste, we have
$\theta_1(p+\varepsilon_1)+\theta_2(p+\varepsilon_2)=E$,
i.e., the optimal power control policy is
\begin{align}
\theta_1=\frac{DL-R_2L}{R_1-R_2}, \quad \theta_2=\frac{R_1L-DL}{R_1-R_2}, \quad p=\frac{E}{L}-\frac{D(\varepsilon_1-\varepsilon_2)}{R_1-R_2}-\frac{R_1\varepsilon_2-R_2\varepsilon_1}{R_1-R_2},
\end{align}
as described in Fig. \ref{fig:case2}(d). As $E$ increases, the transmission durations of both functional split modes stay unchanged, while the transmission power increases.
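To make the case analysis above concrete, the following Python sketch maps a given energy budget $E$ to the optimal $(\theta_1,\theta_2,p)$ for $R_2 < D \leq R_1$. The function name and argument order are ours; the lowest-energy regime (Fig. \ref{fig:case2}(a), derived earlier) is assumed here to take the same form as in the $D \leq R_2$ case, i.e., transmission in mode 1 at power $v_1^*$.
\begin{verbatim}
def policy_mid_D(E, D, L, R1, R2, eps1, eps2, v1, v3):
    # Optimal (theta1, theta2, p) for R2 < D <= R1, following the regime
    # thresholds derived above; v1 and v3 stand for v_1^* and v_3^*.
    E_b = D * L * (v1 + eps1) / R1                       # end of regime (a)
    E_c = D * L * (v3 + eps1) / R1                       # end of regime (b)
    E_d = L * v3 + (D * L * (eps1 - eps2)
                    + (R1 * eps2 - R2 * eps1) * L) / (R1 - R2)  # end of regime (c)
    if E <= E_b:   # Fig. case2(a): mode 1 at v_1^* (assumed analogous to D <= R2)
        return E / (v1 + eps1), 0.0, v1
    if E <= E_c:   # Fig. case2(b): mode 1 only, duration fixed at DL/R1
        return D * L / R1, 0.0, E * R1 / (D * L) - eps1
    if E <= E_d:   # Fig. case2(c): both modes share the power v_3^*
        den = R1 * (v3 + eps2) - R2 * (v3 + eps1)
        th1 = ((v3 + eps2) * D * L - R2 * E) / den
        th2 = (R1 * E - (v3 + eps1) * D * L) / den
        return th1, th2, v3
    # Fig. case2(d): epoch fully used, power increases linearly with E
    th1 = (D - R2) * L / (R1 - R2)
    th2 = (R1 - D) * L / (R1 - R2)
    p = E / L - D * (eps1 - eps2) / (R1 - R2) - (R1 * eps2 - R2 * eps1) / (R1 - R2)
    return th1, th2, p
\end{verbatim}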
\subsection{$D \leq R_2$}
When $D \leq R_2$, the derivation of the optimal transmission power control policy is similar to the analysis in Section \ref{sec:sub2}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{case3.pdf}
\caption{The optimal power control policy when $D \leq R_2$, $\theta_1$ and $p_1$ are represented by the width and height of the black shadowed block with up diagonal, respectively, $\theta_2$ and $p_2$ are represented by the width and height of the red shadowed block with down diagonal, respectively: (a) $E\leq\frac{DL(v_1^*+\varepsilon_1)}{R_1}$; (b) $\frac{DL(v_1^*+\varepsilon_1)}{R_1} < E \leq \frac{DL(v_3^*+\varepsilon_1)}{R_1}$; (c) $\frac{DL(v_3^*+\varepsilon_1)}{R_1} < E \leq \frac{DL(v_3^*+\varepsilon_2)}{R_2}$; (d) $E>\frac{DL(v_3^*+\varepsilon_2)}{R_2}$. }
\label{fig:case3}
\end{figure}
When $E<\frac{DL(v_1^*+\varepsilon_1)}{R_1}$, the optimal power control policy is
\begin{align}
\theta_1=\frac{E}{v_1^*+\varepsilon_1}, \quad p_1=v_1^*, \quad \theta_2=0, \quad p_2=0,
\end{align}
as described in Fig. \ref{fig:case3}(a).
When $\frac{DL(v_1^*+\varepsilon_1)}{R_1} < E \leq \frac{DL(v_3^*+\varepsilon_1)}{R_1}$, the optimal power control policy is
\begin{align}
\theta_1=\frac{DL}{R_1}, \quad p_1=\frac{ER_1}{DL}-\varepsilon_1, \quad \theta_2=0, \quad p_2=0,
\end{align}
as described in Fig. \ref{fig:case3}(b).
When $\frac{DL(v_3^*+\varepsilon_1)}{R_1} < E \leq \frac{DL(v_3^*+\varepsilon_2)}{R_2}$, the optimal transmission power $v_3^*$ can be achieved, and the optimal power control policy is
\begin{align}
\theta_1=\frac{(v_3^*+\varepsilon_2)DL-R_2E}{R_1(v_3^*+\varepsilon_2)-R_2(v_3^*+\varepsilon_1)}, \quad \theta_2=\frac{R_1E-(v_3^*+\varepsilon_1)DL}{R_1(v_3^*+\varepsilon_2)-R_2(v_3^*+\varepsilon_1)}, \quad p=v_3^*,
\end{align}
as described in Fig. \ref{fig:case3}(c).
When $E>\frac{DL(v_3^*+\varepsilon_2)}{R_2} $, due to the average fronthaul rate constraint, the transmission duration is limited and we have $p>v_3^*$. As the throughput $H$ decreases with $p$, the transmission power $p$ should be as small as possible. Functional split mode $2$, which has the smaller fronthaul rate requirement, is selected, and the optimal power control policy is
\begin{align}
\theta_1=0, \quad p_1=0, \quad \theta_2=\frac{DL}{R_2}, \quad p_2=\frac{ER_2}{DL}-\varepsilon_2,
\end{align}
as described in Fig. \ref{fig:case3}(d).
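The corresponding dispatch for $D \leq R_2$ can be sketched in the same way; again the function name is ours and the expressions simply transcribe the four regimes above.
\begin{verbatim}
def policy_small_D(E, D, L, R1, R2, eps1, eps2, v1, v3):
    # Optimal (theta1, theta2, p1, p2) for D <= R2, per Fig. case3(a)-(d).
    if E <= D * L * (v1 + eps1) / R1:          # (a): mode 1 at v_1^*
        return E / (v1 + eps1), 0.0, v1, 0.0
    if E <= D * L * (v3 + eps1) / R1:          # (b): mode 1, duration DL/R1
        return D * L / R1, 0.0, E * R1 / (D * L) - eps1, 0.0
    if E <= D * L * (v3 + eps2) / R2:          # (c): both modes at v_3^*
        den = R1 * (v3 + eps2) - R2 * (v3 + eps1)
        th1 = ((v3 + eps2) * D * L - R2 * E) / den
        th2 = (R1 * E - (v3 + eps1) * D * L) / den
        return th1, th2, v3, v3
    return 0.0, D * L / R2, 0.0, E * R2 / (D * L) - eps2   # (d): mode 2 only
\end{verbatim}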
\section{Heuristic Online Policy} \label{sec:online}
{In each block, if the available amount of energy in the battery $E$, the transmission deadline $T$, the channel gain $\gamma$ and the average fronthaul rate $D$ are accurately estimated, the optimal transmission policy can be easily obtained according to the analysis in Section \ref{sec:single}.
Due to the energy constraint, only blocks with good channel states are used for transmission, which improves the energy efficiency. On the other hand, to avoid energy waste introduced by the limited battery size, we prefer to use all the energy in the battery before the end of each epoch. Since transmission only takes place in blocks with good channel states, the energy flows to the next epoch if the current epoch contains no such block, which guarantees that the energy is eventually spent in blocks with good channel states.}
To simplify the expression, we define a function $f$ as:
\begin{equation} \label{fun:single}
[\boldsymbol{\theta}, \boldsymbol{p}] = f(E,T,\gamma,D),
\end{equation}
where $\boldsymbol{\theta}=[\theta_1, \theta_2, \cdots, \theta_X]$, $\theta_x$ is the optimal transmission duration for which functional split mode $x$ is selected, $\boldsymbol{p}=[p_1,p_2,\cdots,p_X]$, and $p_x$ is the optimal transmission power when functional split mode $x$ is selected. We then propose a low-complexity heuristic online algorithm.
\begin{algorithm}
\caption{Heuristic Online Policy}
\label{alg:heuristic}
\begin{algorithmic}
\STATE {Initialize $B_0=0$, $D_0=0$}
\FOR{$n=1, 2, ..., $}
\STATE{1. Update the energy state $B_{(n-1)L+1}=B_{(n-1)L}+E_{(n-1)L+1}$}
\STATE{2. Get the expected number of blocks with good channel states in the current epoch $N_{\text{good}}$, and the average channel gain $\gamma_{\text{avg}}$}
\STATE{3. Update the average fronthaul rate constraint}
\STATE{\begin{align}
d_n=\frac{(n+n_{\text{heu}})D-D_{n-1}}{N_{\text{heu}}} \nonumber
\end{align}}
\STATE{4. Get the power control policy, with $B_{(n-1)L+1}$, $N_{\text{good}}$, $\gamma_{\text{avg}}$, $d_n$ and thus the value of the function in (\ref{fun:single})}
\STATE{5. Transmit with the power control policy $\boldsymbol{\theta}$ and $\boldsymbol{p}$, update the battery state and the cumulative amount of data transmitted via fronthaul}
\STATE{\begin{align}
&B_{nL} = B_{(n-1)L+1} - \sum_{x=1}^{X}\theta_x(p_x+\varepsilon_x), \quad D_n = D_{n-1} + \sum_{x=1}^{X}\theta_xR_x \nonumber
\end{align}}
\ENDFOR
\end{algorithmic}
\end{algorithm}
The detailed algorithm is described in Algorithm \ref{alg:heuristic}.
We evaluate the transmission policy at the beginning of each block, denoted by block $n$ without loss of generality. The RRU transmits only when the channel state is good, i.e., when the channel gain is larger than a threshold denoted by $\gamma_{\text{th}}$.
At the first step, evaluate the amount of energy in the battery, $B_{(n-1)L+1}$, which is obtained from the energy remaining in the battery at the end of the last block, $B_{(n-1)L}$, and the energy arriving in this block, $E_{(n-1)L+1}$. We have $B_{(n-1)L+1}=B_{(n-1)L}+E_{(n-1)L+1}$.
At the second step, update the expected number of blocks with good channel states in the remaining time of the epoch, denoted by $N_{\text{good}}$, and the average channel gain, denoted by $\gamma_{\text{avg}}$, both of which can be obtained from the distribution of the channel gain.
{
Denote by $\text{Pr}\{n, n_v, w\}$ the probability that there are $n_v$ blocks with state $v$ among the next $n$ blocks and that the channel state in the $n$-th block is $w$.
We have
\begin{align}
\text{Pr}\{n+1, n_v, y\} = \sum_{w}\text{Pr}\{n, n_v, w\}q_{w,y}, \quad y \neq v, \\
\text{Pr}\{n+1, n_v+1, v\} = \sum_{w}\text{Pr}\{n, n_v, w\}q_{w,v}.
\end{align}
Given the channel state $w$ of the first block, if $w = v$, we have $\text{Pr}\{1, n_v = 1, w\} = 1$, and $\text{Pr}\{1, n_v , y \} = 0$ for the other parameters; if $w \neq v$, we have $\text{Pr}\{1, n_v = 0, w\} = 1$, and $\text{Pr}\{1, n_v , y \} = 0$ for the other parameters.
With the initial iteration values and the iterative formula, we can get the distribution of the number of blocks with channel state $v$, i.e., $\sum_{w}\text{Pr}\{N_{\text{r}}, n_v, w\}$, in the remaining $N_{\text{r}}$ blocks.
With distributions of the number of blocks with each channel state, the expected number of blocks with good channel states can be easily obtained.
}
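The recursion above is straightforward to evaluate numerically. A minimal Python sketch is given below; the function name is ours, \texttt{q} is the channel-state transition matrix $q_{w,y}$, and the set of ``good'' states is an input so that the per-state distributions can be combined as described.
\begin{verbatim}
import numpy as np

def expected_good_blocks(q, w0, good_states, N_r):
    # Expected number of good-channel blocks among the next N_r blocks,
    # given the transition matrix q and the current channel state w0.
    # P[k, w] = Pr{k good blocks so far, last block in state w}.
    S = q.shape[0]
    P = np.zeros((N_r + 1, S))
    P[1 if w0 in good_states else 0, w0] = 1.0
    for _ in range(N_r - 1):
        P_new = np.zeros_like(P)
        for k in range(N_r):
            for w in range(S):
                if P[k, w] == 0.0:
                    continue
                for y in range(S):
                    P_new[k + (1 if y in good_states else 0), y] += P[k, w] * q[w, y]
        P = P_new
    counts = P.sum(axis=1)                 # Pr{k good blocks among N_r}
    return float(np.dot(np.arange(N_r + 1), counts))
\end{verbatim}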
At the third step, update the average fronthaul rate constraint, denoted by $d_n$. We guarantee that the data amount transmitted via fronthaul does not exceed $nLD$ from block 1 to block $n$.
Denote the total amount of transmitted data in the first $n$ blocks as $D_n$. To guarantee that the average fronthaul rate constraint is satisfied in the first $n+n_{\text{heu}}$ blocks, the amount of data transmitted via fronthaul in the next $n_{\text{heu}}$ blocks should not exceed $(n+n_{\text{heu}})D-D_{n-1}$, where $n_{\text{heu}}$ is a constant number of blocks used in the heuristic algorithm. In the next $n_{\text{heu}}$ blocks, the expected number of blocks with good channel states is denoted as $N_{\text{heu}}$. The average fronthaul rate constraint is estimated as $d_n=\frac{(n+n_{\text{heu}})D-D_{n-1}}{N_{\text{heu}}}$.
At the fourth step, get the power control policy, including the transmission durations $\boldsymbol{\theta}$ and the transmission powers $\boldsymbol{p}$, using the function in (\ref{fun:single}).
At the fifth step, transmit with this policy and update the energy in the battery at the end of the block, as well as the amount of data transmitted via fronthaul in the first $n$ blocks, i.e.,
\begin{align}
B_{nL} = B_{(n-1)L+1} - \sum_{x=1}^{X}\theta_x(p_x+\varepsilon_x), \quad D_n = D_{n-1} + \sum_{x=1}^{X}\theta_xR_x.
\end{align}
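The five steps can be summarized in the following Python sketch of Algorithm \ref{alg:heuristic}. The helper callables \texttt{f\_single} (the single-epoch solver of (\ref{fun:single})) and \texttt{estimate\_channel} (which returns $N_{\text{good}}$, $\gamma_{\text{avg}}$ and $N_{\text{heu}}$) are assumed to be available; their names and exact signatures are ours, and the way $N_{\text{good}}$ enters the deadline argument of $f$ is a simplification.
\begin{verbatim}
def heuristic_online(blocks, L, D, n_heu, rates, eps, gamma_th,
                     f_single, estimate_channel):
    # blocks yields (arrived_energy, channel_gain) per block;
    # rates[x] = R_x, eps[x] = epsilon_x for each functional split mode.
    B, D_cum = 0.0, 0.0
    for n, (E_arr, gain) in enumerate(blocks, start=1):
        B += E_arr                                        # step 1: battery update
        if gain <= gamma_th:                              # transmit only in good blocks
            continue
        N_good, gamma_avg, N_heu = estimate_channel(n)    # step 2: channel outlook
        d_n = ((n + n_heu) * D - D_cum) / max(N_heu, 1)   # step 3: fronthaul budget
        theta, p = f_single(B, N_good, gamma_avg, d_n)    # step 4: power control
        # step 5: transmit, then update battery and cumulative fronthaul data
        B -= sum(t * (px + ex) for t, px, ex in zip(theta, p, eps))
        D_cum += sum(t * Rx for t, Rx in zip(theta, rates))
    return D_cum
\end{verbatim}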
\section{Numerical Results} \label{sec:num}
{
We consider the downlink transmission of an energy harvesting RRU, where the baseband processing is according to the LTE protocol, as shown in Fig. \ref{fig:functionalsplit}.
The baseband functions at the RRU and the BBU are realized with general purpose processors via function virtualization.
Three candidate functional split modes are considered, including: mode 1, which splits between RF and IFFT, is the classical CPRI functional split; mode 2, which splits between resource mapping and precoding, is a reference split by eCPRI \cite{eCPRI}; mode 3, which splits between RLC and PDCP, is a reference split by 3GPP.
The RRU has one antenna and one carrier component; the bandwidth of the air interface is set to 20~MHz and the sampling rate is 30.72~MHz. We assume that there is only 1 user per TTI, occupying all PRBs. The highest modulation order is 64QAM. When the RRU works in the different functional split modes,
the corresponding required fronthaul rates are $R_1=983$~Mbps, $R_2=466$~Mbps and $R_3=151$~Mbps, respectively \cite{SplittingBS}. The corresponding processing powers of the RRU are $\varepsilon_1=2$~W, $\varepsilon_2=4$~W and $\varepsilon_3=5$~W, respectively, according to the downlink power model in \cite{LTEmodel}.
}
{
We assume that each slot lasts 10 seconds, and set each block to have $L=4$ slots and each epoch to have $N=2$ blocks.
The RRU is powered with renewable energy.
The harvested energy can be used after being stored in the battery, whose capacity is $B_{\text{max}}=1000$~J. The energy initially stored in the battery is 0~J.
Without loss of generality, we assume that the energy arrival is a Poisson process, which can be used to model solar panel or wind generation \cite{energymodel}. The distribution of the amount of energy arriving in each epoch is
\begin{equation}
\text{Pr}\{E_m=ANL\}=\frac{E_{{\text{avg}}}^A}{A!}e^{-E_{\text{avg}}},
\end{equation}
where $E_{\text{avg}}$ is the average energy arrival rate normalized by the number of slots in each epoch.
}
We consider that the channel gain between each user and the RRU follows a Rayleigh distribution with average channel gain $\gamma_\text{avg}=2$/W. The channel gain is discretized into $G=4$ consecutive non-overlapping intervals, and the probability that the channel gain falls in each interval is $\frac{1}{G}$. The channel gain in each interval is represented by its average value, and the channel gain in the $g$-th interval is denoted by $\gamma_g$. Assuming the channel gains of different users are i.i.d., the best channel gain of the users in each block, $\gamma_{\text{best}}$, follows:
\begin{equation}
\text{Pr}\{\gamma_{\text{best}} = \gamma_g \}=\left( \frac{g}{G} \right)^U - \left( \frac{g-1}{G} \right)^U, \quad 1 \leq g \leq G,
\end{equation}
where $U$ is the number of users. We consider the scenario where $U=2$, and the corresponding probability of each interval is $[\frac{1}{16}, \frac{3}{16},\frac{5}{16},\frac{7}{16}]$.
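For reference, the two stochastic ingredients of the simulation can be reproduced with a few lines of Python; the function names are ours and the printed probabilities match the $[\frac{1}{16}, \frac{3}{16},\frac{5}{16},\frac{7}{16}]$ quoted above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_epoch_energy(E_avg, N, L, size=1):
    # Energy arriving per epoch: E_m = A*N*L with A ~ Poisson(E_avg).
    return rng.poisson(E_avg, size) * N * L

def best_gain_probs(G, U):
    # Pr{best of U i.i.d. users falls in gain interval g}, g = 1..G.
    g = np.arange(1, G + 1)
    return (g / G) ** U - ((g - 1) / G) ** U

print(best_gain_probs(G=4, U=2))   # [0.0625 0.1875 0.3125 0.4375]
\end{verbatim}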
We first study the offline throughput maximization problem.
{
The solution obtained with continuous variables is denoted by the \emph{`upper bound'}, and the solution after rounding is denoted by \emph{`relax and round'}.
}
The relationship between the throughput and the average fronthaul rate is presented in Fig. \ref{fig:offline} and Fig. \ref{fig:online} for the `relax and round' solution and the optimal online policy, respectively, when $E_{{\text{avg}}}=5$~W. We can see that the throughput grows rapidly with the average fronthaul rate when the fronthaul rate is small, and the growth slows down when the fronthaul rate gets large. When the average fronthaul rate is small, fixing the functional split as mode 3 achieves performance similar to that of the flexible functional split, because the fronthaul is the main constraint in this scenario. When the average fronthaul rate is large, fixing the functional split as mode 1 achieves performance close to that of the flexible functional split, because the energy is the main constraint in this scenario and functional split mode 1 requires the lowest processing power.
{Throughputs of the `upper bound', the `relax and round', the optimal online policy and the heuristic online policy are compared in Fig. \ref{fig:compare}. We can see that `relax and round' performs very close to the `upper bound', which means that `relax and round' performs very close to the optimal solution. } The heuristic online policy has performance similar to the optimal online policy. The heuristic online policies with fixed functional split modes are adopted as baselines to show the benefit of flexible functional split, as shown in Fig. \ref{fig:heuristic}. We can see that with flexible functional split, the heuristic online policy has better performance than any fixed functional split mode.
\begin{figure}[H]
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{offline.pdf}
\caption{{Comparison of the `relax and round' with flexible functional split and fixed functional split under different average fronthaul rate constraints.}}
\label{fig:offline}
\end{minipage}
\qquad
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{online.pdf}
\caption{Comparison of the online policies with flexible functional split and fixed functional split under different average fronthaul rate constraints.}
\label{fig:online}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{compare.pdf}
\caption{{Comparison of the `upper bound', the `relax and round', optimal online policy and heuristic online policy under different average fronthaul rate constraints.}}
\label{fig:compare}
\end{minipage}
\qquad
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{heuristic.pdf}
\caption{Comparison of the heuristic online policies with flexible functional split and fixed functional split under different average fronthaul rate constraints.}
\label{fig:heuristic}
\end{minipage}
\end{figure}
To show the effect of the energy arrival rate $E_{\text{avg}}$, the relationship between the average throughput and the energy arrival rate is given in Fig. \ref{fig:energy_compare}, where the average fronthaul rate constraint is 360~Mbps. The throughput increases with the energy arrival rate for both the flexible functional split and the fixed functional split modes. When the energy arrival rate is small, the throughput increases approximately linearly with the energy arrival rate, because in this scenario the time used for transmission is short and the energy is the main constraint, rather than the average fronthaul rate or the channel states. However, due to the average fronthaul rate constraint, functional split modes with smaller fronthaul rate requirements should be selected if a longer transmission time is needed, which implies a larger processing power. On the other hand, the number of blocks with good channel states is limited, which means that the available transmission time with good channel states is limited. When the energy arrival rate is large, the throughput growth therefore slows down due to the average fronthaul rate constraint and the limited number of blocks with good channel states. With flexible functional split, the throughput is larger compared with the fixed functional split modes.
\begin{figure}[H]
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{energy_compare.pdf}
\caption{{Comparison of the `relax and round' with flexible functional split and fixed functional split under different energy arrival rate.}}
\label{fig:energy_compare}
\end{minipage}
\qquad
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{energy.pdf}
\caption{{Throughput versus the energy arrival rate with flexible functional split under different battery size.}}
\label{fig:energy}
\end{minipage}
\end{figure}
The relationship between the average throughput and the energy arrival rate with flexible functional split is given for different battery sizes in Fig. \ref{fig:energy}. We can see that when the energy arrival rate is small, the throughput with different battery sizes is almost the same. When the energy arrival rate is large, a small battery size leads to a larger probability of energy overflow and less energy can be transferred among different epochs, which results in a smaller throughput.
\section{Conclusions}
\label{sec:conclusion}
In this paper, we have studied the selection of the optimal functional split modes, and the corresponding transmission duration and transmission power of each mode, to maximize the throughput given the average fronthaul rate in C-RAN with a renewable energy powered RRU. The optimal offline policy has the property that at most two modes should be selected in each epoch, and the sum of the transmission power and the reciprocal of the channel gain is the same for the selected functional split modes. Numerical results show that with flexible functional split, the throughput can be notably improved compared with any fixed functional split mode. To deal with the curse of dimensionality of the online MDP problem, we derive the closed-form expression of the optimal power control policy in the scenario with one instance of energy arrival and two candidate functional split modes. We then propose a heuristic online algorithm, and numerical results show that the proposed heuristic online policy performs similarly to the optimal online policy.
{In the future, the optimal policy with multiple carriers will be explored, and we will further study the scenarios with multiple RRUs and random packet arrivals.}
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
\label{introduction}
Perseus is one of the most interesting and well studied star forming complexes, containing massive stars (the Per OB2 association), young stellar clusters (e.g. IC348 and NGC1333) and several dense star forming cores within the Perseus molecular cloud (e.g. \citet{kirk2006,ridge2006,rosolowsky2008}).
Perseus OB2 is the second closest OB association to the Sun (after the Scorpius-Centaurus association) at a distance of $\sim 300~$pc. \citet{dezeeuw99} estimated that Per OB2 is approximately 6~Myr old, while other studies also suggest different generations of star formation with ages less than 15 Myr (e.g. \citet{bally2008} and references therein).
Per OB2 contains both massive and intermediate mass stars while low mass to intermediate mass stars are currently forming within the Perseus molecular cloud
(e.g. \citet{enoch2009,arce2010,sadavoy2014}).
Therefore identifying the members of the association and determining their characteristics such as their age and mass will provide critical information on the process of different generations of star formation in this region.
We constructed a panchromatic census of young stellar objects (YSOs) in Per OB2 using Wide-field Infrared Survey Explorer ($\it WISE$) point source catalogue. We also employed all available optical and infrared all sky surveys including $\it Spitzer$, 2MASS and SDSS to construct spectral energy distribution (SED) of each source in order to determine its physical parameters. We have performed a $\it WISE$ based survey to identify young stellar objects throughout a $12^{\circ} \times 12^{\circ}$ area covering the Per OB2 association.
The Perseus molecular cloud covers a $2^{\circ}\times6^{\circ}$ ($\sim10\times30$ pc) area and was observed with the {\it Spitzer} space observatory in both IRAC ($3.6-8.0~\mu$m) and MIPS ($24-160~\mu$m) as part of the ``From Molecular Cores to Planet-Forming Disks'' (c2d) Legacy Survey \citep{jorgensen2006,rebull2007}. We selected the c2d covered region as the footprint, but did not limit our survey to the c2d area. c2d only covers very young star forming regions such as the IC348 and NGC1333 clusters. To study the complete population of the Perseus OB2 association we need to track all the members of the original ensemble, in particular the entire OB association, which has dispersed the initial molecular cloud, and other young stellar objects that are now spread over a larger area.
Assuming an average speed of $1-2$~km~s$^{-1}$ for stars, members can travel more than 20~pc in 10~Myr in each direction, which corresponds to an extended area of $\sim 10^\circ$ from the cloud center at a distance of $250-300$~pc.
To be complete, we selected a $12^\circ \times 12^\circ$ region ($50^{\circ}<$RA$<64^{\circ}$ and $26^{\circ}<$Dec$<38^{\circ}$) around the Perseus c2d footprint which completely covers the Perseus OB2 association. This region slightly overlaps with the Taurus star forming region in the South-West and may also contain older members of the Pleiades in the South. Figure \ref{regions} shows the c2d footprint and the area selected around it in our survey. The blue dots represent $\it WISE$ point sources.
Unfortunately we do not have spectral or dynamical data for each source, and hence cannot confirm whether each individual source in our final catalog is associated with Per OB2 or the molecular cloud, but the luminosities and other parameters match the properties we expect for YSOs at the estimated distances. In addition, we are looking out of the Galactic plane and no other known background star forming region is along the line of sight. Foreground young stars, if any, would be much brighter than our selection criteria.
Section \ref{archive_data} explains the data collection and data reduction procedures. In section \ref{selection} we describe our YSO candidate selection method and compare the results with other YSO identification schemes.
Section \ref{candidate_analysis} contains the analysis of our sample of YSO candidates. In Section \ref{modeling} we describe our SED fitting modeling and the results. We summarize this study in Section \ref{summary}.
\section{Archival Data and Data Reduction}
\label{archive_data}
To identify young stars we established a multi-wavelength database of point sources in Per OB2 star forming region, using all available large sky surveys. New YSO candidates then were selected through evidence for infrared excess in their spectral energy distributions (SED); specifically, we used power-law fits to the longer infrared wavelengths covered by the $\it WISE$ all sky survey. In section \ref{selection} we detail the candidate selection procedure.
To summarize, $\it WISE$ is an all sky survey in four infrared wavebands 3.4, 4.6, 12, and 22 $\mu$m (hereafter W1, W2, W3 and W4) with an angular resolution of 6-6.5$''$ for W1 to W3 and 12.0$''$ for W4. The $\it WISE$ depth is not homogenous, but achieved 5$\sigma$ point source sensitivities of 0.068, 0.098, 0.86 and 5.4 mJy (16.6, 15.6, 11.3, 8.0 Vega mag) in unconfused regions in the ecliptic plane for the four bands respectively. Our point source database was initially constructed using the $\it WISE$ preliminary data release \citep{wright2010}, but we upgraded to the final source release in March 2012. The $12^\circ \times 12^\circ$ selected region in our census of Per OB2 includes more than 1.6 million $\it WISE$ point sources (Figure \ref{regions}).
After an initial inspection we realized that the W4 sensitivity is very low for faint sources and in particular any W4 profile magnitude $>$8 should be considered with caution. We therefore limited our search for infrared excess to the range of $3-12$~$\mu$m, and restricted our source list to have S/N$>$7 in the three corresponding W1, W2, and W3 bands. This criterion decreased the number of candidates to 65,662. We did not use the $\it WISE$ catalog flags to further filter our candidate list because we found the flags to be unreliable; in particular many sources were not flagged as extended or questionable when in fact we found that no point source actually existed. Instead all of the final candidates were visually inspected to remove contaminated or false detections.
Our primary goal was the building of composite spectral energy distributions (SEDs) using archival data products. We initially considered $\it WISE$ as the base catalog to astrometrically match with other data sets, but the $\it WISE$ beam is significantly larger than that of other relevant all sky surveys. We also decided to use a near-IR band as a tracer of the stellar spectral type. Therefore we adopted the 2MASS \citep{2mass} catalog and coordinates as the base catalog. Requiring our sources to have a 2MASS counterpart reduced the size of our sample to 48,692.
\subsection{Additional ancillary data}
The Perseus star forming region is well covered by many large sky surveys (see Figure \ref{regions}) from infrared to X-ray including: 2MASS, UKIRT Infrared Deep Sky Survey (UKIDSS), {\it Spitzer} (e.g., c2d, \citet{c2d2003}), the InfraRed Imaging Surveyor (AKARI) \citep{akari2007}, Sloan Digital Sky Survey (SDSS, DR8+), the AAVSO Photometric All-Sky Survey (APASS, Release 7)\footnote{http://www.aavso.org/apass}, Chandra X-Ray Observatory and X-ray Multi-Mirror Mission (XMM-Newton) as well as catalogs based on older scanned photographic plates (US Naval Observatory: USNO-B1, UCAC3) and the USNO-B1/2MASS re-reduction, PPMXL \citep{ppmxl2010}.
SDSS and APASS catalog measurements are adopted at optical wavelengths if available; otherwise, we have used PPMXL cataloged photometry for B and R bands. Figure \ref{optical} presents APASS and SDSS coverage of our surveyed region, showing how APASS provided nearly complete, deep optical coverage. While PPMXL covers the entire area, it is intrinsically based on USNO-b1 and contains much larger errors than APASS. Because PPMXL contains corrected positions, proper motions and optical-infrared matches of stars, we used it to break degeneracies for multiple matches between different catalogs. X-ray observations were very concentrated toward the known star forming regions, and did not cover the entire region. We benefited from archival X-ray data in publications to identify known YSOs. Because of its limited spatial scope (Figure \ref{regions}) we neglected the UKIDSS catalog entirely. AKARI flux measurements were consistent with IRAC and MIPS observations, but only a few of our candidates had high quality AKARI data, therefore this catalog was not used in final SED fitting.
There are several narrow field catalogs at different wavebands in Perseus region, but to keep a high degree of uniformity in source detection and flux calibration we do not account for such observations. We further downloaded all SIMBAD sources in the survey area to identify previously known YSOs or census contaminating sources such as planetary nebulae, background galaxies or other type of known evolved stars.
We employed the {\it CDS X-match} Service \footnote{http://cdsxmatch.u-strasbg.fr/xmatch, Strasbourg Astronomical Data Center} to match the different catalogs and investigate the astrometric uncertainties. This service employs the coordinate errors to calculate the probability density of two sources from two different catalogs. The probability density is given by the convolution between two Gaussian distributions around each source \citep{pineau2011}. We consider a completeness of 99.7\% at the 3$\sigma$ criterion (only 0.3\% of the counterparts could be missed).
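For completeness, a simplified positional match of the kind used to assemble the database can be written with \texttt{astropy}; this nearest-neighbour sketch ignores the per-source positional errors of the probabilistic X-Match criterion and is meant only as an illustration (the function name and the angular tolerance are our own choices).
\begin{verbatim}
from astropy.coordinates import SkyCoord
import astropy.units as u

def nearest_match(ra1, dec1, ra2, dec2, max_sep_arcsec=3.0):
    # Nearest-neighbour match of catalog 1 against catalog 2 (coordinates
    # in degrees); returns matched indices, separations and an acceptance mask.
    c1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)
    c2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)
    idx, sep2d, _ = c1.match_to_catalog_sky(c2)
    return idx, sep2d, sep2d < max_sep_arcsec * u.arcsec
\end{verbatim}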
\section{IDENTIFYING DISK CANDIDATES}
\label{selection}
We adopt the shape of the spectral energy distribution (SED) to identify sources with excess infrared emission as these are probable young stars with passively reradiating circumstellar disks \citep{lada87}.
Following on the {\it Spitzer} based SED analysis of \citet{lada06} and \citet{muench07}, we used a simple power-law, least-squares fit of the $3-12~\mu$m portion of the YSO's SEDs as traced by the W1, W2, and W3 $\it WISE$ bands (3.4, 4.6 and 12~$\mu$m, respectively).
While \citet{muench07} showed that the {\it Spitzer} $3-8~\mu$m index was relatively insensitive to extinction, we acknowledge that including 12 microns into the SED slope fit will result in larger intrinsic dispersions in the slope distribution because of variations in the silicate feature. Unreddened classical T-Tauri stars with $SiO_4$ in emission will have very large slopes. We do not anticipate that silicate absorption due to A$_V$ will ``remove'' a source from our sample by lowering the measured $3-12$ micron slope.
Figure \ref{irac_wise} presents a comparison between the IRAC $\alpha_{3-8}$ and the WISE $\alpha_{3-12}$ for the 135 sources in our sample which have both data sets. The solid line shows a least-squares fit (with a slope of $0.92\pm0.2$, excluding three data points at the bottom right corner with poor IRAC photometry) and the dashed line marks equal values. There is a large scatter due to the different wavelengths, band widths, beam sizes and sensitivities of Spitzer and WISE. A selection bias is involved as well: in Figure \ref{irac_wise} all IRAC data are included if available, but only $\it WISE$ sources with SNR$>$7 are selected in our initial sample. In general, for $\alpha_{3-8} < -2$ we find larger slopes for the WISE $\alpha_{3-12}$, where anemic disks and stars are located (discussed further in section \ref{color-cuts}). IRAC and $\it WISE$ slopes are not expected to match completely; for this reason we do not report classifications for individual objects based on the categories defined in \citet{muench07}.
The 22~$\mu$m band was not considered in the initial slope fit for two reasons: a) to keep detection uniformity for sources which have not been detected or have very low SNR in W4, and b) to avoid the overestimation of slopes due to the poor sensitivity of W4 for faint sources with W4~$> 8$. However the 22~$\mu$m flux, when regarded as real, is considered in the final complete SED modeling to study the disk characteristics.
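The slope itself is a simple least-squares fit in log--log space. The sketch below (our own helper, with nominal $\it WISE$ isophotal wavelengths and zero-magnitude flux densities that should be checked against the Explanatory Supplement) converts profile-fit magnitudes to a quantity proportional to $\lambda F_\lambda$ and returns $\alpha_{3-12}$.
\begin{verbatim}
import numpy as np

WISE_LAM = np.array([3.4, 4.6, 12.0])           # micron (W1, W2, W3)
WISE_F0  = np.array([309.54, 171.787, 31.674])  # zero-mag flux density, Jy (nominal)

def alpha_3_12(w1, w2, w3):
    # Least-squares spectral index alpha = dlog(lam*F_lam)/dlog(lam) over W1-W3.
    fnu = WISE_F0 * 10.0 ** (-0.4 * np.array([w1, w2, w3]))   # Jy
    lam_flam = fnu / WISE_LAM   # proportional to nu*F_nu; constants drop out of slope
    slope, _ = np.polyfit(np.log10(WISE_LAM), np.log10(lam_flam), 1)
    return slope
\end{verbatim}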
In the first attempt we filtered candidates based on $\it WISE$ flags.
We found a few strong YSO candidates, located near saturated stars or within nebulous regions, that had been flagged as having poor detections or photometry due to contamination. On the other hand, in the visual inspection we found sources with very poor detections in cloudy areas whose fluxes were reported with high S/N and high $\it WISE$ photometry quality flags, but which did not survive our visual or point source profile inspections. Finally we decided to ignore the $\it WISE$ flags at this stage and reconsider them after manual inspection.
We considered all sources with S/N$>$7 in the first three $\it WISE$ bands in our candidate pool and fitted a power law to each source. Figure \ref{selection} shows, as grey dots, the calculated slopes versus 2MASS {\it J}~magnitude for the 48,692 sources with S/N$>$7 in the first three $\it WISE$ bands. To select the sources with excess emission we looked at the slope distribution in bins of 0.5 $J$~magnitude and fitted a Gaussian to the slope distribution within each interval. The $J$ band is the shortest wavelength for which we have uniform data, and is thus the wavelength least contaminated by circumstellar disk emission, which makes it the best tracer of photospheric luminosity.
The selection criteria can be considered to be a convolution of variation in the intrinsic (photospheric) color as a function of apparent magnitude and the $\it WISE$ sensitivity function.
We did not consider disentangling these two functions as a useful exercise, and instead posited all sources with an excess greater than 5 sigma from the typical color as evidence for infrared excess.
The black locus in Figure \ref{selection} shows a 5$\sigma$ limit above the peak of distribution for each bin. All sources above the locus have significant excess in slope (and hence in IR emission) and are considered as candidates with excess. On average the slope increases with the $J_{mag}$ after magnitude 12. Fainter objects in $J$ band are either embedded sources, or background galaxies. Hence, faint YSOs are missed in this selection, but in return the final sample is less contaminated with background galaxies.
The $J_{mag}$ error is not considered in calculating the locus, but it will not dramatically change the results. The maximum error in $J_{mag}$ in our entire data set is only 1.7$\%$. The locus has been drawn based on $5\sigma$ above the mean population in each $J_{mag}$ bin which means a confidence of 99.999943$\%$ of IR excess. Considering the largest error for $J_{mag}$ around the locus, selected candidates are still more than $4\sigma$ above the mean, which translates into a 99.994$\%$ confidence in IR excess. Therefore by neglecting $J_{mag}$ error in our selection procedure we are not maintaining the $5\sigma$ confidence for all selected YSO candidates, but all of the selected candidates have statistically significant IR excess.
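The locus construction can be sketched as follows; here the Gaussian fit in each $J$ bin is approximated by the sample mean and standard deviation, and the bin width, minimum bin population and $5\sigma$ threshold are the adjustable ingredients (the function name is ours).
\begin{verbatim}
import numpy as np

def excess_locus(J, alpha, bin_width=0.5, nsigma=5.0, min_per_bin=10):
    # Returns (mask, locus): mask is True for sources whose slope lies more
    # than nsigma above the typical slope of their J-magnitude bin.
    # J and alpha are numpy arrays of equal length.
    edges = np.arange(J.min(), J.max() + bin_width, bin_width)
    locus = np.full(alpha.shape, np.inf)   # inf = no selection in sparse bins
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (J >= lo) & (J < hi)
        if in_bin.sum() < min_per_bin:
            continue
        mu, sigma = alpha[in_bin].mean(), alpha[in_bin].std()
        locus[in_bin] = mu + nsigma * sigma
    return alpha > locus, locus
\end{verbatim}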
669 sources met this criterion. However we are aware that the sample may contain non-stellar objects, contamination and false detections. In the next step we filter the candidates by manually inspecting them in optical and infrared images.
\subsection{Visual inspection}
\label{inspection}
The $\it WISE$, 2MASS and, if available, SDSS images of each candidate which passed the locus selection were manually inspected. Visual inspection revealed that half of the selected sources cannot be considered resolved point sources. The first group were rejected as they were identified as extended objects like galaxies, and the second portion could not be identified as distinct point sources in nebulous regions in a combination of at least any three $\it WISE$ bands. We performed the eye inspection in three rounds and categorized the sources as contaminated, saturated, extended and not resolved. All extended objects were identified as galaxies and PNe in further investigations and were rejected. Candidates that could not be resolved as point sources in at least any three $\it WISE$ bands needed to be inspected for the luminosity profile. Contaminated, saturated and not resolved sources were accepted and returned to the list if they were resolved in 2MASS and if the luminosity profile indicated a point source in at least three $\it WISE$ bands.
354 out of 669 candidates survived the visual inspection. These final candidates are presented by open red circles (new candidates) and blue triangles (known candidates) in Figure \ref{selection}.
The known AGB stars are presented with black dots. The sources in our list which are less likely to be YSOs and closer to the AGB stars are noted by black crosses. They are separated from the YSO candidates employing magnitude diagrams and will be discussed in more detail in section \ref{candidate_analysis}. The known OB stars are also shown as yellow dots. The only OB star which is above the locus is $BD +30 540$. \citet{rebull2007} identified this YSO as a class I source in their {\it Spitzer} c2d survey. It is a B8V star and has an estimated mass of 2.6 M$_\odot$.
Figure \ref{wwt} presents the spatial distribution of the 354 final candidates on a $\it WISE$ multi-color image. The brightness of each source is proportional to the W1-3 slope; sources with larger slopes, which presumably have stronger disks, appear brighter in Figure \ref{wwt}.
\subsection{Comparing classification schemes}
\label{classification}
\citet{rebull2011} identified new YSO candidates in the Taurus-Auriga region using preliminary $\it WISE$ data. Nearly $25\%$ of our survey area is covered in their study, therefore we looked for common candidates to evaluate the accuracy of our detection method. \citet{rebull2011} report 94 new YSO candidates in the Taurus-Auriga region, 22 of which lie within our survey area. The astrometry and photometry of more than $25\%$ of the sources have changed in the $\it WISE$ all-sky release, especially for bright sources in our list. Therefore the preliminary and all-sky release $\it WISE$ catalogs do not contain identical IDs for all sources. Instead of matching IDs we performed a position match. Some photometry in the new $\it WISE$ catalog has also slightly changed, but it is consistent enough with the preliminary data to help select the right match. The Rebull et al. catalog is plotted over our selection diagram in Figure \ref{rebull}. Our complete catalog of 354 sources is shown with black triangles. Rebull et al. known YSOs are shown with green dots and their new YSO candidates are shown with blue dots. They have also rejected many sources (shown with magenta dots) after various inspections. We have identified only 8 of Rebull's new YSO candidates in our field. The 14 unidentified sources are almost all fainter than 12 magnitude in the 2MASS $J$ band and despite their color excess they lie below our selection criteria.
We also have 17 of Rebull's {\it known} candidates in our field. 14 have been classified as YSO candidates in our sample, one does not have a 2MASS match and is therefore not selected in our sample, and two are below our selection criteria.
Rebull et al. also have 129 rejected candidates in our field which passed their color cuts in the first place.
82 of their rejected sources (rejected as background galaxies) match with our initial sample but lie below our selection criteria. 35 do not have a 2MASS point source match and are therefore not selected in our list.
We have 8 common sources in our visually rejected list which they have identified as confused, galaxy, PNe or foreground/background stars. Noticeably, 7 of their rejected sources do not have a match in the new WISE catalog within 1$''$, and 4 do not have a match within 2$''$; therefore we cannot comment on these sources.
\subsection{Comparing with color-cuts}
\label{color-cuts}
We compared our candidates with the color-cut spaces described by \citet{koenig2012}. The color cuts are defined in multiple color spaces based on known different types of YSOs and other contaminant objects such as AGNs, galaxies with PAH-feature emission and shock emission blobs. The \citet{koenig2012} selections employ the results from {\it Spitzer} surveys \citep{guter2009,rebull2010} to identify various objects with color excess based on where they statistically lie in $\it WISE$+2MASS color-color spaces.
Similar to the $\alpha$ selection technique, color-cut criteria can in general be adjusted to extract weaker disks with smaller IR excess or to exclude them. In this section we compare our results with the criteria defined by \citet{koenig2012}, which are frequently adopted in similar studies.
Following \citet{koenig2012}, no AGNs were identified in our sample of candidates, but three sources (J03504369+3507088, J04022740+3057450, J03504333+3346024) might be considered as PAH emitting/star forming galaxies. These objects have no match in the literature.
They are out of the {\it Spitzer} field and not covered by SDSS, therefore their identity cannot be determined by the spectrum.
It is worth mentioning that the Koenig color-cuts are defined based on the $\it WISE$ preliminary data, while the photometry of $\sim25\%$ of sources has changed slightly in the all-sky release. That may partially affect their selection criteria at the borders.
\citet{koenig2012} also used color-cuts to categorize their YSOs into different class types.
\citet{muench07} (appendix A) explored the influence of dust extinction on the $\alpha$ detection technique and showed that the IRAC SED slope of a diskless K0 star requires A$_V > 100$ to inflect into a positive slope, and that even A$_V > 200$ cannot inflect the $\alpha_{5.8-24}$ of background stars into positive values. Such large column densities within typical molecular clouds occur only in the protostellar envelopes of embedded YSOs. But smaller values of dust extinction may also affect $\alpha$ at the classification boundary of Class~I/Class~II candidates. In contrast, mid-IR color cut techniques can be defined such that they minimize extinction-induced bias. The goal of this study is only identifying YSOs and not classifying them. We present the classification only to compare the slope-fitting and the color cut methods and will not report the class types for our final candidates.
According to \citet{muench07}, $\alpha_{IRAC3-8}=-2.66$ corresponds to the predicted slope of an M0 star photosphere. Accepting a typical precision of SED fits, they characterized sources with $-2.56<\alpha<-1.80$ as ``anemic disks''. Transition disks, disks with inner holes and heavily depleted optically thin disks may fall in this category. Objects with $\alpha_{IRAC3-8}<-2.56$ are considered stars, and objects with $\alpha_{IRAC3-8}>-1.80$ are accepted as IR excess objects ($-1.80<\alpha_{IRAC3-8}<-0.5$ as Class~II and $-0.5<\alpha_{IRAC3-8}$ as Class~I).
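These boundaries translate directly into a small classifier, which we use below only to compare the two schemes (the thresholds are the IRAC-based values of \citet{muench07}, applied here to $\alpha_{3-12}$ as an approximation; the function name is ours).
\begin{verbatim}
def classify_alpha(alpha):
    # Disk class from the SED slope, using the Muench et al. (2007) boundaries.
    if alpha < -2.56:
        return "star"           # photospheric slope
    if alpha < -1.80:
        return "anemic disk"    # weak, transition or optically thin disks
    if alpha < -0.5:
        return "Class II"
    return "Class I"
\end{verbatim}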
Figure \ref{alpha_compare} compares the disk classification by \citet{muench07} based on the slopes ($\alpha$) and Koenig color cuts in one plot.
Grey dots show all 48,692 $\it WISE$ sources in our field with SNR$>$7 for first three $\it WISE$ bands. Black dots are $\alpha-$stars. Yellow dots show $\alpha-$anemic disks. Blue dots present Class~II and red dots present Class~I $\alpha$-disks.
Based on the Koenig classifications, green open circles present Class~II and black open squares present Class~I sources (selected in the $[3.4]-[4.6]$ and $[4.6]-[12]$ color-color diagrams). There is a good agreement in disk identification type between the two methods. However, the color-cuts select fewer Class~I protostars and many $\alpha-$Class~I sources are identified as Class~II by the color-cuts.
We note that the $\alpha$ selection fails to detect faint sources ($J_{mag} > 12$) even if they have noticeable color excess. On the other hand the color-cut method misses the weak or evolved disks, even if they are bright sources. For example, anemic disks have rarely been distinguished by color-cuts. The comparison between the selected YSOs in \citet{rebull2011} and this work in Figure \ref{rebull} also presents how faint sources, even with strong disks, have been missed in the $\alpha$ selection, and how bright sources with weak disks have been missed by the color-cut selection.
We suggest that to have a complete sample of faint to bright and weak to strong disks, both methods needed to be considered.
To summarize, we establish our working sample of candidate young stellar objects as selected by the spectral energy distribution power-law slope in the range of $3.4-12~\mu$m from the $\it WISE$ all sky point source catalog. Using this method we can identify weak and more evolved circumstellar disks if they are hosted by a bright protostar. These types of disks are barely identified with color-color diagrams. In contrast, the $\alpha_{3-12}$ selection misses fainter sources, in particular those with J$_{mag}>$12, sometimes even if they have strong disks. Color-color diagrams are capable of selecting many fainter sources, but the results are more contaminated ($\sim 67\%$ for example, in Rebull's primary selected sources) with background galaxies and other fake detections compared to the $\alpha$ selection method. For example, in the pool of 1014 potential YSOs identified by \citet{rebull2011}, 686 were rejected as known galaxies or objects likely to be background galaxies in their final list.
\section{CANDIDATE ANALYSIS}
\label{candidate_analysis}
\subsection{Known YSOs}
\label{known_ysos}
We cross matched our candidates with the SIMBAD database to identify known YSOs and possible contaminants. Among the 354 candidates that passed our manual inspection as resolved point sources, J035523.11+310245.0 is a known massive X-ray binary and was removed. 12 other sources are known AGBs, Mira variables or other types of known evolved stars. 76 have matches with previously known YSOs or T-Tau stars, while 80 others are listed as YSO or T-Tau candidates in SIMBAD. All these sources are categorized as known YSOs in our final catalog and listed in Table \ref{tbl_known}. 185 sources remain in our list as new YSO candidates, but 66 of them were identified as more likely being evolved dusty stars and were separated from the final list of new YSO candidates (Table \ref{tblAGBs}). We will discuss them in section \ref{dusty_evolved_stars}. Finally, 119 sources survived as new YSO candidates in our final catalog and they are listed in Table \ref{tbl_new}. The errors (lower and upper limits for each parameter) are provided in the online tables.
\subsection{Dusty evolved stars}
\label{dusty_evolved_stars}
We checked the distribution of our sample in various color and magnitude diagrams to look for possible categories and groupings of objects. In particular, 22~$\mu$m is the best indicator to probe the embedded objects. Figure \ref{k_w4} shows 2MASS K$_{mag}$ plotted vs. $\it WISE$ 22~$\mu$m for all 353 point sources (J035523.11+310245.0, the known massive X-ray binary, is removed). It is noticeable that the sources are divided into two main populations: sources with the same W4 magnitude are divided into two brightness branches in K$_{mag}$. The lower group are usually bright stars with smaller slopes, but they have statistically significant excess emission at longer wavelengths compared to main sequence stars. Known YSOs and YSO candidates lie in the upper group, with larger K magnitudes. The extreme brightness in K$_{mag}$ of the lower population suggests that they can be dusty evolved stars. The majority of them are outside the Perseus cloud and have low extinctions.
To characterize this ``lower population'' we compared them with the known AGB stars in our sample and in the literature. Grey dots in Figure \ref{k_w4} present all $\it WISE$ sources in our survey field with SNR$>$7 in all four bands, which mostly contain main sequence stars. We then matched all known evolved stars in the Galaxy (including AGB stars, carbon stars and Mira variables) from SIMBAD with the $\it WISE$ catalog. Cyan dots show evolved stars which have a $\it WISE$ match with SNR$>$10 in all four bands; we call this the AGB branch in our diagram. The AGB branch in this plot is above the main sequence and matches the ``lower population'' very well. 10 of the known evolved stars in our sample also lie in this area. We selected 66 objects from the ``lower population'' in the K$-$W4 diagram as {\em dusty evolved star} candidates. These objects are shown by black crosses in the left panel of Figure \ref{k_w4}. The right panel of Figure \ref{k_w4} presents the K and M type stars in our sample which had known spectral types in the literature. While the majority of them lie within the known and candidate YSOs, a smaller population lies within the AGB branch, indicating these objects are most likely evolved dusty stars rather than YSOs.
In addition we checked our AGB candidates against the \citet{blum2006} color-magnitude diagrams from their Large Magellanic Cloud {\em Spitzer} survey. In Figures $3-6$ of their paper they present the location of carbon stars and extreme AGB stars in [3.6] vs. J$-$[3.6], [8] vs. J$-$[8], [3.6] vs. [3.6]$-$[8] and [24] vs. [8]$-$[24]. We replaced [3.6] by W1 ($3.4~\mu$m), [8] by W3 ($12~\mu$m), and [24] by W4 (22~$\mu$m) and applied their criteria.
Although the wave bands are slightly different, our 66 AGB candidates reasonably match the AGB star criteria on their various plots. These candidates are listed in Table \ref{tblAGBs}.
We also made a statistical estimate of how many AGB stars we expect to identify in our field. A recent $\it WISE$ study \citep{tu2013} estimated 470,000 AGB stars in the Galaxy with a contamination uncertainty of $\sim20\%$ from other sources including YSOs. \citet{jackson2002} estimated a total of 200,000 AGB stars in the Galaxy. Using their volume density around the solar neighborhood and integrating over 10~kpc (the distance out to which we can detect bright AGBs in our sample) we expect to have 100-400 AGB stars in our field. Therefore 66 likely AGB candidates plus 12 known evolved stars in our field seems a reasonable number.
\subsection{Spatial Distribution}
Figure \ref{agb_dist} presents the location of known and new YSO candidates, known and candidate AGB stars and OB stars in the $12^{\circ}\times12^{\circ}$ Per OB2 field. All the known YSOs (blue triangles) and the majority of the new YSO candidates (red circles) are located within the nebulous region, bright in green in Figure \ref{wwt}. New YSO candidates in the North-East corner of the field follow the shell-like dust feature, and a large number of new YSOs on the North-West side, toward the California nebula, are located where the remnant of the original Perseus cloud forms dense filamentary structures.
OB stars and known AGBs are shown as yellow and black dots, respectively. Black crosses present our AGB candidates discussed above. These candidates are expected to be foreground/background sources, therefore we expect them to be randomly distributed in the field. Figure \ref{agb_dist} does not confirm a random distribution: the AGB candidates shown with crosses follow the cloud structure and are also concentrated in the North-West. Some AGBs might also lie below our locus and thus not be selected in our sample. Therefore we suggest that a fraction of these objects might be YSOs with inaccurate photometry or at the border of contamination with AGBs.
\section{MODELING SPECTRAL ENERGY DISTRIBUTION}
\label{modeling}
A common procedure to find the spectral energy distribution model that best reproduces the data is $\chi^2$ minimization: comparing a grid of models to the data points, finding the $\chi^2$ value using the measurement errors, and picking the model that minimizes the $\chi^2$ (e.g. \citet{robitaille07}). We call this solution the ``Best Fit'' model. However, besides the Best Fit solution, there might be many other models that have slightly larger $\chi^2$ but that are also a reasonable solution for our fitting problem. If we consider the distribution of various physical parameters from all models with $\chi^2$ smaller than a certain threshold, the peak of such a distribution may not occur at the value given by the Best Fit. In other words, the Best Fit solution might not be located in the region of the parameter space where more models are a good representation of our data. Alternatively, we can assume that the model parameters are random variables, then derive probability distribution functions (PDFs) for them given our data, and find the parameter values that maximize those PDFs. In this paper we call the latter solutions the ``Peak'' values for the parameters, and we will use a Bayesian method to find them.
\subsection{Bayesian approach}
In Bayesian inference, the model parameters are considered as random variables for which posterior probability distributions can be derived from the likelihood of the solutions (that we obtain from the data and the measurement errors) and the parameter priors, where we encode the prior knowledge in these parameters before any data has been taken. Once we derive the posterior PDFs, we can draw samples from them in order to visualize the solution and find the absolute PDF maximum for each parameter. An efficient way to perform this sample is by using a Markov Chain Monte Carlo (MCMC) method that randomly steps across the parameter space and at each iteration decides whether to accept the step based on an acceptance probability that depends on the ratio of probabilities between the current position and the proposed new position. This approach is particularly useful for multivariate problems like the one at hand here.
Our Bayesian algorithm is very similar to the implementation of the \textsc{Chiburst} code for fitting SEDs of star-forming systems \citep{mg14}. Here we will describe only the aspects of the code that are relevant for the present work. As a model grid, we use the SED models of \citet{robitaille06}, that have calculated the radiative transfer for the emission from the YSO as it traverses the disk and/or envelope that surrounds it. Synthetic SEDs are produced for a broad range of stellar masses ($0.1-50 M_{\odot}$), ages ($10^3$-$10^7$ yr), as well as disk and envelope sizes and geometries. Given a set of photometry and assuming this model grid is a fair description of the actual distribution of physical parameters in YSOs, we attempt to find the probability that a particular model represents the properties of the observed YSO.
This allows us to explore the inherent degeneracies that arise when we attempt to fit a limited amount of photometric data with a multivariate model. By solving for the PDF rather than just finding solutions listed by increasing $\chi^2$, we are able to visualize all the likely solutions for a particular object photometry, given the observational uncertainties and assuming that the parameter space of the model grid is a fair universal sample of the population of YSOs in the Perseus cloud. We focus on the determination of three main model parameters, namely the YSO stellar mass ($m_*$), age ($t_*$), and $A_V$ in the line of sight towards our objects. To some extent, all other model parameters are determined by the selection of mass and age, or difficult to constrain without unavailable photometric bands.
\subsubsection{ The Probability Distribution Functions}
Suppose that we have obtained photometry data (D) of a YSO at different bands, with certain observational uncertainties associated. Given our data, we can calculate $P(\Theta|D)$, the posterior PDF for the set of parameters $\Theta$ as the product of the likelihood $P(D|\Theta)$ and the prior P(M). In the case of normally distributed measurement errors, the likelihood $P(D|\Theta)$ can be obtained from the distribution of reduced $\chi^2$ values:
\begin{equation}
P(D|\Theta) = \sum\exp\left(-1/2\, \chi_{\rm{red}}^2\right)
\end{equation}
where the sum is marginalized for each model parameter over all possible models with a given value of the parameter. The \textit{prior} P(M) is a measure of any previous knowledge that we have on a particular parameter or set of parameters. For example, if we have reliable extinction measurements in the line of sight towards a particular YSO, then we can constrain the possible solutions to our problem by constructing a prior on ($A_V$) that is compatible with those extinction measurements. Finally, we need to apply a normalization factor to our posterior PDF to guarantee that the probability of at least one model being a representation for our YSO equals one.
\subsubsection{Priors}
We are mostly interested in obtaining a distribution of masses, evolutionary stages and optical extinctions for a sample of newly identified YSOs in the Perseus region. For the mass and age of the YSOs we have adopted the flat, uniform priors already set by the model grid, with boundaries set by $0.1\, \rm{M}_{\odot} < M_* < 50\, \rm{M}_{\odot}$ and $10^3\, \rm{yr} < t_* < 10^7\, \rm{yr} $. We also chose a flat, uniform prior for the disk inclination, since we expect most of the inclination effects to cancel out for a sample of many YSO models. We must emphasize here that these priors are discrete (i.e., not all possible values of mass, age and inclination are possible), but we consider that the sampling is such that any uncertainty due to sampling is smaller than the uncertainties imposed by the observational errors and the model degeneracies. Most of the other priors (disk and envelope mass, accretion rate, etc.) are set by the mass and age of a particular YSO, and have been thoroughly explained in \citet{robitaille06}. We adopt a distance of 320 pc to the Perseus molecular cloud, but allow for a random scaling of the model YSOs to account for a distance uncertainty of 25$\%$. We also obscure the models and characterize this obscuration by an $A_V$ value, to account for any foreground extinction in the line of sight towards the Perseus cloud. We construct the prior on $A_V$ using an extinction map produced by \citet{lombardi2010}. The prior is simple: we restrict the possible values of $A_V$ to be within 0.4 dex of the value obtained from the extinction map. We will later modify this prior to show the effect of the $A_V$ on determining the mass of the YSOs.
\subsubsection{Stepping across the parameter space}
Calculating the posterior PDF at every single point of the model grid and for all possible values of distance scaling and $A_V$ would be computationally time consuming, especially as more data points are added and additional model parameters (i.e. multiplicity of YSOs) are considered. Instead, we use a Markov Chain Monte Carlo (MCMC) approach to draw samples from the posterior PDF across the multidimensional space of parameters $\Theta$. We use the Metropolis algorithm with a uniform transition probability for the Markov Chain (i.e., given our current position in the parameter space, it is equally probable to move in any direction). After enough iterations of the MCMC, the histogram of model parameters in our chain should be a good representation of the posterior PDF. It is from this final histogram that we select the ``Peak'' solutions for our parameters.
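The sampler itself is standard. A minimal random-walk Metropolis sketch is shown below; it assumes a continuous parameterisation of the grid (e.g.\ log mass, log age and $A_V$) and a user-supplied \texttt{log\_posterior} that evaluates $\ln[P(D|\Theta)\,P(M)]$ by interpolating the model grid, which is where the actual complexity of our implementation resides. The function and argument names are ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def metropolis(log_posterior, theta0, step, n_steps=20000):
    # Random-walk Metropolis: the histogram of the returned chain
    # approximates the posterior PDF of each parameter.
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < lp_new - lp:   # accept with prob min(1, ratio)
            theta, lp = proposal, lp_new
        chain[i] = theta
    return chain
\end{verbatim}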
\subsection{Examples}
Figure \ref{app1} shows the best fits and associated posterior PDFs for several types of SEDs found in the Perseus sample. The method is quite successful at finding solutions for almost all types of SEDs. More importantly, the plotted PDFs show the usefulness of the method in assessing the uniqueness of a solution.
It is important to understand the meaning of the best-fit value as compared to the most likely solution, represented by the peak of the posterior PDF. They do not always coincide, which should not be surprising. The reason is that the best fit can be located in a region of the parameter space where few other models agree with the data. The value obtained from the peak of the PDF, on the other hand, provides the most likely value of a parameter once all possible combinations with the other parameters have been considered, constrained by the information contained in the priors. In a sense, the posterior PDF is a full description of all possible solutions. This is a significant improvement over listing the ten best-fitting models, since these can all be located very near the best-fit value and not necessarily include the peak of the PDF. Furthermore, as we will see later, bimodal solutions are also possible due to degeneracies between the parameters, and usually the 10 best-fitting models all fall near only one of the two possible solutions to the problem.
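The following sketch illustrates how the best-fit and ``Peak'' values could be extracted from a chain; the sample arrays are invented for illustration and do not correspond to any source in the catalog.
\begin{verbatim}
import numpy as np

# Hypothetical chain output: the sampled mass and the reduced chi^2 of the
# corresponding grid model, one entry per MCMC sample.
mass_samples = np.array([0.4, 0.5, 0.5, 0.6, 2.0, 2.1, 2.0, 2.2, 2.1, 2.0])
chi2_samples = np.array([0.9, 1.1, 1.2, 1.0, 1.3, 1.4, 1.2, 1.5, 1.3, 1.2])

# Best fit: the single sample with the lowest chi^2
best_fit_mass = mass_samples[np.argmin(chi2_samples)]

# Peak value: mode of the marginalized posterior, estimated from the histogram
counts, edges = np.histogram(mass_samples, bins=5)
peak_mass = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

print(best_fit_mass, peak_mass)   # the two estimates need not coincide
\end{verbatim}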
\subsubsection{Degeneracies}
Degeneracy is a common issue in all data-fitting problems: more than one combination of model parameters can give a reasonable solution within the observational uncertainties. This problem of model degeneracies has been particularly neglected in the case of YSO SED fitting, partly because of the lack of enough data to break those degeneracies, but also because of a blind faith in the {\textquotedblleft}best fit{\textquotedblright} solution, even for objects with only a few photometric observations. Our method allows us to clearly visualize the degeneracies between model parameters and hence helps us consider other possible solutions that might be hidden in the complexity of our multi-parameter model, and that might be in better agreement with independent determinations of the parameters. It also allows us to explore how those degeneracies behave as a function of the priors, i.e., how additional information or additional data points can break them.
Figures \ref{app2} and \ref{app3} show an example of model degeneracies. Shown are two different realizations of the fit to the source J03284618+3116385. Two groups of solutions are possible, one with low foreground extinction ($A_V < 1$) and a low stellar mass, and one with higher $A_V$ and higher mass. Physically, this can be understood as the need for a higher stellar mass to produce the same UV and optical flux when more obscuration is at work. In the unconstrained case shown in the figure, both fits are equally satisfactory, but the solution with small $A_V$ appears more likely. Notice that the PDF for age remains unchanged. This implies that there is a strong degeneracy between stellar mass and extinction when fitting multi-wavelength SEDs of YSOs. Without a better prior for $A_V$ (or the stellar mass), we might choose the low-mass solution, but additional evidence could make the other solution more likely. For example, if we know that $A_V$ has to be greater than 1, then the low-mass solution has to be discarded. This degeneracy is observed in many of the SEDs, in particular those of class IIa YSOs, as classified by \citet{guter2009}. Figure \ref{app3} shows the same effect much more clearly in the 2D $m_*$-$A_V$ plane. The choice of a particular $A_V$ determines the estimated mass of the YSO.
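The bimodality in the $m_*$-$A_V$ plane can be visualized directly from the chain samples. The sketch below, with made-up samples mimicking the two families of solutions, shows how such a 2D marginal PDF could be constructed; it is not the code used to produce the figures in this paper.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Made-up chain samples mimicking the two families of solutions:
# a low-mass/low-A_V group and a higher-mass/higher-A_V group.
rng = np.random.default_rng(0)
mass = np.concatenate([rng.normal(0.5, 0.1, 3000), rng.normal(2.0, 0.3, 2000)])
av = np.concatenate([rng.normal(0.5, 0.2, 3000), rng.normal(4.0, 0.8, 2000)])

# 2D marginal posterior in the m_*-A_V plane, normalized to unit probability
H, m_edges, av_edges = np.histogram2d(mass, av, bins=40)
H /= H.sum()

plt.contour(0.5 * (m_edges[:-1] + m_edges[1:]),
            0.5 * (av_edges[:-1] + av_edges[1:]), H.T)
plt.xlabel(r"$m_*$ [$M_\odot$]")
plt.ylabel(r"$A_V$ [mag]")
plt.savefig("mstar_av_plane.png")
\end{verbatim}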
In fact, rather than thinking in terms of a {\textquotedblleft}best fit{\textquotedblright}, we should think in terms of the likelihood of a solution given certain constraints on the parameter values and all their possible combinations.
It is possible to modify the prior on $A_V$ by considering independent determinations of the physical conditions. We have obtained extinction values from the maps produced by \citet{lombardi2010} and constrained the $A_V$ prior accordingly. Since the extinction map has its own uncertainties, we allow our $A_V$ values to vary within 0.4 dex of the value obtained from the map. By doing so, we hope to break the degeneracy observed in Figure \ref{app3} for at least some of the objects. It is important to note here that the values of $A_V$ measured from the map represent in fact only an upper limit to the total extinction, since part of the measured $A_V$ can actually come from behind the source of interest. By selecting a range of values around the measured $A_V$, we are assuming that most of the extinction we see is due to foreground material between us and the source.
Figure \ref{app4} shows the solution and the resulting marginalized posterior PDFs after the prior has been modified as described, for the same object as in Figures \ref{app2} and \ref{app3}. As expected, the additional information in the prior breaks the existing degeneracy and leaves only the more massive of the two solutions. This solution is in fact in better agreement with the expected masses of the YSOs that we would be able to detect at the distance of the Perseus cloud. Of course, another way to break degeneracies would be to include additional data points at longer wavelengths that are compatible with only one of the two possible solutions. Photometry from the PACS instrument onboard the Herschel Space Observatory will be instrumental for this task.
\subsection{Results and discussion}
\label{discussion}
\subsubsection{SED slope analysis}
We have applied our SED fitting method to the 275 identified YSOs (the total of known and new candidates) in order to perform a census of masses and evolutionary stages. The calculated parameters are highly dependent on the extinction, and therefore we had to estimate the dust absorption in order to constrain A$_V$ before the SED fitting was applied. The extinction values are calculated based on the 2MASS K-band extinction map provided by \citet{lombardi2010}. Figure \ref{av_dist} presents the location of high (A$_V > 2$) and low (A$_V < 2$) extinction regions. Currently active star-forming regions such as NGC~1333, IC~348, L1448, L1455 and Taurus are all embedded within molecular cloud material and have higher extinctions.
Figure \ref{slope_hist} presents the distribution of the calculated $\alpha_{3.4-12}$ and the positions of the candidates in three distinct groups. For the first group, $\alpha_{3.4-12}$ peaks at $\approx -2$; these sources, with $\alpha_{3.4-12} < -1.75$, are marked by blue circles in the right panel.
This group has more evolved disks and is spread over the field, with a concentration towards the North-West of the field, above the California nebula.
The main peak at $\alpha_{3.4-12} \approx -1$ contains many YSOs with stronger disks. Green triangles mark this group, selected as $-1.75 < \alpha_{3.4-12} < 0$, in the right panel. They are more concentrated within the Perseus and Taurus star-forming regions and molecular clouds.
Candidates with very strong disks are marked with red dots and are selected as $0 < \alpha_{3.4-12}$. They are highly concentrated within the active star-forming clusters NGC1333, IC348 and Taurus, with a few spread over the field. It is not clear whether they formed individually or escaped from their original birthplaces.
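For reference, the grouping described above amounts to a simple cut on $\alpha_{3.4-12}$. The sketch below assumes the usual definition of the infrared spectral index as the least-squares slope of $\log(\lambda F_\lambda)$ versus $\log\lambda$; the flux values and group labels are placeholders invented for the example.
\begin{verbatim}
import numpy as np

def sed_slope(wavelengths_um, flux_lambda):
    """Least-squares slope of log(lambda * F_lambda) vs log(lambda),
    the usual definition of the infrared spectral index alpha."""
    x = np.log10(wavelengths_um)
    y = np.log10(wavelengths_um * flux_lambda)
    return np.polyfit(x, y, 1)[0]

def classify(alpha):
    # The three groups discussed in the text
    if alpha < -1.75:
        return "evolved disk"
    elif alpha < 0:
        return "strong disk"
    return "very strong disk"

# WISE bands used for alpha_{3.4-12}; flux values are placeholders
w_um = np.array([3.4, 4.6, 12.0])
f_lambda = np.array([1.0e-13, 4.0e-14, 5.0e-15])   # arbitrary units
alpha = sed_slope(w_um, f_lambda)
print(alpha, classify(alpha))
\end{verbatim}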
Distinct populations in the Perseus region have been noted and discussed in several studies.
For example, \citet{herbig98} found evidence of different generations of stars within and around the IC348 cluster. As suggested by a Hipparcos study (\citealt{dezeeuw99} and references therein), and later confirmed by \citet{belikov2002}, Per OB2 contains two kinematically separated subgroups. \citet{belikov2002} identified more than 800 members of Per OB2 within a 50~pc diameter.
We have also identified three separate evolutionary stages, which are discussed further in the next section.
\subsubsection{Parameters from SED fitting}
Using the Bayesian MCMC method described above, we have obtained best-fit and peak values for all the YSOs in our sample. We focus on the peak values for stellar mass, age, total luminosity and extinction. Given the limited amount of information at wavelengths longer than $22~\mu$m, we do not attempt to constrain the disk/envelope properties in detail. Instead, we are interested in studying the distribution of masses and ages in our wide selected field, which includes NGC~1333 and IC~348, as well as another less dense concentration of YSOs located near the California nebula.
We should note that the parameters derived using the \citet{robitaille07} models are ``model-dependent fit parameters'' and not physical values. The YSO model grid is randomly sampled in mass and age and then uses the evolutionary tracks from \citet{siess2000} to calculate all the other stellar properties. This means that the ``age'' derived here is not measured with respect to any physical stellar process, such as the start of deuterium burning. Rather, the reported ages are indications of the relative evolutionary stages of the members, which depend on the model assumptions and priors.
\citet{willis2013} found a noticeable difference between the total mass of their large, complete sample of YSOs in NGC6334, derived from the same set of models used in this paper, and the total mass expected for a complete Kroupa IMF. The Robitaille models employed in this study are built on observed and physical parameters; therefore the results are valid as relative indicators of the evolutionary stages of all sources in the survey, which satisfies the goal of this study.
Figure \ref{age_mass} shows the age probability vs. mass probability for 2MASS J03311069+3049405. The blue cross, which marks the best fit, is very close to the most probable models for both mass and age in the left panel. In some cases, however, the best-fit model is very different from the probability peak. 2MASS 03253790+3108207 (right panel) is a good example where the $Best~Fit$ and $Peak~Value$ do not match well.
There are two main reasons for this mismatch. First, the PDFs plotted here are marginalized versions of the full posterior, and are therefore integrated over all other parameters not shown in the plot; the absolute maximum of the multi-dimensional PDF therefore need not coincide with the maximum of a marginal distribution. Second, degeneracies in the models, especially given the lack of far-IR data points, can produce multiple maxima in the probability distributions.
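The first reason can be illustrated with a trivial discrete example, using numbers invented purely for illustration: the global maximum of a joint PDF need not coincide with the peak of one of its marginals.
\begin{verbatim}
import numpy as np

# Joint posterior over two parameters A (rows) and B (columns)
p = np.array([[0.40, 0.00],    # A = a1
              [0.30, 0.30]])   # A = a2

# The global maximum of the joint PDF is at (a1, b1) ...
print(np.unravel_index(p.argmax(), p.shape))   # -> (0, 0)

# ... but the marginal PDF of A peaks at a2, because a2 accumulates
# probability from both values of B.
print(p.sum(axis=1))                           # -> [0.4, 0.6]
\end{verbatim}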
We also found that a correct estimation of the optical extinction is very important to constrain the mass and the other physical parameters. The fitting tool allows $A_V$ to take values over a wide range in order to find the best SED models, but leaving $A_V$ unconstrained dramatically affects the final results. We addressed this by constraining $A_V$ from the observed extinction towards each individual candidate, using the A$_K$ value from the extinction map provided by \citet{lombardi2010} towards the Taurus and Perseus star-forming regions, based on 2MASS counts. We have recorded the best-fit values of the parameters for each source, as well as the most likely solution, or peak value, as obtained from the peak of the posterior PDF.
Figure \ref{age_hist} presents the distribution of estimated ages for our candidates and the spatial distribution of ages in the field. Three distinct age groups are noticeable in the left histogram: age$<$1~Myr, 1$<$age$<$5~Myr and 5~Myr$<$age. As expected, the younger YSOs have stronger disks and are located within the high-extinction inner parts of the Perseus cloud and the NGC1333 and IC348 clusters. Noticeably, the North-Western association does not contain any young YSOs and only a few intermediate-age ones.
The estimated masses for our YSO candidates vary over a wide range, from 0.1 to 5 M$_\odot$. Figure \ref{mass_hist} presents the mass distribution and the locations of low-mass and high-mass candidates. The majority of YSOs with masses larger than 1~M$_\odot$ are located within areas with larger A$_V$, i.e.\ within the Perseus molecular cloud and particularly its star-forming clusters.
The massive YSOs from older populations spread over the field are probably too evolved to have strong, detectable disks.
\section{Summary}
\label{summary}
We have performed a census of YSO candidates in the Perseus OB2 association, covering $\sim$144 deg$^2$, using the $\it WISE$ catalog. We derived the physical characteristics of all identified YSOs within the region by employing other optical and infrared data, including 2MASS, {\it Spitzer}, SDSS, PPMXL and APASS. Following \citet{lada06} and \citet{muench07}, we calculated the SED slope in the range $3-12~\mu$m for 48,692 sources with S/N$\geq$7 in the first three $\it WISE$ bands. 669 point sources survived with slopes more than 5$\sigma$ above the Gaussian peak in bins of 0.5 mag in 2MASS J. After removing known non-stellar extended objects such as galaxies and PNe, some sources remained in our sample that could not be identified as point sources in optical or infrared images or did not show point-source profiles in at least three out of the four $\it WISE$ bands. Among the 354 remaining point sources, one was identified as a massive X-ray binary and 12 as evolved stars such as carbon stars or Mira variables. 156 of the remaining sources were previously identified as YSOs or YSO candidates. In this work we present 119 new YSO candidates toward the Perseus region. We also identified 66 new point sources with infrared excess emission but brighter than the normal YSOs in the region. We separated them as likely AGB and evolved star candidates.
The majority of known candidates are concentrated toward active star-forming regions such as the Taurus, IC348 and NGC1333 clusters. We add more YSO candidates in these regions, whose photometry in c2d or other previous studies was too poor for them to be confirmed as YSOs. The new candidates also follow the remaining gas of the original Perseus cloud, with a concentration toward the North-West near the California nebula. Overall, the new candidates have weaker disks and are more scattered within the field, while previously known sources are located within high-A$_V$ regions of the Perseus molecular cloud.
We employed the SED fitting models described by \citet{robitaille07}, but used a Markov Chain Monte Carlo method to explore the large parameter space of the model grid. Instead of selecting the model with the lowest $\chi^2$ as the best fit, we accept the physical parameters from the most probable fitted models. Similar to the slope ($\alpha_{3.4-12}$) distribution, the derived masses and ages also show separate populations. The $\alpha_{3.4-12}$ histogram shows two separate peaks at $\approx -2$ and $\approx -1$. Candidates with slopes larger than zero (i.e.\ very strong disks) are mainly found within the IC348 and NGC1333 clusters, with a few scattered in the field. The mid-slope population, which peaks around $-1$, is also located within the Perseus cloud and nebulous region.
As expected, younger sources (age$<$1~Myr) are located within active star-forming clusters, while the older population (age$>$5~Myr) is scattered within the field with a noticeable concentration toward the North-West.
In contrast, both low-mass (M$<$1~M$_\odot$) and higher-mass (M$>$1~M$_\odot$) candidates are found both scattered within the field and within the Perseus cloud.
Finally, we compared our method of selecting circumstellar disks with other YSO selection methods based on optical and infrared colors and magnitudes. While our $\alpha$ method is very successful at identifying bright sources with weak disks that would be filtered out by other methods, it misses YSOs with strong disks if they are faint, in particular fainter than J$_{mag}=12$.
A combination of the $\alpha$ and color-cut selections would provide a more complete method to identify both strong and weak disks.
\noindent
{\bf Acknowledgment}
We thank Matthew Templeton for providing us with APASS data and Marco Lombardi and Joao Alves for sharing the Taurus-Perseus A$_K$ map based on their 2MASS study. We would also like to thank Luisa Rebull for helpful discussions and comments, and the anonymous referee for detailed comments and
suggestions that helped to improve this work.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the JPL/California Institute of Technology, funded by NASA. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and NSF.
This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by JPL, California Institute of Technology, under contract with NASA. This research was made possible through the use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund. This research has made use of the SIMBAD database and X-Match tool, operated at CDS, Strasbourg, France.
This research was funded by the National Aeronautics and Space Administration Grants NNX08AJ66G, NNX10AD68G, NNX12AI55G, and JPL RSA 717352 to the Smithsonian Astrophysical Observatory.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f1.pdf}
\caption{\label{regions} Surveyed region and its coverage by a sample of surveys: WISE (blue), Spitzer (c2d) (yellow), SDSS (green), UKIDSS (red), GALEX (grey), XMM (black) and Chandra (magenta). }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f2.pdf}
\caption{\label{optical} Field coverage by SDSS, APASS and WISE}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f3.pdf}
\caption{\label{irac_wise} IRAC $\alpha_{3-8}$ vs. WISE $\alpha_{3-12}$. The solid blue line shows a linear least-squares fit with a slope of $0.92\pm0.2$ (excluding three data points in the bottom-right corner with poor IRAC photometry), and the red dashed line marks equal values.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f4.pdf}
\caption{\label{selection}
$\alpha_{3.4-12}$ versus 2MASS J magnitude, used to identify the YSO candidates. Grey dots show all $\it WISE$ point sources with SNR$>$7 in the first three bands. The black locus separates sources whose $\alpha_{3.4-12}$ is 5$\sigma$ above the Gaussian peak of the slope distribution in each 0.5 magnitude bin of $J_{mag}$. Newly identified YSO candidates are shown as open red circles, while known candidates from the literature are shown as blue triangles. Known AGB stars are plotted as bold black dots, while AGB candidates identified in this study are marked with black crosses. Yellow dots mark OB stars in Per OB2. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f5.pdf}
\caption{\label{wwt}
YSO candidates overlaid on a multi-band WISE image of the Perseus region ($4.6 \mu$m in blue, $12 \mu$m in green and $22 \mu$m in red). The brightness of each source indicates its $\alpha_{3.4-12}$ slope; a larger slope presumably means a stronger disk. Picture created using Microsoft Worldwide Telescope ($http://worldwidetelescope.org$)}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f6.pdf}
\caption{\label{rebull}
Comparison of our results with the \citet{rebull2011} study of the Taurus-Auriga star-forming region. Black triangles mark our sample of 354 candidates in Perseus. Green, blue and pink dots mark known YSO candidates, new YSO candidates and rejected candidates, respectively, from Rebull et al.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f7.pdf}
\caption{\label{alpha_compare}
Comparison of the disk classifications based on the slopes ($\alpha$) and on the Koenig color cuts in one plot. Black dots show $\alpha$-stars ($\alpha<-2.56$) and yellow dots show $\alpha$-anemic disks ($-2.56<\alpha<-1.8$). Blue dots show Class~II ($-1.8<\alpha<-0.5$) and red dots Class~I $\alpha$-disks ($\alpha>-0.5$). Selected by the $[3.4]-[4.6]$ and $[4.6]-[12]$ color-color diagrams, green open circles mark Class~II and black open squares Class~I sources. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\plottwo{f8a.pdf}{f8b.pdf}
\caption{\label{k_w4} Left: our candidates with infrared excess are divided into two groups. The sources with brighter K$_{mag}$ match known Galactic AGB stars (cyan dots) in this plot and are likely AGB stars rather than YSOs; they are marked with black crosses. Right: M- and K-type stars in our sample follow the same pattern, supporting the suggestion that the sources marked by crosses in the left panel are likely M- and K-type evolved dusty stars. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f9.pdf}
\caption{\label{agb_dist}
Distribution of YSO, AGB and OB stars in Per OB2. As expected, YSOs are concentrated toward the star-forming clusters (IC348, NGC1333) and are more numerous within the Perseus molecular cloud. AGB candidates would be expected to be randomly distributed in the field, but they follow the cloud structure and are also concentrated in the North and North-West of the field. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f10.pdf}
\caption{\label{app1}
Multi-wavelength SEDs for a selection of objects in Perseus. The black diamonds are the extracted photometry, while the red line is the best fit (the one that minimizes the $\chi^2$). Also shown are the posterior PDFs for $m_*$, $t_*$, $L_{\rm{tot}}$ and $A_V$, with the best fit values represented by the dotted lines.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f11.pdf}
\caption{\label{app2}
The $m_*$-$A_V$ degeneracy. Two possible solutions for a single set of photometry. The PDFs show the two possible solutions, while the best fit in each case tends to select only one.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f12.pdf}
\caption{\label{app3}
The $m_*$-$A_V$ plane with contours of normalized probability showing the degeneracy between these two model parameters. The blue cross indicates the best fit for this particular realization.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f13.pdf}
\caption{\label{app4}
Only one solution is possible when the $A_V$ prior is modified to account for additional evidence on the extinction.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\plottwo{f14a.pdf}{f14b.pdf}
\caption{\label{age_mass}
Mass-Age probability distribution for all fitted SED models for two different YSO candidates. The brightest region marks the most probable values from the different fitted SED models. In the left panel the best-fit value (blue cross) is close to the most probable values, but in the right panel there is a large difference.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{f15.pdf}
\caption{\label{av_dist}
Spatial distribution of extinction for the 275 YSO candidates. The A$_V$ values are calculated based on the 2MASS K-band extinction map by Lombardi et al. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\plottwo{f16a.pdf}{f16b.pdf}
\caption{\label{slope_hist}
Distribution of $\alpha_{3.4-12}$ for the 275 YSO candidates. The locations of the members of the three noticeable groups are shown in the right panel: $\alpha<-1.75$ as blue circles, $-1.75<\alpha<0$ as green triangles and $0<\alpha$ as red dots.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\plottwo{f17a.pdf}{f17b.pdf}
\caption{\label{age_hist}
Age distribution for the 275 YSO candidates. The locations of the members of the three noticeable groups are shown in the right panel: age$<1$~Myr as red dots, $1<$age$<5$~Myr as green triangles and $5$~Myr$<$age as blue circles.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\plottwo{f18a.pdf}{f18b.pdf}
\caption{\label{mass_hist}
Mass distribution for the 275 YSO candidates. The locations of the two noticeable groups are shown in the right panel: M$<$1~M$_\odot$ as red dots and M$>$1~M$_\odot$ as blue squares.}
\end{center}
\end{figure}
\clearpage
\begingroup
\let\clearpage\relax
\rotate
\begin{deluxetable}{ccccc}
\tabletypesize{\scriptsize}
\tablecaption{Summary of the catalogs used in this study \label{tbl1}}
\tablewidth{0pt}
\tablehead{
\colhead{Catalog} & \colhead{Coverage Type} & \colhead{Coverage of $12^\circ\times12^\circ$}&
\colhead{Waveband} & \colhead{Number of Sources}
}
\startdata
2MASS&all sky& 100$\%$ & J, H, K & 887,729\\
WISE&all sky& 100$\%$ &3.4, 4.6, 12, 22 $\mu$m&1,657,265\\
USNO-UCAC3&all sky&100$\%$ &579-642 nm&205,279\\
USNO-B1&all sky&100$\%$ & B, R, I &2,072,601\\
PPMXL&all sky&100$\%$ &B, R, I &833,291\\
NOMAD&all sky&100$\%$ &B, R, I&815,242 \\
GSC&all sky&100$\%$ &J, V, N, U, B&850,673\\
AKARI&all sky&100$\%$ &9, 18 $\mu$m& 1399\\
SDSS DR8&selected regions& $\sim10\%$ &u, g, i, r, z &594,967\\
UKIDSS DR7&selected regions& $\sim50\%$& J,H,K,Y,Z &1,842,326\\
C2D-cloud&selected regions&$\sim15\%$&3.6, 4.5, 5.8, 8.0, 24, 70, 160 $\mu$m&777,484\\
C2D-off&selected regions&$\sim2\%$&3.6, 4.5, 5.8, 8.0, 24, 70, 160 $\mu$m&55,193\\
Chandra &selected regions&$< 2\%$&0.3-3 \AA&887\\
XMM-Newton&selected regions&$< 2\%$&0.6-6 \AA&819\\
\enddata
\end{deluxetable}
\rotate
\begin{deluxetable}{cccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Physical parameters measured for the 119 new candidates \label{tbl_new}}
\tablewidth{0pt}
\tablehead{
\colhead{Order} & \colhead{Catalog Number} &\colhead{RA} & \colhead{Dec}& \colhead{Name} &\colhead{$\alpha$} &\colhead{ Mass$_{BF}$} & \colhead{Mass$_{peak}$} & \colhead{Age$_{BF}$} &
\colhead{Age$_{peak}$} \\
\colhead{} &\colhead{} &\colhead{(deg)} & \colhead{(deg)}&
\colhead{} &\colhead{} &\colhead{(M$_\odot$)} &
\colhead{(M$_\odot$)} & \colhead{(Myr)} & \colhead{(Myr)}
}
\startdata
1 & J032126.73+304645.5 & 50.3613824 & 30.7793282 & & -1.74 $\pm$ 0.05 & 0.2 & 0.35 $\pm$ 0.07 & 0.56 & 3.06 $\pm$ 0.23\\
2 & J032151.66+362327.7 & 50.4652808 & 36.3910322 & & -1.39 $\pm$ 0.05 & 1.42 & 0.93 $\pm$ 0.22 & 9.67 & 9.32 $\pm$ 0.13\\
3 & J032224.49+265637.9 & 50.602025 & 26.943878 & BD+26 545 & -2.63 $\pm$ 0.04 & 1.97 & 1.86 $\pm$ 0.06 & 9.79 & 8.35 $\pm$ 0.1\\
4 & J032231.00+311527.2 & 50.629184 & 31.25754 & & 2.62 $\pm$ 0.04 & 2.54 & 2.8 $\pm$ 0.0 & 1.04 & 1.12 $\pm$ 0.0\\
5 & J032251.21+371842.7 & 50.713394 & 37.3118834 & & -2.25 $\pm$ 0.06 & 1.16 & 1.15 $\pm$ 0.22 & 4.53 & 5.85 $\pm$ 0.23\\
6 & J032333.14+302949.8 & 50.8881155 & 30.4971848 & IRAS 03204+3019 & 1.65 $\pm$ 0.03 & 0.52 & 0.64 $\pm$ 0.32 & 0.03 & 0.03 $\pm$ 0.6\\
7 & J032503.08+310755.9 & 51.2628556 & 31.1322201 & & -1.54 $\pm$ 0.05 & 0.23 & 0.38 $\pm$ 0.18 & 3.37 & 3.41 $\pm$ 0.46\\
8 & J032507.48+304941.9 & 51.2811739 & 30.8283286 & & -1.07 $\pm$ 0.05 & 0.33 & 0.46 $\pm$ 0.16 & 2.42 & 3.82 $\pm$ 0.26\\
9 & J032510.59+304834.7 & 51.2941413 & 30.8096523 & & -1.08 $\pm$ 0.05 & 0.24 & 0.39 $\pm$ 0.33 & 4.18 & 3.53 $\pm$ 0.26\\
10 & J032526.49+305237.4 & 51.3604161 & 30.877062 & & -1.04 $\pm$ 0.04 & 0.66 & 0.35 $\pm$ 0.61 & 8.02 & 3.55 $\pm$ 0.25\\
11 & J032533.16+305544.1 & 51.3881758 & 30.9289289 & & -1.19 $\pm$ 0.04 & 0.2 & 0.3 $\pm$ 0.09 & 0.38 & 1.12 $\pm$ 0.31\\
12 & J032534.52+304728.0 & 51.3938412 & 30.7911121 & & -0.69 $\pm$ 0.04 & 1.29 & 1.46 $\pm$ 0.21 & 3.3 & 8.81 $\pm$ 0.25\\
13 & J032546.87+305720.4 & 51.4453023 & 30.9556944 & 2MASS J03254686+3057204 & -0.61 $\pm$ 0.04 & 1.38 & 1.39 $\pm$ 0.37 & 0.05 & 0.04 $\pm$ 0.61\\
14 & J032550.10+305554.1 & 51.4587901 & 30.9317144 & BD+30 540 & -2.25 $\pm$ 0.05 & 2.49 & 2.63 $\pm$ 0.17 & 4.34 & 6.15 $\pm$ 0.19\\
15 & J032552.76+305449.0 & 51.469849 & 30.9136219 & 2MASS J03255275+3054490 & -1.53 $\pm$ 0.05 & 0.65 & 0.5 $\pm$ 0.11 & 2.63 & 2.8 $\pm$ 0.2\\
16 & J032555.72+305705.4 & 51.4822053 & 30.9515063 & & -1.88 $\pm$ 0.05 & 1.06 & 0.89 $\pm$ 0.1 & 4.61 & 3.31 $\pm$ 0.22\\
17 & J032557.49+273436.3 & 51.4895635 & 27.5767754 & & -0.29 $\pm$ 0.04 & 1.5 & 1.19 $\pm$ 0.3 & 6.3 & 9.16 $\pm$ 0.16\\
18 & J032619.81+310637.1 & 51.5825609 & 31.1103143 & & -1.19 $\pm$ 0.04 & 1.62 & 1.36 $\pm$ 0.06 & 2.84 & 1.03 $\pm$ 0.34\\
19 & J032624.73+353310.1 & 51.603061 & 35.552788 & HD 278643 & -2.62 $\pm$ 0.04 & 2.07 & 2.03 $\pm$ 0.15 & 4.23 & 5.58 $\pm$ 0.35\\
20 & J032644.45+374319.9 & 51.6852328 & 37.7221956 & HD 275417 & -2.5 $\pm$ 0.04 & 1.97 & 1.99 $\pm$ 0.29 & 9.79 & 9.03 $\pm$ 0.19\\
21 & J032722.41+311528.7 & 51.8434013 & 31.2579774 & & -1.85 $\pm$ 0.05 & 0.42 & 0.56 $\pm$ 0.08 & 2.35 & 2.6 $\pm$ 0.28\\
22 & J032725.66+364542.0 & 51.8569303 & 36.7616939 & & -1.22 $\pm$ 0.06 & 0.1 & 0.11 $\pm$ 0.3 & 2.44 & 4.81 $\pm$ 0.32\\
23 & J032745.45+315821.0 & 51.9394058 & 31.9725183 & & -0.91 $\pm$ 0.04 & 0.85 & 0.95 $\pm$ 0.08 & 8.03 & 6.43 $\pm$ 0.16\\
24 & J033121.45+300614.3 & 52.8393947 & 30.1039948 & & -1.12 $\pm$ 0.05 & 0.12 & 0.13 $\pm$ 0.11 & 2.3 & 2.87 $\pm$ 0.27\\
25 & J033203.36+365810.0 & 53.0140256 & 36.9694462 & & 0.45 $\pm$ 0.06 & 0.61 & 0.74 $\pm$ 0.38 & 5.27 & 9.59 $\pm$ 0.12\\
26 & J033437.78+281536.8 & 53.6574328 & 28.2602328 & & -1.67 $\pm$ 0.06 & 1.46 & 1.5 $\pm$ 0.0 & 0.51 & 0.19 $\pm$ 0.0\\
27 & J033544.16+302357.5 & 53.9340301 & 30.3993128 & & -0.01 $\pm$ 0.05 & 1.49 & 1.47 $\pm$ 0.11 & 3.03 & 5.7 $\pm$ 0.17\\
28 & J033652.68+335658.0 & 54.2195016 & 33.9494472 & & -1.7 $\pm$ 0.05 & 0.65 & 0.79 $\pm$ 0.13 & 1.64 & 3.02 $\pm$ 0.24\\
29 & J033703.64+303929.0 & 54.2651805 & 30.6580796 & TYC 2355-740-1 & -0.86 $\pm$ 0.03 & 1.27 & 1.94 $\pm$ 0.1 & 0.46 & 0.57 $\pm$ 0.11\\
30 & J033800.72+340112.6 & 54.5030092 & 34.020184 & 2MASS J03380072+3401126 & -1.73 $\pm$ 0.06 & 1.07 & 1.54 $\pm$ 0.0 & 0.27 & 0.29 $\pm$ 0.0\\
31 & J033805.39+325428.5 & 54.5224599 & 32.9079195 & & -1.16 $\pm$ 0.07 & 0.62 & 0.47 $\pm$ 0.07 & 6.58 & 2.91 $\pm$ 0.3\\
32 & J033900.55+294145.7 & 54.7523308 & 29.6960465 & V* V1185 Tau & -0.08 $\pm$ 0.04 & 3.3 & 2.17 $\pm$ 0.1 & 4.35 & 7.95 $\pm$ 0.13\\
33 & J033909.97+322421.8 & 54.7915742 & 32.4060745 & & -2.15 $\pm$ 0.08 & 0.76 & 0.85 $\pm$ 0.26 & 3.51 & 4.86 $\pm$ 0.23\\
34 & J033941.31+361604.3 & 54.9221469 & 36.2678697 & GSC 02367-01706 & -1.0 $\pm$ 0.15 & 2.76 & 2.8 $\pm$ 0.0 & 0.1 & 0.11 $\pm$ 0.0\\
35 & J033953.88+320742.6 & 54.9745076 & 32.1285048 & & -0.53 $\pm$ 0.07 & 1.25 & 0.3 $\pm$ 0.16 & 4.39 & 8.39 $\pm$ 0.22\\
36 & J034021.42+364248.4 & 55.0892783 & 36.7134484 & & -0.28 $\pm$ 0.04 & 0.3 & 0.31 $\pm$ 0.11 & 8.02 & 7.9 $\pm$ 0.12\\
37 & J034046.96+323153.7 & 55.1956827 & 32.5315911 & V* IP Per & -1.92 $\pm$ 0.05 & 2.44 & 2.19 $\pm$ 0.04 & 6.73 & 5.39 $\pm$ 0.17\\
38 & J034102.75+344111.8 & 55.2614645 & 34.686616 & HD 278919 & -2.27 $\pm$ 0.04 & 1.91 & 1.85 $\pm$ 0.3 & 8.85 & 7.29 $\pm$ 0.0\\
39 & J034110.99+311308.0 & 55.2958295 & 31.2188936 & & 0.0 $\pm$ 0.06 & 0.3 & 0.34 $\pm$ 0.03 & 8.02 & 7.83 $\pm$ 0.15\\
40 & J034117.95+320250.9 & 55.3247937 & 32.0474798 & & -0.75 $\pm$ 0.06 & 0.21 & 0.29 $\pm$ 0.13 & 3.6 & 8.28 $\pm$ 0.26\\
41 & J034128.70+311658.1 & 55.3695924 & 31.2828254 & & -0.23 $\pm$ 0.07 & 0.47 & 0.35 $\pm$ 0.21 & 0.22 & 0.4 $\pm$ 0.32\\
42 & J034130.52+315452.3 & 55.3771731 & 31.9145545 & & -1.36 $\pm$ 0.05 & 0.18 & 0.72 $\pm$ 0.12 & 2.6 & 3.01 $\pm$ 0.3\\
43 & J034141.12+311924.7 & 55.4213386 & 31.3235488 & & -1.34 $\pm$ 0.05 & 0.44 & 0.59 $\pm$ 0.06 & 6.21 & 6.88 $\pm$ 0.16\\
44 & J034158.52+314855.7 & 55.4938374 & 31.8154967 & SSTc2d J034158.6+314855 & -1.07 $\pm$ 0.04 & 2.22 & 2.12 $\pm$ 0.22 & 9.89 & 8.46 $\pm$ 0.3\\
45 & J034221.27+335743.5 & 55.5886603 & 33.9620971 & BD+33 698B & -2.16 $\pm$ 0.05 & 1.91 & 1.89 $\pm$ 0.05 & 8.85 & 7.49 $\pm$ 0.0\\
46 & J034257.36+313124.5 & 55.7390175 & 31.5234811 & & -1.33 $\pm$ 0.07 & 1.22 & 1.1 $\pm$ 0.21 & 6.66 & 5.42 $\pm$ 0.17\\
47 & J034314.92+333516.1 & 55.8122066 & 33.5878064 & HD 278934 & -1.9 $\pm$ 0.04 & 2.29 & 2.21 $\pm$ 0.13 & 9.02 & 9.12 $\pm$ 0.1\\
48 & J034329.67+315808.9 & 55.8736578 & 31.9691642 & & -1.86 $\pm$ 0.09 & 1.22 & 1.13 $\pm$ 0.23 & 9.86 & 7.41 $\pm$ 0.18\\
49 & J034336.78+310516.5 & 55.9032525 & 31.0879277 & & -1.01 $\pm$ 0.04 & 0.54 & 0.45 $\pm$ 0.16 & 7.16 & 2.81 $\pm$ 0.38\\
50 & J034432.72+322243.0 & 56.1363454 & 32.3786267 & & -2.35 $\pm$ 0.04 & 1.75 & 1.1 $\pm$ 0.12 & 1.48 & 1.69 $\pm$ 0.24\\
51 & J034516.57+294224.4 & 56.319061 & 29.7067829 & & -1.28 $\pm$ 0.04 & 3.62 & 3.39 $\pm$ 0.12 & 0.0 & 0.0 $\pm$ 0.33\\
52 & J034542.58+344631.9 & 56.42743 & 34.775547 & NSV 1272 & -1.76 $\pm$ 0.04 & 2.0 & 0.95 $\pm$ 0.11 & 0.7 & 2.31 $\pm$ 0.25\\
53 & J034617.26+293540.6 & 56.5719328 & 29.5946356 & & -0.86 $\pm$ 0.04 & 0.99 & 0.79 $\pm$ 0.11 & 7.21 & 6.23 $\pm$ 0.21\\
54 & J034621.58+295920.8 & 56.5899559 & 29.9891256 & & -1.36 $\pm$ 0.05 & 0.37 & 0.35 $\pm$ 0.13 & 6.07 & 4.13 $\pm$ 0.22\\
55 & J034641.48+352502.0 & 56.6728344 & 35.4172376 & & -1.0 $\pm$ 0.03 & 1.01 & 0.81 $\pm$ 0.04 & 2.38 & 3.31 $\pm$ 0.27\\
56 & J034719.71+295200.0 & 56.8321391 & 29.8666734 & HD 281258 & -1.93 $\pm$ 0.04 & 1.78 & 1.91 $\pm$ 0.08 & 7.63 & 3.5 $\pm$ 0.26\\
57 & J034742.94+350044.1 & 56.9289537 & 35.0122731 & NSV 1302 & -1.1 $\pm$ 0.04 & 1.55 & 1.4 $\pm$ 0.15 & 5.9 & 3.59 $\pm$ 0.25\\
58 & J034937.39+292205.2 & 57.4058169 & 29.368137 & & -1.12 $\pm$ 0.04 & 0.37 & 0.34 $\pm$ 0.09 & 3.61 & 3.65 $\pm$ 0.2\\
59 & J035007.75+350603.3 & 57.5322953 & 35.1009271 & GSC 02364-00805 & -1.72 $\pm$ 0.07 & 1.42 & 1.01 $\pm$ 0.14 & 9.67 & 9.5 $\pm$ 0.11\\
60 & J035043.33+334602.5 & 57.6805755 & 33.7673872 & & 0.62 $\pm$ 0.04 & 0.24 & 0.24 $\pm$ 0.19 & 9.42 & 8.08 $\pm$ 0.3\\
61 & J035043.70+350708.9 & 57.6820945 & 35.1191647 & & -0.6 $\pm$ 0.06 & 0.53 & 0.53 $\pm$ 0.17 & 0.04 & 0.04 $\pm$ 0.14\\
62 & J035058.53+350528.5 & 57.7438986 & 35.0912715 & & -0.97 $\pm$ 0.05 & 0.13 & 0.25 $\pm$ 0.17 & 2.6 & 4.04 $\pm$ 0.22\\
63 & J035128.19+351825.8 & 57.8674971 & 35.3071818 & & 0.14 $\pm$ 0.04 & 0.46 & 0.5 $\pm$ 0.24 & 0.36 & 0.44 $\pm$ 0.36\\
64 & J035138.16+360546.2 & 57.9090149 & 36.0961775 & & -0.49 $\pm$ 0.05 & 1.25 & 0.19 $\pm$ 0.0 & 4.39 & 4.0 $\pm$ 0.22\\
65 & J035152.40+335541.1 & 57.9683514 & 33.9280997 & HD 279119 & -2.09 $\pm$ 0.04 & 1.91 & 1.88 $\pm$ 0.04 & 8.85 & 7.62 $\pm$ 0.0\\
66 & J035216.28+332422.1 & 58.0678615 & 33.4061622 & HD 279128 & -1.27 $\pm$ 0.05 & 2.59 & 2.89 $\pm$ 0.04 & 2.71 & 2.1 $\pm$ 0.28\\
67 & J035224.19+361221.4 & 58.1008192 & 36.2059517 & HD 279075 & -2.5 $\pm$ 0.05 & 1.91 & 1.77 $\pm$ 0.14 & 8.85 & 8.28 $\pm$ 0.25\\
68 & J035234.42+353358.9 & 58.1434471 & 35.5663787 & & -1.12 $\pm$ 0.04 & 0.36 & 0.35 $\pm$ 0.24 & 4.49 & 2.27 $\pm$ 0.2\\
69 & J035323.82+271838.3 & 58.3492848 & 27.310649 & & -0.99 $\pm$ 0.04 & 0.11 & 0.38 $\pm$ 0.14 & 7.59 & 6.91 $\pm$ 0.18\\
70 & J035357.09+350239.2 & 58.4878805 & 35.0442238 & & -1.13 $\pm$ 0.04 & 1.87 & 0.69 $\pm$ 0.03 & 3.61 & 4.17 $\pm$ 0.3\\
71 & J035407.02+315101.8 & 58.5292786 & 31.850522 & BD+31 666E & -2.6 $\pm$ 0.04 & 1.91 & 1.97 $\pm$ 0.07 & 8.85 & 8.94 $\pm$ 0.13\\
72 & J035431.06+361747.7 & 58.629428 & 36.2966037 & & -2.06 $\pm$ 0.07 & 1.17 & 0.91 $\pm$ 0.14 & 9.81 & 7.49 $\pm$ 0.17\\
73 & J035432.45+342245.7 & 58.6352227 & 34.3793737 & & -1.22 $\pm$ 0.05 & 0.52 & 0.44 $\pm$ 0.1 & 5.17 & 4.55 $\pm$ 0.19\\
74 & J035438.91+322616.8 & 58.6621312 & 32.4380016 & & -0.85 $\pm$ 0.05 & 0.25 & 0.27 $\pm$ 0.21 & 0.38 & 0.38 $\pm$ 0.31\\
75 & J035503.75+313157.4 & 58.7656359 & 31.5326213 & & -0.91 $\pm$ 0.03 & 2.15 & 2.13 $\pm$ 0.09 & 7.32 & 0.91 $\pm$ 0.32\\
76 & J035528.93+350435.5 & 58.8705711 & 35.0765338 & & -1.32 $\pm$ 0.04 & 0.75 & 0.72 $\pm$ 0.16 & 3.19 & 3.06 $\pm$ 0.24\\
77 & J035603.22+351450.5 & 59.0134429 & 35.2473748 & & -1.22 $\pm$ 0.04 & 1.03 & 0.62 $\pm$ 0.18 & 0.77 & 3.02 $\pm$ 0.29\\
78 & J035622.85+350618.5 & 59.0952433 & 35.1051579 & & -0.46 $\pm$ 0.05 & 0.12 & 0.21 $\pm$ 0.19 & 3.62 & 3.67 $\pm$ 0.24\\
79 & J035629.21+365717.2 & 59.1217283 & 36.9547837 & & -0.75 $\pm$ 0.09 & 0.3 & 0.3 $\pm$ 0.1 & 8.02 & 9.6 $\pm$ 0.12\\
80 & J035635.23+370421.2 & 59.1468169 & 37.0725814 & & -1.93 $\pm$ 0.07 & 0.79 & 0.75 $\pm$ 0.05 & 5.67 & 7.57 $\pm$ 0.13\\
81 & J035647.69+370500.3 & 59.1987252 & 37.0834308 & & -1.52 $\pm$ 0.05 & 1.54 & 1.42 $\pm$ 0.11 & 7.1 & 7.9 $\pm$ 0.17\\
82 & J035656.17+373202.1 & 59.2340637 & 37.5339431 & & -1.95 $\pm$ 0.04 & 0.57 & 0.91 $\pm$ 0.08 & 2.99 & 2.73 $\pm$ 0.22\\
83 & J035723.92+364615.4 & 59.3496905 & 36.7709473 & & -2.15 $\pm$ 0.07 & 1.17 & 1.23 $\pm$ 0.13 & 6.97 & 8.79 $\pm$ 0.24\\
84 & J035746.60+371006.7 & 59.4441938 & 37.1685303 & & -1.12 $\pm$ 0.07 & 0.46 & 0.31 $\pm$ 0.1 & 8.87 & 8.4 $\pm$ 0.15\\
85 & J035751.71+365501.2 & 59.4654968 & 36.917001 & & -1.68 $\pm$ 0.09 & 0.86 & 0.61 $\pm$ 0.1 & 9.46 & 8.35 $\pm$ 0.11\\
86 & J035808.98+365520.7 & 59.5374326 & 36.9224253 & & -0.62 $\pm$ 0.07 & 0.3 & 0.3 $\pm$ 0.12 & 8.02 & 7.72 $\pm$ 0.08\\
87 & J035817.28+370120.5 & 59.5720174 & 37.0223663 & & -1.84 $\pm$ 0.05 & 0.62 & 1.07 $\pm$ 0.04 & 2.75 & 3.06 $\pm$ 0.24\\
88 & J035851.36+363140.0 & 59.7140014 & 36.5277881 & HD 279222 & -2.34 $\pm$ 0.05 & 1.91 & 2.03 $\pm$ 0.22 & 8.85 & 8.3 $\pm$ 0.1\\
89 & J035920.81+362308.0 & 59.836732 & 36.385564 & & -0.77 $\pm$ 0.07 & 0.38 & 0.82 $\pm$ 0.11 & 6.72 & 8.31 $\pm$ 0.11\\
90 & J035930.57+363031.4 & 59.8773908 & 36.5087443 & & -0.67 $\pm$ 0.08 & 0.3 & 0.23 $\pm$ 0.19 & 8.02 & 8.53 $\pm$ 0.18\\
91 & J035936.50+362231.5 & 59.9021105 & 36.3754322 & & -1.01 $\pm$ 0.07 & 0.77 & 0.67 $\pm$ 0.07 & 5.56 & 9.32 $\pm$ 0.17\\
92 & J040019.35+365123.6 & 60.0806382 & 36.8565579 & & -2.12 $\pm$ 0.08 & 1.22 & 0.95 $\pm$ 0.04 & 9.86 & 8.25 $\pm$ 0.12\\
93 & J040026.64+334326.5 & 60.1110131 & 33.7240537 & HD 281352 & -1.49 $\pm$ 0.04 & 1.54 & 1.59 $\pm$ 0.15 & 7.1 & 8.22 $\pm$ 0.11\\
94 & J040056.13+314301.3 & 60.2338993 & 31.7170481 & TYC 2357-1345-1 & -1.16 $\pm$ 0.04 & 3.27 & 2.2 $\pm$ 0.28 & 0.44 & 0.49 $\pm$ 0.44\\
95 & J040107.93+334320.9 & 60.2830825 & 33.7224846 & & -1.21 $\pm$ 0.04 & 1.13 & 0.55 $\pm$ 0.05 & 1.14 & 1.88 $\pm$ 0.33\\
96 & J040141.98+301516.3 & 60.424928 & 30.2545303 & V* WW Tau & -2.54 $\pm$ 0.09 & 4.24 & 4.1 $\pm$ 0.06 & 0.49 & 0.62 $\pm$ 0.2\\
97 & J040159.15+321941.2 & 60.4964818 & 32.328137 & HD 281479 & -2.24 $\pm$ 0.04 & 1.81 & 1.92 $\pm$ 0.07 & 8.29 & 8.31 $\pm$ 0.35\\
98 & J040159.27+290344.2 & 60.4969797 & 29.062297 & & -1.47 $\pm$ 0.06 & 1.29 & 0.76 $\pm$ 0.12 & 6.92 & 7.51 $\pm$ 0.0\\
99 & J040219.09+324015.2 & 60.5795592 & 32.6709063 & & -0.8 $\pm$ 0.04 & 0.2 & 0.23 $\pm$ 0.05 & 0.92 & 0.63 $\pm$ 0.4\\
100 & J040225.72+365025.4 & 60.6071825 & 36.8404101 & HD 279280 & -2.35 $\pm$ 0.04 & 1.96 & 2.02 $\pm$ 0.08 & 5.06 & 8.36 $\pm$ 0.34\\
101 & J040227.39+305745.3 & 60.6141306 & 30.9625879 & & 0.39 $\pm$ 0.04 & 0.24 & 0.3 $\pm$ 0.13 & 9.42 & 9.22 $\pm$ 0.05\\
102 & J040259.96+315703.9 & 60.7498591 & 31.9510947 & & -2.05 $\pm$ 0.04 & 0.6 & 0.51 $\pm$ 0.11 & 0.31 & 2.23 $\pm$ 0.27\\
103 & J040323.26+360327.2 & 60.8469493 & 36.0575797 & & -2.33 $\pm$ 0.06 & 1.36 & 1.36 $\pm$ 0.08 & 9.49 & 4.97 $\pm$ 0.2\\
104 & J040401.78+271545.4 & 61.0074512 & 27.2626172 & IRAS 04010+2707 & -1.92 $\pm$ 0.05 & 0.81 & 0.78 $\pm$ 0.3 & 0.17 & 0.28 $\pm$ 0.21\\
105 & J040506.06+361129.2 & 61.2752807 & 36.1914563 & & -1.63 $\pm$ 0.07 & 0.95 & 0.89 $\pm$ 0.13 & 7.92 & 6.9 $\pm$ 0.13\\
106 & J040512.95+361102.9 & 61.3039635 & 36.1841551 & & -2.53 $\pm$ 0.06 & 1.28 & 1.04 $\pm$ 0.1 & 1.33 & 1.2 $\pm$ 0.25\\
107 & J040520.46+273608.2 & 61.3352698 & 27.6022827 & IRAS 04023+2728 & 0.08 $\pm$ 0.04 & 0.27 & 0.26 $\pm$ 0.1 & 6.18 & 6.04 $\pm$ 0.13\\
108 & J040520.61+360605.0 & 61.3358901 & 36.1014137 & & -2.08 $\pm$ 0.06 & 0.65 & 0.81 $\pm$ 0.17 & 5.11 & 5.36 $\pm$ 0.2\\
109 & J040559.62+295638.2 & 61.4984288 & 29.9439538 & & -1.05 $\pm$ 0.03 & 2.68 & 2.1 $\pm$ 0.07 & 0.37 & 0.52 $\pm$ 0.46\\
110 & J040600.41+361531.6 & 61.5017086 & 36.2587885 & & -1.83 $\pm$ 0.05 & 1.22 & 0.87 $\pm$ 0.11 & 9.86 & 4.38 $\pm$ 0.17\\
111 & J040616.84+333256.3 & 61.5701758 & 33.5489916 & HD 281534 & -2.26 $\pm$ 0.05 & 1.91 & 2.1 $\pm$ 0.08 & 8.85 & 7.58 $\pm$ 0.24\\
112 & J040623.36+375259.9 & 61.597374 & 37.883377 & & -2.1 $\pm$ 0.06 & 1.04 & 0.92 $\pm$ 0.05 & 5.55 & 9.26 $\pm$ 0.18\\
113 & J041025.61+315150.7 & 62.6067479 & 31.8640941 & HD 281664 & -2.43 $\pm$ 0.06 & 1.91 & 1.83 $\pm$ 0.15 & 8.85 & 7.21 $\pm$ 0.2\\
114 & J041044.36+334036.1 & 62.6848703 & 33.6767131 & & 0.14 $\pm$ 0.04 & 0.98 & 0.55 $\pm$ 0.13 & 2.17 & 3.04 $\pm$ 0.24\\
115 & J041248.58+274956.2 & 63.2024526 & 27.832299 & 2MASS J04124858+2749563 & -0.54 $\pm$ 0.04 & 0.33 & 0.46 $\pm$ 0.16 & 3.86 & 6.73 $\pm$ 0.11\\
116 & J041412.92+281212.2 & 63.5538407 & 28.2033957 & [CGI2005] 4 & -1.05 $\pm$ 0.05 & 3.68 & 3.78 $\pm$ 0.09 & 1.47 & 1.21 $\pm$ 0.62\\
117 & J041417.39+314247.7 & 63.5724876 & 31.7132503 & & -0.95 $\pm$ 0.04 & 0.51 & 0.41 $\pm$ 0.47 & 8.07 & 7.37 $\pm$ 0.22\\
118 & J041504.40+343040.1 & 63.7683363 & 34.5111655 & & -1.4 $\pm$ 0.07 & 1.29 & 0.77 $\pm$ 0.34 & 6.92 & 8.29 $\pm$ 0.11\\
119 & J041558.00+274617.1 & 63.9916924 & 27.7714435 & 2MASS J04155799+2746175 & -1.13 $\pm$ 0.04 & 0.23 & 0.22 $\pm$ 0.0 & 0.39 & 0.39 $\pm$ 0.31\\
\enddata
\end{deluxetable}
\rotate
\begin{deluxetable}{cccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Physical parameters measured for the 156 known candidates \label{tbl_known}}
\tablewidth{0pt}
\tablehead{
\colhead{Order} &\colhead{Catalog Number} &\colhead{RA} & \colhead{Dec}& \colhead{Name} &\colhead{$\alpha$} &\colhead{ Mass$_{BF}$} & \colhead{Mass$_{peak}$} & \colhead{Age$_{BF}$} &
\colhead{Age$_{peak}$} \\
\colhead{} &\colhead{} &\colhead{(deg)} & \colhead{(deg)}&
\colhead{} &\colhead{} &\colhead{(M$_\odot$)} &
\colhead{(M$_\odot$)} & \colhead{(Myr)} & \colhead{(Myr)}
}
\startdata
1 & J032200.52+305153.1 & 50.5022079 & 30.8647526 & 2MASS J03220052+3051531 & -1.45 $\pm$ 0.04 & 0.8 & 0.95 $\pm$ 0.05 & 1.53 & 1.3 $\pm$ 0.34\\
2 & J032202.57+305129.3 & 50.5107177 & 30.8581628 & 2MASS J03220256+3051292 & -1.56 $\pm$ 0.04 & 2.75 & 2.62 $\pm$ 0.04 & 2.01 & 2.13 $\pm$ 0.25\\
3 & J032506.71+310652.9 & 51.2779972 & 31.1147102 & 2MASS J03250672+3106528 & -1.4 $\pm$ 0.04 & 1.82 & 1.43 $\pm$ 0.23 & 0.72 & 0.52 $\pm$ 0.24\\
4 & J032512.59+305921.9 & 51.3024683 & 30.9894241 & 2MASS J03251260+3059215 & -1.16 $\pm$ 0.04 & 1.65 & 1.6 $\pm$ 0.21 & 1.06 & 0.63 $\pm$ 0.28\\
5 & J032537.91+310820.9 & 51.407985 & 31.1391488 & 2MASS J03253790+3108207 & -0.92 $\pm$ 0.04 & 0.55 & 0.95 $\pm$ 0.32 & 6.41 & 2.8 $\pm$ 0.26\\
6 & J032548.90+305725.8 & 51.4537885 & 30.9571679 & 2MASS J03254886+3057258 & -0.96 $\pm$ 0.04 & 1.49 & 1.61 $\pm$ 0.08 & 9.08 & 0.68 $\pm$ 0.4\\
7 & J032549.83+311023.8 & 51.4576476 & 31.1732939 & 2MASS J03254982+3110237 & -2.02 $\pm$ 0.04 & 3.24 & 1.79 $\pm$ 0.11 & 0.84 & 0.81 $\pm$ 0.36\\
8 & J032628.22+311207.8 & 51.6175851 & 31.2021747 & 2MASS J03262821+3112078 & -0.95 $\pm$ 0.03 & 2.04 & 1.77 $\pm$ 0.05 & 0.44 & 1.92 $\pm$ 0.4\\
9 & J032741.48+302016.8 & 51.9228373 & 30.3380137 & SSTc2d J032741.5+302017 & -1.02 $\pm$ 0.03 & 1.22 & 1.32 $\pm$ 0.15 & 2.21 & 3.21 $\pm$ 0.29\\
10 & J032747.68+301204.4 & 51.9486767 & 30.2012397 & NAME LDN 1455 IRS 2 & -0.11 $\pm$ 0.04 & 2.68 & 3.26 $\pm$ 0.22 & 2.88 & 6.92 $\pm$ 0.39\\
11 & J032800.09+300846.9 & 52.0004148 & 30.1463655 & 2MASS J03280010+3008469 & -1.4 $\pm$ 0.04 & 1.34 & 1.38 $\pm$ 0.27 & 3.47 & 3.47 $\pm$ 0.39\\
12 & J032842.43+302953.1 & 52.1768142 & 30.4980955 & [EDJ2009] 146 & -0.91 $\pm$ 0.04 & 0.39 & 0.63 $\pm$ 0.14 & 1.91 & 1.9 $\pm$ 0.33\\
13 & J032843.26+311732.9 & 52.1802544 & 31.2924824 & HBC 340 & -0.12 $\pm$ 0.04 & 1.48 & 2.12 $\pm$ 0.12 & 1.89 & 0.51 $\pm$ 0.42\\
14 & J032843.55+311736.6 & 52.1814862 & 31.2935018 & HBC 341 & -0.29 $\pm$ 0.06 & 3.17 & 1.47 $\pm$ 0.11 & 0.87 & 2.09 $\pm$ 0.42\\
15 & J032846.19+311638.6 & 52.192479 & 31.2773971 & EM* LkHA 351 & -1.57 $\pm$ 0.04 & 1.33 & 1.37 $\pm$ 0.45 & 2.51 & 2.3 $\pm$ 0.25\\
16 & J032847.83+311655.2 & 52.1992954 & 31.2820006 & 2MASS J03284782+3116552 & -0.96 $\pm$ 0.04 & 1.29 & 0.58 $\pm$ 0.47 & 3.3 & 1.39 $\pm$ 0.53\\
17 & J032851.02+311818.4 & 52.212614 & 31.3051357 & EM* LkHA 352A & -0.74 $\pm$ 0.04 & 2.61 & 2.19 $\pm$ 0.13 & 8.35 & 6.61 $\pm$ 0.36\\
18 & J032851.19+311954.8 & 52.2133275 & 31.3319087 & 2MASS J03285119+3119548 & -0.42 $\pm$ 0.04 & 2.78 & 1.52 $\pm$ 0.07 & 0.92 & 1.35 $\pm$ 0.3\\
19 & J032852.16+312245.3 & 52.2173627 & 31.3792507 & 2MASS J03285216+3122453 & -1.0 $\pm$ 0.04 & 1.47 & 1.4 $\pm$ 0.07 & 9.36 & 7.52 $\pm$ 0.21\\
20 & J032852.18+304505.5 & 52.217418 & 30.7515404 & 2MASS J03285217+3045055 & -0.78 $\pm$ 0.04 & 1.99 & 2.14 $\pm$ 0.17 & 9.84 & 7.13 $\pm$ 0.09\\
21 & J032854.62+311651.3 & 52.2275887 & 31.2809407 & 2MASS J03285461+3116512 & -0.99 $\pm$ 0.04 & 1.95 & 1.79 $\pm$ 0.43 & 7.29 & 8.18 $\pm$ 0.37\\
22 & J032856.64+311835.6 & 52.2360079 & 31.30989 & 2MASS J03285663+3118356 & -0.85 $\pm$ 0.04 & 2.64 & 1.83 $\pm$ 0.19 & 6.25 & 5.75 $\pm$ 0.24\\
23 & J032856.96+311622.3 & 52.2373374 & 31.27287 & 2MASS J03285694+3116222 & -0.49 $\pm$ 0.04 & 1.58 & 1.6 $\pm$ 0.06 & 8.48 & 7.48 $\pm$ 0.45\\
24 & J032859.55+312146.7 & 52.2481447 & 31.3629832 & EM* LkHA 353 & -1.22 $\pm$ 0.04 & 2.28 & 2.32 $\pm$ 1.12 & 7.65 & 6.67 $\pm$ 0.11\\
25 & J032903.77+311603.8 & 52.2657417 & 31.2677457 & V* V512 Per & 0.77 $\pm$ 0.04 & 1.22 & 25.69 $\pm$ 0.09 & 1.73 & 3.73 $\pm$ 1.09\\
26 & J032903.86+312148.7 & 52.2661055 & 31.3635495 & 2MASS J03290386+3121487 & -1.23 $\pm$ 0.04 & 1.98 & 2.37 $\pm$ 0.16 & 8.22 & 7.29 $\pm$ 0.09\\
27 & J032917.67+312244.9 & 52.3236529 & 31.3791443 & NAME NGC 1333 IRS 2 & -1.1 $\pm$ 0.04 & 2.17 & 2.49 $\pm$ 0.16 & 4.94 & 0.54 $\pm$ 0.56\\
28 & J032918.72+312325.5 & 52.3280363 & 31.3904245 & 2MASS J03291872+3123254 & 0.09 $\pm$ 0.04 & 1.82 & 1.34 $\pm$ 0.11 & 4.75 & 1.94 $\pm$ 0.33\\
29 & J032919.77+312457.1 & 52.3324158 & 31.4158781 & 2MASS J03291977+3124572 & 0.14 $\pm$ 0.04 & 2.56 & 2.55 $\pm$ 0.1 & 3.95 & 3.78 $\pm$ 0.23\\
30 & J032921.55+312110.3 & 52.3398199 & 31.3528862 & 2MASS J03292155+3121104 & -0.88 $\pm$ 0.05 & 1.17 & 0.95 $\pm$ 0.13 & 6.03 & 6.88 $\pm$ 0.16\\
31 & J032921.87+311536.2 & 52.3411571 & 31.2600729 & EM* LkHA 271 & -1.53 $\pm$ 0.04 & 1.08 & 1.82 $\pm$ 0.09 & 7.46 & 2.46 $\pm$ 0.28\\
32 & J032923.15+312030.3 & 52.3464585 & 31.3417503 & EM* LkHA 355 & -0.89 $\pm$ 0.04 & 1.48 & 0.96 $\pm$ 0.32 & 8.59 & 4.59 $\pm$ 0.19\\
33 & J032923.23+312653.0 & 52.3468186 & 31.4480694 & 2MASS J03292322+3126531 & -0.53 $\pm$ 0.04 & 0.39 & 0.39 $\pm$ 0.1 & 5.14 & 3.8 $\pm$ 0.38\\
34 & J032925.92+312640.1 & 52.3580264 & 31.4444756 & 2MASS J03292591+3126401 & -0.78 $\pm$ 0.03 & 1.48 & 1.87 $\pm$ 0.09 & 2.24 & 2.38 $\pm$ 0.4\\
35 & J032926.79+312647.4 & 52.3616596 & 31.44652 & 2MASS J03292681+3126475 & -1.94 $\pm$ 0.05 & 1.6 & 1.39 $\pm$ 0.16 & 5.2 & 3.44 $\pm$ 0.25\\
36 & J032928.89+305841.8 & 52.3703977 & 30.978281 & [EDJ2009] 248 & -0.77 $\pm$ 0.04 & 0.23 & 0.33 $\pm$ 0.29 & 5.25 & 4.26 $\pm$ 0.21\\
37 & J032929.26+311834.7 & 52.371941 & 31.309651 & 2MASS J03292925+3118347 & -1.34 $\pm$ 0.04 & 0.36 & 0.51 $\pm$ 0.19 & 2.17 & 2.42 $\pm$ 0.41\\
38 & J032929.78+312102.6 & 52.3741039 & 31.3507368 & 2MASS J03292978+3121027 & -0.69 $\pm$ 0.04 & 0.43 & 0.73 $\pm$ 0.12 & 2.68 & 2.67 $\pm$ 0.26\\
39 & J032930.39+311903.3 & 52.3766345 & 31.3176101 & EM* LkHA 356 & -1.13 $\pm$ 0.04 & 2.01 & 1.82 $\pm$ 0.15 & 8.07 & 7.42 $\pm$ 0.65\\
40 & J032932.56+312436.9 & 52.38569 & 31.4102637 & EM* LkHA 357 & -0.92 $\pm$ 0.04 & 0.55 & 0.43 $\pm$ 0.11 & 6.42 & 6.05 $\pm$ 0.22\\
41 & J032932.87+312712.5 & 52.3869611 & 31.4534793 & 2MASS J03293286+3127126 & -1.11 $\pm$ 0.06 & 0.22 & 0.41 $\pm$ 0.23 & 4.8 & 9.09 $\pm$ 0.18\\
42 & J032954.03+312053.0 & 52.475129 & 31.3480571 & 2MASS J03295403+3120529 & -0.64 $\pm$ 0.04 & 0.43 & 0.56 $\pm$ 0.17 & 2.68 & 2.88 $\pm$ 0.34\\
43 & J033024.09+311404.3 & 52.6004078 & 31.2345343 & 2MASS J03302409+3114043 & -0.98 $\pm$ 0.05 & 0.22 & 0.27 $\pm$ 0.07 & 6.61 & 8.04 $\pm$ 0.13\\
44 & J033035.94+303024.5 & 52.6497626 & 30.5068066 & 2MASS J03303593+3030244 & -0.96 $\pm$ 0.04 & 3.29 & 2.99 $\pm$ 0.12 & 1.09 & 1.17 $\pm$ 0.36\\
45 & J033036.97+303127.7 & 52.6540781 & 30.5243642 & SSTc2d J033037.0+303128 & -1.16 $\pm$ 0.04 & 2.87 & 2.38 $\pm$ 0.1 & 7.94 & 1.09 $\pm$ 0.52\\
46 & J033043.99+303246.9 & 52.683332 & 30.5463736 & [EDJ2009] 269 & -0.99 $\pm$ 0.04 & 2.44 & 2.22 $\pm$ 0.16 & 6.73 & 0.78 $\pm$ 0.45\\
47 & J033052.52+305417.7 & 52.7188596 & 30.9049298 & 2MASS J03305252+3054177 & -0.47 $\pm$ 0.04 & 0.12 & 0.12 $\pm$ 0.25 & 0.1 & 0.18 $\pm$ 0.29\\
48 & J033110.70+304940.5 & 52.7945859 & 30.8279314 & SSTc2d J033110.7+304941 & -0.61 $\pm$ 0.04 & 0.57 & 0.32 $\pm$ 0.25 & 2.11 & 1.15 $\pm$ 0.36\\
49 & J033114.71+304955.3 & 52.811322 & 30.8320395 & [EDJ2009] 272 & -0.45 $\pm$ 0.04 & 0.11 & 0.13 $\pm$ 0.12 & 0.62 & 0.09 $\pm$ 0.56\\
50 & J033118.31+304939.4 & 52.8263068 & 30.8276353 & SSTc2d J033118.3+304940 & -0.71 $\pm$ 0.04 & 1.48 & 2.13 $\pm$ 0.14 & 2.24 & 0.78 $\pm$ 0.3\\
51 & J033120.11+304917.5 & 52.8338214 & 30.8215501 & [EDJ2009] 274 & -0.75 $\pm$ 0.04 & 0.37 & 0.36 $\pm$ 0.17 & 0.04 & 0.04 $\pm$ 0.37\\
52 & J033128.87+303053.2 & 52.8703166 & 30.5147946 & 2MASS J03312887+3030531 & -1.57 $\pm$ 0.04 & 2.25 & 1.69 $\pm$ 0.35 & 2.13 & 0.87 $\pm$ 0.25\\
53 & J033142.41+310624.7 & 52.9267459 & 31.106877 & [EDJ2009] 278 & -1.45 $\pm$ 0.04 & 0.48 & 0.64 $\pm$ 0.13 & 2.14 & 1.83 $\pm$ 0.46\\
54 & J033233.00+310221.6 & 53.1375316 & 31.0393575 & [EDJ2009] 281 & -1.02 $\pm$ 0.04 & 1.29 & 1.4 $\pm$ 0.12 & 3.3 & 1.26 $\pm$ 0.27\\
55 & J033234.06+310055.7 & 53.1419191 & 31.0154864 & IRAS 03295+3050 & -0.94 $\pm$ 0.04 & 2.17 & 1.64 $\pm$ 0.52 & 1.69 & 0.95 $\pm$ 0.33\\
56 & J033241.70+311045.7 & 53.1737635 & 31.1793821 & 2MASS J03324171+3110461 & -0.45 $\pm$ 0.03 & 0.27 & 1.44 $\pm$ 0.09 & 1.58 & 0.94 $\pm$ 0.48\\
57 & J033312.84+312124.1 & 53.3035245 & 31.3566981 & SSTc2d J033312.8+312124 & 0.01 $\pm$ 0.04 & 2.37 & 3.14 $\pm$ 0.05 & 7.2 & 3.01 $\pm$ 0.37\\
58 & J033330.42+311050.4 & 53.3767691 & 31.180692 & EM* LkHA 327 & -1.24 $\pm$ 0.05 & 2.69 & 2.98 $\pm$ 0.32 & 3.24 & 7.86 $\pm$ 0.21\\
59 & J033341.30+311340.9 & 53.4220887 & 31.2280551 & 2MASS J03334129+3113410 & -0.85 $\pm$ 0.04 & 1.08 & 0.45 $\pm$ 0.23 & 7.46 & 0.68 $\pm$ 1.0\\
60 & J033346.93+305350.1 & 53.4455792 & 30.8972686 & [EDJ2009] 299 & -1.1 $\pm$ 0.06 & 0.16 & 0.28 $\pm$ 0.24 & 8.87 & 4.15 $\pm$ 0.18\\
61 & J033401.67+311439.6 & 53.5069625 & 31.2443577 & EM* LkHA 328 & -0.97 $\pm$ 0.04 & 1.47 & 1.57 $\pm$ 0.18 & 1.19 & 0.95 $\pm$ 0.24\\
62 & J033430.79+311324.3 & 53.6283159 & 31.2234194 & [EDJ2009] 302 & -1.03 $\pm$ 0.05 & 0.18 & 0.57 $\pm$ 0.16 & 3.06 & 5.64 $\pm$ 0.21\\
63 & J033449.86+311550.1 & 53.7077643 & 31.263929 & 2MASS J03344987+3115498 & -1.42 $\pm$ 0.04 & 0.73 & 1.47 $\pm$ 0.12 & 0.97 & 1.81 $\pm$ 0.21\\
64 & J033711.38+330303.0 & 54.2974288 & 33.0508484 & 2MASS J03371138+3303032 & -0.95 $\pm$ 0.03 & 1.25 & 1.69 $\pm$ 0.19 & 1.97 & 2.56 $\pm$ 0.32\\
65 & J034109.14+314437.9 & 55.2880837 & 31.7438668 & IRAS 03380+3135 & -1.5 $\pm$ 0.04 & 0.63 & 2.59 $\pm$ 0.12 & 0.47 & 0.52 $\pm$ 0.31\\
66 & J034114.12+315946.1 & 55.308873 & 31.996148 & 2MASS J03411412+3159462 & -1.88 $\pm$ 0.05 & 1.97 & 2.04 $\pm$ 0.14 & 9.79 & 8.31 $\pm$ 0.1\\
67 & J034157.45+314836.6 & 55.4894069 & 31.810175 & [EDJ2009] 314 & -1.27 $\pm$ 0.04 & 2.28 & 2.21 $\pm$ 0.36 & 7.82 & 8.39 $\pm$ 0.62\\
68 & J034157.76+314800.7 & 55.490705 & 31.8002143 & [EDJ2009] 315 & -0.89 $\pm$ 0.04 & 0.42 & 1.73 $\pm$ 0.1 & 0.45 & 0.87 $\pm$ 0.37\\
69 & J034204.35+314711.4 & 55.5181477 & 31.7865217 & [EDJ2009] 319 & -0.84 $\pm$ 0.04 & 0.43 & 0.21 $\pm$ 0.39 & 1.81 & 1.83 $\pm$ 0.44\\
70 & J034219.28+314326.8 & 55.5803535 & 31.7241352 & 2MASS J03421927+3143269 & -0.8 $\pm$ 0.04 & 1.08 & 1.17 $\pm$ 0.26 & 7.46 & 0.98 $\pm$ 0.34\\
71 & J034220.34+320530.9 & 55.584758 & 32.0919442 & 2MASS J03422033+3205310 & -0.94 $\pm$ 0.05 & 0.42 & 0.37 $\pm$ 0.04 & 7.36 & 2.79 $\pm$ 0.31\\
72 & J034223.33+315742.6 & 55.5972411 & 31.9618447 & [C93] 46 & -1.73 $\pm$ 0.04 & 1.97 & 1.87 $\pm$ 0.18 & 9.79 & 8.62 $\pm$ 0.08\\
73 & J034232.92+314220.5 & 55.6371724 & 31.7057131 & [EDJ2009] 326 & -0.58 $\pm$ 0.04 & 0.45 & 0.57 $\pm$ 0.06 & 3.15 & 1.83 $\pm$ 0.51\\
74 & J034254.70+314345.1 & 55.7279365 & 31.7292217 & [EDJ2009] 332 & -1.4 $\pm$ 0.05 & 1.41 & 1.43 $\pm$ 0.19 & 3.38 & 5.77 $\pm$ 0.19\\
75 & J034255.96+315841.9 & 55.7331887 & 31.9783205 & 2MASS J03425596+3158419 & -0.36 $\pm$ 0.03 & 2.14 & 1.97 $\pm$ 0.08 & 0.65 & 0.6 $\pm$ 0.17\\
76 & J034306.80+314820.4 & 55.7783399 & 31.8056847 & [EDJ2009] 336 & -1.89 $\pm$ 0.08 & 0.26 & 0.8 $\pm$ 0.07 & 1.57 & 3.02 $\pm$ 0.28\\
77 & J034328.20+320159.1 & 55.8675276 & 32.0331078 & V* V338 Per & -1.4 $\pm$ 0.05 & 0.75 & 1.33 $\pm$ 0.08 & 2.11 & 2.13 $\pm$ 0.27\\
78 & J034344.49+314309.3 & 55.9353919 & 31.7192681 & 2MASS J03434449+3143092 & -0.83 $\pm$ 0.04 & 0.8 & 0.83 $\pm$ 0.14 & 1.05 & 1.18 $\pm$ 0.67\\
79 & J034344.61+320817.7 & 55.9359083 & 32.1382646 & 2MASS J03434461+3208177 & -0.94 $\pm$ 0.05 & 0.76 & 0.71 $\pm$ 0.08 & 2.77 & 2.34 $\pm$ 0.28\\
80 & J034345.16+320358.6 & 55.9381899 & 32.0663019 & 2MASS J03434517+3203585 & 0.37 $\pm$ 0.03 & 2.44 & 2.05 $\pm$ 0.25 & 6.73 & 6.45 $\pm$ 0.64\\
81 & J034348.81+321551.4 & 55.9534082 & 32.2643026 & 2MASS J03434881+3215515 & -0.8 $\pm$ 0.05 & 0.39 & 0.39 $\pm$ 0.32 & 5.14 & 4.91 $\pm$ 0.32\\
82 & J034356.03+320213.2 & 55.9834625 & 32.0370212 & 2MASS J03435602+3202132 & -1.07 $\pm$ 0.04 & 0.69 & 1.62 $\pm$ 0.14 & 0.37 & 0.67 $\pm$ 0.47\\
83 & J034358.12+321357.0 & 55.9922053 & 32.2325024 & Cl* IC 348 LRL 323 & 0.21 $\pm$ 0.07 & 0.3 & 0.23 $\pm$ 0.16 & 8.02 & 8.86 $\pm$ 0.3\\
84 & J034358.56+321727.5 & 55.9940193 & 32.2909808 & 2MASS J03435856+3217275 & -0.93 $\pm$ 0.04 & 1.84 & 1.0 $\pm$ 0.29 & 3.06 & 3.21 $\pm$ 0.27\\
85 & J034358.90+321127.1 & 55.9954458 & 32.1908736 & 2MASS J03435890+3211270 & -1.06 $\pm$ 0.06 & 0.18 & 0.59 $\pm$ 0.38 & 0.18 & 0.95 $\pm$ 0.5\\
86 & J034359.09+321421.3 & 55.9962085 & 32.2392547 & 2MASS J03435907+3214213 & -0.26 $\pm$ 0.05 & 0.55 & 0.79 $\pm$ 0.05 & 6.41 & 6.45 $\pm$ 0.67\\
87 & J034359.64+320154.0 & 55.9985229 & 32.0316944 & 2MASS J03435964+3201539 & -0.53 $\pm$ 0.04 & 3.03 & 2.46 $\pm$ 0.31 & 2.7 & 8.51 $\pm$ 0.2\\
88 & J034406.78+320754.0 & 56.028277 & 32.1316908 & 2MASS J03440678+3207540 & -0.35 $\pm$ 0.05 & 0.14 & 0.45 $\pm$ 0.07 & 1.03 & 6.33 $\pm$ 0.41\\
89 & J034408.47+320716.5 & 56.0353011 & 32.1212639 & HD 281160 & -1.71 $\pm$ 0.04 & 1.97 & 1.92 $\pm$ 0.08 & 9.79 & 7.81 $\pm$ 0.44\\
90 & J034409.14+320709.3 & 56.0381115 & 32.1192589 & BD+31 641A & -1.5 $\pm$ 0.04 & 1.97 & 1.95 $\pm$ 0.41 & 9.79 & 9.48 $\pm$ 0.1\\
91 & J034411.62+320313.1 & 56.048419 & 32.0536554 & 2MASS J03441162+3203131 & -0.94 $\pm$ 0.04 & 1.19 & 1.48 $\pm$ 0.24 & 1.33 & 0.98 $\pm$ 0.46\\
92 & J034418.16+320456.9 & 56.0756684 & 32.0824977 & 2MASS J03441816+3204570 & -1.13 $\pm$ 0.09 & 0.29 & 0.45 $\pm$ 0.33 & 0.17 & 0.48 $\pm$ 0.39\\
93 & J034421.61+321037.7 & 56.0900646 & 32.1771427 & & -0.63 $\pm$ 0.04 & 1.08 & 0.97 $\pm$ 0.26 & 7.46 & 1.01 $\pm$ 0.45\\
94 & J034422.29+320542.6 & 56.0928827 & 32.0951789 & & 0.35 $\pm$ 0.05 & 1.63 & 1.46 $\pm$ 0.08 & 4.83 & 3.3 $\pm$ 0.58\\
95 & J034422.34+321200.4 & 56.0930856 & 32.200113 & & -1.22 $\pm$ 0.04 & 1.87 & 1.41 $\pm$ 0.14 & 6.19 & 9.21 $\pm$ 0.16\\
96 & J034422.57+320153.6 & 56.0940739 & 32.0315756 & 2MASS J03442257+3201536 & -0.79 $\pm$ 0.04 & 2.55 & 2.43 $\pm$ 0.37 & 5.91 & 5.96 $\pm$ 0.44\\
97 & J034426.68+320820.5 & 56.1111899 & 32.1390304 & 2MASS J03442668+3208203 & -0.5 $\pm$ 0.05 & 1.34 & 1.61 $\pm$ 0.3 & 5.24 & 2.25 $\pm$ 0.47\\
98 & J034427.21+322028.7 & 56.1133914 & 32.3413181 & 2MASS J03442721+3220288 & -0.45 $\pm$ 0.07 & 0.25 & 0.28 $\pm$ 0.16 & 4.28 & 4.03 $\pm$ 0.22\\
99 & J034427.24+321420.9 & 56.1135363 & 32.2391618 & & -1.09 $\pm$ 0.1 & 0.6 & 0.54 $\pm$ 0.1 & 6.02 & 4.57 $\pm$ 0.23\\
100 & J034427.89+322718.9 & 56.1162134 & 32.4552638 & 2MASS J03442789+3227189 & -1.09 $\pm$ 0.04 & 0.17 & 0.16 $\pm$ 0.11 & 1.23 & 0.51 $\pm$ 0.34\\
101 & J034428.50+315954.0 & 56.1187879 & 31.9983367 & & -1.42 $\pm$ 0.05 & 0.88 & 0.93 $\pm$ 0.1 & 8.15 & 4.95 $\pm$ 0.2\\
102 & J034429.73+321039.6 & 56.1238824 & 32.1776775 & & -0.28 $\pm$ 0.04 & 1.58 & 1.41 $\pm$ 0.14 & 0.94 & 3.32 $\pm$ 0.27\\
103 & J034429.79+320054.6 & 56.1241648 & 32.0151765 & & -0.72 $\pm$ 0.06 & 0.66 & 0.77 $\pm$ 0.12 & 8.28 & 6.47 $\pm$ 0.38\\
104 & J034430.82+320955.8 & 56.1284416 & 32.1655078 & 2MASS J03443081+3209558 & -0.17 $\pm$ 0.05 & 2.65 & 2.33 $\pm$ 0.06 & 6.09 & 6.74 $\pm$ 0.17\\
105 & J034431.20+320622.1 & 56.1300119 & 32.1061402 & V* V705 Per & -2.49 $\pm$ 0.11 & 2.09 & 2.56 $\pm$ 0.23 & 5.35 & 2.6 $\pm$ 0.25\\
106 & J034432.03+321143.7 & 56.1334627 & 32.1954928 & Cl* IC 348 H 140 & 0.28 $\pm$ 0.04 & 2.23 & 2.22 $\pm$ 0.13 & 6.52 & 0.46 $\pm$ 0.61\\
107 & J034434.20+320946.4 & 56.1425188 & 32.1629105 & BD+31 643A & -2.07 $\pm$ 0.05 & 3.13 & 3.31 $\pm$ 0.16 & 2.07 & 2.42 $\pm$ 0.62\\
108 & J034434.80+315655.2 & 56.1450193 & 31.9486743 & 2MASS J03443481+3156552 & -1.4 $\pm$ 0.05 & 0.98 & 0.63 $\pm$ 0.28 & 4.09 & 7.89 $\pm$ 0.32\\
109 & J034434.97+321531.1 & 56.145737 & 32.2586537 & 2MASS J03443498+3215311 & -0.3 $\pm$ 0.06 & 0.57 & 0.57 $\pm$ 0.07 & 2.83 & 4.64 $\pm$ 0.22\\
110 & J034435.35+321004.5 & 56.1473265 & 32.1679432 & 2MASS J03443536+3210045 & -1.19 $\pm$ 0.04 & 2.55 & 3.22 $\pm$ 0.24 & 3.65 & 2.39 $\pm$ 0.29\\
111 & J034435.67+320303.6 & 56.1486569 & 32.0510013 & 2MASS J03443568+3203035 & 0.13 $\pm$ 0.04 & 0.47 & 0.95 $\pm$ 0.1 & 9.51 & 7.89 $\pm$ 0.46\\
112 & J034436.94+320645.3 & 56.1539579 & 32.1125835 & V* V918 Per & -1.92 $\pm$ 0.08 & 1.97 & 2.1 $\pm$ 0.11 & 1.55 & 0.79 $\pm$ 0.17\\
113 & J034437.40+320901.0 & 56.1558428 & 32.1502995 & V* V710 Per & 0.16 $\pm$ 0.06 & 2.22 & 1.51 $\pm$ 0.16 & 2.9 & 3.65 $\pm$ 0.25\\
114 & J034437.88+320804.0 & 56.1578576 & 32.1344452 & V* V920 Per & -1.66 $\pm$ 0.13 & 1.73 & 1.55 $\pm$ 0.14 & 7.67 & 1.01 $\pm$ 0.3\\
115 & J034437.96+320329.7 & 56.1582056 & 32.0582586 & V* V712 Per & -0.16 $\pm$ 0.04 & 1.26 & 1.65 $\pm$ 0.11 & 2.38 & 3.48 $\pm$ 0.33\\
116 & J034438.00+321137.0 & 56.1583528 & 32.193614 & V* V713 Per & -1.16 $\pm$ 0.17 & 0.77 & 0.16 $\pm$ 0.99 & 5.56 & 5.98 $\pm$ 0.42\\
117 & J034438.17+321021.2 & 56.1590704 & 32.1725691 & 2MASS J03443814+3210215 & 0.54 $\pm$ 0.19 & 5.13 & 25.69 $\pm$ 0.28 & 0.04 & 0.52 $\pm$ 0.86\\
118 & J034438.44+320735.7 & 56.1601957 & 32.1266101 & V* V715 Per & -0.64 $\pm$ 0.08 & 1.64 & 1.53 $\pm$ 0.77 & 4.5 & 0.98 $\pm$ 0.34\\
119 & J034438.72+320843.0 & 56.1613728 & 32.1452907 & V* V717 Per & -1.29 $\pm$ 0.33 & 0.57 & 1.79 $\pm$ 0.14 & 0.01 & 3.72 $\pm$ 1.09\\
120 & J034439.16+320918.4 & 56.1631908 & 32.1551189 & V* V921 Per & -1.38 $\pm$ 0.07 & 2.68 & 1.41 $\pm$ 0.32 & 1.18 & 1.28 $\pm$ 0.21\\
121 & J034439.78+321804.0 & 56.1657808 & 32.3011331 & 2MASS J03443979+3218041 & -0.82 $\pm$ 0.06 & 0.82 & 0.81 $\pm$ 0.34 & 4.22 & 1.71 $\pm$ 0.35\\
122 & J034441.73+321202.3 & 56.1738994 & 32.2006505 & 2MASS J03444173+3212022 & 0.14 $\pm$ 0.04 & 0.49 & 0.22 $\pm$ 0.2 & 1.37 & 1.46 $\pm$ 0.78\\
123 & J034442.09+320901.2 & 56.1753791 & 32.1503355 & Cl* IC 348 CB 119 & -0.72 $\pm$ 0.05 & 0.55 & 0.54 $\pm$ 0.2 & 0.67 & 1.09 $\pm$ 0.55\\
124 & J034442.56+321002.4 & 56.1773661 & 32.1673597 & 2MASS J03444256+3210025 & 0.02 $\pm$ 0.07 & 0.18 & 0.14 $\pm$ 0.51 & 0.27 & 0.44 $\pm$ 0.49\\
125 & J034443.76+321030.3 & 56.1823352 & 32.1751 & V* V719 Per & -0.25 $\pm$ 0.05 & 1.14 & 1.35 $\pm$ 0.33 & 9.65 & 3.75 $\pm$ 0.58\\
126 & J034444.59+320812.5 & 56.1857922 & 32.136829 & V* V925 Per & -0.21 $\pm$ 0.07 & 0.57 & 1.35 $\pm$ 0.15 & 3.76 & 4.23 $\pm$ 0.38\\
127 & J034444.70+320402.5 & 56.18629 & 32.0673732 & V* V926 Per & -1.04 $\pm$ 0.04 & 1.85 & 1.61 $\pm$ 0.08 & 6.92 & 0.8 $\pm$ 0.39\\
128 & J034447.71+321911.8 & 56.1988298 & 32.3199551 & [C93] 80 & -0.98 $\pm$ 0.05 & 1.78 & 1.58 $\pm$ 0.06 & 7.63 & 2.96 $\pm$ 0.24\\
129 & J034450.65+321906.4 & 56.2110465 & 32.3184715 & V* V927 Per & -2.4 $\pm$ 0.09 & 2.24 & 2.58 $\pm$ 0.37 & 4.19 & 2.73 $\pm$ 0.18\\
130 & J034456.13+320915.0 & 56.2338955 & 32.1541925 & 2MASS J03445614+3209152 & -1.44 $\pm$ 0.1 & 1.98 & 1.64 $\pm$ 0.05 & 3.3 & 2.35 $\pm$ 0.43\\
131 & J034501.41+320501.7 & 56.2558937 & 32.0838185 & 2MASS J03450142+3205017 & -2.04 $\pm$ 0.05 & 1.56 & 1.81 $\pm$ 0.07 & 8.36 & 4.09 $\pm$ 0.26\\
132 & J034501.51+321051.3 & 56.2563233 & 32.1809357 & 2MASS J03450151+3210512 & -0.73 $\pm$ 0.06 & 0.94 & 0.93 $\pm$ 0.14 & 3.65 & 8.05 $\pm$ 0.13\\
133 & J034501.74+321427.6 & 56.2572626 & 32.2410271 & 2MASS J03450174+3214276 & -1.24 $\pm$ 0.04 & 1.41 & 1.17 $\pm$ 0.08 & 3.38 & 2.84 $\pm$ 0.25\\
134 & J034507.62+321027.9 & 56.2817792 & 32.1744282 & 2MASS J03450762+3210279 & -1.75 $\pm$ 0.05 & 1.64 & 1.56 $\pm$ 0.05 & 7.44 & 2.82 $\pm$ 0.33\\
135 & J034514.71+294503.1 & 56.3113186 & 29.7508854 & HD 281192 & -1.09 $\pm$ 0.06 & 2.87 & 2.85 $\pm$ 0.26 & 4.67 & 4.32 $\pm$ 0.23\\
136 & J034516.34+320619.9 & 56.3180861 & 32.1055416 & EM* LkHA 99 & -0.8 $\pm$ 0.04 & 2.29 & 1.5 $\pm$ 0.15 & 7.17 & 0.59 $\pm$ 0.17\\
137 & J034520.45+320634.4 & 56.3352286 & 32.1095804 & 2MASS J03452046+3206344 & -1.15 $\pm$ 0.04 & 2.16 & 2.05 $\pm$ 0.24 & 2.02 & 2.44 $\pm$ 0.39\\
138 & J034525.14+320930.3 & 56.3547711 & 32.1584269 & 2MASS J03452514+3209301 & -0.79 $\pm$ 0.04 & 0.57 & 0.53 $\pm$ 0.27 & 3.11 & 1.11 $\pm$ 0.3\\
139 & J034536.85+322556.8 & 56.4035733 & 32.4324705 & EM* LkHA 329 & -1.31 $\pm$ 0.04 & 1.25 & 1.43 $\pm$ 0.14 & 0.65 & 0.67 $\pm$ 0.28\\
140 & J034548.28+322411.8 & 56.4512063 & 32.4032995 & IRAS 03426+3214 & -1.82 $\pm$ 0.06 & 2.46 & 3.11 $\pm$ 0.1 & 3.92 & 0.5 $\pm$ 0.31\\
141 & J034747.14+330403.2 & 56.9464261 & 33.0675598 & [OH83] B5 3 & -0.55 $\pm$ 0.03 & 0.56 & 1.81 $\pm$ 0.16 & 1.41 & 1.41 $\pm$ 0.23\\
142 & J034929.05+345800.6 & 57.3710713 & 34.9668454 & 2MASS J03492906+3458006 & -0.89 $\pm$ 0.04 & 2.36 & 3.13 $\pm$ 0.16 & 3.15 & 0.07 $\pm$ 0.93\\
143 & J040443.06+261856.3 & 61.1794518 & 26.3156406 & IRAS 04016+2610 & 0.29 $\pm$ 0.04 & 0.77 & 0.63 $\pm$ 0.1 & 0.01 & 0.01 $\pm$ 0.52\\
144 & J041320.01+311047.2 & 63.3334141 & 31.1797938 & HD 281789 & 0.23 $\pm$ 0.04 & 4.31 & 4.24 $\pm$ 0.43 & 0.7 & 1.96 $\pm$ 0.75\\
145 & J041353.28+281123.1 & 63.4720167 & 28.1897541 & IRAS 04108+2803A & -0.08 $\pm$ 0.04 & 0.49 & 1.65 $\pm$ 0.08 & 0.25 & 0.59 $\pm$ 0.48\\
146 & J041357.38+291819.1 & 63.4890872 & 29.305319 & IRAS 04108+2910 & -0.78 $\pm$ 0.03 & 0.6 & 0.62 $\pm$ 0.06 & 0.42 & 0.16 $\pm$ 0.31\\
147 & J041413.58+281249.0 & 63.5566088 & 28.2136261 & V* FM Tau & -0.78 $\pm$ 0.03 & 1.86 & 2.3 $\pm$ 0.39 & 8.41 & 8.21 $\pm$ 0.58\\
148 & J041414.59+282757.9 & 63.5608221 & 28.4661111 & V* FN Tau & -0.37 $\pm$ 0.04 & 3.97 & 2.79 $\pm$ 0.07 & 3.9 & 0.41 $\pm$ 0.65\\
149 & J041417.00+281057.6 & 63.5708561 & 28.1826907 & V* CW Tau & -1.48 $\pm$ 0.07 & 3.74 & 2.91 $\pm$ 0.12 & 9.87 & 4.01 $\pm$ 0.18\\
150 & J041417.61+280609.5 & 63.5733926 & 28.1026472 & [BCG93] 1 & -0.3 $\pm$ 0.03 & 0.29 & 1.64 $\pm$ 0.1 & 0.22 & 0.64 $\pm$ 0.72\\
151 & J041426.27+280603.1 & 63.6094781 & 28.1008827 & [BHS98] MHO 1 & -0.68 $\pm$ 0.08 & 4.02 & 3.63 $\pm$ 0.1 & 1.13 & 7.83 $\pm$ 0.76\\
152 & J041430.55+280514.4 & 63.6272982 & 28.0873454 & NAME IRAS 04114+2757G & 0.2 $\pm$ 0.04 & 3.5 & 3.54 $\pm$ 0.07 & 8.06 & 3.92 $\pm$ 0.23\\
153 & J041447.30+264626.3 & 63.697116 & 26.7739795 & V* FP Tau & -1.47 $\pm$ 0.03 & 0.4 & 0.42 $\pm$ 0.22 & 0.29 & 0.52 $\pm$ 0.21\\
154 & J041447.86+264810.9 & 63.6994418 & 26.8030313 & V* CX Tau & -0.72 $\pm$ 0.03 & 0.44 & 0.42 $\pm$ 0.3 & 0.14 & 0.27 $\pm$ 0.32\\
155 & J041449.28+281230.3 & 63.705357 & 28.2084404 & V* FO Tau & -1.19 $\pm$ 0.04 & 3.02 & 2.04 $\pm$ 0.23 & 0.48 & 0.65 $\pm$ 0.3\\
156 & J041539.16+281858.3 & 63.9132044 & 28.3162091 & 2MASS J04153916+2818586 & -1.49 $\pm$ 0.04 & 0.44 & 1.79 $\pm$ 0.15 & 0.4 & 0.67 $\pm$ 0.31\\
\enddata
\end{deluxetable}
\begin{deluxetable}{cccccccccc}
\rotate
\tabletypesize{\scriptsize}
\tablecaption{Physical parameters measured for the 66 AGB candidates \label{tblAGBs}}
\tablewidth{0pt}
\tablehead{
\colhead{Order} &\colhead{Catalog Number} &\colhead{RA} & \colhead{Dec}& \colhead{Name} &\colhead{$\alpha$} &\colhead{ Mass$_{BF}$} & \colhead{Mass$_{peak}$} & \colhead{Age$_{BF}$} &
\colhead{Age$_{peak}$} \\
\colhead{} &\colhead{} &\colhead{(deg)} & \colhead{(deg)}&
\colhead{} &\colhead{} &\colhead{(M$_\odot$)} &
\colhead{(M$_\odot$)} & \colhead{(Myr)} & \colhead{(Myr)}
}
\startdata
1 & J032231.00+311527.2 & 50.629184 & 31.25754 & & -2.62 $\pm$ 0.04 & 2.54 & 2.8 & 1.04 & 1.12\\
2 & J032241.26+341237.4 & 50.671772 & 34.210327 & V* LY Per & -2.67 $\pm$ 0.11 & 3.7 & 3.71 & 0.3 & 0.34\\
3 & J032243.15+364540.1 & 50.679838 & 36.761158 & & -2.6 $\pm$ 0.05 & 0.41 & 0.52 & 0.49 & 0.51\\
4 & J032244.12+355315.0 & 50.68377 & 35.887527 & & -2.65 $\pm$ 0.04 & 0.41 & 0.5 & 0.49 & 0.48\\
5 & J032421.19+334023.3 & 51.0883 & 33.673141 & & -2.36 $\pm$ 0.05 & 0.81 & 0.78 & 0.17 & 0.16\\
6 & J032429.84+341710.0 & 51.124323 & 34.286106 & HD 20994 & -2.66 $\pm$ 0.04 & 1.93 & 1.91 & 7.05 & 7.33\\
7 & J032435.96+351836.8 & 51.149879 & 35.3102 & TYC 2349-915-1 & -2.05 $\pm$ 0.11 & 4.24 & 4.29 & 0.49 & 0.44\\
8 & J032442.56+311554.9 & 51.177375 & 31.265232 & & -2.28 $\pm$ 0.04 & 0.82 & 0.53 & 0.55 & 0.44\\
9 & J032740.52+311539.2 & 51.91891 & 31.2609 & BD+30 543 & -2.57 $\pm$ 0.04 & 2.21 & 1.94 & 4.57 & 5.79\\
10 & J032810.45+332844.3 & 52.043516 & 33.478863 & NSV 1151 & -2.37 $\pm$ 0.48 & 17.26 & 25.69 & 0.09 & 3.72\\
11 & J033110.80+284231.2 & 52.794686 & 28.708618 & TYC 1810-104-1 & -1.91 $\pm$ 0.02 & 5.57 & 5.63 & 0.14 & 0.12\\
12 & J033238.02+295708.7 & 53.158482 & 29.952421 & IRAS 03295+2947 & -2.54 $\pm$ 0.07 & 3.46 & 2.84 & 1.27 & 0.6\\
13 & J033251.29+332210.9 & 53.213706 & 33.369652 & NIPSS 277C21 & -2.24 $\pm$ 0.05 & 0.55 & 0.51 & 0.34 & 0.49\\
14 & J033446.72+345157.9 & 53.694653 & 34.866066 & 2MASS J03344671+3451578 & -2.57 $\pm$ 0.06 & 0.81 & 0.78 & 0.17 & 0.19\\
15 & J033603.04+313900.6 & 54.012667 & 31.650137 & & -2.07 $\pm$ 0.04 & 0.41 & 0.37 & 0.49 & 0.55\\
16 & J033606.72+292405.6 & 54.028058 & 29.40156 & IRAS 03330+2914 & -2.01 $\pm$ 0.09 & 1.28 & 1.26 & 0.15 & 0.14\\
17 & J033628.62+315539.3 & 54.119444 & 31.927509 & [KSP2003] J033628.70+315540.1 & -2.63 $\pm$ 0.05 & 1.64 & 1.64 & 0.09 & 0.18\\
18 & J033652.31+305348.5 & 54.218042 & 30.896807 & IRAS 03337+3043 & -2.1 $\pm$ 0.06 & 3.8 & 3.33 & 0.83 & 1.35\\
19 & J033819.09+320313.5 & 54.579494 & 32.053757 & V* V734 Per & -2.42 $\pm$ 0.1 & 3.7 & 3.69 & 0.3 & 0.31\\
20 & J033829.54+344014.6 & 54.62315 & 34.670719 & V* V735 Per & -2.37 $\pm$ 0.06 & 3.39 & 3.07 & 1.37 & 1.07\\
21 & J033956.08+305601.1 & 54.983683 & 30.933563 & & -2.58 $\pm$ 0.04 & 2.53 & 1.78 & 2.69 & 1.87\\
22 & J034025.22+303304.5 & 55.10511 & 30.551268 & & -2.37 $\pm$ 0.05 & 2.42 & 2.57 & 1.15 & 1.2\\
23 & J034057.79+311805.9 & 55.240823 & 31.301645 & V* V900 Per & -2.59 $\pm$ 0.05 & 1.93 & 1.92 & 7.05 & 6.84\\
24 & J034222.38+363037.0 & 55.593101 & 36.510273 & V* AF Per & -2.54 $\pm$ 0.53 & 5.57 & 5.57 & 0.14 & 0.14\\
25 & J034311.08+321746.4 & 55.796156 & 32.296204 & & -2.59 $\pm$ 0.05 & 3.41 & 2.65 & 1.19 & 2.03\\
26 & J034402.53+360732.5 & 56.010547 & 36.125698 & & -2.6 $\pm$ 0.04 & 0.36 & 0.36 & 0.6 & 0.59\\
27 & J034404.68+293216.5 & 56.019508 & 29.537918 & & -2.63 $\pm$ 0.04 & 1.67 & 1.84 & 1.18 & 1.39\\
28 & J034429.97+322558.8 & 56.124909 & 32.433029 & & -2.55 $\pm$ 0.05 & 1.93 & 1.52 & 7.05 & 9.39\\
29 & J034430.35+322152.8 & 56.126505 & 32.364689 & & -2.38 $\pm$ 0.05 & 0.8 & 0.95 & 3.02 & 1.96\\
30 & J034547.88+320058.6 & 56.449552 & 32.016258 & HD 281161 & -2.41 $\pm$ 0.04 & 1.93 & 1.94 & 7.05 & 7.34\\
31 & J034640.87+321724.6 & 56.670314 & 32.290203 & HD 23478 & -2.48 $\pm$ 0.05 & 3.06 & 3.05 & 2.5 & 2.5\\
32 & J034701.91+372921.3 & 56.757843 & 37.489201 & IRAS 03437+3720 & -2.51 $\pm$ 0.11 & 2.84 & 2.83 & 0.23 & 0.35\\
33 & J034733.76+295851.3 & 56.89075 & 29.98098 & V* V1190 Tau & -2.57 $\pm$ 0.06 & 0.64 & 0.74 & 0.38 & 0.47\\
34 & J034750.97+371843.4 & 56.962434 & 37.312069 & IRAS 03446+3709 & -2.49 $\pm$ 0.05 & 0.71 & 0.72 & 0.5 & 0.48\\
35 & J034935.66+263033.8 & 57.3987 & 26.509445 & V* BI Tau & -2.6 $\pm$ 0.06 & 3.17 & 3.04 & 1.15 & 1.28\\
36 & J034936.08+363441.3 & 57.400339 & 36.578152 & & -2.64 $\pm$ 0.04 & 0.35 & 0.36 & 0.76 & 0.75\\
37 & J035028.16+274005.4 & 57.617117 & 27.668297 & V* V1192 Tau & -2.42 $\pm$ 0.13 & 3.41 & 3.42 & 0.08 & 0.07\\
38 & J035103.41+362849.8 & 57.764296 & 36.480648 & IRAS 03478+3619 & -2.65 $\pm$ 0.12 & 1.64 & 1.82 & 0.09 & 0.1\\
39 & J035124.25+361528.7 & 57.851054 & 36.257935 & V* V377 Per & -2.27 $\pm$ 0.04 & 0.34 & 0.37 & 0.5 & 0.57\\
40 & J035132.31+372509.4 & 57.88466 & 37.4193 & IRAS 03482+3716 & -2.53 $\pm$ 0.06 & 0.64 & 0.76 & 0.38 & 0.42\\
41 & J035245.26+372608.4 & 58.188624 & 37.43565 & & -2.65 $\pm$ 0.05 & 0.41 & 0.4 & 0.49 & 0.47\\
42 & J035402.26+363218.3 & 58.509482 & 36.538235 & V* V637 Per & -1.16 $\pm$ 0.02 & 6.71 & 6.92 & 0.04 & 0.03\\
43 & J035448.08+344520.7 & 58.700441 & 34.755768 & & -2.62 $\pm$ 0.04 & 0.41 & 0.43 & 0.49 & 0.5\\
44 & J035801.79+365502.2 & 59.507468 & 36.917244 & & -2.63 $\pm$ 0.04 & 0.36 & 0.34 & 0.6 & 0.79\\
45 & J035936.33+313651.4 & 59.901339 & 31.61425 & IRAS 03564+3128 & -2.21 $\pm$ 0.05 & 0.52 & 0.5 & 0.26 & 0.39\\
46 & J035946.41+344056.4 & 59.943341 & 34.682392 & V* V739 Per & -2.59 $\pm$ 0.09 & 3.7 & 2.85 & 0.3 & 0.3\\
47 & J040207.33+302353.6 & 60.530554 & 30.398191 & & -2.31 $\pm$ 0.04 & 1.26 & 0.77 & 0.73 & 0.77\\
48 & J040229.82+360656.5 & 60.624242 & 36.115685 & IRAS 03592+3558 & -2.49 $\pm$ 0.07 & 1.07 & 1.51 & 0.27 & 0.23\\
49 & J040233.34+282951.5 & 60.638931 & 28.497602 & GSC 01825-00286 & -2.37 $\pm$ 0.09 & 1.17 & 1.64 & 0.14 & 0.14\\
50 & J040536.53+320230.1 & 61.40227 & 32.041737 & IRAS 04024+3154 & -2.24 $\pm$ 0.06 & 0.84 & 0.57 & 0.28 & 0.43\\
51 & J040610.35+303928.8 & 61.543103 & 30.657948 & & -2.65 $\pm$ 0.05 & 0.82 & 0.85 & 0.43 & 0.45\\
52 & J040733.34+350211.1 & 61.888932 & 35.036388 & & -2.48 $\pm$ 0.04 & 0.41 & 0.42 & 0.49 & 0.58\\
53 & J040828.00+363219.6 & 62.116687 & 36.538727 & IRAS 04051+3624 & -2.25 $\pm$ 0.09 & 5.87 & 3.94 & 4.18 & 2.45\\
54 & J040833.90+265034.7 & 62.141092 & 26.843029 & V* TV Tau & -2.45 $\pm$ 0.11 & 7.21 & 7.02 & 1.38 & 1.39\\
55 & J040847.99+313053.8 & 62.199979 & 31.514896 & V* V743 Per & -2.64 $\pm$ 0.05 & 0.55 & 0.56 & 0.41 & 0.41\\
56 & J040853.64+371045.6 & 62.223552 & 37.179352 & & -2.46 $\pm$ 0.04 & 0.35 & 0.38 & 0.79 & 0.72\\
57 & J040936.96+332937.1 & 62.404023 & 33.493721 & V* V394 Per & -1.98 $\pm$ 0.03 & 2.8 & 2.7 & 0.14 & 0.14\\
58 & J040953.65+321154.0 & 62.473612 & 32.198292 & IRAS 04067+3204 & -2.29 $\pm$ 0.06 & 0.81 & 0.72 & 0.17 & 0.23\\
59 & J040954.92+353419.2 & 62.478869 & 35.572006 & & -2.63 $\pm$ 0.05 & 0.53 & 0.54 & 0.47 & 0.47\\
60 & J041047.04+371351.4 & 62.69601 & 37.230907 & & -2.58 $\pm$ 0.05 & 0.47 & 0.5 & 0.43 & 0.44\\
61 & J041142.35+262718.2 & 62.926491 & 26.455038 & V* V1247 Tau & -2.51 $\pm$ 0.07 & 1.07 & 1.74 & 0.27 & 0.48\\
62 & J041315.68+332955.4 & 63.315343 & 33.498722 & 2MASS J04131568+3329553 & -1.89 $\pm$ 0.11 & 1.64 & 2.91 & 0.09 & 0.15\\
63 & J041343.47+262456.6 & 63.431257 & 26.415569 & V* V482 Tau & -1.33 $\pm$ 0.16 & 7.26 & 4.32 & 0.04 & 0.02\\
64 & J041422.36+342522.6 & 63.593223 & 34.422928 & & -2.43 $\pm$ 0.04 & 0.41 & 0.39 & 0.49 & 0.48\\
65 & J041424.50+363642.9 & 63.602068 & 36.611908 & & -2.44 $\pm$ 0.04 & 0.4 & 0.36 & 0.92 & 1.29\\
66 & J041516.84+355414.6 & 63.820108 & 35.904034 & 2MASS J04151682+3554145 & -2.64 $\pm$ 0.08 & 0.81 & 0.82 & 0.17 & 0.15\\
\enddata
\end{deluxetable}
\bibliographystyle{apj}
\section{Introduction}
\label{sec:Introduction}
Networks---and their \emph{topologies}---have been studied in a broad range of disciplines, leading to terms like \emph{social, economic, biological}, or \emph{chemical} networks, and, of course, mechanical networks \cite{newman2010networks}.
Here we focus on the latter and expand the theoretical and numerical analysis introduced in a companion short paper \cite{heidemann2017b}. Networks of filamentous proteins, polysaccharides or nucleic acids, essentially all semiflexible filaments, play important roles for the mechanics and stability of biological cells and tissues \cite{MacKintosh1997,howard2001mechanics}.
An important design feature of biological materials is the response to large loads, including failure, rupture, damage limitation and their recovery properties.
Failure typically starts with the rupture of single filaments once the local force exceeds a threshold; understanding it therefore requires knowledge of the force distributions in filament networks.
It turns out that topology plays a critical role in the distribution of forces in elastic (e.g., polymer) networks, but this topic has received little attention to date.
The quantitative analysis of force distributions within random polymer networks has largely relied on computational modeling \cite{heussinger2007force,Heidemann2014}.
Analytical descriptions of fiber networks have primarily used \emph{effective-medium} \cite{Thorpe1985,Broedersz2011b,Sheinman2012a} or \emph{mean-field} \cite{Storm2005,Sharma2013a,Heidemann2014} approaches.
Effective-medium theories rely on mapping a disordered system to an ordered one.
It is unclear, however, how force distributions change under this mapping.
Mean-field approaches do not consider the full network topology, but only the \emph{local} degree of connectivity.
We show that such an approach fails to describe force distributions even for a very simple model system;
\sn{in fact, topological features, i.e., cycles/loops in the networks, cause global coupling that remains prevalent even when the system becomes large.}
\begin{figure}
\centering
\input{circle-springs-curved-with-winding-N5.pdf_tex}
\caption{(a) An example network on the circle, with $N=\num{5}$ and $z=\num{2.4}$. (b) Graph representation of the network in (a). The edge/spring orientations are depicted by black arrows. The network contains two fundamental cycles, for example: $\{l_1,l_2,l_3,l_4\}$ and $\{l_4,l_5,l_6\}$. After choosing arbitrary orientations for both cycles (gray arrows), we construct linear constraints that fix their winding numbers (\cref{eq:totalEnergy})---here: $l_1+l_2+l_3-l_4 = -1$ (winds around circle once) and $l_4+l_5+l_6 = 0$ (contractible). \sn{(c) The abstract cycle graph ($z=2$) with $N=5$ (left) and three realizations on the circle with distinct topologies (same graphs but different winding numbers $g$). Top and bottom row show initial and corresponding relaxed configurations, respectively. Note that, for visualization purposes, overlapping springs are drawn with a slight offset.}}
\label{img:schematic}
\end{figure}
The simple model system that we consider here consists of ensembles of one-dimensional random spring networks on a circle.
Considering such networks is equivalent to applying \emph{periodic boundary conditions} in one dimension.
To model the effect of external force applied to the network, we employ a generation procedure that inserts springs with pre-strain so that the resulting initial configurations are not in mechanical equilibrium.
We then study the resulting force distributions of the relaxed systems.
We generate \emph{initial} network configurations as follows (\cref{img:schematic}):
(i) Place $N$ node positions \sn{(indexed from 1 to $N$)} drawn from a \emph{uniform} distribution on the circle.
(ii) Connect these nodes in the order given by their indices into one connected \emph{cycle} via springs.
We always connect consecutive nodes via the \emph{shorter} of the two possible distances.
\sn{Note that the cycle may wrap around the circle zero, one, or multiple times (\cref{img:schematic}~(c)).}
This step guarantees that each network will always have only one connected component \sn{and prevents dangling ends}.
(iii) Connect further node pairs \emph{randomly}, such that each node pair is connected by at most one spring, until the network contains $Nz/2$ springs, where the \emph{average degree} of \sn{connectivity} $z$ is chosen such that $Nz/2$ is an integer.
Each spring is linear, has rest length zero, and unit spring constant. Its length is measured along the circumference of the circle.
In order to encode this construction in an unambiguous manner we work with signed spring lengths as degrees of freedom.
\sn{The orientation of a spring is chosen such that it goes from a node of lower index to a node of higher index. This is an arbitrary choice, but defined orientations are essential in our formalism. The sign of the spring length is chosen to be positive if its orientation on the circle points counter-clockwise and negative otherwise.}
\sn{
The network can be encoded within a \emph{graph representation}, where the springs \sn{together with their orientations} are the \emph{directed} edges of the graph, with signed lengths as edge weights (\cref{img:schematic}~(b)).
To lie on the circle, the graph and edge weights must be compatible in the sense that the sum of the edge weights around each cycle of the graph is equal to an integer, which we refer to as its winding number $g$.
Our network generation procedure guarantees this compatibility. It results in a \emph{random directed Hamiltonian graph}, i.e.,
a graph that contains a cycle that visits each node exactly once, with $N$ nodes and average degree $z$. This graph comes equipped with compatible initial spring lengths/edge weights $\{\bar l_i\}_{i=1}^{Nz/2}$ that are each \emph{uniformly} distributed as $\mathcal U(-0.5,0.5)$, but, since they are coupled by integer winding numbers, not mutually independent \cite{weiss2006course} as random variables.}
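For illustration only (this is not necessarily the implementation behind the figures), the generation procedure can be sketched in a few lines of Python; the function name \texttt{generate\_network} and the reliance on \texttt{numpy} are choices made here, and the extra springs of step (iii) are likewise taken along the shorter arc:
\begin{verbatim}
# Sketch of the generation procedure (i)-(iii); numpy only.
import numpy as np

def generate_network(N, z, rng=None):
    """Edge list [(i, j), ...] with i < j, and signed initial lengths."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(0.0, 1.0, N)       # (i) node positions, unit circumference

    def signed_length(a, b):
        # shorter arc from node a to node b, signed by orientation on the circle
        d = (x[b] - x[a]) % 1.0
        return d if d <= 0.5 else d - 1.0

    edges = []
    for k in range(N):                 # (ii) Hamiltonian cycle 0-1-...-(N-1)-0
        a, b = sorted((k, (k + 1) % N))
        edges.append((a, b))           # springs point from lower to higher index
    existing = set(edges)
    while len(edges) < round(N * z / 2):   # (iii) random extra springs
        a, b = sorted(int(v) for v in rng.choice(N, 2, replace=False))
        if (a, b) not in existing:
            existing.add((a, b))
            edges.append((a, b))
    lengths = np.array([signed_length(a, b) for a, b in edges])
    return edges, lengths
\end{verbatim}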
\sn{
We seek to characterize the length (i.e. force) distributions of springs in networks after they have relaxed to mechanical equilibrium. Relaxation preserves network topology, i.e., it preserves its graph together with a set of winding numbers, that arise from the generation process. Note that networks sharing the same graph may have different sets of winding numbers, and therefore distinct relaxed states (\cref{img:schematic}~(c)).
A particular realization of an initial network uniquely determines network topology and results in a known linear solution operator for the respective mechanical equilibrium.
However, a network ensemble with a given connectivity and number of nodes includes many topologies. This leads to a random solution operator, which makes it more difficult to determine the ensemble-averaged distribution of relaxed lengths.
Motivated by experiments, where explicit information on particular realizations is hard to obtain, we study ensembles with a fixed number of nodes $N$ and average degree $z$, henceforth called $(N,z)$-ensembles.
Surprisingly, such ensembles have well defined force distributions despite varying topologies.
Explicitly accounting for these unknown underlying topologies makes our approach different from a mean-field description.
}
\section{Analytical theory}
\label{sec:Analytical theory}
Formally, as already outlined in \mw{\cite{heidemann2017b}}, our model can be stated as the following optimization problem:
\begin{align}
\text{minimize}\quad &\frac{1}{2}\boldsymbol l^T \boldsymbol l\quad \text{subject to}\quad \mathbf C \boldsymbol l = \mathbf g = \mathbf C \boldsymbol{\bar l}\,,
\label{eq:totalEnergy}
\end{align}
where $\boldsymbol l \in \mathbb R^{Nz/2}$ is the vector of all spring lengths and $\mathbf g \in \mathbb Z^m$ is the vector of winding numbers, which is determined by the vector of initial spring lengths $\boldsymbol{\bar l}$ and the \emph{signed cycle matrix} $\mathbf C \in \mathbb Z^{m \times Nz/2}$, described below.
The first part in \cref{eq:totalEnergy} minimizes the total elastic energy of the system, whereas the second part preserves the topology of the network by fixing the winding numbers of a set of $m=N(z/2-1)+1$ \emph{fundamental cycles}.
A fundamental cycle is defined as a cycle that occurs when adding a single edge to a spanning tree of the graph. There are $N-1$ edges in the spanning tree, so $Nz/2 - (N-1)$ edges can be added. Therefore, there are $N(z/2-1)+1$ fundamental cycles.
Note that the choice of fundamental cycles corresponds to the choice of a basis and is therefore not unique.
The solution to \cref{eq:totalEnergy}, however, is independent of this choice (\cref{sec:Independence of the solution on the choice of the cycle basis}).
After choosing a cycle basis, the $\mathbf C$-matrix is constructed by specifying an orientation for each fundamental cycle and then setting $C_{ji}$ equal to: $1$ if spring $i$ is part of the $j$th fundamental cycle and their orientations agree, or $-1$ if their orientations are opposite, and $0$ otherwise.
For the example in \cref{img:schematic}~(a), the cycle matrix and vector of winding numbers are given by $C_{1}=(1,1,1,-1,0,0)$, $C_{2}=(0,0,0,1,1,1)$, and $\boldsymbol g=(-1,0)^T$, respectively.
Note that winding numbers correspond to the signed number of times a cycle wraps around the circle. Contractible cycles have winding number zero.
If all cycles were contractible, then \cref{eq:totalEnergy} would have a trivial solution with all springs collapsed to a single point. It is only the presence of nontrivial cycle constraints that prevents this outcome.
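One concrete (and by no means unique) choice of cycle basis uses the path spanning tree $0$--$1$--$\dots$--$(N-1)$, which is always present due to construction step (ii); a sketch that builds the corresponding signed cycle matrix and winding numbers, using the helper from above, reads:
\begin{verbatim}
# Sketch: signed cycle matrix C for the path spanning tree 0-1-...-(N-1)
# and winding numbers g = C @ lbar (uses generate_network from above).
import numpy as np

def cycle_matrix(edges, lengths, N):
    m = len(edges) - (N - 1)                # number of fundamental cycles
    C = np.zeros((m, len(edges)), dtype=int)
    tree_idx = {e: i for i, e in enumerate(edges) if e[1] - e[0] == 1}
    row = 0
    for i, (a, b) in enumerate(edges):
        if b - a == 1:
            continue                        # spanning-tree edge, no new cycle
        # fundamental cycle: tree path a -> b, closed by the chord (a, b)
        # traversed against its own orientation
        for k in range(a, b):
            C[row, tree_idx[(k, k + 1)]] = 1
        C[row, i] = -1
        row += 1
    g = np.rint(C @ lengths).astype(int)    # winding numbers are integers
    return C, g
\end{verbatim}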
It is worth pointing out that the problem presented above is \emph{equivalent} to the classical problem of determining the currents (here $\boldsymbol l$) in an \emph{electrical network}. Force balance (or minimizing the energy $\boldsymbol l^T \boldsymbol l$) is equivalent to Kirchhoff's current law (signed currents add up to zero at a node) and---assuming unit resistances---the cycle constraints ($\mathbf C \boldsymbol l = \boldsymbol g$) correspond to Kirchhoff's voltage law (voltages in a closed loop sum up to zero), where the winding numbers $g_j$ represent voltage sources.
\Cref{eq:totalEnergy} defines a \emph{quadratic programming problem} with a unique analytic solution:
\begin{align}
\boldsymbol{l^*} &=\mathbf C^T(\mathbf C\mathbf C^T)^{-1}\underbrace{\mathbf C \boldsymbol{\bar l}}_{= \boldsymbol g} \eqqcolon \mathbf P \boldsymbol{\bar l}\,,
\label{eq:kkt}
\end{align}
which can be explicitly computed for each realization via, e.g., the optimization library IPOPT \cite{Wachter2006}.
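For small systems, this projection can equally well be evaluated directly with dense linear algebra instead of a constrained-optimization package; the following sketch (again \texttt{numpy}-based) does so:
\begin{verbatim}
# Sketch: equilibrium lengths l* = C^T (C C^T)^{-1} C lbar  (the projector P).
import numpy as np

def relax(C, lbar):
    g = C @ lbar                      # winding numbers (right-hand side)
    y = np.linalg.solve(C @ C.T, g)   # solve (C C^T) y = g
    return C.T @ y                    # l* = P lbar
\end{verbatim}
For large networks a sparse factorization or a dedicated solver remains preferable; the dense form is only meant to make the projector $\mathbf P$ explicit.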
To express the resulting force distributions of an $(N,z)$-ensemble we consider the expected histogram of the vector $\boldsymbol{l^*}$ of random variables.
This results in a \emph{univariate} probability density for the final spring lengths.
For a particular realization,
the corresponding \emph{cumulative histogram} $H_{\boldsymbol{l^*}}$ is given via
\begin{align}
H_{\boldsymbol{l^*}}(\ellstar) \coloneqq \frac{2}{Nz} \sum_{i=1}^{Nz/2}\mathbbm 1_{l_i^* \leq \ellstar}\,,
\end{align}
where $\mathbbm 1_{A}$ is the indicator function (one if $A$ is true, zero otherwise).
The quantity $H_{\boldsymbol{l^*}}(\ellstar)$ measures the fraction of elements in $\boldsymbol{l^*}$ with values less than or equal to $\ellstar$.
We are interested in a univariate \emph{cumulative distribution function} (cdf) $F_{\boldsymbol{l^*}}$, which we define as the expected value of the cumulative histogram of an $(N,z)$-ensemble:
\begin{equation}
\begin{split}
F_{\boldsymbol{l^*}}(\ellstar) \coloneqq& \E [H_{\boldsymbol{l^*}}(\ellstar)] = \frac{2}{Nz} \sum_{i=1}^{Nz/2} \E [\mathbbm 1_{l_i^*\leq \ellstar} ]\\
=& \frac{2}{Nz} \sum_{i=1}^{Nz/2} \Prob(l_i^* \leq \ellstar) = \frac{2}{Nz} \sum_{i=1}^{Nz/2} F_{l_i^*} (\ellstar)\,.
\label{eq:cumcdf}
\end{split}
\end{equation}
The quantity $F_{\boldsymbol{l^*}}(\ellstar)$ is the average over the marginal distribution functions of the individual $l_i^*$. This result defines the corresponding univariate \emph{probability density} (expected histogram), i.e.,
\begin{align}
p_{\boldsymbol{l^*}}(\ellstar) \coloneqq \frac{d}{d\ellstar}F_{\boldsymbol{l^*}}(\ellstar)= \frac{2}{Nz} \sum_{i=1}^{Nz/2} p_{l_i^*} (\ellstar)\,.
\label{eq:averagedDensity}
\end{align}
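Numerically, this ensemble density can be estimated by pooling the relaxed lengths of many independent realizations and histogramming them; a Monte Carlo sketch based on the helper functions above is:
\begin{verbatim}
# Sketch: pooled relaxed lengths of an (N, z)-ensemble; their histogram
# approximates the ensemble density p_{l*}.
import numpy as np

def ensemble_lengths(N, z, realizations=200, seed=0):
    rng = np.random.default_rng(seed)
    pooled = []
    for _ in range(realizations):
        edges, lbar = generate_network(N, z, rng)
        C, _ = cycle_matrix(edges, lbar, N)
        pooled.append(relax(C, lbar))
    return np.concatenate(pooled)
\end{verbatim}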
By decomposing the final length vector $\boldsymbol{l^*}$ into initial lengths $\boldsymbol{\bar l}$ and length changes $\dl$, i.e., $\boldsymbol{l^*} = \boldsymbol{\bar l} + \dl$, we compute
\begin{align*}
p_{l_i^*}(\ell^*) = p_{\bar l_i + \Delta l_i}(\ell^*) = \int\limits_{-\infty}^{+\infty} p_{\bar l_i }(\bar \ell) \cdot p_{\Delta l_i|\bar l_i = \bar \ell}(\ell^* - \bar \ell)\, d\ellbar\,,
\end{align*}
and therefore with \cref{eq:averagedDensity}:
\begin{align}
\label{eq:plfgeneral}
p_{\boldsymbol{l^*}}(\ellstar)
&= \frac{2}{Nz}\sum_{i=1}^{Nz/2}\int\limits_{-\infty}^{+\infty} p_{\bar l_i }(\bar \ell) \cdot p_{\Delta l_i|\bar l_i = \bar \ell}(\ell^* - \bar \ell)\, d\ellbar\,.
\end{align}
Remember that the initial spring lengths $\bar l_i$ are identically distributed, i.e., $p_{\bar l_i} = p_{\bar l}$.
\Cref{eq:plfgeneral} thus simplifies to:
\begin{align}
\label{eq:plf}
p_{\boldsymbol{l^*}}(\ellstar)= \int\limits_{-\infty}^{+\infty} p_{\bar l}\, (\ellbar) \cdot p_{\dl|\boldsymbol{\bar l}=\ellbar}\, (\ellstar-\ellbar) \,d\ellbar\,,\\
\text{with}\quad p_{\dl | \boldsymbol{\bar l}=\bar \ell}\,(\dell) \coloneqq\frac{2}{Nz} \sum_{i=1}^{Nz/2} p_{\Delta l_i | \bar l_i=\bar \ell}\,(\dell)\,.
\label{eq:averagep}
\end{align}
Note that $p_{\dl | \boldsymbol{\bar l}=\bar \ell}$, with the apparent dimensionality mismatch, is a shorthand notation that does not mean that $\bar l_i=\bar \ell$ for all indices $i$, but instead, corresponds to the average over all possible events that $\bar l_i=\bar \ell$ for some index $i$. In this sense, the $n$th raw moment of the conditional probability density \cref{eq:averagep} is defined as follows:
\begin{align}
\E\left[(\dlgiven)^n\right] \coloneqq \frac{2}{Nz} \sum_{i=1}^{Nz/2} \int\limits_{-\infty}^{+\infty} x^n\, p_{\Delta l_i|\bar l_i=\bar \ell}(x)\, dx\,.
\label{eq:moments}
\end{align}
In the following we characterize the conditional probability density given in \cref{eq:averagep} that completely determines the final distribution of spring lengths given the initial distribution (\cref{eq:plf}).
Reconsidering \cref{eq:kkt}, we write
\begin{align}
\dl = \boldsymbol{l^*} - \boldsymbol{\bar l} = (\mathbf P- \mathbf I)\boldsymbol{\bar l}\eqqcolon \mathbf S \boldsymbol{\bar l}\,.
\label{eq:defS}
\end{align}
\Cref{eq:defS} relates $\dl$ to $\boldsymbol{\bar l}$ and a random matrix $\mathbf S$, both of which vary with the topology of each realization.
It is therefore challenging to obtain $p_{\dlgiven}$ explicitly, especially since the individual $\bar l_i$ are not mutually independent.
Instead, we consider the first two moments of the probability distribution, $\E( \dlgiven )$ and $\vardlgivenl$, and investigate under which conditions $\dl|_{\boldsymbol{\bar l}=\ellbar}$ is approximately normally distributed.
In the following we will work with conditional random variables, so we now highlight two important aspects of our generation procedure that will be used extensively.
The first is that each graph cycle, and therefore constraint, contains at least three edges, implying that the edge lengths are pairwise independent as random variables.
The second aspect is that we can fix the abstract graph structure in our generation procedure, leading to $(N,z)$-ensembles with varying winding numbers, but with a constant $\mathbf{S}$-matrix (e.g., \cref{img:schematic}). These fixed-graph-ensembles still contain identically, uniformly distributed random variables $\bar l_j \sim \mathcal U(-0.5,0.5)$.
\subsection{Conditional mean}
\label{sub:Conditional mean}
In this section we compute $\E(\dlgiven)$ for $(N,z)$-ensembles.
We first derive the conditional mean for a fixed-graph-ensemble, i.e., $\E[ (\dlgiven) | \mathbf S ]$, and then generalize the result to $(N,z)$-ensembles.
\Cref{eq:defS,eq:moments} lead to:
\begin{equation}
\begin{split}
\E[(&\dlgiven) | \mathbf S] =
\frac{2}{Nz}\sum\limits_{i=1}^{Nz/2} \E[ (\Delta l_i|\bar l_i = \ellbar)|\mathbf S]\\%=\boldsymbol{\mathcal S} ]\\
&=\frac{2}{Nz}\sum\limits_{i=1}^{Nz/2}\left(S_{ii} \ellbar+\sum\limits_{j=1,\,j\neq i}^{Nz/2}S_{ij}\underbrace{\E[\bar l_j|\bar l_i = \bar \ell]}_{=\E[\bar l_j]=0} \right) \\
&= \frac{2\,\ellbar}{Nz} \tr \mathbf S\,,
\label{eq:muconditional1}
\end{split}
\end{equation}
where we used the fact that fixed-graph-ensembles have uniformly distributed edge random variables that are pairwise independent.
We further make use of our knowledge about the graph's cycle matrix $\mathbf C$ to determine $\tr \mathbf S$. First note that by definition ${\tr \mathbf S = \tr \mathbf P-Nz/2}$ (\cref{eq:defS}).
The projector property of $\mathbf P$ (i.e., ${\mathbf P^2=\mathbf P}$) leads to ${\tr \mathbf P = \dim (\im \mathbf P) = Nz/2 - \dim(\ker \mathbf P)}$, because $\mathbf P$ has eigenvalues $0$ and $1$ only.
Furthermore, $\ker \mathbf P = \ker \mathbf C$, by definition (\cref{eq:kkt}), and hence $\tr \mathbf S = - \dim(\ker \mathbf C)$.
Recall that $\mathbf C$ contains $N(z/2-1)+1$ linearly independent rows corresponding to a set of fundamental cycles of the graph, i.e., $\mathbf C$ has full rank and so $\dim(\ker \mathbf C) = Nz/2 - (N(z/2-1)+1) = N-1$. It follows that
\begin{align}
\tr \mathbf S = 1-N
\label{eq:traceS}
\end{align}
is an invariant of the $(N,z)$-ensemble as it surprisingly only depends on the number of nodes in the graph.
Making use of this invariance together with the general property of the expected value, $\E(X)=\E_Y[\E(X|Y)]$, we combine \cref{eq:muconditional1,eq:traceS} to obtain:
\begin{equation}
\begin{split}
\E(\dlgiven) &= \E_{\mathbf S}\big[\E[ (\dlgiven) | \mathbf S ]\big]\\
&=- \frac{2\, \ellbar}{z} \left(1-\frac{1}{N}\right)\,.
\end{split}
\label{eq:muconditional}
\end{equation}
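Both the invariant \cref{eq:traceS} and the resulting conditional-mean slope are easy to verify numerically for a single realization; with the helpers sketched above:
\begin{verbatim}
# Sketch: check tr S = 1 - N and the conditional-mean slope for one realization.
import numpy as np

N, z = 200, 3
edges, lbar = generate_network(N, z, np.random.default_rng(1))
C, _ = cycle_matrix(edges, lbar, N)
P = C.T @ np.linalg.solve(C @ C.T, C)
S = P - np.eye(C.shape[1])
print(np.trace(S), 1 - N)                               # both -199, up to rounding
print(2 * np.trace(S) / (N * z), -2 / z * (1 - 1 / N))  # slope of E(dl | lbar)
\end{verbatim}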
\subsection{Conditional variance}
\label{sub:Conditional variance}
The conditional variance $\vardlgivenl$ remains challenging to express analytically for arbitrary $z$ and $N$.
To compute the conditional mean, we used the essential fact that \mw{expectation is always additive regardless of the dependencies between the random variables. This is not true for variances.
The variance is only additive if the terms are pairwise independent.}
While the edge random variables are pairwise independent, they are in general not conditionally pairwise independent: if one edge length of a triangle (a cycle with three edges) is fixed, the remaining two edge random variables are coupled by the fact that, together with the fixed length, they must sum to an integer winding number.
For an $(N,z)$-ensemble, we do not know the abstract graphs, let alone their triangle structures, making the general computation of the conditional variance difficult.
For two extreme cases, namely the cycle graph ($z=2$, $N>3$) and the complete graph ($z=N-1$, each node connected to every other node), there exists only a single possible graph with a known triangle structure. Both are symmetric (i.e., vertex- and edge-transitive \cite{biggs1993algebraic}).
In particular, edge-transitivity (informally: edges are indistinguishable from each other) allows us to reduce to a single entry in $\boldsymbol{l^*}$, since $p_{\boldsymbol{l^*}} = p_{l_i^*}$.
The single component $l_i^*$ is given by a weighted sum of identically distributed, but dependent random variables (\cref{eq:kkt}), which we analyze to derive $\vardlgivenl$ explicitly.
We first present the conditional variance derivation for these extreme cases and then discuss the more general intermediate-connectivity regime $2<z<N-1$ that contains ensembles of multiple graphs.
The complexity in this regime is highlighted by the intricacies involved in deriving the variance for a fixed graph (e.g., the complete graph), which already requires a special choice of basis to obtain a tractable expression for $(\mathbf C \mathbf C^T)^{-1}$.
\subsubsection{Cycle graph}
For the cycle graph ($z=2$), there is only one cycle that contains all $N$ edges.
Therefore, the cycle matrix can be written as $\mathbf C = (1,1,\dots,1) \in \mathbb R^{N}$.
It follows that \cref{eq:kkt} simplifies to
\begin{align}
\boldsymbol{l^*} = N^{-1} \mathbf C^T\mathbf C \boldsymbol{\bar l} = (g/N)\, \boldsymbol 1 \,,
\label{eq:kkt-cycle}
\end{align}
where $g=\sum_{j=1}^N \bar l_j$ is the winding number of the cycle
and $\boldsymbol 1\in \mathbb R^N$ is the vector of ones.
We derive the conditional variance $\vardlgivenl = N^{-1} \sum_{i=1}^N \Var( \Delta l_i| \bar l_i= \bar \ell )$ for the cycle graph.
By edge-transitivity $\vardlgivenl = \Var( \Delta l_i| \bar l_i= \bar \ell )$ and by \cref{eq:kkt-cycle}, $\Delta l_i = l^*_i - \bar l_i = g/N - \bar l_i = \sum_{j=1}^N \bar l_j/N - \bar l_i$.
For the cycle graph, if $N>3$, the conditional edge random variables are pairwise independent, so we compute:
\begin{align}
\Var( \Delta l_i| \bar l_i=\bar\ell ) &= \frac{1}{N^2} \sum_{j=1,\,j\neq i}^N \Var (\bar l_j)\\
&= \frac{N-1}{N^2} \Var( \bar l )\,,
\label{eq:var-dlicond}
\end{align}
a constant that is independent of the initial spring length $\bar \ell$, showing that $\vardlgivenl = \meanvar$.
Conditional pairwise independence only holds for $N>3$ because, for $N=3$, if we condition on one length ($\bar l_1=\barell$) the remaining two lengths are dependent via $\bar l_3 = g-\barell - \bar l_2$.
Therefore, a similar computation for the conditional variance of a general ensemble does not hold, since each graph may contain triangles with conditionally pairwise dependent edges.
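The constancy of the conditional variance can also be checked by brute force: pool many cycle-graph realizations, keep only springs whose initial length falls into a narrow window around a chosen $\barell$, and compare the sample variance of their length changes with $\frac{N-1}{N^2}\Var(\bar l)$. In the following sketch (helpers as above) the value $\barell=0.25$ and the window width are arbitrary choices:
\begin{verbatim}
# Sketch: Monte Carlo check of Var(dl_i | lbar_i) = (N-1)/N^2 * 1/12
# for the cycle graph (z = 2); helpers as sketched above.
import numpy as np

N, reps = 50, 8000
rng = np.random.default_rng(2)
lbar_all, dl_all = [], []
for _ in range(reps):
    edges, lbar = generate_network(N, 2, rng)
    C, _ = cycle_matrix(edges, lbar, N)
    dl_all.append(relax(C, lbar) - lbar)
    lbar_all.append(lbar)
lbar_all, dl_all = np.concatenate(lbar_all), np.concatenate(dl_all)
window = np.abs(lbar_all - 0.25) < 0.005    # condition on lbar_i near 0.25
print(np.var(dl_all[window]), (N - 1) / N**2 / 12)
\end{verbatim}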
\begin{figure*}
\centering
\includegraphics{variance-comparison-new-notation-graph-inset}
\caption{Normalized conditional variance $\vardlgivenl/\meanvar$ as a function of $\bar \ell$ for graphs with $N=100$ and varying $z$ values. For each value of $z$, data points correspond to ensemble averages (repeated simulations) with \num{4.95e6} springs in total. We use local linear regression with \num{3e4} nearest neighbors to estimate the variance for different values of $\bar \ell$. The solid lines correspond to the analytically derived expressions for cycle and complete graph (illustrated in the insets). In the intermediate regime of connectivity, the variance shows a continuous transition between the two extreme cases.}
\label{fig:conditionalVariance}
\end{figure*}
\subsubsection{Complete graph}
\label{sss:complete}
For the case of the complete graph ($z=N-1$), the derivation of the conditional variance is significantly more involved.
In order to obtain manageable algebraic expressions, {one} needs to carefully choose the cycle basis (i.e., spanning tree).
This choice of basis leads to a tractable expression for $(\mathbf C \mathbf C^T)^{-1}$, which can then be applied to reformulate the problem in terms of conditionally independent winding number random variables.
\begin{figure}
\centering
\input{spanning-tree-complete-graph-N8.pdf_tex}
\caption{(a) The complete graph ($z=N-1$) for $N=9$ vertices, here shown as undirected graph for clarity. (b) A spanning tree of a directed version of the complete graph in (a). The chosen spanning tree is based at the vertex at the center and, from there, reaches out to all $N-1$ other vertices. The edge orientations of the spanning tree edges (black) are chosen such that they have opposite orientation with respect to a connecting cycle (see, e.g., the cycle formed by the gray edge).\label{fig:spanning-tree-complete}}
\end{figure}
We choose a spanning tree as shown in \cref{fig:spanning-tree-complete}.
For the following derivation, we label the edges such that the first $N-1$ edges correspond to the edges of the spanning tree.
The other $m=N(N-1)/2-(N-1)$ edges are the ones that are added to the spanning tree to construct the fundamental cycles (here: all triangles).
We order the cycles in $\mathbf C$ according to these edges and decompose the cycle matrix into two parts:
\begin{align}
\mathbf C = (\underbrace{\mathbf A}_{N-1} | \underbrace{\mathbf I}_{N(N-1)/2-(N-1)})\,,
\label{eq:Cdecomp}
\end{align}
where $\mathbf I$ is the identity matrix.
Our first result is that for spanning tree edges $\boldsymbol l^*_\text{st}\coloneqq \{l^*_i\}_{i=1}^{N-1}$ we have that
\begin{align}
\boldsymbol l^*_{\text{st}} =
N^{-1} (\C^T \C)_\text{st} \cdot \boldsymbol{\bar l}=
N^{-1}
\begin{pmatrix}
\A^T \A & \A^T
\end{pmatrix}
\cdot \boldsymbol{\bar l}\,,
\label{eq:ctc}
\end{align}
where $(\C^T \C)_\text{st}$ corresponds to the first $N-1$ rows of $\C^T \C$. The importance of this result is that the symmetries of the complete graph allow it to extend to all edges.
Indeed, each vertex of the complete graph defines a spanning tree as shown in \cref{fig:spanning-tree-complete}; therefore every edge can be seen as such a spanning tree edge. Since the solution is independent of the choice of the cycle basis, and therefore of the spanning tree, the following derivations hold for all edges in the graph.
Since, by \cref{eq:kkt,eq:Cdecomp},
\begin{align*}
(\boldsymbol{l^*})_\text{st} =
\left(
\mathbf A^T (\mathbf A \mathbf A^T + \mathbf I)^{-1} \mathbf A \quad \mathbf A^T(\mathbf A \mathbf A^T + \mathbf I)^{-1} \right)\cdot \boldsymbol{\bar l}\,,
\end{align*}
proving \cref{eq:ctc} is equivalent to showing that
\begin{align}
\mathbf A^T(\mathbf A \mathbf A^T + \mathbf I)^{-1} = N^{-1}\A^T \nonumber\\
\Leftrightarrow \; \left[(N-1)\mathbf I - \mathbf A^T \mathbf A\right] \mathbf A^T = 0\,.
\label{eq:rightAT}
\end{align}
We can construct $\A^T \A \in \mathbb R^{(N-1)\times(N-1)}$ explicitly: The $ij$th entry counts the number of cycles that are shared by the edges $i$ and $j$---with a contribution of $1$ if the edges have the same orientation with respect to the cycle, and $-1$ otherwise.
As can be seen in \cref{fig:spanning-tree-complete}, two spanning tree edges only share one cycle with opposite orientations, hence $(A^T A)_{ij}=-1$ for $i\neq j$.
Each edge is itself part of $N-2$ cycles, so $(A^T A)_{ij}=N-2$ for $i=j$:
\begin{align}
(\A^T \A)_{ij} =
\begin{cases}
N-2, \quad &\text{when} \; i=j \\
-1, \quad &\text{when} \; i\neq j\,,
\end{cases}
\quad \text{i.e.,} \quad
\A^T \A = (N-1) \mathbf I - \mathbf J\,,
\label{eq:ATA}
\end{align}
where $\mathbf J \in \mathbb R^{(N-1)\times(N-1)}$ is the matrix with ones everywhere.
Substitution into \cref{eq:rightAT} yields:
\begin{align}
\mathbf J \mathbf A^T = 0\,.
\label{eq:ja}
\end{align}
\Cref{eq:ja} holds true since each column of $\mathbf A^T$ contains two nonzero entries, $1$ and $-1$, due to all fundamental cycles (triangles) involving two spanning tree edges with opposite orientations (\cref{fig:spanning-tree-complete}).
We have therefore shown that for the spanning tree edges of the complete graph, the following relation holds:
\begin{align}
\boldsymbol{l^*_{\text{st}}}
= N^{-1} (\C^T\C)_{\text{st}} \cdot \boldsymbol{\bar l} = N^{-1} (\C^T \boldsymbol g)_\text{st}\,,
\label{eq:spanningedges}
\end{align}
where $(\C^T \boldsymbol g)_\text{st}$ is the vector of the first $N-1$ entries of $\C^T \boldsymbol g$, and $\boldsymbol g = \C \boldsymbol{\bar l}$ is the vector of winding numbers (\cref{eq:totalEnergy}).
\Cref{eq:spanningedges} allows us to change perspective to winding number random variables.
For a particular edge in the spanning tree, we compute $l^*_\text{st} = \frac{1}{N}\sum_{j=1}^{N-2}g_j$, and therefore, $\Delta l_\text{st} = \frac{1}{N}\sum_{j=1}^{N-2}g_j - \bar l_\text{st}\,,$
since the edge is contained in exactly $N-2$ fundamental cycles, which we have assumed correspond to the first $N-2$ entries in the $\boldsymbol g$ vector, and edge and cycle orientations are aligned.
For the conditional random variable $\Delta l_\text{st}|_{\bar l_{\text{st}}=\barell}$, it follows that:
\begin{align}
\Delta l_\text{st}|_{\bar l_{\text{st}}=\barell} = \frac{1}{N}\sum\limits_{j=1}^{N-2}g_j|_{\barlst=\barell} - \barell\,.
\label{eq:lstarsumgcond}
\end{align}
Observe that the $g_j|_{\barlst=\barell}$ are independent random variables since their only potential dependence, their common edge, is conditioned out.
Each winding number $g_j|_{\barlst=\barell}$ corresponds to a fundamental cycle that is a triangle (\cref{fig:spanning-tree-complete}), i.e., involves only three edges of which one is fixed.
Therefore we cannot use conditional pairwise independence of edge lengths as in the case of the cycle graph to compute the variance.
Instead, we derive the winding number distribution of a triangle explicitly.
The initial edge lengths are distributed as $\bar l_j \sim \mathcal U(-0.5,0.5)$, so the winding numbers can only attain three values $\{-1,0,1\}$.
In particular, $g_j|_{\bar l_\text{st}=\barell} = \barell + \bar l_{j_1} +\bar l_{j_2}$, where we choose positive signs since the lengths are distributed symmetrically around zero.
We compute the probability of $g_j|_{\bar l_\text{st}=\barell}$ attaining the value zero:
\begin{equation}
\begin{split}
P(g_j|_{\bar l_\text{st}=\barell}=0) &= P(\bar l_{j_2} \in [-\barell - 0.5, -\barell + 0.5])\\
&= \int \limits_{-\barell - 0.5}^{-\barell + 0.5} \chi_{[-0.5,0.5]}(x) \, dx = 1 - |\barell|\,,
\label{eq:gdist1}
\end{split}
\end{equation}
where $\chi_{[-0.5,0.5]}(\cdot)$ is the characteristic function on the interval $[-0.5,0.5]$.
The remaining probability is assigned to either $g_j|_{\bar l_\text{st}=\barell}=1$ or $g_j|_{\bar l_\text{st}=\barell}=-1$ depending on whether the given $\barell$ is positive or negative:
\begin{align}
P(g_j|_{\bar l_\text{st}=\barell\geq 0}= 1) = P(g_j|_{\bar l_\text{st}=\barell\leq 0}= -1) = |\barell|\,.
\label{eq:gdist2}
\end{align}
Using \cref{eq:lstarsumgcond}, conditional independence of the $g_j|_{\bar l_\text{st}=\barell}$, and their probability distribution, \cref{eq:gdist1,eq:gdist2}, we have:
\begin{align}
\Var(\Delta l_\text{st}|{\bar l_{\text{st}}=\barell}) &= \frac{N-2}{N^2}\Var(g_j|\bar l_\text{st}=\barell)\\
&= \frac{N-2}{N^2} (|\barell|-\barell^2) \,.
\label{eq:varcomplete}
\end{align}
In contrast to the cycle graph (\cref{eq:var-dlicond}), the conditional variance $\vardlgivenl = \Var(\Delta l_\text{st}|{\bar l_{\text{st}}=\barell})$ for the complete graph (\cref{eq:varcomplete}) depends on the initial spring length $\bar \ell$, as is shown in \cref{fig:conditionalVariance}.
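The $\barell$-dependence of \cref{eq:varcomplete} can be checked with the same brute-force conditioning, now for a small complete graph so that $z=N-1$ remains cheap; again the window and the value $\barell=0.3$ are arbitrary choices:
\begin{verbatim}
# Sketch: Monte Carlo check of Var(dl | lbar = ell) = (N-2)/N^2 (|ell| - ell^2)
# for the complete graph; helpers as sketched above.
import numpy as np

N, reps, ell = 9, 10000, 0.3
rng = np.random.default_rng(3)
lbar_all, dl_all = [], []
for _ in range(reps):
    edges, lbar = generate_network(N, N - 1, rng)
    C, _ = cycle_matrix(edges, lbar, N)
    dl_all.append(relax(C, lbar) - lbar)
    lbar_all.append(lbar)
lbar_all, dl_all = np.concatenate(lbar_all), np.concatenate(dl_all)
window = np.abs(lbar_all - ell) < 0.01
print(np.var(dl_all[window]), (N - 2) / N**2 * (abs(ell) - ell**2))
\end{verbatim}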
\subsubsection{Intermediate-connectivity regime}
\sn{For the intermediate-connectivity regime, $2<z<N-1$, a tractable expression for $(\mathbf C \mathbf C^T)^{-1}$, as for the complete graph, remains elusive; however, numerical data suggest that the variance exhibits a continuous transition between the two extremes (\cref{fig:conditionalVariance}).}
We also observe that the conditional variance is approximately constant given that $z \ll N$. This is the most relevant case for biological networks where typically $z\lesssim4$.
For $z\ll N$, we may thus approximate $\vardlgivenl\approx\meanvar$\sn{, which we now derive.}
The \emph{law of total variance} \cite{weiss2006course} \sn{states}:
\begin{align}
\E_{\boldsymbol{\bar l}}[\Var(\dl\,|\,\boldsymbol{\bar l})] = \Var( \dl ) - \Var_{\boldsymbol{\bar l}}[\E( \dl\,|\,\boldsymbol{\bar l})]\,.
\label{eq:totalvar}
\end{align}
We first compute $\Var( \dl )$, again, by initially fixing $\mathbf S$ and considering $\Var( \dl | \mathbf S )$.
With \cref{eq:defS,eq:moments} we compute
\begin{align*}
\Var[ \dl | \mathbf S ] = \frac{2}{Nz}\sum_{i=1}^{Nz/2} \Var(\Delta l_i | \mathbf S) =\frac{2 \Var(\bar l)}{Nz}\sum_{i,j=1}^{Nz/2} S_{ij}^2\,,
\end{align*}
where the second equality follows from fixed-graph ensembles having uniformly distributed edge random variables that are pairwise independent.
Again, we use that $\mathbf P^2=\mathbf P$, hence $\mathbf S^2 = -\mathbf S$, and therefore $\sum_{j=1}^{Nz/2} S_{ij}^2 = -S_{ii}$.
Insertion into the equation above yields:
\begin{align}
\frac{\Var( \dl | \mathbf S )}{\Var(\bar l)} =-\frac{2\tr \mathbf S}{Nz} = \frac{2}{z} \left(1-\frac{1}{N}\right)\,,
\end{align}
where the second equality is due to \cref{eq:traceS}.
Application of the law of total variance gives:
\begin{equation}
\begin{split}
\Var(\dl) &= \E_{\mathbf S}[\Var(\dl | \mathbf S)] + \Var_{\mathbf S}[\underbrace{\E(\dl | \mathbf S)}_{=0}] \\
&= \frac{2}{z}\left(1-\frac{1}{N}\right) \Var(\bar l)\,,
\label{eq:sigmadl2}
\end{split}
\end{equation}
where the second term in the sum vanishes using \cref{eq:defS} since $\E(\bar l_i)=0$ (analogous to the computation in \cref{eq:muconditional1}).
From \cref{eq:muconditional} we obtain
$\Var_{\boldsymbol{\bar l}}[\E( \dl\,|\,\boldsymbol{\bar l})] = \left[\frac{2}{z}\left(1-\frac{1}{N}\right)\right]^2 \Var(\bar l)$,
and substituting into \cref{eq:totalvar} yields:
\begin{align}
\frac{\E_{\boldsymbol{\bar l}} [\vardlgiven]}{\varlbar} &=\frac{2}{z}\left(1-\frac{1}{N}\right)\left[1 - \frac{2}{z}\left(1-\frac{1}{N}\right)\right] \,.
\label{eq:meanvar}
\end{align}
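As a consistency check of \cref{eq:sigmadl2}, which enters \cref{eq:meanvar} through the law of total variance, the unconditional variance of the length changes can be estimated directly from pooled realizations (helpers as above):
\begin{verbatim}
# Sketch: Monte Carlo check of Var(dl) = (2/z)(1 - 1/N) * 1/12;
# helpers as sketched above.
import numpy as np

N, z, reps = 100, 4, 500
rng = np.random.default_rng(4)
dl = []
for _ in range(reps):
    edges, lbar = generate_network(N, z, rng)
    C, _ = cycle_matrix(edges, lbar, N)
    dl.append(relax(C, lbar) - lbar)
dl = np.concatenate(dl)
print(np.var(dl), (2 / z) * (1 - 1 / N) / 12)
\end{verbatim}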
\subsection{Normality}
\label{sub:Normality}
If $\dl|_{\boldsymbol{\bar l}=\ellbar}$ were normally distributed, having estimates for mean and variance (\cref{eq:muconditional,eq:meanvar}) would be sufficient to fully characterize $p_{\dlgiven}$.
Indeed, for the two extremes, cycle and complete graph, we can prove that $\dl|_{\boldsymbol{\bar l}=\ellbar} $ is \emph{normally distributed} in the limit $N \to \infty$, with a rate of convergence proportional to $(N-2)^{-1/2}$.
This result might look like a direct application of the classical \emph{central limit theorem}.
However, since the edge lengths are not independent as random variables, more sophisticated techniques are required to represent the solution in terms of a suitable set of \sn{mutually} independent random variables.
In contrast to situations in time series analysis \cite{brockwell2009time}, where independence holds beyond a certain time window, in our case the cycle constraints prohibit localization of dependencies.
\sn{To deal with this problem, we reduce the number of variables by relaxing each integer cycle constraint to an interval constraint.}
Harnessing the resulting independence then requires a non-standard transformation of random variables, which complicates a direct application of the Berry-Esseen theorem \cite{berry1941accuracy,esseen1942liapounoff} (a deviation-bound version of the central-limit theorem) to obtain a quantitative bound on the distance to a normal distribution.
\mw{The rest of this section is split into three parts.
We begin by proving the results for the cycle and complete graph, and then investigate the intermediate-connectivity regime.
Throughout, note how the intricacies of the proofs of the extreme cases are further complicated in the intermediate-connectivity regime, where ensembles have varying graph structure and lack symmetry.}
\subsubsection{Cycle graph}
For the cycle graph ($z=2$), we prove that $\dlc$ is normally distributed in the limit $N \to \infty$.
The key idea is a relaxation of the integer constraint to an interval constraint.
Using $\Delta l_i = g/N - \bar l_i = \sum_{j=1}^N \bar l_j/N - \bar l_i$ (\cref{eq:kkt-cycle}) we introduce the standardized ($\E( Y_N ) =0$, $\Var( Y_N )=1$) random variable
\begin{align}
Y_{N} &\coloneqq \frac{\dlc-\E(\dlC)}{\sqrt{\Var(\dlC)}}\\
&= \frac{\gc/N - \barell - (\barell/N -\barell)}{\sqrt{\frac{N-1}{N^2} \varlbar}}\\
&= \frac{\gc - \barell}{\sqrt{(N-1)\varlbar}}\,,
\end{align}
and compare its cdf $F_{Y_N}(x)$ to that of the standard normal $\Phi_{0,1}(x)$.
We find that
\begin{align}
F_{Y_N}(x) &= \Prob(Y_N \leq x)\\
&= \Prob \left(g|_{\bar l_i=\barell} \leq x\,\sqrt{(N-1)\varlbar} + \barell\right)\\
&= \sum_{k = -\infty}^{\left\lfloor x\,\sqrt{(N-1)\varlbar} + \barell \right\rfloor} \Prob(g|_{\bar l_i=\barell} = k)\,,
\label{eq:Fcycle1}
\end{align}
where we have used the fact that $g \in \mathbb Z$ and $\lfloor \cdot \rfloor$ denotes the floor operator.
Using $g|_{\bar l_i = \barell} = \barell + \sum_{j=1, \,j\neq i}^N \bar l_j$
we can formalize the integer relaxation by expressing the probability of the conditional winding number as follows:
\begin{align}
\Prob(g|_{\bar l_i=\barell}=k) = \int_{k-\barell-0.5}^{k-\barell+0.5} \nu_{N-2}(t)\, dt\,,
\label{eq:probg}
\end{align}
where $\nu_{N-2}(t)$ corresponds to the probability density of the sum of $N-2$ uniformly distributed independent random variables on the interval $[-0.5,0.5]$. We call this random variable $U_{N-2}$.
There are only $N-2$ independent random variables because one of the $N$ lengths is fixed to $\barell$ and another one is determined to make sure that an integer winding number is attained for $g$.
Substituting \cref{eq:probg} into \cref{eq:Fcycle1} and expressing $\left\lfloor x\,\sqrt{(N-1)\varlbar}+ \barell\right\rfloor = x\,\sqrt{(N-1)\varlbar} + \barell - \delta(x)$, with the random variable $\delta(x) \in [0,1)$, leads to:
\begin{align*}
F_{Y_N}(x) &= \sum_{k = -\infty}^{x\,\sqrt{(N-1)\varlbar} + \barell -\delta(x)} \int_{k-\barell - 0.5}^{k -\barell + 0.5} \nu_{N-2}(t)\, dt \\
&= \int_{-\infty}^{x\,\sqrt{(N-1)\varlbar} + 0.5 -\delta(x)} \nu_{N-2}(t)\, dt \\&= F_{U_{N-2}}\left(x\,\sqrt{(N-1)\varlbar} + 0.5 - \delta(x)\right)\,.
\end{align*}
We are interested in the distance to the cdf $\Phi_{0,1}(x)$ of the standard normal distribution.
To calculate this distance, we perform a change of variables, which results in a standardized sum of uniforms $U_{N-2}/(\sigmaU)$, where $\sigma_{\bar l}=\sqrt{\Var(\bar l)}$.
However, this causes the cdf of our random variable and the standard normal cdf to have different arguments.
We therefore split the computation into two steps: one that measures the distance to a shifted standard normal cdf, and the other that measures the deviations introduced by this shift.
With the shorthand notation $\newx \coloneqq x\,\sqrt{(N-1)\varlbar} + 0.5 - \delta(x)$, the described procedure corresponds to the following computation:
\begin{align*}
&|F_{Y_N}(x) - \Phi_{0,1}(x)|=\left|F_{U_{N-2}}\left( \newx \right) - \Phi_{0,1}(x)\right|\nonumber \\[1ex]
&=\left|F_{\frac{U_{N-2}}{\sigmaU}}\left(\frac{\newx}{\sigmaU}\right) - \Phi_{0,1}(x)\right|\nonumber\\[1ex]
&\leq\underbrace{\left|F_{\frac{U_{N-2}}{\sigmaU}}\left(\frac{\newx}{\sigmaU}\right) - \Phi_{0,1}\left(\frac{\newx}{\sigmaU}\right)\right|}_{\eqqcolon I(x)}
\\[1ex]
&+\underbrace{\left|\Phi_{0,1}\left(\frac{\newx}{\sigmaU}\right)-\Phi_{0,1}(x)\right|}_{\eqqcolon II(x)}\,.
\end{align*}
$I(x)$ can be bounded using the Berry-Esseen theorem. Bounding $II(x)$ requires a detailed case analysis (see~\cref{sec:app1} for details). We arrive at:
\begin{align}
&\supr |F_{Y_N}(x) - \Phi_{0,1}(x)| \leq \supr \left[I(x) + II(x)\right]\\
&\leq\frac{12^{3/2} \,C}{32\sqrt{N-2}} +\frac{1}{\sqrt{2 \pi (N-2)\varlbar}}\\
&= \frac{1}{\sqrt{N-2}}\left(\frac{12^{3/2} \,C}{32} +\frac{1}{\sqrt{2 \pi\varlbar}}\right)\,.
\label{eq:boundcycle}
\end{align}
Therefore, the cdf of $\dlc$ converges to a normal distribution with the rate $(N-2)^{-1/2}$, independent of $\barell$.
Since we showed that $F_{\Delta l_i|\bar l_i=\barell}$ is independent of the edge $i$, \cref{eq:cumcdf} implies $F_{\dlgiven}\,(x)=F_{\Delta l_i|\bar l_i=\barell}\,(x)$, and therefore $F_{\dlgiven}$ converges to a normal distribution as well.
\subsubsection{Complete graph}
For the complete graph, our proof of normality relies on the reduction to spanning tree edges as outlined in the conditional variance section.
In particular, this allows us to write the cdf in terms of the winding number random variables of fundamental cycles (all triangles) that share a common edge.
Conditioning on this edge then yields independence, not of the length variables, but of these winding number random variables, which allows us to apply the Berry-Esseen theorem.
To measure how far $\Delta l_\text{st}|_{\bar l_{\text{st}}=\barell}$ is from being normally distributed for finite $N$, we look at the standardized random variable
\begin{align}
Y_{N-2} \coloneqq \frac{\Delta l_\text{st}|_{\bar l_{\text{st}}=\barell}-\E(\Delta l_\text{st}|{\bar l_{\text{st}}=\barell})}{\sqrt{\Var(\Delta l_\text{st}|{\bar l_{\text{st}}=\barell})}}
\end{align}
and compare its cdf to the one of the standard normal.
Using \cref{eq:lstarsumgcond} and the probability distribution of $g_j|_{\bar l_\text{st}=\barell}$, \cref{eq:gdist1,eq:gdist2}, we obtain:
\begin{align}
\E(\Delta l_\text{st}|\bar l_{\text{st}}=\barell) &= \frac{N-2}{N}\E(g_j|\bar l_\text{st}=\barell) - \barell \\
&= \frac{(N-2)\barell}{N}- \barell\,, \\
\Var(\Delta l_\text{st}|{\bar l_{\text{st}}=\barell}) &= \frac{N-2}{N^2}\Var(g_j|\bar l_\text{st}=\barell) \\&= \frac{N-2}{N^2} (|\barell|-\barell^2)\,,
\label{eq:varcomplete2}
\end{align}
and therefore
\begin{align}
Y_{N-2} &= \frac{\frac{1}{N}\sum_{j=1}^{N-2} g_j|_{\bar l_\text{st}=\barell} - \frac{(N-2)\barell}{N}}{\sqrt{\frac{N-2}{N^2}\Var(g_j|\bar l_\text{st}=\barell)}} \\
&= \frac{\sum_{j=1}^{N-2}(g_j|_{\bar l_\text{st}=\barell} -\barell)}{\sqrt{\sum_{j=1}^{N-2} \Var(g_j|_{\bar l_\text{st}=\barell}-\barell)}}\,.
\end{align}
All $\{g_j|_{\bar l_\text{st}=\barell}\}_{j=1}^{N-2}$ are independent since the corresponding cycles only share one edge, which is the one that we condition on.
We can thus apply the Berry-Esseen theorem (\cref{thm:berryesseen}) to show that for $N\geq3$,
\begin{align}
\supr |F_{Y_{N-2}}(x)-\Phi_{0,1}(x)| \leq \frac{C \rho}{\sigma^3 \sqrt{N-2}}\,,
\end{align}
with $C < 0.4748$, $\rho = \E(\left|g_j|_{\bar l_\text{st}=\barell}-\barell\right|^3)$, and $\sigma^2 = \Var(g_j|_{\bar l_\text{st}=\barell}-\barell)=|\barell|-\barell^2$. Computing $\rho = (|\barell|-\barell^2)(\barell^2+(1-|\barell|)^2)$ via \cref{eq:gdist1,eq:gdist2} we arrive at
\begin{align}
\supr |F_{Y_{N-2}}(x)-\Phi_{0,1}(x)| \leq \frac{C}{\sqrt{N-2}} \frac{\barell^2+(1-|\barell|)^2}{\sqrt{|\barell|-\barell^2}}\,,
\label{eq:boundcomplete}
\end{align}
for $|\barell| > 0$. This proves convergence of the cdf of $\dlc$ to a normal distribution with the rate $(N-2)^{-1/2}$.
For $\barell = 0$, $\dlc \sim \delta_0$ (Dirac delta distribution around zero); it can only attain the value zero because $P(g_j|_{\bar l_\text{st}=\barell}=0) = 1$.
Note the $\barell$-dependence in \cref{eq:boundcomplete}, which is in stark contrast to the $\barell$-independent bound for the cycle graph (\cref{eq:boundcycle}).
For the complete graph, the approximation with a normal distribution becomes worse as $\barell$ approaches zero.
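This $\barell$ dependence can be probed numerically. In the Python sketch below we model each conditioned winding number $g_j|_{\bar l_\text{st}=\barell}$ as a two-point random variable taking the value $\operatorname{sign}(\barell)$ with probability $|\barell|$ and the value $0$ otherwise (a choice consistent with the mean, variance, and third absolute moment given above), and compare the empirical sup-distance between the cdf of $Y_{N-2}$ and $\Phi_{0,1}$ with the bound of \cref{eq:boundcomplete}:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def sup_dist_to_normal(N, ellbar, M, rng):
    """Empirical sup_x |F_{Y_{N-2}}(x) - Phi(x)| for the complete graph.

    Each conditioned winding number g_j is modeled as a two-point variable on
    {0, sign(ellbar)} with P(g_j = sign(ellbar)) = |ellbar|, consistent with the
    mean, variance, and third absolute moment quoted in the text."""
    p = abs(ellbar)
    s = np.sign(ellbar) * rng.binomial(N - 2, p, size=M)  # sum of the N-2 winding numbers
    y = (s - (N - 2) * ellbar) / np.sqrt((N - 2) * (p - ellbar**2))
    x = np.linspace(-5.0, 5.0, 2001)
    emp_cdf = np.searchsorted(np.sort(y), x, side="right") / M
    return np.max(np.abs(emp_cdf - norm.cdf(x)))

rng = np.random.default_rng(1)
C = 0.4748
for ellbar in (0.45, 0.25, 0.05):
    for N in (10, 100, 1000):
        bound = (C / np.sqrt(N - 2)
                 * (ellbar**2 + (1 - abs(ellbar))**2) / np.sqrt(abs(ellbar) - ellbar**2))
        dist = sup_dist_to_normal(N, ellbar, M=200_000, rng=rng)
        print(f"ellbar = {ellbar:4.2f}, N = {N:4d}: sup-dist = {dist:.4f}  bound = {bound:.4f}")
\end{verbatim}
As expected from \cref{eq:boundcomplete}, the bound, and with it the room for deviations from normality, grows as $|\barell|$ decreases.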
\subsubsection{Intermediate-connectivity regime}
\begin{figure}
\centering
\includegraphics{conditional-densities-N-100-4diff-z-newlabels}
\caption{Conditional probability density $p_{\dlgiven}\,(\Delta \ell)$ for spring networks with $N=100$ and varying $z$, conditioned on different $\bar \ell$ values. For each value of $z$, data points correspond to ensemble averages (repeated simulations) with \num{4.95e6} springs in total. Solid lines correspond to best-fit normal distributions. The cycle graph ($z=2$) is close to being normally distributed---as proven for $N\to\infty$. Whereas for $z=2.2$, there are still deviations from a normal distribution, for $z=3$ and larger, the densities rapidly approach a normal distribution.}
\label{fig:pdlconditional}
\end{figure}
Recall that \sn{in the intermediate-connectivity regime, $2<z<N-1$, the $(N,z)$-ensembles contain graphs with varying cycle structures, making a similar analysis significantly more challenging.}
\sn{In simulations}, \sn{however}, we observe that $\dl|_{\boldsymbol{\bar l}=\ellbar}$ is approximately normally distributed if $z$ is sufficiently large (\cref{fig:pdlconditional}).
\subsection{Density approximation}
\label{sub:Density approximation}
Our empirical observations and theoretical discussion above justify the following approximation for $3\leq z\ll N$:
\begin{align}
\dl|_{\boldsymbol{\bar l}=\ellbar} \sim \mathcal N\Big[\E( \dlgiven ),E_{\boldsymbol{\bar l}}[\vardlgiven]\Big]\,,
\label{eq:pdlconditional}
\end{align}
with the expressions for $\E( \dlgiven )$ and $E_{\boldsymbol{\bar l}}[\vardlgiven]$ given in \cref{eq:muconditional,eq:meanvar}.
Using \cref{eq:plf,eq:pdlconditional}, we obtain an explicit representation for the final length distribution $p_{\boldsymbol{l^*}}(\ellstar)$ in mechanical equilibrium (\cref{sec:bigequation}).
In \cref{fig:lengthdistribution} we compare this analytical expression to ensembles of simulated networks and observe excellent agreement.
\begin{figure}
\centering
\includegraphics{length-distribution-averaged-with-uniform-with-labels}
\caption{Probability density $p_{\boldsymbol{l^*}}(\ell^*)$ for the final spring lengths for networks with $N=1000$ and varying $z$. Solid black lines show the analytic expression for $p_{\boldsymbol{l^*}}(\ell^*)$ (\cref{eq:meanfield}); data points correspond to averages over 50 simulations. The error bars correspond to the standard deviation. \sn{For comparison, we show the initial uniform spring length distribution $p_{\boldsymbol{\bar l}}(\ellbar)$ as a gray dashed line.}}
\label{fig:lengthdistribution}
\end{figure}
\section{Comparison to a mean-field approach}
In order to evaluate the significance of our graph-theoretical analysis we compare it to a mean-field (mf) approach which neglects all topological features other than the local degree of connectivity.
\begin{figure}
\centering
\input{mean-field.pdf_tex}
\caption{Mean-field approach (here: $z=4$): The edge $\bar l$ connects two nodes $(\mathrm{n_1},\mathrm{n_2})$.
The initial node forces (gray arrows) are given by $f_{\mathrm n_1}= \sum_{i=1}^{z-1} \bar l_{\mathrm n_1,i}+\bar l$ and $f_{\mathrm n_2}= \sum_{i=1}^{z-1} \bar l_{\mathrm n_2,i} - \bar l$. The spring $\bar l$ contributes to the forces with different signs because its length is measured from $\mathrm{n_1}$ to $\mathrm{n_2}$ (depicted by the black triangle). While displacing an individual node by $u_\mathrm{n_k} = z^{-1}f_\mathrm{n_k}$ introduces force balance at that node, this approach neglects that nodes/edges are coupled, i.e., force balance has to be established at all nodes simultaneously, as in \cref{eq:totalEnergy}.}
\label{fig:mean-field}
\end{figure}
In contrast to the graph-theoretical model, where $z$ refers to the \emph{average} degree of a node, the mean-field approach assumes that each node is connected to \emph{exactly} $z$ other nodes.
Moreover, the node displacement $u_\text{node}$ during relaxation is calculated as if all other nodes in the network were fixed.
Therefore $u_\text{node} = f_\text{node}/z$,
where $f_\text{node}=\sum_{i=1}^z \bar l_i$ is the initial force acting on the node via the springs attached to it.
The displacement $\Delta l$ of a spring is given by the difference of the displacements of the two nodes that are connected by this edge:
\begin{equation}
\begin{split}
\Delta l &= u_{\mathrm n_2} - u_{\mathrm n_1} = z^{-1}(f_{\mathrm n_2}-f_{\mathrm n_1})\\
&=\frac{1}{z} \left(-2 \bar l + \sum_{i=1}^{z-1} \bar l_{\mathrm n_2,i}-\sum_{i=1}^{z-1} \bar l_{\mathrm n_1,i} \right)\,,
\label{eq:deltal}
\end{split}
\end{equation}
where we have taken into account that the two nodes share one spring, namely $\bar l$ (\cref{fig:mean-field}).
All springs are assumed to be independent identically distributed random variables with mean zero and variance $\varlbar$.
For the conditional mean, we have with \cref{eq:deltal}:
\begin{align}
\E(\dlgiven)|_\text{mf} = -\frac{2 \barell}{z}\,.
\end{align}
The mean-field result agrees with the exact solution \cref{eq:muconditional} in the limit $N \to \infty$, i.e., there is no significant difference for large node numbers.
In contrast, we will show at the end of this section that for the variance, the mean-field solution differs substantially from the exact result, even in the limit $N \to \infty$.
When considering normality of $\dl|_{\boldsymbol{\bar l}=\ellbar}$, the mean-field approach allows us to directly apply the Berry-Esseen theorem (\cref{thm:berryesseen}) because all edges are treated as independent.
Defining the normalized random variable
\begin{align}
Y_{2(z-1)} :=& \frac{\Delta l|_{\bar l=\barell}-\E(\Delta l|{\bar l}=\barell)}{\sqrt{\Var(\Delta l|{\bar l=\barell})}}\\
=& \frac{\sum_{i=1}^{2(z-1)} \bar l_i}{\sqrt{2(z-1) \Var(\bar l)}}\,,
\end{align}
the theorem implies:
\begin{align}
\supr |F_{Y_{2(z-1)}}(x)-\Phi_{0,1}(x)| \leq \frac{12^{3/2} \,C}{32\sqrt{2(z-1)}}\,.
\end{align}
The mean-field approach yields convergence to a normal distribution with a rate proportional to $(z-1)^{-1/2}$.
While this result agrees with the rate of convergence we proved for the complete graph, it is in stark contrast to what we proved for the cycle graph case ($z=2$), for which we showed convergence to a normal distribution even though $z$ is constant.
In the intermediate-connectivity regime, both the mean-field as well as our graph-theoretical approach suggest that $\dl|_{\boldsymbol{\bar l}=\ellbar}$ can be approximated by a normal distribution.
To complete the evaluation of our approach, it is therefore critical to also compare the second moments, i.e., the variances of the mean-field and graph-theoretical approach.
For the unconditional variance, we obtain using \cref{eq:deltal}:
\begin{align}
\Var( \dl )|_\text{mf}&= z^{-2} \left( 4 \varlbar + 2(z-1) \varlbar \right) \\&= \frac{2}{z} \left( 1+\frac{1}{z} \right) \varlbar\,.
\label{eq:vardl}
\end{align}
Clearly, this expression does not agree with the exact graph-theoretical value \cref{eq:sigmadl2}, even in the limit $N\to\infty$.
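Both mean-field expressions, the conditional mean above and this unconditional variance, are easy to check by directly simulating \cref{eq:deltal}. A minimal Python sketch, with all springs drawn uniformly from $[-1/2,1/2]$ so that $\varlbar=1/12$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z, var_lbar, M = 4, 1.0 / 12.0, 1_000_000

def delta_l_mf(lbar, rng):
    """Mean-field length change of Eq. (deltal): the 2(z-1) neighbor springs of
    the two end nodes are drawn iid uniform on [-1/2, 1/2]; lbar gives the
    central spring length, one value per generated sample."""
    nbr = rng.uniform(-0.5, 0.5, size=(np.size(lbar), 2 * (z - 1)))
    return (-2.0 * np.asarray(lbar)
            + nbr[:, :z - 1].sum(axis=1) - nbr[:, z - 1:].sum(axis=1)) / z

# Conditional mean for a fixed central spring length lbar = 0.3:
dl = delta_l_mf(np.full(M, 0.3), rng)
print("E(dl | lbar=0.3) =", dl.mean(), "  expected:", -2 * 0.3 / z)

# Unconditional variance, with the central spring also drawn uniformly:
dl = delta_l_mf(rng.uniform(-0.5, 0.5, size=M), rng)
print("Var(dl) =", dl.var(), "  expected:", 2 / z * (1 + 1 / z) * var_lbar)
\end{verbatim}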
A mean-field approach assumes that the conditional variance is constant and therefore equal to its expected value $\E_{\boldsymbol{\bar l}} [\vardlgiven]|_\text{mf}=\vardlgivenl|_\text{mf}=\frac{2}{z}\left(1-\frac{1}{z}\right)\Var(\bar l)$.
For the cycle graph, we proved the conditional variance is indeed constant (\cref{eq:var-dlicond}).
However, we showed that the other extreme, the complete graph, exhibits non-constant conditional variance (\cref{eq:varcomplete}).
For $(N,z)$-ensembles in the intermediate-connectivity regime, we observe a continuous transition between the two extremes (\cref{img:mfcomparisonvar}).
\mw{Therefore, for the biological regime ($z \lesssim 4$), we approximated the conditional variance with its constant expected value $\E_{\boldsymbol{\bar l}}[\vardlgiven]$ (\cref{eq:meanvar}).}
However, it is exactly the regime $z\lesssim 4$ where the graph-theoretically derived expected conditional variance $\E_{\boldsymbol{\bar l}}[\vardlgiven]$ and the mean-field quantity $\E_{\boldsymbol{\bar l}} [\vardlgiven]|_\text{mf}$ exhibit the largest discrepancy (\cref{img:mfcomparisonvar}).
\begin{figure}
\centering
\includegraphics{variance-mf-vs-graph}
\caption{Comparison of graph-theoretical (black) and mean-field (gray) variances as a function of average degree $z$. Shown are the unconditional variance $\vardl$ as well as the expected conditional variance $\meanvar$ as derived in the text.
The graph-theoretically derived expected variance exhibits a maximum at $z=4$ (filled circle), while the corresponding mean-field expected variance monotonically decreases from its value at $z=2$.}
\label{img:mfcomparisonvar}
\end{figure}
\section{Discussion and conclusions}
In conclusion, we have presented a probabilistic theory of force distributions in one-dimensional random spring networks on a circle.
Here we have regarded networks with initially unbalanced forces that relax into mechanical equilibrium.
\sn{When drawing the analogy to a biological network, our approach, which focuses on the relaxation of the system after non-equilibrium starting conditions, is equivalent to assuming a separation of time scales where internal or external non-equilibrium processes slowly create forces in the network that rapidly equilibrate.}
We developed a \emph{graph-theoretical} approach that allows us to exactly compute mean and expected variance of the distribution of length changes conditioned on an initial configuration.
For the two extreme cases, the \emph{cycle graph} and the \emph{complete graph}, we could prove convergence of \sn{this distribution} to a \emph{normal distribution}.
A systematic analytical treatment of the---less symmetric---intermediate-connectivity regime is more demanding and not provided here.
\sn{However, our results suggest an approximation that shows excellent agreement with simulation for the biologically relevant regime of connectivity, $3\leq z\ll N$}.
It is straightforward to generalize the approach we present here to higher spatial dimensions $d$ if the probability densities $p_{\boldsymbol{\bar l}_k}$ for the components of the initial spring vectors are independent.
In that case, due to the linearity of spring forces with extension, the optimization problem decouples into the spatial components.
The probability density for the final spring vectors then is simply given as the product of the one-dimensional results:
\begin{align}
p_{\boldsymbol{l^*}}(\boldsymbol{\ell^*}) = \prod\limits_{k=1}^d p_{\boldsymbol{l^*}_k}(\ell^*_k)\,.
\label{eq:plfhigherdimension}
\end{align}
Hence, our results carry over to two- and three-dimensional networks, which are more commonly studied in practice and are of biological and physiological relevance.
Interestingly, a classical \emph{mean-field} approach \emph{fails} to correctly reproduce the mean and the variance of the relevant distributions. The error is particularly pronounced for the---biologically most relevant---regime of low degrees of connectivity, and does not vanish in the limit of infinite node number.
Our work demonstrates that \emph{network topology}---here manifested as \emph{cycle constraints}---is crucial for the correct determination of force distributions in an elastic spring network.
This opens the door for future research on the role of network topology in more complex elastic networks, e.g., in the presence of dynamics, spring nonlinearities or rupture.
Moreover, the mixture of probabilistic and graph-theoretical techniques may prove useful for other types of network theories.
\begin{acknowledgments}
The authors would like to thank Friedrich Bös, Alexander Hartmann, and Fabian Telchow for fruitful discussions. Funding from the Deutsche Forschungsgemeinschaft (DFG) within the collaborative research center SFB 755, project A3, is gratefully acknowledged.
C.F.S was additionally supported by a European Research Council Advanced Grant PF7 ERC-2013-AdG, Project 340528.
\end{acknowledgments}
\begin{widetext}
\begin{appendix}
\section{Independence of the choice of cycle basis}
\label{sec:Independence of the solution on the choice of the cycle basis}
A change of cycle basis corresponds to the transformation
\begin{align}
\mathbf{\tilde C} = \mathbf Q \C \mathbf \Per^{-1}\,,
\label{eq:changeofbasis}
\end{align}
where $\mathbf Q \in \mathrm{GL}(m)$ is an arbitrary change of basis matrix for the cycle space, \mw{and $\Per \in \mathrm{O}(Nz/2)$ is a permutation matrix that corresponds to relabeling the edges of the graph.}
The independence of the solution from the choice of cycle matrix means that, given the change of basis in \cref{eq:changeofbasis} and the solution (\cref{eq:kkt})
\begin{align*}
\boldsymbol{\tilde l^*} = \CT^T (\CT \CT^T)^{-1} \CT \boldsymbol{\tilde{\bar l}}
\end{align*}
to the transformed problem,
\begin{align}
\boldsymbol{\tilde l^*} = \Per\, \boldsymbol{l^*}\,.
\end{align}
The above relation can be shown by direct computation:
\begin{align*}
\CT^T (\CT \CT^T)^{-1} \CT
&= (\Q \C \Per^{-1})^T (\Q\C\Per^{-1}(\Q \C \Per^{-1})^T)^{-1} (\Q \C\Per^{-1})
= \Per^{-T}\C^T \Q^T (\Q\C\Per^{-1}\Per^{-T} \C^T \Q^T)^{-1} (\Q \C\Per^{-1})\\
&= \Per^{-T}\C^T \Q^T \Q^{-T}(\C \C^T)^{-1} \Q^{-1} \Q \C\Per^{-1}
= \Per\C^T(\C \C^T)^{-1} \C\Per^{-1}\,,
\end{align*}
and therefore
\begin{align*}
\boldsymbol{\tilde l^*} &= \Per\C^T(\C \C^T)^{-1} \C\Per^{-1}\boldsymbol{\tilde {\bar l}}
= \Per \C^T(\C \C^T)^{-1} \C \boldsymbol{\bar l}
= \Per \boldsymbol{l^*}\,.
\end{align*}
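This invariance is also easy to verify numerically. The following Python sketch uses a generic full-row-rank matrix as a stand-in for $\C$ (the algebra above only requires $\C\C^T$ to be invertible), together with a random invertible $\Q$ and a random permutation $\Per$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m_cyc, n_edges = 3, 7        # cycle-space dimension and number of edges (illustrative)

# Generic full-row-rank stand-in for the cycle matrix C, with entries in {-1, 0, 1}.
C = rng.integers(-1, 2, size=(m_cyc, n_edges)).astype(float)
while np.linalg.matrix_rank(C) < m_cyc:
    C = rng.integers(-1, 2, size=(m_cyc, n_edges)).astype(float)

Q = rng.normal(size=(m_cyc, m_cyc))             # change of basis (invertible with probability 1)
P = np.eye(n_edges)[rng.permutation(n_edges)]   # permutation matrix (edge relabeling)

def proj(A):
    """The matrix A^T (A A^T)^{-1} A appearing in the solution."""
    return A.T @ np.linalg.inv(A @ A.T) @ A

lbar = rng.uniform(-0.5, 0.5, size=n_edges)     # initial edge lengths
C_tilde = Q @ C @ np.linalg.inv(P)

lstar = proj(C) @ lbar
lstar_tilde = proj(C_tilde) @ (P @ lbar)
print(np.allclose(lstar_tilde, P @ lstar))      # True: solutions agree up to relabeling
\end{verbatim}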
\section{Upper bounds for $\mathbf{I(x)}$ and $\mathbf{II(x)}$}
\label{sec:app1}
For a uniform upper bound on the first term $I(x)$, we can apply the Berry-Esseen theorem, which is stated as follows \cite{athreya2006measure}.
\begin{thm}[Berry-Esseen]
\label{thm:berryesseen}
Let $X_1, X_2, \cdots$ be independent identically distributed (iid) random variables with $E(X_1)=0$, $E(X_1^2)=\sigma^2>0$, $E(|X_1|^3)=\rho < \infty$. Also, let
\begin{align*}
S_n = \frac{X_1 + X_2 + \cdots + X_n}{\sqrt{n}\sigma}
\end{align*}
be the normalized $n$-th partial sum. Denote $F_n$ the cdf of $S_n$, and $\Phi_{0,1}$ the cdf of the standard normal distribution.
Then there exists a positive constant $C<0.4785$ \cite{Tyurin2010} such that
\begin{align}
\sup_{x\in \mathbb R} |F_n(x) - \Phi_{0,1}(x)| \leq \frac{C \rho}{\sigma^3 \sqrt n}\,.
\end{align}
\end{thm}
By recalling that $U_{N-2}$ is the sum of $N-2$ independent uniformly distributed random variables on the interval $[-1/2,1/2]$, i.e., with variance $\varlbar = \sigma^2_{\bar l}=1/12$, third absolute moment $\rho_{\bar l} = 1/32$, and mean $\E(\bar l)=0$, we have that $U_{N-2}/(\sigma_{\bar l} \sqrt{N-2})$ is a normalized $n$-th partial sum.
The Berry-Esseen theorem therefore implies:
\begin{align}
\sup_{x \in \mathbb R} I(x) = \sup_{x \in \mathbb R} \left|F_{\frac{U_{N-2}}{\sigma_{\bar l}\sqrt{(N-2)}}}(x) - \Phi_{0,1}(x)\right|
\leq \frac{12^{3/2} \,C}{32\sqrt{N-2}}\,.
\label{eq:firsttermlimit}
\end{align}
An upper bound for the second term $II(x)$ can be found as well.
We write:
\begin{align}
II(x) &= \left|\Phi_{0,1}\left(\frac{x\,\sqrt{(N-1)\varlbar} + 0.5 - \delta(x)}{\sqrt{(N-2)\varlbar}}\right)-\Phi_{0,1}(x)\right|\\
&= \left|\Phi_{0,1}(\alpha x + \beta )-\Phi_{0,1}(x)\right| = \left|\Phi_{0,1}(y)-\Phi_{0,1}(x)\right|\,,
\end{align}
with $\alpha = \sqrt{\frac{N-1}{N-2}}$, $\beta = \frac{0.5-\delta(x)}{\sqrt{(N-2)\varlbar}}$, and $y=\alpha x + \beta$.
There are six cases that need to be distinguished: $(x<0<y)$, $(y<0<x)$, $(x<y<0)$, $(y<x<0)$, $(0<x<y)$, $(0<y<x)$.
For $(y<0<x)$, the following holds:
\begin{align}
\left|\Phi_{0,1}(\alpha x + \beta)-\Phi_{0,1}(x)\right| &\leq [({1-\alpha})x-\beta] \,\sup_{x \in \mathbb R} \Phi'_{0,1}(x)\leq -\frac{\beta}{\sqrt{2 \pi}} \\ &< \frac{1}{2\sqrt{2 \pi (N-2)\varlbar}}\,,
\label{eq:case1}
\end{align}
where we have used that $1-\alpha <0$ and $\delta(x) \in [0,1)$.
Analogously, the same bound holds for the case $(x<0<y)$.
For the other cases, we can make use of the convexity (concavity) of $\Phi_{0,1}(x)$ for $x<0$ ($x>0$).
For $(x<y<0)$, we have:
\begin{align}
\left|\Phi_{0,1}(\alpha x + \beta)-\Phi_{0,1}(x)\right| &\leq [({\alpha-1})x+\beta] \,\Phi'_{0,1}(\alpha x + \beta) \\
&=[(\alpha-1)x+\beta] (2\pi)^{-1/2} e^{-(\alpha x + \beta)^2/2 }\\
&\leq \frac{\beta}{\sqrt{2 \pi}}
< \frac{1}{2\sqrt{2 \pi (N-2)\varlbar}}\,,
\label{eq:case2}
\end{align}
and analogously the same for $(0<y<x)$.
Finally, for $(0<x<y)$:
\begin{align}
\left|\Phi_{0,1}(\alpha x + \beta)-\Phi_{0,1}(x)\right| &\leq [(\alpha-1)x+\beta] \,\Phi'_{0,1}(x)\\
&=[(\alpha-1)x+\beta] (2\pi)^{-1/2} e^{-x^2/2 }\\
&\leq \frac{\beta}{\sqrt{2 \pi}} + \frac{\alpha-1}{\sqrt{2\pi}} x e^{-x^2/2}
\leq \frac{\beta}{\sqrt{2 \pi}} + \frac{\alpha -1}{\sqrt{2\pi e}}\\
&= \frac{\beta}{\sqrt{2 \pi}}+ \frac{1}{\sqrt{2\pi e}}\frac{\sqrt{N-1}-\sqrt{N-2}}{ \sqrt{N-2}}\\
&\leq \frac{1}{2\sqrt{2 \pi (N-2)\varlbar}}+ \frac{1}{\sqrt{2\pi e}}\frac{1}{2(N-2)} \label{eq:root}\\
&\leq \frac{2}{2 \sqrt{2 \pi (N-2)\varlbar}}= \frac{1}{\sqrt{2 \pi (N-2)\varlbar}}\label{eq:combineterms}\,,
\end{align}
where we used the concavity of $\sqrt{x}$ in \cref{eq:root} and $N \geq \varlbar/e + 2$ in \cref{eq:combineterms}.
Analogously, the same bound holds for the last remaining case $(y<x<0)$.
Taking the maximum bound of all cases \cref{eq:case1,eq:case2,eq:combineterms} we obtain
\begin{align}
\supr II(x) \leq \frac{1}{\sqrt{ 2\pi (N-2)\varlbar}}\,.
\label{eq:II}
\end{align}
\section{Analytical expression for the final length distribution}
By combining \cref{eq:plf} with the normal approximation for $\dl|_{\boldsymbol{\bar l}=\ellbar}$ (\cref{eq:pdlconditional}) we obtain:
\begin{align}
\begin{split}
p_{\boldsymbol{l^*}}(\ellstar) =& \int\limits_{-\infty}^{+\infty} p_{\boldsymbol{\bar l}}\,(\ellbar) \cdot p_{\dlgiven}\,(\ellstar-\ellbar) \,d\ellbar
\simeq \frac{1}{\sqrt{2\pi \meanvar}} \int\limits_{-0.5}^{0.5}\exp\left[-\left(\frac{\ellstar-(1-\frac{2}{z}(1-\frac{1}{N}))\ellbar}{\sqrt{2 \meanvar}}\right)^2\right]\,d\ellbar \\
=& \frac{1}{2(1-\frac{2}{z}(1-\frac{1}{N}))}\left[\erf\left(\frac{\ellstar+(1-\frac{2}{z}(1-\frac{1}{N}))/2}{\sqrt{2\meanvar}}\right)
-\erf\left(\frac{\ellstar-(1-\frac{2}{z}(1-\frac{1}{N}))/2}{\sqrt{2\meanvar}}\right)\right]\,.
\end{split}
\label{eq:meanfield}
\end{align}
This expression is compared to simulated data in \cref{fig:lengthdistribution}.
\label{sec:bigequation}
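For reference, the following Python sketch evaluates \cref{eq:meanfield} and cross-checks it against a direct Monte Carlo sampling of the convolution in the first line. The expected conditional variance $\meanvar$ enters only as an input parameter; its expression is given by \cref{eq:meanvar}, and the value used below is merely a placeholder:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def p_lstar(lstar, z, N, meanvar):
    """Closed-form density of the final spring length, Eq. (meanfield)."""
    a = 1.0 - (2.0 / z) * (1.0 - 1.0 / N)
    s = np.sqrt(2.0 * meanvar)
    return (erf((lstar + a / 2) / s) - erf((lstar - a / 2) / s)) / (2.0 * a)

# Monte Carlo cross-check of the convolution: lbar is uniform on [-1/2, 1/2] and
# lstar | lbar is normal with mean (1 - 2/z (1 - 1/N)) * lbar and variance meanvar.
rng = np.random.default_rng(0)
z, N = 4, 1000
meanvar = 0.03    # placeholder value; insert the expression of Eq. (meanvar) here
a = 1.0 - (2.0 / z) * (1.0 - 1.0 / N)
lbar = rng.uniform(-0.5, 0.5, size=2_000_000)
lstar = a * lbar + rng.normal(0.0, np.sqrt(meanvar), size=lbar.size)

hist, edges = np.histogram(lstar, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |MC - closed form| =", np.max(np.abs(hist - p_lstar(centers, z, N, meanvar))))
\end{verbatim}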
\end{appendix}
\end{widetext}
|
1,108,101,563,700 | arxiv | \section{Acknowledgements}
The research herein was
carried out in part in the CMPMS Dept. (RMK and JM) and at
the Center for Functional Nanomaterials (MS), Brookhaven National Laboratory,
which is supported by the U.S. Department of Energy, Office of Basic
Energy Sciences, under Contract No. DE-AC02-98CH10886.
|
1,108,101,563,701 | arxiv | \section{Introduction}
The top quark is the heaviest known elementary particle with a mass approximately twice that of the electroweak vector bosons, and factor of 1.4 larger than that of the more recently discovered Higgs boson~\cite{Aad:2015zhl}.
Within the standard model (SM), this large mass arises from a large Yukawa coupling ($\approx 0.9$) to the Higgs field. Consequently, loops involving the top quark contribute significantly to
electroweak quantum corrections, and therefore a precise measurement of the top quark mass, $\mt$, provides a means to test the
consistency of the SM.
Furthermore, the precise values of both the mass of the Higgs boson and the Yukawa coupling of the top quark may play a critical role in the history and stability of the universe (see e.g., Ref.~\cite{Degrassi:2012ry}).
The top quark was discovered in 1995 by the CDF and D0\xspace\ experiments during Run~I\ (1992--1996) of the Fermilab Tevatron \ensuremath{p\bar p}\ collider at $\ensuremath{\sqrt s}=1.8~\ensuremath{\mathrm{Te\kern-0.1em V}}\xspace$~\cite{Abe:1995hr,Abachi:1995iq}.
Run~II\ (2001--2011) at $\ensuremath{\sqrt s}=1.96~\ensuremath{\mathrm{Te\kern-0.1em V}}\xspace$ followed, providing a factor of $\approx 150$ more top-antitop quark pairs than Run~I, and far more precise measurements of $\mt$.
Using \ttbar\ events produced in the D0\xspace\ detector~\cite{run1det,run2det,Abolins2008,Angstadt2010},
we have measured $\mt$ in different decay channels~\cite{Mtop1-D0-di-l-PRL,Mtop1-D0-di-l-PRD,Mtop1-D0-l+jt-new1,Mtop2-D0-di-l-Nu-PLB,Mtop2-D0-di-l-ME-PRD,Mtop2-D0-l+jt-PRL,Mtop2-D0-l+jt-PRD}
using the full integrated luminosity of Run~I\ ($\int \mathcal{L}\;dt= 0.1~\ensuremath{\mathrm{fb^{-1}}}$) and Run~II\ ($\int \mathcal{L}\;dt=9.7~\ensuremath{\mathrm{fb^{-1}}}$). This article reports
the combination of these direct top quark mass measurements.
Direct measurements of the top quark mass have also been performed by the CDF experiment (see {e.g.}~Ref.~\cite{cdf_latest_top_mass}) at the Tevatron, and by the ATLAS (see {e.g.}~Ref.~\cite{atlas_latest_top_mass}) and CMS (see {e.g.}~Ref.~\cite{cms_latest_top_mass}) experiments at the CERN LHC. In 2012, the Tevatron experiments combined their measurements in Ref.~\cite{TeVTopComboPRD}
with the result $\mt=173.18 \pm 0.94~\GeV$.
In 2014, a preliminary combination of ATLAS, CDF, CMS, and D0\xspace\ measurements~\cite{worldcombo} yielded $\mt=173.34 \pm 0.76~\GeV$. Both combinations are by now outdated as they do not include the latest and more precise measurements,
in particular, the final D0\xspace~Run~II~measurements discussed in this article.
The top quark mass is a fundamental free parameter of the SM. However, its definition depends on the scheme of theoretical calculations used for the perturbative expansion in quantum
chromodynamics (QCD). The inputs to the combination presented in this article are the direct measurements calibrated using Monte Carlo (MC) simulations. Hence, the measured mass corresponds to the MC mass parameter.
However, because of the presence of long range effects in QCD,
the relationship between the MC mass and other mass definitions, such as the pole mass or the mass in the modified minimal subtraction ($\overline{\rm{MS}}$) scheme,
is not well established and has been subject to debate for many years (see e.g., Ref.~\cite{Juste:2013dsa} and references therein).
A recent work obtains a difference of +0.6~GeV between the MC mass and the pole mass in the context of an $e^+e^- \to \ttbar$ simulation with an uncertainty of 0.3~GeV~\cite{Butenschoen:2016lpz}.
Further studies are needed to produce a similar estimate in the context of $\ensuremath{p\bar p}\to\ttbar$ production.
In Ref.~\cite{massxs}, we extracted
the pole mass of the top quark from the measured $t\bar t$ cross section~\cite{mass-from-xsec-theory}.
However, due to the ambiguity between the MC and pole mass,
the difficulty of properly assessing correlations between systematic uncertainties,
and the large uncertainty of the pole mass measurement,
the latter
is not part of the combination presented in this article.
This article is structured as follows: we first summarize the input measurements;
we subsequently present the combination of Run~II\ dilepton measurements, which provides one of the inputs to the D0\xspace\ combination;
we then discuss the different uncertainty categories and their correlations, and conclude with the final combined result.
\section{Decay channels and input measurements}
\begingroup
\squeezetable
\begin{table*}[!htbp]
\caption[Input measurements]{Summary of the input measurements to the combination. We indicate the method used to extract the mass of the top quark from the data (see the corresponding references for further details).
}\label{tab:input_publication}
\newcolumntype{M}[1]{>{\centering}m{#1}}
\begin{ruledtabular}
\begin{tabular}{lccM{5cm}cc}
Period & Channel & { $\int \mathcal{L}\;dt$ (\ensuremath{\mathrm{fb^{-1}}})}& Method & \mt (\GeV)& Reference\\ \hline
Run~I& \dil & $0.1$ & { Combination of matrix weighting and neutrino weighting } & $168.4\phantom{0} \pm12.3\phantom{0}{\rm\ (stat)}\pm 3.6\phantom{0}{\rm\ (syst)}$ &\cite{Mtop1-D0-di-l-PRL,Mtop1-D0-di-l-PRD} \\
Run~I& \lplus & $0.1$ & Matrix element & $180.1\phantom{0} \pm\phantom{0}3.6\phantom{0}{\rm\ (stat)} \pm 3.9\phantom{0}{\rm\ (syst)}$ &\cite{Mtop1-D0-l+jt-new1} \\
Run~II& \dil & $9.7$ & Neutrino weighting & $173.32\pm \phantom{0}1.36{\rm\ (stat)}\pm 0.85{\rm\ (syst)}$ &\cite{Mtop2-D0-di-l-Nu-PLB} \\
Run~II& \dil & $9.7$ & Matrix element & $173.93\pm \phantom{0}1.61{\rm\ (stat)}\pm 0.88{\rm\ (syst)}$&\cite{Mtop2-D0-di-l-ME-PRD} \\
Run~II& \lplus & $9.7$ & Matrix element & $174.98\pm \phantom{0}0.41{\rm\ (stat)}\pm 0.63{\rm\ (syst)}$ &\cite{Mtop2-D0-l+jt-PRL,Mtop2-D0-l+jt-PRD} \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\endgroup
\label{sec:inputs}
To measure the top quark mass, we use $p\bar p \to t\bar t$ events
and assume that the top and antitop quark masses are equal~\cite{Abazov:2011ch,Aaltonen:2012zb,Aad:2013eva,Chatrchyan:2016mqq}.
Within the SM, the top quark decays into a $W$ boson and a $b$ quark
almost 100\% of the time. Different channels arise from the possible decays of the pair of $W$ bosons:
\begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}
\item The ``dilepton'' channel (\dil)
corresponds to events ($\approx 4.5\%$ of the total) where both $W$ bosons decay into electrons or muons.
This channel is quite free from background
but has a small yield.
The background is mainly due to $Z$+jets production, but also receives contributions from diboson ($WW$, $WZ$, $ZZ$), $W$+jets, and multijet production.
\item The ``lepton+jets'' channel (\lplus)
corresponds to events ($\approx 30\%$ of the total) where one $W$ boson decays into $q\bar {q^{\prime}}$ and the other into an electron or a muon and a neutrino.
This channel has a moderate yield and a background arising from
$W$+jets production, $Z$+jets production, and multijet processes.
\item
The ``all jets'' channel ($\approx 46\%$ of the total)
has events in which both $W$ bosons decay to $q\bar {q^{\prime}}$ that evolve into jets.
The yield is high, but the background from multijet production is very large.
\item
The ``tau channel'' ($\approx 20\%$ of the total) arises from events in which at least one of the $W$ bosons decays into $\tau\nu_{\tau}$. As the decays $\tau\to\rm{hadrons}+\nu_{\tau}$ are difficult to distinguish from QCD jets,
it is not exploited for the top quark mass measurement.
However, the $\tau\to\ell\nu_{\ell}\nu_{\tau}$ decays provide
contributions to the \dil\ and \lplus\ channels.
\end{enumerate}
The high mass of the top quark means that
the decay products tend to have high transverse momenta ($\pt$) relative to the beam axis and large angular separations.
Reconstructing and identifying $t\bar t$ events requires
reconstruction and identification of high $\pt$ electrons,
muons, and jets, and the measurement of the imbalance in transverse momentum in each event ($\met$) due to escaping neutrinos.
In addition, identifying $b$ jets is an effective way of improving the purity of the selections.
Good momentum resolution is required for all these objects, and
the jet energy scale (JES) has to be known with high precision.
In the Run~II\ \lplus\ measurements, the uncertainty in the JES is reduced by performing an {in situ} calibration, which
exploits the $W\to q\bar{ q^{\prime}}$ decay by requiring the mass of the corresponding dijet system to be consistent with the mass of the $W$ boson ($ 80.4~\GeV$).
This calibration, determined using light-quark jets (including charm jets), is applied to jets of all flavors associated with $\ttbar$ decay.
It is then propagated to the
Run~II\ \dil\ measurements.
The input measurements of $\mt$ for the presented combination are shown
in Table~\ref{tab:input_publication},
and consist of measurements performed during Run~I\ and Run~II\ in the
\dil\ and \lplus\ channels using the full data sets.
D0\xspace\ also measured the top quark mass using the ``all jets'' channel in Run~I~\cite{Mtop1-D0-allh-PRL};
however, this measurement is not considered in the combination because its uncertainty is large
and some subcomponents of the systematic uncertainty are not available.
Just as in Run~I,
two \dil\ mass measurements were performed in Run~II\ using
a neutrino weighting~\cite{Mtop2-D0-di-l-Nu-PLB}
technique (NW) and a matrix element method (ME)~\cite{Mtop2-D0-di-l-ME-PRD}. We discuss their combination
in the following section.
To combine the $\mt$ measurements, we use the Best Linear Unbiased Estimate (BLUE)~\cite{Valassi:2003mu}, assuming Gaussian uncertainties,
both for the \dil\ Run~II\ and the final D0\xspace\ combinations.
\section{Combination of Run~II\ dilepton measurements}
In the \dil\ channel, the presence of two undetected neutrinos with high \pt\
makes it impossible to fully reconstruct the kinematics of the final state.
To overcome this problem,
we use two methods in Run~II.
The NW measurement~\cite{Mtop2-D0-di-l-Nu-PLB} is based on
a weight function for each event which is computed
by comparing the $x$-- and $y$-- components of the observed \met\ and
the hypothesized \pt\ components of the neutrinos, integrating over the neutrino pseudorapidities~\cite{Note0}.
The maximum weight value
indicates the most likely value of \mt\ in that event.
The first and second moments of this function are retained as the event-by-event variables sensitive to $\mt$.
Their distributions in MC events are used to form two-dimensional templates that depend upon the value of $\mt$. The templates are compared to the data to extract $\mt$. The ME~\cite{Mtop2-D0-di-l-ME-PRD} measurement uses per-event probability densities,
based on the reconstructed kinematic information, obtained by
integrating over the differential cross sections for the processes contributing to the observed events,
using leading order matrix elements for the $\ttbar$ production process and
accounting for detector resolution.
The unmeasured neutrino momentum components are integrated out in this computation. The probability densities from all data events are combined to form a likelihood as a function of $\mt$, which is then maximized to determine $\mt$.
\subsection{Statistical uncertainties and correlation}
\label{sec:dilepton_correlation}
The statistical uncertainties of the individual NW and ME measurements are given in Table~\ref{tab:dilepton}.
Both measurements are carried out using
the same full D0\xspace\ Run~II\ data set, and similar selection criteria. Approximately 90\% of the selected events are common to both analyses, and the measurements
are therefore statistically correlated.
We use an ensemble testing method to estimate these correlations.
In the first step, we generate 1000 ensembles of simulated background and signal events with mass $\mt=172.5~\GeV$ that pass the criteria
of either the NW or the ME selection (see Refs.~\cite{Mtop2-D0-di-l-Nu-PLB} and ~\cite{Mtop2-D0-di-l-ME-PRD} for the detailed descriptions of the selections).
Each ensemble is generated with the same number of events as observed in data,
using the expected signal and background fractions, separately for the \ensuremath{ee}, \ensuremath{\mu\mu}, and \ensuremath{e\mu}\ channels. The ME and NW ensembles are then obtained using the individual and slightly more restrictive selection criteria from each analysis, and $\mt$ is extracted following each of the analysis methods.
From the two-dimensional distribution of the measured masses shown in Fig~\ref{fig:2dcorr},
we obtain a statistical correlation of $\rho=0.64 \pm 0.02$ between the two sets of measurements.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\columnwidth]{2D_COMB.eps}
\caption{
Two-dimensional distribution in the top quark masses
extracted from the MC event ensembles in the ME and NW analyses. The statistical correlation $\rho$ is obtained from this distribution.}
\label{fig:2dcorr}
\end{figure}
\subsection{Systematic uncertainties in \dil\ channel}
The different contributions to the systematic uncertainty considered in the NW and ME measurements are reported in Table~\ref{tab:dilepton}. The sources of uncertainty are listed
in the following and briefly described when the naming is not self-explanatory.
More detailed descriptions are given in Refs.~\cite{Mtop2-D0-di-l-Nu-PLB} and ~\cite{Mtop2-D0-di-l-ME-PRD}, and in Sec.~\ref{sec:uncertainty} for the signal modeling uncertainties.
\begin{description}
\item[In situ light-jet calibration:] The statistical uncertainty of the JES calibration, determined in the \lplus\ measurement using light-quark jets, and propagated to the \dil\ measurements.
\item[Response to \boldmath{$b$}, \boldmath{$q$}, and \boldmath{$g$} jets:] The part of the JES uncertainty that originates from
differences in detector response among $b$, light-quark, and gluon jets.\item[Model for \boldmath{$b$} jets:] The part of the JES uncertainty that originates from
uncertainties specific to the modeling of $b$ jets. This includes the dependence on semileptonic branching fractions and
modeling of $b$ quark fragmentation.
\item[Light-jet response:] The part of the JES
uncertainty that affects all jets and includes the dependence of the calibration upon jet energy and pseudorapidity, and the effect of the out-of-cone calorimeter showering correction.
\item[Jet energy resolution]\item[Jet identification efficiency]
\item[Multiple interaction model:] The systematic uncertainty that arises from modeling
the distribution of the number of interactions per Tevatron bunch crossing.
\item[\boldmath{$b$} tag modeling:]
The uncertainty related to the modeling of the $b$ tagging efficiency
for $b$, $c$, and light-flavor jets in MC simulation relative to data.
\item[Electron energy resolution]\item[Muon momentum resolution]\item[Lepton momentum scale:] The uncertainty arising from the calibration of electron energy and muon momentum scales.
\item[Trigger efficiency:]
The uncertainties in the estimation of lepton-based trigger efficiencies.
\item[Higher-order corrections:] The modeling of higher-order corrections in the simulation of \ttbar\ samples, obtained from the difference between the next-to-leading-order \textsc{MC@NLO}~\cite{MCNLO} and the leading-order \textsc{ALPGEN}~\cite{ALPGEN} event generators.
\item[Initial and final state radiation:] The uncertainty due to the modeling of initial and final state gluon radiation.
\item[Hadronization and underlying events:] The uncertainty associated with the modeling of hadronization and the underlying event, estimated from the difference between
different hadronization models.
\item[Color reconnection:] The uncertainty due to the model of color reconnection.
\item[PDF:] The uncertainty from the choice of parton density functions.
\item[Transverse momentum of \ttbar\ system:] The uncertainty in the modeling of the distribution of the \pt\ of the \ttbar\ system.
\item[Yield of vector boson + heavy flavor:] The uncertainty associated with the production cross section for $Z$+$b\bar b$ and $Z$+$ c\bar c$ relative to $Z$+jets events.
\item[Background from simulation:]
The systematic uncertainty on the MC background, which includes the uncertainty from detector effects and the theoretical cross section. It does not include the uncertainties on the ratios of $Z$+$b\bar b$ and $Z$+$ c\bar c$ to $Z$+jets cross sections, which belong to the previous category.
\item[Background based on data:]
The uncertainties from the modeling of the
multijet and $W$+jets backgrounds estimated using data.
\item[Template statistics:]
In the NW measurement, this uncertainty arises from the statistical fluctuations of individual bins in signal and background templates. In the ME measurement, there is no such uncertainty as there is no template used to fit the data.
\item[Calibration method:]
The calibration for both ME and NW measurements is determined
using an ensemble testing method. We generate pseudo-experiments
with the same number of events as observed in data, using MC events
for signal and both MC and data-based samples for backgrounds.
Ensembles at different top quark mass hypotheses are generated to
determine a linear relation between the uncorrected measurement and
the actual MC mass, {i.e.}, to determine slope and offset parameters.
The uncertainty in the calibration method arises from the uncertainty in
the slope and offset parameters due to the limited size of the MC and
data-based samples.
\end{description}
All systematic uncertainties are considered as fully correlated between ME and NW except for the calibration method uncertainty,
as the calibrations were performed using almost independent event samples.
The differences between the ME and NW uncertainties reported in Table~\ref{tab:dilepton} are consistent with
the expected statistical fluctuations in the various estimates.
The fluctuations are $\approx $~0.05--0.10~$\GeV$, depending on the source, and their overall contributions are well below the total uncertainties. They therefore have a negligible impact on the overall uncertainties in
the individual measurements and their combination.
\label{sec:dilepton_syst}
\begingroup
\squeezetable
\begin{table}[!h!tbp]
\caption[Input measurements]{Measurements in the \dil\ channel with contributions to the uncertainties, and their combination.
The total systematic uncertainty and the total uncertainty are obtained by adding the relevant contributions
in quadrature. All values are given in $\GeV$. The symbol ``n/a'' stands for ``not applicable''.}\label{tab:dilepton}
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{lccc}
\hline \hline
& Run~II &Run~II & Run~II \\
& ME & NW & $\dil$ combination \\
\hline
top quark mass & 173.93 & 173.32 & 173.50 \\
\hline
{In situ} light-jet calibration & \phantom{00}0.46 & \phantom{00}0.47 & \phantom{00}0.47 \\
Response to $b$, $q$, and $g$ jets & \phantom{00}0.30 & \phantom{00}0.27 & \phantom{00}0.28 \\
Model for $b$ jets & \phantom{00}0.21 & \phantom{00}0.10 & \phantom{00}0.13 \\
Light-jet response & \phantom{00}0.20 & \phantom{00}0.36 & \phantom{00}0.31 \\
Jet energy resolution & \phantom{00}0.15 & \phantom{00}0.12 & \phantom{00}0.13 \\
Jet identification efficiency & \phantom{00}0.08 & \phantom{00}0.03 & \phantom{00}0.04 \\
Multiple interaction model & \phantom{00}0.10 & \phantom{00}0.06 & \phantom{00}0.07 \\
$b$ tag modeling & \phantom{00}0.28 & \phantom{00}0.19 & \phantom{00}0.22 \\
Electron energy resolution & \phantom{00}0.16 & \phantom{00}0.01 & \phantom{00}0.05 \\
Muon momentum resolution & \phantom{00}0.10 & \phantom{00}0.03 & \phantom{00}0.05 \\
Lepton momentum scale & \phantom{00}0.10 & \phantom{00}0.01 & \phantom{00}0.04 \\
Trigger efficiency & \phantom{00}0.06 & \phantom{00}0.06 & \phantom{00}0.06 \\
Higher-order corrections & \phantom{00}0.16 & \phantom{00}0.33 & \phantom{00}0.28 \\
Initial and final state radiation & \phantom{00}0.16 & \phantom{00}0.15 & \phantom{00}0.15 \\
Hadronization and underlying event & \phantom{00}0.31 & \phantom{00}0.11 & \phantom{00}0.17 \\
Color reconnection & \phantom{00}0.15 & \phantom{00}0.22 & \phantom{00}0.20 \\
PDF & \phantom{00}0.20 & \phantom{00}0.08 & \phantom{00}0.11 \\
Transverse momentum of $\ttbar$ system & \phantom{00}0.03 & \phantom{00}0.07 & \phantom{00}0.06 \\
Yield of vector boson + heavy flavor & \phantom{00}0.06 & \phantom{00}0.04 & \phantom{00}0.05 \\
Background from simulation & \phantom{00}0.06 & \phantom{00}0.01 & \phantom{00}0.02 \\
Background based on data & \phantom{00}0.07 & \phantom{00}0.00 & \phantom{00}0.02 \\
Template statistics & \phantom{00}n/a & \phantom{00}0.18 & \phantom{00}0.13 \\
Calibration method & \phantom{00}0.03 & \phantom{00}0.07 & \phantom{00}0.05 \\
\hline
Systematic uncertainty & \phantom{00}0.88 & \phantom{00}0.85 & \phantom{00}0.84 \\
Statistical uncertainty & \phantom{00}1.61 & \phantom{00}1.36 & \phantom{00}1.31 \\
\hline
Total uncertainty & \phantom{00}1.84 & \phantom{00}1.61 & \phantom{00}1.56 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\endgroup
\subsection{\dil\ combination}
To obtain the ME and NW combination through the BLUE method
we use the correlations and uncertainties discussed in
Sec.~\ref{sec:dilepton_correlation} and Sec.~\ref{sec:dilepton_syst}.
The result of the BLUE combination is $\mt=173.50\pm 1.31\,{\rm\ (stat)}\pm 0.84{\,\rm\ (syst)}~\GeV$.
The breakdown of uncertainties is given in Table~\ref{tab:dilepton}.
The weights for the NW and ME measurements are 71\% and 29\%, respectively.
The NW and ME measurements agree with a $\chi^2$ of 0.2 for one degree of freedom, corresponding to a probability of 65\%.
As a test of stability, we change the statistical correlation between the two methods from 0.50 to 0.70
to conservatively cover the range of systematic and statistical uncertainty in its determination.
The resulting \mt\ changes by less than 0.04~\GeV.
This combination of the Run~II\ \dil\ measurements is used as an input to the overall combination discussed in the next sections.
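Up to the rounding of the inputs in Table~\ref{tab:dilepton}, this combination can be reproduced with a few lines of code. The following Python sketch builds the $2\times2$ covariance matrix from the statistical uncertainties with $\rho=0.64$, the fully correlated systematic categories, and the uncorrelated calibration uncertainties, and applies the BLUE prescription:
\begin{verbatim}
import numpy as np

# Central values and statistical uncertainties (GeV) for [ME, NW].
m = np.array([173.93, 173.32])
stat = np.array([1.61, 1.36])
rho_stat = 0.64                 # statistical correlation from the ensemble study

# Systematic uncertainties (GeV), one entry per category in the order of the table,
# treated as fully correlated between ME and NW (last entry: template statistics).
syst_ME = np.array([0.46, 0.30, 0.21, 0.20, 0.15, 0.08, 0.10, 0.28, 0.16, 0.10, 0.10,
                    0.06, 0.16, 0.16, 0.31, 0.15, 0.20, 0.03, 0.06, 0.06, 0.07, 0.00])
syst_NW = np.array([0.47, 0.27, 0.10, 0.36, 0.12, 0.03, 0.06, 0.19, 0.01, 0.03, 0.01,
                    0.06, 0.33, 0.15, 0.11, 0.22, 0.08, 0.07, 0.04, 0.01, 0.00, 0.18])
calib = np.array([0.03, 0.07])  # calibration method, uncorrelated between ME and NW

cov = np.zeros((2, 2))
cov[0, 0] = stat[0]**2 + (syst_ME**2).sum() + calib[0]**2
cov[1, 1] = stat[1]**2 + (syst_NW**2).sum() + calib[1]**2
cov[0, 1] = cov[1, 0] = rho_stat * stat[0] * stat[1] + (syst_ME * syst_NW).sum()

w = np.linalg.solve(cov, np.ones(2))
w /= w.sum()                    # BLUE weights
print("weights:", w)            # approximately [0.29, 0.71]
print("m_t = %.2f +- %.2f GeV" % (w @ m, np.sqrt(w @ cov @ w)))
\end{verbatim}
With these rounded inputs the sketch reproduces the weights of about 29\% (ME) and 71\% (NW) as well as the combined value and total uncertainty quoted above.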
\section{Uncertainty categories in the overall combination}
\label{sec:uncertainty}
For the overall combination, the systematic uncertainties are grouped into sources of same or similar origin to form uncertainty categories. We employ categories similar to
those used in the Tevatron
top quark mass combination~\cite{TeVTopComboPRD} and use the same naming scheme.
\begin{description}
\item[{In situ} light-jet calibration:] The part of the
JES uncertainty that originates from the
{in situ} calibration procedure using light-quark jets. This uncertainty has a statistical origin.
For the Run~II\ $\dil$ measurement, the uncertainty
from transferring the $\lplus$ calibration to the dilepton event topology
is included in the {light-jet response } category described below.
\item[Response to \boldmath{$b$}, \boldmath{$q$}, and \boldmath{$g$} jets:]
As described in Sec.~\ref{sec:dilepton_syst}.
\item[Model for \boldmath{$b$} jets:]
As described in Sec.~\ref{sec:dilepton_syst}.
\item[Light-jet response:] The part of the JES
uncertainty that includes calibrations of the
absolute energy-dependent response and the relative $\eta$-dependent response,
and, for Run~II, the out-of-cone calorimeter showering correction. This uncertainty applies to jets of any flavor.
\item[Out-of-cone correction:] The part of the JES uncertainty that originates from
modeling of uncertainties associated with light-quark
fragmentation and out-of-cone calorimeter showering corrections in Run~I\ measurements. For Run~II\ measurements,
it is included in the { light-jet response} category.
\item[Offset:] This includes the uncertainty
arising from uranium noise in the D0\xspace\ calorimeter and from the
corrections to the JES due to multiple interactions. While such uncertainties were sizable in Run~I,
the shorter integration time in the
calorimeter electronics and the {in situ} JES calibration make them negligible in Run~II.
\item[Jet modeling:]
The systematic uncertainties arising from uncertainties in jet resolution
and identification.
\item[Multiple interactions model:]
As described in Sec.~\ref{sec:dilepton_syst}.
\item[\boldmath{$b$} tag modeling:]
As described in Sec.~\ref{sec:dilepton_syst}.
\item[Lepton modeling:] The uncertainties in the modeling
of the scale and resolution of lepton \pt, which were
taken to be negligible
in Run~I. \item[Signal modeling:] The systematic uncertainties arising from
\ttbar\ event modeling, which are correlated across all
measurements.
This includes the sources described below.
In Run~I, the breakdown into the first four items could not be performed,
because the MC generators used at that time did not have the same flexibility as the more modern generators. Instead,
the overall signal modeling uncertainty was estimated by changing the main parameters of a MC generator or comparing results from two different generators.
\begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}
\item The uncertainty associated with the modeling of initial and final state radiation,
obtained by changing
the renormalization scale in the scale-setting procedure relative to its default, as suggested in Ref.~\cite{Cooper:2011gk}. Studies of $Z\to\ell\ell$ data indicate
that a range of variation between factors of $\frac 1 2$ and 2
of this scale covers the mismodeling~\cite{Mtop2-D0-l+jt-PRD}.
\item The uncertainty
from higher-order corrections evaluated from a comparison of \ttbar\ samples generated using {\textsc MC@NLO}~\cite{MCNLO} and
{\textsc ALPGEN}~\cite{ALPGEN}, both interfaced to {\textsc HERWIG}~\cite{HERWIG5,HERWIG6} for the simulation of parton showers and hadronization.
\item The systematic uncertainty arising from a change in the
phenomenological description of color reconnection (CR) among final state partons~\cite{CR}. It is obtained from the difference between event samples generated using {\textsc PYTHIA}~\cite{pythia} with the Perugia 2011 tune and
using {\textsc PYTHIA} with the Perugia 2011NOCR tune~\cite{Skands2010}.
\item The systematic
uncertainty associated with the choice for modeling
parton-shower, hadronization, and underlying event. It includes the changes observed when
substituting {\textsc PYTHIA} for {\textsc HERWIG}~\cite{HERWIG5,HERWIG6} when
modeling \ttbar\ signal.
\item The uncertainty associated with
the choice of PDF used
to generate the \ttbar\ MC events.
It is estimated in Run~II\
by changing the 20 eigenvalues of the {\textsc{CTEQ6.1M}} PDF~\cite{Nadolsky:2008zw} within their uncertainties. In Run~I, it was obtained by comparing
{\textsc{CTEQ3M}}~\cite{Lai:1994bb} with {\textsc{MRSA}}~\cite{Martin:1994kn} for \dil, and {\textsc{CTEQ4M}}~\cite{Lai:1996mg} with {\textsc{CTEQ5L}}~\cite{Lai:1999wy} for \lplus\ events.
\end{enumerate}
\item[Background from theory:]
This systematic uncertainty on background originating from theory
takes into account the
uncertainty in modeling the background sources. It is correlated among
all measurements in the same channel, and includes uncertainties on background composition, normalization, and distributions.
\item[Background based on data:] This includes uncertainties associated with the modeling of
multijet background in the
\lplus\ channel, and
multijet and $W$+jets backgrounds in the \dil\ channel, which are estimated using data. This also includes the effects of trigger uncertainties determined from the data.
\item[Calibration method:] The uncertainty arising from any source specific
to a particular fitting method, includes effects such as
the finite number of MC events
available to calibrate each method.
\end{description}
Table~\ref{tab:inputs} summarizes the
input measurements
and their corresponding statistical and systematic uncertainties.
\section{Correlations}
\label{sec:corltns}
The following correlations are used to combine the measurements:
\begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}
\item The uncertainties listed as `{statistical uncertainty}',
`{calibration method}', and `{background based on data}'
are taken to be uncorrelated among the measurements.
\item The uncertainties in the `{in situ} {light-jet
calibration}'
category are taken to be correlated among the Run~II\ measurements
since the
$\dil$ measurement uses the JES calibration determined in
the $\lplus$ channel.
\item The uncertainties in `{response to $b$, $q$, and $g$ jets}', `{jet modeling}', `{$b$ tag modeling}', `{multiple interaction model}', and `{lepton modeling}'
are taken
to be 100\% correlated among Run~II\ measurements.
\item The uncertainties in `{out-of-cone correction}' and `{offset}'
categories are taken to be 100\% correlated among Run~I\ measurements.
\item The uncertainties in `{model for $b$ jets}' and `{signal modeling}'
categories are taken to be 100\% correlated among all measurements.
\item The uncertainties in `{light-jet response}' are taken
to be 100\% correlated among the Run~I\ and the Run~II\ measurements, but uncorrelated between Run~I\ and Run~II.
\item The uncertainties in `{background from theory}' are taken to be
100\% correlated among all measurements in the same channel.
\end{enumerate}
A summary of the correlations among the different systematic categories is shown in Table~\ref{tab:correl}.
Using the inputs from Table~\ref{tab:inputs} and the correlations specified in Table~\ref{tab:correl},
we obtain an overall matrix of correlation coefficients in
Table~\ref{tab:coeff}.
\begingroup
\squeezetable
\begin{table}[!h!tbp]
\caption[Input measurements]{Summary of measurements used to determine the
D0\xspace\ average \mt. Integrated luminosity ($\int \mathcal{L}\;dt$) has units of
\ensuremath{\mathrm{fb^{-1}}}, and all other values are in $\GeV$. The uncertainty categories and
their correlations are described in Sec.~\ref{sec:uncertainty}. The total systematic uncertainty
and the total uncertainty are obtained by adding the relevant contributions
in quadrature. The symbol ``n/a'' stands for ``not applicable'', and the symbol ``n/e'' for ``not evaluated'' (but expected to be negligible).
}
\label{tab:inputs}
\begin{center}
\renewcommand{\arraystretch}{1.30}
\newcolumntype{H}{>{\setbox0=\hbox\bgroup}c<{\egroup}@{}} \begin{tabular}{lccccH}
\hline \hline
& \multicolumn{2}{c}{D0\xspace\ {Run~I}}
& \multicolumn{2}{c}{D0\xspace\ {Run~II }}\\
& $\lplus$ & \multicolumn{1}{c}{$\dil$} & $\lplus$ & $\dil$ \\
\hline
$\int \mathcal{L}\;dt$ & \phantom{00} 0.1\phantom{0} & \phantom{00} 0.1\phantom{0} & \phantom{00} 9.7\phantom{0} & \phantom{00} 9.7\phantom{0} \\
\hline
top quark mass & 180.10 & 168.40 & 174.98 & 173.50 & 174.95 \\
\hline
{In situ} light-jet calibration & \phantom{00}n/a & \phantom{00}n/a & \phantom{00}0.41 & \phantom{00}0.47 & \phantom{00}0.41 \\
Response to $b$, $q$, and $g$ jets & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.16 & \phantom{00}0.28 & \phantom{00}0.16 \\
Model for $b$ jets & \phantom{00}0.71 & \phantom{00}0.71 & \phantom{00}0.09 & \phantom{00}0.13 & \phantom{00}0.09 \\
Light-jet response & \phantom{00}2.53 & \phantom{00}1.12 & \phantom{00}0.21 & \phantom{00}0.31 & \phantom{00}0.21 \\
Out-of-cone correction & \phantom{00}2.00 & \phantom{00}2.00 & \phantom{00}n/a & \phantom{00}n/a & $<0.01$ \\
Offset & \phantom{00}1.30 & \phantom{00}1.30 & \phantom{00}n/a & \phantom{00}n/a & $<0.01$ \\
Jet modeling & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.07 & \phantom{00}0.14 & \phantom{00}0.07 \\
Multiple interaction model & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.06 & \phantom{00}0.07 & \phantom{00}0.06 \\
$b$ tag modeling & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.10 & \phantom{00}0.22 & \phantom{00}0.10 \\
Lepton modeling & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.01 & \phantom{00}0.08 & \phantom{00}0.01 \\
Signal modeling & \phantom{00}1.10 & \phantom{00}1.80 & \phantom{00}0.35 & \phantom{00}0.43 & \phantom{00}0.35 \\
Background from theory & \phantom{00}1.00 & \phantom{00}1.10 & \phantom{00}0.06 & \phantom{00}0.05 & \phantom{00}0.06 \\
Background based on data & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.09 & \phantom{00}0.06 & \phantom{00}0.09 \\
Calibration method & \phantom{00}0.58 & \phantom{00}1.14 & \phantom{00}0.07 & \phantom{00}0.14 & \phantom{00}0.07 \\
\hline
Systematic uncertainty & \phantom{00}3.89 & \phantom{00}3.63 & \phantom{00}0.63 & \phantom{00}0.84 & \phantom{00}0.64 \\
Statistical uncertainty & \phantom{00}3.60 & \phantom{0}12.30 & \phantom{00}0.41 & \phantom{00}1.31 & \phantom{00}0.40 \\
\hline
Total uncertainty & \phantom{00}5.30 & \phantom{0}12.83 & \phantom{00}0.76 & \phantom{00}1.56 & \phantom{00}0.75 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{
Summary of correlations among sources of uncertainty. The symbols $\times$ or $\otimes$ within any category
indicate the uncertainties that are 100\% correlated.
The uncertainties marked as $\times$ are uncorrelated with those marked as $\otimes$.
The symbol $0$ indicates absence of correlations.
The symbol ``n/a'' stands for ``not applicable''.
\label{tab:correl} }
\begin{tabular}{l c c c c}
\hline\hline
&\multicolumn{2}{c }{D0\xspace\ Run~I} & \multicolumn {2}{c}{D0\xspace\ Run~II} \\
& \lplus & \dil & \lplus & \dil \\
\hline
{In situ} light-jet calibration & n/a & n/a & $\times$ & $\times$ \\
Response to $b$, $q$, and $g$ jets & n/a & n/a & $\times$ & $\times$ \\
Model for $b$ jets & $\times$ & $\times$ & $\times$ & $\times$ \\
Light-jet response & $\otimes$ & $\otimes$ & $\times$ & $\times$ \\
Out-of-cone correction & $\times$ & $\times$ & n/a & n/a \\
Offset & $\times$ & $\times$ & n/a & n/a\\
Jet modeling &n/a & n/a & $\times$ & $\times$ \\
Multiple interactions model & n/a & n/a & $\times$ & $\times$ \\
$b$ tag modeling & n/a & n/a & $\times$ & $\times$ \\
Lepton modeling & n/a & n/a & $\times$ & $\times$ \\
Signal modeling & $\times$ & $\times$ & $\times$ & $\times$ \\
Background from theory & $\times$ & $\otimes$ & $\times$ & $\otimes$ \\
Background based on data & n/a & n/a & 0 & 0\\
Calibration method & 0 & 0 & 0 & 0 \\
Statistical & 0 & 0 & 0 &0 \\
\hline
\end{tabular}
\end{center}
\end{table}
\endgroup
\begingroup
\squeezetable
\begin{table}[h]
\caption[Global correlations between input measurements]{The matrix of correlation coefficients used to determine the
D0\xspace\ average top quark mass.}
\begin{center}
\renewcommand{\arraystretch}{1.30}
\label{tab:coeff}
\begin{tabular}{l|cccc}
 & \shortstack{Run~I,\\ $\lplus\phantom{^{\prime}}$ \,} & \shortstack{Run~I,\\ \dil \,} & \shortstack{Run~II,\\ $\lplus\phantom{^{\prime}}$ \,} & \shortstack{Run~II,\\ \dil \,} \\
\hline
Run~I, \lplus & 1.00 & & & \\
Run~I, \dil & 0.16 & 1.00 & & \\
Run~II, \lplus & 0.13 & 0.07 & 1.00 & \\
Run~II, \dil & 0.07 & 0.05 & 0.43 & 1.00 \\
\end{tabular}
\end{center}
\end{table}
\endgroup
\section{Results}
\label{sec:results}
We combine the D0\xspace\ input measurements of Table~\ref{tab:inputs} using the BLUE method.
The BLUE combination has a $\chi^2$ of 2.5 for 3 degrees of freedom, corresponding to
a probability of 47\%.
The pulls and weights for each of the inputs obtained from the BLUE method are listed in Table~\ref{tab:stat}.
Here, the pull associated with each input value $m_i$ with uncertainty $\sigma_i$ is calculated as $\frac {(m_i-\mt)}{\sqrt{\sigma_i^2-\sigma^2_{\mt}}}$, where $\sigma^2_{\mt}$
is the uncertainty in the combination, and indicates the degree of agreement of the input with the combined value.
The weight $w_i$ given to the input measurement $m_i$ is $ w_i = \sum_{j=1}^4 (\mathrm{Cov}^{-1})_{ij} / N$, where $\mathrm{Cov}$ is the covariance matrix of the input measurements, and $N$ is a normalization term ensuring
$\sum_{i=1}^4 w_i=1$. The covariance matrix expressed in terms of the correlation coefficients between the measurements ${c}_{ij}$ (with the convention ${c}_{ii}=0$) is:
$\mathrm{Cov_{ij}}= {\sigma_i}{\sigma_j}( \delta_{ij} + {c}_{ij})$, where $\delta_{ij}$ is the Kronecker $\delta$.
At first order in the correlation coefficients, its inverse is given by $(\mathrm{Cov}^{-1})_{ij}= \frac{1}{\sigma_i}\frac{ 1}{\sigma_j}( \delta_{ij} - {c}_{ij})$, so that the weight $w_i$ can be written as $w_i=\frac 1 {\sigma^2_i} ( 1 - \sum_{j\neq i}\frac {\sigma_i}{\sigma_j}{c}_{ij})/N'$, $N'$ being a normalization term.
This expression shows that the weight for the Run~I\ \dil\ measurement is negative mainly because the correlation with the Run~II\ \lplus\ measurement (0.07) is larger than the ratio of their uncertainties (0.76/12.83).
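As an illustration, the combination can be approximately reproduced from the rounded totals of Table~\ref{tab:inputs} and the correlation coefficients of Table~\ref{tab:coeff} with the short Python sketch below; because only rounded inputs are used, the weights, pulls, and $\chi^2$ come out close to, but not exactly equal to, the published values (the pulls are particularly sensitive to rounding since the uncertainty of the dominant input nearly equals the combined uncertainty):
\begin{verbatim}
import numpy as np

# Central values and total uncertainties (GeV) of the four input measurements,
# ordered as [Run I l+jets, Run I dilepton, Run II l+jets, Run II dilepton].
m = np.array([180.10, 168.40, 174.98, 173.50])
sigma = np.array([5.30, 12.83, 0.76, 1.56])

# Correlation coefficients between the measurements.
corr = np.array([[1.00, 0.16, 0.13, 0.07],
                 [0.16, 1.00, 0.07, 0.05],
                 [0.13, 0.07, 1.00, 0.43],
                 [0.07, 0.05, 0.43, 1.00]])
cov = corr * np.outer(sigma, sigma)

w = np.linalg.solve(cov, np.ones(4))
w /= w.sum()                                    # BLUE weights
mt = w @ m
err = np.sqrt(w @ cov @ w)
chi2 = (m - mt) @ np.linalg.solve(cov, m - mt)
pulls = (m - mt) / np.sqrt(sigma**2 - err**2)   # sensitive to the rounding of the inputs

print("weights:", np.round(w, 3))
print("pulls:  ", np.round(pulls, 2))
print("m_t = %.2f +- %.2f GeV,  chi2 = %.1f for 3 d.o.f." % (mt, err, chi2))
\end{verbatim}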
\begingroup
\squeezetable
\begin{table}[ht]
\caption[The pull and weight of each measurement]{The pull and weight for each
input channel when using the BLUE method to
determine the average top quark mass.
}
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{lcccc}
\hline \hline
& \multicolumn{2}{c}{D0\xspace\ {Run~I}}
& \multicolumn{2}{c}{D0\xspace\ {Run~II }}\\
& \lplus & \dil & \lplus & \dil \\
\hline
Pull & \phantom{$-$}0.98\phantom{0} & $-$0.51\phantom{0} & \phantom{$-$}0.63\phantom{0} & $-$1.06\phantom{0} \\
Weight\hspace{0.4cm} & \phantom{$-$}0.002 & $-$0.003 & \phantom{$-$}0.964 & \phantom{$-$}0.035 \\
\hline \hline
\end{tabular}
\end{center}
\label{tab:stat}
\end{table}
\endgroup
The resulting combined value for the top quark mass is
\begin{eqnarray}
\nonumber
\mt=\result .
\end{eqnarray}
Adding the statistical and systematic uncertainties
in quadrature yields a total uncertainty of $0.75$~\GeV, corresponding to a
relative precision of 0.43\% on the top quark mass. The breakdown of the uncertainties is
shown in Table\,\ref{tab:BLUEuncert}. The dominant sources of uncertainty are the statistical uncertainty, the JES calibration, which has statistical origin, and the modeling of the signal.
The total statistical and systematic
uncertainties are reduced relative to the published D0\xspace\ and CDF combination~\cite{TeVTopComboPRD}
due primarily to the latest and most accurate D0\xspace\ \lplus\ analysis~\cite{Mtop2-D0-l+jt-PRL,Mtop2-D0-l+jt-PRD}.
As a test of stability, we vary the correlation of the dominant source of systematic uncertainty, ``signal modeling'', from 100\% to 0\%, first between Run~I\ and Run~II\ measurements, and then, in a second check, between all measurements.
The combined value of $\mt$ does not change by more than 50~MeV, while the uncertainty changes by no more than 20~MeV.
This is due to the fact that the Run~II\ \lplus\ measurement dominates the combination with a weight of 96\%. Thus, the combination is not sensitive to the detailed description of the correlation of systematic uncertainties.
Because the much smaller total uncertainty of the \lplus\ measurement results in its large weight, the improvement of the combined uncertainty relative to the individual \lplus\ uncertainty is smaller than 10~MeV.
\begingroup
\squeezetable
\begin{table}[t]
\caption{\label{tab:BLUEuncert}
Combination of D0\xspace\ measurements of \mt\ and contributions to its overall uncertainty.
The uncertainty categories are
defined in the text. The total systematic uncertainty and the total
uncertainty are obtained
by adding the relevant contributions in quadrature.}
\begin{center}
\renewcommand{\arraystretch}{1.30}
\newcolumntype{H}{>{\setbox0=\hbox\bgroup}c<{\egroup}@{}} \begin{tabular}{lHHHHc}
\hline
\hline
&&&&& D0\xspace\ combined values (\GeV) \\ \hline
top quark mass & 180.10 & 168.40 & 174.98 & 173.50 & 174.95 \\
\hline
{In situ} light-jet calibration & \phantom{00}n/a & \phantom{00}n/a & \phantom{00}0.41 & \phantom{00}0.47 & \phantom{00}0.41 \\
Response to $b$, $q$, and $g$ jets & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.16 & \phantom{00}0.28 & \phantom{00}0.16 \\
Model for $b$ jets & \phantom{00}0.71 & \phantom{00}0.71 & \phantom{00}0.09 & \phantom{00}0.13 & \phantom{00}0.09 \\
Light-jet response & \phantom{00}2.53 & \phantom{00}1.12 & \phantom{00}0.21 & \phantom{00}0.31 & \phantom{00}0.21 \\
Out-of-cone correction & \phantom{00}2.00 & \phantom{00}2.00 & \phantom{00}n/a & \phantom{00}n/a & $<0.01$ \\
Offset & \phantom{00}1.30 & \phantom{00}1.30 & \phantom{00}n/a & \phantom{00}n/a & $<0.01$ \\
Jet modeling & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.07 & \phantom{00}0.14 & \phantom{00}0.07 \\
Multiple interaction model & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.06 & \phantom{00}0.07 & \phantom{00}0.06 \\
$b$ tag modeling & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.10 & \phantom{00}0.22 & \phantom{00}0.10 \\
Lepton modeling & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.01 & \phantom{00}0.08 & \phantom{00}0.01 \\
Signal modeling & \phantom{00}1.10 & \phantom{00}1.80 & \phantom{00}0.35 & \phantom{00}0.43 & \phantom{00}0.35 \\
Background from theory & \phantom{00}1.00 & \phantom{00}1.10 & \phantom{00}0.06 & \phantom{00}0.05 & \phantom{00}0.06 \\
Background based on data & \phantom{00}n/e & \phantom{00}n/e & \phantom{00}0.09 & \phantom{00}0.06 & \phantom{00}0.09 \\
Calibration method & \phantom{00}0.58 & \phantom{00}1.14 & \phantom{00}0.07 & \phantom{00}0.14 & \phantom{00}0.07 \\
\hline
Systematic uncertainty & \phantom{00}3.89 & \phantom{00}3.63 & \phantom{00}0.63 & \phantom{00}0.84 & \phantom{00}0.64 \\
Statistical uncertainty & \phantom{00}3.60 & \phantom{0}12.30 & \phantom{00}0.41 & \phantom{00}1.31 & \phantom{00}0.40 \\
\hline
Total uncertainty & \phantom{00}5.30 & \phantom{0}12.83 & \phantom{00}0.76 & \phantom{00}1.56 & \phantom{00}0.75 \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\endgroup
The input measurements and the resulting D0\xspace\ average mass of the top
quark are summarized in Fig.~\ref{fig:summary}, along with the top quark pole mass extracted by D0\xspace
from the measurement of the \ttbar\ cross section~\cite{massxs}.
\section{Summary}
\label{sec:summary}
We have presented the combination of the measurements of the top quark mass in all D0\xspace\ data.
Taking into
account the statistical and systematic uncertainties and their
correlations, we find a combined average of
$\mt=\resulttot$.
This measurement, with a relative precision of 0.43\%,
constitutes the legacy Run~I\ and Run~II\ measurement of the top quark mass in the D0\xspace\ experiment.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=0.85\textwidth]{d0_topmass_summary.eps}
\end{center}
\vspace{-0.6cm}
\caption{\label{fig:summary} A summary of the top quark mass measurements used
in the D0\xspace combination~\cite{Mtop1-D0-di-l-PRL,Mtop1-D0-di-l-PRD,Mtop1-D0-l+jt-new1,Mtop2-D0-di-l-Nu-PLB,Mtop2-D0-di-l-ME-PRD,Mtop2-D0-l+jt-PRL,Mtop2-D0-l+jt-PRD}, along with the D0\xspace final result,
and the top quark pole mass extracted from the D0\xspace\ cross section measurement~\cite{massxs}. The latter is not used in the combination.
The inner red uncertainty bars represent the statistical uncertainties, while the blue bars represent the total uncertainties.
For comparison, we also show the preliminary 2014 world average of \mt~\cite{worldcombo} which includes D0\xspace\ Run~II\ \dil\ and \lplus\ measurements that are now superseded.
For the top quark pole mass extracted from the D0\xspace\ cross section measurement, a 1.1~\GeV\ theory uncertainty is included in the systematic uncertainty, and the statistical uncertainty is determined such that its relative contribution to the experimental uncertainty is the same as for the cross-section measurement.
}
\end{figure*}
\section{Acknowledgments}
We thank the staffs at Fermilab and collaborating institutions,
and acknowledge support from the
Department of Energy and National Science Foundation (United States of America);
Alternative Energies and Atomic Energy Commission and
National Center for Scientific Research/National Institute of Nuclear and Particle Physics (France);
Ministry of Education and Science of the Russian Federation,
National Research Center ``Kurchatov Institute" of the Russian Federation, and
Russian Foundation for Basic Research (Russia);
National Council for the Development of Science and Technology and
Carlos Chagas Filho Foundation for the Support of Research in the State of Rio de Janeiro (Brazil);
Department of Atomic Energy and Department of Science and Technology (India);
Administrative Department of Science, Technology and Innovation (Colombia);
National Council of Science and Technology (Mexico);
National Research Foundation of Korea (Korea);
Foundation for Fundamental Research on Matter (The Netherlands);
Science and Technology Facilities Council and The Royal Society (United Kingdom);
Ministry of Education, Youth and Sports (Czech Republic);
Bundesministerium f\"{u}r Bildung und Forschung (Federal Ministry of Education and Research) and
Deutsche Forschungsgemeinschaft (German Research Foundation) (Germany);
Science Foundation Ireland (Ireland);
Swedish Research Council (Sweden);
China Academy of Sciences and National Natural Science Foundation of China (China);
and
Ministry of Education and Science of Ukraine (Ukraine).
\bibliographystyle{apsrev_custom2}
\section{Introduction}
Besides the Earth, Saturn's icy moon Titan is the only planetary body in the Solar System known to possess lakes and seas
\citep{Lopes2007,Stofan2007,Hayes2008}. Some of these lakes and seas are currently filled with liquids, while others are empty
\citep{Hayes2008}. Most are located in the polar regions \citep{Hayes2008,Aharonson2009},
although a few occurrences have been reported at lower latitudes \citep{Moore2010,Vixie2012lac}.
The currently-filled lakes and seas are located poleward of 70$^\circ$ of latitude in both hemispheres whereas
most empty depressions are located at lower latitudes (Figure \ref{fig:Titan_lakes})
\citep{Hayes2008,Aharonson2009}.
Altogether, empty depressions, lakes, seas and fluvial channels argue for the presence of an active
``hydrological'' cycle on Titan similar to that of the Earth, with exchanges between the
subsurface (ground liquids), the surface (lakes, seas, fluvial channels) and Titan's methane-rich
atmosphere, where convective clouds and sporadic intense rainstorms have been imaged by
the Cassini spacecraft instruments \citep{Turtle2011rains}. Methane, rather than water as on Earth, probably
dominates the cycle on Titan \citep{Lunine2008} and thus constitutes one of the
main components of the surface liquid bodies observed in the polar regions \citep{Glein2013,Tan2013}.
Ethane, the main photodissociation product of methane \citep{Atreya2007}, is also involved in the chemistry of Titan's lakes, as predicted by several thermodynamical models
\citep{Lunine1983,Raulin1987,Dubouloz1989,Cordier2009,Tan2013,Cordier2013erratum,Glein2013},
or as identified in Ontario Lacus thanks to the Cassini/VIMS instrument \citep{Brown2008}.
Titan's lakes are located in topographic depressions carved into the
ground by geological processes that are poorly understood to date.
The origin of the liquid is thought to be related to precipitation, surface runoff and underground circulation, leading to the accumulation of liquids in local topographic depressions.
In the present work, we aim to constrain the origin and the age of these depressions.
Section \ref{sec:geology} first provides a brief overview of their geology and a discussion of their possible origin based on
their morphological characteristics and from considerations about Titan's surface composition and climate. Based on this
discussion, we propose a new quantitative model, whereby the depressions have formed by the dissolution of a
surface geological layer over geological timescales, such as in karstic landscapes on Earth.
In terrestrial karstic landscapes, the maximum quantity of mineral that can be dissolved per year, namely
the solutional denudation rate, can be
computed using a simple thermodynamics-climatic model presented in Section \ref{sec:denudation_rate_model}.
The denudation rate depends on the nature of the surface material (solubility and density of the minerals) and on
the climate conditions (precipitation, evaporation, surface temperature).
Using this simple model, it is possible to determine theoretical timescales for the formation of specific karstic landforms on Earth,
which are compared to relative or absolute age determinations in Section \ref{sec:earth_ages}.
We apply the same model to Titan's surface in Section \ref{sec:denudation_rates_titan}.
Section \ref{sec:DR_Earth_time} is dedicated to the comparative study of denudation rates of
pure solid organics in pure liquid methane, ethane and propane, and of common soluble minerals
(halite, gypsum, anhydrite, calcite and dolomite), cornerstones of karstic landscape development on Earth,
in liquid water over terrestrial timescales. Section \ref{sec:DR_Titan_time} describes the computation of
denudation rates of pure solids and of mixed organic surface layers in liquid methane over Titan timescales
by using the methane precipitation rates extracted from the GCM of \citet{Schneider2012}.
Based on these denudation rates, we compute the timescales needed to develop the typical
100 m-deep topographic depressions observed in the polar regions of Titan (Section \ref{sec:ages_Titan})
and compare them to timescales estimated from other observations (e.g. crater counting, dune formation).
\section{Geology of Titan's lacustrine depressions} \label{sec:geology}
\subsection{Geomorphological settings} \label{sec:geomorphology}
Seas and lacustrine depressions strongly differ in shape (Figure \ref{fig:Titan_lakes2}).
On one hand, seas are large (several hundred km in width) and deep
(from 150 to 300 - 400 m in depth) \citep{Lorenz2008organic,Lorenz2014,Mastrogiuseppe2014}.
They possess dendritic contours and are connected to fluvial channels (e.g. Ligeia Mare, Figure \ref{fig:Titan_lakes2}a)
\citep{Stofan2007,Sotin2012,Wasiak2013}. They seem
to develop in areas associated with reliefs, which constitute some parts of their coastlines.
On the other hand, Titan's lacustrine depressions (Figure \ref{fig:Titan_lakes2}b)
develop in relatively flat areas. They lie between 300 and 800 meters above the level of the northern
seas \citep{Stiles2009,Kirk2012}. They are typically rounded or lobate in shape and some of them
seem to be interconnected \citep{Bourgeois2008}. Their widths vary from
a few tens of km, such as for most of Titan's lacunae, up to a few hundred
km, such as Ontario Lacus or Jingpo Lacus. Their depths have been tentatively estimated to range from
a few meters to 100 - 300 meters \citep{Hayes2008,Kirk2008,Stiles2009,Lorenz2013},
with ``steep''-sided walls \citep{Mitchell2007,Bourgeois2008,Kirk2008,Hayes2008}.
The liquid-covered depressions appear to lie 250 meters below the floor of the empty depressions \citep{Kirk2007}, which could be indicative of the presence of a sub-surface alkanofer, analogous to terrestrial aquifers, that would fill the depressions or not depending on their base level \citep{Hayes2008,Cornet2012}.
The depressions sometimes possess a raised rim, ranging from a few hundred meters up to 600 meters in
height \citep{Kirk2007,Kirk2008}. All these numbers are likely subject to
modification following future improvements in depth-deriving techniques.
\subsection{Geological origin of the depressions} \label{sec:origins}
The geological origin of the topographic depressions and how they are fed by liquids are
still debated. The geometric analysis of the lakes by \citet{Sharma2010,Sharma2011} led
to the conclusion that, unlike on Earth, the formation mechanism of the lacustrine
depressions cannot be derived from the analysis of their coastline shapes. Recently, \citet{Black2012}
and \citet{Tewelde2013} showed that mechanical erosion due to fluvial activity
would have only a minor influence on landscape evolution on Titan.
Given this context, several hypotheses are being explored to understand how Titan's lakes
have formed. These include:
\begin{enumerate}
\item cryovolcanic origin \citep{Mitchell2007,Wood2007},
forming topographic depressions in which lakes can exist, such as in terrestrial calderas \citep{Acocella2007}
or maars \citep{Lorenz1986};
\item thermokarstic origin \citep{Kargel2007,Mitchell2007,Harrisson2012},
where the cyclic destabilization of a methane frozen ground would form topographic depressions, such as in
periglacial areas on Earth where the permafrost cyclically freezes and thaws and forms thermokarst lakes,
pingos or alases \citep{French2007};
\item solutional origin \citep{Mitchell2007,Bourgeois2008,Mitchell2008,Mitchell2011karst,Malaska2011dissolution,Barnes2011evaporites,Cornet2012},
where processes analogous to terrestrial karstic dissolution create topographic depressions,
such as terrestrial sinkholes/dolines, playas and pans under various climates \citep{Shaw2000,Ford2007}.
\end{enumerate}
On one hand, the general lack of unequivocal cryovolcanic features on Titan tends to limit the
likelihood of the cryovolcanic hypothesis \citep{Moore2011}. A methane-based
permafrost would be difficult to form on Titan due to the presence of nitrogen in the
atmosphere \citep{Lorenz2002,Heintz2009}. Its putative cyclic destabilization would also be challenging,
given the tiny temperature variations between summer and winter, day and night, equator
and poles \citep{Jennings2009,Lora2011,Cottini2012} over all timescales
\citep{Aharonson2009,Lora2011}. On the other hand, solid organics have been shown to be quite
soluble in liquid hydrocarbons under Titan's surface conditions
\citep{Lunine1983,Raulin1987,Dubouloz1989,Cordier2009,Cordier2013erratum,Cordier2013,Glein2013,Tan2013}, provided that they
are available at the surface. The observation of bright terrain around present lakes and inside of
empty depressions, analogues of terrestrial evaporites produced by the evaporitic crystallization
of dissolved solids, also strengthens this hypothesis \citep{Barnes2011evaporites, MacKenzie2014}.
On Earth, dissolution-related landforms are not restricted to sinkholes/dolines, pans or playas, which characterize relatively young karsts \citep{Ford2007}. Spectacular instances of reliefs sculpted by dissolution, known as cone/cockpit karsts, fluviokarsts or tower karsts, exist
under temperate to tropical/equatorial climates, such as in China \citep{Xuewen2006,Waltham2008},
Indonesia \citep{Ford2007} or the Carribeans \citep{Fleurant2008,LyewAyee2010}.
The observation of possible mature karst-like terrains in
Sikun Labyrinthus (Figure \ref{fig:Titan_lakes2}c) by \citet{Malaska2010}, similar to these terrestrial karstic landforms, also gives further credence to the hypothesis that lacustrine depressions on Titan
are karstic in origin.
\subsection{Composition of Titan's solid surface} \label{sec:composition}
Titan's surface can be divided into five main spectral units identified by the Cassini/VIMS instrument: bright terrain,
dark equatorial dune fields or dark-brown units, blue units, 5 $\mu$m-bright units and the dark lakes \citep{Barnes2007,Stephan2009}.
In the polar regions, the solid surface appears dominantly as bright terrain, lakes, and patches
identified as the 5 $\mu$m-bright unit in VIMS data \citep{Barnes2011evaporites,Sotin2012, MacKenzie2014}.
The spectral characteristics of the VIMS 5 $\mu$m-bright unit seen inside and around some polar (and equatorial)
lacustrine depressions indicate the presence of various hydrocarbons and nitriles \citep{Clark2010, Moriconi2010}
and are not compatible with the presence of water ice \citep{Barnes2009ontario}.
The origin of the organic materials is probably linked to the atmospheric photochemistry, which
results in the formation of various hydrocarbons and nitriles \citep{Lavvas2008a,Lavvas2008b,Krasnopolsky2009}
detected by Cassini \citep{Cui2009,Magee2009,Clark2010,Coustenis2010,Vinatier2010,Cottini2012}.
Table \ref{t:products} gives the estimated fluxes of some produced organics, as derived from several models.
Most of these compounds could condense as solids and sediment onto the surface over geological timescales
\citep{Atreya2007, Malaska2011dissolution}. Most of them would be relatively soluble in liquid alkanes
\citep{Raulin1987,Dubouloz1989,Cordier2009,Cordier2013erratum,Glein2013,Tan2013}.
It is therefore reasonable to assume that a superficial soluble layer, composed of organic products,
exists at the surface of Titan. Episodic dissolution of this layer would be responsible for the development
of karst-like depressions and labyrinthic terrains
\citep{Bourgeois2008,Malaska2010,Malaska2011dissolution,Mitchell2011karst,Cornet2012}.
Evaporitic crystallization could also occur after episodes of dissolution in the liquids,
forming evaporite-like deposits \citep{Barnes2011evaporites,Cornet2012,Cordier2013,MacKenzie2014}.
\section{Solutional denudation rates on Earth} \label{sec:denudation_rate_model}
On Earth, karstic landforms develop thanks to the
dissolution of carbonate (calcite, dolomite) and evaporite (gypsum, anhydrite, halite)
minerals under the action of groundwater and rainfall percolating through pore space
and fractures present in rocks. The mineral solubilities vary as a function of the
environmental conditions (amount of rain and partial pressure of carbon dioxide) \citep{Ford2007}.
Karstic landforms like dolines or sinkholes are often located under temperate to humid climates
in carbonates (though gypsum or halite karsts also exist on Earth). They
reach depths of up to a few hundred meters \citep{Ford2007}.
Karsto-evaporitic landforms like pans are located under semi-arid to arid climates. They reach
depths of up to a few tens of meters \citep{Goudie1995,Bowen2012}. Evaporitic landforms like playas are
located under arid climates. They are characterized by their extreme flatness
and can occur in any kind of topographic depression \citep{Shaw2000}.
Denudation rates (hereafter $DR$) in terrestrial karstic landscapes are primarily constrained by geological
(age and physico-chemical nature of rocks) and climate (evolution of net precipitation rates)
analyses. For many limestone-dominated areas (the majority of karst areas), it is commonly
assumed that dissolution features are mainly created in the epikarstic zone located in the
top few meters below the surface \citep{White1984,Ford2007}. Classically, in karstic terrains, the chemical/solutional
denudation rate $DR$ (in meters per Earth year, or m/Eyr) is related to the rock physico-chemical
properties and the climate by the following equation \citep{White1984,Tucker2001,Ford2007,Fleurant2008}:
\begin{equation} \label{eqn:DR}
DR = \rho_w \, \frac{M_{\rm calcite}}{\rho_{\rm calcite}} \, \tau \, m_{\rm Ca}, \qquad{\textrm{[m/Eyr]}}
\end{equation}
where $\rho_w$ is the mass density of liquid water ($\simeq 1000$ kg/m$^3$), $M_{\rm calcite}$ is the molar
mass of calcite (in kg/mol), $\rho_{\rm calcite}$ is the mass density of calcite (in kg/m$^3$), $\tau$ is the
mean annual net precipitation rate (in m/yr),
equivalent to the sum of runoff and infiltration or the difference between precipitation and
evapotranspiration over long timescales \citep{White2012}, and $m_{\rm Ca}$ is the equilibrium
molality of calcite (Ca$^{2+}$ cations, in mol/kg),
assuming an instantaneous dissolution. Following this equation, denudation rates depend linearly on
the climate precipitation regime and on the molality of dissolved materials.
Molality calculations are provided in Appendix \ref{sec:solubility} for various soluble minerals, based on
two thermodynamic hypotheses: an Ideal Solution Theory (IST, no preferential interactions between molecules)
and an Electrolyte Solution Theory (EST, preferential interactions between molecules).
We incorporated the effect of CO$_2$ gas dissolved in water, which acidifies water
due to a series of intermediate reactions (one of which produces carbonic acid),
and increases the dissolution rates of carbonates (calcite and dolomite,
see Appendix \ref{sec:solubility} for more details).
The partial pressure of CO$_2$ (hereafter $P_{CO_2}$, in atm) under different climates can be computed
as follows \citep{Ford2007}:
\begin{equation}
\log P_{CO_2} = -3.47 + 2.09 \, (1-e^{-0.00172 \, AET}),
\end{equation}
where $AET$ is the mean annual evapotranspiration rate in mm/Eyr.
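For instance, $AET = 350$ mm/Eyr, a value typical of semi-arid areas, gives
\begin{equation}
\log P_{CO_2} = -3.47 + 2.09 \, (1-e^{-0.00172 \times 350}) \simeq -2.52, \qquad \textrm{i.e.} \qquad P_{CO_2} \simeq 3~\textrm{matm},
\end{equation}
while $AET = 800$ mm/Eyr, typical of tropical areas, leads to $P_{CO_2} \simeq 12.3$ matm; values of this kind are used in the terrestrial case studies of Section \ref{sec:earth_ages}.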
Applying the above formula gives $P_{CO_2} < 3$ matm
for arid areas, $3 < P_{CO_2} < 12$ matm for temperate areas and $P_{CO_2} > 12$ matm for tropical or
equatorial climates. The resulting typical values of denudation rates on Earth for common soluble minerals in liquid water
are given in Table \ref{t:DRmin}. Depending on the climatic conditions, denudation rates of salts are on the order of
a few hundred $\mu$m to a few cm per Earth year, whereas denudation rates of carbonates are on the
order of a few $\mu$m to a few hundred $\mu$m per Earth year.
Note that chemical erosion is just one of the landshaping processes on Earth. Other mechanisms, such as
mechanical erosion, would tend to increase the total denudation rate of a surface \citep{Fleurant2008}.
The dissolved solids can take part in the crystallization of soluble deposits at the surface (e.g. calcretes or salt
crusts in arid environments) or in the sub-surface (e.g. cements), when they saturate the liquids. They do not
necessarily crystallize at the location where they dissolved \citep{Ford2007}.
\section{Denudation rates to determine ages of terrestrial karstic landscapes} \label{sec:earth_ages}
Using the denudation rates, theoretical timescales for the development of dissolution landforms
of a given depth can be inferred. We test this approach hereafter on terrestrial examples by comparing ages determined from the denudation rates with ages determined by relative or absolute
chronology. For carbonates, the partial pressure of CO$_2$ is computed based on the
MOD16 Global Evapotranspiration Product of the University of Montana/ESRI Mapping Center.
The notation GEyrs, MEyrs and kEyrs will refer to timescales
expressed respectively in Giga Earth years ($10^9$ years), Million Earth years ($10^6$ years) and kilo Earth years
($10^3$ years) in the following sections.
\subsection{Example under a hyper-arid climate: the caves of the Mount Sedom diapir (Israel)}
Mount Sedom is an $11\times1.5$ km salt diapir located in Israel, 250 meters above the level of the Dead Sea, under a hyper-arid climate \citep{Frumkin1994,Frumkin1996}. It is composed of a Pliocene-Pleistocene ($< 7$ MEyrs) halite base covered by a 5 - 50 m-thick layer of anhydrite and sandstones.
Several sinkholes and caves exist in Mount Sedom \citep{Frumkin1994}.
In this part of Israel, the present-day hyper-arid climate leads to average rainfall rates
of 50 mm/Eyr, of which 10 - 15 mm/Eyr constitutes
the average effective rainfall rate ($\tau$). Gvishim and Mishquafaim Caves are respectively 60 m and 20 m-deep caves and are essentially
carved into halite in the northern part of Mount Sedom.
Radioisotopic age measurements in several caves of Mount Sedom give a Holocene age, about 8 kEyrs
(and around 3.2 - 3.4 kEyrs from age measurements performed in Mishquafaim Cave) \citep{Frumkin1996}.
Assuming that underground dissolution is somewhat connected to the rainfall percolating through anhydrite and reaching halite,
the denudation rates computed with our model (Equation \ref{eqn:DR}) over this region are $DR \simeq 1.69 - 2.53$ mm/Eyr.
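This range can be reproduced from Equation \ref{eqn:DR} applied to halite: taking, for illustration, a molar mass of 58.4 g/mol, a density of 2160 kg/m$^3$ and a saturation molality of roughly 6.2 mol/kg for halite (the exact values adopted in the Appendices may differ slightly), $\tau = 10 - 15$ mm/Eyr gives
\begin{equation}
DR \simeq 1000 \times \frac{0.0584}{2160} \times (0.010 - 0.015) \times 6.2 \simeq (1.7 - 2.5)\times10^{-3}~\textrm{m/Eyr},
\end{equation}
consistent with the values quoted above.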
The timescales required to form these caves would be between 23.7 and 35.5 kEyrs for Gvishim Cave, and
between 7.9 and 11.8 kEyrs for Mishquafaim Cave, longer than those estimated from radioisotopic measurements.
However, it should be noted that Mount Sedom experiences a rather deep dissolution, the load of dissolved
solids in the underground liquid water flowing in its caves being up to 3 times that of surface runoff \citep{Frumkin1994}.
Therefore, considering that the denudation rate should be 3 times greater,
we find timescales of development between 7.9 and 11.8 kEyrs for Gvishim Cave and
between 2.6 and 3.9 kEyrs for Mishquafaim Cave, in better agreement with age measurements.
\subsection{Example under a semi-arid climate: The Etosha super-pan (Namibia)}
The Etosha Pan, located in Namibia under a semi-arid climate, is a flat 120$\times60$ km
karsto-evaporitic depression that has already been suggested as a potential
analogue for Titan's lacustrine depressions \citep{Bourgeois2008,Cornet2012}.
The depression is at most about 15 to 20 m deep, and has been carved into a carbonate layer essentially composed of calcretes and dolocretes lying on top of a middle Tertiary-Quaternary detrital sedimentary sequence that covers the Owambo basin
\citep{Buch1996,Hipondoka2005}. This sedimentary sequence is believed to have
accumulated under a semi-arid climate, relatively similar to the current one.
The age of the calcrete layer is not known precisely, but its formation is believed
to have started during the Miocene/Pliocene transition
about 7 MEyrs ago \citep{Buch1997a,Buch1997b}, or even later during the late Pliocene, 4 MEyrs ago \citep{Miller2010}.
The Etosha Pan is believed to have developed at the expense of the calcrete layer over the last 2 MEyrs \citep{Miller2010}.
In the Owambo sedimentary basin, $AET \simeq 350 $ mm/Eyr, yielding $P_{CO_2} = 3$ matm.
Precipitation rates during the summer rainy season reach up to 500 mm/Eyr, which leads to $\tau = 150$ mm/Eyr at most.
These conditions lead to $DR = 6.5$ $\mu$m/Eyr in a substratum composed of calcite (calcrete) and $DR = 5.6$ $\mu$m/Eyr
in a substratum composed of dolomite (dolocrete). These denudation rates give an approximate age for the Etosha Pan of about
2.31 - 3.08 MEyrs in calcretes and 2.68 to 3.57 MEyrs in dolocretes. These ages are consistent with ages estimated from
geological observations ($< 4$ MEyrs).
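These ages simply correspond to the time needed to dissolve the 15 - 20 m of carbonate at the computed denudation rate; for the calcrete case, for instance,
\begin{equation}
t \simeq \frac{15 - 20~\textrm{m}}{6.5~\mu\textrm{m/Eyr}} \simeq 2.31 - 3.08~\textrm{MEyrs}.
\end{equation}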
\subsection{Example under a temperate climate: the doline of Crveno Jezero (Croatia)}
Crveno Jezero is a 350 m-wide collapse doline located in the Dinaric
Karst of Croatia, under a presently mediterranean temperate climate. This part of the
Dinaric Karst is essentially composed of limestones and dolomites that have been
deposited during the Mesozoic (Triassic to Cretaceous), when the area was
covered by shallow marine carbonate shelves \citep{Vlahovic2002,Mihevc2010}.
The cliffs forming the doline are up to 250 m in height. The bottom
of the depression is covered by a lake, named Red Lake, about 250 m-deep.
The total depth of the doline is therefore about 500 m \citep{Garasic2001,Garasic2012}.
Although the formation process of the collapse dolines in the area is still not completely understood,
especially in terms of the relative importance and timing of the collapse compared to the dissolution in the
development of the structure, the uplift linked to the formation of the Alps (Eocene-Oligocene) led to the exposure
of these carbonates and their subsequent karstification since at least Oligo-Miocene times (last 30 MEyrs) \citep{Sket2012}.
At some point, dissolution occurred to form an underground cave; then the load of the capping rocks exceeded their cohesion, leading to the sudden or progressive collapse of the cap into the empty space beneath. The timescale calculated for the doline formation therefore constitutes an upper estimate. Age determinations in the northern Dinaric caves have been performed and show that karstification is older than 4 to 5 MEyrs \citep{ZupanHajna2012}.
Crveno Jezero thus developed between 4 and 30 MEyrs ago.
In this region of Croatia, $AET \simeq 500$ mm/Eyr, which leads to $P_{CO_2} = 5.44$ matm.
Annual mean precipitation rates are about 1100 mm/Eyr \citep{Mihevc2010}, leading to $\tau = 600$ mm/Eyr.
These parameters give $DR = 31.8$ $\mu$m/Eyr in limestones dominated by calcite
($DR = 27.5$ $\mu$m/Eyr in limestones dominated by dolomite).
The time required to dissolve 500 m of calcite under these present conditions would be around
15.7 MEyrs (18.2 MEyrs in dolomite). It is likely that cap rocks fell into the cave,
leading to a younger age (1 m of calcite and dolomite dissolve in about 31 and 36 kEyrs respectively).
However, the age determined by our method is still consistent with the geological records.
\subsection{Example under a tropical climate: the Xiaozhai tiankeng (China)}
The Xiaozhai tiankeng is a 600 m-wide collapse doline located in the Chongqing province of China, under a
tropical climate. The depth of this doline is evaluated between 511 and 662 m depending on the location \citep{Xuewen2006,Ford2007}.
It is organized into two major collapse structures, an upper 320 m-deep structure and a lower 342 m-deep shaft.
The tiankeng developed into Triassic limestones in a karst drainage basin. It
results from various surface and subsurface processes such as dissolution, fluid flows and collapse.
The tiankengs of China would have formed during the late Pleistocene (last 128,000 Eyrs) \citep{Xuewen2006} under an equatorial monsoonal climate similar to the present one, which was emplaced 14 MEyrs ago \citep{Wang2009}.
In the area, annual precipitation rates are about 1500 mm/Eyr, for an annual
evapotranspiration rate of about 800 mm/Eyr, which leads to $P_{CO_2} = 0.0123$ atm and
$\tau=700$ mm/Eyr. Under these conditions, $DR =49.6 $ $\mu$m/Eyr (calcite) or
$DR =42.6 $ $\mu$m/Eyr (dolomite) and the timescale to form the Xiaozhai tiankeng
would be between 10.3 and 13.3 MEyrs (calcite) and 12.0 and 15.5 MEyrs (dolomite),
considerably longer timescales than those estimated from geological records. However,
such a difference is to be expected, since the formation of gigantic tiankengs is subject to complex interconnected surface processes not taken into account in these simple calculations (collapse and underground circulation of water). This illustrates the limits of our method, which generally over-estimates the ages of karstic features when they
do not result from dissolution only.
\section{Denudation rates on Titan} \label{sec:denudation_rates_titan}
Equation \ref{eqn:DR} has to be adapted for Titan. From thermodynamics, we compute molar volumes
($V_{\rm m,i}$ in m$^3$/mol, see Appendix \ref{sec:molar_volumes}) and mole fractions at saturation
($X_{i,\rm sat}$, see Appendix \ref{sec:solubility}) instead of mass densities and molalities. We considered the Ideal
and the non-ideal Regular Solutions Theories (IST and RST respectively) and ensured that our results
are consistent with experimentally determined solubilities. The RST has been developed to determine the solubility
of non-polar to slightly polar molecules, such as simple hydrocarbons or carbon dioxide, in non-polar solvents.
It is less likely to be appropriate for polar molecules such as nitriles/tholins and water ice.
We therefore always provide a comparison between the IST and the RST in order to assess the uncertainty incurred by using the RST for polar molecules (see also Appendix \ref{sec:solubility}). However, we consider the RST, which provides our lowest and most realistic solubilities (at least for hydrocarbons),
as the most suited model for the study. We assume that dissolution is instantaneous, which is not unreasonable
given the rapid saturation of solid hydrocarbons in liquid ethane inferred from
recent dissolution experiments \citep{Malaska2014}, compared to the long geological timescales considered in our work.
For a binary solute-solvent system, Equation \ref{eqn:DR} becomes:
\begin{equation} \label{eqn:DR2}
DR_i =\frac{ X_{i, \rm sat}}{1- X_{i, \rm sat}}\, \tau \, \frac{V_{{\rm m},i}^S }{V_{\rm m, solv}^L}, \qquad \textrm{[m/Eyr]}
\end{equation}
where $V_{{\rm m},i}^S $ and $V_{\rm m, solv}^L$ are the molar volumes of the
solid (crystallized) phase and of the liquid phase respectively.
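Equation \ref{eqn:DR2} is the direct transcription of Equation \ref{eqn:DR}: writing the molality of the dissolved solid as $m_i = X_{i,\rm sat}/[(1-X_{i,\rm sat})\,M_{\rm solv}]$, with $M_{\rm solv}$ the molar mass of the solvent, and each density as the ratio of a molar mass to a molar volume, one obtains
\begin{equation}
DR_i = \rho_{\rm solv} \, \frac{M_{i}}{\rho_{i}} \, \tau \, m_i = \frac{X_{i, \rm sat}}{1- X_{i, \rm sat}} \, \tau \, \frac{M_{i}/\rho_{i}}{M_{\rm solv}/\rho_{\rm solv}} = \frac{X_{i, \rm sat}}{1- X_{i, \rm sat}} \, \tau \, \frac{V_{{\rm m},i}^S }{V_{\rm m, solv}^L}.
\end{equation}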
\subsection{Comparison between Titan and the Earth over terrestrial timescales} \label{sec:DR_Earth_time}
In order to compare the behavior of Titan's solids in Titan's liquids with that of minerals in liquid water on Earth,
we computed Titan's and Earth's denudation rates over Earth timescales by varying the precipitation rate between
0 and 2 m/Eyr. This range of precipitation rates encompasses the expected range on Titan (up to 1.2 - 1.3 m/Eyr during the rainy season \citep{Schneider2012}). It also covers the precipitation range between
arid and tropical climates on Earth. This comparison is illustrated in Figures \ref{fig:DR1} (for hydrocarbons and
carbon dioxide ice) and \ref{fig:DR2} (for nitriles and water ice), for which we
fixed Titan's temperature to 91.5 K (the surface temperature during the rainy season according to
\citet{Schneider2012}) and Earth's temperature to 298.15 K (with a partial pressure of carbon dioxide equal to 0.33 matm).
All organic compounds except C$_{11}$H$_{11}$N would behave like common mineral salts (halite, gypsum or anhydrite) according to the IST, which means that they would experience
dissolution rates in the range of a few hundred $\mu$m to a few cm over one Eyr. According to the RST and assuming that the liquid
is methane, C$_2$H$_{4}$ would behave like halite (cm-scale dissolution), C$_4$H$_{10}$ and C$_4$H$_{4}$
like gypsum (hundred $\mu$m to mm-scale dissolution), C$_2$H$_{2}$, C$_3$H$_{4}$, C$_4$H$_{6}$ and CO$_{2}$
like carbonates (up to hundred $\mu$m-scale dissolution). C$_6$H$_6$ and nitriles would be less soluble in methane than
calcite is in pure liquid water, with the least soluble nitriles being C$_{11}$H$_{11}$N, HCN and CH$_3$CN. As expected,
water ice is completely insoluble according to the RST, developed for non-polar molecules.
The denudation rates of all organic compounds are higher in ethane and propane than in methane.
Nitriles and C$_6$H$_6$, which are poorly soluble in methane, are quite soluble in ethane and propane
(except C$_{11}$H$_{11}$N, HCN and CH$_3$CN still under the calcite level)
and reach denudation rates similar to those of carbonates or even gypsum.
C$_2$H$_2$ and C$_3$H$_4$ would behave like carbonates (in ethane) or
gypsum (in propane). C$_{4}$H$_{4}$ and C$_{4}$H$_{6}$ behave like salts in those liquids and
are even more soluble than gypsum. C$_4$H$_{10}$ and C$_{2}$H$_{4}$ are halite-like materials,
extremely soluble in ethane and propane. Therefore, the likelihood
of developing dissolution landforms in a hydrocarbon dominated substrate, by analogy
with the Earth, is high.
\subsection{Present denudation rates on Titan over a Titan year} \label{sec:DR_Titan_time}
\subsubsection{Net precipitation rates on Titan}
Computing denudation rates over Titan timescales requires us to define the evolution
of the net precipitation over one Titan year (1 Tyr $\simeq 29.5$ Earth years).
Titan's climate is primarily defined by a rainy warm ``summer'' season and a cold dry ``winter'' season,
both spanning about 10 Earth years. Southern summers are shorter and more
intense than those in the north \citep{Aharonson2009}. Precipitation occurs
as sporadic and intense rainstorms during summer, when cloud formation is observed
\citep{Roe2002,Schaller2006tempete,Schaller2009,Rodriguez2009cloud,Rodriguez2011,Turtle2011rains,Turtle2011grl}.
A few Global Circulation Models (GCM) attempt to describe, at least qualitatively, the
methane cycle on Titan (e.g. \citet{Rannou2006}, \citet{Mitchell2008},
\citet{Tokano2009} and \citet{Schneider2012}). Usually,
net accumulation of rain is predicted at high latitudes
$>60^\circ$ during summer, in agreement with the presence of lakes, whereas
mid-to-low latitudes experience net evaporation, in agreement with the absence of
lakes and the presence of deserts. Quantitatively, the model predictions are
subject to debate since they depend on their physics
(e.g. cloud microphysics, size of the methane reservoir, radiative transfer scheme).
Still, they remain our best estimates about Titan's climate.
During the summer season, the model of \citet{Rannou2006} predicts net precipitation
rates of methane lower than 1 cm/Eyr at 70$^\circ$ latitude and up to 1 m/Eyr poleward of 70$^\circ$,
equivalent to a few $\mu$m and 2.7 mm per Earth day (Eday), respectively. The model of \citet{Mitchell2008} predicts
precipitation rates of about 2 mm/Eday in the polar regions
and along an ``Inter-Tropical Convergence Zone'' (ITCZ), nearly moving
``pole-to-pole''. The intermediate and moist models of \citet{Mitchell2009} predict precipitation
roughly varying between 2 and 4 mm/Eday at the poles and along the ITCZ. These rates are consistent with
those estimated in \citet{Mitchell2011} in order to reproduce the tropical storms seen in 2010
\citep{Turtle2011rains} and with those estimated by the model of \citet{Schneider2012}. The model of
\citet{Tokano2009} also predicts similar precipitation rates (800 to 1600 kg/m$^2$ in half a Titan year,
equivalent to precipitations between 3 and 6 mm/Eday).
Here, we use the methane net
precipitation rates extracted from the GCM of
\citet{Schneider2012} to compute the present-day denudation rate on Titan.
Figure \ref{fig:PE_Titan} represents the mean net precipitation rates
at various polar latitudes. High latitudes $> 80^\circ$
would be quite humid ($\tau = 7 - 8$ m/Tyr). Lower latitudes would be less humid
($\tau = 3 - 3.6$ m/Tyr at $70^\circ$, decreasing to $\tau = 0.4 - 1.6$ m/Tyr at $60^\circ$).
Southern low latitudes would be much drier than northern latitudes over a Titan year
as a result of sparser but more intense rainstorms.
\subsubsection{Case of pure compounds} \label{sec:DR_Titan_time_pure}
Figure \ref{fig:DRTyr} illustrates the denudation rates of a surface composed of pure
organic compounds exposed to methane rains at several southern and northern polar latitudes
according to the IST and RST hypotheses. Ethane and propane are not shown since
the model of \citet{Schneider2012} only considers methane, but the behavior
of Titan's solids in these liquids is already discussed in Section \ref{sec:DR_Earth_time}.
Over one Titan year, the denudation rates are the highest at high latitudes
and the lowest at low southern latitudes. According to the IST, all compounds would experience dissolution on the order
of a few mm to a few meters at almost all latitudes (salt-like material), except C$_{11}$H$_{11}$N, the denudation rate of which
would be a few hundred nm. The dissolution of C$_2$H$_8$N$_2$, CO$_2$, HCN and C$_6$H$_6$ at
60$^\circ$S however would be on the order of several hundred $\mu$m (carbonate-like material).
According to the RST, C$_2$H$_4$ and C$_4$H$_{10}$ denudation rates are between a few
mm up to a few meters per Titan year (salt-like materials). The dissolution
of C$_4$H$_4$, C$_4$H$_6$, C$_2$H$_2$, CO$_2$, C$_3$H$_4$ is $\mu$m to hundred $\mu$m-scale over a
Titan year (carbonate-like materials). The denudation rates of nitriles and C$_6$H$_6$ are
between $10^{-7} - 10^{-10}$ mm/Tyr (for HCN, CH$_3$CN, C$_{11}$H$_{11}$N) and a few $\mu$m/Tyr
(carbonate to siliceous-like materials).
At high latitudes $> 70^\circ$, we do not see much difference between the northern and the southern
denudation rates, as expected from the similarities in precipitation rate between the two poles shown in Figure \ref{fig:PE_Titan}. At low southern latitudes, net precipitation rates are too low over one Titan year
to allow a rapid and significant dissolution. Interestingly, Ontario Lacus and Sikun Labyrinthus, two
landforms compared with terrestrial karsto-evaporitic and karstic landforms \citep{Malaska2010,Cornet2012},
are observed at latitudes greater than 70$^\circ$S, and no other well-developed
dissolution-related landforms are seen at lower southern latitudes.
Therefore, if Titan's surface is composed of pure hydrocarbons, dissolution processes are likely to occur
but the formation of karst-like landscapes would be roughly 30 times slower on Titan than on Earth due to Titan's seasonality in precipitation.
Of course, this latter consideration depends on the actual composition of the surface, which is unlikely to be pure, and on the accuracy of the climate model used.
\subsubsection{Case of a mixed surface layer} \label{sec:DR_Titan_time_mixed}
We now assume the presence of a surface layer, the composition of which
is proportional to the accumulation rates at the surface ($h_i$) of solids coming from the atmosphere,
calculated in the same way as \citet{Malaska2011dissolution} did:
\begin{equation}
h_i = p_i \, V^S_{\rm m, i} / N_A, \qquad \textrm{[m/Eyr]}
\end{equation}
where $p_i$ is the production rate of molecules (in molecules/m$^2$/Eyr) listed in Table \ref{t:products}, $V^S_{\rm m, i}$ is the
molar volume of the solid (or subcooled liquid if the former is not known, in m$^3$/mol) and $N_A$ is the Avogadro
number ($\simeq 6.022\times10^{23}$ mol$^{-1}$). The composition of the mixed organic layer
is then determined as percentages ($f_i$) of each organic compound in the layer (so that $f_i = h_i / \sum_i h_i$).
This method allows us to consider the volume occupied by each molecule, which is important especially for tholins
because these contribute up to 20 \% on average to the total production rates of molecules, but they build up to $\simeq 50$ \%
of the total thickness of the surface deposits due to their higher molar volumes compared to those of simple hydrocarbons or nitriles.
We consider 3 cases for the surface composition: without tholins and with C$_2$H$_8$N$_2$ or C$_{11}$H$_{11}$N as
tholins. The tholin production rate is that given by \citet{Cabane1992}. Methane precipitation rates are those of \citet{Schneider2012}.
The denudation rates for these mixed organic layers ($DR_{mix}$) are computed in a linear mixing model scheme where:
\begin{equation}
DR_{mix} = \sum_i f_i \, DR_i. \qquad \textrm{[m/Eyr]}
\end{equation}
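As a purely illustrative example of this linear mixing, a layer containing a 10 \% fraction of a highly soluble compound with $DR_i = 1$ mm/Tyr and a 90 \% fraction of compounds with $DR_i \simeq 1$ $\mu$m/Tyr would yield $DR_{mix} \simeq 0.1$ mm/Tyr: a small fraction of soluble material is sufficient to control the bulk denudation of the layer.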
Figure \ref{fig:DR_photochemical} gives the repartition of denudation as a function of latitude and photochemical models
at the end of a Titan year. According to the IST, the denudation rate of all mixed layers would
be on the order of a few cm to a few meters over a Titan year (salt-like layers). According to the RST,
the organic layer originating from the \citet{Lavvas2008b} model would be the most soluble (dissolution rates of a
few cm to a few dm per Titan year, salt-like layer). The organic layers originating from the other models would be
more carbonate-like layers over a Titan year
(dissolution rates of a few tens of $\mu$m to a few mm per Titan year), whether tholins are included or not.
The lowest solubility of all mixed layers is
reached using a \citet{Krasnopolsky2009}-type composition. Over a Titan year, the likelihood
of developing dissolution-related landforms is therefore non-negligible, even if the surface is not composed of
pure soluble simple solids. These mixed organic layers would behave like carbonate or salty terrestrial layers over a
Titan year.
\section{Discussion: How old are Titan's karstic landscapes?} \label{sec:ages_Titan}
Despite the strong assumptions of the method described in Section \ref{sec:earth_ages} to infer timescales of
formation of terrestrial karstic landforms (we consider only chemical erosion at equilibrium without significant
climate changes over the past few MEyrs), the resulting ages are consistent with ages determined by relative or
absolute chronology or constitute upper limits. Therefore, the determination of the age of the lacustrine depressions
on Titan probably results in maximum timescales of development. Titan's climate is believed to have
remained quite stable over the recent past, with a small periodic insolation variation of $\pm 2$ W/m$^2$ at the poles
during the last MEyr, for the current low insolation at the North pole \citep{Aharonson2009}. This probably brings some
stability to the calculations.
By applying our simple model, we compute the timescales needed to form a 100 m-deep depression by
dissolution of a superficial mixed organic layer under the current climate conditions evaluated by \citet{Schneider2012}.
These are shown in Table \ref{t:Titan_ages} and Figure \ref{fig:T100m}. We compute our formation timescales
using both the IST and the RST since the solubility of polar molecules is not well constrained by the RST
and could get closer to that computed using the IST. However, we consider timescales evaluated using the RST
as our references since they present the most conservative values for the age of the lacustrine depressions.
Independent of the thermodynamic theory considered,
Titan's lacustrine depressions would be young. Among all the compositions tested, the time needed to carve
a 100 m-deep depression by dissolution under current climate conditions at latitudes poleward of 70$^\circ$ would be
between a few kEyrs (IST) and 56 MEyrs (RST).
At 60$^\circ$N, a 100 m-deep depression would be created in 7.7 kEyrs (IST) to 104.6 MEyrs (RST)
while the same depression would be created in 27.6 kEyrs (IST) to 375.1 MEyrs (RST) at 60$^\circ$S.
This strong difference between the two hemispheres could explain why Titan's south polar
regions are relatively devoid of well-developed lacustrine depressions compared to the north.
It should be noted that the hypothesized timescale difference
between the northern and the southern low latitudes results from an extrapolation
of Titan's current climate to the past. \citet{Aharonson2009} showed that
Croll-Milankovitch-like cycles with periods of 45 (and 270) kEyrs could exist on Titan, resulting in
a N-S reversal in insolation and likely subsequent climate conditions.
Over geological timescales, the N-S differences in denudation rates and timescales
estimated at these latitudes could be smoothed by these cycles. However,
as noted earlier, we made the same extrapolation for the Earth, whose climate
dramatically changed over time, without obtaining unreasonable formation
timescales.
In any case, all these timescales are consistent with the youth of Titan's surface as determined from:
\begin{enumerate}
\item crater counting ($0.3 - 1.2$ GEyrs, \citet{Neish2012}),
\item dune sediment inventory ($50 - 730$ MEyrs, \citet{Sotin2012} and \citet{Rodriguez2014}),
\item the flattening of the poles due to the substitution of methane by ethane in clathrates (500 MEyrs if restricted
to the poles or $0.3 - 1.7$ GEyrs if not, \citet{Choukroun2012})
\item the possible methane outgassing event ($1.7 - 2.7$ GEyrs, \citet{Tobie2006}).
\end{enumerate}
In summary, the morphology of Titan's lacustrine depressions suggests that dissolution occurs on Titan.
The denudation rates of pure organic compounds and a mixed organic layer as compared to those of soluble minerals
on Earth also supports this hypothesis. The timescales needed to
dissolve various amounts of material as compared to the timescales of development of karstic
landforms on Earth are also quite consistent in the sense that karstic landscapes are usually relatively young landscapes.
Finally, the latitudinal repartition of denudation rates and timescales of dissolution is consistent with the latitudinal
repartition of the possible dissolution-related landforms at the surface of
Titan. The surface dissolution scenario for the origin of Titan's lakes
appears very likely and Titan's lakes could be among the youngest features of the moon.
\section{Conclusion}
Titan's lakes result from the filling of topographic depressions by surface or subsurface liquids.
Their morphology led to analogies with terrestrial landforms of various
origins (volcanic, thermokarstic, karstic, evaporitic or karsto-evaporitic). The karstic/karsto-evaporitic
dissolution scenario seems to be the most relevant, given the nature of surface materials on Titan and its climate.
We constrained the timescales needed for the formation of Titan's depressions by dissolution,
on the basis of the current knowledge on the development of terrestrial karsts.
We computed solutional denudation rates from the theory
developed by \citet{White1984}. This simple theory needs three parameters: the solubilities
and the densities of solids and liquids at a given temperature and a climatic parameter linked
to the net precipitation rates onto the surface. We computed the solubilities of terrestrial minerals
in liquid water at 25$^\circ$C and tested the model by computing the denudation rates and timescales of formation
of several terrestrial examples of karstic landforms.
We then applied the same model to Titan. We computed the solubilities of Titan's
surface organic compounds in pure liquid methane, ethane and propane at 91.5 K
using different thermodynamic theories. We evaluated the molar volumes of
liquid and solid Titan's surface compounds at 91.5 K
and we used the results of the recent GCM of \citet{Schneider2012} as input for the precipitation rates
of methane on Titan, which allowed us to compute denudation rates at several latitudes.
Denudation rates have then been computed for pure organic compounds at Earth and Titan timescales
and have been compared to those determined for soluble minerals on Earth.
We also computed denudation rates for three different compositions of the surface organic layer.
Over one Titan year, these mixed layers of organic compounds behave like terrestrial salts or carbonates, which indicates
their high susceptibility to dissolution, though these processes would be 30 times slower on Titan than on Earth due to
the seasonality (rainfall occurs only during Titan's summer).
We computed theoretical timescales for the formation of 100 m-deep depressions in mixed organic layers under present
climatic conditions. As with dissolution landforms on Earth, Titan's depressions
would be young. At high polar latitudes, we found that the timescales of development
for depressions are relatively short (on the order of 50 MEyrs at maximum to carve 100 m) and consistent with the young age
of Titan's surface. These timescales are consistent with the existence of numerous lacustrine depressions and
dissected landscapes at these latitudes. At southern low latitudes, the computed timescales are as long as 375 MEyrs
due to the low precipitation rates. This low propensity to develop depressions by dissolution is consistent with their relative absence or poor development at low latitudes. Over geological timescales greater than those of
Titan's Croll-Milankovitch cycles (45 and 270 kEyrs), this difference would probably be strongly attenuated.
However, climate model predictions are not presently available over geologic timescales, and the present-day seasonal climate
variations are the best that can be currently constrained.
The results of these simple calculations are consistent with the hypothesis that Titan's depressions most likely originate
from surface dissolution. Theoretical timescales for the formation of these landforms are consistent with the other age estimates of Titan's surface.
Future work could include the effects of rain in equilibrium with the nitrogen, ethane and propane atmospheric gases
in the raindrop composition (e.g. \citet{Graves2008} or \citet{Glein2013}). Experimental
constraints on the solubility of gases and solids in liquids thanks to recent technical developments for Titan
experiments \citep{LuspayKuti2012grl,LuspayKuti2014,Malaska2014,Chevrier2014,Leitner2014,Singh2014} would also be of extreme
importance for such work.
Finally, the influence of other landshaping mechanisms such as collapse or subsurface fluid flows,
which play a significant role in the development of some karstic landforms on Earth, could also be implemented.
\section*{Acknowledgements}
The Cassini/RADAR SAR imaging datasets are provided through the NASA Planetary Data System Imaging Node portal
($http://pds-imaging.jpl.nasa.gov/volumes/radar.html$). Terrestrial annual evapotranspiration
data taken from the Numerical Terradynamic Simulation Group (NTSG) database ($http://ntsg.umt.edu/project/mod16$)
and precipitation data taken from the WorldClim database ($http://www.worldclim.org/current$) have been used in this study.
The authors want to thank Fran\c cois Raulin and S\'ebastien Rodriguez for helpful discussions, Axel Lef\`evre and
Manuel Giraud for their contribution in the Cassini SAR data processing, and Tim Rawle for the careful proofreading of the
manuscript. The authors would like to acknowledge two anonymous reviewers for their work on the preliminary version of the
manuscript, as well as the editor and associate editor for useful comments. TC is funded by the ESA Postdoctoral Research
Fellowship Programme in Space Science.
\section{INTRODUCTION}
Magnetoelectric multiferroics, insulating magnets exhibiting simultaneous
magnetic and ferroelectric order, are widely investigated in large part
due to their potential applications for developing novel devices, including
magnetic sensors and multistate memory, among others [\onlinecite{ABHGL,saf2006}].
The cross-control
of these distinct order parameters, such as adjusting the magnetization
using an applied electric field or vice-versa, is expected to provide an
extra degree of freedom in developing new types of spin-charge coupled
devices, such as voltage switchable magnetic memories [\onlinecite{Eeren2007}].
Additionally,
there are a number of fundamental materials questions surrounding the
development of multiferroic order. Magnetic and ferroelectric order are
generally contraindicated in the same phase, as ferromagnetism in
transition metal systems typically requires partially filled d-orbitals
while ferroelectric distortions are promoted in a d$^0$ electronic
configuration[\onlinecite{Hill2000}]. Despite this apparent restriction,
a rather large number of
single phase systems have been identified as magnetoelectric
multiferroics [\onlinecite{TK2005,Hur2004,Lawes2005}]. A number of
microscopic mechanisms
have been proposed for the development of multiferroic order, including
a magnetic Jahn-Teller distortion[\onlinecite{JT}] for
TbMn$_2$O$_5$,[\onlinecite{LCC2004}] bond and site ordering having
distinct centers of inversion symmetry,[\onlinecite{Efremov2004}]
a microscopic mechanism leading to a spin-current
interaction[\onlinecite{Katsura2005}],
the Dzyaloshinskii-Moriya interaction,[\onlinecite{SergienkoPRB}]
a general anisotropic exchange striction,[\onlinecite{HYAE}]
a spin-phonon interaction,[\onlinecite{Tackett}] and
a strain induced ferroelectricity.[\onlinecite{RABE3}]
Phenomenologically, magnetically-induced ferroelectric order developing
in systems having multiple magnetic phases can be understood by considering
a trilinear term in the magnetoelectric free energy, F$_{ME}$, coupling
the electric polarization with two distinct order parameters
$\sigma_1$ and $\sigma_2$ which together break inversion symmetry so that
$F_{ME}\propto P\sigma_1\sigma_2$[\onlinecite{ABHGL,Lawes2005,MKPRL,
ABHJAP,ABHPRB}]. Since the free energy must
transform as a scalar, there are strong symmetry restrictions on the
allowed representations for $\sigma_1$ and $\sigma_2$; in particular,
the product $\sigma_1({\bf q})\sigma_2({\bf q})^*$ must be antisymmetric
under spatial inversion. This trilinear coupling also predicts
electric field control of a magnetic order
parameter[\onlinecite{ABHGL,Kharel2009}]. A general discussion of
the symmetry of the magnetoelectric coupling in multiferroics is considered
for the specific case of FeVO$_4$\, in the following section. Investigations on
multiferroic Ni$_3$V$_2$O$_8$\, thin films, in which such a trilinear coupling is
believed to be responsible for the multiferroic order[\onlinecite{ABHGL,Lawes2005}], have established
that the multiferroic transition temperature can be varied through the
application of either (or both) magnetic and electric
fields [\onlinecite{Kharel2009}], confirming the strong coupling
between magnetic and dielectric degrees of freedom. Higher order magnetoelectric
coupling terms quadratic in both the magnetic and ferroelectric order parameters will
give rise to a magnetization-induced shift in the dielectric
response[\onlinecite{RABE1}].
Such coupling has been investigated both theoretically and experimentally
in a range of materials including Mn$_3$O$_4$[\onlinecite{Tackett}],
CoCr$_2$O$_4$[\onlinecite{LawesMelot}],
BaMnF$_4$[\onlinecite{Samara1976,Fox1977,Fox1979,Scott1977}], and
SeCuO$_3$ and TeCuO$_3$[\onlinecite{Lawes2003}]. Because this
magnetodielectric coupling is also expected to depend strongly on the
symmetry of the magnetic phase, it has been suggested that changes in
this coupling may be used to probe changes in the ordered
spin structure[\onlinecite{Tackett}].
Triclinic iron vanadate, FeVO$_4$, has recently been identified as a multiferroic
system having the P$\overline{1}$ space group [\onlinecite{Dixit2009,Daoud2009,Kundys2009}].
Magnetic, thermodynamic, and neutron diffraction studies on FeVO$_4$\,
single crystal and ceramic samples have shown that FeVO$_4$\, transitions from a paramagnetic
phase into a collinear incommensurate (CI) phase at
$T_{N1}$=22 K and then into a non-collinear incommensurate (NCI) phase at
$T_{N2}$=15 K.[\onlinecite{Dixit2009,Daoud2009,Kundys2009,He2008}]
Ferroelectric order in FeVO$_4$\,develops in this non-collinear
spiral magnetic phase. The onset of ferroelectric order with
the development of a second magnetic phase suggests that a symmetry-based
approach may be useful in exploring the multiferroic properties in this system.
We present a full Landau theory for this system, specifically considering the
allowed magnetoelectric coupling terms.
Our result is that a nonzero induced spontaneous polarization
${\vec P}$ requires having a magnetic spiral[\onlinecite{MOST}]
described by two order parameters which are out of phase with respect to
one another.[\onlinecite{Lawes2005,MKPRL,ABHGL,ABHPRB}]
In this low symmetry structure
there are no restrictions on the orientation of ${\vec P}$ based on symmetry
arguments, unlike the majority of similar magnetically-induced
multiferroics[\onlinecite{ABHGL,Lawes2005,MKPRL,MOST}].
This paper is organized as follows. In Sec. II we present a symmetry
analysis of FeVO$_4$ based on Landau theory. Here we analyze the
symmetry of the magnetoelectric interaction. In Sec. III we
present the results of a number of experiments designed to probe
the structure of these magnetoelectric interactions. In Sec. IV
we briefly summarize our results.
\section{Landau Theory}
Motivated by this general discussion of the possibility of magnetically
driven ferroelectric order in FeVO$_4$, we now present a Landau theory for
FeVO$_4$\, with some details of the construction relegated to
the Appendix. As discussed in detail in Ref. [\onlinecite{ABHPRB}],
the Fourier transform of the spin ordering is proportional to the
critical eigenvector of the
inverse susceptibility matrix at the ordering wave vector $\vec q$.
In the Appendix we analyze the constraint of
spatial inversion in
the P$\overline{1}$ space group of the paramagnetic phase,
with the following results. The FeVO$_4$\, structure consists of six
$S$=5/2 Fe$^{3+}$ spins in the unit cell at locations ${\mbox{\boldmath{$\tau$}}}_n$. For
$n=1,2,3$, $-{\mbox{\boldmath{$\tau$}}}_n=\overline {\mbox{\boldmath{$\tau$}}}_n=\tau_{n+3}$ and
$-{\mbox{\boldmath{$\tau$}}}_{n+3}=\overline {\mbox{\boldmath{$\tau$}}}_{n+3}={\mbox{\boldmath{$\tau$}}}_n$. Then
inversion symmetry (${\cal I}$) implies that spin Fourier transform obeys
\begin{eqnarray}
{\cal I} \vec S(\vec q, \tau) = \vec S (\vec q , \overline \tau)^* \,
\end{eqnarray}
where, as defined in the Appendix, $\vec S (\vec q, \tau)$ is
the spatial Fourier transform of the thermally averaged spin operator.
As explained in the Appendix, this
relation implies that the spin distribution is inversion-symmetric
about some origin. We find that
inversion symmetry implies that
\begin{eqnarray}
[S_x(1), S_y(1),S_z(1), S_x(2),S_y(2) \dots S_z(6)] &=&
\sigma_n [ x_1^*, y_1^*, z_1^*, x_2^*,y_2^*, ... z_6^*] \
\end{eqnarray}
where all the components are complex valued with
$x_{\overline n}=x_n^*$, $y_{\overline n}=y_n^*$, $z_{\overline n}=z_n^*$,
are normalized by $\sum_{n=1}^6 [|x_n|^2 + |y_n|^2 + |z_n|^2] =1$,
and the wave vector argument is implicit. The amplitude
$\sigma_n (\vec q)$ is the complex valued magnetic order parameter, which obeys
\begin{eqnarray}
{\cal I} \sigma_n(\vec q)=\sigma_n(\vec q)^*=\sigma_n(-\vec q)\ .
\label{ISYM} \end{eqnarray}
As noted in the Appendix, this relation implies that each $\sigma_n$
is inversion invariant about a lattice point (which depends on $n$)
where the order parameter wave has its origin.
As the temperature is lowered one passes from the paramagnetic phase
into a phase with an order parameter $\sigma_1(\vec q)$ and
then, at a lower temperature into a phase where two order
parameters $\sigma_1(\vec q)$ and $\sigma_2(\vec q)$ are nonzero,
both of which obey Eq. (\ref{ISYM}), but which have
different centers of inversion symmetry.
The total magnetoelectric free energy, $F_{\rm ME}$ can be written as
\begin{eqnarray}
F_{\rm ME} &=& F_{\rm M} + F_{\rm E} + V \ ,
\end{eqnarray}
where $F_{\rm M}$ is the purely magnetic free energy,
$F_{\rm E}$ is the dielectric potential which we approximate as
$F_{\rm E} = (1/2) \chi_E^{-1} {\bf P}^2$, where $\chi_E$ is
the dielectric susceptibility (whose crystalline anisotropy is neglected),
and to leading order in $\sigma_n$, the magnetoelectric coupling term is given by:
\begin{eqnarray}
V &=& \sum_{n,m=1}^2 \sum_\gamma
[a_{n,m,\gamma} \sigma_n(\vec q) \sigma_m(\vec q)^*
+ a_{n,m,\gamma}^* \sigma_n(\vec q)^* \sigma_m(\vec q)] P_\gamma \ ,
\end{eqnarray}
where $n$ and $m$ label order parameter modes and $\gamma$ labels the
Cartesian component of $\vec P$. Terms linear in $\sigma_n$ are prohibited
because they are not time reversal invariant and also can not conserve wave vector.
The magnetoelectric interaction $V$
has to be inversion invariant and the appendix shows that
the $a$ coefficients are pure imaginary, so that
\begin{eqnarray}
V &=& i \sum_\gamma r_\gamma [\sigma_1(\vec q) \sigma_2(\vec q)^* -
\sigma_1(\vec q)^* \sigma_2(\vec q)] P_\gamma =
2 \sum_\gamma r_\gamma |\sigma_1(\vec q) \sigma_2(\vec q)|
\sin(\phi_2 - \phi_1) P_\gamma \ ,
\end{eqnarray}
where $\sigma_n(\vec q) = |\sigma_n(\vec q)| \exp (i \phi_n)$.
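As an illustrative consequence of this form (a short worked step, using the
isotropic approximation for $F_{\rm E}$ quoted above), minimizing
$F_{\rm E} + V$ with respect to $\vec P$ gives the induced polarization
\begin{eqnarray}
P_\gamma &=& - 2 \chi_E \, r_\gamma \, |\sigma_1(\vec q) \sigma_2(\vec q)|
\sin (\phi_2 - \phi_1) \ , \nonumber
\end{eqnarray}
which vanishes unless both order parameters are nonzero and out of phase with
one another.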
There is no restriction on the direction of the spontaneous
polarization, so that all components of $\vec P$ will be nonzero.
However, if the magnetic structure is a spiral, then the arguments of
Mostovoy[\onlinecite{MOST}] might be used to predict the approximate
direction of $\vec P$. The result of Eq. (6)
is quite analogous
to that for Ni$_3$V$_2$O$_8$[\onlinecite{Lawes2005}]
or for TbMnO$_3$[\onlinecite{MKPRL}], in that it requires the presence
of two modes $\sigma_1 (\vec q)\equiv \exp(i \phi_1) |\sigma_1(\vec q)|$ and
$\sigma_2 (\vec q)\equiv \exp(i \phi_2) |\sigma_2(\vec q)|$ which are out
of phase with one another: $\phi_1 \not= \phi_2$.
Then the order parameter wavefunctions have different
origins and will therefore break inversion symmetry.
If, as stated in Ref. \onlinecite{Daoud2009}, the eigenvector is {\it not}
inversion invariant as implied by Eq. (\ref{ISYM}), then
one would conclude that the magnetic ordering transition is not continuous.
However, the most likely scenario is that the ordering transitions are
continuous and that the spin distribution for each $\sigma_n(\vec q)$ is inversion symmetric as
obtained in this derivation. The acentric distribution found in
Ref. \onlinecite{Daoud2009} differs only slightly from being inversion symmetric
for reasons that are obscure.[\onlinecite{PGRPC}]
\section{Magnetoelectric Interactions (Experimental)}
\subsection{Sample Synthesis and Structural Characterization}
Motivated by Eq. (6), which predicts that the magnetic structure
defined by $\sigma_1(\vec q)$ and $\sigma_2 (\vec q)$ is coupled to the
electric polarization $P$, we experimentally investigated the nature of the higher order
magnetoelectric coupling in FeVO$_4$. Bulk single phase polycrystalline iron
vanadate (FeVO$_4$) ceramic samples were prepared using standard solid state
reactions. Because Eq. 6 predicts that all components of the
polarization vector are nonzero, and previous measurements on
ceramic FeVO$_4$\, have found clear evidence for multiferroic
behaviour [\onlinecite{Dixit2009}], we focused our study on polycrystalline
samples. Stoichiometric quantities of iron oxide (Fe$_2$O$_3$) and vanadium
pentoxide (V$_2$O$_5$) were thoroughly mixed and ground
to produce a homogeneous mixture. This mixture was slowly heated to 600$^{\rm o}$C
for 4 hours in air. Intermediate grindings followed by thermal
annealing in air were repeated several times to complete the solid state
reaction and ensure a fully reacted and uniform composition. This homogeneous
solid solution was finally annealed in air at 800$^{\rm o}$C for 4 hours,
yielding a yellowish brown powder identified as a single phase iron vanadate
by X-ray diffraction and Raman spectroscopy.
In order to apply large electric fields to FeVO$_4$\, we also prepared thin
film samples. These were fabricated from a phase pure stoichiometric iron
vanadate target. The FeVO$_4$\,powder used for the sputtering target was
prepared by the method described above. Approximately 30 g of FeVO$_4$\,powder
was mixed with 15 mL of 2 mole percent polyvinyl alcohol as a binder. The
dried powder was pressed into a circular disc having a diameter of
approximately 50 mm with a thickness of roughly 3.5 mm followed by air
annealing at 600$^{\rm o}$C for 4 hours to burn off the residual organics.
A final thermal annealing was done at 800$^{\rm o}$C for 4 hours to
produce the dense pellet used for the sputtering target. FeVO$_4$\,films were
deposited at room temperature using RF magnetron sputtering onto conducting
silicon substrates. The working pressure was held at 1.5$\times$10$^{-2}$ torr,
with the atmosphere consisting of a mixture of approximately 1.5$\times$10$^{-3}$ torr
partial pressure of oxygen as the reactive gas and approximately 1.35$\times$10$^{-2}$
torr partial pressure of argon as the sputtering gas. These as
deposited films, prepared over a time of 4 hours, were amorphous. After
air annealing at 700$^{\rm o}$C for 4 hours the films were indexed as
single phase polycrystalline FeVO$_4$.
We investigated the structural, magnetic, and electronic properties of
these samples using a number of different techniques. We used a Rigaku
RU200 powder X-ray diffractometer and Horiba Triax Raman spectrometer to
study the crystalline structure of these samples. We used a Hitachi scanning
electron microscope (SEM) to investigate the surface morphology of the thin
film samples and an associated energy dispersive X-ray (EDX) assembly to
probe the chemical composition of both samples. We measured the temperature
dependent magnetization of the powder sample using a Quantum Design MPMS
SQUID magnetometer, although the very small magnetic anomalies associated
with the transitions could not be clearly distinguished from background in
the thin film samples. We conducted temperature and field dependent
dielectric and pyrocurrent measurements using the temperature and field
control provided by a Quantum Design PPMS system used in conjunction with
an Agilent 4284A LCR meter and a Keithley 6517 electrometer. These
measurements were done on a cold pressed pellet of bulk FeVO$_4$\,with the top
and bottom electrodes fashioned using silver epoxy and on the FeVO$_4$\,thin
films with room-temperature sputtered gold (Au) used as the top electrode and the Si
substrate serving as the bottom electrode.
\begin{figure
\centering\includegraphics[width=10.5cm]{Fig1.pdf}\\
\caption{(a) $\theta$-2$\theta$ x-ray diffraction (XRD) pattern of FeVO$_4$\, thin film,
(b) Surface scanning electron micrograph of FeVO$_4$\, thin film,
(c) Cross-sectional SEM image of FeVO$_4$\, thin film, and
(d) room temperature Raman spectrum of FeVO$_4$\, bulk powder and thin film samples.
The peak at 560 cm$^{-1}$ (indicated by an asterisk) arises from
the silicon substrate.}
\label{fig:Fig1}
\end{figure}
The structure of the ceramic FeVO$_4$\,sample was practically identical to that
previously presented for a bulk sample prepared using a different
technique [\onlinecite{Dixit2009}]. The X-ray diffraction (XRD) pattern for the FeVO$_4$\,thin
film is shown in Fig. 1(a). These diffraction peaks are consistent with
the expected XRD pattern for FeVO$_4$\,[JCPDS no. 38-1372]. The surface
morphology of the thin film sample is shown in Fig. 1(b). This SEM micrograph
indicates that the film consists of grains with various orientations, as
well as a number of pinhole defects. We calculated the thickness of these
thin films to be roughly 200 nm, using the cross-sectional SEM micrograph,
Fig. 1(c). This value is very consistent with estimates from well-defined
interference fringes observed in reflection spectra (not shown). EDX
analysis of both the bulk and thin film samples shows a 1:1 iron to vanadium
ratio. We carried out room temperature Raman vibrational spectroscopy to
further probe the microstructures of both bulk and thin films. The
identification of Raman active modes and their detailed temperature
dependent analysis of the bulk FeVO$_4$\,sample are discussed elsewhere [\onlinecite{Dixit2009}].
Here we plot the room temperature Raman spectrum of both bulk and thin
film FeVO$_4$\,in Fig. 1(d). We are able to identify in the thin films all of the
Raman active modes observed in bulk FeVO$_4$\,[\onlinecite{Dixit2009}], with a small shift
in the peak positions for the thin films.
silicon substrate is indicated by an asterisk.
\subsection{Temperature Dependent Dielectric Measurements on FeVO$_4$\, Ceramic}
\begin{figure
\centering\includegraphics[width=9.5cm]{Fig2.pdf}\\
\caption{(a) Zero field temperature dependent dielectric constant (left axis)
and resistivity (right axis) for ceramic FeVO$_4$\, sample,
(b) Temperature dependence of the dielectric constant near the
magnetic ordering temperature, and
(c) Temperature dependence of dielectric constant at $H$= 0, 20, 40, 60, and 80 kOe.
The dashed line in (b) is a guide to the eye.}
\label{fig:Fig2}
\end{figure}
The temperature dependent magnetization for the bulk FeVO$_4$\,sample (not
shown) was practically identical to that measured previously on a different
ceramic sample prepared using a different technique [\onlinecite{Dixit2009}]. In particular,
the magnetization showed the usual two anomalies associated with the two
incommensurate transitions in this system. We plot the zero-field
dielectric constant and resistivity for bulk FeVO$_4$\,over a broad range of
temperatures in Fig. 2(a). The dielectric constant exhibits a sharp peak
near $T_{N2}$, arising from the development of ferroelectric order in the
incommensurate spiral magnetic phase. As shown in Fig. 2(a), above 35 K,
the dielectric constant for FeVO$_4$\,shows a gradual decrease on cooling,
typical of many insulating materials [\onlinecite{Ramirez2000}].
Below roughly 30 K, the dielectric
constant increases smoothly with further cooling. Since the resistivity
of FeVO$_4$\,increases monotonically with decreasing temperature (except for
a small anomaly at $T_{N2}$), also shown in Fig. 2(a), we attribute this
increase in the dielectric constant to a quartic magnetoelectric
coupling, $V_4$.
Although FeVO$_4$\,does not order magnetically
until cooled below $T_{N1}$=22 K, heat capacity measurements suggest the
presence of short range spin correlations developing well above this
temperature [\onlinecite{Dixit2009}]. It has been suggested in a number of other systems,
including TeCuO$_3$ [\onlinecite{Lawes2003}] and
Mn$_3$O$_4$ [\onlinecite{Tackett}], that short-range magnetic
correlations can produce magnetodielectric corrections; we propose that the
same mechanism is responsible for the non-monotonic temperature dependence
of the dielectric constant of FeVO$_4$\,in the paramagnetic phase.
The fourth order magnetoelectric coupling contains terms quadratic in
$\vec P$ and $\sigma_n(\vec q)$. If $\sigma_1(\vec q)$ is the
order parameter that develops at $T_{N1}$, then this coupling is
probably dominated by
$\lambda|\vec P|^2|\sigma_1(\vec q)|^2$ at high temperatures.
The finite spin correlations developing
above $T_{N1}$ cause $\langle |\sigma_1(\vec q)|^2\rangle$
to be nonzero so that the coupling term is in effect
$a |\vec P|^2$,
where $a= \lambda \langle|\sigma_1(\vec q)|^2\rangle$.
This term
produces a shift in the dielectric constant in the
paramagnetic phase, as seen in Fig. 2(a) below approximately 30 K. Below
$T_{N1}$, approximately 22 K, when $\sigma_1(\vec q)$
acquires a finite expectation value,
a trilinear coupling term $a\vec P\langle\sigma_1(\vec q)\rangle\sigma_2(\vec q)^*$ is
allowed. This term will lead to mode mixing so that the critical mode
approaching $T_{N2}$ is not $\sigma_2(\vec q)$ but
$(\sigma_2(\vec q)+\rho\vec P)$
where $\rho$ is of order $a\langle \sigma_1(\vec q) \rangle$
[\onlinecite{ABHPRB}].
Then the
divergence in this variable as $T_{N2}$ is approached will
lead to a simultaneous divergence (with a very much
reduced amplitude) in the observed dielectric constant.
This mode mixing is therefore expected to lead
to a slight increase in the magnetodielectric shift below $T_{N1}$ [\onlinecite{ABHPRB}].
This is seen clearly in Fig. 2b, where the dashed line shows the extrapolation
of the magnetodielectric shift from $T>T_{N1}$ to $T<T_{N1}$, an
extrapolation that does not include any
corrections arising from the trilinear magnetoelectric term.
\subsection{Magnetic Field Dependent Dielectric Measurements on FeVO$_4$\, Ceramic}
To further investigate spin-charge coupling in FeVO$_4$, we plot the temperature
dependent dielectric constant measured at different magnetic fields in
Fig. 2(c). We find that the dielectric anomaly signaling the onset of
ferroelectric order shifts to lower temperatures with increasing magnetic
field, with the reduction in transition temperature reaching 0.7 K in a
magnetic field of $H$=80 kOe. This result is expected, as the ferroelectric
order producing the dielectric anomaly is associated with the incommensurate spiral
transition, which typically show a reduction in transition temperature in applied
magnetic fields.
\begin{figure
\centering\includegraphics[width=12cm]{Fig3.pdf}\\
\caption{(color online) Magnetic field dependence of the relative change in the
dielectric constant for ceramic FeVO$_4$\, at different temperatures with vertical offset included for clarity.}
\label{fig:Fig3}
\end{figure}
We conducted additional measurements of the dielectric response of the bulk
sample while sweeping the magnetic field at fixed temperature. These
results are shown in Fig. 3, plotted as $\Delta\epsilon(H)/ \epsilon(H=0)$
versus $H$ with data measured at different temperatures offset vertically for
clarity. At $T$=17 K, which is intermediate between $T_{N1}$ and $T_{N2}$,
there is a small negative magnetocapacitance, with the dielectric constant
being reduced by approximately 0.03\% in a field of $H$=80 kOe. As the
temperature approaches the multiferroic transition at $T_{N2}$, the
magnetodielectric coupling shows qualitative changes. By $T$=15 K the
magnetocapacitive shift is positive for small fields, with a shift in
dielectric constant on the order of 0.02\% at high magnetic fields.
The magnetocapacitive response is maximal near $T$=14.5 K, with the
dielectric constant being reduced by approximately 0.1\% in a field of
$H$=80 kOe. At still lower temperatures the magnitude of the magnetocapacitive
shift becomes smaller.
Perhaps the most dramatic feature in the isothermal magnetocapacitance curves
presented in Fig. 3 is the presence of clear maxima, which vary as a function
of temperature and magnetic field. These maxima appear first at small fields
at $T$=14.5 K, then shift to larger fields as the temperature is reduced.
We believe that these anomalies do not reflect the suppression of the
multiferroic transition temperature in a magnetic field, as discussed in the
context of Fig 2(b). These isothermal dielectric anomalies persist to
temperatures 2 K or 3 K below $T_{N2}$, while the maximum suppression of
$T_{N2}$ was only 0.7 K over the field range studied, as determined from
the measurements in Fig. 3. We propose that this dielectric anomaly may
indicate a spin-reorientation transition in FeVO$_4$. The magnetodielectric coupling
is expected to depend on the symmetry of the magnetically ordered state
[\onlinecite{Lawes2003,ABHPRB}],
so a field induced spin reorientation crossover could potentially produce
the low temperature dielectric anomalies observed in Fig. 3. Similar magnetic
field-induced dielectric anomalies have been observed in other materials
including Mn$_3$O$_4$ [\onlinecite{Tackett}], although the
specific mechanisms responsible
remain unclear. One possibility is that the external magnetic field serves
to reduce the slight geometrical frustration present
in FeVO$_4$\,[\onlinecite{Daoud2009}], allowing
a different spin structure to emerge.
Alternatively, the spin reorientation could be a spin-flop
transition as seen in TbMnO$_3$.[\onlinecite{Aliouane2006}]
We note, however, that FeVO$_4$\,remains
ferroelectric at high magnetic fields [\onlinecite{Dixit2009}],
so the modified spin structures
would still need to transform as defined by Eq. 3.
\subsection{Magnetoelectric Coupling in FeVO$_4$\, Thin Films}
\begin{figure
\centering\includegraphics[width=12cm]{Fig4.pdf}\\
\caption{(a) Temperature dependence of dielectric constant
for FeVO$_4$\, thin films at zero field, Inset: Zero magnetic field polarization
for FeVO$_4$\, thin film measured at poling fields E$_{pole}$ = $\pm$10 MV m$^{-1}$ and
(b) Temperature dependent dielectric constant measured at
E=0 and E=3.75 MV m$^{-1}$ (background was subtracted for clarity)}
\label{fig:Fig4}
\end{figure}
The trilinear magnetoelectric coupling that produces multiferroic order, given in Eq. 6,
also results in an electric field ($\vec E$) dependence of the magnetic structure
through the coupling term in the free energy $\Delta F = - \vec P \cdot \vec E$. We
first confirmed that these thin film samples were also multiferroic, through
measurements of the dielectric constant and pyrocurrent, illustrated in
Fig. 4(a). The dielectric constant for the thin film FeVO$_4$\,is slightly
higher than that found for the ceramic sample. We attribute this discrepancy
mainly to the uncertainty in accurately determining the geometrical factor
for these thin films. The dielectric response for these thin film samples
is approximately independent of measuring frequency and the loss for these
films is $\tan \delta \approx 0.01$, which may be due to the presence of
pinhole defects in the thin film sample as seen in the SEM micrograph in
Fig. 1(b). The zero-field temperature dependent dielectric constant,
measured at $f$=30 kHz, is plotted in Fig. 4(a). There is a sharp peak
near $T_{N2}$=15 K, associated with the development of ferroelectric order
in these thin film samples. We note that, unlike the measurements on
bulk FeVO$_4$\,shown in Fig. 2(a), the background dielectric constant for
FeVO$_4$\,decreases monotonically with decreasing temperature. This behaviour
can be associated with the much larger conductivity of the thin film sample,
arising from the presence of the pinhole defects, which obscures the low
temperature increase in dielectric constant observed in bulk FeVO$_4$\,(Fig. 2(a)).
We confirmed that the low temperature phase of the FeVO$_4$\,thin film is
ferroelectric by integrating the pyrocurrent after poling at positive and
negative fields to yield the spontaneous polarization. These results are
shown in the inset to Fig. 4(a) and indicate a spontaneous polarization of
6 $\mu$C/m$^2$, consistent with previous measurements on polycrystalline
bulk FeVO$_4$\,[\onlinecite{Dixit2009}]. Measurements of the dielectric response for FeVO$_4$\,thin
films under applied magnetic fields (not shown) yield a suppression of the
multiferroic transition temperature very similar to that observed in bulk
FeVO$_4$\,(see Fig. 2(b)).
To probe the electric field control of the multiferroic phase transition
temperature, expected from the nature of the magnetoelectric coupling,
we measured the temperature dependent dielectric response in the FeVO$_4$\,thin
film sample as a function of bias voltage. Focusing on thin film samples
allows the application of relatively large electric fields (on the order
of MV/m) with small applied bias voltages. We chose to probe the transition
through dielectric measurements as the magnetic anomaly at $T_{N2}$ cannot be
clearly discerned in these thin film samples. We plot the temperature
dependent dielectric constant measured at E=0 and E=3.75 MV/m in Fig. 4(b).
With the application of an electric field, the dielectric peak shifts
upwards in temperature, by approximately 0.25 K in a field of E=3.75 MV/m.
We note that any sample heating, which is expected to be negligible in any
case because of the low dissipation, would raise the sample temperature
relative to the thermometer temperature, leading to an apparent decrease
in transition temperature, rather than the increase seen in Fig. 4(b).
This increase in transition temperature is consistent with an external
electric field promoting the development of ferroelectric order, and is
similar to what has been observed previously in
multiferroic Ni$_3$V$_2$O$_8$\, films [\onlinecite{Kharel2009}].
The relatively small increase of the ferroelectric transition under such
large applied electric fields can be directly attributed to the very small
polarization in FeVO$_4$. We confirmed that the dielectric anomaly in Fig. 4(b)
can still be associated with the multiferroic transition, even in the
presence of an electric field, by measuring the response under the
simultaneous application of magnetic and electric fields (not shown).
Although the dielectric peak broadens considerably, the continuing presence
of a single peak under such crossed fields is strong evidence that this
anomaly reflects the multiferroic transition in FeVO$_4$.
\subsection{Magnetoelectric phase diagram for FeVO$_4$}
\begin{figure
\centering\includegraphics[width=15cm]{Fig5.pdf}\\
\caption{(a) Electric and magnetic field dependence of the multiferroic transition
temperature $T_{N2}$ and (b) Magnetic field dependence of
multiferroic transition temperature together with the proposed magnetic field
induced spin reorientation cross over. Here, NCI$^*$ indicates the
proposed phase having a spin reoriented structure}
\label{fig:Fig5}
\end{figure}
We summarize the results of these magnetoelectric and magnetodielectric
studies on FeVO$_4$\,in Figs 5(a) and 5(b). We plot the E-field and H-field
dependence of the multiferroic transition temperature in FeVO$_4$\,($T_{N2}$)
in Fig. 5(a), where CI and NCI represent the incommensurate magnetic
structures below $T_{N1}$ and $T_{N2}$ respectively. This transition
temperature is monotonically suppressed in an applied magnetic field,
decreasing by approximately 0.7 K in an applied field of $H$=80 kOe.
The
transition temperature, however, increases systematically with increasing
bias voltage, shifting upwards by 0.25 K in an electric field of roughly
4 MV/m. As discussed in Ref. [\onlinecite{Kharel2009}], the magnetic
field dependence of the CI-NCI phase
boundary is expected to follow $\Delta T_N \propto H^{1/2}$,
while the electric field dependence should be
$\Delta T_N\propto E^{1/(\beta+\gamma)}$.
This ability to control the transition temperature using either
magnetic or electric fields is a key feature for a number of proposed
applications for multiferroic materials. Although the size of the transition
temperature shifts in FeVO$_4$\,are likely too small to be of any practical
use, these results, taken in conjunction with previous studies on Ni$_3$V$_2$O$_8$\, thin
films, provide important evidence that this behaviour is generic among
multiferroic materials.
The magnetodielectric coupling in FeVO$_4$\,allows us to tentatively identify
the onset of short-range magnetic correlations, as indicated by the increase
in dielectric constant below $T$=30 K in Fig. 2(a), and also to propose the
onset of a spin reorientation crossover, based on the field dependent
dielectric anomalies in Fig. 3. Using the data from Fig. 3, we plot this
proposed spin reorientation cross-over boundary line in Fig. 5(b), together
with the magnetic field dependence of $T_{N2}$ (similar to that shown in
Fig. 5(a)). The high-field putative spin-reorientation structure is
labeled as NCI$^*$. As the two boundaries do not coincide, the dielectric
anomalies in Fig. 3 are not likely to be associated with the $T_{N1}$ to
$T_{N2}$ magnetic transition, but may potentially be attributed to a change
in magnetic structure. Magnetic field dependent specific heat measurements
(not shown) do not show any additional anomalies at this proposed cross-over,
suggesting there is a negligible change in entropy between the two
spin structures.
\section{Conclusions}
We have presented a model for the development of multiferroic order in FeVO$_4$\,
in the context of Landau theory, which we have used to develop
constraints on the possible magnetic structures based on
symmetry considerations alone. One of the
noteworthy predictions of this model is that the electric polarization
in the multiferroic phase is able to develop along any direction. To further
investigate the higher order magnetoelectric coupling in this system,
we have investigated the ferroelectric and dielectric response in FeVO$_4$\,to
applied magnetic and electric fields. The multiferroic phase transition
temperature can be tuned by applying electric or magnetic fields, in line
with the predicted trilinear magnetoelectric coupling. We find evidence for a
shift in dielectric constant well above the magnetic transition temperature
$T_{N1}$, which is expected to develop from a fourth order magnetoelectric
coupling term when short range spin correlations develop in the
paramagnetic phase. The dielectric constant shows a small, but distinct, increase
below the first magnetic order transition, which is consistent with the contribution from
a trilinear magnetoelectric coupling term. We find evidence for magnetic field induced
dielectric anomalies in the non-collinear incommensurate magnetic phase of
FeVO$_4$, which we attribute to a spin reorientation transition that does not
suppress the ferroelectric structure. These studies on FeVO$_4$\,demonstrate
the rich spin-charge coupling present in many multiferroic materials,
emphasize the importance of considering higher order expansions of
the magnetoelectric coupling to adequately explain the properties
of these materials, and
illustrate how dielectric spectroscopy can be a valuable tool for probing
the magnetic structures in such systems.
\section{Acknowledgments}
This work was supported by the NSF through DMR-0644823.
\begin{appendix}
\section{Landau Theory}
As discussed in detail in Ref. \onlinecite{ABHPRB} the Fourier transform of the
distribution just below a continuous magnetic ordering transition is
proportional to the critical eigenvector of the inverse susceptibility
matrix. (The critical eigenvector is the one whose eigenvalue first
approaches zero, i. e. which first becomes unstable, as the temperature
is lowered through the ordering transition.) We introduce the inverse
susceptibility as follows. The thermally averaged
spin at the site at position ${\mbox{\boldmath{$\tau$}}}$ in the unit cell at $\vec R$,
$\langle \vec S(\vec R , {\mbox{\boldmath{$\tau$}}}) \rangle$ is defined as
\begin{eqnarray}
\langle \vec S ( \vec R , {\mbox{\boldmath{$\tau$}}} ) \rangle &
\equiv & {\rm Tr} [ {\mbox{\boldmath{$\rho$}}} \vec S_{\rm op} (\vec R , {\mbox{\boldmath{$\tau$}}} ) ] \ ,
\end{eqnarray}
where $\vec S_{\rm op}(\vec R , {\mbox{\boldmath{$\tau$}}})$ is the quantum spin operator at
site $\vec R + {\mbox{\boldmath{$\tau$}}}$ and ${\mbox{\boldmath{$\rho$}}}$ is the density matrix:
\begin{eqnarray}
{\mbox{\boldmath{$\rho$}}} &=& \exp (- \beta {\cal H}) / [{\rm Tr} (\exp(-\beta {\cal H})]\ ,
\end{eqnarray}
where ${\cal H}$ is the Hamiltonian of the system. The Fourier
transform of the spin distribution is given by
\begin{eqnarray}
\vec S (\vec q, \tau ) &=& N^{-1} \sum_{\vec R} \langle \vec S (\vec R , \tau)
\rangle e^{i \vec q \cdot \vec r} \ ,
\end{eqnarray}
where $\vec r$ is the actual position $\vec R + \vec \tau$ of the spin and
$N$ is the total number of unit cells in the system.
Following Landau, we write the free energy, $F$ as an expansion in powers of
$\vec S({\vec q}, \tau )$ as
\begin{eqnarray}
F &=& \frac{1}{2} \sum_{\vec q , \alpha , \beta , \tau , \tau'}
F_{\alpha , \tau ; \beta, \tau' } (\vec q) S_\alpha ({\vec q}, \tau )^*
S_\beta ({\vec q}, \tau' ) + {\cal O} [ S({\vec q})^4] \ ,
\label{FEQ} \end{eqnarray}
where the matrix ${\bf F}$ is the Hermitian inverse susceptibility matrix.
Of course, we do not know or wish to consider the exact form of ${\cal H}$
and we do not attempt to construct the inverse
susceptibility from first principles. But we can analyze how symmetry
influences the structure of the inverse susceptibility. In what follows
we assume that the wave vector $\vec q$ at which ordering occurs
has been established experimentally and therefore we focus only on that
wave vector.
We now consider the case of FeVO$_4$, which has six spin sites within the
unit cell of the space group P$\overline 1$. The only point group symmetry
element is spatial inversion about the origin ${\cal I}$, so that the six
sites consist of three pairs of sites $\vec r_n$ and $\vec r_{n+3}=-\vec r_n$,
with $n=1,2,3$. Since the group of the wave vector contains only the
identity element, the standard analyses based on this group would indicate
that an allowed spin distribution function is a basis function of the
identity irrep and therefore that symmetry places no restriction on the
form of the spin distribution function.
However, since ${\cal I}$ is a symmetry of the system when all the
spins are zero, the free energy of the system for a configuration
with an arbitrary distribution of $\vec S_\alpha (\vec q , \tau)$ is
the same as that for a configuration obtained by inversion applied to
the distribution $\vec S_\alpha (\vec q, \tau)$. So we consider
the effect of inversion on $\vec S_\alpha (\vec q, \tau)$. The effect of
${\cal I}$ is to move a spin, without changing its orientation (because
spin is a pseudo vector), from an initial location $\vec r$, to a final
location $- \vec r$. This means that
\begin{eqnarray}
{\cal I} \langle \vec S({\vec R}, {\mbox{\boldmath{$\tau$}}}) \rangle =
\langle \vec S(-{\vec R}, \overline {\mbox{\boldmath{$\tau$}}}) \rangle \ ,
\end{eqnarray}
where, for $n=1,2,3$,
\begin{eqnarray}
\overline {\mbox{\boldmath{$\tau$}}}_n = - {\mbox{\boldmath{$\tau$}}}_n
= {\mbox{\boldmath{$\tau$}}}_{n+3} \equiv {\mbox{\boldmath{$\tau$}}}_{\overline n} \ , \hspace {0.5 in}
\overline {\mbox{\boldmath{$\tau$}}}_{n+3} = - {\mbox{\boldmath{$\tau$}}}_{n+3} = {\mbox{\boldmath{$\tau$}}}_{n}
\equiv \tau_{\overline{n+3}} \ .
\end{eqnarray}
It then follows that
\begin{eqnarray}
{\cal I} \vec S ( \vec q , \tau) &=& \vec S( \vec q , \overline \tau )^* \ .
\label{EQ5} \end{eqnarray}
Because we have 6 spins in the unit cell each having three Cartesian spin
components the matrix ${\bf F}$ is an 18 $\times$ 18 matrix which we write
in terms of 9 $\times$ 9 submatrices (for $n=1,2,3$ and $n=4,5,6$,
respectively) as
\begin{eqnarray}
{\bf F} &=& \left[ \begin{array} {c c} {\bf A} & {\bf B} \\
{\bf B}^\dagger & {\bf C} \\ \end{array} \right] \ .
\end{eqnarray}
Now we consider the invariance of the free energy under spatial inversion:
\begin{eqnarray}
F &=& \frac{1}{2} \sum_{\vec q , \alpha , \beta , \tau , \tau'}
F_{\alpha , \tau ; \beta, \tau' } (\vec q) S_\alpha ({\vec q}, \tau )^*
S_\beta ({\vec q}, \tau' ) + {\cal O} [ S({\vec q})^4] \nonumber \\
&=& \frac{1}{2} \sum_{\vec q , \alpha , \tau , \beta , \tau'}
F_{\alpha , \tau ; \beta, \tau' } (\vec q)
[ {\cal I} S_\alpha ({\vec q}, \tau )^*]
[ {\cal I} S_\beta ({\vec q}, \tau' )] + {\cal O} [ S({\vec q})^4] \nonumber \\
&=& \frac{1}{2} \sum_{\vec q , \alpha , \tau , \beta , \tau'}
F_{\alpha , \tau ; \beta, \tau' } (\vec q)
S_\alpha ({\vec q}, \overline \tau )
S_\beta ({\vec q}, \overline \tau' )^*
+ {\cal O} [ S({\vec q})^4] \nonumber \\ &=& \frac{1}{2}
\sum_{\vec q , \alpha , \tau , \beta , \tau'}
F_{\alpha , \tau ; \beta , \tau' } (\vec q)^*
S_\alpha ({\vec q}, \overline \tau )^*
S_\beta ({\vec q}, \overline \tau' )
+ {\cal O} [ S({\vec q})^4] \nonumber \\ &=& \frac{1}{2}
\sum_{\vec q , \alpha , \tau , \beta , \tau'}
F_{\alpha , \overline \tau ; \beta , \overline \tau' } (\vec q)^*
S_\alpha ({\vec q}, \tau )^* S_\beta ({\vec q}, \tau' )
+ {\cal O} [ S({\vec q})^4] \ .
\label{FEQ2} \end{eqnarray}
The next-to-last equality follows because the free energy is real. The
last equality is obtained by interchanging the roles of the dummy
variables $\tau$ and $\overline \tau$ and the roles of $\tau'$ and
$\overline \tau'$.
We now compare Eq. (\ref{FEQ}) and the last line of Eq. (\ref{FEQ2}).
Since these forms have to be equal irrespective of the values of
the $S$'s, we must have that
\begin{eqnarray}
F_{\alpha , \tau ; \beta , \tau'} ( \vec q) &=&
F_{\alpha , \overline \tau ; \beta , \overline \tau'} ( \vec q)^* \ .
\end{eqnarray}
This equality relates (for $1 \leq \tau , \tau' \leq 3$)
the submatrices ${\bf A}$ and ${\bf C}$ and (for
$1 \leq \tau \leq 3$ and $4 \leq \tau' \leq 6$)
${\bf B}$ and ${\bf B}^\dagger$. As a result we see that
${\bf B}^\dagger = {\bf B}^*$, so that ${\bf B}$ is
symmetric and ${\bf C}= {\bf A}^*$. Thus
\begin{eqnarray}
{\bf F} &=& \left[ \begin{array} {c c} {\bf A} & {\bf B} \\
{\bf B}^* & {\bf A}^* \\ \end{array} \right] \ .
\end{eqnarray}
Then the eigenvectors $[{\mbox{\boldmath{$\Psi$}}} , {\mbox{\boldmath{$\Phi$}}}]$, written in terms of the nine
component vectors $\Psi$ and $\Phi$ satisfy
\begin{eqnarray}
{\bf A} {\mbox{\boldmath{$\Psi$}}} + {\bf B} {\mbox{\boldmath{$\Phi$}}} &=& \lambda {\mbox{\boldmath{$\Psi$}}} \ , \hspace{0.5 in}
{\bf B}^* {\mbox{\boldmath{$\Psi$}}} + {\bf A}^* {\mbox{\boldmath{$\Phi$}}} = \lambda {\mbox{\boldmath{$\Phi$}}} \ .
\label{MATEQ} \end{eqnarray}
For instance
\begin{eqnarray}
{\mbox{\boldmath{$\Psi$}}} &=& [ S_x(\vec q , 1), \ S_y(\vec q , 1), \ S_z(\vec q , 1), \
S_x(\vec q , 2) ,\ S_y(\vec q , 2) ,\ S_z(\vec q , 2) ,\
S_x(\vec q , 3) ,\ S_y(\vec q , 3) ,\ S_z(\vec q , 3) ] \nonumber \\
{\mbox{\boldmath{$\Phi$}}} &=& [ S_x(\vec q , 4) ,\ S_y(\vec q , 4) ,\ S_z(\vec q , 4) ,\
S_x(\vec q , 5) ,\ S_y(\vec q , 5) ,\ S_z(\vec q , 5) ,\
S_x(\vec q , 6) ,\ S_y(\vec q , 6) ,\ S_z(\vec q , 6) ] \ .
\end{eqnarray}
The second equation of Eq. (\ref{MATEQ}) can be written as
\begin{eqnarray}
{\bf B} {\mbox{\boldmath{$\Psi$}}}^* + {\bf A} {\mbox{\boldmath{$\Phi$}}}^* &=& \lambda {\mbox{\boldmath{$\Phi$}}}^* \ .
\end{eqnarray}
So if $[ {\mbox{\boldmath{$\Psi$}}} , {\mbox{\boldmath{$\Phi$}}} ]$ is an eigenvector with eigenvalue $\lambda$, then
so is $\exp( i \rho) [{\mbox{\boldmath{$\Phi$}}}^*, {\mbox{\boldmath{$\Psi$}}}^*]$. In principle, these could be
two independent degenerate eigenvectors. But if one considers the
simple case when ${\bf A} = a {\bf E}$ and ${\bf B}= b{\bf E}$, where
$a$ and $b$ are scalars and ${\bf E}$ is the unit matrix, one sees that
these two solutions are, apart from a phase factor, the same. Only
for special values of the matrices are these two eigenvectors distinct
degenerate solutions. This is an example of an accidental degeneracy
whose existence we exclude. Therefore the condition that these two
solutions only differ by a phase factor leads to the result that
\begin{eqnarray}
{\mbox{\boldmath{$\Psi$}}} = e^{i \rho} {\mbox{\boldmath{$\Phi$}}}^* \ .
\end{eqnarray}
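This relation is also easy to check numerically. The following minimal sketch
(an illustration only, not part of the derivation; the variable names are ours)
constructs a random Hermitian matrix of the above block form, with ${\bf A}$
Hermitian and ${\bf B}$ symmetric, and verifies that every nondegenerate
eigenvector indeed satisfies ${\mbox{\boldmath{$\Psi$}}} = e^{i \rho} {\mbox{\boldmath{$\Phi$}}}^*$ with $\rho$ real:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = 9                                        # block size (9 for FeVO4)

A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
A = 0.5 * (A + A.conj().T)                   # A is Hermitian
B = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
B = 0.5 * (B + B.T)                          # B is (complex) symmetric

F = np.block([[A, B], [B.conj(), A.conj()]]) # Hermitian by construction

vals, vecs = np.linalg.eigh(F)
for k in range(2 * m):
    psi, phi = vecs[:m, k], vecs[m:, k]
    j = np.argmax(np.abs(phi))               # component used to fix the phase
    c = psi[j] / phi.conj()[j]
    assert np.allclose(psi, c * phi.conj(), atol=1e-9)
    assert np.isclose(abs(c), 1.0, atol=1e-9)
print("all eigenvectors have the form [exp(i rho) Phi*, Phi]")
\end{verbatim}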
Thus the $n$th eigenvector is
\begin{eqnarray}
{\mbox{\boldmath{$\Lambda$}}}_n \equiv [e^{i \rho_n} {\mbox{\boldmath{$\Phi$}}}_n^* , {\mbox{\boldmath{$\Phi$}}}_n ] =
e^{ i\rho_n /2} [ e^{i \rho_n/2} {\mbox{\boldmath{$\Phi$}}}^* , e^{-i \rho_n/2} {\mbox{\boldmath{$\Phi$}}} ] \ ,
\end{eqnarray}
which we write in canonical form as
\begin{eqnarray}
\Lambda_n & \equiv & \sigma_n(\vec q) [ {\mbox{\boldmath{$\Theta$}}}_n^* , {\mbox{\boldmath{$\Theta$}}}_n] \
\end{eqnarray}
where $\sigma_n(\vec q)\equiv |\sigma_n(\vec q)|\exp(i \phi_n)$ is a
complex-valued amplitude and ${\mbox{\boldmath{$\Theta$}}}_n$ is normalized:
\begin{eqnarray}
1 &=& \sum_{j=1}^9 |[{\mbox{\boldmath{$\Theta$}}}_n]_j|^2 \ .
\end{eqnarray}
Since the inverse susceptibility matrix is 18 dimensional, there are
18 eigenvectors, each of this canonical form. We identify
$\sigma_n(\vec q)$ as the order parameter which characterizes
order of the $n$th eigenvector. As the temperature is
lowered one such solution (which we label $n=1$) becomes critical
and at a lower temperature a second solution (which we label $n=2$)
becomes critical. As we shall see in a moment, the magnitudes of
the associated order parameters $\sigma_n({\bf q})$ and their relative
phase are fixed by the fourth order terms in the free energy which
we have so far not considered. Using Eq. (\ref{EQ5}) we see that
\begin{eqnarray}
{\cal I} {\mbox{\boldmath{$\Lambda$}}}_n &=& {\cal I} [\sigma_n(\vec q) {\mbox{\boldmath{$\Theta$}}}_n^*,
\sigma_n(\vec q) {\mbox{\boldmath{$\Theta$}}}_n] = [\sigma_n(\vec q)^* {\mbox{\boldmath{$\Theta$}}}_n^* ,
\sigma_n(\vec q)^* {\mbox{\boldmath{$\Theta$}}}_n ] = \sigma_n(\vec q)^* \Lambda_n \ ,
\label{INVEQ} \end{eqnarray}
which indicates that the order parameter transforms under inversion as
\begin{eqnarray}
{\cal I} \sigma_n(\vec q) &=& \sigma_n(\vec q)^* \ .
\label{IEQ} \end{eqnarray}
Also, under spatial translation, $T_{\vec R}$, we have that
\begin{eqnarray}
T_{\vec R} \sigma_n(\vec q) &=& e^{i \vec q \cdot \vec R} \sigma_n (\vec q)\ .
\label{TEQ} \end{eqnarray}
Note that Eq. (\ref{IEQ}) does {\it not} imply that the $n$th
eigenvector is invariant under inversion {\it about the origin}.
However, as we now show, it does imply that the $n$th eigenvector
is invariant about an origin which depends on the choice of phase
of the $n$th eigenvector. (It is obvious that a cosine wave is
only inversion invariant about one of its nodes which need not
occur at the origin.) If ${\cal I}_{\vec R}$ denotes inversion
about the lattice vector ${\vec R}$, then we have
\begin{eqnarray}
{\cal I}_{\vec R} \sigma_n({\vec q}) &=& T_{\vec R} {\cal I} T_{- \vec R} \sigma_n
(\vec q) = T_{\vec R} {\cal I} e^{-i \vec q \cdot \vec R} \sigma_n (\vec q) \nonumber \\
&=& T_{\vec R} e^{i \vec q \cdot \vec R} \sigma_n(\vec q)^* =
e^{2i \vec q \cdot \vec R} \sigma_n(\vec q)^* \ .
\end{eqnarray}
Let $\sigma_n(\vec q) = |\sigma_n(\vec q)| e^{i\chi}$.
Then if we choose $\vec R$ so that $\vec q \cdot \vec R = \chi$, then
\begin{eqnarray}
{\cal I}_{\bf R} \sigma_n({\vec q}) &=& \sigma_n(\vec q) \ .
\end{eqnarray}
So, Eq. (\ref{IEQ}) implies inversion symmetry about a point which can be
chosen to be arbitrarily close to a lattice point for an infinite system.
Thus the contribution to the free energy from these order parameters
$\sigma_n(\vec q)$ at wave vector $\vec q$ can be written as
\begin{eqnarray}
F &=& \sum_n \Biggl[ a_n (T-T_n) |\sigma_n(\vec q)|^2
+ b_n |\sigma_n(\vec q)|^4 + \dots \Biggr]
\nonumber \\ && \ + \sum_{n < m} c_{nm} |\sigma_n(\vec q) \sigma_m(\vec q)|^2
+ \sum_{n<m} \Biggl( d_{nm} [\sigma_n(\vec q) \sigma_m(\vec q)^*]^2
+ d_{nm}^* [\sigma_n(\vec q)^* \sigma_m(\vec q)]^2 \Biggr) \ ,
\label{FFEQ} \end{eqnarray}
where translational invariance indicates that for an incommensurate
wave vector the free energy is a function of $|\sigma_m|^2$, $|\sigma_n|^2$,
$\sigma_n \sigma_m^*$, and $\sigma_n^* \sigma_m$. In writing this
free energy we have assumed that the wave vectors of $\sigma_1$
and $\sigma_2$ are locked to be the same, as discussed in Ref.
\onlinecite{MKPRB}.
The generic situation in multiferroics is that as one lowers the temperature
an order parameter $\sigma_1$ first becomes nonzero and then, at a lower
temperature, a second order parameter $\sigma_2$ becomes nonzero. In
many cases, such as Ni$_3$V$_2$O$_8$[\onlinecite{Lawes2005}] or
TbMnO$_3$[\onlinecite{MKPRL}] $\sigma_1$ and $\sigma_2$ have different
nontrivial symmetry. Here all the order parameters have the symmetry
expressed by Eqs. (\ref{IEQ}) and (\ref{TEQ}). (The phase
$\phi_2$ of the second order parameter is fixed relative to that, $\phi_1$,
of the first order parameter by the term in $d_{12}$ in Eq. (\ref{FFEQ}).)
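To make this phase locking explicit (a short worked step using only the form of
Eq. (\ref{FFEQ})): writing $d_{12} = |d_{12}| e^{i \delta}$, the $d_{12}$ term of
Eq. (\ref{FFEQ}) becomes
\begin{eqnarray}
d_{12} [\sigma_1(\vec q) \sigma_2(\vec q)^*]^2
+ d_{12}^* [\sigma_1(\vec q)^* \sigma_2(\vec q)]^2 &=&
2 |d_{12}| \, |\sigma_1(\vec q)|^2 |\sigma_2(\vec q)|^2
\cos [ 2 (\phi_1 - \phi_2) + \delta ] \ , \nonumber
\end{eqnarray}
which is minimized when $2(\phi_1 - \phi_2) + \delta$ is an odd multiple of
$\pi$, thereby fixing $\phi_1 - \phi_2$ modulo $\pi$.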
Finally, we consider the magnetoelectric coupling, $V$, in the free energy
which is responsible for the appearance of ferroelectricity
(for which $\vec P \not= 0$, where $\vec P$ is the electric polarization).
We write
\begin{eqnarray}
F &=& F_{\rm M} + F_{\rm E} + V \ ,
\end{eqnarray}
where $F_{\rm M}$ is the purely magnetic free energy of Eq. (\ref{FFEQ}),
$F_{\rm E}$ is the dielectric potential which we approximate as
$F_{\rm E} = (1/2) \chi_E^{-1} {\bf P}^2$, where $\chi_E$ is
the dielectric susceptibility (whose crystalline anisotropy is neglected),
and to leading order in $\sigma_n$
\begin{eqnarray}
V &=& \sum_{n,m=1}^2 \sum_\gamma
[a_{n,m,\gamma} \sigma_n(\vec q) \sigma_m(\vec q)^*
+ a_{n,m,\gamma}^* \sigma_n(\vec q)^* \sigma_m(\vec q)] P_\gamma \ ,
\end{eqnarray}
where $n$ and $m$ label order parameter modes and $\gamma$ labels the
Cartesian component of $\vec P$. Terms linear in $\sigma_n$ are prohibited
because they can not conserve wave vector. Terms of order $\sigma^4$ or
higher can exist.[\onlinecite{ELBIO,BET,HKAE}] The interaction $V$
has to be inversion invariant. Since ${\cal I}\vec P = - \vec P$ and
${\cal I} |\sigma_n|^2 = |\sigma_n|^2$, we see that the terms
with $n=m$ are not inversion invariant and hence are not allowed. Thus
\begin{eqnarray}
V &=& \sum_\gamma [a_\gamma \sigma_1(\vec q) \sigma_2(\vec q)^*
+ a_\gamma^* \sigma_1(\vec q)^* \sigma_2(\vec q)] P_\gamma \ .
\end{eqnarray}
Using ${\cal I} P_\gamma = - P_\gamma$ and Eq. (\ref{IEQ}) we see that
inversion invariance implies that
$a_\gamma = i r_\gamma$, where $r_\gamma$ is real. Then
\begin{eqnarray}
V &=& i \sum_\gamma r_\gamma [\sigma_1(\vec q) \sigma_2(\vec q)^* -
\sigma_1(\vec q)^* \sigma_2(\vec q)] P_\gamma =
2 \sum_\gamma r_\gamma |\sigma_1(\vec q) \sigma_2(\vec q)|
\sin(\phi_2 - \phi_1) P_\gamma \ .
\label{EQRES} \end{eqnarray}
Note that there is no restriction on the direction of the spontaneous
polarization, so that all components of $\vec P$ will be nonzero.
However, if the magnetic structure is a spiral, then the arguments of
Mostovoy[\onlinecite{MOST}] can be used to predict the approximate
direction of $\vec P$. The result of Eq. (\ref{EQRES}) is quite analogous
to that for Ni$_3$V$_2$O$_8$[\onlinecite{Lawes2005}]
or for TbMnO$_3$[\onlinecite{MKPRL}], in that it requires the two modes
$\sigma_1 (\vec q)\equiv \exp(i \phi_1) |\sigma_1(\vec q)|$ and
$\sigma_2 (\vec q)\equiv \exp(i \phi_2) |\sigma_2(\vec q)|$ to be out of phase
with one another, in other words that $\phi_1 \not= \phi_2$.
If, as stated in Ref. \onlinecite{Daoud2009}, the eigenvector is {\it not}
inversion invariant as implied by Eq. (\ref{INVEQ}), then
one would conclude that the magnetic ordering transition is not continuous.
However, the differences between the diffraction patterns of the
structure of Ref. \onlinecite{Daoud2009} and that suggested here
are subtle enough[\onlinecite{PGRPC}] that our suggested structure
is likely the
correct one.
\end{appendix}
|
1,108,101,563,704 | arxiv | \section{Introduction}
Exciton migration and the subsequent charge migration are important and ubiquitous
in photo-energy conversion processes in materials.
A photoexcited exciton is a source of charge separation leading to a chemical driving force.
\cite{Sundstrom-review-solar-energy,accum-CS}
Fundamentally and microscopically, these processes are described by quantum dynamics.
Although a complete picture of charge separation has yet to be revealed,
experimental observations of time-dependent charge separation and
of its control have been reported on picosecond and nanosecond time scales.
\cite{Nakajima-JPCL2012-CS-obs-ps,image-CS-CR}
In order to understand the charge separation mechanism,
we need a time-dependent picture fine enough to resolve chemical reactions
on a sub-picosecond time scale.
Because a fully ab initio quantum dynamics description of charge separation
is computationally demanding,
investigations of the interesting features of
realistically sized molecular aggregate systems
accompanied by fluctuations have been limited.
To overcome this, we utilize a simplified tight-binding model
having sparsity in the molecular aggregate Hamiltonian matrix,
a reference occupation of the quantum sub-states inside the monomers,
and the Liouville--von Neumann equation.
The computational cost can be reduced by exploiting the sparsity of the
interaction network in molecular aggregates,
which is associated with the fast spatial decay of the electronic coupling expected in realistic material systems.
Based on this, we demonstrate an efficient scheme
for examining the time-dependent behavior of charge separation.
We investigate how static and dynamical disorder affect charge separation
after the birth of an exciton in a molecular aggregate system.
Here, static and dynamical disorder mean
initial disorder in the reference structure and
time-dependent deformation of the aggregate configuration, respectively.
Here we examine the quantum dynamics of excited electrons
without resorting to heavily coarse-grained methods
such as kinetic Monte Carlo schemes,
because such methods cannot reveal the microscopic origins
inherent in this kind of phenomenon, for example,
quantum mechanical interference, decoherence, and dynamical resonance.
These approaches are complementary: Monte Carlo type kinetic methods
have been successfully used for descriptions over a wide range
from intermediate to macroscopic scales.
\cite{Madigan-PRL2006,Sousa-JCP2018}
We focus here on shorter time scales and smaller systems than such a macroscopic view,
namely, an intermediate-scale description with respect to time and spatial size.
By introducing and using a new concise scheme,
we intend to propose an origin of excitation-driven charge migration
in moderately sized molecular aggregate systems.
In the next section, Sect. \ref{theory},
the theoretical method is explained with a focus
on the Hamiltonian sparsity for molecular aggregates.
This is followed by the numerical section, Sect. \ref{numerical},
which demonstrates the present scheme
using a fused lattice structure system consisting of
simplified monomers having local quantum sites.
There, we introduce a measure of charge diffusion and separation and
discuss the effects of structure fluctuation and static disorder.
In Sect. \ref{concl} we conclude this article with a future perspective
on the present scheme.
\section{Theoretical method}
\label{theory}
We briefly explain the general formulation of the developed calculation method,
which is aimed at a concise description and examination of exciton and charge migration.
Generalization with respect to the number of states and couplings is formally straightforward.
The sparsity of the Hamiltonian matrix is utilized to reduce the computational cost.
\subsection{Sparse form of Hamiltonian matrix
having a network structure in interactions of quantum states}
For a compact and economical calculation of quantum dynamics,
supported by the sparsity of the network associated with the Hamiltonian matrix,
instead of treating the full set of matrix elements
\begin{align}
\{ H_{\alpha \beta} \}_{ \alpha, \beta = 1 \sim N_{\rm{basis}} },
\label{H_orginal}
\end{align}
we employ the following data structure as the expression of this matrix,
\begin{align}
\{ \alpha_i, \beta_i, H_{\alpha_i \beta_i} \}_{ i = 1 \sim N_{\textrm{pair}} }.
\label{H_sparse}
\end{align}
Here $N_{\textrm{pair}}$ is the number of non-zero elements in the original matrix.
To be more precise, for a small threshold value $\eta$,
$N_{\textrm{pair}}$ is the number of elements in the set
$ \{ H_{\alpha \beta} ; | H_{\alpha \beta} | \ge \eta \} $. In practice,
$\eta$ can be set as a function of the spatial distance between the corresponding pair of subsystems.
Next, we introduce the block structure associated with the molecular aggregation
that we are interested in here.
The total basis number $N_{\textrm{basis}}$ equals
the sum of the numbers of quantum states of all subsystems,
$ N_{\textrm{basis}} = \sum_{a=1}^{N_{\textrm{g}}} M_I(a) $,
with $ M_I(a) $ being the number of internal quantum states in subsystem $a$
and $N_{\textrm{g}}$ being the number of group sites.
The total Hamiltonian matrix is divided into $N_{\rm{g}}^2$ sectors, and
the $(a,b)$-th block takes a matrix form of size $M_I(a) \times M_I(b)$,
where $a$ and $b$ in $(a,b)$ denote the row and column positions, respectively.
The cases $ a = b $ and $ a \neq b $ correspond respectively to
the local Hamiltonian of subsystem $a$ and the coupling matrix between $a$ and $b$.
In practice, the number of multiplications of matrix elements
with another matrix is effectively reduced,
at the expense of a small additional cost
for pointer skips in memory.
We refer to this representation as the compressed Hamiltonian matrix (CHM).
Finally in this subsection, to clarify the sectored structure,
we define a one-to-one mapping
between an index label of the original matrix and an index pair of the sectored matrix
as $\{ k \Leftrightarrow (a,\alpha_a) \} $
with
$ 1 \leq k \leq N_{\textrm{basis}} $,
$ 1 \leq a \leq N_{\textrm{g}} $
and $ 1 \leq \alpha_a \leq M_I(a) $.
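As a concrete illustration of this data layout, a minimal sketch of the
compressed Hamiltonian matrix and of the mapping $\{ k \Leftrightarrow (a,\alpha_a) \}$
may be written as follows (the function names and container types are an
implementation choice for illustration only, not part of the formulation):
\begin{verbatim}
import numpy as np

def build_chm(H_full, eta=1.0e-12):
    """Keep only elements with |H_ab| >= eta as triplets (alpha_i, beta_i, H)."""
    alphas, betas = np.nonzero(np.abs(H_full) >= eta)
    values = H_full[alphas, betas]
    return list(zip(alphas, betas, values))  # length of this list is N_pair

def index_maps(M_I):
    """One-to-one mapping k <-> (a, alpha_a); M_I is the list of M_I(a)."""
    k_to_pair, pair_to_k = [], {}
    for a, m in enumerate(M_I):
        for alpha in range(m):
            pair_to_k[(a, alpha)] = len(k_to_pair)
            k_to_pair.append((a, alpha))
    return k_to_pair, pair_to_k
\end{verbatim}
The length of the stored triplet list plays the role of $N_{\textrm{pair}}$ in
the propagation scheme of the next subsection.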
\subsection{Liouville--von Neumann equation
using multiplication of the compressed Hamiltonian matrix}
Using the CHM,
the time derivative of the density matrix $\rho_{\alpha\beta}$ in the LvN equation,
\begin{align}
\dfrac{d\rho_{\alpha\beta}}{dt}
=
- \dfrac{i}{\hbar}
\sum_{\gamma=1}^{N_{\textrm{basis}}}
\left\{
\rho_{\alpha\gamma} H_{\gamma\beta} - H_{\alpha\gamma} \rho_{\gamma\beta}
\right\},
\end{align}
can be cast into the following pseudo code:
\begin{align}
&\omega := 0 \notag \\
&do \quad i = 1, \,\, N_{\textrm{pair}} \notag \\
&\quad do \quad p = 1, \,\, N_{\textrm{basis}} \notag \\
&\quad\quad \omega_{p\beta_i} := \omega_{p\beta_i} -
\dfrac{i}{\hbar} \rho_{p\alpha_i} H_{\alpha_i\beta_i} \notag \\
&\quad enddo \notag \\
&\quad do \quad q = 1, \,\, N_{\textrm{basis}} \notag \\
&\quad\quad \omega_{\alpha_i q} := \omega_{\alpha_i q} -
\left( - \dfrac{i}{\hbar} H_{\alpha_i\beta_i} \rho_{\beta_i q} \right) \notag \\
&\quad enddo \notag \\
&enddo
\label{LvN-sparse}
\end{align}
with
$ \omega \equiv \dfrac{d}{dt} \rho $.
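A minimal sketch of this update in a NumPy-like form is given below
(illustrative only; the array and function names are ours, and the triplet list
is assumed to contain every non-zero element $H_{\alpha\beta}$, as in the
definition of $N_{\textrm{pair}}$):
\begin{verbatim}
import numpy as np

HBAR = 1.0  # atomic units

def lvn_derivative(rho, chm, n_basis):
    """d(rho)/dt assembled from the compressed Hamiltonian triplet list,
    following the pseudocode above."""
    omega = np.zeros((n_basis, n_basis), dtype=complex)
    for alpha, beta, h in chm:
        omega[:, beta] += -1j / HBAR * rho[:, alpha] * h   # rho H contribution
        omega[alpha, :] -= -1j / HBAR * h * rho[beta, :]   # H rho contribution
    return omega
\end{verbatim}
With $N_{\textrm{pair}}$ stored elements the cost of one evaluation scales as
$O(N_{\textrm{pair}} N_{\textrm{basis}})$, instead of the $O(N_{\textrm{basis}}^3)$
cost of a dense matrix-matrix multiplication.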
\subsection{Structure of model Hamiltonian employed here}
We briefly summarize the specific form of the Hamiltonian
employed here for a demonstration of the present method.
Each bare block sub-Hamiltonian sector for subsystem $a$ is
$ H^{aa;0}_{i_a j_a} = \delta_{i_a j_a} \epsilon^{a}_{i_a} $.
The coupling off-diagonal block sectors for different subsystems, $a$ and $b$, are given by
$
H_{i_a j_b}^{ab}(R_{ab}) =
s_{i_a j_b}^{ab} \exp\left( - R_{ab} / R^{\rm damp}_{i_a j_b} \right)
$.
Here, $R_{ab}$ denotes the distance between subsystems $a$ and $b$,
while $ R^{\rm damp}_{i_a j_b} $ are the parameters that
determine the damping strength with respect to $R_{ab}$.
The ranges of the indices are $ a, b = 1, 2, ..., N_{\textrm{g}} $ and $ i_a = 1, 2, ..., M_I(a) $.
To take into account the effect of structural dynamics in a simulation,
the diagonal elements of each block sub-Hamiltonian sector for subsystem $a$ were designed to change
as a function of the distances of the pairs of subsystems
in the form
$ H_{i_a i_a}^{aa} = H^{aa;0}_{i_a i_a}
+ \sum_{b \neq a} \dfrac{1}{4} k^\textrm{spring} (R_{ab}-R_{ab}^\textrm{ref})^{2} $.
In this equation we used 1/4 rather than 1/2 to avoid double counting,
so that the spring constant, $k^\textrm{spring}$, retains its physical meaning.
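For concreteness, a minimal sketch of the construction of this model Hamiltonian
is shown below (illustrative only; for brevity it assumes common $s$,
$R^{\rm damp}$ and $R^{\textrm{ref}}$ parameters for all subsystem pairs, whereas
the form given above allows pair-dependent values):
\begin{verbatim}
import numpy as np

def build_aggregate_hamiltonian(positions, eps, s, r_damp, k_spring, r_ref):
    """positions: (N_g, dim) monomer coordinates; eps: (M_I,) onsite energies;
    s, r_damp: (M_I, M_I) coupling prefactors and damping lengths."""
    positions = np.asarray(positions, dtype=float)
    eps = np.asarray(eps, dtype=float)
    n_g, m_i = len(positions), len(eps)
    H = np.zeros((n_g * m_i, n_g * m_i))
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    for a in range(n_g):
        block_a = slice(a * m_i, (a + 1) * m_i)
        # bare onsite energies plus the (1/4) k_spring (R_ab - R_ref)^2 shift
        shift = 0.25 * k_spring * np.sum((np.delete(dist[a], a) - r_ref) ** 2)
        H[block_a, block_a] = np.diag(eps + shift)
        for b in range(a + 1, n_g):
            block_b = slice(b * m_i, (b + 1) * m_i)
            coup = s * np.exp(-dist[a, b] / r_damp)   # exponentially damped coupling
            H[block_a, block_b] = coup
            H[block_b, block_a] = coup.T
    return H
\end{verbatim}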
\subsection{Time propagation}
For computational efficiency,
we use the Chebyshev expansion method for the time propagation operator
associated with LvN equation of motion.
The formalism of this time propagation method is summarized
in Appendix \ref{Cheby-LvN} for self-containednes,
of which theoretical details are given in the work by Guo {\it et al.}
\cite{Guo-JCP1999-Cheby-LvN}
The time increment for each time step used in this article was set to 4 a.u.,
which was sufficient for the convergence of the results (not shown here).
The number of time steps in the simulation is 500 and the total time is 2000 a.u.,
which corresponds to 48.3 fs.
\section{Numerical demonstration}
\label{numerical}
\subsection{Effective charge distance from the reference point}
Here we examine how a charge pattern emerges
after the birth of a local exciton at the center
of a generally disordered lattice system
consisting of molecular monomers.
This preparation of the initial excitation is intended
for examining the inherent dispersion features
of a quantum mechanical network.
For this purpose, we construct an evaluation formula
for the effective positions of the wave fronts of positive and negative charge,
represented by $u^+$ and $u^-$,
roughly measured from the position of the initial exciton.
\begin{itemize}
\item[1)]
Calculate the sums of positive and negative charge over the sites as \\
\begin{align}
q_\textrm{sum}^{\pm} = \sum_{g}^\textrm{site} h(\pm q_g) \, q_g \quad
\end{align}
Here $h(x)$ is the Heaviside step function, which takes the values 1 and 0 for $x > 0$ and $x \le 0$, respectively.
\item[2)]
Evaluate the weighted sums of positive and negative charge multiplied by
the distance from the reference position over the sites as \\
\begin{align}
u^{\pm} = \sum_{g}^\textrm{site} h(\pm q_g) \, q_g |{\bf r}_g-{\bf r}_\textrm{ref}| \quad
\end{align}
\item[3)]
Normalize $u^+$ and $u^-$ by $q_\textrm{sum}^+$ and $q_\textrm{sum}^-$,
and thereby determine the effective propagation fronts of positive and negative charge as
\begin{align}
d^{\pm} = u^{\pm} / q_\textrm{sum}^{\pm}
\end{align}
\end{itemize}
The charge on each site is evaluated as
the difference of quantum population in the corresponding site
from the reference occupancy.
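A minimal sketch of steps 1)--3) above is given below; the array names
($q$ for the site charges, $r$ for the site positions, $r_{\textrm{ref}}$ for the reference point)
are illustrative assumptions.
\begin{verbatim}
import numpy as np

def charge_fronts(q, r, r_ref):
    """Effective propagation fronts d^+ and d^- of positive and negative charge.

    q : (Ng,) site charges,  r : (Ng, 2) site positions,  r_ref : (2,) reference point.
    """
    dist = np.linalg.norm(r - r_ref, axis=1)
    pos, neg = q > 0.0, q < 0.0                   # Heaviside selection h(+q), h(-q)
    q_sum_p, q_sum_m = q[pos].sum(), q[neg].sum()
    u_p = np.sum(q[pos] * dist[pos])
    u_m = np.sum(q[neg] * dist[neg])
    d_p = u_p / q_sum_p if q_sum_p != 0.0 else 0.0
    d_m = u_m / q_sum_m if q_sum_m != 0.0 else 0.0
    return d_p, d_m
\end{verbatim}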
In the following numerical demonstration,
each site has two quantum states, a local ground and an excited state,
with $ M_I = 2 $ and a reference occupancy only in the ground state.
For each initial structure with fixed disorder parameter $f$,
we randomly prepared 30 sets of velocity vectors
and took averages and variances of $ d^{\pm} $.
The magnitude of each monomer velocity vector was
determined so as to match an equally divided amount
of kinetic energy according to the micro-canonical temperature, 0 or 300 K,
while the directions of the vectors were chosen randomly.
\subsection{Details}
First, we summarize the calculation conditions,
including the system parameters employed here.
For simplicity we set two internal quantum states for each monomer, namely,
$ M_I(a) = 2 $ with a reference occupancy in the lowest one,
a hypothetical setting corresponding to the use of the HOMO and LUMO of an arbitrary monomer $a$ in the system.
Below we write $ M_I(a) = M_I $.
Since a three-dimensional calculation is computationally demanding,
we treat a two-dimensional model, which is still significant as
a starting point for investigating electron dynamics on the surface of a molecular crystal.
The energies of these two states are commonly set to
$\epsilon^a_{1}=0.0$ and $\epsilon^a_{2}=0.3$ Hartree for all $a$;
they appear in each block matrix placed at a diagonal position
in the whole Hamiltonian matrix.
We selected this energy difference $\epsilon^a_{2}-\epsilon^a_{1}=0.3$
to be similar to the HOMO-LUMO gap of naphthalene, 0.2731 Hartree,
obtained by a CAM-B3LYP/6-31G(d) calculation at the optimized geometry.
We note that naphthalene serves as an electron donor molecule,
for example, in a naphthalene--tetracyanoethylene dimer.
Therefore, the employed test model of molecular aggregate
is expected to have a hole transfer property.
As a reference spatial configuration of the monomer positions,
we employed a square lattice in the $x$-$y$ plane,
whose numbers of rows and columns are denoted by $n_x$ and $n_y$ with $n_x=n_y$.
The total number of monomers is $ N_{\textrm{g}} = n_x n_y $.
In this article, we examine the two cases $n_x=n_y=25$ and $31$
with the fixed reference distance between nearest monomers $ D_{x} = D_{y} = 7.8$ Bohr.
Later we simply write $D_{x}=D_{y}$ as $D$.
Note that in the dynamics calculations the monomer distances change
according to the disorder introduced in the geometrical structure at the initial simulation time
and to the time-dependent deformation of the configuration,
which, as illustrated below, has a significant effect
on the charge migration dynamics after the birth of a local excitation.
The total number of basis elements is given by
$ N_{\textrm{basis}} = N_{\textrm{g}} M_I $.
We introduced disorder into the initial geometry of the monomer positions in the aggregate
by giving each monomer a random displacement $ ( \eta -0.5 ) f D $
in each Cartesian coordinate, $x$ and $y$,
relative to the reference positions of the lattice structure.
Here $ \eta $ is a random number in $[0,1]$, while
$f$ is a parameter that modulates the degree of disorder, which we call the disorder parameter.
In this article, we compare the results for $f$ = 0.0, 0.2 and 0.4.
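For concreteness, a minimal sketch of this initial-geometry preparation is given below
(the function and variable names are hypothetical).
\begin{verbatim}
import numpy as np

def disordered_lattice(nx, ny, D, f, rng=None):
    """Square lattice with random displacement (eta - 0.5) * f * D per coordinate."""
    rng = np.random.default_rng() if rng is None else rng
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pos = np.stack([ix.ravel() * D, iy.ravel() * D], axis=1).astype(float)
    eta = rng.random(pos.shape)                   # eta uniformly in [0, 1]
    return pos + (eta - 0.5) * f * D
\end{verbatim}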
The time increment in the dynamics calculation, common to the quantum and classical
degrees of freedom, is set to a practically small value of 4 a.u.
The total simulation time is $T_{\textrm{max}} = 2000$ a.u.
with the number of time steps $N_{\textrm{t}}$ being 500.
Next, we briefly explain the model functions used in the construction of the Hamiltonian matrix.
The elements of the damping matrix in the monomer interaction matrices $H_{i_a j_b}^{ab}(R_{ab})$
were set to
$R^{\textrm{damp}}_{i_a j_b }=1.2$ \AA \,( = 2.268 Bohr )
for all $i_a, j_b = 1 \sim M_I(a)$ and all $a$, $b$,
while the strength amplitude matrices in the interaction matrices
were set to
$s_{11}^{ab}=0.3$, $s_{12}^{ab}=s_{21}^{ab}=1.5$, and $s_{22}^{ab}=1.0$.
A threshold parameter for the monomer pair interaction is set to $r_{\textrm{pair}}$ = 10.8.
Monomer interactions in the total Hamiltonian matrix were taken into account
only for monomer pairs with distances less than $r_{\textrm{pair}}$,
which characterizes the sparsity of the Hamiltonian of the aggregate system.
We checked that this value of $r_{\textrm{pair}}$ is sufficiently large
for the convergence of the results with the
damping matrix $R^{\textrm{damp}}_{i_a j_b }$ and
strength matrix $s_{i_a j_b}^{ab}$ of the monomer interaction mentioned above.
The common monomer mass was varied as
$\mu = 5\times 10^4$, $1 \times 10^5$ and $2 \times 10^5$
in order to examine the kinematic effect on the charge migration dynamics.
The spring constant of the hypothetical harmonic potential for each monomer pair
was commonly set to $k^{\textrm{spring}}=0.002$ throughout the dynamics calculations.
In order to obtain fundamental information on the propensity of the charge migration dynamics,
we introduced a local exciton as a sudden perturbation
of the quantum state of the whole system at the initial simulation time.
In all the dynamics calculations,
the central monomer was initially locally excited from the lowest to the highest state
with half of the quantum population,
which corresponds to a hypothetical HOMO-LUMO excitation.
Each portion of the initial kinetic energy in all $2N_{\textrm{g}}$ classical degrees of freedom
of the molecular aggregate confined to the $x$-$y$ plane was set to
$\frac{1}{2}k_{\textrm{B}}T$, with the
micro-canonical temperature $T$ being 0 or 300 Kelvin.
Here $k_{\textrm{B}}$ denotes the Boltzmann constant.
The direction of the velocity vector of each monomer was given randomly.
In all the simulations,
we employed 30 sets of sample initial velocities for averaging the quantum properties.
On the present simulation time scale,
the effect of reflections of the exciton and the charge wave front
from the peripheral edge of the lattice is negligible.
Finally, we briefly show the efficiency of the present method using Eq.~(\ref{LvN-sparse}),
which exploits the sparsity of the Hamiltonian of the molecular aggregate.
Comparisons of the computational cost with that of the original Hamiltonian without use of sparsity
are summarized in Tab.~\ref{speedup} for the variation of the
system size $n_x=n_y$ and of $r_{\textrm{pair}}$.
In the case of $n_x=n_y=31$, which has the largest computational cost in this article,
we obtain about a 20-fold speed-up compared to the fully connected Hamiltonian.
\subsection{Results and discussions}
As a general tendency of the dynamics, independent of $\mu$, $f$ and $n_x=n_y$,
the time-dependent distance between the effective positive and negative charge fronts
is reduced by molecular motion,
as can be seen by comparing $T=0$ and 300
in panels (a), (d) and (g) of Figs.~\ref{Fig-2} and \ref{Fig-3},
despite slight exceptions after the half of the simulation time in
the case of $n_x=n_y=25$ with $T=300$.
In the case of $n_x=n_y=25$,
the effect of structural dynamics on the charge front propagation
critically depends on the disorder parameter $f$.
Larger disorder leads to a kinetic promotion of the charge front propagation,
as found in panels (b/c), (e/f) and (h/i) of Fig.~\ref{Fig-2}.
More precisely, as seen in the comparison between
the results for $(f,T)=(0.4,300)$ and $(0.4,0)$ presented in panel (b),
associated with the smallest mass $\mu=50000$,
kinetic motion promotes the propagation of the positive charge front.
On the other hand, as seen from the comparison of the cases
$(f,T)=(0.0,300)$ and $(0.0,0)$ in panel (c),
the final position of the positive charge front is reduced by the kinetic motion.
It is also the case for the propagation of the negative charge front
that structural dynamics has a larger effect on the results
for a higher disorder parameter $f$.
These trends are also seen for the different masses
presented in panels (e/f) and (h/i),
though the larger mass is accompanied by a weaker dependency
due to the weaker modification of the monomer couplings during the dynamics.
The regularity of the structure, characterized by the small disorder parameter $f=0$,
contributes to charge separation, as found in the time-dependent behavior
of the effective distances of positive and negative charge.
The kinetic fluctuation at $T=300$ significantly suppresses the charge separation
and moderately depresses the propagation of both charge fronts.
On the other hand, compared to the simulations without disorder,
in the case of no kinetic fluctuation, expressed by $T=0$,
the large disorder associated with $f=0.2$ and $0.4$ suppresses the propagation of the wave fronts
of positive and negative charge, and consequently the charge separation is depressed.
However, interestingly, in the high-disorder cases,
the kinetic fluctuation labeled with $T=300$ promotes the propagation of the wave fronts
while keeping the difference between them small, namely, a depressed charge separation.
These tendencies are common to the cases with different lattice sizes.
The effect of the lattice size appears in the case of the small mass $\mu=50000$.
The case of the larger system, $n_x=n_y=31$, with the same unit lattice distance
is characterized by a progression of the wave fronts of both charge signs.
This can be attributed to the coupling of the electrons with
the collective motion of the structure.
\onecolumngrid
\clearpage
\begin{figure}[th]
\includegraphics[width=1.0\textwidth]{fig-note-1.eps}
\caption{
Network structures of the aggregates for the $n_x$=$n_y$=25 (a,b,c) and 31 (d,e,f) cases.
The disorder parameter $f$ is 0.0, 0.2 and 0.4 for panels (a/d), (b/e) and (c/f), respectively.
The reference distance between nearest sites is $D_{x}=D_{y}=7.8$ Bohr.
$f$ serves as the ratio of the fluctuation to this reference length.
The rings at the centers of the panels denote the site with the initial excitation.}
\label{Fig-1}
\end{figure}
\clearpage
\begin{figure}[th]
\includegraphics[width=1.0\textwidth]{fig-note-2.eps}
\caption{
Effect of disorder, mass and structural dynamics on the charge migration dynamics.
The top (a,d,g), middle (b,e,h) and bottom (c,f,i) panels show the time-dependent behavior
of, respectively, the effective distance between opposite-sign charges, $(d^{+}-d^{-})$,
and the diffusion lengths of positive, $d^{+}$, and negative charge, $d^{-}$.
The masses employed in the left (a,b,c), middle (d,e,f) and right (g,h,i) columns
are $ \mu = 5\times{10}^4$, $1\times{10}^5$ and $2\times{10}^5$, respectively.
$n_x$=$n_y$=25.
}
\label{Fig-2}
\end{figure}
\clearpage
\begin{figure}[th]
\includegraphics[width=1.0\textwidth]{fig-note-3.eps}
\caption{
The counterpart of Fig.~\ref{Fig-2} for $n_x$=$n_y$=31.
As seen in the small-mass cases here and in comparison with the smaller cluster,
kinematic motion enhances the charge propagation.
}
\label{Fig-3}
\end{figure}
\clearpage
\section{Concluding remarks}
\label{concl}
In this article, we numerically clarified the effect of
structural disorder and dynamical fluctuation
on charge migration dynamics starting from the birth of a local exciton in
a quantum network of molecular aggregates.
For convenience of analysis, we utilized model Hamiltonians
with complicated interactions whose parameters were
determined by quantum chemical calculations.
Effects of static disorder and dynamical fluctuation on charge migration
were examined by varying disorder parameter, kinetic energy and effective mass of monomers.
A summary of the observations in the numerical part of this work is as follows:
(i) structural disorder in the aggregate reduces the rate of charge separation;
(ii) molecular motion can promote charge diffusion in the cases of smaller monomer mass.
We expect that this knowledge supports the future realization of
optimal conditions for targeted properties of electronic devices
with respect to charge separation leading to current in solar cells,
photocatalytic reactions involving non-local radical creation,
and spontaneous photoemission caused by the annihilation of an electron-hole pair.
In our future work, the ingredients of the Hamiltonian matrix of aggregates
will be obtained by using, for example, the group diabatic Fock scheme
\cite{gdf-eld}
combined with local projection of the active orbital space,
consideration of non-linearity with respect to the density matrix,
and spin-orbit interaction
\cite{rt-pgdf-eld}.
This provides a way for exploring the mechanisms
underlying the diffusion and migration dynamics
of excitons and charge density from a quantum dynamics viewpoint,
which supports the development of electronic functional materials.
\begin{acknowledgements}
This research was supported by MEXT, Japan,
``Next-Generation Supercomputer Project''
(the K computer project)
and ``Priority Issue on Post-K Computer''
(Development of new fundamental technologies
for high-efficiency energy creation, conversion/storage and use).
Some of the computations in the present study were performed
using the Research Center for Computational Science, Okazaki, Japan,
and also HOKUSAI system in RIKEN, Wako, Japan.
The author deeply appreciates Dr. Takahito Nakajima in RIKEN-CCS
for the financial support and research environment.
\end{acknowledgements}
\small
\begin{center}
{\bf{AVAILABILITY OF DATA}}
\end{center}
\normalsize
The data that support the findings of this study are available on request from the corresponding author.
\clearpage
\noindent
\section{Conclusions}\label{conclusions}
In this paper, we proposed dynamic registration, combining ego-motion estimation and moving object detection in a point cloud registration pipeline. 3D object detection is first used to detect objects, which are temporarily removed. The pipeline then iteratively estimates the ego-motion and adds static objects back until no further static objects are segmented. We evaluate the proposed method on the KITTI benchmark and compare it with different registration algorithms, which demonstrates its effectiveness. In the future, we will extend dynamic registration to a dynamic SLAM system.
\section{Experiments And Results}\label{experiments}
To better evaluate the performance of our proposed dynamic registration algorithm,
we adopt the KITTI Tracking datasets \cite{geiger2013vision} as the benchmark for the experiments, since they provide both ego and object information.
\subsection {Implementation Details}
As discussed in \cref{section:3ddet}, we leverage a pillar-based 3D object detection network, PointPillars, to detect objects in the KITTI Tracking sequences.
We directly use a version provided by MMDetection3D \cite{mmdet3d2020}, without any fine-tuning or modification. The model is pre-trained on the KITTI 3D detection benchmark. Detailed settings and the pre-trained model can be found in their repository.
3D object detection on the KITTI dataset traditionally uses a range of [(0,70),(-40,40),(-3,1)]. However, this does not perform well in a registration scenario. Thus we limit the point cloud range to [(0,40),(-30,30),(-3,1)] along the x, y, and z axes, respectively. Detections outside this range are filtered out. We also keep only cuboids with a confidence score higher than 0.5 in order to obtain high-quality detections.
Additionally, to improve the efficiency and accuracy of the registration algorithm, we downsample each input point cloud with a 0.3m voxel.
\subsection{Quantitative Experiments}
\subsubsection{Evaluation Metrics}
\begin{table*}[htbp]
\centering
\caption{Comparison of RMSE of RPE on KITTI Tracking Datasets Seq.0000-0020. Bold numbers indicate lower errors.}
\begin{tabular}{c|cc|cc|cc}
\toprule
\multirow{2}[2]{*}{seq} & \multicolumn{2}{c|}{Registration} & \multicolumn{4}{c}{Dynamic Registration} \\
& \qquad \quad NDT \qquad & \quad ICP \qquad \quad & \quad NDT-RMA \quad & \multicolumn{1}{c}{ICP-RMA} & \quad NDT-RMD \quad & \quad ICP-RMD \quad \\
\midrule
0000 & 0.2583 & 0.3641 & 0.2646 & 0.3887 & \textbf{0.2101} & \textbf{0.3444} \\
0001 & 0.3453 & 0.4550 & 0.3506 & 0.4793 & \textbf{0.2689} & \textbf{0.4050} \\
0002 & 0.4560 & 0.5723 & 0.4718 & 0.5861 & \textbf{0.4479} & \textbf{0.5696} \\
0003 & 0.7364 & 0.7952 & 0.7523 & 0.7869 & \textbf{0.7151} & \textbf{0.7769} \\
0004 & 0.8349 & 0.7200 & 0.8509 & 0.7353 & \textbf{0.8127} & \textbf{0.6874} \\
0005 & 0.8709 & 0.8537 & 0.8773 & 0.8643 & \textbf{0.8545} & \textbf{0.8301} \\
0006 & 0.3247 & 0.4664 & 0.3190 & 0.4651 & \textbf{0.2912} & \textbf{0.4496} \\
0007 & 0.3650 & 0.4428 & 0.3566 & 0.4601 & \textbf{0.3195} & \textbf{0.4095} \\
0008 & 0.7415 & 0.7842 & 0.7361 & 0.7859 & \textbf{0.7102} & \textbf{0.7589} \\
0009 & 0.4218 & 0.5042 & 0.4432 & 0.5230 & \textbf{0.3979} & \textbf{0.4613} \\
0010 & 0.9602 & 0.8938 & \textbf{0.9185} & 0.8978 & 0.9407 & \textbf{0.8667} \\
0011 & 0.3978 & 0.5209 & 0.4137 & 0.5629 & \textbf{0.3722} & \textbf{0.5111} \\
0012 & 0.1161 & \textbf{0.1432} & \textbf{0.0856} & 0.1490 & 0.0955 & 0.1473 \\
0013 & 0.3404 & 0.4283 & 0.3488 & 0.4282 & \textbf{0.2861} & \textbf{0.3735} \\
0014 & \textbf{0.1874} & \textbf{0.4625} & 0.1939 & 0.5032 & 0.1876 & 0.4653 \\
0015 & 0.3067 & 0.3625 & 0.3161 & 0.3705 & \textbf{0.2894} & \textbf{0.3571} \\
0016 & 0.1917 & \textbf{0.1136} & 0.1341 & 0.1145 & \textbf{0.1102} & 0.1159 \\
0017 & 0.1259 & \textbf{0.1598} & 0.1514 & 0.1658 & \textbf{0.0801} & 0.1617 \\
0018 & 0.2791 & 0.3746 & \textbf{0.2759} & 0.3857 & 0.2788 & \textbf{0.3074} \\
0019 & 0.2154 & 0.2977 & 0.2162 & 0.3054 & \textbf{0.1954} & \textbf{0.2797} \\
0020 & 0.4879 & 0.7300 & 0.5098 & 0.7938 & \textbf{0.4526} & \textbf{0.7138} \\
\midrule
mean & 0.4268 & 0.4974 & 0.4279 & 0.5120 & \textbf{0.3960} & \textbf{0.4758} \\
\bottomrule
\end{tabular}%
\label{tab:addlabel}%
\end{table*}%
We adopt the relative pose error (RPE) to evaluate the performance of our proposed registration algorithm. The absolute trajectory error (ATE) may not be suitable in a scan matching scenario, since a slight error in a transformation matrix at the beginning of a sequence leads to a large gap between the two trajectories later on.
For simplicity, here we only consider the translational part of the RPE:
\begin{equation}\label{eq:rpe}
\mathrm{RPE}_{\text{trans}}=\sqrt{\frac{1}{N-1} \sum_{i=1}^{N-1} \| \operatorname{trans}\left(\boldsymbol{T}_{\mathrm{gt}}^{-1}\boldsymbol{T}_{\text{est}} \right) \|_{2}^{2}}
\end{equation}
where $\boldsymbol{T}_{\mathrm{gt}}$ and $\boldsymbol{T}_{\text{est}}$ are ground truth pose and estimated pose between two frames, respectively.
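A minimal sketch of this metric is given below; \texttt{T\_gt} and \texttt{T\_est} are assumed to be lists of $4\times4$ homogeneous relative transforms between consecutive frames.
\begin{verbatim}
import numpy as np

def rpe_trans(T_gt, T_est):
    """RMSE of the translational relative pose error over a sequence.

    T_gt, T_est : lists of 4x4 relative transforms between consecutive frames.
    """
    errs = []
    for Tg, Te in zip(T_gt, T_est):
        E = np.linalg.inv(Tg) @ Te          # relative-pose error matrix
        errs.append(np.sum(E[:3, 3] ** 2))  # squared norm of trans(.)
    return np.sqrt(np.mean(errs))
\end{verbatim}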
\subsubsection{Results}
The results are shown in \cref{tab:addlabel}, which details comparisons of our method's performance against traditional registration algorithms. We build our Dynamic Registration on two classical methods, NDT and ICP. The results match intuitive expectations on average: Dynamic Registration performs better in almost all sequences, since removing dynamic objects restores the static world assumption.
The results show that our approach outperforms the other registration algorithms, and is even better than removing all potential moving objects.
Removing all objects without checking their motion state also works when all objects keep moving. However, this coarse strategy increases the localization error because fewer static points are fed into the registration pipeline. Adding static points back to the environment decreases the localization uncertainty in a dynamic and complex environment.
\subsection{Qualitative Evaluation}
\subsubsection{Odometry And Mapping}
\begin{figure}[htbp]
\centering
\subfigure[traj]{
\label{fig:traj}\includegraphics[width=\columnwidth]{image0013}}
\subfigure[Odometry and mapping results of NDT]{
\label{fig:ndtmapping}\includegraphics[width=0.9\columnwidth]{mapping0013ndt}}
\subfigure[Odometry and mapping results of Dynamic Registration ]{
\label{fig:rmdmapping}\includegraphics[width=0.9\columnwidth]{mapping0013rmd}}
\caption{Comparisons of the global trajectories for sample sequence 0013 of the KITTI Tracking datasets. In \cref{fig:traj}, dashed curves represent ground-truth poses. Blue solid lines are trajectories of NDT, and the red ones are trajectories of the proposed Dynamic Registration. In \cref{fig:ndtmapping} and \cref{fig:rmdmapping}, red boxes zoom in on details of the environment.}
\label{fig:odoandmap}
\vspace*{-5mm}
\end{figure}
We also implement a pairwise, structureless odometry based on consecutive point cloud registration purely for visualization purposes, since additional factors would make it harder to isolate the effect of removing dynamic objects. Results are shown in \cref{fig:odoandmap}. Here Dynamic Registration is built on NDT, corresponding to NDT-RMD in \cref{tab:addlabel}. The Dynamic Registration odometry is closer to the ground-truth trajectory than the NDT odometry.
The mapping results indicate that our approach successfully removes dynamic objects. Red boxes enlarge the objects' trajectories. Obvious ghosting is cleared, but fragments remain (purple box in \cref{fig:rmdmapping}), mainly due to false negatives of the detector. False positives matter little, since they are likely segmented as static objects and then added back to the environment.
\subsubsection{Moving Object Detection}
In most cases, moving objects can be segmented successfully. However, motion segmentation in dynamic registration depends on the 3D object detection, the ego-motion estimation, and the preset segmentation threshold.
Low-quality detection results occur when objects are too far from the ego car, which leads to false positives or false negatives. These ``wrong detections'' are filtered out by data association, as mentioned in \cref{section:reprojection}. But data association only partially solves the issue, since false negatives remain unless state estimation and prediction are conducted. Detecting objects in point cloud sequences (also regarded as multi-object tracking) is beyond the scope of this paper.
\section{Introduction}
Simultaneous Localization and Mapping (SLAM) estimates the robot pose while mapping its surroundings in an unknown environment.
Most SLAM systems assume static environments. However, the real world contains dynamic objects. Localization or ego-motion estimation in a dynamic environment is important for applications such as robot navigation and autonomous driving tasks.
SLAM algorithms can easily fail in dynamic environments, so moving objects need to be detected and removed.
Most articles \cite{saputra2018visual,bescos2021dynaslam, zhang2020vdoslam} viewed localization in a dynamic environment from two different perspectives: either as a robust SLAM or as an extension of SLAM. The former regards dynamic elements as outliers and remove them from the estimated process \cite{mur-artal2017orbslam2}, whereas the latter extends SLAM to detect and track the dynamic objects \cite{wang2003online}.
Robot systems require object information of the surroundings for decision making and motion planning. In autonomous driving scenarios, the car not only needs to localize itself but to perceive other cars or pedestrians.
Dynamic scenes are crucial in real environments. Thus it is necessary to pay attention to all objects and their motion state, so as to facilitate segmenting and tracking these instances, rather than focusing on segmenting and classifying points that are probably moving.
\begin{figure}[htbp]
\centering
\subfigure[3D Detection]{\label{fig:notremove}
\includegraphics[width=0.4\columnwidth]{0000ori}}
\subfigure[Remove all objects]
{\label{fig:removeall}
\includegraphics[width=0.4\columnwidth]{0000rma}}
\subfigure[Remove moving objects]{
\label{fig:removedynamic}\includegraphics[width=0.4\columnwidth]{0000rmd}}
\subfigure[Corresponding image ]{\label{fig:corimage}
\includegraphics[width=0.4\columnwidth]{000127}}
\caption{Removing objects using 3D object detection at different stages. Red, green and blue represent cars, cyclists and pedestrians, respectively. The image in \cref{fig:corimage} is shown only for visualization.}
\label{fig:removeobjects}
\vspace*{-5mm}
\end{figure}
These are the starting points of this paper: localization in dynamic environments and detection of complete objects. The intuition is to directly sense dynamic objects and remove them. But when the ego-robot moves, motion segmentation becomes ambiguous. However, this effect has little influence on object detection. Once objects are obtained in advance, ego-motion estimation and moving object detection in dynamic environments become feasible.
In consideration of above all, we propose Dynamic Registration. The remainder of this paper is organized as follows.
In \cref{relatedwork}, we review literature in related research areas.
In \cref{methodology}, we describe the proposed Dynamic Registration.
In \cref{experiments}, we introduce the experimental details and evaluated results.
Finally, we conclude this work and suggest future extensions in \cref{conclusions}.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=170mm]{pipeline.png}
\caption{Overview of our Dynamic Registration system.}
\label{fig:dr}
\end{center}
\end{figure*}
\section{Methodology}\label{methodology}
Our Dynamic Registration system consists of three modules, as illustrated in \cref{fig:dr}.
The first module runs a 3D object detection neural network, generating 3D bounding boxes for all instances.
The second module temporarily removes all objects from the input point cloud; an initial pose can then be estimated by classical scan matching methods.
In the third module, we transform objects in the current frame into the previous frame in Bird's Eye View (BEV) using the estimated pose. A motion state estimation module is then employed to segment static and dynamic objects.
The points of the static objects and the environment are merged to generate a new environment. The merged static environment is then used to iteratively estimate the ego-pose and the motion states of the objects until no new static objects are found.
Finally, the static environment, the ego-motion, and the dynamic objects, three mutually coupled quantities, are all estimated successfully.
\subsection{Object Detection And Removal}
\subsubsection{3D Object Detection}\label{section:3ddet}
We aim to remove moving object points from the registration pipeline and segment these points as instance-level objects. Previous works on segmenting or detecting moving elements use either semantic segmentation or binary moving segmentation, owing to the heavy computation and complexity of instance segmentation in a 3D point cloud.
Such segmentation methods are not enough to describe objects in 3D space; further post-processing such as clustering needs to be implemented to obtain instances.
The balance between real-time performance and accuracy needs to be considered. 3D object detection predicts 3D bounding boxes for objects in a single-frame point cloud. Compared with 3D instance segmentation \cite{zhou2020joint}, 3D object detection achieves almost the same instance-level performance due to the sparsity of point clouds.
VoxelNet \cite{zhou2018voxelnet} achieves end-to-end detection using a voxel feature extractor, but introduces 3D convolutions. SECOND \cite{yan2018second} speeds up the 3D convolutions via sparse optimization. PointPillars \cite{lang2019pointpillars} proposes a pillar representation to improve efficiency.
In order to achieve real-time detection, we adopt PointPillars for 3D object detection.
The output of 3D object detection at time $t$ with $n$ objects is defined as
\begin{equation}\label{eq:3ddet}
\begin{split}
D_{t} &\triangleq \{ d_{0},d_{1},\ldots,d_{n} \}\\
d_{n} &\triangleq \left\lbrace x,y,z,l,w,h,\theta,label,score\right\rbrace
\end{split}
\end{equation}
where $ x,y,z $ are the center coordinates of a cuboid; $ l,w,h $ are the length, width and height of the cuboid, all in meters; and $\theta$ is the heading angle around the z-axis in degrees. In addition to these cuboid parameters, $label$ represents one of three classes (car, cyclist, and pedestrian), and $score$ denotes the confidence score of the detection.
\subsubsection{Removing Points In Cuboid}\label{section:points}
\begin{figure}[htbp]
\vspace*{-3mm}
\centering
\includegraphics[width=\columnwidth]{findpointsincuboid.png}
\caption{$ABCD$ represent a cuboid in a 3D space. O is origin of coordinates. $P_1$ (in red) is inside the cuboid while $P_2$ (in blue) is outside. $OA$, $OB$, $OC$, $OD$, $OP_1$, $OP_2$ denote their coordinates respectively.}
\label{fig:findpoints}
\vspace*{-3mm}
\end{figure}
Suppose a cuboid is represented by the three edge vectors $AB$, $AD$ and $AC$ at corner $A$, as shown in \cref{fig:findpoints}. We use dot products to decide whether a point is located inside the cuboid.
Let $P$ be a point in 3D space. If
\begin{equation}
0 \leqslant \overrightarrow{AP} \cdot \dfrac{\overrightarrow{AB}}{\lVert \overrightarrow{AB} \rVert} \leqslant \overrightarrow{AB} \cdot \dfrac{\overrightarrow{AB}}{\lVert \overrightarrow{AB} \rVert}
\end{equation}
then the projection of $\overrightarrow{AP}$ onto the $\overrightarrow{AB}$ axis is non-negative and does not exceed $\lVert \overrightarrow{AB} \rVert$. Constraining $P$ along all three edge directions in the same way limits $P$ to the cuboid. This procedure can be expressed as:
\begin{equation}
\begin{split}
0 &\leqslant \overrightarrow{AP} \cdot \dfrac{\overrightarrow{AB}}{\lVert \overrightarrow{AB} \rVert} \leqslant \overrightarrow{AB} \cdot \dfrac{\overrightarrow{AB}}{\lVert \overrightarrow{AB} \rVert}\\
0 &\leqslant \overrightarrow{AP} \cdot \dfrac{\overrightarrow{AD}}{\lVert \overrightarrow{AD} \rVert} \leqslant \overrightarrow{AD} \cdot \dfrac{\overrightarrow{AD}}{\lVert \overrightarrow{AD} \rVert}\\
0 &\leqslant \overrightarrow{AP} \cdot \dfrac{\overrightarrow{AC}}{\lVert \overrightarrow{AC} \rVert} \leqslant \overrightarrow{AC} \cdot \dfrac{\overrightarrow{AC}}{\lVert \overrightarrow{AC} \rVert}
\end{split}
\end{equation}
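A minimal vectorized sketch of this test is given below; the corner $A$ and the edge vectors $AB$, $AD$, $AC$ are assumed to have been extracted from the detected cuboid beforehand.
\begin{verbatim}
import numpy as np

def points_in_cuboid(points, A, AB, AD, AC):
    """Boolean mask of the points lying inside the cuboid spanned by the
    edge vectors AB, AD, AC attached to corner A."""
    AP = points - A                              # (N, 3) vectors from A to each point
    mask = np.ones(len(points), dtype=bool)
    for e in (AB, AD, AC):
        proj = AP @ e                            # AP . e
        mask &= (proj >= 0.0) & (proj <= e @ e)  # 0 <= AP.e <= |e|^2
    return mask
\end{verbatim}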
We first remove all potential moving objects temporarily because static objects will be added to the environment after motion segmentation. \cref{fig:notremove} and \cref{fig:removeall} show the removing results.
\subsection{Object Reprojection}\label{section:objectreprojection}
\subsubsection{Initial Pose Estimation}\label{section:initpose}
We first remove all points in the detected cuboids without checking their motion state. After that, static and dynamic objects are both removed. Then an initial pose can be estimated by a registration algorithm using the clean point clouds.
\begin{equation}
T_{0}=Registration(M_{t-1},M_{t})
\end{equation}
where $T_{0} \in SE(3)$.
\subsubsection{Reprojection}\label{section:reprojection}
Detection results $D_{t}$ at time $t$ is projected to time $t-1$ using estimated transformation matrix $T_0$.
\begin{equation}
D_{t}^{t-1}=T_{0}D_{t}
\end{equation}
Objects may be detected differently in the two frames. In theory, the detections should remain the same over such a short time interval. However, due to the uncertainty of observation and inference, noisy detections, false positives, false negatives, and occlusions may occur, resulting in different detections in each frame.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{seg}
\caption{Object reprojection. Even if objects are detected differently, they can be associated and segmented.}
\label{fig:reprojection}
\vspace*{-3mm}
\end{figure}
\subsubsection{Data Association}\label{section:dataassociation}
Data association has to be performed to connect corresponding objects.
Traditional methods include JPDA, PHD, MHT, and RFS from radar tracking systems, as well as deep-learning-based methods, but they might be too heavy for this situation. Inspired by SORT \cite{bewley2016simple} and AB3DMOT \cite{weng20203d} in the field of multi-object tracking, the Hungarian assignment algorithm is employed here to address the association problem.
Different from the MOT task, corresponding objects have the closest distance when projected into the previous frame. Based on this fact, we can associate the center points of the cuboids without calculating a complex 3D IoU. Objects with the nearest Euclidean distance are considered to be the same, as shown in \cref{fig:reprojection}.
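A minimal sketch of this nearest-center association using SciPy's Hungarian solver is given below; the gating distance used to reject spurious matches is an assumption.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_centers(centers_prev, centers_proj, max_dist=2.0):
    """Match reprojected cuboid centers to previous-frame centers.

    Returns index pairs (i_prev, j_curr) whose Euclidean distance is
    below max_dist (in meters)."""
    cost = np.linalg.norm(
        centers_prev[:, None, :] - centers_proj[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)      # Hungarian assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_dist]
\end{verbatim}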
\subsection{Dynamic Registration}
\subsubsection{Motion segmentation}
Object reprojection errors can then be computed from the associated detections according to the initial pose. These reprojection errors represent the moving distances of the objects between the two frames, so the detected objects can be segmented by a threshold. The dynamic environment is thereby divided into moving objects and a static environment that includes the static objects.
\begin{algorithm}
\caption{Dynamic Registration}\label{algorithmDR}
\KwIn{Input Point Clouds: $M_{t-1}$, $M_{t}$}
\KwOut{Dynamic Registration Pose: $T^{DR}_{t}$ \\
\qquad \quad \ \, Segmented Dynamic Objects: $D_{t-1}^{d}$, $D_{t}^{d}$ \\
\qquad \quad \ \, Segmented Static Objects: $D_{t-1}^{s}$, $D_{t}^{s}$}
$(D_{t-1},D_{t})\leftarrow $ 3D Object Detection $(M_{t-1},M_{t})$\\
$(M_{t-1}^{RA}, M_{t}^{RA})\leftarrow $ RemoveAll $(M_{t-1}, D_{t-1}, M_{t}, D_{t})$\\
$T_{t}^{0}\leftarrow $ Registration $(M_{t-1}^{RA}, M_{t}^{RA})$\\
$D_{t}^{t-1}\leftarrow $ Object Reprojection $(T_{t}^{0}, D_{t-1}, D_{t})$\\
$(D_{t-1}^{d}, D_{t}^{d}, D_{t-1}^{s}, D_{t}^{s})\leftarrow $ MotionSegmentation$(D_{t-1}, D_{t}^{t-1})$\\
$(M_{t-1}^{RD}, M_{t}^{RD})\leftarrow $ MergeAndConcatenate$(D_{t-1}^{s}, D_{t}^{s}, M_{t-1}^{RA}, M_{t}^{RA})$\\
$T^{DR}_{t}\leftarrow $ Registration $(M_{t-1}^{RD}, M_{t}^{RD})$\\
\end{algorithm}
\subsubsection{Iterative Dynamic Registration}
In most cases, removing all objects from the point clouds helps improve the registration accuracy. However, static objects frequently occur even in highly challenging real-world environments, e.g., parked cars. Static objects also provide rich features, so there is no good reason to keep them out of the registration pipeline.
Since static objects have been segmented, they can be added back to the environment. After the newly identified stationary points are merged and concatenated, a registration algorithm, as mentioned in \cref{section:initpose}, is employed again to obtain a more reliable transformation matrix. This dynamic registration process is conducted iteratively until no new static objects are generated. After that, the static environment, the dynamic objects, and the ego-motion are all fully separated. Any mapping, tracking, or localization algorithm may then be utilized for post-processing.
We formulate the iterative dynamic registration process as \cref{algorithmDR} (Dynamic Registration) and \cref{algorithmIDR} (Iterative Dynamic Registration). The subscript $t$ represents the time stamp. The upper-left superscripts $0$ and $\ast$ represent initial and optimized results, respectively. The upper-right symbols $RA$, $RD$, $s$, $d$ represent \underline{R}emove \underline{A}ll objects, \underline{R}emove \underline{D}ynamic objects, \underline{s}tatic and \underline{d}ynamic, respectively.
\begin{algorithm}
\caption{Iterative Dynamic Registration}\label{algorithmIDR}
\KwIn{Point Clouds: $M_{t-1}^{RA}$, $M_{t}^{RA}$\\
\qquad \quad Detection Results: $D_{t-1}$, $D_{t}$ \\
\qquad \quad Initial Registration Pose: $T_{t}^{0}$ \\
\qquad \quad Dynamic Registration Pose: $T_{t}^{DR}$}
\KwOut{Iterated Registration Pose: $T^{\ast}_{t}$ \\
\qquad \quad \ \, Iterated Dynamic Objects: $D_{t-1}^{d\ast}$, $D_{t}^{d\ast}$ \\
\qquad \quad \ \, Iterated Static Objects: $D_{t-1}^{s\ast}$, $D_{t}^{s\ast}$}
$^{0}D_{t}^{t-1}\leftarrow $ Object Reprojection $(T_{t}^{0}, D_{t-1}, D_{t})$\\
$(^{0}D_{t-1}^{d},\ ^{0}D_{t}^{d},\ ^{0}D_{t-1}^{s},\ ^{0}D_{t}^{s})\leftarrow $
Motion Segmentation $(D_{t-1},\ ^{0}D_{t}^{t-1})$\\
$^{\ast}D_{t}^{t-1}\leftarrow $ Object Reprojection $(T^{DR}_{t}, D_{t-1}, D_{t})$\\
$(^{\ast}D_{t-1}^{d},\ ^{\ast}D_{t}^{d},\ ^{\ast}D_{t-1}^{s},\ ^{\ast}D_{t}^{s})\leftarrow $
Motion Segmentation $(D_{t-1}, ^{\ast}D_{t}^{t-1})$\\
\While{$^{\ast}D_{t-1}^{s} \neq D_{t-1}^{s}$ and $^{\ast}D_{t}^{s} \neq D_{t}^{s}$}
{$(^{\ast}M_{t-1}^{RD}, ^{\ast}M_{t}^{RD})\leftarrow $ MergeAndConcatenate$(^{\ast}D_{t-1}^{s}, ^{\ast}D_{t}^{s}, M_{t-1}^{RA}, M_{t}^{RA})$\\
$T^{\ast}_{t}\leftarrow $ Registration $(^{\ast}M_{t-1}^{RD},^{\ast}M_{t}^{RD})$\\
$D_{t-1}^{s} = \, ^{\ast}D_{t-1}^{s}$ and $D_{t}^{s} = \, ^{\ast}D_{t}^{s} $ \\
$^{\ast}D_{t}^{t-1}\leftarrow $ Object Reprojection $(T^{\ast}_{t}, D_{t-1}, D_{t})$\\
$(^{\ast}D_{t-1}^{d},\ ^{\ast}D_{t}^{d},\ ^{\ast}D_{t-1}^{s},\ ^{\ast}D_{t}^{s})\leftarrow $ MotionSegmentation$(D_{t-1}, ^{\ast}D_{t}^{t-1})$\\
}
\end{algorithm}
\section{Related Work}\label{relatedwork}
\subsection {Moving Object Detection}
Moving object detection or segmentation has been widely researched in the computer vision community in the past \cite{siam2018modnet}, whereas here we focus only on lidar-based methods.
Generally, it is hard for MOD methods to segment complete moving objects in dynamic scenes due to the coupling with ego-motion, which hinders complete geometric representations of the objects.
\textit{Map based}: \cite{pagad2020robust} builds an occupancy map as a filter to remove dynamic points. \cite{pomerleau2014longterm} performs map point classification based on visibility assumptions for the sake of lifelong map updating and moving object segmentation.
\textit{Motion based}: Motion-based methods detect moving objects without prior information, but may fail to segment all moving objects since they rely only on motion cues, and they are unable to detect stationary objects that have the potential to move \cite{dewan2016motionbased}.
\textit{End to end}: Chen et al. \cite{chen2021moving} project the point cloud to a range view and then generate residual images as inputs to a segmentation network, but they simply divide the point cloud into moving and static parts.
\cite{sun2021pointmoseg} segments points semantically using two consecutive frames.
However, these methods miss the concept of an instance. They tend to label a point with its motion state and semantic class rather than to detect a car or a pedestrian with its size, location, orientation, and motion state. That is to say, we may get some moving points representing half a car, which makes it hard to decide the motion state of the whole object.
\subsection {Localization in Dynamic Environment}
Most SLAM systems designed for dynamic environments are visual SLAM \cite{bescos2018dynaslam, yu2018dsslam}. 2D object detection is usually implemented in a robot SLAM system \cite{he2017mask}.
One important reason is that it is easier to deploy and accelerate a visual detection model than one for lidar point clouds. However, recent advances in 3D object detection make it possible to meet the need for both speed and accuracy in point clouds \cite{yan2018second,lang2019pointpillars}. We introduce them in \cref{section:3ddet}.
Wang et al. \cite{wang2007simultaneous} first proposed a Bayesian-based theory of Simultaneous Localization, Mapping and Moving Object Tracking (SLAMMOT).
Since then, SLAM and the Detection And Tracking of Moving Objects (DATMO) have been combined and proved to be beneficial to each other. Running SLAM and MOT in parallel can estimate both ego-motion and object motion simultaneously.
\cite{moosmann2013joint} implements a joint estimation of localization and tracking. To emphasize the capability of tracking, they adopt segmentation rather than detection.
\cite{wang20204d} projects the 3D point cloud onto a 2D plane and adopts an FCN to segment objects, but classification and clustering are needed as post-processing.
Our Dynamic Registration is designed for two point cloud frames without prior motion compensation or motion estimation, i.e., without scene flow or GPS/IMU data. Once moving objects are segmented out, they can be tracked in any manner.
The $U(3)\times U(3)$ NJL model with scalar-pseudoscalar and vector-axial-vector sectors is
used in the present work. To solve the $U_A(1)$ problem, the six-quark t`Hooft interaction is
added to the Lagrangian of the model \cite{Klimt:1989pm,Klevansky:1992qe}
\begin{eqnarray}
\mathcal{L}& =& {\bar q}(i{\hat \partial} - m^0)q
+ \frac{G}{2}\sum_{i=0}^8 [({\bar q} {\lambda}_i q)^2 +({\bar q}i{\gamma}_5{\lambda}_i q)^2]
+ \frac{G_V}{2}\sum_{i=0}^8 [({\bar q} {\gamma}_\mu {\lambda}_i q)^2 +({\bar q}{\gamma}_5{\gamma}_\mu{\lambda}_i q)^2]
\nonumber \\
&&- K \left( {\det}[{\bar q}(1+\gamma_5)q]+{\det}[{\bar q}(1-\gamma_5)q] \right),
\label{Ldet}
\end{eqnarray}
where $\lambda_i$ (i=1,...,8) are the Gell-Mann matrices and $\lambda^0 =
{\sqrt{\frac{2}{3}}}${\bf 1}, with {\bf 1} being the unit matrix; $m^0$ is the current quark
mass matrix with diagonal elements $m^0_u$, $m^0_d$, $m^0_s$ $(m^0_u \approx m^0_d)$, $G$ and
$G_V$ are the scalar--pseudoscalar and vector--axial-vector four-quark coupling constants; $K$
is the six-quark coupling constant. The six-quark interaction can be reduced to an effective
four-fermion vertex after the contraction of one of the quark pairs. The details are given in
appendix A.
Light current quarks transform to massive constituent quarks as a result of spontaneous chiral
symmetry breaking. Constituent quark masses can be found from the Dyson-Schwinger equation for
the quark propagators (gap equations)
\begin{eqnarray}
m_u&=&m_u^0 + 8 m_u G I_1(m_u)+32 m_u m_s K I_1(m_u) I_1(m_s)\nonumber\\
m_s&=&m_s^0 + 8 m_s G I_1(m_s)+32 K \left(m_uI_1(m_u)\right)^2, \label{gapNJL}
\end{eqnarray}
where $I_1(m)$ is the quadratically divergent integral. The modified Pauli-Villars (PV)
regularization with two subtractions with the same $\Lambda$ is used for the regularization of
divergent integrals\footnote{Any function $f(m^2)$ of mass $m^2$ is regularized by using the
rule
\begin{eqnarray}
f(m^2)\to f(m^2)-f(m^2+\Lambda^2)+\Lambda^2 f^\prime(m^2+\Lambda^2).\nonumber
\end{eqnarray}} (see
\cite{Bernard:1992mp,Bernard:1995hm,Bajc:1996gt,Schuren:1991sc}). In this case the
quadratically and logarithmically divergent integrals $I_1(m)$ and $I_2(m)$ have the same form
as in the four-momentum cut-off scheme
\begin{eqnarray}
I_1(m) &=& \frac{N_c}{4 \pi^2}
\left[\Lambda^2-m^2\ln\left(\frac{\Lambda^2}{m^2}+1\right)\right], \quad
I_2 (m) =\frac{N_c}{4 \pi^2}
\left[\ln\left(\frac{\Lambda^2}{m^2}+1\right)-\left(1+\frac{m^2}{\Lambda^2}\right)^{-1}\right]
\nonumber.
\end{eqnarray}
Moreover, the Pauli-Villars regularization is suitable for the description of the vector
sector because it preserves gauge invariance.
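As a minimal numerical sketch, the gap equations (\ref{gapNJL}) together with the regularized integral $I_1$ can be transcribed directly; the residual form below is suitable for a standard root finder, uses GeV-based units with $N_c=3$, and involves no assumptions beyond the expressions already given.
\begin{verbatim}
import numpy as np

N_C = 3  # number of colors

def I1(m, Lam):
    """Quadratically divergent integral in the modified Pauli-Villars scheme."""
    return N_C / (4.0 * np.pi**2) * (Lam**2 - m**2 * np.log(Lam**2 / m**2 + 1.0))

def gap_residuals(m, m0, G, K, Lam):
    """Residuals of the gap equations; m = (m_u, m_s), m0 = (m0_u, m0_s)."""
    mu, ms = m
    ru = mu - (m0[0] + 8.0 * mu * G * I1(mu, Lam)
               + 32.0 * mu * ms * K * I1(mu, Lam) * I1(ms, Lam))
    rs = ms - (m0[1] + 8.0 * ms * G * I1(ms, Lam)
               + 32.0 * K * (mu * I1(mu, Lam))**2)
    return np.array([ru, rs])
\end{verbatim}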
Masses and vertex functions of the mesons can be found from the Bethe-Salpeter equation. The
expression for the quark-antiquark scattering matrix is
\begin{eqnarray}
\hat{T}=\mathbf{G}+\mathbf{G}\mathbf{\Pi}(p^2)\hat{T}=\frac{1}{\mathbf{G}^{-1}-\mathbf{\Pi}(p^2)},
\end{eqnarray}
where $\mathbf{G}$ and $\mathbf{\Pi}(p^2)$ are the corresponding matrices of the four-quark
coupling constant and polarization loops. The particle mass can be found from the equation
$\mathrm{det}(\mathbf{G}^{-1}-\mathbf{\Pi}(M^2))=0$ and near the poles the corresponding part
of the $\hat{T}$ matrix can be expressed in the form
\begin{eqnarray}
\hat{T}=\frac{\bar{V} \otimes V}{p^2-M^2},
\end{eqnarray}
where $V$ and $M$ are the vertex function and mass of the meson, and $\bar{V} = \gamma^0
V^\dag \gamma^0$. Details of calculations for different channels are presented in appendices
B, C. Here we discuss only general properties.
The simplest situation occurs for the vector and the isovector scalar mesons with equal
quark masses (say $\rho$ and $a_0$). In this case, the coupling constant and the polarization
operator are just numbers (not matrices). For pseudoscalar mesons, additional axial-vector
components appear in the vertex function due to the pseudoscalar--axial-vector mixing (in the
scalar case this transition loop is proportional to the difference of quark masses). An
additional complication takes place for $\eta$ and $\eta^\prime$ due to the singlet-octet
mixing (or mixing of strange and non-strange quarks due to the t`Hooft interaction).
Therefore, the vertex function of this meson has four components: strange and non-strange
pseudoscalar and axial-vector.
\section{Fixing model parameters}
The model has six parameters: the coupling constants $G$, $G_V$, $K$, PV cut-off $\Lambda$,
and constituent quark masses $m_u$ and $m_s$. We use two parametrization schemes. In the first
one, the model parameters are defined using masses of the pion, kaon, $\rho$ and $\eta$ mesons
and the weak pion decay constant $f_\pi$. Note that the number of input parameters is greater
than the number of physical observables by one. This allows us, following
\cite{Bernard:1995hm}, to take the mass of the $u$ quark slightly larger than the half of the
$\rho$-meson mass.
As a result, we have the following set (set I) of model parameters
\begin{eqnarray}
m_u = 390\,\mathrm{MeV},\,
m_s=496\,\mathrm{MeV},\,
G=6.62\,\mathrm{GeV}^{-2},\,
G_V=-11.29\,\mathrm{GeV}^{-2},\,
K = 123\,\mathrm{GeV}^{-5},\,
\Lambda=1\,\mathrm{GeV}.
\end{eqnarray}
The values of the current quark masses $m^0_u,m^0_s$ are determined from the gap
equations (\ref{gapNJL}): $m^0_u=3.9$ MeV and $m^0_s=70$ MeV ($m^0_s/m^0_u=18$).
For this set of model parameters, the two-photon decay width of the $\eta$ meson,
$\Gamma_{\eta\to \gamma\gamma}=0.37$ keV, is smaller than the experimental one:
$\Gamma^{\mathrm{exp}}_{\eta\to\gamma\gamma}=0.510\pm0.026$ keV \cite{PDBook}.
In set II the model parameters are fixed in order to reproduce the two-photon decay width
of the $\eta$ meson instead of its mass (the $\eta$-meson mass in this case is $M_\eta=530$ MeV)
\begin{eqnarray}
m_u = 390\,\mathrm{MeV},\,
m_s=506\,\mathrm{MeV},\,
G=8.04\,\mathrm{GeV}^{-2},\,
G_V=-11.29\,\mathrm{GeV}^{-2},\,
K = 77\,\mathrm{GeV}^{-5},\,
\Lambda=1\,\mathrm{GeV}.
\end{eqnarray}
The current quark masses are $m^0_u=3.9$ MeV and $m^0_s=78$ MeV ($m^0_s/m^0_u=20$).
\section{Decay $\eta \to \pi^0 \gamma \gamma$}
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{EtPiGaGa}} \caption{\label{fig:etpigaga}
Diagrams contributing to the amplitude of the process $\eta \to \pi^0 \gamma \gamma$.}
\end{figure}
The general form of the $\eta \to \pi^0 \gamma \gamma$ decay amplitude contains two
independent tensor structures \cite{Ecker:1987hd}
\begin{eqnarray}
T= T^{\mu\nu}\epsilon^{1}_\mu\epsilon^{2}_\nu,\quad T^{\mu\nu}=A(x_{1},x_{2})(q_{1}^{\nu}
q_{2}^{\mu} - q_{1} \cdot q_{2} g^{\mu\nu}) + B(x_{1},x_{2}) \left[ -M_{\eta}^{2} x_{1}x_{2}
g^{\mu\nu} - \frac{q_{1} \cdot q_{2}}{M_{\eta}^{2}} p^{\mu}p^{\nu} + x_{1} q_{2}^{\mu} p^{\nu}
+ x_{2} p^{\mu} q_{1}^{\nu} \right], \label{ampform}
\end{eqnarray}
where $p$, $q_1$, $q_2$ are the momentum of the $\eta$ meson and photons, $\epsilon^{1}_\mu$
and $\epsilon^{2}_\nu$ are the polarization vectors of the photons, and $x_i= p\cdot
q_i/M_\eta^2$.
The $\eta \to \pi^0 \gamma \gamma$ decay width has the form
\begin{eqnarray}
\Gamma & = & \frac{M_{\eta}^{5}}{256 \pi^{2}}
\int\limits_{0}^{(1-y)/2} dx_{1} \int\limits_{x_2^{\mathrm{min}} }^{x_2^{\mathrm{max}}} dx_{2}
\left\{ \left| A(x_{1},x_{2}) + \frac{1}{2} B(x_{1},x_{2}) \right|^{2} \left[ 2(x_{1}+x_{2})
+y -1 \right]^{2} \right. \nonumber \\
& + & \left. \frac{1}{4} \left| B(x_{1},x_{2}) \right| ^{2} \left[ 4 x_{1} x_{2}
- \left[ 2(x_{1}+x_{2})+ y-1 \right] \right] ^{2} \right\} ,\\
&&x_2^{\mathrm{min}}={(1-2x_1-y)/2 },\quad x_2^{\mathrm{max}}={(1-2x_1-y)/2(1-2x_1)},
\quad y =M_\pi^2/M_\eta^2.\nonumber
\end{eqnarray}
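For completeness, a minimal numerical sketch of this two-dimensional phase-space integration is given below; the amplitude functions $A(x_1,x_2)$ and $B(x_1,x_2)$ are assumed to be supplied by the user, and the quadrature routine is a generic SciPy call rather than the integration procedure actually used in this work.
\begin{verbatim}
import numpy as np
from scipy import integrate

def decay_width(A, B, M_eta, M_pi):
    """Gamma(eta -> pi0 gamma gamma) from callables A(x1, x2), B(x1, x2)."""
    y = (M_pi / M_eta) ** 2

    def integrand(x2, x1):
        c = 2.0 * (x1 + x2) + y - 1.0
        return (abs(A(x1, x2) + 0.5 * B(x1, x2)) ** 2 * c ** 2
                + 0.25 * abs(B(x1, x2)) ** 2 * (4.0 * x1 * x2 - c) ** 2)

    val, _ = integrate.dblquad(
        integrand,
        0.0, (1.0 - y) / 2.0,                                        # x1 range
        lambda x1: (1.0 - 2.0 * x1 - y) / 2.0,                       # x2_min(x1)
        lambda x1: (1.0 - 2.0 * x1 - y) / (2.0 * (1.0 - 2.0 * x1)),  # x2_max(x1)
    )
    return M_eta ** 5 / (256.0 * np.pi ** 2) * val
\end{verbatim}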
In the NJL model the amplitude for the $\eta \to \pi^0 \gamma \gamma$ decay process is
described by three types of diagrams (see Fig. \ref{fig:etpigaga}): the quark box and the exchanges
of scalar ($a_0$) and vector ($\rho,\omega$) resonances. Let us consider these contributions
in detail.
The scalar meson exchange has the simplest form. It gives a contribution only to $A(x_1,x_2)$.
This contribution consists of three parts and can be written in the form (see appendices B and
C for the definition of polarization loops and vertex functions):
\begin{eqnarray}
A(x_1,x_2) &=& \frac{g_{a_0 \eta \pi}(2q_1\cdot q_2)g_{a_0 \gamma \gamma}(2q_1\cdot q_2)}{G_{a_0}^{-1}-\Pi_{SS}^{uu}(2 q_1\cdot q_2 )}
, \quad q_1\cdot q_2 = M_{\eta}^{2}\left(x_1+x_2-\frac{1}{2}\right)+\frac{M_\pi^2}{2}\nonumber\\
g_{a_0\gamma\gamma}(p^2)&=& \frac{1}{2 \pi^2} \int \limits_0^1 dx_1 \int \limits_0^{1-x_1}
dx_2 \frac{m_u(1-4x_1x_2)}{(p^2 x_1 x_2-m_u^2-\Lambda^2)^2 (p^2 x_1 x_2-m_u^2)} \\
g_{a_0 \eta \pi}(p^2)&=& -i 2 N_c N_f\int \frac{d^4_\Lambda k}{(2\pi)^4}
\mathrm {Tr}_D\left\{V_{a_0}S_u(k+q_1)V_{\pi}S_u(k)V_{\eta}S_u(k-q_2)\right\}.\nonumber
\end{eqnarray}
Here $\mathrm {Tr}_D$ is the trace over Dirac indices, the index $\Lambda$ in the integration
measure denotes PV regularization of the integral, and $S_j(p)=(\hat{p}-m_j)^{-1}$.
The amplitude with the vector meson ($\rho,\omega$) exchanges consists of two quark triangles
of anomalous type (see appendix D) and the vector meson propagator. It gives the following
contributions
\begin{eqnarray}
B(x_{1},x_{2})&=&\sum\limits_{j=\rho,\omega}\,\sum\limits_{i=1,2}
\frac{g_{\eta j \gamma}(M_\eta^2,M_\eta^2(1-2 x_i),0)g_{\pi j \gamma}(M_\pi^2,M_\eta^2(1-2 x_i),0)}
{G_{2}^{-1}-\Pi_{VV}^{uu}(M_\eta^2(1-2 x_i))},\\
A(x_{1},x_{2})&=&\sum\limits_{j=\rho,\omega}\, \sum\limits_{i=1,2}
\frac{g_{\eta j \gamma}(M_\eta^2,M_\eta^2(1-2 x_i),0)g_{\pi j \gamma}(M_\pi^2,M_\eta^2(1-2 x_i),0)M_\eta^2(1-x_i)}
{G_{2}^{-1}-\Pi_{VV}^{uu}(M_\eta^2(1-2 x_i))}.\nonumber
\end{eqnarray}
The box diagram is of a more complicated structure. It consists of three types of boxes (plus
three crossed) and contains the diagrams with pseudoscalar and axial-vector components of the
$\pi$ and $\eta$ mesons
\begin{eqnarray}
T_{\mu\nu}=-i e^{2} \int \frac{d^4_\Lambda k}{(2\pi)^4}
\mathrm {Tr}_D \biggl(\biggr. &&V_{\pi} S(k) V_{\eta} S(k+p-q_1-q_{2}) \gamma_{\nu} S(k+p-q_1) \gamma_{\mu} S(k+p)+\nonumber\\
&&+V_{\pi} S(k) V_{\eta} S(k+q_2) \gamma_{\nu} S(k+p-q_1) \gamma_{\mu} S(k+p)\\
&&+V_{\pi} S(k) \gamma_{\nu} S(k+q_2) \gamma_{\mu} S(k+q_1+q_2) V_{\eta} S(k+p)
+\left\{q_1 \leftrightarrow q_2 ,\mu \leftrightarrow \nu \right\}\biggl.\biggr)\nonumber
\end{eqnarray}
We calculate these diagrams numerically. In order to check the integration procedure, we
calculate all coefficients of the different tensor structures and verify that they have the
gauge-invariant form (\ref{ampform}).
The obtained results for the decay width are given in Table 1 for the two sets of model
parameters. The main contribution comes from the box diagram. The contribution from the vector
mesons interferes constructively, while the scalar $a_0$ contribution interferes destructively.
The results are in satisfactory agreement with the Crystal Ball data, $0.45\pm0.12$
\cite{Prakhov:2005vx}, and with the present value $0.57\pm0.21$ given in the PDG \cite{PDBook}.
\begin{table}
\caption{$\eta\to\pi^0\gamma\gamma$ decay width.}
\begin{tabular}{|c|c|c|}
\hline
Contribution & set I & set II \\
\hline
vector mesons & 0.17 & 0.20 \\
scalar meson & 0.03 & 0.12 \\
vector+scalar mesons& 0.10 & 0.12 \\
box & 0.28 & 0.35 \\
box+vector & 0.78 & 0.95 \\
total & 0.53 & 0.45 \\
\hline
\end{tabular}
\end{table}
It is also very instructive to consider the invariant mass distribution. In Figures
\ref{fig:DifEtaPiGaGaS1} and \ref{fig:DifEtaPiGaGaS2}, the invariant mass distribution of the
two photons is shown for the scalar meson contribution, the vector meson contributions, scalar +
vector mesons, and the total. In Figure \ref{fig:DifEtaPiGaGaS1S2Os}, the results of our
calculations of the invariant mass distribution are compared with the calculation in the
chiral unitary approach \cite{Oset:2002sh}.
\begin{figure}
\resizebox{0.5\textwidth}{!}{\includegraphics{DGEPGG1v2.eps}}
\caption{\label{fig:DifEtaPiGaGaS1} Invariant mass distribution of the two photons: scalar
meson contribution (dots), vector meson contributions (short dash), scalar + vector
mesons (dash-dot), quark box (long dash) and total (continuous line) for set I.}
\end{figure}
\begin{figure}
\resizebox{0.5\textwidth}{!}{\includegraphics{DGEPGG2v2.eps}}
\caption{\label{fig:DifEtaPiGaGaS2} Invariant mass distribution of the two photons: scalar
meson contribution (dots), vector meson contributions (short dash), scalar + vector
mesons (dash-dot), quark box (long dash) and total (continuous line) for set II.}
\end{figure}
\begin{figure}
\resizebox{0.5\textwidth}{!}{\includegraphics{DGEPGG123v2.eps}}
\caption{\label{fig:DifEtaPiGaGaS1S2Os} Invariant mass distribution of the two photons for the
total contribution for set I (dashes) and set II (dots), together with the results of the
chiral unitary approach \cite{Oset:2002sh}.}
\end{figure}
\section{Conclusions}
Earlier calculations of the process $\eta \to \pi^0 \gamma\gamma$ in the NJL model do not
include the momentum dependence of quark loops and pseudoscalar--axial-vector transitions and
are in satisfactory agreement with the GAMS experiment.
Recently, new experimental data on this decay have been obtained, and the value of the
decay width is almost two times smaller. A number of theoretical estimates have also been obtained,
and it seems that the momentum dependence of the amplitudes is important for a correct description
of this process (the ``all-order'' estimate in ChPT).
In the present work, the contributions from quark box, scalar and vector pole diagrams are
considered with the full momentum dependence. The pseudoscalar--axial-vector transitions are
also taken into account.
The obtained result is consistent with recent experiments, theoretical estimates of ChPT
\cite{Ametller:1991dp,Bellucci:1995ay} and the chiral unitary approach\cite{Oset:2002sh}.
In the future, we plan to consider the polarizability of pions and also the decays of vector mesons
$\rho(\omega)\to\eta(\pi)\pi\gamma$.
\begin{acknowledgments}
The authors thank I. V. Anikin, A. E. Dorokhov, A. A. Osipov and V. L. Yudichev for useful
discussions. The authors acknowledge the support of the Russian Foundation for Basic Research,
under contract 05-02-16699.
\end{acknowledgments}
\section*{Appendixes}
\subsection{Lagrangian}
Lagrangian (\ref{Ldet}) can be rewritten in the form (see
\cite{Klimt:1989pm,Klevansky:1992qe})
\begin{eqnarray}
&&\mathcal{L} =
{\bar q}(i{\hat \partial} - m^0)q +
{\frac{1}{2}} \sum_{i=1}^9
[G_i^{(-)} ({\bar q}{\lambda^\prime}_i q)^2 +G_i^{(+)}({\bar q}i{\gamma}_5{\lambda^\prime}_i q)^2] +
\nonumber \\
&&\qquad+ G^{(-)}_{us}({\bar q} {\lambda}_u q)({\bar q} {\lambda}_s q)
+ G^{(+)}_{us}({\bar q}i{\gamma}_5{\lambda}_u q)({\bar q}i {\gamma}_5{\lambda}_s q)
+ \frac{G_V}{2}\sum_{i=0}^8 [({\bar q}{\gamma}_\mu { \lambda}_i q)^2
+({\bar q}{\gamma}_5{\gamma}_\mu{\lambda}_i q)^2],
\label{LGus}
\end{eqnarray}
where
\begin{eqnarray}
&&{\lambda^\prime}_i={\lambda}_i ~~~ (i=1,...,7),~~~\lambda^\prime_8 = \lambda_u =
({\sqrt 2}
\lambda_0 + \lambda_8)/{\sqrt 3},\nonumber\\
&&\lambda^\prime_9 = \lambda_s = (-\lambda_0 + {\sqrt 2}\lambda_8)/{\sqrt 3}, \label{DefG}\\
&&G_1^{(\pm)}=G_2^{(\pm)}=G_3^{(\pm)}= G \pm 4Km_sI_1 (m_s), \nonumber \\
&&G_4^{(\pm)}=G_5^{(\pm)}=G_6^{(\pm)}=G_7^{(\pm)}= G \pm 4Km_uI_1 (m_u),
\nonumber \\
&&G_u^{(\pm)}= G \mp 4Km_sI_1(m_s), ~~~ G_s^{(\pm)}= G, ~~~ G_{us}^{(\pm)}= \pm 4{\sqrt
2}Km_uI_1 (m_u).\nonumber
\end{eqnarray}
\subsection{Polarization loops}
Polarization loops in different channels after the PV regularization
\begin{eqnarray}
e^{-izm_im_j} \to R_{ij}(z)=e^{-izm_im_j}\left[1-(1+iz\Lambda^2)e^{-iz\Lambda^2}\right]
\end{eqnarray}
take the form (see \cite{Bernard:1995hm} for the expressions for the polarization loops with
equal indices)
\begin{eqnarray}
\Pi_{PP}^{ij}(p^2) &=&\frac{N_c}{4\pi^2}\int_{-1}^1dy\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA}
\left[-\frac{i}{z}+\frac{1}{2}p^2(1-y^2)-\frac{1}{2}\left[(m_i-m_j)^2-y(m_i^2-m_j^2)\right]\right] ,\nonumber\\
\Pi_{SS}^{ij}(p^2) &=&\Pi_{PP}^{ij}(p^2)-2m_im_j\frac{N_c}{4\pi^2}\int_{-1}^1dy\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA} ,\nonumber\\
\Pi_{VV}^{ij,{\mu\nu}}(p^2) &=&\left(g^{\mu\nu}-\frac{p^{\mu}p^{\nu}}{p^2}\right) \Pi_{VV}^{ij}(p^2) + \frac{p^{\mu}p^{\nu}}{p^2} \Pi_{VV}^{ij,L}(p^2),\nonumber\\
\Pi_{VV}^{ij,L}(p^2) &=&\frac{N_c}{8\pi^2}\int_{-1}^1dy\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA}
\left[(m_i-m_j)^2-y(m_i^2-m_j^2)\right]\nonumber,\\
\Pi_{VV}^{ij}(p^2) &=& \Pi_{VV}^{ij,L}(p^2)- p^2 \frac{N_c}{8\pi^2}\int_{-1}^1dy(1-y^2)\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA}
\nonumber,\\
\Pi_{AA}^{ij,{\mu\nu}}(p^2) &=&\left(g^{\mu\nu}-\frac{p^{\mu}p^{\nu}}{p^2}\right) \Pi_{AA}^{ij,T}(p^2) + \frac{p^{\mu}p^{\nu}}{p^2} \Pi_{AA}^{ij}(p^2),\nonumber\\
\Pi_{AA}^{ij,T}(p^2) &=&\Pi_{VV}^{ij}(p^2)+2m_im_j\frac{N_c}{4\pi^2}\int_{-1}^1dy\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA},\\
\Pi_{AA}^{ij}(p^2) &=&\Pi_{VV}^{ij,L}(p^2)+2m_im_j\frac{N_c}{4\pi^2}\int_{-1}^1dy\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA}\nonumber,\\
\Pi_{PA}^{ij,\mu}(p^2) &=& \frac{p^{\mu}}{\sqrt{p^2}} \Pi_{PA}^{ij}(p^2) = p^{\mu} i(m_i+m_j) \frac{N_c}{8\pi^2}\int_{-1}^1dy\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA}\nonumber,\\
\Pi_{AP}^{ij,\mu}(p^2) &=& \frac{p^{\mu}}{\sqrt{p^2}} \Pi_{AP}^{ij}(p^2) =-p^{\mu} i(m_i+m_j) \frac{N_c}{8\pi^2}\int_{-1}^1dy\int_0^{\infty}\frac{dz}{z}R_{ij}(z)e^{izA}\nonumber,\\
&&A=\frac{p^2}{4}(1-y^2)-\frac{1}{2}\left[(m_i-m_j)^2-y(m_i^2-m_j^2)\right]\nonumber.
\end{eqnarray}
\subsection{Vertex functions}
The vertex functions have the simplest form for the vector meson $\rho$ and the isovector scalar
meson $a_0$, namely\footnote{We suppress flavor indices.}:
\begin{eqnarray}
V_{a_0}=g_{a_0} \mathbf{I} a_0, \quad V_{\rho}= g_{\rho} \gamma_\mu \rho^\mu.
\end{eqnarray}
The matrices $\mathbf{G}$ and $\mathbf{\Pi}$ for $a_0$ and $\rho$ mesons have the form
\begin{eqnarray}
\mathbf{G}_{a_0} &=& G_1^{(-)},\, \mathbf{\Pi}_{a_0}(p^2) = \Pi_{SS}^{uu}(p^2), \\
\mathbf{G}_{\rho}&=&G_2, \quad \mathbf{\Pi}_{\rho}(p^2) = \Pi_{VV}^{uu}(p^2).\nonumber
\end{eqnarray}
For the pion and kaon, additional axial-vector components appear in the vertex function due
to pseudoscalar--axial-vector mixing
\begin{eqnarray}
V_{\pi}=g_{\pi} i \gamma_5 (1+\Delta_\pi \hat{p}) \pi,\,\quad
V_{K} =g_{K} i\gamma_5(1+\Delta_K \hat{p}) K
\end{eqnarray}
Here $\mathbf{G}$ and $\mathbf{\Pi}$ are
\begin{eqnarray}
\mathbf{G}_{\pi} =
\begin{pmatrix} G_1^{(+)}&0\\
0 &G_2\end{pmatrix}
, \mathbf{\Pi}_{\pi}(p^2) =
\begin{pmatrix}
\Pi^{uu}_{PP}(p^2) & \Pi^{uu}_{PA}(p^2)\\
\Pi^{uu}_{AP}(p^2) & \Pi^{uu}_{AA}(p^2)
\end{pmatrix},\\
\mathbf{G}_{K} =
\begin{pmatrix} G_4^{(+)}&0\\
0 &G_2\end{pmatrix}
, \mathbf{\Pi}_{K}(p^2) =
\begin{pmatrix}
\Pi^{us}_{PP}(p^2) & \Pi^{us}_{PA}(p^2)\\
\Pi^{us}_{AP}(p^2) & \Pi^{us}_{AA}(p^2)
\end{pmatrix}.\nonumber
\end{eqnarray}
Therefore, the vertex function of the $\eta$ meson has four components: strange and
non-strange pseudoscalar and axial-vector components:
\begin{eqnarray}
V_{\eta}&=&
g_{\eta_u} i \gamma_5 (1+\Delta_{\eta_u} \hat{p}) \eta_u
+g_{\eta_s} i \gamma_5 (1+\Delta_{\eta_s} \hat{p}) \eta_s =\\
&=& g_{\eta} i \gamma_5( \cos\Theta_\eta\eta_u-\sin\Theta_\eta\eta_s +\Delta_{\eta} \hat{p}( \cos\widetilde{\Theta}_\eta \eta_u
-\sin\widetilde{\Theta}_\eta\eta_s)),\nonumber
\end{eqnarray}
where $\Theta_\eta$ and $\widetilde{\Theta}_\eta$ are the mixing angles for pseudoscalar and
axial-vector components. The matrices $\mathbf{G}$ and $\mathbf{\Pi}(p^2)$ are four-by-four
matrices
\begin{eqnarray}
\mathbf{G} =
\begin{pmatrix} \mathbf{G}^{(+)}&0\\
0 &\mathbf{G_2}\end{pmatrix}
, \mathbf{G}^{(+)} =
\begin{pmatrix} G_u^{(+)}&G_{us}^{(+)}\\
G_{us}^{(+)}&G_s^{(+)}\end{pmatrix},\mathbf{G_2}=\mathrm{diag}\{G_2,G_2\}\\
\mathbf{\Pi}(p^2) =
\begin{pmatrix}
\mathbf{\Pi}_{PP}(p^2) & \mathbf{\Pi}_{PA}(p^2)\\
\mathbf{\Pi}_{AP}(p^2) & \mathbf{\Pi}_{AA}(p^2)
\end{pmatrix}
, \mathbf{\Pi}_{ij}(p^2) = \mathrm{diag}\{\Pi^{uu}_{ij}(p^2),\Pi^{ss}_{ij}(p^2)\},\quad
i,j=P,A\nonumber
\end{eqnarray}
\subsection{Amplitudes $\eta\to\gamma\gamma$, $\rho\to\eta(\pi)\gamma$}
The amplitude for the two-photon decay width of the pseudoscalar meson has the form
\begin{eqnarray}
A(P\to\gamma\gamma) \ = \ e^2\; g_{P\gamma\gamma}(M_P^2,q_1^2,q_2^2)\
\epsilon_{\mu\nu\alpha\beta} \ \epsilon_1^{\mu} \epsilon_2^{\nu} \;q_1^\alpha
q_2^\beta\ ,
\end{eqnarray}
where $q_1$, $q_2$ are the momenta of the photons and $\epsilon_1^{\mu}$, $\epsilon_2^{\nu}$ are
their polarization vectors, with
\begin{eqnarray}
g_{\pi\gamma\gamma}(M_\pi^2,q_1^2,q_2^2) & = & I_u(M_\pi^2,q_1^2,q_2^2)g_{\pi}, \\
g_{\eta\gamma\gamma}(M_\eta^2,q_1^2,q_2^2) & = &\frac{5}{3}
I_u(M_\eta^2,q_1^2,q_2^2)g_{\eta_u}-\frac{\sqrt 2}{3} I_s(M_\eta^2,q_1^2,q_2^2)g_{\eta_s}
.\nonumber
\end{eqnarray}
The loop integrals $I_j(M_P^2,q_1^2,q_2^2)$ are given by
\begin{eqnarray}
I_j(M_P^2,q_1^2,q_2^2) & = &
\frac{1}{2 \pi^2} \int \limits_0^1 dx_1 \int \limits_0^{1-x_1} dx_2
\frac{m_j}{m_j^2-x_1(1-x_1-x_2)q_1^2-x_2(1-x_1-x_2) q_2^2-x_1x_2 M_P^2}.
\end{eqnarray}
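As a rough numerical illustration (not part of the original analysis), the Feynman-parameter integral above can be evaluated directly by adaptive quadrature. In the following Python sketch the quark and meson masses are placeholder values rather than the fitted NJL parameters.
\begin{verbatim}
# Illustrative numerical evaluation of the loop integral
# I_j(M_P^2, q1^2, q2^2) defined above; masses are placeholders.
import numpy as np
from scipy.integrate import dblquad

def loop_integral(m_j, MP2, q12=0.0, q22=0.0):
    def integrand(x2, x1):
        denom = (m_j**2
                 - x1 * (1.0 - x1 - x2) * q12
                 - x2 * (1.0 - x1 - x2) * q22
                 - x1 * x2 * MP2)
        return m_j / denom
    # inner integral over x2 in [0, 1 - x1], outer over x1 in [0, 1]
    val, _ = dblquad(integrand, 0.0, 1.0, 0.0, lambda x1: 1.0 - x1)
    return val / (2.0 * np.pi**2)

# e.g. a light-quark loop with m_u ~ 0.28 GeV and two real photons
print(loop_integral(m_j=0.28, MP2=0.135**2))
\end{verbatim}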
The amplitudes for the processes $\rho(\omega)\to\eta(\pi)\gamma$ have the form
\begin{eqnarray}
A(P V \gamma) \ = \ g_\rho e\; g_{P\rho\gamma}(M_P^2,q_1^2,q_2^2)\
\epsilon_{\mu\nu\alpha\beta} \ \epsilon_1^{\mu} \epsilon_2^{\nu} \;q_1^\alpha q_2^\beta\ ,
\end{eqnarray}
where $q_1$ and $\epsilon_1^{\mu}$ are the momentum and the polarization vector of the
$\rho(\omega)$ meson. The corresponding couplings are
\begin{eqnarray}
g_{\pi \rho\gamma}(M_\pi^2,q_1^2,q_2^2) & = & I_u(M_\pi^2,q_1^2,q_2^2)g_{\pi},\quad
g_{\eta\rho\gamma}(M_\eta^2,q_1^2,q_2^2) = 3I_u(M_\eta^2,q_1^2,q_2^2)g_{\eta_u}, \nonumber\\
g_{\pi \omega\gamma}(M_\pi^2,q_1^2,q_2^2) & = &3I_u(M_\pi^2,q_1^2,q_2^2)g_{\pi} ,\quad
g_{\eta\omega\gamma}(M_\eta^2,q_1^2,q_2^2) = I_u(M_\eta^2,q_1^2,q_2^2)g_{\eta_u}.
\end{eqnarray}
\section*{Acknowledgments}
This work was partly supported by IdEx Université de Paris ANR-18-IDEX-0001, as well as the French National Research Agency (ANR) projects QuBIC and QuDATA.
\section{Introduction}
It is well known that many interesting and relevant optimization problems in the domain of Machine Learning can be expressed in the framework of convex optimization \cite{boyd2004convex,bubeck2015convex}. The landmark result in this area was the discovery of interior-point methods (IPM) by \cite{karmarkar1984new}, and their subsequent generalization to all ``self-scaled'' (i.e. symmetric) cones by \cite{nesterov1997self, nesterov1998primal}. Very recently, \cite{cohen2019solving} have shown that it is possible to solve linear programs (LP) in $\widetilde{O}(n^\omega)$, the time it takes to multiply two matrices (as long as $\omega \geq 2+1/6$, which is currently the case). This result has been further extended in \cite{lee2019solving} to a slightly more general class of cones, however, their techniques did not yield improved complexities for second-order (SOCP) and semidefinite programming (SDP). \change{An efficient algorithm for SOCP would also yield efficient algorithms for many interesting problems, such as (standard and quadratically-constrained) convex quadratic programming, portfolio optimization, and many others \cite{alizadeh2003second}.}
Starting with the landmark results of \cite{grover1996fast, shor1999polynomial}, and, more recently, \cite{harrow2009quantum}, it has been demonstrated that quantum computers offer significant (sometimes even exponential) asymptotic speedups for a number of important problems. More recently, there has been substantial work in the area of convex optimization. Quantum speedups for gradient descent were investigated by \cite{gilyen2019optimizing}, whereas \cite{brandao2017quantum, brandao2019quantum, van2017quantum, van2019improvements} presented quantum algorithms for SDP based on the multiplicative weights framework of \cite{arora2012multiplicative}. However, it has been difficult to establish asymptotic speedups for this family of quantum SDP solvers as their running time depends on problem-specific parameters, including a 5th-power dependence on the width of the SDP. \change{Interestingly, the recent result of \cite{brandao2019faster} suggests that such a speedup might be obtained when applying an SDP algorithm of this type to some low-precision instances of quadratic binary optimization.}
In an orthogonal approach, \cite{kerenidis2018quantum} proposed a quantum algorithm for LPs and SDPs by quantizing a variant of the classical interior point method and using state-of-the-art quantum linear algebra tools \cite{chakraborty2019power, gilyen2019quantum} -- in particular, the matrix multiplication and inversion algorithms whose running time is sub-linear in the input size. However, the complexity of this algorithm depends on the condition number of $O(n^2)$-sized matrices, which is difficult to bound theoretically. It therefore remains an open question to find end-to-end optimization problems for which quantum SDP solvers achieve an asymptotic speedup over state of the art classical algorithms.
In this paper, we propose support vector machines (SVM) as a candidate for such an end-to-end quantum speedup using a quantum interior point method based algorithm.
\subsection{Our results and techniques}
In this section, we provide a high-level sketch of our results and the techniques used for the quantum interior point method for SOCPs. We begin by discussing the differences between classical and quantum interior point methods.
A classical interior point method solves an optimization problem over symmetric cones by starting with a feasible solution and iteratively finding solutions with a smaller duality gap while maintaining feasibility. A single iterative step consists of solving a system of linear equations called the Newton linear system and updating the current iterate using the solutions of the Newton system. The analysis of the classical IPM shows that in each iteration, the updated solutions remain feasible and the duality gap is decreased by a factor of $(1- \alpha/\sqrt{n})$ where $n$ is the dimension of the optimization problem and $\alpha > 0$ is a constant. The algorithm therefore converges to a feasible solution with duality gap $\epsilon$ in $O(\sqrt{n} \log (1/\epsilon))$ iterations.
A quantum interior point method \cite{kerenidis2018quantum} uses a quantum linear system solver instead of a classical one in each iteration of the IPM. However, there is an important difference between classical and quantum linear algebra procedures for solving the linear system $A\vec{x}=\vec{b}$.
Unlike classical linear system solvers which return an exact description of $\vec{x}$, quantum tomography procedures can return an $\epsilon$-accurate solution $\widetilde{\vec{x}}$ such that $\norm{ \measured{\vec{x}} - \vec{x}} \leq \epsilon \norm{\vec{x}}$ with \change{$O(n/\epsilon^{2})$} runs of the quantum linear system solver. \change{Additionally, these linear system solvers require $A$ and $\vec{b}$ to be given as block-encodings~\cite{chakraborty2019power}, so this input model is used by our algorithm as well.} One of the main technical challenges in developing a quantum interior point method (QIPM) is to establish convergence of the classical IPM which uses $\epsilon$-approximate solutions of the Newton linear system (in the $\ell_{2}$ norm) instead of the exact solutions in the classical IPM.
The quantum interior point method for second-order cone programs requires additional ideas going beyond those used for the quantum interior point methods for SDP \cite{kerenidis2018quantum}. Second-order cone programs are optimization problems over the product of second-order (Lorentz) cones (see Section~\ref{sec:socp} for definitions). Interior point methods for SOCP can be described using the Euclidean Jordan algebra framework \cite{monteiro2000polynomial}, which provides analogs of concepts like eigenvalues, spectral and Frobenius norms of matrices, and positive semidefinite constraints for the case of SOCPs.
Using these conceptual ideas from the Euclidean Jordan algebra framework \cite{monteiro2000polynomial} and the analysis of the approximate
SDP interior point method \cite{kerenidis2018quantum}, we provide an approximate IPM for SOCP that converges in $O(\sqrt{n} \log (1/\epsilon))$ iterations. Approximate IPMs for SOCP have not been previously investigated in the classical or the quantum optimization literature; this analysis is one of the main technical contributions of this paper.
From an algorithmic perspective, SOCPs are much closer to LPs (Linear Programs) than to SDPs, since for cones of dimension $n$, the Newton linear systems arising in LP and SOCP IPMs are of size $O(n)$, whereas in the SDP case they are of size $O(n^2)$. \change{Namely, a second-order conic constraint of dimension $n$ can be expressed as a single PSD constraint on a (sparse) $n \times n$ matrix \cite{alizadeh2003second} -- this allows us to embed an SOCP with $n$ variables and $m$ constraints in an SDP with an $n\times n$ matrix and $m$ constraints. The cost of solving that SDP would have a worse dependence on the error \cite{van2019improvements} or the input size \cite{kerenidis2018quantum}. On the other hand,} the quantum representations (block encodings) of the Newton linear systems for SOCP are also much simpler to construct than those for SDP. The smaller size of the SOCP linear system also makes it feasible to empirically estimate the condition number for these linear systems in a reasonable amount of time allowing us to carry out extensive numerical experiments to validate the running time of the quantum algorithm.
The theoretical analysis of the quantum algorithm for SOCP shows that its worst-case running time is
\begin{equation}\label{eq:complexity}
\widetilde{O} \left( n\sqrt{r} \frac{\zeta \kappa}{\delta^2} \log \left(\frac1\epsilon\right) \right),
\end{equation}
where $r$ is the rank and $n$ the dimension of the SOCP, $\delta$ bounds the distance of intermediate solutions from the cone boundary, $\zeta$ is a parameter bounded by $\sqrt{n}$, $\kappa$ is an upper bound on the condition number of matrices arising in the interior-point method for SOCPs, and $\epsilon$ is the target duality gap. The running time of the algorithm depends on problem dependent parameters like $\kappa$ and $\delta$ that are difficult to bound in terms of the problem dimension $n$ -- this is also the case with previous quantum SDP solvers \cite{kerenidis2018quantum, van2019improvements} and makes it important to validate the quantum optimization speedups empirically. \change{Interestingly, since we require a classical solution of the Newton system, the linear system solver could also be replaced by a classical iterative solver \cite{saad2003iterative} which would yield a complexity of $O(n^2 \sqrt{r} \kappa \log(n/\epsilon))$.}
Let us make a remark about the complexity: as is the case with all approximation algorithms, \eqref{eq:complexity} depends on the inverse of the target duality gap $\epsilon$, and thus the running time of our algorithm grows to infinity as $\epsilon$ approaches zero, as in the case of the classical IPM. Our running time also depends on $\kappa$, which in turn is empirically observed to grow inversely with the duality gap (in particular as $O(1/\epsilon)$), which again makes the running time go to infinity as $\epsilon$ approaches zero. The quantum IPM is a low precision method, unlike the classical IPM, and it can offer speedups for settings where the desired precision $\epsilon$ is moderate or low. \change{Thus although at first glance it seems that the $\epsilon$-dependence in \eqref{eq:complexity} is logarithmic, experimental evidence suggests that the factor $\frac{\zeta \kappa}{\delta^2}$ depends polynomially on $1/\epsilon$.}
Support Vector Machines (SVM) is an important application in machine learning, where even a modest value of $\epsilon=0.1$ yields an almost optimal classifier. Since the SVM training problem can be reduced to SOCP, the quantum IPM for SOCP can be used to obtain an efficient quantum SVM algorithm. We perform extensive numerical experiments to evaluate our algorithm on random SVM instances and compare it against state of the art classical SOCP and SVM solvers.
The numerical experiments on random SVM instances indicate that the running time of the quantum algorithm scales as roughly $O(n^{2.591})$, where all the parameters in the running time are taken into account and the exponent is estimated using a least squares fit. We also benchmarked the exponent for classical SVM algorithms on the same instances and for a comparable accuracy, the scaling exponent was found to be $3.31$ for general SOCP solvers and $3.11$ for state-of-the-art SVM solvers. We note that this does not prove a worst-case asymptotic speedup, but the experiments on unstructured SVM instances provide strong evidence for a significant polynomial speedup of the quantum SVM algorithm over state-of-the-art classical SVM algorithms. We can therefore view SVMs as a candidate problem for which quantum optimization algorithms can achieve a polynomial speedup over state of the art classical algorithms for an end-to-end application.
\subsection{Related work}
Our main result is the first specialized quantum algorithm for training support-vector machines (SVM). While several quantum SVM algorithms have been proposed, they are unlikely to offer general speedups for the most widely used formulation of SVM -- the soft-margin ($\ell_1$-)SVM (defined in eq. \eqref{prob:SVM}). On one hand, papers such as \cite{arodz2019quantum} formulate the SVM as a SDP, and solve that using existing quantum SDP solvers such as \cite{brandao2017quantum,brandao2019quantum,van2017quantum,van2019improvements} -- with the conclusion being that a speedup is observed only for very specific sparse instances. On the other hand, \cite{rebentrost2014quantum} solves an easier related problem -- the least-squares SVM ($\ell_2$-SVM or LS-SVM, see eq. \eqref{prob:LS-SVM} for its formulation), thus losing the desirable sparsity properties of $\ell_1$-SVM \cite{suykens2002weighted}. It turns out that applying our algorithm to this problem also yields the same complexity as in \cite{rebentrost2014quantum} for the $\ell_2$-SVM. Very recently, a quantum algorithm for yet another variant of SVM (SVM-perf, \cite{joachims2006training}) has been presented in \cite{allcock2020quantum}.
\section{Preliminaries}
\subsection{Second-order cone programming} \label{sec:socp}
\change{For the sake of completeness, in this section we present the most important results about classical SOCP IPMs, from \cite{monteiro2000polynomial,alizadeh2003second}. We start by defining SOCP as the} optimization problem over the product of second-order (or Lorentz) cones $\mathcal{L} = \mathcal{L}^{n_1} \times \cdots \times \mathcal{L}^{n_r}$, where $\mathcal{L}^k \subseteq \mathbb{R}^k$ is defined as $\mathcal{L}^k = \left\lbrace\left. \vec{x} = (x_0; \vecrest{x}) \in \mathbb{R}^{k} \;\right\rvert\; \norm{\vecrest{x}} \leq x_0 \right\rbrace$. In this paper we consider the problem \eqref{prob:SOCP primal} and its dual \eqref{prob:SOCP dual}:
\begin{center}
\begin{tabular}{rl}
\begin{minipage}{0.45\linewidth}
\begin{equation}
\begin{array}{ll}
\min & \vec{c}^\top \vec{x}\\
\text{s.t.}& A \vec{x} = \vec{b} \\
& \vec{x} \in \mathcal{L},
\end{array} \label{prob:SOCP primal}
\end{equation}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\begin{equation}
\begin{array}{ll}
\max & \vec{b}^\top \vec{y}\\
\text{s.t.}& A^\top \vec{y} + \vec{s} = \vec{c}\\
& \vec{s} \in \mathcal{L},\; \vec{y} \in \mathbb{R}^m.
\end{array} \label{prob:SOCP dual}
\end{equation}
\end{minipage}
\end{tabular}
\end{center}
We call $n:= \sum_{i=1}^r n_i$ the \emph{size} of the SOCP \eqref{prob:SOCP primal}, and $r$ is its \emph{rank}.
A solution $(\vec{x}, \vec{y}, \vec{s})$ satisfying the constraints of both \eqref{prob:SOCP primal} and \eqref{prob:SOCP dual} is \emph{feasible}, and if in addition it satisfies $\vec{x} \in \operatorname{int} \mathcal{L}$ and $\vec{s} \in \operatorname{int} \mathcal{L}$, it is \emph{strictly feasible}. If at least one constraint of \eqref{prob:SOCP primal} or \eqref{prob:SOCP dual} is violated, the solution is \emph{infeasible}. The duality gap of a feasible solution $(\vec{x}, \vec{y}, \vec{s})$ is defined as $\mu := \mu(\vec{x}, \vec{s}) := \frac1r \vec{x}^\top \vec{s}$. As opposed to LP, and similarly to SDP, strict feasibility is required for \emph{strong duality} to hold \cite{alizadeh2003second}. From now on, we assume that our SOCP has a strictly feasible primal-dual solution (this assumption is valid, since the homogeneous self-dual embedding technique from \cite{ye1994hsd} allows us to embed \eqref{prob:SOCP primal} and \eqref{prob:SOCP dual} in a slightly larger SOCP where this condition is satisfied).
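For concreteness, the following small Python sketch (illustrative only; the problem data $A, \vec{b}, \vec{c}$ and the block sizes $n_1, \dots, n_r$ are assumed to be given as numpy arrays) checks feasibility and strict feasibility of a candidate point and computes its duality gap $\mu$.
\begin{verbatim}
# Illustrative feasibility / duality-gap check for a candidate SOCP point
# (x, y, s); "blocks" lists the Lorentz-cone dimensions n_1, ..., n_r.
import numpy as np

def in_interior(v, blocks):
    """True if every block (v0; vbar) satisfies ||vbar|| < v0."""
    start, ok = 0, True
    for ni in blocks:
        blk = v[start:start + ni]
        ok = ok and (np.linalg.norm(blk[1:]) < blk[0])
        start += ni
    return ok

def check_point(A, b, c, x, y, s, blocks, tol=1e-8):
    primal_feasible = np.linalg.norm(A @ x - b) <= tol
    dual_feasible   = np.linalg.norm(A.T @ y + s - c) <= tol
    strictly_feasible = in_interior(x, blocks) and in_interior(s, blocks)
    mu = x @ s / len(blocks)          # duality gap (1/r) x^T s
    return primal_feasible, dual_feasible, strictly_feasible, mu
\end{verbatim}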
\subsection{Euclidean Jordan algebras}
The cone $\mathcal{L}^n$ has an algebraic structure similar to that of symmetric matrices under the matrix product. Here, we consider the Jordan product of $(x_0, \vecrest{x}) \in \mathbb{R}^{n}$ and $(y_0, \vecrest{y}) \in \mathbb{R}^{n}$, defined as
\begin{equation*}
\vec{x} \circ \vec{y} := \begin{bmatrix}
\vec{x}^T \vec{y} \\
x_0\vecrest{y} + y_0\vecrest{x}
\end{bmatrix}, \text{ and its identity element }\vec{e} := \begin{bmatrix}
1 \\
0^{n-1}
\end{bmatrix}.
\end{equation*}
This product is closely related to the (linear) matrix representation $\operatorname{Arw}(\vec{x}) := \begin{bmatrix}
x_0 & \vecrest{x}^T \\
\vecrest{x} & x_0 I_{n-1}
\end{bmatrix}$, which in turn satisfies the following equality:
\begin{equation*}
\vec{x} \circ \vec{y} = \operatorname{Arw}(\vec{x}) \vec{y} = \operatorname{Arw}(\vec{x}) \operatorname{Arw}(\vec{y}) \vec{e}.
\end{equation*}
The key observation is that this product induces a spectral decomposition of any vector $\vec{x}$, that has similar properties as its matrix relative. Namely, for any vector $\vec{x}$ we define
\begin{align}\label{eq:jordan eigenvalues and eigenvectors}
\lambda_1(\vec{x}) &:= x_0 + \norm{\vecrest{x}},\; \vec{c}_1(\vec{x}) := \frac12 \begin{bmatrix}
1 \\
\frac{\vecrest{x}}{\norm{\vecrest{x}}}
\end{bmatrix}, \nonumber \\
\lambda_2(\vec{x}) &:= x_0 - \norm{\vecrest{x}},\; \vec{c}_2(\vec{x}) := \frac12 \begin{bmatrix}
1 \\
\frac{-\vecrest{x}}{\norm{\vecrest{x}}}
\end{bmatrix}.
\end{align}
\change{We use the shorthands $\lambda_1 := \lambda_1(\vec{x})$, $\lambda_2 := \lambda_2(\vec{x})$, $\vec{c}_1 := \vec{c}_1(\vec{x})$ and $\vec{c}_2 := \vec{c}_2(\vec{x})$ whenever $\vec{x}$ is clear from the context,} so we observe that $\vec{x} = \lambda_1 \vec{c}_1 + \lambda_2 \vec{c}_2$. The set of eigenvectors $\{\vec{c}_1, \vec{c}_2 \}$ is called the \emph{Jordan frame} of $\vec{x}$, and satisfies several properties:
\begin{proposition}[Properties of Jordan frames]
Let $\vec{x} \in \mathbb{R}^{n}$ and let $\{ \vec{c}_1, \vec{c}_2 \}$ be its Jordan frame. Then, the following holds:
\begin{enumerate}
\item $\vec{c}_1 \circ \vec{c}_2 = 0$ (the eigenvectors are ``orthogonal'')
\item $\vec{c}_1\circ \vec{c}_1 = \vec{c}_1$ and $\vec{c}_2 \circ \vec{c}_2 = \vec{c}_2$
\item $\vec{c}_1$, $\vec{c}_2$ are of the form $\left( \frac12; \pm \vecrest{c} \right)$ with $\norm{\vecrest{c}} = \frac12$
\end{enumerate}
\end{proposition}
On the other hand, just like a given matrix is positive (semi)definite if and only if all of its eigenvalues are positive (nonnegative), a similar result holds for $\mathcal{L}^n$ and $\operatorname{int} \mathcal{L}^n$ (the Lorentz cone and its interior):
\begin{proposition}
Let $\vec{x} \in \mathbb{R}^{n}$ have eigenvalues $\lambda_1, \lambda_2$. Then, the following holds:
\begin{enumerate}
\item $\vec{x} \in \mathcal{L}^n$ if and only if $\lambda_1\geq 0$ and $\lambda_2 \geq 0$.
\item $\vec{x} \in \operatorname{int} \mathcal{L}^n$ if and only if $\lambda_1 > 0$ and $\lambda_2 > 0$.
\end{enumerate}
\end{proposition}
Now, using this decomposition, we can define arbitrary real powers $\vec{x}^p$ for $p \in \mathbb{R}$ as $\vec{x}^p := \lambda_1^p\vec{c}_1 + \lambda_2^p \vec{c}_2$, and in particular the ``inverse'' and the ``square root''
\begin{align*}
\vec{x}^{-1} &= \frac{1}{\lambda_1} \vec{c}_1 + \frac{1}{\lambda_2} \vec{c}_2, \text{ if } \lambda_1\lambda_2 \neq 0, \\
\vec{x}^{1/2} &= \sqrt{\lambda_1} \vec{c}_1 + \sqrt{\lambda_2} \vec{c}_2, \text{ if } \vec{x} \in \mathcal{L}^n.
\end{align*}
Moreover, we can also define some operator norms, namely the Frobenius and the spectral one:
\begin{align*}
\norm{\vec{x}}_F &= \sqrt{\lambda_1^2 + \lambda_2^2} = \sqrt{2}\norm{\vec{x}}, \\
\norm{\vec{x}}_2 &= \max\{ |\lambda_1|, |\lambda_2| \} = |x_0| + \norm{\vecrest{x}}.
\end{align*}
Finally, we define an analogue to the operation $Y \mapsto XYX$. It turns out that for this we need another matrix representation (\emph{quadratic representation}) $Q_{\vec{x}}$, defined as
\begin{align} \label{qrep}
Q_{\vec{x}} &:= 2\operatorname{Arw}^2(\vec{x}) - \operatorname{Arw}(\vec{x}^2)
= \begin{bmatrix}
\norm{\vec{x}}^2 & 2x_0\vecrest{x}^T \\
2x_0\vecrest{x} & \lambda_1\lambda_2 I_{n-1} + 2\vecrest{x}\vecrest{x}^T
\end{bmatrix}.
\end{align}
Now, the matrix-vector product $Q_{\vec{x}}\vec{y}$ will behave as the quantity $XYX$. To simplify the notation, we also define the matrix $T_{\vec{x}} := Q_{\vec{x}^{1/2}}$.
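For concreteness, the following short numpy sketch (illustrative, not part of the construction above) builds these rank-one objects for a hand-picked $\vec{x} \in \operatorname{int}\mathcal{L}^4$ and verifies the spectral decomposition, the ``orthogonality'' of the Jordan frame, and the identity $Q_{\vec{x}}\vec{e} = \vec{x} \circ \vec{x}$.
\begin{verbatim}
# Illustrative sketch of the rank-1 Jordan-algebra objects defined above:
# spectral decomposition, Arw(x), Jordan product, quadratic representation.
import numpy as np

def arw(x):
    M = x[0] * np.eye(len(x)); M[0, 1:] = x[1:]; M[1:, 0] = x[1:]
    return M

def spectral(x):
    x0, xb = x[0], x[1:]; nrm = np.linalg.norm(xb)
    c1 = 0.5 * np.concatenate(([1.0],  xb / nrm))
    c2 = 0.5 * np.concatenate(([1.0], -xb / nrm))
    return x0 + nrm, x0 - nrm, c1, c2

def jordan_prod(x, y):
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def quad_rep(x):
    return 2 * arw(x) @ arw(x) - arw(jordan_prod(x, x))

x = np.array([2.0, 0.5, -0.3, 1.0])        # ||xbar|| < x0, so x is interior
lam1, lam2, c1, c2 = spectral(x)
e = np.concatenate(([1.0], np.zeros(3)))

assert np.allclose(lam1 * c1 + lam2 * c2, x)            # x = l1 c1 + l2 c2
assert np.allclose(jordan_prod(c1, c2), np.zeros(4))    # "orthogonal" frame
assert np.allclose(quad_rep(x) @ e, jordan_prod(x, x))  # Q_x e = x o x
\end{verbatim}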
The definitions that we introduced so far are suitable for dealing with a single constraint $\vec{x} \in \mathcal{L}^n$. For dealing with multiple constraints $\vec{x}_1 \in \mathcal{L}^{n_1}, \dots, \vec{x}_r \in \mathcal{L}^{n_r}$, we need to deal with block-vectors $\vec{x} = (\vec{x}_1;\vec{x}_2;\dots;\vec{x}_r)$ and $\vec{y} = (\vec{y}_1;\vec{y}_2;\dots;\vec{y}_r)$. We call the number of blocks $r$ the \emph{rank} of the vector (thus, up to now, we were only considering rank-1 vectors). Now, we extend all our definitions to rank-$r$ vectors.
\begin{enumerate}
\item $\vec{x} \circ \vec{y} := (\vec{x}_1 \circ \vec{y}_1; \dots; \vec{x}_r \circ \vec{y}_r)$
\item The matrix representations $\operatorname{Arw}(\vec{x})$ and $Q_{\vec{x}}$ are the block-diagonal matrices containing the representations of the blocks:
\begin{align*}
\operatorname{Arw}(\vec{x}) &:= \operatorname{Arw}(\vec{x}_1) \oplus \cdots \oplus \operatorname{Arw}(\vec{x}_r) \text{ and }
Q_{\vec{x}} := Q_{\vec{x}_1} \oplus \cdots \oplus Q_{\vec{x}_r}
\end{align*}
\item $\vec{x}$ has $2r$ eigenvalues (with multiplicities) -- the union of the eigenvalues of the blocks $\vec{x}_i$. The eigenvectors of $\vec{x}$ corresponding to block $i$ contain the eigenvectors of $\vec{x}_i$ as block $i$, and are zero everywhere else.
\item The identity element is $\vec{e} = (\vec{e}_1; \dots ;\vec{e}_r)$, where $\vec{e}_i$'s are the identity elements for the corresponding blocks.
\end{enumerate}
Thus, all things defined using eigenvalues can also be defined for rank-$r$ vectors:
\begin{enumerate}
\item The norms are extended as $\norm{\vec{x}}^2_F := \sum_{i=1}^r \norm{\vec{x}_i}_F^2$ and $\norm{\vec{x}}_2 := \max_i \norm{\vec{x}_i}_2$, and
\item Powers are computed blockwise as $\vec{x}^p := (\vec{x}_1^p;\dots;\vec{x}_r^p)$ whenever the corresponding blocks are defined.
\end{enumerate}
Some further matrix-algebra inspired properties of block vectors are stated in the following two claims:
\begin{claim}[Algebraic properties]\label{claim:algebraic properties}
Let $\vec{x}, \vec{y}$ be two arbitrary block-vectors. Then, the following holds:
\begin{enumerate}
\item The spectral norm is subadditive: $\norm{\vec{x} + \vec{y}}_2 \leq \norm{\vec{x}}_2 + \norm{\vec{y}}_2$.
\item The spectral norm is less than the Frobenius norm: $\norm{\vec{x}}_2 \leq \norm{\vec{x}}_F$.
\item If $A$ is a matrix with minimum and maximum singular values $\sigma_{\text{min}}$ and $\sigma_{\text{max}}$ respectively, then the norm $\norm{A\vec{x}}$ is bounded as $\sigma_{\text{min}} \norm{\vec{x}} \leq \norm{A\vec{x}} \leq \sigma_{\text{max}} \norm{\vec{x}}$.
\item The minimum eigenvalue of $\vec{x} + \vec{y}$ is bounded as $\lambda_\text{min}(\vec{x} + \vec{y}) \geq \lambda_\text{min}(\vec{x}) - \norm{\vec{y}}_2$.
\item The following submultiplicativity property holds: $\norm{\vec{x} \circ \vec{y}}_F \leq \norm{\vec{x}}_2 \cdot \norm{\vec{y}}_F$.
\end{enumerate}
\end{claim}
\change{In general, the proofs of these statements are analogous to the matrix case, with a few notable differences:} First, the vector spectral norm $\norm{\cdot}_2$ is not actually a norm, since there exist nonzero vectors outside $\mathcal{L}$ which have zero norm. It is, however, still bounded by the Frobenius norm (just like in the matrix case), which is in fact a proper norm. Secondly, the minimum eigenvalue bound also holds for matrix spectral norms, with the exact same statement. Finally, the last property is reminiscent of the matrix submultiplicativity property $\norm{A \cdot B}_F \leq \norm{A}_2 \norm{B}_F$.
We finish with several well-known properties of the quadratic representation $Q_{\vec{x}}$ and $T_{\vec{x}}$.
\begin{proposition}[Properties of $Q_{\vec{x}}$, from \cite{alizadeh2003second}]\label{claim:Properties of Qx}
Let $\vec{x} \in \operatorname{int} \mathcal{L}$. Then, the following holds:
\begin{enumerate}
\item $Q_{\vec{x}} \vec{e} = \vec{x}^2$, and thus $T_{\vec{x}} \vec{e} = \vec{x}$.
\item $Q_{\vec{x}^{-1}} = Q_{\vec{x}}^{-1}$, and more generally $Q_{\vec{x}^p} = Q_{\vec{x}}^p$ for all $p \in \mathbb{R}$.
\item $\norm{Q_{\vec{x}}}_2 = \norm{\vec{x}}_2^2$, and thus $\norm{T_{\vec{x}}}_2 = \norm{\vec{x}}_2$.
\item $Q_{\vec{x}}$ preserves $\mathcal{L}$, i.e. $Q_{\vec{x}}(\mathcal{L}) = \mathcal{L}$ and $Q_{\vec{x}}(\operatorname{int} \mathcal{L}) = \operatorname{int} \mathcal{L}$.
\end{enumerate}
\end{proposition}
\subsection{Interior-point methods}
Our algorithm follows the general IPM structure, i.e. it uses Newton's method to solve a sequence of increasingly strict relaxations of the Karush-Kuhn-Tucker (KKT) optimality conditions:
\begin{align}
A\vec{x} = \vec{b},\; A^\top \vec{y} + \vec{s} = \vec{c} \label{eq:central path} \\
\vec{x} \circ \vec{s} = \nu \vec{e},\;
\vec{x} \in \mathcal{L}, \vec{s} \in \mathcal{L}, \nonumber
\end{align}
where the parameter $\nu>0$ is decreased by a factor $\sigma<1$ in each iteration. Since $\vec{x} \circ \vec{s} = \nu \vec{e}$ implies that the duality gap is $\mu = \nu$, by letting $\nu \to 0$ the IPM converges towards the optimal solution. The curve traced by (feasible) solutions $(\vec{x}, \vec{y}, \vec{s})$ of \eqref{eq:central path} for $\nu > 0$ is called the \emph{central path}, and we note that all points on it are strictly feasible.
More precisely, in each iteration we need to find $\Delta\vec{x}, \Delta\vec{y}, \Delta\vec{s}$ such that $\vec{x}_\text{next} := \vec{x} + \Delta\vec{x}$, $\vec{y}_\text{next} := \vec{y} + \Delta\vec{y}$ and $\vec{s}_\text{next} := \vec{s} + \Delta\vec{s}$ satisfy \eqref{eq:central path} for $\nu = \sigma \mu$. After linearizing the product $\vec{x}_\text{next}\circ \vec{s}_\text{next}$, we obtain the following linear system -- the \emph{Newton system}:
\begin{align}
\begingroup
\begin{bmatrix}
A & 0 & 0 \\
0 & A^\top & I \\
\operatorname{Arw}(\vec{s}) & 0 & \operatorname{Arw}(\vec{x})
\end{bmatrix}
\begin{bmatrix}
\Delta\vec{x} \\
\Delta\vec{y} \\
\Delta\vec{s}
\end{bmatrix} =
\begin{bmatrix}
\vec{b} - A \vec{x} \\
\vec{c} - \vec{s} - A^\top \vec{y} \\
\sigma \mu \vec{e} - \vec{x} \circ \vec{s}
\end{bmatrix}.
\endgroup
\label{eq:Newton system}
\end{align}
As a final remark, it is not guaranteed that $(\vec{x}_\text{next}, \vec{y}_\text{next}, \vec{s}_\text{next})$ is on the central path \eqref{eq:central path}, or even that it is still strictly feasible. Luckily, it can be shown that as long as $(\vec{x}, \vec{y}, \vec{s})$ starts out in a neighborhood $\mathcal{N}$ of the central path, $(\vec{x}_\text{next}, \vec{y}_\text{next}, \vec{s}_\text{next})$ will remain both strictly feasible and in $\mathcal{N}$.
It can be shown that this algorithm halves the duality gap every $O(\sqrt{r})$ iterations, so indeed, after $O(\sqrt{r} \log(\mu_0 / \epsilon))$ iterations it will converge to a (feasible) solution with duality gap at most $\epsilon$ (given that the initial duality gap was $\mu_0$).
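For intuition, the following numpy sketch (an illustrative classical stand-in with random placeholder data for a single Lorentz block; it is not the quantum solver described below) assembles and solves one such Newton step directly.
\begin{verbatim}
# Schematic classical solve of one Newton step for a single Lorentz block;
# the data and the strictly feasible starting point are random placeholders.
import numpy as np

def arw(v):
    M = v[0] * np.eye(len(v)); M[0, 1:] = v[1:]; M[1:, 0] = v[1:]
    return M

n, m, r = 6, 3, 1
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
tail = rng.standard_normal(n - 1)
x = np.concatenate(([1.0 + np.linalg.norm(tail)], tail))   # x in int L^n
tail = rng.standard_normal(n - 1)
s = np.concatenate(([1.0 + np.linalg.norm(tail)], tail))   # s in int L^n
y = rng.standard_normal(m)
b, c = A @ x, A.T @ y + s            # data chosen so (x, y, s) is feasible
mu = x @ s / r
sigma = 1.0 - 0.01 / np.sqrt(r)      # target gap reduction, close to 1
e = np.concatenate(([1.0], np.zeros(n - 1)))

# [ A       0    0      ] [dx]   [ b - A x            ]
# [ 0       A^T  I      ] [dy] = [ c - s - A^T y      ]
# [ Arw(s)  0    Arw(x) ] [ds]   [ sigma*mu*e - x o s ]
K = np.block([
    [A,                np.zeros((m, m)), np.zeros((m, n))],
    [np.zeros((n, n)), A.T,              np.eye(n)],
    [arw(s),           np.zeros((n, m)), arw(x)],
])
rhs = np.concatenate([b - A @ x,
                      c - s - A.T @ y,
                      sigma * mu * e - arw(x) @ s])
# nonsingular for generic strictly feasible data
dx, dy, ds = np.split(np.linalg.solve(K, rhs), [n, n + m])
\end{verbatim}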
\subsection{Quantum linear algebra}
As touched upon earlier, the main speedup in our algorithms comes from the fact that we use the quantum linear algebra algorithms from \cite{chakraborty2019power, gilyen2019quantum} whose complexity is sublinear in the dimension. Of course, we need to change our computational model for this sentence to make any sense: namely, we encode $n$-dimensional unit vectors as quantum states of a $\lceil \log_2(n) \rceil$-qubit system. In other words, for a vector $\vec{z} \in \mathbb{R}^{2^k}$ with $\norm{\vec{z}} = 1$, we use the notation $\ket{\vec{z}}$ to refer to the $k$-qubit state $\ket{\vec{z}} := \sum_{i=0}^{2^k - 1} z_i \ket{i}$, where $\ket{i}$ are the standard basis vectors of $\mathbb{C}^{2^k}$, the state space of a $k$-qubit system \cite{nielsen2010quantum}.
Given a quantum state $\ket{\vec{z}}$, we have very limited ways of interacting with it: we can either apply a unitary transformation $U: \mathbb{C}^{2^k} \to \mathbb{C}^{2^k}$, or we can \emph{measure} it, which means that we discard the state and obtain a single random integer $0 \leq i \leq 2^k - 1$, with the probability of measuring $i$ being $z_i^2$. In particular this means that we can neither observe the \emph{amplitudes} $z_i$ directly, nor can we create a copy of $\ket{\vec{z}}$ for an arbitrary $\ket{\vec{z}}$. In addition to this, it is \emph{a priori} not clear how (and whether it is even possible) to implement the state $\ket{\vec{z}}$ or an arbitrary unitary $U$ using the gates of a quantum computer. Luckily, there exists a quantum-classical framework using \emph{QRAM} data structures described in \cite{kerenidis2016quantum} that provides a positive answer to both of these questions.
The QRAM can be thought of as the quantum analogue to RAM, i.e. an array $[\vec{b}^{(1)}, \dots, \vec{b}^{(m)}]$ of $w$-bit bitstrings, whose elements we can access in poly-logarithmic time given their address (position in the array). More precisely, QRAM is just an efficient implementation of the unitary transformation
\begin{equation*}
\ket{i}\ket{0}^{\otimes w} \mapsto \ket{i} \ket{b^{(i)}_1\dots b^{(i)}_w}, \text{ for } i \in [m].
\end{equation*}
The usefulness of QRAM data structures becomes clear when we consider the \emph{block encoding} framework:
\begin{definition}
Let $A \in \mathbb{R}^{n\times n}$ be a symmetric matrix. Then, the $\ell$-qubit unitary matrix $U \in \mathbb{C}^{2^\ell \times 2^\ell}$ is a $(\zeta, \ell)$ block encoding of $A$ if $U = \begin{bmatrix}
A / \zeta & \cdot \\
\cdot & \cdot
\end{bmatrix}$. For an arbitrary matrix $B \in \mathbb{R}^{n \times m}$, a block encoding of $B$ is any block encoding of its symmetrized version $\operatorname{sym}(B) := \begin{bmatrix}0 & B\\ B^\top & 0\end{bmatrix}$.
\end{definition}
We want $U$ to be implemented efficiently, i.e. using an $\ell$-qubit quantum circuit of depth (poly-)logarithmic in $n$. Such a circuit would allow us to efficiently create states $\ket{A_i}$ corresponding to rows (or columns) of $A$. Moreover, we need to be able to construct such a data structure efficiently from the classical description of $A$. It turns out that we are able to fulfill both of these requirements using a data structure built on top of QRAM.
\begin{theorem}[Block encodings using QRAM \cite{kerenidis2016quantum, kerenidis2017quantum}] \label{qbe}
There exist QRAM data structures for storing vectors $\vec{v}_i \in \mathbb{R}^n$, $i \in [m]$ and matrices $A \in \mathbb{R}^{n\times n}$ such that with access to these data structures one can do the following:
\begin{enumerate}
\item Given $i\in [m]$, prepare the state $\ket{\vec{v}_i}$ in time $\widetilde{O}(1)$. In other words, the unitary $\ket{i}\ket{0} \mapsto \ket{i}\ket{\vec{v}_i}$ can be implemented efficiently.
\item A $(\zeta(A), 2 \log n)$ unitary block encoding for $A$ with $\zeta(A) = \norm{A}_{2}^{-1}\min( \norm{A}_{F}, s_{1}(A))$, where $s_1(A) = \max_i \sum_j |A_{i, j}|$ can be implemented in time $\widetilde{O}(\log n)$. Moreover, this block encoding can be constructed in a single pass over the matrix $A$, and it can be updated in $O(\log^2 n)$ time per entry.
\end{enumerate}
\end{theorem}
From now on, we will also refer to storing vectors and matrices in QRAM, meaning that we use the data structure from Theorem \ref{qbe}. Note that this is the same quantum oracle model that has been used to solve SDPs in \cite{kerenidis2018quantum} and \cite{van2019improvements}.
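As a concrete illustration of the normalization entering the block encoding, the following numpy snippet (illustrative only) computes $\zeta(A) = \norm{A}_{2}^{-1}\min( \norm{A}_{F}, s_{1}(A))$ from Theorem~\ref{qbe}; as noted later, this quantity is always at most $\sqrt{n}$.
\begin{verbatim}
# Illustrative computation of the block-encoding normalization
# zeta(A) = ||A||_2^{-1} * min(||A||_F, s_1(A)) from the theorem above.
import numpy as np

def zeta(A):
    spec = np.linalg.norm(A, 2)              # spectral norm ||A||_2
    frob = np.linalg.norm(A, 'fro')          # Frobenius norm ||A||_F
    s1   = np.abs(A).sum(axis=1).max()       # s_1(A): max row l1-norm
    return min(frob, s1) / spec

A = np.random.default_rng(1).standard_normal((64, 64))
print(zeta(A))   # always <= sqrt(n), since ||A||_F <= sqrt(n) ||A||_2
\end{verbatim}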
Once we have these block encodings, we may use them to perform linear algebra. In particular, we want to construct the quantum states $\ket{A\vec{b}}$ and $\ket{A^{-1}\vec{b}}$, corresponding to the matrix-vector product $A\vec{b}$ and the solution of the linear system $A\vec{x} = \vec{b}$:
\begin{theorem}[Quantum linear algebra with block encodings \cite{chakraborty2019power, gilyen2019quantum}] \label{qlsa}
Let $A \in \mathbb{R}^{n\times n}$ be a matrix with non-zero eigenvalues in the interval $[-1, -1/\kappa] \cup [1/\kappa, 1]$, and let $\epsilon > 0$. Given an implementation of a $(\zeta, O(\log n))$ block encoding for $A$ in time $T_{U}$ and a procedure for preparing the state $\ket{b}$ in time $T_{b}$,
\begin{enumerate}
\item A state $\epsilon$-close to $\ket{A^{-1} b}$ can be generated in time
$O((T_{U} \kappa \zeta+ T_{b} \kappa) \operatorname{polylog}(\kappa \zeta /\epsilon))$.
\item A state $\epsilon$-close to $\ket{A b}$ can be generated in time $O((T_{U} \kappa \zeta+ T_{b} \kappa) \operatorname{polylog}(\kappa \zeta /\epsilon))$.
\item For $\mathcal{A} \in \{ A, A^{-1} \}$, an estimate $\Lambda$ such that $\Lambda \in (1\pm \epsilon) \norm{ \mathcal{A} b}$ can be generated in time $O((T_{U}+T_{b}) \frac{ \kappa \zeta}{ \epsilon} \operatorname{polylog}(\kappa \zeta/\epsilon))$.
\end{enumerate}
\end{theorem}
\noindent Finally, in order to recover classical information from the outputs of a linear system solver, we require an efficient procedure for \emph{quantum state tomography}. The tomography procedure
is linear in the dimension of the quantum state.
\begin{theorem}[Efficient vector state tomography, \cite{kerenidis2018quantum}]\label{vector state tomography}
There exists an algorithm that given a procedure for constructing $\ket{\vec{x}}$ (i.e. a unitary mapping $U:\ket{0} \mapsto \ket{\vec{x}}$ \change{and its controlled version} in time $T_U$) and precision $\delta > 0$ produces an estimate $\measured{\vec{x}} \in \mathbb{R}^n$ with $\norm{\measured{\vec{x}}} = 1$ such that $\norm{\vec{x} - \measured{\vec{x}}} \leq \sqrt{7} \delta$ with probability at least $(1 - 1/n^{0.83})$. The algorithm runs in time $O\left( T_U \frac{n \log n}{\delta^2}\right)$.
\end{theorem}
Of course, repeating this algorithm $\widetilde{O}(1)$ times allows us to increase the success probability to at least $1 - 1/\operatorname{poly}(n)$. Putting Theorems \ref{qbe}, \ref{qlsa} and \ref{vector state tomography} together, assuming that $A$ and $\vec{b}$ are already in QRAM, we obtain that the complexity of a completely self-contained algorithm for solving the system $A\vec{x} = \vec{b}$ with error $\delta$ is $\widetilde{O} \left( n \cdot \frac{\kappa \zeta}{\delta^2} \right)$. For well-conditioned matrices, this presents a significant improvement over $O(n^\omega)$ (or, in practice, $O(n^3)$) needed for solving linear systems classically, especially when $n$ is large and the desired precision is not too high. This can be compared with classical iterative linear algebra algorithms \cite{saad2003iterative} that have $O(\operatorname{nnz}(A))$ complexity per iteration, as well as the new quantum-inspired solvers \cite{gilyen2018quantum} that have high-degree polynomial dependence on the rank, error, and the condition number.
\section{A quantum interior-point method}
Having introduced the classical IPM for SOCP, we can finally introduce our quantum IPM (Algorithm \ref{alg:qipm}). The main idea is to use quantum linear algebra as much as possible (including solving the Newton system -- the most expensive part of each iteration), and to fall back to classical computation when needed. Since quantum linear algebra introduces inexactness, we need to deal with it in the analysis. The inspiration for the ``quantum part'' of the analysis is \cite{kerenidis2018quantum}, whereas the ``classical part'' is based on \cite{monteiro2000polynomial}. Nevertheless, the SOCP analysis is unique in many aspects and a number of hurdles had to be overcome to make the analysis go through. In the rest of the section, we give a brief introduction of the most important quantum building blocks we use, as well as present a sketch of the analysis.
\begin{algorithm}
\caption{A quantum IPM for SOCP} \label{alg:qipm}
\textbf{Require:} Matrix $A$ and vectors $\vec{b}, \vec{c}$ in QRAM, precision parameter $\epsilon$\\
\begin{enumerate}
\item Find feasible initial point $(\vec{x}, \vec{y}, \vec{s}, \mu)$ and store it in QRAM.
\item Repeat the following steps for $O(\sqrt{r}\log(\mu_0/\epsilon))$ iterations:
\begin{enumerate}
\item Compute the vector $\sigma \mu \vec{e} - \vec{x} \circ \vec{s}$ classically and store it in QRAM.
\item Prepare and update the block encodings of the LHS and the RHS of the Newton system \eqref{eq:Newton system}
\item Solve the Newton system to obtain $\ket{(\Delta\vec{x};\Delta\vec{y};\Delta\vec{s})}$, and obtain a classical approximate solution $\measured{\left( \Delta\vec{x} ; \Delta\vec{y} ; \Delta\vec{s} \right)}$ using tomography.
\item Update $\vec{x} \gets \vec{x} + \measured{\dx}$, $\vec{s} \gets \vec{s} + \measured{\ds}$ and store in QRAM.
\item Update $\mu \gets \frac1r \vec{x}^\top \vec{s}$.
\end{enumerate}
\item Output $(\vec{x}, \vec{y}, \vec{s})$.
\end{enumerate}
\end{algorithm}
First, we note that the algorithms from the previous section allow us to ``forget'' that Algorithm~\ref{alg:qipm} is quantum, and treat it as a small modification of the classical IPM, where the system \eqref{eq:Newton system} is solved up to an $\ell_2$-error $\delta$. Since Algorithm~\ref{alg:qipm} is iterative, the main part of the analysis is proving that a single iteration preserves closeness to the central path, strict feasibility, and improves the duality gap. In the remainder of this section, we state our main results informally, while the exact statements and proofs of all claims can be found in the supplementary material.
\begin{theorem}[Per-iteration correctness, informal] \label{thm:main}
Let $(\vec{x}, \vec{y}, \vec{s})$ be a strictly feasible primal-dual solution that is close to the central path, with duality gap $\mu$, and at distance at least $\delta$ from the boundary of $\mathcal{L}$. Then, the Newton system \eqref{eq:Newton system} has a unique solution $(\Delta\vec{x}, \Delta\vec{y}, \Delta\vec{s})$. There exist positive constants $\xi, \alpha$ such that the following holds: If we let $\measured{\dx}, \measured{\ds}$ be approximate solutions of \eqref{eq:Newton system} that satisfy
\[
\norm{\Delta\vec{x} - \measured{\dx}}_F \leq \xi \delta \text{ and }
\norm{\Delta\vec{s} - \measured{\ds}}_F \leq \xi \delta,
\]
and let $\vec{x}_\text{next} := \vec{x} + \measured{\dx}$ and $\vec{s}_\text{next} := \vec{s} + \measured{\ds}$ be the updated solution, then:
\begin{enumerate}
\item The updated solution is strictly feasible, i.e. $\vec{x}_\text{next} \in \operatorname{int} \mathcal{L}$ and $\vec{s}_\text{next} \in \operatorname{int} \mathcal{L}$.
\item The updated solution is close to the central path, and the new duality gap is less than $(1-\alpha/\sqrt{r})\mu$.
\end{enumerate}
\end{theorem}
The proof of this theorem consists of 3 main parts:
\begin{enumerate}
\item Rescaling $\vec{x}$ and $\vec{s}$ so that they commute in the Jordan-algebraic sense \cite{alizadeh2003second}. This part can be reused from the classical analysis \cite{monteiro2000polynomial}.
\item Bounding the norms of $\measured{\dx}$ and $\measured{\ds}$, and proving that $\vec{x} + \measured{\dx}$ and $\vec{s} + \measured{\ds}$ are still strictly feasible (in the sense of belonging to $\operatorname{int} \mathcal{L}$). This part of the analysis is also inspired by the classical analysis, but it has to take into account the inexactness of the Newton system solution.
\item Proving that the new solution $(\vec{x} + \measured{\dx}, \vec{y} + \measured{\dy}, \vec{s} + \measured{\ds})$ is in the neighborhood of the central path, and the duality gap/central path parameter have decreased by a factor of $1 - \alpha/\sqrt{r}$, where $\alpha$ is constant. This part is the most technical, and while it is inspired by \cite{kerenidis2018quantum}, it required using many of the Jordan-algebraic tools from \cite{alizadeh2003second, monteiro2000polynomial}.
\end{enumerate}
Theorem~\ref{thm:main} formalizes the fact that the Algorithm~\ref{alg:qipm} has the same iteration invariant as the classical IPM. Since the duality gap is reduced by the same factor in both algorithms, their iteration complexity is the same, and a simple calculation shows that they need $O(\sqrt{r})$ iterations to halve the duality gap. On the other hand, the cost of each iteration varies, since the complexity of the quantum linear system solver depends on the precision $\xi\delta$, the condition number $\kappa$ of the Newton matrix, as well as its $\zeta$-parameter. While exactly bounding these quantities is the subject of future research, it is worth noting that for $\zeta$, we have the trivial bound $\zeta \leq \sqrt{n}$, and research on iterative linear algebra methods \cite{dollar2005iterative} suggests that $\kappa = O(1/\mu) = O(1/\epsilon)$. The final complexity is summarized in the following theorem:
\begin{theorem}\label{thm:runtime}
Let \eqref{prob:SOCP primal} be a SOCP with $A \in \mathbb{R}^{m\times n}$, $m \leq n$, and $\mathcal{L} = \mathcal{L}^{n_1} \times \cdots \times \mathcal{L}^{n_r}$. Then, Algorithm \ref{alg:qipm} achieves duality gap $\epsilon$ in time
\begin{equation*}
T = \widetilde{O} \left( \sqrt{r} \log\left( \mu_0 / \epsilon \right) \cdot \frac{n \kappa \zeta}{\delta^2}\log\left( \frac{\kappa \zeta}{\delta} \right) \right),
\end{equation*}
where the $\widetilde{O}(\cdot)$ notation hides the factors that are poly-logarithmic in $n$ and $m$.
\end{theorem}
Finally, the quality of the resulting (classical) solution is characterized by the following theorem:
\begin{theorem}\label{thm:feasibility}
Let \eqref{prob:SOCP primal} be a SOCP as in Theorem \ref{thm:runtime}. Then, after $T$ iterations, the (linear) infeasibility of the final iterate $\vec{x}, \vec{y}, \vec{s}$ is bounded as
\begin{align*}
\norm{A\vec{x} - \vec{b}} &\leq \delta\norm{A},\\
\norm{A^\top \vec{y} + \vec{s} - \vec{c}} &\leq \delta \left( \norm{A} + 1 \right).
\end{align*}
\end{theorem}
\section{Technical results}
In this section, we present our main technical results -- the proofs of Theorems~\ref{thm:main}, \ref{thm:runtime} and~\ref{thm:feasibility}.
\subsection{Central path}
In addition to the central path defined in \eqref{eq:central path}, we define the distance from the central path as $d(\vec{x}, \vec{s}, \nu) = \norm{T_{\vec{x}} \vec{s} - \nu \vec{e}}_F$, so the corresponding $\eta$-neighborhood is given by
\begin{align*}
\mathcal{N}_\eta(\nu) := \{ &(\vec{x}, \vec{y}, \vec{s})\;|\;(\vec{x}, \vec{y}, \vec{s}) \;\text{strictly feasible} \text{ and } d(\vec{x}, \vec{s}, \nu) \leq \eta \nu \}.
\end{align*}
Using this neighborhood definition, we can specify what exactly we mean when we claim that the important properties of the central path are valid in its neighborhood as well.
\begin{lemma}[Properties of the central path]\label{lemma:Properties of the central path}
Let $\nu > 0$ be arbitrary and let $\vec{x}, \vec{s} \in \operatorname{int}\mathcal{L}$. Then, $\vec{x}$ and $\vec{s}$ satisfy the following properties:
\begin{enumerate}
\item For all $\nu > 0$, the duality gap and distance from the central path are related as
\begin{align*}
| \vec{x}^\top \vec{s} - r\nu | \leq \sqrt{\frac{r}{2}} \cdot d(\vec{x}, \vec{s}, \nu).
\end{align*}
\item The distance from the central path is symmetric in its arguments i.e. $d(\vec{x}, \vec{s}, \nu) = d(\vec{s}, \vec{x}, \nu)$.
\item Let $\mu = \frac1r \vec{x}^\top \vec{s}$. If $d(\vec{x}, \vec{s}, \mu) \leq \eta \mu$, then $(1+\eta) \norm{\vec{s}^{-1}}_2 \geq \norm{\mu^{-1} \vec{x}}_2$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part 1, let $\{\lambda_i\}_{i=1}^{2r}$ be the eigenvalues of $T_{\vec{x}}\vec{s}$, and note that $T_{\vec{x}}$ is invertible since $\vec{x} \in \operatorname{int}\mathcal{L}$.
Then using the properties of $T_{\vec{x}}$, we have
\begin{align*}
\vec{x}^\top\vec{s} &= \vec{x}^\top T_{\vec{x}}^{-1} T_{\vec{x}} \vec{s} = (T_{\vec{x}^{-1}} \vec{x})^\top T_{\vec{x}} \vec{s} =\vec{e}^\top T_{\vec{x}} \vec{s} = \frac12 \sum_{i=1}^{2r} \lambda_i.
\end{align*}
We can therefore bound the duality gap $\vec{x}^\top \vec{s}$ as follows,
\begin{align*}
\vec{x}^\top \vec{s} = \frac12 \sum_{i=1}^{2r} \lambda_i &\leq r\nu + \frac12 \sum_{i=1}^{2r} |\lambda_i - \nu| \leq r\nu + \sqrt{\frac{r}{2}}\sqrt{\sum_{i=1}^{2r} (\lambda_i - \nu)^2} = r\nu + \sqrt{\frac{r}{2}} \cdot d(\vec{x}, \vec{s}, \nu).
\end{align*}
The second step used the Cauchy-Schwarz inequality while the third follows from the definition $d(\vec{x}, \vec{s}, \nu)^2 = \sum_{i=1}^{2r} (\lambda_i - \nu)^2$.
The proof of the lower bound is similar, but starts instead with the inequality
\[
\frac12\sum_{i=1}^{2r} \lambda_i \geq r\nu - \frac12 \sum_{i=1}^{2r} |\nu - \lambda_i|.
\]
For part 2, it suffices to prove that $T_{\vec{x}} \vec{s}$ and $T_{\vec{s}} \vec{x}$ have the same eigenvalues. This follows from part 2 of Theorem 10 in \cite{alizadeh2003second}.
Finally for part 3, as $d(\vec{s}, \vec{x}, \mu) \leq \eta \mu$ we have,
\begin{align*}
\eta \mu &\geq \norm{T_{\vec{s}}\vec{x} - \mu \vec{e}}_{F} \\
&= \norm{T_{\vec{s}} \vec{x} - \mu \left( T_{\vec{s}} T_{\vec{s}^{-1}} \right) \vec{e} }_{F} \\
&= \norm{T_{\vec{s}} \left( \vec{x} - \mu T_{\vec{s}^{-1}}\vec{e} \right) }_{F} \\
&\geq \lambda_\text{min} (T_{\vec{s}}) \norm{\vec{x} - \mu T_{\vec{s}^{-1}}\vec{e}}_{F} \\
&\geq \lambda_\text{min} (T_{\vec{s}}) \norm{\vec{x} - \mu \vec{s}^{-1}}_{2} \\
&= \frac{1}{\norm{\vec{s}^{-1}}_{2}} \cdot \mu \cdot \norm{ \mu^{-1} \vec{x} - \vec{s}^{-1} }_{2}
\end{align*}
Therefore, $\eta \norm{\vec{s}^{-1}}_{2} \geq \norm{\mu^{-1} \vec{x} - \vec{s}^{-1}}_{2}$. Finally, by the triangle inequality for the spectral norm,
\[
\eta \norm{\vec{s}^{-1}}_{2} \geq \norm{\mu^{-1} \vec{x}}_{2} -\norm{\vec{s}^{-1}}_{2},
\]
so we can conclude that $\norm{\vec{s}^{-1}}_{2} \geq \frac{1}{1+\eta} \norm{\mu^{-1} \vec{x}}_{2}$.
\end{proof}
\subsection{A single quantum IPM iteration}
Recall that the essence of our quantum algorithm is repeated solution of the Newton system \eqref{eq:Newton system} using quantum linear algebra. As such, our goal is to prove the following theorem:
\setcounter{theorem}{3}
\begin{theorem}[Per-iteration correctness, formal]
Let $\chi = \eta = 0.01$ and $\xi = 0.001$ be positive constants and let $(\vec{x}, \vec{y}, \vec{s})$ be a feasible solution of \eqref{prob:SOCP primal} and \eqref{prob:SOCP dual} with $\mu = \frac1r \vec{x}^\top \vec{s}$ and $d(\vec{x}, \vec{s}, \mu) \leq \eta\mu$. Then, for $\sigma = 1-\chi/\sqrt{r}$, the Newton system \eqref{eq:Newton system} has a unique solution $(\Delta\vec{x}, \Delta\vec{y}, \Delta\vec{s})$. Let $\measured{\dx}, \measured{\ds}$ be approximate solutions of \eqref{eq:Newton system} that satisfy
\[
\norm{\Delta\vec{x} - \measured{\dx}}_F \leq \frac{\xi}{\norm{T_{\vec{x}^{-1}}}} ,\;
\norm{\Delta\vec{s} - \measured{\ds}}_F \leq \frac{\xi}{2\norm{T_{\vec{s}^{-1}}}},
\]
where $T_{\vec{x}}$ and $T_{\vec{s}}$ are the square roots of the quadratic representation matrices in equation \eqref{qrep}.
If we let $\vec{x}_\text{next} := \vec{x} + \measured{\dx}$ and $\vec{s}_\text{next} := \vec{s} + \measured{\ds}$, the following holds:
\begin{enumerate}
\item The updated solution is strictly feasible, i.e. $\vec{x}_\text{next} \in \operatorname{int} \mathcal{L}$ and $\vec{s}_\text{next} \in \operatorname{int} \mathcal{L}$.
\item The updated solution satisfies $d(\vec{x}_\text{next}, \vec{s}_\text{next}, \measured{\mu}) \leq \eta\measured{\mu}$ and $\frac1r \vec{x}_\text{next}^\top \vec{s}_\text{next} = \measured{\mu}$ for $\measured{\mu} = \measured{\sigma}\mu$, $\measured{\sigma} = 1 - \frac{\alpha}{\sqrt{r}}$ and a constant $0 < \alpha \leq \chi$.
\end{enumerate}
\end{theorem}
Since the Newton system \eqref{eq:Newton system} is the same as in the classical case, we can reuse Theorem 1 from \cite{monteiro2000polynomial} for the uniqueness part of Theorem \ref{thm:main}. Therefore, we just need to prove the two parts about strict feasibility and improving the duality gap.
Our analysis is inspired by the general case analysis from \cite{ben2001lectures}, the derived SDP analysis from \cite{kerenidis2018quantum}, and uses some technical results from the SOCP analysis in \cite{monteiro2000polynomial}.
The proof of Theorem \ref{thm:main} consists of three main steps:
\begin{enumerate}
\item Rescaling $\vec{x}$ and $\vec{s}$ so that they share the same Jordan frame.
\item Bounding the norms of $\Delta\vec{x}$ and $\Delta\vec{s}$, and proving that $\vec{x} + \Delta\vec{x}$ and $\vec{s} + \Delta\vec{s}$ are still strictly feasible (in the sense of belonging to $\operatorname{int} \mathcal{L}$).
\item Proving that the new solution $(\vec{x} + \Delta\vec{x}, \vec{y} + \Delta\vec{y}, \vec{s} + \Delta\vec{s})$ is in the $\eta$-neighborhood of the central path, and the duality gap/central path parameter have decreased by a factor of $1 - \alpha/\sqrt{r}$, where $\alpha$ is constant.
\end{enumerate}
\subsection{Rescaling \texorpdfstring{$\vec{x}$}{x} and \texorpdfstring{$\vec{s}$}{s}}
As in the case of SDPs, the first step of the proof uses the symmetries of the Lorentz cone to perform a commutative scaling, that is, to reduce the analysis to the case when $\vec{x}$ and $\vec{s}$ share the same Jordan frame. Although $\circ$ is commutative by definition, two vectors sharing a Jordan frame are akin to two matrices sharing a system of eigenvectors, and thus commuting (some authors \cite{alizadeh2003second} say that the vectors \emph{operator commute} in this case).
The easiest way to achieve this is to scale by $T_{\vec{x}} = Q_{\vec{x}^{1/2}}$ and $\mu^{-1}$, i.e. to change our variables as
\[
\vec{x} \mapsto \scaled{\vec{x}} := T_{\vec{x}}^{-1} \vec{x} = \vec{e} \text{ and } \vec{s} \mapsto \scaled{\vec{s}} := \mu^{-1}T_{\vec{x}} \vec{s}.
\]
Note that for convenience, we have also rescaled the duality gap to 1. Recall also that in the matrix case, the equivalent of this scaling was $X \mapsto X^{-1/2} X X^{-1/2} = I$ and $S \mapsto \mu^{-1}X^{1/2} S X^{1/2}$.
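As a quick numerical sanity check (illustrative only, not part of the proof), the following numpy snippet verifies, for a random interior point of a single Lorentz block, that $T_{\vec{x}}^{-1}\vec{x} = \vec{e}$ and that the scaled duality gap equals one.
\begin{verbatim}
# Sanity check of the scaling above for one Lorentz block:
# T_x^{-1} x = e  and  (1/r) xhat^T shat = 1 (here r = 1).
import numpy as np

def arw(v):
    M = v[0] * np.eye(len(v)); M[0, 1:] = v[1:]; M[1:, 0] = v[1:]
    return M

def power(v, p):
    v0, vb = v[0], v[1:]; nrm = np.linalg.norm(vb)
    l1, l2 = v0 + nrm, v0 - nrm
    c1 = 0.5 * np.concatenate(([1.0],  vb / nrm))
    c2 = 0.5 * np.concatenate(([1.0], -vb / nrm))
    return l1**p * c1 + l2**p * c2

def Q(v):                                  # quadratic representation Q_v
    return 2 * arw(v) @ arw(v) - arw(power(v, 2))

rng = np.random.default_rng(7)
tail = rng.standard_normal(4)
x = np.concatenate(([1.0 + np.linalg.norm(tail)], tail))   # x in int L^5
tail = rng.standard_normal(4)
s = np.concatenate(([1.0 + np.linalg.norm(tail)], tail))   # s in int L^5
mu = x @ s                                  # duality gap with r = 1
Tx = Q(power(x, 0.5))                       # T_x = Q_{x^{1/2}}

xhat = np.linalg.solve(Tx, x)               # scaled x, should equal e
shat = (Tx @ s) / mu                        # scaled s
e = np.concatenate(([1.0], np.zeros(4)))
print(np.allclose(xhat, e), np.isclose(xhat @ shat, 1.0))
\end{verbatim}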
We use the notation $\scaled{\vec{z}}$ to denote the appropriately-scaled vector $\vec{z}$, so that we have
\[
\scaled{\dx} := T_{\vec{x}}^{-1} \Delta\vec{x},\quad \scaled{\ds} := \mu^{-1}T_{\vec{x}} \Delta\vec{s}
\]
For approximate quantities (e.g. the ones obtained using tomography, or any other approximate linear system solver), we use the notation $\measured{\phantom{i}\cdot\phantom{i}}$, so that the increments become $\measured{\dx}$ and $\measured{\ds}$, and their scaled counterparts are $\measuredscaled{\dx} := T_{\vec{x}}^{-1} \measured{\dx}$ and $\measuredscaled{\ds} := \mu^{-1}T_{\vec{x}} \measured{\ds}$. Finally, we denote the scaled version of the next iterate as $\scaled{\xnext} := \vec{e} + \measuredscaled{\dx}$ and
$\scaled{\snext} := \ss + \measuredscaled{\ds}$. Now, we see that the statement of Theorem \ref{thm:main} implies the following bounds on $\norm{ \scaled{\dx} - \measuredscaled{\dx} }_F$ and $\norm{\scaled{\ds} - \measuredscaled{\ds}}_F$:
\begin{align*}
\norm{\scaled{\dx} - \measuredscaled{\dx}}_F & = \norm{T_{\vec{x}^{-1}}\Delta\vec{x} - T_{\vec{x}^{-1}} \measured{\dx}}_F \\
&\leq \norm{T_{\vec{x}^{-1}}} \cdot \norm{\Delta\vec{x} - \measured{\dx}}_F \leq \xi, \text{ and } \\
\norm{\scaled{\ds} - \measuredscaled{\ds}}_F &= \mu^{-1}\norm{T_{\vec{x}} \Delta\vec{s} - T_{\vec{x}} \measured{\ds}}_F \\
&\leq \mu^{-1}\norm{T_{\vec{x}}} \norm{\Delta\vec{s} - \measured{\ds}}_F \\
&= \mu^{-1} \norm{\vec{x}}_2 \norm{\Delta\vec{s} - \measured{\ds}}_F \\
&\leq (1+\eta) \norm{\vec{s}^{-1}}_2 \norm{\Delta\vec{s} - \measured{\ds}}_F \text{ by Lemma \ref{lemma:Properties of the central path}} \\
&\leq 2 \norm{T_{\vec{s}^{-1}}} \norm{\Delta\vec{s} - \measured{\ds}}_F \leq \xi.
\end{align*}
Throughout the analysis, we will make use of several constants: $\eta > 0$ is the distance from the central path, i.e. we ensure that our iterates stay in the $\eta$-neighborhood $\mathcal{N}_\eta$ of the central path. The constant $\sigma = 1 - \chi / \sqrt{r}$ is the factor by which we aim to decrease our duality gap, for some constant $\chi > 0$. Finally, the constant $\xi > 0$ is the approximation error for the scaled increments $\measuredscaled{\dx}, \measuredscaled{\ds}$.
Having this notation in mind, we can state several facts about the relation between the duality gap and the central path distance for the original and scaled vectors.
\begin{claim}\label{claim:stuff preserved under scaling}
The following holds for the scaled vectors $\scaled{\vec{x}}$ and $\ss$:
\begin{enumerate}
\item The scaled duality gap is $\frac1r \scaled{\vec{x}}^\top \ss = 1$.
\item $d(\vec{x}, \vec{s}, \mu) \leq \eta \mu$ is equivalent to $\norm{\ss - \vec{e}} \leq \eta$.
\item $d(\vec{x}, \vec{s}, \mu\sigma) = \mu \cdot d(\scaled{\vec{x}}, \ss, \sigma)$, for all $\sigma > 0$.
\end{enumerate}
\end{claim}
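These identities are straightforward to spot-check numerically in the single-cone case ($r = 1$) with the helpers from the sketch above, taking the central-path distance to be $d(\vec{x}, \vec{s}, \nu) = \norm{T_{\vec{x}}\vec{s} - \nu\vec{e}}_F$ (the form used later in the proof of Lemma~\ref{lemma:we stay on central path}); the helper name \texttt{d\_cp} is ours:
\begin{verbatim}
# Reuses rng, d, x, e, T_x, eigenvalues, sqrt_vec, quad_rep from the
# previous sketch; a single cone, so r = 1.

def frob_norm(z):
    lam1, lam2 = eigenvalues(z)
    return np.sqrt(lam1**2 + lam2**2)

def d_cp(x, s, nu):
    """Central-path distance ||Q_{x^{1/2}} s - nu e||_F (assumed form)."""
    return frob_norm(quad_rep(sqrt_vec(x)) @ s - nu * e)

sb = rng.standard_normal(d - 1)
s = np.concatenate([[np.linalg.norm(sb) + 0.5], sb])   # strictly feasible s
mu = x @ s                                             # duality gap, r = 1
x_sc = np.linalg.solve(T_x, x)                         # equals e
s_sc = (T_x @ s) / mu
print(np.isclose(x_sc @ s_sc, 1.0))                          # item 1
print(np.isclose(d_cp(x, s, mu), mu * frob_norm(s_sc - e)))  # item 2
sigma = 0.9
print(np.isclose(d_cp(x, s, mu * sigma),
                 mu * d_cp(x_sc, s_sc, sigma)))              # item 3
\end{verbatim}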
At this point, we claim that it suffices to prove the two parts of Theorem \ref{thm:main} in the scaled case. Namely, assuming that $\scaled{\xnext} \in \operatorname{int} \mathcal{L}$ and $\scaled{\snext} \in \operatorname{int}\mathcal{L}$, by construction we get
\begin{equation*}
\vec{x}_\text{next} = T_{\vec{x}} \scaled{\xnext} \text{ and } \vec{s}_\text{next} = \mu T_{\vec{x}^{-1}} \scaled{\snext}
\end{equation*}
and thus $\vec{x}_\text{next}, \vec{s}_\text{next} \in \operatorname{int} \mathcal{L}$.
On the other hand, if $\mu d(\scaled{\xnext}, \scaled{\snext}, \measured{\sigma}) \leq \eta \measured{\mu}$, then $d(\vec{x}_\text{next}, \vec{s}_\text{next}, \measured{\mu}) \leq \eta \measured{\mu}$ follows by Claim \ref{claim:stuff preserved under scaling}. Similarly, from $\frac1r \scaled{\xnext}^\top \scaled{\snext} = \measured{\sigma}$, we also get $\frac1r \vec{x}_\text{next}^\top \vec{s}_\text{next} = \measured{\mu}$.
We conclude this part with two technical results from \cite{monteiro2000polynomial}, that use the auxiliary matrix $R_{xs}$ defined as $R_{xs} := T_{\vec{x}} \operatorname{Arw}(\vec{x})^{-1} \operatorname{Arw}(\vec{s}) T_{\vec{x}}$. These results are useful for the later parts of the proof of Theorem \ref{thm:main}.
\begin{claim}[\cite{monteiro2000polynomial}, Lemma 3]\label{claim:R_xs bound}
Let $\eta$ be the distance from the central path, and let $\nu > 0$ be arbitrary. Then, $R_{xs}$ is bounded as
\[
\norm{R_{xs} - \nu I} \leq 3 \eta \nu.
\]
\end{claim}
\begin{claim}[\cite{monteiro2000polynomial}, Lemma 5, proof]\label{claim:dss expression}
Let $\mu$ be the duality gap. Then, the scaled increment $\scaled{\ds}$ is
\[
\scaled{\ds} = \sigma \vec{e} - \ss - \mu^{-1}R_{xs} \scaled{\dx}.
\]
\end{claim}
\subsection{Maintaining strict feasibility}
The main tool for showing that strict feasibility is conserved is the following bound on the increments $\scaled{\dx}$ and $\scaled{\ds}$:
\begin{lemma}[\cite{monteiro2000polynomial}, Lemma 6]\label{lemma:bounds for increment}
Let $\eta$ be the distance from the central path and let $\mu$ be the duality gap. Then, we have the following bounds for the scaled direction:
\[
\begin{array}{rl}
\norm{\scaled{\dx}}_F &\leq \frac{\Theta}{\sqrt{2}} \\
\norm{\scaled{\ds}}_F &\leq \Theta \sqrt{2}
\end{array}
, \quad\text{where}\quad \Theta = \frac{2\sqrt{\eta^2 / 2+ (1-\sigma)^2 r}}{1 - 3\eta}
\]
\end{lemma}
\noindent Moreover, if we substitute $\sigma$ with its actual value $1 - \chi/\sqrt{r}$, we get $\Theta = \frac{\sqrt{2\eta^2 + 4\chi^2}}{1-3\eta}$, which we can make arbitrarily small by tuning the constants. Now, we can immediately use this result to prove $\scaled{\xnext}, \scaled{\snext} \in \operatorname{int} \mathcal{L}$.
\begin{lemma} \label{l1}
Let $\eta = \chi = 0.01$ and $\xi = 0.001$. Then, $\scaled{\xnext}$ and $\scaled{\snext}$ are strictly feasible, i.e. $\scaled{\xnext}, \scaled{\snext} \in \operatorname{int} \mathcal{L}$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:bounds for increment}, $\lambda_\text{min}(\scaled{\xnext}) \geq 1 - \norm{\measuredscaled{\dx}}_F \geq 1 - \frac{\Theta}{\sqrt{2}} - \xi$. On the other hand, since $d(\vec{x}, \vec{s}, \mu) \leq \eta \mu$, we have $d(\scaled{\vec{x}}, \ss, 1) \leq \eta$, and thus
\begin{align*}
\eta^2 &\geq \norm{\ss - e}_F^2 = \sum_{i=1}^{2r} (\lambda_i(\ss)-1)^2
\end{align*}
The above inequality implies that $\lambda_i(\ss) \in \left[ 1-\eta, 1+\eta \right], \forall i \in [2r]$.
Now, since $\norm{\vec{z}}_2\leq \norm{\vec{z}}_F$,
\begin{align*}
\lambda_\text{min}(\scaled{\snext}) &\geq \lambda_\text{min}(\ss + \scaled{\ds}) - \norm{\measuredscaled{\ds} - \scaled{\ds}}_F \\
&\geq \lambda_\text{min}(\ss) - \norm{\scaled{\ds}}_F - \norm{\measuredscaled{\ds} - \scaled{\ds}}_F \\
&\geq 1-\eta - \Theta \sqrt{2} - \xi,
\end{align*}
where we used Lemma \ref{lemma:bounds for increment} for the last inequality.
Substituting $\eta = \chi = 0.01$ and $\xi = 0.001$, we get that $\lambda_\text{min}(\scaled{\xnext}) \geq 0.8$ and $\lambda_\text{min}(\scaled{\snext}) \geq 0.8$.
\end{proof}
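The numerical values quoted at the end of the proof follow from elementary arithmetic; for completeness, a quick check (plain Python/NumPy, values rounded):
\begin{verbatim}
import numpy as np

eta, chi, xi = 0.01, 0.01, 0.001
Theta = np.sqrt(2 * eta**2 + 4 * chi**2) / (1 - 3 * eta)
print(Theta)                               # ~0.0253
print(1 - Theta / np.sqrt(2) - xi)         # lambda_min bound for x_next, ~0.981
print(1 - eta - np.sqrt(2) * Theta - xi)   # lambda_min bound for s_next, ~0.953
\end{verbatim}
Both bounds are comfortably above the value of $0.8$ quoted in the lemma.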
\subsection{Maintaining closeness to central path}
Finally, we move on to the most technical part of the proof of Theorem \ref{thm:main}, where we prove that $\scaled{\xnext}, \scaled{\snext}$ is still close to the central path, and the duality gap has decreased by a constant factor. We split this into two lemmas.
\begin{lemma}\label{lemma:we stay on central path}
Let $\eta = \chi = 0.01$, $\xi=0.001$, and let $\alpha$ be any value satisfying $0 < \alpha \leq \chi$. Then, for $\measured{\sigma} = 1 - \alpha / \sqrt{r}$, the distance to the central path is maintained, that is, $d(\scaled{\xnext}, \scaled{\snext}, \measured{\sigma}) < \eta\measured{\sigma}$.
\end{lemma}
\begin{proof}
By Claim \ref{claim:stuff preserved under scaling}, the distance of the next iterate from the central path is \[d(\scaled{\xnext}, \scaled{\snext}, \measured{\sigma}) = \norm{T_{\scaled{\xnext}} \scaled{\snext} - \measured{\sigma}\vec{e}}_F,\] and we can bound it from above as
\begin{align*}
d(\scaled{\xnext}, \scaled{\snext}, \measured{\sigma}) &= \norm{T_{\scaled{\xnext}} \scaled{\snext} - \measured{\sigma}\vec{e}}_F \\
&= \norm{T_{\scaled{\xnext}} \scaled{\snext} - \measured{\sigma} T_{\scaled{\xnext}}T_{\scaled{\xnext}^{-1}} \vec{e}}_F \\
&\leq \norm{T_{\scaled{\xnext}}} \cdot \norm{\scaled{\snext} - \measured{\sigma}\cdot (\scaled{\xnext})^{-1}}.
\end{align*}
So, it is enough to bound $\norm{\vec{z}}_F := \norm{\scaled{\snext} - \measured{\sigma}\cdot \scaled{\xnext}^{-1}}_F$ from above, since
\begin{align*}
\norm{T_{\scaled{\xnext}}} &= \norm{\scaled{\xnext}}_2 \leq 1 + \norm{\measuredscaled{\dx}}_2 \leq 1 + \norm{\scaled{\dx}}_2 + \xi \leq 1 + \frac{\Theta}{\sqrt{2}} + \xi.
\end{align*}
We split $\vec{z}$ as
\begin{align*}
\vec{z} &= \underbrace{\left( \ss + \measuredscaled{\ds} - \measured{\sigma}e + \measuredscaled{\dx} \right)}_{\vec{z}_1}
+ \underbrace{(\measured{\sigma} - 1)\measuredscaled{\dx}}_{\vec{z}_2} + \underbrace{\measured{\sigma} \left( e - \measuredscaled{\dx} - (e + \measuredscaled{\dx})^{-1} \right) }_{\vec{z}_3},
\end{align*}
and we bound $\norm{\vec{z}_1}_F, \norm{\vec{z}_2}_F$, and $\norm{\vec{z}_3}_F$ separately.
\begin{enumerate}
\item By the triangle inequality, $\norm{\vec{z}_1}_F \leq \norm{\ss + \scaled{\ds} - \measured{\sigma} e + \scaled{\dx}}_F + 2\xi$. Furthermore, after substituting $\scaled{\ds}$ from Claim \ref{claim:dss expression},
we get
\begin{align*}
\ss + \scaled{\ds} - \measured{\sigma} e + \scaled{\dx} &= \sigma e - \mu^{-1}R_{xs} \scaled{\dx} - \measured{\sigma} e + \scaled{\dx} \\
&= \frac{\alpha - \chi}{\sqrt{r}} e + \mu^{-1}(\mu I - R_{xs})\scaled{\dx}.
\end{align*}
Using the bound for $\norm{\mu I - R_{xs}}$ from Claim \ref{claim:R_xs bound} as well as the bound for $\norm{\scaled{\dx}}_F$ from Lemma \ref{lemma:bounds for increment}, we obtain
\begin{align*}
\norm{\vec{z}_1}_F \leq 2\xi + \frac{\chi}{\sqrt{r}} + \frac{3}{\sqrt{2}}\eta\Theta.
\end{align*}
\item $\norm{\vec{z}_2}_F \leq \frac{\chi}{\sqrt{r}} \left( \frac{\Theta}{\sqrt{2}} + \xi \right)$, where we used the bound from Lemma \ref{lemma:bounds for increment} again.
\item Here, we first need to bound $\norm{ (e + \scaled{\dx})^{-1} - (e + \measuredscaled{\dx})^{-1} }_F$. For this, we use the submultiplicativity of $\norm{\cdot}_F$:
\begin{align*}
\norm{& (e + \scaled{\dx})^{-1} - (e + \measuredscaled{\dx})^{-1} }_F = \norm{ (e+\scaled{\dx})^{-1} \circ \left( e - (e+\scaled{\dx}) \circ (e + \measuredscaled{\dx})^{-1} \right) }_F\\
&\leq \norm{ (e+\scaled{\dx})^{-1} }_2 \cdot \norm{ e - (e+\measuredscaled{\dx} + \scaled{\dx} - \measuredscaled{\dx}) \circ (e + \measuredscaled{\dx})^{-1} }_F \\
&= \norm{ (e+\scaled{\dx})^{-1} }_2 \cdot \norm{ (\scaled{\dx} - \measuredscaled{\dx}) \circ (e + \measuredscaled{\dx})^{-1} }_F \\
&\leq \norm{ (e+\scaled{\dx})^{-1} }_2 \cdot \norm{ \scaled{\dx} - \measuredscaled{\dx} }_F \cdot \norm{ (e + \measuredscaled{\dx})^{-1} }_2 \\
&\leq \xi \cdot \norm{ (e+\scaled{\dx})^{-1} }_2 \cdot \norm{ (e + \measuredscaled{\dx})^{-1} }_2.
\end{align*}
Now, we have the bound $\norm{(e+\scaled{\dx})^{-1}}_2 \leq \frac{1}{1 - \norm{\scaled{\dx}}_F}$ and similarly $\norm{(e+\measuredscaled{\dx})^{-1}}_2 \leq \frac{1}{1 - \norm{\scaled{\dx}}_F - \xi}$, so we get
\[
\norm{ (e + \scaled{\dx})^{-1} - (e + \measuredscaled{\dx})^{-1} }_F \leq \frac{\xi}{(1 - \norm{\scaled{\dx}}_F - \xi)^2}.
\]
Using this, we can bound $\norm{ \vec{z}_3 }_F$:
\begin{align*}
\norm{ \vec{z}_3 }_F \leq \measured{\sigma}&\left( \norm{ e - \scaled{\dx} - (e+\scaled{\dx})^{-1} }_F + \xi + \frac{\xi}{(1 - \norm{\scaled{\dx}}_F - \xi)^2} \right).
\end{align*}
If we let $\lambda_i$ be the eigenvalues of $\scaled{\dx}$, then by Lemma \ref{lemma:bounds for increment}, we have
\begin{align*}
\norm{& e - \scaled{\dx} - (e+\scaled{\dx})^{-1} }_F \\
&= \sqrt{ \sum_{i=1}^{2r} \left( (1-\lambda_{i})-\frac{1}{1+\lambda_{i}} \right)^2 } \\
&= \sqrt{\sum_{i=1}^{2r} \frac{\lambda_{i}^4}{(1+\lambda_{i})^2}}
\leq \frac{\Theta}{\sqrt2 - \Theta} \sqrt{\sum_{i=1}^{2r} \lambda_{i}^2} \\
&\leq \frac{\Theta^2}{2 - \sqrt{2}\Theta}.
\end{align*}
\end{enumerate}
Combining all bounds from above, we obtain
\begin{align*}
d(\scaled{\xnext}, \scaled{\snext}, \measured{\sigma}) \leq \left( 1 + \frac{\Theta}{\sqrt{2}} + \xi \right) \cdot
&\left(2\xi + \frac{\chi}{\sqrt{r}} + \frac{3}{\sqrt{2}}\eta\Theta\right. \\
&+\frac{\chi}{\sqrt{r}} \left( \frac{\Theta}{\sqrt{2}} + \xi \right) \\
&+\left. \measured{\sigma}\left( \frac{\Theta^2}{2 - \sqrt{2}\Theta} + \xi + \frac{\xi}{(1 - \Theta/\sqrt{2} - \xi)^2} \right) \right).
\end{align*}
Finally, if we plug in $\chi=0.01$, $\eta=0.01$, $\xi= 0.001$, we get $d(\scaled{\xnext}, \scaled{\snext}, \measured{\sigma}) \leq 0.005\measured{\sigma} \leq \eta\measured{\sigma}$.
\end{proof}
Now, we prove that the duality gap decreases.
\begin{lemma} \label{gap}
For the same constants, the updated solution satisfies $\frac1r \scaled{\xnext}^\top \scaled{\snext} \leq 1-\frac{\alpha}{\sqrt{r}}$ for $\alpha = 0.005$.
\end{lemma}
\begin{proof}
Since $\scaled{\xnext}$ and $\scaled{\snext}$ are scaled quantities, the duality gap between their unscaled counterparts is $\frac{\mu}{r}\scaled{\xnext}^\top \scaled{\snext}$. Applying Lemma \ref{lemma:Properties of the central path} (and Claim \ref{claim:stuff preserved under scaling}) with $\nu=\sigma\mu$ and $\scaled{\xnext}, \scaled{\snext}$, we obtain the upper bound
\begin{align*}
\mu\scaled{\xnext}^\top \scaled{\snext} \leq r \sigma\mu + \sqrt{\frac{r}{2}} \mu d(\scaled{\xnext}, \scaled{\snext}, \sigma),
\end{align*}
which in turn implies
\begin{equation*}
\frac1r \scaled{\xnext}^\top \scaled{\snext} \leq \left( 1 - \frac{0.01}{\sqrt{r}} \right) \left( 1 + \frac{d(\scaled{\xnext}, \scaled{\snext}, \sigma)}{\sigma \sqrt{2r}} \right).
\end{equation*}
By instantiating Lemma \ref{lemma:we stay on central path} with $\alpha = \chi$ (so that $\measured{\sigma} = \sigma$), from its proof, we obtain $d(\scaled{\xnext}, \scaled{\snext}, \sigma) \leq 0.005 \sigma$, and thus
\begin{equation*}
\frac1r \scaled{\xnext}^\top \scaled{\snext} \leq 1 - \frac{0.005}{\sqrt{r}}
\end{equation*}
Therefore, the final $\alpha$ for this Lemma is $0.005$.
\end{proof}
\subsection{Final complexity and feasibility}
In every iteration we need to solve the Newton system to a precision dependent on the norms of $T_{\vec{x}^{-1}}$ and $T_{\vec{s}^{-1}}$. Thus, to bound the running time of the algorithm (since the complexity of the vector state tomography procedure depends on the desired precision), we need to bound $\norm{T_{\vec{x}^{-1}}}$ and $\norm{T_{\vec{s}^{-1}}}$. Indeed, by the properties of the quadratic representation, we get
\begin{align*}
\norm{T_{\vec{x}^{-1}}} &= \norm{\vec{x}^{-1}} = \lambda_\text{min}(\vec{x})^{-1} \text{ and }\\ \norm{T_{\vec{s}^{-1}}} &= \norm{\vec{s}^{-1}} = \lambda_\text{min}(\vec{s})^{-1}.
\end{align*}
If the tomography precision for iteration $i$ is chosen to be at least as fine as (i.e. the error at most)
\begin{equation}\label{eq:definition of delta}
\delta_i := \frac{\xi}{4} \min \left\{ \lambda_\text{min}(\vec{x}_i), \lambda_\text{min}(\vec{s}_i) \right\},
\end{equation}
then the premises of Theorem \ref{thm:main} are satisfied. The tomography precision for the entire algorithm can therefore be chosen to be $\delta := \min_i \delta_i$. Note that these minimum eigenvalues are related to how close the current iterate is to the boundary of $\mathcal{L}$ -- as long as $\vec{x}_i, \vec{s}_i$ are not ``too close'' to the boundary of $\mathcal{L}$, their minimal eigenvalues should not be ``too small''.
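As a sketch, assuming the iterates are stored block-wise as lists of Lorentz-cone vectors, the precision of equation \eqref{eq:definition of delta} could be computed as follows (the helper names are ours):
\begin{verbatim}
import numpy as np

def lambda_min_blocks(blocks):
    """Smallest spectral value over the Lorentz-cone blocks of a vector."""
    return min(b[0] - np.linalg.norm(b[1:]) for b in blocks)

def tomography_precision(x_blocks, s_blocks, xi=0.001):
    """delta_i from eq. (definition of delta) for the current iterate."""
    return (xi / 4) * min(lambda_min_blocks(x_blocks),
                          lambda_min_blocks(s_blocks))
\end{verbatim}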
There are two more parameters that impact the complexity of the quantum linear system solver: the condition number of the Newton matrix $\kappa_i$ and the matrix parameter $\zeta_i$ of the QRAM encoding in iteration $i$. For both of these quantities we define their global versions as $\kappa = \max_i \kappa_i$ and $\zeta = \max_i \zeta_i$. Therefore, we arrive at the following statement about the complexity of Algorithm~\ref{alg:qipm}.
\begin{theorem}[restatement]
Let \eqref{prob:SOCP primal} be a SOCP with $A \in \mathbb{R}^{m\times n}$, $m \leq n$, and $\mathcal{L} = \mathcal{L}^{n_1} \times \cdots \times \mathcal{L}^{n_r}$. Then, our algorithm achieves duality gap $\epsilon$ in time
\begin{equation*}
T = \widetilde{O} \left( \sqrt{r} \log\left( \mu_0 / \epsilon \right) \cdot \frac{n \kappa \zeta}{\delta^2}\log\left( \frac{\kappa \zeta}{\delta} \right) \right).
\end{equation*}
\end{theorem}
This complexity can be easily interpreted as the product of the number of iterations and the cost of $n$-dimensional vector tomography with error $\delta$. So, improving the complexity of the tomography algorithm would improve the running time of our algorithm as well.
Note that up to now, we cared mostly about strict (conic) feasibility of $\vec{x}$ and $\vec{s}$. Now, we address the fact that the linear constraints $A\vec{x} = \vec{b}$ and $A^\top \vec{y} + \vec{s} = \vec{c}$ are not exactly satisfied during the execution of the algorithm. Luckily, it turns out that this error is not accumulated, but is instead determined just by the final tomography precision:
\begin{theorem}[restatement]
Let \eqref{prob:SOCP primal} be a SOCP as in Theorem \ref{thm:runtime}. Then, after $T$ iterations, the (linear) infeasibility of the final iterate $\vec{x}, \vec{y}, \vec{s}$ is bounded as
\begin{align*}
\norm{A\vec{x}_T - \vec{b}} &\leq \delta\norm{A} , \\
\norm{A^\top \vec{y}_T + \vec{s}_T - \vec{c}} &\leq \delta \left( \norm{A} + 1 \right).
\end{align*}
\end{theorem}
\begin{proof}
Let $(\vec{x}_T, \vec{y}_T, \vec{s}_T)$ be the $T$-th iterate. Then, the following holds for $A\vec{x}_T - \vec{b}$:
\begin{equation}\label{eq:eq11}
A \vec{x}_T - \vec{b} = A\vec{x}_0 + A\sum_{t=1}^{T} \measured{\dx}_t - \vec{b} = A \sum_{t=1}^{T} \measured{\dx}_t.
\end{equation}
On the other hand, the Newton system at iteration $T$ has the constraint $A\Delta\vec{x}_T = \vec{b} - A\vec{x}_{T-1}$, which we can further recursively transform as,
\begin{align*}
A\Delta\vec{x}_T &= \vec{b} - A\vec{x}_{T-1} = \vec{b} - A\left( \vec{x}_{T-2} + \measured{\dx}_{T-1} \right) \\
&= \vec{b} - A\vec{x}_0 - A\sum_{t=1}^{T-1} \measured{\dx}_t = - A\sum_{t=1}^{T-1} \measured{\dx}_t.
\end{align*}
Substituting this into equation \eqref{eq:eq11}, we get
\[
A\vec{x}_T - \vec{b} = A \left( \measured{\dx}_T - \Delta\vec{x}_T \right).
\]
Similarly, using the constraint $A^\top\Delta\vec{y}_T + \Delta\vec{s}_{T} = \vec{c} - \vec{s}_{T-1} - A^\top \vec{y}_{T-1}$ we obtain that
\[
A^\top \vec{y}_T + \vec{s}_T - \vec{c} = A^\top \left( \measured{\dy}_T - \Delta\vec{y}_T \right) + \left( \measured{\ds}_T - \Delta\vec{s}_T \right).
\]
Finally, we can bound the norms of these two quantities,
\begin{align*}
\norm{A\vec{x}_T - \vec{b}} &\leq \delta\norm{A} , \\
\norm{A^\top \vec{y}_T + \vec{s}_T - \vec{c}} &\leq \delta \left( \norm{A} +1 \right).
\end{align*}
\end{proof}
\section{Quantum Support-Vector Machines}
In this section we present our quantum support vector machine (SVM) algorithm as an application of our SOCP solver.
Given a set of vectors $\mathcal{X} = \{ \vec{x}^{(i)} \in \mathbb{R}^n\;|\;i \in [m] \}$ (\emph{training examples}) and their \emph{labels} $y^{(i)} \in \{-1, 1\}$, the objective of the SVM training process is to find the ``best'' hyperplane that separates training examples with label $1$ from those with label $-1$. In this paper we focus on the (traditional) \emph{soft-margin} ($\ell_1$-)SVM, which can be expressed as the following optimization problem:
\begin{equation}
\begin{array}{ll}
\min\limits_{\vec{w}, b, \vec{\xi}} & \norm{\vec{w}}^2 + C\norm{\vec{\xi}}_1 \\
\text{s.t.}& y^{(i)}(\vec{w}^\top\vec{x}^{(i)}+b) \geq 1 - \xi_i, \;\forall i \in [m] \\
&\vec{\xi} \geq 0.
\end{array} \label{prob:SVM}
\end{equation}
Here, the variables $\vec{w}\in\mathbb{R}^n$ and $b \in \mathbb{R}$ correspond to the hyperplane, $\vec{\xi} \in \mathbb{R}^m$ corresponds to the ``linear inseparability'' of each point, and the constant $C > 0$ is a hyperparameter that quantifies the tradeoff between maximizing the margin and minimizing the constraint violations.
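For reference, problem \eqref{prob:SVM} can be written down directly in an off-the-shelf convex modelling tool; the sketch below uses CVXPY with a classical solver, is purely illustrative (it is not the algorithm of this paper), and takes the training examples as the rows of its input matrix:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def soft_margin_svm(X, y, C=1.0):
    """Solve problem (prob:SVM); X is m x n with examples as rows,
    y is in {-1, +1}^m."""
    m, n = X.shape
    w = cp.Variable(n)
    b = cp.Variable()
    xi = cp.Variable(m, nonneg=True)
    objective = cp.Minimize(cp.sum_squares(w) + C * cp.sum(xi))
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi]
    cp.Problem(objective, constraints).solve()
    return w.value, b.value, xi.value
\end{verbatim}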
As a slightly less traditional alternative, one might also consider the $\ell_2$-SVM (or least-squares SVM, LS-SVM) \cite{suykens1999least}, where the $\norm{\vec{\xi}}_1$ regularization term is replaced by $\norm{\vec{\xi}}^2$. This formulation arises from considering the least-squares regression problem with the constraints $y^{(i)}(\vec{w}^\top\vec{x}^{(i)}+b) = 1$, which we solve by minimizing the squared $2$-norm of the residuals:
\begin{equation}
\begin{array}{ll}
\min\limits_{\vec{w}, b, \vec{\xi}} & \norm{\vec{w}}^2 + C \norm{\vec{\xi}}^2 \\
\text{s.t.}& y^{(i)}(\vec{w}^\top\vec{x}^{(i)}+b) = 1 - \xi_i, \;\forall i \in [m]
\end{array} \label{prob:LS-SVM}
\end{equation}
Since this is a least-squares problem, the optimal $\vec{w}, b$ and $\vec{\xi}$ can be obtained by solving a linear system. In \cite{rebentrost2014quantum}, a quantum algorithm for LS-SVM is presented, which uses a single quantum linear system solver. Unfortunately, replacing the $\ell_1$-norm with $\ell_2$ in the objective of \eqref{prob:LS-SVM} leads to the loss of a key property of ($\ell_1$-)SVM -- weight sparsity \cite{suykens2002weighted}.
\subsection{Reducing SVM to SOCP}
Finally, we are going to reduce the SVM problem \eqref{prob:SVM} to SOCP. In order to do that, we define an auxiliary vector $\vec{t} = \left( t+1; t; \vec{w} \right)$, where $t \in \mathbb{R}$ -- this allows us to ``compute'' $\norm{\vec{w}}^2$ using the constraint $\vec{t} \in \mathcal{L}^{n+2}$ since
\begin{equation*}
\vec{t} \in \mathcal{L}^{n+2} \Leftrightarrow (t+1)^2 \geq t^2 + \norm{\vec{w}}^2 \Leftrightarrow 2t+1 \geq \norm{\vec{w}}^2.
\end{equation*}
Thus, minimizing $\norm{\vec{w}}^2$ is equivalent to minimizing $t$. Note we can restrict our bias $b$ to be nonnegative without any loss in generality, since the case $b < 0$ can be equivalently described by a bias $-b > 0$ and weights $-\vec{w}$. Using these transformations, we can restate \eqref{prob:SVM} as the following SOCP:
\begin{equation}
\begin{array}{ll}
\min\limits_{\vec{t}, b, \vec{\xi}} & \begin{bmatrix}
0 & 1 & 0^n & 0 & C^m
\end{bmatrix} \begin{bmatrix}
\vec{t} & b & \vec{\xi}
\end{bmatrix}^\top \\
\text{s.t.}&
\begin{bmatrix}
0 & 0 & & 1 & \\
\vdots & \vdots& X^\top & \vdots & \operatorname{diag}(\vec{y}) \\
0 & 0 & & 1 & \\
1 & -1 & 0^n & 0 & 0^m
\end{bmatrix} \begin{bmatrix}
\vec{t} \\
b \\
\vec{\xi}
\end{bmatrix} = \begin{bmatrix}
\vec{y} \\
1
\end{bmatrix}\\
& \vec{t} \in \mathcal{L}^{n+2},\; b \in \mathcal{L}^1,\; \xi_i \in \mathcal{L}^1\quad \forall i \in [m]
\end{array} \label{prob:SVM SOCP primal}
\end{equation}
Here, we use the notation $X \in \mathbb{R}^{n\times m}$ for the matrix whose columns are the training examples $\vec{x}^{(i)}$, and $\vec{y} \in \mathbb{R}^m$ for the vector of labels.
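For concreteness, the data $(A, \vec{b}, \vec{c})$ of \eqref{prob:SVM SOCP primal} can be assembled directly from $X$, $\vec{y}$ and $C$; a minimal sketch (variable order $\vec{t}, b, \vec{\xi}$, with the cone block sizes returned last) is:
\begin{verbatim}
import numpy as np

def svm_socp_data(X, y, C):
    """Assemble (A, b, c) for (prob:SVM SOCP primal); X is n x m with
    the training examples as columns, y is the vector of labels."""
    n, m = X.shape
    # objective: minimise t + C * sum(xi)
    c = np.concatenate([[0.0, 1.0], np.zeros(n), [0.0], C * np.ones(m)])
    # rows i = 1..m:  x^(i).w + b + y_i * xi_i = y_i
    top = np.hstack([np.zeros((m, 2)), X.T, np.ones((m, 1)), np.diag(y)])
    # last row:  (t+1) - t = 1
    bottom = np.concatenate([[1.0, -1.0], np.zeros(n), [0.0], np.zeros(m)])
    A = np.vstack([top, bottom[None, :]])
    b = np.concatenate([y, [1.0]])
    cones = [n + 2, 1] + [1] * m   # L^{n+2} x L^1 x (L^1)^m
    return A, b, c, cones
\end{verbatim}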
This problem has $O(n+m)$ variables, and $O(m)$ conic constraints (i.e. its rank is $r = O(m)$). Therefore, in the interesting case of $m = \Theta(n)$, it can be solved in $\widetilde{O}(\sqrt{n})$ iterations. More precisely, if we consider both the primal and the dual, in total they have $3m+2n+7$ scalar variables and $2m+4$ conic constraints.
In practice (as evidenced by the LIBSVM and LIBLINEAR libraries \cite{chang2011libsvm, fan2008liblinear}), a small modification is made to the formulations \eqref{prob:SVM} and \eqref{prob:LS-SVM}: instead of treating the bias separately, all data points are extended with a constant unit coordinate. In this case, the SOCP formulation remains almost identical, with the only difference being that the constraints $\vec{t} \in \mathcal{L}^{n+2}$ and $b \in \mathcal{L}^1$ are replaced by a single conic constraint $(\vec{t};b) \in \mathcal{L}^{n+3}$. This change allows us to come up with a simple feasible initial solution in our numerical experiments, without going through the homogeneous self-dual formalism of \cite{ye1994hsd}.
Note also that we can solve the LS-SVM problem \eqref{prob:LS-SVM}, by reducing it to a SOCP in a similar manner. In fact, this would have resulted in just $O(1)$ conic constraints, so an IPM would converge to a solution in $\widetilde{O}(1)$ iterations, which is comparable with the result from \cite{rebentrost2014quantum}.
\subsection{Experimental results}
We next present some experimental results to assess the running time parameters and the performance of our algorithm for random instances of SVM. If an algorithm demonstrates a speedup on unstructured instances like these, it is reasonable to extrapolate that the speedup is generic, as it could not have used any special properties of the instance to derive an advantage. For a given dimension $n$ and number of training points $m$, we denote our distribution of random SVMs with $\mathcal{SVM}(n, m, p)$, where $p$ denotes the probability that a datapoint is misclassified by the optimal separating hyperplane. Additionally, for every training set sampled from $\mathcal{SVM}(n, m, p)$, a corresponding test set of size $\lfloor m/3 \rfloor$ was also sampled from the same distribution. These test sets are used to evaluate the generalization error of SVMs trained in various ways.
Our experiments consist of generating roughly 16000 instances of $\mathcal{SVM}(n, 2n, p)$, where $n$ is chosen to be uniform between $2^2$ and $2^9$ and $p$ is chosen uniformly from the discrete set $\{ 0, 0.1, \dots, 0.9, 1 \}$. The instances are then solved using a simulation of Algorithm~\ref{alg:qipm} (with the target duality gap of $\epsilon=0.1$) as well as using the ECOS SOCP solver \cite{domahidi2013ecos} (with the default target duality gap). We simulate the execution of Algorithm \ref{alg:qipm} by implementing the classical IPM and adding noise to the solution of the Newton system \eqref{eq:Newton system}. The noise added to each coordinate is uniform, from an interval selected so that the noisy increment $(\Delta\vec{x}, \Delta\vec{y}, \Delta\vec{s})$ simulates the outputs of the tomography algorithm with precision determined by Theorem \ref{thm:main}. The SVM parameter $C$ is set to be equal to $1$ in all experiments. Additionally, a separate, smaller experiment with roughly 1000 instances following the same distribution is performed for comparing Algorithm~\ref{alg:qipm} with LIBSVM \cite{chang2011libsvm} using a linear kernel.
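A minimal sketch of the noise injection described above is given below; drawing each coordinate uniformly from $[-\delta/\sqrt{N}, \delta/\sqrt{N}]$, so that the Euclidean norm of the perturbation is at most $\delta$, is one plausible reading of the precision requirement of Theorem \ref{thm:main} and should be read as an assumption of the sketch rather than a prescription:
\begin{verbatim}
import numpy as np

def add_tomography_noise(increment, delta, rng=np.random.default_rng()):
    """Perturb an exact Newton increment so that the result mimics a
    tomography estimate with (assumed) absolute precision delta."""
    N = increment.size
    noise = rng.uniform(-delta / np.sqrt(N), delta / np.sqrt(N), size=N)
    return increment + noise
\end{verbatim}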
The experiments are performed on a Dell Precision 7820T workstation with two Intel Xeon Silver 4110 CPUs and 64GB of RAM, and experiment logs are available \change{at~\cite{FigshareData}}.
\begin{figure}
\centering
\begin{minipage}{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{complexity.png}
\captionof{figure}{Observed complexity of Algorithm~\ref{alg:qipm}, its power law fit, and 95\% confidence interval.}
\label{fig:complexity}
\end{minipage}~~
\begin{minipage}{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{svm_accuracies.png}
\captionof{figure}{Empirical CDF of the difference in accuracy between SVMs trained in different ways.}
\label{fig:SVM accuracies}
\end{minipage}
\end{figure}
By finding the least-squares fit of the power law $y = ax^b$ through the observed values of the quantity $\frac{n^{1.5} \kappa \zeta}{\delta^2}$, we obtain the exponent $b = 2.591$, and its 95\% confidence interval $[2.564, 2.619]$ (this interval is computed in the standard way using Student's $t$-distribution, as described in \cite{neter1996applied}). These observations, the power law, and its confidence interval are shown in Figure~\ref{fig:complexity}. Thus, we can say that for random $\mathcal{SVM}(n, 2n, p)$-instances, and fixed $\epsilon=0.1$, the complexity of Algorithm \ref{alg:qipm} scales as $O(n^{2.591})$. This represents a polynomial improvement over general dense SOCP solvers with complexity $O(n^{\omega+0.5})$. In practice, the polynomial speedup is conserved when compared to ECOS \cite{domahidi2013ecos}, which has a measured running time scaling of $O(n^{3.314})$, with a 95\% confidence interval for the exponent of [3.297, 3.330] (this is consistent with the internal usage of a \cite{strassen1969gaussian}-like matrix multiplication algorithm, with complexity $O(n^{2.807})$). Neglecting constant factors, this gives us a speedup of $10^4$ for $n=10^6$. The results from the LIBSVM solver indicate that the training time with a linear kernel has a complexity of $O(n^{3.112})$, with a 95\% confidence interval for the exponent of [2.799, 3.425]. These results suggest Algorithm~\ref{alg:qipm} retains its advantage even when compared to state-of-the-art specialized classical algorithms.
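The quoted exponents and confidence intervals follow the standard recipe of a least-squares fit in log-log space; an illustrative sketch (not the exact script used to produce Figure~\ref{fig:complexity}) is:
\begin{verbatim}
import numpy as np
from scipy import stats

def power_law_fit(x, y, conf=0.95):
    """Fit y = a * x**b by least squares in log-log space and return
    a, b and a Student-t confidence interval for the exponent b."""
    res = stats.linregress(np.log(x), np.log(y))
    half = stats.t.ppf(0.5 + conf / 2, len(x) - 2) * res.stderr
    return np.exp(res.intercept), res.slope, (res.slope - half,
                                              res.slope + half)
\end{verbatim}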
Additionally, we use the gathered data to verify that the accuracy of our quantum (or approximate) SVM is close to the optimum: Figure~\ref{fig:SVM accuracies} shows that both at train- and at test-time the accuracies of all three classifiers are most of the time within a few percent of each other, with Algorithm~\ref{alg:qipm} often outperforming the exact SOCP SVM classifier.
In conclusion, the performed numerical experiments indicate that Algorithm~\ref{alg:qipm} provides a polynomial speedup for solving SOCPs with low- and medium precision requirements. In particular, for SVM, we achieve a polynomial speedup with no detriment to the quality of the trained classifier.
\section{Introduction}
The Tarantula Nebula or 30 Doradus in the Large Magellanic Cloud is the most important star-forming complex in the Local Group.
At its heart is R136, the stellar cluster that contains the most massive stars known \citep{CSH+:2010,CCB+:2016}
which fall into the categories of O stars and more particularly hydrogen-rich Wolf-Rayet stars.
Between 2014 and 2016, a high-spatial-resolution X-ray imaging survey known as T-ReX
was undertaken with the \textit{Chandra}~Observatory
to study both the stellar population itself and the thermal and dynamical effects
wrought by stellar winds and supernova shocks on the surrounding interstellar medium.
The exposure time of 2 million seconds accumulated over nearly 21 months offers a far more intensive view of the stars in 30 Doradus than
anything so far available for the Milky Way although there have been significant amounts of X-ray observing time on individual massive
stars such as the nearby, single, early O-type supergiant
$\zeta$~Puppis \citep[e.g.][]{NFR:2012},
long used by \textit{XMM-Newton}~as a calibration source,
and the more distant long-period binaries
WR~140 \citep[e.g.][]{P:2012}
and $\eta$~Carinae \citep[e.g.][]{C:2005}.
There is a clear physical distinction between the soft intrinsic X-ray emission produced in the winds of single stars,
commonly attributed to many microscopic shocks supposed by \citet{FPP:1997} and others to form as a consequence of instabilities in the wind-driving mechanism,
and the harder, more luminous emission resulting from the macroscopic shock interaction between the counterflowing winds in a bound system of two or
more massive stars \citep[e.g.][]{SBP:1992, RN:2016}.
The variability properties of single and binary stars are also different. The \textit{XMM-Newton}~data of $\zeta$~Puppis collected intermittently
over many years have shown stochastic variability of unknown origin of up to 10\% or so on timescales of hours or days \citep{NOG:2013}
that appears typical of the intrinsic emission of single O stars in general.
Among the binaries, on the other hand, the highly eccentric system WR~140, of 7.9-year orbital period, shows changes in X-ray intensity many times
higher determined mainly by the relative disposition of the two stars through orbital separation and stellar conjunction \citep[e.g.][]{P:2012,SMT+:2015}.
Closer binary systems with periods, $P$,
of days rather than years,
such as the Wolf-Rayet binary
V444~Cygni (WN5o+O6III-V, $P=4.2\mr{d}$), can show high-amplitude phase-repeatable X-ray variability, more complex than at longer wavelengths,
where the hardness of the spectrum suggests colliding winds are involved \citep{JNH+:2015}.
On the other hand, X-ray emission from the prominent visual eclipsing O-star binary $\delta$~Orionis ($P=5.7\mr{d}$)
looked more like a single star in its behaviour during a simultaneous X-ray and optical campaign \citep{NHC+:2015} that covered most of an orbital cycle.
For single and binary stars,
variability in any part of the electromagnetic spectrum is an observational property of significance for determining the physical nature of a system.
The T-ReX survey,
which will be described in detail elsewhere,
offers unprecedented opportunities of this type in the X-ray band.
This paper concentrates on one
particular star, Melnick 34 or BAT99 116, abbreviated Mk~34,
near the centre of the broader survey.
It lies 10\arcsec or 2.5~pc projected distance
east of R136, the dense stellar core of 30 Doradus from which,
as shown in Figure~\ref{Fig:R136}, it is comfortably resolved by the \textit{Chandra}~telescope with the ACIS-I instrument
which offers by far the best X-ray resolving power available now and in the foreseeable future.
\begin{figure}
\includegraphics[width=\columnwidth,viewport=130 40 670 525,clip]{figures/R136_ds9.pdf}
\caption{\textit{Chandra}~ACIS-I X-ray image on a logarithmic intensity scale of the core of 30 Doradus accumulated
during the T-ReX campaign of an area of 19\arcsec$\times$19\arcsec centred on R136c. North is up and east to the left.
Also shown is the linear length scale at the 50~kpc distance of the LMC.}
\label{Fig:R136}
\end{figure}
Mk~34 is a hydrogen-rich Wolf-Rayet star, classified WN5ha by \cite{CW:2011} or WN5h:a by \cite{HRT+:2014}, who assigned a high mass of $390~\mathrm{M}_{\sun}$
through spectroscopic analysis.
In the VLT-FLAMES census of the hot luminous stars in 30 Doradus \citep{DCdK+:2013},
similar analysis suggested Mk~34 to be the second most luminous star
behind R136a1 and to have the highest mass-loss rate
\changed{of $6.3\times10^{-5}\mr{M}_{\sun}/\mr{yr}$}
among 500 confirmed hot stars\changed{, about 20\% higher than the estimate of \cite{HRT+:2014}}.
The colour index $B-V=0.25$ shows that the star suffers significant extinction and X-ray absorption in the interstellar medium.
Through the discovery of dramatic optical radial-velocity variations that increased smoothly over 11 days only to have vanished a week later,
\citet{CSC+:2011} established Mk~34, also designated P93\_1134, to be a highly eccentric binary of two very massive stars.
The data plotted in the middle panel of their Figure 1 suggest an orbital period
greater than 50 days and a mass ratio of about $0.8$.
Previous X-ray measurements showed it was the brightest stellar X-ray source in 30 Doradus, designated ACIS~132 of the 180 sources in the complete
1999 \textit{Chandra}~study
of \cite{TBF+:2006} and CX~5 of the 20 bright objects discussed by \cite{PZPL:2002}.
Before any published optical radial-velocity evidence, their common speculation was that Mk~34 is a binary with X-rays powered by colliding winds,
this despite a luminosity about an order of magnitude higher than any comparable system in the Milky Way
and no
evidence for variability, either in the \textit{Chandra}~data or in comparison with other measurements years earlier.
\section{\textit{Chandra}~T-ReX Campaign}
The 51 observations of the \textit{Chandra}~Visionary Program (PI: Townsley)
known as T-ReX, to signify the Tarantula -- Revealed by X-rays,
were executed over 630 days between 2014 May 3 and 2016 January 22.
They were all targeted at R136a1, the central star of the dense stellar core of 30~Doradus that lies 10\arcsec~from Mk~34,
and were executed at different
spacecraft roll angles according to season.
The ACIS-I field-of-view measures 16.9\arcmin$\times$16.9\arcmin~so that Mk~34 was always located in the central parts
of the detector where the angular resolution is at its sharpest.
The disposition of sources including Mk~34 with respect to detector edges or bad-surface geometry depends on the roll angle which
thus partly determines detection sensitivity.
Each observation was analysed with the ACIS Extract package \citep{BTF+:2010}
used in previous work on stellar clusters,
notably the \textit{Chandra}~Carina Complex Project \citep{TBC+:2011,BTF+:2011}.
For point sources such as Mk~34,
the package delivers overall X-ray count rates and medium-resolution spectra
with accompanying calibration material dependent on individual observing conditions
to allow robust comparison of measurements taken at different times.
For Mk~34 in this paper, the detected count-rates were corrected according to
the relative mean of the energy-dependent effective area tabulated for each
observation in the ARF file, the so-called associated response function, that also encodes the instrumental geometry.
The observation log in Table~\ref{Table:ChandraLog} shows the sequence of sensitivity-corrected count rates for Mk~34 during the T-ReX
campaign with
three earlier archived observations from 2006 January
bringing the total to 54.
As discussed by \citet{BTF+:2011} in their Appendix A and \citet{TBG+:2014}, sources observed with the \textit{Chandra}~ACIS instrument are subject to pile-up
in which two or more photons detected in identical or adjacent spatial and temporal readout elements are indistinguishable from single events
with the energies combined. These are then either accepted or discarded according to geometrical criteria known as grade selection.
The outcome of these two effects is a reduction in the count rate
that scales linearly with source brightness in Mk~34
and spurious hardening of the spectrum.
For each X-ray source, the ACIS Extract package provides simulated countermeasures in an overall
count-rate correction factor and a spectrum restored to remove pile-up distortion.
Mk~34 has been bright enough during most of its \textit{Chandra}~observational history in Table~\ref{Table:ChandraLog} for pile-up to be significant:
the count-rate correction factor increases linearly with detected count rate to a maximum of 1.17 about a median of 1.08.
While it is important to be aware of the extent of pile-up, the temporal analysis
discussed below is most reliably done without pile-up corrections which have therefore not been applied to
the count rates in Table~\ref{Table:ChandraLog}.
On the other hand, use of
the reconstructed spectra is unavoidable for the X-ray spectroscopy considered below in section~\ref{section:spectroscopy}.
\input{Chandra.log}
\subsection{\textit{Chandra}~X-ray photometry of Mk~34}\label{section:photometry}
The 54 observations of Mk~34 had exposure times between 9.8 and 93.7~ks about a median of 37.6
with count rates ranging between 2.2 and 76.2 counts per ks about a median of 35.8.
The 51 count rates of Mk~34 during the 630-day T-ReX campaign itself are plotted in Fig.~\ref{Fig:Photometry} and show
obvious variability. This first became clear when two high measurements separated by 3 days in 2014 August at about twice the
median
were followed 11 days later by a measurement close to zero and a further 6 days later at about half the median.
A second minimum stretching over about a week in 2015 December and other brighter measurements in Fig.~\ref{Fig:Photometry} strongly suggest
4 cycles of repeatable structure in the course of the 630-day T-ReX campaign.
Given the irregular sampling of the light curve, a more precise value was explored through minimum string-length analysis
of the quantities
$$\{x_i,y_i\}=\{r_i\cos{\phi_i},r_i\sin{\phi_i}\}$$
where $r_i$ is the count rate of the $i$th measurement and $\phi_i$ its phase, expressed as an angle, when folded at a trial period, with the measurements ordered in phase.
The results are plotted in Fig.~\ref{Fig:StringLength}.
The minimum of the string-length curve yields a best-value period of 155.1 days with an uncertainty of 1 or 2 days.
We also considered the Plavchan algorithm \citep{PJK+:2008} implemented by the
NASA Exoplanet Archive\footnote{http://exoplanetarchive.ipac.caltech.edu}.
Although this method is considered by its authors better suited to the detection of small-amplitude variations among much more
numerous data than available for Mk~34,
it gave similar results.
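For reference, a minimal sketch of the string-length statistic as used here is given below (Python; closing the loop between the last and first phase-ordered points is a common convention and an assumption of this sketch rather than a unique choice):
\begin{verbatim}
import numpy as np

def string_length(times, rates, period):
    """String length of the light curve folded at a trial period, using
    the (r cos(phi), r sin(phi)) representation described in the text."""
    phi = 2.0 * np.pi * ((times / period) % 1.0)
    order = np.argsort(phi)
    x = rates[order] * np.cos(phi[order])
    y = rates[order] * np.sin(phi[order])
    dx = np.diff(np.append(x, x[0]))   # close the loop
    dy = np.diff(np.append(y, y[0]))
    return np.sum(np.hypot(dx, dy))

def best_period(times, rates, trial_periods):
    lengths = [string_length(times, rates, p) for p in trial_periods]
    return trial_periods[int(np.argmin(lengths))]
\end{verbatim}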
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Mk34Photometry-crop.pdf}
\caption{\textit{Chandra}~ACIS-I X-ray count rates of Mk~34 during the T-ReX campaign. \changed{Many of the error bars are smaller than the plot symbol.}}
\label{Fig:Photometry}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Mk34StringLength-crop.pdf}
\caption{String length against trial period for the 54 \textit{Chandra}~ACIS-I X-ray count rate measurements of Mk~34 in Table~\ref{Table:ChandraLog}.}
\label{Fig:StringLength}
\end{figure}
The folded light curve with $P=155.1\mr{d}$ is shown in Fig.~\ref{Fig:X-rayCycle}, where the maximum count rate was taken to define zero phase.
The folded light curve shows accurate repeatability:
despite some missing coverage, it clearly shows an accelerating 30- or 40-day rise to maximum, followed by a sharp
decrease in a few days to a minimum that lasts a few days, before a steady 10-day recovery to a relatively stable state of probably more than
100 days in length. Count rates plotted in grey from the archived observations over 10 days in late 2006 January
agree within small errors
with those close in phase taken 8-10 years later as the rise began to accelerate.
The decrease after maximum appears very steep: in the folded light curve the closest measurements were $1.6$ and $7.9$ days later
when the count rate had fallen by factors of $1.3$ and more than $30$, respectively.
There is little evidence for rapid variability within individual observations as shown by the modest values of $\log{\mr{P}_\mr{KS}}$
reported in Table~\ref{Table:ChandraLog}, where $\mr{P}_\mr{KS}$ is the {\em P}-value for the one-sample Kolmogorov-Smirnov statistic
under the hypothesis of constant source flux.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Mk34FoldedPhotometry-crop.pdf}
\caption{Repeated cycles of \textit{Chandra}~ACIS-I X-ray count rates of Mk~34 folded at the period of 155.1 days derived from string-length analysis
centred on X-ray maximum. The points plotted in grey are those from the 3 archived observations in 2006 January, 8 or more years before
the other data.}
\label{Fig:X-rayCycle}
\end{figure}
\subsection{X-ray historical record}
The \textit{XMM-Newton}~science archive\footnote{http://nxsa.esac.esa.int/nxsa-web/\#search} shows that Mk~34 has been detected 3 times by the EPIC imaging spectrometers
with the count rates reported in Table~\ref{Table:XMMLog}
during observations of PSR~J0537-6909 in 2001 November, IGR~J05414-6858 in 2011 October and in one exposure in 2012 October of the 48 forming
a wide-area survey of the LMC.
The three EPIC instruments, pn, MOS1 and MOS2, observe simultaneously and cover nearly identical fields-of-view although
Mk~34 was not observed by the pn instrument in 2001 November because of the timing mode chosen for the pulsar
or by the MOS instruments in 2011 October because the star fell very close to the edge of the pn field-of-view where MOS coverage does not reach.
Despite these complications, the XMM measurements of Mk~34, whose identification is confused as 3XMM~J053843.[79]-690605 in the 3XMM-DR6 catalogue,
showed clear variability by about a factor of 2 very well correlated with the
\textit{Chandra}~folded light curve as shown by comparison with the ACIS-I count rates closest in phase also reported in Table~\ref{Table:XMMLog}.
The 3 observations all took place in the rising part of the X-ray light curve with the 2001 measurement, the brightest of the three, a few days
before maximum light 30 cycles earlier than the T-ReX maximum
suggesting an uncertainty in the period of the order of 0.1 days.
\input{XMM.log}
\subsection{New \textit{Swift}~X-ray photometry}
Once the T-ReX campaign had finished, Mk~34's putative period of 155.1 days was used to predict the timing of the next maximum in early 2016 May.
With the recognition that the source is strong enough for sufficiently accurate measurements in reasonable exposures with the \textit{Swift}~XRT instrument in
imaging PC mode,
an application for \textit{Swift}~ToO observations was submitted and approved to cover the anticipated maximum and subsequent minimum at intervals of about 7 days.
The results are shown in Table~\ref{Table:SwiftLog} where the sequence of detected count rates confirms the expectations based on the
\textit{Chandra}~folded light curve in Figure~\ref{Fig:X-rayCycle}: the highest count rate detected on 2016 May 3 within hours of the predicted maximum was about twice
as bright as the measurement about a month later. The much lower intervening count rates did not match the reductions of factors of 30 or so observed with
\textit{Chandra}, probably because of source confusion within the more modest angular resolution of the XRT which, in common with any other current X-ray instrument,
does not match that of ACIS images. \changed{The \textit{Swift}~XRT images show two clearly resolved sources coincident with Mk~34 and R140a, which are
separated by 54\arcsec, but cannot distinguish between Mk~34 and
R136c, which is only 7\arcsec~away and a median factor of 8.6 fainter in resolved \textit{Chandra}~images.}
\input{Swift.log}
\section{\textit{Chandra}~X-ray spectroscopy of Mk~34}\label{section:spectroscopy}
The accumulated moderate resolution ACIS-I X-ray spectrum of Mk~34 shown in Figure~\ref{Fig:Spectrum} gives a clear qualitative
picture of its general spectral properties. The spectrum is hard,
stretching beyond the clear detection of \ion{Fe}{XXV} at 6.7 keV with the presence of a strong continuum in addition to
emission lines at all energies.
This type of spectrum is characteristic of well-established colliding-wind binaries such as
WR~140 \citep[e.g.][]{PCSW:2005},
WR~25 \citep[e.g.][]{PC:2006}
and $\eta$~Carinae although, given the 50 kpc distance of the LMC, Mk~34 is more luminous.
In comparison with WR~25 at about 2.3 kpc in Carina, for example, Mk~34 is more than 20 times further away and normally only 10 times fainter,
suggesting a luminosity greater by roughly an order of magnitude.
The much weaker intrinsic emission of some single Wolf-Rayet stars such as
WR~78 \citep{SZG+:2010}
and WR~134 \citep{SZG+:2012}
also shows He-like line emission of \ion{Fe}{XXV} from very hot plasma but with no obvious continuum,
which therefore appears to be an important defining property of colliding winds.
At low energies, the spectrum of Mk~34 is marked by a steep cutoff below about 1 keV,
likely due to a combination of circumstellar and interstellar photoelectric absorption,
leaving little
flux at the energies most affected by the instrumental contamination discussed below.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Mk34Spectrum-crop.pdf}
\caption{\textit{Chandra}~ACIS-I X-ray spectrum of Mk~34 accumulated from all the available data.}
\label{Fig:Spectrum}
\end{figure}
Each of the 54 individual spectra that contribute to Figure~\ref{Fig:Spectrum} is accompanied by customised response files
that reflect the
contemporary calibration of the ACIS instrument
and enable detailed models of the observed spectra to be constructed. Of particular relevance is knowledge of the
instrumental contamination\footnote{http://cxc.harvard.edu/ciao/why/acisqecontam.html}
that most affects the sensitivity at low energies and has been building up in the ACIS optical path
since launch in 2000, more quickly since late 2009.
These considerations are less important in Mk~34 than in objects with softer spectra.
Independent of count rate or phase, all 54 spectra were
very similar in shape. For each $0.5-8$~keV spectrum, ACIS~Extract calculates the mean observed energy, $\langle\mr{E}\rangle$,
of its constituent events.
The set of these values has a narrow distribution characterised by a mean absolute deviation of 0.042~keV about a median of
2.238~keV. The brightest and second brightest observations had $\langle\mr{E}\rangle=2.284$~keV and $\langle\mr{E}\rangle=2.305$~keV,
respectively, comfortably within the overall distribution.
Only the faintest spectra immediately after maximum
were significantly harder probably due to increased photoelectric absorption, as discussed below.
Spectral models, consisting of a constant empirical emission spectrum modified by 54 time-dependent values of luminosity, $L_X$, and absorbing column density, $N_X$, of LMC abundances, were fit with XSPEC v12.6.0v simultaneously to all 54 spectra.
The emission was modelled with a
2-temperature thermal plasma of variable abundances with the best-fit parameters shown in Table~\ref{Table:XspecModel}.
The purpose of the model is not to imply the simultaneous presence of two
equilibrium plasmas of distinct temperature or to study the abundances
but instead accurately to reproduce on a phenomenological basis the shape of the underlying spectrum
in order to assess the evolution of luminosity and absorption.
The 54 values of $L_X$ and $N_X$ are plotted in Figure~\ref{Fig:LxNx} in units characteristic of familiar colliding-wind binary systems
in the Galaxy of $10^{34}\mr{erg}~\mr{s}^{-1}$ and $10^{21}\mr{cm}^{-2}$ for luminosity and column density respectively.
In Mk~34, both show coherent repeatable behaviour as a function of phase.
Figure~\ref{Fig:LxNx} also suggests how pile-up has distorted the observed spectra by plotting with open symbols the values of $L_X$ and $N_X$
derived instead from analysis of the set of pile-up restored spectra supplied by ACIS Extract.
Recalling that pile-up affects typically 8\% and up to 17\% of counts by discarding or
moving them to higher energies, apparent hardening of the spectrum
of Mk~34 at its brightest, which might have been thought due to increasing absorption, was more likely caused by pile-up.
Mk~34 is an order of magnitude more luminous than any similar galactic system.
After apastron, its luminosity increases slowly at first before accelerating to reach a maximum brighter by a factor of about 3
that immediately precedes an event that looks like an eclipse. Within the limits of the data available,
the steady recovery is reproducible between cycles and shows a sharp transition at $+20$ days after periastron
to the gentle decrease that leads again to apastron.
The lack of colour variations throughout most of the orbit suggests that a constant
interstellar component might account for most of the absorption observed in most of the spectra with
a value of about $15\times10^{21}\mr{cm}^{-2}$ according to Figure~\ref{Fig:LxNx}.
This is indeed consistent with expectations from Mk~34's optical and IR photometry, which shows some of the highest
reddening among the hot stars in 30 Doradus.
For the narrow-band colour excess $E_{b-v}$, \citet{DCdK+:2013} reported a value of $0.47$ compared to $0.75$ from the models of \cite{HRT+:2014}.
For the elevated gas-to-dust ratio in R136 cited by \citet{DCdK+:2013} and the low metallicity of the LMC, $\mr{Z}=0.5$, also used in
the X-ray absorption model, this would imply an interstellar hydrogen
column density of $11$ or $17\times10^{21}\mr{cm}^{-2}$ for the competing values of $E_{b-v}$, bracketing the X-ray value.
Although through most of the orbit there is little evidence of X-ray colour changes,
absorption does reach a maximum of about double the interstellar value
for 2 or 3 weeks centred about 10 days after X-ray maximum
during the single X-ray eclipse revealed by the light curve when one of the stars is probably passing across the line-of-sight.
Here low count rates make some estimates uncertain
although there are some more precise measurements after eclipse egress.
Another task of future observations with more complete phase coverage will be to try to identify a second eclipse.
\input{XSPECModel}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Mk34LxNxLxNx-crop.pdf}
\caption{Phase-dependent \textit{Chandra}~ACIS-I estimates of the X-ray luminosity, $L_X$, in red and absorption, $N_X$, in blue of Mk~34 in units characteristic
of colliding-wind systems in the Milky Way. The filled symbols with error bars show estimates ignoring pile-up; the open symbols without error bars were derived
from spectra reconstructed to remove the effects of pile-up.}
\label{Fig:LxNx}
\end{figure}
\section{Form of the X-ray light curve}
The distinctive shape of the folded X-ray light curve of Mk~34 is very similar in phased form to that seen recently in the
Galactic very massive Wolf-Rayet colliding-wind binary system WR~21a \citep{STM+:2015,GN:2016} with a combination of
100~ks of
\textit{XMM-Newton}~data at four phases and 306~ks of a \textit{Swift}~ToO XRT campaign of 330 snapshots spread evenly
over the entire 31.672-day period
of its optical radial velocity orbit \citep{NGB+:2008,TSF+:2016}.
A comparison of the two stars is shown in Figure~\ref{Fig:Mk34WR21a} where Mk~34's zero phase was shifted forward
10 days from its observed maximum count rate and the \textit{Swift}~XRT count rate was halved.
Despite the differences in period of a factor of 5 and luminosity of more than an order of magnitude,
the similarities are striking in the gradual rise to maximum followed by the subsequent deep minimum and asymmetric recovery.
The well-established spectral types of WR~21a, O3/WN6ha+O3Vz((f*)), and its Keplerian orbital elements \citep{TSF+:2016}
allow an assessment of the relationship between X-ray
orbital light-curve morphology and stellar and orbital geometry as discussed in part by \cite{GN:2016}.
Its minimum is probably caused, qualitatively at least, by some combination of three mechanisms:
absorption by the extended wind of the Wolf-Rayet star;
eclipse by its stellar core;
and reduced upstream shock velocities.
The potential utility of X-ray measurements is emphasised by the lack of eclipses at longer wavelengths
although quantitative models remain to be devised.
Once Mk~34's orbit is known in the near future, such quantitative models would seem to be required as
simple arguments, although hard to fault, appear to fail:
if the similarity of their X-ray light curves were to suggest similar orbital eccentricities for Mk~34 and WR~21a,
then, as demanded by Kepler's laws, scaling the sum of WR~21a's minimum masses of $102 \pm 6~\mr{M}_{\sun}$ \citep{TSF+:2016}
by the period ratio and the cube of the relative velocity amplitudes according to \citet{CSC+:2011}
would suggest unfeasibly
high combined minimum masses of over $1000~\mr{M}_{\sun}$ for Mk~34.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Mk34andWR21a-crop.pdf}
\caption{Phase-dependent \textit{Chandra}~ACIS-I X-ray count rate of Mk~34 in comparison with the \textit{Swift}~XRT light curve of
the Galactic Wolf-Rayet eccentric binary system WR~21a after setting zero phase of Mk~34 10 days after its X-ray maximum.}
\label{Fig:Mk34WR21a}
\end{figure}
Mk~34 and WR~21a are not the only binary systems with very massive Wolf-Rayet primary stars that show
high-amplitude orbital phase-related X-ray variability of apparently similar type.
Of \citeauthor{CW:2011}'s (\citeyear{CW:2011}) very massive stars, WR~25 in Carina and WR~43c in
NGC~3603 are two other clear but more complex examples. The binary properties of the four systems,
with periods ranging from about 10 to 200 days, are shown in Table~\ref{Table:vmWRX}
which also includes rough estimates of the X-ray luminosities at apastron, when the stars are furthest apart;
at X-ray maximum, often close to periastron; and at eclipse minimum.
Phased X-ray light curves are plotted in Figure~\ref{Fig:vmWRX} scaled by distance to show their relative luminosities.
\textit{Swift}~XRT data of WR~21a and WR~25 were scaled by a factor of 4.0, calculated by comparing the ACIS-I
count rate of WR~21a during the observation with ObsID 9113 to the XRT light curve at similar phases.
Mk~34 is the most luminous by about an order of magnitude, probably signalling the presence of
two extreme stars in close proximity with high mass-loss rates and high-velocity winds.
Repeatable events with all the appearance of eclipses occur in all four systems.
For the well-established orbits of WR~21a and WR~25 these occur very close to
inferior conjunction when the primary Wolf-Rayet star is passing in front of its binary companion and presumably also, therefore,
in front of the X-ray source between the two stars.
Like Mk~34, both these stars also show increased X-ray absorption in narrow intervals around these phases.
These do not apply to the system of shortest period, WR~43c \citep{SCC+:2008},
one of the central stars in the cluster NGC~3603,
where the smoother minimum rather occurs near a quadrature.
As argued above, once orbital geometry has been defined by Kepler's laws, X-ray eclipses by star and wind
promise direct estimates of fundamental parameters such as orbital inclination,
stellar radius and mass-loss rate such as attempted, for example, by \cite{P:2012} for WR~25.
The data shown here for WR~25 and WR~43c will be discussed in detail elsewhere.
Luminous colliding-wind X-ray sources of high-amplitude orbital-driven variability are by no means ubiquitous among
very massive Wolf-Rayet binary systems. The close binary WR~43a (WN6ha+WN6ha, P=3.7724d) in NGC~3603 \citep{SCC+:2008} is normally less luminous in X-rays
than its neighbour WR~43c. Also fainter by an order of magnitude or more are the short-period WR~20a (O3If*/WN6+O3If*/WN6, P=3.686d) \citep{NRM:2008},
and the longer period WR~22 (WN7h+O9III-V, P=80.336~d) \citep{GNS+:2009} although the phase coverage of these last two systems has been poor.
Even among colliding-wind binaries in general, Mk~34 is arguably
the most luminous system yet identified. Also shown in Table~\ref{Table:vmWRX}
are comparisons with the brightest and best observed objects of this class, namely
$\eta$~Carinae \citep{HCR+:2014,CHL+:2015,CLM+:2017},
WR~140 \citep{PCSW:2005,SMT+:2015} and
WR~48a \citep{WvdHvW+:2012,ZTG+:2014}.
Despite much longer orbital periods of years and decades,
all show X-ray cycles with many properties in common with Mk~34 with differences only of detail.
In addition to a rich set of lines, spectra feature hard, bright X-ray continuum emission with equivalent temperatures of 4-5 keV.
After minimum at apastron, luminosities rise gradually to maximum shortly before periastron
before sudden, complex eclipses very close to periastron precede notably slow and asymmetric recoveries
towards apastron and the beginnings of a new, usually repeatable cycle.
The only rival to Mk~34's status as the most X-ray luminous of the colliding-wind binaries is the LBV system $\eta$~Carinae
which displays several characteristics that are so far
unique \citep{CHL+:2015} including flares and eclipses of variable shape.
It was during a flare in the approach to the most recent periastron passage in 2014 that $\eta$~Carinae reached
the exceptional maximum luminosity reported in Table~\ref{Table:vmWRX}
that was about 50\% higher than in previous cycles \citep{CLM+:2017}.
For Mk~34, while the good agreement between the count-rate contrasts observed with \textit{XMM-Newton}~and \textit{Swift}~
tends to suggest a reproducible maximum, data at maximum are sparse, so
this should be tested with more \textit{Chandra}~data of high spatial resolution.
While Mk~34 is likely to be a system of two hydrogen-rich Wolf-Rayet stars,
the high luminosity of $\eta$~Carinae is thought to be powered by the interaction of a slow, dense LBV wind with the much faster wind
of an unidentified companion.
Nevertheless, given the obvious similarity of spectra and light curves, Mk~34 and $\eta$~Carinae probably share many aspects of shock physics and geometry.
\input{vmWRX}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Mk34etal-crop.pdf}
\caption{Phase-dependent \changed{ACIS-I X-ray count rate multiplied by the square of the distance in kpc} of Mk~34
in comparison with three other X-ray bright binary systems in the Galaxy with very massive Wolf-Rayet primary stars.}
\label{Fig:vmWRX}
\end{figure}
\section{Conclusions}
Mk~34 is one of the most prominent Wolf-Rayet stars in the LMC and its brightest stellar X-ray source.
\textit{Chandra}~ACIS-I observations made as part of the 2~Ms T-ReX campaign on the dense stellar cluster
30 Doradus have revealed that it has a repeatable X-ray cycle of 155.1 days that is
confirmed by archived \textit{XMM-Newton}~data and new ToO observations with \textit{Swift}.
It is the most X-ray luminous colliding-wind binary system yet identified,
exceeding even $\eta$~Carinae.
Though lacking coverage at some crucial phases,
the form of the phased X-ray light curve of Mk~34 appears repeatable and
very similar to that of the Galactic colliding-wind
binary system WR~21a, which is of shorter period and lower luminosity but whose more detailed light curve
suggests that a combination of binary, geometrical and radiative mechanisms is responsible for the distinctive gradual rise
to maximum before the sudden onset of a deep minimum and gradual recovery.
We are in the process of establishing the
optical radial velocity orbit of Mk~34 in order to define how the geometrical disposition of the stars is related to its X-ray behaviour.
Although located 50~kpc away in the LMC, Mk~34 is, most of the time, a brighter X-ray source than many well-known
Wolf-Rayet systems in the Galaxy. As a result, high-resolution observations with the \textit{Chandra}~HETG will also be feasible to
confirm a variety of low metal abundances in the LMC and study the physics and dynamics of the shocks responsible for its X-rays.
Detailed future X-ray photometry holds the promise of direct
estimates of the radius of one or both of the stellar components of Mk~34, widely thought to be among the most massive of stars.
\section*{Acknowledgements}
Broos and Townsley were supported by {\em Chandra X-ray Observatory} general observer (GO) grants GO5-6080X and GO4-15131X (PI: L.\ Townsley), and by the Penn State ACIS Instrument Team Contract SV4-74018, issued by the {\em Chandra} X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060.
Partial financial support for Pollock and Tehrani was provided by the United Kingdom STFC.
We are very grateful to the \textit{Swift}~ToO program, its Project Scientist and Observatory Duty Scientists for awards
of observing time, general support and extensive use of UKSSDC data analysis tools.
\changed{
This research has made use of SAOImage DS9, developed
by Smithsonian Astrophysical Observatory.
}
\bibliographystyle{mnras}
\section{Introduction}
Ten years ago Chevalier and Clegg (1985) published a brief letter in
which they conjectured that a high supernova rate in a galactic
nucleus could heat the surrounding gas to temperatures at which the
sound speed exceeds the escape velocity of the galaxy. This hot, tenuous
gas would expand outward from the galaxy in the form of a ``wind,''
enriching the surrounding intergalactic medium. Since that time, the
galactic wind model has been confirmed by observations of over a dozen
galactic-scale outflows: e.g., M82 (\cite{AT78}), NGC~253
(\cite{FT84}), NGC~1569 (\cite{HDLFGW95}), NGC~1705 (\cite{MFD89}),
NGC~1808 (\cite{P93}), NGC~2146 (\cite{AHWL95}), NGC~3079
(\cite{VCBTFS94}), NGC~3628 (\cite{FHK90}), NGC~4051
(\cite{CHSMTGMP97}), NGC~4666 (\cite{DPLHE97}), Mk~509
(\cite{PBAC83}). These massive ejections of gas and dust are usually
observed as large ($\sim$few kpc), roughly conical structures of
filaments originating in the nuclear regions and oriented along the
minor axes of the galaxies. They are therefore most often observed in
edge-on disk systems (e.g., M82, NGC~1808), in optical emission lines
(\cite{LH95}), radio continuum (\cite{BODDP93}), and soft X-rays (e.g.,
\cite{BST95}; \cite{AHWL95}). Recent spectral studies have also shown
the ability to detect winds in face-on galaxies using their kinematic
signatures (e.g., NGC~2782 [\cite{BSK92}], Mk~231 [\cite{HK87};
\cite{KCTK97}]).
In some cases these winds are indeed driven by supernovae and massive
stellar winds from a central starburst (e.g., M82, NGC~1569;
\cite{LH95}; \cite{LH96}), while other winds appear to be powered by
more exotic forces associated with the central engines of AGN (e.g.,
NGC~3079, NGC~4051; \cite{CBGOLTMC96}; \cite{CBGOC96}), and some
galaxies appear to exhibit both starburst and AGN characteristics
(e.g., NGC~1808 [\cite{FBW92}], Mk~231 [\cite{LCM94}]). Because of
the low densities of the wind material, emission from optical lines is
often very difficult to detect, except in nearby galaxies. The x-ray
emission from these winds is comparable in luminosity to the optical
emission lines, but is visible on larger spatial scales and therefore
detectable to greater distances.
\placetable{m82statstab}
Due to its proximity and favorable inclination angle (see
Tab.~\ref{m82statstab}), the irregular disk galaxy M82 (NGC~3034) has
been studied extensively as a prototype galactic wind system.
Following the original discovery (\cite{LS63}), the first detailed
spectroscopic study of the optical emission line filaments
(\cite{BBR64}) revealed kinematics indicative of a bipolar, roughly
conical, outflow of gas along the minor axis of the galaxy.
Thirty years and over 500 published papers later, the initial
interpretation of the M82 filaments in terms of an outflow still
stands. Support for this picture extends from radio to gamma ray
wavelengths. The starburst nature of the nucleus of M82 has been
verified through its strong infrared emission (e.g., \cite{SHN87};
\cite{RLSNKLdH88}; \cite{TCJDD91}) and numerous compact radio
supernovae (e.g., \cite{KBS85}; \cite{HTCCY94}; \cite{MPWASd94}).
Models of the starburst evolution produce appropriate quantities of
energy and mass on plausible timescales to create and sustain the
observed nuclear and galactic wind behavior (e.g., \cite{RLRT93};
\cite{DM93}). The wind itself has been observed optically in both
spectral (e.g., \cite{MHv87}; \cite{MGDP95}) and imaging studies
(e.g., \cite{IvASTY94}), in emission lines (e.g., \cite{AHM90}) and
broadband radiation (e.g., \cite{OM78}). In particular, kinematic
evidence for the existence of a galactic wind in M82 has been
presented by several optical emission line studies (e.g., \cite{H72};
\cite{AT78}; \cite{BT88}; \cite{HAM90}; \cite{MGDP95}), and even by
molecular observations (\cite{NHHSHS87}).
An extensive x-ray halo has been observed oriented along the minor
axis, extending 5--6~kpc from the disk, far beyond the visible extent
of the optical filaments (e.g., \cite{WSG84}; \cite{SPBKS89};
\cite{BST95}). A few percent of the supernova energy from the
starburst has been deposited in this hot ($T\sim10^8$~K) ``x-ray
wind.'' A large ($r\sim8$~kpc) spherical halo has been reported at
radio continuum wavelengths (\cite{SO91}), and has been interpreted as
synchrotron emission from relativistic electrons in the outflowing
wind.
With this wealth of data, it is surprising that little attempt has
been made to undertake detailed comparisons of models and
observations. Large-scale galactic winds were first proposed to
account for the lack of gas in elliptical galaxies (\cite{MB71}). The
theory of these winds has since evolved in tandem with research on
starburst-driven galactic winds (e.g., \cite{WC83}; \cite{SKC93a};
\cite{SKC93b}). Since the original starburst wind model
(\cite{CC85}), advances have been made in both analytical studies
(e.g., \cite{KM92a}; \cite{KM92b}) and hydrodynamic simulations (e.g.,
\cite{TI88}; \cite{TB93}; \cite{SBHL94}; \cite{SBHB96}). Models of
winds in other astrophysical situations, such as stellar winds (e.g.,
\cite{SRLL91}) and winds from AGN (e.g., \cite{S93}; \cite{AL94};
\cite{ALB94}), have contributed to our understanding as well. On a
broader scale, starburst-driven winds have important ramifications
for a variety of astrophysical and cosmological situations, including
enrichment of the intergalactic medium, contributions to the diffuse
X-ray background, the evolution of dwarf and interacting galaxies, and
the formation of elliptical galaxies through mergers (see \cite{HAM90}
and references therein).
Toward the goal of investigating this model, we have obtained
high-resolution imaging Fabry-Perot observations of M82 in several
optical emission lines. In Section~\ref{observations} of this paper
we present our Fabry-Perot observations and describe the methods of
data reduction. In Section~\ref{maps}, we present the derived
two-dimensional emission-line maps of M82 illustrating the
distribution of line flux, ionized gas velocity, and ionization state
across the outflow. In Section~\ref{discussion}, we describe our new
data for the disk, halo, and outflow and provide comparisons with
other published observations and a number of kinematic models. We
present our conclusions in Section~\ref{conclusions}. A subsequent
paper describes refinements to our kinematic and ionization models to
accommodate new observations from the {\it Keck\/} and {\it Hubble
Space Telescopes}.
\section{Observations and Reductions}
\label{observations}
\subsection{Fabry-Perot Observations}
Observations of M82 were obtained in February, 1986 at the 3.6-meter
Canada-France-Hawaii telescope (CFHT) on Mauna Kea, in the emission
lines of \hbox{H$\alpha$}\ $\lambda$6563 and \hbox{[{\ion{N}{2}}]}\ $\lambda$6583. The Hawaii
Imaging Fabry-Perot Interferometer (HIFI; \cite{BT89}) was used with
an ET-50 etalon from Queensgate Instruments. The etalon coatings were
formulated to provide a free spectral range of 85\AA\ and a finesse of
60, for a spectral resolution of 1.4\AA\ (65~\hbox{km~s$^{-1}$}\ at \hbox{H$\alpha$}). The
$512\times 512$ pixels of the Texas Instruments CCD, after $2\times 2$
on-chip binning, each subtend 0\farcs 86 ($\sim$13~pc at the distance
of M82). This scale undersampled the estimated 1\arcsec\ seeing disk
but was necessary due to the low flux levels of the outflow emission.
The 3\farcm 5 field covers almost the entire region of minor-axis
optical filaments. Other characteristics of the observing system are
provided in Table~\ref{HIFItab}, along with their experimentally
determined values.
A second set of Fabry-Perot observations was obtained in March, 1992
at the University of Hawaii 88-inch telescope on Mauna Kea, in the
emission line of \hbox{[{\ion{O}{3}}]}\ $\lambda$5007. Again the HIFI system was
used, with an ET-70 etalon owned by Dr.\ Sylvain Veilleux. Although
the free spectral range of this etalon (61\AA) was smaller than that
used for the \hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\ observations, the finesse was essentially
identical, as were the effective spatial and velocity resolutions
(0\farcs 85 pix$^{-1}$ and 60~\hbox{km~s$^{-1}$}\ at \hbox{[{\ion{O}{3}}]}, respectively) and the
field of view. The characteristics of the observing system are
provided in more detail in Table~\ref{HIFItab}. In both cases, the
etalon was tilted to remove internal reflections and paired with wide
(50\AA\ FWHM) interference filters to eliminate interorder confusion
problems.
\placetable{HIFItab}
\subsection{Data Reductions}
Most of the data reduction was performed under the Zodiac image
processing system (\cite{M82}; \cite{S97}), encompassing standard
routines as well as a number of programs written by the authors and
optimized for Fabry-Perot data. IRAF\footnote{IRAF is distributed by
the National Optical Astronomy Observatories, which is operated by the
Association of Universities for Research in Astronomy, Inc. (AURA)
under cooperative agreement with the National Science Foundation.} was
also used for a small number of specific problems. A summary of the
data reduction steps is now provided; see \cite{S95} for more details.
\subsubsection{CCD Reductions}
The bias level was subtracted from each frame, using the average of
several CCD bias frames. Cosmic rays were identified by hand and removed
by nearest-neighbor interpolation. Bad columns were also identified by
hand and then removed by sinc interpolation across each row of the bad
column. The sinc function was selected for use in the interpolation
because of its favorable Fourier properties: its Fourier transform is
the top-hat function, implying even sampling of the image noise
structure over a finite range of frequency.
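As a schematic illustration of this step only (not the Zodiac routine itself), the repair of a single bad column can be written as a band-limited, sinc-kernel reconstruction of the missing sample in each row; the window half-width and the assumption of negligible power above half the Nyquist frequency are our illustrative choices:
\begin{verbatim}
import numpy as np

def fix_bad_column(image, bad_col, half_width=10):
    # Band-limited (sinc-kernel) reconstruction of the missing sample
    # in each row; for a signal confined to half the Nyquist frequency
    # the weights sinc(k/2), k != 0, recover the sample exactly.
    cols = np.arange(image.shape[1])
    sel = (np.abs(cols - bad_col) <= half_width) & (cols != bad_col)
    w = np.sinc((cols[sel] - bad_col) / 2.0)
    w /= w.sum()                    # renormalize the truncated kernel
    out = image.copy()
    out[:, bad_col] = image[:, sel] @ w
    return out
\end{verbatim}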
A flatfield image was constructed for our data by summing a series of
frames of a white source, scanning the etalon through a free spectral
range. Any first-order slope in the illumination pattern was removed
with a two-dimensional linear fit. (Sky flats were not available.)
The flatfield was then divided by a smoothed ($\sigma \sim 3$ pixels)
version of itself, normalized, and divided into each data and
calibration frame. Due to problems with on-chip binning, the movement
of transient defects between the images of each cube, and temperature
fluctuations of the CCD, the flatfielding process did not
substantially alter the noise content of most frames. Fortunately,
the pixel-to-pixel sensitivity variations comprise a noise source at a
level of only $\sim$1.6\%, a small value compared to other Fabry-Perot
photometry uncertainties.
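The flatfield construction can be summarized by the following sketch; the array names, the planar fit and the smoothing scale are illustrative, and the reductions themselves used Zodiac routines rather than this code:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def build_flatfield(white_frames):
    # sum whitelight frames spanning one free spectral range
    flat = np.sum(white_frames, axis=0).astype(float)
    ny, nx = flat.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # remove any first-order (planar) slope in the illumination
    A = np.column_stack([np.ones(flat.size), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, flat.ravel(), rcond=None)
    flat = flat / (A @ coeffs).reshape(ny, nx)
    # divide by a smoothed (sigma ~ 3 pixel) copy and normalize
    flat = flat / gaussian_filter(flat, sigma=3)
    return flat / np.median(flat)
\end{verbatim}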
\subsubsection{Fabry-Perot Reductions}
{\it Cube building.\/} In order to randomize variations in the sky
background due to changing hour angle, approaching dawn, etc., as well
as instrumental fluctuations, the data frames were observed in a
random or staggered order with respect to the etalon optical gap. The
frames were first sorted by gap and stacked to form cubes: target
(M82) data cubes through each filter, a whitelight or flatfield cube
through each filter, a standard lamp calibration cube, and standard
star cubes.
{\it Whitelight calibration.\/} The whitelight cube was used to map
the spatial and spectral shape of the interference filter, which was
then removed in the same manner as flatfielding. The whitelight cube
was heavily smoothed ($\sigma \sim 15$ pixels) in the spatial
dimension, re-sampled from 0.65\AA\ to 0.99\AA\ steps, to match the
etalon gap spacings of the data cubes, then normalized and divided
into the corresponding data cubes.
{\it Frame alignment.\/} The frames in each data cube were aligned
using two stars near the disk of the galaxy. (The single bright star
in the field, AGK~3+69~428 [\cite{BG82}], 2\farcm 5 southwest of the
nucleus, saturated the detector.) Fractional pixel shifts were made
by two-dimensional spline interpolation of the images. The spatial
registration is accurate to better than one pixel ($\la 0\farcs 5$).
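Schematically, each frame is registered as follows; scipy stands in for the actual interpolation code, a third-order spline is an illustrative choice, and a single reference star is used here for brevity whereas the actual registration used two:
\begin{verbatim}
import numpy as np
from scipy.ndimage import center_of_mass, shift

def register_frame(frame, star_box, ref_xy):
    # centroid of a reference star inside a small box (y0, y1, x0, x1)
    y0, y1, x0, x1 = star_box
    cy, cx = center_of_mass(frame[y0:y1, x0:x1])
    star_xy = (y0 + cy, x0 + cx)
    # fractional-pixel shift by spline interpolation onto ref_xy
    dy, dx = ref_xy[0] - star_xy[0], ref_xy[1] - star_xy[1]
    return shift(frame, (dy, dx), order=3, mode="nearest")
\end{verbatim}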
We note here that, although the HIFI system and the tilted etalon
provide a field that is relatively free of ghost reflections, the
location of the bright star AGK~3+69~428 in the southwest corner of
the frame was such that a pair of concentric ghost images of the star
appear opposite the optical axis, in the southeast corner of the
frame. Following frame registration, a region encompassing the ghost
images was masked from further analysis. This is unfortunate, in that
the minor axis emission of M82 runs directly through this region, but
it demonstrates the care that must be taken when observing with
Fabry-Perot systems, which always have prevalent ghost patterns.
{\it Ring fitting.\/} Temperature and humidity variations produce
drifts in the expected spectral response of the Fabry-Perot
etalon. In order to parametrize the spectral and spatial drifts in
the Fabry-Perot system, we obtained a set of calibration lamp images,
taken periodically throughout the night, all at the same etalon
spacing. Elliptical fits to the rings revealed no noticeable
variations in the radius or circularity of the rings, indicating that
the spectral stability of the etalon was extremely good. Flexure of
the telescope system was detectable as a two pixel ($\sim$1\farcs 5)
shift of the ring centers (i.e., the optical axis) over the course of
each night. The frame alignment procedure has removed this, with
minimal effect along the wavelength axis.
{\it Data cube resampling and smoothing.\/} While our \hbox{H$\alpha$}+\hbox{[{\ion{N}{2}}]}\ data
were sampled regularly at 0.99\AA\ (45~\hbox{km~s$^{-1}$}) intervals across the \hbox{H$\alpha$}\
line, observing time constraints required us to interpolate several
frames across the \hbox{[{\ion{N}{2}}]}\ portion of the spectra, where the sampling was
a factor of two coarser. Although a spline interpolation was
performed without difficulty, a slightly larger error should be
assumed for the final \hbox{[{\ion{N}{2}}]}\ fluxes and velocities. The \hbox{[{\ion{O}{3}}]}\ data
set required extrapolation of a single (continuum) frame at either end
of the spectra.
A light Hanning filter was then applied along the spectral axis of
each cube, in order to allow efficient automated fitting of the large
numbers of the spectra. Tests indicate this had a negligible effect
on the final fit parameters.
{\it Phase calibration.\/} The instrumental profile of a Fabry-Perot
interferometer is a complex function of spatial position, wavelength,
and optical gap spacing, given by the well-known Airy function
(\cite{BT89}). Due to the large free spectral range and low
interference order of the etalons, the monochromatic ``phase
surfaces,'' as observed in the emission lines of a calibration lamp,
were well parametrized by the analytical expression for the
three-dimensional Airy function. A fit to this function determined
several system constants listed in Table~\ref{HIFItab}. This fit was
then used to shift each spectrum in the data cube by the appropriate
value to generate monochromatic frames. The convergence of each of the
two night sky lines into a single frame confirmed the accuracy of the phase
correction for the \hbox{H$\alpha$}+\hbox{[{\ion{N}{2}}]}\ data set. For the \hbox{[{\ion{O}{3}}]}\ data set,
however, a poorly sampled calibration cube prevented us from
performing an accurate phase correction. Therefore the \hbox{[{\ion{O}{3}}]}\
observations will be used for flux measurements and morphology, but
not for kinematic studies.
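The essence of the phase correction is sketched below; the paraxial, quadratic form of the wavelength offset and the variable names are ours, standing in for the full three-dimensional Airy fit actually used:
\begin{verbatim}
import numpy as np

def phase_correct(cube, x0, y0, k, dlam):
    # cube[z, y, x]: spectral frames separated by dlam (Angstroms).
    # Off-axis pixels see the etalon passband shifted blueward roughly
    # as k * r**2; shift each spectrum back to build monochromatic
    # frames (linear interpolation used here for simplicity; the sign
    # of the shift depends on the scan direction).
    nz, ny, nx = cube.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    off = k * ((xx - x0) ** 2 + (yy - y0) ** 2) / dlam   # in frames
    z = np.arange(nz)
    out = np.empty_like(cube, dtype=float)
    for j in range(ny):
        for i in range(nx):
            out[:, j, i] = np.interp(z + off[j, i], z, cube[:, j, i])
    return out
\end{verbatim}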
{\it Sky subtraction.\/} A limited field of view prevented us from
obtaining a sky spectrum devoid of galaxy emission. We therefore
removed the two bright night sky lines in the \hbox{H$\alpha$}+\hbox{[{\ion{N}{2}}]}\ data set by
subtracting Gaussian components with the proper velocity and a mean
flux level as observed across the field. The lines were identified as
OH emission at 6553.61\AA\ and 6577.28\AA\ (\cite{OM92}), providing
the wavelength calibration for the spectral axis. The night sky
continuum was not removed from the data. The \hbox{[{\ion{O}{3}}]}\ data were not
sky-subtracted, due to the low level of sky flux and the difficulties
associated with the non-phase-corrected data. Resulting errors in the
final spectral fits are negligible.
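In outline (illustrative variable names; the line fluxes are the field means referred to above), the night-sky correction amounts to
\begin{verbatim}
import numpy as np

def subtract_sky_lines(cube, lam, sky_lams, sky_fluxes, sigma):
    # Subtract a Gaussian of fixed wavelength and field-averaged flux
    # for each night-sky line from every spectrum in the cube,
    # e.g. sky_lams = (6553.61, 6577.28) for the OH features above.
    sky = np.zeros_like(lam)
    for lam0, flux in zip(sky_lams, sky_fluxes):
        sky += flux / (np.sqrt(2 * np.pi) * sigma) * \
               np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)
    return cube - sky[:, None, None]
\end{verbatim}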
\subsubsection{Photometry}
\label{photometry}
Recent studies (e.g., \cite{VCBTFS94}) have shown that Fabry-Perot
data can be flux calibrated to an accuracy equal to that of CCD
imagery and longslit spectroscopy. Rather than scaling to an
externally calibrated data set, we have applied a primary calibration,
employing standard star observations directly. Since this Fabry-Perot
flux calibration procedure is described in more detail elsewhere
(e.g., \cite{BSVJ97}), we outline only the salient points below.
The star $\varepsilon$~Orionis was used to calibrate the \hbox{H$\alpha$}+\hbox{[{\ion{N}{2}}]}\
observations; $\alpha$~Lyrae and $\eta$~Hydrae were used for the
\hbox{[{\ion{O}{3}}]}\ observations. After cosmetic cleaning and applying the
whitelight correction described above, the counts in each stellar
image were summed, and then scaled by the telescope aperture size and
exposure time. To obtain pixel values in units of counts cm$^{-2}$
sec$^{-1}$ \AA$^{-1}$ we must then divide by the effective spectral
bandpass of the etalon at each pixel. We emphasize that this value is
not necessarily the same as either the spectral sampling of the cube
or the etalon resolution. Rather, it is a measure of the amount of
flux transmitted at a given wavelength for a sequence of etalon
spacings; it depends primarily on the finesse ($N_R$) of the etalon.
This ``effective bandpass'' was calculated by summing the flux under a
synthetic monochromatic spectrum spanning an entire order of the
appropriate theoretical Airy function and dividing by the peak value
of the Airy function. We derived an effective bandpass of 3.0\AA\ for
our data sets. The scaled values of observed stellar counts were then
compared with published flux values (\cite{H70}; \cite{HL75}),
corrected for atmospheric extinction. Each pixel in the final spectra
has units of ergs cm$^{-2}$ sec$^{-1}$ pixel$^{-1}$ frame$^{-1}$,
yielding total fluxes from the line fits in units of ergs cm$^{-2}$
sec$^{-1}$ pixel$^{-1}$. The final estimated systematic error in the
flux calibration was $\sim$7.5\%.
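The calculation is equivalent to the following sketch, which for large finesse tends to the analytic value $(\pi/2N_R)$ times the free spectral range; the finesse quoted in the example call is purely illustrative, the adopted system constants being those of Table~\ref{HIFItab}:
\begin{verbatim}
import numpy as np

def effective_bandpass(fsr, finesse, nsamp=200001):
    # Flux transmitted from a monochromatic line over one etalon order,
    # divided by the peak of the Airy profile.
    F = (2.0 * finesse / np.pi) ** 2          # coefficient of finesse
    lam = np.linspace(0.0, fsr, nsamp)
    airy = 1.0 / (1.0 + F * np.sin(np.pi * lam / fsr) ** 2)
    return np.trapz(airy, lam) / airy.max()

# e.g. effective_bandpass(85.0, 45.0) is roughly 3.0 Angstroms
\end{verbatim}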
\section{Fabry-Perot Maps}
\label{maps}
The emission-line spectra were fit with Gaussian profiles across the
field to generate spatial maps of line characteristics such as flux,
velocity, and dispersion. Single-component fits were made to the \hbox{H$\alpha$}\
and \hbox{[{\ion{N}{2}}]}\ lines across the majority of the field, well into the halo
of the galaxy. There are regions north of the galaxy's disk where
double components were fit to the \hbox{H$\alpha$}\ line profile, as well as
regions south of the disk where double components were fit to both the
\hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\ lines. The \hbox{[{\ion{O}{3}}]}\ line was fit with a single Gaussian
component throughout the field, although visual inspection revealed
signs of line splitting in specific regions.
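A minimal sketch of the per-pixel profile fitting, with scipy as a stand-in for the actual fitting code and with parameter ordering and starting guesses of our own choosing, is
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, flux, v0, sigma, cont):
    return cont + flux * np.exp(-0.5 * ((v - v0) / sigma) ** 2) \
                  / (np.sqrt(2.0 * np.pi) * sigma)

def gauss2(v, f1, v1, s1, f2, v2, s2, cont):
    return gauss(v, f1, v1, s1, 0.0) + gauss(v, f2, v2, s2, cont)

def fit_profile(v, spec, guess, double=False):
    model = gauss2 if double else gauss
    popt, pcov = curve_fit(model, v, spec, p0=guess)
    return popt     # flux(es), velocity(ies), width(s), continuum
\end{verbatim}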
The region of acceptable fits is identical for the \hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\
maps, although the \hbox{[{\ion{N}{2}}]}\ fits in the northern-most regions of the maps
have significantly higher errors, due to clipping of the 6583\AA\
profile by the limited spectral coverage. The region with sufficient
\hbox{[{\ion{O}{3}}]}\ flux for profile fitting is much smaller than that of \hbox{H$\alpha$}\ or
\hbox{[{\ion{N}{2}}]}. The central regions of the galaxy are saturated in both the
\hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\ observations.
We now describe the important features of each Fabry-Perot map. Each
spatial map is presented at approximately the same scale, with tick
marks separated by one arcminute ($\sim$950~pc). The maps from the
\hbox{H$\alpha$}+\hbox{[{\ion{N}{2}}]}\ data set contain regions in which the spectral lines are
split; these maps show both a flux-weighted total for the galaxy as
well as values for the individual components separately. These
components are referred to as the high-velocity component (HVC) and
the low-velocity component (LVC). North is up and east is to the left
in all images. The position angle of the major axis of the galaxy is
approximately 65\arcdeg.
\placefigure{HaNiifluxfig}
Figure~\ref{HaNiifluxfig} illustrates the flux distribution in the
light of \hbox{H$\alpha$}\ 6563\AA\ and \hbox{[{\ion{N}{2}}]}\ 6583\AA. We measure a total
unsaturated \hbox{H$\alpha$}\ flux of $9.9\times 10^{-11}$ ergs cm$^{-2}$
sec$^{-1}$ and a total \hbox{[{\ion{N}{2}}]}\ flux of $4.1\times 10^{-11}$ ergs
cm$^{-2}$ sec$^{-1}$. A rough comparison with the \hbox{H$\alpha$}\ imagery to be
presented in the next section indicates that the saturated nuclear
regions contribute an additional $\sim2.1\times 10^{-11}$ ergs
cm$^{-2}$ sec$^{-1}$ in the \hbox{H$\alpha$}\ line, or $\sim$17\% of the total.
The flux is concentrated in the nucleus and along the minor axis, with
very little emission originating in the extended disk of the galaxy.
The nuclear line emission has saturated the detector in two
concentrations (``the eyes''), as well as a number of weaker ancillary
regions. In contrast, the minor axis emission is spatially extensive
and filamentary. Numerous long radial filaments can be seen extending
more than a kiloparsec from the nucleus, as well as complex
small-scale structures. Note, for example, the bright bow shock-like
arc approximately 500~pc SSE of the nucleus.
The morphology of the extraplanar gas differs between the two sides of
the galaxy, appearing more chaotic on the north side and showing signs
of collimation in the south. The southern \hbox{H$\alpha$}\ emission exhibits a
sharp edge on the eastern side. Furthermore, the emission to the south
is considerably brighter and more extensive, although there appears to
be an abrupt reduction in flux at a distance of approximately 500~pc
from the nucleus. Examination of the \hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\ emission maps
reveals that, beyond 500~pc, the flux in the HVC drops rapidly, while
the flux in the LVC remains more uniform. We also find pervasive
diffuse emission at a low level throughout the halo of the galaxy.
\placefigure{Oiiifluxfig}
Figure~\ref{Oiiifluxfig} is a map of the flux from M82 at \hbox{[{\ion{O}{3}}]}\
5007\AA. We measure a total \hbox{[{\ion{O}{3}}]}\ flux of $2.6\times 10^{-12}$ ergs
cm$^{-2}$ sec$^{-1}$, more than an order of magnitude below that seen
at \hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}. There is essentially no emission from the disk at
this wavelength; the flux originates almost entirely within the
nucleus and along the southern minor axis of the disk. Although the
radial extent of the minor-axis emission is much smaller than that
seen in \hbox{[{\ion{N}{2}}]}\ and \hbox{H$\alpha$}, the filamentary morphology is similar. Indeed,
excellent correlation is seen with the \hbox{H$\alpha$}\ flux map, which has been
contoured on Figure~\ref{Oiiifluxfig}. A striking feature of the
\hbox{[{\ion{O}{3}}]}\ flux map is the presence of two distinct streams of emitting
gas, each extending along the southern minor axis from one of the
bright nuclear emission regions. These streams can also be identified
in the \hbox{H$\alpha$}\ flux map (Figure~\ref{HaNiifluxfig}), although the much
more pervasive nature of the \hbox{H$\alpha$}\ emission makes the structure more
difficult to discern at small radii.
\placefigure{Havelfig}
Figure~\ref{Havelfig} is a radial velocity map for the \hbox{H$\alpha$}-emitting
gas in M82. Two prominent trends are seen: first, there exists a
strong velocity gradient along the major axis of the galaxy,
consistent with normal disk rotation, with the eastern end moving away
from the observer. Second, a velocity transition is evident along the
minor axis of the galaxy: strong blue-shifting of the \hbox{H$\alpha$}\ emission is
seen south of the disk; strong red-shifting to the north. The
major-axis rotation signature merges into this minor axis motion in a
gradual fashion, although this may be due in part to the
flux-weighting procedure.
The HVC consists of highly redshifted emission in the north and highly
blueshifted emission in the south ($v_r\sim300$~\hbox{km~s$^{-1}$}). The LVC
consists of emission at roughly the systemic velocity of the galaxy
($v_{sys}\approx203$~\hbox{km~s$^{-1}$}; \cite{ddCBPF91}), in both the north and
south. The northern and southern component pairs mirror each other
almost exactly in terms of relative velocity and kinematic structure.
Perhaps surprisingly, the velocities within each individual component
exhibit little radial or azimuthal variation. The exception to this
is the inner portion of the HVC, where the radial velocities approach
the systemic velocity.
A map of the line-of-sight velocities of the \hbox{[{\ion{N}{2}}]}-emitting gas was
also produced, but is not presented here, as it provides little
additional information. The trends along both the major and minor
axes are identical to those seen at \hbox{H$\alpha$}, but at a lower
signal-to-noise ratio due to the decreased flux of \hbox{[{\ion{N}{2}}]}\ line
emission. A velocity map of the \hbox{[{\ion{O}{3}}]}-emitting gas was not produced,
for the reasons outlined above.
\placefigure{NiiHalgfig}
Figure~\ref{NiiHalgfig} shows the logarithm of the ratio of the \hbox{[{\ion{N}{2}}]}\
6583\AA\ flux to the \hbox{H$\alpha$}\ 6563\AA\ flux from M82. The most remarkable
feature in this map is the presence of a distinct region of low line
ratio (0.0--1.0) along the minor axis, south of the disk. This region
extends at least a kiloparsec in radius from the nucleus and is
narrower than the regions of minor axis filaments visible in the \hbox{H$\alpha$}\
and \hbox{[{\ion{N}{2}}]}\ flux maps (Fig.~\ref{HaNiifluxfig}). The region of low
ratios can be separated into two distinct structures, originating with
the bright central eyes of M82, similar to the structures seen at
smaller radii in the \hbox{[{\ion{O}{3}}]}\ 5007\AA\ flux map
(Fig.~\ref{Oiiifluxfig}).
Elsewhere in the galaxy, higher line ratios ($\gtrsim1.0$) prevail,
particularly at large radii within the inclined disk. Discrete
\ion{H}{2} regions can be seen in the disk as small concentrations
with the expected \hbox{\nii/\ha}\ ratio of $\sim$0.5 (\cite{O89}). The
regions of extremely low line ratio to the north are a result of the
incompletely sampled \hbox{[{\ion{N}{2}}]}\ line, and should be ignored.
\placefigure{OiiiHalgfig}
Figure~\ref{OiiiHalgfig} is a map of the logarithm of the ratio of the
\hbox{[{\ion{O}{3}}]}\ 5007\AA\ flux to the \hbox{H$\alpha$}\ 6563\AA\ flux from M82. The ratio
has been calculated over the entire spatial extent of the \hbox{[{\ion{O}{3}}]}\
emission, as seen in the flux map (Fig.~\ref{Oiiifluxfig}). The
observed ratios are low ($\sim$0.05) throughout the nucleus and
regions south of the disk, with the only apparent trend being a
gradual increase in the ratio with distance from the nucleus.
\section{Discussion}
\label{discussion}
We now discuss the observational implications of these data in the
context of each of the galaxy's primary morphological components:
starburst disk, extended halo, and galactic-scale outflow.
\subsection{Starburst Disk}
Our observations reveal a small irregular disk containing heavy
obscuration and a central starburst. The \hbox{H$\alpha$}\ and \hbox{[{\ion{O}{3}}]}\ flux maps
(Figs.~\ref{HaNiifluxfig} and \ref{Oiiifluxfig}) reveal little ionized
gas in the extended disk of M82, outside of the central nuclear
starburst region, as found by slit spectra (\cite{OM78}). Small
concentrations of line emission that do exist in the disk outside of
the starburst, such as the regions 0\farcm 5 east of the nucleus (see
also \cite{OM78}; Fig.~3), appear to be giant \hbox{{\ion{H}{2}}}\ regions. The
\hbox{\nii/\ha}\ ratio map (Fig.~\ref{NiiHalgfig}) indicates moderate values
for these regions ($\sim$0.56), consistent with a cooler range of
photoionized \hbox{{\ion{H}{2}}}\ regions ($T_{\hbox{max}}\sim40,000$~K, \cite{ED85};
\cite{VO87}). Such regions exhibit low levels of \hbox{[{\ion{O}{3}}]}\ emission,
consistent with our non-detection in Figure~\ref{Oiiifluxfig}.
Although the high extinction in the disk of M82 (3--27~mag;
\cite{OGHC95}; \cite{MRRK93}) serves to hide smaller \hbox{{\ion{H}{2}}}\ regions
from our optical observations, the paucity of large \hbox{{\ion{H}{2}}}\ regions
suggests that the level of star formation must be low outside of the
central starburst. This conclusion has also been reached by
multi-epoch radio supernovae studies (e.g., \cite{KBS85};
\cite{HTCCY94}). The weak nature of the \hbox{[{\ion{O}{3}}]}\ emission implies a
high overall metallicity in M82, as has been suggested by previous
observations (e.g., \cite{DEHH87}; \cite{GL92}), and as is anticipated
due to chemical enrichment by the starburst (e.g., \cite{KS96}).
The nuclear emission is dominated by two large saturated regions, each
approximately 200~pc in diameter, and centered $\sim$125~pc from the
2~\micron\ stellar nucleus (see Fig.~\ref{HaNiifluxfig}). These
regions correspond to knots $A$ and $C$ of \cite{OM78} and are known
to be extremely high surface brightness ``clusters of clusters'' of
young ($T\lesssim50$~Myr) stars (\cite{OGHC95}). Also identifiable in
Figure~\ref{HaNiifluxfig} are knots $D$ and $E$ (also saturated), as
well as knots $F$, $G$, and $H$. Knot $B$ is extremely faint at \hbox{H$\alpha$}\
wavelengths, especially when compared to broadband (\cite{OM78}) and
ultraviolet (\cite{He96}) observations, suggesting a higher gas
content in the inner regions of the galaxy. The outflow can be traced
to knots $A$ and $C$. This relationship is particularly
well-demonstrated by the \hbox{\nii/\ha}\ map (Fig.~\ref{NiiHalgfig}) and the
\hbox{[{\ion{O}{3}}]}\ flux map (Fig.~\ref{Oiiifluxfig}).
\placefigure{rotcurvefig}
We have plotted the \hbox{H$\alpha$}\ radial velocities along a straight line
corresponding to the major axis of the galaxy in
Figure~\ref{rotcurvefig} (panel $c$), along with a number of rotation
curves from the literature at a range of wavelengths. The published
systemic velocity of M82, 203~\hbox{km~s$^{-1}$}\ (\cite{ddCBPF91}), corrected to a
heliocentric value of 208.7~\hbox{km~s$^{-1}$}, has been subtracted from the \hbox{H$\alpha$}\
Fabry-Perot data. The rotation curve rises to approximately 100~\hbox{km~s$^{-1}$}\
within $\sim$9\arcsec\ ($\sim$140~pc) of the nucleus and remains
relatively flat to the edges of the observations. No substantial
fall-off in the rotation curve is seen, due primarily to the limited
radial extent of line emission from the disk. The observed rotation
curve matches well those found at \hbox{H$\alpha$}\ by \cite{H72} (his Fig.~12) and
\cite{MCGD93} (panel $d$ of Fig.~\ref{rotcurvefig}), including the
turn-over and asymptotic velocities and the nuclear velocity gradient
($\sim$11~\hbox{km~s$^{-1}$}\ arcsec$^{-1}$). Figure~\ref{rotcurvefig} also
illustrates an increase in the central velocity gradient with
wavelength. As has been pointed out by other authors (e.g.,
\cite{MCGD93}), this effect is indicative of the large extinction
toward the nuclear regions of the galaxy.
The \hbox{H$\alpha$}\ rotation curve also correlates well with that of the 250~pc
nuclear ring seen in molecular emission lines (e.g., \cite{LNSWRK90}).
The double-lobed structure of the central \hbox{H$\alpha$}\ emission is suggestive
of an edge-on ring structure as well, interior to the molecular ring.
The starburst could be identified with the ring of ionized gas and is
probably propagating outward, fueled by the cold gas in the molecular
ring (\cite{WGT92}; cf.\ \cite{SL95}). This ring may also provide
much of the extinction toward the nucleus of M82 (e.g., \cite{TG92}).
The dynamic conditions of the central regions would have removed most
of the obscuring material interior to the ring, as has been suggested
in our Galaxy (e.g., \cite{BGW82}). The resolution of the central
star clusters by recent HST observations (\cite{OGHC95}) suggests that
any such ring would be very clumpy however, and the two bright regions
may indeed represent real spatial enhancements in the distribution of
ionized gas and star formation. The relationship between this ring
structure and the proposed bar in M82 ($r\sim500$~pc; \cite{TCJDD91};
\cite{AL95}) is not clear.
The \hbox{\nii/\ha}\ line ratio map (Fig.~\ref{NiiHalgfig}) shows a high ratio
in the disk, especially at large radii, where the value is observed to
exceed 1.0. The actual line ratios in the disk may be even slightly
higher and possibly more uniform, due to dilution by the central
starburst and the halo. Such high ratios are by no means rare in the
nuclei of disk galaxies (\cite{K83}), particularly LINERs (e.g.,
NGC~4319 [\cite{SA87}], NGC~5194 [\cite{FCJLV85}; \cite{GG85}]), and
are understood to originate with shock excitation and/or
photoionization by a power-law source (\cite{VO87}). Although it is
unlikely that M82 harbors an AGN (e.g., \cite{RLTLT80}; \cite{CP92};
\cite{MPWASd94}), a comparison with other galaxies which exhibit
pervasive high \hbox{\nii/\ha}\ line ratios in their extended disks is
instructive: studies of NGC~3079 (\cite{VCBTFS94}) and especially
NGC~1068 (\cite{BSC91}) have found high \hbox{\nii/\ha}\ ratios of 0.6--1.3
across much of the inner disk. In NGC~1068, the disk \hbox{{\ion{H}{2}}}\ regions
are found to reside in regions of lower \hbox{\nii/\ha}\ ratio
($\sim$0.3--0.8), while the disk as a whole is permeated with gas
exhibiting the higher ratios, similar to what we observe in M82. In
order to boost this forbidden line to recombination line ratio, the
``heating per ionization'' must be high, for which most models require
high energy photons, energetic electrons, or a dilute radiation bath
(\cite{S92}).
In the case of M82, an attractive candidate to enhance the \hbox{\nii/\ha}\
ratio in the disk is the concept of ``mixing layers'' (\cite{SC93};
\cite{VDS94}). The turbulence resulting from the interaction of hot
supernovae remnants with the ambient ISM creates an intermediate
temperature phase, which models suggest emits a radiation field
somewhat harder than thermal bremsstrahlung. This process can produce
\hbox{\nii/\ha}\ ratios in excess of 1.0 (\cite{SSB93}). By employing soft
X-rays as the photoionization mechanism, mixing layers have been used
to model line emission ratios of up to 3--4 in cooling flows
(\cite{DV91}; \cite{CF92}). Considering the current optical
appearance of the disk of M82, its interaction with the galaxy M81
approximately $10^8$ years ago, and the energetic activity associated
with the nuclear starburst, a turbulent inner disk would not be
unexpected.
However, as mentioned above, our observations do not detect
significant numbers of star forming regions outside the starburst
nucleus. Even if this is attributed to high levels of extinction in
the disk, other wavebands confirm the low level of star formation. All
of the radio supernovae discovered by \cite{KBS85} are within 300~pc
of the galaxy's nucleus, well inside the region of highest \hbox{\nii/\ha}\
ratios. The diffuse X-ray flux in the disk has also been shown to
decrease rapidly with radius, implying a reduced star formation rate
outside the nuclear regions (\cite{BST95}). Other models for
producing high \hbox{\nii/\ha}\ ratios in the disk must be considered, such as
chemical enrichment and cosmic ray heating. A combination of shock
and photoionization may also provide a solution (e.g., \cite{HG90}),
although detailed models are not yet available.
\subsection{Extended Halo}
Early polarization observations detected a strong linear polarization
throughout the halo of M82, with the position angles oriented
perpendicular to a radial vector from the nucleus (\cite{E69}). This
polarization was interpreted as evidence for a scattering component in
the galaxy (e.g., \cite{S69}), probably consisting of electrons
illuminated by the nucleus. The more difficult issue has been the
polarization of the optical filaments themselves. The first
observations in this regard (\cite{VS72}) determined that the minor
axis filaments and the halo were equally polarized at optical
wavelengths, suggesting that perhaps there was no ``explosion'' in M82
and that the off-axis filaments were merely density enhancements in a
dusty cloud through which the galaxy was moving (\cite{SMM77}).
Although the discovery of split emission lines (\cite{AT78}) and the
minor-axis X-ray halo (\cite{WSG84}) have virtually eliminated this
alternate interpretation for the optical filaments (although see
\cite{RMS87}), the polarization measurements remain poorly understood.
Recent observations (e.g., \cite{SEA91}) indicate that the optical
filaments may indeed contain a scattered component, but even then
there is uncertainty as to the source of the polarization, i.e. the
nucleus (\cite{V74}) or the entire disk (\cite{SM75}). Unfortunately,
with few exceptions (e.g., \cite{SAC76}; \cite{BT88}; \cite{D93};
\cite{SBHL94}), the presence of a halo component in M82 has been
largely ignored.
Our detailed analyses in \hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\ clearly confirm the existence
of the smooth exponential halo noted by \cite{BT88}.
Figure~\ref{exphalofig} illustrates the flux along a narrow
($\sim$9\arcsec) band parallel to and approximately 45\arcsec\
southeast of the major axis of the galaxy. The flux due to the
outflow filaments can be seen superimposed upon an exponentially
decreasing background. The halo has be detected across our entire
region of fit lines, approaching a flux level of $10^{-15}$ ergs
cm$^{-2}$ sec$^{-1}$ arcsec$^{-2}$ at a radius of 1~kpc.
\placefigure{exphalofig}
We have estimated the radial profile of the halo flux at \hbox{H$\alpha$}\ with an
exponential function:
\begin{equation}
I(\hbox{\hbox{H$\alpha$}}) = I_0(\hbox{\hbox{H$\alpha$}}) e^{-r/r_e},
\end{equation}
where $I_0(\hbox{\hbox{H$\alpha$}}) \sim 1.6\times10^{-15}$ ergs cm$^{-2}$
sec$^{-1}$ arcsec$^{-2}$ \AA$^{-1}$ and $r_e \sim 315$~pc. This
profile has been overlaid on the cut in Figure~\ref{exphalofig}. For
the observed halo line width of $\sim$350~\hbox{km~s$^{-1}$}, the integrated \hbox{H$\alpha$}\
flux is $\sim 2.5\times10^{-11}$ ergs cm$^{-2}$ sec$^{-1}$. This
compares with an observed flux from the filaments and {\it
unsaturated\/} nucleus of $\sim 9.9\times10^{-11}$ ergs cm$^{-2}$
sec$^{-1}$. For a distance of 3.25~Mpc this implies a total \hbox{H$\alpha$}\
luminosity from the halo of $3.2\times10^{40}$ ergs sec$^{-1}$.
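(This is simply $L(H\alpha) = 4\pi d^{2} F$; at $d = 3.25$~Mpc, $4\pi d^{2} \approx 1.3\times10^{51}$~cm$^{2}$.)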
The azimuthally symmetric polarization pattern of the halo in
broadband light (\cite{SAC76}) suggests a scattering origin. The halo
is known to comprise cold neutral atoms (\cite{C77}), relativistic
electrons (\cite{SO91}), dust (\cite{VS72}) and warm ions
(\cite{BT88}). If we associate the line-emitting halo with the
polarized component, the line width ($\sim$350~\hbox{km~s$^{-1}$}\ FWHM) reflects
either the motion of scattering mirrors embedded in a warm medium or
the ``beam-averaged'' projected kinematics of the nuclear and
large-scale disk gas. If the observed line dispersion arises from
thermal motions of electrons, the kinetic temperature must be less
than 1000 K, at which point the flux from recombination would
overwhelm the scattered flux for any reasonable halo density ($n_H
\approx 1$ cm$^{-3}$; \cite{C77}). Dust scattering is expected to be
much more efficient in any case. The ratio of the scattering optical
depths can be written
\begin{equation}
R_{de} = {{\sigma_d n_d}\over{\sigma_e n_e}},
\end{equation}
where $n_d$ and $n_e$ are the dust and electron densities; $\sigma_d$
and $\sigma_e$ are the dust and electron scattering cross sections.
For $\sigma_d$, we form the more conservative weighted mean of the MRN
(\cite{MRN77}) grain distribution for which $\sigma_d \approx
0.01\mu$m. In the local ISM, the dust-to-gas {\it number\/} density
ratio is $n_d / n_H \approx 10^{-12}$ (\cite{OS73}) although this
depends strictly on the grain composition. With the conservative
assumption of a totally ionized halo, we deduce a scattering ratio
$R_{de} \sim 50$, verifying the relative importance of dust
scattering. \cite{SAC76} demonstrate that a reasonable optical depth
to dust scattering is $\tau_d \sim 0.1$ from a comparison of the halo
broadband flux and the estimated disk flux.
The halo dust could reside in an extended neutral or warm ionized
medium, some fraction of which could be supplied by the energetic wind
(\cite{BBR64}). We now compare the timescale for dust destruction by
sputtering with the estimated age of the starburst wind, $\tau_{wind}
\sim 3 \times 10^6$ years (e.g., \cite{LS63}; \cite{BT88}). From
\cite{OS73}, the timescale for grain sputtering is
\begin{equation}
\tau_{sput} \sim 3 \times 10^5\ n_e \left({{10^6}\over{T_e}}\right)^{1/2}
\left({{0.01}\over{Y}}\right) \left({{r_d}\over{0.1}}\right) \quad
\hbox{years,}
\label{sputeq}
\end{equation}
where $Y$ is the sputtering ``yield,'' i.e., the number of atoms
released per impact. $Y$ varies from $\sim 5 \times 10^{-4}$ at the
threshold sputtering temperature of $3 \times 10^5$ K to $\sim 0.01$
for temperatures of $10^7$--$10^8$ K.
At the low temperatures expected in a galactic halo,
$T\lesssim10^5$~K, the lifetime of dust is greater than $10^7$ yrs,
comparable to or longer than the lifetime of the outflow. However,
any dust located directly in the hot ($T\sim10^8$~K) X-ray-emitting
wind survives for no more than $10^5$ years, and has therefore been
destroyed. Such an effect is supported by low-resolution radio maps
of the M81/M82 region, which indicate an anti-correlation between the
wind lobes and \hbox{{\ion{H}{1}}}\ column density (\cite{C77}). We note that more
recent studies of gas-grain sputtering and grain-grain collisions
(\cite{TMSH94}; \cite{JTH96}) suggest that grains may be able to
survive much longer than previously thought. However, most of these
studies are specific to the three-phase ISM in the Galaxy, and it
remains unclear precisely how the results should be extended to
galactic wind systems, which characteristically involve higher
temperatures ($10^8$~K vs.\ $10^6$~K) and larger velocities (600~\hbox{km~s$^{-1}$}\
vs.\ 200~\hbox{km~s$^{-1}$}) than standard ISM models.
Given the sputtering timescales, if the dust has been delivered into
the halo by the wind, it must be as a component of cooler material
entrained by the hot wind itself. However, this implies that the halo
dust and optical emission line filaments may then have the same origin
and morphology, yet the polarization observations do not seem to
indicate a minor-axis concentration in the dust distribution
(\cite{SM75}; \cite{SAC76}; \cite{SEA91}). We also do not observe
substantial redshifted emission south of the galaxy, as would be
expected from mirrors moving with the outflow.
Alternatively, dust may have been forced into the halo at an early
stage of the outflow, at a time when it was dominated much more by
radiation from massive stars in the central burst than by supernovae.
Such a mechanism has been hypothesized by \cite{F97} to explain the
appearance of high-z dust in the Galaxy and other edge-on spirals. It
has been demonstrated that radiation pressure is sufficient to
evacuate a large fraction of the dust near an active star-forming
region into the halo, creating a dust distribution which varies slowly
with height above the disk. Given the current importance of radiation
effects in the inner portion of the M82 outflow (see
Section~\ref{compoptxray} below) and the extensive nature of the
observed dusty halo, such a scenario seems a reasonable model.
Regardless of any mechanism of relocating disk dust into the halo,
clearly the hypothesized encounter between M81 and M82
$\sim10^8$~years ago (\cite{C77}; \cite{YHL94}) has played an
important role in the evolution of the halo in M82. (This encounter
is also thought to have initiated the central starburst in M82, e.g.,
\cite{MH94}.) Radio observations have shown massive clouds of \hbox{{\ion{H}{1}}}\
surrounding both galaxies, with large arcs and bridges joining them
and the nearby galaxy NGC~3077 (e.g., \cite{D74}; \cite{YHL94}). The
large-scale velocity structure of this gas blends with the global \hbox{H$\alpha$}\
velocity trends in M82, matching the systemic velocity and even the
minor-axis velocity gradient (\cite{C77}). The \hbox{{\ion{H}{1}}}\ cloud is clearly
extensive enough to replenish dust that has been destroyed by
sputtering. Moreover, this massive reservoir of atomic gas should
help to maintain the tenuous halo gas itself in M82, a component that
is required by hydrodynamical models in order for the outflowing wind
to produce observable structures (\cite{SBHL94}).
\subsection{Galactic-scale Outflow}
A number of studies have concentrated on the minor axis filaments in
M82. Imaging observations (e.g., \cite{LS63}; \cite{IvASTY94}) have
been directed toward understanding the morphology of the outflow,
while spectral studies (e.g., \cite{BBR64}; \cite{H72}; \cite{AT78};
\cite{WCS84}; \cite{BT88}; \cite{MGDP95}) have attempted to
parametrize the kinematics of the filaments. The most recent
observations detect split emission lines that suggest a pair of
expanding bubbles or cones. The exact size and shape of the expanding
structures remains a topic of some debate. Additional optical
spectroscopy has measured the radial variations of line emission along
the outflow (e.g., \cite{HAM90}). Ratios such as \hbox{\nii/\ha}\ increase
with distance from the nuclear starburst, suggesting a gradual change
in the gas excitation mechanism. We now investigate these findings in
light of the spatial and spectral coverage provided by our Fabry-Perot
spectrophotometry.
\subsubsection{Optical Morphology}
Optical emission line images of the M82 outflow reveal a complex
distribution of radial filaments and knots along the minor axis of the
galaxy. In order to examine the morphology of these filaments, R. B.
Tully kindly obtained for us very deep ($t\sim3000$~sec) \hbox{H$\alpha$}\ imagery
at the University of Hawaii 88-inch telescope on Mauna Kea
(Fig.~\ref{deephafig}). We imaged a 6\arcmin\ circular field, a
factor of two larger than our Fabry-Perot field. The optical
filaments associated with the minor axis outflow are visible across
the entire field, to radial distances of at least 3.5~kpc to the north
and 2~kpc to the south. This represents approximately half the extent
of the soft X-ray halo (see below). The filaments are bright within a
kiloparsec of the disk, particularly in the south. In these inner
regions, a web of long ($\sim$300~pc) filaments appear distributed
roughly parallel to the minor axis, amidst a ``foam'' of smaller
emission structures. Beyond a kiloparsec in radius, the filamentary
structure breaks up into fainter, more isolated clumps.
\placefigure{deephafig}
The faintness of the northern filaments is almost certainly due to
obscuration by the inclined disk of the galaxy (cf.\ \cite{He96}).
The inclination angle of M82 is estimated to be 81\fdg 5
(\cite{LS63}), such that the nearer edge of the galaxy is projected on
the northwest side of the nucleus. We observe the nuclear regions of
the galaxy through the southern side of the disk, and therefore expect
the inner filaments to be much brighter there. If we assume a
line-of-sight dimension for the optical disk equal to its linear
dimension on the sky ($\sim$11\farcm 2; \cite{ddCBPF91}), we expect
the northern outflow to be at least partially obscured to projected
radii of $\sim0\farcm 8$. This is consistent with the similar
morphologies of the filaments beyond this radius in the north and
south in Figure~\ref{deephafig}.
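(The quoted radius is essentially a projection: material in the disk plane roughly $5\farcm 6$ from the nucleus projects onto the sky only $\sim 5\farcm 6\,\cos 81\fdg 5 \approx 0\farcm 8$ from the major axis, so the near side of the disk can hide the inner northern filaments out to about that radius.)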
Although complicated by obscuration in the north, the inner filament
structure also appears to differ between the two lobes in terms of
collimation. The bright inner kiloparsec of the southern lobe is
relatively well confined to the minor axis, whereas the northern
outflow filaments cover a much wider range of opening angle. This is
seen particularly well in the \hbox{H$\alpha$}\ flux map from the Fabry-Perot data
(Fig.~\ref{HaNiifluxfig}), and is probably not an obscuration effect,
but rather a difference in the physical morphology of these two lobes.
One possible interpretation is that the nuclear starburst is located
slightly above the galactic mid-plane. The smaller mass of covering
material to the north would make collimation of an expanding wind more
difficult, resulting in an almost immediate ``breakout'' of the wind
from the disk.
Finally, we have noted that the \hbox{\nii/\ha}\ ratio map and \hbox{[{\ion{O}{3}}]}\ flux maps
indicate that the southern outflow involves two distinct components,
each originating from one of the central bright emission regions.
This may be a ``limb brightening'' effect as has been suggested for
the pair of central emission regions themselves. This implies that
the emitting filaments are distributed along the outer surface of the
outflow, rather than throughout the volume. On the other hand, it is
likely that the two bright central regions from which the outflow
streams appear to originate are stellar ``superclusters''
(\cite{OGHC95}) which merely happen to be physically located on either
side of the kinematic center of the galaxy, from our point of view.
Regardless, we do not observe emission enhancement along the outflow
axis, as would be the case for a volume brightened distribution.
If we do not include emission arising within approximately 8\arcsec\
(125~pc) of the disk, our line fits encompass an \hbox{H$\alpha$}\ flux of
$7.6\times10^{-11}$~ergs s$^{-1}$ cm$^{-2}$. After a rough
subtraction of the halo model given in the previous section, this
implies a total filament luminosity
$L(H\alpha)\sim7.6\times10^{40}$~ergs s$^{-1}$ (cf.\
Tab.~\ref{m82statstab}). Assuming the filaments are completely
ionized, using a case B recombination coefficient for $T\sim 10,000$~K
of $\alpha_B = 2.59\times10^{-13}$~cm$^3$ s$^{-1}$ (\cite{O89}), and
employing the outflow geometry to be discussed in
Section~\ref{kinmodsec} below, we calculate an rms filament density
$\left<n_e\right>\sim5\cdot f^{-1/2}$~cm$^{-3}$. A very rough
estimate for the filament filling factor of $f\sim0.1$ would suggest a
mean electron density in the filaments
$\left<n_e\right>\sim15$~cm$^{-3}$, easily consistent with \hbox{[{\ion{S}{2}}]}\
doublet ratios in the low-density limit throughout much of the large
volume of outer filaments. This mean density implies a filament mass
$M\sim5.8\times10^6$~M$_{\sun}$ distributed in a volume
$V\sim1.1\times10^8$~pc$^3$, and a kinetic energy in the ionized
filaments $KE\sim2.1\times10^{55}$~ergs. These values are consistent
with those first computed by \cite{LS63}, $5.8\times10^6$~M$_{\sun}$
and $2.4\times10^{55}$~ergs, respectively. The entire mass of
outflowing gas is estimated to be a couple of orders of magnitude larger
(\cite{HAM90}). Note that the kinetic energy in the filaments is only
about one percent of the estimated input supernovae energy ($\sim 2
\times 10^{57}$~ergs; \cite{WSG84}).
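For transparency, the arithmetic behind these estimates can be reproduced to within their quoted rounding by the following sketch; the mean mass per electron of $1.4\,m_{\rm H}$ and the representative deprojected outflow speed of 600~\hbox{km~s$^{-1}$}\ (within the range quoted below) are our illustrative inputs, and $h\nu_{{\rm H}\alpha}$ per case~B recombination is a deliberately crude emissivity:
\begin{verbatim}
import numpy as np

pc      = 3.086e18                      # cm
L_Ha    = 7.6e40                        # ergs/s (filaments)
V       = 1.1e8 * pc**3                 # emitting volume, cm^3
alpha_B = 2.59e-13                      # cm^3/s, case B at ~10^4 K
E_Ha    = 6.626e-27 * 3.0e10 / 6563e-8  # ergs per H-alpha photon
f       = 0.1                           # assumed filling factor
mu_e    = 1.4                           # assumed mean mass per electron / m_H
v_out   = 600.0e5                       # cm/s, representative outflow speed

n_rms = np.sqrt(L_Ha / (V * alpha_B * E_Ha))   # ~5 f^(-1/2) cm^-3
n_e   = n_rms / np.sqrt(f)                     # ~15 cm^-3
M     = mu_e * 1.673e-24 * n_e * f * V         # ~1e40 g, i.e. ~6e6 Msun
KE    = 0.5 * M * v_out**2                     # ~2e55 ergs
print(n_rms, n_e, M / 1.989e33, KE)
\end{verbatim}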
\subsubsection{X-Ray Morphology}
A large x-ray halo has been observed in M82 (\cite{WSG84};
\cite{SPBKS89}; \cite{TOMMK90}; \cite{BST95}), extending at least 5--6
kiloparsecs in radius along the minor axis of the galaxy. Originally
believed to be thermal emission from the hot ($T\sim10^7$--$10^8$~K)
gas within the outflow, current models suggest an origin in
wind-shocked clouds. J.N. Bregman has kindly provided us with a
recently published (\cite{BST95}) image of M82 taken with the {\it
R\"ontgensatellit\/} ({\it ROSAT}; \cite{T84}) High-Resolution Imager
(HRI; \cite{PBHKMPRSZC87}). The pair of 13\arcmin\ square images with
total exposure time 33.8 kiloseconds were taken in early 1991 and late
1992. The resolution is about 5\arcsec.
Contours representing the {\it ROSAT\/} soft x-ray flux have been
overlaid on our deep \hbox{H$\alpha$}\ image (Fig.~\ref{deephafig}). The
morphologies are markedly similar, with specific filaments and knots
clearly enhanced in both emission bands. (Note, for example, the two
extensive filaments in the north, and the knots approximately 500~pc
SW of the nucleus.) But in light of the more extensive distribution
of the X-rays versus the optical emission, it seems unlikely that {\it
all\/} of the X-ray flux is associated with specific filaments of
optical emission. Since the regions of high optical/X-ray correlation
are also some of the brightest in \hbox{H$\alpha$}, these knots presumably
represent density enhancements in the outflow bubbles. The large
extent of the X-ray halo and close spatial correlation with the \hbox{H$\alpha$}\
emission suggest an interpretation in terms of shocks driven by a
fast, rarefied wind plowing into denser halo gas (e.g. \cite{SBD93};
\cite{DS96}). Our observed \hbox{H$\alpha$}/X-ray luminosity ratio of $\approx$30
in the brightest filaments, identical to that derived by \cite{PC96},
suggests a similar scenario. We defer detailed modeling of the \hbox{H$\alpha$}\
and soft x-ray emission to a subsequent paper, in which the shocks
will be constrained by deep spectroscopy and imaging from the {\it
Keck\/} and {\it Hubble Space Telescopes}.
\subsubsection{Optical Kinematics}
Our Fabry-Perot observations provide velocity information across the
entire outflow in M82. Two well-known features of this emission line
gas should be emphasized {\it a priori}. First, the velocity
signature of outflow is clearly observed along the minor axis in both
\hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\ (see Fig.~\ref{Havelfig}). The well-established disk
inclination angle of 81\fdg 5 (\cite{LS63}) and the increased opacity
toward the northern filaments clearly establishes the large-scale
kinematics as arising from outflow (cf.\ \cite{E72}; \cite{SMM77}).
Second, the optical emission lines split into two distinct velocity
components in large regions along the minor axis of the galaxy. The
velocity structure of the individual components, as well as the
difference between them, provides convincing evidence for the presence
of outflowing gas along the minor axis of M82 (\cite{AT78}).
Figure~\ref{Hamosaicfig} illustrates the velocity structure of the
\hbox{H$\alpha$}-emitting gas in M82, as observed in the frames of the Fabry-Perot
data cube. Both minor-axis outflow lobes span velocity ranges
exceeding $v_{proj}\sim300$~\hbox{km~s$^{-1}$}, overlapping at the systemic
velocity.
\placefigure{Hamosaicfig}
We now describe our attempts to understand the complex dynamics of the
outflow in M82 with the aid of kinematic models. We have extracted
one-dimensional velocity cuts and synthetic two-dimensional spectra
from the Fabry-Perot data cubes. These cuts have been made relative
to the outflow axis, at an estimated position angle of
$\sim$150\arcdeg\ ($\sim$85\arcdeg\ relative to the major axis of the
galaxy; \cite{MGDP95}). While some authors have argued for a
spherical wind in M82 (e.g., \cite{SO91}), most recent models find
evidence for a bipolar outflow, usually in the shape of cones with
opening angles much less than 90\arcdeg. This is borne out by our
study: the line flux, splitting, and ratio maps all exhibit marked
azimuthal variations indicative of an aspherical outflow morphology,
strongly weighted toward the minor axis of the galaxy. Studies of the
supernova distribution (e.g., \cite{KBS85}) and other disk
observations (e.g., \cite{AL95}) indicate that even the central
injection zone is probably not spherical, but rather in the form of a
flattened disk, with dimensions $\sim$600~pc wide and $\sim$200~pc
thick.
The specific velocities predicted by \cite{CC85} do not agree with our
observations. An estimated supernova rate of 0.3~yr$^{-1}$
(\cite{RLTLT80}) yields an outflow velocity from their model of
2000--3000~\hbox{km~s$^{-1}$}\ at the edge of the starburst injection zone. This
value is several times larger than the deprojected velocities observed
at optical wavelengths ($v\sim$525--655~\hbox{km~s$^{-1}$}; see below) on much
larger scales. However, the extensive radio continuum halo
(\cite{SBB85}; \cite{SO91}) could arise from synchrotron radiation due
to a population of relativistic electrons, as they are transported
outward in a wind at velocities in the range considered by Chevalier
\& Clegg. The relationship between such a wind and the slower, denser
minor-axis outflow seen at optical and x-ray wavelengths remains
unclear. We suspect that much of the \hbox{H$\alpha$}\ filamentation arises from
large-scale shocks from a high-speed wind plowing into the gaseous
halo and entrained disk gas. Only a small fraction of the total wind
energy is encompassed by the radio halo ($\sim$2\%; \cite{SO91}).
\placefigure{windvelfig}
In modeling the bipolar outflow in M82, we model the individual line
components rather than flux-weighted velocity profiles. In
Figure~\ref{windvelfig}, we show the velocities of line fits to the
dual \hbox{H$\alpha$}\ components along the minor axis of the galaxy. While the
northern outflow components show a relatively constant projected
separation of $\sim$300~\hbox{km~s$^{-1}$}, the southern region of split lines
reveals an intriguing variation. Within 200~pc south of the nucleus,
where the favorable inclination of the disk allows us to measure line
profiles closer to the starburst, the individual components of \hbox{H$\alpha$}\
are separated by a much smaller velocity, comparable to our resolution
($\sim$50~\hbox{km~s$^{-1}$}). Between 200 and 500~pc from the nucleus, the
components rapidly diverge, remaining at a constant separation of
$\sim$300~\hbox{km~s$^{-1}$}\ beyond 500~pc. Maps of the line component splitting
reveal a separation of this order throughout the spatial extent of
both lobes. Also plotted in Figure~\ref{windvelfig} are fits to the
\hbox{H$\alpha$}\ profiles from the long-slit optical spectra of \cite{MGDP95},
which clearly confirms our observed minor-axis trend in the line
splitting. An identical radial trend is seen in the \hbox{[{\ion{N}{2}}]}\ line
components along the southern outflow axis, but is not shown in the
figure.
\subsubsection{Kinematic Models}
\label{kinmodsec}
In order to understand the intrinsic velocity structure underlying the
observed kinematics, we have undertaken two separate sets of
three-dimensional Monte-Carlo wind simulations: a rounded expanding
bubble and a pair of cones arranged as a funnel. Although primarily
geometric in nature, these models allow us to estimate the
three-dimensional morphology of the outflow, as well as intrinsic gas
velocities, both of which are important constraints for physical
models of the wind emission mechanisms and other observational
phenomena.
\placefigure{modelsfig}
For our initial model, we used rounded bubble geometries, given by the
spherical functions,
\begin{eqnarray}
\rho &=& A\ \cos{m\theta} \qquad -\frac{\pi}{2m} < \theta < +\frac{\pi}{2m}\\
\rho &=& A\ \cos^m \theta \qquad -\frac{\pi}{2} < \theta < +\frac{\pi}{2}
\label{bubblesimeq}
\end{eqnarray}
(see Fig.~\ref{modelsfig}$a$). Both functions produce a parabolic
leading surface becoming conical at small radii. The parameter $m$ can
be related to an opening angle and a Mach number in cases where the
leading surface is produced in a bow shock. Truncating the bubble at
inner and outer radii allows for the selection of an outflow structure
with specific opening angle and radial curvature. We superimposed two
alternate velocity laws upon this spatial geometry: a radial velocity
vector, which corresponds physically to the case in which each gas
parcel is accelerated by the flow originating at the bubble apex, or a
velocity vector perpendicular to the surface of the bubble, which may
be more appropriate for an expansion due to increasing heat and
pressure inside the bubble, such as an inflating balloon.
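A minimal numerical sketch of this surface sampling is given below. It is
illustrative only: the parameter values, the uniform sampling in angle, and the
simple line-of-sight projection are assumptions chosen for brevity, not the
actual simulation code used here.
\begin{verbatim}
import numpy as np

def bubble_velocities(A=1.0, m=4, n=20000, v0=600.0, incl_deg=10.0,
                      law="radial", seed=0):
    """Monte-Carlo points on the surface rho = A cos^m(theta) (theta from
    the outflow axis; azimuth covers the far side), with a radial or a
    surface-normal velocity law, projected onto the line of sight."""
    rng = np.random.default_rng(seed)
    th = rng.uniform(0.0, 0.5*np.pi, n)
    ph = rng.uniform(0.0, 2.0*np.pi, n)
    rho, drho = A*np.cos(th)**m, -A*m*np.cos(th)**(m-1)*np.sin(th)
    e_r = np.stack([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)], 1)
    e_t = np.stack([np.cos(th)*np.cos(ph), np.cos(th)*np.sin(ph), -np.sin(th)], 1)
    pos = rho[:, None]*e_r
    if law == "radial":                       # accelerated from the bubble apex
        vhat = e_r
    else:                                     # inflating-balloon expansion
        vhat = rho[:, None]*e_r - drho[:, None]*e_t
        vhat /= np.linalg.norm(vhat, axis=1, keepdims=True)
    inc = np.radians(incl_deg)                # axis tilt toward the observer
    n_los    = np.array([0.0, np.cos(inc), np.sin(inc)])
    axis_sky = np.array([0.0, -np.sin(inc), np.cos(inc)])
    return pos @ axis_sky, v0*(vhat @ n_los)  # minor-axis offset, v_los

offset, v_los = bubble_velocities(law="normal")
\end{verbatim}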
We then executed a series of Monte-Carlo simulations for each velocity
law, varying bubble parameters such as opening angle, inclination
angle, and inner and outer truncation radii. The conclusion reached
from this set of simulations is that {\it the rapid divergence of the
velocity components in the southern outflow cannot be reproduced by a
single bubble}, at least not without invoking highly contrived
velocity profiles for the wind. Studies of the minor axis x-ray
distribution are similarly unable to model the emission with a single
bubble or cone (e.g., \cite{SBHB96}).
Morphologically, one can divide the southern region of split lines
into two separate velocity regimes: the region within approximately
200~pc of the nucleus, where the split line components are separated
by $\sim$50~\hbox{km~s$^{-1}$}, and the region beyond 500~pc radius, in which the
components are separated by a much larger, but still relatively
constant, projected value of $\sim$300~\hbox{km~s$^{-1}$}. The inner component is
not observed north of the galaxy, presumably because the split lines
cannot be resolved at a sufficiently small radius due to the
intervening inclined disk of the galaxy. Note that the \hbox{H$\alpha$}\ and \hbox{[{\ion{N}{2}}]}\
flux maps (Fig.~\ref{HaNiifluxfig}) also suggest the presence of two
distinct regions, as the line flux drops sharply at the same radius at
which the velocity components rapidly separate ($\sim$500~pc).
We therefore performed another set of Monte-Carlo simulations, this
time using a double cone geometry, given by the cylindrical function
\begin{equation}
r = \cases{\kappa_1 (z - z_{01}), & 0 $<$ z $<$ 350~pc \cr
\kappa_2 (z - z_{02}), & 350 $<$ z $<$ 800~pc\cr}
\label{conesimeq}
\end{equation}
(see Fig.~\ref{modelsfig}$b$). The $\kappa$ parameters determine the
opening angles of the cones, while the $z_0$ parameters control the
radial extent of the cones through truncation. Again, we superimposed
both radial and normal (i.e., perpendicular) velocity laws, but
determined in the end that a velocity vector tangential to the cone
surfaces provided the best match to the observations and was most
easily understood physically, in terms of material entrained by the
high-velocity wind. An additional parameter was used to smooth the
abrupt projected velocity transition where the two cones meet (at
$z=350$~pc).
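The projected kinematics of such a configuration can be sketched with a few
lines of code. The snippet below assumes a flow tangential to the cone walls
and a linearly increasing intrinsic speed (anticipating the velocity law
adopted below); the opening slopes and inclinations are illustrative stand-ins,
not the fitted values of Table~\ref{conetab}.
\begin{verbatim}
import numpy as np

def cone_projected_velocities(z, kappa=(0.05, 0.22), incl_deg=(5.0, 15.0),
                              z_break=350.0, v0=525.0, dvdr=0.13):
    """Line-of-sight velocities of the near and far walls of a double cone,
    for material flowing tangentially along the walls with intrinsic speed
    v(r) = v0 + dvdr*r (z and r in pc, velocities in km/s)."""
    z = np.asarray(z, dtype=float)
    inner = z < z_break
    k     = np.where(inner, kappa[0], kappa[1])       # dr/dz of the wall
    inc   = np.radians(np.where(inner, incl_deg[0], incl_deg[1]))
    alpha = np.arctan(k)                              # half-opening angle
    r     = z*np.sqrt(1.0 + k**2)                     # path length along wall
    v     = v0 + dvdr*r                               # intrinsic wind speed
    v_near = -v*np.sin(alpha + inc)                   # front wall (blueshifted)
    v_far  = -v*np.sin(inc - alpha)                   # back wall (near sky plane)
    return v_near, v_far

z = np.array([100.0, 300.0, 500.0, 800.0])
v_near, v_far = cone_projected_velocities(z)
print(np.round(v_far - v_near))  # splitting: ~50 km/s inside, ~250 km/s outside
\end{verbatim}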
\placefigure{conesfig}
\placetable{conetab}
After performing simulations over a range of cone sizes, inclination
angles, opening angles, and velocity laws, we derived the final model
shown in Figure~\ref{conesfig}. Parameters of the cones which best
fit the observations are given in Table~\ref{conetab}. The inner
``cone'' is almost a cylinder of radius equal to the injection zone
($r\sim200$~pc), with a small, but non-zero, opening angle. The outer
cone has an opening angle of approximately 25\arcdeg, in relative
agreement with the ``small'' opening angle models of \cite{BT88} and
\cite{MGDP95}. Models in the ``large'' opening angle regime (e.g.,
60\arcdeg; \cite{HAM90}) do not match the observations, requiring
excessively low intrinsic velocities and a larger projected spatial
extent for the outflow region (see Fig.~\ref{conesfig}$a$). A large
opening angle for the primary outflow cone also produces substantially
skewed Doppler ellipses in spectra perpendicular to the outflow axis.
This effect is due to the slit cutting through the back and front of
the cone at different nuclear radii, and is not observed in our
synthetic spectra (see Figs.~\ref{conesfig}$c$ and \ref{conesfig}$d$).
The inner and outer cones are inclined toward the observer by
$\sim$5\arcdeg\ and $\sim$15\arcdeg, respectively, roughly aligning
the back sides of the two cones (see Fig.~\ref{modelsfig}$b$). This
is required to explain the lack of a sharp velocity gradient in the
low-velocity component (LVC) at a radius of 350~pc, as is observed in
the high-velocity component (HVC). In addition, since the LVC
exhibits small but non-zero projected velocities, the back sides of
the cones must be at a slight angle to the plane of the sky. These
inclination angles agree with previous estimates (e.g., \cite{BBR64};
\cite{He96}). While the observed velocities of the HVC could be
duplicated with smaller cone opening angles and a larger inclination,
the low velocities of the LVC require small inclination angles.
Initial attempts to model the kinematics of the outflow with a
constant velocity law, i.e., using only the simple double-cone
geometry to reproduce the observed velocity structure, were not
successful. Figure~\ref{windvelfig} illustrates that both velocity
components in the south and north exhibit non-zero slopes in the
position-velocity plot. This can be understood as either a continuous
change in the intrinsic gas velocity or as a change in the projected
velocity through a continuous change in the outflow cone geometry.
The latter case implies that both sides of the outer cone are
constantly bending toward the observer, producing a slowly increasing
projected velocity with radius. A gradually increasing intrinsic wind
velocity is probably necessary as well, and can be understood from a
physical standpoint. Buoyancy effects in the hot wind from the
decreasing disk density with scale height, decreasing wind densities
from the lack of collimation at larger radii, and other effects
contribute to produce wind velocities that increase with radius in
standard galactic wind models (e.g., \cite{CC85}; \cite{SBHL94}).
After testing a number of stronger power-law expressions for the gas
velocity dependence on radius, we finally chose the simple linear
model given in Table~\ref{conetab}. Together with a constant cone
opening angle, this intrinsic velocity structure produces linear
projected velocity gradients that correspond well to those observed in
Figure~\ref{windvelfig}. The intrinsic velocities of the gas range
from 525~\hbox{km~s$^{-1}$}\ near the nucleus to 655~\hbox{km~s$^{-1}$}\ at a radius of 1~kpc.
These velocities are comparable to the escape velocity for M82 (see
Table~\ref{m82statstab}), implying that the most distant entrained
filaments are not bound to the galaxy. This conclusion is also
supported by the large radial extent of the fast wind itself, as seen
in soft X-rays, along the minor axis.
Just as the outflow cones are not aligned with the minor axis of the
galaxy along the line of sight, neither are they aligned in the plane
of the sky. The region of split \hbox{H$\alpha$}\ lines constitutes a cone on the
sky with the expected opening angle of $\sim$25\arcdeg, but with a
position angle of $\sim$165\arcdeg, approximately 15\arcdeg\ greater
than what the literature (e.g., \cite{MGDP95}) had previously defined
as the ``outflow axis.'' The cone axis is rotated $\sim$100\arcdeg\
from the major axis of the galaxy, placing the eastern edge of the
cone almost directly parallel to the minor axis. In fact,
Figure~\ref{conesfig}$a$ illustrates that this eastern edge is quite
pronounced, both in the split emission lines and in the \hbox{H$\alpha$}\ flux
observed from the inner collimated zone. This suggests that the
tilting of the outflow cones in the plane of the sky has been produced
by a relative density enhancement in the eastern lower halo which
maintains collimation of the wind even as it fans out toward the west
and toward the observer.
While the large outflow cones appear to originate east of the galaxy's
minor axis, a small region of split lines is also observed on the
western edge of the collimated zone, approximately 300~pc from the
nucleus. The morphology of the \hbox{H$\alpha$}\ and \hbox{[{\ion{O}{3}}]}\ flux maps
(Figs.~\ref{HaNiifluxfig} and \ref{Oiiifluxfig}) indicates that this
region constitutes a small bubble on the side of the larger outflow
structure. We clearly see an enhanced rim of \hbox{H$\alpha$}\ emission around the
bubble, and split lines within it.
Recalling again the identification of two outflow ``streams'' in the
\hbox{[{\ion{O}{3}}]}\ flux map and the \hbox{\nii/\ha}\ ratio map (Figs.~\ref{Oiiifluxfig} and
\ref{NiiHalgfig}), one might expect a more substantial outflow from
the western half of the nucleus, or at least a more
centrally-positioned outflow cone. However, it appears that gas
densities on the western side of the lower halo are substantial enough
to keep the western stream from expanding into a cone. The stream
appears to bend toward the west in the \hbox{[{\ion{O}{3}}]}\ flux map
(Fig.~\ref{Oiiifluxfig}), and the only expanding structure that we
observe is the one small bubble.
But at larger radii, the ambient density toward the west must drop
relative to the eastern side. While the outflow remains tightly
confined to the east, even beyond the collimated zone, certain
Fabry-Perot maps show evidence of variations in the western side of
the outer outflow cone, suggesting a more azimuthally-extended
morphology there, e.g., break-out into a less dense region. For
example, the \hbox{H$\alpha$}\ velocity map (Fig.~\ref{Havelfig}) reveals high
velocity clumps at radii of $\sim$1~kpc that are distributed westward
from the sharp eastern edge over almost 90\arcdeg\ in azimuth. These
clumps are clearly associated with the outflow, and can also be seen
as the blue-shifted emission just west of the split line region in our
most distant outflow cut, Figure~\ref{conesfig}$d$. The fact that
this gas exhibits smaller velocities in panel $c$ of this figure
illustrates that kinematical effects from disk rotation are greater
closer to the disk. In contrast, the emission immediately east of the
split line region lies at effectively the same velocity in both panels
$c$ and $d$ of Figure~\ref{conesfig}, as this coincides with the sharp
collimating edge on that side of the outflow. The spatial structure
of the \hbox{H$\alpha$}\ velocity map (like that published in \cite{H72}) also
suggests the presence of rotating disk material that has been
entrained and gradually diverted to the outflow.
To summarize, the inner 350~pc of the outflow constitutes a flow down
a pipe. The outflow is collimated, presumably by ambient and
entrained disk material, and highly inclined to our line of sight,
such that the observed radial component of the flow velocity is only
$\sim$50~\hbox{km~s$^{-1}$}. Beyond 350~pc, however, the collimation weakens and
the outflow expands rapidly as a cone of emission with an opening
angle of 25\arcdeg\ and a projected front-to-back velocity separation
of approximately 300~\hbox{km~s$^{-1}$}. This expansion is preferentially toward
the west and toward the observer. A linearly increasing intrinsic gas
velocity with an initial value of 525~\hbox{km~s$^{-1}$}\ and a gradient of
0.13~\hbox{km~s$^{-1}$}\ pc$^{-1}$ matches the observations well out to a radius of
a kiloparsec.
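A rough consistency check of these numbers can be made by treating the outer
structure as a symmetric cone of half-opening angle $\alpha$ whose axis is
tilted out of the plane of the sky by an angle $i$. For material flowing
tangentially along the walls, the projected separation of the near and far
walls is approximately
\begin{equation}
\Delta v_{proj} \simeq 2\,v\,\sin\alpha\,\cos i\,,
\end{equation}
which, for $\alpha\sim12\fdg5$, $i\sim15\arcdeg$, and
$v\sim$525--655~\hbox{km~s$^{-1}$}, gives
$\Delta v_{proj}\sim$220--275~\hbox{km~s$^{-1}$}, of the same order as the
observed $\sim$300~\hbox{km~s$^{-1}$}; the same expression with the few-degree
opening angle of the collimated zone yields the observed $\sim$50~\hbox{km~s$^{-1}$}.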
The line-splitting phenomenon indicates that the \hbox{H$\alpha$}-emitting
filaments are produced on the surface of the outflow cones, at the
interface between the wind and the ambient halo material. In addition
to the minor axis velocity structure, this model explains the
increased flux within the collimated region (see
Fig.~\ref{conesfig}$a$) as a result of an elevated density with
respect to the outer expanding cone (${\cal F}\propto n_e^2$). An
increased density in the innermost regions has also been indicated
observationally by the ``filling in'' of the line profiles at those
radii (e.g., \cite{MGDP95}).
While the inner bubble has an extremely small opening angle, we find
that an outer cone of opening angle $\sim$20\arcdeg--30\arcdeg\ fits
the data most closely. This range agrees well with the ``small''
opening angle values found by other authors (e.g., $\sim$30\arcdeg;
\cite{MGDP95}). Our observations do not support the ``large'' opening
angle regime (e.g., $\sim$60\arcdeg; \cite{HAM90}).
\subsubsection{Comparison of Kinematic with Hydrodynamic Models}
Striking support for the two-zone model is obtained from a comparison
with recent hydrodynamic outflow models. Initial two-dimensional
simulations (e.g., \cite{TI88}) proposed an evolution of the outflow
which included collimation of the flow at early stages by disk
material, subsequent breakout of the confined flow along the minor
axis, and eventual extension of the hot wind material up to a few kpc
from the nucleus.
More recent simulations (\cite{TB93}; \cite{SBHL94}; \cite{SBHB96})
include a separate corotating halo component and are able to reproduce
much of the observed morphology through the interactions of the wind
with the disk and halo. These simulations show that the outflow
entrains disk gas around itself, dragging the cooler, denser material
up to a couple of kiloparsecs above the plane of the galaxy (e.g., see
Fig.~6 in \cite{SBHL94}). The regions of densest entrained disk
material, near the base of the outflow, serve to collimate the outflow
beyond the height of the disk itself. The scale height of this
collimation in the simulations is similar to that seen in our
observations, $\sim$500~pc. The ``fingers'' of disk material
entrained to heights above the collimated zone can be identified with
the optical emission line filaments we observe on the outer walls of
the outflow cone. This entrained gas has also been observed at
molecular wavelengths (e.g., \cite{SC84}; \cite{NHHSHS87};
\cite{SNHGRW90}). Within the collimated region, the simulations show
that both the wind and the confining walls are at their densest,
consistent with the observed increased levels of optical emission.
Finally, it should be noted that a recent analysis of the minor-axis
X-ray distribution implies a partially confined outflow of this gas as
well, within 1.6~kpc of the disk (\cite{BST95}).
\subsubsection{Comparison of Optical and X-ray Observations}
\label{compoptxray}
A number of authors have observed an extensive nebula of soft X-ray
emission along the minor axis of M82 (e.g., \cite{WSG84};
\cite{SPBKS89}; \cite{TOMMK90}; \cite{BST95}). The initial
interpretation was that these are thermal X-rays, produced by the hot
($T\sim10^7$~K) gas in the outflow. The optical line emission must
then arise from the cooler boundary layer of the wind, where the hot
gas interacts with entrained disk and ambient halo material. Such a
hot gas would not be gravitationally bound to the galaxy and could
easily expand to the observed radial distances of 5--6~kpc
(\cite{BST95}) in the estimated age of the starburst ($t\sim 5\times
10^7$~yr; \cite{RLTLT80}; \cite{DM93}). Recent ASCA observations
(\cite{THAKFIIOPRN94}; \cite{ML97}) find three temperature components
in this wind, with more extended softer components, confirming that
the hot wind is cooling as it expands.
Recent hydrodynamic simulations and studies of the x-ray halo spatial
distribution have cast doubt on this interpretation, however. Current
hydrodynamic simulations (\cite{TB93}; \cite{SBHL94}) derive gas
temperatures an order of magnitude larger than earlier estimates
(e.g., \cite{WSG84}). This $10^8$~K gas does not emit as strongly in
the X-ray bands, producing insufficient soft and hard X-rays to
account for the observations. Even \cite{CC85} admit that the density
falls too rapidly in their simple model to account for the x-ray
photons as thermal. X-ray spectral studies derive a range of thermal
gas temperatures and suggest alternative emission mechanisms
(\cite{F88}; \cite{SPBKS89}; \cite{SPS96}). Finally, correlations
between our deep \hbox{H$\alpha$}\ imaging and Fabry-Perot observations and
high-resolution {\it ROSAT\/} imagery support a non-thermal origin for
at least a portion of the minor-axis x-ray emission.
\placefigure{rosatfig}
Figure~\ref{rosatfig} compares the spatial distribution of soft X-rays
observed by {\it ROSAT\/} with our Fabry-Perot \hbox{H$\alpha$}\ flux map. As was
evident at larger radii in the comparison with the deep \hbox{H$\alpha$}\ imagery
(Fig.~\ref{deephafig}), the X-rays and optical line emission are
clearly correlated. On large scales, the minor-axis x-ray flux drops
at a radius of approximately 500~pc from the nucleus, as does the \hbox{H$\alpha$}\
emission. But also on scales as small as 10 pixels (150~pc), the
x-ray and optical emission appears well correlated. This implies that
soft X-rays are being produced in regions very close to those which
are producing \hbox{H$\alpha$}\ emission, a situation which is very difficult to
understand in terms of a thermal emission mechanism.
These observations lend support to the x-ray emission mechanism
suggested by several authors (e.g., \cite{CC85}; \cite{SBHL94};
\cite{SPS96}): the soft X-rays arise from shocked disk and halo
``clouds.'' This shocked gas can produce both the observed x-ray and
optical emission, accounting for the spatial correlation in
Figure~\ref{rosatfig}. A hybrid model seems necessary in which the
higher gas temperatures and densities near the nucleus create a region
dominated by shocks at interfaces with disk and halo gas clouds, while
the cooler temperatures and lower densities at larger radii produce a
decrease in optical emission and an increase in thermally-emitted
x-rays. A comparison of the scale lengths of the \hbox{H$\alpha$}\ and x-ray
emission along the minor axis confirms the more extended nature of the
x-ray component: the \hbox{H$\alpha$}\ surface brightness along the minor axis is
fit well by an exponential function, with a scale length of
$\sim$250~pc. For the most distant \hbox{H$\alpha$}\ emission ($r\sim1$~kpc), this
exponential can be approximated by a power law of slope $-2$,
essentially the same power law exponent measured for the X-rays at a
comparable radius (\cite{BST95}). Beyond this radius, the optical
surface brightness falls more rapidly than does the x-ray surface
brightness.
Recent detailed modeling of the x-ray emission (\cite{BST95}) suggests
a temperature at large radii of only $\sim 2 \times 10^6$~K, implying
an increasing role for thermal emission with radius. Similarly,
hydrodynamic simulations find that the majority of the wind mass must
be accumulated near the starburst region, not from evaporating halo
clouds (\cite{SBHB96}). It should be noted, however, that
observations with the Ginga x-ray satellite have made the startling
discovery of faint x-ray emission extending several {\it tens\/} of
kiloparsecs from M82 (\cite{TOMMK90}). Hydrodynamic simulations have
modeled this emission as shock-excited in nature, assuming that the
outflow is much older, $\sim 5 \times 10^7$~years (\cite{TB93}).
Although this observation has yet to be confirmed, the rapid radial
decrease in the wind pressure and density could propagate the wind
shock to large distances from its starburst origins.
The true importance of shocks versus thermal emission can be estimated
from optical line diagnostics, at least in the inner regions of the
M82 outflow. As was pointed out in Section~\ref{maps}, the
flux-weighted \hbox{\nii/\ha}\ ratio from the Fabry-Perot data is highly
uniform and low in the inner kiloparsec of the outflow; values of
0.3--0.6 are typical. However, we must be careful to use line ratios
for individual kinematic components when drawing conclusions regarding
the physics of the gas, especially when the components have been
modeled as distinct physical regions. The \hbox{\nii/\ha}\ line ratio of the
individual components reveals a similar low value across the spatial
extent of split lines, except in the inner collimated zone, where a
higher \hbox{\nii/\ha}\ ratio is seen ($\sim$1.0), particularly in the
low-velocity component. Although we were unable to resolve separate
components in the \hbox{[{\ion{O}{3}}]}\ observations, we note that the \hbox{\oiii/\ha}\ ratio
exhibits a strong radial gradient, unlike the \hbox{\nii/\ha}\ ratio. The
\hbox{\oiii/\ha}\ ratio increases from a value of approximately 0.03 at the
center to 0.08 at a distance of $\sim$750~pc.
\placefigure{vofig}
In order to investigate the importance of shock excitation for the
optical filaments, we have compared the observed emission line ratios
from the Fabry-Perot data with the standard emission-line galaxy
diagnostic diagrams of \cite{VO87}. Although the small number of
emission line diagnostics at our disposal limits our analysis, we can
nevertheless make a rough assessment of the influence of shocks using
the \hbox{\oiii/\hb}\ versus \hbox{\nii/\ha}\ diagnostic diagram (Fig.~\ref{vofig}; from
Fig.~1 of \cite{VO87}). Using an \hbox{\hb/\ha}\ ratio of 0.25 for the outflow
gas (\cite{HAM90}), we see that the emission line ratios from the
southern wind lobe of M82 rest in the region of the diagram for
starburst galaxies, as expected. The ratios are comparable to those
for most cooler \hbox{{\ion{H}{2}}}\ regions and \hbox{{\ion{H}{2}}}\ region models. This
immediately suggests an emission mechanism such as photoionization for
the filaments, particularly near the nucleus, where the \hbox{\oiii/\ha}\ ratio
is lower. As we move out in radius, however, shocks appear to become
more important as an excitation mechanism, as the increasing \hbox{\oiii/\ha}\
ratio drives the locus in Figure~\ref{vofig} toward the region for
non-thermally powered AGN.
\placefigure{dsfig}
In order to more directly interpret our emission line fluxes in light
of a shock mechanism, we have also compared the observed line ratios
with a recent set of high-velocity shock models (\cite{DS95}). These
models have been computed for shocks in the velocity range of
150--500~\hbox{km~s$^{-1}$}; the deprojected gas velocity of the wind in M82 is
estimated to be at the upper end of this range. Again using an \hbox{\hb/\ha}\
ratio of 0.25 for the outflow gas (\cite{HAM90}), a comparison with
the \hbox{\oiii/\hb}\ versus \hbox{\nii/\ha}\ diagnostic diagram (Fig.~\ref{dsfig}; from
Fig.~2$b$ of \cite{DS95}) shows that it is unlikely that the observed
emission line flux from the inner outflow filaments arises entirely
from shocks. There is simply not enough \hbox{[{\ion{O}{3}}]}\ emission observed in
the inner kiloparsec of the M82 outflow. However, the increasing
\hbox{\oiii/\ha}\ ratio with distance from the nucleus suggests that shocks
probably become important at larger radii, just as suggested by the
observational diagnostic diagrams (\cite{VO87}). Longslit optical
observations have also reached the conclusion that the line ratios
become more shock-like with increasing distance from the starburst
(e.g., \cite{HAM90}), although this has often not included analysis of
individual velocity components.
Additional support for a photoionization mechanism for the inner
optical filaments is provided by studies of the diffuse ionized medium
(DIM) in NGC~891. In that galaxy, which has no outflow or other
obvious sources of shock ionization, photoionization models have been
used to understand the variation of line ratios with height above the
disk plane. These models show that the \hbox{\nii/\ha}\ ratio gradually
decreases as the ionization parameter (the ratio of ionizing photons
to gas density) increases. In contrast, the \hbox{\oiii/\ha}\ ratio should
increase rapidly with ionization parameter (\cite{S92}). Regardless
of the presence of shocks then, the low value of \hbox{\nii/\ha}\ and the
gradually increasing value of \hbox{\oiii/\ha}\ in the innermost filaments of
M82 can be understood as the result of a gradual drop in filament
density, relative to the number of ionizing photons from the
starburst. In the outer filaments, however, the \hbox{\nii/\ha}\ ratio begins
to increase, presumably as a result of dilution of the radiation field
in the expanding uncollimated bubble, as well as perhaps an increasing
influence of shocks.
Recent studies of the influence of halo dust on these line ratio
trends in NGC~891 (\cite{FBDG96}) point out that the \hbox{\nii/\ha}\ ratio may
appear artificially low near the disk due to dilution by scattered
radiation from disk \hbox{{\ion{H}{2}}}\ regions. This suggests that the low \hbox{\nii/\ha}\
ratios for the inner filaments in M82 may be due in part to higher
dust densities in the inner halo, scattering disk radiation from the
nuclear starburst. This proposition corresponds well with the high
levels of polarization detected from the filaments and the exponential
nature of the observed halo, although the observed polarization levels
in M82 ($\sim$10--15\%) are much higher than those modeled in NGC~891
($\sim$1--2\%).
Based upon these comparisons and our geometric models, we propose that
the optical emission from the inner kiloparsec of the M82 filament
network is, at least partially, due to photoionization of the sides of
the cavity created by the outflow. The hot gas in the wind itself
would be quite transparent to the UV ionizing photons from the
starburst region, allowing the entrained disk and halo gas to be
illuminated directly. The tipped geometry of the outflow cones
probably places the systemic side of each cone more directly in the
path of the photoionizing radiation from the central starburst,
explaining the higher fluxes seen in the low-velocity components of
the wind. However, small regions of higher \hbox{\nii/\ha}\ ratios in the
individual velocity components suggest that a complex combination of
shock and photoionization is probably required in the violent
collimated zone, where the disk gas is being entrained and drawn
upward by the hot wind just as it leaves the luminous starburst
region. Although our small field of view and sensitivity limits
restrict our analysis of the more extended optical filaments, we
confirm a trend toward more shock-like line emission in the outer
regions of the outflow.
\subsubsection{Comparison with UV Observations}
The inner regions of the outflow in M82 have been detected in UV
images taken with the {\it Ultraviolet Imaging Telescope\/} (UIT;
\cite{Se92}). Figure~\ref{uvfig} compares our deep \hbox{H$\alpha$}\ image with
the UV image observed in the 2490\AA\ band by the {\it UIT}. The
southern outflow is clearly evident in ultraviolet emission, out to at
least a kiloparsec. The northern outflow is only barely visible in
the {\it UIT\/} data, presumably because of the high disk extinction
at ultraviolet wavelengths. The projected morphology of the southern
UV outflow is similar to that at \hbox{H$\alpha$}\ wavelengths, consisting of a
relatively smooth, broad fan of emission. The ratio of the \hbox{H$\alpha$}\ and
UV surface brightnesses (each in units of ergs s$^{-1}$ cm$^{-2}$
arcsec$^{-2}$ \AA$^{-1}$) in the outflow region is $\sim$8.8
(\cite{He96}).
\placefigure{uvfig}
Previous analyses of the {\it UIT\/} data (\cite{MOLNRSSS91};
\cite{H93}), as well as earlier balloon observations (\cite{CRBGH90};
\cite{BGHRB90}) have interpreted the minor-axis ultraviolet emission
in M82 in terms of dust scattering of photons from the nuclear
starburst by particles in the outflow. It has been suggested that
filamentary structures can be seen in the {\it UIT\/} data
(\cite{MOLNRSSS91}), presumably because the higher relative densities
in the filaments enhance scattering in those regions. The spatial
distribution of the UV emission does not correlate well with the \hbox{H$\alpha$},
X-ray, radio continuum, or IR morphologies, although this may be due
to variations in opacity (\cite{CRBGH90}). One difficulty with the
scattering picture for the production of the southern UV outflow is
the surprising non-detection of emission in the {\it UIT\/} 1600\AA\
far-ultraviolet (FUV) band (\cite{H93}). The naive expectation for
Rayleigh scattering would be to expect increased scattered flux at
shorter wavelengths, even in the presence of substantial extinction
along the line of sight. Ultraviolet observations of reflection
nebulae (e.g., \cite{CBGWB95}) and starburst galaxies (\cite{CKS94})
derive extinction curves that confirm this expectation.
An alternative explanation for the UV light is ``two photon''
continuum emission from ionized gas within the optical filaments.
This emission arises from the spontaneous two-photon decay of the
2~$^2S$ level of \hbox{{\ion{H}{1}}}\ and is commonly seen in AGN and regions of
low-density ionized gas. The spectral distribution of two-photon
emission is symmetric, in photons per frequency interval, about a peak
at 2431\AA\ (\cite{O89}), very close to the {\it UIT\/} 2490\AA\ band.
Unlike the scattering function, two-photon flux decreases
($\sim\lambda^{-1}$) toward shorter wavelengths (see also
\cite{He96}). Preliminary analysis of the \hbox{[{\ion{S}{2}}]}\
$\lambda\lambda$6719,6731 lines in optical spectra of the outflow from
the {\it Keck\/} telescope confirm previous studies (e.g.,
\cite{RC80}; \cite{HSGH84}; \cite{HAM90}) that the inner filament
densities rarely exceed 500 cm$^{-3}$. As this is significantly below
the critical density at which collisional effects influence the
intensity of two-photon emission ($n_e\sim10^{4}$~cm$^{-3}$;
\cite{O89}), we expect the two-photon emission to trace the filamentary
structure of the outflow in the UV imagery.
The observed \hbox{\hb/\ha}\ ratio of 0.25 for the outflow gas (\cite{HAM90})
implies an \hbox{H$\beta$}/UV ratio of $\sim$2.2. This is approximately a factor
of ten greater than the ratio of \hbox{H$\beta$}\ to two-photon flux predicted by
photoionization models ($\sim$0.1; \cite{DBS82}; \cite{D97}),
suggesting the presence of 2.5 mag more extinction at 2500\AA. The
observed extinction law for starburst galaxies (\cite{CKS94}; Fig.~5),
combined with the observed \hbox{H$\alpha$}\ extinction factor of 1.8 in the
outflow (\cite{HAM90}) indicates that this level of obscuration is
entirely reasonable. Shock-induced two-photon emission is almost
certainly ruled out, as it would require extinction by a factor of
$\sim4$ to match the observed UV intensity, well above the value
observed in the outflow regions. Spectral observations near the
Balmer limit, $\sim$3650\AA, where the two-photon continuum emission
is enhanced relative to line emission, could be useful for determining
the importance of this mechanism.
\section{Conclusions}
\label{conclusions}
It is widely recognized that M82 is the prototype of galactic wind
systems. Most models have concentrated on explaining measurements
taken along isolated position angles close to the galaxy's minor axis.
However, our new study has shown that the filamentary system
associated with the outflow is highly complex. Our simplest kinematic
model requires at least two discrete structures for each side of the
galaxy. The filaments are distributed in a network over these
surfaces. More surprisingly, the axes of the outflows on either side
of the disk are aligned neither with each other nor with the spin axis
of the disk. The gas excitation suggests that stellar ionization
dominates close to the disk with increased dominance of shock
excitation further along the outflow. The clearest signature of the
biconic outflow geometry is seen in the line ratio maps which suggest
that radiation is escaping from the disk along a channel excavated by
the hot rarefied wind. We find evidence for a smooth line-emitting
halo which we associate with the linearly polarized halo seen in
broadband studies. We have observed a warm ionized medium throughout
the inclined spiral disk.
In our view, the most pressing issue is an explanation of the
emission-line spectropolarimetry. We strongly urge that these
observations be repeated to deeper levels, at least to the point
where the diffuse line-emitting halo is detected. Observations which
{\it only\/} detect the bright filaments cannot remove the
contribution from the halo along the line of sight.
The disk-halo interaction in galaxies is a fundamental topic in its
own right (\cite{B90}). We suggest that M82 may provide the best
observational constraints on fountain flows (e.g., \cite{SB91}) once
the origin of the filaments is fully understood. What happens to the
entrained material that is lifted into the galactic halo? What is the
relationship between the relativistic electron halo, the wind
filaments, and the diffuse line-emitting halo? We suspect that
answers can only come from future observations of x-ray, radio and
millimeter emission at a resolution and sensitivity comparable to the
present study.
\acknowledgments
Funding for this research was provided by the office of the Dean of
Natural Sciences at Rice University, the Texas Space Grant Consortium,
and the Sigma Xi Research Society. Additional funding was provided by
AURA/STScI (grant GO-4382.01.92A), the National Science Foundation
(grant AST 88-18900), the William F. Marlar Foundation, the National
Optical Astronomy Observatories (NOAO), and Mr.\ and Mrs.\ William
Gordon. This research was performed in partial fulfillment of the
Ph.D. degree at Rice University. JBH acknowledges a Fullam Award from
the Dudley Observatory.
We thank Drs.\ Reginald J. Dufour, Patrick M. Hartigan, Jon C.
Weisheit, C. R. O'Dell, and Sylvain Veilleux for enlightening
discussions of the material contained herein. Special thanks are due to
Drs.\ Joel Bregman and Brent Tully for providing the {\it ROSAT\/} and
deep \hbox{H$\alpha$}\ imagery, respectively, and Drs.\ Geoff Bicknell and Michael
Dopita for their assistance with the stellar wind equations and
high-velocity shock models, respectively.
This research has made use of the NASA/IPAC Extragalactic Database
(NED) which is operated by the Jet Propulsion Laboratory, Caltech,
under contract with the National Aeronautics and Space Administration.
The Astro\-physics Science Infor\-mation and Abstract Service (ASIAS),
administered by the Astrophysics Data System (ADS), was also used.
Color versions of several figures in this paper are available from the
authors.
\newpage
|
1,108,101,563,710 | arxiv | \section{Introduction}
\label{sec:intro}
The top quark is produced at the Tevatron mostly in pairs: the theoretical cross section for this process amounts to 6.7\,pb at Next-to-Leading Order (NLO)\,\cite{Cacciari} (for an assumed top quark mass of 175\,GeV/c$^2$). According to the Standard Model (SM), the top quark decays into a W boson and a b quark $\simeq 100\%$ of the time. The hadronic or leptonic decays of the two W bosons thus define three non-overlapping final samples, which differ in their Branching Ratios (BR) and in their background contamination and composition: the {\bf dileptonic} sample, with two leptonically decaying W's, has two high-$P_T$ tracks and large missing E$_T$, and is the cleanest of all, but also the one with the smallest BR ($\sim$5$\%$). The {\bf semileptonic} sample contains events where one W decays leptonically and the other decays hadronically; it is characterized by a large BR ($\sim$30$\%$) and moderate background, mostly coming from production of a W boson in association with jets. The {\bf all-hadronic} sample is where both W's decay hadronically; this channel has the largest BR ($\sim$44$\%$) but also a very large background from QCD multijet production. In the latter two cases, to enhance signal purity CDF and D\O\ require the presence of long-lived B mesons, as a signature of b quarks, through the identification of a displaced secondary vertex (b-tagging).
\section{Top quark properties}
Thanks to high statistics and high purity, semileptonic $t \bar t$ events are the best candidates to test SM predictions and non-SM particle production in the top sector:
\paragraph{Pair production cross section}
The measurement of the $t \bar t$ production cross section provides a test of QCD calculation and any discrepancy from the theoretical expectation could hint to production or decay mechanisms not predicted by the SM. The most recent measurement comes from the D\O\ experiment\,\cite{D0xsec} and is performed in the semileptonic channel using ${\cal L} =1$\,fb$^{-1}$, counting events passing selection cuts and requiring at least one jet to be tagged as a b-quark jet; the measured cross section corresponds to $\sigma_{t \bar t} = 8.3^{+0.6}_{-0.5}(stat.)^{+0.9}_{-1.0}(syst.)\pm0.5(lumi.)\,$pb. The measurements performed by CDF and D\O\ in the complementary samples give compatible results.
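Schematically, the counting method converts the selected yield into a cross section via
\begin{equation}
\sigma_{t \bar t} = \frac{N_{obs}-N_{bkg}}{\epsilon \, {\cal A} \int {\cal L}\, dt}\,,
\end{equation}
where $N_{obs}$ and $N_{bkg}$ are the observed and estimated background yields, $\epsilon\,{\cal A}$ is the product of selection efficiency (including b-tagging) and acceptance times the branching ratio of the channel, and $\int {\cal L}\, dt$ is the integrated luminosity.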
\paragraph{Production mechanism}
The SM predicts top pairs to be produced through quark-antiquark annihilation 85\% of the time, and the remaining 15$\%$ through gluon-gluon fusion. Taking advantage of the fact that the average number of low-$P_T$ tracks is proportional to the gluon content of a sample, CDF deploys a template method that fits a gluon-rich and a gluon-depleted track multiplicity distribution to the data, measuring\,\cite{CDFprod} the fraction of events produced through gluon-gluon fusion to be
$\sigma(gg \to t \bar t)/\sigma(p \bar p \to t \bar t) = 0.07\pm0.14(stat.)\pm 0.07(syst.)$.
\paragraph{Decay mechanism}
According to the SM, the W boson from top decay is produced 70$\%$ of the time with longitudinal helicity and the rest with left-handed helicity; right-handed helicity is forbidden by the theory. A template method is used here, the template variable being $\cos \theta^*$, the cosine of the decay angle between the momentum of the charged lepton in the W boson rest frame and the W momentum in the top quark rest frame, which is highly sensitive to the W helicity. CDF measures\,\cite{CDFwhel}
$F^0 = 0.59 \pm 0.12 (stat.)^{+0.07}_{-0.06} (syst.)$ and $F^+ = -0.03 \pm 0.06 (stat.)^{+0.04}_{-0.03} (syst.)$.
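The sensitivity of this variable follows from the tree-level angular distribution (quoted here for reference; the experimental templates also fold in resolution and acceptance effects):
\begin{equation}
\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta^*} = \frac{3}{4}\left(1-\cos^2\theta^*\right)F^0 + \frac{3}{8}\left(1-\cos\theta^*\right)^2 F^- + \frac{3}{8}\left(1+\cos\theta^*\right)^2 F^+\,,
\end{equation}
with $F^0+F^-+F^+=1$.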
\paragraph{New physics with top quarks?}
The top quark can be seen as a hadronic probe of very high mass scales.
CDF scans the $t \bar t$ invariant mass distribution to look for possible peaks due to resonant Z$^{\prime}$ production in the mass range 450--900\,GeV/c$^2$. Limits can be set on the product of the cross section times the branching ratio to top pairs. This limit amounts\,\cite{CDFresonance} to $\sigma \times$ BR(Z$^{\prime} \to t \bar t) < 0.8$\,pb at 95$\%$ Confidence Level (CL) for a Z$^{\prime}$ mass greater than 600\,GeV/c$^2$. \\
Overall, the measurements performed by the two experiments are in good agreement with each other and with the theoretical prediction.
\section{The top quark mass}
\label{sec:mass}
The top quark is the only quark that decays before hadronizing. Its mass, which is a free parameter in the SM, can thus be directly measured. Moreover, because the top quark and the W boson contribute to radiative corrections, the measurements of their masses provide a powerful constraint on the Higgs boson mass. The top quark mass has traditionally been measured in each channel; a major boost in precision has been achieved by exploiting the presence of a hadronically decaying W, whose daughter jets can be used to constrain the biggest source of systematic uncertainty, the Jet Energy Scale (JES). For this reason, the most precise results now come from the analysis of the semileptonic and the all-hadronic samples.
There are two main classes of methods to extract the mass: the Template Method and the Matrix Element method.
The former consists in choosing a variable which is strongly correlated with the observable one wants to measure, and in building templates of this variable for simulated signal and background events. The variable used to measure the M$_{\rm top}$ is a tri-jet reconstructed invariant mass; the light quark dijet mass is chosen to simultaneously measure the JES.
The Matrix Element technique aims to use all the available information to calculate a probability for each event to come from signal or background, according to the theory predictions for the final-state kinematics. Transfer functions are needed in order to convert reconstructed objects into kinematical tree-level quantities. For both techniques a likelihood compares the data to the signal and background expectations, and its maximization provides the measured values.
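A toy sketch of the template idea, with made-up Gaussian templates and Poisson bin statistics standing in for the full CDF and D\O\ machinery, is:
\begin{verbatim}
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

edges   = np.linspace(100.0, 350.0, 26)        # GeV, reconstructed-mass bins
centers = 0.5*(edges[:-1] + edges[1:])

def template(mtop, n_exp=200.0, width=25.0):
    """Toy signal template: expected counts per bin for a given true mass."""
    pdf = np.exp(-0.5*((centers - mtop)/width)**2)
    return n_exp*pdf/pdf.sum()

rng  = np.random.default_rng(1)
data = rng.poisson(template(172.0))            # pseudo-data at m_t = 172 GeV

def nll(mtop):                                 # binned Poisson likelihood
    mu = np.clip(template(mtop), 1e-9, None)
    return -poisson.logpmf(data, mu).sum()

fit = minimize_scalar(nll, bounds=(150.0, 200.0), method="bounded")
print(f"fitted m_t = {fit.x:.1f} GeV")
\end{verbatim}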
The most precise measurements are performed using the matrix element technique in the semileptonic sample to simultaneously measure M$_{\rm top}$ and JES. The most recent D\O\ measurement\,\cite{D0mass} amount to M$_{\rm top} = 170.5 \pm 1.8(stat.) \pm 1.6 ($JES$) \pm 1.2 (syst.)\,$GeV/c$^2$.
\begin{figure}
\centering
\includegraphics[height=5.0cm]{tevatron07.eps}
\includegraphics[height=5.4cm]{w07.eps}
\caption{Tevatron Run II best measurements used in the combination; on the right is shown the updated constraint on the SM Higgs mass given by the latest determinations of the top and W masses.}
\label{fig:1}
\end{figure}
CDF alone explores the all-hadronic channel, where the latest analysis employs a cut based selection to improve the signal-to-(mostly QCD) background ratio from $\sim1/1000$ to $\sim1/1$. This analysis uses a mixed technique to extract the mass: a template is built out of the probability given by the matrix element computation, and a dijet mass is used to measure the JES; this result\,\cite{CDFhadmass} is now the most precise in this channel and corresponds to M$_{\rm top} = 171.1 \pm 2.8(stat.) \pm 2.4($JES$) \pm 2.1 (syst.)\, {\rm GeV/c}^2$.
The best measurements in each channel are then combined to give the very precise Tevatron average value\,\cite{TevTopMass} of M$_{\rm top} = 170.9 \pm 1.1 (stat.) \pm 1.5 (syst.) = 170.9 \pm 1.8$\, GeV/c$^2$.
With such a $1 \%$ precision achieved, the M$_{\rm top}$ measurement will likely be a long-standing legacy of the Tevatron collider.
\section{Single top production}
The SM allows the electroweak production of single top quarks, with a theoretical cross section at NLO\,\cite{Kidonakis} of 1.98\,pb in the t-channel and 0.88\,pb in the s-channel (assuming M$_{\rm top}=175\,$GeV/c$^2$). Single top quark events can be used to study the $Wtb$ coupling and to directly measure the $V_{tb}$ element of the CKM matrix without assuming only three generations of quarks. CDF and D\O\ restrict their searches to events where the W decays leptonically; the signature is thus characterized by missing energy from the neutrino, one high-$P_T$ lepton, and a b-jet from the top decay, which is required to be tagged to further reduce the background. Additionally we expect a light-quark jet in the t-channel or one more b-jet in the s-channel. After the event selection we are left with an S/B of about 1/20. Both the CDF and D\O\ experiments use advanced techniques to better isolate the signal from the large background. The best D\O\ measurement uses a machine-learning technique that applies cuts iteratively to classify events, namely a boosted decision tree. It produces an output variable distribution which ranges from 0 to 1, with the background peaking close to 0 and the signal close to 1. A binned likelihood fit is used to extract the cross section, which D\O\ measures\,\cite{singletopD0} to be $\sigma($s+t channel$)=4.3^{+1.8}_{-1.4}\,$pb, $3.4\,\sigma$ away from the background-only hypothesis and in agreement with the SM expectation; D\O\ also constrains the element V$_{tb}$ of the CKM matrix to $0.68 < |V_{tb}|< 1$ at 95\% CL.
\begin{figure}
\centering
\includegraphics[height=4.1cm]{pvalue.eps}
\includegraphics[height=4.3cm]{ME.eps}
\caption{On the left, the fraction of background-only pseudo-experiments giving a cross section higher than the observed one. On the right, the event probability discriminant used by CDF to extract the cross section.}
\label{fig:1}
\end{figure}
CDF's best result comes from using the event matrix element to build a probability for the event to come from signal or background. An event probability discriminant is then built and a likelihood fit extracts the signal and background relative normalizations. CDF measures\,\cite{singletopCDF} an excess of $2.3\,
\sigma$ and extracts a cross section for the s+t channel of $2.7^{+1.5}_{-1.3}\,$pb.
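The quoted significances are conventionally derived from ensembles of background-only pseudo-experiments (cf.\ the left panel of the preceding figure); a schematic sketch of the procedure, with toy yields that are not the actual CDF or D\O\ numbers, is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def pseudo_pvalue(n_obs, b_mean, n_pseudo=1_000_000, seed=2):
    """Fraction of background-only pseudo-experiments fluctuating up to at
    least the observed yield, and the equivalent Gaussian significance."""
    rng = np.random.default_rng(seed)
    pseudo = rng.poisson(b_mean, n_pseudo)     # background-only pseudo-data
    p = np.mean(pseudo >= n_obs)
    return p, norm.isf(p)

# toy numbers: 20 expected background events, 35 observed in the
# signal-enriched region of the discriminant
p, z = pseudo_pvalue(35, 20.0)
print(f"p = {p:.2e}, significance = {z:.1f} sigma")
\end{verbatim}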
\section{Conclusions}
The measurements presented here confirm the SM expectations for top quark production and decay within the theoretical uncertainties, and provide high precision on the most important top property, the top quark mass, with a precision that it will take the LHC years to match. The first evidence of single top production and the first direct measurement of the $V_{tb}$ parameter constitute another major Tevatron success. However, most analyses are statistically limited; with 2\,fb$^{-1}$ already recorded, and between 6 and 8\,fb$^{-1}$ expected, uncertainties will be reduced and smaller deviations from the SM investigated.
I would like to thank here the conference organizers and my CDF and D\O\ collaborators for the hard work and effort spent in achieving the results presented above.
\input{referenc}
\printindex
\end{document}
|
1,108,101,563,711 | arxiv | \section{Introduction and summary}
This paper deals with $\mathcal{N}=2$ four-dimensional superconformal field theories (SCFT) and with the computation of the OPE coefficients of the chiral primary operators parametrizing the Coulomb branch (CB) of their moduli space. Research in the last two decades has enormously increased the landscape of such theories, which features points lacking marginal deformations, hence constituting isolated non-Lagrangian theories with an intrinsically strong interaction. The archetypes of such theories are the Argyres-Douglas (AD) and Minahan-Nemeschansky (MN) theories \cite{Argyres:1995jj,Argyres:1995xn,Eguchi:1996vu,Eguchi:1996ds,Minahan:1996cj}. More recently \cite{Argyres:2015ffa,Argyres:2015gha,Argyres:2016xmc,Argyres:2016xua} this group of theories has been further enlarged in the attempt to compile a complete list of $\mathcal{N}=2$ SCFT classifying their moduli space of vacua, characterized by their Coulomb branches, Higgs branches and possibly mixed branches. The main properties of such theories have been derived either with field theory methods thanks to their connection to gauge theories through dualities and RG-flows \cite{Argyres:2010py,Argyres:2007tq,Maruyoshi:2016aim,Agarwal:2016pjo}, with geometric techniques via class ${\cal S}$ constructions and engineering in string theory \cite{Gaiotto:2009we,Bonelli:2011aa,Xie:2012hs,Cecotti:2012jx,Cecotti:2013lda,Chacaltana:2014nya,Wang:2015mra,Chacaltana:2016shw,Giacomelli:2017ckh} or by studying the superconformal index~\cite{Buican:2015ina,Cordova:2015nma,Agarwal:2018zqi,Xie:2021omd,Song:2021dhu,Buican:2021elx}.
It is well known that the dynamics of superconformal theories is strongly constrained. Indeed, superconformal symmetry is powerful enough to determine all correlators of half-BPS operators of the theory out of the two- and three-point functions of superconformal primaries.
Denoting by $\{ {\cal O}_a \}$ a basis of primary operators, a SCFT is therefore specified by the conformal dimensions $\Delta_a$, the metric $g_{ab}$ and the OPE coefficients $\lambda_{abc}$, respectively defined as
\begin{equation}
G_{ab}(x) =\langle \mathcal{O}_a(x)\mathcal{O}_b(0)\rangle={g_{ab} \over |x|^{\Delta_a+\Delta_b}}\,,\qquad\qquad{\cal O}_a (x) {\cal O}_b(0) =\sum_{c} { \lambda_{abc} {\cal O}_c(0) \over |x|^{\Delta_a+\Delta_b-\Delta_c} } +\cdots\,,
\end{equation}
where the dots stand for subleading terms in the limit $x\to 0$. A particularly interesting class of operators is given by the scalar \emph{chiral} primaries $\{ {\cal O}_i \}$. This set of operators forms a commutative ring ${\cal O}_i (x) {\cal O}_j(0)\sim {\cal O}_{i} (0){\cal O}_{j} (0)+\ldots$, which makes the OPE coefficients either zero or one. The chiral/anti-chiral two-point functions $G_{ij} =\langle \mathcal{O}_i(x)\bar{\mathcal{O}}_{j}(0)\rangle$, instead, are non-trivial. Alternatively,
in an orthonormal basis where $\hat {g}_{ij}= \delta_{ij}$, the SCFT data is encoded in the OPE coefficients $\hat{\lambda}_{ijk}$ which can be expressed in terms of the metric coefficients $G_{ij}$. The OPE coefficients $\hat \lambda_{ijk}$ will be the focus of this paper.\footnote{OPE coefficients
will always be written in the orthonormal basis, so the hat will be omitted from now on.}
We will compute them using two different approaches: the conformal bootstrap and localization.
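For orientation, in the simplest case of a Coulomb branch freely generated by a single primary $\phi$ of dimension $\Delta$, with ${\cal O}_n\equiv \phi^n$ and diagonal two-point functions $G_n\equiv \langle \mathcal{O}_n(x)\bar{\mathcal{O}}_{n}(0)\rangle\, |x|^{2n\Delta}$, the non-trivial OPE coefficients in the orthonormal basis take the standard form
\begin{equation}
\lambda^2_{n_1\, n_2\,(n_1+n_2)} = \frac{G_{n_1+n_2}}{G_{n_1}\,G_{n_2}}\,,
\end{equation}
so that the problem reduces to the computation of the two-point coefficients $G_n$.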
The starting point of the conformal bootstrap is the writing of the four-point function as an infinite sum of conformal blocks associated to the exchange of operators of arbitrary weight and spin, with coefficients given by quadratic combinations of the unknown OPE coefficients.
Crossing relations are then used to bound the region of allowed values for the OPE coefficients after a specific numerical optimization procedure. This approach has been pioneered in \cite{Rattazzi:2008pe} and detailed in the supersymmetric $\mathcal{N}=2$ case in \cite{Beem:2014zpa,Cornagliotto:2017snu,Gimenez-Grau:2020jrx}. See \cite{Simmons-Duffin:2016gjk, Poland:2018epd} for recent reviews.
The second approach we use is based on localization techniques, introduced in \cite{Nekrasov:2002qd,Flume:2002az,Bruzzo:2002xf} for gauge theories on $\mathbb{R}^4$, and later in \cite{Pestun:2007rz} for $S^4$. Two-point correlators of chiral/anti-chiral primary operators have been computed using localization on the four-sphere, with operator insertions at the North and South poles \cite{Baggio:2014ioa,Baggio:2014sna,Baggio:2015vxa,Baggio:2016skg,Gerchkovitz:2016gxx,Rodriguez-Gomez:2016ijh,Rodriguez-Gomez:2016cem,Billo:2017glv,Billo:2019job,Beccaria:2020hgy,Beccaria:2021hvt,Billo:2021rdb}. Results at weak coupling were shown to reproduce those obtained via Feynman-diagram techniques. Instanton corrections for $SU(2)$ with $N_f=4$ massless flavours were shown to be in complete agreement with those obtained by the conformal bootstrap method \cite{Beem:2014zpa}.
In this paper, we are interested in two-point correlators of those SCFT's of the AD type, lying in the CB moduli space of $\mathcal{N}=2$ gauge theories with matter, at points where mutually non-local degrees of freedom become simultaneously massless. Consequently these theories lack a Lagrangian description.\footnote{Some of these theories, although they have no ${\mathcal{N}}=2$ conformal manifold, may be reached from ${\mathcal{N}}=1$ Lagrangian theories in the UV by an RG flow \cite{Maruyoshi:2016tqk}.}
The work is inspired by the observation \cite{Russo:2014nka,Russo:2019ipg} that, for specific choices of the masses,
the partition function on $S^4$ of $SU(N)$ gauge theories with fundamental matter is dominated at large radius by saddle points, some of which are precisely the AD points, suggesting that the large-radius physics is described by a superconformal field theory. Here we exploit this observation
to extract the two-point correlators of the CB operators and, consequently, their OPE coefficients for interacting non-Lagrangian conformal field theories, from the large-radius limit of gauge theories with fundamental matter.
Following \cite{Billo:2017glv,Billo:2019job,Gerchkovitz:2016gxx}, we derive a localization formula for the two-point correlators on $S^4$, and cast the result in the form
\begin{equation}
G_{ij} (x) =\langle \mathcal{O}_i(x)\bar{\mathcal{O}}_{j}(0)\rangle = C_{ij} -C_{im} C^{mn} C_{nj}\,,
\label{aqqbar20}
\end{equation}
where $C_{ij}$ is the matrix-model two-point function
\begin{equation}
C_{ij}={1\over {Z}_{S^4}}
\int d {a} \, O_i({a},q) \bar{O}_j({a},\bar q) \big| Z_{\mathbb{R}^4} ({a},\epsilon_1,\epsilon_2,q)\big|^2\,,
\label{correlatorsS40}
\end{equation}
and $x=2\pi R$, with $R$ the radius of the four sphere.
The integral is performed over the imaginary axes for ${a}=\{ a_1,\ldots,a_{N-1} \}$. $Z_{\mathbb{R}^4}$ and $O_i$ are respectively the partition function and the one-point function on $\mathbb{R}^4$ in an $\Omega$-background where $\epsilon_1 =\epsilon_2=R^{-1}$, whereas ${Z}_{S^4}$ is the partition function on the four-sphere, $q=\Lambda^{2N-N_f}/4$, $N_f$ is the number of flavours and $\Lambda$ is the renormalization-group invariant scale. The sum over $m,n$ in (\ref{aqqbar20}) runs over all operators with dimensions lower than $\Delta_i$ and gets rid of the operator mixing
via Gram-Schmidt orthogonalization.
In the large-radius limit, the integral (\ref{correlatorsS40}), for specific choices of the masses, is dominated by a saddle point corresponding to an AD conformal theory in the moduli space of the gauge theory, and $G_{ij}$ will therefore give rise to the two-point correlator of that conformal theory.
There are two main difficulties in evaluating the saddle-point integral. First, the saddle point is located at strong coupling, so one has to rely on non-perturbative formulae for the prepotential, where the entire tower of instanton contributions has been resummed. Second, the correlators receive contributions from higher-order terms in the radius expansion, codified by the generalized prepotentials ${\cal F}_{g\geq2}$, which survive the limit ${a}\to {a}_*$, $R\to \infty$ with $({a}-{a}_*)R$ finite, where $a_*$ is the saddle-point value of $a$.
In the case of $SU(2)$ gauge theories with matter, and the AD theories describing their strong coupling regimes, exact formulae for the SW periods are known and the contributions of the SW prepotential ${\cal F}_0$ and its first gravitational correction ${\cal F}_1$ can be evaluated explicitly.
For higher-rank AD theories, explicit formulae for the SW periods are, to our knowledge, not known. Here we compute the SW periods for some rank-two AD theories describing the strong coupling regimes of pure $SU(5)$ and $SU(6)$ gauge theories (see \cite{Masuda:1996np} for studies of the SW periods in the underlying SQCD theories). The SW periods are explicitly evaluated and summed up to generalized hypergeometric functions.
While there is a priori no reason to expect that the higher gravitational terms encoded in the functions ${\cal F}_{g>1}$ are suppressed, we
find remarkable evidence that they give mild contributions to the OPE coefficients: Indeed,
we show that, in almost all cases we analyzed, results obtained including only the contributions of ${\cal F}_0$ and ${\cal F}_1$ are inside the bootstrap windows. Similar conclusions were reached in \cite{Grassi:2019txd}, where the results for the OPE coefficients of the rank-one AD theories have been extrapolated from matrix-model formulae derived in the large-R-charge limit.
We show similar agreement for the rank-two AD theories, known as $(A_1,A_4)$ and $(A_1,A_5)$, which lie in the moduli space of pure $SU(5)$ and $SU(6)$ super Yang-Mills. Thanks to the relatively low values of the conformal dimensions and the central charges, we are able to compute, using the bootstrap machinery, stringent bounds for the OPE coefficients in these theories, providing remarkable tests of the localization formula.
Finally, we observe that the localization formula for the OPE coefficients makes reference only to intrinsic data of the SCFT, and can therefore be extended to other non-Lagrangian SCFT's, not necessarily connected to gauge theories, like the MN ones. Unfortunately, the conformal dimensions in these theories are not so small, and the windows obtained from the conformal bootstrap are too wide to qualify as a valid test.
This is the plan of the paper: In Section \ref{Sec:Localization} we discuss localization and compute the OPE coefficients.
In Section \ref{Sec:Bootstrap} we discuss the results obtained with the bootstrap. In Section \ref{Sec:LargeCharge} we verify that our formulae for computing the OPE coefficients are compatible with those valid at large R-charge available in the literature. Finally in Section \ref{Sec:Conclusions} we draw our conclusions and discuss some open questions. Technical details are deferred to the appendices.
\section{OPE coefficients from localization}\label{Sec:Localization}
This section is divided into five subsections. In \ref{Sec:LocalizationTwoPoint} we introduce the localization formula for two-point correlators on $S^4$, and discuss the large-radius limit in \ref{Sec:LargeRadius} and the integration measure in \ref{Sec:Measure}. In \ref{Sec:SU2computation} and \ref{Sec:SU56computation} we compute the OPE coefficients for the AD theories lying in the CB moduli space of $SU(2)$ SQCD and $SU(5),SU(6)$ pure super Yang-Mills respectively.
\subsection{Localization formula}\label{Sec:LocalizationTwoPoint}
\subsubsection{One-point functions on $\mathbb{R}^4$}
The partition function on ${\mathbb R}^4$ for a supersymmetric $\mathcal{N}=2$ $SU(N)$ gauge theory with $N_f$ flavours in an $\Omega$ background with parameters $\epsilon_1,\epsilon_2$, can be written as the product of a tree level, a one-loop, and an instanton contribution as \cite{Nekrasov:2002qd}
\begin{equation}
Z_{\mathbb{R}^4} (a,\epsilon_1,\epsilon_2,q)=Z_{\rm tree} (a,\epsilon_1,\epsilon_2,q)Z_{\rm one-loop} (a,\epsilon_1,\epsilon_2)
Z_{\rm inst} (a,\epsilon_1,\epsilon_2,q)\,,
\label{partitionR4}
\end{equation}
where \cite{Alday:2009aq,Billo:2013fi,Fucito:2013fba}
\begin{eqnarray}
Z_{\rm tree} (a,q) &=& q^{ \sum_u a_u^2 \over 2\epsilon_1 \epsilon_2 } \nonumber\\
Z_{\rm one-loop} (a) &=& { \prod_{u=1}^N \prod_{f=1}^{N_f} \Gamma_2\left(a_{u}-m_f +{\epsilon\over 2}\right) \over \prod_{u < v}^N \Gamma_2(a_{uv})\Gamma_2(a_{uv}+\epsilon) } \nonumber\\
Z_{\rm inst} (a,q) &=& \sum_{Y} q^{|Y|} Z_Y= \sum_{Y} q^{|Y|} { \prod_{u=1}^N \prod_{f=1}^{N_f} z_{Y_u,0} \left(a_{u}-m_f +{\epsilon\over 2}\right) \over \prod_{u , v=1}^N
z_{Y_u,Y_v} (a_{uv}) } \label{znek}
\end{eqnarray}
with $\epsilon=\epsilon_1+\epsilon_2$, $ q = e^{2\pi {\rm i} \tau} \mu^{2N-N_f}$ the instanton-counting parameter ($\mu$ being the renormalization reference scale) and
\begin{equation}
\tau={\theta\over 2\pi } +{4\pi {\rm i} \over g^2}\,.
\label{taucoupling}
\end{equation}
The sum
over $Y=\{ Y_u\}_{u=1,\ldots,N} $ runs over the $N$-tuplets of Young tableaux with a total number $|Y|$ of boxes
and
\begin{eqnarray}
z_{Y_u,Y_v} &=& \prod_{(i,j)\in Y_u} \left[ x+\epsilon_1 (i-k_{vj}) -\epsilon_2(j-1-\tilde{k}_{ui} ) \right] \nonumber\\
&& \times \prod_{(i,j)\in Y_v} \left[ x-\epsilon_1 (i-1-k_{uj}) -\epsilon_2(j-\tilde{k}_{vi} ) \right]\,,
\label{zryy}
\end{eqnarray}
where $k_{uj}$ and $\tilde k_{ui}$ denote the lengths of the $j^{\rm th}$ row and $i^{\rm th}$ column of $Y_u$.
We are interested in correlators involving chiral primary operators made out of the scalar field $\varphi$ in the vector multiplet. We write
\begin{equation}
{\cal O}_{J_1,J_2\ldots J_n} (x)= {\rm tr\,} \varphi^{J_1}(x) \, {\rm tr\,} \varphi^{J_2}(x)\, \ldots {\rm tr\,} \varphi^{J_n}(x)\,.
\label{chiral2}
\end{equation}
Sometimes we will find it convenient to use a collective index $i$ (which refers to the total R-charge) to denote the operators in (\ref{chiral2}) as ${\cal O}_i(x)$ and their dimension as $\Delta_i$. The one-point function of such operators on ${\mathbb{R}^4}$ is given by
\begin{equation}
O_{i}(a,q) =\langle \,{\cal O}_{J_1,J_2\ldots J_n}(x)\rangle = \sum_{Y} q^{|Y|} Z_Y
O _{J_1,Y}O _{J_2,Y}\ldots O _{J_n,Y}\,,
\label{corrR4}
\end{equation}
where
\begin{equation}
O _{J,Y} =
\sum_{s=1}^{|Y|} \left[ (\chi_s{+}\epsilon_1)^J {+}(\chi_s{+}\epsilon_2)^J {-}\chi_s^J{-}(\chi_s {+}\epsilon)^J \right]\,,
\label{varphifix}
\end{equation}
and
\begin{equation}
\chi_s=\chi_{u,(i,j)\in Y_u}=a_u+(i-1)\epsilon_1+(j-1)\epsilon_2\,.
\end{equation}
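As a simple illustration of \eqref{varphifix}, for a configuration with a single box sitting at position $(1,1)$ of $Y_u$ one has $\chi_s=a_u$ and, taking $J=2$,
\begin{equation}
O_{2,Y} = (a_u{+}\epsilon_1)^2+(a_u{+}\epsilon_2)^2-a_u^2-(a_u{+}\epsilon)^2 = -2\,\epsilon_1\epsilon_2\,,
\end{equation}
independently of $a_u$.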
\subsubsection{Two-point correlators on $S^4$}
The partition function on $S^4$ of a supersymmetric $\mathcal{N}=2$ $SU(N)$ gauge theory
can be written as \cite{Pestun:2007rz}\footnote{Notice that here we have reabsorbed
the Vandermonde determinant $\prod _{u<v}^{N}a_{uv}^2$ into the modulus square of the one-loop partition function (\ref{znek})
using the identity $\Gamma_2(x+\epsilon_1)\Gamma_2(x+\epsilon_2)=x\, \Gamma_2(x)\Gamma_2(x+\epsilon)$.}
\begin{equation}
{Z}_{S^4}=\int d {a} \, \big| Z_{\mathbb{R}^4} ( {a},\epsilon_1,\epsilon_2,q)\big|^2\,,
\label{ws444}
\end{equation}
where ${a}=\{a_u\in{\mathbb C} \}_{u=1,\ldots,N}$, with $\sum_u a_u=0$, and the integral is performed over the imaginary axes.
$Z_{\mathbb{R}^4}$ is the partition function on ${\mathbb R}^4$ in an $\Omega$ background with parameters
\begin{equation}
\epsilon_1 \epsilon_2 ={1\over R^2} \, ,\qquad b=\sqrt{\epsilon_1\over \epsilon_2} \,,
\end{equation}
with $R$ the radius of the sphere and $b$ the squashing parameter that here will always be set to one.
We now introduce the two-point matrix-model integral
\begin{equation}
C_{ij}={1\over {Z}_{S^4}}
\int d {a} \, O_i({a},q) \bar{O}_j({a},\bar q) \big| Z_{\mathbb{R}^4} ({a},\epsilon_1,\epsilon_2,q)\big|^2\,,
\label{correlatorsS4}
\end{equation}
where $O_i({a},q)$ are the one-point functions of the chiral primary operators on $\mathbb{R}^4$, defined in \eqref{corrR4}.
Following \cite{Gerchkovitz:2016gxx,Billo:2017glv,Billo:2019job}, the two-point correlators $G_{ij}$ on the sphere can be obtained as
\begin{equation}
G_{ij}= \langle {\cal O}_i(2\pi R)\bar{\cal O}_j(0)\rangle = C_{ij} -C_{im} C^{mn} C_{nj}\,,
\label{aqqbar2}
\end{equation}
where the second term ``subtracts'' the mixing contributions between operators of different dimensions. Indeed, the sum over $m,n$ runs over all the operators with dimensions lower than $\Delta_{i}$
and $C^{mn}$ is the inverse of the mixing matrix restricted to the space of such operators. It follows
from (\ref{aqqbar2}) that $G_{in}=0$, i.e.~there is no mixing between operators of different dimensions, as expected.
Let us consider the OPE coefficients. Since we are dealing with chiral operators, there are no contractions between them, and therefore
\begin{equation}
{\cal O}_i (x) {\cal O}_j(0) = {\cal O}_{i}(0) {\cal O}_{j}(0) +\ldots\,,
\end{equation}
i.e.~the only non-trivial OPE coefficients are $\hat\lambda_{i,j,k_{ij} }=1$ with $k_{ij}$ the index label for
the operator ${\cal O}_{i} {\cal O}_{j}$. One can introduce the canonically-normalized operators $\hat{\cal O}_i$
\begin{equation}
{\cal O}_i (x)= \sqrt{G_{ii}} \, {\cal \hat O}_i(x)\,.
\label{rescalingO}
\end{equation}
In the new basis $\hat{G}_{ij}\sim\delta_{ij}/R^{2\Delta_i}$ and
\begin{equation}
\hat \lambda_{i,j,k_{ij}}=\sqrt{\frac{G_{k_{ij} , k_{ij} }}{G_{ii} G_{jj}}}\,.
\label{opecoeff}
\end{equation}
For the first few non-zero two-point correlators one finds
\begin{eqnarray}
G_{11} &=& C_{11}{-}C_{01} C_{10} \,, \label{gs} \\
G_{22} &=& C_{22}{-}\frac{C_{01} C_{12} C_{20}{+}C_{02} C_{10}
C_{21}{-}C_{02} C_{11} C_{20}{-} C_{12} C_{21}}{C_{01} C_{10}{-} C_{11}}\,.
\label{OPEcoeff1}
\end{eqnarray}
Alternatively one can write \cite{Gerchkovitz:2016gxx}
\begin{equation}
G_{nn} ={{\rm det}_{i,j\leq n} C_{ij} \over {\rm det}_{i,j\leq n-1} C_{ij} }\,.
\label{OPEcoeff2}
\end{equation}
Using these formulae one can evaluate the two-point correlators and the OPE coefficients for any SCFT with a known prepotential. In the
case of the $\mathcal{N}=2$ SCFT with $SU(2)$ gauge group and four fundamental massless flavours, the computation of $\lambda_{112}$
has been performed in \cite{Beem:2014zpa} numerically, taking into account the first few instanton contributions, and matched against the result obtained via the conformal bootstrap. Note that formula \eqref{OPEcoeff2} cannot be used for rank higher than one, because of the mixing of operators of the same dimension.\footnote{In the large-$N$ limit of gauge theories, the mixing of multi-trace operators can be neglected, and a formula like \eqref{OPEcoeff2} can still be used \cite{Beccaria:2020hgy,Beccaria:2021hvt,Billo:2021rdb}.} Formula \eqref{aqqbar2}, instead, can be used for any rank, leading in general to a non-diagonal matrix on the space of operators of the same dimension.
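For instance, specializing \eqref{opecoeff} to the lowest Coulomb branch operator, i.e.\ taking $i=j=1$ and $k_{ij}=2$, one simply has
\begin{equation}
\lambda^2_{1,1,2}=\frac{G_{22}}{G_{11}^{\,2}}\,,
\end{equation}
with $G_{11}$ and $G_{22}$ given above in terms of the matrix-model correlators $C_{ij}$.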
\subsection{The flat-space limit}\label{Sec:LargeRadius}
As we hinted in the Introduction, an important class of $\mathcal{N}=2$ SCFTs \cite{Argyres:1995xn,Argyres:1995jj,Minahan:1996cj} lives in the strong coupling regime of asymptotically-free gauge theories. These theories are isolated and do not admit a Lagrangian formulation. Can localization be used in these cases? The answer is yes.
The crucial observation was put forward in \cite{Russo:2014nka,Russo:2019ipg}, where the partition function $Z_{S^4}$ of the $SU(N)$ gauge theories with fundamental matter was studied in the limit where the radius of the four-sphere is large. In this limit
the gauge prepotential ${\cal F}$ can be written as
\begin{equation}
{\cal F}( a,R) \approx \sum_{g=0}^\infty {\cal F}_g (a) \, R^{-2g} \,,
\label{rf0}
\end{equation}
where $ {\cal F}_0 (a)$ is the SW prepotential characterizing the low energy dynamics of the gauge theory in flat space, while the ${\cal F}_{g\geq 1}$ account for the gravitational corrections arising from the $\Omega$-background curvature of spacetime.
The integral (\ref{ws444}) is dominated by a saddle
point at the extremum ${a}= {a}_*$ of the SW prepotential ${\cal F}_0(a)$, i.e.~where
\begin{equation}
a^s_{D}(a_*)=-{1\over 2\pi {\rm i}} {\partial {\cal F}_0 \over \partial a_s} \Big|_{a=a_*}=0\,, \qquad \quad s=1,\ldots,N-1\,,
\end{equation}
leading to $Z_{S^4} \approx \big| e^{ \Lambda^2 R^2 f_* } \big|^2$ with $ {\cal F}_0(a_*)=\Lambda^2 \, f_*$ and $f_*$ a complex number. More precisely,
based on dimensional analysis, the expansion for large $\Lambda R$ of the prepotential can be written as\footnote{We have taken all masses to be of the order of $\Lambda$, and they will eventually be tuned to specific values corresponding to the AD points.}
\begin{equation}
R^2\, {\cal F} (a,\Lambda, R) \approx (\Lambda R)^2 f_* +F[ (a-a_*) R ] \,,
\end{equation}
where we discarded terms suppressed in the limit $\Lambda R\to\infty$. Plugging this into (\ref{correlatorsS4}) one finds
\begin{equation}
C_{ij}\approx
{1\over Z_{S^4} } \int d {a} \, O_i({a}) \, O_j({a}) e^{ F[ (a-a_*) R ] +{\rm h.c.} } \,,
\end{equation}
with
\begin{equation}
Z_{S^4} \approx \int d{a} \, e^{ F[ (a-a_*) R ] +{\rm h.c.} } \,.
\end{equation}
It is important to observe that the physics near a generic saddle point is better described in terms of the dual periods $a^s_{D}$. However, if the saddle point corresponds to an AD conformal point, either $a$ or $a_D$ can be used, because they are related by
\begin{equation}\label{tauaDa}
a^s_{D} \approx \tau^{s s'} \left( a-a_{*}\right)_{s'}\,,
\end{equation}
with $\tau^{s s'}$ a symmetric matrix with positive-definite imaginary part, whose entries depend homogeneously on $a_s$ with degree zero.
\subsection{The u-plane integral}\label{Sec:Measure}
The SW periods $a_s(u)$ provide a relation between the variables $a=\{a_s \}_{s=1,\ldots N-1} $ and the curve parameters $u=\{u_n\}_{n=2,\ldots,N}$ representing the gauge-invariant coordinates of the CB moduli space.
One can use these relations to write the partition function as an integral over the u-plane. The SW prepotential ${\cal F}_0(u)$ can be obtained from
$a_D(u)$ upon u-integration
\begin{equation}
{\partial {\cal F}_0(u) \over \partial u_n} = -2\pi {\rm i} \, a_D^s(u) {\partial a_s(u)\over \partial u_n} \,.
\end{equation}
The first gravitational correction ${\cal F}_1$ can also be written in terms of the SW periods and the discriminant $\Delta({u})$ of the SW geometry \cite{Nakajima:2003uh,Witten:1995gf,Moore:1997pc,Manschot:2019pog,Shapere:2008zf}
\begin{equation}
{\cal F}_1(u) = - \log\left[ \Lambda^{\frac{N(N-1)}{2}} {\rm det} \left( {\partial {a_s} \over \partial {u_n} } \right)\right]^{1 \over 2} +\left( b^2+{1\over b^2} \right) \log\left[ {\Delta(u) \over \Lambda^{2N(2N-1)} }\right]^{1\over 24}\!\!.
\label{F1g}
\end{equation}
The above formula can be checked perturbatively in $q$ using the Nekrasov partition function reviewed in Appendix \ref{SWSU(2)gauge}, or verified using modular invariance of the path integral.
Plugging (\ref{F1g}) into (\ref{correlatorsS4}) and setting the squashing parameter $b=1$ one finds
\begin{equation}
C_{ij} ={1\over {Z}_{S^4}}
\int d{u} \, {\cal O}_i({u}) \,{\cal O}_j({u}) \, \left| \Delta({u})^{1\over 6} e^{ 2 R^2 {\cal F}_0 ({u})} \right| \, +\ldots \,,
\label{correlatorsS43}
\end{equation}
and
\begin{equation}
{Z}_{S^4} =
\int d {u}\, \left| \Delta({u})^{1\over 6} e^{ 2 R^2 {\cal F}_0 ({u})} \right| +\ldots \,,
\label{ZS43}
\end{equation}
where the dots are the contribution of higher gravitational terms ($g\geq2$).\footnote{The inclusion of ${\cal F}_1$ is crucial in order to reconstruct a modular covariant u-integration measure \cite{Shapere:2008zf}.} These contributions are in general not known in a closed form.
In this paper we will compute the correlators and the OPE coefficients including only the contributions of ${\cal F}_0$ and ${\cal F}_1$.
Surprisingly, the results already fall, in most cases, inside the narrow windows determined by the conformal bootstrap techniques, showing small discrepancies only for a few operators of low dimension.
\subsection[$SU(2)$ gauge theory with fundamentals]{$\boldsymbol{SU(2)}$ gauge theory with fundamentals}\label{Sec:SU2computation}
In this section we apply the method we previously discussed to the computation of the OPE coefficients for the $SU(2)$ gauge theory with $N_f=1,2,3$ fundamental flavors (setting equal masses for all the flavors). The SW geometry of these theories is given by
\begin{equation}
w(x)^2=\prod_{i=1}^4 (x-e_i) =P(x)^2 -\Lambda^{4-N_f} \, (x-m)^{N_f} \,,
\end{equation}
with
\begin{equation}
P(x)=
\left\{
\begin{array}{ll}
x^2-u \qquad& N_f=1\,, \\
x^2-u +{\Lambda^2\over 4} \qquad& N_f=2 \,, \\
x^2-u +{\Lambda\over 4} ( x- 3 m) \qquad& N_f=3 \,. \\
\end{array}
\right.
\end{equation}
The SW prepotential ${\cal F}_0$ can be obtained from the SW periods via
\begin{equation}
{\cal F}_0(u) = - 2\pi {\rm i}\int^u du' a_{D} (u') {\partial a(u')\over \partial u'}\,, \label{prepo}
\end{equation}
with $a(u)$ and $a_D(u)$ the SW periods, whose explicit expressions are given in Appendix \ref{SWSU(2)gauge}.
We are interested in the physics near the AD conformal point, obtained by requiring that three branch points collide, i.e.
\begin{equation}
w^2(x)=(w^{2})'(x)=(w^{2})''(x)=0\,,
\end{equation}
which can be solved for $x$, $u$, and $m$; the solutions are collected in Table \ref{tablead}.
\begin{table}
\centering
$
\setstretch{1.3}
\begin{array}{|c|cccccc|}
\hline
N_f & d & c & x_* & u_* & m_* & \alpha \\
\hline
1 & \ft{6}{5} & \ft{11}{30}& \ft{\Lambda}{2} & \ft{3\Lambda^2}{4} & -\ft{3\Lambda}{4} & 2 \\
2 & \ft{4}{3} & \ft12 & {\Lambda\over 2} & {\Lambda^2\over 2} & {\Lambda\over 2} & 3 \\
3 & \ft{3}{2} & \ft23 & {\Lambda\over 8} & {5\Lambda^2\over 64} & - {\Lambda\over 8} & 4 \\
\hline
\end{array}
\setstretch{1}
$
\caption{\label{tablead}Summary of the data identifying the AD theories in the CB of $SU(2)$ SQCD. $c$ denotes the central charge, $d$ the CB-operator conformal dimension, $\alpha$ the power at which the CB coordinate appears in the discriminant, and $(x_*,u_*,m_*)$ the solution of the colliding-branch-point conditions.}
\end{table}
Expanding the discriminant, given in Eq.~\eqref{ddelta}, and the prepotential in Eq.~(\ref{prepo}) around the AD saddle point one finds
\begin{eqnarray}
\Delta(u) & = & c_\Delta\, \Lambda^{12 }\,\left({ u-u_* \over \Lambda^2} \right)^\alpha \,,\nonumber\\
{\cal F}_0(u) &=& \Lambda^2 \, \left[ f_* - {c^2_{\cal F} \over 2} \, \left({ u-u_* \over \Lambda^2} \right)^{2\over d} \right] \,, \label{uf0}
\end{eqnarray}
with $c_\Delta$ and $c^2_{\cal F}$ some real dimensionless constants, whose value is irrelevant for our purposes,\footnote{The minus sign in front of $c^2_{\cal F}$, though, is crucial in order for the integrals in \eqref{correlatorsS43} to converge. But, as can be seen by combining \eqref{uf0} with \eqref{tauaDa}, this sign is guaranteed by the positive-definiteness of the gauge coupling ${\rm Im}\, \tau$.} and $d, \alpha$ rational numbers characterizing the AD theory. The results for $N_f=1,2,3$ are displayed in Table
\ref{tablead}. The dynamics of the gauge theory near these points is described by the rank-$1$ AD conformal field theories known as $\mathcal{H}_0$, $\mathcal{H}_1$, and $\mathcal{H}_2$ respectively.
In particular, one finds
\begin{equation}
\alpha=N_f+1=12\hspace{1pt}\frac{d-1}{d} \,.
\end{equation}
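As a quick check, for $N_f=1$ Table~\ref{tablead} gives $d=\ft65$, so that $12(d-1)/d=2=N_f+1$; similarly, $d=\ft43$ and $d=\ft32$ reproduce $\alpha=3$ and $\alpha=4$ for $N_f=2,3$.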
Now let us consider the two-point correlators in these theories.
The chiral ring is generated by a single operator that can be taken to be
\begin{equation}
{\cal O}(x) =\Lambda^{d-2}\left( \ft12 {\rm tr} \varphi^2(x) - u_*\right) =\tilde u \,,
\end{equation}
with the shift chosen such that $\langle {\cal O} \rangle =0$ at the SCFT point $u=u_*$, and the normalization chosen such that all the dependence on the scale $\Lambda$ drops out of the two-point functions $C_{ij}$.
The remaining operators in the chiral ring are obtained as powers ${\cal O}_i(x)={\cal O}(x)^i$ of the chiral ring generator. The one-point function factorizes as
\begin{equation}
O_i(u)=\langle {\cal O} (x)^i \rangle = \tilde{u}^i\,. \label{oi}
\end{equation}
Plugging (\ref{uf0}) and (\ref{oi}) into (\ref{correlatorsS43}) one finds\footnote{Here we use
\begin{equation}
\int_0^\infty e^{-c x^\beta} x^\gamma\,dx=\beta^{-1} c^{-{1+\gamma\over \beta} } \Gamma\left( { 1+ \gamma \over \beta} \right)\nonumber\,.
\end{equation}
}
\begin{eqnarray}
C_{ij} & \approx & { \int_0^\infty d\tilde{u} \, e^{- R^2 c^2_{\cal F}\, \tilde{u}^{\frac{2}{d}} } \, \tilde{u}^{i+j+{\alpha\over 6}} \over
\int_0^\infty d\tilde{u} \, e^{- R^2 c^2_{\cal F} \, \tilde{u}^{\frac{2}{d}} } \, \tilde{u}^{\alpha \over 6} } ={1 \over (R c_{\cal F})^{d(i+j) }} { \Gamma\left(\frac{ d}{2} \left(1+i+j+\ft{\alpha}{6} \right) \right) \over \Gamma\left(\frac{d}{2}\left(1+\frac{\alpha}{6} \right)\right) }\,.
\label{cij2}
\end{eqnarray}
Notice that $C_{ij}\sim R^{-{(i+j)d}}$
as expected from conformal invariance.
The two-point correlators follow then from (\ref{aqqbar2}) or (\ref{OPEcoeff2}), and the OPE coefficients from (\ref{opecoeff}). The results are displayed in Table \ref{tSW}.
Interestingly, our results depend only on two numbers, $d$ and $\alpha$, characterizing the dimension of the CB operator and
the power at which the CB coordinate appears in the discriminant of the curve. In particular, the latter is related to the central charge $c$ of the theory. These two numbers are codified in the SW curves and can be easily computed for non-Lagrangian theories like the MN theories with flavour symmetry $E_6$, $E_7$, and $E_8$.
This data will be compared in Section \ref{Sec:Bootstrap} against the results obtained using the bootstrap approach.
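For concreteness, the following minimal numerical sketch (not the code actually used for our computations) evaluates \eqref{cij2}, \eqref{OPEcoeff2} and \eqref{opecoeff} for a rank-one theory specified only by $d$ and $\alpha$; the overall factor $R\, c_{\cal F}$ is set to one, since it cancels in the OPE coefficients. With the data of Table~\ref{tSW} as input it should reproduce the quoted values, e.g.\ $\lambda^2_{u\,u\,u^2}\simeq 2.098$ for ${\cal H}_0$.
\begin{verbatim}
# Minimal sketch: rank-one OPE coefficients from (cij2), (OPEcoeff2), (opecoeff),
# given only the CB dimension d and the discriminant exponent alpha.
# The factor R*c_F is set to one since it drops out of the ratios in (opecoeff).
import numpy as np
from scipy.special import gamma

def C(i, j, d, alpha):
    # matrix-model two-point function of eq. (cij2) with R*c_F = 1
    return gamma(0.5*d*(1 + i + j + alpha/6)) / gamma(0.5*d*(1 + alpha/6))

def G(n, d, alpha):
    # two-point correlator G_nn from the determinant formula (OPEcoeff2)
    Cmat = np.array([[C(i, j, d, alpha) for j in range(n + 1)] for i in range(n + 1)])
    den = np.linalg.det(Cmat[:n, :n]) if n > 0 else 1.0
    return np.linalg.det(Cmat) / den

def lam2(i, j, d, alpha):
    # squared OPE coefficient lambda^2_{u^i u^j u^(i+j)} from eq. (opecoeff)
    return G(i + j, d, alpha) / (G(i, d, alpha) * G(j, d, alpha))

# H_0 theory (d = 6/5, alpha = 2): lambda^2_{u u u^2}, _{u u^2 u^3}, _{u^2 u^2 u^4}
print(lam2(1, 1, 6/5, 2), lam2(1, 2, 6/5, 2), lam2(2, 2, 6/5, 2))
\end{verbatim}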
\begin{table}
\centering
$
\setstretch{1.2}
\begin{array}{|c|cccccc|}
\hline
{\rm SCFT} & \mathcal{H}_0 & \mathcal{H}_1&\mathcal{H}_2 &E_6 &E_7 & E_8\\
\hline
{\rm SW} & y^2=x^3{+} {u} & y^2=x^3{+} {u} x & y^2=x^3{+} {u}^2& y^2=x^3{+} {u}^4& y^2=x^3{+} {u}^3 x& y^2=x^3{+} {u}^5\\
\hline
d & \frac{6}{5} & \frac{4}{3} & \frac{3}{2} & 3 & 4 & 6 \\
c & \frac{11}{30} & \frac{1}{2} & \frac{2}{3} & \ft{13}{6} & \ft{19}{6} & \ft{31}{6} \\
\alpha &2& 3 & 4 & 8 &9& 10\\
\hline
\lambda^2_{u\hspace{1pt} u\hspace{1pt} u^2} & 2.09823 & 2.24125 & 2.42063 & 4.51365 & 6.75467 & 15.1158 \\
\lambda^2_{u\hspace{1pt} u^2\hspace{1pt} u^3} & 3.30002 & 3.67408 & 4.17529 & 12.0469 & 24.011 & 95.3327 \\
\lambda^2_{u^2\hspace{1pt} u^2\hspace{1pt} u^4} &7.20621 & 8.62414 & 10.7157 & 67.008 & 222.212 & 2443.47 \\
\hline
\end{array}
\setstretch{1}
$
\caption{\label{tSW}Some OPE coefficients for the rank-$1$ AD theories $\mathcal{H}_0,\mathcal{H}_1,\mathcal{H}_2$, and for the rank-$1$ MN theories $E_6,E_7,E_8$. We have dropped the tilde from $u$ in order not to clutter the notation.}
\end{table}
\subsection[AD conformal points in pure \texorpdfstring{$SU(N)$}{SU(N)} gauge theories]{AD conformal points in pure $\boldsymbol{SU(N)}$ gauge theories}
\label{Sec:SU56computation}
The SW geometry describing the low energy dynamics of pure $SU(N)$ gauge theories can be written as
\begin{equation}
w(x)^2 =P_N(x)^2-\Lambda^{2N}\,,
\end{equation}
where
\begin{equation}
P_N(x)=x^N-u_2 x^{N-2}-u_3 x^{N-3}-\ldots -u_N
\end{equation}
and $u_n$ are the CB parameters. These theories contain an AD conformal point \cite{Eguchi:1996ds}, usually called $(A_1,A_{N-1})$, at
\begin{equation}
u_2=u_3=\ldots =u_{N-1}= 0 \, , \qquad u_N=\Lambda^N \,,
\end{equation}
which is obtained by setting
\begin{equation}
x = \tilde x \nu\,, \qquad \qquad w = \tilde w \, {\rm i} \sqrt{2} \,(\Lambda\nu)^{N/2} \,, \qquad \qquad u_n =\tilde u_n \, \nu^{n} +\delta_{n,N} \Lambda^N\,,
\end{equation}
and keeping the leading order in the limit $\nu \to 0$. This leads to the AD curve
\begin{equation}
\tilde w^2 = \tilde x^N-\tilde u_2 \, \tilde x^{N-2} -\tilde u_3 \tilde x^{N-3} -\ldots -\tilde u_N\,, \label{adsw}
\end{equation}
and the SW differential
\begin{equation}
\lambda = {\tilde w d \tilde x} \,.
\end{equation}
The conformal dimensions of the parameters $\tilde u_n$ entering the curve are determined by requiring that the SW differential has dimension one,
$d(\lambda)=1$, leading to
\begin{equation}
d(\tilde x) ={2\over N+2}\,, \qquad \qquad d(\tilde w) ={N\over N+2}\,, \qquad \qquad d(\tilde u_n) ={2 n\over N+2} \,.\label{puregaugedims}
\end{equation}
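For instance, for $N=5$ this gives $d(\tilde u_n)=\ft47,\ft67,\ft87,\ft{10}{7}$ for $n=2,\ldots,5$, as reported in Table~\ref{ttSW}.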
These parameters are interpreted as Coulomb branch parameters, masses or couplings depending on whether their dimension is greater than, equal to, or smaller than one, respectively. In particular, the Coulomb branch parameters $\tilde u_n$ correspond to the choices
\begin{equation}
N\geq n > {N\over 2}+1\,. \label{rint}
\end{equation}
There are $r=\left[ \ft{N-1}{2} \right]$ such operators, with $r$ often referred to as the rank of the CFT and $[A]$ denoting the integer part of $A$. Finally, the central charge of the CFT is given by
\cite{Shapere:2008zf}
\begin{equation}\label{puregaugec}
c={d(\Delta) \over 12} +{r\over 6} = {N(N-1)\over 6(N+2) } + \ft16 \left[ \frac{N-1}{2} \right]\,.
\end{equation}
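For example, for the two rank-two cases considered below this gives $c=\ft{20}{42}+\ft{2}{6}=\ft{17}{21}$ for $N=5$ and $c=\ft{30}{48}+\ft{2}{6}=\ft{23}{24}$ for $N=6$, in agreement with Table~\ref{ttSW}.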
\begin{table}
\setstretch{1.25}
\centering
$
\begin{array}{|c|c|c|c|c|c|c|}
\hline
N & d(\tilde u_n) & {\rm rank} & {\rm curve}& {\rm CFT} & \Delta & c \\
\hline
3& {4\over 5} , {\bf {6\over 5}} &1 & \tilde w^2=\tilde x^3- v & \mathcal{H}_0\equiv(A_1,A_2) & -3^3\, v^2 & \ft{11}{ 30} \\
4 & {2\over 3} , 1 , {\bf {4\over 3}}&1 & \tilde w^2=\tilde x^4 -v & \mathcal{H}_1\equiv(A_1,A_3) & -4^4\, \, v^3 & \ft12\\
5 & {4\over 7} , {6\over 7} , {\bf {8\over 7}} , {\bf {10\over 7}} &2 & \tilde w^2=\tilde x^5-u\, \tilde x- v & (A_1,A_4)& -4^4 \, u^5+5^5\, v^4 &\ft{17}{21} \\
6 & ~~ {1\over 2} , {3\over 4} , 1, {\bf {5\over 4}} , {\bf {3\over 2}} ~~&2& ~~ \tilde w^2=\tilde x^6-u\, \tilde x-v~~ & (A_1,A_5)& 5^5 \, u^6+6^6 \, v^5 & \ft{23}{24} \\
\hline
\end{array}\
\setstretch{1}
$
\caption{\label{ttSW}CB conformal dimensions and SW curves for the AD theories arising in pure $SU(N)$ super Yang-Mills with $N=3,\ldots,6$. Masses and couplings are set to zero.}
\end{table}
In Table \ref{ttSW} we list the conformal dimensions $d(\tilde u_n)$ of the parameters entering the SW curve for the theories of rank up to two, indicating in boldface those operators
spanning the Coulomb branch. We also display the curves (and their discriminant) obtained by setting masses and couplings to zero, and renaming the CB operators as $u$, $v$. For $N=3,4$, one finds the rank-one CFTs $\mathcal{H}_0$, $\mathcal{H}_1$ studied in the previous section. For $N=5,6$ one finds the rank-two AD CFTs known as $(A_1,A_4)$ and $(A_1,A_5)$.
In a rank-two theory, one can consider correlators involving a single Coulomb branch operator or mixed ones. In the former case, the correlator can be naively computed by integrating over the one-dimensional slice defined by deformations from the conformal point involving only the CB operator we are interested in. The two-point correlators $G_{nn}$ are then given by (\ref{OPEcoeff2}) with $C_{ij}$ given by the rank-one formula
(\ref{cij2}). The resulting OPE coefficients, say for correlators involving only $u$-insertions, depend then only on the dimension $d(\tilde u)$ and the exponent $\alpha_u $ defining the vanishing rate of the discriminant $\Delta \sim u^{\alpha_u }$. The results obtained from this one-dimensional truncation are displayed in Table \ref{tablesu560}.
While we do not know how to physically justify such a truncation, we will see that it nevertheless gives results in good agreement with the bootstrap.
\begin{table}
\setstretch{1.2}
\centering
$
\begin{array}{|c|cc|cc|c|c|c|}
\hline
&~ d(u)~&~ \alpha_u~ & ~d(v)~ &~ \alpha_v~ & ~c ~ & \lambda^2_{u\, u\, u^2} & \lambda^2_{v\,v\,v^2} \\
\hline
(A_1,A_4) & \ft{8}{7} & 5 & \ft{10}{7} & 4 & \ft{17}{21} & 1.981 & 2.327 \\
(A_1,A_5) & \ft{5}{4} & 6 & \ft{3}{2} & 5 & \ft{23}{24} & 2.077 & 2.378\\
\hline
\end{array}
$
\setstretch{1}
\caption{\label{tablesu560} OPE coefficients for the rank-$2$ AD theories of type $(A_1,A_N)$ obtained by truncating the Coulomb branch
integral to a single operator slice.}
\end{table}
In the next two subsections we will derive the OPE coefficients (including mixed ones) from a full-fledged two-dimensional integration over the CB moduli space.
\subsubsection{$(A_1,A_5)$ Argyres-Douglas theory}
After setting the masses and couplings to zero, the low energy dynamics of $(A_1,A_5)$ at a generic point in the Coulomb branch is described by the curve\footnote{This analysis generalizes the results of \cite{Eguchi:1996vu} to this case.}
\begin{equation}
\tilde w^2 = \tilde x^6 - u \tilde x - v \,.
\end{equation}
The SW periods are defined by
\begin{equation}
a_s =\oint_{\alpha_s} \lambda \,,\quad \qquad a_D^s =\oint_{\beta_s} \lambda\,, \label{swperiods}
\end{equation}
with
\begin{equation}
\lambda =d\tilde x \sqrt{ \tilde x^6 - u \tilde x - v }\,,
\end{equation}
where $(\alpha_s,\beta_s)$ is a basis of cycles with intersection matrix $\alpha_s \cap \beta_{s'}=\delta_{s s'}$, $s,s'=1,2$. The integrals
(\ref{swperiods}) can be explicitly performed in the limit of small $u$. In this limit the branch points are located around the sixth roots
\begin{equation}
e_i =v^{1\over 6} \, w^i
\end{equation}
with $w = e^{\pi {\rm i} \over 3}$. The SW differential can be expanded as
\begin{equation}
\lambda =d\tilde x \sum_{n=0}^\infty {\Gamma(\ft32)\, (-u \tilde x)^n \over n! \Gamma(\ft32-n) } \, (\tilde x^6 - v)^{ {1\over 2}-n }
\end{equation}
leading to the integrals
\begin{equation}
\Pi_i =\oint_{\gamma_i} \lambda=2 \int_{e_i}^{e_{i+1}} \lambda = {\rm i} \, w^i\, V^{2\over 3} A( w^i \kappa) \,, \end{equation}
where
\begin{equation}
A(\kappa)=\sqrt{\pi } \sum_{n=0}^\infty
\frac{
\Gamma \left(\frac{n+7}{6}\right) \kappa^n \, \sin\left( {(n+1)\pi \over 6} \right) }{ (n{+}1)! \Gamma
\left(\frac{5}{6} (2{-}n)\right) } \label{akappasu6}
\end{equation}
and
\begin{equation}
{\rm i} V^{2\over 3}= v^{2\over 3} w^{{1\over 2}} \,,\qquad\quad \kappa=u v^{-{5 \over 6} } w^{1\over2}\,.
\end{equation}
The variables $V$ and $\kappa$ are chosen such that the $a_s$'s are purely imaginary for $V$ and $\kappa$ real and positive.
This choice follows from \cite{Pestun:2007rz} and is dictated by the mathematical consistency of the integral over $S^4$ and the gluing of the two charts.
The sum on the right-hand side of (\ref{akappasu6}) can be carried out and written in terms of generalized hypergeometric functions, but the
explicit formula is not very illuminating, so we omit it here.
The periods can then be written as
\begin{equation}
a_1=\Pi_{0} \,, \qquad\quad a_2=\Pi_{3}\,, \qquad\quad a_D^1=\Pi_{1} \,, \qquad\quad a_D^2=\Pi_{4} \,.
\end{equation}
As a consistency check of our choices, we have verified that the Riemann bilinear identities are satisfied, leading to a symmetric $\tau$ matrix with positive-definite imaginary part, and that the $a_s$'s are indeed purely imaginary for $\kappa$ and $V$ real and positive.
The SW prepotential ${\cal F}_0(a)$ can be computed in terms of the SW periods. Indeed, using the fact that ${\cal F}_0(a)$ is locally a homogeneous
function of degree two in the $a$'s, Euler's theorem gives
\begin{equation}
{\cal F}_0(a)=\frac{1}{2}\sum_{s=1}^2 a_s {\partial\over \partial a_s} {\cal F}_0(a) = -\pi {\rm i} \sum_{s=1}^2 a_s \, a_{D}^s \,.
\label{faad}
\end{equation}
We introduce the function
\begin{equation}
f(\kappa) = - 2\, V^{-{4\over 3}} \, {\rm Re} \, {\cal F}_0 =-{\rm Re} \left\{ 2 \pi {\rm i} \,w\, \left[ A( \kappa) A( w\kappa) + A( -\kappa) A( -w\kappa) \right] \right\}\,.
\end{equation}
One can check that $f(\kappa)$ is a positive, monotonically increasing function for $\kappa\geq0$.
The matrix-model type integral (\ref{correlatorsS43}) reduces to
\begin{eqnarray}
C_{u^{m} v^n}&=& \frac{1}{Z_{S^4}}\int d \kappa \,dV \left| 6^6{-} 5^5 \kappa^6 \right|^{1\over 6}
\, e^{ -V^{4\over 3} \, f(\kappa)} \, V^{ {5\over 3} +{5m\over 6} +n} \kappa^m \nonumber\\
&=& \frac{1}{Z_{S^4}} \int d \kappa \left| 6^6 {-}5^5 \kappa^6 \right|^{1\over 6}
\, \Gamma\left( 2{+}\ft{5m}{8}{+}\ft{3n}{4}\right) \left[ f(\kappa)\right]^{ -2-\ft{5m}{8}-\ft{3n}{4}} \, \kappa^m\,.
\end{eqnarray}
The last integral is performed numerically. The results are displayed in Table \ref{tablesu56}.
\begin{table}
\setstretch{1.2}
\centering
$
\begin{array}{|c|c|c|c|c|c|c|}
\hline
&~ [u]~ & ~[v]~ & ~c ~ & \lambda^2_{u\,\hspace{0.5pt} u\,\hspace{0.5pt} u^2} &\lambda^2_{u\,\hspace{0.5pt} v\,\hspace{0.5pt} u v} & \lambda^2_{v\,\hspace{0.5pt} v\,\hspace{0.5pt} v^2} \\
\hline
(A_1,A_4) & \ft{8}{7} & \ft{10}{7} & \ft{17}{21} &1.878 & 1.043 & 2.231 \\
(A_1,A_5) & \ft{5}{4} & \ft{3}{2} & \ft{23}{24} & 1.929 & 1.039 & 2.203 \\
\hline
\end{array}
$
\setstretch{1}
\caption{\label{tablesu56} Some OPE coefficients for the rank-$2$ AD theories of type $(A_1,A_N)$.}
\end{table}
\subsubsection{$(A_1,A_4)$ Argyres-Douglas theory}
After setting masses and couplings to zero, the low energy dynamics of $(A_1,A_4)$ at a generic point in the CB is described by the curve
\begin{equation}
\tilde w^2 = \tilde x^5 - u \tilde x - v \,.
\end{equation}
The SW differential is now
\begin{equation}
\lambda =d\tilde x \sqrt{ \tilde x^5 - u \tilde x - v }\,.
\end{equation}
Again we compute the integrals as an expansion for small $u$. In this limit the branch points are located around the fifth roots
\begin{equation}
e_i =v^{1\over 5} \, w^i
\end{equation}
with $w=e^{2\pi {\rm i} \over 5}$. The SW differential can be expanded as
\begin{equation}
\lambda =d\tilde x \sum_{n=0}^\infty {\Gamma(\ft32)\, (-u \tilde x)^n \over n! \Gamma(\ft32-n) } \, (\tilde x^5 - v)^{ {1\over 2}-n }
\end{equation}
leading to the integrals
\begin{equation}
\Pi_i =\oint_{\gamma_i} \lambda=2 \int_{e_i}^{e_{i+1}} \lambda = {\rm i} \, w^i\, V^{7\over 10} A( w^i \kappa)\,, \end{equation}
where
\begin{equation}
A(\kappa)=\sqrt{\pi } \sum_{n=0}^\infty
\frac{
\Gamma \left(\frac{n+6}{5}\right) \kappa^n \, \sin\left( {(n+1)\pi \over 5} \right) }{ (n{+}1)! \Gamma
\left(\frac{1}{10} (17{-}8n)\right) } \label{akappasu5}
\end{equation}
and
\begin{equation}
{\rm i} V^{7\over 10}= v^{7\over 10} w^{{1\over 2}} \, ,\quad\qquad \kappa=u v^{-{4 \over 5} } w^{1\over2}\,.
\end{equation}
The periods are now given by
\begin{equation}
a_1=\Pi_{0} \,, \qquad\quad a_2=\Pi_{4}+\Pi_1\,, \qquad\quad a_D^1=\Pi_{1} \,, \qquad\quad a_D^2=-\Pi_{3} \,.
\end{equation}
The SW prepotential ${\cal F}_0(a)$ is again computed from (\ref{faad}) leading to
\begin{equation}
f(\kappa) ={ -} 2\, V^{-{7\over 5}} \, {\rm Re} \, {\cal F}_0 =-{\rm Re} \left\{ 2 \pi {\rm i} \,\left[ w\, A( \kappa) A( w\kappa) {-} w^2\, A( w^3 \kappa) A(w^4 \kappa) {-} w^4\, A( w^3 \kappa) A(w \kappa) \right] \right\}\,.
\end{equation}
Once again $f(\kappa)$ is a positive, monotonically increasing function for $\kappa\geq0$.
The matrix-model integral (\ref{correlatorsS43}) reduces to
\begin{eqnarray}
C_{u^{m} v^n}&=&\frac{1}{Z_{S^4}} \int d \kappa \,dV \left| 5^5{+} 4^4 \kappa^5 \right|^{1\over 6}
\, e^{ -V^{7\over 5} \, f(\kappa)} \, V^{ {22\over 15} +{4m\over 5} +n} \kappa^m \nonumber\\
&=& \frac{1}{Z_{S^4}} \int d \kappa \left| 5^5 {+}4^4 \kappa^5 \right|^{1\over 6}
\, \Gamma\left( \ft{37}{21}{+}\ft{4m}{7}{+}\ft{5n}{7}\right) \left[ f(\kappa)\right]^{ {-} {\ft{37}{21}}{-}\ft{4m}{7}{-}\ft{5n}{7} } \, \kappa^m\,.
\end{eqnarray}
The last integral is performed numerically. The results are displayed in Table \ref{tablesu56}.
\section{Conformal bootstrap approach}\label{Sec:Bootstrap}
In this section we would like to obtain upper and lower bounds on the OPE coefficients that have been computed in the previous section and summarised in Table \ref{tSW}. The approach we follow is the conformal bootstrap, which allows us to find bounds on the dimensions and OPE coefficients of intermediate operators by exploiting the symmetries of the specific CFT. In our setup, such consistency conditions come from superconformal symmetry, unitarity and crossing symmetry of the four-point functions. The numerical study of these conditions gives bounds on the CFT data. While we defer the details to the following sections, we would like to emphasize that the bounds we find are fully non\nobreakdash-perturbative and do not refer to any Lagrangian description.
This section is divided into seven subsections. In Subsection~\ref{sec:multiplets} we show the multiplets exchanged in the OPE of two Coulomb branch operators. In Subsection~\ref{sec:bootstrapEqns} we write down the crossing equations and then we discuss their numerical implementation in Subsection~\ref{sec:numerical}. Subsections~\ref{sec:OPE_bounds} and~\ref{sec:gap_bounds} explain the setup for obtaining bounds on the OPE coefficients and operator dimensions, respectively, while Subsections~\ref{sec:res_bounds} and~\ref{sec:res_gaps} show the results we obtained.
\subsection{Superconformal multiplets exchanged}\label{sec:multiplets}
The OPE of two chiral operators has been investigated in~\cite{Fitzpatrick:2014oza, Beem:2014zpa, Lemos:2015awa, Gimenez-Grau:2020jrx}. In this section, we review their results using the notation introduced in~\cite{Cordova:2016emh}. A superconformal multiplet is fully specified by the Cartan eigenvalues of its superconformal primary. The Cartan eigenvalues consist of the conformal dimension $\Delta$, the left and right spins $j,{\bar{\jmath}}$, the $SU(2)$ Cartan $R$ and the $U(1)$ charge $r$. This information is encoded as
\eqn{
L\overbar{L}[j\hspace{1pt};{\bar{\jmath}}]^{(R;r)}_\Delta\,.
}[]
In principle one can infer any possible shortening condition from the values of $\Delta,R,r,j,{\bar{\jmath}}$. For convenience, however, shortening conditions will be indicated by replacing $L$ or $\overbar{L}$ with the symbols $A_1$, $A_2$, $B_1$ or $\bar{A}_1$, $\bar{A}_2$, $\bar{B}_1$. The meaning of these symbols has to do with the states that become null and are thus factored out of the multiplet.
The chiral and antichiral operators are the superconformal primaries of the following multiplets\footnote{In the conventions of~\cite{Cordova:2016emh} $Q$ has $\mathfrak{u}(1)$ charge $-1$, so chiral operators must have charge $\pm 2\Delta$. For simplicity, we rescale the R-charge by $-\tfrac12$ so that $r$ will denote both the conformal dimension and the charge of $\phi_r$. The multiplet notation, however, will stay consistent with the original work. In addition, we emphasize that the $SU(2)$ R-charge is expressed with the Dynkin label (i.e.\ R-spin $\frac12$ means $R=1$).}
\eqn{
\phi_r \in B_1\overbar{L}[0\hspace{1pt};0]_r^{(0;-2r)} \,,\qquad
{\bar{\phi}}_r \in L\overbar{B}_1[0\hspace{1pt};0]_r^{(0;2r)} \,.
}[]
For example, in an ${\cal N}=2$ superconformal gauge theory, $\phi_r$ can represent
the primary ${\rm tr}\, \varphi^{r}(x)$ (or a multi-trace operator made of $r$ scalars), and similarly for ${\bar{\phi}}_r$ with the chiral fields replaced by anti-chiral ones.
The OPE of a chiral and an antichiral operator depends on the difference of the two charges. Assuming first that $r_1 \geq r_2 +1$ and letting $r_{12} = r_1-r_2$ one has \cite{Lemos:2015awa}
\eqn{
\phi_{r_1} \times {\bar{\phi}}_{r_2} = B_1\overbar{L}[0\hspace{1pt};0]_{r_{12}}^{(0;-2r_{12})} + A_t\overbar{L}[\ell\hspace{1pt};\hspace{-0.8pt}\ell]^{(0;-2r_{12})}_{\ell+2+r_{12}} + L\overbar{L}[\ell\hspace{1pt};\hspace{-0.8pt}\ell]^{(0;-2r_{12})}_\Delta
\,,
}[]
with $\Delta > \ell+2+r_{12}$ and $t = 1$ if $\ell>0$ and $t = 2$ if $\ell=0$. Since we do not have permutation symmetry, $\ell$ can take all non-negative integer values, both even and odd. If instead $0 < r_1 - r_2 < 1$ the $B_1\overbar{L}$ multiplet is absent, while the rest of the OPE remains the same. If $r_2 > r_1$ we have the same conclusion with the conjugate multiplets, while if $r_1=r_2\equiv r$ we have \cite{Fitzpatrick:2014oza, Beem:2014zpa}
\eqn{
\phi_r \times {\bar{\phi}}_r = A_2\overbar{A}_2[0\hspace{1pt};0]^{(0;0)}_{2} + L\overbar{L}[\ell\hspace{1pt};\hspace{-0.8pt}\ell]^{(0;0)}_\Delta
\,,
}[]
with, again, $\Delta > \ell+2$. Also here $\ell$ can be even or odd.
The interesting operators that appear are the chiral operators themselves, exchanged when the two R-charges differ, and the stress tensor multiplet $A_2\overbar{A}_2[0\hspace{1pt};0]_2^{(0;0)}$. There are no higher spin analogs $A_1\overbar{A}_1[\ell\hspace{1pt};\hspace{-0.8pt}\ell]$ as they contain higher spin currents that appear only in free theories~\cite{Maldacena:2011jn,Boulanger:2013zza,Alba:2013yda}.
The superconformal blocks encoding the contributions of the above multiplets, including the whole towers of superdescendants, to a given four-point function are all expressed in terms of a single function. This function has been denoted $\mathcal{G}^{t}_{\Delta,\ell}$ for the $t$-channel correlator and $\mathcal{G}^{u}_{\Delta,\ell}$ for the $u$-channel one. In Section~\ref{sec:bootstrapEqns} we give the explicit definitions. In Figure~\ref{fig:neutral_channel} we depict the ranges of $\Delta$ and $\ell$ that correspond to a given multiplet. The fact that, for example, the long-multiplet half-line starts at the stress tensor point means that, from the point of view of this four-point function, the stress tensor multiplet is indistinguishable from a long multiplet of dimension very close to two. The same phenomenon will appear elsewhere as well. This is important when applying numerical bootstrap techniques, and it is the reason why it is necessary to input a small gap in the long sector whenever one wants to assume something about a protected operator which sits at the unitarity bound.
Now let us move to the OPE of two chiral operators. The list of exchanged superconformal multiplets is now much richer. In Figure~\ref{fig:charged_channel} we depict the range of spins and conformal dimensions of the blocks exchanged in this channel. The superconformal blocks reduce to usual conformal blocks $g_{\Delta,\ell}$ in this case because the selection rules allow the exchange of only one operator per multiplet. For notational consistency, however, we will denote them as $\mathcal{G}^s_{\Delta,\ell}$. In the equation below all superconformal multiplets, with the exception of the first one, contribute to the OPE through a superdescendant rather than a superprimary. For example, in a long multiplet $\mathcal{O}$ only $Q^4\mathcal{O}$ will contribute as it is the only chiral operator in it. The OPE reads~\cite{Lemos:2015awa}
\eqna{
\phi_{r_1} \times \phi_{r_2} &=
B_1\overbar{L}[0\hspace{1pt};0]^{(0;-2(r_1+r_2))}_{r_1+r_2}+B_1\overbar{L}[0\hspace{1pt};0]^{(2;-2(r_1+r_2)+2)}_{r_1+r_2+1} + \\&\;\quad +
B_1\overbar{L}[0\hspace{1pt};\lnsp1]^{(1;-2(r_1+r_2)+1)}_{r_1+r_2+\frac12} + A_t\overbar{L}[\ell\hspace{1pt}-2;\hspace{-0.8pt}\ell]^{(0;-2(r_1+r_2)+2)}_{r_1+r_2-1+\ell} + \\&\;\quad +
A_t\overbar{L}[\ell-1\hspace{1pt};\hspace{-0.8pt}\ell]^{(1;-2(r_1+r_2)+3)}_{r_1+r_2+\frac12+\ell} + L\overbar{L}[\ell\hspace{1pt};\hspace{-0.8pt}\ell]_{\Delta-2}^{(0;-2(r_1+r_2)+4)}\,,
}[chargedOPE]
where $t=2$ if the first spin label is zero and $t=1$ otherwise.
The first multiplet appearing in~\eqref{chargedOPE} is the one whose OPE coefficients we want to extremize. The second one is necessarily present if there is a mixed branch in the theory. Assuming its absence requires, in practice, imposing a gap on the scalar long multiplets.\footnote{The numerical results are weakly affected by assuming the absence of this class of operators.}
\begin{figure}
\centering
\begin{tikzpicture}
\node at (-.5,0) {$\phi_{r_1} \times {\bar{\phi}}_{r_2}$};
\node at (1.2,0) {$\ell = 0$};
\draw [thin, gray, dash pattern={on 3pt off 5pt}] (2,0) -- (7.99,0);
\node at (8.1,-.25) {\scriptsize{$\Delta$}};
\draw [{|[scale=1.1]}->, >={Latex[round, scale=1.3]}, thin, dash pattern={on 0pt off 3pt}] (2,0) -- (8,0);
\draw [very thick] (4.5,0) -- (7.88,0);
\node at (4.5,0)[circle,fill,inner sep=1.2pt]{};
\node at (3,0)[circle,fill,inner sep=1.2pt]{};
\node at (3,.5) {$B_1\overbar{L}$};
\node at (3,-.25) {\footnotesize $r_{12}$};
\node at (4.5,.5) {$A_2\overbar{L}$};
\node at (4.5,-.25) {\footnotesize $r_{12}+2$};
\draw [->] (4.5,.25)--(4.5,.08);
\draw [->] (3,.25)--(3,.08);
\node at (6.7,.5) {$L\overbar{L}$};
\begin{scope}[shift={(0,-1.2)}]
\node at (1.2,0) {$\ell > 0$};
\draw [thin, gray, dash pattern={on 3pt off 5pt}] (2,0) -- (7.99,0);
\node at (8.1,-.25) {\scriptsize{$\Delta$}};
\draw [{|[scale=1.1]}->, >={Latex[round, scale=1.3]}, thin, dash pattern={on 0pt off 3pt}] (2,0) -- (8,0);
\draw [very thick] (5.3,0) -- (7.88,0);
\node at (5.3,0)[circle,fill,inner sep=1.2pt]{};
\node at (5.3,.5) {$A_1\overbar{L}$};
\node at (5.3,-.25) {\footnotesize $\ell+r_{12}+2$};
\draw [->] (5.3,.25)--(5.3,.06);
\node at (7.1,.5) {$L\overbar{L}$};
\end{scope}
\end{tikzpicture}\\
\begin{tikzpicture}
\node at (-.5,0) {$\phi_{r} \times {\bar{\phi}}_r$};
\node at (1.2,0) {$\ell = 0$};
\draw [thin, gray, dash pattern={on 3pt off 5pt}] (2,0) -- (7.99,0);
\node at (8.1,-.25) {\scriptsize{$\Delta$}};
\draw [{|[scale=1.1]}->, >={Latex[round, scale=1.3]}, thin, dash pattern={on 0pt off 3pt}] (2,0) -- (8,0);
\draw [very thick] (3.5,0) -- (7.88,0);
\node at (3.5,0)[circle,fill,inner sep=1.2pt]{};
\node at (2.01,-.25) {\footnotesize $0$};
\node at (2.006,0)[circle,fill,inner sep=1.2pt]{};
\node at (3.5,.4) {$A_2A_2$};
\node at (3.5,-.25) {\footnotesize $2$};
\draw [->] (3.5,.25)--(3.5,.06);
\node at (6.7,.4) {$L\overbar{L}$};
\begin{scope}[shift={(0,-.8)}]
\node at (1.2,0) {$\ell > 0$};
\draw [thin, gray, dash pattern={on 3pt off 5pt}] (2,0) -- (7.99,0);
\node at (8.1,-.25) {\scriptsize{$\Delta$}};
\draw [{|[scale=1.1]}->, >={Latex[round, scale=1.3]}, thin, dash pattern={on 0pt off 3pt}] (2,0) -- (8,0);
\draw [very thick] (4.3,0) -- (7.88,0);
\node at (4.3,0)[circle,draw=black, fill=white,inner sep=1.2pt]{};
\node at (4.3,-.25) {\footnotesize $\ell+2$};
\node at (6.7,.4) {$L\overbar{L}$};
\end{scope}
\end{tikzpicture}
\caption{OPE in the $\phi_{r_1} \times {\bar{\phi}}_{r_2} $ channel depicted according to the conformal blocks $\mathcal{G}^t_{\Delta,\ell}$ or $\mathcal{G}^{u}_{\Delta,\ell}$ exchanged. The white dot implies that there is no operator there.}\label{fig:neutral_channel}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\node at (-.5,0) {$\phi_{r_1} \times \phi_{r_2}$};
\node at (1.2,0) {$\ell = 0$};
\draw [thin, gray, dash pattern={on 3pt off 5pt}] (2,0) -- (7.99,0);
\node at (8.1,-.25) {\scriptsize{$\Delta$}};
\draw [{|[scale=1.1]}->, >={Latex[round, scale=1.3]}, thin, dash pattern={on 0pt off 3pt}] (2,0) -- (8,0);
\draw [very thick] (4.8,0) -- (7.88,0);
\node at (4.8,0)[circle,fill=black!40!blue!30!white,draw,inner sep=1.2pt]{};
\node at (2.8,0)[circle,fill,inner sep=1.2pt]{};
\draw [draw=black!30!red!80!white,thick,dash pattern={on 5pt off 1.8pt}, join=round, cap=round, dash phase=2pt] (2.2,-.45) rectangle ++(1.18,1.22);
\node at (2.8,.5) {$B_1\overbar{L}$};
\node at (2.8,-.25) {\footnotesize $r_1+r_2$};
\node at (4.8,.5) {$B_1\overbar{L}^{(2)}$};
\node at (4.8,-.25) {\footnotesize $r_1+r_2+2$};
\draw [->] (4.8,.25)--(4.8,.08);
\draw [->] (2.8,.25)--(2.8,.08);
\node at (6.7,.5) {$L\overbar{L}^{(4)}$};
\begin{scope}[shift={(0,-1.2)}]
\node at (1.2,0) {$\ell = 1$};
\draw [thin, gray, dash pattern={on 3pt off 5pt}] (2,0) -- (7.99,0);
\node at (8.1,-.25) {\scriptsize{$\Delta$}};
\draw [{|[scale=1.1]}->, >={Latex[round, scale=1.3]}, thin, dash pattern={on 0pt off 3pt}] (2,0) -- (8,0);
\draw [very thick] (5.2,0) -- (7.88,0);
\node at (5.2,0)[circle,fill,inner sep=1.2pt]{};
\node at (3.2,0)[circle,fill,inner sep=1.2pt]{};
\node at (3.2,.5) {$B_1\overbar{L}^{(1)}$};
\node at (3,-.25) {\footnotesize $r_1+r_2+1$};
\node at (5.2,.5) {$A_2\overbar{L}^{(3)}$};
\node at (5.2,-.25) {\footnotesize $r_1+r_2+3$};
\draw [->] (5.2,.25)--(5.2,.08);
\draw [->] (3.2,.25)--(3.2,.08);
\node at (6.7,.5) {$L\overbar{L}^{(4)}$};
\end{scope}
\begin{scope}[shift={(0,-2.4)}]
\node at (1.2,0) {$\ell > 1$};
\draw [thin, gray, dash pattern={on 3pt off 5pt}] (2,0) -- (7.99,0);
\node at (8.1,-.25) {\scriptsize{$\Delta$}};
\draw [{|[scale=1.1]}->, >={Latex[round, scale=1.3]}, thin, dash pattern={on 0pt off 3pt}] (2,0) -- (8,0);
\draw [very thick] (5.6,0) -- (7.88,0);
\node at (5.6,0)[circle,fill,inner sep=1.2pt]{};
\node at (3.6,0)[circle,fill,inner sep=1.2pt]{};
\node at (3.6,.5) {$A_t\overbar{L}^{(2)}$};
\node at (3.6,-.25) {\footnotesize $r_1+r_2+\ell$};
\node at (5.6,.5) {$A_1\overbar{L}^{(3)}$};
\node at (5.6,-.25) {\footnotesize $\quad r_1+r_2+\ell+2$};
\draw [->] (5.6,.25)--(5.6,.08);
\draw [->] (3.6,.25)--(3.6,.08);
\node at (6.9,.5) {$L\overbar{L}^{(4)}$};
\end{scope}
\end{tikzpicture}
\caption{OPE in the $\phi_{r_1} \times \phi_{r_2} $ channel depicted according to the conformal blocks $\mathcal{G}^s_{\Delta,\ell}$ exchanged. When present, the superscript denotes (twice) the difference between the R-charge of the superconformal primary and $r_1+r_2$, which also corresponds to the level of the superdescendant that enters the OPE. The light blue dot implies that the operator is there only if a mixed branch is present. We also framed in red the operator whose OPE coefficient we want to extremize.}\label{fig:charged_channel}
\end{figure}
\subsection{Bootstrap equations}\label{sec:bootstrapEqns}
In order to find the relations constraining the OPE coefficients that we are interested in, we need to study four-point correlators.
In this section we will review the crossing equations of the system of correlators involving only $\phi_r$ and ${\bar{\phi}}_r$, which were first obtained in~\cite{Beem:2014zpa}. In appendix~\ref{app:mixed_crossing_eqns} we will also show the crossing equations of the mixed correlator system involving two pairs of chiral fields $\phi_{r_1}$, $\phi_{r_2}$ which first appeared in~\cite{Lemos:2015awa}.\footnote{In the context of superconformal field theories, mixed correlators have been studied in \cite{Lemos:2015awa,Gimenez-Grau:2020jrx,Agmon:2019imm,Bissi:2020jve}} The latter setup is needed to obtain OPE bounds on theories of higher rank or on coefficients of the type $\lambda_{u\,u^2\,u^3}^2$.
Let us start with four-point functions involving only operators $\phi_r$ and $ {\bar{\phi}}_r$. The only nonzero ones have an equal number of chiral and antichiral fields. We will use different OPEs to define the functions $f_{s,t,u}$ as follows
\eqn{
\langle {\bar{\phi}}_r(x_1)\phi_r(x_2)\phi_r(x_3){\bar{\phi}}_r(x_4)\rangle = \frac{f_t(z,{\bar{z}})}{(x_{12}^2)^{r}(x_{34}^2)^{r}}=\,\frac{f_s(1-z,1-{\bar{z}})}{(x_{23}^2)^{r}(x_{14}^2)^{r}}=\frac{f_u\big(\frac{z}{z-1},\frac{{\bar{z}}}{{\bar{z}}-1}\big )}{(x_{12}^2)^{r}(x_{34}^2)^{r}}\,,
}[]
with $x_{ij}^2=|x_i-x_j|^2$ and
\eqn{
\frac{x_{12}^2\hspace{1pt} x_{34}^2}{x_{13}^2\hspace{1pt} x_{24}^2} = u = z{\bar{z}}\,,\qquad
\frac{x_{14}^2\hspace{1pt} x_{23}^2}{x_{13}^2\hspace{1pt} x_{24}^2} = v = (1-z)(1-{\bar{z}})\,.
}[]
The functions $f_{s,t,u}(z,{\bar{z}})$ must satisfy these crossing constraints
\twoseqn{
((1-z)(1-{\bar{z}}))^r\, f_t(z,{\bar{z}}) = (z{\bar{z}})^r \, f_s(1-z,1-{\bar{z}})\,,
}[eqFirst]{
((1-z)(1-{\bar{z}}))^r\, f_u(z,{\bar{z}}) = (z{\bar{z}})^r \, f_u(1-z,1-{\bar{z}})\,.
}[][]
In order to write these constraints in a form amenable to numerical computations we must expand the functions $f_{s,t,u}$ in conformal blocks. Based on the analysis of section~\ref{sec:multiplets} we have\footnote{We remind the reader that $\lambda_{\mathcal{O}_1\mathcal{O}_2\mathcal{O}_3}$ stands for the coefficient of $\mathcal{O}_3$ in the OPE of $\mathcal{O}_1\times\mathcal{O}_2$, which means that it is computed by the three-point function $\langle\mathcal{O}_1\mathcal{O}_2{\overbar{\mathcal{O}}}_3\rangle$.}
\threeseqn{
f_t(z,{\bar{z}}) &= \mathcal{G}^t_{0,0}(z,{\bar{z}})+ \frac{r^2}{6\hspace{0.5pt} c}\mathcal{G}^t_{2,0}(z,{\bar{z}}) + \sum_{\ell = 0}^\infty\sum_{\Delta > \ell + 2} |\lambda_{\phi_r{\bar{\phi}}_r\mathcal{O}_{\Delta,\ell}}|^2(-1)^\ell\,\mathcal{G}^t_{\Delta,\ell}(z,{\bar{z}})\,,
}[]{
f_s(z,{\bar{z}}) &= |\lambda_{\phi_r\phi_r\phi_{2r}}|^2 \mathcal{G}^s_{2r,0}(z,{\bar{z}}) + \sum_{\substack{\ell=0\\\mathrm{even}}}^\infty \sum_{\substack{\Delta = \ell + 2r\\\Delta \geq \ell + 2r+2}} |\lambda_{\phi_r\phi_r\mathcal{O}_{\Delta,\ell,2r}}|^2 \mathcal{G}^s_{\Delta,\ell}(z,{\bar{z}})\,,
}[chargedExp]{
f_u(z,{\bar{z}}) &= \mathcal{G}^u_{0,0}(z,{\bar{z}}) + \frac{r^2}{6\hspace{0.5pt} c}\mathcal{G}^u_{2,0}(z,{\bar{z}}) + \sum_{\ell = 0}^\infty\sum_{\Delta > \ell + 2} |\lambda_{\phi_r{\bar{\phi}}_r\mathcal{O}_{\Delta,\ell}}|^2\,\mathcal{G}^u_{\Delta,\ell}(z,{\bar{z}})\,,
}[]
where $c$ is the central charge. It is not necessary to input $c$ explicitly: it can be left arbitrary just like the other OPE coefficients. However, we will fix it to a specific value in all our computations. In the first line we used $\lambda_{{\bar{\phi}}\phi\mathcal{O}} = (-1)^\ell\lambda_{\phi{\bar{\phi}}\mathcal{O}}$ and in the second we used $\lambda_{{\bar{\phi}}\phib{\overbar{\mathcal{O}}}} = \lambda_{\phi\phi\mathcal{O}}^*$. We denote the operators as $\mathcal{O}_{\Delta,\ell}$ if they are R-neutral, otherwise as $\mathcal{O}_{\Delta,\ell,r}$, with $r$ denoting the R-charge (in units where $\phi_r$ has R-charge $r$). Notice that from this point on, the subscripts labeling $\lambda$ refer to the $r$-charge of the corresponding operators. These conventions are more convenient here, and are consistent with the previous literature.
The blocks $\mathcal{G}^s,\mathcal{G}^t$ and $\mathcal{G}^u$ are explicit functions given by
\eqn{
\mathcal{G}^I_{\Delta,\ell}(z,{\bar{z}}) = \frac{z{\bar{z}}}{z-{\bar{z}}}\bigl(k^I_{\Delta+\ell}(z)k^I_{\Delta-\ell-2}({\bar{z}})- z\leftrightarrow{\bar{z}}\bigr)\,,\qquad I=s,t,u\,,
}[blockDef]
with
\threeseqn{
k^s_\beta(z) &=
z^{\frac{\beta}2}\,{}_2F_1\hspace{-0.8pt}\mleft(\frac{\beta}2,\frac{\beta}2;\beta;z\mright)\,,}[ksDef]{
k^t_\beta(z) &=
z^{\frac{\beta}2}\,{}_2F_1\hspace{-0.8pt}\mleft(\frac{\beta}2,\frac{\beta}2;\beta+2;z\mright)\,,}[]{
k^u_\beta(z) &=
z^{\frac{\beta}2}\,{}_2F_1\hspace{-0.8pt}\mleft(\frac{\beta}2,\frac{\beta+4}2;\beta+2;z\mright)\,.
}[][kDef]
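These functions are straightforward to evaluate numerically; as an illustration, the following minimal sketch (not the code used for the numerical scans below) implements \eqref{blockDef} with the $k^I_\beta$ defined above, evaluated away from the diagonal $z={\bar{z}}$, where the prefactor in \eqref{blockDef} produces a removable $0/0$.
\begin{verbatim}
# Minimal sketch of the block functions in (blockDef) with the k^I of (ksDef)-(kDef).
from scipy.special import hyp2f1

def k(beta, z, channel):
    # SL(2) blocks k^s, k^t, k^u
    if channel == "s":
        return z**(beta/2) * hyp2f1(beta/2, beta/2, beta, z)
    if channel == "t":
        return z**(beta/2) * hyp2f1(beta/2, beta/2, beta + 2, z)
    if channel == "u":
        return z**(beta/2) * hyp2f1(beta/2, (beta + 4)/2, beta + 2, z)
    raise ValueError(channel)

def G_block(Delta, ell, z, zb, channel):
    # superconformal block G^I_{Delta,ell}(z, zbar) of (blockDef)
    return z*zb/(z - zb) * (k(Delta + ell, z, channel) * k(Delta - ell - 2, zb, channel)
                            - k(Delta + ell, zb, channel) * k(Delta - ell - 2, z, channel))

# example: a scalar long-multiplet block in the t channel near z = zbar = 1/2
print(G_block(3.5, 0, 0.51, 0.49, "t"))
\end{verbatim}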
With all these ingredients in place, we can finally write down the crossing equation as follows
\eqna{
&\sum_{\Delta,\ell} |\lambda_{\phi_r\phi_r\mathcal{O}_{\Delta,\ell,2r}}|^2 \,\vec{V}^{\mathrm{charged}}_{\Delta,\ell} +
\sum_{\Delta,\ell}(\lambda_{\phi_r{\bar{\phi}}_r\mathcal{O}_{\Delta,\ell}})^2 \,\vec{V}^{\mathrm{neutral}}_{\Delta,\ell}=-\vec{V}^{\mathrm{neutral}}_{0,0} -
\frac{r^2}{6\hspace{0.5pt} c}\;\vec{V}^{\mathrm{neutral}}_{2,0}\,,
}[ceqSingle]
with
\eqn{
\vec{V}^{\mathrm{charged}}_{\Delta,\ell} \equiv \begin{bmatrix}
-(-1)^\ell \mathcal{F}^s_{+,\Delta,\ell} \\
(-1)^\ell \mathcal{F}^s_{-,\Delta,\ell} \\
0
\end{bmatrix}\,,\quad
\vec{V}^{\mathrm{neutral}}_{\Delta,\ell} \equiv \begin{bmatrix}
(-1)^\ell \mathcal{F}^{\hspace{0.5pt} t}_{+,\Delta,\ell} \\
(-1)^\ell \mathcal{F}^{\hspace{0.5pt} t}_{-,\Delta,\ell} \\
\mathcal{F}^u_{-,\Delta,\ell}
\end{bmatrix}\,,
}[vvvvv]
where we introduced the (anti)symmetric combinations (which implicitly depend on $r$)
\eqn{
\mathcal{F}^I_{\pm,\Delta,\ell}(z,{\bar{z}}) = v^r \mathcal{G}^I_{\Delta,\ell}(z,{\bar{z}}) \pm u^r \mathcal{G}^I_{\Delta,\ell}(1-z,1-{\bar{z}})\,,\qquad I=s,t,u\,.
}[Ffunctions]
The crossing equation~\ceqSingle can be thought of as a vector equation in an infinite dimensional space (the space of 3-vector valued functions of $z$ and ${\bar{z}}$). In the next section we will review the standard numerical methods adopted to study it.
\subsection{Numerical implementation}\label{sec:numerical}
In this section, we will give a quick review of the numerical implementation of the crossing equation~\ceqSingle. The general strategy is to derive a contradiction with the bootstrap equations from the existence of a functional $\alpha$ satisfying certain properties. Schematically, $\alpha$ should satisfy inequalities of the form
\eqn{
\exists\;\alpha\quad\mathrm{s.t.}\quad \forall\;(\Delta,\ell) \in \mathrm{spectrum}\quad\alpha[\vec{V}_{\Delta,\ell}] \geq 0\quad \mathrm{and}\quad \alpha[\vec{V}_{0,0}^\mathrm{neutral}] = 1\,.
}[assumptions]
This would make the left hand side of~\ceqSingle non-negative and the right hand side strictly negative, hence a contradiction, meaning that the assumptions made on the spectrum were inconsistent. There are essentially two obstacles that one needs to overcome in order to translate this problem into something that can be handled by a computer:
\begin{enumerate}
\item Searching in a space of functionals which is infinite dimensional (i.e. the dual of a space of functions)
\item Imposing an inequality over an infinite, and in any case not fully specified, set of $\Delta$'s and $\ell$'s
\end{enumerate}
To overcome 1.\ we can restrict ourselves to a finite dimensional subspace of functionals. Empirically it turns out that the following set works particularly well\footnote{Here we show the action of $\alpha$ on a single entry of the vector $\vec{V}$. The actual $\alpha$ we are looking for is a vector of functionals, each acting on a different entry of $\vec{V}$.}
\eqn{
\alpha[F] = \sum_{n\leq m}^{n+m\leq\Lambda} \alpha_{n,m}\frac{\partial^n}{\partial z^n}\frac{\partial^m}{\partial {\bar{z}}^m}F(z,{\bar{z}})\big|_{z={\bar{z}}=\frac12}\,.
}[alphaSpace]
This way, if $\Lambda$ is odd, the space of functionals has dimension
\eqn{
\mathrm{dim}(\Lambda) = \frac14(\Lambda+1)(\Lambda+3)\,.
}[]
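For instance, the values used later in this work correspond to $\mathrm{dim}(43)=\tfrac14\cdot 44\cdot 46=506$ and $\mathrm{dim}(19)=\tfrac14\cdot 20\cdot 22=110$ derivative components for each entry of the crossing vector.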
The choice of $z={\bar{z}}=1/2$ is motivated by the fact that it is the point where both the $s$-channel OPE ($z,{\bar{z}}\sim0$) and the $t$-channel OPE ($z,{\bar{z}}\sim1$) converge optimally. In principle one could evaluate the derivatives at slightly different points as well, but this has not shown significant improvements in the past.
To overcome 2.\ we need to work more. In principle one could discretize $\Delta$, put a cutoff on both $\Delta$ and $\ell$, and impose positivity on all the points in this two-dimensional lattice. Despite being possible, this is computationally not very convenient. A better approach takes advantage of the analytic structure of the conformal blocks. It is a known fact that conformal blocks and their derivatives in $z,{\bar{z}}$ can be systematically approximated as a positive function times a polynomial in $\Delta$ \cite{Poland:2011ey}
\eqn{
\partial_z^n\partial_{\bar{z}}^m\,{\cal G}_{\Delta,\ell}\big(\tfrac12,\tfrac12\big) = \chi(\Delta)\,p^{m,n}_\ell(\Delta)\,,\qquad
\chi(\Delta) > 0 \quad \forall\;\Delta \geq \ell+2\,.
}[eq:blockFactoriz]
The actual expression for $\chi(\Delta)$ will be shown later~\eqref{eq:derivApprox}. Because of this, we can ignore $\chi(\Delta)$ for the purpose of imposing $\alpha[g_{\Delta,\ell}] \geq 0$. Therefore, we only have to study positivity of the polynomial $p^{m,n}_\ell$ for all $\Delta \geq \ell+2$. This is a known problem in mathematics that can be mapped to imposing positive-semidefiniteness on a pair of matrices.\footnote{Very roughly, any polynomial $P(x)$ which is positive for $x>x_*$ can be written as $$P(x)=\vec{u}^T(x)\cdot M \cdot \vec{u}(x) + (x-x_*)\, \vec{u}^T(x)\cdot N \cdot \vec{u}(x)\,,$$ where $u(x)$ is a vector of independent polynomials, whose length is the degree of $P$ divided by two, and $M,N$ are positive definite matrices.}
We refer the reader to Appendix \ref{appendixrecursion} for details.
To summarize, we consider a finite subspace of functionals $\alpha$ given by~\alphaSpace and we take rational approximations of the blocks. Next we put a cutoff $\ell_{\mathrm{max}}$ on the number of spins and impose~\assumptions on all spins up to it. The rational approximation of the blocks is what allows us to impose positivity for all $\Delta$'s in one go.
\subsection{Bounds on the OPE coefficient}\label{sec:OPE_bounds}
In this specific instance of the bootstrap we want to put upper and lower bounds on the OPE coefficient $\lambda_{\phi_r\phi_r\phi_{2r}}$. It is possible to get lower bounds only if the operator is isolated. Indeed, for the theories that we are considering this is the case, as one can easily see from section~\ref{sec:multiplets}. The bootstrap problem that we want to study is
\begin{quote}\itshape
Fix $c$, then maximize (minimize) $B_{\pm} \equiv \alpha\mleft[\vec{V}^\mathrm{neutral}_{0,0} + \frac{r^2}{6c} \, \vec{V}_{2,0}^{\mathrm{neutral}}\mright]$ subject to
\begin{enumerate}
\item $\alpha[\vec{V}_{\Delta,\ell}^{\mathrm{charged}}] \geq 0\quad \forall\;\ell \geq 0\;\mathrm{even}\,,\quad \forall\;\Delta = \ell+2r\;\;\mathrm{or}\;\;\Delta \geq \ell + 2r+2$
\item $\alpha[\vec{V}_{\Delta,\ell}^{\mathrm{neutral}}] \geq 0\quad \forall\;\ell > 0\,,\quad \forall\;\Delta \geq \ell + 2$
\item $\alpha[\vec{V}_{\Delta,0}^{\mathrm{neutral}}] \geq 0\quad \forall\;\Delta \geq 2 + \epsilon$
\item $\alpha[\vec{V}_{2r,0}^{\mathrm{charged}}] = \pm 1$
\end{enumerate}
\end{quote}
The parameter $\epsilon$ can be taken to be $\sim 0.1$ and it is needed to make $V_{2,0}^{\mathrm{neutral}}$ isolated so that it is meaningful to impose a value of the central charge.
The final result will be the two-sided bound
\eqn{
B_- \leq |\lambda_{\phi_r\phi_r\phi_{2r}}|^2 \leq B_+\,.
}[]
There will be such a bound for every value of $r$ and $c$, so we can compute it for any rank-one $\mathcal{N}=2$ SCFT or any other higher-rank SCFT if we focus only on a single Coulomb branch operator. Bounds involving mixed correlators are also possible but we do not show the list of bootstrap assumptions for brevity.
\subsection{Bound on the dimension of the lightest neutral unprotected operator}\label{sec:gap_bounds}
If we know the OPE coefficient of the chiral scalar of dimension $2r$ we can put it on the right-hand side of the bootstrap equation. The assumptions that we need are as follows
\begin{quote}\itshape
Fix the value of $\lambda_{\phi_r\phi_r\phi_{2r}}$ and of $c$. Then find $\alpha$ such that
\begin{enumerate}
\item $\alpha\mleft[\vec{V}^\mathrm{neutral}_{0,0} + |\lambda_{\phi_r\phi_r\phi_{2r}}|^2\hspace{1pt} \vec{V}^{\mathrm{charged}}_{2r,0} + \frac{r^2}{6c} \, \vec{V}_{2,0}^{\mathrm{neutral}}\mright]=1$
\item $\alpha[\vec{V}_{\Delta,\ell}^{\mathrm{neutral}}] \geq 0\quad \forall\;\ell > 0\,,\quad \forall\;\Delta \geq \ell + 2$
\item $\alpha[\vec{V}_{\Delta,\ell}^{\mathrm{charged}}] \geq 0\quad \forall\;\ell \geq 2\;\mathrm{even}\,,\quad \forall\;\Delta = \ell+2r\;\;\mathrm{or}\;\;\Delta \geq \ell + 2r+2$
\item $\alpha[\vec{V}_{2r+2,0}^{\mathrm{charged}}] \geq 0$
\item $\alpha[\vec{V}_{\Delta,0}^{\mathrm{neutral}}] \geq 0\quad \forall\;\Delta \geq \Delta_\mathrm{gap} > 2$
\end{enumerate}
\end{quote}
The gap $\Delta_{\mathrm{gap}}$ can be taken to be inside an interval $[\Delta_\mathrm{min},\Delta_\mathrm{max}]$ and we can run a binary search by checking $\Delta_\mathrm{gap} = \frac12(\Delta_\mathrm{min}+\Delta_\mathrm{max})$ and updating the upper or lower limit according to whether an $\alpha$ satisfying the above assumptions is found or not.
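For concreteness, the bisection can be organized as in the schematic snippet below, where \texttt{functional\_exists} stands for a call to the semidefinite-program solver (e.g.\ \texttt{sdpb}) that reports whether a functional $\alpha$ satisfying the assumptions above exists for a given trial gap; the function name and interface are purely illustrative.
\begin{verbatim}
def bisect_gap(functional_exists, delta_min, delta_max, tol=1e-3):
    # Bisect until the excluded/allowed boundary is localized to within tol.
    while delta_max - delta_min > tol:
        delta_gap = 0.5 * (delta_min + delta_max)
        if functional_exists(delta_gap):
            # A functional exists: a gap this large is inconsistent with
            # crossing, so the upper limit moves down.
            delta_max = delta_gap
        else:
            # No functional found: this gap cannot be excluded (yet).
            delta_min = delta_gap
    return delta_max  # upper bound on the lightest neutral unprotected scalar
\end{verbatim}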
\subsection{Results for the OPE coefficient}\label{sec:res_bounds}
In Tables~\ref{tab:summary1} and \ref{tab:summary2} we report our results for the OPE bounds obtained via the numerical bootstrap. The first and last line use the single correlator setup described in Section~\ref{sec:bootstrapEqns}. The second line uses the mixed correlator setup instead, whose crossing vectors are shown in appendix~\ref{app:mixed_crossing_eqns}.
In order to obtain these bounds we set up a bootstrap problem as explained in section~\ref{sec:OPE_bounds}. We then use a newly developed framework, \texttt{sailboot},\footnote{Available at \href{https://gitlab.com/maneandrea/sailboot}{\texttt{gitlab.com/maneandrea/sailboot}}.} to translate crossing equations and assumptions into numerical vectors that can be fed to \texttt{sdpb}.\footnote{Available at \href{https://github.com/davidsd/sdpb}{\texttt{github.com/davidsd/sdpb}}~\cite{Landry:2019qug}.}
The upper and lower bounds for the simplest OPE coefficient in the $\mathcal{H}_0$ theory, namely $\lambda_{\phi_r\,\phi_r\phi^2_{r}}$ at $r=\frac65$, were previously obtained in~\cite{Cornagliotto:2017snu} while bounds for the same OPE coefficient but in the $\mathcal{H}_1$ and $\mathcal{H}_2$ theories first appeared in~\cite{Gimenez-Grau:2020jrx}. We also did the same computation for Minahan-Nemeschansky theories but, unfortunately, the bounds were either extremely weak or non-existent. This is generally expected when one considers correlators of operators with large external dimensions. A possible cause of this is that when the external dimensions are large there is a big gap between the unitarity bound and the generalized free theory spectrum, leading to many ``fake'' approximate solutions to crossing that spoil the numerics.\footnote{We thank Alessandro Vichi for discussions about this point.}
\begin{table}[h]
\centering
\setstretch{1.05}
\begin{tabular}{|r|ccc|}
\hline
theory & $\mathcal{H}_0$ & $\mathcal{H}_1$ & $\mathcal{H}_2$
\\
$(r,c)$ & $(\frac65,\frac{11}{30})$ & $(\frac43,\frac12)$ & $(\frac32,\frac23)$
\\[2pt]
\hline
\multirow{2}{*}{
$\lambda^2_{u\,u\,u^2}$} & 2.167 & 2.359 & 2.698
\\
& 2.142 & 2.215 & 2.298
\\ \hline
\multirow{2}{*}{
$\lambda^2_{u\,u^2\,u^3}$} & 3.637 & 4.445 &\\
& 3.192 & 3.217 &\\ \hline
\end{tabular}
\caption{Bootstrap bounds on OPE coefficients of some rank-one theories obtained with the single correlator setup at $\Lambda=43$ ($\lambda^2_{u\,u\,u^2}$) and the mixed correlator setup at $\Lambda=19$ ($\lambda^2_{u\,u^2\,u^3}$). \label{tab:summary1}}
\setstretch{1}
\end{table}
Next we considered some theories of rank two. In Table~\ref{tab:summary2} we show the results for some theories of interest. The OPE coefficients that we considered are either $\lambda_{u\,v\,uv}^2$, where $u\equiv \phi_{r_1}$ and $v\equiv \phi_{r_2}$ are the two Coulomb branch operators, or $\lambda_{\phi_i\,\phi_i\,\phi_i^2}^2$ with $\phi_i$ either $u$ or $v$. For the former one needs to use the mixed correlator setup, described in appendix~\ref{app:mixed_crossing_eqns}, whereas for the latter the single correlator setup is sufficient. It should be noted that when studying $\lambda_{\phi_i\,\phi_i\,\phi_i^2}^2$, the information that the theory is of rank two is not fed into the bootstrap equations, except indirectly through its central charge. Some more results for the OPE coefficient bounds of heavier operators in the pure $SU(5)$ theory can be found in Table~\ref{tab:rank2more}.
\begin{table}[h]
\centering
\setstretch{1.05}
\begin{tabular}{|r|cc|}
\hline
theory & $(A_1,A_4)$ & $(A_1, A_5)$ \\
$(r_u,r_v,c)$ & $(\frac87, \frac{10}7, \frac{17}{21})$ & $(\frac54, \frac32, \frac{23}{24})$ \\[2pt]
\hline
\multirow{2}{*}{
$\lambda^2_{u\,u\,u^2}$} & 2.102 & 2.231 \\
& 2.024 & 2.055 \\ \hline
\multirow{2}{*}{
$\lambda^2_{u\,v\,uv}$} & 1.125 & 1.233 \\
& 0.981 & 0.960 \\ \hline
\multirow{2}{*}{
$\lambda^2_{v\,v\,v^2}$} & 2.533 & 2.709 \\
& 2.181 & 2.195 \\\hline
\end{tabular}
\caption{Bootstrap bounds on OPE coefficients of some rank-two theories obtained with the single correlator setup at $\Lambda=41$ ($\lambda^2_{u\,u\,u^2}$ and $\lambda^2_{v\,v\,v^2}$) and the mixed correlator setup at $\Lambda=19$ ($\lambda^2_{u\,v\,uv}$). \label{tab:summary2}}
\setstretch{1}
\end{table}
\begin{table}[h]
\centering
\setstretch{1.05}
\begin{tabular}{|r|ccc|}
\hline&&&\\[-12pt]
coefficient & $\lambda^2_{u\,u^2\,u^3}$ & $\lambda^2_{u^2\,u^2\,u^4}$ & $\lambda^2_{v^2\,v^2\,v^4}$ \\[3pt]
\hline
\multirow{2}{*}{
bounds} & 3.512 & 12.42 & 142.6 \\
& 0.890 & 2.871 & 0.855 \\
\hline
\end{tabular}
\caption{Bootstrap bounds on other OPE coefficients of the rank-two theory $(A_1,A_4)$ with $c=17/21$, $r_1=8/7$ and $r_2=10/7$ obtained with the single correlator setup at $\Lambda=41$ ($\lambda^2_{u^2\,u^2\,u^4}$ and $\lambda^2_{v^2\,v^2\,v^4}$) and the mixed correlator setup at $\Lambda=19$ ($\lambda^2_{u\,u^2\,u^3}$).\label{tab:rank2more}}
\setstretch{1}
\end{table}
The pure~$SU(5)$ theory belongs to a family of theories of increasing rank, all obtained by going to a special locus of a pure gauge theory with gauge group $SU(N)$~\cite{Martone:2021ixp,Eguchi:1996vu,Eguchi:1996ds}. The central charges and Coulomb branch dimensions are summarized in equations~\eqref{puregaugec} and~\eqref{puregaugedims}, respectively.
In Figure~\ref{fig:pureSUN} we plot upper and lower bounds for the OPE coefficients $\lambda_{u\,u\,u^2}^2$ of the theories above for $N$ ranging from $5$~to~$12$, where $r$ is the dimension of the heaviest Coulomb branch generator, namely $r=2N/(N+2)$.
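For example, $N=5$ and $N=6$ give $r=\frac{10}{7}$ and $r=\frac32$ respectively, matching the heaviest Coulomb branch generators of the rank-two theories listed in Table~\ref{tab:summary2}.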
\begin{figure}
\centering
\includegraphicsWlabel{pureSUN.pdf}{$\lambda_{r\,r\,2r}^2$}{$N$}
\caption{Upper and lower bounds for the OPE coefficients $\lambda_{r\,r\,2r}^2$ in the pure $SU(N)$ theories where $r$ is the conformal dimension of the heaviest Coulomb branch generator. The shaded area is disallowed.}\label{fig:pureSUN}
\end{figure}
For rank two, we can carve out the three-dimensional region of allowed OPE coefficients by scanning over the values of $\lambda^2_{u\,u\,u^2}$, $\lambda^2_{v\,v\,v^2}$ and obtaining for each point an upper and a lower bound on $\lambda^2_{u\,v\,uv}$. This can be done by putting the vectors $\vec{V}^{2r_i}_{2r_i,0}$ of~\eqref{mixedVecDef} on the right-hand side of the crossing equations, just as in the first point of Subsection~\ref{sec:gap_bounds}. The results for the $(A_1,A_n)$ theories for $n=4,5$ can be found in Figure~\ref{fig:shell}.
\begin{figure}
\centering
\subfloat[{$(A_1,A_4)$}]{\includegraphics[scale=1]{shell_A4.pdf}}\\
\subfloat[{$(A_1,A_5)$}]{\includegraphics[scale=1]{shell_A5.pdf}}
\caption{Upper and lower bounds on $\lambda_{u\,v\,uv}$ as a function of the values of $\lambda^2_{u\,u\,u^2}$ and $\lambda^2_{v\,v\,v^2}$. The base of the plot is bounded by the values found in the single correlator bootstrap in Table~\ref{tab:summary2}.}
\label{fig:shell}
\end{figure}
\subsection{Results for the lightest scalar neutral unprotected operator}\label{sec:res_gaps}
In this subsection we present upper bounds on the dimension of the lightest unprotected neutral operator appearing in the OPE $\phi_r\times{\bar{\phi}}_r$. The definition of the bootstrap problem to consider was given in subsection~\ref{sec:gap_bounds}. The following plots present a scan over several values of the OPE coefficient $\lambda_{u\,u\,u^2}^2$, ranging between the bounds obtained in the previous subsection. The result is an upper bound on the lightest conformal dimension, which we denote as $\Delta_{\mathrm{gap}}$. It is interesting to notice that the upper end of the OPE window requires a very low $\Delta_{\mathrm{gap}}$, suggesting that the true value of $\lambda_{u\,u\,u^2}^2$ is probably located towards the lower end. This conclusion stems from the expectation that a strongly interacting theory should not have operator dimensions too close to the free-theory value. The plots are presented in Figure~\ref{fig:scans}.
\begin{figure}
\centering
\subfloat[Theory $\mathcal{H}_0$, $r=\tfrac65$]{\includegraphicsWlabel[scale=1.03]{scanH0.pdf}{$\Delta_{\mathrm{gap}}$}{$\lambda^2_{u\,u\,u^2}$}}\,
\subfloat[Theory $\mathcal{H}_1$, $r=\tfrac43$]{\includegraphicsWlabel[scale=1.03]{scanH1.pdf}{$\Delta_{\mathrm{gap}}$}{$\lambda^2_{u\,u\,u^2}$}} \\
\subfloat[Theory $\mathcal{H}_2$, $r=\tfrac32$]{\includegraphicsWlabel[scale=1.03]{scanH2.pdf}{$\Delta_{\mathrm{gap}}$}{$\lambda^2_{u\,u\,u^2}$}} \,
\subfloat[Theory $D_4$, $r=2$]{\includegraphicsWlabel[scale=1.03]{scanD4.pdf}{$\Delta_{\mathrm{gap}}$}{$\lambda^2_{u\,u\,u^2}$}}
\caption{Upper bound on the dimension of the lightest scalar neutral unprotected operator $\Delta_{\mathrm{gap}}$ as a function of the OPE coefficient $\lambda_{u\,u\,u^2}^2$ for various rank-one theories. The upper area (shaded in red) is disallowed. The blue vertical line marks the value found by localization (see Table~\protect\ref{tSW}).}\label{fig:scans}
\end{figure}
\section{Large charge asymptotics}\label{Sec:LargeCharge}
The two-point functions $G_{nn}$ behave in a universal way as the charge (and, consequently, the dimension) of the operators is sent to infinity. More precisely, if we denote $nr\equiv\mathcal{J}$, we have the following asymptotic behavior~\cite{Hellerman:2017sur,Hellerman:2020sqj,Hellerman:2021duh}\footnote{After completion of the present paper, we learned that in \cite{Hellerman:2018xpi} this asymptotic formula has been improved, resumming the large-charge expansion. See also \cite{Beccaria:2020azj} for the higher-rank generalization.}
\eqn{
G_{nn}\;\underset{n\to\infty}{\sim}\;\tilde{\mathcal{Y}}\,\Gamma(\mathcal{J}+1) \left(\frac{N_\mathcal{O}}{2\pi R}\right)^{2\mathcal{J}} \mathcal{J}^\alpha\,,
}[asympLargeN]
where $\tilde{\mathcal{Y}}$ and $N_\mathcal{O}$ are constants and $\alpha$ is a constant related to the $a$-anomaly of the SCFT considered.\footnote{Not to be confused with the $\alpha$ of Table~\ref{tablead}.} The values of $\alpha$ for the theories of interest are summarized in Table~\ref{tab:fits}.
In this section we want to compare our results with this asymptotic expansion. This will provide a nontrivial check of our method and will also allow us to estimate the values of $\tilde{\mathcal{Y}}$ and $N_\mathcal{O}$. The formula~\eqref{OPEcoeff2}, with $C_{ij}$ given by \eqref{cij2}, can in principle be computed exactly for all $n$, but since we want to study the behavior at very large $n$ it is easier to simply evaluate it numerically, because this significantly speeds up the computation of the determinants. We found that the agreement with the formula~\asympLargeN is excellent already at $n\sim100$. In Figure~\ref{fig:largeCharge} we plot the values of $G_{nn}$ (in blue) together with the asymptotic form (in red). For brevity, we only show $\mathcal{H}_0$ and MN with flavor group $E_6$. The other AD and MN theories look similar. The small inset shows the percent error in green. Then in Table~\ref{tab:fits} we show our best estimates for $\tilde{\mathcal{Y}}$ and $N_\mathcal{O}$.
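As an illustration of how such estimates can be extracted, taking the logarithm of~\asympLargeN gives $\log G_{nn}-\log\Gamma(\mathcal{J}+1)-\alpha\log\mathcal{J}\simeq\log\tilde{\mathcal{Y}}+2\mathcal{J}\log\bigl(N_\mathcal{O}/(2\pi R)\bigr)$, which is linear in $\mathcal{J}$. A minimal fitting sketch, assuming precomputed arrays \texttt{J} and \texttt{Gnn} and the appropriate value of \texttt{alpha}, with $R=1$, is
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

# J, Gnn and alpha are assumed to be available from the localization computation
y = np.log(Gnn) - gammaln(J + 1) - alpha * np.log(J)
slope, intercept = np.polyfit(J, y, 1)   # y ~ slope * J + intercept
N_O = 2 * np.pi * np.exp(slope / 2)      # with R = 1
Y_tilde = np.exp(intercept)
\end{verbatim}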
\begin{figure}
\centering
\subfloat[{$r=\frac65$}]{\includegraphics[scale=1.4]{plot-6_5.pdf}}
\subfloat[{$r=3$}]{\includegraphics[scale=1.4]{plot-3.pdf}}
\caption{Comparison of $G_{nn}$ with the asymptotic form \asympLargeN where $c_{F}$ and $R$ have been set to $1$.}\label{fig:largeCharge}
\end{figure}
\begin{table}
\centering
\setstretch{1.4}
\begin{tabular}{|c|ccccccc|}
\hline
$r$ & $\frac65$ & $\frac43$ & $\frac32$ & $2$ & $3$ & $4$ & $6$ \\
\hline
$\alpha$ & $\frac3{10}$ & $\frac12$ & $\frac34$ & $\frac32$ & $3$ & $\frac92$ & $\frac{15}2$ \\
$\sqrt{c_F}N_\mathcal{O}$ & $2.197$ & $2.411$ & $2.641$ & $3.1416$ & $3.676$ & $3.937$ & $4.176$ \\
$\tilde\mathcal{Y}$ & $1.164$ & $1.111$ & $1.010$ & $0.6347$ & $0.137$ & $1.619\times10^{-2}$ & $6.530\times10^{-5}$ \\
\hline
\end{tabular}
\setstretch{1}
\caption{In the second row we show the values of the parameter $\alpha$ of \asympLargeN. In the last two rows we show the best fit estimates for $\tilde{\mathcal{Y}}$ and $N_\mathcal{O}$ coming from the plots of Figure~\ref{fig:largeCharge}.}\label{tab:fits}
\end{table}
\section{Conclusions and Outlook}\label{Sec:Conclusions}
In this paper we have computed the OPE coefficients of some four-point correlators for $\mathcal{N}=2$ SCFTs of the AD type. We have performed the computations using both localization and the bootstrap method, as summarized in Table~\ref{tab:summary}. We find surprisingly good agreement between the two methods, except for a few values. We remark that our results for the OPE coefficient $\lambda_{u\,u\,u^2}$ of the rank-one AD theories were also found in \cite{Grassi:2019txd}, extrapolating from the large-charge expansion, and in \cite{Cornagliotto:2017snu, Gimenez-Grau:2020jrx} using the bootstrap method.
\begin{table}[h]
\centering
\setstretch{1.05}
\noindent\parbox[t]{.46\textwidth}{%
\begin{tabular}[t]{|r|l|ccc|}
\hline
OPE & method & $\mathcal{H}_0$ & $\mathcal{H}_1$ & $\mathcal{H}_2$
\\
\hline
\multirow{3}{*}{
$\lambda^2_{u\,u\,u^2}$} &
\multirow{2}{*}{Boot.$\;\Big\lbrace$} & 2.167 & 2.359 & 2.698
\\
& & 2.142 & 2.215 & 2.298
\\ & Localiz. & 2.098 & 2.241 & 2.421
\\ \hline
\multirow{3}{*}{
$\lambda^2_{u\,u^2\,u^3}$} &
\multirow{2}{*}{Boot.$\;\Big\lbrace$} & 3.637 & 4.445 &\\
& & 3.192 & 3.217 &
\\ & Localiz. & 3.300 & 3.674 & 4.175 \\ \hline
\end{tabular}
}\hfill
\parbox[t]{.46\textwidth}{%
\begin{tabular}[t]{|r|l|cc|}
\hline
OPE & method & $(A_1,A_4)$ & $(A_1, A_5)$ \\
\hline
\multirow{3}{*}{
$\lambda^2_{u\,u\,u^2}$} &
\multirow{2}{*}{Boot.$\;\Big\lbrace$} & 2.102 & 2.231 \\
& & 2.024 & 2.055
\\ & Localiz. & 1.878 & 1.929 \\\hline
\multirow{3}{*}{
$\lambda^2_{u\,v\,uv}$} &
\multirow{2}{*}{Boot.$\;\Big\lbrace$} & 1.125 & 1.233 \\
& & 0.981 & 0.960
\\ & Localiz. & 1.043 & 1.039 \\\hline
\multirow{3}{*}{
$\lambda^2_{v\,v\,v^2}$} &
\multirow{2}{*}{Boot.$\;\Big\lbrace$} & 2.533 & 2.709 \\
& & 2.181 & 2.195
\\ & Localiz. & 2.231 & 2.203 \\ \hline
\end{tabular}
}
\caption{Summary of the results from localization and numerical bootstrap, reproduced here for convenience and for easing the comparison between the two methods.\label{tab:summary}}
\end{table}
Probably the most mysterious point, at the conceptual level, is why the localization formula, though truncated at $\mathcal{F}_1$, gives such accurate values for the OPE coefficients. Indeed, lacking a parametrically small quantity that regulates the genus expansion, higher-genus corrections are in principle not expected to give negligible contributions. While it would be very interesting to have a deeper understanding of this point, it would also be important to have a handle on the functions $\mathcal{F}_g$ with $g\geq2$ for the AD theories, to try and estimate their effect on the OPE coefficients. We hope to report on this matter in the near future.
On the more technical level, the localization formula for two-point correlators we used in this paper relies on knowledge of the SW prepotential $\mathcal{F}_0$, and hence of the SW periods. For $SU(N)$ gauge theories with fundamental matter of rank higher than one, explicit expressions for the SW periods of AD theories are, to our knowledge, not known. Here we tackle this problem by considering directly the SW curve in the vicinity of the conformal points and computing explicitly the SW periods for the rank-two AD theories appearing in the moduli space of the pure $SU(5)$ and $SU(6)$ theories.
It would be nice to extend our computation to more theories of the AD type. Also, for higher-rank theories, we have been elusive regarding the integration contours of the matrix-model integral, and only paid attention to picking those which make the integral over the CB parameters convergent. While this seems a reasonable guiding principle, further clarifications are needed on this point.
\acknowledgments
We are particularly grateful to R.~Poghossian for illuminating discussions on the periods of Riemann surfaces.
We are also grateful to A.~Grassi, J.~Minahan, D.~Orlando, J.~Russo, and L.~Tizzano for discussions. AM would like to thank G.~Fardelli and A.~Gimenez-Grau for discussions. The work of AB and AM is supported by Knut and Alice Wallenberg Foundation under grant KAW 2016.0129 and by VR grant 2018-04438. AM and AB would like to thank the INFN and University of Rome Tor Vergata for their hospitality.
The computations in this work were enabled by resources in project SNIC 2020/15-320 and SNIC 2021/22-500 provided by the Swedish National Infrastructure for Computing (SNIC) at UPPMAX, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
\begin{appendix}
\section{The Seiberg-Witten prepotential for \texorpdfstring{$\boldsymbol{SU(2)}$}{SU(2)} gauge theory}\label{SWSU(2)gauge}
\subsection{Elliptic geometry}
In the quartic form, an elliptic geometry can be written as
\begin{equation}
w^2=\prod_{i=1}^4 (x-e_i) =x^4+d_3 \, x^3 +d_2 \, x^2+d_1 \, x+d_0 \,.
\end{equation}
The first period is
\begin{equation}\label{Eq.w1}
w_1 = { {\rm i} \over \pi} \int_{e_1}^{e_2} {dx \over w } = \int_0^1 {dz \over \pi\sqrt{e_{13}e_{24} \, z(1-z)(1-z \zeta)}}= { _{2} F_1(\ft12,\ft12,1,\zeta)\over \sqrt{e_{13} e_{24} }}\,,
\end{equation}
with $z={(x-e_1)e_{24}\over (x-e_4)e_{21}}$ and
\begin{equation}
\zeta ={e_{12} e_{34} \over e_{13}e_{24} }\,.
\end{equation}
The second period is obtained by permuting the roots $(123)\to (231)$, leading to
\begin{equation}\label{Eq.w2}
w_2 = {{\rm i} \over \pi } \int_{e_2}^{e_3} {dx \over w } = { _{2} F_1\left(\ft12,\ft12,1, {e_{23} e_{14} \over e_{21}e_{34} } \right)\over \sqrt{e_{12} e_{43} }}= { {\rm i} \, _{2} F_1(\ft12,\ft12,1,1-\zeta) \over \sqrt{e_{13} e_{24}}}\,,
\end{equation}
where in the last equation we used the identity
\begin{equation}
_{2} F_1\left(\ft12,\ft12,1, x \right) =(1-x)^{-{1\over 2}} {}_{2} F_1\left(\ft12,\ft12,1, {x\over x -1} \right)\,. \label{idk}
\end{equation}
Ordering the roots
\begin{equation}
e_1<e_2<e_3<e_4
\end{equation}
one finds that $\zeta^{-1}>1$, so $\zeta^{-1} \notin [0,1]$ as required.
The torus complex structure is defined as the ratio of the periods
\begin{equation}
\tau = {w_2\over w_1} ={{\rm i} \, {}_{2} F_1(\ft12,\ft12,1,1-\zeta) \over _{2} F_1(\ft12,\ft12,1,\zeta) }\,.
\end{equation}
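As a purely numerical illustration, the periods and $\tau$ can be evaluated directly from a set of ordered real roots using any implementation of ${}_2F_1$; for instance, with the \texttt{mpmath} library,
\begin{verbatim}
from mpmath import hyp2f1, sqrt, mpf

e1, e2, e3, e4 = map(mpf, (0, 1, 2, 3))      # sample roots with e1<e2<e3<e4
e13, e24 = e1 - e3, e2 - e4                  # both negative, product positive
zeta = (e1 - e2) * (e3 - e4) / (e13 * e24)   # cross ratio, 0 < zeta < 1
half = mpf(1) / 2
w1 = hyp2f1(half, half, 1, zeta) / sqrt(e13 * e24)
w2 = 1j * hyp2f1(half, half, 1, 1 - zeta) / sqrt(e13 * e24)
print(w2 / w1)                               # tau, purely imaginary here
\end{verbatim}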
Alternatively, the periods of the curve can be expressed in terms of permutation invariant quantities $D$, $\Delta$ given in terms of the expansion coefficients $d_n$ rather than the roots.
The two invariants are defined as
\begin{eqnarray}
D&=&\ft{1}{16} \sum_{i\neq j\neq k \neq l} e_{ij}^2 \, e_{kl}^2=d_2^2-3\, d_1 \, d_3+12 d_0\,,\nonumber\\
\Delta&=&\prod_{i< j } e_{ij}^2 =-27 d_1^4-4 d_3^3 d_1^3+18 d_2 d_3 d_1^3-4 d_2^3 d_1^2+d_2^2 d_3^2 d_1^2\nonumber\\
&&-6 d_0 d_3^2
d_1^2+144 d_0 d_2 d_1^2+18 d_0 d_2 d_3^3 d_1-192 d_0^2 d_3 d_1-80 d_0 d_2^2 d_3
d_1+16 d_0 d_2^4\nonumber\\
&&-27 d_0^2 d_3^4+256 d_0^3-128 d_0^2 d_2^2-4 d_0 d_2^3 d_3^2+144
d_0^2 d_2 d_3^2\,.\label{ddelta}
\end{eqnarray}
In terms of these variables, we can define the following two functions
\begin{eqnarray}
w_3 &=& D^{-{1\over 4}} \,_{2} F_1\left(\ft{1}{12},\ft{5}{12},1,J^{-1}
\right) \,,\label{f1f2}\\
w_4 &=& {\rm i} D^{-{1\over 4}} \,\left[ -_{2} F_1\left(\ft{1}{12},\ft{5}{12},1,J^{-1}\right)
+{\Gamma(\ft{5}{12})\Gamma(\ft{1}{12}) \over 2 \pi^{3\over 2} } \,_{2} F_1\left(\ft{1}{12},\ft{5}{12},\ft12,1{-}J^{-1}
\right) \right]\,, \nonumber
\end{eqnarray}
with
\begin{equation}
J(\zeta) ={4 D^3 \over 27\, \Delta } = {4 (1-\zeta+\zeta^2)^3 \over 27 \zeta^2(1-\zeta)^2 }\,.
\end{equation}
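As a quick consistency check of this relation, one can verify (here with arbitrarily chosen sample roots and exact rational arithmetic) that $4D^3/(27\Delta)$ computed from the coefficients $d_n$ agrees with the expression in terms of $\zeta$:
\begin{verbatim}
from fractions import Fraction as F
from itertools import combinations

e = [F(0), F(1), F(2), F(3)]                 # sample roots
d3 = -sum(e)                                 # coefficients of prod (x - e_i)
d2 = sum(e[i]*e[j] for i, j in combinations(range(4), 2))
d1 = -sum(e[i]*e[j]*e[k] for i, j, k in combinations(range(4), 3))
d0 = e[0]*e[1]*e[2]*e[3]
D = d2**2 - 3*d1*d3 + 12*d0
Delta = 1
for i, j in combinations(range(4), 2):
    Delta *= (e[i] - e[j])**2
zeta = (e[0]-e[1])*(e[2]-e[3]) / ((e[0]-e[2])*(e[1]-e[3]))
assert 4*D**3/(27*Delta) == 4*(1 - zeta + zeta**2)**3/(27*zeta**2*(1 - zeta)**2)
\end{verbatim}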
Using quadratic and cubic transformation identities of hypergeometric functions one can check that for
\begin{eqnarray}\label{Weakw1w2}
\zeta \ll1 : && \qquad w_3=w_1 \quad, \quad w_4=w_2 \,.
\end{eqnarray}
These formulae can be analytically continued away from $\zeta$ small, where the periods $w_1$, $w_2$ can be written as linear combinations of $w_3$, $w_4$, related to those in the original region by modular transformations.
The same formulae are obtained by writing the curve in the cubic Weierstrass form
\begin{equation}
w^2=\prod_{i=1}^3 (x-e_i) = x^3+a\, x+b\,,
\end{equation}
where now
\begin{equation}
\zeta= {e_{12} \over e_{13} } \, ,\ \qquad J= {4 a^3\over 4 a^3+27 b^2}\,.
\end{equation}
\subsection{\texorpdfstring{$\boldsymbol{SU(2)}$}{SU(2)} gauge theories with matter}
The curves of $SU(2)$ gauge theories with fundamental matter of equal masses will be written as
\begin{equation}
y^2+P(x) y+{\Lambda^{4-N_f}\over 4} \, (x-m)^{N_f}=0\,.
\end{equation}
The branch points are located at the zeros of
\begin{equation}
w(x)^2=P(x)^2 -\Lambda^{4-N_f} \, (x-m)^{N_f} =\prod_{i=1}^4 (x-e_i)=\sum_{n=0}^4 d_n \, x^n \,.
\end{equation}
The function $P(x)$ for $N_f\leq 3$ is given by
\begin{equation}
P(x)=
\left\{
\begin{array}{cc}
x^2-u & N_f=0,1 \\
x^2-u +{\Lambda^2\over 4} & N_f=2 \\
x^2-u +{\Lambda\over 4} ( x- 3 m) & N_f=3 \,, \\
\end{array}
\right.
\end{equation}
We will compute the periods using the permutation invariant variables $D$ and $\Delta$, so we write
\begin{eqnarray}
{\partial a(u) \over \partial u} &=& \oint_\alpha {\partial \lambda \over \partial u} = {1\over 2 \pi i} \oint_{\alpha} {dx\over w(x) }=w_1
\,,\nonumber \\
{\partial a_D(u) \over \partial u} &=& \oint_\beta {\partial \lambda \over \partial u} = {1\over 2 \pi i} \oint_{\beta} {dx\over w(x) } =w_2\,,
\label{periods}
\end{eqnarray}
with $w_1$, $w_2$ given by \eqref{Eq.w1} and \eqref{Eq.w2}.
The parameter $u$ is chosen such that the SW differential $\lambda$ behaves at large $x$ as
\begin{equation}
-2\pi {\rm i} \lambda = x {d \log y(x) \over dx} \approx \sum_{n=0}^\infty { \left\langle {\rm tr} \varphi^n \right \rangle \over x^n } \approx 2+{2u\over x^2}+\ldots
\end{equation}
leading to $u= \ft12\left\langle{\rm tr}\, \varphi^2 \right\rangle$. The weak coupling expansion of the periods can alternatively be obtained via localization, with instanton-counting parameter $q$ related to $\Lambda$ via
\begin{equation}
4q=\Lambda^{4-N_f}\,.
\end{equation}
\subsubsection[$N_f=0$ fundamentals]{$\boldsymbol{N_f=0}$ fundamentals}
Let us consider first the pure gauge theory. The SW curve becomes
\begin{equation}
w(x)^2= (x^2-u)^2-\Lambda^4\,,
\end{equation}
giving the invariants
\begin{eqnarray}
D&=& 4(4u^2-3 \Lambda^4)\,, \nonumber\\
\Delta &=& 256\Lambda^8(u^2- \Lambda^4)\,. \label{disc0}
\end{eqnarray}
Combining (\ref{periods}) with \eqref{Weakw1w2} and \eqref{f1f2}, one finds the weak coupling expansion
\begin{eqnarray}
{\partial a\over \partial u} &=& {1\over \sqrt{u}} \left( \frac{1}{2}+ {3\Lambda^4\over 32 u^2}+ {105\Lambda^8\over 2048 u^4} +\ldots \right)\,.
\end{eqnarray}
Integrating over $u$, inverting to find $u(a)$, and evaluating the second period one finds
\begin{eqnarray}
a(u) &=& \sqrt{u}-{\Lambda^4\over 16 u^{3\over 2} } - {15 \Lambda^8\over 1024 u^{7\over 2} } -{105 \Lambda^{12} \over 16384 u^{11\over 2} } +\ldots\,,\nonumber\\
u(a) &=& a^2 + {\Lambda^4\over 2^3 a^2}+ {5\Lambda^8\over 2^9 a^6}+ {9 \Lambda^{12}\over 2^{12}\, a^{10} }+\ldots\,, \nonumber\\
{\cal F}_0(a) &=& a^2 \log\left( 64 {a^4\over \Lambda^4} \right) -6 a^2 - {\Lambda^4\over 8 a^2}- {5 \Lambda^8\over 1024 \,a^6}
- {3\Lambda^{12}\over 4096 \,a^{10} } +\ldots\,. \label{au0}
\end{eqnarray}
The same formula follows from the Nekrasov partition function
\begin{equation}
{\cal F}_{\rm loc}={-}\frac{2 q}{ \left(4 a^2{-}\epsilon^2 \right)}{-}\frac{q^2 \left(20 a^2{+}7 \epsilon _1^2{+}7 \epsilon
_2^2{+}16 \epsilon _1 \epsilon _2\right)}{\left(4 a^2 {-}\epsilon^2\right){}^2 \left(4
a^2{-}(2 \epsilon _1{+}\epsilon _2)^2\right) \left(4 a^2{-}(\epsilon _1{+}2 \epsilon _2)^2\right) }+\ldots
\end{equation}
sending $\epsilon_{1,2}$ to zero and setting $4q=\Lambda^4$.
Using the expansions above one can check the relations \cite{Nakajima:2003uh}
\begin{eqnarray}
u&=&\frac{\theta_3^4(q_{\rm IR})+\theta_2^4(q_{\rm IR})}{\theta_3^2(q_{\rm IR})\theta_2^2(q_{\rm IR})}\Lambda^2\,,\nonumber\\
\frac{\partial u}{\partial a}&=&\frac{\sqrt{2} \Lambda}{\theta_3(q_{\rm IR})\theta_2(q_{\rm IR})}\,,
\label{transprop}
\end{eqnarray}
where the $\theta_i(q_{\rm IR})$ are the standard theta functions and $q_{\rm IR}=e^{\pi {\rm i} \partial a_D/\partial a}$.
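As an illustrative cross-check of the expansions in \eqref{au0} (assuming the \texttt{sympy} library), one can verify symbolically that composing the truncated series for $a(u)$ and $u(a)$ returns $a$ up to the first neglected order:
\begin{verbatim}
import sympy as sp

a, L, u = sp.symbols('a Lambda u', positive=True)
u_of_a = a**2 + L**4/(8*a**2) + 5*L**8/(2**9*a**6)
a_of_u = (sp.sqrt(u) - L**4/(16*u**sp.Rational(3, 2))
          - 15*L**8/(1024*u**sp.Rational(7, 2)))
# composing the two truncated series should give back a up to O(Lambda^12)
check = sp.series(a_of_u.subs(u, u_of_a), L, 0, 12).removeO()
print(sp.simplify(check - a))   # -> 0
\end{verbatim}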
\subsubsection[$N_f=1$ fundamentals]{$\boldsymbol{N_f=1}$ fundamentals}
Next we consider the $N_f=1$ theory.
The curve becomes
\begin{equation}
w(x)^2= \left(x^2-u \right)^2-\Lambda^3 (x-m)\,.
\end{equation}
The invariants read
\begin{eqnarray}
D &=& 4(4u^2+3 m \Lambda^3)\,, \nonumber\\
\Delta &=& \Lambda^6 \left(-27 \Lambda ^6+256 \Lambda^3 m^3+256 m^2 u^2-288 \Lambda ^3 m u-256 u^3\right) \,. \label{discnf1}
\end{eqnarray}
At weak coupling, combining (\ref{periods}) with \eqref{Weakw1w2} and \eqref{f1f2}, one finds
\begin{eqnarray}
a(u) &=&\sqrt{u}+\frac{\Lambda ^3 m}{16 u^{3/2}}+\frac{3 \Lambda ^6 \left(u-5 m^2\right)}{1024 u^{7/2}}-\frac{35 \Lambda ^9 \left(m u-3 m^3\right)}{16384 u^{11/2}}+ \ldots\,,\nonumber\\
u(a) &=&a^2 -\frac{\Lambda ^3 m}{8 a^2}+\frac{\Lambda ^6 \left(5 m^2-3 a^2\right)}{512 a^6} +\frac{\Lambda ^9 \left(7 a^2 m-9 m^3\right)}{4096 a^{10}} {+}\ldots\,,\nonumber\\
{\cal F}_0(a) &=& a^2 \log\left( -64{\rm i} {a^4\over \Lambda^4} \right) -{9 a^2\over 2} -\ft12(a+m)^2\log\frac{(a+m)}{\Lambda}-\ft12(a-m)^2\log\frac{(a-m)}{\Lambda}\nonumber\\
&&+\frac{\Lambda ^3 m}{8 a^2}+\frac{\Lambda ^6 \left(3 a^2-5 m^2\right)}{1024 a^6}+\frac{\Lambda ^9 m \left(9 m^2-7 a^2\right)}{12288 a^{10}}+\ldots\,. \label{au2}
\end{eqnarray}
The same formula follows from the partition function computed using localization
\begin{equation}
{\cal F}_{\rm loc}=\frac{2 m q}{4a^2-\epsilon ^2}
+\frac{q^2 \left(3 \left(4 a^2-\epsilon ^2\right)^2-4 m^2 \left(20 a^2+7 \epsilon _1^2+7 \epsilon _2^2+16 \epsilon _1 \epsilon _2\right)\right)}{4 \left(4 a^2-\epsilon
^2\right)^2 \left(4 a^2-\left(2 \epsilon _1+\epsilon _2\right){}^2\right) \left(4 a^2-\left(\epsilon _1+2 \epsilon _2\right){}^2\right)} \end{equation}
sending $\epsilon_{1,2}$ to zero and setting $4q=\Lambda^3$.
At strong coupling, near the AD conformal point $u_*={3 \Lambda^2 \over 4}$, $m_*=-{3\Lambda\over 4}$, one finds
\begin{equation}
\Delta \approx-432 \Lambda^8 (u-u_*)^2\, \qquad , \qquad D=24 \Lambda^2 \, (u-u_*) \qquad , \qquad J=-{128 (u-u_*)\over 27 \Lambda^2} \,.
\end{equation}
Expanding (\ref{f1f2}) in this limit we get
\begin{equation}
f_i\sim D^{-{1\over 4} } \, J^{1\over 12}\sim (u-u_*)^{-\frac{1}{6}}\,.
\end{equation}
Integrating over $u$ one finds
\begin{eqnarray}
a_D \sim a-a_* &\sim& (u-u_*)^{5\over 6}\qquad , \qquad \tau= {a_D\over a-a_*} =e^{\pi {\rm i} \over 3}
\end{eqnarray}
where we used the fact that $J=0$ for $\tau=e^{\pi {\rm i} \over 3}$ up to SL$(2,\mathbb{Z})$ transformations.
\subsubsection[$N_f=2$ fundamentals]{$\boldsymbol{N_f=2}$ fundamentals}
Next we consider the $N_f=2$ theory with equal masses.
The curve becomes
\begin{equation}
w(x)^2= \left(x^2-u+ {\Lambda^2 \over 4} \right)^2-\Lambda^2(x-m)^2\,.
\end{equation}
The invariants read
\begin{eqnarray}
D &=& 16 u^2-12 m^2 \Lambda^2-4 u \Lambda^2+\Lambda^4 \,, \nonumber\\
\Delta &=& 16 \Lambda^4(u^2-m^2 \Lambda^2) (4u-4 m^2-\Lambda^2)^2\,. \label{disc2}
\end{eqnarray}
At weak coupling, combining (\ref{periods}) with \eqref{Weakw1w2} and \eqref{f1f2}, one finds
\begin{eqnarray}
a(u) &=&\sqrt{u}{-} \frac{\Lambda ^2 \left({u}+m^2\right)}{16 u^{3/2}}{-}\frac{3 \Lambda ^4 \left(5 m^4{+}2 m^2
u{+}u^2\right)}{1024 u^{7/2}}{-}\frac{5 \Lambda ^6 \left(21 m^6{+}7 m^4 u{+}3 m^2 u^2{+}u^3\right)}{16384
u^{11/2}}+\ldots\,,\nonumber\\
u(a) &=& a^2{+}{a^2{+}m^2 \over 8 a^2}\Lambda^2+{a^4{-}6 a^2 m^2{+}5 m^4\over 512 a^6} \Lambda^4 {+} {5a^4{-}14 a^2 m^2{+}9 m^4\over 4096 a^{10}} m^2 \Lambda^6{+}\ldots\,,\nonumber\\
{\cal F}_0(a) &=& a^2 \log\left( 64 {a^4\over \Lambda^4} \right) -3 a^2 -(a+m)^2\log\frac{(a+m)}{\Lambda}-(a-m)^2\log\frac{(a-m)}{\Lambda}\nonumber\\
&&{-}{\Lambda^2 (a^2{+}m^2) \over 8 a^2}{-} {\Lambda^4 (5m^4{-}6 a^2 m^2{+}a^4)\over 1024 \,a^6}
+\ldots\,.
\end{eqnarray}
The same formula follows from the partition function computed using localization
\begin{equation}
{\cal F}_{\rm loc}= -\frac{q \left(4 \left(a^2{+}m^2\right){-}\epsilon ^2\right)}{2 \left(4 a^2{-}\epsilon ^2\right)}{-}\frac{q^2 \left(\left(4 a^2{-}\epsilon ^2\right)^2
\left({-}4 a^2{+}24 m^2{+}\epsilon ^2\right){-}16 m^4 \left(20 a^2{+}7 \epsilon _1^2{+}7 \epsilon _2^2{+}16 \epsilon _1 \epsilon _2\right)\right)}{16
\left(4 a^2{-}\epsilon ^2\right)^2 \left(4 a^2{-}\left(2 \epsilon _1{+}\epsilon _2\right){}^2\right) \left(4 a^2{-}\left(\epsilon _1{+}2 \epsilon
_2\right){}^2\right)}
\end{equation}
sending $\epsilon_{1,2}$ to zero and setting $4q=\Lambda^2$.
At strong coupling, near the AD conformal point $u_*={\Lambda^2 \over 2}$, $m_*={\Lambda\over 2}$, one finds
\begin{equation}
\Delta \approx 256 \,\Lambda^6\, (u-u_*)^3 \, \qquad , \qquad D=12 \Lambda^2 \, (u-u_*) \qquad , \qquad J=1 \,.
\end{equation}
Expanding (\ref{f1f2}) in this limit we get
\begin{equation}
f_i\sim D^{-{1\over 4} } \, J^{1\over 12}\sim (u-u_*)^{-\frac{1}{4}}\,.
\end{equation}
Integrating over $u$ one finds
\begin{eqnarray}
a_D\sim a-a_* &\sim& (u-u_*)^{3\over 4}\qquad , \qquad \tau= {a_D\over a-a_*} ={\rm i}
\end{eqnarray}
where we used the fact that $J=1$ for $\tau={\rm i}$ up to SL$(2,\mathbb{Z})$ transformations.
\subsubsection[$N_f=3$ fundamentals]{$\boldsymbol{N_f=3}$ fundamentals}
Finally we consider the $N_f=3$ theory with equal masses.
The curve becomes
\begin{equation}
w(x)^2= \left(x^2-u+ {\Lambda \over 4} (x-3m) \right)^2-\Lambda(x-m)^3\,.
\end{equation}
The invariants read
\begin{eqnarray}
D &=&\frac{\Lambda ^4}{256}-\frac{3 \Lambda ^3 m}{8}+\frac{1}{2} \Lambda ^2 \left(9 m^2-2 u\right)+12 \Lambda m \left(m^2+u\right)+16 u^2 \label{discnf3}\,, \\
\Delta &=& \frac{\Lambda ^2}{8} \left(2 m^2-\Lambda m-2 u\right)^3 \left(256 \Lambda m^3-3 \Lambda ^2 m^2+96 \Lambda m u+256 u^2-\Lambda ^2 u\right) \,.\nonumber
\end{eqnarray}
At weak coupling, combining (\ref{periods}) with \eqref{Weakw1w2} and \eqref{f1f2}, one finds
\begin{eqnarray}
a(u) &=&\sqrt{u}+ \frac{\Lambda m \left(m^2+3 u\right)}{16 u^{3/2}}-\frac{\Lambda ^2 \left(15 m^6+27 m^4 u+21 m^2 u^2+u^3\right)}{1024 u^{7/2}} +\ldots\,,\nonumber\\
u(a) &=& a^2 -\frac{\Lambda m \left(3 a^2 +m^2\right)}{8 a^2}+\frac{\Lambda ^2 \left(a^2-m^2\right)^2 \left(a^2+5 m^2\right)}{512 a^6} {+}\ldots\,,\nonumber\\
{\cal F}_0(a) &=& a^2 \log\left( -64 {{\rm i} a^4\over \Lambda^4} \right) -{3 a^2\over 2} -\ft32(a+m)^2\log\frac{(a+m)}{\Lambda}-\ft32(a-m)^2\log\frac{(a-m)}{\Lambda}\nonumber\\
&&+\frac{\Lambda m^3}{8 a^2}-\frac{\Lambda ^2m^2 \left(3 a^4 -9 a^2 m^2+5 m^4\right)}{1024 a^6} \label{au2nf3}
+\ldots\,.
\end{eqnarray}
The same formula follows from the partition function computed using localization
\begin{equation}
{\cal F}_{\rm loc}= q \frac{ (3m-\epsilon) (4a^2-\epsilon^2)+4 m^3 }{2 \left(4 a^2{-}\epsilon ^2\right)} +\ldots
\end{equation}
sending $\epsilon_{1,2}$ to zero and setting $4q=\Lambda$.
At strong coupling, near the AD conformal point $u_*={5 \Lambda^2 \over 64}$, $m_*=-{\Lambda\over 8}$, one finds
\begin{equation}
\Delta \approx-27 \Lambda^4 (u-u_*)^4\, \qquad , \qquad D=16\, (u-u_*)^2 \qquad , \qquad J=-{16384 (u-u_*)^2\over 729 \Lambda^4} \,.
\end{equation}
Expanding (\ref{f1f2}) in this limit we get
\begin{equation}
f_i\sim D^{-{1\over 4} } \, J^{1\over 12}\sim (u-u_*)^{-\frac{1}{3}}\,.
\end{equation}
Integrating over $u$ one finds
\begin{eqnarray}
a_D\sim a-a_* &\sim& (u-u_*)^{2\over 3}\qquad , \qquad \tau= {a_D\over a-a_*} =e^{\pi {\rm i} \over 3}
\end{eqnarray}
where we used the fact that $J=0$ for $\tau=e^{\pi {\rm i} \over 3}$ up to SL$(2,\mathbb{Z})$ transformations.
\section{Conformal block recursion relations}
\label{appendixrecursion}
In equation~\eqref{eq:blockFactoriz} we mentioned that the derivatives of conformal blocks can be written as a positive factor times a polynomial. This property can be checked rather easily. First let us take the explicit expression of $g_{\Delta,\ell}$ in terms of hypergeometric functions from~\eqref{blockDef} with~\eqref{ksDef}. Derivatives of order two or higher acting on $k^s_\beta(z)$ can be recast as the first derivative or the function itself times polynomials, thanks to the hypergeometric equation. Then it suffices to perform a Padé approximation of $k^s_\beta(1/2)$ and $\partial_zk^s_\beta(1/2)$. After this, it can be explicitly checked that the denominators turn out to be positive for all unitary values of $\Delta$. See for example~\cite{Poland:2011ey}.
Let us now motivate the existence of such an approximation in general. Conformal blocks can be seen as meromorphic functions of $\Delta$ and representation theory can be used to predict their pole structure. Indeed their poles correspond to null states that arise in the degenerate Verma modules $\mathcal{V}_{\Delta,\ell}$ when $\Delta$ assumes specific non-unitary values. The presence of these poles can be intuitively understood from the schematic definition of a conformal block as a projector onto a given conformal multiplet inserted inside a four-point function
\eqn{
\frac{g_{\Delta,\ell}(z,{\bar{z}})}{(x_{12}^2)^{\Delta_\phi}(x_{34}^2)^{\Delta_\phi}} = \sum_{n,m=0}^\infty\big\langle \phi(x_1)\phi(x_2)\big|\Psi_n^{\Delta,\ell}\big\rangle\,\left(\big\langle\Psi_n^{\Delta,\ell}\big|\Psi_m^{\Delta,\ell}\big\rangle\right)^{-1}\,\big\langle\Psi_m^{\Delta,\ell}\big|\phi(x_3)\phi(x_4)\big\rangle\,,
}[eq:cbSchematic]
where $|\Psi_0^{\Delta,\ell}\rangle$ is the state corresponding to the conformal primary of dimension $\Delta$ and spin $\ell$ and $n,m$ enumerate its descendants. The factor in the middle is the inverse of the matrix of two-point functions. If a certain Verma module $\mathcal{V}_{\Delta,\ell}$ has a null descendant --- which can only happen for non-unitary values of $\Delta$ and $\ell$ --- the matrix of two-point functions will have a zero eigenvalue, thus leading to a pole in~\eqref{eq:cbSchematic}. These poles have been classified in~\cite{Penedones:2015aga}. The residue of each pole is a sum over all the descendants of the singular state, which means that it can be written as a conformal block with shifted parameters, possibly times a coefficient~$R_*$
\eqn{
g_{\Delta,\ell}(z,{\bar{z}}) \quad\underset{\Delta\to\Delta^*}{\longrightarrow}\quad \frac{R_*\, g_{\Delta^*,\ell^*}(z,{\bar{z}})}{\Delta - \Delta^*}\,.
}[]
With this property one can approximate the blocks very efficiently using a recursion relation. First we remove the essential singularity in $\Delta$ by defining a block $h_{\Delta,\ell}$ as follows
\eqn{
g_{\Delta,\ell}(z,{\bar{z}}) = r^\Delta\,h_{\Delta,\ell}(z,{\bar{z}})\,,
}[]
where $r$ is related to the cross ratios $z$ and ${\bar{z}}$ as follows
\eqn{
r = \sqrt{|\rho\bar\rho|}\,,\qquad \rho = \frac{z}{(1+\sqrt{1-z})^2}\,,\qquad \bar\rho = \frac{{\bar{z}}}{(1+\sqrt{1-{\bar{z}}})^2}\,.
}[]
Now $h_{\Delta,\ell}$, if regarded as a complex function of $\Delta$, is meromorphic and it approaches a constant at infinity, which means that it is fully specified by its poles
\eqn{
h_{\Delta,\ell}(z,{\bar{z}}) = h_{\infty,\ell}(z,{\bar{z}}) + \sum_{\mathrm{poles}\,\Delta_A^*} \frac{R_A\, r^{n_A}\,h_{\Delta^*_A,\ell^*_A}(z,{\bar{z}})}{\Delta - \Delta^*_A} + \sum_{\mathrm{double\,poles}\,\Delta_B^*} \frac{R_B\, r^{n_B}\,h_{\Delta^*_B,\ell^*_B}(z,{\bar{z}})}{(\Delta - \Delta^*_B)^2}\,.
}[eq:recRelBlocks]
The sum over double poles appears only for CFTs in even spacetime dimensions. Triple and higher poles never appear. The precise expressions for $h_{\infty,\ell}$, $R_A$, $\Delta_A^*$, $\ell_A^*$ and $n_A$ are given in~\cite{Penedones:2015aga}. The values of the poles $\Delta^*_B$ are also available in~\cite{Penedones:2015aga} (where it can be seen that two types of poles overlap) but the residues are unpublished. The detailed recursion relation is nevertheless implemented in openly available software for all dimensions.\footnote{Available at \href{https://gitlab.com/bootstrapcollaboration/scalar_blocks}{\texttt{gitlab.com/bootstrapcollaboration/scalar\_blocks}}.} One can then truncate the sum by keeping only a finite number of poles. Note that, since $n_A$ is always at least~$2$, the sum is suppressed by a factor of $r^2$, which, at the crossing-symmetric point, is about~$0.029$. This allows us to solve the system iteratively and obtain a rapidly converging power series in $r$.
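Schematically, the iterative solution of~\eqref{eq:recRelBlocks} at the crossing-symmetric point can be organized as in the sketch below; the helpers \texttt{h\_inf}, \texttt{simple\_poles} and \texttt{double\_poles}, which are assumed to supply the asymptotic block and the pole data $(\Delta^*,\ell^*,n_A,R_A)$ of~\cite{Penedones:2015aga}, are not spelled out here.
\begin{verbatim}
R_STAR = 3 - 2 * 2**0.5   # value of r at z = zbar = 1/2

def h_block(delta, ell, order, h_inf, simple_poles, double_poles):
    # Truncated solution of the recursion, keeping terms up to r**order.
    val = h_inf(ell)
    if order <= 0:
        return val
    for d_star, l_star, n, R in simple_poles(ell):
        if n <= order:
            val += (R * R_STAR**n / (delta - d_star)
                    * h_block(d_star, l_star, order - n,
                              h_inf, simple_poles, double_poles))
    for d_star, l_star, n, R in double_poles(ell):
        if n <= order:
            val += (R * R_STAR**n / (delta - d_star)**2
                    * h_block(d_star, l_star, order - n,
                              h_inf, simple_poles, double_poles))
    return val
\end{verbatim}
Since every $n_A\geq2$, the recursion depth is bounded by half the requested order, and the neglected terms are suppressed by at least $r_*^{\,N}$ for a truncation at order $N$.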
This type of recursion relation first appeared in the context of two-dimensional CFTs~\cite{Zamolodchikov:1984eqp, Zamolodchikov:1987eqp} but was later extended to general dimensions in~\cite{Kos:2013tga}. See also~\cite{Kos:2014bka} for a more detailed exposition.
Now that we have such a rational approximation of the conformal blocks, it is easy to take derivatives. Letting $r_* = 3-2\sqrt{2}$ be the value of $r$ at $z={\bar{z}}=1/2$, we have
\eqna{
\partial_z^n\partial_{\bar{z}}^m\,g_{\Delta,\ell}\big(\tfrac12,\tfrac12\big) &= r_*^\Delta\Bigg(q^{m,n}_\ell(\Delta) + \sum_{\mathrm{poles}\,\Delta_A^*} \frac{a_{\ell,A}^{m,n}}{\Delta-\Delta^*_A}\Bigg)\\
&= \frac{r_*^\Delta}{\prod_{\mathrm{poles}\,\Delta_A^*}(\Delta-\Delta_A^*)}\,p^{m,n}_\ell(\Delta)\,,
}[eq:derivApprox]
for some polynomial $q^{m,n}_\ell$ and some coefficients $a_{\ell,A}^{m,n}$. In the second line we simply took a common denominator. As we have explained earlier, the origin of the poles $\Delta_A^*$ guarantees that, for unitary values of $\Delta$, the denominator is always a positive function.
In order to have very precise expressions for the blocks, one would like to keep many poles. If the polynomials are too big, however, the semidefinite programming problem becomes more expensive. A good trade-off between precision and performance is obtained by computing the blocks at very high recursion order and then Padé approximating the obtained expression to a rational function with fewer poles. This is known as the pole-shifting procedure and it was introduced in~\cite{Kos:2013tga}.
\section{Crossing equations of the system of mixed correlators}\label{app:mixed_crossing_eqns}
In this appendix we show the crossing equations for the system of correlators involving two distinct chiral fields $\phi_{r_1}$, $\phi_{r_2}$ and their antichiral partners. Before writing the crossing vectors we need to introduce the blocks for unequal external operators. In order not to clutter the equations with too many subscripts, we depart from the notation in the main text and use $g$ in place of $\mathcal{G}^s$, $\tilde\mathcal{G}$ in place of $\mathcal{G}^t$ and $\mathcal{G}$ in place of $\mathcal{G}^u$. This is also consistent with previous works~\cite{Lemos:2015awa}. With this notation, the generalizations of~\eqref{blockDef} read
\eqna{
g^{(r,r')}_{\Delta,\ell}(z,{\bar{z}}) & =\frac{z{\bar{z}}}{z-{\bar{z}}}\bigl(k^s_{\Delta+\ell}(z)k^s_{\Delta-\ell-2}({\bar{z}})- z\leftrightarrow{\bar{z}}\bigr)\,,\\
\mathcal{G}^{(r)}_{\Delta,\ell}(z,{\bar{z}}) &= \frac{z{\bar{z}}}{z-{\bar{z}}}\bigl(k^u_{\Delta+\ell}(z)k^u_{\Delta-\ell-2}({\bar{z}})- z\leftrightarrow{\bar{z}}\bigr)\,,\\
\tilde{\mathcal{G}}^{(r)}_{\Delta,\ell}(z,{\bar{z}})& =\frac{z{\bar{z}}}{z-{\bar{z}}}\bigl(k^t_{\Delta+\ell}(z)k^t_{\Delta-\ell-2}({\bar{z}})- z\leftrightarrow{\bar{z}}\bigr)\,,\\
}[blockDefMixed]
where now the $k$ functions are defined as
\eqna{
k^s_\beta(z) &=
z^{\frac{\beta}2}\,{}_2F_1\hspace{-0.8pt}\mleft(\frac{\beta-r}2,\frac{\beta+r'}2;\beta;z\mright)\,, \\
k^u_\beta(z) &=
z^{\frac{\beta}2}\,{}_2F_1\hspace{-0.8pt}\mleft(\frac{\beta-r}2,\frac{\beta-r+4}2;\beta+2;z\mright)\,,\\
k^t_\beta(z) &=
z^{\frac{\beta}2}\,{}_2F_1\hspace{-0.8pt}\mleft(\frac{\beta-r}2,\frac{\beta+r}2;\beta+2;z\mright)\,.
}[kDefMixed]
Next we construct the crossing functions which generalize~\eqref{Ffunctions}
\threeseqn{
F^{ijkl}_{\pm,\Delta,\ell}(z,{\bar{z}}) &= v^{\frac{r_j+r_k}2} g^{(r_{ij},r_{kl})}_{\Delta,\ell}(z,{\bar{z}}) \pm u^{\frac{r_j+r_k}2} g^{(r_{ij},r_{kl})}_{\Delta,\ell}(1-z,1-{\bar{z}})\,,
}[]{
\mathcal{F}^{ijkl}_{\pm,\Delta,\ell}(z,{\bar{z}}) &= v^{\frac{r_j+r_k}2} \mathcal{G}^{(r_{ij})}_{\Delta,\ell}(z,{\bar{z}}) \pm u^{\frac{r_j+r_k}2} \mathcal{G}^{(r_{ij})}_{\Delta,\ell}(1-z,1-{\bar{z}})\,,
}[]{
\tilde{\mathcal{F}}^{ijkl}_{\pm,\Delta,\ell}(z,{\bar{z}}) &= v^{\frac{r_j+r_k}2} \tilde{\mathcal{G}}^{(r_{ij})}_{\Delta,\ell}(z,{\bar{z}}) \pm u^{\frac{r_j+r_k}2} \tilde{\mathcal{G}}^{(r_{ij})}_{\Delta,\ell}(1-z,1-{\bar{z}})\,.
}[][]
where $r_{ij} = r_i - r_j$ and $r_i$ denotes both the R-charge and the dimension of $\phi_{r_i}$. The blocks $\mathcal{G}$ and $\tilde{\mathcal{G}}$ have only one superscript because they will only appear with $r_{ij}=- r_{kl}$ and $r_{ij}=r_{kl}$ respectively.
Let us abbreviate $\phi_i = \phi_{r_i}$ and ${\bar{\phi}}_i={\bar{\phi}}_{r_i}$. The crossing equation can then be written as follows
\eqna{
&\sum_{\mathcal{O}\in\phi_1\times\phi_2} |\lambda_{\phi_1\phi_2\mathcal{O}}|^2\, \vec{V}^{r_1+r_2}_{\Delta,\ell} + \sum_{i=1,2} \sum_{\mathcal{O}\in\phi_i\times\phi_i} |\lambda_{\phi_i\phi_i\mathcal{O}}|^2 \, \vec{V}^{2r_i}_{\Delta,\ell}
\\&+
\sum_{\mathcal{O}\in\phi_1\times{\bar{\phi}}_2} |\lambda_{\phi_1{\bar{\phi}}_2\mathcal{O}}|^2\, \vec{V}^{r_1-r_2}_{\Delta,\ell}+
\sum_{\mathcal{O}\in\phi_i\times{\bar{\phi}}_i} (\lambda_{\phi_1{\bar{\phi}}_1\mathcal{O}}^*\;\lambda_{\phi_2{\bar{\phi}}_2\mathcal{O}}^*)\cdot\vec{\mathcal{V}}^{\mathrm{neutral}}_{\Delta,\ell} \cdot
\begin{pmatrix}
\lambda_{\phi_1{\bar{\phi}}_1\mathcal{O}}\\
\lambda_{\phi_2{\bar{\phi}}_2\mathcal{O}}
\end{pmatrix} = \\&=
-\vec{V}_\mathds{1} - \frac{1}{6\hspace{0.5pt} c}\vec{V}_T\,,
}[]
with the crossing vectors $\vec{V}^{q}_{\Delta,\ell},\vec{V}^\mathrm{neutral}_{0,0},\vec{V}_{T}$ and the crossing matrix $\vec{\mathcal{V}}^{\mathrm{neutral}}_{\Delta,\ell}$ defined below
\newcommand{\FF}[1]{F_{\pm,\Delta,\ell}^{#1}}
\newcommand{\CFF}[1]{\mathcal{F}_{\pm,\Delta,\ell}^{#1}}
\newcommand{\CtFF}[1]{\tilde{\mathcal{F}}_{\pm,\Delta,\ell}^{#1}}
\newcommand{\CFFm}[1]{\mathcal{F}_{-,\Delta,\ell}^{#1}}
\newcommand{\CtFFm}[1]{\tilde{\mathcal{F}}_{-,\Delta,\ell}^{#1}}
\newcommand{\monel}{(-1)^\ell}
\fourseqn{\small
\vec{V}^{r_1+r_2}_{\Delta,\ell} = \left[
\begin{array}{c}
\mp(-1)^\ell\FF{2121} \\
0_2 \\
\mp \FF{1221} \\
0_6
\end{array}
\right]\,,\qquad
\vec{V}^{2r_1}_{\Delta,\ell} = \left[
\begin{array}{c}
0_6 \\
\mp(-1)^\ell\FF{1111} \\
0_4
\end{array}
\right]\,,
}[]{
\vec{V}^{2r_2}_{\Delta,\ell} = \left[
\begin{array}{c}
0_9 \\
\mp(-1)^\ell\FF{2222} \\
0
\end{array}
\right]\,,\qquad
\vec{V}^{r_1-r_2}_{\Delta,\ell} = \left[
\begin{array}{c}
(-1)^\ell\CtFF{2121} \\
\CFF{1221} \\
0_8
\end{array}
\right]\,,
}[]{
\vec{\mathcal{V}}^{\mathrm{neutral}}_{\Delta,\ell} = \left[
\begin{array}{c}
\boldsymbol0_2\\
\begin{pmatrix}
0 & \mp\frac12\CFF{1122} \\ \mp\frac12\CFF{1122} & 0
\end{pmatrix}\\
\begin{pmatrix}
0 & \frac\monel2\CtFF{1122} \\ \frac\monel2\CtFF{1122} & 0
\end{pmatrix}\\
\begin{pmatrix}
(-1)^\ell\CtFF{1111} & 0 \\ 0 & 0
\end{pmatrix}\\
\begin{pmatrix}
\CFFm{1111} & 0 \\ 0 & 0
\end{pmatrix}\\
\begin{pmatrix}
0 & 0 \\ 0 & (-1)^\ell\CtFF{2222}
\end{pmatrix}\\
\begin{pmatrix}
0 & 0 \\ 0 & \CFFm{2222}
\end{pmatrix}\\
\end{array}
\right]\,,\hspace{3.8em}
}[]{
\vec{V}_{\mathds{1}} = \left[
\begin{array}{c}
0_2 \\
\mp\mathcal{F}^{1122}_{\pm,0,0} \\
\tilde{\mathcal{F}}^{1122}_{\pm,0,0} \\
\tilde{\mathcal{F}}^{1111}_{\pm,0,0} \\
\mathcal{F}^{1111}_{-,0,0} \\
\tilde{\mathcal{F}}^{2222}_{\pm,0,0} \\
\mathcal{F}^{2222}_{-,0,0}
\end{array}
\right]\,,\qquad
\vec{V}_{T} = \left[
\begin{array}{c}
0_2 \\
\mp r_1r_2\,\mathcal{F}^{1122}_{\pm,2,0} \\
r_1r_2\,\tilde{\mathcal{F}}^{1122}_{\pm,2,0} \\
r_1^2\,\tilde{\mathcal{F}}^{1111}_{\pm,2,0} \\
r_1^2\,\mathcal{F}^{1111}_{-,2,0} \\
r_2^2\,\tilde{\mathcal{F}}^{2222}_{\pm,2,0} \\
r_2^2\,\mathcal{F}^{2222}_{-,2,0}
\end{array}
\right]\,.\hspace{2.2em}
}[][mixedVecDef]
In the above vectors, $0_n$ denotes a sequence of $n$ zeros and $\boldsymbol0_n$ a sequence of $n$ null two-by-two matrices. Entries with a $\pm$ span two rows, one for each choice of sign.
For the allowed values of $\Delta$ and $\ell$ in each OPE refer to section~\ref{sec:multiplets}. The vector $\vec{V}_\mathds{1}$ represents the contribution of the identity and the vector $\vec{V}_{T}$ represents the contribution of the stress tensor multiplet, namely $A_2\overbar{A}_2[0;0]_2^{(0;0)}$. They are both particular cases of $\vec{\mathcal{V}}^{\mathrm{neutral}}_{\Delta,\ell}$ dotted with the appropriate OPE coefficients.
\section{Normalization of the operators}
In this appendix, we show the operator normalization that we chose for our computations. This will make the meaning of $\lambda_{ijk}$ unambiguous. Scalar operators are normalized as
\eqn{
\langle \mathcal{O}_i(x_1) {\overbar{\mathcal{O}}}_j(x_2)\rangle = \frac{\delta_{ij}}{(x_{12}^2)^{\Delta_i}}\,.
}[twopnorm]
Their OPE is fixed by defining the conformal blocks. Namely, we expand a four-point function as
\eqn{\langle \mathcal{O}_1(x_1)\cdots\mathcal{O}_4(x_4)\rangle = \frac{\left(\frac{x_{24}^2}{x_{14}^2}\right)^{\frac12\Delta_{12}}\left(\frac{x_{14}^2}{x_{13}^2}\right)^{\frac12\Delta_{34}}}{(x_{12}^2)^{\frac12(\Delta_1+\Delta_2)}(x_{34}^2)^{\frac12(\Delta_3+\Delta_4)}}\,\sum_{\Delta,\ell}\lambda_{12(\Delta,\ell)}\lambda_{34(\Delta,\ell)}^*\, g^{\Delta_{12},\Delta_{34}}_{\Delta,\ell}(z,{\bar{z}})\,,
}[]
where the function $g^{\Delta_{12},\Delta_{34}}_{\Delta,\ell}(z,{\bar{z}})$ is defined in~\eqref{blockDefMixed}. A property that this block satisfies is
\eqn{
g^{\Delta_{12},\Delta_{34}}_{\Delta,\ell}(z,z) \;\overset{z\to 0}{\longrightarrow}\; z^\Delta\hspace{1pt}(\ell+1)\,.
}[gnorm]
The choices~\twopnorm and~\gnorm are enough to unambiguously determine the meaning of $\lambda_{ijk}$.
\end{appendix}
\bibliographystyle{JHEP}
|
1,108,101,563,712 | arxiv | \section{Introduction}
\ac{ScoX1} is a compact object in a binary system with a low-mass companion
star~\cite{Fomalont2001ScoX1, Steeghs2002ScoX1}.
It is believed to be a rapidly spinning neutron star
and a promising source of
continuous gravitational waves \cite{Watts_2008}. The signal received by an observatory
such as LIGO\cite{LIGO}, Virgo\cite{Virgo} or KAGRA\cite{KAGRA} depends on the parameters of the system,
and a search for that signal loses sensitivity if the incorrect values
are used for those parameters. Several of the parameters
are uncertain, and one method to ensure that the signal is not missed
is to perform the search at each point in a template bank covering the
relevant parameter space. These include the projected semimajor axis $a_p = a\sin{i}$
of the neutron star's orbit, the orbital period $P_{\text{orb}}$, and the time
$t_{\text{asc}}$ at which the neutron star crosses the ascending node as measured
in the solar-system barycenter.
The loss of \ac{SNR} associated with an incorrect
choice of parameters is, in a generic Taylor expansion, a quadratic
function of the parameter offsets. This allows us to write the
fractional loss in SNR, also known as the mismatch, as a squared distance
using a metric on parameter space. In general, this metric will vary
over the parameter space (i.e., the associated geometry will have
intrinsic curvature), but we can divide the parameter space into small
enough pieces that the space is approximately flat, and the metric can
be assumed to be constant. In that case, there exists a
transformation to Euclidean {coordinate}s. The problem of placing
templates so that the mismatch of any point in parameter space from
the nearest template is no more than some maximum mismatch $\mu$ is
then equivalent to the problem of covering the corresponding Euclidean
space with spheres of radius $\sqrt{\mu}$. The most efficient covering
in $n\le 5$ dimensions is the lattice family $A_n^*$, which includes
the hexagonal lattice $A_2^*$ and the body-centered cubic lattice
$A_3^*$. For example, the density of lattice points for $A_4^*$ is a
factor of $2.8$ lower than the corresponding hypercubic
($\mathbb{Z}^4$) lattice.
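As a small illustrative calculation of where this factor comes from (assuming the standard covering-radius and covolume formulas for $\mathbb{Z}^n$ and $A_n^*$, e.g.\ from Conway \& Sloane), one can compare the normalized covering thicknesses of the two lattices:
\begin{verbatim}
import math

def ball_volume(n):
    return math.pi**(n / 2) / math.gamma(n / 2 + 1)

def thickness_Zn(n):
    # covering radius of the unit hypercubic lattice is sqrt(n)/2
    return ball_volume(n) * (math.sqrt(n) / 2)**n

def thickness_Anstar(n):
    # covering radius squared n(n+2)/(12(n+1)), covolume 1/sqrt(n+1)
    return (ball_volume(n) * math.sqrt(n + 1)
            * (n * (n + 2) / (12 * (n + 1)))**(n / 2))

for n in range(1, 6):
    print(n, thickness_Zn(n) / thickness_Anstar(n))
# the n = 4 entry is ~2.8: A_4* needs ~2.8 times fewer templates than Z^4
\end{verbatim}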
We use the {\texttt{LatticeTiling}} module in the LIGO Algorithms Library\cite{lalsuite}
(\texttt{lalsuite}) to investigate efficient lattice coverings for the
parameter space of a search for \ac{ScoX1} using advanced LIGO data. We
show how the search can be made more efficient by: replacing a
hypercubic grid with an $A_n^*$ lattice; accounting for the
elliptical boundaries associated with the correlated prior
uncertainties between orbital period and orbital phase;
defining a sheared {coordinate} change such that a particular
combination of the orbital period and orbital phase is unresolved,
and explicitly searching only in the other three dimensions of the
parameter space. These improvements allow the search to be carried
out using fewer computational resources. Alternatively, since the
search method we use is tunable, with a trade-off between
computational cost and sensitivity, the more efficient lattice allows
a more sensitive search to be done at the same computing cost.
The plan of this paper is as follows: In \secref{s:CrossCorr}, we
briefly summarize the cross-correlation search for continuous \acp{GW}
as applied to \ac{ScoX1}. In \secref{s:byhand} we describe the
existing method of template placement ``by hand'' in a rectangular
grid. In \secref{s:covering} we describe the application of the
sphere covering problem to generation of template lattices, which is
implemented in the {\texttt{LatticeTiling}} module in the \texttt{lalsuite}
software library\cite{lalsuite}, as described in
\cite{Wette2014_Lattice}. In \secref{s:paramspace} we consider the
specific features of the parameter space for the \ac{ScoX1} search
which impact our search: \secref{ss:priors} describes the orbital
priors, especially the relationship between orbital period and phase.
In \secref{ss:stdcoords} we consider the standard {coordinate}s, where the
time of ascension describing the orbital phase has been propagated
in time to the epoch of the gravitational wave search, inducing prior
correlations between the period and time of ascension.
In \secref{ss:sheared} we show how a shearing transformation can be
used to define a modified period parameter whose prior uncertainty is
independent of the uncertainty on the propagated time of ascension.
In \secref{s:results} we construct a number of lattices and compare
the numbers of templates and modelled computing costs. Finally
\secref{s:conclusions} contains conclusions and implications of this
work.
\section{Background}
\subsection{Cross-Correlation Search for Scorpius X-1}
\label{s:CrossCorr}
The model-based cross-correlation method
\cite{Dhurandhar2007_CrossCorr} has been developed to search for
continuous \acp{GW}, most notably from the low-mass X-ray binary
\ac{ScoX1} \cite{Whelan2015_ScoX1CrossCorr} and applied to mock data
\cite{Messenger2015_MDC1} as well as observational data from Advanced
LIGO's first and second observing runs
\cite{LVC2017_O1ScoX1CrossCorr,Zhang2021_O2ScoX1CrossCorr}. It is a
semi-coherent method where the data are divided into short segments of
duration $200\un{s}\lesssim T_{\text{sft}}\lesssim 2000\un{s}$, which we call
``SFTs'' because we construct a Short Fourier Transform from each of
them. A detection statistic is constructed including correlations
between pairs of segments separated by a coherence time $T_{\text{max}}$ or
less.\footnote{In this paper we describe the original ``demod''
implementation of the search. At low frequencies, the search can be
made more efficient by using resampling to reimplement the loop over
data and the search over frequencies, as described in
\cite{Meadors2018_CCResamp}, but the considerations for the template
bank in the orbital parameters are similar.} The sensitivity of the
search scales with the number of included pairs; when $T_{\text{max}}$ is much
less than the total observation time, the detectable \ac{GW} strain is
proportional to $T_{\text{max}}^{-1/4}$. Since the search is computationally
limited and the computing cost increases with $T_{\text{max}}$, the search can
be tuned to trade computational cost for sensitivity. This tuning can
also be done across the parameter space, with different parameter
space regions being assigned different $T_{\text{max}}$ values. Typically, one
uses more computing resources in regions of parameter space which are
more likely to contain the signal, where the search is inherently more
sensitive, and where it is inherently computationally cheaper.
The output of the search is a detection statistic $\rho$, which is normalized
to have unit variance. The value of $\rho$ can then be seen as a
\ac{SNR} for the search. In the presence of a signal of intrinsic
amplitude $h_0$, the expectation value $\ev{\rho}\propto h_0^2$, with the proportionality
constant being a measure of the sensitivity of the search.
Because the model-based statistic is constructed using signal
parameters such as intrinsic frequency and parameters influencing the
Doppler modulation of the signal, such as sky position and the binary
orbit of the neutron star, the \ac{SNR} in the presence of a
signal will be reduced if the template model parameters differ from
those of the signal. For parameters which are unknown or insufficiently
constrained, the search is run repeatedly at different points in
parameter space to try to find a point close to the true signal. If the
parameter values for a search point are $\{\lambda_i\}$ and the
corresponding true values of the signal are $\{\lambda^s_i\}$, we can
define the \textit{mismatch} $\mu$ as the fractional loss in \ac{SNR}:
\begin{equation}
\mu = 1 - \frac{\ev{\rho}_{\{\lambda_i\}}}{\ev{\rho}_{\{\lambda^s_i\}}}.
\end{equation}
A Taylor expansion in the $n$ parameters $\{\lambda_i\}$
gives\footnote{This assumes that
the \ac{SNR} is a local maximum at the true signal point
$\lambda_i=\lambda^s_i$. This is not quite true, as shown in
\cite{Whelan2015_ScoX1CrossCorr}, but it is a good starting point.}
\begin{equation}
\mu \approx
\sum_{i=1}^n \sum_{j=1}^n
g_{ij}(\lambda_i-\lambda_i^s)(\lambda_j-\lambda_j^s),
\end{equation}
where the matrix $\{g_{ij}\}$ acts as a metric on parameter space.
The general form of the metric for the cross-correlation search is
\cite{Whelan2015_ScoX1CrossCorr}
\begin{equation}
\label{e:crosscorrmetric}
g_{ij} \approx
\frac{1}{2}\langle \Delta\Phi_{\alpha,i} \Delta\Phi_{\alpha,j}\rangle_{\alpha}
\ ,
\end{equation}
where $\alpha$ represents a pair of SFTs, $\Delta\Phi_{\alpha}$ is the
difference in modelled signal phase between the SFTs in the pair,
$\langle\cdot\rangle_{\alpha}$ is an average over SFT pairs weighted
by the antenna patterns and sensitivity of the detectors involved, and
${}_{,i}=\frac{\partial}{\partial\lambda_i}$ is a partial derivative
with respect to the parameter $\lambda_i$.
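As a concrete illustration of how a constant metric converts parameter offsets into a mismatch, and of the transformation to Euclidean {coordinate}s used below, the following Python sketch evaluates the quadratic form for an arbitrary, purely illustrative two-dimensional metric (not a metric from the actual search) and checks it against the squared Euclidean length obtained from a Cholesky factorization:
\begin{verbatim}
import numpy as np

# Hypothetical constant 2x2 parameter-space metric (illustrative values only).
g = np.array([[4.0, 1.0],
              [1.0, 9.0]])

# Offset between a putative signal and a template, in physical coordinates.
dlam = np.array([0.1, -0.05])

# Mismatch as the quadratic form mu = sum_ij g_ij dlam_i dlam_j.
mu = dlam @ g @ dlam

# Linear map to Euclidean coordinates: with g = L L^T, take x = L^T dlam,
# so that mu equals the squared Euclidean length of x.
L = np.linalg.cholesky(g)
x = L.T @ dlam
assert np.isclose(mu, x @ x)
\end{verbatim}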
\begin{figure}[t]
\begin{center}
\includegraphics[height=0.42\columnwidth]{squaregrid.pdf}
\includegraphics[height=0.42\columnwidth]{rhombicgrid.pdf}
\end{center}
\caption{Illustration of maximum mismatch of the by-hand grid of
\secref{s:byhand} when the metric is not diagonal. Specializing
to the case of $n=2$ and $\mu_1=\mu_2$, we transform to Euclidean
{coordinate}s as described in \secref{s:covering}. Left: If the
metric is diagonal, this is just a scaling which transforms the
rectangle defined by four grid points $ABDC$ into a square with
sides of length $\sqrt{\mu_1}=\sqrt{\mu_2}$. The point in
parameter space farthest from any grid point is the center $P$ of
the square, with $d_{AP}=d_{BP}=d_{CP}=d_{DP}=\sqrt{\mu_1/2}$.
Right: If the metric is not diagonal, this square becomes a
rhombus $ABDC$. Defining $AD$ to be the long diagonal, the point
farthest from any vertex is not the center, but a point $P$ on the
long diagonal with $d_{AP}=d_{BP}=d_{CP}$. (There is an
equivalent point $Q$ on the other side of the center with
$d_{DQ}=d_{BQ}=d_{CQ}$.) We see that $APB$ (or equivalently $APC$
or $DQB$ or $DQC$) is an isoceles triangle with
$d_{AP}=d_{BP}=\sqrt{\mu_{\text{max}}}$ and $d_{AB}=\sqrt{\mu_1}$. In the
case of a non-diagonal metric, $\angle{APB}$ is an obtuse angle,
and $d_{AP}=d_{BP}<d_{AB}/\sqrt{2}$, so
$\mu_{\text{max}}>\mu_1/2=(\mu_1+\mu_2)/4$.}
\label{f:rhombicgrid}
\end{figure}
\subsection{Simple Rectangular Template Placement}
\label{s:byhand}
The cross-correlation analyses run to date examined a parameter space
divided up into rectangular regions, small enough to
assume a constant metric. Then, a set of discrete points lying on a
rectangular grid with spacing $\delta\lambda_i$ in the $\lambda_i$
direction is placed over the parameter space, using what we
refer to as the ``by hand'' method. The number of points used is
\begin{equation}
N_i =
\left \lceil
    \frac{\lambda_i^{\text{max}} - \lambda_i^{\text{min}}}{\delta \lambda_i}
  \right \rceil ,
\end{equation}
where $\lceil\cdot\rceil$ indicates rounding up to the next
integer. The spacing $\delta\lambda_i$ is chosen to be
\begin{equation}
\delta \lambda_i = \sqrt{\frac{\mu_i}{g_{ii}}}
\end{equation}
so that the mismatch between adjacent points\footnote{Note that for
historical reasons, $\mu_i$ is defined as the mismatch between
adjacent points in the grid, rather than the maximum mismatch
between some point in the parameter space and the nearest grid
point. This is the origin of the factor of $\frac{1}{4}$ appearing
in \eqref{e:mumaxgrid}.} in the $\lambda_i$ direction is
$g_{ii}(\delta \lambda_i)^2=\mu_i$. If the metric is approximately
diagonal, $g_{ij}=g_{ii}\delta_{ij}$, then the point in the parameter
space farthest (in the sense of the metric) from any grid point is
$\frac{\delta \lambda_i}{2}$ away in the $\lambda_i$ direction, and
has a total mismatch of
\begin{equation}
\label{e:mumaxgrid}
\mu_{\text{max}} = \sum_{i=1}^n g_{ii} \left(\frac{\delta \lambda_i}{2}\right)^2
= \frac{1}{4} \sum_{i=1}^n \mu_i
\end{equation}
If the metric is not diagonal, the procedure described above will lead
to a maximum mismatch smaller than that given in \eqref{e:mumaxgrid},
as illustrated in \figref{f:rhombicgrid}. This approach is
conservative and can result in much larger template banks if the
metric contains large correlations. The number of templates could be
reduced by accounting for the metric correlations, which will be
discussed later in \secref{ss:sheared}.
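The geometric argument of \figref{f:rhombicgrid} can also be checked numerically. The following sketch, with purely illustrative metric values, places the by-hand spacings from the diagonal metric elements and scans one grid cell for the point farthest from its four corner templates; for a metric with non-zero correlation the resulting maximum mismatch falls below $(\mu_1+\mu_2)/4$:
\begin{verbatim}
import numpy as np

def max_mismatch_of_cell(g, mu1=0.1, mu2=0.1, npts=201):
    """Worst-case mismatch inside one by-hand grid cell for metric g."""
    d1 = np.sqrt(mu1 / g[0, 0])      # spacing in the lambda_1 direction
    d2 = np.sqrt(mu2 / g[1, 1])      # spacing in the lambda_2 direction
    corners = np.array([[0, 0], [d1, 0], [0, d2], [d1, d2]])
    worst = 0.0
    for a in np.linspace(0.0, d1, npts):
        for b in np.linspace(0.0, d2, npts):
            dl = corners - np.array([a, b])
            mu = np.min(np.einsum('ki,ij,kj->k', dl, g, dl))
            worst = max(worst, mu)
    return worst

print(max_mismatch_of_cell(np.array([[4.0, 0.0], [0.0, 9.0]])))  # ~0.050
print(max_mismatch_of_cell(np.array([[4.0, 3.0], [3.0, 9.0]])))  # ~0.033
\end{verbatim}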
\subsection{Covering Lattices}
\label{s:covering}
The general problem of choosing a set of template points with a prescribed
maximum mismatch distance $\mu_{\text{max}}$ between any point in the parameter
space and the nearest template is an application of the
\textit{sphere covering problem} \cite{conway1998sphere}. Since we
treat the metric $\{g_{ij}\}$ as approximately constant, there is
always a linear transformation of the parameters $\{\lambda_i\}$ into
Euclidean coordinates $\{x_i\}$; the mismatch between two
points separated by parameter differences $\{\Delta\lambda_i\}$ is then
\begin{equation}
\sum_{i=1}^n \sum_{j=1}^n g_{ij} (\Delta\lambda_i)(\Delta\lambda_j)
= \sum_{i=1}^n (\Delta x_i)^2
\ .
\end{equation}
The template placement problem is then simplified to one of placing
(hyper-)spheres of radius
$\sqrt{\mu_{\text{max}}}$ in the $\{x_i\}$ space so that every point of the
region of interest is covered by at least one sphere. To efficiently
cover the space, the overlap between spheres should be minimized. This
is quantified using the normalized thickness or center density $\theta$,
defined as the average number of templates per unit volume when the
covering spheres have unit radius.
A sphere covering based on a repeating pattern is known as a lattice.
The number of templates required to cover the space is minimized
when the lattice has the smallest normalized thickness $\theta$; a
hypothetical perfect covering, in which every point lay in exactly one
sphere, would set the lower limit.
The simplest lattice is the cubic lattice $\mathbb{Z}^n$, which has
points equally spaced in each of the (Euclidean) coordinate
directions. The ``by hand'' lattice of \secref{s:byhand} is an
example of a $\mathbb{Z}^n$ lattice, if the metric $\{g_{ij}\}$ is
diagonal and all of the mismatches $\mu_i$ are chosen to be equal. A
more efficient lattice is $A_n^*$, which is a general analogue of the
hexagonal lattice. For the sphere
covering problem, the thinnest lattice is the $A_n^*$ lattice, which
in two dimensions has a hexagonal principal cell. The principal cell is the
set of points closest to a given point of the lattice, and its vertices
are the locations where the covering spheres intersect.
It has been shown for $n\le 5$ that $A_n^*$ is the
most efficient covering lattice, i.e., has the smallest thickness
$\theta$ \cite{conway1998sphere}, and for higher
dimensions it is typically close to the most efficient
covering \cite{Prix2007_Lattice}. Since a more efficient lattice
allows the same volume of parameter space to be covered with fewer
templates, it can reduce the necessary computing cost at a given
sensitivity.\footnote{But see \cite{Allen2021Lattice}.}
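The relative template density of the two lattice families can be evaluated directly from the standard thickness expressions tabulated in \cite{conway1998sphere}; the short sketch below reproduces the factor of roughly $2.8$ between $\mathbb{Z}^4$ and $A_4^*$ quoted earlier:
\begin{verbatim}
import math

def theta_Zn(n):
    """Normalized thickness of the cubic lattice Z^n (covering radius sqrt(n)/2)."""
    return (math.sqrt(n) / 2.0) ** n

def theta_Anstar(n):
    """Normalized thickness of the A_n^* lattice."""
    return math.sqrt(n + 1.0) * (n * (n + 2.0) / (12.0 * (n + 1.0))) ** (n / 2.0)

for n in range(1, 6):
    print(n, theta_Zn(n) / theta_Anstar(n))
# The ratio for n = 4 is about 2.8: an A_4^* covering needs roughly 2.8 times
# fewer templates than Z^4 at the same maximum mismatch.
\end{verbatim}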
Construction of $\mathbb{Z}^n$ and $A_n^*$ lattices in physical coordinates
$\{\lambda_i\}$ given a constant mismatch metric $\{g_{ij}\}$ is
implemented by the {\texttt{LatticeTiling}} module in the
\texttt{lalsuite} software library\cite{lalsuite}, as described
in \cite{Wette2014_Lattice}. A particular challenge is ensuring that
the area within the boundaries of a search region is completely covered,
which sometimes requires retaining templates whose parameters lie
outside the search region. We shall see that this can necessitate
some care in choosing coordinates to take advantage of underresolved
directions in parameter space.
\section{Parameter Space for Sco X-1 Search}
\label{s:paramspace}
\subsection{Observational Priors}
\label{ss:priors}
The \ac{GW} signal produced by a spinning neutron star in
the system is nearly periodic in the neutron star's rest frame, and
Doppler shifted as a result of the motion of the detector as the Earth
rotates and moves in its orbit and, in the case of a low-mass X-ray
binary such as \ac{ScoX1}, of the neutron star in its own orbit with
its binary companion. For an accreting neutron star in approximate
spin equilibrium, the frequency $f_0$ can be approximated as
constant.\footnote{In practice, this equilibrium will be imperfect,
leading to some ``spin wandering'', but the impact of deviations
from equilibrium was shown in \cite{Whelan2015_ScoX1CrossCorr} to be
limited when the coherence time is not too long, especially with the
levels of spin wandering predicted by
\cite{Mukherjee2018_Spinwander}.} The Doppler shift from detector
motion is primarily affected by sky position, which for \ac{ScoX1} is
well enough known\cite{2mass06} that its uncertainty does not affect
the search. The Doppler shift from the binary motion is affected by
five orbital parameters: eccentricity, orientation, projected orbital
speed, orbital period, and orbital phase\cite{Leaci2015_ScoX1}. The
orbit of \ac{ScoX1} is believed to be nearly
circular\cite{Galloway2014}, so that the search needs to cover only
three orbital parameters: projected speed, period, and
phase.
The best constraints on these come from \cite{Wang2018_PEGS3}. The constraint on orbital period $P_{\text{orb}}$ is Gaussian, with a mean
of $P_0=68023.86\un{s}$ and a standard deviation of
$\sigma_{\Porb}=0.043\un{s}$. The orbital phase is described by
time of ascension $t_{\text{asc}}$, which is the time at which the neutron star
crosses the plane of the sky moving away from the observer (i.e.,
crosses the ascending node). The constraint on this is also Gaussian,
with a mean of
$t_{\text{asc},0}=\text{GPS}~974416624$~({2010--Nov--21 23:16:49\,UTC})
and a standard deviation of $\sigma_{\Tasc}=50\un{s}$. These
estimates are uncorrelated, as shown in the left panel of \figref{f:TPprior},
but if we
convert the time of ascension to a subsequent equivalent time
$\TascNow=\Tasc+n_{\text{orb}}P_{\text{orb}}$, a correlation is induced, as
described in \secref{ss:stdcoords} and shown in the right panel of
\figref{f:TPprior}. The constraints on the orbital
velocity of the neutron star in \cite{Wang2018_PEGS3} are described in
terms of the amplitude of the component of velocity along the line of
sight, known as $K_1$, and consist of constraints that
$40\un{km/s}\le K_1\le 90\un{km/s}$, but without a well-determined
probability density between those limits. Searches for \acp{GW} from
\ac{ScoX1} typically use a uniform prior distribution on this
parameter. The parameter used is also typically written as
the line-of-sight component of the semimajor axis of the orbit,
$\ap=\frac{K_1P_{\text{orb}}}{2\pi}$. Since the relative uncertainty on
$P_{\text{orb}}$ is much less than on $K_1$, one assumes a uniform prior on
$\ap$ for $1.44\un{lt-s}\le\ap\le3.25\un{lt-s}$, where
the units on $\ap$ are given in light-seconds.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\columnwidth]{TPprior_2010.pdf}
\includegraphics[width=0.47\columnwidth]{TPprior_2019.pdf}
\end{center}
\caption{Orbital parameter constraints from \cite{Wang2018_PEGS3}.
  If the time of ascension $\Tasc$ is quoted in 2010 (left
panel) its uncertainty is uncorrelated with the orbital period
$P_{\text{orb}}$. If we propagate forward by $n_{\text{orb}}$ orbits to determine
an equivalent time of ascension
  $\TascNow = \Tasc + n_{\text{orb}}P_{\text{orb}}$, this introduces correlations.
In each case the level surfaces of the probability distribution
are shown, for which $\chi^2$, defined in \eqref{e:chisqThen} and
  \eqref{e:chisqNow}, equals $1^2$, $2^2$ and $3^2$. We refer to
these as at normalized distances of $1\sigma$, $2\sigma$, and
$3\sigma$, and they correspond to cumulative probabilities of
{39.3\%}, {86.5\%}, and {98.9\%},
respectively.}
\label{f:TPprior}
\end{figure}
\subsection{Standard Search {Coordinate}s}
\label{ss:stdcoords}
The phase derivatives $\{\Delta\Phi_{\alpha,i}\}$ appearing
in \eqref{e:crosscorrmetric} are computed in
\cite{Whelan2015_ScoX1CrossCorr} for the standard search coordinates
$\{\lambda_i\}\equiv\{f_0,\ap,\TascNow,P_{\text{orb}}\}$. Here we introduce
$\TascNow$ as the time of ascension at a given point in the
propagated 2019 coordinates, indicated by the prime. For long searches
which evenly sample the orbital phase of the binary, the
non-negligible metric elements have the approximate form\footnote{Note
  that the original formula for $g'_{\TascNow\Porb}$, Eq.~(4.20h) of
  \cite{Whelan2015_ScoX1CrossCorr}, contains a sign error which has
  not been relevant previously because the approximate form of $g'_{\TascNow\Porb}$
has only been used to set it to zero.}
\begin{subequations}
\begin{gather}
{g_{f_0f_0}}
\approx 2\pi^2
\left\langle
\Delta\tdet_\alpha^2
\right\rangle_{\!\alpha}
\approx \frac{2\pi^2}{3}T_{\text{max}}^2,
\\
{g_{\ap\ap}} = 4\pi^2 f_0^2
\left\langle
\sin^2\frac{\pi \Delta\tdet_\alpha}{P_{\text{orb}}}
\right\rangle_{\!\alpha}
\approx
2\pi^2 f_0^2
\left(
1 - \sinc \frac{2T_{\text{max}}}{P_{\text{orb}}}
\right),
\\
\begin{pmatrix}
{g'_{\TascNow\TascNow}} & {g'_{\TascNow\Porb}} \\ {g'_{\Porb\TascNow}} & {g'_{\Porb\Porb}}
\end{pmatrix}
\approx
\begin{pmatrix}
1
&
\frac{
-(\TascNow -
\left\langle
\overline{\tdet}_{\alpha}
\right\rangle_{\!\alpha})
}
{P_{\text{orb}}}
\\
\frac{
-(\TascNow -
\left\langle
\overline{\tdet}_{\alpha}
\right\rangle_{\!\alpha})
}
{P_{\text{orb}}}
&
\frac{
\left\langle
(\TascNow-\overline{\tdet}_{\alpha})^2
\right\rangle_{\!\alpha}
}
      {P_{\text{orb}}^2}
\end{pmatrix}
\frac{16\pi^4 f_0^2 a_p^2}{P_{\text{orb}}^2}
\left\langle
\sin^2\frac{\pi \Delta\tdet_\alpha}{P_{\text{orb}}}
\right\rangle_{\!\alpha},
\end{gather}
\end{subequations}
where $\Delta\tdet_\alpha$ is the difference between the timestamps of the
two SFTs in pair $\alpha$, and $\overline{\tdet}_\alpha$ is their mean. Note
that the implementation in \texttt{lalsuite} \cite{lalsuite} uses the
exact metric elements, which include additional (generally small)
off-diagonal elements.
If we define the midpoint of the run (according to the weighted
average $\langle\cdot\rangle_{\alpha}$) to be
$\mu_{\text{obs}} = \langle\overline{\tdet}_{\alpha}\rangle_{\alpha}$ and the variance as
$\sigobs^2 = \langle(\overline{\tdet}_{\alpha}-\mu_{\text{obs}})^2\rangle_{\alpha}$, the
metric elements on the $\TascNow,P_{\text{orb}}$ subspace become
\begin{subequations}
\begin{gather}
{g'_{\TascNow\TascNow}} \approx \frac{16\pi^4 f_0^2 a_p^2}{P_{\text{orb}}^2}
\left\langle
\sin^2\frac{\pi \Delta\tdet_\alpha}{P_{\text{orb}}}
\right\rangle_{\!\alpha},
\\
{g'_{\TascNow\Porb}} \approx
\left(
    \frac{-(\TascNow-\mu_{\text{obs}})}{P_{\text{orb}}}
\right)
{g'_{\TascNow\TascNow}},
\\
{g'_{\Porb\Porb}} \approx
\left(
    \frac{(\TascNow-\mu_{\text{obs}})^2+\sigobs^2}{P_{\text{orb}}^2}
\right)
{g'_{\TascNow\TascNow}}.
\end{gather}
\end{subequations}
Note that, as shown in \cite{Whelan2015_ScoX1CrossCorr}, if we ignore
any data gaps and the noise and antenna pattern weighting, for
an observing run of duration $\Tobs$ and coherence time $T_{\text{max}}$,
$\sigobs^2\approx\frac{\Tobs^2}{12}$, and
$\left\langle \sin^2\frac{\pi \Delta\tdet_\alpha}{P_{\text{orb}}}
\right\rangle_{\!\alpha}\approx\frac{1}{2}\left( 1 - \sinc
\frac{2T_{\text{max}}}{P_{\text{orb}}} \right)$.
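A transcription of these leading-order expressions into code can be useful for quick estimates (this is only a sketch of the approximate formulas above, using the normalized-sinc convention they imply; the \texttt{lalsuite} implementation uses the exact metric):
\begin{verbatim}
import math

def nsinc(x):
    """Normalized sinc, sin(pi x)/(pi x)."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def orbital_metric(f0, a_p, P_orb, T_max, tasc_prime, mu_obs, sigma_obs_sq):
    """Approximate (t'_asc, P_orb) block of the cross-correlation metric."""
    s2 = 0.5 * (1.0 - nsinc(2.0 * T_max / P_orb))   # <sin^2(pi dT / P_orb)>
    g_tt = 16.0 * math.pi**4 * f0**2 * a_p**2 / P_orb**2 * s2
    g_tp = -((tasc_prime - mu_obs) / P_orb) * g_tt
    g_pp = ((tasc_prime - mu_obs)**2 + sigma_obs_sq) / P_orb**2 * g_tt
    return g_tt, g_tp, g_pp
\end{verbatim}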
If we choose $n_{\text{orb}}$ so that
\begin{equation}
t'_{\text{asc},0} = t_{\text{asc},0} + n_{\text{orb}}P_0
\end{equation}
is as close as possible to $\mu_{\text{obs}}$, we can minimize the magnitude of
\begin{equation}
{g'_{\TascNow\Porb}} \approx
\left(
    \frac{t_{\text{asc},0} + n_{\text{orb}}P_0-\mu_{\text{obs}}}{P_0}
\right)
{g'_{\TascNow\TascNow}}.
\end{equation}
This is achieved by taking
\begin{equation}
\label{e:norb-unsheared}
  n_{\text{orb}} = \nint{\frac{\mu_{\text{obs}}-t_{\text{asc},0}}{P_0}}
\end{equation}
where $\nint{\cdot}$ indicates rounding to the nearest integer.
To give a concrete example, we consider the LIGO-Virgo O3 data run
\cite{LVK2020_ObsScenarios} which began on {2019--Apr--01 00:00:00\,UTC} (GPS
{1238112018}), continued until a commissioning break at
{2019--Oct--01 00:00:00\,UTC} (GPS {1253923218}), resumed on
{2019--Nov--01 15:00:00\,UTC} (GPS {1256655618}), and ended on
{2020--Mar--27 17:00:00\,UTC} (GPS {1269363618}).
Neglecting variability of antenna patterns and noise spectra, as well
as any data gaps other than the commissioning break, we find an
average time of $\mu_{\text{obs}}=\text{GPS}~1253589161\equiv${2019--Sep--27 03:12:23\,UTC}.
This translates into an optimal $n_{\text{orb}}=4104$, corresponding to
$t'_{\text{asc},0}=\text{GPS}~1253586547\equiv${2019--Sep--27 02:28:49\,UTC}.\footnote{The
actual values for O3 including duty cycle, noise weighting and
antenna patterns, and using the exact form of the metric, will be
slightly different, but we will use the values above for
illustration in this paper.}
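These reference numbers can be reproduced with a few lines of Python; small differences (a second or two in $t'_{\text{asc},0}$) come from the rounding of the published values of $P_0$ and $t_{\text{asc},0}$ quoted above:
\begin{verbatim}
# Representative values quoted in the text (GPS seconds throughout).
P0      = 68023.86        # orbital period
sigma_P = 0.043           # period uncertainty
tasc0   = 974416624.0     # 2010 time of ascension
sigma_t = 50.0            # its uncertainty
mu_obs  = 1253589161.0    # approximate weighted mid-time of O3

n_orb      = round((mu_obs - tasc0) / P0)                 # -> 4104
tasc_prime = tasc0 + n_orb * P0                           # ~ GPS 1253586545
sigma_tp   = (sigma_t**2 + (n_orb * sigma_P)**2) ** 0.5   # ~ 183 s
print(n_orb, tasc_prime, sigma_tp)
\end{verbatim}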
The joint prior on $\TascNow$ and $P_{\text{orb}}$ will remain a multivariate
Gaussian, but now with a non-diagonal variance-covariance matrix. The
marginal prior on $\TascNow$ will be a Gaussian with mean
$t'_{\text{asc},0}$ and variance
\begin{equation}
  \sigTascNow^2 = \sigma_{\Tasc}^2 + n_{\text{orb}}^2\sigma_{\Porb}^2.
\end{equation}
The joint prior can be illustrated by plotting level curves of the
quantity
\begin{equation}
\label{e:chisqThen}
\chi^2
=
\left(
    \frac{\Tasc-t_{\text{asc},0}}{\sigma_{\Tasc}}
\right)^2
+
\left(
\frac{P_{\text{orb}}-P_0}{\sigma_{\Porb}}
\right)^2,
\end{equation}
whose prior distribution is a chi-squared with two degrees of freedom
(\figref{f:TPprior}, right panel). A bit of algebra shows that
\begin{equation}
\label{e:chisqNow}
\chi^2
=
  \frac{\sigTascNow^2}{\sigma_{\Tasc}^2}
\left[
\left(
\frac{P_{\text{orb}} - P_0}{\sigma_{\Porb}}
\right)^2
- \frac{2n_{\text{orb}}\sigma_{\Porb}}{\sigTascNow}
\left(
\frac{\TascNow - t'_{\text{asc},0}}{\sigTascNow}
\right)
\left(
\frac{P_{\text{orb}} - P_0}{\sigma_{\Porb}}
\right)
+
\left(
\frac{\TascNow - t'_{\text{asc},0}}{\sigTascNow}
\right)^2
\right].
\end{equation}
In previous searches, rectangular boundaries have been used in all
coordinate directions in the parameter space. For the O1 search, these
regions covered out to $3\sigma$ of the marginal priors on $\TascNow$
and $P_{\text{orb}}$, as shown in Figure~1 of \cite{LVC2017_O1ScoX1CrossCorr}.
If we use a similar approach in O3 (\figref{f:TP_regions}, left panel),
the search regions cover a large area of $\TascNow,P_{\text{orb}}$
parameter space with negligible prior probability. Since the middle
third of the $\TascNow$ range is searched separately (at a higher
coherence time $T_{\text{max}}$, since the prior probability density is higher
there), a simple approach can reduce the over-coverage of the search region.
The $P_{\text{orb}}$ search range is different for each of the rectangular
regions covering different ranges of $\TascNow$, discarding regions in
which the prior $\chi^2\gtrsim 3^2$. These ``chopped'' regions are
shown in the right panel of \figref{f:TP_regions}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\columnwidth]{O3_TP_regions_unchopped.pdf}
\includegraphics[width=0.47\columnwidth]{O3_TP_regions_chopped.pdf}
\end{center}
\caption{Left: rectangular search region boundaries cover the entire $\TascNow,P_{\text{orb}}$
region, as used in O1 CrossCorr \cite{LVC2017_O1ScoX1CrossCorr}. The darker
region bounded by one-sigma in the $t_{\text{asc}}$ direction indicates a higher
likelihood of finding the signal in that region of parameter space. Right:
Rectangular regions more closely concentrated on the uncertainty ellipses
than the O1 search regions. The areas in $\TascNow,P_{\text{orb}}$ have been ``chopped''
to eliminate extra parameter space area where a signal is not likely to be found. }
\label{f:TP_regions}
\end{figure}
The chopped regions can be achieved without significant
modification to the previously existing search code.
To further
improve the efficiency of the parameter space coverage, we can define
an elliptical boundary function which sets the range of $P_{\text{orb}}$
continuously as a function of $\TascNow$. This function can be used
in the {\texttt{LatticeTiling}} module to restrict template placement to
those needed to cover the prior ellipse corresponding to
$\chi^2\le k^2$ for a particular $k$:
\begin{equation}
\label{e:PorbEllipticalBoundary}
\frac{P_{\text{orb}} - P_0}{\sigma_{\Porb}}
\in
\frac{n_{\text{orb}}\sigma_{\Porb}}{\sigTascNow}
\left(
\frac{\TascNow - t'_{\text{asc},0}}{\sigTascNow}
\right)
\pm
\frac{\sigma_{\Tasc}}{\sigTascNow}
\sqrt{
k^2
-
\left(
\frac{\TascNow - t'_{\text{asc},0}}{\sigTascNow}
\right)^2
}.
\end{equation}
This is used to define a search region, together with a constant
boundary on $\TascNow$:
\begin{equation}
t'_{\text{asc},0} - k\sigTascNow \le
\TascNowMin \le \TascNow \le t'_{\text{asc,max}}
\le t'_{\text{asc},0} + k\sigTascNow,
\end{equation}
and illustrated in the left panel of \figref{f:TP_regions_ellip}.
Note that we choose $k=3.3$ rather than $k=3$ as the boundary, since
the former encloses {99.6\%} of the prior probability, while
the latter would enclose only {98.9\%}.
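Written as code, \eqref{e:PorbEllipticalBoundary} becomes a boundary function returning the allowed range of $P_{\text{orb}}$ at a given $\TascNow$; the sketch below illustrates the logic only and is not the actual {\texttt{LatticeTiling}} interface:
\begin{verbatim}
import math

def porb_bounds(tp, tp0, P0, sigma_P, sigma_t, n_orb, k=3.3):
    """Range of P_orb covering the prior ellipse chi^2 <= k^2 at t'_asc = tp."""
    sigma_tp = math.hypot(sigma_t, n_orb * sigma_P)
    x = (tp - tp0) / sigma_tp              # normalized offset in t'_asc
    if abs(x) > k:
        return None                        # outside the search region
    centre     = P0 + sigma_P * (n_orb * sigma_P / sigma_tp) * x
    half_width = sigma_P * (sigma_t / sigma_tp) * math.sqrt(k * k - x * x)
    return centre - half_width, centre + half_width
\end{verbatim}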
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\columnwidth]{O3_TP_regions_ellip.pdf}
\includegraphics[width=0.47\columnwidth]{O3_TP_regions_sheared.pdf}
\end{center}
\caption{Left: Search region boundaries in $\TascNow,P_{\text{orb}}$ now
define a boundary function, where the range of $P_{\text{orb}}$ is computed
as a function of $\TascNow$, defined in
\eqref{e:PorbEllipticalBoundary}. We choose $k=3.3$ for the
boundary. For reference, the colored ellipses are level surfaces
at $1\sigma$, $2\sigma$, and $3\sigma$, i.e., $\chi^2=1^2$, $2^2$,
    and $3^2$, as defined in \eqref{e:chisqNow}. The darker shaded
region lies within $\pm 1\sigma$ of the marginal distribution on
$\TascNow$, where the signal is more likely to be found, and the
lighter-shaded region is between $\pm 3\sigma$ and $\pm 1\sigma$.
Dividing up the search regions based on $\TascNow$ rather than
$\chi^2$ is more efficient since $\TascNow$ is always resolved,
and $P_{\text{orb}}$ may not be. Right: the same search regions in a
``sheared'' set of {coordinate}s $\TascNow,\tilde{P}$, where
$\tilde{P}$ is a linear combination of $P_{\text{orb}}$ and $\TascNow$,
defined in \eqref{e:PorbShear}, which aligns the constant-$\chi^2$
ellipses with the {coordinate} axes.}
\label{f:TP_regions_ellip}
\end{figure}
\subsection{Sheared {Coordinate}s}
\label{ss:sheared}
The joint prior uncertainty in $\TascNow$,$P_{\text{orb}}$ space complicates the
placement of lattice points neatly in coordinate directions. The fact
that the semimajor axis of the uncertainty ellipses does not lie in a
coordinate direction forces rows of lattice points calculated from a
diagonal metric to be placed over a complicated area in parameter
space, which is illustrated in \secref{s:results}. An area-preserving
coordinate transformation can be performed that shears the coordinates from
$(\TascNow,P_{\text{orb}})$ to $(\TascNow,\tilde{P})$, aligning the semimajor axis of
the uncertainty ellipses with the coordinate directions,
as shown in the right panel of \figref{f:TP_regions_ellip}.
The lattice points are then chosen in a straightforward way, and then
transformed back to the physical coordinates. In
particular, this simplifies the question of whether multiple templates
are necessary to cover the period direction. Looking at the right
panel of \figref{f:TPprior} or the left panel of
\figref{f:TP_regions_ellip}, we see that the marginal uncertainty in
$P_{\text{orb}}$ is considerably larger than the conditional uncertainty at a
particular value of $\TascNow$. Changing {coordinate}s to $\tilde{P}$,
which is observationally uncorrelated with $\TascNow$, allows us to
cover a range of period values corresponding to this smaller marginal
uncertainty.
We can accomplish this {coordinate} transformation by subtracting from
$P_{\text{orb}}$ the centerline of the observational uncertainty ellipse and
defining
\begin{equation}
\label{e:PorbShear}
\tilde{P} =
P_{\text{orb}}
-
\frac{n_{\text{orb}}\sigma_{\Porb}}{\sigTascNow}
\left(
\frac{\TascNow - t'_{\text{asc},0}}{\sigTascNow}
\right)
\sigma_{\Porb},
\end{equation}
so that
\begin{equation}
\label{e:chisqShear}
\chi^2
=
\left(
\frac{\TascNow-t'_{\text{asc},0}}{\sigTascNow}
\right)^2
+
\left(
\frac{\tilde{P}-P_0}{\sigma_{\PorbShear}}
\right)^2,
\end{equation}
and the priors on $\TascNow$ and $\tilde{P}$ are once again
independent Gaussians. Note that
\begin{equation}
  \sigma_{\PorbShear} = \left(\frac{\sigma_{\Tasc}}{\sigTascNow}\right) \sigma_{\Porb},
\end{equation}
so the area of the uncertainty ellipse is the same in all three sets
of {coordinate}s: $(\Tasc,P_{\text{orb}})$, $(\TascNow,P_{\text{orb}})$, and
$(\TascNow,\tilde{P})$.
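In matrix form the shear is a unit-determinant linear map, which is why the ellipse area is unchanged; the following sketch applies it to the prior covariance of $(\TascNow,P_{\text{orb}})$ and confirms that the transformed covariance is diagonal (representative numbers only):
\begin{verbatim}
import numpy as np

sigma_t, sigma_P, n_orb = 50.0, 0.043, 4108     # representative values
var_tp = sigma_t**2 + (n_orb * sigma_P)**2      # variance of propagated t'_asc

# Prior covariance of (t'_asc, P_orb), correlated via t'_asc = t_asc + n_orb*P_orb.
C = np.array([[var_tp,             n_orb * sigma_P**2],
              [n_orb * sigma_P**2, sigma_P**2        ]])

# Shear of the equation above: P_tilde = P_orb - c*(t'_asc - t'_asc,0).
c = n_orb * sigma_P**2 / var_tp
S = np.array([[1.0, 0.0],
              [-c,  1.0]])

print(np.linalg.det(S))     # 1.0: the transformation is area-preserving
print(S @ C @ S.T)          # off-diagonal elements vanish (up to rounding)
\end{verbatim}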
This transformation affects the metric:
\begin{subequations}
\begin{gather}
\tilde{g}_{\TascNow\TascNow} = g'_{\TascNow\TascNow} + 2 \left(\frac{\partial\Porb}{\partial\TascNow}\right)_{\PorbShear}g'_{\TascNow\Porb} + \left(\frac{\partial\Porb}{\partial\TascNow}\right)_{\PorbShear}^2 g'_{\Porb\Porb}, \\
\tilde{g}_{\TascNow\PorbShear} = g'_{\TascNow\Porb} + \left(\frac{\partial\Porb}{\partial\TascNow}\right)_{\PorbShear}g'_{\Porb\Porb}, \\
\tilde{g}_{\PorbShear\PorbShear} = g'_{\Porb\Porb},
\end{gather}
\end{subequations}
where
\begin{equation}
\left(\frac{\partial\Porb}{\partial\TascNow}\right)_{\PorbShear} = n_{\text{orb}} \left(\frac{\sigma_{\Porb}}{\sigTascNow}\right)^2
  = n_{\text{orb}} \frac{\sigma_{\Porb}^2}{\sigma_{\Tasc}^2 + n_{\text{orb}}^2\sigma_{\Porb}^2}.
\end{equation}
In order to make the metric as close to diagonal as possible in these
{coordinate}s, we should choose a different $n_{\text{orb}}$ from that defined in
\eqref{e:norb-unsheared}. Instead we make
\begin{equation}
\tilde{g}_{\TascNow\PorbShear} \approx
  \left(
    \frac{\mu_{\text{obs}}-t_{\text{asc},0}}{P_0} - n_{\text{orb}}
    + n_{\text{orb}} \left(\frac{\sigma_{\Porb}}{\sigTascNow}\right)^2
    \frac{(t'_{\text{asc},0}-\mu_{\text{obs}})^2+\sigobs^2}{P_0^2}
  \right)
{g'_{\TascNow\TascNow}}
\end{equation}
close to zero. If we set this to zero and solve algebraically for
$n_{\text{orb}}$, we get
\begin{equation}
\label{e:norb-sheared}
  n_{\text{orb}} \approx \frac{\mu_{\text{obs}} - t_{\text{asc},0}}{P_0}
  \left( 1 - \left(\frac{\sigma_{\Porb}}{\sigTascNow}\right)^2
  \frac{(t'_{\text{asc},0}- \mu_{\text{obs}})^2 + \sigobs^2}{P_0^2} \right)^{-1}.
\end{equation}
Since the definitions of $t'_{\text{asc},0}$ and $\sigTascNow$
depend on $n_{\text{orb}}$ as well, we need to solve iteratively for the
optimal $n_{\text{orb}}$ to minimize the metric correlation in these sheared
{coordinate}s. This converges quickly, giving, for the reference values
used in this paper, $n_{\text{orb}} = 4108$, corresponding to
$t'_{\text{asc},0}=\text{GPS}~1253858643
\equiv${2019--Sep--30 06:03:45\,UTC}.\footnote{Again, the
actual best value using the data with gaps, antenna patterns and
variable noise level, as well as the exact metric, will be slightly
different, but the relationship between the choices of $n_{\text{orb}}$
optimized for sheared and unsheared {coordinate}s is illustrative.}
With this choice, we have {coordinate}s $\TascNow$ and $\tilde{P}$ with
no prior correlation and negligible correlation in the search metric.
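The iteration converges in one or two steps. The sketch below uses the approximate relation above together with the representative numbers from this paper; $\mu_{\text{obs}}$ and $\sigobs^2$ are crude estimates that neglect gaps and weighting, so the result is illustrative rather than exact:
\begin{verbatim}
P0, sigma_P    = 68023.86, 0.043
tasc0, sigma_t = 974416624.0, 50.0
mu_obs         = 1253589161.0      # approximate weighted mid-time of the run
sigma_obs_sq   = 8.1e13            # roughly T_obs^2/12 for the O3 span, in s^2

n_orb = round((mu_obs - tasc0) / P0)        # unsheared optimum, 4104
for _ in range(10):
    var_tp     = sigma_t**2 + (n_orb * sigma_P)**2
    tasc_prime = tasc0 + n_orb * P0
    A = (sigma_P**2 / var_tp) * ((tasc_prime - mu_obs)**2 + sigma_obs_sq) / P0**2
    n_new = round(((mu_obs - tasc0) / P0) / (1.0 - A))
    if n_new == n_orb:
        break
    n_orb = n_new
print(n_orb)                                # converges to 4108
\end{verbatim}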
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{O3_Ta_regions.pdf}
\end{center}
\caption{Cells of $\TascNow,\ap$ space for sample lattice
construction. Each rectangular cell has its own coherence time
$T_{\text{max}}$, corresponding to a coherence time used in
\cite{LVC2017_O1ScoX1CrossCorr}, and we construct a lattice in
each of these cells. The range of orbital period values for each
cell is a function of $\TascNow$, as illustrated in
\figref{f:TP_regions} or \figref{f:TP_regions_ellip}. We
construct the lattice in all 9 of these regions, and include the
template counts in the computing cost estimate. For the three
shaded regions, we also include the templates in the corresponding
$\TascNow,P_{\text{orb}}$ or $\TascNow,\tilde{P}$ plot of the lattice. }
\label{f:Ta_regions}
\end{figure}
\section{Example Lattices and Results}
\label{s:results}
To quantify the reduction in number of search templates and computing
costs at a given mismatch, we construct sample lattices of each type
for a variety of representative regions in parameter space. For each
choice of {coordinate} system and lattice type, we construct $9\times 14$
lattices, corresponding to the nine regions of orbital parameter space
$(\TascNow,P_{\text{orb}},\ap)$ or $(\TascNow,\tilde{P},\ap)$ shown in
\figref{f:Ta_regions} and fourteen frequency bands beginning at
$25\un{Hz}$ and ending at $2000\un{Hz}$. Each of these regions has
its own $T_{\text{max}}$ value taken from the search in
\cite{LVC2017_O1ScoX1CrossCorr}.
In that search, the frequency $f_0$ was split into
ranges of width $0.05\un{Hz}$, and a search job covered that range of
frequencies along with one of the orbital parameter space regions.
Rather than constructing the full set of $9\times 39500$ lattices
covering all the bands from $25\un{Hz}$ to $2000\un{Hz}$, we choose
one $0.0005\un{Hz}$ range from the middle of each band, construct the
nine lattices (one for each orbital parameter cell) corresponding to
that range, and scale up the number of templates by the number of such
ranges in the band. Since the computing cost scales roughly with the
number of templates times the number of SFT pairs, we approximate the
computing cost for each band $i$ and cell $c$ as
$N^{\text{pair}}_{ic} N^{\text{tmplt}}_{ic}$. We estimate the number
of SFT pairs as in \cite{Whelan2015_ScoX1CrossCorr} by
\begin{equation}
  N^{\text{pair}}_{ic} \approx N_{\text{det}}^2\frac{T_{\text{obs}} T^{\text{max}}_{ic}}{(T^{\text{SFT}}_i)^2},
\end{equation}
where we show explicitly that the SFT duration depends on the frequency
band $i$ while the coherence time depends on the frequency band $i$
and orbital parameter space cell $c$. Note that this is an
overestimate of the absolute number of pairs, because we computed the
$T_{\text{obs}}$ using the start and end times of the two parts of O3 rather
than an actual set of data segments reflecting the true duty cycle.
In addition to the total computing cost
$\sum_i\sum_c N^{\text{pair}}_{ic} N^{\text{tmplt}}_{ic}$ for each
lattice, we also plot the lattice points projected onto the
$\TascNow,P_{\text{orb}}$ or $\TascNow,\tilde{P}$ plane, limiting attention
for the plots to the shaded cells in \figref{f:Ta_regions}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\columnwidth]{tiling4Dhand.pdf}
\includegraphics[width=0.47\columnwidth]{tiling4DZ4.pdf}
\end{center}
  \caption{Left: The lattice setup that most closely resembles what
was done in previous CrossCorr searches, but with the inclusion of
``chopped'' search regions in $\TascNow,P_{\text{orb}}$. The lattice points
are placed by computing the spacing in each direction given the mismatch
and the metric (\secref{s:byhand}). Points are then placed to cover the
uncertainty ellipses in each of the three rectangular search regions,
where the darker region bounded by one-sigma in $\TascNow$ represents the
    region where we are most likely to find a signal if it is present. The
total number of templates for this setup is {$1.060\times 10^{12}$}. Right:
Implementing the elliptical boundary function and using {\texttt{LatticeTiling}}
to place a cubic lattice changes where the points are placed. The total
number of templates for this setup is {$1.682\times 10^{12}$}.}
\label{f:9H9Z}
\end{figure}
\Figref{f:9H9Z} shows two implementations of cubic ($\mathbb{Z}^4$)
lattices, one using the original by-hand method described in
\secref{s:byhand} and the other using the {\texttt{LatticeTiling}} module. The main
difference between the two methods is in how they handle the
boundaries of the elliptical search region. The by-hand method
uses the chopped regions illustrated in the right panel of
\figref{f:TP_regions}, while the {\texttt{LatticeTiling}} method uses the
elliptical boundaries of \figref{f:TP_regions_ellip}. Note that while
{\texttt{LatticeTiling}} uses a smaller region of parameter space, it actually
requires more templates (a total over the whole parameter space of
{$1.682\times 10^{12}$} versus {$1.060\times 10^{12}$} for the by-hand method) because
of its conservative approach to covering the boundaries. Ordinarily
this would be a small effect, but since only two or three templates
are required in the $P_{\text{orb}}$ direction, it is significant in this case,
which motivates the special handling of the $P_{\text{orb}}$ coordinate which
follows.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\columnwidth]{tiling4DA4.pdf}
\includegraphics[width=0.47\columnwidth]{tiling4DA4sh.pdf}
\end{center}
\caption{Left: Implementing the elliptical boundary function
discussed in \secref{ss:stdcoords} requires the use of
{\texttt{LatticeTiling}} \cite{Wette2014_Lattice} for template
placement. This setup shows a use of an $A_4^*$ lattice with the
elliptical boundary function and lattice template points placed by
{\texttt{LatticeTiling}}. Note that the density of templates is
increased in the one-sigma region. The total number of templates
used here (across four-dimensional parameter space) is
{$6.006\times 10^{11}$}. Right: After performing the shearing
transformation discussed in \secref{ss:sheared}, we use
{\texttt{LatticeTiling}} to place templates in
$\TascNow, \tilde{P}$ space. This figure shows an $A_4^*$ lattice
over the sheared uncertainty ellipses using the elliptical
boundary function. Note that the primary axes of the uncertainty
ellipses are aligned with the coordinate axes in this
area-preserving transformation, and that template density is again
greater in the one-sigma region. The total number of templates
here is {$5.931\times 10^{11}$}.}
\label{f:SA9A}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\columnwidth]{tilingShDiag.pdf}
\includegraphics[width=0.47\columnwidth]{tiling3DA3sh.pdf}
\end{center}
  \caption{Left: The number of orbits used to propagate $\Tasc$
    into 2019 coordinates was chosen based on what would diagonalize
    the metric in the standard (unsheared) 2019 coordinates. This is what introduced the slant in
the template rows seen in \figref{f:SA9A}. Choosing a different
$n_{\text{orb}}$ eliminates the slant, to produce this figure, showing a
lattice covering using an $A_4^*$ lattice, {\texttt{LatticeTiling}}
to place the templates, and the sheared coordinates with a
diagonalized metric to align the uncertainty ellipses with the
primary axes of the parameter space. The total number of templates
is {$5.918\times 10^{11}$}. Right: Noticing that the spacing between
template rows in $\TascNow, \tilde{P}$ seemed to be larger than
the cross-section of the uncertainty ellipse, we perform a
calculation described in \secref{ss:sheared} to determine whether
the orbital period needs to be resolved in the sheared
coordinates. After finding that it does not, for our search, we
fix $\tilde{P} = P_0$, forcing {\texttt{LatticeTiling}} to place a
single row of lattice templates along the centerline of the
sheared uncertainty ellipse. Note that the template density is
still greater in the one-sigma region. Here, the total number of
templates is {$3.867\times 10^{11}$}, our best result for template count
and an improvement from the original setup by a factor of about
3.}
\label{f:SPSD}
\end{figure}
If we change the lattice from $\mathbb{Z}^4$ to $A_4^*$, we obtain the
lattice shown in the left panel of \figref{f:SA9A}. The use of a more
efficient lattice has reduced the total number of templates to
{$6.006\times 10^{11}$}, but we can see from the figure that the templates
extend well beyond the boundaries of the search region. In the right
panel, we construct the lattice in the sheared {coordinate}s
$\TascNow,\tilde{P}$ defined in \secref{ss:sheared}, which simplifies
the boundaries of the search region, but produces lattices with
comparable numbers of templates ({$5.931\times 10^{11}$} total). In these
{coordinate}s, the mismatch metric has a non-negligible off-diagonal
component $\tilde{g}_{\TascNow\PorbShear}$, so the template lattice is constructed using a
basis which looks ``slanted'' in these {coordinate}s.
We can make the
metric approximately diagonal, as described in \secref{ss:sheared},
by choosing a different value of $n_{\text{orb}}$ derived from
\eqref{e:norb-sheared}; for the example considered, this means
changing $n_{\text{orb}}$ from $4104$ to $4108$. The resulting
lattice is shown in the left panel of \figref{f:SPSD}. Note that the
total number of templates is comparable to the other $A_4^*$ lattices,
a total {$5.918\times 10^{11}$} across the whole parameter space. The fact
that all of the $A_4^*$ lattices have comparable numbers of templates
indicates that the {\texttt{LatticeTiling}} module is behaving consistently,
even when the {coordinate}s being used have metric
correlations or oddly-shaped boundaries. However, it is clearly not
taking full advantage of the narrow range of plausible $\tilde{P}$
values. The underlying issue is that {\texttt{LatticeTiling}}, by the nature
of its boundary-covering algorithm \cite{Wette2014_Lattice}, uses a
minimum of two templates in a {coordinate} direction, even if a single
template would be sufficient to cover the space at the desired minimum
mismatch.
The change to $\TascNow,\tilde{P}$ {coordinate}s, in which both the prior
uncertainty and mismatch metric are approximately uncorrelated, allows
us to take advantage of the small prior uncertainty in $\tilde{P}$.
If we limit attention to lattices with all their templates on the
hypersurface $\tilde{P}=P_0$, the mismatch between a signal
with parameters $\{\lambda^s_i\}$ and a template point $\{\lambda_i\}$
will be
\begin{equation}
\mu = \tilde{g}_{\PorbShear\PorbShear}(\tilde{P}^s-P_0)^2 + 2\sum_{\alpha} g_{\alpha\tilde{P}}
(\lambda_{\alpha}^s-\lambda_{\alpha})(\tilde{P}^s-P_0)
+ \sum_{\alpha}\sum_{\beta}
g_{\alpha\beta}(\lambda_{\alpha}^s-\lambda_{\alpha})
  (\lambda_{\beta}^s-\lambda_{\beta}),
\end{equation}
where $\{\lambda_{\alpha}\}=\{\TascNow,f_0,\ap\}$ are the other three
{coordinate}s of the parameter space and $\tilde{g}_{\PorbShear\PorbShear}$ is the sheared metric
element for orbital period. If we assume the metric is
approximately diagonal, this becomes
\begin{equation}
  \mu \approx \tilde{g}_{\PorbShear\PorbShear}(\tilde{P}^s-P_0)^2 + \mu^{\parallel},
\end{equation}
where $\mu^{\parallel}$ is the mismatch contributed by the offsets in the
other three dimensions. As shown in the Appendix, the general expression is
\begin{equation}
  \mu \approx \frac{(\tilde{P}^s-P_0)^2}{\tilde{g}^{\PorbShear\PorbShear}} + \mu^{\parallel}.
\end{equation}
Since the prior uncertainty ellipse with $\chi^2\le k^2$ (see
\eqref{e:chisqShear} and \figref{f:TP_regions_ellip}) has
$(\tilde{P}-P_0)^2\le k^2\sigma_{\Porb}^2$, we can obtain a lattice
with $\mu<\mu_{\text{max}}$ everywhere if we construct a three-dimensional
lattice with
\begin{equation}
\mu_{\text{max}}^{\parallel} \le \mu_{\text{max}} - \frac{k^2\sigma_{\Porb}^2}{\tilde{g}^{\PorbShear\PorbShear}}
\end{equation}
A conservative approach is to allocate a mismatch of
$\frac{\mu_{\text{max}}}{4}$ to the $\tilde{P}$ direction and
$\frac{3\mu_{\text{max}}}{4}$ to the other three directions. Then we proceed as
follows:
\begin{itemize}
\item If $\frac{k^2\sigma_{\Porb}^2}{\tilde{g}^{\PorbShear\PorbShear}}>\frac{\mu_{\text{max}}}{4}$, we construct an
$A_4^*$ lattice covering the full four-dimensional parameter space
as usual.
\item If $\frac{k^2\sigma_{\Porb}^2}{\tilde{g}^{\PorbShear\PorbShear}}\le\frac{\mu_{\text{max}}}{4}$, we construct a
three-dimensional $A_3^*$ lattice with maximum mismatch
  $\mu_{\text{max}}^{\parallel}=\frac{3\mu_{\text{max}}}{4}$ and $\tilde{P}=P_0$ at
all lattice points. (In {\texttt{LatticeTiling}} we accomplish this by
setting the search region to have zero width in the $\tilde{P}$
direction.)
\end{itemize}
Following this approach produces the most efficient lattice, with
{$3.867\times 10^{11}$} total templates, illustrated in the right panel of
\figref{f:SPSD}.  A slightly more aggressive approach would be to
``reallocate'' any unused mismatch if
$\frac{k^2\sigma_{\Porb}^2}{\tilde{g}^{\PorbShear\PorbShear}}<\frac{\mu_{\text{max}}}{4}$, and set the
maximum mismatch of the $A_3^*$ lattice to
\begin{equation}
  \mu_{\text{max}}^{\parallel} = \mu_{\text{max}} - \frac{k^2\sigma_{\Porb}^2}{\tilde{g}^{\PorbShear\PorbShear}}.
\end{equation}
This leads to a slightly smaller number of templates ($3.431\times 10^{11}$).
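The resulting rule for choosing the lattice dimension can be summarized in a short helper function. Here \texttt{g\_PP\_inv} stands for the inverse-metric element $\tilde{g}^{\PorbShear\PorbShear}$ of the cell in question; the function is a sketch of the logic described above, not part of the \texttt{lalsuite} interface:
\begin{verbatim}
def choose_porb_lattice(mu_max, k, sigma_P, g_PP_inv, reallocate=False):
    """Decide whether the sheared period needs its own lattice dimension."""
    # Worst-case mismatch from fixing P_tilde = P0 over the prior chi^2 <= k^2.
    mu_P = k**2 * sigma_P**2 / g_PP_inv
    if mu_P > mu_max / 4.0:
        # P_tilde is resolved: cover all four dimensions with an A4* lattice.
        return "A4*", mu_max
    # P_tilde is unresolved: single template at P_tilde = P0, A3* lattice elsewhere.
    mu_parallel = (mu_max - mu_P) if reallocate else 0.75 * mu_max
    return "A3*", mu_parallel
\end{verbatim}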
\begin{table}[t]
\centering
\caption{Comparing Estimates of Raw Computing Cost: we display the chosen
    coordinates, the number of orbits needed to propagate $\TascNow$ to obtain
a diagonal metric, the type of search region boundary used, and the type
of lattice structure. Then, we show the number of templates required to
cover all of parameter space using a given lattice and estimate the computing
cost by multiplying the number of lattice templates by the number of SFT
pairs.}
\label{tab:results}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
{Coordinate}s & $n_{\text{orb}}$ & Boundary & Type
& $\sum_{i,c} N^{\text{tmplt}}_{ic}$
& $\sum_{i,c}N^{\text{pair}}_{ic} N^{\text{tmplt}}_{ic}$ \\
\hline
\hline
$\TascNow,P_{\text{orb}},\ap,f_0$ & $4104$ & Chopped & $\mathbb{Z}^4$
& {$1.060\times 10^{12}$} & {$1.434\times 10^{18}$} \\
\hline
$\TascNow,P_{\text{orb}},\ap,f_0$ & $4104$ & Elliptical & $\mathbb{Z}^4$
& {$1.682\times 10^{12}$} & {$2.085\times 10^{18}$} \\
\hline
$\TascNow,P_{\text{orb}},\ap,f_0$ & $4104$ & Elliptical & $A_4^*$
& {$6.006\times 10^{11}$} & {$7.439\times 10^{17}$} \\
\hline
$\TascNow,\tilde{P},\ap,f_0$ & $4104$ & Elliptical & $A_4^*$
& {$5.931\times 10^{11}$} & {$7.352\times 10^{17}$} \\
\hline
$\TascNow,\tilde{P},\ap,f_0$ & $4108$ & Elliptical & $A_4^*$
& {$5.918\times 10^{11}$} & {$7.316\times 10^{17}$} \\
\hline
$\TascNow,\ap,f_0$; $\tilde{P}=P_0$ & $4108$ & Elliptical & $A_3^*$
& {$3.867\times 10^{11}$} & {$4.928\times 10^{17}$} \\
\hline
\multicolumn{4}{|c|}{Same with reallocated mismatch}
& {$3.431\times 10^{11}$} & {$4.483\times 10^{17}$} \\
\hline
\end{tabular}
\end{table}
The properties of the different lattices are summarized in
\tabref{tab:results}. In addition to the total number of templates
$\sum_{i,c} N^{\text{tmplt}}_{ic}$ across all of the parameter space
cells, we also show the sum
$\sum_{i,c}N^{\text{pair}}_{ic} N^{\text{tmplt}}_{ic}$ which should
roughly scale with the computing cost. Roughly speaking, replacing
the by-hand cubic lattice with an $A_n^*$ lattice reduces the overall
computing cost by a factor of 2, while enforcing
unresolved $\tilde{P}$ when possible reduces the cost by a further
factor of 1.5, for an overall improvement of a factor of
3 resulting from the enhancements described in this paper.
\section{Conclusions}
\label{s:conclusions}
In this paper we have discussed changes to the lattice used in the
template-based cross-correlation search for continuous gravitational
waves from Scorpius X-1. We detailed the setup of our parameter space
and explained how previous searches used lattices in the same
parameter space. We then presented four major improvements to
the lattice setup, which cover the parameter space with fewer templates
and hence reduce the computing cost. We first showed that there is a reduction in template count by
switching from a hypercubic lattice to an $A_n^*$ lattice in
\secref{s:covering}. Then, we defined an elliptical boundary function in
\secref{ss:stdcoords} to improve the shape of the search region in
$\TascNow$ and $P_{\text{orb}}$ to be more focused on the section of parameter
space within the prior ellipses. In \secref{ss:sheared} we defined an
area-preserving shearing transformation that aligned the axes of the
prior ellipses with the coordinate axes. This simplifies the task of
using {\texttt{LatticeTiling}} to place a horizontal row of templates in
parameter space. Finally, we compared the cross-section of the prior
ellipses in $\TascNow$ and $\tilde{P}$ to determine whether
$\tilde{P}$ needed to be resolved, and determined that it did not in
\secref{ss:sheared}. This allowed us to use an $A_3^*$ lattice, and
reduced the original template count by a factor of $\sim 3$.
This reduction in template count allows the use of longer coherence
times at the same computing cost, enabling a more sensitive search.
\section*{Acknowledgments}
We wish to thank Chris Messenger, as well as the members of the LIGO
Scientific Collaboration and Virgo Collaboration continuous waves
group, for useful feedback.
KJW, JTW, and JKW were supported by NSF grant PHY-1806824.
KW was supported by the Australian Research Council Centre of Excellence for
Gravitational Wave Discovery (OzGrav) through project number CE170100004.
This paper has been assigned LIGO Document Number LIGO-P2000502-v3.
\section*{References}
\providecommand{\newblock}{}
\section{Introduction}
Cartesian products of graphs derive their popularity from their simplicity,
and their importance from the fact that many classes of graphs, such as
hypercubes, Hamming graphs, median graphs, benzenoid graphs, or Cartesian
graph bundles, are either Cartesian products or closely related to them
\cite{haimkl-2011}. As even slight disturbances of a product, such as
the addition or deletion of an edge, can destroy the product structure
completely \cite{Fei-1986}, the question arises whether it is possible to
restore the original product structure after such a disturbance. In other
words, given a graph, the question is, how close it is to a Cartesian
product, and whether one can find this product algorithmically.
Unfortunately, in general this problem can only be solved by heuristic
algorithms, as discussed in detail in \cite{HeImKu-2013}.
\w{That} paper
also presents several heuristic \w{algorithms} for the solution of this problem.
One of the main steps towards such algorithms is the computation of an
equivalence relation $\mathfrak d_{|S_v}(W)^*$ on the edge-set of a graph. The
complexity \w{of} the computation of $\mathfrak d_{|S_v}(W)^*$ in \cite{HeImKu-2013} is
$O(n\Delta^4)$, where $n$ is the number of vertices, and $\Delta$ the
maximum degree of $G$. Here we improve the recognition complexity of
$\mathfrak d_{|S_v}(W)^*$ to $O(m\Delta)$, where $m$ is the number of edges of
$G$, \w{and thereby improve} the complexity of the just mentioned heuristic
algorithms.
A special case is the computation of the relation $\delta^\ast =
\mathfrak d_{|S_v}(V(G))^*$. This relation defines the so-called quasi Cartesian
product, see Section \ref{sec:twist}. Hence, quasi products can be
recognized in $O(m \Delta)$ time. As the algorithm can easily be
parallelized, it leads to sublinear recognition of quasi Cartesian
products.
When the given graph $G$ is a Cartesian product from which
just one vertex was deleted, things are easier. In that case, the product
is uniquely defined and can be reconstructed in polynomial time from $G$,
see \cite{do-1975} and \cite{haze-1999}. In other words, if $G$ is given,
and if one knows that there is a Cartesian product graph $H$ such that $G =
H\smallsetminus x$, then $H$ is uniquely defined. Hagauer and \v{Z}erovnik
showed that the complexity of finding $H$ is $O(mn(\Delta^2+m))$. The
methods of the present paper will lead to a new algorithm of complexity
$O(m\Delta^2+\Delta^4)$ for the solution of this problem. This is
\w{part of the dissertation} \cite{ku-2013} \w{of the third author}, and will be the topic of a subsequent publication.
Another class of graphs that is closely related to Cartesian products are
Cartesian graph bundles, see Section \ref{sec:twist}. In \cite{impize-1997}
it was \w{proved} that Cartesian graph bundles over a triangle-free base can be
effectively recognized, and in \cite{PiZmZe-2001} it was shown that this
can be done in $O(mn^2)$ time. With the methods of this paper, we expect
that this can be improved to $O(m\Delta)$ time.
This too will be published separately.
\section{Preliminaries}
We consider finite, connected undirected graphs $G=(V,E)$ without loops and
multiple edges. The {\it Cartesian product} $G_1\square G_2$ of graphs
$G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ is a graph with vertex set
$V_1\times V_2$, where the vertices $(u_1,v_1)$ and $(u_2,v_2)$ are
adjacent if $u_1u_2\in E_1$ and $v_1=v_2$, or if
$v_1v_2\in E_2$ and $u_1=u_2$. The Cartesian
product is associative, commutative, and has the one vertex graph $K_1$ as
a unit \cite{haimkl-2011}.
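For concreteness, the definition can be turned into a few lines of Python; graphs are represented here as dictionaries mapping each vertex to the set of its neighbors (an illustrative representation, not the data structure of the algorithms discussed below):
\begin{verbatim}
def cartesian_product(G1, G2):
    """Cartesian product of two graphs given as {vertex: set_of_neighbors}."""
    product = {(u, v): set() for u in G1 for v in G2}
    for (u, v) in product:
        for u2 in G1[u]:                    # edges inherited from G1
            product[(u, v)].add((u2, v))
        for v2 in G2[v]:                    # edges inherited from G2
            product[(u, v)].add((u, v2))
    return product

# Example: the product of two copies of K_2 is the chordless square C_4.
K2 = {0: {1}, 1: {0}}
C4 = cartesian_product(K2, K2)
\end{verbatim}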
By associativity we can write $G_1\square \cdots \square G_k$ for a product
$G$ of graphs $G_1,\ldots,$ $G_k$ and can label the vertices of $G$ by the
set of all $k$-tuples $(v_1, v_2,\dots,v_k)$, where $v_i\in G_i$ for $1\le
i\le k$. One says two edges have the same \emph{Cartesian color} if their
endpoints differ in the same coordinate.
A graph $G$ is {\it prime} if the identity $G=G_1\square G_2$
implies that $G_1$ or $G_2$ is the one-vertex graph $K_1$. A representation
of a graph $G$ as a product $G_1\square G_2\square\dots\square G_k$ of
\mh{non-trivial} prime graphs is called a {\it prime factorization} of $G$.
It is well known
that every connected graph $G$ has a prime factor decomposition with
respect to the Cartesian product, and that this factorization is unique up
to isomorphisms and the order of the factors, see Sabidussi \cite{sa-1960}.
Furthermore, the prime factor decomposition can be computed in linear time,
see \cite{impe-2007}.
Following the notation in \cite{HeImKu-2013}, an induced cycle
on four vertices is called \emph{chordless square}. Let the edges $e=vu$
and $f=vw$ span a chordless square $vuxw$. Then $f$ is the
\emph{opposite} edge of the edge $xu$. The vertex $x$ is called \emph{top vertex}
(w.r.t. the square spanned by $e$ and $f$). A top vertex $x$ is
\emph{unique} if $|N(x) \cap N(v)|= 2$, where $N(u)$ denotes the (open)
$1$-neighborhood of vertex $u$. In other words, a top vertex $x$ is not
unique if there are further squares with top vertex $x$ spanned by the
edges $e$ or $f$ together with a third distinct edge $g$.
\tk{Note that the existence of a unique top vertex $x$ does not imply
that $e$ and $f$
span a unique square, as there might be another square $vuyw$ with a possible
unique top vertex $y$. Thus, $e$ and $f$ span a
unique square $vuxw$ only if $|N(u) \cap N(w)| =2$ holds.}
The \emph{degree}
$\deg(u):=|N(u)|$ of a vertex $u$ is the number of edges that
contain $u$. The maximum degree of a graph is denoted by $\Delta$ and
a path on $n$ vertices by $P_n$.
We now \w{recall the Breadth-First Search (BFS)} ordering of the vertices $v_0,\ldots,v_{n-1}$ of a
graph: select an arbitrary, but
fixed vertex $v_0\in V(G)$, called the {\it root}, and create a sorted list
of vertices. \w{Begin} with $v_0$; append all neighbors
$v_1,\ldots,v_{\deg(v_0)}$ of $v_0$ to the list; then append all neighbors of $v_1$
that are not already in \w{the} list; \w{and} continue recursively with
$v_2,v_3,\ldots$ until all vertices of $G$ are processed.
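
A minimal Python sketch of this ordering (our own illustration; the graph is assumed to be given as a dictionary of neighbor sets):
\begin{verbatim}
from collections import deque

def bfs_order(G, root):
    """Vertices of the connected graph G in BFS order from root."""
    order, seen, queue = [root], {root}, deque([root])
    while queue:
        v = queue.popleft()
        for w in G[v]:
            if w not in seen:          # append neighbors not yet in the list
                seen.add(w)
                order.append(w)
                queue.append(w)
    return order
\end{verbatim}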
\subsection{The Relations $\boldsymbol{\delta}$, $\boldsymbol{\sigma}$ and the Square Property.}
There are two basic relations defined on an edge set of a given graph
that play an important role in the field of Cartesian product recognition.
\begin{defn}
Two edges $e,f\in E(G)$ are in the \emph{relation $\delta_G$},
if one of the following conditions in $G$ is satisfied:
\begin{itemize}
\item[(i)] $e$ and $f$ are adjacent and it is not the case that there is a
unique square spanned by $e$ and $f$, and that this square is
chordless.
\item[(ii)] $e$ and $f$ are opposite edges of a chordless square.
\item[(iii)] $e=f$.
\end{itemize}
\label{def:delta}
\end{defn}
Clearly, this relation is reflexive and symmetric but not
necessarily transitive. We denote its {\it transitive closures}, that is,
the smallest transitive relation containing $\delta_G$, by
$\delta_G^*$. \\
If adjacent edges $e$ and $f$ are not in relation $\delta$, that is,
\w{if} Condition (i) of Definition \ref{def:delta} is not fulfilled, then
they span a unique \w{square, and this square is} chordless. We call such a square
simply a \emph{unique chordless square (spanned by $e$ and $f$)}.\\
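
Condition (i) can be tested locally; the following Python sketch (our own illustration) decides whether two adjacent edges $vu$ and $vw$ span a unique chordless square, i.e.\ whether they are \emph{not} related by Condition (i).
\begin{verbatim}
def spans_unique_chordless_square(G, v, u, w):
    """Decide Condition (i) for the adjacent edges vu and vw.

    G maps every vertex to the set of its neighbors.  Returns True iff
    vu and vw span exactly one square and this square is chordless;
    the two edges satisfy Condition (i) iff this returns False.
    """
    tops = (G[u] & G[w]) - {v}     # top vertices x of squares vuxw
    if len(tops) != 1:
        return False               # no square at all, or not unique
    x = next(iter(tops))
    return w not in G[u] and x not in G[v]   # chordless: no chord uw or vx
\end{verbatim}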
Two edges $e$ and $f$ are in the \emph{product relation} $\sigma_G$ if they have the
same Cartesian colors with respect to the prime factorization of $G$.
The product relation $\sigma_G$ is a uniquely defined
equivalence relation on $E(G)$ that contains all information about the
prime factorization\footnote{For the properties of $\sigma$ that we will
cite or use, we refer the reader to \cite{haimkl-2011} or \cite{imkl-2000}.}.
Furthermore, $\delta_G$ and $\delta_G^\ast$ are contained in $\sigma_G$.
\w{Notice that we may also use the notation $\delta(G)$ for $\delta_G$, respectively $\sigma(G)$ for $\sigma_G$.}
If there is no risk of confusion we \w{may even} write $\delta$, resp., $\sigma$,
instead of $\delta_G$, resp., $\sigma_G$.
We say an equivalence relation $\rho$ defined on the edge set of a graph
$G$ has the {\it square property} if the following three conditions hold:
\begin{itemize}
\item[(a)] For any two edges $e = uv$ and $f = uw$ that belong to
different equivalence classes of $\rho$ there exists a unique
vertex $x \neq u$ of $G$ that is adjacent to $v$ and
$w$.
\item[(b)] The square $uvxw$ is chordless.
\item[(c)] The opposite edges of any chordless square belong to the same equivalence class of $\rho$.
\end{itemize}
From the definition of $\delta$ it easily follows that $\delta$ is a
refinement of any such $\rho$. It also implies that $\delta^\ast$, and thus
also $\sigma$, have the square property. This property is of fundamental
importance, both for the Cartesian and the quasi Cartesian product.
We note in passing that $\sigma$ is the convex hull of
$\delta^\ast$, see \cite{imze-1994}.
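
For small instances the square property can be verified directly. The following Python sketch (our own illustration; an edge coloring is assumed to be given as a map from edges, represented as frozensets, to classes) checks Conditions (a)--(c).
\begin{verbatim}
from itertools import combinations

def has_square_property(G, color):
    """Check conditions (a)-(c) for an edge coloring of G."""
    edge = lambda x, y: frozenset({x, y})
    for u in G:
        for v, w in combinations(G[u], 2):
            tops = (G[v] & G[w]) - {u}
            different = color[edge(u, v)] != color[edge(u, w)]
            if different and len(tops) != 1:
                return False                    # (a) violated
            for x in tops:
                chordless = w not in G[v] and x not in G[u]
                if different and not chordless:
                    return False                # (b) violated
                if chordless and (color[edge(u, v)] != color[edge(w, x)] or
                                  color[edge(u, w)] != color[edge(v, x)]):
                    return False                # (c) violated
    return True
\end{verbatim}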
\subsection{The Partial Star Product}
\label{sec:psp}
This section is concerned with the {\it partial star product}, which plays
a decisive role in the local approach. As it was introduced in
\cite{HeImKu-2013}, we will only \w{define it here,} list some of its most basic properties\w{,}
and refer to \cite{HeImKu-2013} for details.
Let $G=(V,E)$ be a given graph
and $E_v$ the set of all edges incident to some vertex $v \in V$. We
define the local relation $\mathfrak d_v$ as follows: $$\mathfrak d_v = ((E_v\times E) \cup
(E\times E_v)) \cap \delta_G \subseteq \delta(\langle N_2^G[v]\rangle).$$ In other
words, $\mathfrak d_v$ is the subset of $\delta_G$ that contains all pairs $(e,f)\in
\delta_G$, where at least one of the edges $e$ and $f$ is incident to $v$.
\w{Clearly} $\mathfrak d^*_v$, which is not necessarily a subset of
$\delta$, is contained in $\delta^*$, \w{see} \cite{HeImKu-2013}.
Let $S_v$ be a subgraph of $G$ that contains all edges incident to $v$ and
all squares spanned by edges $e, e'\in E_v$ where $e$ and $e'$ are not in
relation $\mathfrak d_v^*$. Then $S_v$ is called a \emph{partial star product} (PSP
for short). To be more precise:
\begin{defn}[Partial Star Product (PSP)]
Let $F_v\subseteq E\setminus E_v$
be the set of edges which are opposite edges of (chordless)
squares spanned by $e,e'\in E_v$ that are in different
$\mathfrak d^*_v$ classes, that is, $(e,e') \not\in \mathfrak d^*_v$.
Then the \emph{partial star product} is the subgraph
$S_v \subseteq G$ with edge set $E'= E_v\cup F_v$ and vertex set $\cup_{e\in E'}
e$. We call $v$ the \emph{center} of $S_v$, $E_v$ the set of \emph{primal
edges}, $F_v$ the set of \emph{non-primal edges}, and the vertices adjacent
to $v$ \emph{primal vertices} of $S_v$.
\label{def:starproduct}
\end{defn}
As shown in \cite{HeImKu-2013}, a partial star product $S_v$ is always an
isometric subgraph or even isomorphic to a Cartesian product graph $H$, where
the factors of $H$ are so-called stars $K_{1,n}$.
These stars can directly be determined by the respective $\mathfrak d_v^*$ classes,
see \cite{HeImKu-2013}.
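
In concrete terms, once the classes of $\mathfrak d^*_v$ on $E_v$ are known, the non-primal edges of $S_v$ can be collected as in the following Python sketch (our own illustration; by the results of \cite{HeImKu-2013}, the square used below is unique and chordless, which the code re-checks only defensively).
\begin{verbatim}
from itertools import combinations

def psp_edges(G, v, primal_class):
    """Primal edges E_v and non-primal edges F_v of the PSP S_v.

    G maps every vertex to the set of its neighbors; primal_class maps
    every neighbor u of v to the (assumed known) class of the primal
    edge vu with respect to d_v^*.
    """
    edge = lambda x, y: frozenset({x, y})
    primal = {edge(v, u) for u in G[v]}
    non_primal = set()
    for u, w in combinations(G[v], 2):
        if primal_class[u] == primal_class[w]:
            continue                 # squares spanned within one class are not used
        tops = (G[u] & G[w]) - {v}
        if len(tops) == 1:           # unique top vertex x of the square vuxw
            x = next(iter(tops))
            if w not in G[u] and x not in G[v]:      # square is chordless
                non_primal.add(edge(u, x))
                non_primal.add(edge(w, x))
    return primal, non_primal
\end{verbatim}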
Now we define a {\it local coloring} of $S_v$ as the restriction of
the relation $\mathfrak d^*_v$ to $S_v$:
$$\mathfrak d_{|S_v}:=\mathfrak d^*_{v|S_v}= \{(e,f) \in \mathfrak d^*_v \mid e,f \in E(S_v)\}.$$
In other words, $\mathfrak d_{|S_v}$ is the subset
of $\mathfrak d^*_v$ that contains all pairs of edges $(e,f)\in \mathfrak d^*_v$ where both
$e$ and $f$ are in $S_v$ and edges obtain the same local color whenever
they are in the same equivalence class of $\mathfrak d_{|S_v}$.
As an example consider the PSP $S_v$ in Figure \ref{fig:agp}.
The relation $\mathfrak d_{|S_v}$ has three equivalence classes
(highlighted by thick, dashed and double-lined edges).
Note, $\delta^*$ just contains one equivalence class. Hence,
$\mathfrak d_{|S_v} \neq \w{\delta(S_v)^*}$.
For a given subset $W\subseteq V$ we set
$$\mathfrak d_{|S_v}(W) = \cup_{v\in W} \mathfrak d_{|S_v}\,.$$ The transitive closure of
$\mathfrak d_{|S_v}(W)$ is then called the {\it global coloring} with respect to
$W$. As shown in \cite{HeImKu-2013}, we have the following theorem.
\begin{thm}
Let $G=(V,E)$ be a given graph and
$\mathfrak d_{|S_v}(V) = \cup_{v\in V} \mathfrak d_{|S_v}$.
Then $$\mathfrak d_{|S_v}(V)^* = \delta(G)^*.$$
\label{thm:union_equals_delta}
\end{thm}
For later reference and for the design of the recognition algorithm
we \w{list} the following three lemmas about \w{relevant} properties of the PSP.
\begin{lem}[\cite{HeImKu-2013}]
\label{lem:PSP1}
Let $G=(V,E)$ be a given graph and $S_v$ be a PSP \w{of} an arbitrary vertex $v\in V$.
If $e, f \in E_v$ are primal edges that are not in relation $\mathfrak d_v^*$, then
$e$ and $f$ span a unique chordless square with a unique top vertex in $G$.
Conversely, suppose that $x$ is a non-primal vertex of $S_v$. Then
there is a unique chordless square in $S_v$ that contains $x$,
and that is spanned by edges $e, f \in E_v$ with $(e,f)\not\in \mathfrak d^*_v$.
\end{lem}
\begin{lem}[\cite{HeImKu-2013}]
Let $G=(V,E)$ be a given graph and $f \in F_v$ be a non-primal edge of a
PSP $S_v$ \w{of} an arbitrary vertex $v\in V$. Then $f$ is opposite to exactly one
primal edge $e\in E_v$ in $S_v$, and $(e,f)\in \mathfrak d_{|S_v}$.
\label{lem:PSP2}
\end{lem}
\begin{lem}[\cite{HeImKu-2013}]
Let $G=(V,E)$ be a given graph and $W\subseteq V$ such that $\langle W \rangle$ is connected.
Then each vertex $x\in W$ meets every equivalence class
of $\mathfrak d_{|S_v}(W)^*$ in $\cup_{v\in W} S_v$.
\label{lem:vMeetsEveryClass}
\end{lem}
\section{Quasi Cartesian Products}
\label{sec:twist}
Given a Cartesian product $G = A \square B$ of two connected, prime graphs
$A$ and $B$\w{,} one can recover the factors $A$ and $B$ as follows: the product
relation $\sigma$ has two equivalence classes, say $E_1$ and $E_2$, and the
connected components of the graph $(V(G), E_1)$ are all isomorphic copies
of the factor $A$, or of the factor $B$, see Figure \ref{fig:cp}. This property
naturally extends to products of more than two prime factors.
We already observed that $\delta$ is finer than any equivalence relation
$\rho$ that satisfies the square property. Hence the equivalence classes of
$\rho$ are unions of $\delta^\ast$-classes. This also holds for $\sigma$.
It is important to \w{keep in mind that} $\sigma$ can \mh{be trivial, that
is,} consist of a single equivalence
class \w{even when $\delta^\ast$ has} more than one equivalence class.
We call all graphs $G$ with a non-trivial equivalence relation $\rho$ that is
defined on $E(G)$ and satisfies the square property \emph{quasi
(Cartesian) products}. Since $\delta^*\subseteq \rho$ for every such
relation $\rho$, it follows that $\delta^*$ must have at least two
equivalence classes for any quasi product. By Theorem
\ref{thm:union_equals_delta} \w{we have} $\mathfrak d_{|S_v}(V(G))^* =\delta^*$.
In other words, quasi products can be defined as graphs where the PSP's of
all vertices are non-trivial, that is, none of the PSP's is a star $K_{1,n}$, and in
addition, where the union over all \w{$\mathfrak d_{|S_v}$ yields} a non-trivial $\delta^*$.
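
Whether $\delta^\ast$ is non-trivial can be checked by counting its classes once the pairs of $\delta$ (or of the union of the local relations) are available; a minimal union--find sketch in Python (our own illustration, independent of the color graph used later for the actual implementation):
\begin{verbatim}
def count_classes(edges, pairs):
    """Number of classes of the transitive closure of a relation.

    edges: iterable of hashable edges; pairs: iterable of related pairs
    (e, f), e.g. the pairs of delta_G.  A graph can only be a quasi
    Cartesian product if more than one class remains.
    """
    parent = {e: e for e in edges}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]      # path compression
            e = parent[e]
        return e

    for e, f in pairs:
        parent[find(e)] = find(f)              # merge the two classes
    return len({find(e) for e in edges})
\end{verbatim}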
Consider the equivalence classes of the relation $\delta^\ast$ of the
graph $G$ of Figure \ref{fig:twist2}.
It has two equivalence classes, and locally looks
like a Cartesian product, but is actually reminiscent of a M\"obius band.
Notice that the graph $G$ in Figure \ref{fig:twist2} is prime with respect to
Cartesian multiplication, although $\delta^*$ has two equivalence classes:
all components of the first class are paths of length $2$, and there are
two components of the other $\delta^\ast$-class, which do not
have the same size. Locally this graph looks
either like $P_3 \square P_3$ or $P_2 \square P_3$.
\begin{figure}[tbp]
\centering
\subfigure[The Cartesian product $G=P_3\Box C_4$.]{
\label{fig:cp}
\includegraphics[bb= 174 483 417 660, width=0.4\textwidth]{fig/usualProd.ps}
}$\qquad\qquad$
\subfigure[A quasi Cartesian product, which is also a graph bundle.]{
\label{fig:twist2}
\includegraphics[bb= 174 483 417 660, width=0.4\textwidth]{fig/twistedP3Pn.ps}
}
\subfigure[A quasi Cartesian product, which is not a graph bundle.]{
\label{fig:twistNotBundle}
\includegraphics[bb=174 455 417 688, width=0.4\textwidth]{fig/twistedNotBundle.ps}
}
$\qquad\qquad$
\subfigure[The approximate product and PSP $S_v$, which is neither a quasi product nor a graph bundle.]{
\label{fig:agp}
\includegraphics[bb= 55 443 298 626, scale=0.6]{./fig/Q3.ps}
}
\caption{Shown are several quasi Cartesian products, graph bundles and approximate products.}
\label{fig:Exmpl1}
\end{figure}
In fact, the graph in Figure \ref{fig:twist2} is a so-called Cartesian
graph bundle \cite{impize-1997}, where \emph{Cartesian graph bundles} are defined as follows:
Let $B$ and $F$ be graphs. A graph $G$ is a (Cartesian) graph bundle with fiber $F$ over the base $B$
if there exists a weak homomorphism\footnote{A weak
homomorphism maps edges into edges or single vertices.} $p : G \rightarrow
B$ such that
\begin{itemize}
\item[(i)] for any $u \in V (B)$, the subgraph (induced by) $p^{-1}(u)$ is isomorphic to $F$, and
\item[(ii)] for any $e \in E(B)$, the subgraph $p^{-1}(e)$ is isomorphic to $K_2 \square F.$
\end{itemize}
The graph of Figure \ref{fig:twistNotBundle} shows that not all quasi
Cartesian products are graph bundles. On the other hand, not every graph
bundle has to be a quasi product. The standard example is
the complete bipartite graph $K_{2,3}$.
It is a graph bundle with base $K_3$ and fiber $K_2$, but has
only one $\delta^\ast$-class.
Note, in \cite{HeImKu-2013} we \w{considered} ``approximate products''
which \w{were} first introduced in \cite{hiks-2008, hellmuth-2011}.
As approximate products are all graphs that have a
\w{(small) edit} distance
to a non-trivial product graph, it is clear that every bundle and quasi product
can be considered as \w{an} approximate product, while the converse is not true.
\w{For example,} consider the graph in Figure \ref{fig:agp}.
Here, $\delta^*$ has only one equivalence class.
However, the relation $\mathfrak d_{|S_v}$ has, in this case,
three equivalence classes (highlighted by thick, dashed and double-lined edges).
Because of the local product-like structure of quasi Cartesian products
we are led to the following conjecture:
\begin{conj}
Quasi Cartesian products can be reconstructed from vertex-deleted subgraphs
in essentially the same time as Cartesian products.
\end{conj}
\section{Recognition Algorithms}
\subsection{Computing the Local and Global Coloring}
For a given graph $G$, let $W\subseteq V(G)$ be an arbitrary subset of the
vertex set of $G$ such that the induced subgraph $\langle W\rangle$ is
connected. Our approach \w{for the computation} is based on the recognition of all PSP's $S_v$ with
$v\in W$, and \w{subsequent} merging of their local colorings. The subroutine computing
local colorings calls the vertices in BFS-order with respect to an
arbitrarily chosen root $v_0\in W$.
Let us \w{now} briefly introduce several additional notions used in the PSP
recognition algorithm. \w{At the start} of every iteration we assign
pairwise different {\it temporary local colors} to the primal edges of
every PSP. These colors are then merged in subroutine processes to compute
\emph{local colors} associated with every PSP. Analogously, we
use {\it temporary global colors} that are initially assigned to every edge
incident with the root $v_0$.
For any vertex $v$ \w{at} distance two from a PSP center $c$ we store
attributes called {\it first and second primal neighbor}\w{, that is,}
references to adjacent primal vertices from which $v$ was ``visited'' (in
pseudo-code attributes are accessed by $v.FirstPrimalNeighbor$ and
$v.SecondPrimalNeighbor$). When $v$ is found to have at least two primal
neighbors we add $v$ to $\mathbb{T}_{c}$\w{,} which is a stack of candidates
for non-primal vertices of $S_{c}$. Finally, we use {\it incidence} and
{\it absence lists} to store recognized squares spanned by primal edges.
Whenever we recognize that two primal edges span a square we put them into
the incidence list. \w{If} we find out that a pair of primal edges
cannot span a unique chordless square with unique top vertex\w{, then} we put it into
the absence list. Note that the
above structures are local and are always associated with a certain PSP
recognition subroutine (Algorithm \ref{alg:PSPrecognition}).
Finally, we will ``map'' local colors to temporary global colors via
temporary vectors, which help us to merge local with global colors.
Algorithm \ref{alg:PSPrecognition} computes a local coloring for given PSP's
and merges it with the global coloring $\mathfrak d_{|S_v}(W)^*$ where $W\subseteq V(G)$ is the set of
treated centers. Algorithm \ref{alg:deltaRecognition} \w{summarizes} the main control structure of the
local approach.
\clearpage
\begin{algo}[PSP recognition]
\small
\label{alg:PSPrecognition}
\begin{tabular}{ll}
\emph{Input:} & Connected graph $G=(V,E)$, PSP center $c\in V$, global coloring $\mathfrak d_{|S}(W)^*$, \\
& where $W\subseteq V$ is the set of treated centers and where the subgraph induced \\ &
by $W\cup c$ is connected.\\
\emph{Output:} & New temporary global coloring $\mathfrak d_{|S}(W\cup c)^*$.
\end{tabular}
\renewcommand{\baselinestretch}{1.1}\small
\begin{enumerate}
\item\label{alg:PSPrecognition0} Initialization.
\item\label{alg:PSPrecognition1} {\texttt{{FOR}}}\ every neighbor $u$ of $c$ \texttt{{DO}}:
\begin{enumerate}
\item\label{alg:PSPrecognition1a} {\texttt{{FOR}}}\ every neighbor $w$ of $u$ (except $c$) \texttt{{DO}}:
\begin{enumerate}
\item\label{alg:PSPrecognition1ai} \texttt{{IF}}\ $w$ is primal w.r.t. $c$ \texttt{{THEN}}\ put pair of primal edges $(cu, cw)$ to absence list.
\item\label{alg:PSPrecognition1aii} \texttt{{ELSE}} \texttt{{IF}}\ $w$ was not visited \texttt{{THEN}}\ set $w.FirstPrimalNeighbor=u$.
\item\label{alg:PSPrecognition1aiii} \texttt{{ELSE}}\ ($w$ is not primal and was already visited) \texttt{{DO}}:
\begin{enumerate}
\item\label{alg:PSPrecognition1aiiiA} \texttt{{IF}}\ only one primal neighbor $v$ $(v\neq u)$ of $w$ was recognized so far, then \texttt{{DO}}:
\begin{itemize}
\item Set $w.SecondPrimalNeighbor=u$.
\item \texttt{{IF}}\ $(cu,cv)$ is not in incidence list\w{,} then add $w$ to the stack $\mathbb{T}_c$ and
add the pair $(cu,cv)$ to incidence list.
\item \texttt{{ELSE}}\ ($cu$ and $cv$ span more squares) add pair $(cu,cv)$ to absence list.
\end{itemize}
\item\label{alg:PSPrecognition1aiiiB} \texttt{{ELSE}}:
\begin{itemize}
\item Add all pairs formed by primal edges $cv_1,cv_2,cu$ to absence list\w{,} where $v_1,v_2$ are first and second primal neighbors of $w$.
\end{itemize}
\end{enumerate}
\end{enumerate}
\end{enumerate}
\item\label{alg:PSPrecognition2} Assign pairwise different temporary local colors to primal edges.
\item\label{alg:PSPrecognition3} {\texttt{{FOR}}}\ any pair $(cu,cv)$ of primal edges $cu$ and $cv$ \texttt{{DO}}:
\begin{enumerate}
\item\label{alg:PSPrecognition3a} \texttt{{IF}}\ $(cu,cv)$ is contained in absence list \texttt{{THEN}}\ merge temporary local colors of $cu$ and $cv$.
\item\label{alg:PSPrecognition3b} \texttt{{IF}}\ $(cu,cv)$ is not contained in incidence list \texttt{{THEN}}\ merge temporary local colors of $cu$ and $cv$.
\end{enumerate}
(The resulting merged temporary local colors determine the local colors of primal edges in $S_c$. We will \w{reference} them in the following steps.)
\item\label{alg:PSPRecognition4} {\texttt{{FOR}}}\ any primal edge $cu$ \texttt{{DO}}:
\begin{enumerate}
\item \texttt{{IF}}\ $cu$ was already assigned some temporary global color $d_1$ \texttt{{THEN}}
\begin{enumerate}
\item\label{alg:PSPRecognition4i} \texttt{{IF}}\ local color $b$ of $cu$ was already mapped to some
temporary global color $d_2$\w{,} where $d_2\neq d_1$\w{,} \texttt{{THEN}}\ merge $d_1$ and $d_2$.
\item\label{alg:PSPRecognition4ii} \texttt{{ELSE}}\ map local color $b$ to $d_1$.
\end{enumerate}
\end{enumerate}
\item\label{alg:PSPRecognition5} {\texttt{{FOR}}}\ any vertex $v$ from stack $\mathbb{T}_c$ \texttt{{DO}}:
\begin{enumerate}
\item Check local colors of primal edges $cw_1$ and $cw_2$ (where $w_1,w_2$ are first and second primal neighbor of $v$, respectively).
\item \texttt{{IF}}\ they differ in local colors \texttt{{THEN}}
\begin{enumerate}
\item \texttt{{IF}}\ a temporary global color $d_1$ was already defined for $vw_1$ \texttt{{THEN}}
\begin{enumerate}
\item \texttt{{IF}}\ local color $b$ of $cw_2$ was already mapped to some temporary global color $d_2$\w{,} where $d_2\neq d_1$ \texttt{{THEN}}\ merge $d_1$ and $d_2$.
\item \texttt{{ELSE}}\ map local color $b$ to $d_1$.
\end{enumerate}
\item \texttt{{IF}}\ a temporary global color $d_1$ was already defined for $vw_2$ \texttt{{THEN}}:
\begin{enumerate}
\item \texttt{{IF}}\ local color $b$ of $cw_1$ was already mapped to some temporary global color $d_2$\w{,} where $d_2\neq d_1$
\texttt{{THEN}}\ merge $d_1$ and $d_2$.
\item \texttt{{ELSE}}\ map local color $b$ to $d_1$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\item\label{alg:PSPRecognition6} Take every edge $e$ of the PSP $S_c$
that was not colored by any temporary
global color up to now and assign to it the color
$d$, where $d$ is the temporary global
color to which the local color of $e$ or the local color of its opposite primal edge $e'$
was mapped.
\\ (If there is a local color $b$ that was not mapped to any temporary global
color\w{, then} we create \w{a} new temporary global color and assign it to all edges of color $b$).
\end{enumerate}
\renewcommand{\baselinestretch}{1.}\normalsize
\end{algo}
\normalsize
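
To make the control flow of Algorithm \ref{alg:PSPrecognition} more tangible, we add a condensed Python sketch of Steps \ref{alg:PSPrecognition1}--\ref{alg:PSPrecognition3} for a single center $c$ (our own illustration; the stack, the attribute bookkeeping and the global-color Steps \ref{alg:PSPRecognition4}--\ref{alg:PSPRecognition6} are omitted, and a plain union--find replaces the local color graph).
\begin{verbatim}
from itertools import combinations

def psp_local_classes(G, c):
    """Classes of primal edges of S_c (Steps 1-3, condensed).

    G maps every vertex to the set of its neighbors.  Primal edges cu
    are represented by the neighbor u of c.
    """
    pair = lambda a, b: frozenset({a, b})
    primal = G[c]
    incidence, absence = set(), set()
    first, second = {}, {}                 # first/second primal neighbor
    for u in primal:                       # Step 1
        for w in G[u] - {c}:
            if w in primal:                # Step 1(a)i: u and w are adjacent
                absence.add(pair(u, w))
            elif w not in first:           # Step 1(a)ii: first visit of w
                first[w] = u
            elif w not in second:          # Step 1(a)iii A
                v = first[w]
                second[w] = u
                if pair(u, v) not in incidence:
                    incidence.add(pair(u, v))
                else:                      # a second square on cu, cv
                    absence.add(pair(u, v))
            else:                          # Step 1(a)iii B
                v1, v2 = first[w], second[w]
                absence |= {pair(v1, v2), pair(v1, u), pair(v2, u)}
    # Steps 2-3: merge temporary local colors (union-find instead of the
    # local color graph)
    parent = {u: u for u in primal}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for u, w in combinations(primal, 2):
        if pair(u, w) in absence or pair(u, w) not in incidence:
            parent[find(u)] = find(w)
    classes = {}
    for u in primal:
        classes.setdefault(find(u), set()).add(u)
    return list(classes.values())
\end{verbatim}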
\bigskip
\begin{algo}[Computation of $\mathfrak d_{|S_v}(W)^*$]
\small
\label{alg:deltaRecognition}
\begin{tabular}{ll}
\emph{Input:} & A connected graph $G$, \mh{$W\subseteq V(G)$ s.t. the induced subgraph $\langle W\rangle$ is connected} \\
& and an arbitrary vertex $v_0\in W$. \hfill\\
\emph{Output:} & Relation $\mathfrak d_{|S_v}(W)^*$. \hfill
\end{tabular}
\begin{enumerate}
\item\label{alg:delta0} Initialization.
\item\label{alg:delta1} Set sequence $Q$ of vertices $v_0,v_1,\dots,v_n$ that form $W$ in BFS-order with respect to $v_0$.
\item\label{alg:delta2} Set $W':=\emptyset$.
\item\label{alg:delta3} Assign pairwise different temporary global colors to edges incident to $v_0$.
\item\label{alg:delta4} {\texttt{{FOR}}}\ any vertex $v_i$ from sequence $Q$ \texttt{{DO}}:
\begin{enumerate}
\item\label{alg:delta4a} Use Algorithm \ref{alg:PSPrecognition} to compute $\mathfrak d_{|S_v}(W'\cup v_i)^*$.
\item\label{alg:delta4b} Add $v_i$ to $W'$.
\end{enumerate}
\end{enumerate}
\end{algo}
\normalsize
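
For testing the output of Algorithm \ref{alg:deltaRecognition} on small graphs it is convenient to have a direct, non-optimized reference computation of $\delta_G^*$; the following Python sketch (our own illustration, again with a plain union--find instead of the color graph) merges edges along Conditions (i) and (ii) of Definition \ref{def:delta}. The graph is a candidate quasi Cartesian product only if more than one class is returned.
\begin{verbatim}
from itertools import combinations

def delta_star_classes(G):
    """Classes of delta^* of a connected graph G (reference computation).

    G maps every vertex to the set of its neighbors.
    """
    edge = lambda x, y: frozenset({x, y})
    all_edges = {edge(u, v) for u in G for v in G[u]}
    parent = {e: e for e in all_edges}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e

    def union(e, f):
        parent[find(e)] = find(f)

    for v in G:
        for u, w in combinations(G[v], 2):
            tops = (G[u] & G[w]) - {v}
            chordless = [x for x in tops
                         if w not in G[u] and x not in G[v]]
            for x in chordless:                 # Condition (ii): opposite edges
                union(edge(v, u), edge(w, x))
                union(edge(v, w), edge(u, x))
            if not (len(tops) == 1 and len(chordless) == 1):
                union(edge(v, u), edge(v, w))   # Condition (i)
    classes = {}
    for e in all_edges:
        classes.setdefault(find(e), set()).add(e)
    return list(classes.values())
\end{verbatim}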
In order to show that Algorithm \ref{alg:PSPrecognition} \w{correctly} recognizes
the local coloring, we define the (temporary) relations $\alpha_c$ and
$\beta_c$ for a \w{chosen} vertex $c$:
Two \emph{primal} edges of $S_c$ are
\begin{itemize}
\item in \emph{relation $\alpha_c$} if they are
contained in the incidence list and
\item in \emph{relation $\beta_c$} if they are contained in the absence list
\end{itemize}
after Algorithm \ref{alg:PSPrecognition} is
\w{executed} for $c$.
Note, we denote by $\overline\alpha_c$
the complement of $\alpha_c$\w{,} which contains all pairs of primal edges of
the PSP $S_c$ that are not listed in the incidence list.
\begin{lem}\label{lem:alpha_1new}
Let $e$ and $f$ be two primal edges of the PSP $S_c$.
If $e$ and $f$ span a square with some non-primal vertex $w$ as
unique top-vertex, then $(e,f) \in \alpha_c$.
\end{lem}
\begin{proof}
Let $e=cu_1$ and $f=cu_2$ be primal edges in $S_c$ that span a square
$cu_1wu_2$ with unique top-vertex $w$, where $w$ is non-primal.
Note, since $w$ is the unique top vertex, the vertices $u_1$ and $u_2$ are its only primal neighbors.
W.l.o.g. assume that for vertex $w$ no first primal neighbor was assigned
and let first $u_1$ and then $u_2$ be visited. In Step
\ref{alg:PSPrecognition1a} vertex $w$ is recognized and the first primal
neighbor $u_1$ is determined in Step \ref{alg:PSPrecognition1aii}. Take
the next vertex $u_2$. Since $w$ is not primal and was already visited, we
are in Step \ref{alg:PSPrecognition1aiii}. Since only one primal neighbor
of $w$ was recognized so far, we go to Step \ref{alg:PSPrecognition1aiiiA}.
If $(cu_1,cu_2)$ is not already contained in the incidence list, it
will be added now and thus, $(cu_1,cu_2)\in \alpha_c$.
\end{proof}
\begin{cor}
\label{cor:alpha_2}
Let $e$ and $f$ be two adjacent distinct primal edges of the PSP $S_c$.
If $(e,f)\in \overline\alpha_c$, then $e$ and $f$ do not span a square or
span a square with non-unique \tk{or primal} top vertex.
In particular, $\overline\alpha_c$ contains all pairs $(e,f)$
that do not span any square.
\end{cor}
\begin{proof}
The first statement is just the contrapositive of the statement in Lemma \ref{lem:alpha_1new}.
For the second statement observe that
if $e=cx$ and $f=cy$ are two distinct primal edges of $S_c$ that
do not span a square, then the vertices $x$ and $y$ do not have a common
non-primal neighbor $w$. It is now easy to verify that in
none of the substeps of Step \ref{alg:PSPrecognition1}
the pair $(e,f)$ is added to the incidence list,
and thus, $(e,f)\in \overline\alpha_c$.
\end{proof}
\begin{lem}\label{lem:beta_1}
Let $e$ and $f$ be two primal edges of the PSP $S_c$ that are in relation
$\beta_c$. Then $e$ and $f$ do not span a unique chordless square with unique
top vertex.
\end{lem}
\begin{proof}
Let $e=cu_1$ and $f=cu_2$ be primal edges of $S_c$. Then the pair $(e,f)$
is inserted into the absence list in:
\begin{itemize}
\item[a)] Step \ref{alg:PSPrecognition1ai}, when $u_1$ and $u_2$ are
adjacent. Then \w{no} square spanned by $e$ and $f$ can be
chordless.
\item[b)] Step \ref{alg:PSPrecognition1aiiiA} (\texttt{{ELSE}}-condition),
when $(e,f)$ is already
listed in the incidence list and another square spanned by $e$ and
$f$ is recognized. Thus, $e$ and $f$ do not span a unique square.
\item[c)] Step \ref{alg:PSPrecognition1aiiiB}, when $e$ and $f$ span a square
with top vertex $w$ that has more than two primal neighbors and
at least one of the primal vertices $u_1$ and $u_2$ are recognized as
first or second primal neighbor of $w$. Thus $e$ and $f$ span \w{a}
square with non-unique top vertex.
\end{itemize}
\end{proof}
\begin{lem}\label{lem:beta_2}
Relation $\beta^*_c$ contains all pairs of primal edges $(e,f)$ of $S_c$
that satisfy at least one of the following conditions:
\begin{itemize}
\item[a)] $e$ and $f$ span \w{a} square with \w{a} chord.
\item[b)] $e$ and $f$ span \w{a} square with non-unique top vertex.
\item[c)] $e$ and $f$ span more than one square.
\end{itemize}
\end{lem}
\begin{proof}
Let $e=cu_1$ and $f=cu_2$ be primal edges of \w{the PSP} $S_c$.
\begin{itemize}
\item[a)] If $e$ and $f$ span \w{a} square with \w{a} chord, then $u_1$ and $u_2$ are
adjacent or the top vertex $w$ of the spanned square is primal and thus,
there is a primal edge $g=cw$. In the first case, we can conclude
analogously as in the proof of Lemma
\ref{lem:beta_1} that $(e,f)\in \beta_c$.
In \w{the} second case, we \w{analogously} obtain $(e,g), (f,g) \in \beta_c$
and therefore, $(e,f)\in\beta_c^*$.
\item[b)] Let $e$ and $f$ span a square with non-unique top vertex $w$.
If at
least one of the primal vertices $u_1,u_2$ is a first or second
neighbor of $w$ then $e$ and $f$ are listed in the absence
list, as shown in the proof of Lemma \ref{lem:beta_1}. If
$u_1$ and $u_2$ are neither first nor second primal neighbor\w{s} of
$w$\w{,} then both edges $e$ and $f$ will be added to the absence list in Step
\ref{alg:PSPrecognition1aiiiB}\w{,} together with the primal edge $g=cu_3$, where
$u_3$ is the first recognized primal
neighbor of $w$. In other words, $(e,g),(f,g)\in\beta_c$ and hence,
$(e,f)\in\beta_c^*$.
\item[c)] Let $e$ and $f$ span two squares with top vertices $w$ and $w'$,
respectively, and assume w.l.o.g. that $w$ is visited first and then
$w'$.
If both vertices $u_1$ and $u_2$ are recognized as first and
second primal neighbor\w{s} of $w$ and $w'$, then $(cu_1,cu_2)$ is added to
the incidence list when visiting $w$ in Step
\ref{alg:PSPrecognition1aiiiA}. However, when we visit $w'$\w{,} then
we insert $(cu_1,cu_2)$ to the absence list in Step
\ref{alg:PSPrecognition1aiiiA}, \w{because} this pair is
already included in the incidence list.
Thus, $(e,f)\in \beta_c$.
If at least one of the vertices
$w,w'$ does not have $u_1$ and $u_2$ as first or second
primal neighbor\w{,} then $e$ and $f$ must span a square with non-unique
top vertex. Item b) implies that $(e,f)\in\beta_c^*$.
\end{itemize}
\end{proof}
\begin{lem}\label{lem:beta_3}
Let $f$ be a non-primal edge and $e_1,e_2$ be two distinct primal edges of $S_c$.
Let $(e_1,f),(e_2,f)\in \mathfrak d_c$. Then $(e_1,e_2)\in\beta^*_c$.
\end{lem}
\begin{proof}
Since the edge $f$ is non-primal, $f$ is not incident with the center $c$.
Recall, by the definition of $\mathfrak d_c$, two distinct edges can be in relation $\mathfrak d_c$ only if they have \w{a} common vertex
or are opposite edges in a square. To prove our lemma we need to investigate
\w{the} three following cases, which are also \w{illustrated} in Figure \ref{fig:proof_beta1}:
\begin{itemize}
\item[a)] Suppose both edges $e_1$ and $e_2$ \w{are incident} with $f$.
Then $e_1$ and $e_2$ span \w{a} triangle and consequently $(e_1,e_2)$ will be
added to the absence list in Step \ref{alg:PSPrecognition1ai}.
\item[b)] Let $e_1$ and $e_2$ be opposite to $f$ in some squares. There are
two possible cases (see Figure \ref{fig:proof_beta1} b)). In the
first case $e_1$ and $e_2$ span a square with non-unique top
vertex. By Lemma \ref{lem:beta_2}, $(e_1,e_2)\in\beta^*_c$. In the
second case $e_1$ and $e_2$ span triangles with other primal
edges $e_3$ and $e_4$. As in \w{Case} a) of this proof, we have $(e_1,e_3)\in\beta_c$,
$(e_3,e_4)\in\beta_c$, $(e_4,e_2)\in\beta_c$ and consequently,
$(e_1,e_2)\in\beta^*_c$.
\item[c)] Suppose only $e_1$ has a common vertex with $f$
and $e_2$ is opposite to
$f$ in a square. Again we need to consider two cases (see
Figure \ref{fig:proof_beta1} c)).
Since $e_1$ and $f$ are adjacent and $(e_1,f)\in \mathfrak d_c$,
we can conclude that either no square is spanned by $e_1$ and $f$\w{,}
or \w{that} the square spanned by $e_1$ and $f$
is not chordless or not unique.
It is easy to see that in the first case the edges $e_1$ and $e_2$
are contained in a common triangle and thus will be added to the absence list
in Step \ref{alg:PSPrecognition1ai}.
In the second case $e_1,e_2$ span a
square which has
a chord or \tk{has a non-unique top vertex}. In both cases
Lemma \ref{lem:beta_2} implies that $e_1$ and $e_2$ are in relation
$\beta^*_c$.
\end{itemize}
\end{proof}
\begin{figure}[tbp]
\centering
\includegraphics[bb= 128 286 472 501, scale=.8]{./fig/proofLemma.ps}
\caption{The three \w{possible} cases a), b) and c) that are investigated in the
Proof of Lemma \ref{lem:beta_3}.}\label{fig:proof_beta1}
\end{figure}
\begin{lem}
Let $e$ and $f$ be distinct primal edges of the PSP $S_c$.
Then $(e,f)\in (\overline\alpha_c\cup\beta_c)^*$ if and only if $(e,f)\in\mathfrak d_c^*$.
\label{lem:primalIFF}
\end{lem}
\begin{proof}
Assume first that $(e,f)\in \overline\alpha_c\cup\beta_c$.
By Corollary \ref{cor:alpha_2}, if $(e,f)\in \overline\alpha_c$\w{,} then $e$ and $f$
do not span a common square\w{,} or span a square with non-unique \tk{or primal} top vertex. In the first case, $e$ and $f$ are in relation $\delta_G$ and consequently also in
relation $\mathfrak d_c$. On the other hand, if $e$ and $f$ span a square with
non-unique top vertex then\w{,} by Lemma \ref{lem:PSP1}, $e$ and $f$ are in
relation $\mathfrak d_c^*$ as well. \tk{Finally, if $e$ and $f$ span a square with primal
top vertex $w$\w{,} then this square has a chord $cw$ and $(e,f)\in\mathfrak d_c^*$}.
If $(e,f)\in\beta_c$\w{,} then Lemma \ref{lem:beta_1} implies that $e$ and $f$ do not
span a unique chordless square with a unique top vertex. Again, by Lemma \ref{lem:PSP1}\w{,}
we \w{infer that} $(e,f)\in\mathfrak d_c^*$. Hence, $\overline\alpha_c\cup\beta_c \subseteq \mathfrak d_c^*$\w{,}
and consequently, $(\overline\alpha_c\cup\beta_c)^*\subseteq \mathfrak d_c^*$.
Now, let $(e,f)\in\mathfrak d_c^*$. Then there is a sequence
$U=(e=e_1,e_2,\dots,e_k=f)$, $k\geq2$\w{,} with $(e_i,e_{i+1})\in\mathfrak d_c$ for
$i=1,2,\dots,k-1$. By definition of $\mathfrak d_c$, two primal edges
are in relation $\mathfrak d_c$ if and only if they do not span a unique and chordless
square. Corollary \ref{cor:alpha_2} and Lemma \ref{lem:beta_2} imply that
all these pairs are contained in $(\overline\alpha_c\cup\beta_c)^*$. Hence, any
two consecutive primal edges $e_i$ and $e_{i+1}$ contained in the sequence
$U$ are in relation $(\overline\alpha_c\cup\beta_c)^*$. Assume \w{that} there is an edge
$e_i \in U$ \w{that} is not incident to \w{the} center $c$ and thus,
non-primal. By \w{the} definition of $\mathfrak d_c$\w{,} and since $(e_{i-1},e_i),(e_i,e_{i+1})\in\mathfrak d_c$, we can
conclude that the edges $e_{i-1}$ and $e_{i+1}$ must be primal in $S_c$.
Lemma \ref{lem:beta_3} implies that $e_{i-1}$ and $e_{i+1}$ must be in
relation $\beta^*_c$. By removing all edges from $U$ that are not incident
with $c$ we \w{obtain} a sequence $U'=(e=e_1,e'_2,\dots,e'_j=f)$ of primal edges. By
analogous arguments as before, all pairs $(e'_i,e'_{i+1})$ of $U'$ must be contained
in $(\overline\alpha_c\cup\beta_c)^*$. By transitivity,
$e$ and $f$ are also in $( \overline\alpha_c\cup\beta_c)^*$.
\end{proof}
\begin{cor}\label{cor:localColoringDet}
Let $e$ and $f$ be primal edges of \w{the} PSP $S_c$. Then $(e,f)\in (
\overline\alpha_c\cup\beta_c)^*$ if and only if $e$ and $f$ have the same local
color in $S_c$.
\end{cor}
\begin{proof}
This is an immediate consequence of Lemma \ref{lem:primalIFF},
the local color assignment\w{,} and the merging procedure
(Step \ref{alg:PSPrecognition2} and \ref{alg:PSPrecognition3}) in
Algorithm \ref{alg:PSPrecognition}.
\end{proof}
\begin{lem}\label{lem:PSP_recognition}
Let $\mathfrak d_{|S_v}(W)^*$ be a global coloring associated with a set of treated
centers $W$ and assume that the induced subgraph $\langle W \rangle$ is connected.
Let $c$ be a vertex that is not contained in $W$ but \w{adjacent to a vertex in $W$}.
Then Algorithm \ref{alg:PSPrecognition} computes the global coloring
$\mathfrak d_{|S_v}(W\cup c)^*$ by taking $W$ and $c$ as input.
\end{lem}
\begin{proof}
Let $W\subseteq V(G)$ be a set of PSP centers and let $c\in V(G)$ be a
given center of PSP $S_c$ where $c\not\in W$ \tk{and $\langle W\cup c\rangle$ is connected}.
In Step \ref{alg:PSPrecognition1} of Algorithm \ref{alg:PSPrecognition}
we compute the absence and incidence lists. In Step \ref{alg:PSPrecognition2},
we assign pairwise different temporary local colors to every primal edge incident
with $c$. Two
temporary local colors $b_1$ and $b_2$ are then merged in Step
\ref{alg:PSPrecognition3} if and only if there exists some pair of primal
edges $(e_1,e_2)\in(\overline\alpha_c\cup\beta_c)$ where
$e_1$ is colored with $b_1$ and $e_2$ with $b_2$. \w{Therefore,}
\w{merged}
temporary local colors reflect equivalence classes
of $(\overline\alpha_c\cup\beta_c)^*$ containing the primal edges incident to $c$.
By Corollary \ref{cor:localColoringDet}, $(\overline\alpha_c\cup\beta_c)^*$
classes indeed determine the local colors of primal edges
in $S_c$.
Note, if one knows the colors of primal edges incident
to $c$, \w{then} it is very easy to determine the set of
non-primal edges of $S_c$, as any two primal edges of different
equivalence classes span a unique and chordless square.
In Step \ref{alg:PSPRecognition5}, we investigate each vertex $v$ from
stack $\mathbb{T}_c$ and check the local colors of primal edges $cw_1$ and $cw_2$\w{,}
where $w_1$ and $w_2$ are the first and second recognized primal neighbors of
$v$, respectively. If $cw_1$ and $cw_2$ differ in \w{their} local colors\w{,}
then $vw_1$ and $vw_2$ are non-primal edges of $S_c$\w{, as} follows from the PSP
construction. Recall that the stack contains all vertices that are at distance
two from center $c$ and which are adjacent to at least two primal vertices. In
other words, the stack contains all \tk{non-primal} top vertices of all squares spanned by
primal edges. Consequently, we claim that all non-primal edges of the PSP $S_c$
are treated in Step \ref{alg:PSPRecognition5}. Note that non-primal edges have
the same local color as \w{their} opposite primal edge, which is unique by Lemma
\ref{lem:PSP2}.
As we already argued, after Step \ref{alg:PSPrecognition3} is performed we
know\w{,} or can \w{at least} easily determine\w{,} all edges of $S_c$ and their local colors.
Recall that local colors define the local coloring $\mathfrak d_{|S_c}$. Suppose that
temporary global colors that correspond to the global coloring
$\mathfrak d_{|S_v}(W)^*$ are assigned.
Our goal is to modify and identify temporary global colors such that
they will correspond to the global coloring $\mathfrak d_{|S_v}(W\cup c)^*$. Let
$B_1,B_2,\dots,B_k$ be \w{the} classes of $\mathfrak d_{|S_c}$ (\emph{local classes}) and
$D_1,D_2,\dots,D_l$ be \w{the} classes of $\mathfrak d_{|S_v}(W)^*$
(\emph{global classes}).
When a local class $B_i$ and a
global class $D_j$ have a nonempty
intersection\w{, then} we can infer that all their edges must be contained in a
common class of $\mathfrak d_{|S_v}(W\cup c)^*$.
Note, by means of Lemma \ref{lem:vMeetsEveryClass},
we can conclude that for each local class $B_i$ there is a global class $D_j$ such
that $B_i\cap D_j\neq \emptyset$, see also \cite{HeImKu-2013}.
In that case we need to guarantee that edges of
$B_i$ and $D_j$ will be colored by the same temporary global color. Note,
in the beginning of the iteration two edges have the same temporary global
color if and only if they lie in a common global class.
In Step \ref{alg:PSPRecognition4} and Step \ref{alg:PSPRecognition5}, we
investigate all primal and non-primal edges of $S_c$. When we treat the first
edge $e$ that is colored by some local color $b_i$, that is, $e\in B_i$,
and that has already \w{been} assigned some temporary global color
$d_j$, and \w{therefore} $e\in D_j$, then we map $b_i$
to $d_j$. \w{Thus}, we keep the information that $e\in B_i\cap D_j$.
In Step \ref{alg:PSPRecognition6}, we then assign temporary global color $d_j$
to any edge of $S_c$ that is colored by the local color $b_i$. If
the local color $b_i$ is already mapped to some temporary global color
$d_j$\w{,} and \w{if} we find another edge of $S_c$ that is colored by $b_i$ and
simultaneously has \w{been} assigned some different temporary global color $d_{j'}$\w{,}
then we merge $d_j$ and $d_{j'}$ in Step \ref{alg:PSPRecognition4i}.
Obviously this is correct, since $B_i\cap D_j\neq\emptyset$ and $B_i\cap
D_{j'}\neq\emptyset$\w{,} and hence $D_j,D_{j'}$ and $B_i$ must be
contained in a common equivalence class of $\mathfrak d_{|S_v}(W\cup c)^*$.
\w{Recall,}
for each local class $B_i$ there is a global class $D_j$ such that $B_i\cap
D_j\neq \emptyset$. \w{This} means that every local color is mapped to some
global color\w{,} and \w{consequently there is no}
need to create \w{a} new temporary global color
in Step \ref{alg:PSPRecognition6}.
Therefore, whenever local and global classes \w{share an} edge\w{, then}
all their edges will have the same temporary global color at the end of Step
\ref{alg:PSPRecognition6}. On the other hand\w{,} when edges of two different
global classes are colored by the same temporary global color\w{,} then both
global classes must be contained in a common class of
$\mathfrak d_{|S_v}(W\cup c)^*$.
Hence, after \w{performing} Step \ref{alg:PSPRecognition6},
the merged temporary global colors determine \w{the equivalence} classes of $\mathfrak d_{|S_v}(W\cup c)^*$.
\end{proof}
\begin{lem}
Let $G$ be a connected graph, \mh{ $W\subseteq V(G)$ s.t. $\langle W \rangle$ is connected}, and
$v_0$ an arbitrary vertex of $G$. Then
Algorithm \ref{alg:deltaRecognition} computes the global coloring
\mh{$\mathfrak d_{|S_v}(W)^*$} by taking $G$, $W$ and $v_0$ as input.
\end{lem}
\begin{proof}
In Step \ref{alg:delta1} we define the BFS-order in which
\w{the vertices}
will be processed and store this sequence in $Q$. In Step \ref{alg:delta3} we assign pairwise different
temporary global colors to all edges that are incident with $v_0$. In Step
\ref{alg:delta4} we iterate over all vertices of \mh{the given induced connected
subgraph $\langle W\rangle$ of $G$}. For
every vertex we execute Algorithm \ref{alg:PSPrecognition}. Lemma
\ref{lem:PSP_recognition} implies that in the first
iteration, we correctly compute the local colors for $S_{v_0}$\w{,} and consequently also
\mh{$\mathfrak d_{|S_v}(\{v_0\})^*$}. Obviously, whenever we merge two
temporary local colors of two primal edges in the first iteration\w{, then} we also merge
their temporary global colors.
Consequently, the resulting temporary global colors correspond to
the global coloring \mh{$\mathfrak d_{|S_v}(\{v_0\})^*$} after the first iteration. Lemma
\ref{lem:PSP_recognition} implies that after all iterations are performed, \mh{that is,
all vertices in $Q$ are processed,}
the resulting temporary global colors correspond to \mh{$\mathfrak d_{|S_v}(W)^*$ for the given
input set $W\subseteq V(G)$}.
\end{proof}
\w{For the global coloring, Theorem \ref{thm:union_equals_delta} implies that
$\mathfrak d_{|S_v}(V(G))^* = \delta_G^*$. This leads to the following corollary.}
\begin{cor}
Let $G$ be a \w{connected} graph and $v_0$ an arbitrary vertex of $G$. Then
Algorithm \ref{alg:deltaRecognition} computes the global coloring
$\delta_G^*$ by taking $G$, \mh{$V(G)$} and $v_0$ as input.
\label{cor:deltastar}
\end{cor}
\subsection{Time Complexity}
We begin with the complexity of merging colors. We have global and local
colors, and will define \emph{local} and \emph{global color graphs}. Both
graphs are acyclic temporary structures. Their vertex sets are the sets of
temporary colors in the initial state. In this state the color graphs have
no edges. Every component is a single vertex and corresponds to an initial
temporary color. Recall that we color edges of graphs, for example the
edges of $G$ or $S_v$. The color of an edge is indicated by a pointer to a
vertex of the color graph. These pointers are not changed, but the colors
will correspond to the components of the color graph. When two colors are
merged, this is reflected by adding an edge between their respective
components.
The color graph is represented by an adjacency list as described in
\cite[Chapter 17.2]{haimkl-2011} or \cite[pp.~34--37]{imkl-2000}.
Thus, working with the color graph needs $O(k)$ space
when $k$ colors are used.
Furthermore, for every vertex of the color graph we keep an index of the
connected component in which the vertex is contained. We also store the
actual size of every component, that is, the number of vertices of this
component.
Suppose we wish to merge temporary colors of edges $e$ and $f$ that
are identified with vertices $a$, respectively $b$, in the color graph.
We first check whether $a$ and $b$ are contained in the same connected
component by comparing component indices. If the component indices are the
same, then $e$ and $f$ already have the same color, and no action is
necessary. Otherwise we insert an edge between $a$ and $b$ in the color
graph. As this merges the components of $a$ and $b$ we have to update
component indices and the size. The size is updated in constant time. For
the component index we use the index of the larger component. Thus, no
index change is necessary for the larger component, but we have to assign
the new index to all vertices of the smaller component.
Notice that the color graph remains acyclic, as we only add edges between different components.
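
The merging procedure just described can be summarized as follows (a Python sketch; the class and field names are ours).
\begin{verbatim}
class ColorGraph:
    """Color graph with merging by component size.

    Vertices 0..k-1 are the initial temporary colors; comp[a] is the
    index of the component containing a, size[i] its number of
    vertices, and members[i] the list of its vertices.
    """

    def __init__(self, k):
        self.adj = [[] for _ in range(k)]       # adjacency lists
        self.comp = list(range(k))              # component index per vertex
        self.members = [[a] for a in range(k)]
        self.size = [1] * k

    def merge(self, a, b):
        ca, cb = self.comp[a], self.comp[b]
        if ca == cb:
            return                              # already the same color
        self.adj[a].append(b)                   # add the edge ab
        self.adj[b].append(a)
        if self.size[ca] < self.size[cb]:       # relabel the smaller component
            ca, cb = cb, ca
        for v in self.members[cb]:
            self.comp[v] = ca
        self.members[ca].extend(self.members[cb])
        self.size[ca] += self.size[cb]
        self.members[cb], self.size[cb] = [], 0
\end{verbatim}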
\begin{lem}
Let $G_0 =(V,E)$ be a graph with $V=\{v_1,\dots,v_k\}$ and $E=\emptyset$.
The components of $G_0$ consist of single vertices. We assign component index
$j$ to every component $\{v_j\}$.
For $i=1, \dots , k-1$ let $G_{i+1}$ denote the graph that
results from $G_{i}$ by adding an edge between two distinct
connected components, say $C$ and $C'$. If $|C| \leq |C'|$, we use
the component index of $C'$ for the new component and assign it
to every vertex of $C$.
Then every $G_i$ is acyclic, and the total cost of merging colors is $O(k \log_2 k)$.
\label{lem:log2}
\end{lem}
\begin{proof}
Acyclicity is true by construction.
A vertex is assigned a new component index when its component is merged
with a larger one. Thus, the size of the component at least doubles at
every such step. Because the maximum size of a component is bounded by $k$,
there can be at most $\log_2 k$ reassignments of the component index for
every vertex. As there are $k$ vertices, this means that the total cost of
merging colors is $O(k \log_2 k)$.
\end{proof}
The color graph is used to identify \tk{temporary} local, resp., global colors. Based on this,
we now define the \emph{local} and \emph{global color graph}.
Assigned labels of the vertices of the global color graph are stored in the edge
list, where any edge is identified with at most one such label.
A graph is represented
by an extended adjacency list, where for any vertex and its neighbor
a reference to the edge (in the edge list) that connects them is stored. This
reference allows us to access a temporary global color from the adjacency list in
constant time.
In every iteration of Algorithm \ref{alg:deltaRecognition}, we recognize the PSP
for one vertex by calling Algorithm \ref{alg:PSPrecognition}.
\w{In the following paragraph we introduce several temporary
attributes and matrices that are used in the algorithm.}
Suppose we execute \w{an} iteration that recognizes some PSP $S_c$.
To indicate whether a vertex was treated in this iteration we \w{introduce the} attribute $visited$,
that is, when vertex $v$ is visited in this iteration we set $v.visited=c$.
Any value different from $c$ means
that vertex $v$ was not yet treated in this iteration. Analogously, we
\w{introduce} the attribute
$primal$ to indicate that a vertex is adjacent to the current center $c$. The attribute
$tempLabel$ maps primal vertices to the indices of rows and columns of the
matrices $incidenceList$ and $absenceList$. For any vertex $v$ that is at
distance two from the center $c$ we store its first and second primal neighbor
$w_1$ and $w_2$ in the attributes $FirstPrimalNeighbor$ and
$SecondPrimalNeighbor$. Furthermore, we need to keep the position of $vw_1$ and
$vw_2$ in the edge list to get their temporary global colors. For this
purpose, we use attributes $firstEdge$ and $secondEdge$. \tk{Attribute}
$mapLocalColor$ helps us to map temporary local colors to the vertices of
the global color graph. Any vertex that is at distance two from the center and
has at least two primal neighbors is a candidate for a non-primal vertex. We
insert such vertices into the $stack$. The temporary structures help to access the required
information in constant time:
\begin{itemize}
\item $v.visited=c$ \\ vertex $v$ has been already visited in the current iteration.
\item $v.primal=c$ \\ vertex $v$ is adjacent to center $c$.
\item $incidenceList[v.tempLabel,u.tempLabel]=0$ \\ pair of primal edges $(cv,cu)$ is missing in the incidence list.
\item $absenceList[v.tempLabel,u.tempLabel]=1$ \\ pair of primal edges $(cv,cu)$ was inserted to the absence list.
\item $v.firstPrimalNeighbor=u$ \\ $u$ is the first recognized primal neighbor of the non-primal vertex $v$.
\item $v.firstEdge=e$ \\ edge $e$ joins the non-primal vertex $v$ with its first recognized primal neighbor
(it is used to get \w{the} temporary global color from \w{the} edge list).
\item $b.mapLocalColor=d$ \\ local color $b$ is mapped to temporary global color $d$ (i.e. there exists an edge that is colored by both colors).
\end{itemize}
Note that the temporary matrices $incidenceList$ and $absenceList$ have
dimension $\deg(c)\times \deg(c)$ and \w{that} all their entries are set to zero in the
beginning of every iteration.
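
A possible layout of these per-center structures in Python (our own illustration; vertices are assumed to be numbered $0,\dots,n-1$ so that the attributes can be stored in plain arrays and invalidated simply by comparing against the current center):
\begin{verbatim}
class PSPState:
    """Temporary structures used while recognizing one PSP S_c."""

    def __init__(self, n):
        self.visited = [None] * n        # last center that visited the vertex
        self.primal = [None] * n         # last center the vertex was primal for
        self.temp_label = [0] * n        # row/column index of a primal vertex
        self.first_primal = [None] * n
        self.second_primal = [None] * n
        self.first_edge = [None] * n     # edge-list positions, used to look up
        self.second_edge = [None] * n    # the temporary global colors

    def start_center(self, G, c):
        """Reset only what has to be reset for a new center c."""
        d = len(G[c])
        self.incidence = [[0] * d for _ in range(d)]
        self.absence = [[0] * d for _ in range(d)]
        self.stack = []
        for i, u in enumerate(G[c]):     # label the primal vertices 0..d-1
            self.primal[u] = c
            self.temp_label[u] = i
\end{verbatim}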
\begin{thm}
For a given connected graph $G=(V,E)$ with maximum degree $\Delta$ and $W\subseteq V $,
Algorithm \ref{alg:deltaRecognition} runs in $O(|E|\Delta)$ time and $O(|E|+\Delta^2)$ space.
\label{thm:compl}
\end{thm}
\begin{proof}
Let $G$ be a given graph with $m$ edges and $n$ vertices.
In Step \ref{alg:delta0} of Algorithm
\ref{alg:deltaRecognition} we initialize all temporary attributes and
matrices. This consumes $O(m+n) = O(m)$ time and space,
since $G$ is connected\w{,} and hence, $m\geq n-1$. Moreover\w{,} we set all
temporary colors of edges in the edge list to zero, which does not increase the
time and space complexity of the initial step. Recall that we use an extended adjacency
list, where every vertex and its neighbors keep the reference to the edge in
the edge list that connects them. To create an extended adjacency list we iterate over
all edges in the edge list\w{,} and for every edge $uv=e\in E(G)$ we add a new
\w{entry for the} neighbor $v$ to the adjacency list of $u$ and\w{,} simultaneously\w{,} we add a reference $v.edge=e$.
The same is done for vertex $v$. It can be done in $O(m)$ time and space.
In Step \ref{alg:delta1} of Algorithm \ref{alg:deltaRecognition},
we build a sequence of vertices in BFS-order
starting with $v_0$, which is done in $O(m+n)$ time in general.
Since $G$ is connected, the BFS-ordering can be computed in $O(m)$ time.
Step \ref{alg:delta2} takes constant time.
In Step \ref{alg:delta3} we
initialize the global color graph that has $\deg(v_0)$ vertices (\w{bounded by} $\Delta$ in
general). As we already showed, all operations on the global color graph take
$O(\Delta\log_2{\Delta})$ time and $O(\Delta)$ space. We proceed to traverse
all neighbors $u_1,u_2,\dots,u_{\deg{(v_0)}}$ of the root $v_0\in V(G)$
(via the adjacency list) and \w{assign} them unique labels $1,2,\dots,\deg(v_0)$ in the edge list,
that is, every edge $v_0u_i$ gets the label $i$.
In this way, we initialize pairwise different temporary global colors of the edges
incident with $v_0$\w{, that is, we associate them with distinct vertices} of the global color graph.
\w{Using} the extended adjacency list\w{,} we set the label to an edge in
the edge list in constant time. In Step \ref{alg:delta4} we run Algorithm
\ref{alg:PSPrecognition} for any vertex from the defined BFS-sequence.
In the remainder of this proof, we will focus on \w{the} complexity of
Algorithm \ref{alg:PSPrecognition}.
Suppose we perform Algorithm \ref{alg:PSPrecognition} for
vertex $c$ to recognize the PSP $S_c$. The recognition process is based on
temporary structures. We do not need to reset any of these structures, for any \w{execution}
of Algorithm \ref{alg:PSPrecognition} \w{for a new center $c$}, except $absenceList$ and
$incidenceList$. \tk{This is done in Step \ref{alg:PSPrecognition0}.
Further, we set here} the attribute $tempLabel$ for
every primal vertex $v$, such that every vertex is assigned a unique number
from $\{1,2,\dots,\deg(c)\}$. \tk{Finally}, we traverse
all neighbors of the center $c$ and for each of them we set $primal$ to $c$.
Hence, the initial step of Algorithm \ref{alg:PSPrecognition} is done in
$O(\deg(c)^2)$ time.
Step \ref{alg:PSPrecognition1a} is performed for every neighbor of every
primal vertex. The number of all such neighbors is at most $\deg(c)\Delta$.
For every treated vertex, we set attribute $visited$ to $c$. This allows
us to verify in constant time that a vertex was already visited in the
recognition subroutine Algorithm \ref{alg:PSPrecognition}.
If the condition in Step \ref{alg:PSPrecognition1ai} is satisfied, then we
put primal edges $cu$ and $cw$ to the absence list. By the previous arguments,
this can be done in constant time by usage of $tempLabel$ and $absenceList$.
If the condition in Step \ref{alg:PSPrecognition1aii} is satisfied, we set
vertex $u$ as first primal neighbor of vertex $w$. For this purpose, we
use the attribute $firstPrimalNeighbor$. We also set $w.firstEdge=e$\w{,} where $e$
is a reference to the edge in the edge list that connects $u$ and $w$. This
reference is obtained from the extended adjacency list in constant time. Recall,
the edge list is used to store the labels of vertices of
the global color graph for the edges of a given graph, that is, the assignment
of temporary global colors to the edges.
Using $w.firstEdge$, we are able to directly
access the temporary global color of edge $uw$ in constant time.
Step \ref{alg:PSPrecognition1aiii} is performed when we try to visit a vertex
$w$ from some vertex $u$ where $w$ has been already visited before from some
vertex $v$. If $v$ is the only recognized primal neighbor of $w$\w{, then} we perform
analogous operations as in the previous step. Moreover, if $(cu,cv)$ is not contained
in the incidence list, then we set $u$ as second primal neighbor of $w$, \w{add}
$(cu,cv)$ to the incidence list and \w{add} $w$ to the stack. Otherwise we add
$(cu,cv)$ to the absence list. The number of operations in this step is
constant.
If $w$ has more recognized primal neighbors, we proceed to Step \ref{alg:PSPrecognition1aiiiB}. Here we just add all pairs formed
by $cv_1,cv_2,cu$ to absence list. Again\w{,} the number of operations is constant by usage of $tempLabel$
and matrices $incidenceList$ and $absenceList$.
In Step \ref{alg:PSPrecognition2}\w{,} we assign pairwise different
temporary local colors
to the primal edges. Assume the neighbors of the center $c$ are labeled
by $1,2,\dots,\deg{(c)}$; then the temporary local color of the primal edge \tk{$c$}$u$ is represented by the vertex $u.tempLabel$ of the local color graph.
In Step \ref{alg:PSPrecognition3a}
we iterate over all entries of the $absenceList$. For all pairs of edges that
are in the absence list we check whether they \w{still} have different temporary
local colors and if so, we merge their temporary local colors by adding
a respective edge in the local
color graph. Analogously we treat all pairs of edges contained in the
$incidenceList$ in Step \ref{alg:PSPrecognition3b}.
Here we merge temporary local colors of primal
edges $cu$ and $cv$ when the pair $(cu,cv)$ is missing from the incidence list. To treat all entries of
the $absenceList$ and $incidenceList$ we need to perform $\deg(c)^2$ iterations.
Recall, the temporary local color of the primal edge $cu$ is equal to the
index of the connected component in the local color graph,
in which vertex $u.tempLabel$ is contained.
Thus, the temporary local color of this primal edge can be accessed in
constant time. As we already showed, the number of all operations on the local
color graph is bounded by $O(\deg(c)\log_2{\deg(c)})$.
Hence, the overall time complexity
of both Steps \ref{alg:PSPrecognition2} and \ref{alg:PSPrecognition3} is $O(\deg(c)^2)$.
In Step \ref{alg:PSPRecognition4} we map temporary local colors of primal
edges to temporary global colors. For this purpose, we use the attribute
$mapLocalColor$. The temporary global color of every edge can be accessed by
the extended adjacency list, the edge list and the global color graph in constant time.
Since we need to iterate over all primal vertices, we can conclude that Step
\ref{alg:PSPRecognition4} takes $O(\deg(c))$ time.
In Step \ref{alg:PSPRecognition5} we perform analogous operations for any vertex
from Stack $\mathbb{T}_c$ as in Step \ref{alg:PSPRecognition4}. In the worst
case, we add all vertices that are at distance two from the center to the
stack.
Hence, the size of the stack is bounded by $O(\deg(c)\Delta)$. Recall that the
first and second primal neighbor $w_1$ and $w_2$ of every vertex $v$ from the stack can
be directly accessed by the attributes $firstPrimalNeighbor$ and
$secondPrimalNeighbor$. On the other hand, the temporary global colors of
non-primal edges $vw_1$ and $vw_2$ can be accessed directly by the attributes
$firstEdge$ and $secondEdge$. Thus, all necessary information can be accessed
in constant time. Consequently, the time complexity of this step is bounded by
$O(\deg(c)\Delta)$.
In the last \w{step,} Step \ref{alg:PSPRecognition6}, we iterate over all edges of the recognized PSP.
Note, the list of all primal edges can be obtained from the extended adjacency list. To get all
non-primal edges we iterate over all vertices from the stack and use the attributes
$firstEdge$ and $secondEdge$, which takes $O(\deg(c)\Delta)$ time.
The remaining operations can be done in constant time.
To summarize, Algorithm \ref{alg:PSPrecognition} runs in
$O(\deg(c)\Delta)$ time.
Consequently, Step \ref{alg:delta4} of Algorithm
\ref{alg:deltaRecognition} runs in
$O(\sum_{c\in W}\deg(c) \Delta)= O(m\Delta)$ time,
which defines also the total time complexity of Algorithm \ref{alg:deltaRecognition}.
The most space
consuming structures are the edge list and the extended adjacency list ($O(m)$ space) and the
temporary matrices $absenceList$ and $incidenceList$ $(O(\Delta^2)$ space). Hence,
the overall space complexity is $O(m+\Delta^2)$.
\end{proof}
Since quasi Cartesian products are defined as graphs with non-trivial $\delta^*$,
\w{ Theorem \ref{thm:compl}} and Corollary \ref{cor:deltastar} imply the following \w{corollary}.
\begin{cor}
For a given connected graph $G=(V,E)$ with bounded maximum degree
Algorithm \ref{alg:deltaRecognition} (with slight modifications) determines
whether $G$ is a quasi Cartesian product in $O(|E|)$ time and $O(|E|)$ space.
\end{cor}
\subsection{Parallel Processing}
The local approach allows the parallel computation of $\delta^*(G)$ on
multiple processors. Consider a graph $G$ with vertex set $V(G)$. Suppose
we are given a decomposition of $V(G)=W_1\cup W_2\cup\dots\cup W_k$ into
$k$ parts such that $|W_1|\approx|W_2|\approx\dots\approx|W_k|$, where
the subgraphs induced by $W_1,W_2,\dots,W_k$ are connected\w{,} and the number
of edges whose endpoints lie in different partitions is small (we call such a
decomposition {\it good}). Then Algorithm \ref{alg:deltaRecognition} can be
used to compute the colorings
$\mathfrak d_{|S_v}(W_1)^*,\mathfrak d_{|S_v}(W_2)^*,\dots,\mathfrak d_{|S_v}(W_k)^*$, where every
instance of the algorithm can run in parallel. The resulting global
colorings are used to compute
$\mathfrak d_{|S_v}(V(G))^*=(\mathfrak d_{|S_v}(W_1)^*\cup\mathfrak d_{|S_v}(W_2)^*\cup\dots\cup\mathfrak d_{|S_v}(W_k)^*)^*$.
Let us sketch the parallelization.
\bigskip
\begin{algo}[Parallel recognition of $\delta^*$]
\small
\label{alg:parallelDeltaRecognition}
\begin{tabular}{ll}
\emph{Input:} & A graph $G$, and a good decomposition $V(G)=W_1\cup W_2\cup\dots\cup W_k$.\\
\emph{Output:} & Relation $\delta_G^*$.
\end{tabular}
\begin{enumerate}
\item\label{alg:parallel1} \w{F}or every partition $W_i$ \w{concurrently} compute global coloring \tk{$\mathfrak d_{|S_v}(W_i)$} ($i\in\{1,2,\dots,k\}$):
\begin{enumerate}
\item\label{alg:parallel2} Take all vertices of $W_i$ and order them in BFS to get sequence $Q_i$.
\item\label{alg:parallel3} Set $W':=\emptyset$.
\item\label{alg:parallel4} Assign pairwise different temporary global colors to edges incident to first vertex in $Q_i$.
\item\label{alg:parallel5} For any vertex $v$ from sequence $Q_i$ do:
\begin{enumerate}
\item\label{alg:parallel5a} Use Algorithm \ref{alg:PSPrecognition} to compute \tk{$\mathfrak d_{|S_v}(W'\cup v)^*$}.
\item\label{alg:parallel5c} Put all edges that were treated in the previous \w{step} and have at least one endpoint \w{not in} partition $W_i$ onto the stack $\mathbb{T}_i$.
\item\label{alg:parallel5b} Add $v$ to $W'$.
\end{enumerate}
\end{enumerate}
\item Concurrently, for every partition $W_i$, merge all global colorings ($i\in\{1,2,\dots,k\}$):
\begin{enumerate}
\item For each edge from stack $\mathbb{T}_i$\w{,} take all its assigned global colors and merge them.
\end{enumerate}
\end{enumerate}
\end{algo}
\normalsize
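The merging in the second step amounts to identifying, across the parts, all temporary global colors that meet on a common edge (the edges collected on the stacks $\mathbb{T}_i$), which realizes $(\mathfrak d_{|S_v}(W_1)^*\cup\dots\cup\mathfrak d_{|S_v}(W_k)^*)^*$. The Python sketch below is illustrative only (the input format is hypothetical, and the per-partition coloring itself is not shown); it performs the identification with a standard union--find structure, addressing each color by the pair (part index, local color).
\begin{verbatim}
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def merge_colorings(colorings):
    """colorings[i] maps an edge (u, v) with u < v to its temporary global
    color in part W_i.  Colors assigned to the same edge by different parts
    are identified, and a canonical color per edge is returned."""
    uf = UnionFind()
    seen = {}                     # edge -> one representative (part, color)
    for i, coloring in enumerate(colorings):
        for edge, color in coloring.items():
            if edge in seen:
                uf.union(seen[edge], (i, color))
            else:
                seen[edge] = (i, color)
    return {e: uf.find(c) for e, c in seen.items()}

# toy example: two parts sharing the boundary edge (2, 3)
part1 = {(1, 2): 0, (2, 3): 1}
part2 = {(2, 3): 0, (3, 4): 1}
print(merge_colorings([part1, part2]))
\end{verbatim}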
\begin{figure}
\centering
\includegraphics[bb=93 254 515 545, scale=0.7]{./fig/parallel.ps}
\caption{Example - Parallel recognition of $\delta^*$.}\label{fig:parallel_delta}
\end{figure}
Figure \ref{fig:parallel_delta} shows an example of a decomposed vertex set
of a given graph $G$. The computation of the global colorings associated with
\w{the individual sets of the partition} can then be
done in parallel. The edges that are colored by a global color when
\w{the} partition is treated are highlighted in \w{bold} black. Thus
we can observe that many edges will be colored by more than one color.
\w{Notice that we do not treat} the task of finding a good partition. With the methods of \cite{hazwi-2001}
this is possible with high probability in $O(\log n)$ time, where $n$ is the number of vertices.
\bibliographystyle{plain}
|
1,108,101,563,714 | arxiv | \section*{Introduction}
Whereas the electronic band topology in nonmagnetic topological materials is determined by the crystal symmetry and the relative strength of spin-orbit coupling, the magnetic ground state also plays a decisive role in magnetic topological materials \cite{Tokura2019,zou2019study,Watanabe2018,Xu2020,Elcoro2021,Wang2021magnetic}. The concurrence of magnetism and electronic band topology leads to novel states of matter, including Weyl fermions in centrosymmetric systems \cite{Kuroda2017,Liu2018}, magnetic topological insulators \cite{Otrokov2019}, quantum anomalous Hall effect \cite{deng2020quantum}, and axion insulators \cite{Sekine2021}. The dependence of band topology on the magnetic ground state grants access to topological phase transitions by manipulation of the latter, and opens up a route for discovering functional properties in magnetic topological materials.
$f$-electron materials provide a fertile setting for discovering emergent physics at the intersection of magnetism, electron correlations and topology, including magnetic topological insulators \cite{Hua2018,Wang2019,Xu2019,Li2019}, as well as topological spin textures \cite{Kurumaji2019,Hirschberger2019,Kaneko2019,Puphal2020} and strongly correlated topological phases such as topological Kondo insulators and semimetals \cite{Dzero2010,Kim2013,xu2014,Guo2018}.
Among the $f$-electron magnetic topological materials, EuCd$_2$As$_2$ exhibits a single Dirac cone at the Fermi level well inside the paramagnetic state, offering an ideal platform for realizing exotic quasiparticles \cite{Hua2018,Niu2019,Xu2021}. Below $T_{\rm N}\approx9$~K, the Eu$^{2+}$ moments are oriented in the $ab$-plane and form A-type antiferromagnetic (AFM) order, with antiferromagnetically stacked ferromagnetic (FM) Eu$^{2+}$ layers \cite{Rahn2018}. With such a magnetic ground state, a gap opens at the Dirac point and the system may be a small-gap magnetic topological insulator (MTI) \cite{Wang2019,Ma2020}. A fully polarized state along the $c$-axis can be readily obtained upon applying a magnetic field, giving rise to an ideal single pair of Weyl points \cite{Wang2019,Soh2019}. A Weyl state is also suggested for the paramagnetic phase slightly above $T_{\rm N}$ based on photoemission measurements, resulting from a proliferation of quasi-static in-plane ferromagnetic fluctuations \cite{Ma2019,Soh2020}. A plethora of additional topological states, including magnetic Dirac fermions, axion insulator, and higher-order topological insulator, are also expected given the appropriate magnetic ground state \cite{Ma2020}. Moreover, in addition to becoming fully polarized under modest in-plane or $c$-axis magnetic fields \cite{Rahn2018},
a FM ground state can be stabilized by changes in the synthesis protocol \cite{Jo2020} or applying hydrostatic pressure \cite{gati2021pressureinduced}, demonstrating highly-tunable magnetism. These behaviors highlight the proximity of competing magnetic ground states in EuCd$_2$As$_2$, which can be harnessed to manipulate its electronic topology.
Here, by carrying out resistivity measurements on EuCd$_2$As$_2$ single crystals under hydrostatic pressure, we find an insulating dome within a small pressure window between $p_{\rm c1}\sim1.0$~GPa and $p_{\rm c2}\sim2.0$~GPa, straddled by two regimes ($p<p_{\rm c1}$ and $p>p_{\rm c2}$) with metallic transport. The insulating state is easily suppressed by a magnetic field (0.2~T for $H\perp c$), leading to a remarkable colossal negative magnetoresistance (MR) which reaches $\sim10^5$\% at 0.3~K,
and can likely be enhanced upon further cooling.
Based on first-principles calculations, these experimental observations may arise from two topological phase transitions tuned by magnetism. Namely, a transition from an AFM topological insulator to an AFM trivial insulator (TrI) at $p_{\rm c1}\sim1.0$~GPa, and a transition from an AFM TrI to a FM Weyl semimetal (WSM) at $p_{\rm c2}\sim2.0$~GPa, triggered by a pressure-induced AFM-FM transition of the magnetic ground state. A similar mechanism accounts for the suppression of the TrI state under applied magnetic field, which polarizes the AFM state and transforms the system to a WSM, leading to the colossal MR.
These findings demonstrate that the combination of Dirac fermions and proximate magnetic ground states provides a route for discovering tunable topological quantum materials with functional properties.
\begin{figure}
\includegraphics[width=0.7\columnwidth]{Fig1.eps} \protect\caption{{\bf Physical properties of EuCd$_2$As$_2$ at ambient pressure.} (a) Temperature dependence of the resistivity $\rho(T)$ in the $ab$-plane. The primitive unit cell is shown in the inset, with Eu$^2+$ ions forming a triangular lattice in the $ab$-plane; (b) Total specific heat $C_{\rm p}(T)$; (c) Magnetic susceptibility $\chi(T)$, measured in an applied field of $\mu_0 H=0.1$~T; (d) The field-dependence of the magnetization $M(H)$, measured for two field directions at 2~K after zero-field cooling.}
\label{Fig_basic}
\end{figure}
\section*{Results}
\subsection*{Metallic-insulating-metallic evolution of electrical transport under pressure}
EuCd$_2$As$_2$ crystallizes in a centrosymmetric trigonal structure (unit cell volume $V=125.03$~\AA$^{3}$ \cite{sample}), with planes of Eu$^{2+}$ that form triangular lattices [inset of Fig.~\ref{Fig_basic}(a)] separated by layers of Cd-As tetrahedra networks \cite{Artmann1996}. Whereas the FM alignment of magnetic moments within the $ab$-plane is robust, the stacking of these FM planes can be tuned between AFM or FM order along the $c$-axis, resulting in an overall A-type AFM or FM ground state \cite{Rahn2018,Krishna2018,Jo2020}.
Given the proximity between these magnetic ground states, the physical properties of our EuCd$_2$As$_2$ samples were carefully characterized at ambient pressure, with the results summarized in Fig.~\ref{Fig_basic}. Clear peaks in both the resistivity $\rho(T)$ and specific heat $C_{\rm p}(T)$ are observed around $T_{\rm N}\approx9$~K [Figs.~\ref{Fig_basic}(a) and (b)].
Measurements of the magnetic susceptibility under a small field of 0.1~T [Fig.~\ref{Fig_basic}(c)] suggest an AFM ground state with an easy $ab$-plane, in agreement with resonant x-ray scattering measurements \cite{Rahn2018}. Magnetization measurements reveal saturation fields (saturated moments) of about $1.7$~T (7.28~$\mu_{\rm B}$/Eu) for $H\parallel c$ and $0.7$~T (7.01~$\mu_{\rm B}$/Eu) for $H\perp c$ [Fig.~\ref{Fig_basic}(d)]. The saturated moments are close to the ideal value of 7.0~$\mu_{\rm B}$ for Eu$^{2+}$ ions, indicating localized magnetism. These characterizations establish that the EuCd$_2$As$_2$ samples used in this study exhibit an A-type AFM ground state below $T_{\rm N}\approx9$~K at ambient pressure, similar to previous reports \cite{Wang2016,Rahn2018,gati2021pressureinduced}.
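As a consistency check, Eu$^{2+}$ ($4f^{7}$, $L=0$, $S=7/2$, $g_{J}=2$) has a free-ion saturation moment of $M_{\rm sat}=g_{J}S\,\mu_{\rm B}=2\times\tfrac{7}{2}\,\mu_{\rm B}=7\,\mu_{\rm B}$, so the measured values quoted above lie within a few percent of this expectation.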
\begin{figure}
\includegraphics[width=0.7\columnwidth]{Fig2.eps} \protect\caption{{\bf Metallic-insulating-metallic evolution of electrical transport under applied pressure.} Temperature dependence of the $ab$-plane resistivity $\rho(T)$ of EuCd$_2$As$_2$ under various pressures from 0.45~GPa to 1.28~GPa, for samples (a) \#1 and (b) \#2. The corresponding $\rho(T)$ under pressures from 1.50~GPa to 2.21~GPa are shown in panels (c) and (d). The insets of (c) and (d) show $\rho(T)$ under 1.50~GPa and the corresponding fits to $\rho(T)-\rho_0\propto e^{\frac{\Delta}{k_{\rm B} T}}$, where $\rho_0$ is a temperature-independent contribution to the resistivity. The vertical arrows in (a) and (c) mark the direction of increasing pressure.}
\label{Fig_rho}
\end{figure}
In order to track the pressure-evolution of the ground state properties of EuCd$_2$As$_2$, electrical resistivity measurements were carried out on four samples under pressures up to 2.50~GPa and temperatures down to 0.3~K. The results for two samples \#1 and \#2 are presented in Fig.~\ref{Fig_rho}, and the others are shown in the Supplementary Materials (see Fig.~S1). All samples exhibit consistent behaviors under pressure.
For pressures up to 0.87~GPa [Figs.~\ref{Fig_rho}(a) and (b)], $\rho(T)$ is qualitatively similar to that at ambient pressure, with a sharp peak around $T_{\rm N}$ and metallic behavior at low temperatures [$\rho(0.3~{\rm K})\lesssim20$~m$\Omega\cdot$cm]. Upon increasing the pressure to 1.05~GPa, an additional subtle upturn appears when cooling below $\sim2$~K. The upturn in $\rho(T)$ at low temperatures becomes more prominent under 1.28~GPa, and is maximized around 1.50~GPa [Figs.~\ref{Fig_rho}(c) and (d)], with $\rho(0.3~{\rm K})$ reaching $\approx40$~$\Omega\cdot$cm and $\approx15$~$\Omega\cdot$cm, for samples \#1 and \#2 respectively [insets in Figs.~\ref{Fig_rho}(c) and (d)]. The temperature-dependence of the resistivity under 1.50~GPa can be captured by $\rho(T)-\rho_0\propto\exp(\frac{\Delta}{k_{\rm B}T})$, where $\rho_0$ is a temperature-independent contribution and $\Delta$ represents an energy gap. This indicates that the upturn is associated with an electronic gap, suggesting a metal-insulator transition in EuCd$_2$As$_2$ across $p_{\rm c1}\sim1.0$~GPa. By fitting $\rho(T)$ of all measured samples for $T<5$~K, $\Delta$ is consistently found to be $\approx0.07-0.10$~meV at 1.50~GPa (see Fig.~S1 and Section~S1 in the Supplementary Materials), suggesting that $\Delta$ does not have a significant sample dependence.
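As an illustration of this fitting procedure, the short Python sketch below extracts $\Delta$ with a standard nonlinear least-squares fit; it operates on synthetic data of the quoted magnitude, so all numerical values are placeholders rather than the measured data.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617333e-5          # Boltzmann constant in eV/K

def rho_model(T, rho0, A, gap):
    """rho(T) = rho0 + A*exp(gap/(kB*T)), with gap in eV."""
    return rho0 + A * np.exp(gap / (kB * T))

# synthetic stand-in for rho(T) below 5 K (Ohm cm); gap set to 0.08 meV
T = np.linspace(0.3, 5.0, 60)
rng = np.random.default_rng(0)
rho = rho_model(T, 0.02, 1.0, 8.0e-5) * (1 + 0.02 * rng.normal(size=T.size))

popt, _ = curve_fit(rho_model, T, rho, p0=(0.01, 1.0, 1.0e-4))
print(f"fitted gap: {popt[2]*1e3:.3f} meV")   # close to the 0.08 meV input
\end{verbatim}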
\begin{figure}
\includegraphics[width=0.7\columnwidth]{Fig3.eps} \protect\caption{{\bf Colossal magnetoresistance in EuCd$_2$As$_2$ under 1.50~GPa.} Field-dependence of the resistivity $\rho(H)$ at various temperatures, for fields (a) along the $c$-axis, and (b) in the $ab$-plane, plotted on a log scale. Temperature-dependence of the resistivity $\rho(T)$ under various magnetic fields along (c) the $c$-axis, and in (d) the $ab$-plane.}
\label{Fig_MR}
\end{figure}
Upon further increasing the pressure, the upturn in $\rho(T)$ weakens under 1.79~GPa and disappears for $p\gtrsim2.21$~GPa, at which point $\rho(0.3~{\rm K})$ reverts to $\lesssim20$~m$\Omega\cdot$cm. Such an evolution points to the restoration of metallic transport above $p_{\rm c2}\sim2.0$~GPa, and experimentally establishes the presence of an insulating dome within the magnetically ordered state for $p_{\rm c1}<p<p_{\rm c2}$.
Since A-type AFM order is present in EuCd$_2$As$_2$ up to $\sim p_{\rm c2}$, as indicated by the response of the resistivity to small magnetic fields (see Fig.~S2 and Section~S2 in the Supplementary Materials) and previous transport and $\mu$SR measurements \cite{gati2021pressureinduced}, the insulating state appears tied to the AFM order.
\subsection*{Colossal magnetoresistance in the insulating state}
To further elucidate the connection between the magnetic order and the insulating state under pressure, magnetoresistance measurements were carried out under 1.50~GPa for samples~\#1 and \#2, and the results are shown in Fig.~\ref{Fig_MR}. Since the saturation field of EuCd$_2$As$_2$ is small [Fig.~\ref{Fig_basic}(d)], and further reduces with increasing pressure \cite{gati2021pressureinduced}, it is straightforward to assume that the A-type AFM order of EuCd$_2$As$_2$ can be polarized to achieve a FM state using accessible magnetic fields. If the insulating state indeed relies on the presence of A-type AFM order, it should be suppressed when the AFM state becomes fully polarized.
At 1.50~GPa, with increasing field both along the $c$-axis and in the $ab$-plane, $\rho(0.3~{\rm K})$ is reduced by around three orders of magnitude, and settles to values less than $20$~m$\Omega\cdot$cm, as shown in Figs.~\ref{Fig_MR}(a) and (b). This transformation from insulating to metallic electrical transport is similar to the pressure-induced evolution across $p_{\rm c2}$. Quantitatively, the MR [defined as $[\rho(H)-\rho(0)]/\rho(H)$] at 0.3~K reaches $-3.78\times10^5$\% for sample~\#1 with $H\parallel c$ [Fig.~\ref{Fig_MR}(a)], and $-1.05\times10^5$\% for sample~\#2 with $H\perp c$ [Fig.~\ref{Fig_MR}(b)]. The values of the MR in EuCd$_2$As$_2$ are large even when compared to the colossal MR in the manganites \cite{vonHelmolt1993,Jin1994,Salamon2001}, and are even more striking considering that they are achieved with relatively small fields [Table~\ref{table_MR}]. The fields at which the MR saturates are $\mu_0H\sim1.5$~T for $H\parallel c$ and $\mu_0H\sim0.2$~T for $H\perp c$, in excellent agreement with the saturation fields \cite{gati2021pressureinduced}, indicating that the MR is associated with polarization of the Eu$^{2+}$ moments. At higher temperatures, the resistivity in zero field is substantially reduced, whereas the resistivity under $\left|\mu_0H\right|\gtrsim2$~T remains similar to that at 0.3~K. This leads to a weakening of the MR upon warming, originating from the decrease of the zero-field resistivity with increasing temperature.
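As a rough numerical check of these figures (the resistivities below are approximate values read off Figs.~\ref{Fig_rho} and \ref{Fig_MR} for sample~\#1, not the measured data):
\begin{verbatim}
rho_zero_field = 40.0     # Ohm cm at 0.3 K and 0 T, sample #1 (approximate)
rho_polarized  = 1.06e-2  # Ohm cm once the Eu moments are polarized (approximate)

MR = (rho_polarized - rho_zero_field) / rho_polarized * 100.0   # in percent
print(f"MR ~ {MR:.2e} %")   # about -3.8e+05 %
\end{verbatim}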
The colossal negative MR can also be seen from the temperature dependence of the resistivity under various fields for samples~\#1 and \#2 at 1.50~GPa, as shown in Figs.~\ref{Fig_MR}(c) and (d). Similar to the results in Figs.~\ref{Fig_rho}(c) and (d), where the insulating state is suppressed by increasing pressure above $p_{\rm c2}$, modest magnetic fields also efficiently suppress the insulating state. In both cases, the A-type AFM phase is destabilized and superseded by a FM state, respectively due to an AFM-FM transition at $p_{\rm c2}\sim2.0$~GPa, or the full polarization of Eu$^{2+}$ moments by an applied field.
While a sizable MR is also seen in EuCd$_2$As$_2$ at ambient pressure near $T_{\rm N}$, it should be emphasized that the colossal MR we have uncovered in the insulating state of EuCd$_2$As$_2$ is distinct. Whereas the former is associated with critical fluctuations near a magnetic transition, with similar behaviors detected in a number of Eu-based compounds \cite{Oliver1972,Shapira1972,Wang2021}, the MR shown in Fig.~\ref{Fig_MR} is linked to a field-induced transition from an insulating state with a small electronic gap $\Delta$ to a metallic state, and persists down to at least 0.3~K. Therefore, although the MR in our measurements may contain contributions related to critical fluctuations near the magnetic transition temperature, the MR at low temperatures (e.g., $T<2$~K) clearly has a different, and thus far unreported, mechanism.
\begin{table}[tb]
\caption{Magnetic fields needed to achieve various values of magnetoresistance in EuCd$_2$As$_2$ under 1.50~GPa at 0.3~K}
\label{table_MR}
\begin{tabular}{|c |c |c |c |c|}
\hline Field direction & MR ($-10^5$\%) & MR ($-10^4$\%) & MR ($-10^3$\%) & MR ($-10^2$\%)\\
\hline H $\parallel$ c (sample~\#1) & 0.42~T& 0.18~T & 0.10~T & 0.05~T \\
\hline H $\perp$ c (sample~\#2) & 0.22~T& 0.10~T & 0.06~T & 0.04~T \\
\hline
\end{tabular}
\end{table}
From Figs.~\ref{Fig_MR}(c) and (d), it can be seen that the low-temperature MR gradually decreases with increasing temperature. Conversely, this suggests that the MR should continue to increase as the temperature is lowered below 0.3~K. In fact, since the resistivity of an insulator tends to infinity as the temperature approaches absolute zero, the MR could become substantially larger (possibly by orders of magnitude) than the value observed at 0.3~K. This consideration makes the colossal negative MR in EuCd$_2$As$_2$ of potential interest for ultra-low-temperature applications, especially at millikelvin temperatures.
\subsection*{Evolution of the electronic structure under pressure}
The experimentally observed insulating dome is difficult to understand without considering magnetic order, and the colossal MR points to a strong coupling between electrical transport, which is dictated by the electronic topology, and magnetism.
To gain insight into the origin of these experimental observations, the electronic structures of EuCd$_2$As$_2$ were calculated for different magnetic ground states and at various pressures, using density functional theory (DFT), with results shown in Fig.~\ref{Fig_DFT}. Applied pressures in the DFT calculations are gauged via the unit cell volume $V$, with increasing pressure corresponding to a decrease in $V$. For the A-type AFM state with moments in the $ab$-plane found experimentally \cite{Rahn2018}, EuCd$_2$As$_2$ is an MTI with a 10~meV bulk gap and topologically-protected surface states when $V=127.60$~{\AA}$^3$ [Fig.~\ref{Fig_DFT}(a)]. While the bulk of an MTI is insulating, topologically-protected surface states that cross the Fermi level [inset in Fig.~\ref{Fig_DFT}(a)] give rise to metallic conduction, which accounts for the metallic transport seen experimentally for $p<p_{\rm c1}$.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Fig4.eps} \protect\caption{{\bf Calculated band structures of EuCd$_2$As$_2$ tuned by the magnetic ground state and pressure.} The electronic structure of A-type AFM EuCd$_2$As$_2$ with moments in the $ab$-plane, for unit cell volumes of (a) 127.60~{\AA}$^3$, (b) 124.78~{\AA}$^3$, and (c) 121.11~{\AA}$^3$. (d) The electronic structure for FM EuCd$_2$As$_2$ with moments in the $ab$-plane, with a unit cell volume of 121.11~{\AA}$^3$. Depending on the magnetic ground state and pressure (unit cell volume), EuCd$_2$As$_2$ may be a magnetic topological insulator (MTI), a trivial insulator (TrI), or a Weyl semimetal (WSM). The values of the energy gaps for the MTI and TrI states are indicated in (a)-(c). The insets in (a)-(c) show calculated surface states along $-M\Gamma M$, with clear topologically-protected surface states in (a). The inset in (d) shows the surface Fermi arc along $k_z$ in the $k_x-k_z$-plane.}
\label{Fig_DFT}
\end{figure}
Within the A-type AFM state of EuCd$_2$As$_2$, upon increasing pressure such that $V=124.78$~{\AA}$^3$, EuCd$_2$As$_2$ evolves to become a TrI with an 8~meV trivial band gap [Fig.~\ref{Fig_DFT}(b)]; relative to the MTI state, the topologically-protected surface states disappear due to an increase in the electronic hopping relative to spin-orbit coupling, which suppresses the band inversion. When $V$ is further compressed from 124.78~{\AA}$^3$ to 121.11~{\AA}$^3$, the trivial band gap increases from $8$~meV to 64~meV [Fig.~\ref{Fig_DFT}(c)]. For a trivial insulator, the surface states are also gapped [insets in Fig.~\ref{Fig_DFT}(b) and (c)], similar to the bulk, so that the system as a whole lacks mobile carriers. These calculations show that within the A-type AFM state of EuCd$_2$As$_2$, the transition from an MTI to a TrI upon increasing pressure provides an explanation for the experimentally observed evolution of electrical transport across $p_{\rm c1}$.
While AFM EuCd$_2$As$_2$ with $V=121.11$~{\AA}$^3$ is a TrI [Fig.~\ref{Fig_DFT}(c)], for a FM ground state with in-plane moments under the same pressure (same $V$), EuCd$_2$As$_2$ is a WSM with bulk Weyl points and a surface Fermi arc [Fig.~\ref{Fig_DFT}(d) and its inset]. Such a FM state occurs experimentally when the moments are fully polarized by an $ab$-plane field or when $p>p_{\rm c2}$, with both the Weyl points and the Fermi arc contributing to metallic transport. A FM state with moments along the $c$-axis is also a WSM for $V=121.11$~{\AA}$^3$ (see Section~S3 and Fig.~S3 in the Supplementary Materials), which is realized in the fully polarized state when $H\parallel c$. The DFT results in Figs.~\ref{Fig_DFT}(c) and (d) indicate that for AFM EuCd$_2$As$_2$ in the TrI state, altering the magnetic order to FM results in a WSM, which induces an insulator-metal transition. Experimentally, such a change can occur in two ways, either by applying a magnetic field to fully polarize the moments [Fig.~\ref{Fig_MR}], or by increasing pressure above $p_{\rm c2}$ [Figs.~\ref{Fig_rho}(c) and (d)], around which an AFM-FM transition occurs (see Fig.~S2 and Section~S2 in the Supplementary Materials) \cite{gati2021pressureinduced}.
Based on the DFT calculations shown in Fig.~\ref{Fig_DFT}, the evolution of the topological classification for EuCd$_2$As$_2$ under pressure is schematically shown in Fig.~\ref{Fig_phase_diagram}(a). Under ambient pressure and inside the AFM state, EuCd$_2$As$_2$ is a small-gap MTI, with topologically-protected surface states contributing to metallic conduction. With increasing pressure, the bulk band inversion necessary for a topological insulator is suppressed and AFM EuCd$_2$As$_2$ becomes a TrI, with diminished charge carriers when cooled to sufficiently low temperatures. When EuCd$_2$As$_2$ goes through an AFM-FM transition, the TrI state evolves to become a WSM, with metallic conduction restored by Weyl fermions and surface Fermi arcs.
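The qualitative effect of the magnetic ground state on the band topology can be illustrated with a minimal four-band toy model: a gapped Dirac Hamiltonian with mass $m$ plus an exchange field $b$ along $z$. For $b<m$ the spectrum stays gapped, while for $b>m$ the gap closes at $k_z=\pm\sqrt{b^2-m^2}/v$, i.e., a pair of Weyl points emerges. The Python sketch below is purely schematic, with arbitrary parameters, and is not derived from the DFT band structure of EuCd$_2$As$_2$.
\begin{verbatim}
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, kz, v=1.0, m=0.5, b=0.0):
    """Massive Dirac Hamiltonian with an exchange field b along z (toy model)."""
    kinetic  = v * (kx * np.kron(sx, sx) + ky * np.kron(sx, sy) + kz * np.kron(sx, sz))
    mass     = m * np.kron(sz, s0)
    exchange = b * np.kron(s0, sz)
    return kinetic + mass + exchange

def min_gap_along_kz(b, kz_vals=np.linspace(-2.0, 2.0, 2001)):
    """Minimal direct gap between the two middle bands along the kz axis."""
    gaps = []
    for kz in kz_vals:
        ev = np.linalg.eigvalsh(H(0.0, 0.0, kz, b=b))
        gaps.append(ev[2] - ev[1])
    return min(gaps)

print("b = 0.2 (weak exchange):   gap =", round(min_gap_along_kz(0.2), 4))  # stays open
print("b = 0.8 (strong exchange): gap =", round(min_gap_along_kz(0.8), 4))  # ~0: Weyl pair
\end{verbatim}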
\section*{Discussion}
Our resistivity measurements under pressure provide compelling evidence for an unprecedented insulating dome inside the AFM state of EuCd$_2$As$_2$ for pressures between $p_{\rm c1}\sim1.0$~GPa and $p_{\rm c2}\sim2.0$~GPa, with an AFM-FM transition in the magnetic ground state also occurring around $p_{\rm c2}$ [Fig.~\ref{Fig_phase_diagram}(b)]. Such an evolution under pressure can be qualitatively captured by the DFT calculations, which suggest that EuCd$_2$As$_2$ goes through consecutive topological phase transitions. The first transition is from an MTI to a TrI at $p_{\rm c1}$, occurring within the A-type AFM phase. The second transition, which occurs at $p_{\rm c2}$, is from a TrI to a WSM, driven by the AFM-FM transition in the magnetic ground state. Given the proximity between the AFM and FM ground states \cite{Krishna2018}, the AFM-FM transition can also be driven by an applied field, which accounts for the colossal MR achievable with a small magnetic field. In such a scenario, similar to increasing the pressure above $p_{\rm c2}$, the MR is caused by a change from a TrI to a WSM, which occurs both when the FM moments are oriented along the $c$-axis and when they lie in the $ab$-plane (see Section~S3, Table~S1 and Figs.~S3-6 in the Supplementary Materials).
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Fig5.eps} \protect\caption{{\bf Phase diagram of EuCd$_2$As$_2$ under pressure.} (a) Theoretically anticipated evolution of the ground state in EuCd$_2$As$_2$ with increasing pressure, evolving from a magnetic topological insulator (MTI) to a trivial insulator (TrI), and then to a Weyl semimetal (WSM). Magnetic ground states and electronic structures associated with the MTI, TrI and WSM phases are shown schematically. The solid curve represents the evolution of the electronic gap, which is negative in the presence of a band inversion, and positive in its absence. (b) Color-coded experimental $\rho(T)$ as a function of pressure and temperature for sample~\#4 (the results for other samples are provided in the Supplementary Materials, see Fig.~S7), note that $\rho(T)$ is shown on a log scale.
$T_{\rm N}$ (black symbols) and $T_{\rm C}$ (blue symbols) mark local maxima in $d\rho/dT$, respectively corresponding to transition temperatures into the AFM and FM phases (see Fig.~S8 in the Supplementary Materials). For comparison, the magnetic transition temperatures from Ref.~\cite{gati2021pressureinduced} are also included in the plot.
}
\label{Fig_phase_diagram}
\end{figure}
The colossal negative MR observed in EuCd$_2$As$_2$ for $p_{\rm c1}<p<p_{\rm c2}$ is unusual, as its magnitude is large, persists to the lowest temperatures, and likely arises from a field-induced topological phase transition from a TrI to a topologically nontrivial WSM. Given that the MR arises from an insulator-metal transition, record-breaking values of MR may be achieved upon cooling to ever lower temperatures. The small fields required to achieve the large values of MR further makes EuCd$_2$As$_2$ appealing for technological applications at very low temperatures. The observation of colossal MR for small fields both along the $c$-axis and in the $ab$-plane [Table~\ref{table_MR}] means that the colossal MR would be robust regardless of the sample orientation, and would persist even in polycrystalline samples. While a modest pressure is required to access the TrI phase of EuCd$_2$As$_2$ with the colossal MR, it may be possible to replace the hydrostatic pressure with chemical pressure via P-substitution, and realize a similar colossal MR under ambient pressure.
Whereas the transition from an MTI to a TrI under pressure within the AFM state of EuCd$_2$As$_2$ arises from enhanced electronic hopping that suppresses the band inversion, both the pressure- and field-induced transitions from a TrI to a WSM require a transformation of the magnetic ground state. To facilitate such a change, the distinct magnetic ground states should be nearly degenerate; in EuCd$_2$As$_2$ this results from weak magnetic exchange couplings along the $c$-axis, which can be easily overcome by an applied magnetic field. Furthermore, EuCd$_2$As$_2$ exhibits weak magnetic anisotropy due to the vanishing orbital moments of the Eu$^{2+}$ ions, which allows the magnetic moments to become fully polarized regardless of the field orientation, leading to a robust colossal MR that is nearly independent of the relative orientation between the sample and the applied field.
In combination with an electronic topology that depends on the magnetic structure, the weak interlayer magnetic exchange couplings and magnetic anisotropy in EuCd$_2$As$_2$ enable the switching between magnetic ground states, and hence the electronic topology, leading to dramatic changes in the macroscopic electrical transport. These findings demonstrate a clear example of topological phase transitions driven by the manipulation of the magnetic ground state, and underscore weak magnetic exchange couplings and magnetic anisotropy as key ingredients in the search for tunable magnetic topological materials.
\section*{Materials and Methods}
\subsection*{Experimental details}
Single crystals of EuCd$_2$As$_2$ were grown using the Sn-flux method, with details in Ref.~\cite{Sun2021}. Electrical resistivity measurements under pressure were carried out in a piston-cylinder pressure cell using the standard four probe method, with the current in the $ab$-plane. To ensure hydrostaticity, Daphne 7373 was used as the pressure-transmitting medium. Values of the applied pressure were determined from the shift of $T_{\rm c}$ for a high-quality Pb single crystal. All the resistivity measurements were performed down to 0.3~K in a $^3$He refrigerator with a 15~T magnet. Specific heat under ambient pressure was measured using a Quantum Design Physical Property Measurement System (PPMS), using a standard pulse relaxation method. Magnetization measurements were carried out using the vibrating sample magnetometer (VSM) option in a Quantum Design PPMS.
\subsection*{First-principles electronic structure calculations}
The electronic structures of EuCd$_2$As$_2$ under pressure were calculated from first principles using density functional theory (DFT). The DFT calculations were performed using the plane-wave projected augmented wave method as implemented in the \texttt{VASP} code \cite{PhysRevB.47.558,PhysRevB.59.1758}. The plane-wave basis energy cut-off was set to 480~eV, and a $12\times12\times4$ $\Gamma$-centered $k$-mesh was used to perform integration over the Brillouin zone. An additional on-site Coloumb interaction of $U = 6$~eV was included for the Eu-$4f$ orbitals in the LDA+$U$ calculations. Both the lattice constants and the atomic internal coordinates were optimized for all magnetic configurations, so that forces on each atom were smaller than 0.01~eV/{\AA}. The band structures from \texttt{VASP} were fit to tight-binding Hamiltonians using the maximally projected Wannier function method. The resulting tight-binding Wannier-orbital-based Hamiltonians were
used to calculate the corresponding surface states \cite{1985} and analyze their electronic topology with the \texttt{WannierTools} package \cite{WU2018405}.
\section*{Acknowledgments}
We acknowledge helpful discussions with Zhentao Wang, Wei Zhu, and Yang Liu. This work was supported by the National Key R\&D Program of China (No. 2017YFA0303100), the National Natural Science Foundation of China (No. 11974306, No. 12034017 and No. 11874137), and the Key R\&D Program of Zhejiang Province, China (2021C01002).
\section*{Author contributions}
F.D., L.Y., Z.N., S.L., Y.C. and D.S. carried out the experimental measurements. Y.L. and Y.Shi prepared the samples. F.D., N.W. and C.C. performed the first-principles calculations. L.Y. initiated this study and H.Y. supervised the project. F.D., C.C., Y.Song and H.Y. analyzed the results. Y.Song, H.Y., C.C., M.S., and F.S. wrote the manuscript, with inputs from all authors.
\subsection*{Competing interests}
The authors declare no competing interests.
\subsection*{Data and materials availability}
All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials.
|